why is bash(.sh) script not executable by default - linux

Why is a bash (.sh) script not executable by default?
I agree that when we touch a file in Linux, it is created for reading purposes.
But since file name extensions such as .sh and .csh indicate files meant for execution,
wouldn't it be ideal to touch them in an executable mode?
The question might sound redundant, but I still thought of asking it :)

Ultimately, the answer to this question is that this is simply not what touch was designed to do.
In fact, touch was not even designed to create files; its primary purpose is to change file timestamps. It allows you to create files at all only as a side effect of that purpose: the designers decided to be a little more generous and create the target file if it doesn't already exist (and if you didn't provide the -c option).
It should also be mentioned that there are other techniques that create files, such as redirection (echo 'echo '\''I am a script.'\'';' >|script.sh;). The act of file creation is a generic one, and the whole concept of a file is a generic one. A file is just a byte stream; what goes in the byte stream is unspecified at the file abstraction layer. As @AdamGent mentioned, Windows requires certain types of executable files to have certain extensions in order to be executed properly, but even in Windows, you can put executable code in non-executable-extensioned files, and you can put non-executable content in executable-extensioned files. There's no enforcement of file name / file content correspondence at the file layer.
All of that being said, it would often be a convenience if you could easily create a script in Unix that automatically has executable permission set. I've actually written a script to allow me to edit a new file in vim, and then have its permissions set to executable after write-quitting. The reason this potential convenience has not been standardized into a utility is likely related to the concern about security; you don't want people to accidentally make files executable, because that raises the risk of security holes.
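A minimal sketch of such a wrapper, assuming a POSIX shell; the function name vix is made up, and the editor is taken from $EDITOR (falling back to vim):

```shell
# Hypothetical convenience function: edit a file, then mark it
# executable, but only if the editor exited successfully.
vix() {
    "${EDITOR:-vim}" "$1" && chmod +x "$1"
}
```

You could drop this in your ~/.bashrc; the && guard means an aborted edit leaves the permissions untouched.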
You can always write your own program to create a file and make it executable, perhaps based on the extension of the file name.
Another thing that can be added here is that even shell scripts don't always need to be executable. For example, if you write a script that is only intended to be sourced from existing shell processes (via the source or classic . builtins), then the script does not need to be executable at all. Thus, there are cases where the file extension itself does not provide enough information to determine what the appropriate permissions are for the file.
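For example (file and function names here are only for illustration), a function library can be sourced from a file with no execute bit at all:

```shell
# Create a small "library" script; note we never chmod +x it.
cat > greet.sh <<'EOF'
greet() { printf 'Hello, %s\n' "$1"; }
EOF
chmod -x greet.sh      # explicitly not executable
. ./greet.sh           # sourcing only needs read permission
greet world            # prints: Hello, world
```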

There is nothing in the file name that says a file is even a script. Common practice perhaps says that .sh and .csh are scripts but there is no such rule.
What makes a file an executable script is the magic number in the beginning of the file. The magic number #! (the shebang, which has many other names as well) means the file is a script. For example
#!/bin/bash
When you try to execute the file (it must then also be set to executable in the permissions), the Linux kernel looks at the first two bytes of the file and finds the magic number (#!). It then knows it is supposed to execute whatever comes after the Shebang with the name of the file as argument followed by the rest of the arguments passed.
If you type the following in the shell
blablabla.sh lol1 lol2
The shell recognizes that you are trying to execute something so it invokes the kernel
exec blablabla.sh lol1 lol2
The kernel finds the shebang, and it becomes
exec /bin/bash blablabla.sh lol1 lol2
By exec I mean one of the exec family of system calls.
Other fun names for #! include sha-bang, hashbang, pound-bang, hash-exclam and hash-pling.
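The mechanics above can be seen with a short test script (the file name and message are only illustrative):

```shell
cat > blablabla.sh <<'EOF'
#!/bin/bash
echo "interpreter ran: $0 with args: $*"
EOF
chmod +x blablabla.sh
# The kernel reads the #! line and effectively performs:
#   exec /bin/bash ./blablabla.sh lol1 lol2
./blablabla.sh lol1 lol2
```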

Because the .sh script extension is completely arbitrary in Unix environments. It's only a convention. You can name your script whatever you like so long as it has the executable bit set. This is unlike Windows, where I believe it's required (.com, .exe, and I think .dll).
touch just changes the timestamps of the file. Again, it does not care what the file's extension is. In fact, most tools in Unix do not care about file extensions.
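A quick demonstration of both points; the path comes from mktemp, so nothing here is special:

```shell
f=$(mktemp -u)        # a fresh path that does not exist yet
touch -c "$f"         # -c: update timestamps only, never create
[ -e "$f" ] && echo "exists" || echo "still absent"
touch "$f"            # no -c: created as a side effect
[ -x "$f" ] && echo "executable" || echo "created, but not executable"
```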


File name multiple extensions order

I want to create some bash scripts. They're actually going to be build scripts for Scala, so I'm going to identify them with my own .bld extension. They will be a sort of sub type of a shell script. Hence I want them to be easily recognised as a shell script. Should I call them
ProjectA.bld.sh //or
ProjectA.sh.bld
Edit: My natural inclination would be to go for the former but .tar.gz files seem to follow the latter naming convention.
A shell script doesn't mind what you call it.
It just needs to be:
executable (chmod +x)
in your PATH
and contain a "shebang" as its first line: #!/bin/sh
The shebang determines which program is used to execute your script.
Call it ProjectA.bld.sh (or preferably buildProjectA.sh).
The .sh extension (although not necessary for the script to run) will allow you and everyone else to easily recognise it as a shell script.
While for the most part, naming conventions like this don't really matter at all to Unix/Linux, the usual convention is for the "extensions" to be in the order of the steps used to create the file. So, for example, a file named foo.tar.bz2.gpg.part01 would indicate a sequence of operations like the following:
Use tar to create foo.tar, which contains some other files
Use bzip2 to compress foo.tar into foo.tar.bz2
Use gnupg to encrypt foo.tar.bz2 into foo.tar.bz2.gpg
Use split or something similar to break the file into chunks for transmission/storage, resulting in one or more foo.tar.bz2.gpg.part* files.
The naming conventions are mostly just for human semantic meaning, though, and there's nothing stopping you from doing exactly the opposite, or even something completely random, except your own ability to remember exactly what you did...
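The same ordering can be traced with the more common .tar.gz from the question (all file names are illustrative); each step appends its own extension, and reassembly undoes the last step first:

```shell
mkdir -p some_dir && echo data > some_dir/file.txt
tar -cf foo.tar some_dir                  # step 1: archive  -> foo.tar
gzip foo.tar                              # step 2: compress -> foo.tar.gz
split -b 512 foo.tar.gz foo.tar.gz.part   # step 3: chunk    -> foo.tar.gz.partaa, ...
cat foo.tar.gz.part* > rejoined.tar.gz    # rejoining reverses the split
```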

What is PATH on a Mac (UNIX) system?

I'm trying to setup a project, storm from git:
https://github.com/nathanmarz/storm/wiki/Setting-up-development-environment
Download a Storm release, unpack it, and put the unpacked bin/ directory on your PATH
My question is: What does PATH mean? What exactly do they want me to do?
Sometimes I see some /bin/path, $PATH, or echo PATH.
Can someone explain the concept of PATH, so I can setup everything easily in the future without just blindly following the instructions?
PATH is a special environment variable in UNIX (and UNIX-like, e.g. GNU/Linux) systems, which is frequently used and manipulated by the shell (though other things can use it, as well).
There's a somewhat terse explanation on wikipedia, but basically it's used to define where to search for executable files (whether binaries, shell scripts, whatever).
You can find out what your current PATH is set to with a simple shell command:
: $; echo $PATH
(Note: the : $; is meant to represent your shell prompt; it may be something very different for you; just know that whatever your prompt is, that's what I'm representing with that string.)
Depending on your system and prior configuration, the value will vary, but a very simple example of the output might be something like:
/usr/bin:/bin:/usr/local/bin
This is a colon(:)-separated list of directories in which to search for executable files (things like ls, et cetera). In short, when you try to execute a command from your shell (or from within some other program in certain ways), the shell searches through each of the directories in this list, in order, looking for an executable file with the name you provided, and runs the first one it finds. So that's the concept, per your question.
From there, what this documentation is telling you to do is to add the directory where you've unpacked the software, and in particular its bin subdirectory, into your $PATH variable. How to do this depends a bit on which shell you're using, but for most (Bourne-compatible) shells, you should be able to do something like this, if you're in the directory where that bin directory is:
: $; PATH="$PATH:$PWD/bin"; export PATH
In just about all but an actual Bourne shell, this can be shortened to:
: $; export PATH="$PATH:$PWD/bin"
(I won't bother explaining for CSH-compatible shells (because: I agree with other advice that you don't use them), but something similar can be done in them, as well, if that happens to be your environment of choice for some reason.)
Presumably, though, you'll want to save this to a shell-specific configuration file (could be ~/.profile, ~/.bashrc, ~/.zshrc... depending on your shell), and without reference to $PWD, but rather to whatever it expanded to. One way you might accomplish this would be to do something like this:
: $; echo "export PATH=\"\$PATH:$PWD/bin\""
and then copy/paste the resulting line into the appropriate configuration file.
Of course you could also generate the appropriate command in other ways, especially if your $PWD isn't currently where that bin directory is.
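To make the lookup concrete (the directory and script names below are invented, with a throwaway directory standing in for storm's unpacked bin/), here is the whole cycle:

```shell
demo_bin=$(mktemp -d)                       # stand-in for the unpacked bin/ directory
printf '#!/bin/sh\necho hello from storm\n' > "$demo_bin/storm-demo"
chmod +x "$demo_bin/storm-demo"
PATH="$PATH:$demo_bin"; export PATH
storm-demo                                  # found by searching PATH, directory by directory
command -v storm-demo                       # shows the full path of what will run
```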
See also:
An article about $PATH (and more)
a related question on superuser.com

Bash script ignores arguments when in /bin

I have this bash script that I can pass up to three arguments to. It works like a charm when I call it from its own directory as ./script -h, but when I copy the same file to /bin and call it from anywhere with script -h, it seems to ignore the arguments passed.
Why? or maybe more importantly:
What can I do to change that?
script is a very useful standard utility program which takes a copy of your current session (look for a file called typescript). It creates another shell interface, so you probably didn't notice it was running.
When you write a new program, use a naming convention, like script.sh.
Edit:
If you don't like using a file suffix (because it looks too much like Windows), then fine, but use some other naming convention which will ensure your script names do not clash with existing commands. test is another favorite, for example. You can use type to check a command name, but that only checks your current environment; you might still have a name collision when running as a different user, for example.
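You can check a candidate name with type before committing to it (myscript.sh below is a placeholder):

```shell
# "type" reports what a name resolves to in the current shell,
# which helps spot collisions with existing commands or builtins.
type test                           # reports: test is a shell builtin
if ! type myscript.sh >/dev/null 2>&1; then
    echo "myscript.sh does not clash with anything"
fi
```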

How to edit a file owned by root on SSH connect

Hi, I'm sure there is some way of doing what I want, but maybe I'm just attacking it the wrong way. I hope someone can help.
I have a dev box that I SSH in to from several other machines. In order to debug remotely I need to configure my debugger with my client machine's IP, which changes when I log in from different machines. I'm getting bored of doing this manually all the time so thought I'd try and automate it.
I'm creating a script that is automatically run upon SSH connection that will modify a configuration setting in a PHP ini file. The problem is the PHP ini files are all owned by root so I'm not sure how to modify those files if I'm just logging in as a normal user.
There's not really a security concern with my dev box so I could just change the owner of the ini file, but I wanted it to be more automated than that.
My current attempt is a python script located in my home dir, which is called from .bashrc when I connect via SSH. I don't see how I can gain root privileges from there, I am pretty new to linux though. I thought maybe there would be some other method I'm not aware of.
You have a file that is owned by root. You clearly need to either find a way to mark the file as modifiable by you; or a way for you to elevate your privileges so that you are allowed to modify it.
This leads to the two traditional Unix approaches to doing this. They are:
To create a group with which to mark the file, ie. initdebug; chgrp/chmod the file so it has the initdebug group and is group writable; and, add yourself to the initdebug group so you can use the group write permission to modify the file.
To create a very small, audited binary executable (this won't work with a script) that will perform the specific modifications you desire (for simplicity I would suggest copying one of a selection of root owned PHP ini files into the right place). Then chown'ing the file so it is owned by root, and setting the suid bit on the executable so it will execute as root.
You can also combine the two approaches, either:
Not making yourself a member of the initdebug group or suid on the executable, but rather setting group of the executable to initdebug and setting its sgid bit; or,
Keeping the executable suid root but making it only executable by initdebug and therefore only executable by users added to that group.
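A sketch of the first approach (the group name initdebug and the ini path are illustrative, and the commented one-time setup needs root):

```shell
# One-time setup, as root (names are illustrative):
#   groupadd initdebug
#   usermod -a -G initdebug youruser
#   chgrp initdebug /etc/php5/conf.d/debug.ini
#   chmod g+w       /etc/php5/conf.d/debug.ini
# The effect of that chmod is just ordinary group-write permission,
# demonstrated here on a scratch file:
f=$(mktemp)
chmod u=rw,g=rw,o=r "$f"     # mode 664: owner and group may write, others read
stat -c '%a' "$f"            # prints: 664
```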
The security trade off is in the ease/risk of privilege escalation should someone hack your account. If there is a stack/heap overflow or similar vulnerability in the executable and it is executing as root, then you are gone. If the PHP ini file can be modified to open a remote-vulnerability then if they can directly access the ini file you are gone.
As I suspect the latter is possible, you are probably best off with a small executable.
Note: As I alluded to above, Unix does not acknowledge the s[ug]id bits on scripts (defined as anything using the #!... interpreter syntax). Hence, you will have to use something you can compile down to a binary. That probably means C, C++, Java (using gcj), ML, Scheme (MIT), or Haskell (GHC).
If you haven't done any C or C++ programming before, I would recommend one of the others, as a suid binary is not a project with which to learn C/C++. If you don't know any of the other languages, I would recommend either ML or Java as the easiest in which to write something small and simple.
(btw, http://en.wikipedia.org/wiki/List_of_compilers includes a list of alternative compilers you can use. Just make sure it compiles to native, not bytecode. As far as the OS is concerned a bytecode vm is just another interpreter).
You can do this by adding your user to the sudoers file on the machine you want to access remotely; for an example, see my blog post: http://nanamo3lyana.blogspot.com/2012/06/give-priviledge-normal-user-as-root.html
Then add sudo to the command in your automated script.

What does "rc" mean in dot files

In my home folder in Linux I have several config files that have "rc" as a file name extension:
$ ls -a ~/|pcregrep 'rc$'
.bashrc
.octaverc
.perltidyrc
.screenrc
.vimrc
What does the "rc" in these names mean?
It looks like one of the following:
run commands
resource control
run control
runtime configuration
Also I've found a citation:
The ‘rc’ suffix goes back to Unix's grandparent, CTSS. It had a command-script feature called "runcom". Early Unixes used ‘rc’ for the name of the operating system's boot script, as a tribute to CTSS runcom.
Normally "runtime configuration" if it's in a config directory; I think of them as resource files. (If you see "rc" elsewhere in a file name it could also be a version marker, i.e. Release Candidate.)
Edit: No, I take it back; officially it's "run commands":
[Unix: from runcom files on the CTSS system 1962-63, via the startup script /etc/rc]
Script file containing startup instructions for an application program (or an entire operating system), usually a text file containing commands of the sort that might have been invoked manually once the system was running but are to be executed automatically each time the system starts up.
Thus, it would seem that the "rc" part stands for "runcom", which I believe can be expanded to "run commands". In fact, this is exactly what the file contains, commands that bash should run.
Quoted from What does “rc” in .bashrc stand for?
I learnt something new! :)
In the context of Unix-like systems, the term rc stands for the phrase "run commands". It is used for any file that contains startup information for a command. It is believed to have originated somewhere in 1965 from a runcom facility from the MIT Compatible Time-Sharing System (CTSS).
Reference: https://en.wikipedia.org/wiki/Run_commands
In the Unix world, RC stands for "Run Control".
http://www.catb.org/~esr/writings/taoup/html/ch10s03.html
Figure I would add my finding on a previous dive into this subject.
The short version, imho: the rc in both bashrc and init's rc stands for runcom, short for "run commands".
The init usage is a homage to the runcoms of CTSS, while in the case of shells, the "shell" itself, and indeed the underlying macro procedure processor, are direct descendants of the CTSS concept first described by Louis Pouzin in 1965. See the snippets below.
SUBJECT: The SHELL: A Global Tool for Calling and Chaining Procedures in the System
FROM: Louis Pouzin
DATE: April 2, 1965
https://people.csail.mit.edu/saltzer/Multics/Multics-Documents/MDN/MDN-4.pdf
SUBJ: RUNCOM - A Macro Procedure Processor for the 636 System
FROM: Louis Pouzin
DATE: April 7, 1965
https://people.csail.mit.edu/saltzer/Multics/Multics-Documents/MDN/MDN-5.pdf
The SHELL
4.1 We may envision a common procedure called automatically by the supervisor whenever a user types in some message at his console, at a time when he has no other process in active execution under console control (presently called command level). This procedure acts as an interface between console messages and subroutine. The purpose of such a procedure is to create a medium of exchange into which one could activate any procedure, as if it were called from inside of another program. Hereafter, for simplification, we shall refer to that procedure as the "SHELL".
Requests Stacking
7.1 The chaining of requests, similar to those typed at the console, is straightforward. Consecutive calls to the SHELL, from any procedure, and at any level of recursion, allows an unlimited chaining of requests.
7.2 Another feature commonly used on the present system is the execution of a stack of requests stored into a BCD file. This mode is an easy variation, as it consists in reading a block of several BCD request strings, and postponing the return to the calling program until the block has been exhausted. Due to the present system conventions, the SHELL selects this mode of execution when the name of the request is RUNCOM, while the first argument is the BCD name of the file. But any other convention may work as well.
In addition to this, I offer "mk -- how to remake the system and commands" from the Unix Users Manual Release 3, June 1980
"The lib directory contains libraries used when loading user programs. The
largest and most important of these is the C library. All libraries are in
sub-directories and are created by a makefile or runcom. A runcom is a
Shell command procedure used specifically to remake a piece of the system.
:lib will rebuild the libraries that are given as arguments."
http://bitsavers.trailing-edge.com/pdf/att/unix/System_III/UNIX_Users_Manual_Release_3_Jun80.pdf
Further, and interestingly, the original Bourne shell, bsh, did not read a startup file when launched, unlike csh and ksh, which came after.
https://www.ibm.com/support/pages/overview-shell-startup-files
Given when ksh and csh came out, and that both make use of a start-up file of stacked commands, it makes a lot of sense that the name refers to the shell's startup runcom.
-IdS
To understand rc files, it helps to know that Ubuntu boots into several different runlevels. They are 0-6, 0 being "halt", 1 being "single-user", 2 being "multi-user" (the default runlevel), etc. This system has since been superseded by Upstart and systemd in most Linux distros, but it is still maintained for backwards compatibility.
Within the /etc directory are several folders labeled rc0.d, rc1.d, and so on through rc6.d. These are the directories that init consults to know which init scripts it should run for a given runlevel. They are symbolic links to the system service scripts residing in the /etc/init.d directory.
In the context you are using it, it would appear that you are listing any files with rc in the name. The code in these files sets the way the services/tasks start up and run when initialized.
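The rcN.d symlink layout can be imitated in miniature in a scratch directory (the names mirror the real /etc layout, but nothing here touches the system):

```shell
rcroot=$(mktemp -d)
mkdir -p "$rcroot/init.d" "$rcroot/rc2.d"
printf '#!/bin/sh\necho starting ssh\n' > "$rcroot/init.d/ssh"
chmod +x "$rcroot/init.d/ssh"
ln -s ../init.d/ssh "$rcroot/rc2.d/S20ssh"   # S = start at this runlevel, 20 = order
"$rcroot/rc2.d/S20ssh"                       # prints: starting ssh
```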
