Get full command from shell script - linux

I'm looking for a way to access the full command line from a shell script. For example, assume I have a script called test.sh. When I run it, the command line is passed to ruby as is (except that the script name itself is removed).
$ test.sh print ENV['HOME']
Is equivalent to
$ ruby -e "print ENV['HOME']"

When you run:
test.sh print ENV['HOME']
...then, before test.sh is started, the shell runs string-splitting, expansion, and similar processes. Thus, what's eventually run is (assuming no glob expansion):
execvp("test.sh", {"test.sh", "print", "ENV[HOME]"});
If you have a file named ENVH in the current directory, the shell may treat ENV['HOME'] as a glob, expanding it by replacing the glob expression with the filename, and thus running:
execvp("test.sh", {"test.sh", "print", "ENVH"});
...in any event, what exists on the other side of the execv*-series call done to run the new program has no information which was local to the original shell -- and thus no way of knowing what the original command was before parsing and expansion. Thus, it is impossible to retrieve the original string unless the outer shell is modified to expose it out-of-band (as via an environment variable).
This is why your calling convention should instead require:
test.sh "print ENV['HOME']"
or, allowing even more freedom from shell quoting/escaping syntax, passing program text via stdin, as with:
test.sh <<'EOF'
print ENV['HOME']
EOF
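For concreteness, here is a minimal sketch of what test.sh could look like under either convention (the ruby invocation mirrors the question; given no -e and no filename, ruby reads its program from standard input):

#!/bin/bash
# test.sh -- run the ruby code passed as a single argument, or read it from stdin
if [ "$#" -gt 0 ]; then
  exec ruby -e "$1"
else
  exec ruby          # no arguments: ruby takes its program from stdin
fi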
Now, if you want to modify your shell to do that, I'd suggest a function that exposes BASH_COMMAND. For instance:
shopt -s extdebug
expose_command() {
  export SHELL_COMMAND="$BASH_COMMAND"
  return 0
}
trap expose_command DEBUG
...then, inside test.sh, you can refer to SHELL_COMMAND. Again, however: This will only work if the calling shell had that trap configured, as within a user's ~/.bashrc; you can't simply put the above content in a script and expect it to work, because it's only the interactive shell -- the script's parent process -- that has access to this information and is thus able to expose it.
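Assuming the calling shell has that trap set up in its ~/.bashrc, a sketch of how test.sh could consume the variable:

#!/bin/bash
# test.sh -- SHELL_COMMAND is only present if the parent shell's DEBUG trap exported it
printf 'Original command line: %s\n' "${SHELL_COMMAND:-<not available>}"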


How to set a global environment variable in a bash script?
If I do stuff like
#!/bin/bash
FOO=bar
...or
#!/bin/bash
export FOO=bar
...the vars seem to stay in the local context, whereas I'd like to keep using them after the script has finished executing.
Run your script with .
. myscript.sh
This will run the script in the current shell environment.
export governs which variables will be available to new processes, so if you say
FOO=1
export BAR=2
./runScript.sh
then $BAR will be available in the environment of runScript.sh, but $FOO will not.
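To see this for yourself, a hypothetical runScript.sh could simply print both variables:

#!/bin/sh
# runScript.sh -- BAR was exported by the caller, FOO was not
echo "FOO=$FOO"    # prints: FOO=
echo "BAR=$BAR"    # prints: BAR=2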
When you run a shell script, it's done in a sub-shell so it cannot affect the parent shell's environment. You want to source the script by doing:
. ./setfoo.sh
This executes it in the context of the current shell, not as a sub shell.
From the bash man page:
. filename [arguments]
source filename [arguments]
Read and execute commands from filename in the current shell
environment and return the exit status of the last command executed
from filename.
If filename does not contain a slash, file names in PATH are used to
find the directory containing filename.
The file searched for in PATH need not be executable. When bash is not
in POSIX mode, the current directory is searched if no file is found
in PATH.
If the sourcepath option to the shopt builtin command is turned off,
the PATH is not searched.
If any arguments are supplied, they become the positional parameters
when filename is executed.
Otherwise the positional parameters are unchanged. The return status
is the status of the last command exited within the script (0 if no
commands are executed), and false if filename is not found or cannot
be read.
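A quick way to convince yourself, assuming setfoo.sh contains nothing but FOO=bar and is executable:

$ ./setfoo.sh; echo "FOO=$FOO"      # runs in a sub-shell; the variable is lost
FOO=
$ . ./setfoo.sh; echo "FOO=$FOO"    # runs in the current shell
FOO=bar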
source myscript.sh is also feasible.
Description of the source command:
source is a Unix command that evaluates the file following the command,
as a list of commands, executed in the current context
#!/bin/bash
export FOO=bar
or
#!/bin/bash
FOO=bar
export FOO
man export:
The shell shall give the export attribute to the variables corresponding to the specified names, which shall cause them to be in the environment of subsequently executed commands. If the name of a variable is followed by =word, then the value of that variable shall be set to word.
A common design is to have your script output a result, and require the cooperation of the caller. Then you can say, for example,
eval "$(yourscript)"
or perhaps less dangerously
cd "$(yourscript)"
This extends to tools in other languages besides shell script.
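As a sketch of that design (the script name and variable are made up for illustration):

#!/bin/sh
# yourscript: print shell commands on stdout instead of modifying its own environment
printf 'FOO=%s; export FOO\n' "bar"

The caller then picks the settings up:

eval "$(./yourscript)"
echo "$FOO"    # prints: bar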
In your shell script, write the variables to another file as below, and source that file from your ~/.bashrc or ~/.zshrc:
echo "export FOO=bar" >> environment.sh
In your ~/.bashrc or ~/.zshrc, source it like below:
source Path-to-file/environment.sh
You can then access it globally.
FOO=bar
export FOO

How does a subshell's executed lines get printed to the main shell without running the source command?

Let's say I have an executable shell script called foo.sh. Inside it is a simple echo "Hello World". From my understanding, when I run this via ./foo.sh, a subshell is invoked which executes the echo "Hello World" line.
Why, then, do I see the output of the echo command in my main shell/terminal? I would think you'd have to do a "source ./foo.sh" instead of the simple "./foo.sh" to see the output in your current shell.
Can any of you help clarify?
The standard output is inherited. Quoting from the Bash Reference Manual:
Command Execution Environment
When a simple command other than a builtin or shell function is to be
executed, it is invoked in a separate execution environment that
consists of the following. Unless otherwise noted, the values are
inherited from the shell.
the shell’s open files, plus any modifications and additions specified by redirections to the command
...
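You can watch the inheritance directly: the child writes to whatever its file descriptor 1 happens to be, which is whatever the parent's was when the child was started:

$ ./foo.sh           # fd 1 is the terminal, and the child inherits it
Hello World
$ ./foo.sh > out     # fd 1 is the file out; the child inherits that instead
$ cat out
Hello World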

bash + Linux + how to ignore the character "!"

I want to send a little script to a remote machine by ssh.
the script is
#!/bin/bash
sleep 1
reboot
but I get "event not found" because of the "!":
ssh 183.34.4.9 "echo -e '#!/bin/bash\nsleep 1\reboot>'/tmp/file"
-bash: !/bin/bash\nsleep: event not found
How do I escape the "!" character so the script can be sent successfully over ssh?
Remark: I can't use "\" before the "!", because then I get
more /tmp/file
#\!/bin/bash
sleep 1
Use set +H before your command to disable ! style history substitution:
set +H
ssh 183.34.4.9 "echo -e '#!/bin/bash\nsleep 1\reboot>'/tmp/file"
# enable history expansion again
set -H
I think your command line is not well formatted. You can send this:
ssh 183.34.4.9 'echo -e "#!/bin/bash\nsleep 1\nreboot">/tmp/file'
When I say "not well formatted" I mean you put the ">" inside the "echo" argument, and you forgot the "n" before "reboot": you wrote "\reboot", which will be interpreted as "\r" (carriage return) followed by an "eboot" command (which I don't think exists).
But what did the trick here is swapping the quotes, changing (') to (") and vice versa.
Bash is running interactively (meaning you are feeding commands to it from the standard input rather than exec(2)ing a shell script), so you don't need to include the #!/bin/bash line in that case (bash would simply ignore it as a comment, were it not for the embedded bang, which is part of the active history mechanism).
But why? The first two characters of an executable file (any file capable of being exec(2)ed from secondary storage, which is not your case) have a special meaning for the kernel: they are the magic number that identifies the kind of executable file being loaded. This allows the kernel to select the proper loading routine for each binary executable format (and is what allows you, for example, to execute BSD programs on Linux kernels, and vice versa).
A special value of this magic number is the pair of characters # and ! (in that order), which forces the kernel to read the complete first line of the file and load the interpreter specified there instead, allowing you to execute shell scripts for different interpreters directly from the command line. The pair was chosen on purpose, as # is commonly a comment character in shell script parlance. When a non-interactive shell loads a script, it normally reads the first line too, checks for the #! mark, and loads the proper interpreter, replicating what the kernel does. Although the line is a comment to the shell, this allows it to treat as executables files that are not stored on secondary storage (the only ones the exec(2) system call can deal with) but come from stdin, as yours does.
As your shell is running interactively and you want it to execute your commands without switching shells, you don't need that line and can eliminate it completely, without having to disable the bang character.
Sorry, but the solution given above about disabling history expansion in the executing shell will probably not be viable, as the shell executing the commands is the login shell on the target machine, so you cannot pass specific parameters to it (its parameters are selected by the login(8) program and normally don't include arbitrary options).
The best solution is to eliminate the #!/bin/bash line entirely, as you are not going to exec(2) that program on the target. In case you want to select the shell from the input line (say, the user has a different shell installed as their login shell), it is better to invoke the wanted shell explicitly in the command line and pass it (through stdin, or by making it read the script as a file) the shell commands you want to execute (but again, without the #! line).
NOTE
It's important to ensure the whole thing gets executed, so it's best to first transfer the complete script contents to the destination target and, once you are sure everything arrived intact, execute it as a whole. Then your #! first line will be processed properly, as the executable will be run by means of an exec(2) made from the kernel.
Example:
DIRECTORY=/bla/bla
FILE=/path/to/file
OUTPUT=/path/to/output
# this is the command we want to pass through the line
cat <<EOF | ssh user@target "cat >>/tmp/shell.sh"
cd $DIRECTORY
foo $FILE >$OUTPUT
exit 0
EOF
# we have copied the script file in a remote /tmp/shell.sh
# and we are sure it has passed correctly, so it's ready
# for local execution there.
# now, execute it.
# the remote shell won't be interactive, and you'll ensure that it is /bin/bash
ssh user@target "/bin/bash /tmp/shell.sh" >remote_shell.out
A more sophisticated scheme is one that allows you to sign the shell script before sending it and to verify the signature before executing it, so you are protected against possible trojan horse attacks. But this is out of scope for this explanation.
Another alternative is to use the batch(1) command remotely and pass it all the commands you want executed. You'll get a sessionless execution environment, more suitable to the task you are demanding (despite the fact that you'll get the script output by email to the target user running the script).
Interactively, beware that ! triggers history expansion inside double quotes
from here: https://riptutorial.com/bash/example/2465/quoting-literal-text
my recommended solution is to use single quotes to define the string (escaping any embedded single quote as '\'' or using double quotes " within the string):
ssh 183.34.4.9 'echo -e "#!/bin/bash\nsleep 1\nreboot" >/tmp/file'
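A quick interactive illustration of when the ! actually bites (bash with default settings):

$ echo "hi!"            # a trailing ! is not treated as history expansion
hi!
$ echo "hi!there"       # inside double quotes, !there is expanded
-bash: !there: event not found
$ echo 'hi!there'       # single quotes suppress history expansion entirely
hi!there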

How to run a csh script from a sh script

I was wondering if there is a way to source a csh script from a sh script. Below is an example of what I am trying to implement:
script1.sh:
#!/bin/sh
source script2
script2:
#!/bin/csh -f
setenv TEST 1234
set path = /home/user/sandbox
When I run sh script1.sh, I get syntax errors from script2 (expected, since it uses a different shebang). Is there a way I can run script2 through script1?
Instead of source script2 run it as:
csh -f script2
Since your use case depends on retaining environment variables set by the csh script, try adding this to the beginning of script1:
#!/bin/sh
if [ "$csh_executed" -ne 1 ]; then
csh_executed=1 exec csh -c "source script2;
exec /bin/sh \"$0\" \"\$argv\"" "$#"
fi
# rest of script1
If the csh_executed variable is not set to 1 in the environment, run a csh script that sources script2 then executes an instance of sh, which will retain the changes to the environment made in script2. exec is used to avoid creating new processes for each shell instance, instead just "switching" from one shell to the next. Setting csh_executed in the environment of the csh command ensures that we don't get stuck in a loop when script1 is re-executed by the csh instance.
Unfortunately, there is one drawback that I don't think can be fixed, at least not with my limited knowledge of csh: the second invocation of script1 receives all the original arguments as a single string, rather than a sequence of distinct arguments.
You don't want source there; it runs the given script inside your existing shell, without spawning a subprocess. Obviously, your sh process can't run something like that which isn't a sh script.
Just call the script directly, assuming it is executable:
script2
The closest you can come to sourcing a script with a different executor than your original script is to use exec. exec will replace the running process space with the new process. Unlike source, however, when your exec-ed program ends, the entire process ends. So you can do this:
#!/bin/sh
exec /path/to/csh/script
but you can't do this:
#!/bin/sh
exec /path/to/csh/script
some-other-command
However, are you sure you really want to source the script? Maybe you just want to run it in a subprocess:
#!/bin/sh
csh -f /path/to/csh/script
some-other-command
You want the settings in your csh script to apply to the sh script that invokes it.
Basically, you can't do that, though there are some (rather ugly) ways you could make it work. If you execute your csh script, it will set those variables in the context of the process running the script; they'll vanish as soon as it returns to the caller.
Your best bet is simply to write a new version of your csh script as an sh script, and source or . it from the calling sh script.
You could translate your csh script:
#!/bin/csh -f
setenv TEST 1234
set path = /home/user/sandbox
to this:
export TEST=1234
export PATH=/home/user/sandbox
(csh treats the shell array variable $path specially, tying it to the environment variable $PATH. sh and its derivatives don't do that, they deal with $PATH itself directly.)
Note that a script intended to be sourced should not have a #! line at the top, since it doesn't make sense to execute it in its own process; you need to execute its contents in the context of the caller.
If maintaining two copies of the script, one to be sourced from csh or tcsh scripts and another to be sourced or .ed from sh/ksh/bash/zsh script, is not practical, there are other solutions. For example, your script can print a series of sh commands to be executed; you can then do something like
eval `./foo.csh`
(line endings will pose some issues here).
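For instance, a hypothetical foo.csh along these lines (the trailing semicolons sidestep the newline issue just mentioned):

#!/bin/csh -f
# foo.csh -- emit sh syntax on stdout for the caller to eval
echo 'TEST=1234; export TEST;'
echo 'PATH=$PATH:/home/user/sandbox; export PATH;'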
Or you can modify the csh script so it sets the required environment variables and then invokes some specified command, which could be a new interactive shell; this is inconvenient, since it doesn't set those variables in the interactive shell you're running.
If a software package requires some special environment variables to be set, it's common practice to provide scripts called, for example, setup.sh and setup.csh, so that sh/ksh/bash/zsh users can do:
. /path/to/package/setup.sh
and csh/tcsh users can do:
source /path/to/package/setup.csh
Incidentally, this command:
set path = /home/user/sandbox
in your sample script is probably not a good idea. It replaces your entire $PATH with just a single directory, which means you won't be able to execute simple commands like ls unless you specify their full paths. You'd usually want something like:
set path = ( $path /home/user/sandbox )
or, in sh:
PATH=$PATH:/home/user/sandbox

Need explanations for Linux bash builtin exec command behavior

From Bash Reference Manual I get the following about exec bash builtin command:
If command is supplied, it replaces the shell without creating a new process.
Now I have the following bash script:
#!/bin/bash
exec ls;
echo 123;
exit 0
This executed, I got this:
cleanup.sh ex1.bash file.bash file.bash~ output.log
(files from the current directory)
Now, if I have this script:
#!/bin/bash
exec ls | cat
echo 123
exit 0
I get the following output:
cleanup.sh
ex1.bash
file.bash
file.bash~
output.log
123
My question is:
If exec replaces the shell without creating a new process, why is echo 123 printed when | cat is present, but not without it? I would be happy if someone could explain the logic of this behavior.
Thanks.
EDIT:
After @torek's response, I found behavior that is even harder to explain:
1. exec ls>out creates the file out and puts the output of ls in it;
2. exec ls>out1 ls>out2 creates both files, but puts no output in either. If the command works as suggested, I think command number 2 should have the same result as command number 1 (even more, I think it should not have created the out2 file at all).
In this particular case, you have the exec in a pipeline. In order to execute a series of pipeline commands, the shell must initially fork, making a sub-shell. (Specifically it has to create the pipe, then fork, so that everything run "on the left" of the pipe can have its output sent to whatever is "on the right" of the pipe.)
To see that this is in fact what is happening, compare:
{ ls; echo this too; } | cat
with:
{ exec ls; echo this too; } | cat
The former runs ls without leaving the sub-shell, so that this sub-shell is therefore still around to run the echo. The latter runs ls by leaving the sub-shell, which is therefore no longer there to do the echo, and this too is not printed.
(The use of curly-braces { cmd1; cmd2; } normally suppresses the sub-shell fork action that you get with parentheses (cmd1; cmd2), but in the case of a pipe, the fork is "forced", as it were.)
Redirection of the current shell happens only if there is "nothing to run", as it were, after the word exec. Thus, e.g., exec >stdout 4<input 5>>append modifies the current shell, but exec foo >stdout 4<input 5>>append tries to exec command foo. [Note: this is not strictly accurate; see addendum.]
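For example, a redirection-only exec changes the current shell's descriptors for everything that follows:

exec 3>trace.log       # no command word: fd 3 is opened in the current shell
echo "step one" >&3    # every subsequent command can use fd 3
exec 3>&-              # close fd 3 again when done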
Interestingly, in an interactive shell, after exec foo >output fails because there is no command foo, the shell sticks around, but stdout remains redirected to file output. (You can recover with exec >/dev/tty. In a script, the failure to exec foo terminates the script.)
With a tip of the hat to @Pumbaa80, here's something even more illustrative:
#! /bin/bash
shopt -s execfail
exec ls | cat -E
echo this goes to stdout
echo this goes to stderr 1>&2
(note: cat -E is simplified down from my usual cat -vET, which is my handy go-to for "let me see non-printing characters in a recognizable way"). When this script is run, the output from ls has cat -E applied (on Linux this makes end-of-line visible as a $ sign), but the output sent to stdout and stderr (on the remaining two lines) is not redirected. Change the | cat -E to > out and, after the script runs, observe the contents of file out: the final two echos are not in there.
Now change the ls to foo (or some other command that will not be found) and run the script again. This time the output is:
$ ./demo.sh
./demo.sh: line 3: exec: foo: not found
this goes to stderr
and the file out now has the contents produced by the first echo line.
This makes what exec "really does" as obvious as possible (but no more obvious, as Albert Einstein did not put it :-) ).
Normally, when the shell goes to execute a "simple command" (see the manual page for the precise definition, but this specifically excludes the commands in a "pipeline"), it prepares any I/O redirection operations specified with <, >, and so on by opening the files needed. Then the shell invokes fork (or some equivalent but more-efficient variant like vfork or clone depending on underlying OS, configuration, etc), and, in the child process, rearranges the open file descriptors (using dup2 calls or equivalent) to achieve the desired final arrangements: > out moves the open descriptor to fd 1—stdout—while 6> out moves the open descriptor to fd 6.
If you specify the exec keyword, though, the shell suppresses the fork step. It does all the file opening and file-descriptor-rearranging as usual, but this time, it affects any and all subsequent commands. Finally, having done all the redirections, the shell attempts to execve() (in the system-call sense) the command, if there is one. If there is no command, or if the execve() call fails and the shell is supposed to continue running (is interactive or you have set execfail), the shell soldiers on. If the execve() succeeds, the shell no longer exists, having been replaced by the new command. If execfail is unset and the shell is not interactive, the shell exits.
(There's also the added complication of the command_not_found_handle shell function: bash's exec seems to suppress running it, based on test results. The exec keyword in general makes the shell not look at its own functions, i.e., if you have a shell function f, running f as a simple command runs the shell function, as does (f) which runs it in a sub-shell, but running (exec f) skips over it.)
As for why ls>out1 ls>out2 creates two files (with or without an exec), this is simple enough: the shell opens each redirection, and then uses dup2 to move the file descriptors. If you have two ordinary > redirects, the shell opens both, moves the first one to fd 1 (stdout), then moves the second one to fd 1 (stdout again), closing the first in the process. Finally, it runs ls ls, because that's what's left after removing the >out1 >out2. As long as there is no file named ls, the ls command complains to stderr, and writes nothing to stdout.
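A transcript makes that concrete (assuming the current directory contains no file named ls):

$ ls >out1 ls >out2
ls: cannot access 'ls': No such file or directory
$ wc -c out1 out2      # both files were created, both are empty
0 out1
0 out2
0 total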
