I am writing a script to become a user (let's call it genomics) via the command "sudo /etc/bgenomics" (set up by our admin) and run some bash code as that user: run a command, catch the exit code, and take the appropriate action.
The problem is that the bash code inside the here document gets printed to the screen, which is distracting and looks really inelegant.
Here's an illustration:
#!/bin/bash
name='George'
sudo /etc/bgenomics <<Q
/bin/bash
if (( 2 == 2 )); then
  echo "my name is $name"
  grep zzz /etc # will return nothing and $? = 1
  echo \$? # this should be 1 after the above cmd
fi
Q
The if statement is just there to show how annoying it is when printed.
Right now all of the following is printed to the screen:
Script started, file is /var/tmp/genomicstraces/c060644.20140617143003.11536
Script done, file is /var/tmp/genomicstraces/c060644.20140617143003.11536
brainiac-login-02$brainiac-login-02$/bin/bash
bash-3.2$ if (( 2 == 2 )); then
> echo "my name is George"
> grep zzz /etc # will return nothing and 0 = 1
> echo $? # this should be 1 after the above cmd
> fi
my name is George
1
The only parts I want to see are "my name is George" and "1". Can it be done?
Is another process calling this script? Output shouldn't normally appear unless bash is invoked with -x. If you cannot disable echoing in the calling process, try modifying the first line of your script:
#!/bin/bash +x
You may also want to remove the call to /bin/bash after the sudo command, unless you really mean to start another shell within your shell.
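For illustration, the here document without the extra shell might look like this (a sketch; it assumes /etc/bgenomics feeds its standard input to a shell running as genomics):
#!/bin/bash
name='George'
sudo /etc/bgenomics <<Q
echo "my name is $name"   # expanded by the outer shell before the heredoc is sent
grep zzz /etc
echo \$?                  # escaped, so evaluated inside the genomics session
Q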
The here document supplies input to the bgenomics script via its standard input. What happens to that input is up to that script.
If you want the script to print some of its input, and not print some of its input, you have to modify the script.
If bgenomics is actually a wrapper for an interactive shell session (as it seems to be, judging by the Script started and Script done traces), then here documents are not the best way to feed input into it.
A good way is to use the expect utility, which controls interactive programs via a pseudo-terminal device and provides a scripting language with a great deal of control. expect can suppress all unwanted output from an interactive program. It can look for specific outputs from the program and supply responses. For instance, it can look for a login: string coming from the interactive session and send a user name.
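For instance, a minimal expect sketch might look like this (the prompt pattern and the command sent are assumptions for illustration, not taken from bgenomics):
#!/usr/bin/expect -f
log_user 0                         ;# suppress the session's own chatter
spawn sudo /etc/bgenomics
expect "$ "                        ;# assumed shell prompt
send "grep zzz /etc; echo \$?\r"
expect -re {(\d+)\r\n}             ;# grab the exit-status line
puts $expect_out(1,string)         ;# print only what we care about
send "exit\r"
expect eof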
The program bgenomics has an invocation of script in it to record what the session did. Talk to the person in charge of that to understand their intentions; until you understand the purpose of bgenomics, you risk interfering with what its author is trying to do. Here is script in action:
$ script /tmp/junk.txt
Script started, file is /tmp/junk.txt
$ date # this is a child shell of the script command
Tue Jun 17 21:04:14 EDT 2014
$ exit
Script done, file is /tmp/junk.txt
Related
We have two bash scripts to start up an application. The first one (Start-App.sh) sets up the environment and the second (startup.sh) is from a 3rd party that we are trying not to heavily edit. If someone runs the second script before the first, the application does not come up correctly.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
They are both in the same directory and run via bash on Red Hat Linux.
Is there a way to ensure that the startup.sh can only be called from the Start-App.sh script?
Ensure? No. And even less so without editing startup.sh at all. But you can get fairly close.
Below are three suggestions; you can use any one of them, or any combination.
The simplest, and probably the best, way is to add a single line at the top of startup.sh:
[ -z "$CALLED_FROM_START_APP" ] && { echo "Not called from Start-App.sh"; exit 42; }
And then call it from Start-App.sh like so:
export CALLED_FROM_START_APP=yes
sh startup.sh
Of course, you can set this environment variable from the shell yourself, so it won't actually ensure anything; but I hope your engineering staff is mature enough not to do this.
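(For instance, anyone could bypass the check in one line: CALLED_FROM_START_APP=yes sh startup.sh)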
You can also remove the execute permissions from startup.sh:
$ chmod a-x startup.sh
This will not prevent people from using sh startup.sh, so the guarantee here is very small; but it might prevent auto-completion oopsies, and it marks the file as "not intended to be executed": if I see a directory with only one executable .sh file, I'll try to run that one, and not one of the others.
Lastly, you could perhaps rename the startup.sh script; for example, you could rename it to do_not_run, or "hide" it by renaming it to .startup. This probably won't interfere with the operation of this script (although I can't check this).
TL;DR:
[ "$(basename "$0")" = "Start-App.sh" ] || exit
Explanation
As with all the other solutions presented, it's not 100% bulletproof, but it covers the most common cases I've come across: preventing a script from accidentally being run directly instead of being called from another script.
Unlike other approaches presented, this approach:
doesn't rely on manually set file names for each included/sourced script (i.e. is resilient to file name changes)
behaves consistently across all major *nix distros that ship with bash
introduces no unnecessary environment variables
isn't tied to a single parent script
prevents running the script through calling bash explicitly (e.g. bash myscript.sh)
The basic idea is having something like this at the top of your script:
[ "$(basename "$0")" = "$(basename "$BASH_SOURCE")" ] && exit
$0 returns the name of the script at the beginning of the execution chain
$BASH_SOURCE will always point to the file the currently executing code resides in (or be empty if there is no file, e.g. when piping text directly to bash)
basename returns only the main file name without any directory information (e.g. basename "/user/foo/example.sh" will return example.sh). This is important so you don't get false negatives from comparing example.sh and ./example.sh for example.
To adapt this to only allow running when sourced from one specific file as in your question and provide a helpful error message to the end user, you could use:
[ "$(basename "$0")" = "Start-App.sh" ] || { echo "[ERROR] To start MyApplication please run ./Start-App.sh" >&2; exit 1; }
As mentioned from the start of the answer, this is not intended as a serious security measure of any kind, but I'm guessing that's not what you're looking for anyway.
You can make startup.sh non-executable by typing chmod -x startup.sh. That way the user would not be able to run it simply by typing ./startup.sh.
Then from Start-App.sh, call your script by explicitly invoking the shell:
sh ./startup.sh arg1 arg2 ...
or
bash ./startup.sh arg1 arg2 ...
You can check which shell it's supposed to run in by inspecting the first line of startup.sh; it should look like:
#!/bin/bash
You can set an environment variable in your first script and, before running the second script, check that the variable is set properly.
Another alternative is checking the parent process and finding the calling script. This also needs some code added to the second script.
For example, put something like the following in the called script; it exits with status 1 when the last field of the ps output (the command) does not match parent (replace parent with the calling script's name):
ps $PPID | tail -1 | awk '$NF!~/parent/{exit 1}'
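A slightly fuller sketch along the same lines (the name Start-App.sh is taken from the question; everything else is illustrative):
# In startup.sh: look up the parent process's command line and
# bail out unless it mentions Start-App.sh.
parent_cmd=$(ps -o args= -p "$PPID")
case "$parent_cmd" in
  *Start-App.sh*) ;;                                  # called from Start-App.sh: continue
  *) echo "Run Start-App.sh instead" >&2; exit 1 ;;
esac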
As others have pointed out, the short answer is "no"; you can play with permissions all day, but it is still not bulletproof. Since you said you don't mind editing (just not heavily editing) the second script, the best way to accomplish this would be something along the lines of:
1) In the parent/first script, export an environment variable holding its PID. This becomes the known parent PID. For example:
# bash: store the parent PID
export FIRST_SCRIPT_PID=$$
2) Then, very briefly, in the second script, check whether the calling PID matches the known acceptable parent PID. For example:
# confirm the calling PID
if [ "$PPID" != "$FIRST_SCRIPT_PID" ]; then
    exit 1   # not called from the expected parent; refuse to continue
fi
To recap: the most direct way to do this is adding at least a minimal line or two to the second script, which hopefully doesn't count as "heavily editing".
You can create a script, let's call it check-if-my-env-set, containing:
#! /bin/bash
source Start-App.sh
exec /bin/bash "$@"
and replace the shebang of startup.sh with that script, so that
#! /bin/bash
...
becomes
#! /abs/path/to/check-if-my-env-set
...
Then, every time you run startup.sh, it will first ensure the environment is set correctly.
To the best of my knowledge, there is no way to do this in a way that it would be impossible to get around it.
However, you could stop most attempts by using permissions.
Change the owner of the startup.sh file:
sudo chown app_specific_user startup.sh
Make startup.sh executable only by its owner:
chmod u+x,go-x startup.sh
Run startup.sh as the app_specific_user from Start-App.sh:
sudo -u app_specific_user ./startup.sh
I have a script that calls an application that requires user input, e.g. an app that requires the user to type 'Y' or 'N'.
How can I get the shell script not to ask the user for the input but rather use the value from a predefined variable in the script?
In my case there will be two questions that require input.
You can pipe in whatever text you'd like on stdin and it will be just the same as having the user type it themselves. For example, to simulate typing "Y", just use:
echo "Y" | myapp
or using a shell variable:
echo "$ANSWER" | myapp
There is also a Unix command called yes that outputs a continuous stream of "y" for apps that ask lots of questions you just want to answer in the affirmative.
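For example (myapp as above):
yes | myapp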
If the app reads from stdin (as opposed to from /dev/tty, as e.g. the passwd program does), then multiline input is the perfect candidate for a here-document.
#!/bin/sh
the_app [app options here] <<EOF
Yes
No
Maybe
Do it with $SHELL
Quit
EOF
As you can see, here-documents even allow parameter substitution. If you don't want this, use <<'EOF'.
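For example, with a quoted delimiter, the $SHELL line below would reach the_app literally instead of being expanded first:
the_app [app options here] <<'EOF'
Do it with $SHELL
EOF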
There is also the expect command for more complicated situations; your system should have it. I haven't used it much myself, but I suspect it's what you're looking for.
$ man expect
http://oreilly.com/catalog/expect/chapter/ch03.html
I prefer this way: if you want multiple inputs, you put in multiple echo statements, as so:
{ echo Y; echo Y; } | sh install.sh >> install.out
In the example above, I am feeding two inputs into the install.sh script. Then, at the end, I append the script's output to a log file to be archived and viewed later.
I have a shell script which writes all output to a logfile and to the terminal; this part works fine. But if I execute the script, a new shell prompt only appears if I press enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt; it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee running in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange.com. Fragile workarounds aside, the easiest way I see to solve this is to put your whole script inside a group command:
{
your-code-here
} | tee logfile
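Applied to the script above, that might look like this sketch:
#!/bin/bash
{
    echo "output"
    # ... the rest of your commands ...
} | tee logfile
Since the shell now waits for the whole pipeline, including tee, the prompt is printed only after tee has finished.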
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround that works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you want to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # back up original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
retval=$?
exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
wait "$tee_pid" # Now, wait until tee exits
exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"
I have a bash script, a.sh.
When I run a.sh, I need to fill in several read prompts. Let's say it goes like this:
./a.sh
Please input a comment for script usage
test (I need to type this line manually when running a.sh, then press Enter to continue)
Now I call a.sh from my new script b.sh. Can I let b.sh fill in the "test" string automatically?
And one other question: a.sh prints a lot to the console; can I mute those prints from b.sh without changing a.sh?
Thanks.
Within broad limits, you can have one script supply the standard input to another script.
However, you'd probably still see the prompts, even though you'd not see anything that satisfies those prompts. That would look bad. Also, depending on what a.sh does, you might need it to read more information from standard input — but you'd have to ensure the script calling it supplies the right information.
Generally, though, you should try to avoid this. Scripts that prompt for input are bad for automation. It is better to supply the inputs via command-line arguments; that makes it easy for your second script, b.sh, to drive a.sh.
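For instance, a hypothetical argument-driven version of a.sh might look like this (falling back to a prompt when no argument is given):
#!/bin/bash
# Use the first argument as the comment if given; prompt otherwise.
if [ $# -ge 1 ]; then
    comment=$1
else
    read -r -p 'Please input a comment for script usage: ' comment
fi
echo "comment: ${comment}"
Then b.sh can simply run: ./a.sh test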
a.sh
#!/bin/bash
read myvar
echo "you typed ${myvar}"
b.sh
#!/bin/bash
echo "hello world"
You can do this in two ways:
$ ./b.sh | ./a.sh
you typed hello world
$ ./a.sh <<< "$(./b.sh)"
you typed hello world
I have the following scenario. I have a shell script that is generated automatically, that I want to run. The general format of the script looks something like this:
#!/bin/sh
command_1 #something like mkdir dir1 or chmod -R 775 dir1, you get the idea
command_2
...
...
command_n
Like I said, the script will be generated automatically, in a way that gives me little control over the commands written in it (the purpose of the script is fuzz testing, so that makes sense). The problem is that some commands require user input (for example, chfs --some arguments will sometimes prompt me for the root password), and the script will not move on to the next command until it gets the proper input.
So, my question is: is there a way to skip the commands that require user input when they are met in such a script, so that the script finishes and executes all the other commands? Any idea is greatly appreciated.
You can use an expect script to work around this; something like:
#!/usr/bin/expect -f
spawn /bin/bash yourscript.sh
expect "password:"
# Send the password, then wait for the script to finish.
send "xxxxx\r"
expect eof
Here xxxxx is your password.
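You would save this as, say, skip-prompts.exp (the name is just an example) and run it with expect skip-prompts.exp.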
Let's say your script requires the user to enter a choice interactively. The user presses y, then it asks for a user name. The user enters his name, and the script continues:
Enter choice (y/n):_
Enter name :_
So you can pass the inputs by preparing an input file with the answers written one per line.
Content of the input file:
y
Inderdeep
And run the script as : cat inputfile | ./script
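Equivalently, you can skip the cat and redirect the file directly: ./script < inputfile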