"Ambiguous output redirect" trying to send both stdout and stderr to mailx from a command sent to at - linux

I have a bash script called test.sh which, for the sake of simplicity, prints one line to stdout and one line to stderr.
test.sh:
#!/bin/bash
echo "this is to stdout"
echo "this is to stderr" 1>&2
I want to run the script test.sh at 7:00 PM, but only if certain conditions are met. To this end, I have another bash script called schedule.sh, which checks some stuff and then submits the command to at to be run later.
I want the output of test.sh (both stdout and stderr) to be sent to me in an email. I use mailx to do this so I can get a nice subject name.
Furthermore, I want at to shut up. No output from at because it always sends me ugly emails (no subject line) if at produces any output.
schedule.sh:
#!/bin/bash
my_email="me#example.com" # Email is a variable
# Check some stuff, exit if certain conditions not met
echo "~/test.sh 2>&1 | mailx -s\"Cool title\" $my_email" | at 7:00 PM &> /dev/null
What's interesting is that when I run schedule.sh from cron (which runs the script with sh), it works perfectly. However, when I manually run schedule.sh from the terminal (NB: I'm using tcsh), at (not mailx) sends me an email saying
Ambiguous output redirect.
I'm not sure why the shell I run schedule.sh from makes a difference, when schedule.sh is a bash script.
Here is my thinking in looking at schedule.sh. Everything within the quotation marks "~/test.sh 2>&1 | mailx -s\"Cool title\" me@email.com" should be an argument to at, and at runs that argument as a command using sh. The redirection 2>&1 | is in the style of sh for this reason.
When I remove 2>&1 and only pipe the stdout of test.sh to mailx, it does work; however, I receive two emails: one from mailx with stdout, and another from at with stderr.
What gives? How can I make this work regardless of the shell I'm calling it from?
Thanks.
edit:
uname -o says my OS is GNU/Linux
Here is uname -a if it helps:
Linux [hostname censored] 2.6.9-89.ELlargesmp #1 SMP Mon Jun 22 12:46:58 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
When I check the at contents using at -c, here's what I see:
#!/bin/sh
# atrun uid=xxxxx gid=xxxxx
# mail username 0
# ...
SHELL=/bin/tcsh; export SHELL
# ...
${SHELL:-/bin/sh} << `(dd if=/dev/urandom count=200 bs=1 2>/dev/null|LC_ALL=C tr -d -c '[:alnum:]')`
~/test.sh 2>&1 | mailx -s"Cool title" me@example.com
I'm having a hard time understanding the second to last line... is this going to execute using $SHELL or /bin/sh?

The command executed via at, after bash processes the quoting in schedule.sh and expands $my_email, is:
~/test.sh 2>&1 | mailx -s"Cool title" me@example.com
The behavior of the at command varies from one system to another. On Linux, the command is typically executed using /bin/sh. In fact, on my system (Linux Mint 14), it prints a warning message:
$ echo 'printenv > at.env' | at 19:24
warning: commands will be executed using /bin/sh
On Solaris, the command is executed by the shell specified by the current value of the $SHELL environment variable. Using an account where my default shell is /bin/tcsh on Solaris 9, I get:
% echo 'printenv > at.env' | at 19:25
commands will be executed using /bin/tcsh
job 1397874300.a at Fri Apr 18 19:25:00 2014
% echo 'printenv > at.env' | env SHELL=/bin/sh at 19:28
commands will be executed using /bin/sh
job 1397874480.a at Fri Apr 18 19:28:00 2014
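Regarding the at -c output in your edit: your (older) at honors $SHELL rather than forcing /bin/sh. ${SHELL:-/bin/sh} is ordinary parameter expansion, meaning "use $SHELL if it is set and non-empty, otherwise /bin/sh". Since the job file contains SHELL=/bin/tcsh; export SHELL (captured from your environment because you submitted the job from tcsh), the here-document holding your command is fed to tcsh, and tcsh does not understand the Bourne-style 2>&1; "Ambiguous output redirect." is precisely tcsh's complaint when it sees its >& operator combined with a pipe. A quick demonstration of the expansion, runnable in any Bourne-style shell:
$ unset SHELL; echo "${SHELL:-/bin/sh}"
/bin/sh
$ SHELL=/bin/tcsh; echo "${SHELL:-/bin/sh}"
/bin/tcsh
That is also why it works from cron: cron sets SHELL=/bin/sh for the jobs it runs, so the value captured by at is /bin/sh.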
Given that at's behavior is inconsistent (and frankly confusing), I suggest having it execute just a single command, with any I/O redirection being performed inside that command. That's the best way to ensure that the command will be executed correctly regardless of which shell is used to execute it.
For example (untested code follows):
echo '#!/bin/bash' > "$HOME/tmp.bash"
echo "~/test.sh 2>&1 | mailx -s\"Cool title\" $my_email" >> "$HOME/tmp.bash"
chmod +x "$HOME/tmp.bash"
echo "$HOME/tmp.bash" | at 7:00 PM

Related

running sudo -i command in bash script? [duplicate]

I have a script where I need to start a command, then pass some additional commands as commands to that command. I tried
su
echo I should be root now:
who am I
exit
echo done.
... but it doesn't work: The su succeeds, but then the command prompt is just staring at me. If I type exit at the prompt, the echo and who am i etc start executing! And the echo done. doesn't get executed at all.
Similarly, I need for this to work over ssh:
ssh remotehost
# this should run under my account on remotehost
su
## this should run as root on remotehost
whoami
exit
## back
exit
# back
How do I solve this?
I am looking for answers which solve this in a general fashion, and which are not specific to su or ssh in particular. The intent is for this question to become a canonical for this particular pattern.
Adding to tripleee's answer:
It is important to remember that the section of the script formatted as a here-document for another shell is executed in a different shell with its own environment (and maybe even on a different machine).
If that block of your script contains parameter expansion, command substitution, and/or arithmetic expansion, then you must use the here-document facility of the shell slightly differently, depending on where you want those expansions to be performed.
1. All expansions must be performed within the scope of the parent shell.
Then the delimiter of the here document must be unquoted.
command <<DELIMITER
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=leon
a=0
mylogin=leon
2. All expansions must be performed within the scope of the child shell.
Then the delimiter of the here document must be quoted.
command <<'DELIMITER'
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<'END'
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=1
mylogin=root
a=0
mylogin=leon
3. Some expansions must be performed in the child shell, and some in the parent.
Then the delimiter of the here document must be unquoted, and you must escape those expansion expressions that are to be performed in the child shell.
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=\$(whoami)
echo a=$a
echo mylogin=\$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=root
a=0
mylogin=leon
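These rules apply equally when the child shell runs on another machine, as noted above. A sketch combining case 3 with ssh (remotehost is a placeholder):
ssh remotehost sh <<END
echo "submitted by $(whoami), running as \$(whoami)"
END
Here $(whoami) is expanded locally before the here document is sent, while the escaped \$(whoami) is expanded by the remote sh.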
A shell script is a sequence of commands. The shell will read the script file, and execute those commands one after the other.
In the usual case, there are no surprises here; but a frequent beginner error is to assume that some command (such as su) will take over from the shell and start executing the following lines of the script file itself, instead of the shell which is currently running the script. But that's not how it works.
Basically, scripts work exactly like interactive commands, but how exactly they work needs to be properly understood. Interactively, the shell reads a command (from standard input), runs that command (with input from standard input), and when it's done, it reads another command (from standard input).
Now, when executing a script, standard input is still the terminal (unless you used a redirection) but the commands are read from the script file, not from standard input. (The opposite would be very cumbersome indeed - any read would consume the next line of the script, cat would slurp all the rest of the script, and there would be no way to interact with it!) The script file only contains commands for the shell instance which executes it (though you can of course still use a here document etc to embed inputs as command arguments).
In other words, these "misunderstood" commands (su, ssh, sh, sudo, bash etc) when run alone (without arguments) will start an interactive shell, and in an interactive session, that's obviously fine; but when run from a script, that's very often not what you want.
All of these commands have ways to accept commands by ways other than in an interactive terminal session. Typically, each command supports a way to pass it commands as options or arguments:
su root -c 'who am i'
ssh user@remote uname -a
sh -c 'who am i; echo success'
Many of these commands will also accept commands on standard input:
printf 'uname -a; who am i; uptime' | su
printf 'uname -a; who am i; uptime' | ssh user@remote
printf 'uname -a; who am i; uptime' | sh
which also conveniently allows you to use here documents:
ssh user@remote <<'____HERE'
uname -a
who am i
uptime
____HERE
sh <<'____HERE'
uname -a
who am i
uptime
____HERE
For commands which accept a single command argument, that command can be sh or bash with multiple commands:
sudo sh -c 'uname -a; who am i; uptime'
As an aside, you generally don't need an explicit exit because the command will terminate anyway when it has executed the script (sequence of commands) you passed in for execution.
If you want a generic solution which will work for any kind of program, you can use the expect command.
Extract from the manual page:
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be. An interpreted language provides branching and high-level control structures to direct the dialogue. In addition, the user can take control and interact directly when desired, afterward returning control to the script.
Here is a working example using expect:
set timeout 60
spawn sudo su -
expect "*?assword" { send "*secretpassword*\r" }
send_user "I should be root now:"
expect "#" { send "whoami\r" }
expect "#" { send "exit\r" }
send_user "Done.\n"
exit
The script can then be launched with a simple command:
$ expect -f custom.script
You can view a full example in the following page: http://www.journaldev.com/1405/expect-script-example-for-ssh-and-su-login-and-running-commands
Note: The answer proposed by @tripleee only works if standard input can be read once at the start of the command, or if a tty has been allocated; it won't work for interactive programs.
Example of errors if you use a pipe
echo "su whoami" |ssh remotehost
--> su: must be run from a terminal
echo "sudo whoami" |ssh remotehost
--> sudo: no tty present and no askpass program specified
With ssh you can force a TTY allocation by passing -t multiple times, but it will still fail when sudo asks for the password.
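For reference, the doubled flag looks like this (a sketch; it allocates a pseudo-tty even though ssh's own stdin is a pipe):
echo "sudo whoami" | ssh -tt remotehost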
Without a program like expect, any call to a function/program that reads from stdin will make the next command fail:
ssh user@host <<'____HERE'
echo "Enter your name:"
read name
echo "ok."
____HERE
--> Here, read consumes the next line of the here document (echo "ok."), so that line becomes input to read instead of being executed.
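One workaround, assuming the remote commands fit on a command line: pass the script as an argument instead of on stdin, so the remote shell's standard input stays connected to your terminal and read behaves as expected (a sketch):
ssh user@host 'echo "Enter your name:"; read name; echo "Hello, $name"'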

is .bashrc getting run twice when entering a new bash instance?

I want to display the number of nested sub-shells in my bash prompt.
I often type ":sh" during a vim editing session in order to do something, then exit back to the editor. Sometimes I attempt to exit back to the editor out of habit, forgetting that I am not in any editing session and my terminal closes!
To avoid this, I added a bit of code to my .bashrc that would keep a count of the number of nested sub-shells and display it in the prompt.
Here is the code:
echo "1: SHLVL=$SHLVL"
if [[ -z $SHPID ]] ; then
echo "2: SHLVL=$SHLVL"
SHPID=$$
let "SHLVL = ${SHLVL:0} + 1"
fi
echo "3: SHLVL=$SHLVL"
(For those who may wonder, the test "-z $SHPID" ensures that $SHLVL won't get incremented again if I run ". .bashrc" again in the same shell, perhaps to test something.)
But the output looks like this:
lsiden@morpheus ~ (morpheus) (2) $ bash
1: SHLVL=3
2: SHLVL=3
3: SHLVL=4
lsiden@morpheus ~ (morpheus) (4) $ ps
PID TTY TIME CMD
10421 pts/2 00:00:00 bash
11363 pts/2 00:00:00 bash
11388 pts/2 00:00:00 ps
As you can see, there are now two instances of bash on the stack, but the variable $SHLVL has been incremented twice. The output shows that before this snippet of code even executes in my .bashrc, SHLVL has already been incremented by 1!
Is it possible for .bashrc to get run twice somehow without seeing the output of the echo commands?
SHLVL is incremented automatically whenever you fire up a shell:
~$ echo $SHLVL
1
~$ bash -c 'echo $SHLVL'
2
and then you're incrementing it again in the .bashrc.
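So the fix is to stop incrementing SHLVL yourself and just display the value bash maintains. A minimal sketch for .bashrc, assuming all you want is the nesting depth in the prompt:
# bash increments SHLVL once per nested shell; just show it
PS1="\u@\h \w ($SHLVL) \$ "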

Script produces different result when executed by Bash than by cron

Please consider following crontab (root):
SHELL=/bin/bash
...
...
0 */3 * * * /var/maintenance/raid.sh
And the bash script /var/maintenance/raid.sh:
#!/bin/bash
echo -n "Checking /dev/md0... "
if ! [ $(mdadm --detail /dev/md0 | grep -c "active sync") -eq 2 ]; then
mdadm --detail /dev/md0 | mail -s "Raid problem /dev/md0" "my@email.com";
echo "ERROR"
else
echo "ALL OK"
fi;
#-------------------------------------------------------
echo -n "Checking /dev/md1... "
...
And this is what happens when...
...executed from shell prompt (bash):
Mail with mdadm --detail /dev/md0 output is sent to my email (proper behaviour)
...executed by cron:
Blank mail is sent to my email (subject is there, but there is no message)
Why such a difference, and how do I fix it?
As indicated in the comments, use full paths in crontab scripts, because cron runs with a different environment (notably a more limited PATH) than your normal user (root in this case).
In your case, /sbin/mdadm works where plain mdadm does not.
How do you get the full path of a command? Use command -v:
$ command -v rm
/bin/rm
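Applied to the script above, that looks like this (a sketch; verify the path on your system with command -v mdadm):
MDADM=/sbin/mdadm   # full path, since cron's PATH usually omits /sbin
echo -n "Checking /dev/md0... "
if ! [ "$($MDADM --detail /dev/md0 | grep -c 'active sync')" -eq 2 ]; then
    $MDADM --detail /dev/md0 | mail -s "Raid problem /dev/md0" "my@email.com"
    echo "ERROR"
else
    echo "ALL OK"
fi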
cron tasks run in a shell that is started without your login scripts, which are what normally set up your PATH, environment variables, etc.
When building cron tasks, prefer absolute paths and explicit options.
Before running your script as a cron job, you can test it with no environment variables using env -i:
env -i /var/maintenance/raid.sh

Running bash script in cron

I have the following bash script:
#!/bin/bash
# script to send simple email
#Wait 5 seconds
#sleep 05
# email subject
SUBJECT="SUBJECT"
# Email To ?
EMAIL="EMAIL"
# Email text/message
MYIP=$(curl http://ipecho.net/plain)
LASTIP=$(head -n 1 /home/ubuntu/myip.txt)
echo "Last IP:" $LASTIP
echo "Current IP:" $MYIP
#Check if IPs are the same
if [[ "$MYIP" == "$LASTIP" ]]; then
echo "IP hasn't changed. Do nothing."
else
sendemail -f $EMAIL -t $EMAIL -u $SUBJECT -m $MYIP -s smtp.gmail.com -o tls=yes -xu username -xp password
echo $MYIP > myip.txt
fi
When I try to run it in the command line, it works perfectly. The problem starts when I include it in "crontab -e" like this: "* * * * * /home/ubuntu/myip.sh".
Then it does not work. It seems that sendmail is not functioning properly.
When I do a tail -f /var/log/syslog, I see:
Sep 18 21:48:02 gpuserver sendmail[18665]: r8J1m1gO018665: from=ubuntu, size=314, class=0, nrcpts=1, msgid=<201309190148.r8J1m1gO018665@gpuserver>, relay=ubuntu@localhost
Sep 18 21:48:02 gpuserver sendmail[18665]: r8J1m1gO018665: to=ubuntu, ctladdr=ubuntu (1000/1000), delay=00:00:01, xdelay=00:00:00, mailer=relay, pri=30314, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection refused by [127.0.0.1]
Any ideas?
When you run a script from the command line you are using your current environment; cron doesn't have that information. So, from the command line, dump your environment to a file:
env > me
Then source that file (e.g. . /path/to/me) at the start of the script. That way your cron task will run with the same environment.
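A minimal sketch of that idea (the path is illustrative; it assumes simple variable values with no newlines or quoting surprises, and set -a exports every variable assigned while the file is sourced):
# once, from your interactive shell:
env > /home/ubuntu/cron.env
# then at the top of myip.sh:
set -a
. /home/ubuntu/cron.env
set +a
In practice it is usually cleaner to set just the variables you actually need (PATH, in particular) explicitly in the script or in the crontab itself.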
When a cron job prints something to stdout or stderr, cron itself tries to send a report by e-mail. Since you probably don't have system-wide e-mail configured, this is probably where the error message in syslog is coming from.
To prevent cron from sending any mail, make sure your script and any commands it spawns don't output anything to stdout or stderr. An easy way to do this is to append >/dev/null 2>&1 to your cron line (the bash-only &>/dev/null form may not work, because cron runs commands with /bin/sh by default).
You can also add 'MAILTO=""' at the beginning of your crontab.
This is all documented in the man pages for cron and crontab (man -k cron).
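Putting both suggestions together, the crontab entry would look like this (a sketch):
MAILTO=""
* * * * * /home/ubuntu/myip.sh >/dev/null 2>&1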

Redirecting Output of Bash Child Scripts

I have a basic script that outputs various status messages, e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script, in order to run a sequence of child scripts and send an email upon overall completion, e.g. topscript.sh:
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group#mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log I see nothing, even though running top shows myscript.sh is currently being executed, and therefore, presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user@host. Could the pseudo-tty be interfering?
I just tried the following: I have three files t1.sh, t2.sh, and t3.sh, all with the following content:
#!/bin/bash
for((i=0;i<10;i++)) ; do
echo $i of 9
sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the programs run by your subscripts use a large output buffer? I know that Python scripts, for example, buffer their output heavily when not writing to a terminal, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent out at the end of topscript.sh? Is it just that you don't see the output while the processes run?
Try:
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer (it ships with the expect package) is not always available as a standard binary on older Unix platforms and may require installing a package.
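If unbuffer is not available, GNU coreutils' stdbuf can serve a similar purpose for programs that use default C stdio buffering (a sketch; -oL and -eL request line buffering for stdout and stderr):
stdbuf -oL -eL topscript.sh >/var/log/topscript.log 2>&1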
I hope this helps.
