I have some problems with ntpd sync on my servers. Until that issue is resolved, I have written a script to set the date manually on the servers that are out of sync.
For this I have taken one reference machine; I capture its current date and try to set it on all the other machines.
I'm using the following command in the script:
ssh -i /mnt/keys/g.pem -o StrictHostKeyChecking=no root@$IP 'date --set="$ref_date"'
but when the command runs, it sets the wrong date.
e.g. ref_date=Sat Sep 24 06:52:17 UTC 2016
When I echo the command, it shows the following line:
ssh -i /mnt/keys/g.pem -o StrictHostKeyChecking=no root@xx.xx.xx.xx 'date --set="Sat Sep 24 06:52:17 UTC 2016"'
but when the same command actually runs, it gives the following output:
ssh -i /mnt/keys/g.pem -o StrictHostKeyChecking=no root@xx.xx.xx.xx 'date --set="Sat Sep 24 06:52:17 UTC 2016"'
Sat Sep 24 00:00:00 UTC 2016
Note: I have replaced $IP with xx.xx.xx.xx in the outputs above.
Kindly provide a solution to this.
ssh -i /mnt/keys/g.pem -o StrictHostKeyChecking=no root@$IP "date --set=\"$ref_date\""
See: Difference between single and double quotes in Bash
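As a quick illustration (a minimal sketch reusing the same variable): inside single quotes the local shell leaves $ref_date alone, so the remote shell has to expand it, and there the variable is empty; inside double quotes the local shell expands it before ssh ever runs:
ref_date="Sat Sep 24 06:52:17 UTC 2016"
echo 'date --set="$ref_date"'     # prints: date --set="$ref_date"
echo "date --set=\"$ref_date\""   # prints: date --set="Sat Sep 24 06:52:17 UTC 2016"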
I ran into an issue last week that is driving me crazy. I wrote a Bash script which opens a remote ssh connection to Akamai and then performs a simple 'ls'. I want to redirect the 'ls' stdout output to a given file.
While the script itself works like a charm when run manually, it does not when it runs via cron. The cron job runs as root and each command works as expected except the ssh command. My system is Gentoo Linux and cron is the old but gold vixie-cron.
To spare you the 200 LOC, I put the basics here, which alone (as a single script) are enough to demonstrate the problem.
#!/bin/bash
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin'
#set -x
shopt -s lastpipe
exec 2>log.out
(ssh -i <path to key> -o HostKeyAlgorithms=+ssh-dss -o StrictHostKeyChecking=no <account@example.com> 'ls -r <path>') > '/root/listing.txt'
Even in ssh's -vvv debug mode I can see that everything works... except that I get no stdout output.
Then I tried something else that I found in another posting on the internet:
#!/bin/bash
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin'
#set -x
shopt -s lastpipe
exec 2>log.out
(ssh -T -i <path to key> -o HostKeyAlgorithms=+ssh-dss -o StrictHostKeyChecking=no <account@example.com> 'ls -r <path>' </dev/zero) > '/root/listing.txt'
The drawback here is that I start an ssh session that I can't close, and I guess that's due to /dev/zero.
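A sketch of what I might try instead, assuming the hang really comes from /dev/zero never reaching end-of-file: ssh's -n option redirects stdin from /dev/null, which hits EOF immediately, so the session should be able to close on its own:
ssh -n -i <path to key> -o HostKeyAlgorithms=+ssh-dss -o StrictHostKeyChecking=no <account@example.com> 'ls -r <path>' > '/root/listing.txt'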
Another approach was to pipe the ssh command's subshell through tee... this worked for a short time (and why not anymore?!).
Now I'm clueless and need help. Cron has its PATH set, uses Bash, etc. Curiously, my boss did this successfully with Java (and he hates Bash...).
Any explanation and helpful tips are greatly appreciated.
I have the same issue: I made a script for cron and it gets output from a remote SSH host.
If I run the script manually, it works as it should. But when cron runs it, I get only a part of the remote output.
I can't figure out why this is happening.
#!/bin/sh
pass=123
filelist=$(sshpass -p "$pass" ssh -q -tt -o StrictHostKeyChecking=no user@"10.10.10.10" "list")
filestring=$(echo "$filelist" | grep -Po "(\S+\s\S+\s+\d+\s\d{2}:\d{2}:\d{2}\s\d{4})\slist0\.lst")
filedate=${filestring% list0.lst}
echo $filedate
filestamp=$(date -d "$filedate" +"%s")
echo $filestamp
When I capture the echoes in a file via cron, the date comes out as 0:00:00 and the date field (echo $filedate) is empty. But when I run it manually, I get the proper date with time...
It really bothers me.
Help?
I found a solution: add "-tt" to the ssh command and all of the output goes into the variable.
filelist=$(sshpass -p "$pass" ssh -q -tt -o StrictHostKeyChecking=no user@"10.10.10.10" "list")
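A hedged guess at why -tt helps: with a pseudo-terminal the remote command's stderr and stdout arrive on the same channel, so both land in the variable. Assuming that is what was missing, merging the streams explicitly (without forcing a pty) should work as well:
filelist=$(sshpass -p "$pass" ssh -q -o StrictHostKeyChecking=no user@"10.10.10.10" "list" 2>&1)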
I am confused about the touch command.
When I execute "touch -m 0303 10 30 filename", I expect it to update the modification time to:
03 - March
03 - day of the month
10 - hours
30 - minutes
But when I execute it, it instead creates 5 empty files
named 03 03 10 30 filename.
I executed this command on OS X.
Try this command line. Instead of taking the current timestamp, you can explicitly specify the time using the -t flag.
The format for specifying -t is [[CC]YY]MMDDhhmm[.SS]
touch -m -t 03031030 filename
Read man touch again: -m doesn't take any arguments; you have to add -t or -d (I'm not sure whether both options are available on OS X).
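For reference, a hedged sketch of both forms (the free-form -d date string is a GNU coreutils feature; the BSD touch shipped with OS X may not accept it):
touch -m -t 201603031030.00 filename      # full [[CC]YY]MMDDhhmm[.SS] form: 2016-03-03 10:30:00
touch -m -d "2016-03-03 10:30" filename   # GNU touch only: free-form date string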
After researching the whole Internet, I can't find a solution to this.
I have a script which works perfectly when I execute it from the terminal:
#!/bin/bash
zip -r -j Tato.zip Csv
rm -r Csv/*.*
echo "blahblahblah" | mutt -s "Test" email#gmail.com -a Tato.zip
rm *.zip
but it does not work when I put it inside a crontab:
55 15 * * 7 /home/pi/Script.sh
I want it to be executed on Sundays at 15:55.
And this is what /var/log/syslog tells me:
Nov 1 15:55:01 raspberrypi sSMTP[3939]: Unable to locate mail
Nov 1 15:55:01 raspberrypi sSMTP[3939]: Cannot open mail:25
Nov 1 15:55:01 raspberrypi /USR/SBIN/CRON[3936]: (pi) MAIL (mailed 1 byte of output; but got status 0x0001, #012)
I don't know what to do anymore.
All help will be appreciated.
I'm trying to set a fake date via a bash script.
I'm using the following commands:
#!/bin/bash
echo 'myPass' | sudo -s 'date -s "1 NOV 2011 09:00:00"'
But I'm getting a command error.
What is the right way to do it?
sudo does not read the password from standard input by default, but from the terminal itself, so you cannot pipe your password into sudo this way. You need to use the -S option to read from standard input.
echo "myPass" | sudo -S date -s "1 NOV 2011 09:00:00"
(note that you don't need to use the -s (lowercase) option; sudo can run date directly without starting an intervening shell).
Exposing your password like this, however, is a security risk. It would be better to configure sudo to allow you (or anyone who is intended to run this script) to run this particular date command without a password.
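As a hedged sketch (the username "youruser" and the path /bin/date are assumptions; check the real path with "command -v date", and this presumes /etc/sudoers.d is included by your main sudoers file), an entry created with visudo could look like this:
# /etc/sudoers.d/fakedate  (edit with: visudo -f /etc/sudoers.d/fakedate)
youruser ALL=(root) NOPASSWD: /bin/date
The script can then run sudo date -s "1 NOV 2011 09:00:00" with no password prompt at all.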
For sudo, the first parameter without a dash is the command to execute, the following parameters are the arguments to give to that command. If you wrap the command and its arguments together in quotes (e.g. "echo foo"), then sudo tries to execute the command "echo foo" instead of the command "echo" with parameter "foo". Hence, you need to omit the outermost quotes:
sudo date -s "1 NOV 2011 09:00:00"
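A quick illustration of the difference, assuming passwordless sudo for brevity:
sudo 'date -s "1 NOV 2011 09:00:00"'   # fails: sudo looks for a command literally named: date -s "1 NOV 2011 09:00:00"
sudo date -s "1 NOV 2011 09:00:00"     # works: runs date, with -s and the date string as its arguments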
I have a bash script called test.sh which, for the sake of simplicity, prints one line to stdout and one line to stderr.
test.sh:
#!/bin/bash
echo "this is to stdout"
echo "this is to stderr" 1>&2
I want to run the script test.sh at 7:00 PM, but only if certain conditions are met. To this end, I have another bash script called schedule.sh, which checks some stuff and then submits the command to at to be run later.
I want the output of test.sh (both stdout and stderr) to be sent to me in an email. I use mailx to do this so I can get a nice subject name.
Furthermore, I want at to shut up. No output from at because it always sends me ugly emails (no subject line) if at produces any output.
schedule.sh:
#!/bin/bash
my_email="me#example.com" # Email is a variable
# Check some stuff, exit if certain conditions not met
echo "~/test.sh 2>&1 | mailx -s\"Cool title\" $my_email" | at 7:00 PM &> /dev/null
What's interesting is that when I run schedule.sh from cron (which runs the script with sh), it works perfectly. However, when I manually run schedule.sh from the terminal (NB: I'm using tcsh), at (not mailx) sends me an email saying
Ambiguous output redirect.
I'm not sure why the shell I run schedule.sh from makes a difference, when schedule.sh is a bash script.
Here is my thinking in looking at schedule.sh. Everything within the quotation marks "~/test.sh 2>&1 | mailx -s\"Cool title\" me@email.com" should be handed to at, and at runs it as a command using sh. The redirection 2>&1 | is written in sh style for this reason.
When I remove the 2>&1 and pipe only the stdout of test.sh to mailx, it does work; however, I receive 2 emails: one with the stdout (from mailx) and another with the stderr (from at).
What gives? How can I make this work regardless of the shell I'm calling it from?
Thanks.
edit:
uname -o says my OS is GNU/Linux
Here is uname -a if it helps:
Linux [hostname censored] 2.6.9-89.ELlargesmp #1 SMP Mon Jun 22 12:46:58 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
When I check the at contents using at -c, here's what I see:
#!/bin/sh
# atrun uid=xxxxx gid=xxxxx
# mail username 0
# ...
SHELL=/bin/tcsh; export SHELL
# ...
${SHELL:-/bin/sh} << `(dd if=/dev/urandom count=200 bs=1 2>/dev/null|LC_ALL=C tr -d -c '[:alnum:]')`
~/test.sh 2>&1 | mailx -s"Cool title" me@example.com
I'm having a hard time understanding the second to last line... is this going to execute using $SHELL or /bin/sh?
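(For what it's worth, ${SHELL:-/bin/sh} is ordinary parameter expansion: it expands to the value of SHELL when that variable is set and non-empty, and to /bin/sh otherwise. A quick illustration:)
SHELL=/bin/tcsh; echo "${SHELL:-/bin/sh}"   # prints /bin/tcsh
unset SHELL;     echo "${SHELL:-/bin/sh}"   # prints /bin/sh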
The command executed via at is:
~/test.sh 2>&1 | mailx -s\"Cool title\" $my_email
The behavior of the at command varies from one system to another. On Linux, the command is executed using /bin/sh. In fact, on my system (Linux Mint 14), it prints a warning message:
$ echo 'printenv > at.env' | at 19:24
warning: commands will be executed using /bin/sh
On Solaris, the command is executed by the shell specified by the current value of the $SHELL environment variable. Using an account where my default shell is /bin/tcsh on Solaris 9, I get:
% echo 'printenv > at.env' | at 19:25
commands will be executed using /bin/tcsh
job 1397874300.a at Fri Apr 18 19:25:00 2014
% echo 'printenv > at.env' | env SHELL=/bin/sh at 19:28
commands will be executed using /bin/sh
job 1397874480.a at Fri Apr 18 19:28:00 2014
Given that at's behavior is inconsistent (and frankly confusing), I suggest having it execute just a single command, with any I/O redirection being performed inside that command. That's the best way to ensure that the command will be executed correctly regardless of which shell is used to execute it.
For example (untested code follows):
echo '#!/bin/bash' > tmp.bash
echo "~/test.sh 2>&1 | mailx -s\"Cool title\" $my_email" >> tmp.bash
chmod +x tmp.bash
echo "./tmp.bash" | at 7:00 PM