Starting bash script - linux

Hello, I have the following problem. I have init scripts and I must run syslogd (busybox), so I have code like this:
...
"$__start_program" $OPTIONS
....
If I
echo "$__start_program $OPTIONS"
it prints
/sbin/syslogd -s 512 -l 6 -L -O "/var/log/a.log"
I see the process in ps, but syslogd doesn't actually start (there are no messages in the log file about it starting, and logger doesn't write anything to the log at all). But if I run this script manually from the command line (with the same arguments) it works fine. Can someone help me with this problem?

Don't use a string to store commands; that's not what strings are for. The link provided in the comments contains some good discussion of the potential problems this can cause.
It's not clear from the question where one string starts and the other ends, but you should use a function to achieve what you are trying to do. Something like this:
log_daemon() {
    param_s="$1"
    logfile="$2"
    /sbin/syslogd -s "$param_s" -l 6 -L -O "$logfile"
}
Then call it from your script like:
log_daemon 512 /var/log/a.log
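If the options really do need to live in a variable, a bash array avoids the word-splitting and quoting problems of a plain string (note this requires bash; busybox ash has no arrays). A minimal sketch, not part of the original answer, reusing the question's names:
__start_program=/sbin/syslogd
OPTIONS=(-s 512 -l 6 -L -O /var/log/a.log)   # each option is its own array element; no embedded quotes needed
"$__start_program" "${OPTIONS[@]}"           # expands to exactly the intended argument list
With a plain string, the quote characters around /var/log/a.log are not removed when $OPTIONS is expanded unquoted; they end up as literal characters in the file name, which is probably why a manual run behaves differently from the init script.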

Related

Running a process with the TTY detached

I'd like to run a linux console command from a terminal, preventing it from accessing the TTY by itself (which will, for example, happen often when the console command tries to request a password from the user - this should just fail). The closest I get to a solution is using this wrapper:
temp=`mktemp -d`
echo "$@" > "$temp/run.sh"    # write the requested command line into a helper script
mkfifo "$temp/out" "$temp/err"
setsid sh -c "sh $temp/run.sh > $temp/out 2> $temp/err" &    # run in a new session, detached from the TTY
cat "$temp/err" 1>&2 &    # forward the command's stderr to our stderr
cat "$temp/out"           # forward the command's stdout to our stdout
rm -f "$temp/out" "$temp/err" "$temp/run.sh"
rmdir "$temp"
This runs the command as expected without TTY access, but passing the stdout/stderr output through the FIFO pipes does not work for some reason. I end up with no output at all even though the process wrote to stdout or stderr.
Any ideas?
Well, thank you all for having a look. Turns out that the script already contained a working approach. It just contained a typo which caused it to fail. I corrected it in the question so it may serve for future reference.
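For reference, a usage sketch of the wrapper above; the file name notty.sh is only an assumption for illustration:
# Run a command through the wrapper: it executes in its own session (setsid),
# so anything that tries to prompt on the controlling TTY fails, while
# stdout/stderr still reach the caller through the FIFOs.
sh notty.sh ls -l /tmp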

How to add timestamps to crond's native logs?

I know this has been asked countless times but I am looking for a solution that uses crond's native log function. I do not want to pipe the output of each cron and prepend the timestamp.
I am launching crond like this:
crond -L /var/log/cron.log -f
The logs look like this:
crond: crond (busybox 1.30.1) started, log level 8
crond: USER root pid 16 cmd echo "hello"
crond: USER root pid 18 cmd echo "hello"
crond: USER root pid 19 cmd echo "hello"
I'd like to add the timestamp before the line. I do not want to add some stdout command to each individual cron and prepend the date.
Maybe I could watch the file and append to each new line or something? How do I get access to crond's stream and modify it?
I believe that the answer is that it's not possible to modify the crond output file.
The implementation details of cron do not make it easy to control the log file for individual jobs. Also, crond runs as root, which makes it hard for user jobs to change the file, and trying to change the file while crond is running will likely cause problems.
Consider instead the following option:
Write a process that will tail -f the log file, and create a new log file, with each line prefixed by the timestamp.
Run the process at boot time.
tail -f /var/log/cron.log | while IFS= read -r x ; do echo "$(date) $x" ; done >> /var/log/cron-ts.log
Or configure to whatever format you need.
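As a sketch of the boot-time part, the same pipeline can be started in the background from something like /etc/rc.local or an init script; the paths and the date format are only examples:
# Append a timestamp to every new crond log line, running detached in the background.
( tail -f /var/log/cron.log | while IFS= read -r line; do
      echo "$(date '+%Y-%m-%d %H:%M:%S') $line"
  done >> /var/log/cron-ts.log ) &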

pkill with -f flag in crontab not running command after semicolon

I wanted to kill a process and remove a flag file indicating that the process is running. The cron entry:
00 22 * * 1-5 pkill -f script.sh >log 2>&1 ; rm lock >log 2>&1
This works perfectly when I run it in a terminal, but in crontab the rm is not running. All I can think of is that the whole line after the -f flag is being taken as arguments for pkill.
Any reason why this is happening?
Keeping them as separate cron entries works. Also, pkill without the -f flag runs (though it doesn't kill the process, since I want the pattern to be matched against the whole command line).
Ran into this problem today and just wanted to post a working example for those who run into this:
pkill -f ^'python3 /Scripts/script.py' > /dev/null 2>&1 ; python3 /Scripts/script.py > /tmp/script.log 2>&1
This runs pkill and matches against the whole command line (-f), looking for one that starts with (regex ^) python3 /Scripts/script.py. As such, it will never kill itself, because its own command line does not start with that text (it starts with pkill).
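As an aside (not part of the original answer), pgrep with -a prints the PID and full command line of whatever a -f pattern matches, which is a safe way to test a pattern before putting it in cron:
# Dry run: show what would be matched, without killing anything.
pgrep -af '^python3 /Scripts/script.py'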
the short answer: it simply killed itself!
my answer explained:
if you let a command get started by crond, it'll be executed in a subshell. most probably the line you'll find in ps or htop will look like this:
/bin/sh -c pkill -f script.sh >log 2>&1 ; rm lock >log 2>&1
(details may vary. e.g. you might have bash instead of sh)
the point is that the whole line gets one PID (process id) and is one of the command lines which pgrep/pkill parses when using the '-f' parameter. as stated in the man page:
-f, --full
The pattern is normally only matched against the process name. When -f is set, the full command line is used.
now your pkill is looking for any command line in your running process list which somehow contains the expression 'script.sh', and it will eventually find that line. as a result of its find, it'll get that PID and terminate it. unfortunately the very same PID holds the rest of your command chain, which thus just got killed by itself.
so you basically wrote a 'suicide line of commands' ;)
btw: i just did the same thing today and that's how i found your question.
hope this answer helps, even if it comes a little late
kind regards
3.141592 and nanananananananananananaBATMAN's answer is correct.
I worked around this problem like this.
00 22 * * 1-5 pkill -f script.[s][h] >log 2>&1 ; rm lock >log 2>&1
This works because the regex script.[s][h] matches the literal string script.sh, but not the literal string script.[s][h] that appears in the cron job's own command line.
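A quick way to convince yourself why the bracket trick works (a sketch; the echo lines stand in for the command lines pkill would inspect):
# The regex matches the literal text "script.sh" in the target's command line...
echo 'bash script.sh' | grep -c 'script.[s][h]'                   # prints 1
# ...but not the literal text "script.[s][h]" in the cron job's own command line,
# so the job no longer kills itself.
echo 'pkill -f script.[s][h] ; rm lock' | grep -c 'script.[s][h]' # prints 0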

Keep a script running through ssh after logout

This is the first question that I post here. I tried to do a thorough search, but if I haven't (and the answer is obvious somewhere else), please just let me know.
I have a script that runs a program for me, here it is:
csv_file=../data/teste_nohup.csv
trace_file=../data/gnp.trace
declare -i n=100
declare -i p=1
declare -i counter=0
while [ $counter -lt 3 ];
do
    n=100
    while true
    do
        nice -19 sage gnptest.py ${n} ${p} | tee -a ${csv_file}
        notify-send "finished test gnp ${n} ${p}"
    done
done
So, what I'm trying to do is run the gnptest.py program a few times, and have the result be written to the csv_file.
The problem is, that depending on the input, the program may take a long time to complete. So I'd like to connect to the server over ssh, start the program, close the terminal, and check the output file from time to time.
I've tried nohup and disown. nohup creates a huge nohup.out file, full of errors that I don't get while normally running the script (it complains about using the -lt operand, for example). But the biggest problem I'm facing is that neither command (nohup or disown -h) is executing the program and sending the output to the file I've specified in the csv_file variable, which is being done using the tee command. Also, none of them seem to keep running after I log out...
Any help will be much appreciated.
Thanks in advance!!
I have just joined so I cannot add a comment.
Please try using redirection instead of tee in the script.
And to get rid of nohup.out, use the following to run the script:
nohup script.sh > /dev/null 2>&1 &
If the above produces an error, use
nohup script.sh > /dev/null 2>&1 </dev/null &
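As a sketch of the redirection suggestion, the tee line from the question would become an append redirection (keeping the question's variable names):
# Append results straight to the csv file instead of piping through tee.
nice -19 sage gnptest.py ${n} ${p} >> ${csv_file}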
Hope this will help.

Redirecting Output of Bash Child Scripts

I have a basic script that outputs various status messages. e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script, in order to run a sequence of child-scripts and send an email upon overall completion, e.g. topscript.sh
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group#mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log I see nothing, even though running top shows myscript.sh is currently being executed, and therefore, presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user#host. Could the pseudo-tty be interfering?
I just tried the following: I have three files, t1.sh, t2.sh, and t3.sh, all with the following content:
#!/bin/bash
for((i=0;i<10;i++)) ; do
    echo $i of 9
    sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the programs being run in your subscripts use a large output buffer? I know that when I've run Python scripts before, they have a pretty big output buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent out at the end of topscript.sh? Is it just that you don't see the output while the processes are running?
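As an aside (not from the original answer): if the slow child turns out to be a Python program, a quick test is to force unbuffered output (the script name below is hypothetical):
# -u disables Python's stdout/stderr buffering for this run.
python3 -u some_child.py
# Equivalent, without changing the command line itself:
PYTHONUNBUFFERED=1 ./some_child.py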
try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not installed by default on every platform; it ships with the expect package, which may need to be installed first.
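If unbuffer is not available, stdbuf from GNU coreutils is a commonly available alternative; a sketch using the same log path (it only affects programs that use C stdio buffering):
# Force line-buffered stdout for the child processes started by topscript.sh.
stdbuf -oL topscript.sh >/var/log/topscript.log 2>&1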
I hope this helps.

Resources