How can I 'nohup' a command and log the output of 'time'?

I want to time a long-running script and log the output of the time command to a log file, like so:
(time php mylongcommand.php) &> dump
This works, but what if I want to nohup the command so that I can check the logs later? The following does not work:
nohup (time php mylongcommand.php) &>dump &
Any suggestions?

(time php mylongcommand.php) &> dump < /dev/null &
This should also do the trick. Redirecting input from /dev/null and putting the process in the background with & gives much the same effect as nohup: you can exit your shell session without a "stopped jobs" warning, and the process will keep running.
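For example, with a hypothetical slow.php, you can launch the job, log out, and inspect the log from a later session:
(time php slow.php) &> dump < /dev/null &
# later, from any session:
tail -f dump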

nohup expects an actual command, not shell syntax such as a (...) subshell, so remove the parentheses:
nohup time php mylongcommand.php &> dump &
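Note that without the parentheses, nohup invokes the external /usr/bin/time binary rather than the bash keyword, so the report format will differ. As a sketch, GNU time can even write its report to a separate file of its own (time.log is an arbitrary name here):
# GNU time's -o option writes the timing report to its own file
nohup /usr/bin/time -o time.log php mylongcommand.php &> dump &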

Related

Start lots of background jobs but keep their logs separated

I have little experience with shell commands in Unix.
So far, from Stack Overflow I know a few ways to run simple shell scripts in order:
using echo:
echo $(sh dosomthing1.sh)
echo $(sh dosomthing2.sh)
directly using sh xxx and wait:
sh dosomthing1.sh
wait
sh dosomthing2.sh
using &&:
sh dosomthing1.sh && sh dosomthing2.sh
But none of these approaches solves my problem...
Here is my problem:
I have a basic shell script that does a Maven compile and then uses "nohup xxx &" to start a Java application in the background. The script is shown below:
# get the env parameter
env=$1
# go to the application root directory
cd /applicationDir
# compile, skipping tests
mvn install -Dmaven.test.skip=true
# start the application with the given env profile
nohup java -jar -Dspring.profiles.active="$env" myApplication.jar &
# tail the log
tail -20f myApplication.log
I have many different applications, each with the same kind of startup script, and it is tedious to start them one by one. I want to start them all with one command.
The scripts should run one by one, in order; if any of them fails, skip it and run the next one.
I tried writing a script like this:
sh start1.sh
wait
echo "application 1 started up"
sh start2.sh
wait
echo "application 2 started up"
...
sh startxxx.sh
wait
echo "application xxx started up"
All the child scripts ran in order, as expected, and the console output made it look as though everything was working. But in fact only the last application stayed up; every earlier "nohup xxxx &" process was shut down.
I also tried writing it like this:
sh start1.sh &
sh start2.sh &
...
sh startxxx.sh &
This gave the result I wanted: all the applications started up fine. But because the scripts run in parallel, the console output is interleaved and unreadable. It reaches a good result, but not in a graceful way.
I have no idea how to solve this problem...
Please help me with this, thank you very much!
When you have a script with commands, you can do chmod +x start.sh. The script can then be started with ./start.sh; you avoid an extra sh process, and with ls -l you can see which scripts are executable.
In your scripts you have tail -f. That is very confusing for a background process. Start the scripts in the background and view the logs from the console instead. I do hope each script uses a different myApplication.jar and myApplication.log.
Since the logging in the logfile is duplicated on stdout (your command-line window), you can throw the stdout copy away:
./start1.sh > /dev/null 2>&1 &
./start2.sh > /dev/null 2>&1 &
./startxxx.sh > /dev/null 2>&1 &
The processes will be killed when you logout before the scripts are terminated. This can be avoided with nohup:
nohup ./start1.sh > /dev/null 2>&1 &
nohup ./start2.sh > /dev/null 2>&1 &
nohup ./startxxx.sh > /dev/null 2>&1 &
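Since the scripts share a naming pattern, a small loop is a sketch of the same idea (assuming they all match start*.sh and are executable):
for script in ./start*.sh; do
  nohup "$script" > /dev/null 2>&1 &
done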
Edit:
The OP wants to start the programs in a fixed order.
Starting the scripts one after another, in order, should be possible by simply calling them in the right order (perhaps with an additional sleep 1).
When you need to wait until program 1 has finished some init work, you have to check for that. Use one script that calls all the others, and add some control statements, like:
nohup java something &
while ! grep -q "Started" myApplication.log; do
  sleep 1
done
If the Java program hits an error, that while loop will wait forever, so replace it with a bounded retry count:
for ((retry=0; retry<100; retry++)); do
  grep -q "Started" myApplication.log && break
  sleep 1
done
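Putting the pieces together, one orchestrator script could look like this sketch (the application directories and the "Started" log marker are assumptions based on the snippets above):
#!/bin/bash
for app in /applications/app1 /applications/app2; do
  cd "$app" || continue                      # on error, skip to the next app
  nohup java -jar myApplication.jar > /dev/null 2>&1 &
  # wait (bounded) until the application reports it has started
  for ((retry=0; retry<100; retry++)); do
    grep -q "Started" myApplication.log && break
    sleep 1
  done
done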
https://man7.org/linux/man-pages/man8/cron.8.html
This might help you. Cron is a task scheduler, which you can use to run programs in sequence. If the man page is difficult to understand, look for tutorials on it; plenty exist.
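For example, staggered start times in a crontab give a fixed order without any orchestration script (the times and paths here are only illustrative):
# crontab -e: start the applications one minute apart at 06:00
0 6 * * * /home/user/start1.sh
1 6 * * * /home/user/start2.sh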

nohup - Not printing all logs

I am running two scripts
# Script 1
nohup sh {command} &
and nohup.out contains the full, detailed log (for script 1).
# Script 2
nohup sh {command} > {log_path} 2>&1 &
But for script 2, nohup.out contains only a limited log, as shown below:
## Script2 output
Shutdown message has been posted to the server.
Server shutdown may take a while - check logfiles for completion
How can I get the full log into nohup.out itself while keeping the script 2 format?
If you want to have both files (nohup.out and {log_path}) you can try:
((nohup {command}) > >(tee {log_path}) 2> >(tee -a {log_path})) >> nohup.out
The inner part duplicates stdout and stderr into {log_path} using tee and process substitution (the -a on the second tee keeps the two writers from truncating each other). After that, you only have to redirect (append) the combined output to nohup.out, which the trailing >> nohup.out does.
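A concrete instance of the pattern (server.sh and server.log are hypothetical stand-ins for {command} and {log_path}):
((nohup ./server.sh) > >(tee server.log) 2> >(tee -a server.log)) >> nohup.out
tail -f server.log   # detailed log for this command
tail -f nohup.out    # combined log across runs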

Bash redirection in a script in parallel

I have a bash script with a loop of processes that I want to run in parallel:
for i in {1..5}
do
  echo "Running for simulation $i"
  python script.py $i > ./outlogs/$i.log 2>&1 &
done
But when I do this the redirection seems not to work: $i.log stays empty. It only fills in when I drop the & at the end, but then the script waits for each process to finish before starting the next one, which I don't want.
I tried a solution using script -c, but that does not update in real time, only once the process ends. Does anyone have a better suggestion, where the file redirection works in this script but still updates in real time?
You simply need to add the -u option, so the line looks like this:
python -u script.py $i > ./outlogs/$i.log 2>&1 &
The -u option forces the stdout and stderr streams to be unbuffered. The redirection was working all along; without -u, Python block-buffers stdout when it is not a terminal, so nothing appears in the file until the buffer fills or the process exits.
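If you cannot change how the interpreter is invoked, the PYTHONUNBUFFERED environment variable has the same effect:
# equivalent to -u, set per command
PYTHONUNBUFFERED=1 python script.py $i > ./outlogs/$i.log 2>&1 &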

Running nohup without it sending a message

Hi, I have a bash script that runs a process like this:
nohup $proces &
When I do that, the script prints a message like
nohup: appending..
Is there a way to suppress that message? Or is there an alternative command that does the same thing without printing anything?
Discard all output: nohup $process > /dev/null 2>&1 &
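The notice goes to stderr, so if you still want the program's stdout (in nohup.out), silencing stderr alone is enough:
# suppress only nohup's message (this also discards the process's stderr)
nohup $process 2> /dev/null &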

How can I look into nohup file while the program is still running?

I was using
nohup ./program_name &
to run my program. program_name prints out values and the status of the running process, including what percentage of the work has finished, but since I'm running it under nohup I can't see how close the program is to finishing. Is there any way I can still get that information?
Just open nohup.out to see the output. You probably want
tail -f nohup.out
to stream the output.
Perhaps adjust your nohup command line to capture all output to a file:
nohup ./program_name > /tmp/programName.log 2>&1 &
Then, you can monitor programName.log using tail:
tail -f /tmp/programName.log
Run the command below in the terminal where the program is running. The jobs command lists the jobs you are running in the background and the foreground:
jobs -l
[6]+ 6069 Running nohup perl test1.pl &
[6]+ 6069 Done nohup perl test1.pl
