How do I run commands in the background one by one? - linux

nohup php /home/www/api/24 > 24.out 2> 24.err < /dev/null
nohup php /home/www/api/27 > 27.out 2> 27.err < /dev/null
nohup php /home/www/api/19 > 19.out 2> 19.err < /dev/null
I have a few thousand API calls to make, and they need to be done one by one so I don't flood the other server with web requests. After I run the .sh file, how can I close the terminal without interrupting the process? CTRL+Z?

You type...
$ screen
...and hit enter.
Run the command or script.
Press control-a, then d
Then you can disconnect, log out, do whatever... come back later and check on the script:
$ screen -r
Then you wonder how you ever got along without it.
https://www.gnu.org/software/screen/
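For example, a named session makes it easy to find again later (the session name and script name here are only placeholders):
screen -S apicalls        # start a named screen session
./run_api_calls.sh        # run your script inside it
# press control-a, then d, to detach
screen -r apicalls        # reattach later by name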

Put everything in a script, and then run that script with nohup:
#!/bin/bash
for i in 24 27 19 ...
do
    php /home/www/api/$i > $i.out 2> $i.err
done
Then do:
nohup /path/to/script </dev/null >/dev/null 2>&1 &
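If hard-coding thousands of ids in the loop is impractical, a sketch of a variant that reads them from a file instead (ids.txt is a hypothetical name, one id per line) could be:
#!/bin/bash
while read -r i
do
    php /home/www/api/"$i" > "$i.out" 2> "$i.err"
done < ids.txt
Run it with the same nohup line as above.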

You could also use the batch(1) command with a here document, e.g.:
batch << EOJ
php /home/www/api/24 > 24.out 2> 24.err < /dev/null
php /home/www/api/17 > 17.out 2> 17.err < /dev/null
php /home/www/api/19 > 19.out 2> 19.err < /dev/null
EOJ
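If it helps, the queued job can be inspected afterwards with the standard at tools (the job id is whatever batch printed when you submitted it):
atq              # list jobs queued by at/batch
at -c <job-id>   # show the environment and commands a queued job will run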


nohup: append the executed command at the top of the output file

Let's say that we invoke the nohup in the following way:
nohup foo.py -n 20 2>&1 &
This will write the output to nohup.out.
How could we get the whole command nohup foo.py -n 20 2>&1 & written at the top of nohup.out (or any other specified output file), after which the regular output of the executed command is written to that file?
The reason for this is purely for debugging: there will be thousands of commands like this executed, and quite often some of them will crash for various reasons. It's like a basic report kept in a file, with the executed command written at the top followed by the output of that command.
A straightforward alternative would be something like:
myNohup() {
    (
        set +m                          # disable job control
        [[ -t 0 ]] && exec </dev/null   # redirect stdin away from the tty
        [[ -t 1 ]] && exec >nohup.out   # redirect stdout away from the tty
        [[ -t 2 ]] && exec 2>&1         # redirect stderr away from the tty
        set -x                          # enable trace logging of all commands run
        "$@"                            # run our arguments as a command
    ) & disown -h "$!"                  # do not forward any HUP signal to the child process
}
To test this, we can define a command:
waitAndWrite() { sleep 5; echo "finished"; }
...and run:
myNohup waitAndWrite
...this will return immediately and, after five seconds, leave the following in nohup.out:
+ waitAndWrite
+ sleep 5
+ echo finished
finished
If you only want to write the exact command run without the side effects of xtrace, replace the set -x with (assuming bash 5.0 or newer) printf '%s\n' "${*@Q}".
For older versions of bash, you might instead consider printf '%q ' "$@"; printf '\n'.
This does differ a little from what the question proposes:
Redirections and other shell directives are not logged by set -x. When you run nohup foo 2>&1 &, the 2>&1 is not passed as an argument to nohup; instead, it's something the shell does before nohup is started. Similarly, the & is not an argument but an instruction to the shell not to wait() for the subprocess to finish before going on to future commands.
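If the goal is just one log file per command with the command line at the top, a minimal sketch of a wrapper (run_logged is a hypothetical name) that does only that, without xtrace, could look like:
run_logged() {
    # write the command line itself at the top of this command's own log file,
    # then run the command detached, appending stdout and stderr to the same file
    local log=$1; shift
    printf 'COMMAND:' > "$log"
    printf ' %q' "$@" >> "$log"
    printf '\n' >> "$log"
    nohup "$@" < /dev/null >> "$log" 2>&1 &
}
# usage (file and script names are only examples):
# run_logged 24.out php /home/www/api/24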

How to run a script multiple times, waiting after every execution until the device is ready to execute again?

I have this bash script:
#!/bin/bash
rm /etc/stress.txt
cat /dev/smd10 | tee /etc/stress.txt &
for ((i=0; i<1000; i++))
do
    echo -e "\nRun number: $i\n"
    # wait until the module restarts and is ready for the next restart
    dmesg | grep ERROR
    echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10
    echo -e "\nADB device booted successfully\n"
done
I want to restart the module 1000 times using this script.
The module is like an Android device which has Linux inside it, but I work from Windows.
AT+CFUN=1,1 - reset
When I push the script, after every restart I need a command that waits for the module to boot and be ready again, so the script can run 1000 times in total. Then I pull a .txt file and save all the output content.
Which command should I use?
I tried commands like wait, sleep, watch, adb wait-for-device, ps aux | grep... Nothing works.
Can someone help me with this?
I found the solution. This is how my script actually looks:
#!/bin/bash
# capture the module's output in the background
cat /dev/smd10 &

TEST=$(cat /etc/output.txt)   # number of restarts done so far
RESTART_TIMES=1000

if [[ $TEST != "$RESTART_TIMES" ]]
then
    echo $((TEST+1)) > /etc/output.txt       # increment the counter
    dmesg
    echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10   # reset the module
fi
These are the steps that you need to do:
adb push /path/to/your/script /etc/init.d
cd /etc
echo 0 > output.txt - create the output file and write 0 into it
cd init.d
ls - you should see rc5.d
cd ../rc5.d - go into it
ln -s ../init.d/yourscript.sh S99yourscript.sh
ls - you should see S99yourscript.sh
cd ../init.d - return to the init.d directory
chmod +x yourscript.sh - add execute permission to your script
./yourscript.sh
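Afterwards, to pull the .txt file and save the output content as the question describes, something like this from the host side should work (paths taken from the scripts above):
adb pull /etc/output.txt .   # the restart counter written by the script
adb pull /etc/stress.txt .   # the /dev/smd10 capture from the first version of the script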

Why does nohup not launch my script?

Here is my script.sh:
for ((i=1; i<=400000; i++))
do
    echo "loop $i"
    echo
    numberps=$(ps -ef | grep php | wc -l)
    echo $numberps
    if [ $numberps -lt 110 ]
    then
        php5 script.php &
        sleep 0.25
    else
        echo too many processes
        sleep 0.5
    fi
done
When I launch it with:
./script.sh > /dev/null 2>/dev/null &
that works, except that when I log out from SSH and log in again, I cannot stop the script with kill %1 and jobs -l is empty.
When I try to launch it with
nohup ./script.sh &
It just outputs
nohup: ignoring input and appending output to `nohup.out'
but no php5 processes are running: nohup has no effect at all.
I have two alternatives to solve my problem:
1) ./script.sh > /dev/null 2>/dev/null &
If I log out from SSH and log in again, how can I kill this job?
or
2) How can I make nohup run correctly?
Any idea ?
nohup is not supposed to allow you to use jobs -l or kill %1 to kill jobs after logging out and in again.
Instead, you can
Run the script in the foreground in a GNU Screen or tmux session, which lets you log out, log in, reattach and continue the same session.
Use killall script.sh to kill all running instances of script.sh on the server (a pgrep/pkill variant is sketched below).
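A minimal sketch of that second option with pgrep/pkill instead of killall:
pgrep -af script.sh   # list matching processes with their PIDs and command lines
pkill -f script.sh    # send SIGTERM to all of them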

How to run nohup and write its pid file in a single bash statement

I want to run my script in the background and then write its PID file. I am using nohup to do this.
This is what I came up with:
nohup ./myprogram.sh > /dev/null 2>&1 & && echo $! > run.pid
But this gives a syntax error.
The following doesn't give a syntax error, but the problem is that echo $! doesn't write the correct PID, since nohup is run in a subshell:
(nohup ./myprogram.sh > /dev/null 2>&1 &) && echo $! > run.pid
Any solutions for this, given that I want a single-line statement to achieve it?
You already have one ampersand after the redirection, which puts your script in the background. Therefore you only need to type the desired command after that ampersand, not prefixed by anything else:
nohup ./myprogram.sh > /dev/null 2>&1 & echo $! > run.pid
This should work:
nohup ./myprogram.sh > /dev/null 2>&1 &
echo $! > run.pid
Grigor's answer is correct, but not complete.
The PID captured directly after the nohup command ($!) is not always the PID of your own process. For example, when the script is started through sudo nohup, running ps -ef shows the sudo process and your script as its child:
root 31885 27974 0 12:36 pts/2 00:00:00 sudo nohup ./myprogram.sh
root 31886 31885 25 12:36 pts/2 00:01:39 /path/to/myprogram.sh
To get the PID of your own process, you can use:
nohup ./myprogram.sh > /dev/null 2>&1 & echo $! > run.pid
# allow a moment to pass
pgrep -P "$(cat run.pid)"
Note that if you try to run the second command immediately after nohup, the child process will not exist yet.
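For the simple case without sudo, $! is already the PID of the script itself, so a minimal sketch of actually using the PID file later could be:
nohup ./myprogram.sh > /dev/null 2>&1 &
echo $! > run.pid            # $! is the PID of the backgrounded command
# ...later, possibly from another shell:
kill "$(cat run.pid)"        # stop it using the recorded PID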

Getting sudo and nohup to work together

Linux newbie here.
I have a Perl script which takes two command-line arguments. I tried to run it in the background, but this is what I got:
[~user]$ nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
[2] 19603
[~user]$ nohup: appending output to `nohup.out'
After the system prints "nohup: appending output to `nohup.out'", no new prompt appears. Then, as soon as I type some other command, the shell tells me that the process is stopped:
[~user]$ nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
[2] 19603
[~user]$ nohup: appending output to `nohup.out'
ls
ascii_loader_script.pl format_wrds_trd.txt nohup.out norm_wrds_trd.cfg
[2]+ Stopped nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv
I've looked at this post and tried to do "sudo date" before executing the command. Still got the same thing.
http://www.sudo.ws/pipermail/sudo-users/2003-July/001648.html
The solution is to use the -b flag for sudo to run the command in the background:
$ sudo -b ./ascii_loader_script.pl 20070502 ctm_20070502.csv
You should only use nohup if you want the program to continue even after you close your current terminal session.
The problem here, IMHO, is not nohup but backgrounding sudo.
You are putting the process in the background (& at the end of the command), but sudo probably needs password authentication, and that is why the process stops.
Try one of these:
1) Remove the ampersand from the end of the command, reply to the password prompt, and afterwards put it in the background (by typing CTRL-Z, which stops the process, and then issuing the bg command to send it to the background); a concrete sketch of this follows below.
2) Change /etc/sudoers to not ask for the user's password by including the line:
myusername ALL=(ALL) NOPASSWD: ALL
If, besides the password reply, your application waits for other input, then you can pipe the input to the command like this:
$ cat responses.txt | sudo mycommand.php
hth
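A concrete sketch of option 1, using the command from the original question (disown is an optional extra, similar in effect to nohup):
sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv   # answer the password prompt
# press CTRL-Z to stop the job, then:
bg             # resume it in the background
disown -h %1   # optional: don't send it SIGHUP when you log out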
You can try
sudo su
and then
nohup ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
instead of
nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
You must use sudo first, nohup second.
sudo nohup ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
My working solution for evaluating disk fragmentation in the background:
Run sudo with nohup, without an ampersand (&) at the end:
$ sudo nohup nice -20 find / -type f -exec filefrag "{}" \; | sed 's/^\(.*\): \([0-9]\+\) extent.*/\2\t\1/'| awk -F ' ' '$1 > 0' | sort -n -r | head -50 > filefrag.txt
Enter the password for sudo;
Press Ctrl+Z;
Put the stopped process in the background:
$ bg 1
[1]+ sudo nohup nice -20 find / -type f -exec filefrag "{}" \; | sed 's/^\(.*\): \([0-9]\+\) extent.*/\2\t\1/' | awk -F ' ' '$1 > 0' | sort -n -r | head -50 > filefrag.txt &
Now you can exit the terminal and log in later. The process will remain running in the background because nohup is used.
First of all, you should switch sudo and nohup.
And then:
if sudo echo Starting ...
then
    sudo nohup <yourProcess> &
fi
The echo Starting ... can be replaced by any command that does not do much; I only use it as a dummy command for sudo.
This way the sudo in the if-condition triggers the password check. If it succeeds, the sudo credentials are cached and the second call will succeed; otherwise the if fails and the actual command is not executed.
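With the command from the original question filled in for <yourProcess>, that would look like:
if sudo echo Starting ...
then
    sudo nohup ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
fi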
I opened an editor and typed these lines:
#!/bin/bash
sudo echo Starting ...
sudo -b MyProcess
(Where MyProcess is anything I want to run as superuser.)
Then I saved the file where I wanted it, as MyShellScript.sh.
Then changed the file permissions to allow execution.
Then ran it in a terminal. The "-b" option tells sudo to run the process separately in the background, so the process keeps running after the terminal session dies.
Worked for me in Linux Mint.
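Spelled out, the permission and run steps are:
chmod +x MyShellScript.sh   # allow execution
./MyShellScript.sh          # sudo -b inside it sends MyProcess to the background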
You can set it as an alias:
sudo sh -c 'nohup openvpn /etc/openvpn/client.ovpn 2>&1 > /dev/null &'
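As an actual alias definition, e.g. in ~/.bashrc (the alias name is just an example):
alias vpnup="sudo sh -c 'nohup openvpn /etc/openvpn/client.ovpn 2>&1 > /dev/null &'"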
This should work:
sudo -b -u userName ./myScript > logFile
I am just curious: can I send this logFile as an email after ./myScript has finished running successfully in the background?
Try:
xterm -e "sudo -b nohup php -S localhost:80 -t /media/malcolm/Workspace/sites &>/dev/null"
When you close xterm, the PHP web server stays alive.
Don't put nohup before sudo, or else the PHP web server will be killed when you close xterm.
