Running rsync in the background - Linux

I use this to run rsync in the background:
rsync -avh /home/abc/abac /backups/ddd &
When I do that, I see a line saying the process has stopped.
Now, does that mean my process is still running, or is it stopped?

When you press Ctrl+Z, the process is stopped (suspended):
[1]+ Stopped rsync -ar --partial /home/webup/ /mnt/backup/
Now run bg and the process you just stopped will resume in the background:
[1]+ rsync -ar --partial /home/webup/ /mnt/backup/ &
Press "jobs" to see the process is running
[1]+ Running rsync -ar --partial /home/webup/ /mnt/backup/ &
If you want to bring it back to the foreground, run fg 1, where 1 is the job number.
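If you started the job with & and only later decide it must survive closing the terminal, bash's disown builtin can detach it from the shell. A minimal sketch, assuming the job number shown by jobs is 1:
rsync -ar --partial /home/webup/ /mnt/backup/ &
disown -h %1   # remove the job from the shell's SIGHUP list so logout won't kill it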

A solution to keep rsync running in the background:
nohup rsync -a /path/data destiny.host:/path/ &
nohup allows a process, command, or shell script to keep running in the background even after you close the terminal session.
In our example, we also added & at the end, which sends the process to the background.
Output example:
nohup rsync -avp root@61.0.172.109:/root/backup/uploads/ . &
[1] 33376
nohup: ignoring input and appending output to 'nohup.out'
That's all. Your rsync process will now run in the background until it finishes or you kill it from the command line; it will not be interrupted if you close your Linux terminal or log out of the server.
rsync status:
cat nohup.out
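If you later need to stop the transfer, use the PID that was printed when the job started (33376 in the example above), or the job specifier if you are still in the same shell; a short sketch:
kill 33376   # by PID, works from any shell
kill %1      # by job number, only from the shell that launched it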

It is probably trying to read from the terminal (to ask you for a password, perhaps). When a background process tries to read from the terminal, it gets stopped.
You can make the error go away by redirecting stdin from /dev/null:
rsync -avh /home/abc/abac /backups/ddd < /dev/null &
...but then it will probably fail because it will not be able to read whatever it was trying to read.
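If the thing it is waiting to read is an SSH password (common when one side of the rsync is a remote host), setting up key-based authentication beforehand removes the prompt entirely. A hedged sketch, where user@host stands in for your remote machine:
ssh-keygen -t ed25519    # generate a key pair if you don't already have one
ssh-copy-id user@host    # install the public key on the remote host
rsync -avh /home/abc/abac user@host:/backups/ddd < /dev/null &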

No, it means it has been stopped.
You can check it with jobs.
Example output:
jobs
[1]+ Stopped yes
Then you can bring it back to the foreground with fg, for example:
fg 1
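Or, to let it continue running in the background instead of the foreground, resume it with bg:
bg %1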

This is safe; you can monitor nohup.out to see the progress.
nohup rsync -avrt --exclude 'i386*' --exclude 'debug' rsync://mirrors.kernel.org/centos/6/os . &

If everything works, your call should print the PID of the new process and, some time later, a "Done" message.
So your output suggests that your process is not running.
Check ps to see if rsync is running.
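A quick way to run that check (the brackets in the pattern stop grep from matching its own process):
ps aux | grep '[r]sync'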

File Transfer:
nohup scp oracle@<your_ip>:/backup_location/backup/file.txt . > nohup.out 2>&1 &
then hit Ctrl+Z, and run:
$ bg
to bring the command back to life in the background.

Related

rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(544) [sender=3.0.6]

I am trying to delete a huge amount of data (on the order of TiBs) using the rsync command.
The command runs as a background process with nohup, but it still fails before completing, with the error below in the log file.
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(544) [sender=3.0.6]
Please suggest what should be done in this case.
This is the command I am executing:
nohup rsync -a --delete empty_dir/ dir_to_be_deleted/ &
In my case rsync did not keep running under nohup, but using screen we can successfully run rsync in the background.
Below are the commands:
1) Open a screen
screen -S rsync
2) Run the rsync process
rsync -rvz --delete syncing_to_empty_dir/ folder_marked_for_deletion/
3) Detach the screen:
Ctrl+A, then d
This solved my problem, and hopefully it will work for others too.
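To check on the transfer later, reattach to the named session with:
screen -r rsync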

How can I look into the nohup file while the program is still running?

I was using
nohup ./program_name &
to run my program. program_name prints out some values and status information, including what percentage of the work has finished, but since I'm running it with nohup I can't see how close the program is to finishing. Is there any way I can still get that information?
Just open nohup.out to see the output. You probably want
tail -f nohup.out
to stream the output as it is written.
Perhaps adjust your nohup command line to capture all output to a file:
nohup ./program_name > /tmp/programName.log 2>&1 &
Then, you can monitor programName.log using tail:
tail -f /tmp/programName.log
Run the command below in the terminal where the program is running; the jobs command lists the jobs you are running in the background and in the foreground:
jobs -l
[6]+ 6069 Running nohup perl test1.pl &
[6]+ 6069 Done nohup perl test1.pl

How can I place a job in the background after entering a password?

I use this command in a Linux terminal to connect to a server and use it as a proxy:
ssh -N -D 7070 root@ip_address
It gets the password, connects, and everything is OK, but how can I put this process in the background?
I used Ctrl+Z, but that stops the process rather than putting it in the background...
Ctrl+Z is doing exactly what it should, which is to stop the process. If you then want to put it in the background, the shell command for doing that is bg:
$ ssh -N -D 7070 -l user 192.168.1.51
user@192.168.1.51's password:
^Z
[1]+ Stopped ssh -N -D 7070 -l mjfraioli 192.168.1.51
$ bg
[1]+ ssh -N -D 7070 -l user 192.168.1.51 &
That way you can enter the password interactively, and only once that is complete, stop it and put it into the background.
Try adding an ampersand to the end of your command:
ssh -N -D 7070 root@ip_address &
Explanation:
This trailing ampersand directs the shell to run the command in the background, that is, it is forked and run in a separate sub-shell, as a job, asynchronously. The shell will immediately return the return status of 0 for true and continue as normal, either processing further commands in a script or returning the cursor focus back to the user in a Linux terminal.
The shell will print out the forked process’s job number and process ID (PID) like so:
$ ./myscript.py &
[1] 1337
The stdout of the forked process will still be attached to the parent, so any output will still appear in your terminal.
After a process is forked using a single trailing ampersand &, its process ID (PID) is stored in a special variable $!. This can be used later to refer to the process:
$ echo $!
1337
Once a process is forked, it can be seen in the jobs list:
$ jobs
[1]+ Running ./myscript.py &
And it can be brought back to the command line before it finishes with the foreground command:
fg
The foreground command takes an optional argument of the job number, if you have forked multiple processes.
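For example, assuming a second job is running and you want that one:
$ fg %2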
A single ampersand & can also delimit a list of commands to be run asynchronously.
./script.py & ./script2.py & ./script3.py &
In this example, all 3 python scripts are run at the same time, in separate sub-shells. Their stdout will still be attached to the parent shell, so if running this from a Linux terminal, you will still see the outputs.
This can also be used as a quick hack to take advantage of multiple cores with shell scripts, but be warned, it is a hack!
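A natural companion to this pattern is the wait builtin, which blocks until all background children have exited; a minimal sketch:
./script.py & ./script2.py & ./script3.py &
wait   # returns only after all three background jobs finish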
To detach a process completely from the shell, you may want to redirect its stdout and stderr to a file or to /dev/null. A nice way of doing this is with the nohup command.
source for above explanation: http://bashitout.com/2013/05/18/Ampersands-on-the-command-line.html
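A minimal sketch of that fully detached nohup pattern mentioned above, discarding all output (substitute a log file for /dev/null if you want to keep it):
nohup ./myscript.py > /dev/null 2>&1 &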
You can add the -f option to make the ssh command run in the background.
So the answer is ssh -f -D port username@hostname -N.

How to properly SIGINT a bash script that is run from another bash script?

I have two scripts, one of which calls the other and needs to kill it after some time. A very basic, working example is given below.
main_script.sh:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
./record.sh &
PID=$!
# perform some other commands
sleep 5
kill -s SIGINT $PID
#wait $PID
echo "Finished"
record.sh:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
RECORD_PIDS=1
printf "WallTimeStart: %f\n\n" $(date +%s.%N) >> test.txt
top -b -p $RECORD_PIDS -d 1.00 >> test.txt
printf "WallTimeEnd: %f\n\n" $(date +%s.%N) >> test.txt
Now, if I run main_script.sh, it will not nicely close record.sh on finish: the top command will keep on running in the background (test.txt will grow until you manually kill the top process), even though the main_script is finished and the record script is killed using SIGINT.
If I ctrl+c the main_script.sh, everything shuts down properly. If I run record.sh on its own and ctrl+c it, everything shuts down properly as well.
If I uncomment wait, the script will hang and I will need to ctrl+z it.
I have already tried all kinds of things, including using trap to launch a cleanup script on receiving SIGINT, EXIT, and/or SIGTERM, but nothing worked. I also tried bringing record.sh back to the foreground with fg, but that did not help either. I have been searching for nearly a day now, with no luck unfortunately. I have made an ugly workaround which uses pidof to find the top process and kill it manually from main_script.sh, and then writes the "WallTimeEnd" statement to the file from main_script.sh as well. Not very satisfactory to me...
Looking forward to any tips!
Cheers,
Koen
Your issue is that the SIGINT is delivered to bash rather than to top. One option would be to use a new session and send the signal to the process group instead, like:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
setsid ./record.sh &
PID=$!
# perform some other commands
sleep 5
kill -s SIGINT -$PID
wait $PID
echo "Finished"
This starts the sub-script in a new process group, and the negative PID argument (-$PID) tells kill to signal every process in that group, which will include top.
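An alternative sketch (not from the answer above): make record.sh forward the signal itself. The trap attempts mentioned in the question likely failed because bash runs a trap only after the current foreground command returns; running top in the background and waiting on it lets the trap fire immediately. Assuming the same file names as the question:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
RECORD_PIDS=1
printf "WallTimeStart: %f\n\n" $(date +%s.%N) >> test.txt
top -b -p $RECORD_PIDS -d 1.00 >> test.txt &  # background, so bash can process traps
TOP_PID=$!
trap 'kill "$TOP_PID" 2>/dev/null' INT TERM   # forward the signal to top
wait "$TOP_PID"
printf "WallTimeEnd: %f\n\n" $(date +%s.%N) >> test.txt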

Kill ssh and/or the remote process from a bash script

I am trying to run the following command as part of a bash script which is supposed to open an SSH channel, run a program on the remote machine, save its output to a file for 10 seconds, kill the process that was writing to the file, and then give control back to the bash script.
#!/bin/bash
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null; sshpid=!$; sleep 10; kill -9 $sshpid 2>/dev/null &'
Unfortunately, what it seems to do is start the nodes-listener program remotely, but it never gets any further and never gives control back to the bash script. So the only way to stop the execution is to press Ctrl+C.
Killing ssh doesn't help (or rather can't be done from the script), since control never returns to the bash script: it waits for the command within the ssh session to complete, which of course never happens, as the process has to be killed to stop.
Here's the command line that you're running on the remote system:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null
sshpid=!$
sleep 10
kill -9 $sshpid 2>/dev/null &
You should change it to this:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & <-- Ampersand goes here
sshpid=$!
sleep 10
kill -9 $sshpid 2>/dev/null
You want to start nodes-listener and then kill it after ten seconds. To do this, you need to start nodes-listener as a background process, so that the shell executing this command line can move on to the next command after starting it. The & in your command line is in the wrong place, and would apply only to the kill command. You need to apply it to the nodes-listener command.
I'll also note that your sshpid=!$ line was incorrect. You want sshpid=$!. $! is the process ID of the last command started in the background.
You need to place the ampersand after the first command, then put the remaining commands onto the next line:
ssh hostname -- '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!; sleep 10; kill $sshpid 2>/dev/null'
By the way, ssh returns after all commands have been executed, which means it closes the allocated pty as well. If there are still background jobs running in that shell session, they will be killed by SIGHUP. This means you can probably omit the explicit kill command (depending on whether nodes-listener handles SIGHUP and SIGTERM differently). With that, you could simplify the code to the following:
ssh hostname -- sh -c '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sleep 10'
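Another option, assuming GNU coreutils' timeout is available on the remote machine, is to let timeout enforce the ten-second limit instead of a background kill:
ssh hostname -- 'timeout 10 /root/bin/nodes-listener > /tmp/nodesListener.out </dev/null'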
I have resolved this by pushing the shell script to the remote machine and executing it there. It is admittedly less tidy and relies on space being available on the remote machine.
Since my remote machine is a small physical device, space usage matters (even for the tiny amount required in this case).
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!
sleep 20
sync
# kill the nodes-listener process and give control back to the calling bash script
killall -9 nodes-listener 2>/dev/null && echo "nodes-listener is killed"
