I would like my script to run in the background, SSH into another computer, run tcpdump, produce a pcap file and save it to my local computer. I have all of this working except for the running-in-the-background portion.
I have looked at several solutions on Stack Overflow (example), but they don't seem to work for me. Admittedly, I am a novice with bash, so it is entirely possible that I am reading them incorrectly.
ssh root@ipaddress "tcpdump -c 400000 -s 0 -U -n -w - -i eth0 not arp" &>/dev/null &disown \ > /root/Destop/BashPcap/01Bash.pcap
Check where your quotes end; maybe that's the problem...
Or you can save the file remotely and download it back using scp (secure copy).
E.g.:
scp root@ipaddress:/path/to/file ~/Documents/path-where-you-want-to-save.pcap
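A minimal sketch of that two-step approach, assuming the remote host is reachable as root@ipaddress and that /tmp/capture.pcap is a writable path on it (both names are placeholders):
# 1) capture on the remote box, writing the pcap to a file there
ssh root@ipaddress "tcpdump -c 400000 -s 0 -U -n -w /tmp/capture.pcap -i eth0 not arp"
# 2) copy the finished capture back to the local machine
scp root@ipaddress:/tmp/capture.pcap /root/Destop/BashPcap/01Bash.pcap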
As far as I understood your task this is what you want:
nohup ssh root@ipaddress "tcpdump -c 400000 -s 0 -U -n -w - -i eth0 not arp" &> /root/Destop/BashPcap/01Bash.pcap &
In simple words:
nohup - allows you to close your terminal while the command continues to run
ssh ... - the command to execute
&> - redirects both stdout and stderr to the file (a Bash shorthand; in plain sh use > file 2>&1)
& - sends the command to the background
Note: &> sends both stdout and stderr to the file. You need this if you want the summary lines from tcpdump in the same file (keep in mind that mixing these text lines into a binary pcap can confuse tools that later read the capture). They are written to stderr:
N packets captured
X packets received by filter
Y packets dropped by kernel
If you do not want these lines, send stderr to /dev/null instead:
nohup ssh root@ipaddress "tcpdump -c 400000 -s 0 -U -n -w - -i eth0 not arp" 2>/dev/null > /root/Destop/BashPcap/01Bash.pcap &
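If you want a quick sanity check on the capture afterwards (this assumes tcpdump is also installed on the local machine), reading the first few packets back is enough to confirm the file is a valid pcap:
tcpdump -nn -r /root/Destop/BashPcap/01Bash.pcap | head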
So we have an embedded board running Linux.
We can connect to that board using telnet, which spawns a shell and gives access to it.
Now I am writing a bash script where I want to run commands on that shell and get their output.
E.g., my commands are something like the one below; I want to run it over telnet and see whether it was successful or not.
test -c /dev/null
When I run it like below, I always get 1 as the exit status:
{ test -c /dev/null; sleep 1;} | telnet <board ip addr>
If possible, I don't want to use expect.
Any suggestions/pointers?
With SSH you could trivially and robustly have done:
ssh yourhost 'test -c /dev/null'
With a simple shell on a TCP port, you could somewhat robustly but annoyingly have used:
echo 'test -c /dev/null; echo $?' | nc -q 1 yourhost 1234
telnet, by contrast, is notoriously timing-sensitive and tricky to script, so since you don't want to do it robustly with expect, you can try to kludge it:
{ sleep 1; echo 'test -c /dev/null; echo $?'; sleep 1; } | telnet somehost
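If you also need the exit status back in your script, one fragile but workable trick is to tag the echoed status and fish it out of the telnet output afterwards; the STATUS= marker and the sleep values here are assumptions you may need to tune for your board:
# run the command remotely, tag its exit status, then extract it from the captured output
out=$({ sleep 1; echo 'test -c /dev/null; echo "STATUS=$?"'; sleep 1; } | telnet somehost)
status=$(printf '%s\n' "$out" | sed -n 's/.*STATUS=\([0-9][0-9]*\).*/\1/p' | tail -n 1)
echo "remote exit status: $status"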
I am trying to use ssh from the command line to launch a python server on a remote server using the following command:
$ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
-o ConnectTimeout=5 -f -i mykey.pem user@99.99.99.99 \
'python -m SimpleHTTPServer 3000 & echo $! > /home/user/pysrv.pid'
After the launch, my ssh session goes to the background but does not exit as long as the python server is running. Is there a way I can set up the command so that ssh does not stick around as a background process on my current machine?
You can do
nohup python -m SimpleHTTPServer 3000 & echo $! > /home/user/pysrv.pid &
It will create a detached task that doesn't need the parent (ssh).
OTOH, if you kill the ssh server process, you won't be able to connect again. Is this what you want, or are you just trying to kill the session? If it's just the session, it should go away by itself after the connection is dropped.
The ssh command exits when the TCP connection is closed. If you redirect stdin, stdout, and stderr of the remote command, nothing will be holding the TCP connection open and it will close.
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=5 -f -i mykey.pem user@99.99.99.99 'python -m SimpleHTTPServer 3000 > /dev/null 2>&1 < /dev/null & echo $! > /home/user/pysrv.pid'
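As a rough follow-up (curl on the local machine is an assumption; the port and paths are the ones from the question), you can check that the server came up and later stop it through the pid file it wrote:
# check that the remote server answers on port 3000
curl -s -o /dev/null -w '%{http_code}\n' http://99.99.99.99:3000/
# stop it later using the pid recorded on the remote host
ssh -i mykey.pem user@99.99.99.99 'kill "$(cat /home/user/pysrv.pid)"'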
For example, I have a very simple script, ping.sh:
#!/bin/bash
/usr/bin/xterm -e ping localhost
Right now, the output of the ping only shows up in the new xterm. I would like the output to show in both the original terminal (stdout of ping.sh) as well as in the new xterm. Is there a way to do this?
PS: I'm struggling with a title for this.
Seems like a weird thing to do, but this might work:
#!/bin/bash
f=$(mktemp)
touch "$f"
tail -f "$f" &
/usr/bin/xterm -e "sh -c 'ping localhost 2>&1 | tee -a $f'"
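That version leaves the background tail and the temp file behind when xterm closes; a slightly tidier variant of the same idea (still just a sketch) adds cleanup afterwards:
#!/bin/bash
f=$(mktemp)
tail -f "$f" &        # mirror the file to this terminal
tailpid=$!
/usr/bin/xterm -e "sh -c 'ping localhost 2>&1 | tee -a $f'"
# xterm has exited: stop the local tail and remove the temp file
kill "$tailpid"
rm -f "$f"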
Alternatively, it's possible to get the file name of the terminal connected to standard input using the command tty, then use tee in the new terminal to copy the output to the old terminal.
/usr/bin/xterm -e "ping localhost | tee $(tty)"
Of course, this only works if the script is not called with redirected stdin.
If the script is called with redirected stdin, the solutions from "How to get the real name of the controlling terminal?" on Unix & Linux Stack Exchange can be used: readlink /proc/self/fd/1, or ps (which requires some output parsing).
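A rough sketch of that fallback, assuming a Linux-style ps; it uses tty when stdin is still a terminal and otherwise asks ps for the controlling terminal of the script:
# find the controlling terminal even when stdin is redirected
term=$(tty)                                      # prints "not a tty" if stdin is redirected
if [ "$term" = "not a tty" ]; then
    term=/dev/$(ps -o tty= -p $$ | tr -d ' ')    # fall back to the controlling tty reported by ps
fi
/usr/bin/xterm -e "ping localhost | tee $term"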
I have an old Syno NAS and wish to use the "shred" command to wipe the disks inside. The idea is to let the command run to completion on the box itself without the need of a computer.
So far I have managed...
1) to get the right parameters for 'shred'
* it runs in the background using the &
2) to get that command to output the progress (-v option) to a file shred.txt
* so I can see the progress from the file
shred -v -f -z -n 2 /dev/hdd 2>&1 | tee /volume1/backup/shred.txt &
3) to run the command over ssh so I can turn off my laptop while it's running
ssh -n -f root@host "sh -c 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd > /dev/null 2>&1 &'"
The problem is that I can't combine 2) and 3)
I tried to combine them like this, but the resulting file remained empty:
ssh -n -f root@host "sh -c 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd 2>&1 | tee /volume1/backup/shred.txt > /dev/null &'"
It might just be a noob problem, but I can't figure out how to get this done.
Any suggestions?
Thanks. Vince
The sh and tee commands are not needed here:
ssh -n root@host 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd > /volume1/backup/shred.txt 2>&1 &' >/dev/null
The final >/dev/null is optional; it just discards any greeting/banner output from the remote host.
I tried the following command (based on Grzegorz's suggestion) and included the opening date stamp and the previously mentioned - stupidly forgotten - verbose switch. Latest version of the command string:
ssh -n root@host 'date > /volume1/backup/shred_sda.txt; nohup /opt/bin/shred -v -f -z -n 4 /dev/sda >> /volume1/backup/shred_sda.txt 2>&1 # >/dev/null'
The last thing to figure out is how to include the date stamp when the shred command has completed.
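One way to get the closing timestamp as well (still only a sketch along the same lines, paths taken from the command above) is to wrap the whole sequence in one shell and nohup that, so the second date only runs once shred has finished:
ssh -n root@host "nohup sh -c 'date > /volume1/backup/shred_sda.txt; /opt/bin/shred -v -f -z -n 4 /dev/sda >> /volume1/backup/shred_sda.txt 2>&1; date >> /volume1/backup/shred_sda.txt' > /dev/null 2>&1 &"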
Linux newbie here.
I have a Perl script which takes two command-line inputs. I tried to run it in the background, but this is what I got:
[~user]$ nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
[2] 19603
[~user]$ nohup: appending output to `nohup.out'
After the system prints "nohup: appending output to `nohup.out'", no new prompt appears. Then, as soon as I type some other command, the shell tells me that the process is stopped:
[~user]$ nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
[2] 19603
[~user]$ nohup: appending output to `nohup.out'
ls
ascii_loader_script.pl format_wrds_trd.txt nohup.out norm_wrds_trd.cfg
[2]+ Stopped nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv
I've looked at this post and tried to do "sudo date" before executing the command. Still got the same thing.
http://www.sudo.ws/pipermail/sudo-users/2003-July/001648.html
The solution is to use the -b flag for sudo to run the command in the background:
$ sudo -b ./ascii_loader_script.pl 20070502 ctm_20070502.csv
You should only use nohup if you want the program to continue even after you close your current terminal session.
The problem here, imho, is not nohup, but running sudo in the background.
You are putting the process in the background (& at the end of the command), but sudo probably needs password authentication, and that is why the process stops.
Try one of these:
1) Remove the ampersand from the end of the command, reply to the password prompt, and afterwards put it in the background (by typing CTRL-Z, which stops the process, and then issuing the bg command to send it to the background).
2) Change /etc/sudoers so that it does not ask for your user's password, by including the line:
myusername ALL=(ALL) NOPASSWD: ALL
If, besides the password reply, your application waits for other input, then you can pipe the input to the command like this:
$ cat responses.txt | sudo mycommand.php
hth
You can try:
sudo su
and then
nohup ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
instead of
nohup sudo ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
You must use sudo first, nohup second.
sudo nohup ./ascii_loader_script.pl 20070502 ctm_20070502.csv &
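Building on that ordering, a small variation that also keeps the output and the pid around; loader.log and loader.pid are just placeholder names, and it assumes sudo has already cached your credentials (e.g. from a prior sudo -v) so the background job is not stopped at a password prompt:
sudo -v                       # authenticate up front so the background job does not stop for a password
sudo nohup ./ascii_loader_script.pl 20070502 ctm_20070502.csv > loader.log 2>&1 &
echo $! > loader.pid          # pid of the backgrounded sudo, handy for checking on the job later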
My working solution for evaluating disk fragmentation in the background:
Run sudo with nohup, without an ampersand (&) at the end:
$ sudo nohup nice -20 find / -type f -exec filefrag "{}" \; | sed 's/^\(.*\): \([0-9]\+\) extent.*/\2\t\1/'| awk -F ' ' '$1 > 0' | sort -n -r | head -50 > filefrag.txt
Enter password for sudo;
Press Ctrl+Z;
Put the running process in the background.
$ bg 1
[1]+ sudo nohup nice -20 find / -type f -exec filefrag "{}" \; | sed 's/^\(.*\): \([0-9]\+\) extent.*/\2\t\1/' | awk -F ' ' '$1 > 0' | sort -n -r | head -50 > filefrag.txt &
Now you can exit the terminal and log in later. The process will keep running in the background, because nohup is used.
First of all, you should switch sudo and nohup.
And then:
if sudo echo Starting ...
then
sudo nohup <yourProcess> &
fi
The echo Starting ... can be replaced by any command that does not do much.
I only use it as a dummy command for sudo.
This way, the sudo in the if condition triggers the password check.
If it succeeds, the sudo session is authenticated and the second call will also succeed; otherwise the if fails and the actual command is not executed.
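A concrete instance of that pattern, with a placeholder job filled in (my_long_job and the log path are assumptions):
if sudo true                  # harmless dummy command whose only job is to trigger the password prompt
then
    sudo nohup ./my_long_job > /tmp/my_long_job.log 2>&1 &
fi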
I opened an editor and typed these lines:
#!/bin/bash
sudo echo Starting ...
sudo -b MyProcess
(Where MyProcess is anything I want to run as superuser.)
Then I saved the file where I wanted it, as MyShellScript.sh.
Then I changed the file permissions to allow execution.
Then I ran it in a terminal. The "-b" option tells sudo to run the process separately in the background, so the process keeps running after the terminal session dies.
Worked for me in Linux Mint.
You can set it as your alias:
sudo sh -c 'nohup openvpn /etc/openvpn/client.ovpn > /dev/null 2>&1 &'
This should work:
sudo -b -u userName ./myScript > logFile
I am just curious to understand: can I send this logFile as an email after ./myScript has finished running successfully in the background?
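One way to do that, as a sketch only: it assumes a working mail command (e.g. mailx) is installed and admin@example.com is a placeholder address; the message is only sent if ./myScript exits successfully:
sudo -b -u userName sh -c './myScript > logFile 2>&1 && mail -s "myScript finished" admin@example.com < logFile'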
Try:
xterm -e "sudo -b nohup php -S localhost:80 -t /media/malcolm/Workspace/sites &>/dev/null"
When you close xterm, the PHP web server is still alive.
Don't put nohup before sudo or else the PHP web server will be killed after closing xterm.