In Linux, how do you make a command run in the background without it outputting to the screen? [closed]

I know this sounds like a silly question at first glance, but I've tried everything.
I want to execute the command arpspoof in the Kali Linux terminal but I do not want to see the endless output.
First I try this:
arpspoof -t 10.1.1.1 10.1.1.2 >/dev/null
And it still outputs to the screen.
Then I try this:
arpspoof -t 10.1.1.1 10.1.1.2 & >/dev/null
And it still outputs to the screen.
Then I add another one at the end:
arpspoof -t 10.1.1.1 10.1.1.2 & >/dev/null &
And it still outputs to the freakin screen.

Try
arpspoof -t 10.1.1.1 10.1.1.2 2>/dev/null 1>/dev/null &
where:
arpspoof -t 10.1.1.1 10.1.1.2 is your command
2>/dev/null redirects standard error (STDERR) to the "bit bucket"
1>/dev/null redirects standard out (STDOUT) to the "bit bucket"
& sets the entire command line to run in the background
This form is more verbose, but perhaps easier to understand.
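In bash (though not POSIX sh), the same two redirections can also be written with the &> shorthand, which is equivalent but more compact:
arpspoof -t 10.1.1.1 10.1.1.2 &>/dev/null &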

A somewhat redundant answer, but I prefer this format (append & to run it in the background, as above):
arpspoof -t 10.1.1.1 10.1.1.2 >/dev/null 2>&1
Be sure that you don't have background processes still running (therefore still writing to the console/screen) from previous attempts to redirect the output.
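One way to check for and clean up such leftovers, sketched below (this assumes the stray processes still belong to the current shell, or can be matched by name):
jobs -l            # list this shell's background jobs together with their PIDs
kill %1            # stop job number 1, if it is one of them
pkill arpspoof     # or kill any arpspoof process by name (needs sufficient privileges)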

Related

Process gets killed when ssh disconnects [closed]

I'm running the script below on a gcp debian instance. When shutting down my computer, ssh disconnects, and the script stops. Below is my script:
wget -P/root -N --no-check-certificate "https://raw.githubusercontent.com/reeceyng/v2ray-agent/master/shell/install_en.sh" && mv /root/install_en.sh /root/install.sh && chmod 700 /root/install.sh &&/root/install.sh
I have tried tmux and screen to prevent this, based on suggestions in other posts. None of them were helpful; all processes stop after some time.
Use nohup to detach the process from your shell. Note that nohup only applies to the single command that follows it, so wrap the whole chain in sh -c. For example:
nohup sh -c 'wget -P /root -N --no-check-certificate "https://raw.githubusercontent.com/reeceyng/v2ray-agent/master/shell/install_en.sh" && mv /root/install_en.sh /root/install.sh && chmod 700 /root/install.sh && /root/install.sh' &
should do the trick.
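If you don't redirect the output yourself, nohup appends it to nohup.out in the directory the command was started from (falling back to $HOME/nohup.out), so you can check on the install later, for example:
tail -f /root/nohup.out    # assumes the chain was launched from /root as root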

How to set the output of ^Z (Control-Z) to output the PID of the stopped process? [closed]

I start my script bash test.sh and then press Control-Z and get this:
^Z
[1]+ Stopped bash test.sh
All fine, but I want the output to also include the PID.
I know I can run ps $(jobs -p) afterwards to get the PID, but is it possible for the output of Control-Z to print the PID directly?
I don't think that's possible. That said, perhaps you can take a step back and clarify why you are hoping to do that?
Because what you can do is directly refer to the particular job with %1 (or %<n> more generally, if you have multiple background jobs) for several built-in commands (fg, bg, kill, ...):
$ sleep 30
^Z
[1]+ Stopped sleep 30
$ kill %1
[1]+ Terminated: 15 sleep 30
More details in man bash or here: https://www.gnu.org/software/bash/manual/bash.html#Job-Control-Basics
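If you mainly need the PID after stopping the job, jobs -l includes it in the listing (a small illustration; the PID shown is made up):
$ sleep 30
^Z
[1]+  Stopped                 sleep 30
$ jobs -l
[1]+ 12345 Stopped                 sleep 30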

Closing an open ssh port in Linux with one line [closed]

I often ssh tunnel into Rstudio on a server I have set up. I'm trying to devise a single command that I can use to close the ssh port. I know that I can find the PID for localhost:1234 with:
sudo lsof -i :1234
And I also know that I can kill the process with:
sudo kill $(sudo lsof -t -i:1234)
The issue is that if I have Chrome open to run Rstudio server, the 2nd command will kill the open Chrome browser as well. Is there a way to modify the 2nd command so that I close the open ssh port, but not the Chrome browser? There are two PID numbers, so I could theoretically grep for 'ssh', but I'm not sure how.
EDIT FOR CLARITY:
For example, I get the following output from the first command. I want to modify the 2nd command so that I can kill PID 15834, but not 30117. Apologies, I hope that makes more sense.
Try this:
sudo kill $(sudo lsof -t -i:1234 -c ssh)
where -c <c> selects the listing of files for processes executing a command that begins with the characters in c (here, ssh).
Just firewall the port:
sudo iptables -I INPUT -p tcp --dport 1234 -j DROP
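The DROP rule stays in place until the ruleset is flushed or the machine reboots; to undo it later, delete the matching rule with the same options but -D instead of -I:
sudo iptables -D INPUT -p tcp --dport 1234 -j DROP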

Bash script check permissions to run command on remote [closed]

I have a local development machine and from my bash script am sending commands to the remote server.
How can I write bash code to check if I am allowed to run the remote command so that I can handle the success/failure response from my script?
Alternatively, how can I capture the output so that I can parse it and detect if it succeeded. The difficulty with parsing is that the ssh command might trigger a password prompt so I can't interfere with that.
The bash script uses ssh -qt to send the remote commands.
Command:
ssh user@host -qt "sudo -u www /usr/local/bin/php /mnt/data/script.php"
Output:
[sudo] password for xxx:
Sorry, user xxx is not allowed to execute '/usr/local/bin/php /mnt/data/script.php' as www on host.domain.com
Assuming that user != root above: you can't - there's no way to read /etc/sudoers or /etc/sudoers.d/* in a normally set-up Linux box if you're not root, so apart from trial & error there's nothing to be done.
As for capturing the result - that's fairly simple (parsing it, of course, is a different story, depending on what you're doing over there).
output=$( ssh user@host -qt "sudo -u www /usr/local/bin/php /mnt/data/script.php" 2>&1 )
After the execution (and you typing the password for sudo)
echo $? # gives you the return-code of what happened on the far end, if it's a success that should be 0
echo "$output" # gives you the strings to parse (quoted so line breaks are preserved)
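Putting it together, a minimal sketch of that capture-and-check approach (user@host and the remote command are the placeholders from above; matching on 'not allowed' is based on the error message shown in the question):
output=$( ssh -qt user@host "sudo -u www /usr/local/bin/php /mnt/data/script.php" 2>&1 )
status=$?
if [ "$status" -eq 0 ]; then
    echo "remote command succeeded"
elif printf '%s\n' "$output" | grep -q 'not allowed'; then
    echo "sudo permission denied on the remote host"
else
    echo "remote command failed (exit $status)"
fi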

Keep process running on remote machine after exiting ssh session inside bash script [closed]

I have a bash script in which I log in to a remote machine over ssh, run iperf, and then log out and do other things locally. I want iperf to keep running after the bash script logs out. I have tried nohup, disown and setsid, but they don't seem to work when I use them inside the bash script. I have also tried running iperf inside another script; that didn't work either.
Here's the part of the script with nohup example:
ssh root@10.101.10.35 &>/dev/null & << EOF
nohup iperf -s -B 192.168.99.1 &>/dev/null &
EOF
You need to redirect stdin, stdout, and stderr somewhere other than your terminal, like so:
ssh root@10.101.10.35 'iperf -s -B 192.168.99.1 < /dev/null > /tmp/iperf_combined.log 2>&1 &'
stdin is taken from /dev/null (nothing is entered)
stdout and stderr goes to /tmp/iperf_combined.log
The process will run on the remote machine until you will manually kill it or until the script/command will exit on its own.
Edit (as a reply to the poster's comment):
If you want to run multiple commands in the same ssh session, you may use:
ssh -T root@10.101.10.35 << EOF
iperf -s -B 192.168.99.1 < /dev/null > /tmp/iperf_combined_1.log 2>&1 &
iperf -s -B random_ip2 < /dev/null > /tmp/iperf_combined_2.log 2>&1 &
EOF
As per the ssh man page:
-T      Disable pseudo-tty allocation.
Detailed explanation of pseudo-tty here.
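To confirm the server keeps running after the ssh session ends, reconnect and look for the process by name (a quick check, assuming pgrep is available on the remote machine):
ssh root@10.101.10.35 'pgrep -a iperf'                  # prints the PID and full command line of any running iperf
ssh root@10.101.10.35 'tail /tmp/iperf_combined.log'    # inspect the redirected output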
