Process killed after closing terminal SSH [closed] - linux

I'm trying to clear the Facebook cache on my server every 2 seconds, so I logged in over SSH and ran this command:
while true; do sleep 2; curl -F id="http://twd.ma" -F scrape=true -F access_token='token' -F appID=appID https://graph.facebook.com; done &
Everything worked fine and the cache started being cleared every 2 seconds. However, when I close the SSH terminal the cache stops being cleared, and I think the process is killed. What should I do?

Your command stops executing because when you log out, the shell is lost. The '&' only means that the command runs in the background for as long as the shell is active.
You can do the following:
Write your script into a file, e.g. clearcache.sh, and omit the '&':
#!/bin/bash
while true; do
    sleep 2
    curl -F id="http://twd.ma" -F scrape=true -F access_token='token' -F appID=appID https://graph.facebook.com
done
Write the path to your script into /etc/rc.local:
/path/to/clearcache.sh > /dev/null 2>&1 &
The '> /dev/null 2>&1' means that all output your script produces is discarded.
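For orientation, a minimal sketch of how the whole file might then look, assuming a Debian-style /etc/rc.local that ends with exit 0 (the path is just a placeholder):

#!/bin/sh -e
# /etc/rc.local - executed at the end of each multiuser runlevel.
# Start the cache-clearing loop in the background and discard its output.
/path/to/clearcache.sh > /dev/null 2>&1 &

exit 0

Remember to make the script executable first with chmod +x /path/to/clearcache.sh.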

If screen is available to you then you can start a screen session by running screen, run your commands, then press ctrl-a ctrl-d to detach the session.
When you log in later you can issue screen -r to reconnect to the detached session.
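A quick sketch of that workflow (the session name is just an example):

screen -S cacheclear      # start a new named screen session
./clearcache.sh           # run the long-lived command inside it
# press Ctrl-a, then d, to detach; the command keeps running
screen -r cacheclear      # reattach later to check on it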

Related

How to kill a process in bash through terminal [closed]

This might be a very basic question, but I couldn't find it anywhere on the internet.
Let's assume I have a file named test with this code:
echo hello
sleep 10
echo hello
sleep 10
echo hello
sleep 10
How would I go about killing that program from another terminal on my server?
I am assuming the file is test.sh
You can do:
ps -x | grep ./test.sh
This will show the processes:
11164 pts/1 S+ 0:00 /usr/bin/bash ./test.sh
along with a second entry for the grep process itself; you can ignore (and won't be able to kill) the entry containing the word grep, because that process completes right away.
Now you can kill the process using its PID:
kill 11164
Your script filename is test.
So, in another terminal, you can execute ps aux | grep test.
Then you can get the PID of test, which is in the second column.
Then, execute kill -9 <PID>.
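If pgrep and pkill happen to be installed (they ship with procps on most Linux systems; this is an assumption about your box, not something the answers above rely on), the lookup and the kill can be combined:

pgrep -f test.sh      # list PIDs whose command line matches test.sh
pkill -f test.sh      # or send SIGTERM to all of them in one step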
Ctrl-C
Pressing this kills the program from your terminal, that is, from the main terminal where you executed it in the first place.

Keep process running on remote machine after exiting ssh session inside bash script [closed]

I have a bash script in which I log in to a remote machine over ssh, run iperf, and then log out and do other things locally. I want iperf to keep running after the bash script logs out. I have tried nohup, disown and setsid, but they don't seem to work when I use them inside the bash script. I have also tried running iperf inside another script; that didn't work either.
Here's the relevant part of the script, with the nohup example:
ssh root@10.101.10.35 &>/dev/null & << EOF
nohup iperf -s -B 192.168.99.1 &>/dev/null &
EOF
You need to redirect stdin, stdout and stderr somewhere other than your terminal, like so:
ssh root@10.101.10.35 'iperf -s -B 192.168.99.1 < /dev/null > /tmp/iperf_combined.log 2>&1 &'
stdin is taken from /dev/null (nothing is entered)
stdout and stderr go to /tmp/iperf_combined.log
The process will keep running on the remote machine until you manually kill it or it exits on its own.
Edit (as a reply to the poster's comment):
If you want to run multiple commands in the same ssh session, you may use:
ssh -T root@10.101.10.35 << EOF
iperf -s -B 192.168.99.1 < /dev/null > /tmp/iperf_combined_1.log 2>&1 &
iperf -s -B random_ip2 < /dev/null > /tmp/iperf_combined_2.log 2>&1 &
EOF
As per the ssh man page:
-T Disable pseudo-tty allocation.
Detailed explanation on pseudo-tty here
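To double-check that the servers really outlived the ssh call, you could reconnect and look for them; a small sketch, assuming pgrep is available on the remote machine and reusing the host and log path from above:

ssh root@10.101.10.35 'pgrep -af iperf; tail -n 3 /tmp/iperf_combined_1.log'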

I keep on getting error in rc.local [closed]

I have the following script added to my rc.local file
#wait until the internet connection is ready
wget -q --tries=10 --timeout=20 -O - http://google.com > /dev/null
while [ $? -ne 0 ]; do
printf "Waiting for internet connection"
sleep 10;
wget -q --tries=10 --timeout=20 -O - http://google.com > /dev/null;
done
If I am connected to the internet everything works fine, but if I am offline I get this error:
[FAIL] startpar: service(s) returned failure: rc.local ... failed!
What is wrong here and how can I fix it? I want rc.local to wait for an internet connection before it continues to another script.
As it appears to have solved your problem, this is how I would suggest writing your code:
while ! wget -q --tries=10 --timeout=20 -O - http://google.com > /dev/null; do
    echo 'Waiting for internet connection'
    sleep 10
done
Remember that [ is not syntax, it is a command whose exit status determines whether your while loop continues or not. You should just test the exit status of the command you are trying to run directly.
As for why this has solved your problem, to be honest I'm not sure. The error message that you have shown isn't really specific enough to indicate where the problem lies and I can't see anything actually wrong with what you did before. Perhaps you should try adding set -xv to your rc.local file to enable debugging.
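For example, the top of rc.local could be instrumented like that; a sketch, with the header line assumed from the Debian-style default. One plausible cause of the original failure, for what it's worth: that default header is #!/bin/sh -e, so the bare wget before the loop returning non-zero aborts the whole file, while a command tested directly by while (as above) is exempt from -e.

#!/bin/sh -e
set -xv    # trace every line and its expansion while debugging

while ! wget -q --tries=10 --timeout=20 -O - http://google.com > /dev/null; do
    echo 'Waiting for internet connection'
    sleep 10
done

# ... start the other script here ...

exit 0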

How to wait for user input in a terminal called with -e option? [duplicate]

This question already has answers here:
Prevent Gnome Terminal From Exiting After Execution [duplicate]
(4 answers)
I'm trying to open gnome-terminal (though I think it would apply to any x-terminal-emulator) with a command provided via the -e option, like gnome-terminal -e 'ls'. The terminal closes as soon as the command finishes, so I need a way to wait for user input so I can read the result and then close the window with an Enter press.
I tried gnome-terminal -e 'ls; read -p "..."' and it works if I run ls; read -p "..." in an already opened terminal, but a terminal launched with the -e option still closes immediately.
So is there any way to keep the terminal open until some user input is provided while using the -e option?
Spawn a shell:
xterm -e bash -c 'ls; read -p "Press any key ..."'
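The same idea works with gnome-terminal itself, assuming a release that still accepts -e (newer versions replace it with --):

# run ls, then block on read so the window stays open until Enter is pressed
gnome-terminal -e 'bash -c "ls; echo Press Enter to close; read"'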

Write a bash script to restart a daemon [closed]

I thought I could just use this related question: How do I write a bash script to restart a process if it dies. @lhunath had a great answer and told me that everything I might do about it was wrong, but I'm restarting a daemon process and I'm hoping there's something I can do in a single script that works.
My process starts with a kick-off script that shows the startup log, but then quits and leaves the process running detached from the shell:
>sudo ./start
R CMD Rserve --RS-conf /var/FastRWeb/code/rserve.conf --vanilla --no-save
...
Loading required package: FastRWeb
FastRWeb: TRUE
Loading data...
Rserv started in daemon mode.
>
The process is up and running:
ps -ale | grep Rserve
1 S 33 16534 1 0 80 0 - 60022 poll_s ? 00:00:00 Rserve
Is there a simple way to wrap or call the 'start' script from bash and restart when the process dies or is this a case where PID files are actually called for?
Dang, the question got closed even after pointing to a very similar question that was not closed on Stack Overflow. You guys suck.
A very simple way to monitor the program is to use cron: check every minute (or so) whether the program is still alive, and ./start it otherwise.
As root, invoke crontab -e.
Append a line like this:
* * * * * if ! pidof Rserve >/dev/null 2>&1; then /full/path/to/start; fi
This method will stay persistent, i.e., it will be executed after a reboot etc. If this is not what you want, move it to a shell script:
#!/bin/bash
# monitor.sh
while true; do
    if ! pidof Rserve >/dev/null 2>&1; then /full/path/to/start; fi
    sleep 10
done
This script has to be started manually from the command line, and can be easily stopped with Ctrl-C.
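If you want monitor.sh itself to outlive your login shell (the same problem as in the first question above), a small sketch:

chmod +x monitor.sh
nohup ./monitor.sh >/dev/null 2>&1 &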
The easiest solution, if you can run the process in non-daemon mode, is to wrap it in a script.
#!/bin/bash
while true; do
    # Replace xmessage with your process run in the foreground;
    # when it exits (or is killed), the loop starts it again.
    xmessage "This is your process. Click OK to kill and respawn"
done
Edit
Many daemons leave a lock file, usually in /var/lock, that contains their PID. This keeps multiple copies of the daemon from running.
Under Linux, it is fairly simple to look through /proc and see if that process is still around.
Under other platforms you may need to play games with ps to check for the process's existence.
