How to run sbt as a daemon? - linux

I've tried nohup "sbt run" &
which returns: nohup: failed to run command ‘sbt run’: No such file or directory
and I tried:
nohup sbt run &
[2] 7897
# nohup: ignoring input and appending output to ‘nohup.out’
When I press Enter, expecting the process to continue running, I get:
[2]+ Stopped nohup sbt run
How do I run sbt as a daemon?
Update:
sbt run </dev/null &
[5] 8961
Then I cd up one directory:
# cd ..
[5]+ Stopped sbt run < /dev/null (wd: /home/sum)
(wd now: /home)
So it starts as a daemon, but if I perform any action such as changing directory, the process stops? How do I keep the process running?

Looks like sbt requested input from your terminal. If it does not really need input (which is probably the case when you run a program in the background), you can run it like this:
sbt run </dev/null >output-file &
See this answer for details.
EDIT
Ok, now that was a puzzle. Short answer: run sbt as follows:
setsid nohup sbt run &
Rationale:
The reason sbt stops is the arrival of a SIGTTOU signal. It is delivered to a background process in several cases, including when the process modifies the terminal configuration. That is our case: according to strace -f sbt run &, sbt does a lot of black magic under the hood, like this:
[pid 16600] execve("/usr/bin/sh", ["sh", "-c", "stty -g < /dev/tty"], [/* 75 vars */] <unfinished ...>
To work around this, you can run sbt in a different session to detach it from the current terminal, so that it won't open /dev/tty and mess with the terminal settings.
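If you want to confirm the detachment, here is a rough check (sbt runs on the JVM, so the process typically shows up as java; sbt.log is just an example file name):
$ setsid nohup sbt run > sbt.log 2>&1 &
$ ps -o pid,sid,tty,args -C java   # the TTY column shows "?" once there is no controlling terminal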

This should also work
sbt -Djline.terminal=jline.UnsupportedTerminal run &
source: https://github.com/sbt/sbt/issues/701

oleg-andriyanov's answer did not work in my case
(the process exited soon after launch).
In such a case, try Mirko Stocker's command from the Play mailing list below as an alternative.
https://groups.google.com/forum/#!topic/play-framework/ZgjrPgib0-8
# screen -d -m sbt run

You can easily use tmux to do this (and to persist anything else). A bonus is that if you install it on a remote server, you can persist jobs as "sessions" and reconnect to the same terminal "session". https://www.linode.com/docs/networking/ssh/persistent-terminal-sessions-with-tmux/
1) Launch your sbt job
sbt
run
2) Detach from the tmux session
ctrl+b (then release)
d
3) Show active tmux sessions (only works from inside tmux)
ctrl+b
s
4) List all sessions on the machine
$ tmux ls
5) Attach to a session
$ tmux attach-session -t (your-session-number)
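If you prefer named sessions, a small sketch (the session name sbt-job is arbitrary):
$ tmux new -s sbt-job        # start a named session, run sbt inside it, detach with ctrl+b then d
$ tmux attach -t sbt-job     # reattach later by name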

If you have already run sbt and want to move it to the background, then once the process is in the STOPPED state, send it a CONTINUE signal with
kill -s SIGCONT PID
or
kill -s SIGCONT %JOB_NUMBER
This will ensure that the sbt process is now running in the background.
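For example (job and PID numbers are illustrative):
$ jobs
[1]+  Stopped                 sbt run
$ kill -s SIGCONT %1    # or simply: bg %1
$ jobs
[1]+  Running                 sbt run &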

Related

Don't kill the child process after SBT is killed in Scala?

I am running a script in Scala and Play using:
val pb = Process(s"bash $path/script.sh")
pb.run
The script starts a background process that is supposed to keep running even when sbt is killed. Here is the script:
#!/bin/bash
nohup liquidsoap liquidsoap.ls >/dev/null 2>&1 &
echo $! > liquidsoap.pid
The problem is that, even after using nohup and redirecting the output, when I kill SBT the background process started by the script is killed too.
Thank you
Try adding this to your sbt file:
fork in run := true
I found a solution. The problem was that when I killed SBT, a SIGINT signal was sent to all processes in its group. To keep the created processes from being killed, I need to put them in a different process group, which is done with the setsid command.
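For illustration, the script from the question could be adapted along these lines (a sketch of the idea, not the asker's exact fix):
#!/bin/bash
# start liquidsoap in its own session (and process group) so that
# signals aimed at sbt's process group no longer reach it
setsid nohup liquidsoap liquidsoap.ls >/dev/null 2>&1 &
echo $! > liquidsoap.pid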

Run a command in background and exit

I want to run a command silently via ssh and exit the shell, but the program should continue running.
I tried screen and nohup, but apparently with those it executes 3 processes instead of 1:
user:/bin/bash ./[script]
root: sudo [commandInTheScript]
root: [commandInTheScript]
What am I doing wrong?
P.S.: The thing is that I want to run this command with the Workflow app (iOS), but the app waits until the command is finished, so it freezes 'forever'
To run your process in the background, add & at the end of the command.
In your case, you also need the command to survive the end of the SSH session, since you plan to exit right after running it, so you need nohup:
nohup <command> &
This runs your command in the background and prints its PID.
How did you use nohup?
E.g.
nohup ruby server.rb &
The ampersand (&) is necessary to let the command run in the background.

Running shell script command after executing an application

I have written a shell script to execute a series of commands. One of the commands in the shell script is to launch an application. However, I do not know how to continue running the shell script after I have launched the application.
For example:
...
cp somedir/somefile .
./application
rm -rf somefile
Once I have launched the application with "./application", I am no longer able to continue on to the "rm -rf somefile" command, but I really need to remove the file from the directory.
Anyone have any ideas how to complete running the "rm -rf" command after launching the application?
Thanks
As pointed out by others, you can background the application (man bash 'job control', e.g.).
Also, you can use the wait builtin to explicitly await the background jobs later:
./application &
echo doing some more work
wait # wait for background jobs to complete
echo application has finished
You should really read the man pages and bash help for more details, as always:
http://unixhelp.ed.ac.uk/CGI/man-cgi?sh
http://www.gnu.org/s/bash/manual/bash.html#Job-Control-Builtins
Start the application in the background; this way the shell will not wait for it to terminate and will execute the subsequent commands right after starting the application:
./application &
In the meantime, you can check the background jobs by using the jobs command and wait on them via wait and their ID. For example:
$ sleep 100 &
[1] 2098
$ jobs
[1]+ Running sleep 100 &
$ wait %1
Put the started process in the background:
./application &
You need to start the command in the background using '&' and maybe even nohup.
nohup ./application > log.out 2>&1 &

How to run Node.js as a background process and never die?

I connect to the Linux server via PuTTY SSH. I tried to run it as a background process like this:
$ node server.js &
However, after 2.5 hrs the terminal becomes inactive and the process dies. Is there any way I can keep the process alive even with the terminal disconnected?
Edit 1
Actually, I tried nohup, but as soon as I close the Putty SSH terminal or unplug my internet, the server process stops right away.
Is there anything I have to do in Putty?
Edit 2 (on Feb, 2012)
There is a node.js module, forever. It will run a node.js server as a daemon service.
nohup node server.js > /dev/null 2>&1 &
nohup means: Do not terminate this process even when the stty is cut off.
> /dev/null means: stdout goes to /dev/null (which is a dummy device that does not record any output).
2>&1 means: stderr also goes to stdout (which is already redirected to /dev/null). You may replace &1 with a file path to keep a log of errors, e.g.: 2>/tmp/myLog
& at the end means: run this command as a background task.
Simple solution (if you are not interested in coming back to the process, just want it to keep running):
nohup node server.js &
There's also the jobs command to see an indexed list of those backgrounded processes. And you can kill a backgrounded process by running kill %1 or kill %2 with the number being the index of the process.
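For example (the job number and PID are illustrative):
$ nohup node server.js &
[1] 1234
$ jobs
[1]+  Running                 nohup node server.js &
$ kill %1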
Powerful solution (allows you to reconnect to the process if it is interactive):
screen
You can then detach by pressing Ctrl+a and then d, and attach back by running screen -r
Also consider the newer alternative to screen, tmux.
You really should try to use screen. It is a bit more complicated than just doing nohup long_running &, but once you understand screen you will never go back.
Start your screen session at first:
user#host:~$ screen
Run anything you want:
wget http://mirror.yandex.ru/centos/4.6/isos/i386/CentOS-4.6-i386-binDVD.iso
Press Ctrl+a and then d. Done. Your session keeps running in the background.
You can list all sessions with screen -ls, and attach to one with a command like screen -r 20673.pts-0.srv, where 20673.pts-0.srv is an entry from that list.
This is an old question, but it ranks high on Google. I almost can't believe the highest-voted answers, because running a node.js process inside a screen session, with &, or even with nohup -- all of them -- are just workarounds.
Especially the screen/tmux solution, which should really be considered an amateur solution. Screen and tmux are not meant to keep processes running, but to multiplex terminal sessions. It's fine when you are running a script on your server and want to disconnect. But for a node.js server you don't want your process to be attached to a terminal session. That is too fragile. To keep things running you need to daemonize the process!
There are plenty of good tools to do that.
PM2: http://pm2.keymetrics.io/
# basic usage
$ npm install pm2 -g
$ pm2 start server.js
# you can even define how many processes you want in cluster mode:
$ pm2 start server.js -i 4
# you can start various processes, with complex startup settings
# using an ecosystem.json file (with env variables, custom args, etc):
$ pm2 start ecosystem.json
One big advantage I see in favor of PM2 is that it can generate the system startup script to make the process persist between restarts:
$ pm2 startup [platform]
Where platform can be ubuntu|centos|redhat|gentoo|systemd|darwin|amazon.
forever.js: https://github.com/foreverjs/forever
# basic usage
$ npm install forever -g
$ forever start app.js
# you can run from a json configuration as well, for
# more complex environments or multi-apps
$ forever start development.json
Init scripts:
I won't go into detail about how to write an init script, because I'm not an expert on the subject and it would be too long for this answer, but basically they are simple shell scripts triggered by OS events. You can read more about this here
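As a rough sketch only (the paths are placeholders, and a production init script should also handle status, restart, and stale pid files):
#!/bin/sh
# /etc/init.d/myapp -- minimal illustration of a classic init script
case "$1" in
  start)
    nohup node /srv/app/server.js > /var/log/myapp.log 2>&1 &
    echo $! > /var/run/myapp.pid
    ;;
  stop)
    kill "$(cat /var/run/myapp.pid)" && rm -f /var/run/myapp.pid
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac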
Docker:
Just run your server in a Docker container with the -d option and, voilà, you have a daemonized node.js server!
Here is a sample Dockerfile (from node.js official guide):
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
Then build your image and run your container:
$ docker build -t <your username>/node-web-app .
$ docker run -p 49160:8080 -d <your username>/node-web-app
Always use the proper tool for the job. It'll save you a lot of headaches and overtime hours!
Another solution: disown the job
$ nohup node server.js &
[1] 1711
$ disown -h %1
nohup will allow the program to continue even after the terminal dies. I have actually had situations where nohup prevents the SSH session from terminating correctly, so you should redirect input as well:
$ nohup node server.js </dev/null &
Depending on how nohup is configured, you may also need to redirect standard output and standard error to files.
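For example, redirecting all three standard streams (the log file names are just examples):
$ nohup node server.js </dev/null >app.log 2>app.err &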
Nohup and screen offer great lightweight solutions for running Node.js in the background. The Node.js process manager (PM2) is a handy tool for deployment. Install it globally with npm:
npm install pm2 -g
To run a Node.js app as a daemon:
pm2 start app.js
You can optionally link it to Keymetrics.io, a monitoring SaaS made by Unitech.
$ node server.js &
$ disown
disown removes the job from the shell's job table, so the shell will not send it SIGHUP on exit, while & runs the command in the background.
I have this function in my shell rc file, based on Yoichi's answer:
nohup-template () {
    [[ -z "$1" ]] && echo "Example usage:\nnohup-template urxvtd" && return 0
    nohup "$@" > /dev/null 2>&1 &
}
You can use it this way:
nohup-template command-to-run [args...]
Have you read about the nohup command?
To run a command as a system service on Debian with SysV init:
Copy the skeleton script and adapt it to your needs; probably all you have to do is set some variables. Your script will inherit sane defaults from /lib/init/init-d-script; if something does not fit your needs, override it in your script. If something goes wrong, you can see the details in the source of /lib/init/init-d-script. The mandatory variables are DAEMON and NAME. The script will use start-stop-daemon to run your command; in START_ARGS you can define additional start-stop-daemon parameters to use.
cp /etc/init.d/skeleton /etc/init.d/myservice
chmod +x /etc/init.d/myservice
nano /etc/init.d/myservice
/etc/init.d/myservice start
/etc/init.d/myservice stop
This is how I run some Python stuff for my wikimedia wiki:
...
DESC="mediawiki articles converter"
DAEMON='/home/mss/pp/bin/nslave'
DAEMON_ARGS='--cachedir /home/mss/cache/'
NAME='nslave'
PIDFILE='/var/run/nslave.pid'
START_ARGS='--background --make-pidfile --remove-pidfile --chuid mss --chdir /home/mss/pp/bin'
export PATH="/home/mss/pp/bin:$PATH"
do_stop_cmd() {
    start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 \
        $STOP_ARGS \
        ${PIDFILE:+--pidfile ${PIDFILE}} --name $NAME
    RETVAL="$?"
    [ "$RETVAL" = 2 ] && return 2
    rm -f $PIDFILE
    return $RETVAL
}
Besides setting the variables, I had to override do_stop_cmd because Python substitutes the executable name, so the service did not stop properly.
Apart from the cool solutions above, I'd also mention the supervisord and monit tools, which can start a process, monitor its presence, and restart it if it dies. With monit you can also run active checks, such as checking whether the process responds to an HTTP request.
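For example, a minimal supervisord program section might look like this (the program name and paths are placeholders):
[program:myapp]
command=node /srv/app/server.js
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myapp.log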
For Ubuntu I use this:
(exec PROG_SH &> /dev/null &)
regards
Try this for a simple solution
cmd & exit

How to run an infinite script in the background on Linux?

I have a PHP script with an infinite loop. I need this script to run forever. So I run
php /path/to/script.php > /dev/null &
and it works in the background in my current user's security context. But when I close the terminal window (log off), of course, CentOS Linux kills my program.
I see two options: run it in the background as a different user, or make it a daemon. I need help with either approach.
Thanks a lot!
nohup is your friend.
nohup command &
I think the general solution to that is nohup:
nohup is a POSIX command to ignore the HUP (hangup) signal, enabling the command to keep running after the user who issues the command has logged out. The HUP (hangup) signal is by convention the way a terminal warns depending processes of logout.
nohup is most often used to run commands in the background as daemons. Output that would normally go to the terminal goes to a file called nohup.out if it has not already been redirected. This command is very helpful when there is a need to run numerous batch jobs which are inter-dependent.
nohup is your friend.
You could:
Install screen and run the command from there. screen is a persistent terminal session that you can leave running.
Write an init/upstart (whatever you use) script so it loads on boot
Use the pear lib system_daemon
Use cron if batch work fits the scenario better (just remember to check for running instances before you launch another, if concurrency is an issue; see the flock sketch below)
Edit: or as everybody else and their brother has just said, nohup
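For the cron option, one common way to guard against overlapping runs is flock (the script path and lock file below are placeholders):
# crontab entry: attempt a run every minute, but skip it if the previous run still holds the lock
* * * * * /usr/bin/flock -n /tmp/script.lock php /path/to/script.php > /dev/null 2>&1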
Using command
nohup your_command &
For example
nohup phantomjs highcharts-convert.js -host 127.0.0.1 -port 3003 &
here "phantomjs highcharts-convert.js -host 127.0.0.1 -port 3003" was my command
