Recently I was working on an update and had to kill a few Java processes beforehand.
killall -9 java
So I used the above command, which killed all the Java processes. But now I'm stuck, not knowing how to restart those Java services.
Is there a command to start all the Java services killed using killall?
Using kill
First of all: kill -9 should be the last method to use to stop a process.
A process stopped with SIGKILL has no chance to shut down properly. Some services or daemons have complex and important shutdown procedures; databases, for example, take care to close open database files in a consistent state and write cached data out to them.
Before stopping processes with kill or something similar, you should try the stop procedure provided by the init system of your Unix/Linux operating system.
When you have to use kill, send a TERM signal to the process first (just use kill without -9) and wait a moment to see whether it shuts down. Use -9 only if there is no other option!
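A minimal sketch of that escalation, assuming a systemd-managed service and using a placeholder service name (myservice) and PID (12345):

sudo systemctl stop myservice      # prefer the init system's own stop procedure

kill 12345                         # otherwise send SIGTERM and give the process
sleep 5                            #   a moment to shut down cleanly

kill -9 12345                      # SIGKILL only as a last resort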
Starting and stopping services
Starting and stopping services should be handled by the init system that comes with your Unix/Linux operating system.
SysV init and systemd are the common ones. Check your operating system's manual to see which one is used. If set up properly, you can check which services are missing (stopped, but should be running) and start them again.
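On a systemd-based system, for example, checking and restarting services could look roughly like this (tomcat is just an illustrative service name):

systemctl --failed                       # units that should be running but have failed
systemctl status tomcat.service          # check the state of one service
sudo systemctl start tomcat.service      # start it again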
Here are some examples from the manuals:
FreeBSD:
https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/configtuning-rcd.html
Debian:
https://www.debian.org/doc/manuals/debian-handbook/unix-services.de.html#sect.system-boot
Fedora:
https://docs.fedoraproject.org/f28/system-administrators-guide/infrastructure-services/Services_and_Daemons.html
As far as I know, no. There is no record (by default) of what you have killed, as you can see in strace killall java.
More about process management, including why SIGKILL is a bad idea almost all of the time.
Related
In the Linux OS, what is the difference between killing the WebLogic process and running stopWeblogic.sh?
Without seeing the contents of stopWeblogic.sh, my assumption would be that the script shuts down the service gracefully, while killing the PID at the OS level just kills it, ungracefully.
Perhaps post some code and the answer could be better suited to your particular case.
Is it more reliable to use daemontools or supervisord, or to use a crontab entry that runs a script of mine to keep checking whether the process still exists and, if not, start it again?
What is the best way to guarantee that a process is always running, and running in a healthy condition? (i.e., not technically running but stalled in some error state, where it should be killed and started again).
Btw, this is a Java process that I start like java -jar app.jar.
Thanks!
We use monit for such tasks. It can start a process again if it is in a down state.
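A minimal monit sketch of that idea; the pidfile path and the start/stop commands are only placeholders for however your app is actually launched:

check process app with pidfile /var/run/app.pid
    start program = "/etc/init.d/app start"
    stop program  = "/etc/init.d/app stop"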
Personally, I use supervisord for my servers. It's really easy to configure and understand, and it can be configured to automatically re-run a failing process several times.
You should probably start by reading the official docs, specifically the section on setting up start retries.
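For a process started with java -jar app.jar, a minimal program section could look something like this (program name and jar path are illustrative):

[program:app]
command=java -jar /opt/app/app.jar
autostart=true
autorestart=true
startretries=5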
We have a custom setup with several daemons (web apps + background tasks) running. I am looking at using a service which helps us monitor those daemons and restart them if their resource consumption exceeds a certain level.
I would appreciate any insight on when one is better than the other. As I understand it, monit spins up a new process while supervisord starts a subprocess. What are the pros and cons of each approach?
I will also be using upstart to monitor monit or supervisord itself. The webapp deployment will be done using capistrano.
Thanks
I haven't used monit but there are some significant flaws with supervisord.
Programs should run in the foreground
This means you can't just execute /etc/init.d/apache2 start. Most of the time you can write a one-liner, e.g. "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND", but sometimes you need your own wrapper script. The problem with wrapper scripts is that you can end up with two processes, a parent and a child. See the next flaw...
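A wrapper avoids the extra parent process by replacing itself with the daemon via exec; for the apache2 example above, a sketch might be:

#!/bin/sh
# load the environment the init script would normally set up
. /etc/apache2/envvars
# exec replaces this shell, so supervisord supervises apache2 directly
exec /usr/sbin/apache2 -DFOREGROUND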
supervisord does not manage child processes
If your program starts child processes, supervisord won't detect this. If the parent process dies (or is restarted using supervisorctl) the child processes keep running and are "adopted" by the init process. This might prevent future invocations of your program from running, or consume additional resources. The more recent config options stopasgroup and killasgroup are supposed to fix this, but didn't work for me.
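For reference, those two options are set per program like this (the command path is illustrative; whether they actually clean up the children is, as noted, another matter):

[program:app]
command=/usr/local/bin/app-wrapper
; signal the whole process group on stop, not just the parent
stopasgroup=true
; SIGKILL the whole process group if the stop signal is ignored
killasgroup=true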
supervisord has no dependency management - see #122
I recently set up squid with qlproxy. qlproxyd needs to start first, otherwise squid can fail. Even though both programs were managed with supervisord, there was no way to ensure this. I needed to write a start script for squid that made it wait for the qlproxyd process. Adding the start script resulted in the orphaned-process problem described in flaw 2.
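A sketch of what such a start script might look like; it assumes squid supports -N to stay in the foreground and uses exec to at least avoid leaving a parent shell behind:

#!/bin/sh
# wait until a qlproxyd process exists before starting squid
while ! pgrep -x qlproxyd >/dev/null; do
    sleep 1
done
# -N keeps squid in the foreground; exec avoids a lingering wrapper process
exec /usr/sbin/squid -N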
supervisord doesn't allow you to control the delay between startretries
Sometimes when a process fails to start (or crashes), it's because it can't get access to another resource, possibly due to a network wobble. Supervisor can be set to restart the process a number of times. Between restarts the process will enter a "BACKOFF" state but there's no documentation or control over the duration of the backoff.
In its defence, supervisor does meet our needs 80% of the time. The configuration is sensible and the documentation pretty good.
If you additionally want to monitor resources, you should go for monit. Besides checking whether a process is running (availability), monit can also perform checks of resource usage (performance, capacity), load levels, and even basic security checks (md5sum of a binary file, config file, etc.). It has a rule-based config which is quite easy to comprehend. There are also a lot of ready-to-use configs: http://mmonit.com/wiki/Monit/ConfigurationExamples
Monit requires processes to create PID files, which can be a flaw: if a process does not create a PID file, you have to write some wrapper around it. See http://mmonit.com/wiki/Monit/FAQ#pidfile
Supervisord, on the other hand, is more tightly bound to a process: it spawns the process itself. It cannot do resource-based checks like monit. It does have a nice CLI, supervisorctl, and a web GUI though.
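To give an idea of what monit's rule-based checks look like, a hedged sketch (pidfile path, port and thresholds are all just examples):

check process app with pidfile /var/run/app.pid
    start program = "/etc/init.d/app start"
    stop program  = "/etc/init.d/app stop"
    if cpu > 80% for 5 cycles then restart
    if totalmem > 500 MB for 5 cycles then restart
    if failed port 8080 protocol http then restart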
What are the advantages of "daemonizing" a server application over running the program in console mode?
Having it run as a daemon means you can
log out without losing the service (which saves some resources)
not risk losing the service from an accidental ctrl-c
avoid the minor security risk of someone accessing the terminal, hitting ctrl-c and taking over your session
Essentially all 'real' services that are running 'in production' (as opposed to debug mode) run that way.
I think it prevents you from accidentally closing an app, and you have one more terminal free.
But I personally don't see a big difference between the "screen" program and "daemonizing".
The main point would be to detach the process from the terminal so that it does not terminate when the user logs out. If you run a program in console mode, it will terminate when you log out, because that is the default behavior for a process when it receives a SIGHUP signal.
Note that there is more to writing a daemon than just calling daemon(3). See How to write a unix daemon for more information.
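If you just want something like java -jar app.jar to survive logout without writing a full daemon, two common shell-level approaches (the log path is illustrative):

nohup java -jar app.jar > /var/log/app.log 2>&1 &                # ignore SIGHUP on logout

setsid java -jar app.jar > /var/log/app.log 2>&1 < /dev/null &   # detach from the controlling terminal entirely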
My system includes a task which opens a network socket, receives pushed data from the network, processes it, and writes it out to disk or pings other machines depending on the messages. This task is intended to run forever, and the service is designed to have this task always running. But sometimes it crashes.
What's the best practice for keeping a task like this alive? Assume it's okay for the task to be dead for up to 30 seconds before we restart it.
Some obvious ideas include having a watchdog process that checks to make sure the process is still running. The watchdog could be triggered by cron. But how does it know whether the process is alive or not? Write a pidfile? Touch a heartbeat file? An ideal solution wouldn't continuously spin up more processes if the machine gets bogged down to the point where the watchdog runs faster than the heartbeat.
Are there standard Linux tools for this? I can imagine a solution that uses a message queue, but I'm not sure if that's a good idea or not.
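For reference, the cron-plus-pidfile idea above could look roughly like this, run from a crontab entry such as * * * * * /usr/local/bin/watchdog.sh (all paths and names are placeholders):

#!/bin/sh
# watchdog.sh: restart the task if its pidfile points at a dead process
PIDFILE=/var/run/my-network-task.pid
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    exit 0                              # still alive, nothing to do
fi
/usr/local/bin/my-network-task &        # restart it
echo $! > "$PIDFILE"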
Depending on the nature of the task that you wish to monitor, one method is to write a simple wrapper to start up your task in a fork().
The wrapper task can then do a waitpid() on the child and restart it if it is terminated.
This does depend on modifying the source for the task that you wish to run.
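A rough shell analogue of that wrapper idea, if writing a fork()/waitpid() wrapper in C is more than you need (the binary path is a placeholder):

#!/bin/sh
# restart the task whenever it exits; sleep briefly to avoid a tight restart loop
while true; do
    /usr/local/bin/my-network-task
    echo "task exited with status $?, restarting in 5s" >&2
    sleep 5
done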
sysvinit will restart processes that die, if added to inittab.
If you're worried about the process freezing without actually crashing, you can use a heartbeat and hard-kill the frozen instance, letting init restart it.
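An inittab entry for that looks something like the following (the id and path are placeholders); init respawns the process whenever it dies:

# /etc/inittab format is id:runlevels:action:process
nt1:2345:respawn:/usr/local/bin/my-network-task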
You could use monit along with daemonize. There are lots of tools for this in the *nix world.
Supervisor was designed precisely for this task. From the project website:
Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
It runs as a daemon (supervisord) controlled by a command line tool, supervisorctl. The configuration file contains a list of programs it is supposed to monitor, among other settings.
The set of options is quite extensive -- have a look at the docs for a complete list. In your case, the relevant configuration section might be something like this:
[program:my-network-task]
command=/bin/my-network-task   ; path to your program
autostart=true                 ; start when supervisord starts?
autorestart=true               ; restart automatically when it stops?
startsecs=10                   ; consider the start successful after this many seconds
startretries=3                 ; how many times to retry starting
I have used Supervisor myself and it worked really well once everything was set up. It requires Python, which should not be a big deal in most environments, but might be in some.
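Once the config is in place, typical day-to-day usage through the command line tool looks something like this:

supervisorctl reread                     # pick up new or changed program sections
supervisorctl update                     # apply them
supervisorctl status my-network-task     # check the program's state
supervisorctl restart my-network-task    # restart it by hand if needed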