So, I can get the directory where dump.rdb is written to change by using the dir option in redis.conf when I start Redis normally (just calling redis-server). Since I want redis-server to run all of the time without needing a terminal window always open, I think I need to daemonize it. However, it doesn't seem to ever persist to disk automatically, and whenever the redis-server process ends (in testing I've been ending it with redis-cli shutdown, or sometimes just killing the process with kill PID) and starts back up, all database changes are lost. That seems pretty bad if a crash or unexpected shutdown were to happen in the future. In the code that processes the data (either Python with redis-py or Java with Jedis), I can explicitly run bgsave(), but that saves dump.rdb in the directory the code was run from, not where the dir option in redis.conf specifies.
So, is there either another way to run redis-server that doesn't require a terminal window to stay open but still allows what I want, or is there a way to get the data to persist to disk in the proper directory when it's run as redis-server --daemonize yes or similar?
You could run it in the Linux background using nohup, which doesn't need a terminal window to stay open. I don't know the daemonize option well enough to give advice about it, but see if this works for you:
nohup redis-server &> redis.log &
or
Set daemonize yes in the conf file and run:
redis-server path/to/redis.conf
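For the persistence side, here is a minimal sketch of the relevant redis.conf settings (the dir path is an assumption; adjust to your setup):

daemonize yes
dir /var/lib/redis              # where dump.rdb is written
dbfilename dump.rdb
save 900 1                      # snapshot if at least 1 key changed in 900s
save 300 10
save 60 10000

With save rules in place, Redis snapshots to disk automatically. Note also that BGSAVE is performed by the server process, so the file should land in the server's dir, not in the directory your Python or Java client ran from; if it doesn't, the server was likely started without this config file.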
Related
When I run DSC Cassandra on CoreOS (tarball) over telnet, everything comes up fine. But when I close the telnet session, it kills the process. How do I keep the Cassandra server running?
I tried sudo bin/cassandra and sudo bin/cassandra -f.
Neither helped.
I have no issues on other OSes.
Option       Description
-f           Start the cassandra process in the foreground. The default is to start as a background process.
-h           Help.
-p filename  Log the process ID in the named file. Useful for stopping Cassandra by killing its PID.
-v           Print the version and exit.
When you start Cassandra with -f it runs in the foreground, so it stops as soon as the terminal is closed. The same is true for the background process, which is still tied to your login session.
This will happen with any application you run in a telnet session.
You can try:
sudo service cassandra start
or
nohup bin/cassandra
Either will keep your application running even when the terminal is closed.
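For the nohup route, a sketch that also redirects output so nothing is lost when the session ends (the log path is an assumption):
nohup bin/cassandra > cassandra.log 2>&1 &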
You need to run Cassandra as a systemd service, as described here: https://coreos.com/os/docs/latest/getting-started-with-systemd.html
Running in the foreground with cassandra -f as your ExecStart= command will allow systemd to manage the state of the process (ideally inside a container).
While this is a bit different than what you're used to, it will lead to an overall more stable mechanism since you'll be using an init system that understands dependency chains, restart and reboot behavior, logging, etc.
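As a rough sketch, a unit file along these lines could go in /etc/systemd/system/cassandra.service (the install path and user are assumptions; adjust for your layout):

[Unit]
Description=Apache Cassandra
After=network.target

[Service]
# -f keeps Cassandra in the foreground so systemd can supervise it
ExecStart=/opt/cassandra/bin/cassandra -f
User=cassandra
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable and start it with:
sudo systemctl enable cassandra.service
sudo systemctl start cassandra.service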
Run the process in a screen or tmux session. Detaching from the screen session should allow the process to keep running.
On my Raspberry Pi, I want a server I wrote myself to be started at boot and restarted when it segfaults, so I added it to /etc/inittab. The problem is that the server won't start.
The line I added:
1:2345:respawn:/home/gear/lionfish/main /home/gear/lionfish/app
When I run this command from the command line it works just fine, but via inittab the server doesn't run. I've checked with ps aux, and it didn't show up.
Have I made some sort of mistake?
EDIT: Small side question: the server needs root privileges. Does inittab provide this automatically, or do I need to add something to the entry?
Typical problems:
As already mentioned, the environment is set up differently. Make sure $PATH is correct.
Does your program try to execute in a directory which becomes unmounted? If so, cd to / first.
Access restrictions to files and directories.
Process doesn't detach from stdin/stdout/stderr.
The process runs in foreground instead of background.
The parent process receives a terminating signal such as SIGTERM, which kills your process as well. Try ignoring these (and some other) signals by using nohup or sigset/sigignore.
Debugging hint: have the server append the current time to the end of an already existing file in a directory that is guaranteed to be writable. Make sure you flush (and close) the file pointer immediately. Then at least you can see whether it was started at all; a small wrapper sketch follows.
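A minimal wrapper combining these hints (paths are taken from the question; the wrapper name and log location are assumptions). Point the inittab entry at the wrapper instead of the binary:

#!/bin/sh
# /home/gear/lionfish/start.sh - wrapper for the inittab entry
echo "started at $(date)" >> /var/log/lionfish-start.log   # proves inittab ran us
cd /                                                       # avoid a cwd that may be unmounted
exec /home/gear/lionfish/main /home/gear/lionfish/app

and in /etc/inittab:
1:2345:respawn:/home/gear/lionfish/start.sh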
I have used various snippets of code to build a system which:
listens on a port for incoming TCP data (using a Perl script) and writes this data to a log file;
calls a PHP script to consume the log file and write it to an RDS MySQL DB.
I have a GPS device configured to send the data to the Elastic IP of my AWS EC2 server.
It works fine, and when I run it via SSH with
perl portlistener.pl
it does its job fine, happily working away.
The only way I can stop the script is by closing the terminal window, ending my SSH session. What I need is to keep it running at all times and to implement a start, stop, and restart facility. Do I need to create a daemon?
I know PHP, but until now I have never worked with Perl. I'm also not that familiar with the command line, other than installing updates, navigating, and editing single files.
Thanks in advance for any help, or for pointing me in the right direction.
Solved it, I think!
I installed CPAN: http://www.thegeekstuff.com/2008/09/how-to-install-perl-modules-manually-and-using-cpan-command/
Using CPAN, I installed Daemon::Control.
Then I created a new program as below (portlistener_launcher.pl) and ran it as SU.
#!/usr/bin/perl
use strict;
use warnings;
use Daemon::Control;

# environment inherited by child processes (FastCGI settings for the PHP side)
$ENV{PHP_FCGI_CHILDREN} = 10;
$ENV{PHP_FCGI_MAX_REQUESTS} = 1000;

Daemon::Control->new({
    name        => 'portlistener',
    program     => 'perl /home/ec2-user/portlistener/portlistener.pl',
    fork        => 2,   # double fork so the daemon fully detaches from the terminal
    pid_file    => '/var/run/portlistener.pid',
    stdout_file => '/var/log/portlistener.log',
    stderr_file => '/var/log/portlistener.log',
})->run;
There's probably a neater way of doing it, but it seems to work, and I can stop/start it like so:
perl portlistener_launcher.pl start
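If I'm reading the Daemon::Control docs right, the same launcher also understands the other standard actions:
perl portlistener_launcher.pl stop
perl portlistener_launcher.pl restart
perl portlistener_launcher.pl status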
If keeping a terminal window open is the only concern, you can use the nohup command, e.g.
http://linux.101hacks.com/unix/nohup-command/
To terminate the listener, you can kill the appropriate running process or processes.
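A sketch of both steps (the log filename is an assumption):
nohup perl portlistener.pl > portlistener.log 2>&1 &
kill $(pgrep -f portlistener.pl)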
Note that implementing a daemon does not by itself ensure it runs permanently: it can crash or be killed by someone. To guarantee the daemon is always running, you must implement 24x7 monitoring of it and restart it automatically.
I am continually running a few server scripts (on different ports) with nodejs using forever.
There is a considerable amount of traffic on some of these servers. The console.log calls I use for tracking connection anomalies result in bloated log files that I only need for debugging. I have been manually stopping the scripts late at night, truncating the files, and restarting them. This won't do for the long term, so we decided to find a solution.
Someone else on my system deleted the log files I had set up for each of the servers without my knowledge. Calling forever list on the command line shows that the server scripts are still running but now I can't tail the log files to see how the nodes are doing.
Node downtime should be kept to a bare minimum, so I'm hesitant to stop the servers during daylight hours for longer than a few minutes. Initial testing from the client side seems to indicate that the scripts are doing fine, but I can't be 100% sure there are no errors due to failed attempts at logging to a nonexistent file.
I have a few questions actually:
Is it ok to keep forever running like this?
If not, is there a proper way to disable logging? The github repository seems to indicate that forever will still log to a default file, which I don't want. Otherwise I may just write a cronjob that periodically stops scripts, truncates logs, then restarts the scripts.
What happens if I just create the logfile again with something like touch logfile_name.log while the script is still running - will this make forever freak out or is this a plausible solution?
Thanks for your time.
According to https://github.com/foreverjs/forever, you can pass -s to silence all logging:
forever -s start YOURSCRIPT
Of course, before doing this, try to update forever to the latest version:
sudo curl -L https://npmjs.com/install.sh | sudo sh
sudo npm update -g
1) Just build in a periodic function or an admin option to clear the forever logs. From the manual: forever cleanlogs
2) At least on Linux, send each log file to /dev/null. The log, stdout, and stderr files are specified by the options -l, -o, and -e. The -a option (append log) stops forever complaining about the log already existing.
forever start -a -l /dev/null -o /dev/null -e /dev/null your-server.js
Perhaps employ your own logging system; I use log4js, which doesn't complain if I delete the log file while the node process is still running.
There's a nifty tool that can help you, called logrotate. Have a look here.
The copytruncate option especially is very useful in your case.
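A minimal sketch of a logrotate config for this case (the log path is an assumption; it would go in /etc/logrotate.d/):

/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}

copytruncate copies the log and then truncates the original in place, so the node process keeps writing to the same open file descriptor and never needs to be stopped.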
We are finishing development of a project. The client is already using it, but occasionally errors occur that crash the server.
I know I could register the service as an 'upstart' script on Linux, in order to have my node service restarted when it crashes.
But our server is running other stuff, so we can't restart it.
Well, actually, while writing, I realize I have two questions then:
Will 'upstart' work without having to reboot? Something is just whispering yes to me :)
If not, what other option would I have to 'respawn' my node server when it crashes?
Yes, upstart will restart your process without a reboot.
Also, you should look into forever.
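A minimal sketch of the forever route (the script name is a placeholder):
npm install -g forever
forever start app.js
forever list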
PM2 is a production process manager for Node.js apps.
If your focus for automatic restart is an always-running application, I suggest using a process manager. A process manager, in general, handles the node process (or processes, if clustering is enabled) and is responsible for their execution. The PM leans on the operating system: your node app and the OS are not so strictly chained, because the PM sits in the middle. Final trick: put the process manager itself on upstart. That gives you a complete improvement path to follow; see the sketch below.
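A sketch of that setup with PM2 (the script name is a placeholder):
pm2 start app.js --name my-server    # start and supervise the app
pm2 startup                          # generate a boot script so pm2 itself survives reboots
pm2 save                             # remember the current process list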
Using a shared server and not having root privileges, I can't download or install any of the previously mentioned libraries. What I am able to do is use a simple infinite bash loop. First, I created the file ./startup.sh in the base directory ($ vim startup.sh):
#!/bin/bash
# restart the server whenever it exits
while :
do
    node ./dist/sophisticatedPrimate/server/main.js
done
Then I run it with:
$ bash startup.sh
and it works fine. There is a downside to this, which is that it doesn't have a graceful way to end the loop (at least not once I exit the server). What I ended up doing is simply finding the process with:
$ ps aux | grep startup.sh
Then killing it with
$ kill <process id>
for example:
$ kill 555555
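Or as a one-liner, assuming nothing else on the box matches the pattern:
$ kill $(pgrep -f startup.sh)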