What's the difference between these Redis start commands? - Linux

sudo /etc/init.d/redis-server start
sudo service redis-server start
sudo systemctl start redis-server
sudo redis-server --daemonize yes

The last one is "nearest to the metal": it directly starts the Redis server process with no special options and is "stand-alone". I would use this type of command when just "messing around" in the terminal with quick tests, or when trying to get an initial configuration tested and running.
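For example, a quick ad-hoc session might look something like this (the redis-cli calls are just for illustration; they assume the default port and config):
sudo redis-server --daemonize yes
redis-cli ping        # should answer PONG if the server came up
redis-cli shutdown    # stop the ad-hoc instance again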
The first 3 are all basically wrappers around starting the Redis server process to make it compatible with systemd or other Linux startup systems. They potentially add more layers of management, like:
logging to the systemd journal (visible via systemctl status / journalctl)
saving the process id so the process can be killed or restarted
potentially specifying a different config file
potentially waiting for other services to become available before starting Redis
I would prefer one of the first three for routine, every-day, managed starting up of Redis on a production system.
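To give a rough idea of what those wrappers add, a systemd unit for Redis typically looks something like the sketch below. This is illustrative only; the unit actually shipped by your distribution will differ in detail (newer packages use Type=notify, for instance):
[Unit]
Description=Advanced key-value store
After=network.target

[Service]
Type=forking
# assumes "daemonize yes" and a pidfile configured in redis.conf
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
PIDFile=/var/run/redis/redis-server.pid
User=redis
Group=redis

[Install]
WantedBy=multi-user.target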

Related

Redis Startup issues on Debian Stretch (9)

Actually I'm in the middle of switching to Debian 9 for the company's new production servers and want to provision them with Ansible. So far, everything works fine, but now I'm stuck on redis-server.
By default, Debian 9 comes with redis version 3.2. I'm installing the package via apt-get install redis-server. After that, redis starts up as a daemon in the background. Now I want to apply some custom configuration, like binding to 2 different IPs (127.0.0.1 and the server IP).
After changing this as well as the daemonize option (to yes), Redis is no longer willing to start in the background. Whenever I run either service redis-server start or /etc/init.d/redis-server start, the command just hangs.
journalctl -xe tells me that the PID file is not readable (redis-server.service: PID file /var/run/redis/redis-server.pid not readable (yet?) after start-post: No such file or directory) even though it should be created according to the init.d script:
start)
echo -n "Starting $DESC: "
mkdir -p $RUNDIR
touch $PIDFILE
chown redis:redis $RUNDIR $PIDFILE
chmod 755 $RUNDIR
After all, I can see that both service redis-server start and /etc/init.d/redis-server do start the server, and I'm also able to connect to it via redis-cli. But the damn start command still hangs.
Can anyone help? If you need further information, just let me know. I'll provide whatever is needed if it helps solve the problem!
best
Chris
I had a similar situation on a CentOS 7 server.
The resolution was to change supervised from no to auto:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised auto
When you run the process as a daemon it needs to interact with systemd for process management (if I read the documentation correctly).
Thanks
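For what it's worth, on a systemd-only host another common combination (my assumption, not part of the answer above) is to leave daemonize off entirely and let systemd do the supervision, pairing the config with a notify-type unit:
# /etc/redis/redis.conf
daemonize no
supervised systemd

# matching fragment of the systemd unit
[Service]
Type=notify
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf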

PostgreSQL 9.5 doesn't start after reboot with systemd

I'm having a problem with PostgreSQL 9.5+173 on Ubuntu 16.04 and I happened to stumble across the following threads in my research that somewhat describes the behavior I'm seeing:
https://www.postgresql.org/message-id/CAFyxdeT%2B%3Dx-d0oNbFPoe%2B4xnt0Qdfi%2BzAEn%2BrQmEK0AZbJFRtg%40mail.gmail.com
https://www.postgresql.org/message-id/562E4453.5090803%40aklaver.com
Long story short, I have a fresh install of Ubuntu 16 with nothing on it but PostgreSQL running. I've stopped PostgreSQL, changed the data directory and port and a couple of other settings, and it starts back up fine.
I can start and stop PostgreSQL manually via systemctl without any problems. I can also connect to the database and can verify that it is running via a ps ax | grep postgres.
However, after I reboot PostgreSQL will not start up. Any attempt to start it via systemctl start postgresql.service doesn't do anything and does not fail. The only way I am able to get it started is if I call systemctl start postgresql@9.5-main.service.
I did some investigation and looked at both the postgresql.service and postgresql@9.5-main.service scripts and realized that the postgresql.service script does nothing, as stated in the thread above, and that postgresql@9.5-main.service has the PartOf directive, which means it should be getting triggered from postgresql.service as the systemd docs state, but it isn't for some reason. Basically I'm at a loss as to why everything works before I reboot and then doesn't work after I reboot. Is there something I'm missing? I'm starting to go CRAZY over something so simple.
Update: I added an ExecStartPre=/bin/touch /tmp/postgresq.log to the postgresql@9.5-main.service to see if it's actually getting called on boot and it is not. Manually calling systemctl start postgresql@9.5-main.service creates the file in the /tmp directory.
Update: I have also found that calling systemctl daemon-reload after reboot will allow me to start postgres via the systemctl start postgresql command.
Did you try doing systemctl enable postgresql? This will tell systemd to start this service after boot. Try rebooting after that.
Turns out that the problem was the fact that I symlinked /etc/postgresql/9.5/main/ across partitions to a custom partition that wasn't available right away, so when PostgreSQL tried to start on boot it couldn't, because its configuration files were not available. This explains what was happening, since I could start PostgreSQL manually after I logged in.
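If the configuration has to live on a separately mounted partition, one way to express that dependency to systemd (my assumption, not what the poster actually did) is a small drop-in so the service waits for the mount:
# sudo systemctl edit postgresql@9.5-main.service
[Unit]
# placeholder path: use the mount point that actually holds the config
RequiresMountsFor=/path/to/custom/partition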

Cassandra process killed on exit

When I run dsc cassandra on CoreOS (tarball) over a telnet session, everything comes up fine. But when I close the telnet session, it kills the process. How do I keep the Cassandra server running?
I tried sudo bin/cassandra and sudo bin/cassandra -f;
both didn't help.
I have no issues in other OS.
Option Description
-f Start the cassandra process in the foreground. The default is to start as a background process.
-h Help.
-p filename Log the process ID in the named file. Useful for stopping Cassandra by killing its PID.
-v Print the version and exit.
When you start Cassandra with -f it runs in the foreground, so it will stop as soon as the terminal is closed. The same is true for the background process here.
This will happen with any application you run in a telnet session.
You can try
sudo service cassandra start or nohup bin/cassandra; either will keep your application running even when the terminal is closed
You need to run Cassandra as a systemd service, as described here: https://coreos.com/os/docs/latest/getting-started-with-systemd.html
Running in the foreground with cassandra -f as your ExecStart= command will allow systemd to manage the state of the process (ideally inside a container).
While this is a bit different than what you're used to, it will lead to an overall more stable mechanism since you'll be using an init system that understands dependency chains, restart and reboot behavior, logging, etc.
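A minimal unit along those lines might look roughly like this (the install path and user are assumptions based on a tarball install; adjust them to your layout):
[Unit]
Description=Apache Cassandra
After=network.target

[Service]
User=cassandra
# run in the foreground so systemd can track the process
ExecStart=/opt/cassandra/bin/cassandra -f
Restart=on-failure

[Install]
WantedBy=multi-user.target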
Run the process in a screen or tmux session. Detaching from the screen session should allow the process to keep running.

How can I automatically start a node.js application in Amazon Linux AMI on aws?

Is there a brief guide explaining how to start an application when the instance boots up? If it were one of the services installed through yum, then I guess I could use /sbin/chkconfig to add it as a service. (Just to be sure, is that correct?)
However, I just want to run a program that was not installed through yum. To run the Node.js program, I would have to run sudo node app.js in my home directory whenever the system boots up.
I am not used to Amazon Linux AMI, so I am having a little trouble finding the 'right' way to run a script automatically on every boot.
Is there an elegant way to do this?
One way is to create an upstart job. That way your app will start once Linux loads, will restart automatically if it crashes, and you can start / stop / restart it by sudo start yourapp / sudo stop yourapp / sudo restart yourapp.
Here are beginning steps:
1) Install upstart utility (may be pre-installed if you use a standard Amazon Linux AMI):
sudo yum install upstart
For Ubuntu:
sudo apt-get install upstart
2) Create upstart script for your node app:
in /etc/init add file yourappname.conf with the following lines of code:
#!upstart
description "your app name"
start on started mountall
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
env NODE_ENV=development
# Warning: this runs node as root user, which is a security risk
# in many scenarios, but upstart-ing a process as a non-root user
# is outside the scope of this question
exec node /path_to_your_app/app.js >> /var/log/yourappname.log 2>&1
3) start your app by sudo start yourappname
You can use forever-service to provision a node script as a service that starts automatically at boot. The following commands will do the needful:
npm install -g forever-service
forever-service install test
This will provision app.js in the current directory as a service via forever. The service will automatically restart every time the system is restarted. Also, when stopped it will attempt a graceful stop. It provisions a logrotate script as well.
Github url: https://github.com/zapty/forever-service
As of now forever-service supports Amazon Linux, CentOS, and Red Hat; support for other Linux distros, Mac, and Windows is in the works.
NOTE: I am the author of forever-service.
A quick solution would be to start your app from /etc/rc.local; just add your command there.
But if you want to go the elegant way, you'll have to package your application in an RPM file,
have a startup script that goes in /etc/rc.d so that you can use chkconfig on your app, and then install the RPM on your instance.
Maybe this or this help. (or just google for "creating rpm packages")
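As a sketch of the quick rc.local route (the user, path, and log file are placeholders; /etc/rc.local must be executable and runs as root late in boot):
# /etc/rc.local, before the final 'exit 0'
su - ec2-user -c 'node /home/ec2-user/app.js >> /var/log/app.log 2>&1 &'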
My Amazon instance runs Ubuntu, and I used systemd to set it up.
First you need to create a <servicename>.service file. (in my case cloudyleela.service)
sudo nano /lib/systemd/system/cloudyleela.service
Type the following in this file:
[Unit]
Description=cloudy leela
Documentation=http://documentation.domain.com
After=network.target
[Service]
Type=simple
TimeoutSec=0
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/server.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
Here the node application is started; you will need the full path. I configured the service to simply restart if something goes wrong. The instances that Amazon uses have no passwords for their users by default.
Reload the unit files from disk, and then you can start your service. You need to enable it so that it automatically launches at startup.
ubuntu@ip-172-31-21-195:~$ sudo systemctl daemon-reload
ubuntu@ip-172-31-21-195:~$ sudo systemctl start cloudyleela
ubuntu@ip-172-31-21-195:~$ sudo systemctl enable cloudyleela
Created symlink /etc/systemd/system/multi-user.target.wants/cloudyleela.service → /lib/systemd/system/cloudyleela.service.
ubuntu@ip-172-31-21-195:~$
A great systemd for node.js tutorial is available here.
If you run a webserver:
You probably will have some issues running your webserver on port 80. The easiest solution is actually to run your webserver on a different port (e.g. 4200) and then redirect port 80 to that port. You can accomplish this with the following command:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
Unfortunately, this is not persistent, so you have to repeat it whenever your server restarts. A better approach is to also include this command in your service script:
ExecStartPre to add the port forwarding
ExecStopPost to remove the port forwarding
PermissionsStartOnly to do this with sudo power
So, something like this:
[Service]
...
PermissionsStartOnly=true
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
Don't forget to reload and restart your service:
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl daemon-reload
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl stop cloudyleela
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl start cloudyleela
[ec2-user@ip-172-31-39-212 system]$
For microservices (update on Dec 2020)
The previously mentioned solution gives a lot of flexibility, but it does take some time to set it up. And for each additional application, you need to go through this entire process again. By the time you'll be installing your 5th node application, you'll certainly start wondering: "there has to be a shortcut".
The advantage of PM2 is that it's just 1 service to install. Next it's PM2 which manages the actual applications.
Even the initial setup of PM2 is easy, because it automatically installs the pm2 service for you.
npm install pm2 -g
And adding new services is even easier:
pm2 start index.js --name "foo"
When everything's up and running, you can save your setup, to have it automatically start on reboot.
pm2 save
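Depending on the system, you may also need to register PM2's own startup script first, so that pm2 save has something to resurrect into at boot (this step is my addition; pm2 startup prints the exact command to run for your init system):
pm2 startup
# run the command it prints, then:
pm2 save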
If you want an overview of all your running node applications,
you can run pm2 list
And PM2 also offers an online (web-based) dashboard to monitor your applications remotely. You may need a license to access some of the dashboard functionality though (which is a bit over-priced imho).
You can create a script that can start and stop your app and place it in /etc/init.d; make the script adhere to chkconfig's conventions (below), and then use chkconfig to set it to start when other services are started.
You can pick an existing script from /etc/init.d to use as an example; this article describes the requirements, which are basically:
An executable script that identifies the shell needed (i.e., #!/bin/bash)
A comment of the form # chkconfig: <levels> <startprio> <stopprio>, where <levels> is often 345, <startprio> indicates where in the order of services to start, and <stopprio> indicates where in the order of services to stop. I generally pick a similar service that already exists and use it as a guide for these values (i.e., if you have a web-related service, start at the same levels as httpd, with similar start and stop priorities).
Once your script is set up, you can use
chkconfig --add yourscript
chkconfig yourscript on
and you should be good to go. (Some distros may require you to manually symlink the script into /etc/init.d/rc.d, but I believe your AWS distro will do that for you when you enable the script.)
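For illustration, the top of such an init script might look like this (the run levels, priorities, and description are arbitrary examples, not values required by chkconfig):
#!/bin/bash
# chkconfig: 345 85 15
# description: My Node.js application
# ... followed by the usual start/stop/restart case statement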
Use Elastic Beanstalk :) Provides support for auto-scaling, SSL termination, blue/green deployments, etc
If you want the salty sysadmin way for a Red Hat-based Linux distro (Amazon Linux is a flavor of Red Hat), learn systemd, as mentioned by @bvdb in the answer above:
https://en.wikipedia.org/wiki/Systemd
Set everything up as described on an EC2 instance, snapshot a custom AMI, and use this custom AMI as your base for EC2 instances hosting your apps. This way you don't have to go through all that setup multiple times. You'll probably want to get acquainted with load balancers too, if you are running in a production environment with uptime requirements.
Or, yes, as mentioned by @bvdb, you could also use pm2 to interface with systemd. Though I don't think pm2 helps with running your app across multiple EC2 instances, which is definitely recommended for production environments with uptime requirements.
All of which is a very steep learning curve. Since the OP seemed to be new to all this, Elastic Beanstalk, Google App Engine, and others are a great way to get code running in the cloud without all that.
These days I dev in TypeScript, deploying to serverless function execution in the cloud for most things, and don't have to think about package installs or app startup at all.
You can use screen. Run crontab -e and add this line:
@reboot screen -d -m bash -c "cd /home/user/yourapp/; node app"
Have been using forever on AWS and it does a good job. Install using
[sudo] npm install forever -g
To add an application use
forever start path_to_application
and to stop the application use
forever stop path_to_application
This is a useful article that helped me with setting it up.

How do I run a Node.js application as its own process?

What is the best way to deploy Node.js?
I have a Dreamhost VPS (that's what they call a VM), and I have been able to install Node.js and set up a proxy. This works great as long as I keep the SSH connection that I started node with open.
2016 answer: nearly every Linux distribution comes with systemd, which means forever, monit, PM2, etc. are no longer necessary - your OS already handles these tasks.
Make a myapp.service file (replacing 'myapp' with your app's name, obviously):
[Unit]
Description=My app
[Service]
ExecStart=/var/www/myapp/app.js
Restart=always
User=nobody
# Note Debian/Ubuntu uses 'nogroup', RHEL/Fedora uses 'nobody'
Group=nogroup
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/var/www/myapp
[Install]
WantedBy=multi-user.target
Note if you're new to Unix: /var/www/myapp/app.js should have #!/usr/bin/env node on the very first line and have the executable mode turned on (chmod +x app.js).
Copy your service file into the /etc/systemd/system folder.
Tell systemd about the new service with systemctl daemon-reload.
Start it with systemctl start myapp.
Enable it to run on boot with systemctl enable myapp.
See logs with journalctl -u myapp
This is taken from How we deploy node apps on Linux, 2018 edition, which also includes commands to generate an AWS/DigitalOcean/Azure CloudConfig to build Linux/node servers (including the .service file).
Use Forever. It runs Node.js programs in separate processes and restarts them if any dies.
Usage:
forever start example.js to start a process.
forever list to see list of all processes started by forever
forever stop example.js to stop the process, or forever stop 0 to stop the process with index 0 (as shown by forever list).
I've written about my deployment method here: Deploying node.js apps
In short:
Use a git post-receive hook (a minimal sketch follows this list)
Jake for the build tool
Upstart as a service wrapper for node
Monit to monitor and restart applications if they go down
nginx to route requests to different applications on the same server
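As a rough sketch of the post-receive hook piece (the paths and service name are placeholders; the linked write-up has the full version):
#!/bin/sh
# hooks/post-receive in the bare repository on the server
GIT_WORK_TREE=/var/www/myapp git checkout -f
cd /var/www/myapp && npm install --production
# restart the Upstart job; assumes the pushing user may run this via sudo
sudo restart myapp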
pm2 does the tricks.
Features are: Monitoring, hot code reload, built-in load balancer, automatic startup script, and resurrect/dump processes.
You can use monit, forever, upstart or systemd to start your server.
You can use Varnish or HAProxy instead of Nginx (Nginx is known not to work with websockets).
As a quick and dirty solution you can use nohup node your_app.js & to prevent your app terminating with your server, but forever, monit and other proposed solutions are better.
I made an Upstart script currently used for my apps:
description "YOUR APP NAME"
author "Capy - http://ecapy.com"
env LOG_FILE=/var/log/node/miapp.log
env APP_DIR=/var/node/miapp
env APP=app.js
env PID_NAME=miapp.pid
env USER=www-data
env GROUP=www-data
env POST_START_MESSAGE_TO_LOG="miapp HAS BEEN STARTED."
env NODE_BIN=/usr/local/bin/node
env PID_PATH=/var/opt/node/run
env SERVER_ENV="production"
######################################################
start on runlevel [2345]
stop on runlevel [016]
respawn
respawn limit 99 5
pre-start script
mkdir -p $PID_PATH
mkdir -p /var/log/node
end script
script
export NODE_ENV=$SERVER_ENV
exec start-stop-daemon --start --chuid $USER:$GROUP --make-pidfile --pidfile $PID_PATH/$PID_NAME --chdir $APP_DIR --exec $NODE_BIN -- $APP >> $LOG_FILE 2>&1
end script
post-start script
echo $POST_START_MESSAGE_TO_LOG >> $LOG_FILE
end script
Customize all before #########, create a file in /etc/init/your-service.conf and paste it there.
Then you can:
start your-service
stop your-service
restart your-service
status your-service
I've written a pretty comprehensive guide to deploying Node.js, with example files:
Tutorial: How to Deploy Node.js Applications, With Examples
It covers things like http-proxy, SSL and Socket.IO.
Here's a longer article on solving this problem with systemd: http://savanne.be/articles/deploying-node-js-with-systemd/
Some things to keep in mind:
Who will start your process monitoring? Forever is a great tool, but it needs a monitoring tool to keep itself running. That's a bit silly; why not just use your init system?
Can you adequately monitor your processes?
Are you running multiple backends? If so, do you have provisions in place to prevent any of them from bringing down the others in terms of resource usage?
Will the service be needed all the time? If not, consider socket activation (see the article).
All of these things are easily done with systemd.
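For instance, the "prevent one backend from starving the others" point can be handled with per-service resource directives. The values below are arbitrary examples of mine, not recommendations from the article (on older systemd versions MemoryLimit= plays the role of MemoryMax=):
[Service]
Restart=always
MemoryMax=512M
CPUQuota=50%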
If you have root access you would better set up a daemon so that it runs safe and sound in the background. You can read how to do just that for Debian and Ubuntu in blog post Run Node.js as a Service on Ubuntu.
Forever will do the trick.
@Kevin: You should be able to kill processes fine. I would double-check the documentation a bit. If you can reproduce the error it would be great to post it as an issue on GitHub.
Try this: http://www.technology-ebay.de/the-teams/mobile-de/blog/deploying-node-applications-with-capistrano-github-nginx-and-upstart.html
A great and detailed guide for deploying Node.js apps with Capistrano, Upstart and Nginx
As Box9 said, Forever is a good choice for production code. But it is also possible to keep a process going even if the SSH connection is closed from the client.
While not necessarily a good idea for production, this is very handy when in the middle of long debug sessions, or to follow the console output of lengthy processes, or whenever is useful to disconnect your SSH connection, but keep the terminal alive in the server to reconnect later (like starting the Node.js application at home and reconnecting to the console later at work to check how things are going).
Assuming that your server is a *nix box, you can use the screen command from the shell to keep the process running even if the client SSH session is closed. You can download/install screen from the web if it is not already installed (look for a package for your distribution if on Linux, or use MacPorts if on OS X).
It works as follows:
When you first open the SSH connection, type 'screen' - this will start your screen session.
Start working as normal (i.e. start your Node.js application)
When you are done, close your terminal. Your server process(es) will continue running.
To reconnect to your console, ssh back to the server, login, and enter 'screen -r' to reconnect. Your old console context will pop back ready for you to resume using it.
To exit screen, while connected to the server, type 'exit' on the console prompt - that will drop you onto the regular shell.
You can have multiple screen sessions running concurrently like this if you need, and you can connect to any of it from any client. Read the documentation online for all the options.
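Summarising those steps as commands (the detach key sequence is Ctrl-A then D, in case you'd rather detach than close the terminal):
screen                 # start a new session
node app.js            # start your application inside it
# detach with Ctrl-A D, or simply close the terminal
screen -r              # later: reattach to the running session
exit                   # from inside the session: end it and return to the shell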
Forever is a good option for keeping apps running (and it's npm installable as a module which is nice).
But for more serious 'deployment' -- things like remote management of deploying, restarting, running commands etc -- I would use capistrano with the node extension.
https://github.com/loopj/capistrano-node-deploy
https://paastor.com is a relatively new service that does the deploy for you, to a VPS or other server. There is a CLI to push code. Paastor has a free tier, at least it did at the time of posting this.
In your case you may use the upstart daemon. For a complete deployment solution, I may suggest capistrano. Two useful guides are How to setup Node.js env and How to deploy via capistrano + upstart.
Try node-deploy-server. It is a complex toolset for deploying an application onto your private servers. It is written in Node.js and uses npm for installation.

Resources