Get RabbitMQ to run automatically? - linux

I'm running RabbitMQ on an EC2 machine. To get RabbitMQ working I type:
sudo rabbitmq-server
This starts OK and everything works. My issue is that when I disconnect from the shell, Rabbit stops too.
How do I get rabbitmq-server to run automatically, without having to keep my SSH session open?
I'm running on a Ubuntu instance on Amazon EC2.

Start the process detached, as the docs state:
sudo rabbitmq-server -detached
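If you also want RabbitMQ to come back up after a reboot (not just survive the SSH session ending), enabling the packaged service should do it. The Debian package usually registers this for you already, so this is only needed if it isn't enabled; which command applies depends on your Ubuntu release:
sudo update-rc.d rabbitmq-server defaults     # sysvinit/upstart-based Ubuntu releases
sudo systemctl enable rabbitmq-server         # systemd-based Ubuntu releases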

Related

How to run Node.js and MongoDB interactive shell simultaneously within a Docker container

I have a Docker image configured with Node.js (with Express) and MongoDB.
I run the mongod service in the background: mongod --fork --logpath /var/lib/mongodb.log. I start my Node.js app with npm start, which results in an interactive shell (it shows the requests to the server).
But if I want to monitor the DB changes being made by my Node.js application, each time I am forced to stop the node server (Ctrl+C) and launch the MongoDB interactive shell using mongo.
So the next time I want to run my Node.js app, I have to stop the MongoDB interactive shell (Ctrl+C) and start the server all over again.
Is there any way to run both the Node.js interactive shell and the MongoDB interactive shell simultaneously, maybe in two different terminal windows in Docker?
The image below shows the snapshot of my terminal.
I am using Ubuntu 15.04 and Docker version 1.5.0, build a8a31ef
I would suggest not running these services in the same container. Run each one in a separate container and use docker-compose to manage building and running the containers.
docker-compose logs will show you the output of each service.
Managing the services in separate containers will let you modify each independently, and gives you an environment that is closer to a production setup.
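As a rough sketch of what that could look like (the service names, ports, and build context are assumptions about your project, not something from the question), a docker-compose.yml along these lines keeps the two services separate:
web:
  build: .
  command: npm start
  ports:
    - "3000:3000"
  links:
    - mongo
mongo:
  image: mongo
  ports:
    - "27017:27017"
Then docker-compose up brings up both services, and docker-compose logs shows their combined output without juggling foreground shells.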
I would recommend you try installing tmux. You can add the following to your Dockerfile to make tmux available in the container:
RUN apt-get update && apt-get install -y tmux
tmux will provide you with a screen that can represent multiple windows with multiple panes, handling the I/O for you.
Alternatively, you can use Ctrl+Z, fg, and bg to change the process you're viewing in the foreground. A final solution might be to run docker exec in two separate terminals.
Lastly, not exactly related to your question, you could expose mongod's port to your host and connect to it via your local mongo CLI client or a GUI client such as Robomongo.
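For example (the image name and host port are placeholders for your setup), publish the port when you start the container and connect from the host:
docker run -d -p 27017:27017 your-node-mongo-image
mongo --host 127.0.0.1 --port 27017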

Docker fails at first run after install. Error Post http://..... permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?

I'm following step one of this docker tutorial.
I have installed ubuntu version 14.04 on a virtual box vm.
I intentionally downgraded my docker version so that when I type "docker version" I get Client version: 1.5.0. This is because the server I intend to communicate with is on 1.5.0.
When trying the command "docker run hello-world" I get the response:
"Post http:///var/run/docker.sock/v1.17/containers/create: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?"
When running "sudo docker run hello-world" I get the response:
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
Can someone please explain to me what's happening and how I can fix it?
Thanks.
Edit: I tried to follow the solution for Linux here
However, I had tried to follow El Mesa's instructions in that post. When I got to running sudo docker -d, I got an Error running DeviceCreate (createPool) dm_task_run failed. I don't think I need to start anything up, since I was just following the tutorial, and the tutorial ran docker run hello-world immediately after installing docker.
Pay attention to the text that immediately precedes Are you trying to connect to a TLS-enabled daemon without TLS in the error message. In the question asked here it is permission denied, but it could also be no such file or directory (or possibly something else). The former is more likely to mean that the current user lacks permission to access docker, and the latter is more likely to mean that there is a problem with the docker service itself, including the possibility that it is not running at all.
So, depending on your situation, look for the answers on this and the linked question page that focus on the respective problem area.
In my case (CentOS Linux release 7.1.1503 (Core), docker-1.7.1-108.el7.centos.x86_64) it was permission denied. I had added the user to the docker group (sudo usermod -a -G docker user), but the docker command still didn't work when I ran it as that user, while it ran fine under sudo. What I forgot to do was log the user out and back in after adding it to the docker group, which is a step necessary for the group membership to take effect.
Restarting the machine will also solve this issue, but it is a more drastic step; it works because it implies the log-out/log-in step. I would recommend trying to log out and back in before restarting: if it works, it gives you more confidence that the group membership was the actual issue. If it doesn't, you can still restart, though if that fixes it, it probably did so by taking care of some other underlying issue.
And one more thing in case you come across it and find yourself in doubt: when you first install docker and wish to add a user to the docker group, you may notice (as I did in my case) that the "dockerroot" group exists but not the "docker" group. Do not add the user to the dockerroot group on the assumption that it is the one you need. Instead, create a new docker group and add the user to it.
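A minimal sketch of that sequence (groupadd will simply complain if the group already exists):
sudo groupadd docker               # create the docker group
sudo usermod -a -G docker $USER    # add your user to it
# log out and back in (or run: newgrp docker), then verify:
docker run hello-world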
It may be that your docker daemon is not running.
I have ubuntu/docker on a desktop with wireless LAN.
It acts a bit finicky compared to the wired computers, from which docker works OK, and it reproduced the error message you reported:
$ docker run -it ubuntu:latest /bin/bash
FATA[0000] Post http:///var/run/docker.sock/v1.17/containers/create: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
However, after running:
sudo service docker start
It behaves correctly (at least until the host is rebooted):
$ docker run -it ubuntu:latest /bin/bash
root@2cea4e5f5028:/#
If the system is not starting the docker daemon on boot, as was the case here, then the docker daemon can be automatically started on boot by editing /etc/rc.local to do so. Add the line below immediately before the exit line. This will fork a new bash shell, wait 30 sec for the network setup, etc., to settle, and start the docker daemon. sudo is unnecessary here because /etc/rc.local runs as root.
( sleep 30; /usr/sbin/service docker start ) &

Unable to restart rogue Jenkins on Ubuntu

I was configuring Jenkins last night to run some reporting plugins (codestyle, findbugs, cobertura). When I ran my build job it got hung up somewhere in codestyle, and the server UI became unresponsive.
Today I logged in to the server, and the Jenkins log is reporting errors that look like the server ran out of memory; but more than that, I cannot seem to stop or restart the server. I have limited experience with services in Linux.
Jenkins was installed on Ubuntu with apt. I have tried $ sudo /etc/init.d/jenkins restart but it reports
* Starting Jenkins Continuous Integration Server jenkins
The selected http port (8080) seems to be in use by another program
Please select another port to use for jenkins
When I try to run service jenkins status to get a PID to kill, I get
2 instances of jenkins are running at the moment
but the pidfile /var/run/jenkins/jenkins.pid is missing
Running netstat and ps has identified the port being held by a jenkins instance.
How can I recover from this?
Mostly I was concerned about abruptly killing the Jenkins server while it has gone rogue. A process this tied into server connections and plugins makes me wary of taking a shotgun to it.
That's exactly what I did. service jenkins status didn't work, so I got the process ID from netstat -tulpn. kill -15 didn't work, so I did kill -9, waited a respectful grieving period, then restarted the Jenkins service.
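For reference, that recovery sequence looks roughly like this (the PID comes from the netstat output and will differ on your machine):
sudo netstat -tulpn | grep 8080     # find the PID holding the port
sudo kill -15 <pid>                 # ask nicely first
sudo kill -9 <pid>                  # only if -15 has no effect
sudo service jenkins restart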
I will next be investigating the root problem of running out of memory in my Jenkins installation so hopefully this doesn't happen again while I am firewalled away from my server.
Where is your server hosted?
I had the same issue with AWS EC2 server.
Command lines did not work to reboot the server.
However, on AWS admin console, I did: EC2 -> restart and it works like a charm.
This may not be a solution but a workaround.
I was able to do
sudo ps aux | grep jenkins
To find a list of jenkins processes. Then I ran
sudo kill <pid>
And then finally
sudo service jenkins restart

How to get PGBouncer auto start on reboot on Linux?

On Ubuntu 12.04 (precise) in a Windows Azure VM I have postgres and pgbouncer running on the same machine. Everything is set up and works; however, when the VM is rebooted, pgbouncer doesn't automatically start up.
How do I make it so that it starts on reboot?
Does Postgres need to be running before PGBouncer? If so, how is this accomplished? I'm assuming PGBouncer would still run and any SQL connections just wouldn't connect if Postgres wasn't running; or is this assumption wrong?
The commands I run to get it started are below. Note: I need to be the 'postgres' user in order to start the service, otherwise it fails. Also, a detailed answer is preferred; Linux isn't my normal OS.
sudo su postgres
pgbouncer -d -v /etc/pgbouncer/pgbouncer.ini
If helpful, this is how pgbouncer was installed:
sudo apt-get install postgresql-9.3 pgbouncer
Note: I can interact with the pgbouncer service (force-reload, status, start, stop), but only after I first run the pgbouncer -d -v /etc/pgbouncer/pgbouncer.ini command.
Edit /etc/default/pgbouncer and set
START=1
Then start pgbouncer using the init script:
/etc/init.d/pgbouncer start
The init script will automatically start pgbouncer on boot. But you need to make that START=1 setting.
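Putting it together (the status call just confirms it came up; the Ubuntu package normally installs the boot symlinks for you, so no extra step should be needed):
sudo nano /etc/default/pgbouncer       # change START=0 to START=1
sudo /etc/init.d/pgbouncer start
sudo /etc/init.d/pgbouncer status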

How can I automatically start a node.js application in Amazon Linux AMI on aws?

Is there a brief guide that explains how to start up an application when the instance starts up and is running? If it were one of the services installed through yum, then I guess I could use /sbin/chkconfig to add it to the services. (Just to make sure, is that correct?)
However, I just want to run a program that was not installed through yum. To run the Node.js program, I would have to run sudo node app.js in the home directory whenever the system boots up.
I am not used to the Amazon Linux AMI, so I am having a little trouble finding the 'right' way to run a script automatically on every boot.
Is there an elegant way to do this?
One way is to create an upstart job. That way your app will start once Linux loads, will restart automatically if it crashes, and you can start / stop / restart it by sudo start yourapp / sudo stop yourapp / sudo restart yourapp.
Here are beginning steps:
1) Install upstart utility (may be pre-installed if you use a standard Amazon Linux AMI):
sudo yum install upstart
For Ubuntu:
sudo apt-get install upstart
2) Create upstart script for your node app:
in /etc/init add file yourappname.conf with the following lines of code:
#!upstart
description "your app name"
start on started mountall
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
env NODE_ENV=development
# Warning: this runs node as root user, which is a security risk
# in many scenarios, but upstart-ing a process as a non-root user
# is outside the scope of this question
exec node /path_to_your_app/app.js >> /var/log/yourappname.log 2>&1
3) Start your app with sudo start yourappname
You can use forever-service to provision a node script as a service that starts automatically on boot. The following commands will do the needful:
npm install -g forever-service
forever-service install test
This will provision app.js in the current directory as a service via forever. The service will automatically restart every time the system is restarted. Also, when stopped, it will attempt a graceful stop. This script provisions the logrotate script as well.
Github url: https://github.com/zapty/forever-service
As of now, forever-service supports Amazon Linux, CentOS, and Red Hat; support for other Linux distros, Mac, and Windows is in the works.
NOTE: I am the author of forever-service.
A quick solution would be to start your app from /etc/rc.local; just add your command there.
But if you want to go the elegant way, you'll have to package your application in an rpm file,
have a startup script that goes in /etc/rc.d so that you can use chkconfig on your app, then install the rpm on your instance.
Maybe this or this will help (or just google for "creating rpm packages").
My Amazon EC2 instance runs Ubuntu, and I used systemd to set it up.
First you need to create a <servicename>.service file. (in my case cloudyleela.service)
sudo nano /lib/systemd/system/cloudyleela.service
Type the following in this file:
[Unit]
Description=cloudy leela
Documentation=http://documentation.domain.com
After=network.target
[Service]
Type=simple
TimeoutSec=0
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/server.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
This unit starts the node application; you will need a full path here. I configured the application to simply restart if something goes wrong. The instances that Amazon uses have no passwords for their users by default.
Reload the unit files from disk, and then you can start your service. You need to enable it to make it launch automatically at startup.
ubuntu@ip-172-31-21-195:~$ sudo systemctl daemon-reload
ubuntu@ip-172-31-21-195:~$ sudo systemctl start cloudyleela
ubuntu@ip-172-31-21-195:~$ sudo systemctl enable cloudyleela
Created symlink /etc/systemd/system/multi-user.target.wants/cloudyleela.service → /lib/systemd/system/cloudyleela.service.
ubuntu@ip-172-31-21-195:~$
A great systemd for node.js tutorial is available here.
If you run a webserver:
You probably will have some issues running your webserver on port 80. The easiest solution is actually to run your webserver on a different port (e.g. 4200) and then redirect port 80 to that port. You can accomplish this with the following command:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
Unfortunately, this is not persistent, so you have to repeat it whenever your server restarts. A better approach is to also include this command in your service script:
ExecStartPre to add the port forwarding
ExecStopPost to remove the port forwarding
PermissionsStartOnly to do this with sudo power
So, something like this:
[Service]
...
PermissionsStartOnly=true
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
Don't forget to reload and restart your service:
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl daemon-reload
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl stop cloudyleela
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl start cloudyleela
[ec2-user@ip-172-31-39-212 system]$
For microservices (update on Dec 2020)
The previously mentioned solution gives a lot of flexibility, but it does take some time to set it up. And for each additional application, you need to go through this entire process again. By the time you'll be installing your 5th node application, you'll certainly start wondering: "there has to be a shortcut".
The advantage of PM2 is that it's just 1 service to install. Next it's PM2 which manages the actual applications.
Even the initial setup of PM2 is easy, because it automatically installs the pm2 service for you.
npm install pm2 -g
And adding new services is even easier:
pm2 start index.js --name "foo"
When everything's up and running, you can save your setup, to have it automatically start on reboot.
pm2 save
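Depending on the distribution, pm2 save alone may not survive a reboot until you have registered pm2's own boot hook once; pm2 startup prints the exact command to run for your init system:
pm2 startup
# copy and run the command it prints (it typically needs sudo), then run pm2 save again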
If you want an overview of all your running node applications,
you can run pm2 list
And PM2 also offers an online (web-based) dashboard to monitor your applications remotely. You may need a license to access some of the dashboard functionality though (which is a bit over-priced imho).
You can create a script that can start and stop your app and place it in /etc/init.d; make the script adhere to chkconfig's conventions (below), and then use chkconfig to set it to start when other services are started.
You can pick an existing script from /etc/init.d to use as an example; this article describes the requirements, which are basically:
An executable script that identifies the shell needed (e.g., #!/bin/bash)
A comment of the form # chkconfig: <levels> <startprio> <stopprio>, where <levels> is often 345, <startprio> indicates where in the order of services to start, and <stopprio> is where in the order of services to stop. I generally pick a similar service that already exists and use that as a guide for these values (e.g., if you have a web-related service, start at the same levels as httpd, with similar start and stop priorities).
Once your script is set up, you can use
chkconfig --add yourscript
chkconfig yourscript on
and you should be good to go. (Some distros may require you to manually symlink the script into /etc/init.d/rc.d, but I believe your AWS distro will do that for you when you enable the script.)
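A minimal sketch of such a script, assuming a node app at /home/ec2-user/app/app.js and a log at /var/log/yourapp.log (both placeholders; you may also need the full path to node at boot time):
#!/bin/bash
# chkconfig: 345 99 01
# description: yourapp node service

APP="/home/ec2-user/app/app.js"
LOG="/var/log/yourapp.log"
PIDFILE="/var/run/yourapp.pid"

case "$1" in
  start)
    node "$APP" >> "$LOG" 2>&1 &        # launch in the background
    echo $! > "$PIDFILE"                # remember the PID for stop
    ;;
  stop)
    kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac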
Use Elastic Beanstalk :) It provides support for auto-scaling, SSL termination, blue/green deployments, etc.
If you want the salty sysadmin way for a RedHat-based Linux distro (Amazon Linux is a flavor of RedHat), learn systemd, as mentioned by @bvdb in the answer above:
https://en.wikipedia.org/wiki/Systemd
Set everything up as described on an EC2 instance, snapshot a custom AMI, and use this custom AMI as your base for EC2 instances hosting your apps. This way you don't have to go through all that setup multiple times. You'll probably want to get acquainted with load balancers too, if you are running in a production environment with uptime requirements.
Or, yes, as mentioned by @bvdb, you could also use pm2 to interface with systemd. Though I don't think pm2 helps with running your app across multiple EC2 instances, which is definitely recommended for production environments with uptime requirements.
All of which is a very steep learning curve. Since the OP seemed to be new to all this, Elastic Beanstalk, Google App Engine, and others are a great way to get code running in the cloud without all that.
These days I dev in TypeScript, deploying to serverless function execution in the cloud for most things, and don't have to think about package installs or app startup at all.
You can use screen. Run crontab -e and add this line:
@reboot screen -d -m bash -c "cd /home/user/yourapp/; node app"
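After a reboot you can attach to that detached session to watch the app's output (assuming it is the only screen session running):
screen -r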
I have been using forever on AWS and it does a good job. Install it using
[sudo] npm install forever -g
To add an application use
forever start path_to_application
and to stop the application use
forever stop path_to_application
This is a useful article that helped me with setting it up.
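Note that forever alone does not restart the app after a reboot; one common approach (an assumption on my part, not from the linked article) is to pair it with a crontab @reboot entry like the screen answer above, adjusting both paths to your install (check the forever path with which forever):
@reboot /usr/local/bin/forever start /home/ubuntu/yourapp/app.js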
