Docker container and PM2 running in an EC2 instance - node.js

I have an EC2 instance that is running a Node application. I am thinking of moving to a container setup using Docker. PM2 is currently running two applications: the actual Node application (Express and Pug) and a cron job using Agenda. Is it a good idea to put my applications in one container?
I am not yet familiar with the pros and cons of this, and I read that Docker already acts as a process manager. How will PM2 fit into all of this once I implement it? Or should I just ditch Docker and run the applications on the native Linux of my EC2?

You have a couple of questions; I'll try to answer them below:
1. Is it a good idea to put my applications in one container?
It depends. There are many cases where you might want a single container doing multiple things, but it really comes down to the CPU and memory usage of the job, and how often it runs.
From experience I can say that if I run a cron job from the same container, I always use a worker approach, using either Node.js core's worker_threads or the cluster module, because you do not want a cron job to impact the behavior of the main thread. I have an example of running 2 applications on multiple threads in the following repo.
2. Should I just ditch Docker and run the applications on the native Linux of my EC2?
Docker and PM2 are two very different things. Docker containerizes your entire Node app so that it is much easier to ship. PM2 is a process manager for Node: it makes sure your app stays up and comes with a nice metrics and logs UI (PM2 metrics). You can definitely use the two together, as PM2 also makes sure your app will start up again after it crashes.
However, if you use PM2 inside Docker you have to use pm2-runtime. Example Dockerfile:
FROM node:16.9.0
WORKDIR /home/usr/app
COPY . .
RUN npm ci && npm run build
# default command is starting the server
CMD ["npx", "pm2-runtime", "npm", "--", "start"]

Related

Linux Amazon not running Docker daemon [duplicate]

I'm running Jenkins inside a Docker container. I wonder if it's OK for the Jenkins container to also be a Docker host? What I'm thinking about is starting a new Docker container for each integration-test build from inside Jenkins (to start databases, message brokers, etc.). The containers should thus be shut down after the integration tests are completed. Is there a reason to avoid running Docker containers from inside another Docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that creates problems for the containers created inside parent containers.
From that blog post, he describes the following alternative:
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
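As a rough sketch (image and container names here are just examples), once the socket is mounted and a Docker CLI is installed in the Jenkins image, a build step can spin up and tear down its test dependencies like this:
# started on the host's daemon, as a sibling of the Jenkins container
docker run -d --name it-postgres postgres:13
# ... run the integration tests against it ...
docker rm -f it-postgres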
I answered a similar question before on how to run a Docker container inside Docker.
To run docker inside docker is definitely possible. The main thing is that you run the outer container with extra privileges (starting it with --privileged=true) and then install docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach for solving this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was considered by many a good solution for this type of problem. Now the trend is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
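For reference, starting the official DinD image looks something like this (a sketch; exact flags can vary between Docker versions):
docker run -d --privileged --name dind docker:dind
# the daemon inside then runs its own, fully nested containers, e.g.:
docker exec dind docker run hello-world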
The alternative of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but has a few drawbacks that stem from launching the container from a context different from the one it runs in (i.e., you launch the container from within a container, yet it runs at the host level, not inside the parent container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker; we'll need to attach the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, as a volume to the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes permission issues may arise for the Docker daemon socket, for which you can run sudo chmod 757 /var/run/docker.sock.
It also requires running Docker in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I was trying my best to run containers within containers, just like you, for the past few days and wasted many hours. So far most people advised me to do things like using Docker's DinD image, which is not applicable in my case because I need the main container to be Ubuntu, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It is also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for such applications. It also has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it without bothering with all the tedious setup other people suggest. They have many pre-built solutions for such specific needs behind a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers, and they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without being able to access anything on the host machine. From your main container you can create all kinds of containers, just like a normal local system does. That built-in systemd is important for setting up Docker conveniently inside the container.
One simple, common command to run a container with sysbox:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more at their GitHub:
https://github.com/nestybox/sysbox
Quick link to the instructions on how to deploy a simple sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
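A minimal end-to-end sketch, assuming the sysbox runtime is already installed and using one of Nestybox's published images with Docker and systemd preinstalled (check their Docker Hub listing for current image names):
docker run --runtime=sysbox-runc -it nestybox/ubuntu-focal-systemd-docker
# from a shell inside that container, nested Docker works as on a normal host:
docker run hello-world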

Running multiple node.js applications with different arguments on an AWS stack

I have a node.js application which I would like to run many times each day, at a scheduled time, with different arguments.
Condition: use of the AWS stack.
Example: each day at 7 p.m., run the same app 10 times, using different arguments for each run.
Currently I am using a basic setup of one EC2 instance with pm2, running the node.js app with different arguments.
What I would like to achieve is a more serverless infrastructure, with separate computing power per run. I am using Puppeteer in my app, so sharing e.g. 10 processes on one machine is sometimes a problem.
I am looking for advice on which AWS service is best for my use case (Elastic Beanstalk, ECS, AWS Lambda) ...
If your application exits after a particular time, then serverless will work well in this situation.
I would suggest two options:
AWS Fargate
AWS Lambda
If we look at Fargate, all you need is a different task definition per application and a single Docker image shared by all of them; plus, with Fargate you can run the application for more than fifteen minutes, which is not possible with Lambda.
So all you need is to design the Dockerfile so that it accepts a CMD, letting you pass arguments to the Node.js process.
FROM node:alpine
RUN apk add --no-cache curl
WORKDIR /app
RUN curl -o app.js https://gist.githubusercontent.com/Adiii717/94543c1f87e6db86b55ba3a5a58a2bbc/raw/da0695811b50d70be4c36951e5baa40a051a2dcf/app.js
# the entrypoint script forwards the container's arguments to the node process
RUN printf '#!/bin/sh\nexec node app.js "$@"\n' > /bin/entrypoint.sh
RUN chmod +x /bin/entrypoint.sh
ENTRYPOINT ["/bin/entrypoint.sh"]
So to run N services with this Dockerfile, based on the argument:
docker run -it --rm abc app1
docker run -it --rm abc app2
docker run -it --rm abc app3
.
.
.
In the task definition you will just need to pass
"command": ["app1"]
"command": ["app2"]
Example: each day at 7 p.m., run the same app 10 times, using different arguments for each run.
Define a CloudWatch rule to trigger the desired Fargate task based on time.
scheduled_tasks
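A rough sketch of the scheduling part with the AWS CLI (the rule name and target file are hypothetical; the target JSON would point at your ECS cluster/task definition and carry the per-run "command" override):
aws events put-rule --name run-app1-7pm --schedule-expression "cron(0 19 * * ? *)"
aws events put-targets --rule run-app1-7pm --targets file://app1-target.json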
I want to save money for time when no process is running.
You will only pay while the process/task is running.
If you want to use an EC2 instance for this task, you can use cron:
sudo apt install cron
With it you can schedule the tasks. For more info you can visit
Here
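As a minimal sketch (paths and arguments are made up), the crontab would simply repeat the entry with a different argument for each of the 10 runs:
# crontab -e
0 19 * * * /usr/bin/node /home/ubuntu/app/index.js arg1
0 19 * * * /usr/bin/node /home/ubuntu/app/index.js arg2
# ... one line per argument set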
But if you wish to go serverless, then you can use Lambdas and CloudWatch, which will trigger the event at the time you set. For more info you can visit
Here
Lambdas are cheap too.

How to run a Docker container as a Windows service

I have a windows service that I want to run in a docker container on Azure.
I would like to have the same setup when running the service locally, so I would like to run the same docker container locally as a windows service (I think?).
How would I do that? Or is there a better approach?
Thanks,
Michael
IMHO Michael asked how to start Docker containers without the need to have a user logged in. The docker restart flag actually only deals with starting containers once Docker itself is running. To get Docker to run without a logged-in user (or after automatic Windows updates), it seems to me you will also need to make a Windows service that runs Docker.
A good explanation of this part of the problem can be found here (no good solution has been found yet without paying for it; the Docker team has so far ignored requests to make this work without a third party):
How to start Docker daemon (windows service) at startup without the need to log-in?
You can use the flag --restart=unless-stopped with the docker run command, and the Docker container will come back up automatically even if the server was shut down and restarted.
Further reading on the restart policy and flag: here
But conditions apply: Docker itself must start on boot, which is its default setting.
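For example (image and container names are placeholders), a typical invocation would be:
docker run -d --restart=unless-stopped --name mysvc myimage:latest
With that policy, Docker brings the container back up after a reboot as long as the Docker service itself starts at boot and the container was not manually stopped.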

Docker: Approaches for watching files and restarting container processes

Are there any workable approaches to watch/reload within docker?
The use case here is development, where switching between branches may change one or more of backend, frontend or database provisioning files.
Example: I have a node.js application. If server JS code changes, I want the backend server to restart. If package.json changes, I want the "install" container to run again (it runs npm install, saving node_modules to a shared volume). If SQL files change, I want the provisioning container to run its psql commands again.
Basically, I want to watch certain files and restart the process if they change (the container itself is not technically restarted). Supervisord isn't cut out for watching, but it seems like process managers such as PM2 or Forever would normally be the slam-dunk choice if it weren't for the Docker consideration.
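One pattern that often comes up for the Node.js part (a sketch, not from the original thread; server.js and node:16 are assumptions) is to bind-mount the source into the container and let a watcher such as nodemon restart the process while the container itself stays up:
# restarts server.js inside the container whenever a watched file on the host changes
docker run -it --rm -v "$PWD":/app -w /app node:16 npx nodemon server.js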

When node.js goes down, how can I bring it back up automatically?

Since node is basically a single process, when something goes terribly wrong, the whole application dies.
I now have a couple of apps built on express and I am using some manual methods to prevent extended downtimes ( process.on('uncaughtException') and a custom heartbeat monitor ).
Any suggestions from the community?
Best-practices? Frameworks?
Thanks!
A
Use something like forever
or use supervisor.
Just npm link and then sudo supervisor server.js.
These types of libraries also support hot reloading. There are some which you use from the command line and they run node services as sub processes for you. There are others which expect you to write your code to reload itself.
Ideally, you want to move towards a full-blown load balancer which is failure-safe. If a single node process in your load balancer crashes, you want it to be quietly restarted and all the connections and data rescued.
Personally I would recommend supervisor for development (It's written by isaacs!) and a full blown load balancer (either nginx or node) for your real production server.
Of course, you're already running multiple node server processes in parallel because you care about scaling across multiple cores, right? ;)
Use forever.
"A simple CLI tool for ensuring that a given script runs continuously (i.e. forever)"
Just install it with npm
npm install forever
and type
forever start app.js
Take a look at the forever module.
If you're using Ubuntu you can use upstart (which is installed by default).
$ cat /etc/init/my_app.conf
description "my_app"
author "me"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on shutdown
respawn
exec sh -c "env NODE_ENV=production node /path/myapp/app.js >> /var/log/node.log 2>&1"
"respawn" mean that the app will be restarted if it dies.
To start the app
start my_app
For other commands
man initctl
I'll strongly recommend forever too. I used it yesterday and it's a breeze:
Install: npm install forever
Start your app: forever start myapp.js
Check if it's working: forever list
Try killing your app:
ps
Get your myapp.js pid and run kill <pid>
Run forever list and you'll see it's running again.
You can try using Fugue, a library for node.js similar to Spark or Unicorn:
https://github.com/pgte/fugue
Fugue can manage any node.js server type, not just web servers, and it's set up and configured as a node.js script, not a CLI command, so normal node.js build & deployment toolchains can use it.
