I have a Node.js application that I would like to run many times each day at a scheduled time, with different arguments for each run.
Condition: Usage of AWS stack
Example: Each day run at 7 p.m. Run 10 times the same app using different arguments for each.
Currently I am using a basic setup: one EC2 instance with pm2 running the Node.js app with different arguments.
What I would like to achieve is a more serverless infrastructure, with separate computing power for each run. I am using Puppeteer in my app, so sharing e.g. 10 processes on one machine is sometimes a problem.
I am looking for advice on which AWS service is best for my use case (Elastic Beanstalk, ECS, AWS Lambda) ...
If your application exits after a certain time, then serverless will work well in this situation.
I suggest two options:
AWS Fargate
AWS Lambda
If we look into Fargate, all you need is a different task definition per job and a single Docker image for all of your applications. Plus, with Fargate you can run the application for more than fifteen minutes, which is not possible with Lambda.
So all you need is to design the Dockerfile so that it accepts a CMD, and you can pass arguments to the Node.js process.
FROM node:alpine
RUN apk add --no-cache curl
WORKDIR /app
RUN curl -o app.js https://gist.githubusercontent.com/Adiii717/94543c1f87e6db86b55ba3a5a58a2bbc/raw/da0695811b50d70be4c36951e5baa40a051a2dcf/app.js
# create a small wrapper script that forwards the CMD arguments to the node process
RUN printf '#!/bin/sh\nexec node app.js "$@"\n' > /bin/entrypoint.sh
RUN chmod +x /bin/entrypoint.sh
ENTRYPOINT ["/bin/entrypoint.sh"]
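The image only needs to be built once; assuming the Dockerfile above is in the current directory, tag it abc to match the run commands below:
docker build -t abc .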
So, to run N services with this Dockerfile based on the argument:
docker run -it --rm abc app1
docker run -it --rm abc app2
docker run -it --rm abc app3
.
.
.
In each task definition you will only need to pass:
"command": ["app1"]
"command": ["app2"]
Example: Each day run at 7 p.m. Run 10 times the same app using
different arguments for each.
Define a CloudWatch Events rule to trigger the desired Fargate task at the scheduled time.
scheduled_tasks
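A hedged CLI sketch of such a rule for the 7 p.m. example; the rule name, ARNs, and subnet ID are placeholders, and the cron expression is in UTC:
# Rule that fires every day at 19:00
aws events put-rule \
  --name run-app1-daily \
  --schedule-expression "cron(0 19 * * ? *)"

# Point the rule at the Fargate task definition
aws events put-targets --rule run-app1-daily --targets '[{
  "Id": "app1",
  "Arn": "arn:aws:ecs:<region>:<account>:cluster/<cluster-name>",
  "RoleArn": "arn:aws:iam::<account>:role/<events-invoke-ecs-role>",
  "EcsParameters": {
    "TaskDefinitionArn": "arn:aws:ecs:<region>:<account>:task-definition/app1",
    "LaunchType": "FARGATE",
    "NetworkConfiguration": {
      "awsvpcConfiguration": {
        "Subnets": ["subnet-xxxxxxxx"],
        "AssignPublicIp": "ENABLED"
      }
    }
  }
}]'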
I want to save money for time when no process is running.
You will only pay for the time the tasks are actually running.
If you want to use an EC2 instance for this task, you can use cron:
sudo apt install cron
With it you can schedule the tasks. For more info you can visit
Here
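A hedged crontab sketch for the 7 p.m. example; the node path, app path, and argument names are assumptions:
# crontab -e
0 19 * * * /usr/bin/node /home/ubuntu/app/app.js app1
0 19 * * * /usr/bin/node /home/ubuntu/app/app.js app2
# ... one line per argument, up to app10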
But if you wish to go serverless, you can use Lambda with a CloudWatch Events rule that triggers at the time you set. For more info you can visit
Here
Lambdas are cheap too.
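If you go with Lambda, a scheduled rule can pass a different argument to the same function as constant JSON input on each target; a rough sketch with placeholder names and ARNs:
# Rule that fires every day at 19:00 UTC
aws events put-rule --name run-app1-daily --schedule-expression "cron(0 19 * * ? *)"

# Target the function, passing its argument as constant input
aws events put-targets --rule run-app1-daily --targets '[{
  "Id": "app1",
  "Arn": "arn:aws:lambda:<region>:<account>:function:<my-function>",
  "Input": "{\"arg\": \"app1\"}"
}]'

# Allow CloudWatch Events to invoke the function
aws lambda add-permission \
  --function-name <my-function> \
  --statement-id run-app1-daily \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:<region>:<account>:rule/run-app1-daily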
Related
I have created a Docker image and pushed it to the AWS ECR repository.
I'm creating a task with 3 containers: one for Redis, one for PostgreSQL, and one for the image above, which is my Node project.
In the Dockerfile I have added a CMD to run the app with the node command; here is the Dockerfile content:
FROM node:16-alpine as build
WORKDIR /usr/token-manager/app
COPY package*.json .
RUN npm install
COPY . .
RUN npm run build
FROM node:16-alpine as production
ARG ENV_ARG=production
ENV NODE_ENV=${ENV_ARG}
WORKDIR /usr/token-manager/app
COPY package*.json .
RUN npm install --production
COPY --from=build /usr/token-manager/app/dist ./dist
CMD ["node", "./dist/index.js"]
This image works locally with docker-compose without any issue.
The issue is that when I run the task in the ECS cluster, the Node project doesn't run; it seems the CMD command is not being executed.
I tried to override that CMD command by adding a new command to the Task definition:
When I run the task with this command, there is nothing in the CloudWatch log and obviously the Node app is not running; here you can see that there is no log for api-container:
When I change the command to something else, for example "ls", it gets executed and I can see the result in the CloudWatch log:
Or when I change it to a wrong command, I get an error in the log:
But when I change it to the right command, which should run the app, nothing happens; it doesn't even show anything in the log as an error.
I have added inbound rules to allow the port needed to connect to the app, but it seems the app is not running at all!
What should I do? How can I check to see what is the issue?
UPDATE: I changed the app container configuration to make it Essential, which means the whole task will fail and stop if this container exits with any error. Then I started the task again and it stopped, so now I'm sure the app container is crashing and exiting somehow, but there is nothing in the log!
First: make sure your Docker image is deployed to ECR (you can use CodePipeline), because that is where ECS will look for the Docker image.
Second: specify your launch type; in the case of EC2, make sure you are using the latest Node image when adding the container.
Here you can find latest Docker Image for Node: https://hub.docker.com/_/node
Third: create the task definition and run the task; then navigate to the cluster, check whether the task is running, and check the task status.
Fourth: make sure you allow inbound traffic in the security group and open HTTP for 0.0.0.0/0.
You can test using curl, e.g. http://ec2-52-38-113-251.us-west-2.compute.amazonaws.com
If that fails, I would recommend deploying a simple Node app first, getting it running, and then deploying your project. Thank you.
I found the issue; I'll post it here, as it may help someone else.
If you go to the cluster details screen > Tasks tab > Stopped > Task ID, you can see a brief status message for each container in the Containers list:
It says the container was killed due to a memory issue. We can fix it by increasing the memory we specify for the containers when adding a new task definition.
This is the total amount of memory you want to give to the whole Task, which will be shared between all containers:
When you are adding a new container, there is a place for specifying the memory limits:
Hard Limit: if you specify a hard limit, your container will get killed when it attempts to exceed that limit of memory usage.
Soft Limit: if you specify a soft limit, ECS will reserve that memory for your container, but your container can request more memory, up to the hard limit.
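In the task definition JSON these correspond to memory (hard limit) and memoryReservation (soft limit) per container; a minimal sketch, with placeholder values and the container name taken from the question:
"containerDefinitions": [
  {
    "name": "api-container",
    "memory": 1024,
    "memoryReservation": 512
  }
]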
So the main point here is: when a container fails during startup, there may be no log at all in CloudWatch. If there is an issue but nothing shows up in the log, check possibilities like memory limits or anything else that could prevent the container from starting.
I have an EC2 instance that is running a Node application. I am thinking of moving to a container implementation using Docker. pm2 is running two applications: the actual Node application (Express and Pug) and a cron job using Agenda. Is it a good idea to put my applications in one container?
I am not yet familiar with the pros and cons of this, and I read that Docker is already a process manager. How will pm2 fit into all of this once I implement it? Or should I just ditch Docker and run the applications natively on Linux on my EC2?
You have a couple of questions, I try to answer them below:
1. Is it a good idea to put my applications in one container?
It depends; there are many cases where you would want the same container to do multiple things, but it really depends on the CPU/RAM usage of the job and how often it runs.
Anyway, from experience I can say that if I run a cron job from the same container, I always use a worker approach, using either Node.js core worker_threads or the cluster module, because you do not want a cron job to impact the behavior of the main thread. I have an example of running 2 applications on multiple threads in the following repo.
2. should I just ditch docker and run the applications in the native linux of my ec2
Docker and PM2 are two really different things. Docker is basically for containerizing your entire Node app so it is much easier to ship. PM2 is a process manager for Node that makes sure your app stays up, and it comes with some nice metrics and logs UI on PM2 metrics. You can definitely use the two together, as PM2 also makes sure your app restarts after it crashes.
However, if you use pm2 inside Docker, you have to use pm2-runtime. Example Dockerfile:
FROM node:16.9.0
WORKDIR /home/usr/app
COPY . .
RUN npm ci && npm run build
# default command is starting the server
CMD ["npx", "pm2-runtime", "npm", "--", "start"]
I am trying to run multiple JS files from a bash script, like this, but it doesn't work. The container comes up but doesn't run the script. However, when I SSH into the container and run the script manually, it runs fine and the node service comes up. Can anyone tell me what I am doing wrong?
Dockerfile
FROM node:8.16
MAINTAINER Vivek
WORKDIR /a
ADD . /a
RUN cd /a && npm install
CMD ["./node.sh"]
Script is as below
node.sh
#!/bin/bash
set -e
node /a/b/c/d.js &
node /a/b/c/e.js &
As @hmm mentions, your script might be running, but your container is not waiting for your two sub-processes to finish.
You could change your node.sh to:
#!/bin/bash
set -e
node /a/b/c/d.js &
pid1=$!
node /a/b/c/e.js &
pid2=$!
wait $pid1
wait $pid2
Checkout https://stackoverflow.com/a/356154/1086545 for a more general solution of waiting for sub-processes to finish.
As @DavidMaze mentions, a container should generally run one "service". It is of course up to you to decide what constitutes a service in your system. As described officially by Docker:
It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application.
See https://docs.docker.com/config/containers/multi-service_container/ for more details.
Typically you should run only a single process in a container. However, you can run any number of containers from a single image, and it's easy to set the command a container will run when you start it up.
Set the image's CMD to whatever you think the most common path will be:
CMD ["node", "b/c/d.js"]
If you're using Docker Compose for this, you can specify build: . for both containers, but in the second container specify an alternate command:
version: '3'
services:
node-d:
build: .
node-e:
build: .
command: node b/c/e.js
Using bare docker run, you can specify an alternate command after the image name:
docker build -t me/node-app .
docker run -d --name node-d me/node-app
docker run -d --name node-e me/node-app \
node b/c/e.js
This lets you do things like independently set restart policies for each container; if you run this in a clustered environment like Docker Swarm or Kubernetes, you can independently scale the two containers/pods/processes as well.
I have set up AWS Glue to output Spark event logs so that they can be imported into Spark History Server. AWS provide a CloudFormation stack for this, I just want to run the history server locally and import the event logs. I want to use Docker for this so colleagues can easily run the same thing.
I'm running into problems because the history server is a daemon process, so the container starts and immediately shuts down.
How can I keep the Docker image alive?
My Dockerfile is as follows
ARG SPARK_IMAGE=gcr.io/spark-operator/spark:v2.4.4
FROM ${SPARK_IMAGE}
RUN apk --update add coreutils
RUN mkdir /tmp/spark-events
ENTRYPOINT ["/opt/spark/sbin/start-history-server.sh"]
I start it using:
docker run -v ${PWD}/events:/tmp/spark-events -p 18080:18080 sparkhistoryserver
You need the SPARK_NO_DAEMONIZE environment variable (see here); it will keep the container alive.
Just modify your Dockerfile as follows:
ARG SPARK_IMAGE=gcr.io/spark-operator/spark:v2.4.4
FROM ${SPARK_IMAGE}
RUN apk --update add coreutils
RUN mkdir /tmp/spark-events
ENV SPARK_NO_DAEMONIZE TRUE
ENTRYPOINT ["/opt/spark/sbin/start-history-server.sh"]
See here for a repo with more detailed readme.
I've been looking at various methods to run commands upon creation of EC2 instances using Elastic Beanstalk on AWS. I've been given different methods to do this through AWS Tech Support, including lifecycle hooks, custom AMIs, and .ebextensions. I've been having issues getting the first two methods (lifecycle hooks and custom AMIs) to work with EB.
I'm currently using .ebextensions to run commands upon deploy, but I'm not sure if there's a way to run a set of commands only upon instance creation instead of every time I deploy code. For instance, I have a file .ebextensions/03-commands.config that contains the following code:
container_commands:
  01_npm_install:
    command: "npm install -g -f npm@latest"
However, I only want this code to run upon instance creation, not every time I deploy, as it currently does. Does anyone know a way to accomplish this?
Thanks in advance!
I would recommend creating an idempotent script in your application that leaves a marker file on the instance in some location, say /var/myapp-state/marker, using something like mkdir -p /var/myapp-state/ && touch /var/myapp-state/marker on successful execution. That way, your script can check whether the marker file exists and make itself a no-op if it does.
Then you can call your script from container_commands: the first successful execution will create the marker file, and every subsequent execution will be a no-op.
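A rough sketch of what this could look like; the script path scripts/install-npm-once.sh and the marker location are assumptions, not part of the original setup:
# .ebextensions/03-commands.config
container_commands:
  01_npm_install:
    command: "bash scripts/install-npm-once.sh"

# scripts/install-npm-once.sh (shipped inside the application bundle)
#!/bin/bash
set -e
MARKER=/var/myapp-state/marker
# marker exists: this instance already ran the install, so do nothing
[ -f "$MARKER" ] && exit 0
npm install -g -f npm@latest
mkdir -p /var/myapp-state && touch "$MARKER"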
Create a custom AMI. This way you can set up your instances however you want, and they will launch faster.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.customenv.html
As I see from your question, you're using container_commands, which means you are using Elastic Beanstalk with Docker, right? In that case I would recommend reading: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html.
The idea is that you can create your own Dockerfile, where you specify all the commands you need to build a Docker container, for example to install all dependencies.
I would recommend using .ebextensions for Elastic Beanstalk customization and configuration, for example to specify the ELB or RDS configuration. In the Dockerfile it makes sense to specify all the commands needed to build a container for your application, including the web server setup, dependencies, etc.
With this approach, each time you deploy, Elastic Beanstalk builds the container and runs a Docker instance with the deployed source code.
There is a simple option, leader_only: true, that you can use in current AWS Elastic Beanstalk configurations. You simply need to add it under the container command:
container_commands:
  01_npm_install:
    command: "npm install -g -f npm@latest"
    leader_only: true
Here is the link to the relevant AWS documentation:
AWS Elasticbeanstalk container command option