Issue with running Node application in AWS ECS container - node.js

I have created a Docker image and pushed it to an AWS ECR repository.
I'm creating a task with three containers: one for Redis, one for PostgreSQL, and one for the image above, which is my Node project.
In the Dockerfile I added a CMD to run the app with the node command; here is the Dockerfile content:
# Build stage: install all dependencies and compile the project
FROM node:16-alpine as build
WORKDIR /usr/token-manager/app
COPY package*.json .
RUN npm install
COPY . .
RUN npm run build

# Production stage: install only runtime dependencies and copy the built output
FROM node:16-alpine as production
ARG ENV_ARG=production
ENV NODE_ENV=${ENV_ARG}
WORKDIR /usr/token-manager/app
COPY package*.json .
RUN npm install --production
COPY --from=build /usr/token-manager/app/dist ./dist
CMD ["node", "./dist/index.js"]
This image works locally in docker-compose without any issue.
The issue is that when I run the task in the ECS cluster, it doesn't run the Node project; it seems the CMD command is never executed.
I tried to override that CMD by adding a new command to the task definition.
When I run the task with this command, there is nothing in the CloudWatch log, and obviously the Node app is not running; there is no log stream at all for the api-container.
When I change the command to something else, for example ls, it gets executed and I can see the result in the CloudWatch log.
When I change it to a wrong command, I get an error in the log.
But when I change it back to the right command, which should run the app, nothing happens; it doesn't even show anything in the log as an error.
I have added inbound rules to allow the port needed for connecting to the app, but it seems it's not running at all.
What should I do? How can I check to see what the issue is?
UPDATE: I changed the app container configuration to make it Essential, which means the whole task will fail and stop if this container exits with any error. Then I started the task again and it stopped, so now I'm sure the app container is crashing and exiting somehow, but there is still nothing in the log.
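For reference, one way to see why a stopped task's container exited is the AWS CLI; the cluster name and task ID below are placeholders, not values from this post:
# List recently stopped tasks in the cluster (placeholder cluster name)
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED
# Show the stopped reason and each container's exit code/reason for one task
aws ecs describe-tasks --cluster my-cluster --tasks <task-id> \
  --query 'tasks[].{stopped:stoppedReason,containers:containers[].{name:name,exitCode:exitCode,reason:reason}}'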

First: make sure your Docker image is deployed to ECR (you can use CodePipeline), because that is where ECS will look for the Docker image.
Second: specify your launch type; in the case of EC2, make sure you are using the latest Node image when adding the container.
Here you can find the latest Docker image for Node: https://hub.docker.com/_/node
Third: create the task definition and run the task, then navigate to the cluster and check whether the task is running and what its status is.
Fourth: make sure you allow all inbound traffic in the security group and open HTTP for 0.0.0.0/0 (see the CLI sketch at the end of this answer).
You can test using curl, e.g. http://ec2-52-38-113-251.us-west-2.compute.amazonaws.com
If that fails, I would recommend deploying a simple Node app first, getting that running, and then deploying your project. Thank you
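As a rough sketch of the fourth step, the same inbound rule can be added from the AWS CLI; the security group ID and port below are placeholders:
# Allow inbound TCP on the app's port (placeholder: 80) from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0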

I found the issue; I'll post it here, as it may help someone else.
If you go to the cluster details screen > Tasks tab > Stopped > Task ID, you can see a brief status message for each container in the Containers list.
In my case it said the container was killed due to a memory issue. We can fix it by increasing the memory we specify for containers when adding a new task definition.
There is a task-level setting for the total amount of memory you want to give to the whole task, which is shared between all containers.
When you are adding a new container, there is also a place for specifying the container's own memory limit:
Hard limit: if you specify a hard limit, your container is killed when it attempts to exceed that amount of memory.
Soft limit: if you specify a soft limit, ECS reserves that memory for your container, but the container can request more, up to the hard limit.
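Outside ECS, plain Docker has a roughly equivalent pair of flags, which can be handy for reproducing the limit behaviour locally; the image name and sizes are only examples:
# --memory is the hard limit (the container is killed above it);
# --memory-reservation is the soft limit (reserved, but usage may grow up to the hard limit)
docker run --memory 512m --memory-reservation 256m my-node-image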
So the main point here is: when a container fails during startup like this, there may be no log at all in CloudWatch. If a container is failing but the log shows nothing, check possibilities such as memory limits or anything else that prevents the container from starting.

Related

After installing docker I am unable to run commands that I used to be able to run

Two examples include snap and certbot. I used to type sudo certbot and would be able to add SSL certs to my nginx servers. Now I get the output below every time I enter certbot, and the same thing goes for snap. I'm new to Docker and don't understand what is happening. Can somebody explain what is going on?
Usage: docker compose [OPTIONS] COMMAND
Docker Compose
Options:
--ansi string Control when to print ANSI control characters ("never"|"always"|"auto") (default "auto")
--compatibility Run compose in backward compatibility mode
--env-file string Specify an alternate environment file.
-f, --file stringArray Compose configuration files
--profile stringArray Specify a profile to enable
--project-directory string Specify an alternate working directory
(default: the path of the, first specified, Compose file)
-p, --project-name string Project name
Commands:
build Build or rebuild services
convert Converts the compose file to platform's canonical format
cp Copy files/folders between a service container and the local filesystem
create Creates containers for a service.
down Stop and remove containers, networks
events Receive real time events from containers.
exec Execute a command in a running container.
images List images used by the created containers
kill Force stop service containers.
logs View output from containers
ls List running compose projects
pause Pause services
port Print the public port for a port binding.
ps List containers
pull Pull service images
push Push service images
restart Restart service containers
rm Removes stopped service containers
run Run a one-off command on a service.
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker Compose version information
Run 'docker compose COMMAND --help' for more information on a command.
NEVER INSTALL DOCKER WITH SNAP
I solved the problem. I'm not sure where everything went wrong, but I completely destroyed snapd on my system following this: https://askubuntu.com/questions/1280707/how-to-uninstall-snap. Then I installed snap again and everything works.
INSTALL DOCKER WITH THE OFFICIAL GUIDE (APT)
Go here to install docker the correct way. https://docs.docker.com/engine/install/ubuntu/
If you are new to Docker, follow this advice and NEVER TYPE snap install docker into your terminal. Follow these words of wisdom, or use the first half if you have already messed up.
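For completeness, the apt-based installation from the linked guide looked roughly like this at the time of writing; check the official page for the current commands:
# Remove the snap package first if it is installed
sudo snap remove docker
# Install Docker Engine from Docker's apt repository (condensed from the official guide)
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin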

How to build a Node-based dockerized Jenkins slave that can run a Node-based project through the Jenkins master

I have successfully enabled the Docker API, which I am able to connect to from Jenkins. Now I'm trying to create a Docker slave agent that can be created dynamically; I want 100 active Docker slave agents that can immediately pick jobs from the queue and execute them.
I'm trying to create a Node-based Docker image that can act as the slave agent, and the image file looks like below:
Image Context:
FROM node:15.12.0-alpine3.10
RUN mkdir -p /home/achu/nodeSlave
CMD ["node", "npm --version"]
Output:
[ArrchanaMohan#devops-monitoring-achu ~]$ sudo docker build -t docker-slave-nodes:1.0 .
Sending build context to Docker daemon 7.368GB
The GB count keeps increasing and I never see an image build success message. I'm very new to the Docker world, and I'm not sure whether I'm doing the right thing.
Can someone please help me resolve this issue?
Thanks in advance.
You need to use a slave image and install Node on it.
Here is one of them:
FROM openshift/jenkins-slave-base-centos7:v3.11
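A minimal sketch of what "install Node on it" could look like, assuming the base image runs as a non-root user and that the NodeSource RPM repository is acceptable; the Node version and user ID are assumptions, not part of the original answer:
FROM openshift/jenkins-slave-base-centos7:v3.11
# Switch to root to install packages (assumption: the base image defaults to a non-root user)
USER root
# Install Node.js from the NodeSource repository (version is only an example)
RUN curl -fsSL https://rpm.nodesource.com/setup_14.x | bash - \
    && yum install -y nodejs \
    && yum clean all
# Drop back to the unprivileged user the base image normally runs as (assumption: UID 1001)
USER 1001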

What is the best way to pull updated changes into Docker containers that are already deployed?

I had to perform these steps to deploy my Node.js/Angular site to AWS via DockerCloud:
Write Dockerfile
Build Docker images based on my Dockerfiles
Push those images to Docker Hub
Create Node Cluster on DockerCloud Account
Write Docker stack file on DockerCloud
Run the stack on DockerCloud
See the instance running in AWS, and can see my site
Now suppose we need a small change that requires a pull from my project repo.
BUT we have already deployed our containers, as you may know.
What is the best way to pull those changes into the Docker containers that are already deployed?
I hope we don't have to:
Rebuild our Docker Images
Re-push those images to Docker Hub
Re-create our Node Cluster on DockerCloud
Re-write our docker stack file on DockerCloud
Re-run the stack on DockerCloud
I was thinking
SSH into a VM that has the Docker running
git pull
npm start
Am I on the right track?
You can use docker service update --image; see https://docs.docker.com/engine/reference/commandline/service_update/#options
I have no experience with AWS, but I think you can build and update automatically.
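For example, assuming the site runs as a service and a rebuilt image has been pushed under a new tag (names below are placeholders), the running containers can be rolled over to it:
# Point the running service at the newly pushed image
docker service update --image myrepo/mysite:v2 mysite_service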
If you want to treat a Docker container as a VM, you totally can, however, I would strongly caution against this. Anything in a container is ephemeral...if you make changes to files in it and the container goes down, it will not come back up with the changes.
That said, if you have access to the server you can exec into the container and execute whatever commands you want. Usually helpful for dev, but applicable to any container.
This command will start an interactive bash session inside your desired container. See the docs for more info.
docker exec -it <container_name> bash
Best practice would probably be to update the docker image and redeploy it.

Live reload Node.js dev environment with Docker

I'm trying to work on a dev environment with Node.js and Docker.
I want to be able to:
run my docker container when I boot my computer once and for all;
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
I've tried the Node image and, if I understand correctly, it is not what I'm looking for.
I know how to make the mount point, but I'm missing how the server is supposed to detect the changes and "relaunch" itself.
I'm new to Node.js so if there is a better way to do things, feel free to share.
run my docker container when I boot my computer once and for all;
start containers automatically with the docker daemon or with your process manager
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
You need to mount your dev app folder as a volume
$ docker run --name myapp -v /app/src:/app image/app
and in your Node.js Dockerfile set the CMD to:
CMD ["nodemon", "-L", "/app"]

Creating an environment variable within a Docker container when starting up

How would I get the IP address of a mongo container and set it as an environment variable when creating a node image?
I've been running into an issue with a conflicting tech stack: keystone.js, forever, and Docker. My problem is that I need to set up an environment variable pointing at a separate mongo container, which would seem easy to do by running a shell script when I start up the container that includes:
export MONGO_URI="mongodb://${MONGODB_PORT_27017_TCP_ADDR}:27017/(db_name)"
The issue comes with starting the keystone app. Normally I would place it in the same script and call it with docker run, but for this project we need to use forever, so the command would be forever keystone.js. The problem is that the Docker container exits immediately. If I start the app with a plain forever start rather than going through the script, the app starts up fine, but the env variable it needs is not set. It's currently hard-coded in the Docker image, but of course that is not a good solution: the IP of the mongodb may change, and then on a restart the node container would not be able to find the db. I see a couple of possibilities:
Switch to just using node keystone.js. I would lose the functionality of forever start (which restarts the app if there is a critical failure). I tested this and it works, but maybe someone knows a way to make forever work, or a viable alternative?
Find a way to set the above export from the Dockerfile when creating the image. I haven't been able to get this to work, but I do know the name the mongodb container is going to use no matter what, if that helps.
Any help is most appreciated.
The best way is to use docker link; this provides you with a hostname plus the environment variables.
docker run ... --link mongodb:mongodb ..
You can also set the variable directly with the -e command-line option of docker run:
docker run -e MONGO_URI="mongodb://${MONGODB_PORT_27017_TCP_ADDR}:27017/(db_name)"
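Because --link also gives the container a hostname entry for the linked container, the two can be combined so the URI does not depend on the Mongo container's IP at all; the image and database names below are placeholders:
# "mongodb" resolves inside the container via the link, so no IP address is needed
docker run -d --link mongodb:mongodb -e MONGO_URI="mongodb://mongodb:27017/db_name" my-keystone-image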
An option for dynamic DNS would be SkyDNS + SkyDock.
