How to run Docker Compose on AWS EB? - node.js

I've got a problem with AWS Elastic Beanstalk. I created an application on EB with Docker. My 'local' application is a simple NestJS/TypeScript server whose docker-compose.yml defines two services, like below:
version: '3.7'
networks:
  proxy:
    name: proxy
services:
  redis:
    image: redis:6.2-alpine
    ports:
      - 6379:6379
    command: ["redis-server", "--requirepass", "pass"]
    networks:
      - proxy
  worker:
    container_name: worker
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - redis
    ports:
      - 8080:8080
    expose:
      - '8080'
    env_file:
      - .env
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run dev
    networks:
      - proxy
and the Dockerfile for the worker container:
FROM node:16.3.0-alpine as builder
WORKDIR /dist
COPY package*.json ./
RUN npm install --force
COPY . .
RUN npm run build
EXPOSE 8080
CMD [ "npm", "run", "dev"]
From the project root I run the AWS and EB CLI commands, in order: eb init, eb create, eb deploy. Thanks to this, Beanstalk creates the EC2 instance, load balancer, S3 bucket, etc. for me, and everything seems to go well. In the Beanstalk configuration I add the environment variables and rebuild the environment, because after eb create the app can't run without them (for example the credentials for the database on RDS). And now I have a problem: after doing all of the above and opening the environment URL in the browser, I see an error page instead of my app (screenshot omitted).
Can someone tell me what I'm doing wrong? While searching for a fix I set inbound rules like below (screenshot omitted), but it didn't help. In the environment variables I also set PORT to 5000, and it still doesn't work.
So, can someone with experience tell me what I'm doing wrong? I've been stuck here for the last 2 days and I have no idea what else to try... Thanks for any help!
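For what it's worth, the environment properties can also be set from the EB CLI instead of the console; a minimal sketch (the variable names below are placeholders, not values from the question):
# set environment properties on the running environment (triggers an update)
eb setenv PORT=8080 REDIS_HOST=example-host REDIS_PASSWORD=example-pass
# then check health and pull instance logs to see why the app is not responding
eb status
eb logs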

Related

Docker and NodeJS: could not connect to the container

I'm trying to dockerize a simple NodeJS API. I've tested it standalone and it works, but after dockerizing it I can't connect to the container. Two important facts (screenshot omitted): the container is permanently restarting, and I could not connect to it.
After I try to establish a connection with a GET request, the container begins to restart, and about a minute later it comes up again for a few seconds.
This is my Dockerfile:
FROM node:lts-buster-slim
# Create app directory
WORKDIR /opt/myapps/noderest01
COPY package.json /opt/myapps/noderest01/package.json
COPY package-lock.json /opt/myapps/noderest01/package-lock.json
RUN npm ci
COPY . /opt/myapps/noderest01
EXPOSE 3005
CMD [ "npm", "run", "dev" ]
And this is my yaml file:
services:
  rest01:
    container_name: rest01
    ports:
      - "3005:3005"
    restart: always
    build: .
    volumes:
      - rest01:/opt/myapps/noderest01
      - rest01nmodules:/opt/myapps/noderest01/node_modules
    networks:
      - node-rest01
volumes:
  rest01:
  rest01nmodules:
networks:
  node-rest01:
I used this command to create the image: docker-compose -f docker-compose.yaml up -d
Surely I need to update my yaml or Dockerfile to fix this. I've been searching for a while, but I can't find the origin of the problem, so I want to ask for your advice on how to fix my Docker files and connect to the container. If you have any suggestions, please let me know.
Best.
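A restart loop like this usually means the process inside the container is crashing on startup, so a first step is to read its logs before changing any configuration; a sketch, assuming the container and service names above:
# follow the logs of the restarting container to see the crash output
docker logs -f rest01
# or the same through compose, by service name
docker-compose logs -f rest01
# check how the last run ended
docker inspect rest01 --format '{{.State.ExitCode}}'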

docker-compose network error: can't connect to other host

I'm new to Docker, and I'm having issues connecting to my managed database cluster, a cloud service separate from the Docker machine and network.
Recently I attempted to use docker-compose, because manually writing a docker run command on every update is a hassle, so I configured the yml file.
Whenever I use docker-compose, I have issues connecting to the database, with this error:
Unhandled error event: Error: connect ENOENT %22rediss://default:password@test.ondigitalocean.com:25061%22
But if I run it with the actual docker run command, with the ENVs in the Dockerfile, then everything works fine:
docker run -d -p 4000:4000 --restart always test
But I don't want to expose all the confidential data to the code repository by putting all those details in the Dockerfile.
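As an aside, with the plain docker run workflow the same values can be kept out of the repository in an untracked env file; a sketch reusing the command above:
# values come from an untracked .env file instead of the Dockerfile
docker run -d -p 4000:4000 --env-file .env --restart always test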
Here is my dockerfile and docker-compose
dockerfile
FROM node:14.3.0
WORKDIR /kpb
COPY package.json /kpb
RUN npm install
COPY . /kpb
CMD ["npm", "start"]
docker-compose
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION="${PRODUCTION}"
      - DB_SSL="${DB_SSL}"
      - DB_CERT="${DB_CERT}"
      - DB_URL="${DB_URL}"
      - REDIS_URL="${REDIS_URL}"
      - SESSION_KEY="${SESSION_KEY}"
      - AWS_BUCKET_REGION="${AWS_BUCKET_REGION}"
      - AWS_BUCKET="${AWS_BUCKET}"
      - AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}"
      - AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}"
You should not include the " characters around the values of your environment variables in your docker-compose file. They become part of the value itself, which is why the quotes show up percent-encoded as %22 in the connection error above.
This should work:
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION=${PRODUCTION}
      - DB_SSL=${DB_SSL}
      - DB_CERT=${DB_CERT}
      - DB_URL=${DB_URL}
      - REDIS_URL=${REDIS_URL}
      - SESSION_KEY=${SESSION_KEY}
      - AWS_BUCKET_REGION=${AWS_BUCKET_REGION}
      - AWS_BUCKET=${AWS_BUCKET}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
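One way to confirm the fix, sketched against the service name app from the compose file above: render the resolved config, then print the value the container actually sees.
# show the compose file with variables substituted
docker-compose config
# after starting, print the value inside the container
docker-compose up -d
docker-compose exec app printenv REDIS_URL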

Connection Refused Error on React using env Variables from Docker

I'm trying to define some env variables in my Docker files, in order to use them in my React application.
This is the Dockerfile on the Node server side:
FROM node:lts-slim
RUN mkdir -p /app
WORKDIR /app
# install node_modules
ADD package.json /app/package.json
RUN npm install --loglevel verbose
# copy codebase to docker codebase
ADD . /app
EXPOSE 8081
# You can change this
CMD [ "nodemon", "serverApp.js" ]
This is my docker-compose file:
version: "3"
services:
frontend:
stdin_open: true
container_name: firestore_manager
build:
context: ./client/firestore-app
dockerfile: DockerFile
image: rasilvap/firestore_manager
ports:
- "3000:3000"
volumes:
- ./client/firestore-app:/app
environment:
- BACKEND_HOST=backend
- BACKEND_PORT=8081
depends_on:
- backend
backend:
container_name: firestore_manager_server
build:
context: ./server
dockerfile: Dockerfile
image: rasilvap/firestore_manager_server
ports:
- "8081:8081"
volumes:
- ./server:/app
This is the way in which I'm using it in the react code:
axios.delete(`http://backend:8081/firestore/`, request).then((res) => {....
But I'm getting a connection refused error. I'm new to React and not quite sure how I can achieve this.
Any ideas?
Your way of requesting the service looks fine: http://foo-service:port.
I think that your issue is a security issue.
Because the two applications are not considered to be on the same origin (origin = domain + protocol + port), you fall into a CORS (Cross-Origin Resource Sharing) requirement.
In that scenario, your browser will not perform your ajax query unless the backend answers the preflight CORS request with its agreement to share its resources with that other "origin".
So enable CORS in the backend to solve the issue (each API/framework has its own way).
That post may also help you with Firebase.
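A quick way to check whether CORS is what blocks the call, sketched with the host ports from the compose file above: replay the browser's preflight with curl and look for Access-Control-Allow-* headers in the response.
# simulate the preflight the browser sends before the DELETE request
curl -i -X OPTIONS http://localhost:8081/firestore/ \
  -H 'Origin: http://localhost:3000' \
  -H 'Access-Control-Request-Method: DELETE'
# no Access-Control-Allow-Origin header in the reply means CORS is not enabled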

Run multiple Docker containers at once using docker-compose

The Problem
Currently I've created a Dockerfile and a docker-compose.yml to run my rest-api and database using docker-compose up.
What I want to do now is add another container, namely the web application (built with React). I'm a little bit confused about how to do that, since I just started learning Docker 2 days ago.
Folder Structure
This is my current folder structure:
Folder: rest-api (NodeJS)
  Dockerfile
  docker-compose.yml
The Question
In the end I want to be able to run docker-compose up to fire up both the rest-api and the web-app.
Do I need to create a separate Dockerfile in every folder and create a 'global' docker-compose.yml to link everything together?
New folder structure:
docker-compose.yml
Folder: rest-api (NodeJS)
  Dockerfile
Folder: web-app (React)
  Dockerfile
My current setup to run the rest-api and database
Dockerfile
FROM node:13.10
# The destination of the app in the container
WORKDIR /usr/src/app
# Moves the package.json, package-lock.json and tsconfig.json to the specified workdir
COPY package*.json ./
COPY tsconfig.json ./
# Create user and postgres
ENV POSTGRES_USER root
ENV POSTGRES_PASSWORD 12345
ENV POSTGRES_DB postgres
ENV POSTGRES_URI 'postgres://postgres:12345@postgres:5432/postgres'
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '3'
services:
  node:
    container_name: rest-api
    restart: always
    build: .
    environment:
      PORT: 3000
    ports:
      - '80:3000'
    links:
      - postgres
  postgres:
    container_name: postgres-database
    image: postgres
    environment:
      POSTGRES_URI: 'postgres://postgres:12345@postgres-database:5432/postgres'
      POSTGRES_PASSWORD: 12345
    ports:
      - '5432:5432'
OK, so there are quite a few ways to approach this, and it's pretty much based on your preference.
If you want to go with your proposed folder structure (which is fine), then you can, for example, do it like so:
Have a Dockerfile in the root of each of your applications, which builds that specific application (as you already suggested). Place your docker-compose.yml file in the parent folder of both applications (exactly as you proposed) and then just make some changes to your docker-compose.yml (I only left the essential parts; note that links are no longer necessary, since the internal networking will resolve the service names to the corresponding container IP addresses):
version: '3'
services:
  node:
    build:
      context: rest-api
    environment:
      PORT: 3000
    ports:
      - '3000:3000'
  web:
    image: web-app
    build:
      context: web-app
    ports:
      - 80:80
  postgres:
    image: postgres
    environment:
      POSTGRES_URI: 'postgres://postgres:12345@postgres-database:5432/postgres'
      POSTGRES_PASSWORD: 12345
    ports:
      - '5432:5432'
So the context is what tells Docker that what you are building is actually in a different directory, and all of the commands executed in the Dockerfile will be relative to that folder.
I also changed the port mappings, because you will probably want to access your web app via the standard HTTP port. Note that the web-app will be able to communicate with the rest-api container by using the node hostname, as long as the node service binds to 0.0.0.0:3000 (not 127.0.0.1:3000).
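With that layout, both applications and the database come up together from the parent folder; a sketch of the workflow, assuming the structure above:
# from the folder that contains docker-compose.yml
docker-compose up --build
# web app on http://localhost:80, rest-api on http://localhost:3000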

Docker is not building container with changes in source code

I'm relatively new to Docker, and I just created a Node.js application that should connect to other services also running on Docker.
So I have the source code, a Dockerfile to set up the image, and a docker-compose file to orchestrate the environment.
I had a few problems in the beginning, so I updated my source code and found out that the changes are not picked up by the next docker-compose build.
For example, I commented out all the lines that connect to Redis and MongoDB. I run the application locally and it's fine, but when I create it again in a container, I still get the "Connection refused..." errors.
I tried many things, and this is what I have at the moment:
Dockerfile
FROM node:9
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD node app.js
EXPOSE 8090
docker-compose.yml
version: '3'
services:
  app:
    build: .
    ports:
      - "8090:8090"
    container_name: app
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
    container_name: redis
  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
up.sh
sudo docker stop app
sudo docker rm app
docker-compose build --no-cache app
sudo docker-compose up --force-recreate
Any ideas on what the problem could be? Why doesn't it use the current source code? Is it using some sort of cache?
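For comparison, the whole rebuild-and-recreate cycle can be done through compose alone, which also avoids mixing sudo and non-sudo invocations against the same daemon; a sketch, assuming the service name app from the compose file above:
# rebuild the image without cache and recreate the container in one pass
docker-compose build --no-cache app
docker-compose up -d --force-recreate app
# confirm which image the running container was created from
docker inspect app --format '{{.Image}}'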
