Unable to deploy Node image (NestJS) on AWS Elastic Beanstalk - node.js

I am completely new to AWS as well as to containerization technology. What I am trying to achieve is deploying a Node image to AWS.
As I am working with NestJS, this is my main.ts bootstrap method:
import { NestFactory } from '@nestjs/core';
import { Logger } from '@nestjs/common';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  const port = 5000;
  app.setGlobalPrefix('api/v1');
  await app.listen(port);
  Logger.log(`Server is running on port ${port}`, 'Bootstrap');
}
bootstrap();
I am also using Travis CI to ship my container to AWS.
My Dockerfile:
# Download base image
FROM node:alpine as builder
# Define Base Directory
WORKDIR /usr/app/Api
# Copy and restore packages
COPY ./package*.json ./
RUN npm install
# Copy all other directories
COPY ./ ./
# Setup base command
CMD ["npm", "run", "start"]
My .travis.yml file, which is the Travis CI config:
sudo: required
services:
  - docker
branches:
  only:
    - master
before_install:
  - docker build -t xx/api .
script:
  - docker run xx/api npm run test
deploy:
  provider: elasticbeanstalk
  region: "us-east-2"
  app: "api"
  env: "api-env"
  bucket_name: "name"
  bucket_path: "api"
  on:
    branch: master
  access_key_id: "$AWS_ACCESS_KEY"
  secret_access_key: "$AWS_SECRET_KEY"
Every time code is pushed from Travis CI, my Elastic Beanstalk environment starts building and then fails.
So I started googling to solve the issue. What I found is that I need to expose port 80 using NGINX:
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
My question is: how should I incorporate NGINX into my Dockerfile? My application does not serve static content, so if I move all my build artefacts to /usr/share/nginx/html, it simply will not work. What I need is to run my Node server in one container and, at the same time, run another container with NGINX that exposes port 80 and proxies my requests.
How should I do that? Any help?
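For context, a minimal sketch of that kind of setup is an nginx image whose config proxies to the Node container instead of serving files from /usr/share/nginx/html. The service name api is an assumption for illustration, not anything taken from the question:
# nginx/default.conf -- hypothetical reverse-proxy config
server {
    listen 80;
    location / {
        # "api" is an assumed name for the Node container/service
        proxy_pass http://api:5000;
        proxy_set_header Host $host;
    }
}
# nginx/Dockerfile
FROM nginx:alpine
EXPOSE 80
COPY default.conf /etc/nginx/conf.d/default.conf
The Node image keeps its npm start CMD and listens on 5000; nginx receives traffic on port 80 and forwards it. On Elastic Beanstalk the two containers still have to be wired together (for example via a multi-container Dockerrun configuration or docker-compose), which is outside what the .travis.yml above covers.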

Related

Not able to connect to vue app in docker container (Vue+Flask+Docker)

I am trying to set up a skeleton project for a web app. Since I have no experience using docker I followed this tutorial for a Flask+Vue+Docker setup:
https://www.section.io/engineering-education/how-to-build-a-vue-app-with-flask-sqlite-backend-using-docker/
The backend and frontend run correctly on their own; now I wanted to dockerize both parts as described, with docker-compose and separate containers for backend and frontend. When I try to connect to localhost:8080 I get this:
"This page isn't working, localhost did not send any data"
This is my frontend dockerfile:
#Base image
FROM node:lts-alpine
#Install serve package
RUN npm i -g serve
# Set the working directory
WORKDIR /app
# Copy the package.json and package-lock.json
COPY package*.json ./
# install project dependencies
RUN npm install
# Copy the project files
COPY . .
# Build the project
RUN npm run build
# Expose a port
EXPOSE 5000
# Executables
CMD [ "serve", "-s", "dist"]
and this is the docker-compose.yml
version: '3.8'
services:
  backend:
    build: ./backend
    ports:
      - 5000:5000
  frontend:
    build: ./frontend
    ports:
      - 8080:5000
In the Docker Desktop GUI for the frontend container I get the log message "Accepting connections at http://localhost:3000", but when I open it in the browser it connects me to port 8080.
During research I found that many people say I have to make the app serve on 0.0.0.0 for it to work from a Docker container, but I don't know how to configure that. I tried adding
devServer: {
  public: '0.0.0.0:8080'
}
to my vue.config.js, which did not change anything. Others suggested changing the docker run command to incorporate the host change, but I don't use that; I use docker-compose up to start the app.
Sorry for my big confusion, I hope someone can help me out here. I really hope it's something simple I am overlooking.
Thanks to everyone trying to help in advance!
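One detail worth checking, given the log message and the compose file above: serve listens on port 3000 by default, while the Dockerfile exposes 5000 and the compose file maps 8080:5000, so nothing is listening on the port that actually gets published. A possible fix (a sketch, assuming serve's -l/--listen flag) is to pin serve to that port in the frontend Dockerfile:
# Expose the port that docker-compose maps (8080:5000)
EXPOSE 5000
# Tell serve to listen on 5000 instead of its default 3000;
# "tcp://0.0.0.0:5000" can be used instead to force binding on all interfaces.
CMD [ "serve", "-s", "dist", "-l", "5000" ]
With that change, http://localhost:8080 on the host should reach the container.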

How to configure port of React app fetch when deploying to ECS fargate cluster with two tasks

I have two docker images that communicate fine when deployed locally, but I'm not sure how to set up my app to correctly make fetch() calls from the React app to the correct port on the other app when they're both deployed as tasks on the same ECS cluster.
My react app uses a simple fetch('/fooService/' + barStr) type call, and my other app exposes a /fooService/{barStr} endpoint on port 8081.
For local deployment and local docker, I used setupProxy.js to specify a proxy:
const { createProxyMiddleware } = require("http-proxy-middleware");
module.exports = function(app) {
  app.use(createProxyMiddleware('/fooService',
    { target: 'http://fooImage:8081',
      changeOrigin: true
    }
  ));
}
In ECS this seems to do nothing, though. I see the setupProxy run when the image starts up, but the requests from my react app just go directly to {sameIPaddress}/fooService/{barStr}, ignoring the proxy specification entirely. I can see in the browser that the requests are being made over port 80. If these requests are made on port 8081 manually, they complete just fine, so the port is available and the service is running.
I've exposed port 8081 on the other task, and I can access it externally with no problem. I am just unclear on how to design my React app to point to it, since I won't necessarily know what IP address I will be assigned until the task launches. If I use a relative path, I cannot specify the port.
What's the idiomatic way to specify the destination of my fetch requests in this context?
Edit: If it is relevant, here is how the Docker images are configured. They are built automatically on Docker Hub and work fine if I deploy them in my local Docker instance.
docker-compose.yaml
version: "3.8"
services:
fooImage:
image: myname/foo-image:0.1
build: ./
container_name: server
ports:
- '8081:8081'
barImage:
image: myname/bar-image:0.1
build: ./bar
container_name: front
ports:
- '80:80'
stdin_open: true
tty: true
Dockerfile - foo image
#
# Build stage
#
FROM maven:3.8.5-openjdk-18-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
FROM openjdk:18-alpine
COPY --from=build /home/app/target/*.jar /usr/local/lib/app.jar
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "/usr/local/lib/app.jar"]
Dockerfile - bar image
FROM node:17-alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
[Screenshots: ECS Foo task ports; ECS Bar task ports]
The solution to this issue was to change the proxy target back to "localhost:8081". Per Amazon support:
"To resolve your issue most quickly, you can try to change your proxy configuration from "http://server:8081" to "http://localhost:8081", and the proxy should work.
That's because when using Fargate with awsvpc network mode, containers running in a task share the same network namespace, which means containers can communicate with each other through localhost (e.g. when the back-end container listens on port 8081, the front-end container can access it via localhost:8081). When using docker-compose, you can use the hostname to communicate with another container specified in the same docker-compose file. So proxying back-end traffic with "http://server:8081" in Fargate won't work and should be modified to "http://localhost:8081"."
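Applied to the setupProxy.js from the question, the only change is the target hostname; a sketch under that assumption:
const { createProxyMiddleware } = require("http-proxy-middleware");

// Same middleware as above, but pointing at localhost, which reaches the
// other container in the same Fargate task (shared awsvpc network namespace).
module.exports = function(app) {
  app.use(createProxyMiddleware('/fooService',
    { target: 'http://localhost:8081',
      changeOrigin: true
    }
  ));
}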

Error occurred while trying to proxy request while running on Docker

I am trying to deploy my React + Spring Boot app to Docker. However, the backend API does not seem to be connected to my React app, although I have already checked port 8080 of the Spring Boot server and checked the proxy.js in the React app. It keeps giving an "Error occurred while trying to proxy request" error. Please help me with this!
Here's the proxy.js
export default {
  dev: {
    '/api/': {
      target: 'http://localhost:8080/',
      changeOrigin: true,
      pathRewrite: {
        '^': '',
      },
    },
  },
}
This is the dockerfile of the React App
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8000
ENTRYPOINT npm run dev
The Backend Dockerfile
FROM openjdk:8-jdk-alpine
EXPOSE 8080
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
And the docker-compose.yml file
version: "3"
services:
server:
build:
context: ./service
dockerfile: ./Dockerfile
ports:
- "8080:8080"
image: academy-server
client:
build:
context: ./web
dockerfile: ./Dockerfile
ports:
- "8000:8000"
image: academy-client
links:
- "server"
Running in Docker is the same as if you were running your frontend and backend on two different machines. As such, you cannot use localhost to talk to your backend. Instead you need to use the service names as defined in your docker-compose file. So in your case you should use 'server' instead of localhost.
docker-compose automatically creates an internal network, attaches both of your containers to that network, and uses the service names for routing between the containers.
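Applied to the proxy.js shown in the question, that would look like the following (only the target changes; everything else is kept as posted):
export default {
  dev: {
    '/api/': {
      // 'server' is the backend service name from docker-compose.yml
      target: 'http://server:8080/',
      changeOrigin: true,
      pathRewrite: {
        '^': '',
      },
    },
  },
}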

copy build from container in different context

So I'm trying to get the environment for my project set up to use Docker. The project structure is as follows:
/client
/server
/nginx
docker-compose.yml
docker-compose.override.yml
docker-compose.prod.yml
In the Dockerfile for each of /client, /server, and /nginx I have a base stage that installs my dependencies, then a development stage that installs dev dependencies, and a production stage that builds or runs the image for client and server respectively.
For example:
# start from a node image
FROM node:14.8.0-alpine as base
WORKDIR /client
COPY package.json package-lock.json ./
RUN npm i --only=prod
FROM base as development
RUN npm install --only=dev
CMD [ "npm", "run", "start" ]
FROM base as production
COPY . .
RUN npm run build
So here is where my problem comes in.
In /nginx I want nginx in development to just act as a reverse proxy for create-react-app, but in production I want to take client/build from the production client image and copy it into the nginx server to be served statically, without the overhead of the entire React build toolchain.
i.e.
FROM nginx:stable-alpine as base
FROM base as development
COPY development.conf /etc/nginx/nginx.conf
FROM base as production
COPY production.conf /etc/nginx/nginx.conf
COPY --from=??? /client/build /usr/share/nginx/html
^
what goes here?
If anyone has any clue how to get this to work without having to pull from Docker Hub and push images up to Docker Hub every time a change is made, that would be great.
You can COPY --from= another image by name. Just like docker run, the image needs to be local, and Docker won't contact Docker Hub or another registry server if you already have the image.
# Most basic form; "myapp" is the containing directory name
COPY --from=myapp_client /client/build /usr/share/nginx/html
Compose doesn't directly have a way to specify this build dependency, but running docker-compose build twice should do the trick.
If you're planning to deploy this, you probably want some control over the name and tag of the image. In docker-compose.yml you can specify both build: and image:, which will tell Compose what name to use when it builds the image. You can also use environment variables almost everywhere in the Compose file, and pass an ARG into a build to configure it. Combining all of these would give you:
version: '3.8'
services:
  client:
    build: ./client
    image: registry.example.com/my/client:${TAG:-latest}
  nginx:
    build:
      context: ./nginx
      args:
        TAG: ${TAG:-latest}
    image: registry.example.com/my/nginx:${TAG:-latest}
# nginx/Dockerfile
FROM nginx:stable-alpine
ARG TAG=latest
COPY --from=registry.example.com/my/client:${TAG} /client/build /usr/share/nginx/html
TAG=20210113 docker-compose build
TAG=20210113 docker-compose build
TAG=20210113 docker-compose up -d
# TAG=20210113 docker-compose push

How do I deploy my express web app on nginx web server using docker?

I have been trying to deploy my Express web application behind an nginx web server.
This is my directory structure:
express-app
  frontend
    public
      /* all resources, images etc. */
    src
      /* all js files */
    views
      /* html files */
    package.json
    index.js        // server file
    Dockerfile      // image for front end
  backend
    src
      server.js
    package.json
    Dockerfile      // image for backend
  proxy
    Dockerfile      // ???
    proxy.conf      // ???
  docker-compose.yml
I have successfully dockerised my application and it works fine. But I am a little confused about how to create the Dockerfile for nginx and the proxy.conf so that nginx can be used as a web server for my application. The Dockerfiles for the frontend and backend work.
Dockerfile for front end:
FROM node:carbon
RUN mkdir -p /usr/src/frontend
# Create app directory
WORKDIR /usr/src/frontend
COPY package*.json ./
RUN npm install && npm install gulp -g
# Bundle app source
COPY . .
CMD ["gulp","sass","js-global","js-pages"]
EXPOSE 8081
CMD [ "npm", "start" ]
Dockerfile for backend:
FROM node:carbon
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
COPY . .
ENV MONGODB_URL=mongodb://mongo/db3
ENV BACKEND_HOST_PATH=http://localhost:5000/
EXPOSE 5000
CMD ["npm", "start"]
I am using Docker for Windows.
How do I make a Dockerfile and a conf file for nginx so that it acts as a web server for my application?
I think there are two ways.
First, nginx is not dockerized: you set up nginx on the Docker host itself and just use it as a proxy.
# front
location / {
    proxy_pass http://localhost:8081;
}
# backend
location / {
    proxy_pass http://localhost:5000;
}
Second, if you want to dockerize nginx as well:
docker network create test
docker run --name nginx --network=test nginx
docker run --name front --network=test <front-image>
docker run --name backend --network=test <backend-image>
If you execute the commands above, the nginx container can reach the front and backend containers by name; this works like a hosts file.
Does that make sense? Did I understand what you want?
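To tie that back to the proxy/ directory in the question, a minimal sketch of proxy.conf and its Dockerfile might look like the following. The upstream names front and backend come from the docker run commands above, the ports come from the two Dockerfiles in the question, and the /api/ prefix for the backend is purely an assumption:
# proxy/proxy.conf
server {
    listen 80;

    # backend container (EXPOSE 5000); the /api/ prefix is assumed
    location /api/ {
        proxy_pass http://backend:5000/;
    }

    # frontend container (EXPOSE 8081)
    location / {
        proxy_pass http://front:8081;
    }
}
# proxy/Dockerfile
FROM nginx:alpine
COPY proxy.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
All three containers need to share a network (as in the docker network create test example above, or simply by being services in the same docker-compose.yml) for the names front and backend to resolve.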
