I am trying to deploy a dockerized React application (served by nginx) to Azure App Service (Web App for Containers), using Azure Container Registry for the image.
Here is my Dockerfile
FROM node:14.17.0 as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci --silent
RUN npm install react-scripts -g --silent
COPY . .
RUN npm run build
#prepare nginx
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
#fire up nginx
EXPOSE 80
CMD ["nginx","-g","daemon off;"]
I am able to run the image as a container on my local machine, and it works perfectly:
docker run -itd --name=ui-container -p 80:80 abc.azurecr.io:latest
But the problem starts after running the image on Azure App Service / Container Service, because the service is not able to ping the port.
ERROR - Container didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging
This is the docker run command shown in the App Service logs:
docker run -d --expose=80 --name id_0_f8823503 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=80 -e WEBSITE_SITE_NAME=id -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=id.azurewebsites.net -e WEBSITE_INSTANCE_ID=af26eeb17400cdb1a96c545117762d0fdf33cf24e01fb4ee2581eb015d557e50 -e WEBSITE_USE_DIAGNOSTIC_SERVER=False i.azurecr.io/ivoyant-datamapper
I see that there is no -p 80:80 in the above docker run command. I have tried multiple approaches to fix this, but nothing has worked for me.
I tried adding the following in the App Settings under Configuration:
key: PORT, value: 80
key: WEBSITES_PORT, value: 80
App Service listens on port 80/443 and forwards traffic to your container on port 80 by default. If your container is listening on port 80, no need to set the WEBSITES_PORT application setting. The -p 80:80 parameter is not needed.
You would set WEBSITES_PORT to the port number of your container only if it listens on a different port number.
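For example, if the container listened on 8080 instead, the setting could be applied like this (resource group and app name are placeholders):

az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_PORT=8080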
Now, why does App Service report that error? It might be that your app fails when starting. Try enabling the application logs to see if you get more info.
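One way to enable and stream the container logs, sketched here with placeholder names, is via the Azure CLI:

# enable container stdout/stderr logging to the App Service filesystem
az webapp log config --resource-group <resource-group> --name <app-name> --docker-container-logging filesystem
# then stream the logs while the site starts
az webapp log tail --resource-group <resource-group> --name <app-name>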
Replaced
FROM node:14.17.0 as build
with
FROM node:19-alpine as build
The application was then deployed successfully to Azure with a success response. Using a non-Alpine image was what caused the issue in Azure App Service.
Related
I'm a newbie to Docker so please correct me if anything I'm stating is wrong.
I created a React app and wrote the following Dockerfile in the repository root:
# pull official base image
FROM node:latest
# A directory within the virtualized Docker environment
# Becomes more relevant when using Docker Compose later
WORKDIR /usr/src/app
# Copies package.json and package-lock.json to Docker environment
COPY package*.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . .
# Uses port which is used by the actual application
EXPOSE 8080
# Finally runs the application
CMD [ "npm", "start" ]
My goal is to run the Docker image in a way that lets me open the React app in my browser (on localhost).
Since the Dockerfile exposes port 8080, I thought I could run:
docker run -p 8080:8080 -t <name of the docker image>
But apparently the application is accessible on port 3000 inside the container, because when I run:
docker run -p 8080:3000 -t <name of the docker image>
I can access it with localhost:8080.
What's the point of the EXPOSE port in the Dockerfile, when the service running in its container is accessible through a different port?
When containerizing a NodeJS app, do I always have to make sure that process.env.PORT in my app is the same as the EXPOSE in the Dockerfile?
EXPOSE tells Docker which ports the application inside the container is expected to listen on. On its own it doesn't map anything; it means nothing if you don't actually use those ports inside the container and publish them (container -> host).
EXPOSE is very handy when using docker run -P -t <name of the docker image> (-P, capital P) to let Docker automatically publish all the exposed ports to random ports on the host (try it out, then run docker ps or docker inspect <containerId> and check the output).
So if your web server (the React app) is running on port 3000 inside the container, you should EXPOSE 3000 (instead of 8080) to properly integrate with Docker.
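For example, a minimal sketch (the image name is hypothetical):

docker build -t my-react-app .
# -P publishes every EXPOSEd port to a random high port on the host
docker run -d -P my-react-app
docker ps --format '{{.Names}}: {{.Ports}}'
# prints something like: 0.0.0.0:49160->3000/tcp (host port chosen by Docker)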
It's kind of weird.
It's just documentation, in a sense.
https://docs.docker.com/engine/reference/builder/#:~:text=The%20EXPOSE%20instruction%20informs%20Docker,not%20actually%20publish%20the%20port.
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
do I always have to make sure that process.env.PORT in my app is the same as the EXPOSE in the Dockerfile?
Yes. You should.
And then you also need to make sure that the port actually gets published, whether in the docker run command, in your docker-compose.yml file, or however else you plan on running Docker.
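For example, with an explicit mapping (the image name is hypothetical, and this assumes your app actually reads process.env.PORT):

# the app listens on 3000 inside the container and is reachable at localhost:8080 on the host
docker run -e PORT=3000 -p 8080:3000 my-react-app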
A React app runs on the default port 3000, so you must set ports and expose in docker-compose.yml. In the example below I map container port 3000 to host port 8081.
frontend:
  container_name: frontend
  build:
    context: ./frontend/app
    dockerfile: ../Dockerfile
  volumes:
    - ./frontend/app:/home/devops/frontend/app
    - /home/devops/frontend/app/node_modules
  ports:
    - "8081:3000"
  expose:
    - 8081
  command: ["npm", "start"]
  restart: always
  stdin_open: true
Then run Docker Compose:
$ sudo docker-compose up -d
Then check the running containers to find the published port:
$ sudo docker ps
CONTAINER ID   IMAGE             COMMAND                   CREATED          STATUS          PORTS                              NAMES
83b970baf16d   devops_frontend   "docker-entrypoint..."    31 seconds ago   Up 30 seconds   8081/tcp, 0.0.0.0:8081->3000/tcp   frontend
It's resolved. Check the published port:
$ curl 'http://0.0.0.0:8081'
I'm running a Node.js app in a pod (together with a Mongo container).
The Node.js app listens on port 3000, which I expose from the container.
I have published port 3000 on the pod.
The container starts successfully (logs verified), but I can't reach my application on the host. When I curl my app from within the pod, it works.
Containers run rootful. OS: CentOS Linux release 8.0.1905 (Core).
What am I missing?
curl http://localhost:3000
curl: (7) Failed to connect to localhost port 3000: No route to host
podman ps
CONTAINER ID  IMAGE                                        COMMAND       CREATED         STATUS             PORTS                   NAMES
30da37306acf  registry.gitlab.com/xxx/switchboard:master   node main.js  34 minutes ago  Up 34 minutes ago  0.0.0.0:3000->3000/tcp  switchboard-app
acc08c71147b  docker.io/library/mongo:latest               mongod        35 minutes ago  Up 35 minutes ago  0.0.0.0:3000->3000/tcp  switchboard-mongo
podman port switchboard-app
3000/tcp -> 0.0.0.0:3000
app.listen(3000, "0.0.0.0",function () {
console.log('App is listening on port 3000!');
});
FROM node:13
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY /dist/apps/switchboard .
EXPOSE 3000
CMD [ "node", "main.js" ]
If you want to create a production Docker build, you can go another way. It's not a good idea to run npm install inside the container, because the image will be too large. Run npm run build first, and then copy the built static files into a Docker image based on the nginx image.
So your Dockerfile should look like:
FROM nginx:1.13.0-alpine
COPY build/* /usr/share/nginx/html/
Also publish the port correctly with docker run. So, if you want the app to be reachable on port 3000 of the host, your steps are:
cd /project/dir
docker build -t switchboard-app .
docker run -d -p 3000:80 --name switchboard-app switchboard-app
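As an alternative sketch, a multi-stage build can run npm run build inside Docker and still end up with a small nginx-based image (the paths are assumptions; adjust the build output directory to your project):

# build stage: install dependencies and build the static bundle
FROM node:19-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# serve stage: copy only the built static files into nginx
FROM nginx:1.13.0-alpine
COPY --from=build /app/build /usr/share/nginx/html/
EXPOSE 80

The docker build and docker run commands above stay the same.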
I am very new to Docker, so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
CMD ["npm", "start"]
EXPOSE 8000
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app on my browser on localhost:8000
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
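In other words, with plain docker run the port has to be published explicitly, for example:

docker run -p 8000:8000 imageName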
But you can use docker-compose to configure and run your Docker images easily.
First, install docker-compose: Install Docker Compose
Second, create a docker-compose.yml next to the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your container with this command:
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for “detached” mode) to docker-compose up and use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First get the IP of the docker container by running
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser or with curl, using that IP and the exposed port.
Note that you will not be able to access the container on 0.0.0.0/localhost, because the port is not mapped.
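For example (the container name and the resulting IP are hypothetical):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-container
# prints something like 172.17.0.2
curl http://172.17.0.2:8000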
In my Dockerfile, I have the following:
# Start app and proxy
CMD service nginx start
CMD ["nodejs", "/src/index.js"]
Doing it this way, the Node server is running, but not nginx. Likewise, if I do something like:
# Start app and proxy
CMD service nginx start && nodejs /src/index.js
then nginx is running, but not Node.
Am I overlooking something obvious?
I think you can split this problem into two containers with docker-compose.
You will get one container with your nginx image and one container with your Node application.
Then just run docker-compose up.
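A rough sketch of such a docker-compose.yml, assuming the Node app listens on port 3000 and the service names and build paths below are placeholders:

version: '2'
services:
  app:
    build: ./app        # image whose CMD runs: nodejs /src/index.js
    expose:
      - "3000"
  nginx:
    build: ./nginx      # nginx image whose config proxies to http://app:3000
    ports:
      - "80:80"
    depends_on:
      - app

Compose puts both services on the same network, so nginx can reach the Node container by its service name (app).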
You can use Docker legacy linking:
In nginx folder docker build -t docker-nginx .
In node folder docker build -t docker-node .
docker run -d --name app docker-node
docker run -d --name nginx --link app:app docker-nginx
Then you point nginx at the app in its config file using the link alias, like app:3000.
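A sketch of the relevant nginx server block, assuming the Node app listens on port 3000 behind the link alias app:

server {
    listen 80;
    location / {
        proxy_pass http://app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}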
You could also use docker-compose, which would simplify image building and container running. Check out the documentation.
I'm finding the docs sorely lacking for this (or, I'm dumb), but here's my setup:
The web app runs on Node and Express, on port 8080. It also connects to a MongoDB container (hence why I'm using docker-compose).
In my Dockerfile, I have:
FROM node:4.2.4-wheezy
# Set correct environment variables.
ENV HOME /root
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install;
# Bundle app source
COPY . /usr/src/app
CMD ["node", "app.js"]
EXPOSE 8080
I run:
docker-compose build web
docker-compose build db
docker-compose up -d db
docker-compose up -d web
When I run docker-machine ip default I get 192.168.99.100.
So when I go to 192.168.99.100:8080 in my browser, I would expect to see my app - but I get connection refused.
What silly thing have I done wrong here?
Fairly new here as well, but did you publish your ports in your docker-compose file?
Your Dockerfile simply exposes the ports but does not open access from your host. Publishing (with the -p flag on a docker run command, or ports in a docker-compose file) enables access from outside the container (see ports in the docs).
Something like this may help:
ports:
- "8080:8080"