Containerized NodeJS application on podman not reachable

I'm running a Node.js app in a pod (together with a Mongo container).
The Node.js app listens on port 3000, which I expose from the container.
I have published port 3000 on the pod.
The container starts successfully (logs verified), but I can't reach my application on the host. When I curl my app from within the pod, it works.
The containers run rootful. OS: CentOS Linux release 8.0.1905 (Core).
What am I missing?
curl http://localhost:3000
curl: (7) Failed to connect to localhost port 3000: No route to host
podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
30da37306acf registry.gitlab.com/xxx/switchboard:master node main.js 34 minutes ago Up 34 minutes ago 0.0.0.0:3000->3000/tcp switchboard-app
acc08c71147b docker.io/library/mongo:latest mongod 35 minutes ago Up 35 minutes ago 0.0.0.0:3000->3000/tcp switchboard-mongo
podman port switchboard-app
3000/tcp -> 0.0.0.0:3000
app.listen(3000, "0.0.0.0", function () {
  console.log('App is listening on port 3000!');
});
FROM node:13
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY /dist/apps/switchboard .
EXPOSE 3000
CMD [ "node", "main.js" ]

If you want to create a production Docker build, you can go another way. It's not a good idea to run npm install inside the container, because the image will be too large. Run npm run build first, and then copy the built static files into a Docker container created from the nginx image.
So your Dockerfile should look like:
FROM nginx:1.13.0-alpine
COPY build/* /usr/share/nginx/html/
Also specify the published ports correctly with docker run. So, if you want to expose port 3000, your steps are:
cd /project/dir
docker build -t switchboard-app .
docker run -d -p 3000:80 --name switchboard-app switchboard-app
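The two steps can also be combined in a multi-stage build, so the npm step runs in a throwaway stage and only the built files reach the final image (a sketch; the output path build/ is an assumption and depends on the project's build config):

```dockerfile
# Build stage: install dependencies and build the static bundle
FROM node:13 AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Serve stage: only the built files end up in the final image
FROM nginx:1.13.0-alpine
COPY --from=build /usr/src/app/build/ /usr/share/nginx/html/
```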

Related

Unable to access Dockerized Angular application in container [duplicate]

This question already has answers here:
Cannot connect to exposed port of container started with docker-compose on Windows (4 answers)
Closed 1 year ago.
Dockerfile:
FROM node:current-alpine3.10
RUN mkdir -p /dist/angular
WORKDIR /dist/angular
COPY package.json .
RUN npm install --legacy-peer-deps
COPY . .
EXPOSE 8500
CMD ["npm","run","start:stage-ena-sso"]
package.json:
...
"scripts": {
  "start:stage-ena-sso": "ng serve -o -c=stage-ena-sso --port=8500 --baseHref=/"
}
...
Command used to build the Docker image:
docker build . -t ssoadminuiapp
Command used to run the Docker image:
docker run --rm -it -p 8500:8500/tcp ssoadminuiapp:latest
Check if container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8959e5180eba ssoadminuiapp:latest "docker-entrypoint.s…" 6 minutes ago Up 6 minutes 0.0.0.0:8500->8500/tcp recursing_fermat
But accessing localhost:8500 doesn't seem to work.
I'm really new to Docker, so any useful beginner-friendly tips/infos would be very appreciated.
Edit #1: this is the result after running the docker run command:
This is the problem:
Angular Live Development Server is listening on localhost:8500
Because your application is bound to localhost inside the container, the port isn't available for port forwarding. That's like trying to run a service on your host bound to localhost: you can reach it locally, but nothing remote can connect to it (and from a network perspective, your host is "remote" from the container).
You'll need to configure the Angular server to bind to all addresses in the container (0.0.0.0). I'm not sure exactly how to do this in your application; when running ng serve from the command line there is a --host option to set the bind address.
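Applied to the script from the question, that could look like the following (a sketch; -o is dropped because there is no browser to open inside the container):

```json
"scripts": {
  "start:stage-ena-sso": "ng serve -c=stage-ena-sso --host=0.0.0.0 --port=8500 --baseHref=/"
}
```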

On what PORT is my docker container with a React app running? React's default PORT or EXPOSE from Dockerfile?

I'm a newbie to Docker so please correct me if anything I'm stating is wrong.
I created a React app and wrote a following Dockerfile in the root repository:
# pull official base image
FROM node:latest
# A directory within the virtualized Docker environment
# Becomes more relevant when using Docker Compose later
WORKDIR /usr/src/app
# Copies package.json and package-lock.json to Docker environment
COPY package*.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . .
# Uses port which is used by the actual application
EXPOSE 8080
# Finally runs the application
CMD [ "npm", "start" ]
My goal is to run the docker image in a way, that I can open the React app in my browser (with localhost).
Since in the Dockerfile I'm exposing the app on port 8080, I thought I could run:
docker run -p 8080:8080 -t <name of the docker image>
But apparently the application is accessible through port 3000 in the container, because when I run:
docker run -p 8080:3000 -t <name of the docker image>
I can access it with localhost:8080.
What's the point of the EXPOSE port in the Dockerfile, when the service running in its container is accessible through a different port?
When containerizing a NodeJS app, do I always have to make sure that process.env.PORT in my app is the same as the EXPOSE in the Dockerfile?
EXPOSE is for telling Docker which ports the application inside the container can expose. It doesn't do anything by itself if you don't publish those ports (container -> host).
EXPOSE is very handy when using docker run -P -t <name of the docker image> (-P, capital P) to let Docker automatically publish all the exposed ports to random ports on the host (try it out, then run docker ps or docker inspect <containerId> and check the output).
So if your web server (React app) is running on port 3000 (inside the container), you should EXPOSE 3000 (instead of 8080) to properly integrate with the Docker API.
It's kind of weird. It's just documentation, in a sense.
https://docs.docker.com/engine/reference/builder/#:~:text=The%20EXPOSE%20instruction%20informs%20Docker,not%20actually%20publish%20the%20port.
The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published. To actually publish the port when running
the container, use the -p flag on docker run to publish and map one or
more ports, or the -P flag to publish all exposed ports and map them
to high-order ports.
do I always have to make sure that process.env.PORT in my app is the
same as the EXPOSE in the Dockerfile?
Yes. You should.
And then you also need to make sure that port actually gets published, when you use the docker run command or in your docker-compose.yml file, or however you plan on running docker.
Actually, the React app runs on the default port 3000, so you must mention ports and expose in docker-compose.yml. Here I'm changing the host port from 3000 to 8081:
frontend:
  container_name: frontend
  build:
    context: ./frontend/app
    dockerfile: ../Dockerfile
  volumes:
    - ./frontend/app:/home/devops/frontend/app
    - /home/devops/frontend/app/node_modules
  ports:
    - "8081:3000"
  expose:
    - 8081
  command: ["npm", "start"]
  restart: always
  stdin_open: true
And run Docker:
$ sudo docker-compose up -d
Then check the running containers to find the running port:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83b970baf16d devops_frontend "docker-entrypoint..." 31 seconds ago Up 30 seconds 8081/tcp, 0.0.0.0:8081->3000/tcp frontend
It's resolved. Check your public port:
$ curl 'http://0.0.0.0:8081'

Unable to Docker to port forward nuxt based express server to host

I am using Docker for Windows (Windows 10 version 2004, so I have WSL2) and I am trying to containerise a Nuxt application. The application runs well on my local system, but after creating a Dockerfile and building it, I cannot get it to port forward onto my host system. By contrast, when trying the sample applications from https://github.com/BretFisher/docker-mastery-for-nodejs/tree/master/ultimate-node-dockerfile (the Dockerfile from the test folder is supposed to be used), I can access them.
If I exec into my running container, I am able to get the output on running curl http://localhost:3000 so things are supposedly fine.
My Dockerfile looks like
FROM node:12.18.3-buster-slim
LABEL org.opencontainers.image.authors=sayak#redacted.com
EXPOSE 3000
WORKDIR /app
RUN chown -R node:node /app
COPY --chown=node:node package*.json ./
ENV NODE_ENV=development
RUN apt-get update -qq && apt-get install -qy \
ca-certificates \
bzip2 \
curl \
libfontconfig \
--no-install-recommends
USER node
RUN npm config list
RUN npm ci \
&& npm cache clean --force
ENV PATH=/app/node_modules/.bin:$PATH
COPY --chown=node:node . .
RUN nuxt build
ENV NODE_ENV=production
CMD ["node", "server/index.js"]
I have even tried removing all the chowns and removing USER node to run it as root, but to no avail.
This is the output to docker ps -a
d727c8dd4d5c my-container:1.2.3 "docker-entrypoint.s…" 23 minutes ago Up 23 minutes 0.0.0.0:3000->3000/tcp inspiring_dhawan
c3a5aac8b79f sample-node-app "/tini -- node serve…" 23 minutes ago Up 23 minutes (unhealthy) 0.0.0.0:8080->8080/tcp tender_ardinghelli
The sample-node-app from the above GitHub link works whereas my my-container doesn't. What am I doing wrong?
EDIT: I have tried building and running the containers in an Ubuntu VM but I get the same result, so it's not an issue with WSL or Windows; something is wrong with my Dockerfile.
By default, the Nuxt development server host is 'localhost', which is only accessible from within the host machine itself.
So, to tell Nuxt to bind to an address that accepts connections from outside the host machine, e.g. from Ubuntu on Windows Subsystem for Linux 2 (WSL2), you must use host '0.0.0.0'.
You can fix this by adding the following code to the nuxt.config.js file :
export default {
  server: {
    port: 3000, // default: 3000
    host: '0.0.0.0' // do not put localhost (only accessible from the host machine)
  },
  ...
}
See Nuxt FAQ about Host Port for more information.
Found the solution! Nuxt changes the default address the server listens on from 0.0.0.0 to 127.0.0.1, and Docker's port forwarding can only reach a server bound to 0.0.0.0.
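As an alternative to editing nuxt.config.js, Nuxt also reads the host and port from environment variables (per the Nuxt FAQ linked above), which fits nicely into the Dockerfile (a sketch added here, not part of the original answer):

```dockerfile
# Nuxt honors the HOST/PORT environment variables
ENV HOST=0.0.0.0
ENV PORT=3000
```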

Run node Docker without port mapping

I am very new to Docker, so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
CMD ["npm", "start"]
EXPOSE 8000
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app on my browser on localhost:8000
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
But you can use docker-compose to set up the config and run your Docker images easily.
First, install docker-compose: Install Docker Compose
Second, create docker-compose.yml beside the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your docker with this command
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode) to docker-compose up and use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First get the IP of the docker container by running:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser or with curl, using that IP and the exposed port.
Note that you will not be able to access the container on localhost because the port is not mapped.

Unable to access docker container from the port mapped by docker

I have created a Docker container but I am unable to reach it on the port mapped by Docker (http://localhost:3000). Below are the details of the Docker configuration that I am using in my app.
Docker version: 17.05.0-ce
OS: Ubuntu 16.04
My Dockerfile:
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install -g bower
RUN npm install -g grunt-cli
RUN npm install
RUN bower install --allow-root
#RUN grunt --force
EXPOSE 3000
CMD ["grunt", "serve"]
Creating docker container:
docker build -t viki76/ng-app .
Running Container:
docker run -p 3000:3000 -d viki76/ng-app
docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
21541171d884 viki/ng-app "grunt serve" 10 min ago Up 0.0.0.0:3000->3000/tcp
EDIT:
Updated Dockerfile configuration
EXPOSE 9000
$ docker run -p 9000:9000 viki76/ng-app
Running "serve" task
Running "clean:server" (clean) task
>> 1 path cleaned.
Running "wiredep:app" (wiredep) task
Running "wiredep:test" (wiredep) task
Running "concurrent:server" (concurrent) task
Running "copy:styles" (copy) task
Copied 2 files
Done, without errors.
Execution Time (2017-05-17 13:00:13 UTC-0)
loading tasks 189ms ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 88%
loading grunt-contrib-copy 11ms ▇▇ 5%
copy:styles 16ms ▇▇▇ 7%
Total 216ms
Running "postcss:server" (postcss) task
>> 2 processed stylesheets created.
Running "connect:livereload" (connect) task
Started connect web server on http://localhost:9000
Running "watch" task
Waiting...
From Gruntfile.js
connect: {
  options: {
    port: 9000,
    // Change this to '0.0.0.0' to access the server from outside.
    hostname: '0.0.0.0',
    livereload: 35729
  },
Please help me to fix it.
Thanks
I think your problem is that grunt is binding to localhost:9000, which is internal to the container, so the port you're publishing won't have any effect.
It needs to be listening on 0.0.0.0:9000. I couldn't tell you offhand what your Gruntfile.js should say for that to happen, but out of the box it looks like grunt serve will only serve from localhost.
You're doing everything right, though I'm assuming that your problem is simply configuration.
By default, grunt serve serves on port 9000, not 3000. Do you want to try to EXPOSE and publish that port?
Try running the container in host networking mode:
--net="host"
Pass the above flag when you run the container.
