I have Docker installed on an Ubuntu 16.04 VM and I'm working on a personal project using Node.js; the Docker image is built from the Dockerfile below.
The container runs, but when I try to access it with the VPS's public IP, it isn't accessible.
I tried to curl it and, after a very long time, I get curl: (52) Empty reply from server.
The port is mapped correctly and there are no firewall issues either.
Here is my Dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN apk update && apk upgrade \
  && apk add --no-cache git \
  && apk --no-cache add --virtual builds-deps build-base python \
  && npm install -g nodemon cross-env eslint npm-run-all node-gyp node-pre-gyp \
  && npm install \
  && npm rebuild bcrypt --build-from-source
RUN npm install --production --silent && mv node_modules ../
COPY . .
RUN pwd
EXPOSE 3001
CMD npm start
docker ps
CONTAINER ID   IMAGE    COMMAND                  CREATED      STATUS      PORTS                    NAMES
8588419b40c4   xxx:v1   "/bin/sh -c 'npm sta…"   2 days ago   Up 2 days   0.0.0.0:3000->3001/tcp   youthful_roentgen
Let xxx:v1 be the image name built from the Dockerfile you provided.
If you want to access your app via your host (curl localhost:3001), then you should run:
docker run -p 3001:3000 xxx:v1
This command binds port 3000 in your container to port 3001 on your host (IIRC, 3000 is the default port used by npm start).
You should then be able to access localhost:3001 from your host with curl.
Note that the EXPOSE directive in the Dockerfile does not automatically publish a port when you run docker run. It is just an indication that your container listens on the port you EXPOSEd. Here, your EXPOSE directive is wrong; you should have written:
EXPOSE 3000
because only port 3000 is open in the container (3000 being the default port used by npm). Which port you choose to bind on the host (if any) is specified at runtime only.
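As an aside, if you only want Docker to publish whatever the image EXPOSEs, without picking host ports yourself, the -P flag maps every EXPOSEd port to a random high port on the host. A quick sketch, assuming the xxx:v1 image from above:
docker run -P xxx:v1
# then check which host port Docker picked:
docker port <container_name_or_id>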
If you don't want to access your app via localhost but only via the container's IP, there is no need to bind the port at all (no -p). You only need to run curl <container_ip>:3000 from your host.
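If you go the container-IP route, docker inspect can print that IP; a sketch using the container name youthful_roentgen from the docker ps output above:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' youthful_roentgen
# then, with the app listening on 3000 inside the container:
curl http://<container_ip>:3000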
Related
I'm trying to get my project up on a Virtual Private Server. I've installed Docker and Portainer and I can start the project, but it isn't running on any port. I set it to run on port 3000, but when I put IP_Of_My_VPS:3000 in the browser, nothing happens. I'm new to Docker and every configuration I did was based on my searches.
This screenshot shows that the image is not running on any port.
This other screenshot shows that my application is running (but I don't know how to access it).
My Docker config:
FROM node:12-alpine
RUN apk --no-cache add curl
RUN apk --no-cache add git
RUN git --version
WORKDIR /app
COPY package*.json ./
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
RUN npm ci
COPY . .
RUN npm run build && rm -rf src
HEALTHCHECK --interval=30s --timeout=3s --start-period=30s \
CMD curl -f http://localhost:3000/health || exit 1
EXPOSE 3000
CMD ["node", "./dist/main.js"]
When you bring the container up, publish the port. For example:
docker run -p <your_forwarding_port>:3000 <image_name>
or, with docker-compose:
# docker-compose.yaml
ports:
  - "<your_forwarding_port>:3000"
For reference, see the docs: docker container port and docker-compose ports.
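For completeness, a minimal docker-compose.yml sketch built around the Dockerfile above (the service name app and the 3000:3000 mapping are assumptions; adjust them to your project):
# docker-compose.yml (minimal sketch)
version: "3"
services:
  app:
    build: .          # build from the Dockerfile above
    ports:
      - "3000:3000"   # host_port:container_port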
I am using Docker for Windows (Windows 10 version 2004, so I have WSL 2) and I am trying to containerise a Nuxt application. The application runs well on my local system, but after creating a Dockerfile and building it, I cannot get it to port-forward onto my host system. In contrast, with the sample application from https://github.com/BretFisher/docker-mastery-for-nodejs/tree/master/ultimate-node-dockerfile (the Dockerfile from the test folder is the one meant to be used), I can access it fine.
If I exec into my running container, I get the expected output from curl http://localhost:3000, so things inside the container are supposedly fine.
My Dockerfile looks like
FROM node:12.18.3-buster-slim
LABEL org.opencontainers.image.authors=sayak#redacted.com
EXPOSE 3000
WORKDIR /app
RUN chown -R node:node /app
COPY --chown=node:node package*.json ./
ENV NODE_ENV=development
RUN apt-get update -qq && apt-get install -qy \
ca-certificates \
bzip2 \
curl \
libfontconfig \
--no-install-recommends
USER node
RUN npm config list
RUN npm ci \
&& npm cache clean --force
ENV PATH=/app/node_modules/.bin:$PATH
COPY --chown=node:node . .
RUN nuxt build
ENV NODE_ENV=production
CMD ["node", "server/index.js"]
I have even tried removing all the chowns and the USER node line so it runs as root, but to no avail.
This is the output of docker ps -a:
d727c8dd4d5c my-container:1.2.3 "docker-entrypoint.s…" 23 minutes ago Up 23 minutes 0.0.0.0:3000->3000/tcp inspiring_dhawan
c3a5aac8b79f sample-node-app "/tini -- node serve…" 23 minutes ago Up 23 minutes (unhealthy) 0.0.0.0:8080->8080/tcp tender_ardinghelli
The sample-node-app from the GitHub link above works, whereas my my-container doesn't. What am I doing wrong?
EDIT: I have tried building and running the containers in an Ubuntu VM and I get the same result, so it's not an issue with WSL or Windows; something is wrong with my Dockerfile.
By default, the Nuxt development server's host is 'localhost', which is only reachable from within the machine (or container) it runs on.
So, to tell Nuxt to bind to an address that accepts connections from outside, e.g. from Ubuntu on Windows Subsystem for Linux 2 (WSL 2) or through Docker's port mapping, you must use host '0.0.0.0'.
You can fix this by adding the following to the nuxt.config.js file:
export default {
server: {
port: 3000, // default : 3000
host: '0.0.0.0' // do not put localhost (only accessible from the host machine)
},
...
}
See Nuxt FAQ about Host Port for more information.
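Alternatively, Nuxt also honours the HOST and PORT environment variables, so you can set the host in the Dockerfile instead of nuxt.config.js; a sketch with the same effect as the config above:
# in the Dockerfile, before CMD:
ENV HOST=0.0.0.0
ENV PORT=3000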
Found the solution! Nuxt changes the default address the server listens on from 0.0.0.0 to 127.0.0.1, and Docker can only port-forward to a server listening on 0.0.0.0.
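In plain Node terms, the difference looks like this (an illustration only, not Nuxt code; Nuxt does the equivalent internally based on its host setting):
// bind-address illustration
const http = require('http');
const server = http.createServer((req, res) => res.end('ok'));
// server.listen(3000, '127.0.0.1'); // only reachable from inside the container
server.listen(3000, '0.0.0.0');      // reachable through docker run -p <host_port>:3000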
I wrote a Node.js server which I'm trying to run in a node-alpine based Docker container.
As per the Docker Node best practices, I'm using the node user.
I’m currently using port 9999, which works fine.
I would like to expose ports 80 and 443 instead, but I can't seem to get it to work.
The quick fix would be to simply use the root user instead, but that seems like a hacky solution.
The main question is:
Can ports 80 and 443 be exposed by the node user? If so, how?
This also raises some additional questions:
Would it be better to just stick with the root user instead?
Is it a good idea to expose ports 80 and 443 in a Docker image?
For what it’s worth, this is my Dockerfile:
FROM node:10-alpine
ENV NODE_ENV production
WORKDIR /app
COPY api api
COPY packages/utils packages/utils
COPY package.json package.json
COPY yarn.lock yarn.lock
RUN npm uninstall --global npm \
&& apk add build-base python2 --no-cache \
&& yarn --frozen-lockfile --production \
&& rm -r /opt/yarn* yarn.lock
USER node
ENTRYPOINT ["node", "-r", "esm", "api/server.js"]
EXPOSE 9999
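For reference: inside the container, an unprivileged user such as node cannot bind ports below 1024 by default, while the host-side bind done by -p is performed by the Docker daemon, so a common pattern is to keep the app on its unprivileged port and publish the privileged port on the host. A sketch, with my-api as a stand-in image name:
docker run -p 80:9999 my-api
# host port 80 -> container port 9999; no root needed inside the container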
I created a new Angular 2 app with angular-cli and ran it in Docker, but I cannot connect to it from localhost.
First, I initialised the app on my local machine:
ng new project && cd project && "put my Dockerfile there" && docker build -t my-ui .
I start it with the command:
docker run -p 4200:4200 my-ui
Then I try it from my localhost:
curl localhost:4200
and receive
curl: (56) Recv failure: Connection reset by peer
Then I tried shelling into the running container (docker exec -ti container-id bash) and running curl localhost:4200, and it works.
I also tried to run the container with the --net=host param:
docker run --net=host -p 4200:4200 my-ui
And it works. What is the problem? I also tried running the container in daemon mode, and it did not help. Thanks.
My Dockerfile
FROM node
RUN npm install -g angular-cli@v1.0.0-beta.24 && npm cache clean && rm -rf ~/.npm
RUN mkdir -p /opt/client-ui/src
WORKDIR /opt/client-ui
COPY package.json /opt/client-ui/
COPY angular-cli.json /opt/client-ui/
COPY tslint.json /opt/client-ui/
ADD src/ /opt/client-ui/src
RUN npm install
RUN ng build --prod --aot
EXPOSE 4200
ENV PATH="$PATH:/usr/local/bin/"
CMD ["npm", "start"]
It seems that you use ng serve to run the development server, and by default it starts on the loopback interface (available only on localhost). You should provide a specific parameter:
ng serve --host 0.0.0.0
to run it on all interfaces.
You need to change angular-cli to serve the app externally, i.e. update your npm start script to ng serve --host 0.0.0.0.
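For example, the start script in package.json could look like this (a sketch; your scripts section may differ):
"scripts": {
  "start": "ng serve --host 0.0.0.0"
}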
I have tried to get this working, but I am struggling to expose the Node app on port 80. I also want to be sure everything else is secure.
UPDATE:
Trying to be more clear...
I am using this Dockerfile
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8888
CMD [ "node", "index.js" ]
Then I use this command to start the container:
$ docker run -p 8888:80 christmedical/christ-medical-server
From my Docker host's public IP I get nothing.
In the docker run reference documentation, the section on publishing ports says:
-p=[] : Publish a container᾿s port or a range of ports to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Since you say you want to access it on port 80 of your host, this should be your command:
docker run -p 80:8888 christmedical/christ-medical-server
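To verify the mapping afterwards, a quick sketch (the container name is whatever Docker assigned, or what you passed with --name):
docker ps --format '{{.Names}}\t{{.Ports}}'
# expect something like 0.0.0.0:80->8888/tcp, then:
curl http://localhost:80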