I wrote a Node.js server that I’m trying to run in a node-alpine based Docker container.
Following the Docker Node.js best practices, I’m using the node user.
I’m currently using port 9999, which works fine.
I would like to expose port 80 and 443 instead, but I can’t seem to get it to work.
The quick fix would be to simply use the root user instead, but that seems like a hacky solution.
The main question is:
Can ports 80 and 443 be exposed by the node user? If so, how?
This also raises some additional questions:
Would it be better to just stick with the root user instead?
Is it a good idea to expose ports 80 and 443 in a Docker image?
For what it’s worth, this is my Dockerfile:
FROM node:10-alpine
ENV NODE_ENV production
WORKDIR /app
COPY api api
COPY packages/utils packages/utils
COPY package.json package.json
COPY yarn.lock yarn.lock
RUN npm uninstall --global npm \
&& apk add build-base python2 --no-cache \
&& yarn --frozen-lockfile --production \
&& rm -r /opt/yarn* yarn.lock
USER node
ENTRYPOINT ["node", "-r", "esm", "api/server.js"]
EXPOSE 9999
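For reference, two approaches that are commonly suggested for this situation (a sketch under my own assumptions, not from the original post): either keep the app on its unprivileged port and publish it to 80/443 at runtime, or grant the node binary the CAP_NET_BIND_SERVICE capability so it can bind low ports as a non-root user.

# Option 1: publish the unprivileged port to the privileged host ports at runtime
# (image name is illustrative)
docker run -p 80:9999 -p 443:9999 my-api-image

# Option 2, in the Dockerfile: let node bind low ports directly as the node user
# (the libcap package must be installed on Alpine)
RUN apk add --no-cache libcap \
&& setcap 'cap_net_bind_service=+ep' /usr/local/bin/node

With option 2 the server can listen on 80/443 directly as the node user; with option 1 the container keeps listening on 9999 and Docker does the mapping, which is the more common pattern.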
Related
I am trying to dockerize both a frontend made with create-react-app and its express API backend.
The docker containers sit on ec2 instances. I have tried several tutorials and also threads here on stackoverflow, but I can't work out what is going wrong.
The error I get is:
Could not find an open port at ec2-x-xx-xx-xxx.eu-west-2.compute.amazonaws.com.
This refers to my .env file, which contains only
HOST= ec2-x-xx-xx-xxx.eu-west-2.compute.amazonaws.com
my Dockerfile for the front end is as follows:
FROM node:12-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --silent
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
whereas the backend's is:
FROM node:12-alpine as builder
WORKDIR /app
COPY package.json /app/package.json
RUN npm install
COPY . /app
EXPOSE 3001
CMD ["node", "app.js" ]
I need the backend to run on port 3001 and the frontend to run on 3000. The frontend is bound to the backend api via the proxy line in package.json, where I have placed:
"proxy":"http://ec2-x-xx-xx-xxx.eu-west-2.compute.amazonaws.com:3001",
Both containers build fine. However, running the backend with docker run -p 3001:3001 server and then the frontend with docker run -p 3000:3000 client produces the error:
Attempting to bind to HOST environment variable: ec2-x-xx-xx-xxx.eu-west-2.compute.amazonaws.com
If this was unintentional, check that you haven't mistakenly set it in your shell.
Learn more here: https://xxx.xx/xxx-advanced-config
Could not find an open port at ec2-x-xx-xx-xxx.eu-west-2.compute.amazonaws.com.
Network error message: listen EADDRNOTAVAIL: address not available 10.xx.0.xxx
I have tried running only the server side and then running npm start from my local machine, and that works. The problem seems to be Docker networking between the containers.
I also tried running the server side with the command
docker run -p 10.xx.0.xxx:3000:3000 client
to make sure the client pings the right ip address, however this didn't work.
Could anyone give me some direction please?
If you need more info on the source code, please just leave a comment; I didn't want to clutter the thread by making it longer than it already is.
Thank you
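(An editorial aside, not an answer from the thread: the error text above comes from create-react-app's development server, which treats the HOST variable as the network interface to bind, so pointing it at a public DNS name fails with EADDRNOTAVAIL. A commonly cited fix is to bind all interfaces instead:)

# .env for the CRA client - bind every interface rather than a public hostname
HOST=0.0.0.0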
I have a simple Node.js/Express app, and I set the listening port to 3000.
On my Dockerfile I expose port 3000 and run the container using -p 3000:3000.
I want to deploy my app on Heroku using my docker image, what is the proper way of doing it?
I am aware that in these cases we use process.env or specify a global variable in a .env file.
I list below my Dockerfile.
FROM node:10-alpine as builder
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
RUN apk --no-cache add python make g++
COPY package*.json ./
RUN npm install --only=production
# RUN npm ci --only=production
FROM node:10-alpine
WORKDIR /usr/src/app
COPY --from=builder node_modules node_modules
COPY . .
EXPOSE 3000
CMD [ "npm", "run", "start:prod" ]
What is the proper way of approaching the problem?
Plus, any suggestions for improving my Dockerfile are more than welcome.
When you deploy a web application on Heroku, the platform tells you which port is free through an environment variable, and you have to bind that port in your source code.
In your Dockerfile, remove EXPOSE 3000, since you cannot open a custom port.
In your source code, write something like const port = process.env.PORT || 3000.
That way, when you run your program locally without the $PORT env var set, it opens port 3000; on Heroku it opens whatever port $PORT specifies.
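A minimal sketch of that pattern, assuming a plain Express app (file and route names are illustrative):

// server.js - bind to Heroku's $PORT, falling back to 3000 for local runs
const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('ok'));

const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`listening on ${port}`));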
A common hurdle when deploying on Heroku is that your URLs no longer work. Locally you might use http://localhost:3000/, but on Heroku it is https://my-app.herokuapp.com/.
https can cause a bit of a headache if you have been working with http the whole time. Furthermore, hardcoding the port at the end of the hostname is going to cause problems, since Heroku itself maps the hostname to an IP address and port.
You can read about deploying with Docker on Heroku here:
https://devcenter.heroku.com/categories/deploying-with-docker
https://devcenter.heroku.com/articles/build-docker-images-heroku-yml
When you've made sure that your code is Heroku compatible, you can start the Heroku Docker deployment. The articles are very thorough, and I believe it is better for you to read them than for me to just copy-paste what they say.
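For orientation, a minimal heroku.yml of the kind the second article describes (a sketch; the assumption is that your Dockerfile sits at the repo root and defines the web process):

# heroku.yml - build the web dyno from the repository's Dockerfile
build:
  docker:
    web: Dockerfile

After committing this file, heroku stack:set container tells Heroku to build from it instead of using a buildpack.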
I am trying to run an Angular application in development mode inside a Docker container. When I run docker-compose build it works correctly, but when I try to bring the container up I get the error below:
ERROR: for sypgod Cannot start service sypgod: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"npm\": executable file not found in $PATH
The real problem is that it doesn't recognize the command npm serve, but why?
The setup would be below:
Docker container (Nginx Reverse proxy -> Angular running in port 4000)
I know that there are better ways of deploying this, but at the moment I need this setup for personal reasons.
Dockerfile:
FROM node:10.9
COPY package.json package-lock.json ./
RUN npm ci && mkdir /angular && mv ./node_modules ./angular
WORKDIR /angular
RUN npm install -g @angular/cli
COPY . .
FROM nginx:alpine
COPY toborFront.conf /etc/nginx/conf.d/
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
CMD ["npm", "serve", "--port 4000"]
NginxServerSite
server{
listen 80;
server_name sypgod;
location / {
proxy_read_timeout 5m;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://localhost:4000/;
}
}
Docker Compose file(the important part where I have the problem)
sypgod: # The name of the service
container_name: sypgod # Container name
build:
context: ../angular
dockerfile: Dockerfile # Location of our Dockerfile
The image that's finally getting run is this:
FROM nginx:alpine
COPY toborFront.conf /etc/nginx/conf.d/
EXPOSE 8080
CMD ["npm", "serve", "--port 4000"]
The first stage doesn't have any effect (you could COPY --from=... files out of it), and if there are multiple CMDs, only the last one has an effect. Since you're running this in a plain nginx image, there's no npm command, leading to the error you see.
I'd recommend using Node on the host for a live development environment. When you've built and tested your application and are looking to deploy it, then use Docker if that's appropriate. In your Dockerfile, run ng build in the first stage to compile the application to static files, add a COPY --from=... in the second stage to get the built application into the Nginx image, and delete all the CMD lines (nginx has an appropriate default CMD). @VikramJakhar's answer has a more complete Dockerfile showing this.
It looks like you might be trying to run both Nginx and the Angular development server in Docker. If that's your goal, you need to run these in two separate containers (a compose sketch follows this list). To do this:
Split this Dockerfile into two. Put the CMD ["npm", "serve"] line at the end of the first (Angular-only) Dockerfile.
Add a second block in the docker-compose.yml file to run the second container. The backend npm serve container doesn't need to publish ports:.
Change the host name of the backend server in the Nginx config from localhost to the Docker Compose name of the other container.
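A minimal docker-compose.yml sketch of that two-container layout (service names, build contexts, and the dev-server command are assumptions, not from the question):

version: "3"
services:
  angular:                  # Angular dev server, reachable as "angular" on the compose network
    build:
      context: ../angular
    command: npm start      # dev server listening on 4000 inside the container
  sypgod:                   # Nginx reverse proxy, the only published service
    container_name: sypgod
    build:
      context: ../nginx
    ports:
      - "80:80"

The Nginx config's proxy_pass would then point at http://angular:4000/ instead of localhost.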
It would appear that npm can't be accessed from the container.
Try defining where it tries to execute it from:
docker run -v "$PWD":/usr/src/app -w /usr/src/app node:10.9 npm serve --port 4000
source: https://gist.github.com/ArtemGordinsky/b79ea473e8bc6f67943b
Also make sure that npm is installed on the computer running the docker container.
You can do something like below
### STAGE 1: Build ###
# We label our stage as ‘builder’
FROM node:alpine as builder
RUN apk --no-cache --virtual build-dependencies add \
git \
python \
make \
g++
RUN mkdir -p /ng-app/dist
WORKDIR /ng-app
COPY package.json package-lock.json ./
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm install
COPY . .
## Build the angular app in production mode and store the artifacts in dist folder
RUN npm run ng build -- --prod --output-path=dist
### STAGE 2: Setup ###
FROM nginx:1.14.1-alpine
## Copy our default nginx config
COPY toborFront.conf /etc/nginx/conf.d/
## Remove default nginx website
RUN rm -rf "/usr/share/nginx/html/*"
## From ‘builder’ stage copy over the artifacts in dist folder to default nginx public folder
COPY --from=builder /ng-app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
If you have Portainer.io installed for managing your Docker setup, you can open the console for a particular container from a browser.
This is useful if you want to run a reference command like "npm list" to show what versions of dependencies have been loaded.
I found this useful for diagnosing an issue where an update to a dependency had broken something: the application worked fine in a test environment, but the Docker build had installed newer minor versions that broke it.
I have Docker installed on an Ubuntu 16.04 VM, and I'm working on a personal project using Node.js; the Docker image is built from the Dockerfile below.
The container runs, but when I try to access it with the VM's public IP, it's not accessible.
I tried curl and, after a very long time, I get curl: (52) Empty reply from server.
The port is mapped correctly, and there are no firewall issues either.
Here is my Dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN apk update && apk upgrade \
&& apk add --no-cache git \
&& apk --no-cache add --virtual builds-deps build-base python \
&& npm install -g nodemon cross-env eslint npm-run-all node-gyp \
node-pre-gyp && npm install \
&& npm rebuild bcrypt --build-from-source
RUN npm install --production --silent && mv node_modules ../
COPY . .
RUN pwd
EXPOSE 3001
CMD npm start
docker ps
CONTAINER ID   IMAGE    COMMAND                  CREATED      STATUS      PORTS                    NAMES
8588419b40c4   xxx:v1   "/bin/sh -c 'npm sta…"   2 days ago   Up 2 days   0.0.0.0:3000->3001/tcp   youthful_roentgen
Let xxx:v1 be the image name built by the Dockerfile you provided.
If you want to access your app via your host (curl localhost:3001), then you should run:
docker run -p 3001:3000 xxx:v1
This command binds port 3000 in your container to your port 3001 on your host (IIRC, 3000 is the default port used by npm start).
You should then be able to access localhost:3001 from your host with curl.
Note that the EXPOSE directive in the Dockerfile does not automatically publish a port when you docker run. It's just an indication that your container listens on the port you EXPOSEd. Here, your EXPOSE directive is wrong; you should have written:
EXPOSE 3000
because only port 3000 is listened on inside the container (3000 being the default port used by npm start). Which port you bind on the host (or none at all) is specified at runtime only.
If you don't want to access your app via localhost, but only via the container's IP, there is no need to bind the port (no -p). You only need to do curl <container_ip>:3000 from your host.
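To find that container IP, you can query Docker directly (youthful_roentgen is the container name from the docker ps output above):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' youthful_roentgen
curl http://<container_ip>:3000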
I have tried to get this working, but I am struggling to expose the Node app on port 80. I also want to be sure everything else is secure.
UPDATE:
Trying to be more clear...
I am using this Dockerfile
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8888
CMD [ "node", "index.js" ]
Then I use this command to start the container
$ docker run -p 8888:80 christmedical/christ-medical-server
From my Docker host's public IP I get nothing.
The docker run reference documentation, in the expose port section, says:
-p=[] : Publish a container's port or a range of ports to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
If you want to access it on port 80 of your host, this should be your command:
docker run -p 80:8888 christmedical/christ-medical-server
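As a quick check after starting the container that way (assuming index.js really listens on 8888, as the EXPOSE 8888 line suggests):

docker run -d -p 80:8888 christmedical/christ-medical-server
docker port $(docker ps -lq)   # should print: 8888/tcp -> 0.0.0.0:80
curl http://localhost/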