I have stages in my Jenkins job to test and deploy my Node.js app with Docker. I run the container on port 3000, but when I browse to ip-server:3000 nothing loads, even though my containers are running.
Here is my Jenkinsfile:
node {
    try {
        stage('Checkout') {
            checkout scm
        }
        stage('Environment') {
            sh 'git --version'
            echo "Branch: ${env.BRANCH_NAME}"
            sh 'docker -v'
            sh 'printenv'
        }
        stage('Build Docker test') {
            sh 'docker build -t crud-test -f Dockerfile.test --no-cache .'
        }
        stage('Docker test') {
            sh 'docker run --rm crud-test'
        }
        stage('Clean Docker test') {
            sh 'docker rmi crud-test'
        }
        stage('Deploy') {
            if (env.BRANCH_NAME == 'master') {
                sh 'docker build -t crud --no-cache .'
                sh 'docker run -d -p 3000:3000 -e DB_USERNAME=myusername -e DB_PASSWORD=12345678 -e DB_NAME=employee crud'
            }
        }
    }
    catch (err) {
        throw err
    }
}
Dockerfile:
# Extending image
FROM node:carbon
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install autoconf automake libtool nasm make pkg-config git apt-utils
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Versions
RUN npm -v
RUN node -v
COPY ./server/ /usr/src/app
RUN npm install
# Port to listen on
EXPOSE 3000
# Environment variables
ENV PORT 4000
ENV DB_USERNAME myusername
ENV DB_PASSWORD 12345678
ENV DB_NAME employee
# Main command
CMD [ "npm", "run", "dev" ]
I run Jenkins via docker-compose on my Ubuntu server.
Is there something I'm missing or doing wrong?
My goal is to use Jenkins to test my server and, after the tests pass, to deploy my Node.js app on the Ubuntu server,
so that after the build finishes on Jenkins, I can browse my server API at ip-server:3000 to make sure it works.
Also, with the above pipeline, is it correct that every time I push to master my server API will be updated, without clicking Build Now on Jenkins? If not, how do I configure that?
I also don't know how to hide my environment variables, so they show up in my Jenkinsfile stages, because a multibranch pipeline doesn't have any option for environment parameters.
I can see your env variable PORT is 4000. Is that the port your server listens on? If so, you have to change the docker run command to map container port 4000 to port 3000 on your host, or change your PORT env variable to 3000.
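For example, a minimal sketch of both options (the image name and credentials are taken from your pipeline above):
# Option 1: map host port 3000 to container port 4000, where the app listens
docker run -d -p 3000:4000 -e DB_USERNAME=myusername -e DB_PASSWORD=12345678 -e DB_NAME=employee crud
# Option 2: in the Dockerfile, make the app listen on 3000 so the existing -p 3000:3000 mapping works
ENV PORT 3000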
The thing is that most probably Jenkins is spawning a container to run your build.
If that build container does not use the Docker engine from the host but uses "docker-in-docker", your docker run command exposes port 3000 not on your host but on the container running dind on that host.
Try running docker ps and see what containers are running on your host, and check whether the tested container is there. If it is, examine with docker inspect what exactly it does, what ports it exposes, etc.
You can also inspect Docker's 'nat' network to see if port 3000 is really forwarded to the host.
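A rough sketch of those checks; docker inspect's --format flag and the Go template below are standard, and the container name is whatever docker ps reports:
# list running containers and their published ports
docker ps
# show the port bindings of one container
docker inspect --format '{{json .NetworkSettings.Ports}}' <container_name>
# list networks, then inspect the one the container is attached to
docker network ls
docker network inspect bridge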
Related
I'm trying to bring up my project on a Virtual Private Server. I've installed Docker and Portainer and I can start the project, but it's not running on any port. I set it to run on port 3000, but when I put IP_Of_My_VPS:3000 in the browser, nothing happens. I'm new to Docker and every configuration I did was based on my own searches.
This screenshot shows that the image is not running on any port.
This other screenshot shows that my application is running (but I don't know how to access it).
My Docker config:
FROM node:12-alpine
RUN apk --no-cache add curl
RUN apk --no-cache add git
RUN git --version
WORKDIR /app
COPY package*.json ./
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
RUN npm ci
COPY . .
RUN npm run build && rm -rf src
HEALTHCHECK --interval=30s --timeout=3s --start-period=30s \
CMD curl -f http://localhost:3000/health || exit 1
EXPOSE 3000
CMD ["node", "./dist/main.js"]
When you bring the Docker container up, perform port forwarding.
For example:
docker run -p <your_forwarding_port>:3000 ...
or in docker-compose:
# docker-compose.yaml
ports:
  - "<your_forwarding_port>:3000"
You can see the references: docker container port, docker-compose ports.
I created a Jenkinsfile, Dockerfile, and Dockerfile.test to CI and CD my server API from GitHub. I build it on Jenkins, the build was successful, and my Docker container runs as well.
In the Jenkinsfile stages, I test and deploy the server API,
using Docker for the container.
I also run Jenkins itself in Docker,
using docker-compose.
Here is my Dockerfile on my Ubuntu server:
FROM jenkins/jenkins:lts
USER root
and here is my docker-compose.yml on the Ubuntu server:
version: '3'
services:
  jenkins:
    build: .
    container_name: jenkins
    privileged: true
    restart: always
    ports:
      - 8080:8080
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
  registry:
    image: registry
    container_name: registry
    restart: always
    ports:
      - 5000:5000
For what I did above, I followed this instruction.
Then I tried to run it and log in to my Jenkins server.
My Jenkinsfile looks something like this:
node {
    try {
        stage('Checkout') {
            checkout scm
        }
        stage('Environment') {
            sh 'git --version'
            echo "Branch: ${env.BRANCH_NAME}"
            sh 'docker -v'
            sh 'printenv'
        }
        stage('Build Docker test') {
            sh 'docker build -t employee-test -f Dockerfile.test --no-cache .'
        }
        stage('Docker test') {
            sh 'docker run --rm employee-test'
        }
        stage('Clean Docker test') {
            sh 'docker rmi employee-test'
        }
        stage('Deploy') {
            if (env.BRANCH_NAME == 'master') {
                sh 'docker build -t employee --no-cache .'
                sh 'docker run -d -p 4000:4000 -e DB_USERNAME=admin -e DB_PASSWORD=adminxxx -e DB_NAME=employee employee'
            }
        }
    }
    catch (err) {
        throw err
    }
}
and here is my Dockerfile for those jobs:
FROM node:carbon
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install autoconf automake libtool nasm make pkg-config git apt-utils
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm -v
RUN node -v
COPY ./server/ /usr/src/app
RUN npm install
EXPOSE 4000
ENV PORT 4000
ENV DB_USERNAME admin
ENV DB_PASSWORD adminxxx
ENV DB_NAME employee
CMD [ "npm", "run", "dev" ]
The Jenkins job builds it successfully, and in the last stage you can see that it runs in a Docker container on my Ubuntu server. After it finished, I tried to call the server API from Postman at http://ip-server:4000, but there was no response, even though I did set up the TCP firewall rules on my Ubuntu server.
How can I solve this? What I want is that after the Jenkins job finishes, I can call that server API from Postman to test it.
The configuration looks good, so it seems that docker-compose is caching the volumes. Please run these commands to clean everything up:
docker rm employee
docker image rm employee
docker-compose down -v
docker-compose up
Then make sure that Jenkins logs this:
'docker run -d -p 4000:4000 -e DB_USERNAME=admin -e DB_PASSWORD=adminxxx -e DB_NAME=employee employee'
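To verify the deploy stage on the host, a quick check might look like this (the ancestor filter assumes the employee image tag from the pipeline above):
# confirm the container is up and shows 0.0.0.0:4000->4000/tcp under PORTS
docker ps --filter ancestor=employee
# or print the port bindings of a specific container
docker port <container_id>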
When I start my docker container with:
docker run -it -d -p 8081:8080 --name ${APP_CONTAINER_NAME} ${APP_IMAGE}
I can access my web application just fine in my browser on: localhost:8081
But if I instead run it with the two volumes below:
docker run -it -d -p 8081:8080 -v ${PWD}:/app -v /app/node_modules --name ${APP_CONTAINER_NAME} ${APP_IMAGE}
The port mapping is ignored - I cannot access it at localhost:8081 but I can access it at localhost:8080.
My Dockerfile has:
FROM node:8-alpine
RUN apk update && apk add bash
RUN npm install -g http-server
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
Why does adding the volumes to the second docker run command ignore the port mapping from 8081 to 8080?
As suggested below, running without -d (but with volumes):
docker run -it -p 8081:8080 -v ${PWD}:/app -v /app/node_modules --name ${APP_CONTAINER_NAME} ${APP_IMAGE}
gives:
Starting up http-server, serving dist
Available on:
http://127.0.0.1:8080
http://172.17.0.2:8080
Hit CTRL-C to stop the server
But I cannot access it on localhost:8080 or localhost:8081 even though the container is indeed running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
603b1bf02d58 app-image "http-server dist" 11 seconds ago Up 5 seconds 0.0.0.0:8081->8080/tcp app-container
When I instead run it without volumes but still map to 8081 it works:
Starting up http-server, serving dist
Available on:
http://127.0.0.1:8080
http://172.17.0.2:8080
Hit CTRL-C to stop the server
and I can access it on localhost:8081. So something in the application must be messed up when adding the volumes, I'm just not sure what. I have also tried to run:
docker volume prune
before starting the container but it has no effect. Any ideas why creating the volumes prevents the application from being accessed?
I have started working with Docker and dockerized a simple Node.js app, but I'm not able to access my container from the outside world (meaning from a browser).
Stack:
Node.js app with 4 endpoints (I used the hapi server)
macOS
Docker Desktop Community version 2.0.0.2
Here is my Dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install -g nodemon
COPY . .
EXPOSE 8000
CMD ["npm","run", "start-server"]
I did the following steps:
I ran from the command line, in my working directory:
docker image build -t ares-maros .
docker container run -d --name rest-api -p 8000:8000 ares-maros
I checked whether the container is running via docker container ps.
Here is the result:
- the container is running
I opened the browser and typed 0.0.0.0:8000 (I also tried 127.0.0.1:8000 and localhost:8000).
Result:
the running Docker container is not reachable from the browser.
I also went into the container by typing docker exec -it 81b3d9b17db9 sh and tried to reach my node app inside the container via wget/curl, and that works: I get responses from all Node.js endpoints.
Where could the problem be? Maybe my Mac is blocking the connection?
Thanks for the help.
Please check the order of the parameters in the following command:
docker container run -d --name rest-api -p 8000:8000 ares-maros
I faced a similar issue. I was using -p port:port at the end of the command; simply moving it to right after docker run solved it for me.
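The underlying rule is that docker run parses options only before the image name; anything after the image is passed to the container as its command. A minimal illustration with the image from this question:
# works: -p is parsed as an option of docker run
docker container run -d -p 8000:8000 --name rest-api ares-maros
# broken: -p comes after the image, so it is handed to the container as arguments
docker container run -d --name rest-api ares-maros -p 8000:8000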
I have Docker installed on an Ubuntu 16.04 VM and I'm working on a personal project using Node.js; the Docker image is built from the Dockerfile below.
The container runs, but when I try to access it via the VPS's public IP, it's not accessible.
I tried curl and, after a very long time, I get curl: (52) empty reply from server.
The port is mapped correctly and there are no firewall issues either.
Here is my Dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN apk update && apk upgrade \
    && apk add --no-cache git \
    && apk --no-cache add --virtual builds-deps build-base python \
    && npm install -g nodemon cross-env eslint npm-run-all node-gyp node-pre-gyp \
    && npm install \
    && npm rebuild bcrypt --build-from-source
RUN npm install --production --silent && mv node_modules ../
COPY . .
RUN pwd
EXPOSE 3001
CMD npm start
docker ps
CONTAINER ID   IMAGE    COMMAND                  CREATED      STATUS      PORTS                    NAMES
8588419b40c4   xxx:v1   "/bin/sh -c 'npm sta…"   2 days ago   Up 2 days   0.0.0.0:3000->3001/tcp   youthful_roentgen
Let xxx:v1 be the image name built by the Dockerfile you provided.
If you want to access your app via your host (curl localhost:3001), then you should run:
docker run -p 3001:3000 xxx:v1
This command binds port 3000 in your container to port 3001 on your host (IIRC, 3000 is the default port used by npm start).
You should then be able to access localhost:3001 from your host with curl.
Note that the EXPOSE directive in the Dockerfile does not automatically publish a port when running docker run. It's just an indication that your container listens on the port you EXPOSEd. Here, your EXPOSE directive is wrong; you should have written:
EXPOSE 3000
because only port 3000 is listened on in the container (3000 is the default port used by npm). Which port you choose to bind on the host (or not) is specified only at runtime.
If you don't want to access your app via localhost but only via the container's IP, there is no need to bind the port (no -p). You only need to run curl <container_ip>:3000 from your host.
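If you go that route, one way to find the container's IP is docker inspect with a format string (the container name below comes from the docker ps output above):
# print the container's IP address on its attached network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' youthful_roentgen
# then hit the app directly on the container port
curl http://<container_ip>:3000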