docker run on container but not run on server [duplicate] - node.js

This question already has answers here:
docker running but can't access on the server
(2 answers)
Closed 3 years ago.
I created a Jenkinsfile, a Dockerfile, and a Dockerfile.test to CI/CD my server API from GitHub. I build it on Jenkins, the build succeeds, and my container runs as well.
The Jenkinsfile stages test and then deploy the server API, using Docker for the container.
I also run Jenkins itself in Docker, via docker-compose.
here is my Dockerfile on my ubuntu server
FROM jenkins/jenkins:lts
USER root
and here is my docker-compose on ubuntu server
version: '3'
services:
  jenkins:
    build: .
    container_name: jenkins
    privileged: true
    restart: always
    ports:
      - 8080:8080
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
  registry:
    image: registry
    container_name: registry
    restart: always
    ports:
      - 5000:5000
For the setup above I followed these instructions. I then ran it and logged in to my Jenkins server.
My Jenkinsfile looks something like this:
node {
    try {
        stage('Checkout') {
            checkout scm
        }
        stage('Environment') {
            sh 'git --version'
            echo "Branch: ${env.BRANCH_NAME}"
            sh 'docker -v'
            sh 'printenv'
        }
        stage('Build Docker test') {
            sh 'docker build -t employee-test -f Dockerfile.test --no-cache .'
        }
        stage('Docker test') {
            sh 'docker run --rm employee-test'
        }
        stage('Clean Docker test') {
            sh 'docker rmi employee-test'
        }
        stage('Deploy') {
            if (env.BRANCH_NAME == 'master') {
                sh 'docker build -t employee --no-cache .'
                sh 'docker run -d -p 4000:4000 -e DB_USERNAME=admin -e DB_PASSWORD=adminxxx -e DB_NAME=employee employee'
            }
        }
    }
    catch (err) {
        throw err
    }
}
and here is the Dockerfile for those jobs:
FROM node:carbon
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install autoconf automake libtool nasm make pkg-config git apt-utils
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm -v
RUN node -v
COPY ./server/ /usr/src/app
RUN npm install
EXPOSE 4000
ENV PORT 4000
ENV DB_USERNAME admin
ENV DB_PASSWORD adminxxx
ENV DB_NAME employee
CMD [ "npm", "run", "dev" ]
The Jenkins job builds successfully, and in the last stage you can see it runs the image in a Docker container on my Ubuntu server. After it finishes, I tried to call the server API from Postman at http://ip-server:4000 , but there was no response, even though I did open the TCP port in my Ubuntu server's firewall.
How can I solve this? After the Jenkins job finishes, I want to be able to call that server API from Postman to test it.

The configuration looks good, so it seems that docker-compose is caching the volumes. Please run these commands to clean everything up:
docker rm employee
docker image rm employee
docker-compose down -v
docker-compose up
Please make sure that the Jenkins log shows this:
'docker run -d -p 4000:4000 -e DB_USERNAME=admin -e DB_PASSWORD=adminxxx -e DB_NAME=employee employee'
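After the deploy stage runs, it can also help to verify from the host that the published port actually answers before reaching for Postman. A minimal sketch, assuming curl is available on the host (the function name, URL, and retry count are placeholders, not part of the original setup):

```shell
# wait_for_api: poll a URL until it responds, or give up after N attempts.
wait_for_api() {
    url="$1"
    retries="${2:-30}"
    i=0
    until curl -fsS "$url" >/dev/null 2>&1; do
        i=$((i + 1))
        if [ "$i" -ge "$retries" ]; then
            echo "API at $url not reachable after $retries attempts"
            return 1
        fi
        sleep 1
    done
    echo "API at $url is up"
}
```

Something like wait_for_api http://ip-server:4000 30 could even be added as a final sh step in the Deploy stage, so the pipeline itself fails when the API is not reachable.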

Related

How to connect to docker daemon using docker in docker without privilege mode

I'm new to Bitbucket Pipelines and am trying to use it the way I use GitLab CI.
I've hit an issue while trying to build Docker inside a Docker container using dind:
error during connect: Post "http://docker:2375/v1.24/auth": dial tcp: lookup docker on 100.100.2.136:53: no such host
The error above appeared, and from my research I believe it comes from the Docker daemon.
Since Atlassian has stated there is no intention to support privileged mode, I'm considering other options.
bitbucket-pipelines.yml
definitions:
  services:
    docker: # can only be used with a self-hosted runner
      image: docker:23.0.0-dind
pipelines:
  default:
    - step:
        name: 'Login'
        runs-on:
          - 'self.hosted'
        services:
          - docker
        script:
          - echo $ACR_REGISTRY_PASSWORD | docker login -u $ACR_REGISTRY_USERNAME registry-intl.ap-southeast-1.aliyuncs.com --password-stdin
    - step:
        name: 'Build'
        runs-on:
          - 'self.hosted'
        services:
          - docker
        script:
          - docker build -t $ACR_REGISTRY:latest .
          - docker tag $(docker images | awk '{print $1}' | awk 'NR==2') $ACR_REGISTRY:$CI_PIPELINE_ID
          - docker push $ACR_REGISTRY:$CI_PIPELINE_ID
    - step:
Dockerfile
FROM node:14.17.0
RUN mkdir /app
#working DIR
WORKDIR /app
# Copy Package Json File
COPY ["package.json","./"]
# Expose port 80
EXPOSE 80
# Install git
RUN npm install git
# Install Files
RUN npm install
# Copy the remaining sources code
COPY . .
# Run prisma db
RUN npx prisma db pull
# Run prisma client
RUN npm i @prisma/client
# Build
RUN npm run build
CMD [ "npm","run","dev","node","build/server.ts"]

How to dockerize aspnet core application and postgres sql with docker compose

I'm integrating a Dockerfile into my previous sample project so that everything is automated for easy code sharing and execution. I have a dockerizing problem and have tried to solve it, to no avail. I hope someone can help. Thank you. Here is my problem:
My repository: https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate
Branch for dockerizing (my problem is on macOS):
https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate/pull/1
Docker file:
# syntax=docker/dockerfile:1
FROM node:16.11.1
FROM mcr.microsoft.com/dotnet/sdk:5.0
RUN apt-get update && \
apt-get install -y wget && \
apt-get install -y gnupg2 && \
wget -qO- https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y build-essential nodejs
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
RUN dotnet tool restore
EXPOSE 80/tcp
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
Docker compose:
version: "3.9"
services:
  web:
    container_name: backendnet5
    build: .
    ports:
      - "5005:5000"
    depends_on:
      - database
  database:
    container_name: postgres
    image: postgres:latest
    ports:
      - "5433:5433"
    environment:
      - POSTGRES_PASSWORD=admin
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
Commands:
docker-compose build
docker compose up
Problems:
I guess the problem is that I cannot run the command line dotnet ef database update for my migrations. Many thanks for any help.
In your appsettings.json file, you say that the database hostname is 'localhost'. In a container, localhost means the container itself.
Docker compose creates a bridge network where you can address each container by its service name.
Your connection string is
User ID=postgres;Password=admin;Host=localhost;Port=5432;Database=sample_db;Pooling=true;
but should be
User ID=postgres;Password=admin;Host=database;Port=5432;Database=sample_db;Pooling=true;
You also map port 5433 on the database to the host, but postgres listens on port 5432. If you want to map it to port 5433 on the host, the mapping in the docker compose file should be 5433:5432. This is not what's causing your issue though. This just prevents you from connecting to the database from the host, if you need to do that.
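A compose ports entry is always HOST:CONTAINER, which is easy to misread. A tiny sketch with hypothetical helper names that splits a mapping into its two sides:

```shell
# docker-compose "ports" entries are HOST:CONTAINER.
host_port()      { echo "${1%%:*}"; }   # left side: the port published on the host
container_port() { echo "${1##*:}"; }   # right side: the port inside the container
```

Read this way, the file above publishes host port 5433 to container port 5433, where nothing listens; since postgres listens on 5432, the mapping would need to be 5433:5432 for host access.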

npm install package through running node container

I've followed the steps in the node.js documentation for creating a Dockerfile. I'm trying to run the command docker exec -it mynodeapp /bin/bash in order to go inside the container and install a new package via npm, but I get the following error
OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
Any ideas what I'm doing wrong?
For reference, this is what my docker-compose file and Dockerfile look like:
FROM node:latest
RUN mkdir /app
WORKDIR /app
RUN npm install -g nodemon
COPY package.json package.json
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
and
version: '3'
services:
  nodejs:
    container_name: mynodeapp
    build: .
    command: nodemon --inspect server.js
    ports:
      - '5000:8080'
    volumes:
      - '.:/app'
    networks:
      - appnet
networks:
  appnet:
    driver: 'bridge'
Change docker exec mynodeapp -it /bin/bash to docker exec -it mynodeapp /bin/sh
According to docker documentation the correct syntax is the following:
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
-i and -t are the OPTIONS
mynodeapp is the CONTAINER
/bin/bash is the COMMAND to run inside the container
The other problem is that there is no bash shell inside the container, so use the sh shell instead.
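Whether bash exists depends on the base image: Debian-based node images ship it, while alpine-based ones only have sh. One hedged approach is to probe for a shell before exec'ing into it; a sketch (pick_shell is a hypothetical helper, meant to run inside the container):

```shell
# Return the first interactive shell that exists: bash if present, else sh.
pick_shell() {
    for s in /bin/bash /bin/sh; do
        if [ -x "$s" ]; then
            echo "$s"
            return 0
        fi
    done
    return 1
}
```

The one-liner equivalent from the host would be something like docker exec -it mynodeapp sh -c 'exec "$(command -v bash || echo sh)"', which falls back to sh when bash is absent.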

docker running but can't access on the server

I have Jenkins job stages to test and deploy my Node.js app with Docker. I run the container on port 3000, but when I browse to ip-server:3000 nothing responds,
even though my containers are running.
here is my Jenkinsfile
node {
    try {
        stage('Checkout') {
            checkout scm
        }
        stage('Environment') {
            sh 'git --version'
            echo "Branch: ${env.BRANCH_NAME}"
            sh 'docker -v'
            sh 'printenv'
        }
        stage('Build Docker test') {
            sh 'docker build -t crud-test -f Dockerfile.test --no-cache .'
        }
        stage('Docker test') {
            sh 'docker run --rm crud-test'
        }
        stage('Clean Docker test') {
            sh 'docker rmi crud-test'
        }
        stage('Deploy') {
            if (env.BRANCH_NAME == 'master') {
                sh 'docker build -t crud --no-cache .'
                sh 'docker run -d -p 3000:3000 -e DB_USERNAME=myusername -e DB_PASSWORD=12345678 -e DB_NAME=employee crud'
            }
        }
    }
    catch (err) {
        throw err
    }
}
Dockerfile:
# Extending image
FROM node:carbon
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install autoconf automake libtool nasm make pkg-config git apt-utils
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Versions
RUN npm -v
RUN node -v
COPY ./server/ /usr/src/app
RUN npm install
# Port to listener
EXPOSE 3000
# Environment variables
ENV PORT 4000
ENV DB_USERNAME myusername
ENV DB_PASSWORD 12345678
ENV DB_NAME employee
# Main command
CMD [ "npm", "run", "dev" ]
I run Jenkins via docker-compose on my Ubuntu server. Is there something I'm missing or doing wrong?
My goal is to use Jenkins to test my server and, after the tests pass, deploy my Node.js app on the Ubuntu server, so that once the build finishes I can browse my server API at ip-server:3000 to make sure it works.
Also, with the pipeline above, is it correct that the job runs every time I push to master, so my server API is updated without clicking Build Now in Jenkins? If not, how do I configure that?
I also don't know how to hide my environment variables, which is why they appear in my Jenkinsfile stages; a multibranch pipeline doesn't have any option for env parameters.
I can see your env variable PORT is 4000. Is that the port your server listens on? If so, you have to change your docker run command to map container port 4000 so it is reachable on port 3000 on your host, or change the PORT env variable to 3000.
The thing is that most probably Jenkins is spawning a container to run your build.
If that build container does not use the Docker engine from the host but uses "docker-in-docker", your docker run command exposes port 3000 not on your host but on the container running dind.
Try running docker ps to see which containers are running on your host and whether the tested container is among them. If it is, examine it with docker inspect to see exactly what it does, which ports it exposes, etc.
You can also inspect Docker's 'nat' network to see whether port 3000 is really forwarded to the host.
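One way to avoid the 3000-vs-4000 mismatch described above is to derive both the -p mapping and the PORT variable from a single value, so the published port always matches what the app listens on. A sketch under the assumption that the app honors the PORT env variable (APP_PORT and build_run_cmd are illustrative names, not from the original pipeline):

```shell
# Keep the published port and the app's listening port in one place.
APP_PORT=3000   # must match what the app actually binds to
build_run_cmd() {
    # Publish and listen on the same port, overriding ENV PORT from the Dockerfile.
    echo "docker run -d -p ${APP_PORT}:${APP_PORT} -e PORT=${APP_PORT} crud"
}
build_run_cmd
```

In the Deploy stage, the sh step would then run the command this emits, and changing APP_PORT in one place keeps the firewall rule, the mapping, and the app in sync.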

running docker container is not reachable by browser

I've started working with Docker. I dockerized a simple node.js app, but I'm not able to access my container from the outside world (i.e. from a browser).
Stack:
node.js app with 4 endpoints (I used hapi server).
macOS
docker desktop community version 2.0.0.2
Here is my dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install -g nodemon
COPY . .
EXPOSE 8000
CMD ["npm","run", "start-server"]
I did the following steps:
I ran, from the command line in my working directory:
docker image build -t ares-maros .
docker container run -d --name rest-api -p 8000:8000 ares-maros
I checked via docker container ps that the container is running.
I opened the browser and typed 0.0.0.0:8000 (I also tried 127.0.0.1:8000 and localhost:8000), with no response.
So the running Docker container is not reachable from the browser.
I also went into the container (docker exec -it 81b3d9b17db9 sh) and tried to reach my node app from inside the container via wget/curl, and that works; I get responses from all the node.js endpoints.
Where could the problem be? Maybe my Mac is blocking the connection?
Thanks for the help.
Please check the order of the parameters in the following command:
docker container run -d --name rest-api -p 8000:8000 ares-maros
I faced a similar issue: I was using -p port:port at the end of the command. Simply moving it before the image name, right after docker run, solved it for me.
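The reason order matters is that docker run stops parsing options at the first non-option word: that word becomes the image, and everything after it is passed to the container as its command. A toy parser (not Docker's actual code, just an illustration of the rule) makes this visible:

```shell
# Toy illustration: options before the image are options; anything after
# the image name is treated as the command to run in the container.
parse_run() {
    opts=""
    image=""
    while [ $# -gt 0 ]; do
        case "$1" in
            -p|--name) opts="$opts $1 $2"; shift 2 ;;  # options that take a value
            -*)        opts="$opts $1"; shift ;;
            *)         image="$1"; shift; break ;;
        esac
    done
    echo "image=$image cmd=$*"
}

parse_run -d -p 8000:8000 ares-maros   # image=ares-maros cmd=
parse_run ares-maros -p 8000:8000      # image=ares-maros cmd=-p 8000:8000
```

In the second call, -p 8000:8000 is no longer a port mapping at all; it would be handed to the container as its command, which is why nothing gets published.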