I'm pretty new to Docker and Jenkins, but I wanted to see if I could get a Node app automatically deployed and running on my Raspberry Pi. In an ideal world, I'd like Jenkins to pull code down from GitHub and use a Jenkinsfile and Dockerfile to build and run the Docker image (hopefully this is possible).
Jenkinsfile:
pipeline {
    agent {
        dockerfile true
    }
    environment {
        CI = 'true'
        HOME = '.'
    }
    stages {
        stage('Install dependencies') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh './scripts/test'
            }
        }
        stage('Build Container') {
            steps {
                sh 'docker build -t test-app:${BUILD_NUMBER} .'
            }
        }
    }
}
Dockerfile:
# Create image based on the official Node image
FROM node:12
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package.json /usr/src/app
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app
# Expose the port the app runs on
EXPOSE 3000
# Serve the app
CMD ["npm", "start"]
However, when I try to run this in Jenkins, I get the following error: ../script.sh: docker: not found. This seems to be the case for any docker command. I also tried running a command starting with 'sudo' and it complained that sudo: not found. Is there a step missing, or am I trying to do something in an incorrect way? (NOTE: Docker is installed on the Raspberry Pi. I can log in as the jenkins user and execute docker commands. It just doesn't work through the web UI.) Any advice would be appreciated.
Thanks!
Apparently this section was breaking it:
agent {
    dockerfile true
}
When I set this:
agent any
the build finished, including the docker commands, without any issues. I guess I just don't understand how that piece works. Any explanations would be helpful!
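For what it's worth, my understanding is that agent { dockerfile true } tells Jenkins to build an image from the Dockerfile in the repo and then run every stage's steps inside a container started from that image; since the node:12 image contains neither the docker CLI nor sudo, those commands can't be found there, whereas agent any runs the steps directly on the node (the Pi), where Docker is installed. An untested sketch that keeps the containerized npm steps but runs the image build on the host would use per-stage agents, something like this:
pipeline {
    agent none
    stages {
        stage('Install and test') {
            // Runs inside a container built from the repo's Dockerfile (has node/npm)
            agent { dockerfile true }
            steps {
                sh 'npm install'
                sh './scripts/test'
            }
        }
        stage('Build Container') {
            // Runs directly on the Pi, where the docker CLI is available
            agent any
            steps {
                sh 'docker build -t test-app:${BUILD_NUMBER} .'
            }
        }
    }
}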
I'm running a docker-compose file on an EC2 instance; it contains MySQL and Jenkins images. I'm also running a Node.js app using the pm2 command, and when I run the Node.js server manually on the EC2 instance everything works properly.
But when I try to deploy the Node.js app using the Jenkins container, the latest code is not deployed. While debugging why, I found one interesting thing:
When I run the pipeline, all commands are executed inside the Jenkins container workspace as the jenkins user (container path: /var/jenkins_home/workspace/main).
My actual Node.js app is placed in /home/ubuntu/node-app, but when I try to deploy code using the Jenkins pipeline, the pipeline runs in a different path (/var/jenkins_home/workspace/main).
So my question is: is it possible to execute the pipeline deployment commands against the /home/ubuntu/node-app path rather than the Docker container path?
If changing the path is not possible, how can I point the Jenkins Docker container at the EC2 public IP?
I've shared the Jenkinsfile script and the docker-compose service definition for reference.
Jenkinsfile code:
stages {
    stage('Build') {
        steps {
            sh 'npm install && npm run build'
        }
    }
    stage('Deploy') {
        steps {
            sh "pwd"
            sh 'git pull origin main'
            sh 'pm2 stop server || true'
            sh 'npm install'
            sh 'npm run build'
            sh 'pm2 start build/server.js'
        }
    }
}
Jenkins docker-compose service definition:
jenkins:
  image: 'jenkins/jenkins:lts'
  container_name: 'jenkins'
  restart: always
  ports:
    - '8080:8080'
    - '50000:50000'
  volumes:
    - jenkins-data:/etc/gitlab
    - /var/run/docker.sock:/var/run/docker.sock
Edit 1:
I tried to change the path in the Jenkinsfile in the following way:
cd /home/ubuntu/node-app
I'm getting the following error:
/var/jenkins_home/workspace/main#tmp/durable-44039790/script.sh: 1: cd: can't cd to /home/ubuntu/node-app
Note: this path (/var/jenkins_home/workspace/main) is only visible on the EC2 machine after running the following command; normally this path does not exist on the EC2 machine:
docker exec -it jenkins bash
I also tried with the following fixed code:
stage('Deploy') {
    steps {
        sh "cd /home/ubuntu/node-app"
        sh 'git pull origin main'
        sh 'pm2 stop server || true'
        sh 'npm install'
        sh 'npm run build'
        sh 'pm2 start build/server.js'
    }
}
Finally, I found the solution to this issue.
The actual issue is that I hadn't created any slave agent for the Jenkins pipeline, so the pipeline jobs were running on the master agent, and the master agent's location was the Jenkins Docker container's filesystem. That's why the pipeline jobs were stored under /var/jenkins_home/workspace/main.
I then added a slave agent and specified a customWorkspace path ('/home/ubuntu/node-app') in the Jenkinsfile. Now my Jenkins pipeline works under the custom workspace /home/ubuntu/node-app.
My updated Jenkinsfile code:
pipeline {
    agent {
        node {
            label 'agent1'
            customWorkspace '/home/ubuntu/node-app'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install && npm run build'
            }
        }
        stage('Deploy') {
            steps {
                sh 'pm2 restart server'
            }
        }
    }
}
I have a build step in the Dockerfile that generates some files. Since I also need those files locally (when testing), I generate them not in Cloud Build itself but in the Dockerfile (a simple Node script that executes via npx). Locally this works perfectly fine and my Docker image does contain those generated files. But whenever I throw this Dockerfile into Cloud Build, it executes the script but does not keep the generated files in the resulting image. I also scanned the logs and so on but found no error (such as a permission error or something similar).
Is there any flag or something I am missing here that prevents my Dockerfile from generating those files and storing them in the image?
Edit:
The deployment pipeline is a trigger on a GitHub pull request that runs the cloudbuild.yaml in which the docker build command is located. Afterwards the image is pushed to Artifact Registry and to Cloud Run. On Cloud Run itself the files are gone. I can't check the steps in between, but when building locally the files are generated and they persist in the image.
Dockerfile:
FROM node:16
ARG ENVIRONMENT
ARG GOOGLE_APPLICATION_CREDENTIALS
ARG DISABLE_CLOUD_LOGGING
ARG DISABLE_CONSOLE_LOGGING
ARG GIT_ACCESS_TOKEN
WORKDIR /usr/src/app
COPY ./*.json ./
COPY ./src ./src
COPY ./build ./build
ENV ENVIRONMENT="${ENVIRONMENT}"
ENV GOOGLE_APPLICATION_CREDENTIALS="${GOOGLE_APPLICATION_CREDENTIALS}"
ENV DISABLE_CLOUD_LOGGING="${DISABLE_CLOUD_LOGGING}"
ENV DISABLE_CONSOLE_LOGGING="${DISABLE_CONSOLE_LOGGING}"
ENV PORT=8080
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
RUN npm install
RUN node ./build/generate-files.js
RUN rm -rf ./build
EXPOSE 8080
ENTRYPOINT [ "node", "./src/index.js" ]
Cloud Build config (the steps before and after are just the normal deploy-to-Cloud-Run steps):
...
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [ '-c', 'docker build --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
  secretEnv: ['GIT_ACCESS_TOKEN']
...
I figured it out. Somehow the build process did not fail when a RUN statement crashed. This led me to think there was no problem, when in fact it could not authorize my generation script. Adding --network=cloudbuild to the docker build command fixed the authorization problem.
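For reference, that amounts to adding the flag to the existing build step in the Cloud Build config, roughly like this:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [ '-c', 'docker build --network=cloudbuild --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
  secretEnv: ['GIT_ACCESS_TOKEN']
With --network=cloudbuild, RUN steps inside the Dockerfile can reach the Cloud Build metadata server and therefore authenticate, which is what the generation script needed.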
I have my Cypress tests, but I also have some services that work with a remote database and, most importantly, a Node.js server that I need for sending emails in case of errors. The structure of the project looks like this:
I have created a Dockerfile, but it doesn't seem to be working:
FROM cypress/included:9.4.1
WORKDIR /e2e-test
COPY package.json /e2e
RUN npm install
COPY . .
VOLUME [ "/e2e-test" ]
CMD ["npm", "run", "tests"]
So, what I need to do is mount this folder into the Docker container and then, via npm run tests, run the local server (NOTE: I need this local server only for nodemailer, because it works only on the server side).
Also, the npm run tests script looks like this; it runs the server and then the tests: "tests": "npm run dev & npx cypress run"
How can I implement this?
This can be done as follows:
Dockerfile:
FROM cypress/base:16.13.0
RUN mkdir /e2e-test
WORKDIR /e2e-test
COPY package.json ./
RUN npm install
COPY . .
ENTRYPOINT ["npm", "run", "tests"]
Then start it using the following command (in a Makefile, for example): docker run -it -v $PWD:/e2e-test -w /e2e-test e2e-tests
This way, the Node server can run and send emails, and at the same time we have the project mounted as a volume.
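For example, a minimal Makefile wrapper for the build and run steps might look like this (the image tag e2e-tests and the target name are arbitrary):
# Build the test image, then run the suite with the project directory mounted in
# (recipe lines must start with a tab)
test:
	docker build -t e2e-tests .
	docker run -it --rm -v $(CURDIR):/e2e-test -w /e2e-test e2e-tests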
I am trying to run an Angular application in development mode inside a Docker container. When I run docker-compose build it works correctly, but when I try to bring up the container I get the error below:
ERROR: for sypgod Cannot start service sypgod: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"npm\": executable file not found in $PATH
The real problem is that it doesn't recognize the npm serve command, but why?
The setup is below:
Docker container (Nginx Reverse proxy -> Angular running in port 4000)
I know there are better ways of deploying this, but at the moment I need this setup for personal reasons.
Dockerfile:
FROM node:10.9
COPY package.json package-lock.json ./
RUN npm ci && mkdir /angular && mv ./node_modules ./angular
WORKDIR /angular
RUN npm install -g @angular/cli
COPY . .
FROM nginx:alpine
COPY toborFront.conf /etc/nginx/conf.d/
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
CMD ["npm", "serve", "--port 4000"]
Nginx server config:
server {
    listen 80;
    server_name sypgod;

    location / {
        proxy_read_timeout 5m;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:4000/;
    }
}
Docker Compose file (the important part, where I have the problem):
sypgod:                        # The name of the service
  container_name: sypgod      # Container name
  build:
    context: ../angular
    dockerfile: Dockerfile    # Location of our Dockerfile
The image that's finally getting run is this:
FROM nginx:alpine
COPY toborFront.conf /etc/nginx/conf.d/
EXPOSE 8080
CMD ["npm", "serve", "--port 4000"]
The first stage doesn't have any effect (you could COPY --from=... files out of it), and if there are multiple CMDs, only the last one has an effect. Since you're running this in a plain nginx image, there's no npm command, leading to the error you see.
I'd recommend using Node on the host for a live development environment. When you've built and tested your application and are looking to deploy it, then use Docker if that's appropriate. In your Dockerfile, run ng build in the first stage to compile the application to static files, add a COPY --from=... in the second stage to get the built application into the Nginx image, and delete all the CMD lines (nginx has an appropriate default CMD). #VikramJakhar's answer has a more complete Dockerfile showing this.
It looks like you might be trying to run both Nginx and the Angular development server in Docker. If that's your goal, you need to run these in two separate containers. To do this:
Split this Dockerfile into two. Put the CMD ["npm", "serve"] line at the end of the first (Angular-only) Dockerfile.
Add a second block in the docker-compose.yml file to run the second container. The backend npm serve container doesn't need to publish ports:.
Change the host name of the backend server in the Nginx config from localhost to the Docker Compose name of the other container; a rough sketch follows below.
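A rough docker-compose.yml sketch of that layout (the service names and build contexts here are made up for illustration):
services:
  angular:                     # Angular dev server; its Dockerfile ends with the CMD ["npm", "serve", ...] line
    build:
      context: ../angular
      dockerfile: Dockerfile
  sypgod:                      # Nginx reverse proxy; the only container that publishes a port
    build:
      context: ../nginx
      dockerfile: Dockerfile
    ports:
      - '8080:80'
In toborFront.conf the proxy target then becomes proxy_pass http://angular:4000/; so that Nginx reaches the other container by its Compose service name.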
It would appear that npm can't be accessed from within the container.
Try defining where it executes from:
docker run -v "$PWD":/usr/src/app -w /usr/src/app node:10.9 npm serve --port 4000
source: https://gist.github.com/ArtemGordinsky/b79ea473e8bc6f67943b
Also make sure that npm is installed on the computer running the docker container.
You can do something like this:
### STAGE 1: Build ###
# We label our stage as ‘builder’
FROM node:alpine as builder
RUN apk --no-cache --virtual build-dependencies add \
    git \
    python \
    make \
    g++
RUN mkdir -p /ng-app/dist
WORKDIR /ng-app
COPY package.json package-lock.json ./
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm install
COPY . .
## Build the angular app in production mode and store the artifacts in dist folder
RUN npm run ng build -- --prod --output-path=dist
### STAGE 2: Setup ###
FROM nginx:1.14.1-alpine
## Copy our default nginx config
COPY toborFront.conf /etc/nginx/conf.d/
## Remove default nginx website
RUN rm -rf "/usr/share/nginx/html/*"
## From ‘builder’ stage copy over the artifacts in dist folder to default nginx public folder
COPY --from=builder /ng-app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
If you have Portainer.io installed for managing your Docker setup, you can open the console for a particular container from a browser.
This is useful if you want to run a reference command like "npm list" to show what versions of dependencies have been loaded.
I found this useful for diagnosing an issue where an update to a dependency had broken something: it worked fine in a test environment, but the Docker build had installed newer minor versions that broke the application.
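Without Portainer, the same kind of check can be done from the host shell, for example (replace <container> with the name shown by docker ps):
# Open a shell in the running container and list top-level dependency versions
docker exec -it <container> sh -c "npm list --depth=0"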
I have stages in a Jenkins job to test and deploy my Node.js app with Docker. I run the container on port 3000, but when I try to browse to ip-server:3000 it doesn't work and nothing shows up, even though my containers are running.
Here is my Jenkinsfile:
node {
    try {
        stage('Checkout') {
            checkout scm
        }
        stage('Environment') {
            sh 'git --version'
            echo "Branch: ${env.BRANCH_NAME}"
            sh 'docker -v'
            sh 'printenv'
        }
        stage('Build Docker test') {
            sh 'docker build -t crud-test -f Dockerfile.test --no-cache .'
        }
        stage('Docker test') {
            sh 'docker run --rm crud-test'
        }
        stage('Clean Docker test') {
            sh 'docker rmi crud-test'
        }
        stage('Deploy') {
            if (env.BRANCH_NAME == 'master') {
                sh 'docker build -t crud --no-cache .'
                sh 'docker run -d -p 3000:3000 -e DB_USERNAME=myusername -e DB_PASSWORD=12345678 -e DB_NAME=employee crud'
            }
        }
    }
    catch (err) {
        throw err
    }
}
Dockerfile:
# Extending image
FROM node:carbon
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install autoconf automake libtool nasm make pkg-config git apt-utils
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Versions
RUN npm -v
RUN node -v
COPY ./server/ /usr/src/app
RUN npm install
# Port to listen on
EXPOSE 3000
# Environment variables
ENV PORT 4000
ENV DB_USERNAME myusername
ENV DB_PASSWORD 12345678
ENV DB_NAME employee
# Main command
CMD [ "npm", "run", "dev" ]
I run Jenkins via docker-compose on my Ubuntu server.
Is there something I'm missing or doing wrong?
My goal here is to use Jenkins to test my server and, after the tests succeed, deploy my Node.js app on my Ubuntu server,
so that after the build finishes on Jenkins I can browse my server API at ip-server:3000 to make sure it works.
Also, with the above pipeline, is it correct that every time I push to master my server API will be updated without clicking Build Now in Jenkins? If not, how do I configure that?
I also don't know how to hide my environment variables, so they show up in my Jenkinsfile stages, because the multibranch pipeline doesn't have any option for environment parameters.
I can see your env variable PORT is 4000; is that your server port? If so, you have to change the docker run command to map port 4000 to port 3000 on your host, or you can change your PORT env variable to 3000.
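Concretely, either of these should line the ports up (a sketch based on the Dockerfile and run command above):
# Option 1: keep ENV PORT 4000 in the Dockerfile and map host port 3000 onto it
docker run -d -p 3000:4000 -e DB_USERNAME=myusername -e DB_PASSWORD=12345678 -e DB_NAME=employee crud
# Option 2: in the Dockerfile, make the app listen on 3000 so the existing -p 3000:3000 mapping works
ENV PORT 3000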
The thing is that most probably Jenkins is spawning a container to run your build.
If that build container does not use the Docker engine from the host but uses "docker-in-docker", your docker run command exposes port 3000 not on your host but on the container running DinD on that host.
Try running docker ps to see what containers are running on your host and whether the tested container is there. If it is, examine with docker inspect what exactly it does, what ports it exposes, etc.
You can also inspect the Docker 'nat' network to see if port 3000 is really forwarded to the host.
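A few standard commands for that kind of inspection (the container name is a placeholder, and the network name depends on your setup, e.g. bridge on Linux or nat on Windows):
docker ps                          # list running containers and their published ports
docker port <container>            # show just the port forwardings for one container
docker inspect <container>         # full details, including exposed and published ports
docker network inspect bridge      # see which containers are attached to the default network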