I am following the official tutorial from the Jenkins website. I have a blueocean Docker container that runs the pipeline in the Jenkinsfile from the tutorial:
pipeline {
    agent {
        docker {
            image 'node:6-alpine'
            args '-p 3000:3000'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
    }
}
The problem is that the pipeline fails when it tries to pull the Docker image:
[ode-js-react-npm-app_master-6PEWX3VWDA4SAdDMJA4YKJCZSABJSAQCSGVYMKHINXGDDJLA] Running shell script
+ docker pull node:6-alpine
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon. Is the docker daemon running on this host?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
script returned exit code 1
After some troubleshooting, I realized that this is failing because Jenkins is trying to pull the Docker image inside its own container, rather than on the host. This is not what I want, and the documentation in fact states:
This image parameter (of the agent section’s docker parameter) downloads the node:6-alpine Docker image (if it’s not already available on your machine) and runs this image as a separate container. This means that:
You’ll have separate Jenkins and Node containers running locally in Docker.
The Node container becomes the agent that Jenkins uses to run your Pipeline project. However, this container is short-lived - its lifespan is only that of the duration of your Pipeline’s execution.
Can someone explain what I am doing wrong and why the Node.js Docker image is being pulled inside the Jenkins container instead of on the local machine? I want to keep the Jenkins container separate from the Node.js container that runs the app.
I solved this problem by running the container as root, as otherwise it would not have access to the Docker daemon socket /var/run/docker.sock ...
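For reference, this is roughly the docker run invocation that ended up working for me; the jenkinsci/blueocean image and the jenkins-data volume name come from the tutorial, so treat the exact flags as a sketch to adapt rather than a drop-in command:

# -u root gives the Jenkins process access to the Docker socket, and mounting
# /var/run/docker.sock exposes the host daemon, so the pipeline's 'docker pull'
# runs against the host instead of inside the Jenkins container.
docker run -d -u root --name jenkins-blueocean \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins-data:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkinsci/blueocean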
Related
custom:
  pythonRequirements:
    dockerizePip: true
With this configuration in a Python Lambda deployed via Serverless with dockerizePip, I'm getting the message below.
I know dockerizePip uses Docker, and it works fine locally. But when running it via the pipeline, the container used for the build doesn't seem to have Docker available.
Or maybe it's there but not running? This is the error message:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Should I use ECR when I use dockerizePip: true?
Is there a way to not use ECR?
You don't need to use ECR, but the Docker daemon has to be running on the machine: the serverless-python-requirements plugin launches a Docker container to actually build your dependencies. You can also try specifying dockerizePip: non-linux, since dockerizing the packaging may not be needed when running on a Linux machine, but I would advise testing that first in a non-prod environment.
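Before changing the Serverless config, it may be worth confirming that the machine running the pipeline can actually reach a Docker daemon; a quick sanity check (the socket path is the usual default, adjust if yours differs):

docker version              # fails with "Cannot connect to the Docker daemon" if the daemon is unreachable
docker info                 # succeeds only if the client can talk to the daemon
ls -l /var/run/docker.sock  # the socket must exist and be accessible to the build user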
I am following the tutorial located here. I am able to get a self-hosted agent running in a Docker container. After the agent is running, I am able to run jobs on it in a pipeline only while the container is running. I would like to keep this Docker container build agent running as a service, so I don't have to start it up each time I execute a pipeline. Any advice on how to configure a Docker container build agent to keep running continuously would be helpful.
I am able to run jobs on it in a pipeline only while the container is running.
The agent in Docker should 'run as a service' by default; you just need to make sure the container stays running, otherwise the agent will not run.
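In practice that means starting the agent container detached with a restart policy, so the Docker daemon brings it back up after reboots or crashes instead of you starting it for each pipeline run. A minimal sketch, where my-agent-image is a placeholder for whatever agent image your tutorial builds:

# --restart unless-stopped makes the Docker daemon restart the container automatically
# (e.g. after a host reboot) unless you explicitly stop it.
docker run -d \
  --name build-agent \
  --restart unless-stopped \
  my-agent-image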
I need help in creating the container through a Jenkins job.
Let me know the steps to be followed: I have already created 3 jobs in Jenkins, and I want to create an httpd container through those jobs.
Should I install any plugins or write any script?
Assuming we are not talking about Jenkins in Docker, or Jenkins agents in Docker, you need to create your httpd container manually first, without Jenkins.
That means:
Validate your SSH access to the remote server
Check that it has Docker installed
Execute docker commands to run an httpd container, as described in Docker httpd
Once that is working, you can replicate the process in a Jenkins job, provided your remote server (the one with Docker, where you want to run your httpd container) is declared as an agent of the main Jenkins controller.
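As a rough sketch, the manual step boils down to something like the commands below (the container name and host port are only examples, and httpd is the official Apache image on Docker Hub); once it works by hand, the same line can go into an 'Execute shell' build step of one of your existing jobs, running on that agent:

docker run -d --name my-httpd -p 8080:80 httpd:2.4   # serves the default page on host port 8080
curl http://localhost:8080                           # quick check that Apache answers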
What are the options to secure Jenkins slave nodes that have a Docker daemon? Someone could launch a Docker container and essentially access files owned by root.
Environment and problem details:
Jenkins
Docker daemon available on Jenkins slaves
Jenkins agent runs as user “build”
User “build” is added to the group “docker” so that the jobs can build and start containers
We are considering using Docker Pipeline: https://jenkins.io/doc/book/pipeline/docker/
Problem:
A user can mount the server root as a docker volume
They could execute a command like “rm -rf /hostroot/*” or print the contents of a secret file owned by root.
A couple of options:
Option 1: Use the --user argument to run the commands as a specific user, but since the Jenkinsfile is managed by the development team this can’t be enforced strictly
Option 2: Enable user namespaces on the Docker daemon (there may be some permission issues, especially on the workspace folder, but some workarounds can be put in place); see the daemon.json sketch after the example below
Do you use user namespaces?
Option 3: Build using AWS Fargate (need to look into this a bit), so the Docker image is not built on Jenkins slave nodes and the build won’t affect the files on the nodes
Example:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /:/hostroot'
            // args '--user 500'
            // (since the Jenkinsfile is managed by the development team, ensuring everyone
            // puts this parameter is tricky... someone could forget to put it)
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'rm -rf /hostroot/*'        // delete the entire filesystem!
                sh 'cat /path/to/secret/file'  // see some files owned by root!
            }
        }
    }
}
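Regarding Option 2, user namespace remapping is configured on the daemon rather than per job, so developers cannot opt out of it from the Jenkinsfile. A minimal sketch, assuming Linux slaves and the stock 'default' remap target; test it on a non-prod node first, since remapping can break file ownership in workspace bind mounts, and note this overwrites any existing daemon.json:

# Enable userns-remap in /etc/docker/daemon.json on the slave and restart the daemon.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker
# Root inside a container is now mapped to an unprivileged UID range on the host,
# so 'rm -rf /hostroot/*' no longer runs with host-root privileges.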
A similar question was posted here, but it is 2 years old and doesn't have a good answer:
Building Docker images with Jenkins that runs inside a Docker container
Steps followed during rolling updates:
Create an image for the v2 version of the application with some changes
Re-build the Docker image with Maven from the pom.xml. Run this command in SSH or Cloud Shell:
docker build -t gcr.io/satworks-1/springio/gs-spring-boot-docker:v2 .
Push the newly updated Docker image to the Google Container Registry. Run this command in SSH or Cloud Shell:
gcloud docker -- push gcr.io/satworks-1/springio/gs-spring-boot-docker:v2
Apply a rolling update to the existing deployment with an image update. Run this command in SSH or Cloud Shell:
kubectl set image deployment/spring-boot-kube-deployment-port80 spring-boot-kube-deployment-port80=gcr.io/satworks-1/springio/gs-spring-boot-docker:v2
Revalidate the application through curl or a browser:
curl 35.227.108.89
and observe that the changes take effect.
When do we come across the "CrashLoopBackOff" error, and how can we resolve this issue? Does it happen at the application level or at the Kubernetes pod level?
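For context, CrashLoopBackOff is reported at the pod level: a container in the pod keeps exiting, so Kubernetes backs off between restart attempts, but the root cause is usually the application or its configuration (bad image, failing startup, missing configuration). A sketch of commands one might use to investigate, reusing the deployment name from the steps above; <pod-name> is whatever kubectl get pods reports:

kubectl get pods                                      # crashing pods show CrashLoopBackOff and a growing RESTARTS count
kubectl describe pod <pod-name>                       # the events section explains why the container keeps restarting
kubectl logs <pod-name> --previous                    # application logs from the last crashed container
kubectl rollout status deployment/spring-boot-kube-deployment-port80   # watch the rolling update itself
kubectl rollout undo deployment/spring-boot-kube-deployment-port80     # roll back if the v2 image is at fault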