I wrote a Node app that builds and pushes Docker images, and I want to run the app itself as a Docker container.
I can run the docker CLI inside a container with the command below, but that does not run the Node app:
docker run -i --rm --privileged docker:dind sh
Finally I found the answer:
FROM node
RUN apt-get update && apt-get install -y apt-transport-https \
    ca-certificates curl gnupg2 \
    software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN apt-key fingerprint 0EBFCD88
RUN add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) stable"
RUN apt-get update
RUN apt-get install -y docker-ce-cli
RUN apt-get install -y docker-compose
CMD ["node", "-v"]
So, I'm trying to run Docker inside a Dockerfile ("the best idea ever") using the command sudo service docker start, but it always responds with:
#9 0.142 mkdir: cannot create directory ‘cpuset’: Read-only file system
------
The Dockerfile looks like this:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && apt-get upgrade -y
RUN apt-get update && \
    apt-get -y install apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common && \
    apt-get update && \
    apt-get -y install docker.io && \
    apt-get -y install sudo
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) \
    stable"
VOLUME /var/run/docker.sock
RUN sudo service docker start    # <-- the error occurs here
RUN adduser jenkins sudo
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN usermod -aG docker jenkins
RUN chown root:jenkins /var/run/docker.sock
USER jenkins
From the error response, it seems like I need to enable write mode. There are many solutions/suggestions for that, but unfortunately most of them do not work...
Could the problem be related to the image?
I would really appreciate any suggestions or tips (but ideally the solution :))
Even if you wanted to run Docker inside Docker, you would not start the Docker daemon at container build time but rather at container run time.
That means moving the 'sudo service docker start' command into your entrypoint script.
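A minimal sketch of that entrypoint, assuming the sudo setup from the Dockerfile above and a container started with --privileged:
#!/bin/sh
# entrypoint.sh (sketch): start the Docker daemon at run time,
# then hand control to whatever command the container was given
sudo service docker start
exec "$@"
and wire it up in the Dockerfile in place of the failing RUN line:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]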
I am building an arm64 image on my x86_64 (amd64) machine using docker buildx. Everything works fine, except that whenever I build the arm image it starts building from scratch.
Steps to reproduce
Create a docker builder
export DOCKER_CLI_EXPERIMENTAL=enabled
docker buildx create --name buildkit
docker buildx use buildkit
docker buildx inspect --bootstrap
Build the Docker image using the command
docker buildx build . --platform linux/arm64 -t test-image:p-arm64 -f ./arm64.Dockerfile
When I build it multiple times, all the steps are executed, which takes around 20-30 minutes for every build. I want to minimize this time by caching.
This is what my Dockerfile looks like:
FROM python:3.7-slim
USER root
RUN apt-get update && \
    apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common \
    gnupg \
    g++ \
    && rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg && \
    echo "deb [arch=arm64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
    $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update && \
    apt-get install -y docker-ce-cli \
    && rm -rf /var/lib/apt/lists/*
RUN python3 -m pip install --no-cache-dir \
    ruamel.yaml==0.16.12 \
    pyyaml==5.4.1 \
    requests==2.27.1
RUN apt-get remove -y g++ && apt-get purge g++
ENTRYPOINT ["/bin/bash"]
Any suggestion is appreciated.
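One way to get that caching (a sketch, not something the question confirms works for this setup) is to export BuildKit's layer cache with buildx's --cache-to/--cache-from options and reuse it on later runs, for example via a local cache directory:
# first build: export the layer cache to a local directory
docker buildx build . --platform linux/arm64 -t test-image:p-arm64 -f ./arm64.Dockerfile \
    --cache-to type=local,dest=/tmp/buildx-cache,mode=max
# later builds: import that cache so unchanged steps are not rebuilt
docker buildx build . --platform linux/arm64 -t test-image:p-arm64 -f ./arm64.Dockerfile \
    --cache-from type=local,src=/tmp/buildx-cache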
I used the scripts below to install Jenkins on Ubuntu 22.04 on DigitalOcean and I have opened the firewall, but when I browse to "http://ipv4:8080" it displays the error page below and I cannot launch Jenkins.
(Screenshot: error page shown when trying to access Jenkins)
(Screenshot: firewall ports opened)
# install dependencies
apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
# get gpg key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# add docker repo
add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
# update repository
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
systemctl enable docker
# jenkins setup
mkdir -p /var/jenkins_home/.ssh
cp /root/.ssh/authorized_keys /var/jenkins_home/.ssh/authorized_keys
chmod 700 /var/jenkins_home/.ssh
chmod 600 /var/jenkins_home/.ssh/authorized_keys
chown -R 1000:1000 /var/jenkins_home
docker run -p 2222:22 -v /var/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock --restart always -d wardviaene/jenkins-slave
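Note that this script only starts the wardviaene/jenkins-slave agent container, which exposes SSH on port 2222. A Jenkins master that actually answers on port 8080 is usually started from the official image roughly like this (a sketch, not part of the original script):
docker run -d --restart always --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v /var/jenkins_home:/var/jenkins_home \
    jenkins/jenkins:lts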
I am trying to build a Docker image using a Dockerfile.
The Dockerfile should also set up the databases.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get update && apt-get install -y nodejs
RUN node -v
RUN npm -v
RUN apt-get install -y redis-server
RUN redis-server -v
RUN apt-get install -my wget gnupg
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv D68FA50FEA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update
RUN apt-get install -y mongodb-org
RUN mongodb -version
I am unable to start the Redis server after installation in the Docker container.
You should expose the port for Redis:
EXPOSE 6379
And please remember that each RUN creates a new layer in your image; you can group all the shell commands into one RUN directive. It should be something like this:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y curl wget gnupg && \
    apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv D68FA50FEA312927 && \
    echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list && \
    curl -sL https://deb.nodesource.com/setup_8.x | bash - && \
    apt-get update && \
    apt-get install -y nodejs redis-server mongodb-org && \
    node -v && \
    npm -v && \
    mongod --version
EXPOSE 6379
And one more thing: the Docker way is to run only one process per container, so you should separate Redis, Mongo and your other apps into different containers and run them with an orchestrator (such as Docker Swarm or Kubernetes) or simply with docker-compose.
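A minimal docker-compose.yml in that spirit could look like this (the service layout and the my-node-app image name are assumptions for illustration):
version: "3"
services:
  redis:
    image: redis:5            # Redis in its own container
    ports:
      - "6379:6379"
  mongo:
    image: mongo:3.2          # MongoDB in its own container
  app:
    image: my-node-app        # hypothetical image for the Node.js app
    depends_on:
      - redis
      - mongo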
I'm trying to write a Dockerfile that installs the Azure CLI so that I can run CLI commands in Bitbucket Pipelines.
However the installation of the CLI always fails:
E: Unable to locate package azure-cli
The command '/bin/sh -c apt-get install azure-cli' returned a non-zero code: 100
Here is my Dockerfile:
FROM atlassian/default-image:latest
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
RUN apt-get update
RUN apt-get install -y libssl-dev libffi-dev
RUN apt-get install -y python-dev
RUN apt-get install apt-transport-https lsb-release software-properties-common -y
ENV AZ_REPO $(lsb_release -cs)
RUN echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | tee /etc/apt/sources.list.d/azure-cli.list
RUN apt-key --keyring /etc/apt/trusted.gpg.d/Microsoft.gpg adv \
    --keyserver packages.microsoft.com \
    --recv-keys BC528686B50D79E339D3721CEB3E94ADBE1229CF
RUN apt-get install azure-cli
CMD ["/bin/bash"]
You need to run apt-get update after adding the new package feed in order to get packages from that feed.
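Applied to the Dockerfile above, the tail end would then look roughly like this:
RUN apt-key --keyring /etc/apt/trusted.gpg.d/Microsoft.gpg adv \
    --keyserver packages.microsoft.com \
    --recv-keys BC528686B50D79E339D3721CEB3E94ADBE1229CF
# refresh the package index so apt can see azure-cli in the newly added feed
RUN apt-get update
RUN apt-get install -y azure-cli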