Absolute path GitLab project - Linux

I have a self-managed GitLab instance, and one of my projects has a folder containing 3 sub-directories; each of these sub-directories has a Dockerfile.
All my Dockerfiles run a grep command to get the latest version from the CHANGELOG.md located in the root directory.
I tried something like this to go up two levels, but it doesn't work (grep: ../../CHANGELOG.md: No such file or directory):
Dockerfile:
grep -m 1 '^## v.*$' "../../CHANGELOG.md"
Example link:
https://mygitlab/project/images/myproject
repo content:
.
├── build
│   ├── image1
│   ├── image2
│   └── image3
└── CHANGELOG.md
gitlab-ci.yaml:
script:
  - docker build --network host -t $VAL_IM ./build/image1
  - docker push $VAL_IM
The issue is happening when I build the images.

docker build --network host -t $VAL_IM ./build/image1
Here, you have set the build context to ./build/image1. Builds cannot access directories or files outside of the build context. Also keep in mind that if you use RUN in a docker build, it can only access files that have already been copied into the image (and, as stated, you can't copy files from outside the build context), so this doesn't quite make sense as written.
If you're committed to this versioning strategy, what you probably want to do is perform your grep command as part of your GitLab job before calling docker build and pass in the version as a build arg.
In your Dockerfile, add an ARG:
FROM # ...
ARG version
# now you can use the version in the build... eg:
LABEL com.example.version="$version"
RUN echo version is "$version"
Then your GitLab job might look like this:
script:
  - version=$(grep -m 1 '^## v.*$' "./CHANGELOG.md")
  - docker build --build-arg version="${version}" --network host -t $VAL_IM ./build/image1
  - docker push $VAL_IM
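Alternatively, you could keep the build context at the repository root and point -f at the nested Dockerfile, so the changelog can be copied into the image before it is read. A minimal sketch, assuming the job runs from the repository root:
docker build --network host -f ./build/image1/Dockerfile -t $VAL_IM .
with the Dockerfile copying the file in first:
COPY CHANGELOG.md /tmp/CHANGELOG.md
RUN grep -m 1 '^## v.*$' /tmp/CHANGELOG.md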

Related

Tagging docker image with tag from git repository

I am using GitLab for the repository and CI/CD.
I am currently trying to set up a pipeline that creates a Docker image from a build stage.
All the examples I have seen use a simple name for the image, where e.g. the branch name (master) is used.
My question is: if I want to tag the image based on the current tag in the repository, how do I do this?
I am presuming I can use a GitLab runner variable, but I do not see one to use.
There are a lot of predefined variables in GitLab CI. I think you are looking for CI_COMMIT_TAG.
So you could use it this way:
docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG .
So the image would look like registry.example.com/group/project:tag
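For reference, a minimal job sketch (the job name here is made up) that runs only on tag pushes, so CI_COMMIT_TAG is guaranteed to be set:
build-tag:
  stage: build
  only:
    - tags
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG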
Since shell command substitution is not yet possible with variables inside .gitlab-ci.yml, you can write a build script that gets the current tag and builds the image inside that script.
Both files are at the root of your project:
build.sh :
#!/bin/sh
IMAGE="$CI_REGISTRY/$CI_PROJECT_PATH:$CI_COMMIT_REF_NAME-$(git describe --abbrev=0 --tags)"
docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
docker build --pull -t $IMAGE .
docker push $IMAGE
.gitlab-ci.yml:
image: docker:latest
services:
  - docker:dind
stages:
  - release
release:
  stage: release
  script:
    - apk update && apk add git
    - ./build.sh

Access private git repos via npm install in a Docker container

I am in the process of setting up a Docker container that will pull private repos from GitHub as part of the build. At the moment I am using an access token that I pass on the command line (this will change once the build gets triggered via Jenkins).
docker build -t my-container --build-arg GITHUB_API_TOKEN=123456 .
# Dockerfile
# Env Vars
ARG GITHUB_API_TOKEN
ENV GITHUB_API_TOKEN=${GITHUB_API_TOKEN}
RUN git clone https://${GITHUB_API_TOKEN}@github.com/org/my-repo
This works fine and seems to be a secure way of doing this? (Though I need to check that the GITHUB_API_TOKEN var is only available at build time.)
I am looking to find out how people deal with SSH keys or access tokens when running npm install and a dependency pulls from GitHub:
"devDependencies": {
"my-repo": "git#github.com:org/my-repo.git",
"electron": "^1.7.4"
}
At the moment I cannot pull this repo, as I get the error Please make sure you have the correct access rights because I have no SSH keys set up in this container.
Use the multi-stage build approach.
Your Dockerfile should look something like this:
FROM alpine/git as base_clone
ARG GITHUB_API_TOKEN
WORKDIR /opt
RUN git clone https://${GITHUB_API_TOKEN}@github.com/org/my-repo
FROM <whatever>
COPY --from=base_clone /opt/my-repo /opt
...
...
...
Build:
docker build -t my-container --build-arg GITHUB_API_TOKEN=123456 .
The GitHub API token secret won't be present in the final image.
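For the npm install part of the question specifically, one common pattern (a sketch on my part, assuming the token has access to the repos) is to rewrite SSH GitHub URLs to token-authenticated HTTPS in the clone/build stage, so git dependencies in package.json resolve without SSH keys:
# in the build stage only, so the token never reaches the final image
RUN git config --global url."https://${GITHUB_API_TOKEN}@github.com/".insteadOf "git@github.com:" \
 && npm install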
docker secrets is a thing, but it's only available to containers that are part of a docker swarm. It is meant for handling things like SSH keys. You could do as the documentation suggests and create a swarm of 1 to utilize this feature.
docker-compose also supports secrets, though I haven't used them with compose.
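For what it's worth, a minimal compose sketch (service and file names are made up); with file-based secrets, the content shows up inside the container at /run/secrets/<name>:
version: "3.1"
services:
  app:
    image: my-app
    secrets:
      - github_token
secrets:
  github_token:
    file: ./github_token.txt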

How to copy files from the current Docker container to another container

I am building a Docker container with a Node.js application, which is built from MeteorJS. For the build, a shell runner is used (meteor build /opt/project/build/core --directory), as this is all done in GitLab CI.
build:
  stage: build
  tags:
    - deploy
  before_script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - meteor npm install
    - meteor build /opt/project/build/core --directory
  script:
    - cd /opt/project/build/core/bundle
    - docker build -t $CI_REGISTRY_IMAGE:latest .
So the files of the application are now at /opt/project/build/core. Now I want to copy those files into another Docker container (project-e2e:latest).
I tried to do
docker cp /opt/project/build/core/bundle project-e2e:latest/opt/project/build/core
But this gives me the error
Error response from daemon: No such container: project-e2e
But I see, the container is running:
$ docker ps
CONTAINER ID   IMAGE                COMMAND       CREATED        STATUS        PORTS   NAMES
a238132e37a2   project-e2e:latest   "/bin/bash"   14 hours ago   Up 14 hours           clever_kirch
Maybe the problem is that I'm trying to copy out of the shell-runner Docker image and the target project-e2e is "outside"?
If you want to get the files generated inside the container, you can copy them out using docker:
docker cp nightwatch:/opt/project/build/core/your_file <your_local_path>
Basically the pattern is:
docker cp <source> <target>
If the source/target is a container, you have to use:
<container_name>:<path_inside_container>
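Note that docker cp addresses a container by its name or ID, never by image:tag, which is why project-e2e was not found. Going by the docker ps output in the question, copying into the running container would look something like:
docker cp /opt/project/build/core/bundle clever_kirch:/opt/project/build/core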

Building sphinx documents inside Docker container

I have a Flask project that runs inside a Docker container. I have managed to build my application and run it successfully. However, I would like to also build the Sphinx documentation, so its static files can be served. The documentation is normally built by running make html in the docs/ directory. I've found a Docker image for Sphinx and have set up a docker-compose config that runs successfully; however, I am not able to pass the make html command to Sphinx. I believe this is because I am running the command a level up, since make html needs to be run from within docs/ and not from within the base directory.
I get the following error when I try to build the sphinx documentation:
docker-compose run --rm sphinx make html
Starting web_project
Pulling sphinx (nickjer/docker-sphinx:latest)...
latest: Pulling from nickjer/docker-sphinx
c62795f78da9: Pull complete
d4fceeeb758e: Pull complete
5c9125a401ae: Pull complete
0062f774e994: Pull complete
6b33fd031fac: Pull complete
aac5b231ab1e: Pull complete
97be0ae484bc: Pull complete
ec7c8cca5e46: Pull complete
82cc981959eb: Pull complete
151a33a826a1: Pull complete
Digest: sha256:8125ca919069235278a5da631c002926cc57d741fa041b59c758183ebd48121f
Status: Downloaded newer image for nickjer/docker-sphinx:latest
make: *** No rule to make target 'html'. Stop.
My project has the following directory structure:
docs/
web/
    Dockerfile
    run.py
    requirements.txt
    ....
docker-compose.yml
README.md
And the following docker-compose configuration:
version: '2'
services:
  web:
    restart: always
    build: ./web
    ports:
      - "7000:7000"
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn -w 2 -b :7000 run:app
  sphinx:
    image: "nickjer/docker-sphinx"
    volumes:
      - "${PWD}:/docs"
    user: "1000:1000"
    depends_on:
      - web
How do I build my Sphinx documentation within the Docker container? Do I need to add another Docker config file to my docs module?
I believe because I am running the command a level up, since make html needs to be run from within docs/ and not from within the base directory.
To test this theory, could you try something like this command?
docker-compose run --rm sphinx bash -c "cd docs; make html"
or possibly
docker-compose exec sphinx bash -c "cd docs; make html"
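Alternatively, since make html is normally just a thin wrapper around sphinx-build, you could call it directly, assuming the image puts sphinx-build on the PATH and the docs use the standard source layout:
docker-compose run --rm sphinx sphinx-build -b html docs docs/_build/html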
I had success with the following to build and deploy my Sphinx docs for static serving by the Flask app.
WORKDIR /pathapp/app
ENV PYTHON /pathapp/app
RUN python /pathapp/app/setup.py build_sphinx -b html
RUN python /pathapp/app/scripts/script_to_copy_build_sphinx_html_to_docs.py
The move script is just a simple directory copy.

Docker in Docker cannot mount volume

I am running a Jenkins cluster where both the master and the slave run as Docker containers.
The host is the latest boot2docker VM running on macOS.
To allow Jenkins to perform deployments using Docker, I have mounted docker.sock and the docker client from the host into the Jenkins container like this:
docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker -v $HOST_JENKINS_DATA_DIRECTORY/jenkins_data:/var/jenkins_home -v $HOST_SSH_KEYS_DIRECTORY/.ssh/:/var/jenkins_home/.ssh/ -p 8080:8080 jenkins
I am facing issues when mounting a volume into Docker containers that are run inside the Jenkins container. For example, if I need to run another container inside the Jenkins container, I do the following:
sudo docker run -v $JENKINS_CONTAINER/deploy.json:/root/deploy.json $CONTAINER_REPO/$CONTAINER_IMAGE
The above runs the container, but the file deploy.json is NOT mounted as a file; it shows up as a directory instead. Even if I mount a directory as a volume, I am unable to view the files in the resulting container.
Is this a problem, because of file permissions due to Docker in Docker case?
A Docker container inside a Docker container uses the parent host's Docker daemon, and hence any volumes mounted in the "docker-in-docker" case are still resolved against the host, not the container.
Therefore, the actual path mounted from the Jenkins container "does not exist" on the host. Because of this, a new, empty directory is created in the "docker-in-docker" container. The same applies when a directory is mounted into a new Docker container inside a container.
A very basic and obvious thing which I missed, but I realized it as soon as I typed the question.
Lots of good info in these posts but I find none of them are very clear about which container they are referring to. So let's label the 3 environments:
host: H
docker container running on H: D
docker container running in D: D2
We all know how to mount a folder from H into D: start D with
docker run ... -v <path-on-H>:<path-on-D> -v /var/run/docker.sock:/var/run/docker.sock ...
The challenge is: you want path-on-H to be available in D2 as path-on-D2.
But we all got bitten when trying to mount the same path-on-H into D2, because we started D2 with
docker run ... -v <path-on-D>:<path-on-D2> ...
When you share the docker socket on H with D, then running docker commands in D is essentially running them on H. Indeed if you start D2 like this, all works (quite unexpectedly at first, but makes sense when you think about it):
docker run ... -v <path-on-H>:<path-on-D2> ...
The next tricky bit is that for many of us, path-on-H will change depending on who runs it. There are many ways to pass data into D so it knows what to use for path-on-H, but probably the easiest is an environment variable. To make the purpose of such a var clearer, I start its name with DIND_. Then from H start D like this:
docker run ... -v <path-on-H>:<path-on-D> --env DIND_USER_HOME=$HOME \
--env DIND_SOMETHING=blabla -v /var/run/docker.sock:/var/run/docker.sock ...
and from D start D2 like this:
docker run ... -v $DIND_USER_HOME:<path-on-D2> ...
Another way to go about this is to use either named volumes or data volume containers. This way, the container inside doesn't have to know anything about the host and both Jenkins container and the build container reference the data volume the same way.
I have tried doing something similar to what you are doing, except with an agent rather than using the Jenkins master. The problem was the same, in that I couldn't mount the Jenkins workspace in the inner container. What worked for me was the data-volume-container approach, and the workspace files were visible to both the agent container and the inner container. What I liked about the approach is that both containers reference the data volume in the same way. Mounting directories with an inner container would be tricky, as the inner container would then need to know something about the host that its parent container is running on.
I have a detailed blog post about my approach here:
http://damnhandy.com/2016/03/06/creating-containerized-build-environments-with-the-jenkins-pipeline-plugin-and-docker-well-almost/
As well as code here:
https://github.com/damnhandy/jenkins-pipeline-docker
In my specific case, not everything is working the way I'd like it to in terms of the Jenkins Pipeline plugin. But it does address the issue of the inner container being able to access the Jenkins workspace directory.
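A bare-bones named-volume sketch of that idea (the image names here are placeholders): both containers reference the volume by name, so neither needs to know anything about host paths.
# create the shared volume once
docker volume create build-data
# the agent container writes the workspace into the volume ...
docker run -v build-data:/workspace my-agent-image
# ... and the inner build container sees exactly the same files
docker run -v build-data:/workspace my-build-image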
Regarding your use case related to Jenkins, you can simply fake the path by creating a symlink on the host:
ln -s $HOST_JENKINS_DATA_DIRECTORY/jenkins_data /var/jenkins_home
If you are like me and don't want to mess with the Jenkins setup, or are too lazy to go through all this trouble, here is a simple workaround I did to get this working for me.
Step 1 - Add the following variables to the environment section of the pipeline:
environment {
    ABSOLUTE_WORKSPACE = "/home/ubuntu/volumes/jenkins-data/workspace"
    JOB_WORKSPACE = "\${PWD##*/}"
}
Step 2 - Run your container from the Jenkins pipeline as follows:
steps {
    sh "docker run -v ${ABSOLUTE_WORKSPACE}/${JOB_WORKSPACE}/my/dir/to/mount:/targetPath imageName:tag"
}
Take note of the double quotes in the above statement; Jenkins will not expand the env variables if the quotes are not formatted properly or if single quotes are used instead.
What does each variable signify?
ABSOLUTE_WORKSPACE is the path of our Jenkins volume, which we mounted while starting the Jenkins Docker container. In my case, the docker run command was as follows:
sudo docker run \
-p 80:8080 \
-v /home/ubuntu/volumes/jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-d -t jenkinsci/blueocean
Thus the variable ABSOLUTE_WORKSPACE = /home/ubuntu/volumes/jenkins-data + /workspace.
The JOB_WORKSPACE command gives us the current workspace directory, where your code lives. This is also the root dir of your code base. Just followed this answer for reference.
How does this work ?
It is very straightforward. As mentioned in @ZephyrPLUSPLUS's answer (credit where due), the source path for the Docker container being run in the Jenkins pipeline is not a path in the current container; rather, the path used is the host's. All we are doing here is constructing the path where our Jenkins pipeline is being run and mounting it into our container. Voila!
This also works via docker-compose and/or named volumes, so you don't need to create a data-only container, but you still need to have the empty directory on the host.
Host setup
Make host-side directories and set permissions to allow Docker containers to access them:
sudo mkdir -p /var/jenkins_home/{workspace,builds,jobs} && sudo chown -R 1000 /var/jenkins_home && sudo chmod -R a+rwx /var/jenkins_home
docker-compose.yml
version: '3.1'
services:
  jenkins:
    build: .
    image: jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - workspace:/var/jenkins_home/workspace/
      # Can also do builds/jobs/etc here and below
  jenkins-lts:
    build:
      context: .
      args:
        versiontag: lts
    image: jenkins:lts
    ports:
      - 8081:8080
      - 50001:50000
volumes:
  workspace:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/jenkins_home/workspace/
Run docker-compose up --build jenkins (you may want to incorporate this into a ready-to-run example like https://github.com/thbkrkr/jks, where the .groovy scripts pre-configure Jenkins to be useful on startup). You will then be able to have your jobs clone into the $JENKINS_HOME/workspace directory and shouldn't get errors about missing files etc., because the host and container paths will match, and running further containers from within the Docker-in-Docker setup should work as well.
Dockerfile (for Jenkins with Docker in Docker)
ARG versiontag=latest
FROM jenkins/jenkins:${versiontag}
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
COPY jenkins_config/config.xml /usr/share/jenkins/ref/config.xml.override
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
USER root
RUN curl -L http://get.docker.io | bash && \
usermod -aG docker jenkins
# Since the above takes a while make any other root changes below this line
# eg `RUN apt update && apt install -y curl`
# drop back to the regular jenkins user - good practice
USER jenkins
EXPOSE 8080
A way to work around this issue is to mount a directory inside your Docker container (the one in which you mounted your Docker socket) using the exact same path as its destination. Then, when you run a container from within that container, you are able to mount anything within that mount's path into the new container using docker -v.
Take this example:
# Spin up your container from which you will use docker
docker run -v /some/dir:/some/dir -v /var/run/docker.sock:/var/run/docker.sock docker:latest
# Now spin up a container from within this container
docker run -v /some/dir:/usr/src/app $CONTAINER_IMAGE
The folder /some/dir is now mounted across your host, the intermediate container, and your destination container. Since the mount's path exists on both the host and the "nearly docker-in-docker" container, you can use docker -v as expected.
It's kind of similar to the suggestion of creating a symlink on the host, but I found this (at least in my case) a cleaner solution. Just don't forget to clean up the dir on the host afterwards! ;)
I had the same problem in GitLab CI. I solved it by using docker cp to do something like a mount:
script:
  - docker run --name ${CONTAINER_NAME} ${API_TEST_IMAGE_NAME}
after_script:
  - docker cp ${CONTAINER_NAME}:/code/newman ./
  - docker rm ${CONTAINER_NAME}
Based on the description mentioned by @ZephyrPLUSPLUS, here is how I managed to solve this:
vagrant@vagrant:~$ hostname
vagrant
vagrant@vagrant:~$ ls -l /home/vagrant/dir-new/
total 4
-rw-rw-r-- 1 vagrant vagrant 10 Jun 19 11:24 file-new
vagrant@vagrant:~$ cat /home/vagrant/dir-new/file-new
something
vagrant@vagrant:~$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # hostname
3947b1f93e61
/ # ls -l /home/vagrant/dir-new/
ls: /home/vagrant/dir-new/: No such file or directory
/ # docker run -it --rm -v /home/vagrant/dir-new:/magic ubuntu /bin/bash
root@3644bfdac636:/# ls -l /magic
total 4
-rw-rw-r-- 1 1000 1000 10 Jun 19 11:24 file-new
root@3644bfdac636:/# cat /magic/file-new
something
root@3644bfdac636:/# exit
/ # hostname
3947b1f93e61
/ # exit
vagrant@vagrant:~$ hostname
vagrant
vagrant@vagrant:~$
So Docker is installed on a Vagrant machine; let's call it vagrant. The directory you want to mount is /home/vagrant/dir-new on vagrant.
It starts a container with hostname 3947b1f93e61. Notice that /home/vagrant/dir-new/ is not mounted in 3947b1f93e61.
Next we use the exact location from vagrant, /home/vagrant/dir-new, as the source of the mount and specify any mount target we want, in this case /magic. Also note that /home/vagrant/dir-new does not exist in 3947b1f93e61.
This starts another container, 3644bfdac636.
Now the contents from /home/vagrant/dir-new in vagrant can be accessed from 3644bfdac636.
I think this is because docker-in-docker is not a child but a sibling, and the path you specify must be the parent's path, not the sibling's. So any mount still refers to the path on vagrant, no matter how deep you go with docker-in-docker.
You can solve this by passing in an environment variable.
Example:
.
├── docker-compose.yml
└── my-volume-dir
└── test.txt
In docker-compose.yml:
version: "3.3"
services:
  test:
    image: "ubuntu:20.04"
    volumes:
      - ${REPO_ROOT-.}/my-volume-dir:/my-volume
    entrypoint: ls /my-volume
To test, run:
docker run -e REPO_ROOT=${PWD} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ${PWD}:/my-repo \
-w /my-repo \
docker/compose \
docker-compose up test
You should see in the output:
test_1 | test.txt
