Containerized Jenkins: Can't find the /var/lib/jenkins folder - Linux

I am new to Jenkins.
For testing purposes, I have installed a containerized version on my machine (as described here).
The installation creates two containers. The one running the Jenkins engine is jenkins-blueocean.
In the non-containerized version, Jenkins saves its files in the /var/lib/jenkins folder, but I can't find such a folder in my containers:
docker container exec -it jenkins-blueocean /bin/sh
$ ls /var/lib/jenkins
ls: cannot access '/var/lib/jenkins': No such file or directory
Jenkins is running; I can see it with ps and can curl it on localhost:8080.
So is containerized Jenkins saving its files elsewhere, or am I missing something?
Thanks in advance

Well, I just checked: containerized Jenkins stores all of its files in /var/jenkins_home.
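A quick way to confirm this from inside the container: the official jenkins/jenkins image sets the JENKINS_HOME environment variable (the directory listing below is illustrative):
docker container exec -it jenkins-blueocean /bin/sh
$ echo "$JENKINS_HOME"
/var/jenkins_home
$ ls "$JENKINS_HOME"
config.xml  jobs  plugins  ...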
Cheers

Related

"Docker context ls" and "sudo docker context ls" don't have same setting options

I am a Docker newbie. I just installed Docker and Docker Desktop as per the official instructions. Soon after, I started having problems, like Docker Desktop not showing containers. I think it's because I haven't set the contexts to be the same with and without sudo privileges, according to this post.
But I don't understand why I only have the "default" option for "sudo docker context ls". Please help me with this. Many thanks!
OS: Ubuntu 20.04.5 LTS
The docker context data is stored in the user's home directory, and sudo changes both the user and the home directory. Without sudo, the CLI looks in /home/yourname/.docker/contexts; when you switch to root with sudo, it looks in /root/.docker/contexts instead.
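You can see the two separate stores directly (these paths are the defaults; context metadata lives under a contexts/meta subdirectory):
docker context ls                      # contexts visible to your user
ls ~/.docker/contexts/meta             # where they are stored
sudo docker context ls                 # only "default": root has no extra contexts
sudo ls /root/.docker/contexts/meta    # typically empty or absent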
You do not need Docker Desktop on native Linux. Installing Docker (what the Docker documentation now calls "Docker Engine") through your OS's package manager is sufficient. If you are on a single-user system, you can grant your ordinary user access to the Docker socket, but be aware that this access makes it trivial to root the entire host.
When you do uninstall Docker Desktop, there are additional files in your home directory you need to remove:
rm -rf "$HOME/.docker/desktop"
$EDITOR "$HOME/.docker/config.json"
# in the editor, remove the `credsStore` and `currentContext` keys
Once you've done this cleanup, you'll revert to Docker's default behavior of using the $DOCKER_HOST environment variable, or without that, /var/run/docker.sock. That system-global Docker socket file is the same regardless of which user you are, and it won't be affected by sudo.
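To confirm which daemon the CLI is talking to after the cleanup (the -H form is just the explicit spelling of the default):
echo "$DOCKER_HOST"                        # empty means the default socket is used
docker -H unix:///var/run/docker.sock ps   # explicit equivalent of a plain "docker ps"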

Docker won't copy files from the container to the host's /tmp folder

I am trying to copy a file from a Linux container to a Linux host using docker cp. I want to copy this file to the /tmp folder on the host machine.
The problem is simple: I can copy to other places, such as my home folder. For example, this works:
docker cp my_container:/certificate.cer /home/adam/Documents/certificate.cer
But this does not work:
docker cp my_container:/certificate.cer /tmp/certificate.cer
However, the command completes with a zero exit code as if the operation was successful. I get no error feedback, but the file definitely isn't there.
Am I missing something, or is this a bug with the Docker CLI?
Edit: From further testing, I have noticed that creating a new directory in /tmp (i.e. mkdir /tmp/test) and then trying to copy the file into that subfolder fails with an error: stat /tmp/test/: not a directory.
This seems to indicate that perhaps docker is looking at a different folder? I am not sure where it could be looking though.
Thanks
I believe I have found the answer to this:
Docker was installed as an Ubuntu Snap, which, as I understand it, is sandboxed. Running sudo ls /tmp/snap.docker/tmp showed me all the files I was missing.
So, it seems the snap version of docker works a little differently than expected. Uninstalling it and reinstalling from apt fixed the problem. :)
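If you hit the same symptom, a quick check for a snap-installed Docker and its private /tmp (paths as in the answer above):
snap list docker                # listed only if Docker came from the snap
sudo ls /tmp/snap.docker/tmp    # the snap's sandboxed view of /tmp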

Docker - accessing files inside container from host

I am new to docker.
I ran a node-10 image, and inside the running container I cloned a repository and ran the app, which started a server with a file watcher. I need to access the codebase inside the container and open it in an IDE running on the Windows host. Once that is done, I also want my changes to the files in the IDE to trigger the file watcher in the container.
Any help is appreciated. Thanks,
The concept you are looking for is called volumes. You need to start a container and mount a host directory inside it. For the container, it will be a regular folder, and it will create files in it. For you, it will also be a regular folder. Changes made by either side will be visible to the other.
docker run -v /a/local/dir:/a/dir/in/your/container <image>
Note though that you can run into permission issues that you will need to figure out separately.
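One common way around the permission issues on a Linux host is to run the container as your own uid/gid so the files it creates stay owned by you (a sketch, not part of the answer above; node:10 stands in for any image):
docker run --rm -v /a/local/dir:/a/dir/in/your/container -u "$(id -u):$(id -g)" node:10 touch /a/dir/in/your/container/hello
ls -l /a/local/dir/hello    # owned by your user, not root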
It depends on what you want to do with the files.
There is the docker cp command that you can use to copy files to/from a container.
However, it sounds to me like you are using docker for development, so you should mount a volume instead: you mount a directory on the host as a volume in docker, so anything written to that directory shows up in the container, and vice versa.
For instance, if the code base you develop against is in C:\src on your Windows machine, you run docker like docker run -v c:\src:/app, where /app is the location that node is looking in. However, for Windows there are a few things to consider, since Docker is not native on Windows, so have a look at the documentation first.
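Put together for this question, that could look like the following (the image tag and start command are assumptions, since the question doesn't name them):
docker run -it --rm -v c:\src:/app -w /app node:10 npm start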
Hi, I think you should mount a volume for the source code and edit your code in your IDE normally:
docker run -it -v "$PWD":/app -w /app -u node node:10 yarn dev
Here docker will start a container from the node:10 image, set the working dir to /app, mount the current dir to /app, and run yarn dev at startup as the node user (a non-root user).
Hope this is helpful.

Volume Mounting in local OSX gitlab-runner for exec docker

I'm trying to test my gitlab-ci.yml file by running the jobs through gitlab-runner on my laptop (OSX). The yml looks like:
image: ruby:2.2
start:
  script:
    - echo "made it"
The executor is docker. I've tried:
gitlab-runner --debug exec docker start
gitlab-runner --debug exec docker --docker-volumes /users/Shared/Sites/Werk/werk-mailer:/users/Shared/Sites/Werk/werk-mailer start
and many other paths and flags, but no luck. I keep getting this message:
ERROR: Job failed (system failure): Error response from daemon: Mounts denied:
The path /users/Shared/Sites/Werk/werk-mailer
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
So apparently either gitlab-runner or Docker only mounts the /Users/ folder; the /Users/Shared folder (in which I share repos with other accounts) is not added.
I moved my repo into /Users/<username>/Sites/ and it was fine.
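For reference, with the repo under a path Docker for Mac shares by default, the original command works (the username here is a placeholder):
cd /Users/<username>/Sites/werk-mailer
gitlab-runner --debug exec docker --docker-volumes /Users/<username>/Sites/werk-mailer:/Users/<username>/Sites/werk-mailer start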
An alternative fix for a related problem, although it doesn't seem to be apparent from the initial question: I found that Docker tried to locate a folder all in lower case. Docker runs Linux, which is case-sensitive, whereas macOS's file system is not :/ I simply created a new self-owned directory /development (sudo mkdir /development && sudo chown {username}:staff /development), symlinked my project's folder there (cd /development && ln -s {path to project}), and added /development to the list of folders Docker for macOS has access to. Running the gitlab-runner from that point worked for me.
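The same workaround as a copy-pasteable sequence ({username} and {path to project} are the placeholders from the answer above):
sudo mkdir /development
sudo chown {username}:staff /development
cd /development && ln -s {path to project}
# then add /development under Docker -> Preferences... -> File Sharing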

What is the file-system of a Docker container? On which file system does an application running inside this container run?

Basically, I am running Docker on my Windows 10 machine. I have mounted a Windows directory inside this container to access the files on my Windows machine, on which a couple of tasks are to be performed.
Which file system runs inside a docker container?
Is it the same as that of the OS on which this container is based? For instance, if I run a container with Ubuntu as the base OS, will it be the file system of the Ubuntu version running inside this container?
Or is it the one that is running on the docker daemon?
Also, I am running an application inside this container, which accesses files in my Windows directory as well as creates a couple of files. Now, these files are being written to Windows, hence they follow the file system of Windows (NTFS).
So, how does it work? (A different file system inside the Docker container and the Windows file system, both in conjunction?)
Which file system runs inside a docker container?
For mounted directories, the one from the Docker host (Windows NTFS or the Ubuntu file system).
$ docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /opt/webapp.
If the path /opt/webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content.
Once the mount is removed, the content is accessible again.
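You can see the overlay behaviour directly (reusing the training/webapp example; the ls arguments override the image's default command):
docker run --rm -v /src/webapp:/opt/webapp training/webapp ls /opt/webapp    # shows the host directory's files
docker run --rm training/webapp ls /opt/webapp                               # without the mount, the image's own content is visible again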
Now, these files are being written to Windows, hence follow the file system of Windows(NTFS).
Yes, and that filesystem is case-insensitive (as illustrated in docker issue 18756).
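A quick way to observe this from inside a container with a Windows directory mounted at /data (a hypothetical mount point):
touch /data/File.txt
ls /data/file.txt    # resolves to the same file on a case-insensitive NTFS mount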
