How to connect paths in a Dockerfile - Linux

I am running a Jenkins job inside a Docker container. This job requires Doxygen, but I'm getting an error saying:
[exec] /bin/sh: /opt/fc4-usr-local/bin/doxygen: No such file or directory
I have Doxygen installed in my Docker image, but its path is:
/usr/bin/doxygen
Inside my Docker image, I want to connect the old path /opt/fc4-usr-local/bin/doxygen with the new path /usr/bin/doxygen, so that whenever my job looks for doxygen it resolves to the new path.
Note 1: The reason I can't just edit my job to look for doxygen in the new path is that its files are locked and I'm not allowed to change them.
Note 2: So my idea is that when my Jenkins job looks for doxygen in my Docker container, it goes straight to the new path, not the old one.
Could anyone please suggest a way to do this?

Add these lines near the bottom of your Dockerfile:
RUN mkdir -p /opt/fc4-usr-local/bin
RUN ln -s /usr/bin/doxygen /opt/fc4-usr-local/bin/doxygen
The first line creates the missing directory. The second creates a symlink at the old path pointing to the actual binary at /usr/bin/doxygen.
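If you prefer a single layer, both commands can be combined into one RUN. A minimal sketch (the image tag my-image below is hypothetical):
RUN mkdir -p /opt/fc4-usr-local/bin \
 && ln -s /usr/bin/doxygen /opt/fc4-usr-local/bin/doxygen
You can then verify the link resolves before handing the image to Jenkins:
docker run --rm my-image /opt/fc4-usr-local/bin/doxygen --version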

Related

docker-compose exec vs Dockerfile: changing file permissions

I am able to change the permissions of a file when I run docker-compose exec app <change file permission command>.
However, if I try to do the same from the Dockerfile, it errors out saying it's a read-only file system.
The files I'm changing are inside /etc; I know those are mounted at runtime, and I wanted to know if it's possible to do this from the Dockerfile.
You can add a script in your Dockerfile that you copy into the image and run when the container starts. You are facing this error because image layers are read-only while running containers are mutable, and the files Docker manages under /etc are only mounted at runtime, so they can only be changed in a running container, not during the build. Running such a script at container start should fix the problem.
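A minimal sketch of that pattern, assuming a hypothetical fix-perms.sh script and app command (both names are illustrative):
# Dockerfile
COPY fix-perms.sh /usr/local/bin/fix-perms.sh
RUN chmod +x /usr/local/bin/fix-perms.sh
ENTRYPOINT ["/usr/local/bin/fix-perms.sh"]
CMD ["app"]
# fix-perms.sh
#!/bin/sh
# At container start, the runtime-mounted files under /etc are writable:
chmod 644 /etc/hosts
exec "$@"    # hand control over to the CMD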

Build docker image jar file : COPY failed: no source files were specified

I have a Leshan server jar file (to which I have made some changes), obtained by running mvn clean install. I should mention that I work on Linux, and I put this jar file inside a "leshan_docker" folder on the desktop. Within the same folder there is also a Dockerfile to build the server image, written as follows:
FROM openjdk:8-jre-alpine
COPY /Desktop/leshan_docker/leshan-server-demo-*.jar /Desktop/leshan_docker/
CMD ["java", "-jar", "/leshan-server-demo-2.0.0-SNAPSHOT.jar"]
but when I build it with this command:
sudo docker build -f Dockerfile3 -t leshan-server3 .
it reports the following error:
Sending build context to Docker daemon 12MB
Step 1/3 : FROM openjdk:8-jre-alpine
---> f7a292bbb70c
Step 2/3 : COPY /Desktop/leshan_docker/leshan-server-demo-*.jar /Desktop/leshan_docker/
COPY failed: no source files were specified
How can I go about solving the problem? Thanks in advance for your answers.
Your source path in the COPY command should be relative to the build context. Your build context is the folder you ran sudo docker build in, since the final argument you gave was "." (the current directory). I highly recommend taking a look at the docs.
The destination path for the COPY command should be relative to the path in your container. What may work now is to move your .jar to the root directory and run it from there.
So if your jar files are in the same directory you're running the command in, change it to:
COPY leshan-server-demo-*.jar /
It would be better practice to actually create a new directory in the container to hold your .jar file to keep your work more organized.
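For instance, a sketch of that better-organized layout, assuming the jar sits next to the Dockerfile (the /app directory name is just a convention):
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY leshan-server-demo-*.jar /app/
CMD ["java", "-jar", "/app/leshan-server-demo-2.0.0-SNAPSHOT.jar"]
Build it from the folder containing both files with the same command as before: sudo docker build -f Dockerfile3 -t leshan-server3 .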

Getting "exec user process caused "no such file or directory"" error when trying to run Docker image

I'm trying to run a simple image that executes a .sh file, but I got this error:
standard_init_linux.go:185: exec user process caused "no such file or directory"
Here is my Dockerfile
FROM python:2
ADD . .
CMD ["./test.sh"]
test.sh:
#!/bin/bash
echo "test"
I'm running Docker on Windows 10, and I have checked that /bin/bash exists in the container.
Why am I getting this error?
I faced the exact same issue when I tried to create a Linux container image with Docker on Windows 10. When you copy files from Windows into a Docker image, they keep their DOS (CRLF) line endings. You may need to run the dos2unix utility on all the files before copying them into the Docker image.
To make things clear, I'll share my experience. I checked out my project source code using Git on Windows and tried to build the Linux container image locally, and I got the exact same error message. This happened because I had created my Git project on Linux but this time checked it out on Windows. My default global Git configuration on Windows was checkout Windows-style, commit Unix-style: Git converts LF to CRLF when checking out text files and converts CRLF back to LF when committing. For cross-platform projects this is the recommended setting on Windows (core.autocrlf set to true).
In order to resolve this, I changed my Windows Git global config to checkout Unix-style, commit Unix-style (see How to change line-ending settings):
git config --global core.autocrlf input
After this, I checked out my project again and created a fresh local image that ran perfectly.
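If re-cloning is not an option, another workaround is to normalize the line endings during the build instead. A sketch, assuming the Debian-based python:2 image (so apt-get is available):
FROM python:2
ADD . .
# Strip the CRLF endings a Windows checkout introduces; otherwise the
# shebang is read as "/bin/bash\r", which does not exist:
RUN apt-get update && apt-get install -y dos2unix \
 && dos2unix ./test.sh && chmod +x ./test.sh
CMD ["./test.sh"]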

Cannot install inside docker container

I'm quite new to Docker, and I'm facing a problem I have no idea how to solve.
I have a Jenkins (Docker) image running, and everything was fine. A few days ago I created a job so I can run my Node.js tests every time a pull request is made. One of the job's build steps is to run npm install, and the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So I know that I have to install bzip2 inside the Jenkins container, but how do I do that? I've already tried to run docker run jenkins bash -c "sudo apt-get bzip2", but I got: bash: sudo: command not found.
With that said, how can I do it?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers. Docker containers are (or should be) immutable. So here is what you can try to fix this issue:
1. Treat your base image, i.e. jenkins, as the starting point.
2. Log in to a container from this base image and install bzip2.
3. Commit these changes; this results in a new image.
4. Use the image from step 3 to install any other packages, like npm.
5. Commit the resulting image again.
Note: to execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable, so the way to make a change that persists is to commit it and use the newly created image for further changes. I hope this helps.
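A sketch of that commit workflow, assuming your running container is named jenkins (use the name docker ps shows):
docker exec -it -u root jenkins bash
# inside the container:
apt-get update && apt-get install -y bzip2
exit
# back on the host, snapshot the change into a new image:
docker commit jenkins jenkins-with-bzip2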
Lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing inside of a running container. As techtrainer mentions, images are immutable and don't change (at least from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. This will work, but it's a hand-built method that is very error prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly modifying its Dockerfile, or you can create a child image that is FROM jenkins:latest.
When modifying this image, the Jenkins image is configured to run as the user "jenkins", so you'll need to switch to root to perform your application installs. The "sudo" app is not included in most images, but external to the container, you can run docker commands as any user. From the cli, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then USER jenkins at the end.
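A minimal sketch of such a child image (adding any further build tools you need works the same way):
FROM jenkins:latest
USER root
# install as root, then clean the apt cache to keep the layer small
RUN apt-get update \
 && apt-get install -y bzip2 \
 && rm -rf /var/lib/apt/lists/*
USER jenkins
Build it with something like docker build -t my-jenkins . (the tag is hypothetical) and run that image in place of jenkins.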
One last piece of advice is to not run your builds directly on the jenkins container, but rather run agents with your needed build tools that you can upgrade independently from the jenkins container. It's much more flexible, allows you to have multiple environments with only the tools needed for that environment, and if you scale this up, you can use a plugin to spin up agents on demand so you could have hundreds of possible agents to use and only be running a handful of them concurrently.

'docker build' gives error that 'docker run' doesn't. How are they different?

My project is set up like this:
./
  Dockerfile
  package.json
  build/
    (compiled files from the frontend and backend directories get put here)
  backend/
    app.js
  frontend/
    (frontend files...)
  scripts/
    startServer.sh
    build.sh
startServer.sh:
docker build ../ --tag myImage
# The build script compiles all my assets and places
# them in the top level 'build' directory which i am
# trying to link to my docker image so I can recompile
# on each file change and have the changes show in the docker image.
./build.sh
docker run --volume /path/to/build/dir:/src/app myImage
Dockerfile:
FROM node:4.4.7
RUN ls src/app
The RUN command in the Dockerfile gives me this error when the build command from the startServer script is called:
ls: cannot access src/app: No such file or directory
If I change RUN to CMD it gives no error. Also, even after the build gives that error, it finishes the build and the docker run command gives no error.
Is the 'docker build' command actually trying to add the 'build' folder to the image from which containers are launched? Or is it just compiling some commands for the images to use when they are made?
If it is the latter, how do you make one Dockerfile that works for both building and running?
I feel like I might be missing a crucial concept with Docker, but I've gone through the tutorials and docs and couldn't solve this.
There is no src/app folder in the node image, so this is an expected error. The node image expects you to add your own /usr/src/app, either with a COPY step in your build, or with a volume mapping after the build is finished.
RUN defines a step that executes at build time to add a layer to the resulting image, so an ls makes little sense there: at that point you haven't added any content to the image.
CMD gives a default command to run if one is not passed at the end of docker run, so if you do a docker run node /bin/bash, the ls src/app CMD will never be run. CMD also executes after all the build steps and after any volume mounts on your container, and it's the mount that creates this folder.
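A sketch that makes the timing visible (the tag myImage follows the question's naming):
FROM node:4.4.7
# Build time: /src/app does not exist yet, so a RUN ls /src/app here would fail.
# Run time: the -v mount has placed the folder by then, so this CMD succeeds:
CMD ["ls", "/src/app"]
Used as:
docker build . --tag myImage
docker run --volume /path/to/build/dir:/src/app myImage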
When you run the Docker image, you mount the data volume at /src/app, but in your Dockerfile you tried to access src/app. Because the default working directory is not the root directory, you cannot access the src directory with a relative path. So edit your Dockerfile to:
FROM node:4.4.7
RUN ls /src/app
