Why does my Dockerfile CMD not work? - linux

So at the end of my Dockerfile I have this:
WORKDIR /home
CMD django-admin startproject whattt
CMD /bin/bash
When I create the image and then run the container, everything works as expected: there are no errors, and none in the Docker log. However, there are still some issues that I cannot seem to figure out.
The first and most important problem is that CMD django-admin startproject is not actually creating any project. AFTER I run the container, I can manually run django-admin startproject and it works as expected. When I issue this as a CMD from the Dockerfile, though, no project gets created.
The second issue: after the django-admin line, I put a second CMD with /bin/bash so that when I run the container it opens a shell (letting me go in and check whether my Django project was created). Will this create a problem or conflict with the previous django-admin line? If I remove this line, then when I run the container I have no way to open a shell and check whether my Django project is there, do I?
Any help would be appreciated, thanks.

“There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.” (via the Dockerfile reference). So your first CMD will not take effect.
If you want a bash shell inside your running container, use the docker exec command; the documentation provides example invocations you can follow.
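A minimal sketch of how the end of the Dockerfile could look: project creation is a build-time step, so it belongs in a RUN instruction, leaving a single CMD (this assumes django-admin is already installed in the image; "whattt" is the project name from the question):

```Dockerfile
WORKDIR /home
# Build-time step: the project is created once, when the image is built.
RUN django-admin startproject whattt
# Exactly one CMD, the default process for the container.
CMD ["/bin/bash"]
```

With this layout you can also inspect a running container at any time with `docker exec -it <container> /bin/bash` instead of making the shell the container's main command.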

Related

Why docker entry point script fails when in contrast it runs perfectly inside container manually?

I have an Ubuntu:20.04 image with my software installed by Dockerfile RUN commands. The script I want to execute is built by a Dockerfile RUN call to buildmyscripts.sh.
Everything installs perfectly, and if I run the container (with the default entrypoint of /bin/sh or /bin/bash)
and execute /root/build/script.sh -i arg1 -m arg2 manually, it works.
However, the same doesn't work with ENTRYPOINT set to /root/build/script.sh followed by CMD set to the arguments. I get the following error when running the image:
Error: cannot load shared library xyz.so
xyz.so is a shared library that was already installed by an earlier RUN instruction.
Please assist, thanks.
Note: I run as USER root because I have a self-hosted runner on a hardened server, so security is not an issue.
Apparently the script that sets the environment variables has to be sourced before the entrypoint command runs, by prepending it to the ENTRYPOINT/CMD in the Dockerfile. Since the variables were set by sourcing another script, declaring them with ENV alone wasn't enough.
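One way this fix is commonly written, as a sketch: wrap the entrypoint in a shell that sources the environment script first, then exec's the real script with the CMD arguments. The file name /root/build/env.sh is a hypothetical stand-in for whatever script exports the library path:

```Dockerfile
# Sketch: source the (hypothetical) env script in the same shell,
# then exec the real entrypoint so it inherits the variables.
ENTRYPOINT ["/bin/bash", "-c", ". /root/build/env.sh && exec /root/build/script.sh \"$@\"", "--"]
# Default arguments, overridable at `docker run` time.
CMD ["-i", "arg1", "-m", "arg2"]
```

This matters because ENV can only set literal values; anything computed by sourcing a script has to happen in the shell that launches the process.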

Python Script to run commands in a Docker container

I am currently working on automating commands for a Docker container with a Python script on the host machine. This Python script for now, builds and runs a docker-compose file, with the commands for the containers written into the docker-compose file and the Dockerfile itself.
What I want to do is have the Python script action all commands to run within the container, so if I have different scripts I want to run, I am not changing the container. I have tried 2 ways.
The first was to run an os.system() command from the Python script; however, this only gets as far as opening a shell for the container, and the os.system() call does not execute code inside the Docker container itself.
The second way uses CMD within the Dockerfile, however, this is limited and is hard coded to the container. If I have multiple scripts I have to change the Dockerfile, I don't want this. What I want is to build a default container with all services running, then run Python scripts on the host to run a sequence of commands on the container.
I am fairly new to Docker and think there must be something I am overlooking to run scripted commands in the container. One possible solution I have come across is nsenter. Is this reliable, and how does it work? Or is there a much simpler way? I have also used a Docker volume to copy the Python files into the container at build time; however, I still cannot find a way to automate accessing and running these Python scripts from the host machine.
If the scripts need to be copied into a running container, you can do this via the docker cp command, e.g. docker cp myscript.sh mycontainer:/working/dir.
Once the scripts are in the container, you can run them via a docker exec command, e.g. docker exec -it mycontainer /working/dir/myscript.sh.
Note, this isn't a common practice. Typically the script(s) you need would be built (not copied) into container image(s). Then when you want to execute the script(s), within a container, you would run the container via a docker run command. e.g. docker run -it mycontainerimage /working/dir/myscript.sh
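Since the question asks about driving this from Python, the docker exec invocation above can be wrapped with the standard-library subprocess module. This is a sketch; the container name and script path are placeholders taken from the answer, and it assumes the docker CLI is on the host's PATH:

```python
import subprocess

def docker_exec_cmd(container, script, *args):
    """Build the argument list for running a script inside a container."""
    return ["docker", "exec", container, script, *args]

def run_in_container(container, script, *args):
    # Requires a running container with the given name and the docker CLI.
    return subprocess.run(
        docker_exec_cmd(container, script, *args),
        capture_output=True,
        text=True,
    )

# Usage (assuming "mycontainer" is running):
#   result = run_in_container("mycontainer", "/working/dir/myscript.sh", "-v")
#   print(result.stdout)
```

Passing a list (rather than a single string with shell=True) avoids quoting problems when script arguments contain spaces.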

Docker file ENTRYPOINT can't detect my start script

I'm trying to create a docker image. This image should run a shell script "startService.sh" when the container is created. The image was built successfully, but when trying to run the image, I get the following error:
"./startService.sh: 6: ./startService.sh: source: not found"
But I know I copied the startService.sh script into the image. My Dockerfile is shown below.
FROM openjdk:8
VOLUME /opt/att/ajsc/config
COPY startService.sh /startService.sh
RUN chmod 777 /startService.sh
ENTRYPOINT ./startService.sh
Where did I go wrong?
The error isn't saying that your start script isn't found; it's saying that the source command (which your script apparently uses) isn't found. source is a bash-specific synonym for the . command; if you want your script to be compatible with the Docker image's /bin/sh, you need to use . instead.
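To illustrate the difference, here is a minimal sketch (the file /tmp/env.sh is a hypothetical example): the `.` command works in any POSIX shell, while `source` is a bash extension that dash, the /bin/sh on Debian-based images, does not provide.

```shell
# Create a small file of variable assignments to load.
echo 'GREETING=hello' > /tmp/env.sh

# Under plain sh this line would fail with "source: not found":
#   source /tmp/env.sh
# The portable equivalent is the `.` (dot) command:
. /tmp/env.sh

echo "$GREETING"
```

Alternatively, keeping `source` and changing the script's shebang to `#!/bin/bash` also resolves the error, since bash understands both spellings.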

Dockerfile Build Error

I am trying to build a dockerfile for a Euler App to test ShinyProxy via "http://www.shinyproxy.io/deploying-apps/"
I am using the dockerfile from that link.
Upon using the command sudo docker build -t openanalytics/shinyproxy-template .
I get an error while the build is processing that:
Error: unexpected end of input
Execution halted
The command '/bin/sh -c R -e "install.packages(c('shiny', 'rmarkdown', repos='https://cloud.r-project.org/')" ' returned a non-zero code: 1.
I am curious why I am getting this error as this is the same exact command from the dockerfile.
What can I do to resolve this?
-Thanks
Look closely at the syntax of the R install.packages line and you will see it's missing a closing parenthesis.
I manually fixed that syntax and it correctly builds that step.
correct syntax
RUN R -e "install.packages(c('shiny', 'rmarkdown'), repos='https://cloud.r-project.org/')"
build it as
docker build --tag r_base .
NOTE: as the docker build progresses, it then fails later when attempting to
COPY euler /root/euler
lstat euler: no such file or directory
To troubleshoot this, just comment out all Dockerfile lines from the offending one onward and replace the bottom line with
CMD ["/bin/bash"]
then it will build correctly and let you log in to the running container to troubleshoot further
docker run -ti r_base bash
I know nothing of R, so I will leave it to the reader to fix the euler COPY ... evidently you must have euler sitting in your local directory before issuing the docker build command.
...now after you issue the above docker run command, then from the prompt inside the container issue
cd /
find . | grep Rprofile.site
./usr/lib/R/etc/Rprofile.site
That looks good, so leave its COPY line in the Dockerfile commented out.

Docker Compose In Production

Trying to use docker-compose to build and up a simple Node.js application. Although I ran into the same problem with a Django application so I think I'm just missing some sort of vital step. Here is my Dockerfile:
FROM node:4.2.1
CMD mkdir -p /var/app
COPY . /var/app
EXPOSE 3000
CMD node /var/app/index.js
When I run docker-compose up pointed at a DigitalOcean machine, it throws a Node error suggesting it can't find the code in /var/app. Is there some other mechanism I am supposed to use to get my code onto the machine, other than Docker?
The line CMD mkdir -p /var/app is wrong. There should be only one CMD in a Dockerfile, usually at the end.
Only the last CMD directive in a chain of inherited docker images will be executed.
You should use RUN instead
From Dockerfile reference
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
The main purpose of a CMD is to provide defaults for an executing container.
Try taking out the mkdir step. You also need to set the working directory.
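Putting the advice from this answer together, a corrected Dockerfile could look like the following sketch (same base image and port as the question; note that COPY creates the destination directory itself, so the mkdir is not strictly needed):

```Dockerfile
FROM node:4.2.1
# WORKDIR both creates /var/app and sets it as the working directory.
WORKDIR /var/app
# Copy the application code into the image at build time.
COPY . /var/app
EXPOSE 3000
# Exactly one CMD; paths can now be relative to the working directory.
CMD ["node", "index.js"]
```

The original CMD mkdir never ran at build time, so nothing created or populated /var/app in the image, which is why node could not find index.js.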
