I'm trying to create a docker image. This image should run a shell script "startService.sh" when the container is created. The image was built successfully, but when trying to run the image, I get the following error:
"./startService.sh: 6: ./startService.sh: source: not found"
But I know I copied the startService.sh script into the image. My Dockerfile is shown below.
FROM openjdk:8
VOLUME /opt/att/ajsc/config
COPY startService.sh /startService.sh
RUN chmod 777 /startService.sh
ENTRYPOINT ./startService.sh
Where did I go wrong?
The error isn't saying that your start script isn't found; it's saying that the source command (which your script apparently uses) isn't found. source is a bash-specific synonym for the . command; if you want your script to be compatible with the Docker image's /bin/sh, you need to use . instead.
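A minimal sketch of the difference (the file name and variable here are made up for illustration, not taken from the question):

```shell
# A config file to be sourced; name and contents are illustrative.
cat > /tmp/env.sh <<'EOF'
GREETING=hello
EOF

# "source /tmp/env.sh" only works in bash and a few other shells.
# The POSIX "." command does the same thing and works in plain sh,
# which is what "ENTRYPOINT ./startService.sh" runs the script with.
sh -c '. /tmp/env.sh && echo "$GREETING"'
```

Alternatively, keep `source` and make sure bash actually runs the script, e.g. by giving it a `#!/bin/bash` shebang line, or by using an exec-form entrypoint such as `ENTRYPOINT ["/bin/bash", "/startService.sh"]`.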
I have an Ubuntu:20.04 image with my software installed by Dockerfile RUN commands. The script I want to execute is built by a Dockerfile RUN call to buildmyscripts.sh.
The program installs perfectly, and if I run the container (with the default entrypoint of /bin/sh or /bin/bash)
and manually execute /root/build/script.sh -i arg1 -m arg2, it works.
However, the same doesn't work with ENTRYPOINT set to /root/build/script.sh followed by CMD set to the arguments. I get the following error when running the image:
Error: cannot load shared library xyz.so
xyz.so is a common shared library installed earlier by a RUN instruction.
Please assist, thanks.
Note: I run as USER root because I have a self-hosted runner on a hardened server, so security is not an issue.
Apparently you need to source the script that sets the environment variables by prepending it to the ENTRYPOINT/CMD in the Dockerfile. Since the environment was set up by sourcing another script, setting it with an ENV instruction alone wasn't working.
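A sketch of what that can look like (the env-file path is a guess, not from the question): wrap the entrypoint in a shell that sources the environment first, then execs the real script so the CMD arguments still reach it.

```dockerfile
# Hypothetical path; adjust to wherever buildmyscripts.sh writes its env file.
ENTRYPOINT ["/bin/bash", "-c", "source /root/build/env.sh && exec /root/build/script.sh \"$@\"", "--"]
CMD ["-i", "arg1", "-m", "arg2"]
```

The `exec` keeps the script as PID 1 so it receives signals, and the trailing `"--"` makes the CMD arguments arrive as `"$@"`.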
I am currently working on automating commands for a Docker container with a Python script on the host machine. This Python script for now, builds and runs a docker-compose file, with the commands for the containers written into the docker-compose file and the Dockerfile itself.
What I want to do is have the Python script action all commands to run within the container, so if I have different scripts I want to run, I am not changing the container. I have tried 2 ways.
The first was to run an os.system() command within the Python script; however, this works only as far as opening the shell for the container. The os.system() command does not execute code in the Docker container itself.
The second way uses CMD within the Dockerfile, however, this is limited and is hard coded to the container. If I have multiple scripts I have to change the Dockerfile, I don't want this. What I want is to build a default container with all services running, then run Python scripts on the host to run a sequence of commands on the container.
I am fairly new to Docker and think there must be something I am overlooking to run scripted commands on the container. One possible solution I have come across is nsenter. Is this a reliable solution, and how does it work? Or is there a much simpler way? I have also used a Docker volume to copy the Python files into the container to be run on build; however, I still cannot find a way to automate accessing and running these Python scripts from the host machine.
If the scripts need to be copied into a running container, you can do this via the docker cp command, e.g. docker cp myscript.sh mycontainer:/working/dir.
Once the scripts are in the container, you can run them via a docker exec command, e.g. docker exec -it mycontainer /working/dir/myscript.sh.
Note, this isn't a common practice. Typically the script(s) you need would be built (not copied) into container image(s). Then when you want to execute the script(s), within a container, you would run the container via a docker run command. e.g. docker run -it mycontainerimage /working/dir/myscript.sh
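Since the question drives everything from a Python script on the host, the docker exec step can also be wrapped with subprocess rather than os.system. This is a sketch; the container and script names are placeholders.

```python
import subprocess
from typing import List

def exec_argv(container: str, *cmd: str) -> List[str]:
    """Build the argument vector for `docker exec <container> <cmd...>`."""
    return ["docker", "exec", container, *cmd]

def run_in_container(container: str, *cmd: str) -> str:
    """Run a command inside an already-running container and return its stdout."""
    result = subprocess.run(
        exec_argv(container, *cmd),
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# e.g. run_in_container("mycontainer", "/working/dir/myscript.sh")
```

Unlike os.system, subprocess.run with check=True raises if the command inside the container fails, which makes the host-side automation easier to debug.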
I’m a fresh beginner in bioinformatics. Recently, I started learning it with the book “Bioinformatics with Python Cookbook” (by Antao, Tiago). I met some issues while setting up Docker for Linux. Please see below for the issues:
I was trying to set up the Docker files following the author’s instruction, but I found some files were “failed to download”.
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
Then I still went ahead and set up the container following the instruction:
“Now, you are ready to run the container, as follows: docker run -ti -p 9875:9875 -v YOUR_DIRECTORY:/data bio”
I typed as docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
However, it gave me an error saying “Unable to find image “bio:latest” locally”.
Can anyone give me any suggestions on this? My thought is that in the first step I missed downloading some files for setting up Docker, but I am not sure if I can fetch these files.
Thank you so much for any comments!
Best regards
Johnny
I tried downloading the Docker files a few times, but the error still appears:
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
In the first issue, I found some files “failed to download”.
In the second issue, an error appears saying “Unable to find image 'bio:latest' locally”.
Here you have a couple of problems:
1) It looks like you did not download that Dockerfile and build the required Docker image locally.
2) You are getting the error about not finding the image locally because of the previous problem.
So, you should do like this:
1) Download that Dockerfile (https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile). If you can't download the file for some reason, just open it on GitHub, select all the content, copy it, then in some folder on your computer make a new file, name it "Dockerfile", and paste the content.
2) Build the image locally: go to the folder where you downloaded the Dockerfile and execute the following command:
docker build -t bio .
3) Run your container with a docker run ... command
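Put together, the three steps can look like this. The URL is the one from the question; note also that docker run's -v flag needs the HOST_DIR:CONTAINER_DIR form if the intent is to mount that Windows folder at /data.

```shell
# Download the Dockerfile into an empty folder, then build and run from there.
curl -fLo Dockerfile https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
docker build -t bio .
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation:/data bio
```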
I am trying to build a Dockerfile for a Euler app to test ShinyProxy via "http://www.shinyproxy.io/deploying-apps/".
I am using the dockerfile from that link.
Upon using the command sudo docker build -t openanalytics/shinyproxy-template .
I get an error while the build is processing:
Error: unexpected end of input
Execution halted
The command '/bin/sh -c R -e "install.packages(c('shiny', 'rmarkdown', repos='https://cloud.r-project.org/')" ' returned a non-zero code: 1.
I am curious why I am getting this error, as this is the exact same command from the Dockerfile.
What can I do to resolve this?
-Thanks
Look closely at the syntax of the R install-library line and you will see it's missing a closing parenthesis: the c(...) vector is never closed, so repos= ends up inside it.
I just manually fixed that syntax, and it correctly builds that step.
Correct syntax:
RUN R -e "install.packages(c('shiny', 'rmarkdown'), repos='https://cloud.r-project.org/')"
build it as
docker build --tag r_base .
NOTE: as docker build progresses, it then fails later attempting
COPY euler /root/euler
lstat euler: no such file or directory
To troubleshoot this, just comment out all Dockerfile lines from the offending one onward and replace the bottom line with
CMD ["/bin/bash"]
then it will build correctly and allow you to log in to the running container to troubleshoot further
docker run -ti r_base bash
I know nothing of R, so I will leave it to the reader to fix the euler COPY ... evidently you must have euler sitting in your local directory prior to issuing the docker build command.
...now after you issue the above docker run command, then from the prompt inside the container issue
cd /
find . | grep Rprofile.site
./usr/lib/R/etc/Rprofile.site
That looks good, so leave its COPY commented out in the Dockerfile.
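The comment-out approach described above looks roughly like this; only the tail of the Dockerfile is sketched here, and the surrounding lines come from the linked shinyproxy template, which is not reproduced in full.

```dockerfile
RUN R -e "install.packages(c('shiny', 'rmarkdown'), repos='https://cloud.r-project.org/')"
# COPY euler /root/euler   # fails with "lstat euler: no such file or directory"
#                          # until an euler directory exists next to the Dockerfile
CMD ["/bin/bash"]
```

Once the image builds, `docker run -ti r_base bash` drops you into a shell where you can inspect what the earlier layers actually installed.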
So at the end of my Dockerfile I have this:
WORKDIR /home
CMD django-admin startproject whattt
CMD /bin/bash
When I create the image and then run the container, everything works as expected; there are no errors, and no errors in the Docker log. However, there are still some issues that I cannot seem to figure out.
The first and most important problem is that CMD django-admin startproject is not actually creating any project. After I run the container, I can manually run django-admin startproject and it works as expected. When I issue this as a CMD from the Dockerfile, though, no project gets created.
The second issue: after the django-admin line, I put a second CMD with /bin/bash, so when I run the container it opens a shell (letting me go in and check whether my Django project was created). Will this conflict with the previous django-admin line? If I remove this line, then when I run the container I have no way to open a shell and check whether my Django project is there, do I?
Any help would be appreciated, thanks.
“There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.” (Dockerfile reference.) So your first CMD will not take effect.
If you want to execute commands in the bash of your container, try the docker exec command; the documentation provides example commands you can follow.
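A sketch of two ways around the single-CMD limit: generate the project at build time with RUN and keep the shell as CMD, or chain both commands in one CMD.

```dockerfile
# Option 1: create the project once, while building the image.
WORKDIR /home
RUN django-admin startproject whattt
CMD ["/bin/bash"]

# Option 2 (alternative): do it at container start, chained into a single CMD.
# CMD ["/bin/sh", "-c", "django-admin startproject whattt && exec /bin/bash"]
```

With either option, run the container with -it (e.g. docker run -it myimage) so the bash shell is interactive.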