I have created a Dockerfile with the following:
FROM python:3.9.2
MAINTAINER KSMC
RUN apt-get update -y && apt-get install -y python3-pip python-dev
WORKDIR /Users/rba/Documents/Projects/DD/DD-N4
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python3"]
CMD ["main.py"]
My Python code is in main.py, my Python version is 3.9.2, and all my Python code, requirements.txt and the Dockerfile are in /Users/rba/Documents/Projects/DD/DD-N4. Upon trying to create the Docker image using:
docker build -t ddn4image .
I am getting the following error:
#8 1.547 ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
------ executor failed running [/bin/sh -c pip3 install -r requirements.txt]: exit code: 1
Can someone point out what is causing this?
You forgot to add a COPY instruction.
Before the RUN pip3 install -r requirements.txt line, add the following:
COPY . .
You need to do this because your local files have to be copied into the image during the build, so that they exist when the container is created.
Read the docs about COPY in the Docker docs.
Hint
Remove the ENTRYPOINT statement and use
CMD ["python3", "main.py"]
Here is a good explanation of the difference between ENTRYPOINT and CMD.
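Putting both fixes together, the corrected Dockerfile could look like the following sketch (the apt-get line is dropped here on the assumption that the python:3.9.2 image already ships pip; keep it if you need extra system packages):

```dockerfile
FROM python:3.9.2
WORKDIR /app
# Copy the build context (main.py, requirements.txt, ...) into the image
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3", "main.py"]
```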
Related
I am quite new to Docker and know only a few commands to test my tasks. I have built a Docker image for 64-bit Python because I had to install CatBoost in it. My Dockerfile looks like this:
FROM amd64/python:3.9-buster
RUN apt update
RUN apt install -y libgl1-mesa-glx apt-utils
RUN pip install -U pip
RUN pip install --upgrade setuptools
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip --default-timeout=100 install -r requirements.txt
COPY . /app
EXPOSE 5000
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
I built the image with docker build -t myimage ., and it builds for a long time, presumably successfully. When I run the image with docker run -p 5000:5000 myimage, it gives the warning WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested, downloads some things for a while, and then exits. Not sure what's happening. How do I tackle this?
I managed to solve it by running the same thing on a Linux machine.
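For completeness: on an Apple Silicon (arm64) host you can also ask Docker explicitly for the amd64 platform at build and run time. This runs under QEMU emulation, so it is slow, but it silences the warning; a sketch:

```shell
docker build --platform linux/amd64 -t myimage .
docker run --platform linux/amd64 -p 5000:5000 myimage
```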
Building a docker image, I've installed Negbio in my Dockerfile using:
RUN git clone https://github.com/ncbi-nlp/NegBio.git xyz && \
python xyz/setup.py install
When I try to run my Django application at localhost:1227 I get:
No module named 'negbio' ModuleNotFoundError exception
When I run pip list I can see negbio. What am I missing?
As per your comment, it wouldn't install with pip, hence you are not installing it via pip.
Firstly, to make sure https://github.com/ncbi-nlp/NegBio is properly installed via python setup.py install, you need to install its dependencies with pip install -r requirements.txt first. So either way, you still need pip inside Docker.
For example, this is the sample Dockerfile that would install the negbio package properly:
FROM python:3.6-slim
RUN mkdir -p /apps
WORKDIR /apps
# Steps for installing the package via Docker:
RUN apt-get update && apt-get -y upgrade && apt-get install -y git gcc build-essential
RUN git clone https://github.com/ncbi-nlp/NegBio.git
WORKDIR /apps/NegBio
RUN pip install -r requirements.txt
RUN python setup.py install
# note: ~ is not expanded in ENV, so use an absolute path
ENV PATH=/root/.local/bin:$PATH
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
So it wouldn't hurt to actually install it via requirements.txt.
I would do it like this:
requirements.txt --> have all your requirements added here
negbio==0.9.4
And make sure it is installed during the image build with RUN pip install -r requirements.txt
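As a sketch, the whole setup then shrinks to something like the following (assuming negbio 0.9.4 and its dependencies resolve from PyPI, which you should verify for your environment):

```dockerfile
FROM python:3.6-slim
WORKDIR /app
# requirements.txt pins negbio==0.9.4 along with your other dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
```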
I ultimately resolved my issue by going to an all Anaconda environment. Thank you for everyone's input.
I'm trying to deploy a lambda function using a Docker image, but I want to modify some of the code in the Python packages I'm installing. I can't find where the packages are installed for me to modify the source code.
My Dockerfile is as follows:
FROM public.ecr.aws/lambda/python:3.8
WORKDIR /usr/src/project
COPY lambda_handler.py ${LAMBDA_TASK_ROOT}
COPY requirements.txt ./
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir -r requirements.txt
CMD [ "lambda_handler.lambda_handler" ]
Question: Where are the packages from the requirements.txt file installed? I tried going into the container but it doesn't give me the bash terminal because the container is made from a Lambda image, which requires the entry point to be the lambda handler.
You can override the entrypoint of the image and then print the paths that python searches for packages.
docker run --rm -it --entrypoint python \
public.ecr.aws/lambda/python:3.8 -c 'import sys; print("\n".join(sys.path))'
/var/lang/lib/python38.zip
/var/lang/lib/python3.8
/var/lang/lib/python3.8/lib-dynload
/var/lang/lib/python3.8/site-packages
Given what we find above, you can enter a bash shell in your built image and look at the contents of
/var/lang/lib/python3.8/site-packages
with the following:
docker run --rm -it --entrypoint bash public.ecr.aws/lambda/python:3.8
ls /var/lang/lib/python3.8/site-packages
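The same lookup can also be done from Python itself. Outside the Lambda image the paths will differ, but the mechanism is identical; a sketch using the standard sysconfig module:

```python
import sys
import sysconfig

# Directory where pip installs pure-Python packages for this interpreter
site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)

# sys.path is what the interpreter actually searches at import time
for entry in sys.path:
    print(entry)
```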
I'm trying to build a Docker container that runs a Python script. I want the code to be cloned from git when I build the image. I'm using this Dockerfile as a base and added the following BEFORE the first line:
FROM debian:buster-slim AS intermediate
RUN apt-get update
RUN apt-get install -y git
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh/
RUN echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan [git hostname] >> /root/.ssh/known_hosts
RUN git clone git@...../myApp.git
... then added the following directly after the first line:
# Copy only the repo from the intermediate image
COPY --from=intermediate /myApp /myApp
... then at the end I added this to install some dependencies:
RUN set -ex; \
apt-get update; \
apt-get install -y gcc g++ unixodbc-dev libpq-dev; \
\
pip install pyodbc; \
pip install paramiko; \
pip install psycopg2
And I changed the command to run to:
CMD ["python3 /myApp/main.py"]
If, at the end of the dockerfile before the CMD, I add the command "RUN ls -l /myApp" it lists all the files I would expect during the build. But when I use "docker run" to run the image, it gives me the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "python3 /myApp/main.py": stat python3 /myApp/main.py: no such file or directory": unknown.
My build command is:
docker build --file ./Dockerfile --tag my_app --build-arg SSH_PRIVATE_KEY="$(cat sshkey)" .
Then run with docker run my_app
There is probably some docker fundamental that I am misunderstanding, but I can't seem to figure out what it is.
This is hard to answer without your command line or your docker-compose.yml (if any). A recurrent mistake is to map a volume from the host into the container at a non-empty location; in that case, the container's files are hidden by the content of the host folder.
The last CMD should be like this:
CMD ["python3", "/myApp/main.py"]
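The underlying reason: in the JSON (exec) form, each array element becomes one argv entry, so ["python3 /myApp/main.py"] asks the kernel to execute a file literally named python3 /myApp/main.py, spaces included, which is exactly Docker's "stat ... no such file or directory" error. A quick sketch of the same failure in plain Python (sys.executable stands in for python3 here):

```python
import subprocess
import sys

# One string = one argv element: the whole thing is taken as the program
# name itself, spaces and all, and no such file exists.
try:
    subprocess.run(["python3 /myApp/main.py"])
except FileNotFoundError:
    print("no such program: 'python3 /myApp/main.py'")

# Split into separate argv elements and the interpreter is found normally.
subprocess.run([sys.executable, "-c", "print('ok')"])
```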
So, after building out a pipeline, I realized I will need some custom libraries for a python script I will be pulling from SCM. To install Jenkins in Docker, I used the following tutorial:
https://jenkins.io/doc/book/installing/
Like so:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
Now, I will say I'm not a Docker guru, but I'm aware the Dockerfile allows for passing in library installs for Python. However, because I'm pulling the docker image from dockerhub, I'm not sure if it's possible to add a "RUN pip install " as an argument. Maybe there is an alternate approach someone may have.
Any help is appreciated.
EDIT 1: Here's the output of the first commenter's recommendation:
Step 1/6 : FROM jenkinsci/blueocean
---> b7eef16a711e
Step 2/6 : USER root
---> Running in 150bba5c4994
Removing intermediate container 150bba5c4994
---> 882bcec61ccf
Step 3/6 : RUN apt-get update
---> Running in 324f28f384e0
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Error:
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Observation:
This error occurs when the image you are building from is not Debian-based and hence does not support 'apt'.
To resolve this, we need to find out which package manager it utilizes.
In my case it was: 'apk'.
Resolution:
Replace 'apt-get' with 'apk' in your Dockerfile. (If this does not work, you can try the 'yum' package manager as well.)
Command in your Dockerfile should look like:
RUN apk update
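For the jenkinsci/blueocean image, which is Alpine-based, that could look like the following sketch; the python3 and py3-pip package names are assumptions and may differ across Alpine versions:

```dockerfile
FROM jenkinsci/blueocean
USER root
# Alpine uses apk, not apt-get
RUN apk update && apk add --no-cache python3 py3-pip
RUN pip3 install --upgrade pip
# Drop back to the unprivileged Jenkins user
USER jenkins
```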
You can create a Dockerfile
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
You can build the custom image using
docker build -t jenkinspython .
Similar to Hemant Sing's answer, but with two slight differences.
First, create a unique directory: mkdir foo
"cd" to that directory and run:
docker build -f jenkinspython .
Where jenkinspython contains:
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
Notice that my change has -f, not -t. And notice that the build output does indeed contain:
Step 5/5 : RUN pip install --upgrade pip
---> Running in d460e0ebb11d
Collecting pip
Downloading https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl (1.3MB)
Installing collected packages: pip
Found existing installation: pip 9.0.1
Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
Successfully installed pip-18.0
Removing intermediate container d460e0ebb11d
---> b7d342751a79
Successfully built b7d342751a79
So now that the image has been built (in my case, b7d342751a79), fire it up and verify that pip has indeed been updated:
$ docker run -it b7d342751a79 bash
root@9f559d448be9:/# pip --version
pip 18.0 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
So now your image has pip installed, and you can feel free to pip install whatever crazy packages you need :)