Building a docker image, I've installed Negbio in my Dockerfile using:
RUN git clone https://github.com/ncbi-nlp/NegBio.git xyz && \
python xyz/setup.py install
When I try to run my Django application at localhost:1227 I get:
ModuleNotFoundError: No module named 'negbio'
When I run pip list I can see negbio. What am I missing?
As per your comment, it wouldn't install with pip, which is why you aren't installing it via pip.
Firstly, to make sure https://github.com/ncbi-nlp/NegBio is properly installed via python setup.py install, you need to install its dependencies via pip install -r requirements.txt first. So either way, you will end up needing pip inside Docker.
For example, this is the sample Dockerfile that would install the negbio package properly:
FROM python:3.6-slim
RUN mkdir -p /apps
WORKDIR /apps
# Steps for installing the package via Docker:
RUN apt-get update && apt-get -y upgrade && apt-get install -y git gcc build-essential
RUN git clone https://github.com/ncbi-nlp/NegBio.git
WORKDIR /apps/NegBio
RUN pip install -r requirements.txt
RUN python setup.py install
# note: ~ is not expanded in ENV, so spell the path out
ENV PATH=/root/.local/bin:$PATH
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
So it wouldn't hurt to actually install it via requirements.txt.
I would do it like this:
requirements.txt --> have all your requirements added here
negbio==0.9.4
And make sure it's installed on the fly inside the docker using RUN pip install -r requirements.txt
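A minimal sketch of that approach (the base image and paths are assumptions, not from the original answer):
FROM python:3.6-slim
WORKDIR /app
# negbio==0.9.4 is pulled from PyPI along with everything else
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .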
I ultimately resolved my issue by going to an all Anaconda environment. Thank you for everyone's input.
I am fairly new to Docker and only know a few commands to test my tasks. I have built a Docker image for 64-bit Python because I had to install CatBoost in it. My Dockerfile looks like this:
FROM amd64/python:3.9-buster
RUN apt update
RUN apt install -y libgl1-mesa-glx apt-utils
RUN pip install -U pip
RUN pip install --upgrade setuptools
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip --default-timeout=100 install -r requirements.txt
COPY . /app
EXPOSE 5000
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
I built the image with docker build -t myimage .; the build runs for a long time and presumably succeeds. When I run the image with docker run -p 5000:5000 myimage, it gives a warning: WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested. It then downloads some things for a while and exits. I'm not sure what's happening. How do I tackle this?
I managed to solve it by running the same thing on a Linux machine.
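If switching machines isn't an option, the warning itself points at an alternative: request the amd64 platform explicitly (on an arm64 host this runs under QEMU emulation, so expect it to be slow). A sketch using standard Docker flags, not part of the original answer:
docker build --platform linux/amd64 -t myimage .
docker run --platform linux/amd64 -p 5000:5000 myimage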
Inside my Dockerfile I have:
FROM python:3.7
RUN apt update
RUN apt install -y git
RUN groupadd -g 1001 myuser
RUN useradd -u 1001 -g 1001 -ms /bin/bash myuser
USER 1001:1001
USER myuser
WORKDIR /home/myuser
COPY --chown=myuser:myuser requirements.txt ./
ENV PYTHONPATH="/home/myuser/.local/lib/python3.7/site-packages:.:$PYTHONPATH"
RUN python3.7 -m pip install -r requirements.txt
COPY --chown=myuser:myuser . .
ENV PATH="/home/myuser/.local/bin/:$PATH"
ENV HOME=/home/myuser
ENV PYTHONHASHSEED=1
EXPOSE 8001
CMD [ "python3.7", "app.py" ]
During the build, pip list displays all the libraries correctly:
basicauth 0.4.1
pip 21.1.1
python-dateutil 2.8.1
pytz 2019.1
PyYAML 5.1.1
requests 2.22.0
setuptools 56.0.0
six 1.16.0
urllib3 1.25.11
wheel 0.36.2
But once OpenShift deploys the container, I only get the following libraries installed:
WARNING: The directory '/home/myuser/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Package Version
---------- -------
pip 21.1.1
setuptools 56.0.0
wheel 0.36.2
The CMD command runs as expected, but none of the packages are installed...
Traceback (most recent call last):
  File "app.py", line 16, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'
A revised Dockerfile more in line with standard practices:
FROM python:3.7
RUN apt update && \
apt install -y --no-install-recommends git && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN python3.7 -m pip install -r requirements.txt
COPY . .
ENV PYTHONHASHSEED=1
USER nobody
CMD [ "python3.7", "app.py" ]
I combined the initial RUN layers for a smaller image and cleaned up the apt lists before ending the layer. Packages are installed globally as root, and only after that does the image switch to the runtime user. Unless you very specifically need a home directory, I would stick with nobody/65534 as the standard way to express a low-privilege runtime user.
Remember that OpenShift overrides the container-level USER info https://www.openshift.com/blog/a-guide-to-openshift-and-uids
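Because OpenShift typically runs containers as an arbitrary UID belonging to the root group, a common compatibility tweak (sketched here as an addition to the revised Dockerfile above, not something from the original answer) is to make the app directory group-owned by root and group-writable:
# allow an arbitrary UID in group 0 to read and write the app dir
RUN chgrp -R 0 /app && chmod -R g=u /app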
I am using Windows 10. I want to build a Linux-based container so I can replicate code and dependencies developed on Ubuntu. When I try to build it, it outputs the error message shown below.
From my understanding, Docker Desktop runs a Linux kernel under the hood, which is what allows Windows users to run Linux-based containers, so I'm not sure why it outputs this error.
My dockerfile looks like this:
FROM ubuntu:18.04
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt update \
&& apt install -y htop python3-dev wget
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir root/.conda \
&& sh Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda create -y -n ml python=3.7
COPY . src/
RUN /bin/bash -c "cd src \
&& source activate ml \
&& pip install -r requirements.txt"
requirements.txt contains:
apturl==0.5.2
asn1crypto==0.24.0
bleach==2.1.2
Brlapi==0.6.6
certifi==2020.11.8
chardet==3.0.4
click==7.1.2
command-not-found==0.3
configparser==5.0.1
cryptography==2.1.4
cupshelpers==1.0
dataclasses==0.7
When I run docker build command it outputs:
1.649 ERROR: Could not find a version that satisfies the requirement apturl==0.5.2
1.649 ERROR: No matching distribution found for apturl==0.5.2
Deleting that entry and re-running led to another error. All the errors seem to be associated with Ubuntu packages.
Am I not running an Ubuntu container? Why am I not allowed to install Ubuntu packages?
Thanks!
You are trying to install Ubuntu packages with pip (which is for Python packages).
Try apt install -y apturl instead.
If you want to install a Python package, use pip install package_name.
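In practice that means splitting the list: OS-level packages go through apt in the Dockerfile, and only genuine PyPI packages stay in requirements.txt. A sketch (the split is illustrative, based on the requirements shown above):
# Ubuntu packages such as apturl are not on PyPI; install them with apt
RUN apt update && apt install -y apturl
# real PyPI packages can stay in requirements.txt
RUN pip install certifi==2020.11.8 chardet==3.0.4 click==7.1.2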
So, after building out a pipeline, I realized I will need some custom libraries for a python script I will be pulling from SCM. To install Jenkins in Docker, I used the following tutorial:
https://jenkins.io/doc/book/installing/
Like so:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
Now, I will say I'm not a Docker guru, but I'm aware that a Dockerfile allows you to install Python libraries as part of the build. However, because I'm pulling the Docker image from Docker Hub, I'm not sure whether it's possible to add a "RUN pip install " step as an argument. Maybe someone has an alternate approach.
Any help is appreciated.
EDIT 1: Here's the output of the first commenter's recommendation:
Step 1/6 : FROM jenkinsci/blueocean
---> b7eef16a711e
Step 2/6 : USER root
---> Running in 150bba5c4994
Removing intermediate container 150bba5c4994
---> 882bcec61ccf
Step 3/6 : RUN apt-get update
---> Running in 324f28f384e0
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Error:
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Observation:
This error occurs when the image you are building on is not Debian-based, and hence does not support 'apt'.
To resolve this, we need to find out which package manager it uses.
In my case it was: 'apk'.
Resolution:
Replace 'apt-get' with 'apk' in your Dockerfile. (If this does not work, you can try the 'yum' package manager as well.)
Command in your Dockerfile should look like:
RUN apk update
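Applied to the jenkinsci/blueocean image (which is Alpine-based), the start of the Dockerfile might look like this; python3 and py3-pip are the usual Alpine package names, but verify them against your image's Alpine version:
FROM jenkinsci/blueocean
USER root
# Alpine uses apk instead of apt-get
RUN apk update && apk add --no-cache python3 py3-pip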
You can create a Dockerfile
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
You can build the custom image using
docker build -t jenkinspython .
Similar to Hemant Sing's answer, but with 2 slightly different things.
First, create a unique directory: mkdir foo
"cd" to that directory and run:
docker build -f jenkinspython .
Where jenkinspython contains:
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
Notice that my change has -f, not -t. And notice that the build output does indeed contain:
Step 5/5 : RUN pip install --upgrade pip
---> Running in d460e0ebb11d
Collecting pip
Downloading https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl (1.3MB)
Installing collected packages: pip
Found existing installation: pip 9.0.1
Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
Successfully installed pip-18.0
Removing intermediate container d460e0ebb11d
---> b7d342751a79
Successfully built b7d342751a79
So now that the image has been built (in my case, b7d342751a79), fire it up and verify that pip has indeed been updated:
$ docker run -it b7d342751a79 bash
root@9f559d448be9:/# pip --version
pip 18.0 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
So now your image has pip installed, so then you can feel free to pip install whatever crazy packages you need :)
I'm trying to build a docker image with python 3 and virtualenv.
I understand that I wouldn't need to use virtualenv in a Docker image since I'm going to use only Python 3, yet I see some clean isolation benefits of using virtualenv anyway.
What's the best practice? Should I avoid using virtualenv on docker?
If that's the case, how can I set up python3 and pip3 to be used as python and pip (without the 3)?
This is my Dockerfile:
FROM openjdk:8-alpine
RUN apk update && apk add bash gcc musl-dev
RUN apk add python3 python3-dev
RUN apk add py3-pip
RUN apk add libxslt-dev libxml2-dev
ENV PROJECT_HOME /opt/app
RUN mkdir -p /opt/app
RUN mkdir -p /opt/app/modules
ENV LD_LIBRARY_PATH /usr/lib/python3.6/site-packages/jep
ENV LD_PRELOAD /usr/lib/libpython3.6m.so
RUN pip3 install jep
RUN pip3 install ads
RUN pip3 install gspread
RUN pip3 list
COPY target/my-server-1.0-SNAPSHOT.jar $PROJECT_HOME/my-server-1.0-SNAPSHOT.jar
WORKDIR $PROJECT_HOME
CMD ["java", "-Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb","-jar","./my-server-1.0-SNAPSHOT.jar"]
Thanks
=== UPDATE 1 ===
I'm trying to create a new virtualenv in the WORKDIR, install some libs, and then execute a shell script. Even though I can see it create everything when I build the image, the environment folder is empty when the container runs.
This is from my Dockerfile:
RUN virtualenv ./env && source ./env/bin/activate && pip install jep \
googleads gspread oauth2client
ENTRYPOINT ["/bin/bash", "./startup.sh"]
startup.sh:
#!/bin/sh
source ./env/bin/activate
java -Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb -jar ./my-server-1.0-SNAPSHOT.jar
It builds fine but on docker-compose up -d this is the output:
./startup.sh: source: line 2: can't open './env/bin/activate'
The env folder exists, but it's empty.
Any ideas?
Thanks!
=== UPDATE 2 ===
This is the working config:
RUN virtualenv ./my-env && source ./my-env/bin/activate \
&& pip install gspread==0.6.2 jep oauth2client googleads pandas
CMD ["/bin/bash", "-c", "./startup.sh"]
This is startup.sh:
#!/bin/sh
source ./my-env/bin/activate
java -Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb -jar ./my-server-1.0-SNAPSHOT.jar
I don't think using virtualenv in Docker is really negative; it will just slow down your container builds a bit.
As for renaming pip3 and python3, you can create a hard link like this:
ln /usr/bin/python3 /usr/bin/python
ln /usr/bin/pip3 /usr/bin/pip
assuming the python3 executable is in /usr/bin/. You can find its location by running which python3.
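If you prefer symlinks over hard links (for example, so the links follow package upgrades), something like this should work just as well; the paths are assumptions to be checked with which python3:
RUN ln -sf /usr/bin/python3 /usr/local/bin/python && \
    ln -sf /usr/bin/pip3 /usr/local/bin/pip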
P.S.: Your Dockerfile contains loads of RUN instructions that create unnecessary intermediate layers. Combine them to save space and time:
RUN apk update && apk add bash gcc musl-dev \
python3 python3-dev py3-pip \
libxslt-dev libxml2-dev
RUN mkdir -p /opt/app/modules # you don't need the first one, -p will create it for you
RUN pip3 install jep ads gspread
Or combine them even further, if you aren't planning to change them often:
RUN apk update \
    && apk add bash gcc musl-dev \
       python3 python3-dev py3-pip \
       libxslt-dev libxml2-dev \
    && mkdir -p /opt/app/modules \
    && pip3 install jep ads gspread
The only "workaround" I've found in order to use virtualenv from my docker container is to enter to the docker by ssh, create the environment, install the libs and set its folder as a volume in the docker-compose config so it won't be deleted and I can use it afterward.
(Or to have it ready and just copy the folder at build time) which could be a good option for saving build time, isn't it?
Otherwise, If I create it on Dockerfile and install the libs there, its folder gets empty when the container runs. Don't know why.
I appreciate if anyone can suggest a better way to deal with that.
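For what it's worth, one pattern that sidesteps exactly this class of problem is to create the virtualenv outside the application directory, so a volume mounted over the project folder can't shadow it, and to put the env's bin directory on PATH instead of sourcing activate. A sketch with assumed paths, not the poster's actual setup:
# create the env outside the app dir so a volume mount can't hide it
RUN pip3 install virtualenv && virtualenv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# pip and python now resolve to the venv; no "source activate" needed
RUN pip install jep googleads gspread oauth2client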