Shell command won't run in Docker build - linux

When I am inside the container, the following works:
pip uninstall -y -r <(pip freeze)
Yet when I try to do the same in my Dockerfile like this:
RUN pip uninstall -y -r <(pip freeze)
I get:
------
> [ 7/10] RUN pip uninstall -y -r <(pip freeze):
#11 0.182 /bin/sh: 1: Syntax error: "(" unexpected
------
executor failed running [/bin/sh -c pip uninstall -y -r <(pip freeze)]: exit code: 2
How do I re-write this command to make it happy?
Edit:
I am using this workaround. Still interested in the one-liner answer, however:
RUN pip freeze > to_delete.txt
RUN pip uninstall -y -r to_delete.txt

The default shell for RUN is sh, which is not compatible with the command above: process substitution (the <( ... ) syntax) is a bash feature that plain sh does not support.
You could change the shell for RUN to bash to make it work.
Dockerfile:
FROM python:3
SHELL ["/bin/bash", "-c"]
RUN pip install pyyaml
RUN pip uninstall -y -r <(pip freeze)
Execution:
$ docker build -t abc:1 .
Sending build context to Docker daemon 5.12kB
Step 1/4 : FROM python:3
---> da24d18bf4bf
Step 2/4 : SHELL ["/bin/bash", "-c"]
---> Running in da64b8004392
Removing intermediate container da64b8004392
---> 4204713810f9
Step 3/4 : RUN pip install pyyaml
---> Running in 17a91e8cc768
Collecting pyyaml
Downloading PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Installing collected packages: pyyaml
Successfully installed pyyaml-5.4.1
WARNING: You are using pip version 20.3.3; however, version 21.2.4 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
Removing intermediate container 17a91e8cc768
---> 1e6dba7439ac
Step 4/4 : RUN pip uninstall -y -r <(pip freeze)
---> Running in 256f7194350c
Found existing installation: PyYAML 5.4.1
Uninstalling PyYAML-5.4.1:
Successfully uninstalled PyYAML-5.4.1
Removing intermediate container 256f7194350c
---> 10346fb83333
Successfully built 10346fb83333
Successfully tagged abc:1
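If you would rather not change the shell for the whole Dockerfile, a couple of alternatives should also work (sketches, not tested against your exact image). You can use the exec form of RUN to invoke bash for just that one instruction:
RUN ["/bin/bash", "-c", "pip uninstall -y -r <(pip freeze)"]
Or stay with plain sh and pipe pip freeze into xargs (assuming xargs is available, as it is in the Debian-based python images), which also gives you a one-liner:
RUN pip freeze | xargs -r pip uninstall -y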

Related

Installed pip packages are not available when deploying the container

Inside my Dockerfile I have:
FROM python:3.7
RUN apt update
RUN apt install -y git
RUN groupadd -g 1001 myuser
RUN useradd -u 1001 -g 1001 -ms /bin/bash myuser
USER 1001:1001
USER myuser
WORKDIR /home/myuser
COPY --chown=myuser:myuser requirements.txt ./
ENV PYTHONPATH="/home/myuser/.local/lib/python3.7/site-packages:.:$PYTHONPATH"
RUN python3.7 -m pip install -r requirements.txt
COPY --chown=myuser:myuser . .
ENV PATH="/home/myuser/.local/bin/:$PATH"
ENV HOME=/home/myuser
ENV PYTHONHASHSEED=1
EXPOSE 8001
CMD [ "python3.7", "app.py" ]
During the build, pip list displays all the libraries correctly:
basicauth 0.4.1
pip 21.1.1
python-dateutil 2.8.1
pytz 2019.1
PyYAML 5.1.1
requests 2.22.0
setuptools 56.0.0
six 1.16.0
urllib3 1.25.11
wheel 0.36.2
But once OpenShift deploys the container, I only get the following libraries installed:
WARNING: The directory '/home/myuser/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Package Version
---------- -------
pip 21.1.1
setuptools 56.0.0
wheel 0.36.2
The CMD command runs as expected, but none of the packages are installed...
Traceback (most recent call last):
  File "app.py", line 16, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'
A revised Dockerfile more in line with standard practices:
FROM python:3.7
RUN apt update && \
apt install -y --no-install-recommends git && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN python3.7 -m pip install -r requirements.txt
COPY . .
ENV PYTHONHASHSEED=1
USER nobody
CMD [ "python3.7", "app.py" ]
I combined the initial RUN layers for a smaller image, and cleaned up the apt lists before exiting the layer. Packages are installed globally as root, and only after that does the image switch to the runtime user. Unless you very specifically need a home directory, I would stick with nobody/65534 as the standard way to express a "low-privilege runtime user".
Remember that OpenShift overrides the container-level USER info: https://www.openshift.com/blog/a-guide-to-openshift-and-uids
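If you want to confirm what the container actually sees at runtime, a small debugging sketch (assuming the deployed pod gives you a shell) might look like this:
id                                          # shows the arbitrary UID OpenShift assigned
echo $HOME $PYTHONPATH $PATH                # check which environment the process really inherits
python3.7 -m pip list                       # compare against the package list from the build
python3.7 -c "import sys; print(sys.path)"  # directories Python will actually search for imports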

Why am I getting ModuleNotFoundError after installing Negbio?

Building a docker image, I've installed Negbio in my Dockerfile using:
RUN git clone https://github.com/ncbi-nlp/NegBio.git xyz && \
python xyz/setup.py install
When I try to run my Django application at localhost:1227 I get a ModuleNotFoundError exception:
No module named 'negbio'
When I run pip list I can see negbio. What am I missing?
As per your comment, it wouldn't install with pip, hence you are not installing it via pip.
Firstly, to make sure https://github.com/ncbi-nlp/NegBio is properly installed via python setup.py install, you need to install its dependencies via pip install -r requirements.txt first. So either way, you still need pip inside Docker.
For example, this is the sample Dockerfile that would install the negbio package properly:
FROM python:3.6-slim
RUN mkdir -p /apps
WORKDIR /apps
# Steps for installing the package via Docker:
RUN apt-get update && apt-get -y upgrade && apt-get install -y git gcc build-essential
RUN git clone https://github.com/ncbi-nlp/NegBio.git
WORKDIR /apps/NegBio
RUN pip install -r requirements.txt
RUN python setup.py install
ENV PATH=/root/.local/bin:$PATH
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
So it wouldn't harm if you actually installed it via requirements.txt instead.
I would do it like this:
requirements.txt --> have all your requirements added here
negbio==0.9.4
And make sure it's installed on the fly inside Docker using RUN pip install -r requirements.txt
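Putting that together, a minimal Dockerfile sketch for the requirements.txt route (assuming negbio 0.9.4 and your other pinned dependencies install cleanly from PyPI) could look like:
FROM python:3.6-slim
WORKDIR /apps
# requirements.txt pins negbio==0.9.4 plus the rest of your dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]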
I ultimately resolved my issue by switching to an all-Anaconda environment. Thank you for everyone's input.

docker ERROR: Could not find a version that satisfies the requirement apturl==0.5.2

I am using Windows 10. I want to build a Linux-based container so I can replicate code and dependencies developed on Ubuntu. When I try to build it, it outputs the error message above.
From my understanding, Docker Desktop runs a Linux kernel under the hood, which is what allows Windows users to run Linux-based containers, so I'm not sure why it is outputting this error.
My dockerfile looks like this:
FROM ubuntu:18.04
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt update \
&& apt install -y htop python3-dev wget
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir root/.conda \
&& sh Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda create -y -n ml python=3.7
COPY . src/
RUN /bin/bash -c "cd src \
&& source activate ml \
&& pip install -r requirements.txt"
requirements.txt contains:
apturl==0.5.2
asn1crypto==0.24.0
bleach==2.1.2
Brlapi==0.6.6
certifi==2020.11.8
chardet==3.0.4
click==7.1.2
command-not-found==0.3
configparser==5.0.1
cryptography==2.1.4
cupshelpers==1.0
dataclasses==0.7
When I run the docker build command it outputs:
1.649 ERROR: Could not find a version that satisfies the requirement apturl==0.5.2
1.649 ERROR: No matching distribution found for apturl==0.5.2
Deleting that line and running again leads to another error. All of the errors seem to be associated with Ubuntu packages.
Am I not running an Ubuntu container? Why am I not allowed to install Ubuntu packages?
Thanks!
You are trying to install Ubuntu packages with pip (which is for Python packages).
Try apt install -y apturl instead.
If you want to install Python packages, write pip install package_name.
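The underlying problem is that the requirements.txt was generated from the system Python on Ubuntu, so it also lists distro-packaged modules (apturl, command-not-found, Brlapi, cupshelpers, ...) that are not published on PyPI. One way to avoid that, sketched below under the assumption that you can re-run this on the original Ubuntu machine, is to regenerate requirements.txt from a clean virtual environment containing only the packages your code actually needs:
python3 -m venv venv
. venv/bin/activate
pip install numpy pandas          # hypothetical examples; install only what your code imports
pip freeze > requirements.txt     # now contains only PyPI-installable packages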

Docker file builds correctly but fails to run

$ docker image build -t python-blog .
Step 1/5 : FROM ubuntu:16.04
---> 005d2078bdfa
Step 2/5 : RUN apt-get update -y && apt-get install -y python-pip python-dev
---> Using cache
---> 7c1fdabcc215
Step 3/5 : ADD . .
---> Using cache
---> 5e2784fac3bd
Step 4/5 : RUN pip install -r requirements.txt
---> Using cache
---> 7d038ff8d993
Step 5/5 : CMD ["/usr/bin/python3","run.py"]
---> Using cache
---> 24f691d13886
Successfully built 24f691d13886
Successfully tagged python-blog:latest
$ docker run -p 5000:5000 python-blog:latest
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:349: starting container process caused "exec:
\"/usr/bin/python3\": stat /usr/bin/python3: no such file or
directory": unknown. ERROr[0005] error waiting for container: context
canceled
Dockerfile
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python-pip python-dev
ADD . .
RUN pip install -r requirements.txt
CMD ["/usr/bin/python3","run.py"]
Your Dockerfile doesn't specifically install python3; on ubuntu:16.04 the python-pip and python-dev packages pull in Python 2 (2.7.12-1ubuntu0~16.04.11) by default. That's why it fails to execute the CMD instruction, which expects /usr/bin/python3, which does not exist.
You have to update your Dockerfile to explicitly install python3 if you want to run your code with Python 3.
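A sketch of what that could look like (assuming the packages in requirements.txt are Python 3 compatible):
FROM ubuntu:16.04
RUN apt-get update -y && \
    apt-get install -y python3-pip python3-dev
ADD . .
RUN pip3 install -r requirements.txt
CMD ["/usr/bin/python3","run.py"]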

Jenkins in a Docker Container - How do I install custom Python libraries?

So, after building out a pipeline, I realized I will need some custom libraries for a python script I will be pulling from SCM. To install Jenkins in Docker, I used the following tutorial:
https://jenkins.io/doc/book/installing/
Like so:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
Now, I will say I'm not a Docker guru, but I'm aware the Dockerfile allows for passing in library installs for Python. However, because I'm pulling the Docker image from Docker Hub, I'm not sure if it's possible to add a "RUN pip install ..." step as an argument. Maybe someone has an alternate approach.
Any help is appreciated.
EDIT 1: Here's the output of the first commenter's recommendation:
Step 1/6 : FROM jenkinsci/blueocean
---> b7eef16a711e
Step 2/6 : USER root
---> Running in 150bba5c4994
Removing intermediate container 150bba5c4994
---> 882bcec61ccf
Step 3/6 : RUN apt-get update
---> Running in 324f28f384e0
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Error:
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Observation:
This error comes up when the base image of the container you want to run is not Debian-based and hence does not support apt.
To resolve this, we need to find out which package manager it uses.
In my case it was apk (the Alpine package manager).
Resolution:
Replace apt-get with apk in your Dockerfile. (If this does not work, you can try the yum package manager as well.)
The command in your Dockerfile should look like:
RUN apk update
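Applied to the Dockerfile from the edit above (FROM jenkinsci/blueocean), a sketch might look like this (assuming the python3 and py3-pip package names, which can vary between Alpine releases):
FROM jenkinsci/blueocean
USER root
RUN apk update && \
    apk add --no-cache python3 py3-pip
# Install app dependencies
RUN pip3 install --upgrade pip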
You can create a Dockerfile
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
You can build the custom image using
docker build -t jenkinspython .
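Then run the custom image the same way as in the question, just swapping in the new image name (a sketch, assuming you want the same ports and volumes):
docker run \
 -u root \
 --rm \
 -d \
 -p 8080:8080 \
 -p 50000:50000 \
 -v jenkins-data:/var/jenkins_home \
 -v /var/run/docker.sock:/var/run/docker.sock \
 jenkinspython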
Similar to Hemant Sing's answer, but with two slight differences.
First, create a unique directory: mkdir foo
"cd" to that directory and run:
docker build -f jenkinspython .
Where jenkinspython contains:
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
Notice that my change has -f, not -t. And notice that the build output does indeed contain:
Step 5/5 : RUN pip install --upgrade pip
---> Running in d460e0ebb11d
Collecting pip
Downloading https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl (1.3MB)
Installing collected packages: pip
Found existing installation: pip 9.0.1
Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
Successfully installed pip-18.0
Removing intermediate container d460e0ebb11d
---> b7d342751a79
Successfully built b7d342751a79
So now that the image has been built (in my case, b7d342751a79), fire it up and verify that pip has indeed been updated:
$ docker run -it b7d342751a79 bash
root@9f559d448be9:/# pip --version
pip 18.0 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
Now your image has pip installed, so you can feel free to pip install whatever crazy packages you need :)
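If you would rather bake the libraries into the image instead of installing them interactively, the pip install can go into the same Dockerfile; a sketch (the listed packages are hypothetical placeholders for whatever your pipeline script actually needs):
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
RUN pip install --upgrade pip
# Hypothetical example libraries; replace with the ones your script imports
RUN pip install requests numpy
USER jenkins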
