Dockerfile support for multiple FROM statements - python-3.x

I need to run Python on different CPU architectures such as AMD64 and ARM (Raspberry Pi). From the Docker documentation I gathered that a multi-stage build is probably the way to go.
FROM python:3.10-slim
LABEL MAINTAINER=XXX
ADD speedtest-influxdb.py /speedtest/speedtest-influxdb.py
ENV INFLUXDB_SERVER="http://10.88.88.10:49161"
RUN apt-get -y update
RUN python3 -m pip install 'influxdb-client[ciso]'
RUN python3 -m pip install speedtest-cli
CMD python3 /speedtest/speedtest-influxdb.py
FROM arm32v7/python:3.10-slim
LABEL MAINTAINER=XXX
ADD speedtest-influxdb.py /speedtest/speedtest-influxdb.py
ENV INFLUXDB_SERVER="http://10.88.88.10:49161"
RUN apt-get -y update
RUN python3 -m pip install 'influxdb-client[ciso]'
RUN python3 -m pip install speedtest-cli
CMD python3 /speedtest/speedtest-influxdb.py
But as you can see there's a bit of repetition. Is there a better way?

A typical use of a multi-stage build is to build some artifact you want to run or deploy, then COPY it into a final image that doesn't include the build tools. It's not usually the right tool when you want to build several separate things, or several variations on the same thing.
In your case, all of the images are identical except for the FROM line. You can use an ARG to specify the image you're building FROM (in this specific case, the ARG comes before FROM):
ARG base_image=python:3.10-slim
FROM ${base_image}
...
CMD ["/speedtest/speedtest.py"]
# but only once
If you just docker build this image, you'll get the default base_image, which will use a default Python for the current default architecture. But you can request a different base image, or a different target platform:
# Use the ARM32-specific image
docker build --build-arg base_image=arm32v7/python:3.10-slim .
# Force an x86 image (with emulation if supported)
docker build --platform linux/amd64 .
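If you want a single tag that covers both architectures, Docker's buildx builder can produce a multi-platform image from this same Dockerfile. A sketch, assuming buildx and QEMU emulation are set up and yourrepo/speedtest is a placeholder tag:
# Build and push one multi-arch tag (python:3.10-slim is itself multi-arch)
docker buildx build --platform linux/amd64,linux/arm/v7 -t yourrepo/speedtest:latest --push .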

Related

Unable to containerize python-tkinter based project on alpine3.15

This is the link to my project HERE.
I am trying to containerize it, and I have written a Dockerfile for it, shown below:
FROM python:3-alpine3.15
WORKDIR /app
COPY . /app
RUN apk update
RUN apk add python3
RUN apk add python3-tkinter
#RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 3000
CMD python3 ./sortingAlgs.py
It doesn't run; it fails with "TclError: no display name and no $DISPLAY environment variable" (the error in the Docker Desktop terminal).
After a lot of struggle I found that the "matplotlib" module can help, and I tried including
import matplotlib
matplotlib.use('Agg')
at the start of my source file ("sortingAlgs.py"), but no luck.
I am unable to make use of "requirements.txt" since I am new to Docker; I get errors like "module not found" when the build reaches "pip install -r requirements.txt" and fails.
And I only have experience with Debian Linux (Ubuntu), so working with Alpine is a problem as well.
I want to host this as a containerized application and push it to Docker Hub.
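For reference, the Agg backend only takes effect if it is selected before pyplot is imported, and the code must then save figures to a file instead of calling plt.show(), which needs a display. A minimal sketch (the plotting calls are illustrative, not taken from the project); note that if sortingAlgs.py draws with tkinter directly rather than through matplotlib, Agg won't help, because a headless container has no display for tkinter to attach to:
import matplotlib
matplotlib.use('Agg')      # must run before importing pyplot
import matplotlib.pyplot as plt

plt.plot([1, 2, 3])
plt.savefig('output.png')  # write to a file; plt.show() would need a display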

Python "exec /usr/local/bin/python3: exec format error" on Docker while using Apple M1 Max

My Docker images were working fine until I moved to my new Mac M1 Max. Even on the M1 Max, I have installed Docker and successfully created an image and pushed it to AWS ECR. But now when I run that image, it doesn't run and throws an error:
exec /usr/local/bin/python3: exec format error
My Dockerfile looks like below, but no luck yet; the same error every time. I know that it's not straightforward to build Docker images on a Mac M1 Max and run them, but in many StackOverflow answers I found that adding --platform=linux/arm64 helps. Not in my case, though.
FROM --platform=linux/amd64 python:3.8-slim-buster
# FROM --platform=linux/arm64 python:3.8-slim-buster (tried this one as well)
# FROM --platform=linux/arm64/v8 python:3.8-slim-buster (tried this one as well)
WORKDIR /project
COPY ./requirements.txt .
RUN apt-get -qq update
RUN pip3 --quiet install --requirement requirements.txt \
--force-reinstall --upgrade
COPY . .
I got it working by just adding as build.
So the first line will look like
FROM --platform=linux/amd64 python:3.8-slim-buster as build
I'm not exactly sure what the difference is between the above line and the one I was using earlier, but as build is mentioned in the Docker docs. Now it works fine.
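For what it's worth, as build merely names the stage; by itself it shouldn't change how the image runs. Named stages matter when a later stage copies from an earlier one, roughly like this (a generic sketch, not this project's actual Dockerfile):
FROM --platform=linux/amd64 python:3.8-slim-buster as build
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

FROM --platform=linux/amd64 python:3.8-slim-buster
COPY --from=build /install /usr/local
COPY . /project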

Docker run exits when running an amd64/python image

I am very much a beginner at Docker and know a few commands to test my tasks. I have built a Docker image for 64-bit Python because I had to install CatBoost in it. My Dockerfile looks like this:
FROM amd64/python:3.9-buster
RUN apt update
RUN apt install -y libgl1-mesa-glx apt-utils
RUN pip install -U pip
RUN pip install --upgrade setuptools
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip --default-timeout=100 install -r requirements.txt
COPY . /app
EXPOSE 5000
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
I built the image with docker build -t myimage .; it builds for a long time, presumably successfully. When I run it with docker run -p 5000:5000 myimage, it gives a warning, WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested, then downloads some things for a while and exits. Not sure what's happening. How do I tackle this?
I managed to solve it by running the same thing on a Linux machine.
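If switching machines isn't an option, two other approaches usually work (myimage is the tag from the question):
# Run the existing amd64 image under QEMU emulation (slow, but functional)
docker run --platform linux/amd64 -p 5000:5000 myimage
# Or build natively for the Apple Silicon host; this requires changing the FROM
# line to the multi-arch python:3.9-buster instead of amd64/python:3.9-buster
docker build --platform linux/arm64 -t myimage .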

Python virtualenv is not activated when the ubuntu docker container is started [duplicate]

I have a Dockerfile where I try to activate a Python virtualenv, after which it should install all dependencies within this env. However, everything still gets installed globally. I used different approaches and none of them worked. I also do not get any errors. Where is the problem?
1.
ENV PATH $PATH:env/bin
2.
ENV PATH $PATH:env/bin/activate
3.
RUN . env/bin/activate
I also followed an example of a Dockerfile config for the python-runtime image on Google Cloud, which is basically the same stuff as above.
Setting these environment variables is the same as running source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
Additionally, what does ENV VIRTUAL_ENV /env mean and how is it used?
You don't need to use virtualenv inside a Docker Container.
virtualenv is used for dependency isolation. You want to prevent any dependencies or packages installed from leaking between applications. Docker achieves the same thing: it isolates your dependencies within your container and prevents leaks between containers and between applications.
Therefore, there is no point in using virtualenv inside a Docker container unless you are running multiple apps in the same container; if that's the case, I'd say that you're doing something wrong, and the solution would be to architect your app in a better way and split it up into multiple containers.
EDIT 2022: Given that this answer gets a lot of views, I thought it might make sense to add that, four years later, I realized there actually are valid usages of virtual environments in Docker images, especially when doing multi-stage builds:
FROM python:3.9-slim as compiler
ENV PYTHONUNBUFFERED 1
WORKDIR /app/
RUN python -m venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./requirements.txt /app/requirements.txt
RUN pip install -Ur requirements.txt
FROM python:3.9-slim as runner
WORKDIR /app/
COPY --from=compiler /opt/venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
COPY . /app/
CMD ["python", "app.py", ]
In the Dockerfile example above, we create a virtualenv at /opt/venv and activate it using an ENV statement; we then install all dependencies into /opt/venv and can simply copy this folder into the runner stage of our build. This can help with minimizing Docker image size.
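As a side note, an individual stage of a multi-stage build can be built on its own with --target, which is handy for inspecting the compiler stage from the example above (the tag is a placeholder):
# Build only the first stage and tag it for inspection
docker build --target compiler -t myapp:compiler .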
There are perfectly valid reasons for using a virtualenv within a container.
You don't necessarily need to activate the virtualenv to install software or use it. Try invoking the executables directly from the virtualenv's bin directory instead:
FROM python:2.7
RUN virtualenv /ve
RUN /ve/bin/pip install somepackage
CMD ["/ve/bin/python", "yourcode.py"]
You may also just set the PATH environment variable so that all further Python commands will use the binaries within the virtualenv as described in https://pythonspeed.com/articles/activate-virtualenv-dockerfile/
FROM python:2.7
RUN virtualenv /ve
ENV PATH="/ve/bin:$PATH"
RUN pip install somepackage
CMD ["python", "yourcode.py"]
Setting these variables
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
is not exactly the same as just running
RUN . env/bin/activate
because activation inside a single RUN will not affect any lines below that RUN in the Dockerfile. But setting environment variables through ENV will activate your virtual environment for all subsequent RUN commands.
Look at this example:
RUN virtualenv /env # set up env at /env
RUN which python # -> /usr/bin/python
RUN . /env/bin/activate && which python # -> /env/bin/python
RUN which python # -> /usr/bin/python
So if you really need to activate virtualenv for the whole Dockerfile you need to do something like this:
RUN virtualenv /env
# the two ENV lines below activate the environment
# (Dockerfile comments must be on their own line, not after an ENV value)
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
RUN which python # -> /env/bin/python
Although I agree with Marcus that this is not the way to do it with Docker, you can do what you want.
Using Docker's RUN command directly will not give you the answer, as it will not execute your instructions from within the virtual environment. Instead, squeeze the instructions into a single RUN line executed with /bin/bash. The following Dockerfile worked for me:
FROM python:2.7
RUN virtualenv virtual
RUN /bin/bash -c "source /virtual/bin/activate && pip install pyserial && deactivate"
...
This should install the pyserial module only in the virtual environment.
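A quick way to check that the package really landed in the virtualenv and not in the system interpreter (pyserial's import name is serial):
RUN /virtual/bin/python -c "import serial"   # succeeds: installed in the venv
# RUN python -c "import serial"              # left commented: it would fail the
#                                            # build, which is what isolation predicts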
If you are using Python 3.x:
RUN pip install virtualenv
RUN virtualenv -p python3.5 virtual
RUN /bin/bash -c "source /virtual/bin/activate"
If you are using Python 2.x:
RUN pip install virtualenv
RUN virtualenv virtual
RUN /bin/bash -c "source /virtual/bin/activate"
Consider migrating to pipenv - a tool which automates virtualenv and pip interactions for you. It's recommended by PyPA.
Reproducing an environment via pipenv in a Docker image is very simple:
FROM python:3.7
RUN pip install pipenv
COPY src/Pipfile* ./
RUN pipenv install --deploy
...
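If you'd rather not carry the pipenv-managed virtualenv inside the image at all, pipenv can also install straight into the system interpreter; a sketch assuming a Pipfile.lock is present alongside the Pipfile:
FROM python:3.7
RUN pip install pipenv
COPY src/Pipfile* ./
RUN pipenv install --deploy --system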

Python 3.6 in tensorflow gpu docker images

How can I have Python 3.6 in the TensorFlow Docker images?
All the images I tried (latest, nightly) use Python 3.5, and I don't want to modify all my scripts.
The Tensorflow images are based on Ubuntu 16.04, as you can see from the Dockerfile. This release ships with Python 3.5 as standard.
So you'll have to re-build the image, and the Dockerfile will need editing, even though you need to do the actual build with the parameterized_docker_build.sh script.
This answer on Ask Ubuntu covers how to get Python 3.6 on Ubuntu 16.04.
The simplest way would probably be just to change the FROM line in the Dockerfile to FROM ubuntu:16.10, and python to python3.6 in the initial apt-get install line.
Of course, this may break some other Ubuntu-version-specific thing, so an alternative would be to keep Ubuntu 16.04 and install one of the alternative PPAs also listed in the linked answer:
RUN add-apt-repository ppa:deadsnakes/ppa && \
    apt-get update && \
    apt-get install -y python3.6
Note that you'll need this after the initial apt-get install, because that step installs software-properties-common, which you need in order to add the PPA.
Note also, as in the comments to the linked answer, that you will need to symlink to Python 3.6.
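One way to do that symlinking inside the image is with update-alternatives; a sketch (untested, in keeping with the caveat below):
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1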
Finally, note that I haven't tried any of this. There may be gotchas, and you may need to make another change to ensure that the correct version of Python is used by the running container.
You can use stable images supplied by third parties, like ufoym/deepo.
One that fits TensorFlow, Python 3.6, and CUDA 10 can be found here, or you can pull it directly using the command docker pull ufoym/deepo:py36-cu100.
I use their images all the time and have never had problems.
With this answer, I just wanted to explain how I solved this problem (the previous answer by SiHa helped me a lot, but I had to add a few steps so that it worked completely).
Context:
I'm using a package (segmentation model for unet++) that requires tensorflow==1.4.0 and keras==2.2.2.
I tried to use the Docker image for TensorFlow 1.4.0; however, the default Python version of this image is 3.5, which is not compatible with my package.
I managed to install Python 3.6 in the Docker image thanks to the following files:
Dockerfile:
FROM tensorflow/tensorflow:1.4.0-gpu-py3
RUN mkdir /AI_PLATFORM
WORKDIR /AI_PLATFORM
COPY ./install.sh ./install.sh
COPY ./requirements.txt ./requirements.txt
COPY ./computer_vision ./computer_vision
COPY ./config.ini ./config.ini
RUN bash install.sh
install.sh:
#!/usr/bin/env bash
pip install --upgrade pip
apt-get update
apt-get install -y python3-pip
add-apt-repository ppa:deadsnakes/ppa &&
apt-get update &&
apt-get install python3.6 --assume-yes
apt-get install libpython3.6
python3.6 -m pip install --upgrade pip
python3.6 -m pip install -r requirements.txt
Three things are important:
use python3.6 -m pip instead of pip, otherwise the packages are installed for Python 3.5, the default version on Ubuntu 16.04
use docker run python3.6 <command> to run your containers with python==3.6 (see the sketch after the list below)
in the requirements.txt file, I had to specify the following things:
h5py==2.10.0
tensorflow-gpu==1.4.1
keras==2.2.2
keras-applications==1.0.4
keras-preprocessing==1.0.2
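Putting points 1 and 2 together, building and running might look like this (ai-platform and your_script.py are placeholders, not names from the project):
docker build -t ai-platform .
docker run --rm ai-platform python3.6 your_script.py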
I hope that this answer will be useful.
Maybe the image I created will help you. It is based on the cuda-10.0-devel image and has tensorflow 2.0a-gpu installed.
You can use it as a base image for your own implementation. The image itself doesn't do anything. I put the image on Docker Hub: https://cloud.docker.com/repository/docker/patientzero/tensorflow2.0a-gpu-py3.6
The github repo is located here: https://github.com/patientzero/tensorflow2.0-python3.6-Docker
Pulling it won't do much, but for completeness:
$ docker pull patientzero/tensorflow2.0-gpu-py3.6
edit: changed to general tensorflow 2.0x image.
Also, as mentioned here, the official image for the 2.0 beta release now comes with Python 3.6 support.
