ERROR: unsatisfiable constraints in docker - linux

I am new to Docker.
I have two questions.
Question # 1
I've created this basic Dockerfile that installs Apache Airflow and Celery, but for now I just want to install Airflow. I am facing a strange issue saying unsatisfiable constraints.
I've tried everything I can think of but haven't been able to resolve it. Any help will be appreciated.
FROM python:3.6-alpine
WORKDIR /airflow
RUN apk add git gcc python3-dev gpgme-dev libc-dev python-devel python-setuptools mysql-devel gcc-c++
COPY airflow/requirements.txt airflow/requirements.txt
RUN pip install -r airflow/requirements.txt
COPY . /airflow
EXPOSE 8080 5555
CMD ["airflow", "initdb"]
Here is my requirements.txt file with the dependencies for Apache Airflow.
requirements.txt
pytz==2015.7
cryptography
requests
pyOpenSSL
ndg-httpsclient
pyasn1
psycopg2
celery>=4.0.0
flower>=0.7.3
Flask==1.1.1
requests==2.22.0
airflow==1.10.8
MySQL-python
flask_bcrypt
Question # 2
We use the conda image continuumio/miniconda3 to install our dependencies. Is that a good approach to use?

I made a few changes; here's the new Dockerfile:
FROM python:3.6-alpine
WORKDIR /airflow
RUN apk add build-base libffi-dev musl-dev postgresql-dev mariadb-connector-c-dev
COPY requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
COPY . /airflow
EXPOSE 8080 5555
CMD ["airflow", "initdb"]
And the new requirements.txt:
pytz==2015.7
cryptography
pyOpenSSL
ndg-httpsclient
pyasn1
psycopg2
celery>=4.0.0
flower>=0.7.3
Flask==1.1.1
requests==2.22.0
apache-airflow==1.10.8
mysqlclient
flask_bcrypt
Summary of changes:
You were trying to install packages that don't exist in Alpine (they look like Red Hat/yum package names); I replaced most of them with apk add build-base
added libffi-dev for the cryptography package
added musl-dev and postgresql-dev for psycopg2
MySQL-python doesn't support Python 3, so I replaced it with mysqlclient
added mariadb-connector-c-dev for mysqlclient
other minor fixes: fixed COPY paths, removed duplicate dependencies
And yes, you are generally better off not using Alpine to build Python packages (https://pythonspeed.com/articles/alpine-docker-python/). If you switch to continuumio/miniconda3 it is a little simpler (and much faster to build).
FROM continuumio/miniconda3
WORKDIR /airflow
RUN apt-get update && apt-get install -y libpq-dev libmariadbclient-dev build-essential
COPY requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
COPY . /airflow
EXPOSE 8080 5555
CMD ["airflow", "initdb"]
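If you go further down the conda route, the compiled dependencies can also come from conda itself instead of apt + pip, which avoids the system dev headers entirely. A minimal sketch, assuming psycopg2 and mysqlclient are available on the channels you use (check conda-forge if a package is missing from the defaults):

```dockerfile
FROM continuumio/miniconda3
WORKDIR /airflow
# let conda supply the compiled DB drivers; pip handles the pure-Python rest
RUN conda install -y psycopg2 mysqlclient && \
    pip install apache-airflow==1.10.8 celery flower
COPY . /airflow
EXPOSE 8080 5555
CMD ["airflow", "initdb"]
```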

Related

Dockerfile not working in Linux Server but working in Ubuntu VM

I have a Dockerfile and it works fine in an Ubuntu VM. However, the same Dockerfile does not build on the Linux server.
Dockerfile:
FROM python:3.9.7-slim as builder-image
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONFAULTHANDLER 1
RUN apt-get update && apt-get install -y --no-install-recommends python3-dev gcc libc-dev musl-dev libffi-dev g++ cargo && \
apt-get clean && rm -rf /var/lib/apt/lists/*
RUN python3.9 -m venv /home/myuser/venv
ENV PATH="/home/myuser/venv/bin:$PATH"
RUN /home/myuser/venv/bin/pip install --upgrade pip
WORKDIR /home/myuser/venv
COPY /data/requirements.txt requirements.txt
RUN pip3 install --no-cache-dir wheel
RUN pip3 install --no-cache-dir -r requirements.txt
FROM python:3.9.7-slim
RUN useradd --create-home myuser
COPY --from=builder-image /home/myuser/venv /home/myuser/venv
USER myuser
RUN mkdir /home/myuser/code
WORKDIR /home/myuser/code
ENV PYTHONUNBUFFERED=1
ENV VIRTUAL_ENV=/home/myuser/venv
ENV PATH="/home/myuser/venv/bin:$PATH"
ENTRYPOINT ["/bin/bash"]
docker build -t python-docker_14122021 .
Error:
Sending build context to Docker daemon 49.66 kB
Step 1/23 : FROM python:3.9-slim-buster as builder-image
Error parsing reference: "python:3.9-slim-buster as builder-image" is not a valid repository/tag: invalid reference format
You have a very old Docker on the server. You need at least Docker 17.06 to support multi-stage builds.
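A quick way to check this before building is to compare the daemon's version (docker version --format '{{.Server.Version}}') against 17.06. A small sketch of that comparison using sort -V (the version numbers below are illustrative, not read from your server):

```shell
# version_ge A B: succeeds when version A >= version B
# (sort -V sorts in natural version order, so the smaller version comes first)
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# multi-stage builds need Docker 17.06 or newer
version_ge "19.03.3" "17.06" && echo "19.03.3: multi-stage OK"
version_ge "1.12.6" "17.06" || echo "1.12.6: too old, upgrade Docker"
```

In practice you would feed `$(docker version --format '{{.Server.Version}}')` into the first argument.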

Installed pip packages are not available when deploying the container

Inside my Dockerfile I have:
FROM python:3.7
RUN apt update
RUN apt install -y git
RUN groupadd -g 1001 myuser
RUN useradd -u 1001 -g 1001 -ms /bin/bash myuser
USER 1001:1001
USER myuser
WORKDIR /home/myuser
COPY --chown=myuser:myuser requirements.txt ./
ENV PYTHONPATH="/home/myuser/.local/lib/python3.7/site-packages:.:$PYTHONPATH"
RUN python3.7 -m pip install -r requirements.txt
COPY --chown=myuser:myuser . .
ENV PATH="/home/myuser/.local/bin/:$PATH"
ENV HOME=/home/myuser
ENV PYTHONHASHSEED=1
EXPOSE 8001
CMD [ "python3.7", "app.py" ]
During the build, pip list displays all the libraries correctly:
basicauth 0.4.1
pip 21.1.1
python-dateutil 2.8.1
pytz 2019.1
PyYAML 5.1.1
requests 2.22.0
setuptools 56.0.0
six 1.16.0
urllib3 1.25.11
wheel 0.36.2
But once OpenShift deploys the container, I only get the following libraries installed:
WARNING: The directory '/home/myuser/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Package Version
---------- -------
pip 21.1.1
setuptools 56.0.0
wheel 0.36.2
The CMD command runs as expected, but none of the packages are installed...
Traceback (most recent call last):
  File "app.py", line 16, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'
A revised Dockerfile more in line with standard practices:
FROM python:3.7
RUN apt update && \
apt install -y --no-install-recommends git && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN python3.7 -m pip install -r requirements.txt
COPY . .
ENV PYTHONHASHSEED=1
USER nobody
CMD [ "python3.7", "app.py" ]
I combined the initial RUN layers for a smaller image and cleaned up the apt lists before exiting the layer. Packages are installed globally as root, and only after that does it switch to the runtime user. Unless you very specifically need a home directory, I would stick with nobody/65534 as the standard way to express a "low-privilege runtime user".
Remember that OpenShift overrides the container-level USER info https://www.openshift.com/blog/a-guide-to-openshift-and-uids
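Because OpenShift runs containers with an arbitrary UID that belongs to group 0, file permissions are usually handled by giving the root group the same rights as the owner. A minimal sketch of that convention (the /app path and the 1001 UID are placeholders, not values from the original Dockerfile):

```dockerfile
WORKDIR /app
COPY . .
# arbitrary-UID friendly: the random OpenShift UID runs with GID 0,
# so grant group 0 the same permissions as the file owner
RUN chgrp -R 0 /app && chmod -R g=u /app
USER 1001
```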

Why am I getting ModuleNotFoundError after installing Negbio?

Building a docker image, I've installed Negbio in my Dockerfile using:
RUN git clone https://github.com/ncbi-nlp/NegBio.git xyz && \
python xyz/setup.py install
When I try to run my Django application at localhost:1227 I get:
ModuleNotFoundError: No module named 'negbio'
When I run pip list I can see negbio. What am I missing?
As per your comment, it wouldn't install with pip, hence you're not installing it via pip.
First, to make sure https://github.com/ncbi-nlp/NegBio is properly installed via python setup.py install, you need to install its dependencies with pip install -r requirements.txt first. So either way, you need pip inside Docker.
For example, this is the sample Dockerfile that would install the negbio package properly:
FROM python:3.6-slim
RUN mkdir -p /apps
WORKDIR /apps
# Steps for installing the package via Docker:
RUN apt-get update && apt-get -y upgrade && apt-get install -y git gcc build-essential
RUN git clone https://github.com/ncbi-nlp/NegBio.git
WORKDIR /apps/NegBio
RUN pip install -r requirements.txt
RUN python setup.py install
ENV PATH=~/.local/bin:$PATH
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
So it wouldn't hurt to actually install it via requirements.txt.
I would do it like this:
requirements.txt --> have all your requirements added here
negbio==0.9.4
And make sure it's installed inside the Docker image with RUN pip install -r requirements.txt.
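Putting that together, a minimal sketch of the Dockerfile side (the base image and paths here are assumptions for illustration):

```dockerfile
FROM python:3.6-slim
WORKDIR /app
# requirements.txt pins negbio==0.9.4 alongside the other dependencies,
# so a single pip run installs everything consistently
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
```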
I ultimately resolved my issue by going to an all Anaconda environment. Thank you for everyone's input.

How can I upgrade pip's setuptools

I am quite new to Docker/Python and I'm trying to update an existing AWX Docker image's Dockerfile to ensure that I have the latest versions of the following packages in /usr/lib/python3.6/site-packages:
/usr/lib/python3.6/site-packages/pip/_vendor/requests/sessions.py
/usr/lib/python3.6/site-packages/pip/_vendor/urllib3/connectionpool.py
/usr/lib/python3.6/site-packages/pip/_vendor/urllib3/poolmanager.py
/usr/lib/python3.6/site-packages/pip/_vendor/urllib3/util/retry.py
Following is my environment info:
Docker version 19.03.3, build a872fc2f86
Docker Base Image: centos:latest
The Dockerfile in which I am making these changes is available here.
From what I have read in multiple posts/blogs so far, this should normally be done with a simple:
pip3 install --upgrade pip
but when I run this, I get the following (note that it shows a different path, one that includes local):
bash-4.4# pip3 install --upgrade pip
Requirement already up-to-date: pip in /usr/local/lib/python3.6/site-packages (19.3.1)
I have even tried the following but this didn't help:
bash-4.4# python3 -m ensurepip --upgrade
Requirement already up-to-date: setuptools in /usr/lib/python3.6/site-packages
Requirement already up-to-date: pip in /usr/local/lib/python3.6/site-packages
bash-4.4# pip3 install --upgrade setuptools
Requirement already up-to-date: setuptools in /usr/local/lib/python3.6/site-packages (42.0.2)
This is how the above changes were applied in the Dockerfile:
I added the following lines after line 124 in the Dockerfile to update the pip packages:
RUN yum update -y
RUN pip2 install --upgrade pip
RUN pip3 install --upgrade pip
Snapshot of required sections from Dockerfile
FROM centos:latest
USER root
# sync with installer/roles/image_build/templates/Dockerfile.j2
RUN dnf -y update && \
dnf -y install epel-release 'dnf-command(config-manager)' && \
dnf module -y enable 'postgresql:10' && \
dnf config-manager --set-enabled PowerTools && \
.
.
.
ansible \
python3-devel \
python3-libselinux \
python3-pip \
python3-psycopg2 \
python3-setuptools \
dnf-utils
ADD https://github.com/krallin/tini/releases/download/v0.14.0/tini /tini
RUN chmod +x /tini
RUN python3 -m ensurepip && pip3 install virtualenv
RUN pip3 install supervisor
ADD Makefile /tmp/Makefile
RUN mkdir /tmp/requirements
ADD requirements/requirements_ansible.txt \
requirements/requirements_ansible_uninstall.txt \
requirements/requirements_ansible_git.txt \
requirements/requirements.txt \
requirements/requirements_tower_uninstall.txt \
requirements/requirements_git.txt \
/tmp/requirements/
RUN cd /tmp && VENV_BASE="/var/lib/awx/venv" make requirements_awx requirements_ansible_py3
.
.
.
RUN echo "{{ awx_version }}" > /var/lib/awx/.tower_version
COPY {{ awx_sdist_file }} /tmp/{{ awx_sdist_file }}
RUN OFFICIAL=yes /var/lib/awx/venv/awx/bin/pip install /tmp/{{ awx_sdist_file }}
Somehow the changes are not reflected in the correct path. Can someone please suggest how to update the packages inside /usr/lib/python3.6/site-packages?
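One way to see which copy actually wins is to ask Python where it imports pip from: the first match on sys.path is used, so an upgraded copy under /usr/local/lib can shadow (or be shadowed by) the one under /usr/lib. A quick diagnostic sketch:

```shell
# show which pip the interpreter actually imports (first sys.path hit wins)
python3 -c "import pip; print(pip.__version__, pip.__file__)"

# list the import search order to see why that copy was chosen
python3 -c "import sys; print('\n'.join(sys.path))"
```

Comparing that path with the one reported by `pip3 install --upgrade pip` usually explains which site-packages directory is being updated.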

Building numpy from source in docker

Hi everyone, I am trying to build numpy from source in a Docker container.
This is my Dockerfile:
FROM debian:testing
MAINTAINER Dr Suman Khanal <suman81765#gmail.com>
LABEL updated_at '2017-07-26'
WORKDIR /
RUN apt-get update \
&& apt-get install -y gnupg git wget build-essential python3 python3-dev
python3-setuptools python3-pip libatlas3-base libatlas-dev libatlas-base-dev
liblapack-dev libblas-common libblas3 libblas-dev cython
RUN git clone https://github.com/numpy/numpy.git
WORKDIR /numpy
RUN python3 setup.py build --fcompiler=gnu95 install
CMD ["numpy"]
But it's throwing this error.
Build failed: The command '/bin/sh -c python3 setup.py build --fcompiler=gnu95 install' returned a non-zero code: 1
Any help?
Many thanks,
Suman
Here is a working Dockerfile.
FROM debian:testing
MAINTAINER Dr Suman Khanal <suman81765#gmail.com>
LABEL updated_at '2017-07-26'
WORKDIR /
RUN apt-get update \
&& apt-get install -y gnupg git wget build-essential python3 python3-dev \
&& apt-get install -y python3-setuptools python3-pip libatlas3-base \
&& apt-get install -y libatlas-dev libatlas-base-dev libblas3 libblas-dev cython
RUN git clone https://github.com/numpy/numpy.git
WORKDIR /numpy
RUN python3 setup.py build --fcompiler=gnu95 install
RUN pip3 install nose
CMD ["python3", "/numpy/numpy/tests/test_ctypeslib.py"]
I tested it with a successful build and run:
$ docker run -it test-numpy
.......
----------------------------------------------------------------------
Ran 7 tests in 0.004s
OK
Also, I don't know exactly what you want to achieve with CMD ["numpy"], because it's a directory. I added nose because it is required for numpy's tests.
You can test and play with numpy in docker:
docker exec -it test-numpy bash