ERROR: No matching distribution found for catboost on ARM64 architecture - python-3.x

I am trying to build a multi-architecture image for ARM64 using the buildx command. I am using python:3.8-slim as the base image and trying to install catboost with pip, but I am getting the following error.
ERROR
ERROR: Could not find a version that satisfies the requirement catboost==1.0.4 (from versions: none)
ERROR: No matching distribution found for catboost==1.0.4
Dockerfile
FROM --platform=linux/arm64/v8 python:3.8-slim
RUN apt update && \
apt upgrade -y && \
pip install -U pip && \
pip install --upgrade setuptools
steps...
Command
docker buildx build --platform linux/arm64 -t dockerId:test-arm -f ./dockerfiles/Dockerfile .
Alternatives I have tried
In the Dockerfile I tried using --platform=linux/amd64, which builds successfully, but the image still doesn't work when I deploy it on the ARM machine.
I have also used Anaconda to install the packages, but the error installing catboost remains the same.
#0 122.6 PackagesNotFoundError: The following packages are not available from current channels:
#0 122.6
#0 122.6 - catboost
This is a shoutout to the #catboost team. I am currently working on a deadline and would like to know whether catboost has support for ARM64 in a Docker image.
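For anyone diagnosing the same thing: "from versions: none" usually means PyPI has no wheel (or sdist) matching the target platform and Python version. A quick way to confirm this before a long buildx run is to ask pip directly from an emulated arm64 container; a rough sketch, assuming QEMU/binfmt emulation is already set up (as it is for buildx) and reusing the version pins from the question:
# List the tags this interpreter/platform accepts, then try to fetch a matching wheel.
docker run --rm --platform linux/arm64 python:3.8-slim sh -c '\
  pip debug --verbose | head -n 40 && \
  pip download catboost==1.0.4 --only-binary=:all: \
    --platform manylinux2014_aarch64 --python-version 38 -d /tmp/wheels'
If the download step fails the same way, no aarch64 wheel exists for that catboost version, and a newer release (or building from source) would have to be considered.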

Related

How to install google-cloud-bigquery on python-alpine based docker?

I'm trying to build a Docker image with Python 3 and google-cloud-bigquery, using the following Dockerfile:
FROM python:3.10-alpine
RUN pip3 install google-cloud-bigquery
WORKDIR /home
COPY *.py /home/
ENTRYPOINT ["python3", "-u", "myscript.py"]
But I'm getting errors on pip3 install google-cloud-bigquery (too long to paste here).
What's missing for installing this on python-alpine?
Looks like an incompatibility issue between the latest version of google-cloud-bigquery (>3) and numpy:
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
Try specifying a previous version, this works for me:
RUN pip3 install google-cloud-bigquery==2.34.4
Actually it seems this is not a problem with numpy, which builds smoothly once all the dependency libs are installed, but rather with pyarrow, which does not support an Alpine + pip build. I've found a workaround: use Alpine's pre-built version of pyarrow, which is much easier than building pyarrow from source. This build works for me just fine:
FROM python:3.10.6-alpine3.16
RUN apk add --no-cache build-base linux-headers \
py3-apache-arrow=8.0.0-r0
# Copy pyarrow into the site-packages of the actual Python path. The Alpine
# Python path and the Docker Hub image's Python path are different.
RUN mv /usr/lib/python3.10/site-packages/* \
/usr/local/lib/python3.10/site-packages/
RUN rm -rf /usr/lib/python3.10
RUN --mount=type=cache,target=/root/.cache/pip \
pip install google-cloud-bigquery==3.3.2
Update the Python, Alpine, and py3-apache-arrow versions to install later releases; these were the latest at the time of writing.
And make sure to remove the build dependencies (build-base, linux-headers) from your release image. I prefer multi-stage builds for this; see the sketch below.
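A minimal multi-stage sketch of that idea, reusing the versions above (keeping the pip and apk versions in sync, and whether the release stage needs anything beyond py3-apache-arrow, are assumptions to verify):
# Build stage: compiler toolchain plus Alpine's pre-built pyarrow.
FROM python:3.10.6-alpine3.16 AS build
RUN apk add --no-cache build-base linux-headers py3-apache-arrow=8.0.0-r0
RUN mv /usr/lib/python3.10/site-packages/* /usr/local/lib/python3.10/site-packages/ \
    && rm -rf /usr/lib/python3.10
RUN pip install google-cloud-bigquery==3.3.2
# Release stage: runtime pieces only, no build-base or linux-headers.
FROM python:3.10.6-alpine3.16
RUN apk add --no-cache py3-apache-arrow=8.0.0-r0 \
    && mv /usr/lib/python3.10/site-packages/* /usr/local/lib/python3.10/site-packages/ \
    && rm -rf /usr/lib/python3.10
COPY --from=build /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages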

docker ERROR: Could not find a version that satisfies the requirement apturl==0.5.2

I am using Windows 10. I want to build a Linux-based container so I can replicate code and dependencies developed on Ubuntu. When I try to build it, it outputs the error message above.
From my understanding, Docker Desktop runs a Linux kernel under the hood, allowing Windows users to run Linux-based containers, so I am not sure why it is outputting this error.
My dockerfile looks like this:
FROM ubuntu:18.04
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt update \
&& apt install -y htop python3-dev wget
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir root/.conda \
&& sh Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda create -y -n ml python=3.7
COPY . src/
RUN /bin/bash -c "cd src \
&& source activate ml \
&& pip install -r requirements.txt"
requirements.txt contains:
apturl==0.5.2
asn1crypto==0.24.0
bleach==2.1.2
Brlapi==0.6.6
certifi==2020.11.8
chardet==3.0.4
click==7.1.2
command-not-found==0.3
configparser==5.0.1
cryptography==2.1.4
cupshelpers==1.0
dataclasses==0.7
When I run docker build command it outputs:
1.649 ERROR: Could not find a version that satisfies the requirement apturl==0.5.2
1.649 ERROR: No matching distribution found for apturl==0.5.2
Deleting that line and re-running leads to another error. All the errors seem to be associated with Ubuntu packages.
Am I not running an Ubuntu container? Why am I not allowed to install Ubuntu packages?
Thanks!
You are trying to install Ubuntu packages with pip (which is for Python packages).
Try apt install -y apturl instead.
If you want to install Python packages, write pip install package_name. A rough sketch of how the split could look is below.
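For illustration only, one way to split the Dockerfile above: system packages go through apt, the remaining pure-Python pins go through pip. Which names from the requirements.txt belong on which side is an assumption that has to be checked per package against the Ubuntu 18.04 repos:
# Ubuntu/system packages are installed with apt, not pip.
RUN apt update && apt install -y apturl command-not-found
# Pure Python packages stay in a trimmed requirements.txt (or inline) for pip.
RUN /bin/bash -c "cd src \
    && source activate ml \
    && pip install certifi==2020.11.8 chardet==3.0.4 click==7.1.2"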

unable to install tensorflow model server

I am trying to deploy my model on TensorFlow Serving, but I am facing an issue with the installation of tensorflow-model-server itself. Do I need to install anything else before the model server can be installed? I am currently using Python 3.6 and TensorFlow 1.12.0 on a VM.
conda install tensorflow-model-server
pip install tensorflow-model-server
Below are the two ways I am trying to install it:
Using conda install, which gives me the error below:
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
tensorflow-model-server
Using pip, which says:
Collecting tensorflow-model-server
Could not find a version that satisfies the requirement tensorflow-model-server (from versions: )
No matching distribution found for tensorflow-model-server
Did you try to follow the instructions provided in the documentation?
First, you should add TensorFlow Serving as a package source, using the instructions below:
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
# then install
apt-get update && apt-get install tensorflow-model-server
For more information, please look at link below:
Tensorflow Serving doc
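Once the apt-get install succeeds, a quick check and a minimal serving run look roughly like this (the model name and base path are placeholders, not something from the question):
tensorflow_model_server --version
# Serve a SavedModel over gRPC (8500) and REST (8501); name and path are examples.
tensorflow_model_server \
  --port=8500 \
  --rest_api_port=8501 \
  --model_name=my_model \
  --model_base_path=/models/my_model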

Unable to find libcrypto.so.1.0.2 and libssl.so.1.0.2 when trying to use PyODBC on a Docker image

I have a Dockerfile which uses python:3 (based on Debian). I am installing the drivers for PyODBC as per the Microsoft docs.
FROM python:3
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
apt-get update && \
ACCEPT_EULA=Y apt-get install msodbcsql17 unixodbc-dev -y
I can build the image, but when trying to run it I get the error: Can't open lib /opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.3.so.1.1
I have run ldd /opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.3.so.1.1 and the output says the two libs below cannot be found:
libcrypto.so.1.0.2 => not found
libssl.so.1.0.2 => not found
I have also tried dpkg --search libssl and dpkg --search libcrypto, which yielded:
libssl1.1:amd64: /usr/lib/x86_64-linux-gnu/libssl.so.1.1
libssl1.1:amd64: /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
From ldd /opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.3.so.1.1 there are other libraries being picked up from /usr/lib/x86_64-linux-gnu/.
I am very new to Docker/Linux, so how can I install libcrypto.so.1.0.2 and libssl.so.1.0.2, or downgrade the versions in /usr/lib/x86_64-linux-gnu/, so that they can be used by msodbcsql17 (I have tried apt-get -y install libssl1.0=1.0.2)?
The Docker image python:3 appears to be built on Debian 10.
The package repository you are adding is built for Debian 9 and does not appear to be compatible with Debian 10.
You should use the repository with packages built for Debian 10 to get compatible packages.
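In practice that likely means swapping the debian/9 config line for the Debian 10 one; a sketch of the changed Dockerfile (the exact repo path should be checked against Microsoft's current install instructions):
FROM python:3
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
    curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
    apt-get update && \
    ACCEPT_EULA=Y apt-get install -y msodbcsql17 unixodbc-dev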

Jenkins in a Docker Container - How do I install custom Python libraries?

So, after building out a pipeline, I realized I will need some custom libraries for a Python script I will be pulling from SCM. To install Jenkins in Docker, I used the following tutorial:
https://jenkins.io/doc/book/installing/
Like so:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
Now, I will say I'm not a Docker guru, but I'm aware the Dockerfile allows for passing in library installs for Python. However, because I'm pulling the Docker image from Docker Hub, I'm not sure whether it's possible to add a RUN pip install step as an argument. Maybe someone has an alternate approach.
Any help is appreciated.
EDIT 1: Here's the output of the first commenter's recommendation:
Step 1/6 : FROM jenkinsci/blueocean
---> b7eef16a711e
Step 2/6 : USER root
---> Running in 150bba5c4994
Removing intermediate container 150bba5c4994
---> 882bcec61ccf
Step 3/6 : RUN apt-get update
---> Running in 324f28f384e0
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Error:
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Observation:
This error occurs when the image you want to run is not Debian-based and hence does not support apt.
To resolve this, we need to find out which package manager it utilizes.
In my case it was: 'apk'.
Resolution:
Replace 'apt-get' with 'apk' in your Dockerfile. (If this does not work, you can try the 'yum' package manager as well.)
The command in your Dockerfile should look like:
RUN apk update
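For jenkinsci/blueocean specifically (an Alpine-based image), a Dockerfile along these lines is a reasonable sketch; the Alpine package names and the final USER are assumptions to verify, and the pip packages are placeholders for whatever the pipeline script needs:
FROM jenkinsci/blueocean
USER root
# Alpine uses apk, not apt-get.
RUN apk add --no-cache python3 py3-pip
# Placeholder libraries; replace with the ones the pipeline script imports.
RUN pip3 install requests numpy
# Drop back to the unprivileged Jenkins user for normal operation.
USER jenkins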
You can create a Dockerfile
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
You can build the custom image using
docker build -t jenkinspython .
Similar to Hemant Sing's answer, but with 2 slightly different things.
First, create a unique directory: mkdir foo
"cd" to that directory and run:
docker build -f jenkinspython .
Where jenkinspython contains:
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
Notice that my change has -f, not -t. And notice that the build output does indeed contain:
Step 5/5 : RUN pip install --upgrade pip
---> Running in d460e0ebb11d
Collecting pip
Downloading https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl (1.3MB)
Installing collected packages: pip
Found existing installation: pip 9.0.1
Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
Successfully installed pip-18.0
Removing intermediate container d460e0ebb11d
---> b7d342751a79
Successfully built b7d342751a79
So now that the image has been built (in my case, b7d342751a79), fire it up and verify that pip has indeed been updated:
$ docker run -it b7d342751a79 bash
root@9f559d448be9:/# pip --version
pip 18.0 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
So now your image has pip installed, so then you can feel free to pip install whatever crazy packages you need :)
