Dockerfile build fails all of a sudden - linux

I'm trying to resolve build problems when building the Dockerfile below.
Examples of the errors I get:
1.
/bin/sh: 1: /opt/conda/bin/pip: not found
The command '/bin/sh -c wget -q https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh -O /tmp/miniconda.sh && echo 'd0c7c71cc5659e54ab51f2005a8d96f3 */tmp/miniconda.sh' | md5sum -c - && bash /tmp/miniconda.sh -f -b -p /opt/conda && /opt/conda/bin/conda install --yes -c conda-forge python=3.5 sqlalchemy tornado jinja2 traitlets requests pip pycurl nodejs configurable-http-proxy && /opt/conda/bin/pip install --upgrade pip && rm /tmp/miniconda.sh' returned a non-zero code: 127
2.
When I comment out the problematic part, I get another issue, this time with npm:
/bin/sh: 1: npm: not found
Any idea what's going on here?
Dockerfile
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
FROM debian:jessie
MAINTAINER Jupyter Project <jupyter@googlegroups.com>
# install nodejs, utf8 locale, set CDN because default httpredir is unreliable
ENV DEBIAN_FRONTEND noninteractive
RUN REPO=http://cdn-fastly.deb.debian.org && \
echo "deb $REPO/debian jessie main\ndeb $REPO/debian-security jessie/updates main" > /etc/apt/sources.list && \
apt-get -y update && \
apt-get -y upgrade && \
apt-get -y install wget locales git bzip2 &&\
/usr/sbin/update-locale LANG=C.UTF-8 && \
locale-gen C.UTF-8 && \
apt-get remove -y locales && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
ENV LANG C.UTF-8
# install Python + NodeJS with conda
RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh -O /tmp/miniconda.sh && \
echo 'd0c7c71cc5659e54ab51f2005a8d96f3 */tmp/miniconda.sh' | md5sum -c - && \
bash /tmp/miniconda.sh -f -b -p /opt/conda && \
/opt/conda/bin/conda install --yes -c conda-forge \
python=3.5 sqlalchemy tornado jinja2 traitlets requests pip pycurl \
nodejs configurable-http-proxy && \
/opt/conda/bin/pip install --upgrade pip && \
rm /tmp/miniconda.sh
ENV PATH=/opt/conda/bin:$PATH
EXPOSE 8000
RUN mkdir -p /src/jupyterhub
WORKDIR /src/jupyterhub
ADD . /src/jupyterhub
RUN npm install --unsafe-perm && \
pip install . && \
rm -rf $PWD ~/.cache ~/.npm
ADD . /src/jupyterhub
LABEL org.jupyter.service="jupyterhub"
CMD ["jupyterhub"]

The latest pip package hosted by conda-forge is noarch/pip-20.0.2-py_2.tar.bz2, and it is missing the bin folder, so calling /opt/conda/bin/pip fails with the /opt/conda/bin/pip: not found error.
I would suggest pinning the package versions to prevent newer releases from breaking the build. That gives you deterministic builds across machines and saves the time otherwise spent figuring out which version change caused the error.
To get pip installed properly, amending the Dockerfile as below should do the trick:
RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh -O /tmp/miniconda.sh && \
echo 'd0c7c71cc5659e54ab51f2005a8d96f3 */tmp/miniconda.sh' | md5sum -c - && \
bash /tmp/miniconda.sh -f -b -p /opt/conda && \
/opt/conda/bin/conda install --yes -c conda-forge \
python=3.5 sqlalchemy tornado jinja2 traitlets requests pip=18.0=py35_1001 pycurl \
nodejs configurable-http-proxy && \
/opt/conda/bin/pip install --upgrade pip && \
rm /tmp/miniconda.sh
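If you want to check which pinned builds conda-forge actually hosts before committing to a pin (pip=18.0=py35_1001 in the snippet above), conda can list them. A minimal check, assuming conda is already on your PATH:
# list the pip builds available on conda-forge so you can pick a pin that exists
conda search -c conda-forge "pip"
# or narrow the listing to the 18.x builds used above
conda search -c conda-forge "pip=18.0"
As for the second error: npm: not found is expected once the conda step is commented out, because nodejs (and with it npm) only lands in /opt/conda/bin through that conda install line, so the later RUN npm install has nothing to call.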

Related

docker file error for rm: unrecognized option '--silent'

I'm trying to build a Dockerfile using the gcloud command
gcloud --project $PROJECT builds submit --config=cloudbuild.yaml
--substitutions=_PROJECT_ID=$PROJECT,_REPOSITORY="gitlab-runner",_IMAGE="cloudcicd:latest" .
and my Dockerfile looks like this:
FROM python:latest
# Avoid warnings by switching to noninteractive
ENV DEBIAN_FRONTEND=noninteractive
#Versions
#ENV HELM_VERSION=v3.6.3
ENV KUBECTL_VERSION=v1.20.9
ENV MAVEN_OPTS="-Djavax.net.ssl.trustStore=/cicd/assets/truststore.jks"
ENV TERRAFORM_VERSION=1.2.0
ENV GOLANG_VERSION=1.18.6
ENV TERRAGRUNT_VERSION=v0.38.7
#Copy python requirements file
COPY requirements.txt /tmp/pip-tmp/
# Makes the Ansible directories
RUN mkdir /etc/ansible /ansible
RUN mkdir ~/.ssh
# Configure apt and install python packages
RUN apt-get update -y -q \
&& apt-get upgrade -y -q \
&& apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
&& apt-get install -y --no-install-recommends apt-utils \
&& apt-get -y install ca-certificates software-properties-common build-essential curl git gettext-base maven sshpass krb5-user \
&& pip --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \
&& apt-get -y install jq \
&& rm -rf /tmp/pip-tmp \
#Install helm
#RUN wget https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz \
#&& tar -zxvf helm-${HELM_VERSION}-linux-amd64.tar.gz \
#&& mv linux-amd64/helm /usr/local/bin/helm
#Install kubectl
RUN curl --silent https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl --output /usr/local/bin/kubectl \
&& chmod +x /usr/local/bin/kubectl
#Install Docker CLI
RUN curl -sSL https://get.docker.com/ | sh \
&& curl -L "https://github.com/docker/compose/releases/download/2.11.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
&& chmod +x /usr/local/bin/docker-compose
#Install AWS CLI
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
&& unzip awscliv2.zip \
&& ./aws/install
#Copy Assets
RUN mkdir -p /cicd
COPY assets /cicd
#Install helm plugins
#RUN helm plugin install /cicd/helm-nexus-push
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
RUN cd /tmp && \
wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /usr/local/bin && \
rm -rf /tmp/*
RUN cd /tmp && \
wget https://dl.google.com/go/go${GOLANG_VERSION}.linux-amd64.tar.gz && \
tar -xzf go${GOLANG_VERSION}.linux-amd64.tar.gz -C /usr/local && \
rm -rf /tmp/*
RUN cd /tmp && \
wget https://github.com/gruntwork-io/terragrunt/releases/download/${TERRAGRUNT_VERSION}/terragrunt_linux_amd64 && \
mv terragrunt_linux_amd64 /usr/local/bin/terragrunt && \
chmod +x /usr/local/bin/terragrunt && \
rm -rf /tmp/*
RUN git config --global http.sslCAinfo /etc/ssl/certs/ca-certificates.crt
ENV GOPATH=/usr/local/go
ENV PATH=/usr/local/go/bin:$PATH
ENV CGO_ENABLED=0
RUN go version
RUN terraform --version
RUN terragrunt --version
RUN ansible --version
CMD bash
and I get the following error
Reading state information...
The following additional packages will be installed:
libjq1 libonig5
The following NEW packages will be installed:
jq libjq1 libonig5
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 384 kB of archives.
After this operation, 1148 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bullseye/main amd64 libonig5 amd64 6.9.6-1.1 [185 kB]
Get:2 http://deb.debian.org/debian bullseye/main amd64 libjq1 amd64 1.6-2.1 [135 kB]
Get:3 http://deb.debian.org/debian bullseye/main amd64 jq amd64 1.6-2.1 [64.9 kB]
Fetched 384 kB in 0s (1621 kB/s)
Selecting previously unselected package libonig5:amd64.
(Reading database ... 28446 files and directories currently installed.)
Preparing to unpack .../libonig5_6.9.6-1.1_amd64.deb ...
Unpacking libonig5:amd64 (6.9.6-1.1) ...
Selecting previously unselected package libjq1:amd64.
Preparing to unpack .../libjq1_1.6-2.1_amd64.deb ...
Unpacking libjq1:amd64 (1.6-2.1) ...
Selecting previously unselected package jq.
Preparing to unpack .../archives/jq_1.6-2.1_amd64.deb ...
Unpacking jq (1.6-2.1) ...
Setting up libonig5:amd64 (6.9.6-1.1) ...
Setting up libjq1:amd64 (1.6-2.1) ...
Setting up jq (1.6-2.1) ...
Processing triggers for libc-bin (2.31-13+deb11u4) ...
rm: unrecognized option '--silent'
Try 'rm --help' for more information.
The command '/bin/sh -c apt-get update -y -q && apt-get upgrade -y -q && apt-get -y install --no-install-recommends apt-utils dialog 2>&1 && apt-get install -y --no-install-recommends apt-utils && apt-get -y install ca-certificates software-properties-common build-essential curl git gettext-base maven sshpass krb5-user && pip --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt && apt-get -y install jq && rm -rf /tmp/pip-tmp RUN curl --silent https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl --output /usr/local/bin/kubectl && chmod +x /usr/local/bin/kubectl' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BUILD FAILURE: Build step failure: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
ERROR: (gcloud.builds.submit) build 46bc93d2-9bbf-4de2-96db-8312f9b06843 completed with status "FAILURE"
I'm trying to push the Docker image to Google Artifact Registry.
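Judging from the command echoed in the error, the likely culprit is the trailing backslash after rm -rf /tmp/pip-tmp: Docker drops the comment-only lines, so the line continuation folds the next RUN instruction into the same shell command, and rm ends up receiving RUN curl --silent ... as extra arguments, hence rm: unrecognized option '--silent'. A sketch of the fix (paths and flags taken from the Dockerfile above) is simply to end the first RUN there:
&& apt-get -y install jq \
&& rm -rf /tmp/pip-tmp
# no trailing backslash above, so the first RUN instruction ends here
#Install kubectl
RUN curl --silent https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl --output /usr/local/bin/kubectl \
&& chmod +x /usr/local/bin/kubectl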

Docker not exposing ports as expected

I have the below dockerfile that I am trying to build. At the end of the file I am attempting to expose ports 8888 and 6006.
FROM nvidia/cuda:9.2-devel-ubuntu16.04
LABEL maintainer="nweir <nweir#iqt.org>"
ARG solaris_branch='master'
# prep apt-get and cudnn
RUN apt-get update && apt-get install -y --no-install-recommends \
apt-utils && \
rm -rf /var/lib/apt/lists/*
# install requirements
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
bc \
bzip2 \
ca-certificates \
curl \
git \
libgdal-dev \
libssl-dev \
libffi-dev \
libncurses-dev \
libgl1 \
jq \
nfs-common \
parallel \
python-dev \
python-pip \
python-wheel \
python-setuptools \
unzip \
vim \
wget \
build-essential \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
SHELL ["/bin/bash", "-c"]
ENV PATH /opt/conda/bin:$PATH
# install anaconda
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh -O ~/miniconda.sh && \
/bin/bash ~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda clean -tipsy && \
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc
# prepend pytorch and conda-forge before default channel
RUN conda update conda && \
conda config --prepend channels conda-forge && \
conda config --prepend channels pytorch
# get dev version of solaris and create conda environment based on its env file
WORKDIR /tmp/
RUN git clone https://github.com/cosmiq/solaris.git && \
cd solaris && \
git checkout ${solaris_branch} && \
conda env create -f environment.yml
ENV PATH /opt/conda/envs/solaris/bin:$PATH
RUN cd solaris && pip install .
# install various conda dependencies into the space_base environment
RUN conda install -n solaris \
jupyter \
jupyterlab \
ipykernel
# add a jupyter kernel for the conda environment in case it's wanted
RUN source activate solaris && python -m ipykernel.kernelspec \
--name solaris --display-name solaris
# open ports for jupyterlab and tensorboard
EXPOSE 8888
EXPOSE 6006
RUN ["/bin/bash"]
After building the dockerfile into an image I attempt to expose the ports by running the following command:
docker run -p localhost:8888:8888 -p localhost:6006:6006 1ff
When I run docker ps -a I get the below image. As you can see the ports are not exposed.
I am currently using Ubuntu 20.04.
I can't for the life of me figure out what is wrong, your help would be greatly appreciated!
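Two things commonly produce an empty PORTS column in this situation (offered as a sketch of the usual causes, not a confirmed diagnosis of this exact setup). First, EXPOSE only documents ports; publishing happens through -p, and docker ps only shows port mappings for containers that are still running. Second, the final RUN ["/bin/bash"] executes at build time and does nothing to keep the container alive, so unless the base image's default command is long-running, the container exits immediately and lists no ports. A commonly used invocation would be:
# publish on all interfaces, or use 127.0.0.1:8888:8888 to bind only to localhost
docker run -d -p 8888:8888 -p 6006:6006 1ff
# and keep the container alive by ending the Dockerfile with a long-running CMD, e.g.
# CMD ["jupyter", "lab", "--ip=0.0.0.0", "--no-browser", "--allow-root"]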

PyTorch Jupyter Notebook image unable to find torch

I have built a pytorch jupyter notebook image using the Dockerfile below. The only thing I changed from Tensorflow Jupyter Dockerfile is the base image (From Tensorflow to PyTorch).
However, when I launch the notebook in Kubeflow, I'm unable to import torch, even though !pip list does show the torch module. Any solutions?
ARG BASE_IMAGE=pytorch/pytorch:1.5.1-cuda10.1-cudnn7-runtime
FROM $BASE_IMAGE
ARG TF_SERVING_VERSION=0.0.0
ARG NB_USER=jovyan
# TODO: User should be refactored instead of hard coded jovyan
USER root
ENV DEBIAN_FRONTEND noninteractive
ENV NB_USER $NB_USER
ENV NB_UID 1000
ENV HOME /home/$NB_USER
ENV NB_PREFIX /
ENV PATH $HOME/.local/bin:$PATH
# Use bash instead of sh
SHELL ["/bin/bash", "-c"]
RUN apt-get update && apt-get install -yq --no-install-recommends \
apt-transport-https \
build-essential \
bzip2 \
ca-certificates \
curl \
g++ \
git \
gnupg \
graphviz \
locales \
lsb-release \
openssh-client \
sudo \
unzip \
vim \
wget \
zip \
emacs \
python3-pip \
python3-dev \
python3-setuptools \
&& apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install Nodejs for jupyterlab-manager
RUN curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
RUN apt-get update && apt-get install -yq --no-install-recommends \
nodejs \
&& apt-get clean && \
rm -rf /var/lib/apt/lists/*
ENV DOCKER_CREDENTIAL_GCR_VERSION=1.4.3
RUN curl -LO https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${DOCKER_CREDENTIAL_GCR_VERSION}/docker-credential-gcr_linux_amd64-${DOCKER_CREDENTIAL_GCR_VERSION}.tar.gz && \
tar -zxvf docker-credential-gcr_linux_amd64-${DOCKER_CREDENTIAL_GCR_VERSION}.tar.gz && \
mv docker-credential-gcr /usr/local/bin/docker-credential-gcr && \
rm docker-credential-gcr_linux_amd64-${DOCKER_CREDENTIAL_GCR_VERSION}.tar.gz && \
chmod +x /usr/local/bin/docker-credential-gcr
# Install AWS CLI
RUN curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "/tmp/awscli-bundle.zip" && \
unzip /tmp/awscli-bundle.zip && ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws && \
rm -rf ./awscli-bundle
# Install Azure CLI
RUN curl -sL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor | tee /etc/apt/trusted.gpg.d/microsoft.asc.gpg > /dev/null && \
AZ_REPO=$(lsb_release -cs) && \
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | tee /etc/apt/sources.list.d/azure-cli.list && \
apt-get update && \
apt-get install azure-cli
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
locale-gen
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
# Create NB_USER user with UID=1000 and in the 'users' group
# but allow for non-initial launches of the notebook to have
# $HOME provided by the contents of a PV
RUN useradd -M -s /bin/bash -N -u $NB_UID $NB_USER && \
chown -R ${NB_USER}:users /usr/local/bin && \
mkdir -p $HOME && \
chown -R ${NB_USER}:users ${HOME}
RUN export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "deb https://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" > /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update && \
apt-get install -y google-cloud-sdk kubectl
# Install Tini - used as entrypoint for container
RUN cd /tmp && \
wget --quiet https://github.com/krallin/tini/releases/download/v0.18.0/tini && \
echo "12d20136605531b09a2c2dac02ccee85e1b874eb322ef6baf7561cd93f93c855 *tini" | sha256sum -c - && \
mv tini /usr/local/bin/tini && \
chmod +x /usr/local/bin/tini
# Install base python3 packages
RUN pip3 --no-cache-dir install \
jupyter-console==6.0.0 \
jupyterlab \
kubeflow-fairing==1.0.1
RUN docker-credential-gcr configure-docker && chown ${NB_USER}:users $HOME/.docker/config.json
# Configure container startup
EXPOSE 8888
USER jovyan
ENTRYPOINT ["tini", "--"]
CMD ["sh","-c", "jupyter notebook --notebook-dir=/home/${NB_USER} --ip=0.0.0.0 --no-browser --allow-root --port=8888 --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]

Forgerock - Forgeops - util - building with RHEL?

I am trying to take this Dockerfile here - https://github.com/ForgeRock/forgeops/blob/release/6.5.0/docker/util/Dockerfile
and change the old version, which is Alpine Linux (seen below):
FROM alpine:3.7
...
RUN apk add --update ca-certificates \
&& apk add --update -t deps curl\
&& curl -L https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl \
&& chmod +x /usr/local/bin/kubectl \
&& apk del --purge deps \
&& apk add --update jq su-exec unzip curl bash openldap-clients \
&& rm /var/cache/apk/* \
&& mkdir -p $FORGEROCK_HOME \
&& addgroup -g 11111 forgerock \
&& adduser -s /bin/bash -h "$FORGEROCK_HOME" -u 11111 -D -G forgerock forgerock
To change it to run off of RHEL 7 (my changes below)
FROM ubi7-stigd:7.6
...
# Install epel, so we can install jq later
RUN rpm --import http://download.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7 \
&& yum install -y --disableplugin=subscription-manager https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# Install other stuff
RUN yum -y --disableplugin=subscription-manager update \
&& yum install -y --disableplugin=subscription-manager jq su-exec unzip curl bash openldap-clients ca-certificates deps \
&& curl -L https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl \
&& chmod +x /usr/local/bin/kubectl \
&& mkdir -p $FORGEROCK_HOME \
&& groupadd -g 11111 forgerock \
&& useradd -m -s /bin/bash -d "$FORGEROCK_HOME" -u 11111 -g forgerock -G root forgerock
The container builds just fine (although it complains about not being able to find "su-exec" and "deps"). But when I upload this image to my OpenShift and run it via an OpenAM pod, the container fails to start, timing out after 10 minutes. The events show that the container started, and logs only show 2 lines, saying it timed out after 10 minutes.
Anyone know what the issue might be?
I needed to install the "nc" package, as one of the .sh files uses nc.
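For reference, on RHEL 7 / UBI the nc binary comes from the nmap-ncat package rather than a package literally named nc, so a hedged version of the extra install step could look like:
# nmap-ncat provides /usr/bin/nc on RHEL 7 / UBI 7 images
RUN yum install -y --disableplugin=subscription-manager nmap-ncat \
&& yum clean all
The earlier complaints during the build are also expected: deps was only the name of the virtual package created by apk add -t deps in the Alpine original, and su-exec is Alpine-specific (gosu is the usual substitute on RHEL-based images), so yum has nothing to install for either.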

Docker, running NVM script in a new bash shell

I have the following in my Dockerfile:
run apt-get update; \
apt-get install -y curl && \
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.5/install.sh | bash
Following this line, I need to run a command in a new bash shell so that the environment variables set by the NVM script are used.
I have tried the following to install Node.js, and it does not work:
run ["/bin/bash", "-c", "nvm install 8.7.0"]
What can I do?
It's better to use an existing Docker Hub image and build on it in your Dockerfile.
You can check this repository, or this link for more repositories; please read the description before choosing one.
So, for example, you can add the line below to your Dockerfile; it will pull an image with nvm already installed, and then you add your app's instructions on top of it.
FROM livingdocs/nvm
Or you can read their Dockerfile and reuse the commands they used to install nvm:
ADD ./.nvmrc /app/.nvmrc
RUN bash -c '. /usr/share/nvm/nvm.sh && cd /app && nvm install && nvm alias default'
If that doesn't work, use this one from another repository:
RUN sudo apt-get update && \
sudo apt-get install -y build-essential libssl-dev libmysqlclient-dev && \
sudo apt-get clean && \
sudo rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN curl --location https://raw.github.com/creationix/nvm/master/install.sh | sh && \
sudo /bin/bash -c "echo \"[[ -s \$HOME/.nvm/nvm.sh ]] && . \$HOME/.nvm/nvm.sh\" >> /etc/profile.d/npm.sh" && \
echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc

Resources