Alpine 3.11 diff: unrecognized option: c BusyBox v1.31.1 () multi-call binary - linux

I am using Alpine 3.11 to build my image, and everything goes well during the build. The Dockerfile is below:
FROM alpine:3.11
LABEL version="1.0"
ARG UID="110"
ARG PYTHON_VERSION="3.8.10-r0"
ARG ANSIBLE_VERSION="5.0.1"
ARG AWSCLI_VERSION="1.22.56"
# Create jenkins user with sudo privileges
RUN adduser -u ${UID} -D -h /home/jenkins/ jenkins
RUN echo 'jenkins ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /tmp/.ansible
RUN chown -R jenkins:jenkins /tmp/.ansible
# Install minimal packages
RUN apk --update --no-cache add bash bind-tools curl gcc git libffi-dev libpq make mysql-client openssl postgresql-client sudo unzip wget coreutils
#RUN apk --update --no-cache add py-mysqldb
RUN apk --update --no-cache add python3=${PYTHON_VERSION} python3-dev py3-pip py3-cryptography
# Install JQ from sources
RUN wget https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
RUN mv jq-linux64 /usr/bin/jq
RUN chmod +x /usr/bin/jq
# Install ansible and awscli with python package manager
RUN pip3 install --upgrade pip
RUN pip3 install yq --ignore-installed PyYAML
RUN pip3 install ansible==${ANSIBLE_VERSION}
RUN pip3 install awscli==${AWSCLI_VERSION} boto boto3 botocore s3cmd pywinrm pymysql 'python-dateutil<2.8.1'
# Clean cache
RUN rm -rf /var/cache/apk/*
# Display packages versions
RUN python3 --version && \
pip3 --version && \
ansible --version && \
aws --version
This image is later used to launch some Jenkins jobs, nothing unusual.
But when I try to use the diff command in one of these jobs I get the following error:
diff: unrecognized option: c BusyBox v1.31.1 () multi-call binary
That's why I tried to install the coreutils package, but the "-c" option is still unrecognized, which is weird.
So my question is: is there a way to add the -c option to the diff command? According to the GNU manual it should be available by default, but apparently not on Alpine. If there is a way, could anyone please share it?
P.S.: In case you are wondering why I am using the diff command, it is just to compare two JSON files, and -c is necessary for me in this context.

Well, I just had to add the diffutils package to the list; after installing it, everything works well.
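That is, diffutils just gets added to the existing apk line from the Dockerfile above:
RUN apk --update --no-cache add bash bind-tools curl diffutils gcc git libffi-dev libpq make mysql-client openssl postgresql-client sudo unzip wget coreutils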

In spite of it being required by the POSIX diff specification, it looks like the BusyBox implementation of diff doesn't support the -c option.
One thing you could do is change your diff invocation to use the unified context diff format. Again, BusyBox diff appears not to support -u, so you need to use an explicit -U option with the number of lines of context:
diff -U3 file.orig file.new
In general, the Alpine environment has many small differences like this. If you're installing the GNU versions of these tools anyway – your Dockerfile already installs GNU bash and coreutils – you'll probably see minimal to no space savings from using an Alpine base image, and using a Debian or Ubuntu base that already includes the GNU versions of these tools will be easier.
# not Alpine
FROM ubuntu:20.04
...
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
bind9-utils \
build-essential \
curl \
git-core \
...
You may need to search on https://packages.debian.org/ to find equivalent Debian packages. build-essential is a metapackage that includes the entire C toolchain (gcc, make, et al.); bash, coreutils, and diffutils would typically be installed as part of the base distribution image.

Related

How to add a user and a group in a Docker container running on macOS

I have a Docker container running "FROM arm64v8/oraclelinux:8". I am running this on a Mac M1 mini using TightVNC.
I want to add a user called "suiteuser" (uid 42065) in a group called "cvsgroup" (gid 513) inside my Docker container, so that when I run the container it starts under my user directly.
Here is my entire Dockerfile:
FROM arm64v8/oraclelinux:8
# Setup basic environment stuff
ENV container docker
ENV LANG en_US.UTF-8
ENV TZ EST
ENV DEBIAN_FRONTEND=noninteractive
# Base image stuff
#RUN yum install -y zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel openssl-devel vim yum-utils sssd sssd-tools krb5-libs krb5-workstation.x86_64
# CCSMP dependent
RUN yum install -y wget
RUN yum install -y openssl-libs-1.1.1g-15.el8_3.aarch64
RUN yum install -y krb5-workstation krb5-libs krb5-devel
RUN yum install -y glibc-devel glibc-common
RUN yum install -y make gcc java-1.8.0-openjdk-devel tar perl maven svn openssl-devel gcc
RUN yum install -y gdb
RUN yum install -y openldap* openldap-clients nss-pam-ldapd
RUN yum install -y zlib-devel bzip2 bzip2-devel vim yum-utils sssd sssd-tools
# Minor changes to image to get ccsmp to build
RUN ln -s /usr/lib/jvm/java-1.8.0-openjdk /usr/lib/jvm/default-jvm
RUN cp /usr/include/linux/stddef.h /usr/include/stddef.h
# Install ant 1.10.12
RUN wget https://mirror.its.dal.ca/apache//ant/binaries/apache-ant-1.10.12-bin.zip
RUN unzip apache-ant-1.10.12-bin.zip && mv apache-ant-1.10.12/ /opt/ant
ENV JAVA_HOME /usr
ENV ANT_HOME="/usr/bin/ant"
ENV PATH="/usr/bin/ant:$PATH"
CMD /bin/bash
Could anyone please suggest any ideas on how to do this?
Note 1: I know it's not advisable to do this directly in the container, as every time you want to make any changes you would have to rebuild it, but this time I want to do this.
To create the group:
RUN groupadd -g 513 cvsgroup
To create the user, as a member of that group:
RUN useradd -G cvsgroup -m -u 42065 suiteuser
And toward the end of Dockerfile, you can set the user:
USER suiteuser
There may be more to do here, though, depending on your application. For example, you may need to chown some of the contents to be owned by suiteuser.
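For example, if the application needs to modify something installed during the build (a sketch only, using the Ant directory from the Dockerfile above):
# make suiteuser the owner of content it has to modify at runtime
RUN chown -R suiteuser:cvsgroup /opt/ant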

Docker make Nvidia GPUs visible during docker build process

I want to build a Docker image in which I compile custom kernels with PyTorch. Therefore I need access to the available GPUs in order to compile the custom kernels during the docker build process. On the host machine everything is set up, including nvidia-container-runtime, nvidia-docker, the NVIDIA drivers, CUDA, etc. The following command shows the Docker runtime information on the host system:
$ docker info|grep -i runtime
Runtimes: nvidia runc
Default Runtime: runc
As you can see, the default Docker runtime in my case is runc. I think changing the default runtime from runc to nvidia would solve this problem, as noted here.
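For reference, that change would normally be made in /etc/docker/daemon.json and (as far as I understand the usual nvidia-container-runtime setup) looks roughly like this:
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}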
The proposed solution doesn't work in my case because:
I have no permissions to change the default runtime on the system I use
I have no permissions to make changes to the daemon.json file
Is there a way to get access to the GPUs during the build process in the Dockerfile, in order to compile custom PyTorch kernels for CPU and GPU (in my case DCNv2)?
Here is the minimal example of my Dockerfile to reproduce this problem. In this image, DCNv2 is only compiled for CPU and not for GPU.
FROM nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata && \
apt-get install -y --no-install-recommends software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa && \
apt update && \
apt install -y --no-install-recommends python3.6 && \
apt-get install -y --no-install-recommends \
build-essential \
python3.6-dev \
python3-pip \
python3.6-tk \
pkg-config \
software-properties-common \
git
RUN ln -s /usr/bin/python3 /usr/bin/python && \
ln -s /usr/bin/pip3 /usr/bin/pip
RUN python -m pip install --no-cache-dir --upgrade pip setuptools && \
python -m pip install --no-cache-dir torch==1.4.0 torchvision==0.5.0
RUN git clone https://github.com/CharlesShang/DCNv2/
#Compile DCNv2
WORKDIR /DCNv2
RUN bash ./make.sh
# clean up
RUN apt-get clean && \
rm -rf /var/lib/apt/lists/*
#Build: docker build -t my_image .
#Run: docker run -it my_image
A non-optimal solution which worked would be the following:
Comment out the line RUN bash ./make.sh in the Dockerfile
Build image: docker build -t my_image .
Run image in interactive mode: docker run --gpus all -it my_image
Compile DCNv2 manually: root@1cd02fd62461:/DCNv2# ./make.sh
Here DCNv2 is compiled for CPU and GPU, but that does not seem like an ideal solution to me, because I must compile DCNv2 every time I start the container.

apk not found error while changing to node-buster from Alpine base image

I have changed my image in Docker from an Alpine base image to node:14.16-buster. While running the code I am getting an 'apk not found' error.
Sharing the code snippet:
FROM node:14.16-buster
# ========= steps for Oracle instant client installation (start) ===============
RUN apk --no-cache add libaio libnsl libc6-compat curl && \
cd /tmp && \
curl -o instantclient-basiclite.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip -SL && \
unzip instantclient-basiclite.zip && \
mv instantclient*/ /usr/lib/instantclient && \
rm instantclient-basiclite.zip
Can you please help here? What do I need to change?
The issue comes from the fact that you're changing your base image from Alpine-based to Debian-based.
Debian-based Linux distributions use apt as their package manager (Alpine uses apk).
That is the reason why you get apk not found. Use apt install instead, but keep in mind that the package names may differ and you might need to look them up. After all, apt is a different piece of software with its own capabilities.
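As a rough sketch, the first RUN line might become something like the following on Buster (the package names are assumptions to verify: libaio1 is the usual Debian counterpart of libaio, unzip has to be installed explicitly, and a libnsl/libc6-compat equivalent may not be needed at all):
# Debian/Buster variant of the Alpine RUN line above (package names approximate)
RUN apt-get update && \
    apt-get install -y --no-install-recommends libaio1 curl unzip && \
    rm -rf /var/lib/apt/lists/* && \
    cd /tmp && \
    curl -o instantclient-basiclite.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip -SL && \
    unzip instantclient-basiclite.zip && \
    mv instantclient*/ /usr/lib/instantclient && \
    rm instantclient-basiclite.zip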
Buster images are based on Debian.
They don't support APK; the default package manager is APT.
For example you can do :
FROM node:15.14.0-buster-slim
RUN apt-get update && \
apt-get install -y \
curl \
jq \
git \
wget \
openssl \
bash \
tar \
net-tools && \
rm -rf /var/lib/apt/lists/*
RUN mkdir /app && \
chown node:node /app
APK is part of Alpine Linux; you would have to change back to an Alpine-based image if you want to use APK.
The buster node images are Debian based. buster is the release name for Debian 10 (11 will be bullseye).
Debian uses APT for packaging. apt-get can be used from scripts:
apt-get update && apt-get install -y libaio1 curl
libnsl2 is not available in Buster, but you might not need it

docker ERROR: Could not find a version that satisfies the requirement apturl==0.5.2

I am using Windows 10. I want to build a container based on Linux so I can replicate code and dependencies developed on Ubuntu. When I try to build, it outputs the error message above.
From my understanding, Docker Desktop runs a Linux kernel under the hood, allowing Windows users to run Linux-based containers, so I am not sure why it is outputting this error.
My Dockerfile looks like this:
FROM ubuntu:18.04
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt update \
&& apt install -y htop python3-dev wget
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir root/.conda \
&& sh Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda create -y -n ml python=3.7
COPY . src/
RUN /bin/bash -c "cd src \
&& source activate ml \
&& pip install -r requirements.txt"
requirements.txt contains:
apturl==0.5.2
asn1crypto==0.24.0
bleach==2.1.2
Brlapi==0.6.6
certifi==2020.11.8
chardet==3.0.4
click==7.1.2
command-not-found==0.3
configparser==5.0.1
cryptography==2.1.4
cupshelpers==1.0
dataclasses==0.7
When I run the docker build command it outputs:
1.649 ERROR: Could not find a version that satisfies the requirement apturl==0.5.2
1.649 ERROR: No matching distribution found for apturl==0.5.2
Deleting that entry and running the build again leads to another error. All the errors seem to be associated with Ubuntu packages.
Am I not running an Ubuntu container? Why am I not allowed to install Ubuntu packages?
Thanks!
You are trying to install Ubuntu packages with pip (which is for Python packages).
Try apt install -y apturl instead.
If you want to install Python packages, use pip install package_name.
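A rough sketch of how the split might look (the names are taken from the requirements list above; which of the remaining entries are genuine PyPI packages still needs to be checked):
# Ubuntu system packages: install with apt, not pip
RUN apt update && apt install -y apturl command-not-found
# real PyPI packages: keep these in requirements.txt for pip
RUN pip install asn1crypto==0.24.0 bleach==2.1.2 certifi==2020.11.8 chardet==3.0.4 click==7.1.2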

How to refresh your shell when using a Dockerfile?

I am trying to build a Dockerfile that can make use of Azure Functions. After unsuccessfully trying to build it with alpine:3.9 because of library issues, I swapped to ubuntu:18.04. Now the problem is that I can't install nvm (Node Version Manager) in such a way that I can install Node. My Dockerfile is below. I have managed to install nvm, but while trying to use it I cannot install the Node version I want. The problem probably has to do with refreshing the shell, but that is tricky, because Docker appears to keep using the original shell for the subsequent build steps. Any suggestions on how to refresh the shell so nvm can work effectively?
FROM ubuntu:18.04
RUN apt update && apt upgrade -y && apt install -qq -y --no-install-recommends \
python-pip \
python-setuptools \
wget \
build-essential \
libssl-dev
RUN pip install azure-cli
RUN wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
RUN . /root/.nvm/nvm.sh && nvm install 10.14.1 && node
ENTRYPOINT ["/bin/bash"]
After the nvm install command, put:
SHELL ["/bin/bash", "--login" , "-c"]
RUN nvm install 17
SHELL ["/bin/sh", "-c"]
The default shell is sh, and the first SHELL instruction switches it to bash. The --login parameter is required because you want .bashrc to be sourced.
As all subsequent commands would be executed with the changed shell, it's good to switch back to sh if you don't need bash anymore.
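Applied to the Dockerfile in the question, the tail end would look roughly like this (a sketch keeping the version the question asks for):
RUN wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
# login bash picks up the nvm setup the installer appended to the shell startup files
SHELL ["/bin/bash", "--login", "-c"]
RUN nvm install 10.14.1 && node --version
# switch back to the default shell for the rest of the build
SHELL ["/bin/sh", "-c"]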
You usually don't need version managers like nvm in a Docker image. Since a Docker image packages only a single application, and since it has its own isolated filesystem, you can just install the single version of Node you need.
The first thing I'd try is to just install whatever version of Node the standard Ubuntu package has (in Ubuntu 18.04, looks like 8.11). While there are some changes between Node versions, for the most part the language and core library have been pretty stable.
RUN apt update && apt install -y nodejs
Or, if you need something newer, there are official Debian packages:
RUN curl -sSL https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - \
&& echo "deb https://deb.nodesource.com/node_10.x cosmic main" > /etc/apt/sources.list.d/nodesource.list \
&& apt update \
&& apt install -y nodejs
This will give you a current version of that major version of Node (as of this writing, 10.15.1).
If you really need that specific version of Node, there are official binary packages. I might write:
FROM ubuntu:18.04
ARG node_version=10.14.1
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
ca-certificates \
curl \
xz-utils
RUN cd /usr/local \
&& curl -o- https://nodejs.org/dist/v${node_version}/node-v${node_version}-linux-x64.tar.xz \
| tar xJf - --strip 1
...where the last couple of lines unpack the Node tarball directly into /usr/local.
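To confirm the result during the build, a quick version check could be appended (just a sketch; the binary tarball also ships npm):
# verify the unpacked Node toolchain is on PATH
RUN node --version && npm --version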
