Currently I have a Dockerfile that creates an "oraclelinux-8-x86" image, but I want to edit this Dockerfile to create an "oraclelinux-8-arm64v8" image instead.
Here is my current Dockerfile:
FROM oraclelinux:8
# Setup basic environment stuff
ENV container docker
ENV LANG en_US.UTF-8
ENV TZ EST
# Base image stuff
#RUN yum install -y zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel openssl-devel vim yum-utils sssd sssd-tools krb5-libs krb5-workstation.x86_64
# CCSMP dependent
RUN yum install -y glibc-devel.i686 krb5-devel
RUN yum install -y wget
RUN yum install -y make gcc java-1.8.0-openjdk-devel tar perl maven svn
# Minor changes to image to get ccsmp to build
RUN ln -s /usr/lib/jvm/java-1.8.0-openjdk /usr/lib/jvm/default-jvm
RUN cp /usr/include/linux/stddef.h /usr/include/stddef.h
RUN wget https://mirror.its.dal.ca/apache//ant/binaries/apache-ant-1.10.12-bin.zip
RUN unzip apache-ant-1.10.12-bin.zip && mv apache-ant-1.10.12/ /opt/ant
ENV JAVA_HOME /usr
ENV ANT_HOME="/usr/bin/ant"
ENV PATH="/usr/bin/ant:$PATH"
CMD /bin/bash
Is there any way to do this? Any suggestions are highly appreciated.
The easiest way to achieve this is to use docker buildx as your builder. Buildx has a --platform flag with which you can tell the builder what architecture you are building for. More on this on the official Docker page.
Example:
docker buildx build -t adamparco/helloworld:latest --platform linux/arm64 --push github.com/adamparco/helloworld
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t adamparco/demo:latest --push .
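Applied to your Dockerfile, a minimal sketch (assuming you run it from the directory containing the Dockerfile and want the result loaded into your local image store rather than pushed) would be:
docker buildx build --platform linux/arm64 -t oraclelinux-8-arm64v8 --load .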
I have a Docker container built FROM arm64v8/oraclelinux:8, and I am running it on a Mac M1 Mini using TightVNC.
I want to add a user called "suiteuser" (uid 42065) in a group called "cvsgroup" (gid 513) inside my Docker container, so that when I run the container it starts under my user directly.
Here is my entire Dockerfile:
FROM arm64v8/oraclelinux:8
# Setup basic environment stuff
ENV container docker
ENV LANG en_US.UTF-8
ENV TZ EST
ENV DEBIAN_FRONTEND=noninteractive
# Base image stuff
#RUN yum install -y zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel openssl-devel vim yum-utils sssd sssd-tools krb5-libs krb5-workstation.x86_64
# CCSMP dependent
RUN yum install -y wget
RUN yum install -y openssl-libs-1.1.1g-15.el8_3.aarch64
RUN yum install -y krb5-workstation krb5-libs krb5-devel
RUN yum install -y glibc-devel glibc-common
RUN yum install -y make gcc java-1.8.0-openjdk-devel tar perl maven svn openssl-devel gcc
RUN yum install -y gdb
RUN yum install -y openldap* openldap-clients nss-pam-ldapd
RUN yum install -y zlib-devel bzip2 bzip2-devel vim yum-utils sssd sssd-tools
# Minor changes to image to get ccsmp to build
RUN ln -s /usr/lib/jvm/java-1.8.0-openjdk /usr/lib/jvm/default-jvm
RUN cp /usr/include/linux/stddef.h /usr/include/stddef.h
# Install ant 1.10.12
RUN wget https://mirror.its.dal.ca/apache//ant/binaries/apache-ant-1.10.12-bin.zip
RUN unzip apache-ant-1.10.12-bin.zip && mv apache-ant-1.10.12/ /opt/ant
ENV JAVA_HOME /usr
ENV ANT_HOME="/usr/bin/ant"
ENV PATH="/usr/bin/ant:$PATH"
CMD /bin/bash
Could anyone please suggest any ideas on how to do this?
Note 1: I know it's not advisable to do this directly in the container, since every time you want to make any changes you would have to rebuild it, but this time I want to do this.
To create the group:
RUN groupadd -g 513 cvsgroup
To create the user, as a member of that group:
RUN useradd -G cvsgroup -m -u 42065 suiteuser
And toward the end of Dockerfile, you can set the user:
USER suiteuser
There may be more to do here, though, depending on your application. For example, you may need to chown some of the contents to be owned by suiteuser.
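Putting it together toward the end of your Dockerfile, a minimal sketch would be (the chown line is only a hypothetical example, using the /opt/ant directory your Dockerfile already creates):
RUN groupadd -g 513 cvsgroup && \
    useradd -G cvsgroup -m -u 42065 suiteuser
# Optionally give suiteuser ownership of content it needs to write to, e.g.:
# RUN chown -R suiteuser:cvsgroup /opt/ant
USER suiteuser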
I am using Alpine 3.11 to build my image, and everything goes well during the build. The Dockerfile is below:
FROM alpine:3.11
LABEL version="1.0"
ARG UID="110"
ARG PYTHON_VERSION="3.8.10-r0"
ARG ANSIBLE_VERSION="5.0.1"
ARG AWSCLI_VERSION="1.22.56"
# Create jenkins user with sudo privileges
RUN adduser -u ${UID} -D -h /home/jenkins/ jenkins
RUN echo 'jenkins ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /tmp/.ansible
RUN chown -R jenkins:jenkins /tmp/.ansible
# Install minimal packages
RUN apk --update --no-cache add bash bind-tools curl gcc git libffi-dev libpq make mysql-client openssl postgresql-client sudo unzip wget coreutils
#RUN apk --update --no-cache add py-mysqldb
RUN apk --update --no-cache add python3=${PYTHON_VERSION} python3-dev py3-pip py3-cryptography
# Install JQ from sources
RUN wget https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
RUN mv jq-linux64 /usr/bin/jq
RUN chmod +x /usr/bin/jq
# Install ansible and awscli with python package manager
RUN pip3 install --upgrade pip
RUN pip3 install yq --ignore-installed PyYAML
RUN pip3 install ansible==${ANSIBLE_VERSION}
RUN pip3 install awscli==${AWSCLI_VERSION} boto boto3 botocore s3cmd pywinrm pymysql 'python-dateutil<2.8.1'
# Clean cache
RUN rm -rf /var/cache/apk/*
# Display packages versions
RUN python3 --version && \
pip3 --version && \
ansible --version && \
aws --version
This image is later used to launch some Jenkins jobs, nothing unusual.
But when I try to use the diff command in one of these jobs I get the following error:
diff: unrecognized option: c
BusyBox v1.31.1 () multi-call binary
That's why I tried to install the coreutils package, but the "-c" option is still unrecognized, which is weird.
So my question is: is there a way to add the -c option for the diff command? According to the GNU manual it should be available automatically, but apparently not on Alpine. If there is a way, could anyone please share it?
P.S.: In case you are wondering why I am using the diff command, it is just to compare two JSON files, and -c is necessary for me in this context.
Well, I just had to add the diffutils package to the list; after installing it, everything works well.
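In the Dockerfile above, that just means appending diffutils to the existing apk add line; a sketch based on the line from the question:
RUN apk --update --no-cache add bash bind-tools curl gcc git libffi-dev libpq make mysql-client openssl postgresql-client sudo unzip wget coreutils diffutils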
In spite of it being required in the POSIX diff specification, it looks like the BusyBox implementation of diff doesn't support the -c option.
One thing you could do is change your diff invocation to use the unified context diff format. Again, BusyBox diff appears not to support -u, so you need to use an explicit -U option with the number of lines of context:
diff -U3 file.orig file.new
In general, the Alpine environment has many small differences like this. If you're installing the GNU versions of these tools anyway (your Dockerfile already installs GNU bash and coreutils), you'll probably find minimal to no space savings from using an Alpine base image, and using a Debian or Ubuntu base that already includes the GNU versions of these tools will be easier.
# not Alpine
FROM ubuntu:20.04
...
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
bind9-utils \
build-essential \
curl \
git-core \
...
You may need to search on https://packages.debian.org/ to find equivalent Debian packages. build-essential is a metapackage that includes the entire C toolchain (gcc, make, et al.); bash, coreutils, and diffutils would typically be installed as part of the base distribution image.
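With a Debian or Ubuntu base, your original diff -c invocation should then work as-is; for example (the file names here are just placeholders):
diff -c file1.json file2.json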
I want to build a Docker image where I can compile custom kernels with PyTorch. Therefore I need access to the available GPUs in order to compile the custom kernels during the Docker build process. On the host machine everything is set up, including nvidia-container-runtime, nvidia-docker, NVIDIA drivers, CUDA, etc. The following command shows Docker runtime information on the host system:
$ docker info|grep -i runtime
Runtimes: nvidia runc
Default Runtime: runc
As you can see, the default Docker runtime in my case is runc. I think changing the default runtime from runc to nvidia would solve this problem, as noted here.
The proposed solution doesn't work in my case because:
I have no permissions to change the default runtime on the system I use
I have no permissions to make changes to the daemon.json file
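For reference, the change proposed there (which I cannot apply on this system) would look roughly like this in /etc/docker/daemon.json, assuming a standard nvidia-container-runtime installation:
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}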
Is there a way to get access to the GPUs during the build process in the Dockerfile, in order to compile custom PyTorch kernels for CPU and GPU (in my case DCNv2)?
Here is a minimal example of my Dockerfile to reproduce the problem. In this image, DCNv2 is only compiled for CPU and not for GPU.
FROM nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata && \
apt-get install -y --no-install-recommends software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa && \
apt update && \
apt install -y --no-install-recommends python3.6 && \
apt-get install -y --no-install-recommends \
build-essential \
python3.6-dev \
python3-pip \
python3.6-tk \
pkg-config \
software-properties-common \
git
RUN ln -s /usr/bin/python3 /usr/bin/python && \
ln -s /usr/bin/pip3 /usr/bin/pip
RUN python -m pip install --no-cache-dir --upgrade pip setuptools && \
python -m pip install --no-cache-dir torch==1.4.0 torchvision==0.5.0
RUN git clone https://github.com/CharlesShang/DCNv2/
#Compile DCNv2
WORKDIR /DCNv2
RUN bash ./make.sh
# clean up
RUN apt-get clean && \
rm -rf /var/lib/apt/lists/*
#Build: docker build -t my_image .
#Run: docker run -it my_image
A non-optimal solution which worked would be the following:
Comment out line RUN bash ./make.sh in Dockerfile
Build image: docker build -t my_image .
Run image in interactive mode: docker run --gpus all -it my_image
Compile DCNv2 manually: root@1cd02fd62461:/DCNv2# ./make.sh
Here DCNv2 is compiled for CPU and GPU, but that does not seem like an ideal solution to me, because I must compile DCNv2 every time I start the container.
I am unsure whether Stack Overflow or Server Fault is the right Stack Exchange site, but I'm going with Stack Overflow because the Alicloud site said to add a tag and ask the question here.
I'm currently building an image based on docker:stable, which is an Alpine distro, that will have aliyun-cli installed and available for use. However, I am getting a weird "command not found" error when I run it. I have followed the guide here https://partners-intl.aliyun.com/help/doc-detail/139508.htm and moved the aliyun binary to /usr/sbin.
Here is my Dockerfile, for example:
FROM docker:stable
RUN apk update && apk add curl
#Install python 3
RUN apk update && apk add python3 py3-pip
#Install AWS Cli
RUN pip3 install awscli --upgrade
# Install Aliyun CLI
RUN curl -L -o aliyun-cli.tgz https://aliyuncli.alicdn.com/aliyun-cli-linux-3.0.30-amd64.tgz
RUN tar -xzvf aliyun-cli.tgz
RUN mv aliyun /usr/bin
RUN chmod +x /usr/bin/aliyun
RUN rm aliyun-cli.tgz
However, when I run aliyun (which can be auto-completed) I get this:
/ # aliyun
sh: aliyun: not found
I've tried moving it to other bin directories, and cd-ing into the folder and calling it explicitly, but I always get a "command not found". Any suggestions would be welcome.
Did you check this Dockerfile?
Also, why do you need to install aws-cli in the same image, and why maintain it yourself when AWS provides a managed aws-cli image?
docker run --rm -it amazon/aws-cli --version
That's it for the aws-cli image, but if you want it in an existing image then you can try:
RUN pip install awscli --upgrade
Dockerfile
FROM python:2-alpine3.8
LABEL com.frapsoft.maintainer="Maik Ellerbrock" \
com.frapsoft.version="0.1.0"
ARG SERVICE_USER
ENV SERVICE_USER ${SERVICE_USER:-aliyun}
RUN apk add --no-cache curl
RUN curl https://raw.githubusercontent.com/ellerbrock/docker-collection/master/dockerfiles/alpine-aliyuncli/requirements.txt > /tmp/requirements.txt
RUN \
adduser -s /sbin/nologin -u 1000 -H -D ${SERVICE_USER} && \
apk add --no-cache build-base && \
pip install aliyuncli && \
pip install --no-cache-dir -r /tmp/requirements.txt && \
apk del build-base && \
rm -rf /tmp/*
USER ${SERVICE_USER}
WORKDIR /usr/local/bin
ENTRYPOINT [ "aliyuncli" ]
CMD [ "--help" ]
build and run
docker build -t aliyuncli .
docker run -it --rm aliyuncli
output
docker run -it --rm abc aliyuncli
usage: aliyuncli <command> <operation> [options and parameters]
<aliyuncli> the valid command as follows:
batchcompute | bsn
bss | cms
crm | drds
ecs | ess
ft | ocs
oms | ossadmin
ram | rds
risk | slb
ubsms | yundun
After a lot of looking around I found a GitHub issue in the official aliyun-cli repository that describes how it is not compatible with Alpine Linux because it is not musl libc compatible.
Link here: https://github.com/aliyun/aliyun-cli/issues/54
Following the workarounds there, I built a multi-stage Dockerfile with the following, which fixed my issue.
Dockerfile
#Build aliyun-cli binary ourselves because of issue
#in alpine https://github.com/aliyun/aliyun-cli/issues/54
FROM golang:1.13-alpine3.11 as cli_builder
RUN apk update && apk add curl git make
RUN mkdir /srv/aliyun
WORKDIR /srv/aliyun
RUN git clone https://github.com/aliyun/aliyun-cli.git
RUN git clone https://github.com/aliyun/aliyun-openapi-meta.git
ENV GOPROXY=https://goproxy.cn
WORKDIR aliyun-cli
RUN make deps; \
make testdeps; \
make build;
FROM docker:19
#Install python 3 & jq
RUN apk update && apk add python3 py3-pip python3-dev jq
#Install AWS Cli
RUN pip3 install awscli --upgrade
# Install Aliyun CLI from builder
COPY --from=cli_builder /srv/aliyun/aliyun-cli/out/aliyun /usr/bin
RUN aliyun configure set --profile default --mode EcsRamRole --ram-role-name build --region cn-shanghai
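After building the image, a quick check (a sketch; the tag my-docker-aliyun is just a placeholder) would be:
docker build -t my-docker-aliyun .
docker run --rm -it my-docker-aliyun aliyun
Running aliyun with no arguments should now print its usage text instead of "sh: aliyun: not found".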
I installed libpcap in my container using the Dockerfile below. How do I make sure it was installed and is working as expected?
I tried the following, hoping to see libpcap:
D:\work >docker exec -u 0 -it containerId sh
/app # cd /etc/apk
/etc/apk # cat repositories
http://dl-cdn.alpinelinux.org/alpine/v3.8/main
http://dl-cdn.alpinelinux.org/alpine/v3.8/community
/etc/apk #
Below is my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-alpine AS build
# Install packages
RUN apk update
RUN apk -U --no-cache add libpcap
Running the apk info command gives the output below:
WARNING: Ignoring APKINDEX.adfa7ceb.tar.gz: No such file or directory
WARNING: Ignoring APKINDEX.efaa1f73.tar.gz: No such file or directory
musl
busybox
alpine-baselayout
alpine-keys
libressl2.7-libcrypto
libressl2.7-libssl
libressl2.7-libtls
ssl_client
zlib
apk-tools
scanelf
musl-utils
libc-utils
ca-certificates
krb5-conf
libcom_err
keyutils-libs
libverto
krb5-libs
libgcc
libintl
libcrypto1.0
libssl1.0
libstdc++
userspace-rcu
lttng-ust
tzdata
Run a docker exec command and try this:
$ apk info
This will list all the installed packages in Alpine.
I can see libpcap in the output.
If you still can't see the package, make sure you have run apk update before installing libpcap.
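For example, a quick check from the host (assuming containerId is your running container, as in the exec command from the question) might be:
docker exec -it containerId sh -c "apk info | grep libpcap"
If the package is installed this prints libpcap; if it prints nothing, the install did not take effect.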