GLIBC_2.27 not Found in Docker Container

I am running a Docker container on my Linux machine. The Dockerfile is as follows:
# 1. base image
FROM tensorflow/tensorflow:1.12.0-devel-py3
ENV DEBIAN_FRONTEND=noninteractive LANG='en_US.UTF-8' LANGUAGE='en_US:en' LC_CTYPE="UTF-8"
# 2. apps
RUN apt update && apt install -y --no-install-recommends \
        software-properties-common && \
    add-apt-repository -y ppa:ubuntu-desktop/ubuntu-make && \
    apt update && \
    apt install -y --no-install-recommends \
        build-essential \
        vim \
        ubuntu-make && \
    umake ide pycharm /root/.local/share/umake/ide/pycharm
Everything goes well, but when I enter the Docker container using the following command:
sudo docker run --ipc=host --gpus all --net=host -it -d --rm -h docker \
    -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
    -v /usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu \
    -v /usr/lib/i386-linux-gnu:/usr/lib/i386-linux-gnu \
    --privileged
and then try a command such as apt update, I receive the following message:
apt: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.27' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.5.0)
However, this does not happen when the same command is invoked from the Dockerfile. For example, if I add
RUN apt update && apt install -y firefox
at the end of the Dockerfile, no errors appear.
I cannot understand why the GLIBC_2.27 linking problem shows up only inside the running container.

I got the answer thanks to the help of @KamilCuk.
I got this error because my host machine runs Ubuntu 18.04 while my guest machine (the container) runs Ubuntu 16.04.
That by itself is not a problem, but when I start the container I share these two folders between the host and the guest:
/usr/lib/x86_64-linux-gnu
/usr/lib/i386-linux-gnu
As a result, the guest system tries to use the host machine's libraries, which is wrong. I no longer remember why I decided to share the host machine's libraries with the guest in the first place. In any case, once I remove those two mounts, everything works.
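For reference, this is the corrected docker run command with the two offending library mounts removed (a sketch; <image> stands for whatever image was built from the Dockerfile above):
sudo docker run --ipc=host --gpus all --net=host -it -d --rm -h docker \
    -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
    --privileged <image>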

Related

Docker make Nvidia GPUs visible during docker build process

I want to build a Docker image in which I compile custom PyTorch kernels, so I need access to the available GPUs during the docker build process. On the host machine everything is set up, including nvidia-container-runtime, nvidia-docker, the NVIDIA drivers, CUDA, etc. The following command shows the Docker runtime information on the host system:
$ docker info|grep -i runtime
Runtimes: nvidia runc
Default Runtime: runc
As you can see, the default Docker runtime in my case is runc. I think changing the default runtime from runc to nvidia would solve this problem, as noted here.
The proposed solution doesn't work in my case because:
I have no permission to change the default runtime on the system I use
I have no permission to make changes to the daemon.json file
Is there a way to access the GPUs during the build process in the Dockerfile, so that custom PyTorch kernels (in my case DCNv2) are compiled for both CPU and GPU?
Here is a minimal example of my Dockerfile that reproduces the problem. In this image, DCNv2 is compiled only for CPU, not for GPU.
FROM nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata && \
    apt-get install -y --no-install-recommends software-properties-common && \
    add-apt-repository ppa:deadsnakes/ppa && \
    apt update && \
    apt install -y --no-install-recommends python3.6 && \
    apt-get install -y --no-install-recommends \
        build-essential \
        python3.6-dev \
        python3-pip \
        python3.6-tk \
        pkg-config \
        software-properties-common \
        git
RUN ln -s /usr/bin/python3 /usr/bin/python && \
    ln -s /usr/bin/pip3 /usr/bin/pip
RUN python -m pip install --no-cache-dir --upgrade pip setuptools && \
    python -m pip install --no-cache-dir torch==1.4.0 torchvision==0.5.0
RUN git clone https://github.com/CharlesShang/DCNv2/
# Compile DCNv2
WORKDIR /DCNv2
RUN bash ./make.sh
# Clean up
RUN apt-get clean && \
    rm -rf /var/lib/apt/lists/*
# Build: docker build -t my_image .
# Run:   docker run -it my_image
A non-optimal solution that did work is the following:
Comment out the line RUN bash ./make.sh in the Dockerfile
Build the image: docker build -t my_image .
Run the image in interactive mode: docker run --gpus all -it my_image
Compile DCNv2 manually: root@1cd02fd62461:/DCNv2# ./make.sh
This way DCNv2 is compiled for both CPU and GPU, but it does not seem like an ideal solution to me, because I must recompile DCNv2 every time I start the container. A build-time sketch follows below.
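One workaround worth trying (my suggestion, not part of the original post) is to compile the CUDA kernels at build time without a visible GPU by telling PyTorch's extension builder which architectures to target. torch.utils.cpp_extension honors the TORCH_CUDA_ARCH_LIST environment variable, and many extension setup scripts also honor FORCE_CUDA; if DCNv2's build script only checks torch.cuda.is_available(), it may need a small patch on top of this. A sketch of the relevant Dockerfile lines:
# Target specific GPU architectures so that no physical GPU is needed at build time.
# Adjust the list to the cards you plan to run on (e.g. 7.0 = V100, 7.5 = Turing).
ENV TORCH_CUDA_ARCH_LIST="6.1;7.0;7.5"
# Ask the extension build to compile CUDA code even though no GPU is visible yet.
ENV FORCE_CUDA="1"
WORKDIR /DCNv2
RUN bash ./make.sh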

Docker run doesn't work as part of a Terraform startup script

I'm using Terraform to provision a bunch of machines at once. Each one should run the same Docker container. The startup script looks like this:
sudo apt-get remove docker docker-engine docker.io containerd runc -y
sudo apt-get update -y
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common -y
curl https://get.docker.com | sh && sudo systemctl --now enable docker
sudo docker build -t dockertest /path/to/dockerfile
sudo docker run --gpus all -it -v /path/to/mount:/usr/src/app dockertest script.py -b 03
Basically it installs Docker, builds the image, and then runs a container from it.
Only the last line doesn't work. If I ssh into the machine and run it there, it works fine, but not as part of the startup script.
How can I get it to work as part of the startup script? It's a hassle to ssh into each machine of a swarm.
If anyone else encounters this problem: the solution is simply to take -it out of the docker run command. A startup script does not run in an interactive terminal, so asking for one with -it makes docker run fail.
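For clarity, the corrected last line of the script (everything else unchanged):
sudo docker run --gpus all -v /path/to/mount:/usr/src/app dockertest script.py -b 03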

Docker command not found while docker.service is Active (running)

I've installed docker on CentOS 7, but when I run docker, I get bash: docker: command not found...
Other apps that require docker gave this error: "docker": executable file not found in $PATH
which docker returns: no docker in (/usr/.....
whereis docker returns: docker: /etc/docker /usr/libexec/docker /usr/share/man/man1/docker.1.gz
This is how I installed it:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
sudo yum update -y && sudo yum install -y \
    containerd.io-1.2.13 \
    docker-ce-19.03.11 \
    docker-ce-cli-19.03.11
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
sudo systemctl enable docker
I would recommend that you read the official documentation at docs.docker.com.
Did you successfully meet the OS requirements?
To install Docker Engine you need a maintained version of CentOS 7;
archived versions are not supported or tested.
The centos-extras repository must be activated. This repository is
activated by default, but if you deactivated it, you have to activate
it again.
The Overlay2 storage driver is recommended.
Have you deleted the older versions?
Older versions of Docker were called docker or docker-engine. If
these are installed, uninstall them, along with associated
dependencies.
$ sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
I have quoted some passages from the official Docker documentation page; I would recommend reading the whole page.
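As a quick sanity check after reinstalling (a sketch based on the official docs; assumes systemd is in use):
sudo systemctl start docker
# The client binary should now resolve from $PATH:
which docker && docker --version
# And the daemon should answer:
sudo docker run hello-world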

Problem with Android Studio emulator in Docker container

I have a Docker container with Android Studio 3.6 and it works perfectly. The problem is that the emulator does not run, because the Ubuntu machine's CPU does not support the virtualization needed to emulate x86. Does anyone know how to include it in the Dockerfile? Thank you.
This is my Dockerfile:
FROM ubuntu:16.04
RUN dpkg --add-architecture i386
RUN apt-get update
# Download specific Android Studio bundle (all packages).
RUN apt-get install -y curl unzip
RUN apt-get install -y git
RUN curl 'https://uit.fun/repo/android-studio-ide-3.6.3-linux.tar.gz' > /studio.tar.gz && \
    tar -zxvf /studio.tar.gz && rm /studio.tar.gz
# Install X11
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get install -y xorg
# Install other useful tools
RUN apt-get install -y vim ant
# Install Java
RUN apt-get install -y default-jdk
# Install prerequisites
RUN apt-get install -y libz1 libncurses5 libbz2-1.0:i386 libstdc++6 libbz2-1.0 lib32stdc++6 lib32z1
RUN apt-get install -y wget
RUN wget 'https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip' -P /tmp \
    && unzip -d /opt/android /tmp/sdk-tools-linux-4333796.zip
RUN apt install -y xserver-xorg-video-amdgpu
# Clean up
RUN apt-get clean
RUN apt-get purge
ENTRYPOINT [ "android-studio/bin/studio.sh" ]
When you're using Ubuntu in Docker, the only way to run an Android emulator is to use a system image built for "arm" (e.g. system-images;android-25;google_apis;armeabi-v7a).
However, even though you can run the emulator in the container, you will probably be disappointed: an ARM-based emulator is typically slow to boot, and running it in Docker can be even slower.
If you really want to create one, you can do something like the following.
sdkmanager "system-images;android-25;google_apis;armeabi-v7a"
avdmanager create avd -n demoTest -d "pixel" -k "system-images;android-25;google_apis;armeabi-v7a" -g "google_apis" -b "armeabi-v7a"
emulator @demoTest -no-window -no-audio -verbose &
Once you see this message:
emulator: got message from guest system fingerprint HAL
Your emulator is ready to go.
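Alternatively (my suggestion, not part of the original answer), if the host CPU does support virtualization, an x86 emulator can run at near-native speed by passing the host's KVM device into the container. A sketch, where my_android_image is a hypothetical image built from the Dockerfile above:
# Works only if the host exposes /dev/kvm (VT-x/AMD-V enabled; check with: ls /dev/kvm)
docker run --device /dev/kvm -it my_android_image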

Can't launch Chrome in a Docker Linux container

I have an ASP.NET Core application that uses the jsreport NuGet packages to run reports. I am attempting to deploy it in a Linux Docker container, but I am having trouble getting Chrome to launch when I run a report. I am getting the error:
Failed to launch chrome! Running as root without --no-sandbox is not supported.
I have followed the directions on the .NET local reporting page (https://jsreport.net/learn/dotnet-local) regarding Docker, but I am still getting the error.
Here is my full Dockerfile:
# use the .NET Core 2.1 runtime default image
FROM microsoft/dotnet:2.1-aspnetcore-runtime
# set the working directory to the server
WORKDIR /server
# copy all contents in the current directory to the container server directory
COPY . /server
# install node
RUN apt-get update -yq \
    && apt-get install curl gnupg -yq \
    && curl -sL https://deb.nodesource.com/setup_8.x | bash \
    && apt-get install nodejs -yq
# install jsreport-cli
RUN npm install jsreport-cli -g
# install chrome for jsreport linux
RUN apt-get update && \
    apt-get install -y gnupg libgconf-2-4 wget && \
    wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
    sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' && \
    apt-get update && \
    apt-get install -y google-chrome-unstable --no-install-recommends
ENV chrome:launchOptions:executablePath google-chrome-unstable
ENV chrome:launchOptions:args --no-sandbox
# expose port 80
EXPOSE 80
CMD dotnet Server.dll
Is there another step that I am missing somewhere?
It's a little late, but maybe this can help someone else.
For me, the only change needed to fix this issue in the Docker container was to run Chrome in headless mode (so the cause was in the tests, not in the Dockerfile).
ChromeOptions options = new ChromeOptions().setHeadless(true);
WebDriver driver = new ChromeDriver(options);
Result: the tests now run successfully, without any errors.
Expanding on Pramod's answer, my own issue was only solved by running with both the --headless and --no-sandbox flags, as in the sketch below.
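A minimal sketch of that combined configuration, assuming the same Selenium ChromeOptions API used in the answer above:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

ChromeOptions options = new ChromeOptions();
options.setHeadless(true);            // run Chrome without a display
options.addArguments("--no-sandbox"); // needed when Chrome runs as root in a container
WebDriver driver = new ChromeDriver(options);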
