I wrote this Dockerfile for an OS project:
FROM randomdude/gcc-cross-x86_64-elf
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nasm
RUN apt-get install -y xorriso
RUN apt-get install -y grub-pc-bin
RUN apt-get install -y grub-common
VOLUME /
WORKDIR /
and while running sudo docker build buildenv -t testos-buildenv
in the terminal I got this log:
Sending build context to Docker daemon 2.048kB
Step 1/9 : FROM randomdude/gcc-cross-x86_64-elf
---> c7e17c42eb04
Step 2/9 : RUN apt-get update
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in 32e48dbf4a9c
exec /bin/sh: exec format error
The command '/bin/sh -c apt-get update' returned a non-zero code: 1
This file is inside /home/user/Desktop/os-systems/test-os/buildenv.
I need help solving this.
Of course, it is for x86.
It depends on what you want to do.
If you want to use it on your OS, you have to build an arm64 version of the image: replace the x86-specific dependencies in the original Dockerfile and rebuild it.
That Dockerfile is referenced in the description of the base image you use.
If you just want an x86 image and only need to build it on your arm64 OS, then you could try buildx.
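For example, something along these lines should cross-build it (a sketch, assuming the buildx plugin is available and reusing the names from the question; depending on the builder you may also need --load to get the image into the local daemon):
docker buildx build --platform linux/amd64 --load -t testos-buildenv buildenv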
EDIT
While troubleshooting I'm getting different errors:
...
Err:1 http://deb.debian.org/debian bullseye InRelease
Temporary failure resolving 'deb.debian.org'
...
I'm guessing it has something to do with my firewall settings (nftables).
Running docker run busybox nslookup google.com
gives me
;; connection timed out; no servers could be reached
so Docker has no connection to the outside?
Systems
Dev environment: Ubuntu 22.04
Prod environment: Debian 10.12 64-bit / Linux 4.19.0-20-amd64
Dockerfile inside my Node backend folder:
FROM node:slim
# Install wkhtmltopdf
RUN apt-get update
RUN apt-get install -y wkhtmltopdf
RUN npm install -g pm2@latest
WORKDIR /var/api
COPY . .
RUN npm i
EXPOSE 10051-10053
# Start PM2 as PID 1 process
ENTRYPOINT ["pm2-runtime"]
CMD ["process.json"]
When building this file on my dev system (Ubuntu 22.04) it works fine.
However, deploying it to my server and letting it build, I get this output:
Building backend
Sending build context to Docker daemon 159.2kB
Step 1/10 : FROM node:slim
---> 6c8b32c67190
Step 2/10 : RUN apt-get update
---> Using cache
---> b28ad6ee8ebf
Step 3/10 : RUN apt-get install -y wkhtmltopdf
---> Running in 2f76d2582ac0
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package wkhtmltopdf
The command '/bin/sh -c apt-get install -y wkhtmltopdf' returned a non-zero code: 100
ERROR: Service 'backend' failed to build : Build failed
What I have tried
Running apt-get install -y wkhtmltopdf directly on my server installs the package fine.
Added different repos to /etc/apt/sources.list.
I know it is this package: https://packages.debian.org/buster/wkhtmltopdf (?)
Some troubleshooting.
According to Docker docs:
Using apt-get update alone in a RUN statement causes caching issues and subsequent apt-get install instructions fail.
So for your case, you should do:
RUN apt-get update && apt-get install -y wkhtmltopdf
Instead of:
RUN apt-get update
RUN apt-get install -y wkhtmltopdf
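The same best-practices documentation also recommends cleaning up the apt lists in the same layer to keep the image small, for example:
RUN apt-get update && apt-get install -y wkhtmltopdf && rm -rf /var/lib/apt/lists/*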
I found the solution: the problem was nftables and Docker.
Docker adds iptables rules to the ruleset; all I had to do was this:
use an ip and ipv6 table instead of inet
name all chains exactly as in iptables: INPUT, OUTPUT & FORWARD
source: https://ehlers.berlin/blog/nftables-and-docker/
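As a rough illustration of those two points (a minimal sketch only, with placeholder policies and rules; adapt it to your own ruleset, and add an equivalent table ip6 filter for IPv6):
table ip filter {
    chain INPUT {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname "lo" accept
    }
    chain FORWARD {
        type filter hook forward priority 0; policy accept;
    }
    chain OUTPUT {
        type filter hook output priority 0; policy accept;
    }
}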
Instead of fixing the problem, I downloaded the .deb and installed it, in my case with gdebi, but you can also use dpkg.
RUN echo "### Install wkhtmltopdf ###" \
&& wget -nv -O /tmp/wkhtmltopdf.deb https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.buster_amd64.deb \
&& gdebi --non-interactive /tmp/wkhtmltopdf.deb
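Note that node:slim does not ship wget or gdebi, so a fuller sketch might look like this (the extra package names are the usual Debian ones; verify them against your base image):
RUN apt-get update \
 && apt-get install -y --no-install-recommends wget ca-certificates gdebi-core \
 && echo "### Install wkhtmltopdf ###" \
 && wget -nv -O /tmp/wkhtmltopdf.deb https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.buster_amd64.deb \
 && gdebi --non-interactive /tmp/wkhtmltopdf.deb \
 && rm /tmp/wkhtmltopdf.deb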
I am running Docker on an M1 MacBook Pro; here I am using this Dockerfile:
FROM node:current-buster
# Create and set user
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt-get update && apt install -y ./google-chrome-stable_current_amd64.deb
This throws an error
google-chrome-stable:amd64 : Depends: libasound2:amd64 (>= 1.0.16) but it is not installable
and the same for other dependencies.
I have tried various ways:
changing base image
changing the installation step to
apt-get install -y wget gnupg ca-certificates procps libxss1 &&
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' &&
apt-get update &&
apt-get install -y google-chrome-stable
(This gives an "unable to locate package" error.)
The script runs on a Linux machine, but on the M1 Mac it doesn't work.
I actually wanted to run Puppeteer inside Docker, which is why I am trying to install Chrome, in case there is another way around this.
docker buildx build --platform=linux/amd64
This allows us to build the image at least. Not sure if running it would produce the same result on an M1 machine, but at least the image is built.
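For running, docker also accepts a platform flag, so the amd64 image can be started under emulation (a sketch; the image name is just a placeholder):
docker run --platform=linux/amd64 my-image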
EDIT:
So Chrome has no arm image, and that was the main cause of this problem. Changing it to Chromium on a base Ubuntu 18.04 image seems to work fine:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y chromium-browser
It should work on both Debian and Ubuntu; the trick was to run apt-get update first, after which it was able to install the arm build of Chromium.
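Since the end goal in the question is Puppeteer, it may also help to point Puppeteer at the system Chromium instead of its bundled download. With older Puppeteer versions this is typically done via environment variables along these lines (the exact variable names and the binary path depend on the Puppeteer version and distro, so treat this as an assumption to verify):
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser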
I have a Docker container with Android Studio 3.6 and it works perfectly. The problem is that the emulator does not run, because the Ubuntu machine does not have the CPU support to emulate x86. Does anyone know how to handle this in the Dockerfile? Thank you.
This is my Dockerfile:
FROM ubuntu:16.04
RUN dpkg --add-architecture i386
RUN apt-get update
# Download specific Android Studio bundle (all packages).
RUN apt-get install -y curl unzip
RUN apt-get install -y git
RUN curl 'https://uit.fun/repo/android-studio-ide-3.6.3-linux.tar.gz' > /studio.tar.gz && \
tar -zxvf studio.tar.gz && rm /studio.tar.gz
# Install X11
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get install -y xorg
# Install other useful tools
RUN apt-get install -y vim ant
# install Java
RUN apt-get install -y default-jdk
# Install prerequisites
RUN apt-get install -y libz1 libncurses5 libbz2-1.0:i386 libstdc++6 libbz2-1.0 lib32stdc++6 lib32z1
RUN apt-get install -y wget
RUN wget 'https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip' -P /tmp \
&& unzip -d /opt/android /tmp/sdk-tools-linux-4333796.zip
RUN apt-get install -y xserver-xorg-video-amdgpu
# Clean up
RUN apt-get clean
RUN apt-get purge
ENTRYPOINT [ "android-studio/bin/studio.sh" ]
When you're using Ubuntu in Docker, the only way to run an Android emulator is to use a system image built for ARM (e.g. system-images;android-25;google_apis;armeabi-v7a).
However, even though you're able to run the emulator in the container, you will probably be disappointed: an ARM-based emulator is typically very slow to boot, and running it in Docker can be even slower.
If you really want to create it, you can do something like below.
sdkmanager "system-images;android-25;google_apis;armeabi-v7a"
avdmanager create avd -n demoTest -d "pixel" -k "system-images;android-25;google_apis;armeabi-v7a" -g "google_apis" -b "armeabi-v7a"
emulator @demoTest -no-window -no-audio -verbose &
Once you see this message:
emulator: got message from guest system fingerprint HAL
Your emulator is ready to go.
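Depending on the SDK tools version, you may also have to accept the SDK licenses before the sdkmanager call above succeeds, for example:
yes | sdkmanager --licenses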
I followed this tutorial step by step : https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-c-module
But at the step "Build and Push your solution" (https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-c-module#build-and-push-your-solution) I have the following error in the terminal :
standard_init_linux.go:207: exec user process caused "no such file or directory"
I checked the 3 points listed in the tutorial ("If you receive an error trying to build and push your module") but I still have the error.
I don't even know which file it is referring to.
Does someone have an idea what the problem is?
Thanks
EDIT
I've added the full terminal output:
Sending build context to Docker daemon 106kB
Step 1/14 : FROM arm32v7/ubuntu:xenial AS base
---> 8593318db04f
Step 2/14 : RUN apt-get update && apt-get install -y --no-install-recommends software-properties-common && add-apt-repository -y ppa:aziotsdklinux/ppa-azureiot && apt-get update && apt-get install -y azure-iot-sdk-c-dev && rm -rf /var/lib/apt/lists/*
---> Running in 8bed4f396527
standard_init_linux.go:207: exec user process caused "no such file or directory"
The command '/bin/sh -c apt-get update && apt-get install -y --no-install-recommends software-properties-common && add-apt-repository -y ppa:aziotsdklinux/ppa-azureiot && apt-get update && apt-get install -y azure-iot-sdk-c-dev && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 1
Looks like one of the paths in your command cannot be found in the intermediate docker image. Try running a shell directly on the intermediate image using:
docker run -it --entrypoint sh 8593318db04f
to check that /var/lib/apt/lists/ and /bin/sh are actually present in the image. You should be able to manually run the command specified in your Dockerfile.
I have found that quite helpful when debugging failing Docker builds.
It seems you are building an arm32v7 image, so what OS and architecture is your host machine? Can you try to build the amd64 image instead of arm32v7?
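If the host is amd64 but you really do need the arm32v7 image for the IoT Edge device, one common workaround is to register QEMU's binfmt handlers on the host so the arm image can be built under emulation. A sketch using the multiarch helper image (verify it fits your Docker setup):
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes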
So, after building out a pipeline, I realized I will need some custom libraries for a python script I will be pulling from SCM. To install Jenkins in Docker, I used the following tutorial:
https://jenkins.io/doc/book/installing/
Like so:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
Now, I will say I'm not a Docker guru, but I'm aware the Dockerfile allows for passing in library installs for Python. However, because I'm pulling the Docker image from Docker Hub, I'm not sure whether it's possible to add a "RUN pip install" as an argument. Maybe someone has an alternate approach.
Any help is appreciated.
EDIT 1: Here's the output of the first commenter's recommendation:
Step 1/6 : FROM jenkinsci/blueocean
---> b7eef16a711e
Step 2/6 : USER root
---> Running in 150bba5c4994
Removing intermediate container 150bba5c4994
---> 882bcec61ccf
Step 3/6 : RUN apt-get update
---> Running in 324f28f384e0
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Error:
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Observation:
This error occurs when the image you are building on is not Debian-based and hence does not support 'apt'.
To resolve this, we need to find out which package manager it utilizes.
In my case it was: 'apk'.
Resolution:
Replace 'apt-get' with 'apk' in your Dockerfile. (If this does not work, you can try the 'yum' package manager as well.)
The command in your Dockerfile should look like:
RUN apk update
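For the jenkinsci/blueocean image (which is Alpine-based), a minimal sketch of the pip setup could look like this (the Alpine package names python3 and py3-pip are assumptions to check against your Alpine release; apk add --no-cache fetches a fresh index itself, so a separate apk update is optional):
FROM jenkinsci/blueocean
USER root
RUN apk add --no-cache python3 py3-pip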
You can create a Dockerfile
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
You can build the custom image using
docker build -t jenkinspython .
Similar to Hemant Sing's answer, but with 2 slight differences.
First, create a unique directory: mkdir foo
"cd" to that directory and run:
docker build -f jenkinspython .
Where jenkinspython contains:
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
Notice that my change has -f, not -t. And notice that the build output does indeed contain:
Step 5/5 : RUN pip install --upgrade pip
---> Running in d460e0ebb11d
Collecting pip
Downloading https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl (1.3MB)
Installing collected packages: pip
Found existing installation: pip 9.0.1
Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
Successfully installed pip-18.0
Removing intermediate container d460e0ebb11d
---> b7d342751a79
Successfully built b7d342751a79
So now that the image has been built (in my case, b7d342751a79), fire it up and verify that pip has indeed been updated:
$ docker run -it b7d342751a79 bash
root@9f559d448be9:/# pip --version
pip 18.0 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
So now your image has pip installed, so you can feel free to pip install whatever crazy packages you need :)
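For example, the extra libraries can be baked into the image by adding another line to the same Dockerfile (the package name here is just a placeholder for whatever your Python script needs):
RUN pip install requests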