I've created Docker images with opam before, but I don't know why this one isn't working. I start from an image that already has opam installed, but that doesn't seem to help.
Dockerfile:
FROM continuumio/miniconda3
#FROM ocaml/opam:latest
FROM ruby:3.1.2
MAINTAINER Brando Miranda "brandojazz#gmail.com"
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ssh \
git \
m4 \
libgmp-dev \
wget \
ca-certificates \
rsync \
strace \
gcc \
rlwrap \
sudo \
lsb-release \
opam
# RUN apt-get clean all
# - This most likely won't work. For now I don't have a solution for a Ruby on Docker container Ubuntu: https://stackoverflow.com/questions/74695464/why-cant-i-install-ruby-3-1-2-in-linux-docker-container?noredirect=1#comment131843536_74695464
#RUN apt-get install -y --no-install-recommends rbenv
#RUN apt-get install -y --no-install-recommends ruby-build
#RUN apt-get install -y --no-install-recommends ruby-full
#RUN rbenv install 3.1.2
#RUN rbenv global 3.1.2
# https://github.com/giampaolo/psutil/pull/2103
RUN useradd -m bot
# format for chpasswd user_name:password
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
WORKDIR /home/bot
USER bot
ADD https://api.github.com/repos/IBM/pycoq/git/refs/heads/main version.json
# -- setup opam like VP's PyCoq
# https://stackoverflow.com/questions/74711264/how-does-one-initialize-opam-inside-a-dockerfile
RUN opam init --disable-sandboxing
Error:
(meta_learning) brandomiranda~/pycoq ❯ docker build -t brandojazz/pycoq:latest_arm .
[+] Building 8.6s (12/34)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 3.56kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ruby:3.1.2 0.0s
=> CACHED [stage-1 1/30] FROM docker.io/library/ruby:3.1.2 0.0s
=> CACHED https://api.github.com/repos/IBM/pycoq/git/refs/heads/main 0.0s
=> [stage-1 2/30] RUN apt-get update && apt-get install -y --no-install-recommends ssh git m4 libgmp-dev wget ca-cert 3.8s
=> [stage-1 3/30] RUN useradd -m bot 0.3s
=> [stage-1 4/30] RUN echo "bot:bot" | chpasswd 0.3s
=> [stage-1 5/30] RUN adduser bot sudo 0.3s
=> [stage-1 6/30] WORKDIR /home/bot 0.0s
=> [stage-1 7/30] ADD https://api.github.com/repos/IBM/pycoq/git/refs/heads/main version.json 0.0s
=> ERROR [stage-1 8/30] RUN opam init --disable-sandboxing 3.7s
------
> [stage-1 8/30] RUN opam init --disable-sandboxing:
#12 0.123 [NOTE] Will configure from built-in defaults.
#12 0.127 Checking for available remotes: rsync and local, git, mercurial.
#12 0.132 - you won't be able to use darcs repositories unless you install the darcs command on your system.
#12 0.132
#12 0.141
#12 0.141 <><> Fetching repository information ><><><><><><><><><><><><><><><><><><><><><>
#12 3.718 [ERROR] Could not update repository "default": Failed to extract archive /tmp/opam-7-6d07ae/index.tar.gz: "/bin/tar xfz /tmp/opam-7-6d07ae/index.tar.gz -C /home/bot/.opam/repo/default.new" exited with code 2
#12 3.718 [ERROR] Initial download of repository failed
------
executor failed running [/bin/sh -c opam init --disable-sandboxing]: exit code: 40
How do I fix this?
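One hedged debugging step, not a confirmed fix: tar exited with code 2, so reproducing the extraction by hand in the same image should surface tar's own error message. The URL below is the default opam repository root; the rest of this fragment is illustrative:

```dockerfile
FROM ruby:3.1.2
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates wget opam
# Manually fetch and extract the repository index that `opam init`
# downloads, so the real tar error is visible in the build log.
RUN wget -qO /tmp/index.tar.gz https://opam.ocaml.org/index.tar.gz \
 && mkdir -p /tmp/repo \
 && tar xzf /tmp/index.tar.gz -C /tmp/repo
```

If the manual extraction also fails (for example when building an arm image under QEMU emulation, which the `latest_arm` tag suggests), the problem is in the build environment rather than in opam itself.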
Related
this is my dockerfile:
FROM public.ecr.aws/lambda/python:3.8-arm64
COPY requirements.txt ./
RUN yum update -y && \
yum install -y gifsicle && \
pip install -r requirements.txt
COPY . .
CMD ["app.handler"]
I'm getting the following error:
#8 200.6 No package gifsicle available.
#8 200.7 Error: Nothing to do
I finally managed to do it by building the package from source. My Dockerfile:
FROM public.ecr.aws/lambda/python:3.8-arm64
RUN yum -y install make gcc wget gzip
RUN wget https://www.lcdf.org/gifsicle/gifsicle-1.93.tar.gz
RUN tar -xzf gifsicle-1.93.tar.gz
RUN cd gifsicle-1.93 && \
./configure && \
make && \
make install
COPY requirements.txt ./
RUN yum update -y && \
pip install -r requirements.txt
COPY . .
CMD ["app.handler"]
I'm following this documentation to write a multi-stage build.
My Dockerfile:
FROM ubuntu:trusty
RUN apt-get update && apt-get install apt-transport-https -y
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update && apt-get install google-chrome-stable -y
FROM node:alpine
COPY . ./
RUN npm install
RUN npm run lighthouse
I'm trying to install Google Chrome onto the image before running Google Lighthouse. However, according to the logs, the build runs the 2nd stage first.
=> CACHED [stage-1 2/4] COPY . ./ 0.0s
=> [stage-1 3/4] RUN npm install 100.8s
=> ERROR [stage-1 4/4] RUN npm run lighthouse
Why is this happening?
They run in parallel, because neither stage depends on the other. If you are doing this just to understand multi-stage builds in Docker, here is a sample:
FROM ubuntu:trusty
RUN apt-get update && apt-get install apt-transport-https -y
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update && apt-get install google-chrome-stable -y
FROM someangularapp:alpine as builder
COPY . ./
RUN npm install
RUN ng build
## The above stage generates a `dist` folder in its workspace
FROM nginx:latest as deployer
COPY --from=builder /app/dist /usr/share/nginx/html/
Now whenever you run:
docker build -t someimagename --target deployer .
The builder stage executes before the deployer stage, because deployer uses --from=builder, which means it depends on the builder stage to copy files from it.
I have a Dockerfile, and it works fine in an Ubuntu VM. However, the same Dockerfile does not build on a Linux server.
Dockerfile:
FROM python:3.9.7-slim as builder-image
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONFAULTHANDLER 1
RUN apt-get update && apt-get install -y --no-install-recommends python3-dev gcc libc-dev musl-dev libffi-dev g++ cargo && \
apt-get clean && rm -rf /var/lib/apt/lists/*
RUN python3.9 -m venv /home/myuser/venv
ENV PATH="/home/myuser/venv/bin:$PATH"
RUN /home/myuser/venv/bin/pip install --upgrade pip
WORKDIR /home/myuser/venv
COPY /data/requirements.txt requirements.txt
RUN pip3 install --no-cache-dir wheel
RUN pip3 install --no-cache-dir -r requirements.txt
FROM python:3.9.7-slim
RUN useradd --create-home myuser
COPY --from=builder-image /home/myuser/venv /home/myuser/venv
USER myuser
RUN mkdir /home/myuser/code
WORKDIR /home/myuser/code
ENV PYTHONUNBUFFERED=1
ENV VIRTUAL_ENV=/home/myuser/venv
ENV PATH="/home/myuser/venv/bin:$PATH"
ENTRYPOINT ["/bin/bash"]
docker build -t python-docker_14122021 .
Error:
Sending build context to Docker daemon 49.66 kB
Step 1/23 : FROM python:3.9-slim-buster as builder-image
Error parsing reference: "python:3.9-slim-buster as builder-image" is not a valid repository/tag: invalid reference format
You have a very old Docker on the server. You need at least Docker 17.06 to support multi-stage builds.
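A quick way to check the server's version, sketched as a POSIX shell helper (the function name is made up; `docker version --format '{{.Server.Version}}'` is a real Docker CLI invocation):

```shell
# Hypothetical helper: check whether a Docker version string is at
# least 17.06, the first release with multi-stage build support.
supports_multistage() {
  required="17.06"
  # sort -V orders version strings; if the required version sorts
  # first (or ties), the given version is new enough.
  lowest="$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n1)"
  [ "$lowest" = "$required" ]
}

# On the server you would feed it the real version, e.g.:
#   supports_multistage "$(docker version --format '{{.Server.Version}}')"
supports_multistage "20.10.7" && echo "multi-stage builds supported"
```

If the check fails, upgrading the Docker engine on the server is the fix; the Dockerfile itself is fine.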
I am trying to install Python 3.5 inside Docker with a CentOS 7 base image. This is our Dockerfile:
FROM base-centos7:0.0.8
# Install basic tools
RUN yum install -y which vim wget git gcc
# Install python 3.5
RUN yum install -y https://repo.ius.io/ius-release-el7.rpm \
&& yum update -y \
&& yum install -y python35u python35u-libs python35u-devel python35u-pip
RUN python3.5 -m pip install --upgrade pip
But during the build, docker build image is failing with the following errors
executor failed running [/bin/sh -c yum install -y https://repo.ius.io/ius-release-el7.rpm
&& yum update -y
&& sudo yum install -y python35u python35u-libs python35u-devel python35u-pip]: exit code: 127.
Can anyone guide me in resolving this issue, and why am I seeing it in the first place?
You can use the official Python image from Docker Hub:
https://hub.docker.com/_/python
Example Dockerfile:
FROM python:3.6
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/code/app.py"]
I think it's easy, isn't it?
The CentOS repo uses:
FROM centos/s2i-base-centos7
EXPOSE 8080
ENV PYTHON_VERSION=3.5 \
PATH=$HOME/.local/bin/:$PATH \
PYTHONUNBUFFERED=1 \
PYTHONIOENCODING=UTF-8 \
LC_ALL=en_US.UTF-8 \
LANG=en_US.UTF-8 \
PIP_NO_CACHE_DIR=off
RUN INSTALL_PKGS="rh-python35 rh-python35-python-devel rh-python35-python-setuptools rh-python35-python-pip nss_wrapper \
httpd24 httpd24-httpd-devel httpd24-mod_ssl httpd24-mod_auth_kerb httpd24-mod_ldap \
httpd24-mod_session atlas-devel gcc-gfortran libffi-devel libtool-ltdl enchant" && \
yum install -y centos-release-scl && \
yum -y --setopt=tsflags=nodocs install --enablerepo=centosplus $INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
# Remove centos-logos (httpd dependency) to keep image size smaller.
rpm -e --nodeps centos-logos && \
yum -y clean all --enablerepo='*'
source here
The problem is not difficult; I built the image by changing
FROM base-centos7:0.0.8 ====> FROM centos:7
You can consult the available CentOS image versions at https://hub.docker.com/_/centos
PS: the container showed "error: exited (1)"; you should focus on the main process.
I've got a CentOS 8 install, and I'm trying to use a docker container to run Mattermost to set up a local node for my family to use. I've been searching a lot online, but my google-fu appears to be weak as I can't get answers that address my issue.
I've downloaded Docker and Docker Compose using the following guide, again tailoring it to CentOS: https://docs.mattermost.com/install/prod-docker.htm I've successfully run the "Hello World" container.
I'm using this guide and trying to tailor the Mattermost container install - https://wiki.archlinux.org/index.php/Ma ... ith_Docker
I've edited ~/mattermost-docker/db/Dockerfile to replace the references to apk with yum and then dnf, and tried executing the script both with sudo and from the su account. Latest Dockerfile:
FROM postgres:9.4-alpine
ENV DEFAULT_TIMEZONE UTC
# Install some packages to use WAL
RUN echo "azure<5.0.0" > pip-constraints.txt
RUN dnf install -y \
build-base \
curl \
libc6-compat \
libffi-dev \
linux-headers \
python-dev \
py-pip \
py-cryptography \
pv \
libressl-dev \
&& pip install --upgrade pip \
&& pip --no-cache-dir install -c pip-constraints.txt 'wal-e<1.0.0' envdir \
&& rm -rf /tmp/* /var/tmp/* \
&& dnf clean all
# Add wale script
COPY setup-wale.sh /docker-entrypoint-initdb.d/
#Healthcheck to make sure container is ready
HEALTHCHECK CMD pg_isready -U $POSTGRES_USER -d $POSTGRES_DB || exit 1
# Add and configure entrypoint and command
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["postgres"]
VOLUME ["/var/run/postgresql", "/usr/share/postgresql/", "/var/lib/postgresql/data", "/tmp", "/etc/wal-e.d/env"]
However it still fails on: docker-compose build
Error -
Building db
Step 1/10 : FROM postgres:9.4-alpine
---> 4e66908aa630
Step 2/10 : ENV DEFAULT_TIMEZONE UTC
---> Using cache
---> 03d176f9f783
Step 3/10 : RUN echo "azure<5.0.0" > pip-constraints.txt
---> Using cache
---> 35dbc995f705
Step 4/10 : RUN sudo dnf install -y build-base curl libc6-compat libffi-dev linux-headers python-dev py-pip py-cryptography pv libressl-dev && pip install --upgrade pip && pip --no-cache-dir install -c pip-constraints.txt 'wal-e<1.0.0' envdir && rm -rf /tmp/* /var/tmp/* && dnf clean all
---> Running in 4b89205fdca3
/bin/sh: dnf: not found
ERROR: Service 'db' failed to build: The command '/bin/sh -c sudo dnf install -y build-base curl libc6-compat libffi-dev linux-headers python-dev py-pip py-cryptography pv libressl-dev && pip install --upgrade pip && pip --no-cache-dir install -c pip-constraints.txt 'wal-e<1.0.0' envdir && rm -rf /tmp/* /var/tmp/* && dnf clean all' returned a non-zero code: 127
I've confirmed that dnf and yum are present in /bin and /usr/bin and that /bin/sh -> /bin/bash. I'm not even sure what question I should be asking, so I'd appreciate some assistance figuring out how to get this container stood up.
Thanks.
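For reference, a minimal sketch of the relevant point, assuming the postgres:9.4-alpine base from the Dockerfile above: Alpine-based images ship apk as their package manager, and the package manager inside the image is independent of the host OS, so dnf and yum being present on the CentOS host does not make them available in the container. The package list here is illustrative:

```dockerfile
FROM postgres:9.4-alpine
# Alpine-based images use apk, not dnf/yum; the host's package
# manager is irrelevant inside the container.
RUN apk add --no-cache \
    build-base \
    curl \
    libffi-dev \
    pv
```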