I created a Dockerfile and built an image from it for my team to use. Currently I pull from the centos:latest image, build the latest version of Python on top of it, and save the image to a .tar file. The idea is for my colleagues to use this image and add their Python projects to the /pyscripts folder. Is this the recommended way of building a base image, or is there a better way I can go about doing it?
# Filename: Dockerfile
FROM centos
RUN yum -y update && \
    yum -y install gcc openssl-devel bzip2-devel libffi-devel wget make && \
    yum clean all
RUN cd /opt && \
    wget --no-check-certificate https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz && \
    tar xzf Python-3.8.3.tgz && \
    cd Python-3.8*/ && \
    ./configure --enable-optimizations && \
    make altinstall && \
    rm -rf /opt/Python* && \
    mkdir /pyscripts
Many thanks!
Yes, this is the standard and recommended way of building a base image from a parent image (CentOS in this example), if what you need is Python 3.8.3 (the latest version) on a CentOS system.
Alternatively, you can pull a generic Python image with the latest Python version (currently 3.8.3), based on another Linux distribution (Debian), from the Docker Hub repository by running:
docker pull python:latest
Then build a base image from it, in which you only need to create the /pyscripts directory. The Dockerfile would look like this:
FROM python:latest
RUN mkdir /pyscripts
Or you can pull an already-built CentOS/Python image (with the lower Python version 3.6) from the Docker Hub repository by running:
docker pull centos/python-36-centos7
Then build a base image from it, again only creating the /pyscripts directory. The Dockerfile would look like this:
FROM centos/python-36-centos7:latest
USER root
RUN mkdir /pyscripts
Remember to add the following line just after the FROM line, so that the commands run as root:
USER root
Otherwise you will get a Permission Denied error message.
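Whichever base you choose, since you mentioned distributing the image as a .tar file, the usual build/save/load workflow applies (the tag team/python-base is just an example name):
docker build -t team/python-base .
docker save -o python-base.tar team/python-base
Your colleagues can then load it with:
docker load -i python-base.tar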
I am having trouble with Azure and Docker: my local image is behaving differently than the image I push to ACR. While trying to deploy to the web, I get this error:
ERROR - failed to register layer: error processing tar file(exit status 1): Container ID 397546 cannot be mapped to a host IDErr: 0, Message: mapped to a host ID
So in trying to fix it, I have found out that Azure has a limit on UID numbers of 65000. Easy enough, just change ownership of the affected files to root, right?
Not so. I put the following command into my Dockerfile:
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
Works great locally for changing the UIDs of the affected files from 397546 to 0. I run a command in the CLI of the container:
find / -uid 397546
It finds none of the files it found before. Yay! I even navigate to the directories where the affected files are and do a quick ls -n to double-check, and sure enough the UIDs are now 0 on all of them. Good to go?
Next step, push to cloud. When I push and reset the app service, I still continue to get the same exact error above. I have confirmed on multiple fronts that it is indeed pushing the correct image to the cloud.
All of this means that somehow my local image and the cloud image are behaving differently.
I am stumped, guys. Please help.
The Dockerfile is as below:
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 psmisc
RUN apt-get clean
# Clone the flutter repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Enable flutter web
RUN flutter upgrade
RUN flutter config --enable-web
# Run flutter doctor
RUN flutter doctor -v
# Change ownership to root of affected files
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
# Copy the app files to the container
COPY ./build/web /usr/local/bin/app
COPY ./startup /usr/local/bin/app/server
COPY ./pubspec.yaml /usr/local/bin/app/pubspec.yaml
# Set the working directory to the app files within the container
WORKDIR /usr/local/bin/app
# Get App Dependencies
RUN flutter pub get
# Build the app for the web
# Document the exposed port
EXPOSE 4040
# Set the server startup script as executable
RUN ["chmod", "+x", "/usr/local/bin/app/server/server.sh"]
# Start the web server
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]```
So basically we made a shell script that builds the web bundle BEFORE building the Docker image. We then use the static JS from the build/web folder and host that on the server. No need to download all of Flutter inside the image. It makes pipelines a little harder, but at least it works.
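As a rough sketch, the wrapper script looks something like this (the script and image tag names are made up for illustration):
#!/bin/sh
set -e
# Compile the Flutter app to static files in ./build/web
flutter build web
# Bake only the static output into the image
docker build -t patient-web .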
New Dockerfile:
FROM ubuntu:20.04 as build-env
RUN apt-get update && \
    apt-get install -y --no-install-recommends apt-utils && \
    apt-get -y install sudo
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
## preesed tzdata, update package index, upgrade packages and install needed software
RUN echo "tzdata tzdata/Areas select US" > /tmp/preseed.txt; \
echo "tzdata tzdata/Zones/US select Colorado" >> /tmp/preseed.txt; \
debconf-set-selections /tmp/preseed.txt && \
apt-get update && \
apt-get install -y tzdata
RUN apt-get install -y curl git wget unzip libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 nginx nano vim
RUN apt-get clean
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
# Configure nginx and remove secret files
RUN mv /app/build/web/ /var/www/html/patient
RUN cp -f /app/default /etc/nginx/sites-enabled/default
RUN cd /app/ && rm -r .dart_tool .vscode assets bin ios android google_place lib placepicker test .env .flutter-plugins .flutter-plugins-dependencies .gitignore .metadata analysis_options.yaml flutter_01.png pubspec.lock pubspec.yaml README.md
# Record the exposed port
EXPOSE 5000
# Start the python server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]
I have a Docker image that I want to use with Jenkins, but I cannot access a folder. I have a feeling that the Ubuntu jenkins user just cannot see/access the folder copied in when the Dockerfile is built. Below is my Dockerfile.
FROM jaas/jdk8-jenkins-slave:latest
USER root
RUN apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install git -y
COPY ./ModusToolbox_2.1.0.1266-linux-install.tar.gz ./
RUN tar -xzf ModusToolbox_2.1.0.1266-linux-install.tar.gz \
    && rm ModusToolbox_2.1.0.1266-linux-install.tar.gz
USER jenkins
RUN date +%a-%b-%d-%Y:%Z-%H:%M > openshift/BUILD_DAT
Is there any way I could make the files extracted from the .tar.gz visible to the jenkins user? I have tried looking around on how to share folders between users, but it didn't seem to work. Finally, the only directory I can access as jenkins is /home/jenkins/agent/workspace/ModusJenkins, but changing the Dockerfile to extract the .tar.gz to this directory didn't work either: when I ran ls in the Jenkins execute-build step, it showed nothing.
Any suggestions/advice would be appreciated. Thanks!
I am having trouble with extracting a .tar.gz file and accessing its files in a Docker image. I've tried looking around Stack Overflow, but the solutions didn't fix my problem. Below are my folder structure and my Dockerfile. I've made an image called modus.
Folder structure:
- modus
Dockerfile
ModusToolbox_2.1.0.1266-linux-install.tar.gz
Dockerfile:
FROM ubuntu:latest
USER root
RUN apt-get update -y && apt-get upgrade -y && apt-get install git -y
COPY ./ModusToolbox_2.1.0.1266-linux-install.tar.gz /root/
RUN cd /root/ && tar -C /root/ -xzf ModusToolbox_2.1.0.1266-linux-install.tar.gz
I've been running the commands below, but when I try to check /root/ the extracted files aren't there...
docker build .
docker run -it modus
root@e19d081664e4:/# cd root
root@e19d081664e4:/root# ls
<prints nothing>
There should be a folder called ModusToolBox, but I can't find it anywhere. Any help is appreciated.
P.S
I have tried changing ADD to COPY, but neither works.
You didn't provide a tag with the -t option when building, but you're using a tag in docker run -it modus. By doing that you run some other modus image, not the one you just built. Docker should say something like Successfully built <IMAGE_ID> at the end of the build; run docker run -it <IMAGE_ID> to run the newly built image if you don't want to provide a tag.
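For example, building with an explicit tag and then running it (assuming you want the image to be called modus):
docker build -t modus .
docker run -it modus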
I'm currently rebuilding our build server, and creating a set of Docker images for our various projects, as each has rather different toolchain and library requirements. Since Docker currently only runs on 64-bit hosts, the build server will be a x86_64 Fedora 22 machine.
These images must be able to build historical/tagged releases of our projects without modification; we can make changes to the build process for each project if needed, but only for current trunk and future releases.
Now, one of my build environments needs to reproduce an old i686 build server. For executing 32-bit programs I can simply install i686 support libraries (yum install glibc.i686 ncurses-libs.i686), but that doesn't help me build 32-bit programs without modifying Makefiles to pass -m32 to GCC … and, as stated above, I do not wish to alter historical codebases at all.
So, my current idea is to basically fake a i686 version of CentOS in a Docker container by installing all i686 packages, including GCC. That way, although uname -a will report the host's x86_64 architecture, everything else within the container should be pretty consistent. I took the idea (and centos6.tar.gz) from the "centos-i386" base image which, in essence, I'm trying to reproduce for my own local image.
Sadly, it's not going very well.
Here's a minimal-ish Dockerfile:
FROM scratch
# Inspiration from https://hub.docker.com/r/toopher/centos-i386/~/dockerfile/
ADD centos6.tar.gz /
RUN echo "i686" > /etc/yum/vars/arch && \
echo "i386" > /etc/yum/vars/basearch
ENTRYPOINT ["linux32"]
# Base packages
RUN yum update -y && yum -y install epel-release patch sed subversion bzip zip
# AT91SAM9260 ARM compiler
ADD arm-2009q1-203-arm-none-linux-gnueabi-i686-pc-linux-gnu.tar.bz2 /usr/local/
ENV PATH $PATH:/usr/local/arm-2009q1/bin
# AT91SAM9260 & native cxxtest
ADD cxxtest-3.10.1.tar.gz /staging/
WORKDIR /staging/cxxtest/
RUN cp -r cxxtest /usr/local/arm-2009q1/arm-none-linux-gnueabi/include/
RUN cp -r cxxtest /usr/local/include/
RUN cp cxxtestgen.pl /usr/bin/
RUN ln -s /usr/bin/cxxtestgen.pl /usr/bin/cxxtestgen
WORKDIR /
RUN rm -rf /staging/
The build fails on the first "RUN" in the cxxtest installation step:
/bin/sh: cp: command not found
The command '/bin/sh -c cp -r cxxtest /usr/local/arm-2009q1/arm-none-linux-gnueabi/include/' returned a non-zero code: 127
What's wrong?
Because your image is being built from scratch, not from the centos6 base image (as is the case with the published centos-i386 image), even though you unpacked CentOS 6 into the filesystem as your first step, the shell used by your RUN instructions has no meaningful PATH set. Adding the following after your ENTRYPOINT will make all the usual binaries accessible again for the duration of the build process:
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Containers created from your image (had it built successfully; say, by not trying to build cxxtest) would never have been affected, as fresh shell instances would have had the PATH correctly set through /etc/profile.
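Concretely, the top of the Dockerfile would then read (everything after it unchanged):
FROM scratch
ADD centos6.tar.gz /
RUN echo "i686" > /etc/yum/vars/arch && \
    echo "i386" > /etc/yum/vars/basearch
ENTRYPOINT ["linux32"]
# Restore a usable PATH for the remaining build steps
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin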
I'm trying to deploy a Yesod app to an Ubuntu server using Keter. So far this is what I've done:
Install Keter on the server using the provided setup script
wget -O - https://raw.github.com/snoyberg/keter/master/setup-keter.sh | bash
Run yesod keter to create a bundle on my dev machine (running OS X Mavericks)
scp the *.keter file into /opt/keter/incoming on the server
At this point, I think I should be able to go to my domain and have the app working, but I'm seeing a "Welcome to nginx" page instead. Additionally, all I have in /opt/keter/log/keter/current.log is:
2014-05-10 18:21:01.48: Unpacking bundle '/opt/keter/etc/../incoming/DoDeployTest.keter'
And I think I should have lines about starting a process and loading an app.
What do I need to do to deploy Yesod with Keter? Is there a good tutorial covering this? (So far a lot of the ones I'm reading seem somewhat outdated, judging by their not mentioning useful things like yesod keter; it's hard to say, though.)
I'm pretty new to Haskell/Yesod/Keter/Sysadmin work so any help is appreciated.
Appendix:
Github repo of the Yesod project (Its vanilla yesod init w/ postgres + configuring the keter.yaml file)
Keter.yaml file:
exec: ../dist/build/DoDeployTest/DoDeployTest
args:
  - production
host: "http://www.yesodonrails.com"
postgres: true
root: ../static
To ensure the maximum level of success possible, I would strongly advise you to compile and run both Keter and your Yesod application on the same platform. The recommendation is also to compile your application on a different machine from the one you're deploying on, since GHC compilation is very resource-intensive. It looks like you're already doing this (albeit compiling on OS X and deploying to an Ubuntu server, which is not going to work, as described in your own answer).
My recommendation would be to use Docker containers to ensure consistent environments. I have a GitHub project containing a number of Dockerfiles I've been working on to address this and I'll describe roughly what they do here. Note that this GitHub project is still a work in progress and I don't have everything absolutely perfect yet. This is also similar to the answer I gave to this question.
keter-build:
FROM haskell:latest
RUN apt-get update && apt-get install -y \
    git
RUN mkdir /src
RUN cd /src && \
    git clone https://github.com/snoyberg/keter && \
    cd keter && \
    git checkout e8b5a3fd5e14dfca466f8acff2a02f0415fceeb0
WORKDIR /src/keter
RUN cabal update
RUN cabal install keter
This configures a container that can be used to build the keter binary at a given revision from the Keter GitHub project.
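To get the resulting binary out into the artifacts directory used below, one option is docker create plus docker cp (a sketch; the /root/.cabal/bin path assumes cabal's default install location in the haskell image):
docker build -t keter-build .
docker create --name keter-build-tmp keter-build
docker cp keter-build-tmp:/root/.cabal/bin/keter artifacts/keter
docker rm keter-build-tmp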
keter-host:
FROM debian
RUN apt-get update && apt-get install -y \
    ca-certificates \
    libgmp-dev \
    nano \
    postgresql
COPY artifacts/keter /opt/keter/bin/
COPY artifacts/keter-config.yaml /opt/keter/etc/
EXPOSE 80
CMD ["/opt/keter/bin/keter", "/opt/keter/etc/keter-config.yaml"]
This container is a self-contained Keter host. You should ensure that the keter binary built in the keter-build container is available in the artifacts directory so that the COPY artifacts/keter /opt/keter/bin/ instruction copies it into the image.
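Building and running the host then looks something like this (keter-host is just an example tag):
docker build -t keter-host .
docker run -d -p 80:80 keter-host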
yesod-env:
FROM haskell:latest
RUN apt-get update && apt-get install -y \
    ca-certificates \
    git \
    nano \
    wget
RUN echo 'deb http://download.fpcomplete.com/debian/jessie stable main' > /etc/apt/sources.list.d/fpco.list
RUN wget -q -O- https://s3.amazonaws.com/download.fpcomplete.com/debian/fpco.key | apt-key add -
RUN apt-get update && apt-get install -y \
    stack
This is a container for building the Yesod app. Note that this Dockerfile is incomplete; I haven't got it to clone the app's source code and build it yet. However, it might get you started.
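A hypothetical continuation (not part of the original file) might clone and build the app with stack, along these lines:
# Clone and build the application (the repository URL is a placeholder)
RUN git clone https://github.com/example/yourapp /src/yourapp
WORKDIR /src/yourapp
RUN stack setup && stack build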
Note that all three containers are ultimately based off the same debian Docker base image, so that binaries produced in each container will have a good chance of being valid in other containers. Some of my work was inspired by Dockerfiles designed by thoughtbot.
Ah, so based on the advice from the blog post introducing Keter, I tried to run the executable inside the *.keter file manually. Doing so yielded the message "cannot execute binary file". I suspect this is because I was compiling on a Mac and deploying to an Ubuntu instance (I had this same problem trying to deploy to Heroku).
Process for discovering this (might be slightly inaccurate):
cp /opt/keter/incoming/DoDeployTest.keter /tmp
cd /tmp
mv DoDeployTest.keter DoDeployTest.tar.gz
gunzip DoDeployTest.tar.gz
tar xvf DoDeployTest.tar
# run executable
./dist/build/appname/appname
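A quicker way to confirm an architecture mismatch like this (my suggestion, not from the original post) is to inspect the binary with file:
file dist/build/appname/appname
# a Mac-built binary reports Mach-O x86_64, whereas the Linux server needs an ELF executable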