Folder copied by Docker inaccessible in Jenkins - linux

I have a docker image that I want to use with Jenkins, but I cannot access a folder. I have a feeling that the Ubuntu jenkins user simply cannot see/access the folder copied in when the Dockerfile is built. Below is my Dockerfile.
FROM jaas/jdk8-jenkins-slave:latest
USER root
RUN apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install git -y
COPY ./ModusToolbox_2.1.0.1266-linux-install.tar.gz ./
RUN tar -xzf ModusToolbox_2.1.0.1266-linux-install.tar.gz \
    && rm ModusToolbox_2.1.0.1266-linux-install.tar.gz
USER jenkins
RUN date +%a-%b-%d-%Y:%Z-%H:%M > openshift/BUILD_DAT
Is there any way I could make the files extracted from the .tar.gz visible to the jenkins user? I have tried looking around for how to share folders between users, but nothing seemed to work... Finally, the only directory I can access as jenkins is /home/jenkins/agent/workspace/ModusJenkins, but changing the Dockerfile to extract the .tar.gz to this directory didn't work either: when I ran ls in the Jenkins execute-build step, it showed nothing.
Any suggestions/advice would be appreciated. Thanks!
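One thing worth trying (a sketch only, assuming the archive unpacks into a ModusToolbox directory in the build working directory, which I have not verified): chown the extracted tree to the jenkins user while the build is still running as root, before switching users:
RUN tar -xzf ModusToolbox_2.1.0.1266-linux-install.tar.gz \
    && rm ModusToolbox_2.1.0.1266-linux-install.tar.gz \
    && chown -R jenkins:jenkins ./ModusToolbox  # extracted directory name assumed
USER jenkins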

Related

Attempting to host a flutter project on Azure App Services using a docker image; local image and cloud image behave differently

I am having trouble with Azure and Docker: my local machine image is behaving differently than the image I push to ACR. While trying to deploy to the web, I get this error:
ERROR - failed to register layer: error processing tar file(exit status 1): Container ID 397546 cannot be mapped to a host IDErr: 0, Message: mapped to a host ID
So in trying to fix it, I have come to find out that Azure has a limit on uid numbers of 65000. Easy enough, just change ownership of the affected files to root, right?
Not so. I put the following command into my Dockerfile:
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
Works great locally for changing the uids of the affected files from 397546 to 0. I run a command in the CLI of the container:
find / -uid 397546
It finds none of the same files it found before. Yay! I even navigate to the directories where the affected files are and do a quick ls -n to double-confirm they are fine, and sure enough the uids are now 0 on all of them. Good to go?
Next step, push to cloud. When I push and reset the app service, I still continue to get the same exact error above. I have confirmed on multiple fronts that it is indeed pushing the correct image to the cloud.
All of this means that somehow my local image and the cloud image are behaving differently.
I am stumped, guys. Please help.
The Dockerfile is as below:
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 psmisc
RUN apt-get clean
# Clone the flutter repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Enable flutter web
RUN flutter upgrade
RUN flutter config --enable-web
# Run flutter doctor
RUN flutter doctor -v
# Change ownership to root of affected files
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
# Copy the app files to the container
COPY ./build/web /usr/local/bin/app
COPY ./startup /usr/local/bin/app/server
COPY ./pubspec.yaml /usr/local/bin/app/pubspec.yaml
# Set the working directory to the app files within the container
WORKDIR /usr/local/bin/app
# Get App Dependencies
RUN flutter pub get
# Build the app for the web
# Document the exposed port
EXPOSE 4040
# Set the server startup script as executable
RUN ["chmod", "+x", "/usr/local/bin/app/server/server.sh"]
# Start the web server
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]
So basically we made a shell script to build the web bundle BEFORE building the Docker image. We then use the static JS from the build/web folder and host that on the server; no need to download all of Flutter inside the image. It makes pipelines a little harder, but at least it works (a rough sketch of the pre-build flow follows the new Dockerfile below).
New Dockerfile:
FROM ubuntu:20.04 as build-env
RUN apt-get update && \
apt-get install -y --no-install-recommends apt-utils && \
apt-get -y install sudo
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
## preseed tzdata, update package index, upgrade packages and install needed software
RUN echo "tzdata tzdata/Areas select US" > /tmp/preseed.txt; \
echo "tzdata tzdata/Zones/US select Colorado" >> /tmp/preseed.txt; \
debconf-set-selections /tmp/preseed.txt && \
apt-get update && \
apt-get install -y tzdata
RUN apt-get install -y curl git wget unzip libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 nginx nano vim
RUN apt-get clean
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
RUN cd /app/
# Configure nginx and remove secret files
RUN mv /app/build/web/ /var/www/html/patient
RUN cd /etc/nginx/sites-enabled
RUN cp -f /app/default /etc/nginx/sites-enabled/default
RUN cd /app/ && rm -r .dart_tool .vscode assets bin ios android google_place lib placepicker test .env .flutter-plugins .flutter-plugins-dependencies .gitignore .metadata analysis_options.yaml flutter_01.png pubspec.lock pubspec.yaml README.md
# Record the exposed port
EXPOSE 5000
# Start the python server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]
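As mentioned above, a rough sketch of that pre-build flow, assuming flutter is on the PATH; the script name build.sh and the image tag are placeholders, not from the original setup:
#!/bin/sh
# build.sh: compile the Flutter web bundle first, so only the static
# output in build/web needs to be copied into the image
flutter build web
# then build the image from the Dockerfile above
docker build -t patient-web .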

CentOS with Python as base image

I created a Dockerfile and then built it for my team to use. Currently I am pulling from the CentOS:latest image, then building the latest version of Python and saving the image to a .tar file. The idea is for my colleagues to use this image to add their Python projects to the /pyscripts folder. Is this the recommended way of building a base image or is there a better way I can go about doing it?
# Filename: Dockerfile
FROM centos
RUN yum -y update && yum -y install gcc openssl-devel bzip2-devel libffi-devel wget make && yum clean all
RUN cd /opt && \
    wget --no-check-certificate https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz && \
    tar xzf Python-3.8.3.tgz && \
    cd Python-3.8*/ && \
    ./configure --enable-optimizations && \
    make altinstall && \
    rm -rf /opt/Python* && \
    mkdir /pyscripts
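Since the image is being shared as a .tar file, the usual save/load flow looks like this (the py-base tag is my placeholder):
docker build -t py-base .
docker save -o py-base.tar py-base
# colleagues then load it on their machines:
docker load -i py-base.tar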
Many thanks!
Yes, this is the standard and recommended way of building a base image from a parent image (CentOS in this example), if what you need is Python 3.8.3 (the latest version) on a CentOS system.
Alternatively, you can pull a generic Python image with the latest Python version (currently 3.8.3), based on another Linux distribution (Debian), from the Docker Hub registry by running:
docker pull python:latest
Then build a base image from it, where you will just need to create the directory /pyscripts.
So the Dockerfile would look like this:
FROM python:latest
RUN mkdir /pyscripts
Or you can pull an already-built CentOS/Python image (with the lower Python version 3.6) from the Docker Hub registry by running:
docker pull centos/python-36-centos7
Then build a base image from it, where you will again just need to create the directory /pyscripts.
So the Dockerfile would look like this:
FROM centos/python-36-centos7:latest
USER root
RUN mkdir /pyscripts
Remember to add this line just after the first (FROM) line so the commands run as root:
USER root
Otherwise you will get a Permission denied error message.
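Either way, a short sketch of how a colleague might consume the base image afterwards; the py-base tag, project folder, and main.py entry point are placeholders, not from the question:
FROM py-base:latest
# drop the project into the shared scripts folder
COPY ./myproject /pyscripts
# python3.8 is what make altinstall of 3.8.3 provides; adjust for other bases
CMD ["python3.8", "/pyscripts/main.py"]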

Docker not extracting .tar.gz and cannot find the file in the image

I am having trouble extracting a .tar.gz file and accessing its files in a docker image. I've tried looking around Stack Overflow, but the solutions didn't fix my problem... Below are my folder structure and my Dockerfile. I've made an image called modus.
Folder structure:
- modus
Dockerfile
ModusToolbox_2.1.0.1266-linux-install.tar.gz
Dockerfile:
FROM ubuntu:latest
USER root
RUN apt-get update -y && apt-get upgrade -y && apt-get install git -y
COPY ./ModusToolbox_2.1.0.1266-linux-install.tar.gz /root/
RUN cd /root/ && tar -C /root/ -zxzf ModusToolbox_2.1.0.1266-linux-install.tar.gz
I've been running the commands below, but when I check /root/, the extracted files aren't there...
docker build .
docker run -it modus
root@e19d081664e4:/# cd root
root@e19d081664e4:/# ls
<prints nothing>
There should be a folder called ModusToolBox, but I can't find it anywhere. Any help is appreciated.
P.S.
I have tried changing ADD to COPY, but neither works.
You didn't provide a tag option with -t, but you're using a tag in docker run -it modus. By doing that you run some other modus image, not the one you have just built. Docker should say something like Successfully built <IMAGE_ID> at the end of the build; run docker run -it <IMAGE_ID> to run the newly built image if you don't want to provide a tag.
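In other words, either tag the image at build time and run it by that tag, or run it by the image id Docker prints (a minimal sketch):
docker build -t modus .
docker run -it modus
or, without a tag:
docker build .
docker run -it <IMAGE_ID>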

How to get npm to use cache

UPDATE
I changed directions from this question and ended up taking advantage of Docker image layers to cache the npm install unless there are changes to package.json, see here.
Note, in relation to this question: I still build my AngularJs Docker image in a slave Jenkins Docker image, but I no longer run the npm install in the Docker slave. Instead, I copy my app files to my AngularJs Docker image and run the npm install there, thus getting a Docker cache layer for the npm install; inspiration from this great idea/answer here. A sketch of the pattern follows.
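A minimal sketch of that layer-caching pattern, assuming a Node base image (node:6 matches the NODE_VERSION pinned in the slave Dockerfile below, but the tag and paths are my assumptions):
FROM node:6
WORKDIR /app
# copy only the package manifest first: the npm install layer below is
# reused from Docker's build cache unless package.json changes
COPY package.json .
RUN npm install
# copying the rest of the source afterwards does not invalidate that layer
COPY . .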
-------------------------------END UPDATE------------------------------
Ok, I should add the caveat that I am in a Docker container, but that really shouldn't matter much: I do not stop the container, and I have volumes for the npm cache folder as well as the /home folder for the user running npm commands.
The purpose of the Docker container, with npm installed, is that it is a build slave, spun up by Jenkins to build an AngularJs application. The problem is that it is incredibly slow, downloading all the needed npm packages, every time.
jenkins is the user; a jenkins account on a build server is who runs npm install.
I have volumes for both the npm folder of the user running the npm install command, /home/jenkins/.npm, and the folder that npm config get cache says is my cache directory, /root/.npm. Not that container volumes should even matter, because I have not stopped the container after running npm install.
Ok, the steps I take to start debugging: first, I "bash into the container" with this command:
docker exec -it <container_id> bash
All commands I run from this point forward I am connected to the running container with npm installed.
echo "$HOME" results in /root
npm config get cache results in /root/.npm
Any time jenkins runs npm install in this container, after that command finishes successfully, I run npm cache ls, which always yields empty, nothing cached: ~/.npm
Many packages were downloaded, however, as we can see with ls -a /home/jenkins/.npm/.
So I tried setting cache-min to a very long expiration time, npm config set cache-min 9999999, but that didn't help.
I am not sure what else to do, it just seems that none of my npm packages are being cached, how do I get npm to cache packages?
Here is a truncated npm install output:
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-48_binding.node
Download complete
Binary saved to /home/jenkins/workspace/tsl.frontend.development/node_modules/node-sass/vendor/linux-x64-48/binding.node
Caching binary to /home/jenkins/.npm/node-sass/4.5.3/linux-x64-48_binding.node
Binary found at /home/jenkins/workspace/tsl.frontend.development/node_modules/node-sass/vendor/linux-x64-48/binding.node
Testing binary
Binary is fine
typings WARN deprecated 3/24/2017: "registry:dt/core-js#0.9.7+20161130133742" is deprecated (updated, replaced or removed)
+-- app (global)
`-- core-js (global)
And here is my Dockerfile:
FROM centos:7
MAINTAINER Brian Ogden
RUN yum update -y && \
yum clean all
#############################################
# Jenkins Slave setup
#############################################
RUN yum install -y \
git \
openssh-server \
java-1.8.0-openjdk \
sudo \
make && \
yum clean all
# gen dummy keys, centos doesn't autogen them like ubuntu does
RUN /usr/bin/ssh-keygen -A
# Set SSH Configuration to allow remote logins without /proc write access
RUN sed -ri 's/^session\s+required\s+pam_loginuid.so$/session optional pam_loginuid.so/' /etc/pam.d/sshd
# Create Jenkins User
RUN useradd jenkins -m -s /bin/bash
# Add public key for Jenkins login
RUN mkdir /home/jenkins/.ssh
COPY /files/id_rsa.pub /home/jenkins/.ssh/authorized_keys
#setup permissions for the new folders and files
RUN chown -R jenkins /home/jenkins
RUN chgrp -R jenkins /home/jenkins
RUN chmod 600 /home/jenkins/.ssh/authorized_keys
RUN chmod 700 /home/jenkins/.ssh
# Add the jenkins user to sudoers
RUN echo "jenkins ALL=(ALL) ALL" >> etc/sudoers
#############################################
# Expose SSH port and run SSHD
EXPOSE 22
#Technically, the Docker Plugin enforces this call when it starts containers by overriding the entry command.
#I place this here because I want this build slave to run locally as it would if it was started in the build farm.
CMD ["/usr/sbin/sshd","-D"]
#############################################
# Docker and Docker Compose Install
#############################################
#install required packages
RUN yum install -y \
yum-utils \
device-mapper-persistent-data \
lvm2 \
curl && \
yum clean all
#add Docker CE stable repository
RUN yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
#Update the yum package index.
RUN yum makecache fast
#install Docker CE
RUN yum install -y docker-ce-17.06.0.ce-1.el7.centos
#install Docker Compose 1.14.0
#download Docker Compose binary from github repo
RUN curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
#Apply executable permissions to the binary
RUN chmod +x /usr/local/bin/docker-compose
#############################################
ENV NODE_VERSION 6.11.1
#############################################
# NodeJs Install
#############################################
RUN yum install -y \
wget
#Download NodeJs package
RUN wget https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz
#extract the binary package into our system's local package hierarchy with the tar command.
#The archive is packaged within a versioned directory, which we can get rid of by passing the --strip-components 1 option.
#We will specify the target directory of our command with the -C command:
#This will install all of the components within the /usr/local branch
RUN tar --strip-components 1 -xzvf node-v* -C /usr/local
#############################################
#############################################
# npm -setup volume for package cache
# this will speed up builds
#############################################
RUN mkdir /home/jenkins/.npm
RUN chown jenkins /home/jenkins/.npm .
RUN mkdir /root/.npm
RUN chown jenkins /root/.npm .
#for npm cache, this cannot be expressed in docker-compose.yml
#the reason for this is that Jenkins spins up slave containers using
#the docker plugin, this means that there is no docker-compose.yml involved
VOLUME /home/jenkins/.npm
VOLUME /root/.npm
#############################################
When you run docker exec -it <container> bash you access the Docker container as the root user. npm install thus saves the cache to /root/.npm, which isn't a volume saved by the container. Jenkins, on the other hand, uses the jenkins user, which saves to /home/jenkins/.npm, which is being cached. So in order to emulate the functionality of the actual Jenkins workflow, you need to su jenkins before you can npm install.
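For example, a quick way to reproduce what the jenkins user sees from a shell (a sketch; the container id is a placeholder):
docker exec -it <container_id> bash
su - jenkins          # switch to the jenkins user that Jenkins itself runs as
npm config get cache  # should now print /home/jenkins/.npm
npm install           # populates the cache preserved by the volume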
That being said, the npm cache is not a perfect solution (especially if you have a ton of automated Jenkins builds). Some things to look into that would be better long-term solutions:
Install a local NPM cache like sinopia. I found this guide to be particularly helpful (a minimal .npmrc sketch follows this list).
Use Docker to build your app (which would work fine with Docker in Docker). Docker would cache after each build step, saving the repeated fetching of dependencies.
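The .npmrc sketch mentioned in the first item, pointing npm at a locally running sinopia instance (4873 is sinopia's default port; localhost is my assumption):
# ~/.npmrc
registry=http://localhost:4873/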

Jenkins docker missing some binaries

I am running Jenkins in Docker from the official Docker Hub image.
I created a job which runs my own shell script; however, I see some binaries are missing in the container, e.g. the file command.
They mention on Docker Hub that one can install additional binaries via Ubuntu's aptitude; however, I don't know which package to install to get e.g. the file command working.
Unless Ubuntu did something different than the base Debian environment, file is included in the file package.
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y file
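If the extra tooling should survive container recreation, bake it into a derived image instead; a minimal sketch (the jenkins/jenkins:lts tag is my assumption about which official image is in use):
FROM jenkins/jenkins:lts
USER root
# install the missing tools, then clear the apt lists to keep the image small
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y file && \
    rm -rf /var/lib/apt/lists/*
# drop back to the unprivileged jenkins user the image normally runs as
USER jenkins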
