Jenkins docker missing some binaries - linux

I am running Jenkins in Docker, using the official image from Docker Hub.
I created a job which runs my own shell script, however I see that some binaries are missing in the container, e.g. the file command.
The Docker Hub page mentions that one can install additional binaries via Ubuntu's aptitude, however I don't know which package to install to get e.g. the file command working.

Unless Ubuntu did something different from the base Debian environment, file is included in the file package.
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y file
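If you'd rather bake the missing binary into the image than install it by hand in a running container, a minimal Dockerfile sketch (assuming the official jenkins/jenkins:lts image, which runs as the jenkins user) could look like this:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y file && \
    rm -rf /var/lib/apt/lists/*
# drop back to the unprivileged user the image normally runs as
USER jenkins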

Related

Install python3 on a VM without using the internet

I need to install Python 3 on my virtual machine (it currently has Python 2.7), but I don't have internet access from my VM. Is there any way to do that without using the internet? I do have access to a private GitLab repository and a private Docker Hub registry.
Using GitLab
Ultimately, you can put whatever resources you need to install Python3 directly in GitLab.
For example, you could use the generic packages registry to upload the files you need and download them from GitLab on your VM; the installers from python.org/downloads can be redistributed this way.
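A sketch of how that could look with the generic packages API (the GitLab host, project ID 1234, token, and file name are all hypothetical placeholders):
# upload from a machine with internet access
curl --header "PRIVATE-TOKEN: <your_access_token>" \
     --upload-file Python-3.9.16.tgz \
     "https://gitlab.example.com/api/v4/projects/1234/packages/generic/python/3.9.16/Python-3.9.16.tgz"
# download on the VM (same URL, same token)
curl --header "PRIVATE-TOKEN: <your_access_token>" --remote-name \
     "https://gitlab.example.com/api/v4/projects/1234/packages/generic/python/3.9.16/Python-3.9.16.tgz"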
If you're using a Debian-based Linux distribution like Ubuntu, you could even provide the necessary packages in the GitLab Debian registry (disabled by default, but it can be enabled by an admin) and just use your package manager, e.g. apt install python3-dev, after configuring your apt lists to point to the GitLab Debian repo.
Using docker
If you have access to dockerhub, technically you can access files from docker images as well. Here I'll assume you're using ubuntu or some debian-based distribution, but the same principle applies for any OS.
Suppose you build an image:
FROM ubuntu:<a tag that matches your VM version>
# downloads all the `.deb` files you need to install python3
RUN apt update && apt install -y --download-only python3-dev
You can push this image to your private Docker registry.
Then on your VM, you can pull this image, extract the downloaded packages from /var/cache/apt/archives/*.deb in the image, and install them using dpkg.
Extract files from the image (in this case, to a temp directory)
image=myprivateregistry.example.com/myrepo/myimage
source_path=/var/cache/apt/archives
destination_path=$(mktemp -d)
docker pull "$image"
container_id=$(docker create "$image")
docker cp "$container_id:$source_path" "$destination_path"
docker rm "$container_id"
Install python3 using dpkg (note that the glob must sit outside the quotes, and docker cp copied the files into an archives/ subdirectory):
dpkg --force-all -i "${destination_path}"/archives/*.deb

I can't find the docker command

Assumptions and what you want to achieve
I can't find "dokcer" command after installing with "sudo apt install docker" on Linux.
How do I use docker on Linux?
Also, if this is a PATH problem, I'd like to know which folder the binary is in.
Occurring problems and error messages
bash: docker: command not found
The corresponding source code
$ docker
Supplementary information (e.g. FW/tool version)
MX Linux
Try:
sudo apt install docker.io
Then you will have the docker CLI command.
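To answer the PATH part of the question, you can check where the binary landed after installing (on Debian-based systems it is typically /usr/bin/docker):
which docker        # typically prints /usr/bin/docker
docker --version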

Copy file from container to local filesystem

History:
My docker build file worked for years, without any problem, on my Linux Mint VM. When I needed to recreate the VM, I installed everything again, including docker.io.
I've been struggling with this error. I already verified that the final file is inside the Docker image, but when I try to copy it to a directory outside the container, it says that the file does not exist.
I followed the guidelines at Exploring Docker container's file system and verified that the file was in fact in the container.
Environment:
Linux Mint 19 (Tricia)
Docker installed by snap
Command:
docker cp {CONTAINER_ID}:/container_path /local_path
Problem:
stat /container_path: no such file or directory
The solution was simply to uninstall the snap version of Docker and install it again via apt. This solution still lacks a full explanation, as it is not known whether the problem was really caused by the snap-installed version of Docker.
sudo snap remove docker
sudo apt install docker.io
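To verify that the apt-installed binary is now the one in use (the paths below are the usual Debian/snap defaults, so treat them as assumptions):
snap list | grep docker    # should return nothing after the removal
which docker               # should now print /usr/bin/docker rather than /snap/bin/docker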

Running SSH script during Microsoft Azure Web App deployment

I am deploying a web app using the Python-Django framework to Microsoft Azure.
I have succeeded in deploying it, but every time I deploy I have to open the Azure SSH tool and run the command apt-get install libgtk2.0-dev, which I gather installs a Linux dependency of the opencv-python image-processing library.
I wonder if there is a way to install the required software using deploy.sh files.
deploy.sh
echo "Running Linux Deployment Script..."
apt-get update && apt-get install -y libxrender1 libxext6
apt-get install -y libfontconfig1
apt-get install -y libgtk2.0-dev
Thanks in advance for your help.
You can create a script to install libgtk2.0-dev, say test.sh, under /home/site.
Then add an app setting under 'Configuration' called PRE_BUILD_SCRIPT_PATH with /home/site/test.sh as the value.
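A minimal sketch of such a pre-build script (only libgtk2.0-dev is taken from the question; add the other packages as needed):
#!/bin/sh
# /home/site/test.sh - executed by the build because of PRE_BUILD_SCRIPT_PATH
apt-get update -y
apt-get install -y libgtk2.0-dev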
You can run a script on every Webapp startup. Just adjust your script as described here: https://stackoverflow.com/a/69923647/2606766
Create a start.sh file, e.g. like this:
# install package & start app
apt-get update -y
apt-get install -y libxrender1 libxext6
apt-get install -y libfontconfig1
apt-get install -y libgtk2.0-dev
# don't forget to start your webapp service at the end of this script, e.g.:
python manage.py runserver
Set it as your startup script in the Webapp's configuration.
Note: There are two pitfalls to this approach:
The script must be executable, so either create it on a Unix system and chmod 755 start.sh, or set the executable bit with a git command (see SO and the example below).
The packages are installed on every startup, so you depend on external servers/repositories every time the webapp starts.
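For reference, the git command mentioned in the first pitfall (useful when committing the script from Windows, where chmod is unavailable) would be:
git update-index --chmod=+x start.sh
git commit -m "make start.sh executable"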
You can set the SCM_POST_DEPLOYMENT_ACTIONS_PATH environment variable to configure a folder; all scripts in this folder will be executed after deployment. As far as I can tell this should work both on Windows and Linux.
If you need root permissions then I would suggest to use a custom docker container which has these packages already installed.
You can start by adding this command directly to the startup script, as mentioned in the answer by @HeyMan. But instead of adding a file, just add the command there: apt-get update && apt-get install -y libxrender1 libxext6 && apt-get install -y libfontconfig1 && apt-get install -y libgtk2.0-dev
Add this command as a single line there.
If this method also does not work for you, then you should follow the container-based approach.
Create a Dockerfile and add all the required dependencies there (a sketch follows below).
Run docker build to create a Docker image.
Push that image to Azure Container Registry using docker push.
Instead of deploying from local git, deploy with the help of Docker.
Look at this link for help
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli?tabs=azure-cli
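A rough sketch of such a Dockerfile (the base image, registry name, and port are assumptions you would adapt to your app):
FROM python:3.9
# native libraries needed by opencv-python, preinstalled at build time
RUN apt-get update && \
    apt-get install -y libxrender1 libxext6 libfontconfig1 libgtk2.0-dev && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Build and push it with something like:
docker build -t myregistry.azurecr.io/mydjangoapp:latest .
docker push myregistry.azurecr.io/mydjangoapp:latest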

Need to install rpm package inside docker container without internet connectivity

I have a machine which has no internet connectivity and no access to any Docker registry (so no image pulls are possible). I want to install memcached, and I have the .rpm file available.
When I want to install it on the host machine I execute the command rpm -ivh memcached-1.4.15-10.el7_3.1.x86_64.rpm. But I suppose that works because the rpm package manager comes pre-installed on the host OS.
In the docker container I load the .rpm file and in the dockerfile I include the command RUN rpm -ivh memcached-1.4.15-10.el7_3.1.x86_64.rpm. After that I get the following error:
/bin/sh: 1: rpm: not found
The command '/bin/sh -c rpm -ivh /home/memcached-1.4.15-10.el7_3.1.x86_64.rpm' returned a non-zero code: 127
I suppose that is because the OS inside a Docker container is a bare-minimum installation. So how do I install the rpm package manager inside the container without an internet connection? Is there an installable file for it?
I understand it is not best practice to avoid using a central repository for images; I just want to know whether installing without internet access is even possible.
I am creating the container on a CentOS machine right now. The Dockerfile is as follows:
FROM microsoft/dotnet:2.0-runtime
WORKDIR /home
#mempkgtest folder contains the .rpm file
COPY ${source:-mempkgtest} .
RUN rpm -ivh /home/memcached-1.4.15-10.el7_3.1.x86_64.rpm
ENTRYPOINT ["dotnet", "--info"]
The Docker image microsoft/dotnet is built on top of buildpack-deps:jessie-scm (from here), which is built on top of debian:jessie (from here and here).
Debian does not use the rpm package manager; it uses the deb format for packages. /bin/sh kindly informs you that it hasn't found rpm by saying /bin/sh: 1: rpm: not found. You can read how to install rpm packages on Debian here.
Anyway, why don't you use a deb file and the dpkg package manager? You can find the memcached package for Debian jessie here. You can do something like this in your Dockerfile:
ADD http://ftp.us.debian.org/debian/pool/main/m/memcached/memcached_1.4.21-1.1+deb8u1_amd64.deb /home/
RUN dpkg -i /home/memcached_1.4.21-1.1+deb8u1_amd64.deb
And remember, you will need to copy dependencies too.
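One way to gather a package together with all its dependencies (assuming you can run a throwaway debian:jessie container on some machine with internet access) mirrors the --download-only trick from the Python answer above:
# inside a debian:jessie container with network access
apt-get update
apt-get install --download-only -y memcached
# every required .deb, dependencies included, is now in /var/cache/apt/archives/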
Why don't you build your Docker image on a machine with internet access, export it using docker save (docker export operates on containers, not images), and then copy the archive and import it on your destination machine with docker load? That way is simpler, and apt-get will resolve and install all of memcached's dependencies for you.
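A minimal sketch of that workflow (the image name is hypothetical):
# on the machine with internet access
docker build -t memcached-dotnet:latest .
docker save -o memcached-dotnet.tar memcached-dotnet:latest
# copy memcached-dotnet.tar to the offline machine, then:
docker load -i memcached-dotnet.tar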
