I need to install Python 3 on my virtual machine (I have Python 2.7), but I don't have internet access from my VM. Is there any way to do that without using the internet? I have access to a private GitLab repository and a private Docker registry.
Using GitLab
Ultimately, you can put whatever resources you need to install Python3 directly in GitLab.
For example, you could use the generic packages registry to upload the files you need and then download them from GitLab in your VM. You could redistribute the installers from python.org/downloads this way.
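As a rough sketch (the project ID, package name, and Python version are placeholders), pushing and fetching a Python release tarball through the generic packages API could look like this:

# from a machine with internet access: upload the tarball to GitLab
curl --header "PRIVATE-TOKEN: <your_access_token>" \
     --upload-file Python-3.9.0.tgz \
     "https://gitlab.example.com/api/v4/projects/<project_id>/packages/generic/python/3.9.0/Python-3.9.0.tgz"

# from the VM (which can reach GitLab): download it back
curl --header "PRIVATE-TOKEN: <your_access_token>" \
     --output Python-3.9.0.tgz \
     "https://gitlab.example.com/api/v4/projects/<project_id>/packages/generic/python/3.9.0/Python-3.9.0.tgz"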
If you're using a Debian-based Linux distribution like Ubuntu, you could even provide the necessary packages through the GitLab Debian registry (disabled by default, but it can be enabled by an admin). After configuring your apt lists to point to the GitLab Debian repo, you can simply use your package manager, e.g. apt install python3-dev.
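The exact setup is covered in the GitLab Debian registry docs, but the apt side is roughly along these lines (project ID, codename, and component are placeholders, and authentication may be required depending on project visibility):

# /etc/apt/sources.list.d/gitlab.list (illustrative)
deb https://gitlab.example.com/api/v4/projects/<project_id>/packages/debian <codename> <component>

# then
apt update && apt install python3-dev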
Using Docker
If you have access to Docker Hub (or a private registry), you can technically pull files out of docker images as well. Here I'll assume you're using Ubuntu or some Debian-based distribution, but the same principle applies to any OS.
Suppose you build an image:
FROM ubuntu:<a tag that matches your VM version>
# downloads all the `.deb` files you need to install python3
RUN apt-get update && apt-get install -y --download-only python3-dev
You can build this image and push it to your docker registry.
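For instance (using the placeholder registry name from the extraction step below):

docker build -t myprivateregistry.example.com/myrepo/myimage .
docker push myprivateregistry.example.com/myrepo/myimage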
Then, on your VM, you can pull this image, extract the necessary install files from /var/cache/apt/archives/*.deb inside the image, and install them using dpkg.
Extract the files from the image (in this case, to a temp directory):
image=myprivateregistry.example.com/myrepo/myimage
source_path=/var/cache/apt/archives
destination_path=$(mktemp -d)
docker pull "$image"
container_id=$(docker create "$image")
docker cp "$container_id:$source_path" "$destination_path"
docker rm "$container_id"
Install python3 using dpkg. Note that docker cp copies the archives directory itself into the destination, and the glob must sit outside the quotes so the shell expands it:
dpkg --force-all -i "${destination_path}"/archives/*.deb
Related
My customer is migrating off of Nexus (which has a yum repository), and they want to use Gitlab. I know Gitlab can hold docker images and JAR files via its maven feature, but does Gitlab allow you to host yum repositories as well? I wasn't able to find anything after some googling.
You can store rpm packages (or .deb, etc.) in the Gitlab Package Registry, but there isn't official support for that package type, so you'd have to use the "Generic" package format. The downside is that you wouldn't be able to use the Gitlab Registry as a yum repo; however, you could do something like:
# this downloads the package with filename `:filename:`
curl --header "PRIVATE-TOKEN: <your_access_token>" --output :filename: "https://gitlab.example.com/api/v4/projects/:project_id:/packages/generic/:package_name:/:package_version:/:filename:"
# Use rpm to install a package from a local file instead of a yum repo:
rpm -i :filename:.rpm
# For this use case, the file will have to be a .rpm file
The -i flag tells rpm to install the package. Another option is yum localinstall :filename:.rpm.
Generic Packages must be enabled on your Gitlab instance (if you're using a self-hosted version).
Generic Packages docs are here: https://docs.gitlab.com/ee/user/packages/generic_packages/#download-package-file
An example .gitlab-ci.yml file using Generic Packages is here: https://gitlab.com/guided-explorations/cfg-data/write-ci-cd-variables-in-pipeline/-/blob/master/.gitlab-ci.yml
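For reference, a minimal upload job in a .gitlab-ci.yml could look something like this (the package name, version, and file name are placeholders):

upload-rpm:
  stage: deploy
  script:
    # push the rpm to the project's generic package registry using the built-in job token
    - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file mypackage-1.0.0.rpm "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/mypackage/1.0.0/mypackage-1.0.0.rpm"'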
Check out OpenRepo: https://github.com/openkilt/openrepo
This is an open source package hosting server that can make packages available for both Debian (APT) and Red Hat (RPM) systems.
In this case, you would configure your GitLab CI build to push your rpm files to the OpenRepo server.
History:
My Dockerfile worked for years without any problem on my Linux Mint VM. When I needed to recreate the VM, I installed everything again, including docker.io.
I'm struggling with this error. I already verified that the file is inside the docker image, but when I try to copy it to a directory outside the container, it says that the file does not exist.
I followed the guidelines at Exploring Docker container's file system and verified that the file was in fact in the container.
Environment:
Linux Mint 19 (Tricia)
Docker installed by snap
Command:
docker cp {CONTAINER_ID}:/container_path /local_path
Problem:
stat /container_path: no such file or directory
The solution was simply to uninstall the snap version of docker and reinstall it via apt. This answer still lacks detail, since it isn't known whether the problem was really caused by the snap-installed version of docker.
sudo snap remove docker
sudo apt install docker.io
I have a machine which has no internet connectivity and no access to any docker repository (so no image pulls are possible). I want to install memcached and I have the .rpm file available.
When I install on the host machine, I execute the command rpm -ivh memcached-1.4.15-10.el7_3.1.x86_64.rpm and it works. But I suppose that is because the rpm package manager comes pre-installed on the host OS.
In the docker container I copy in the .rpm file, and in the Dockerfile I include the command RUN rpm -ivh memcached-1.4.15-10.el7_3.1.x86_64.rpm. After that I get the following error:
/bin/sh: 1: rpm: not found
The command '/bin/sh -c rpm -ivh /home/memcached-1.4.15-10.el7_3.1.x86_64.rpm' returned a non-zero code: 127
I suppose that is because a docker container's OS is a bare-minimum installation. So how do I install the rpm package manager inside the container without an internet connection? Is there an installable file for it?
I understand it is not best practice to skip a central repository for images; I just want to know whether installing without internet is even possible.
I am creating the container on a CentOS machine right now, and the following is the Dockerfile:
FROM microsoft/dotnet:2.0-runtime
WORKDIR /home
#mempkgtest folder contains the .rpm file
COPY ${source:-mempkgtest} .
RUN rpm -ivh /home/memcached-1.4.15-10.el7_3.1.x86_64.rpm
ENTRYPOINT ["dotnet", "--info"]
The docker image microsoft/dotnet is built on top of buildpack-deps:jessie-scm, which is in turn built on top of debian:jessie.
Debian does not use the rpm package manager; it uses the deb format for packages. /bin/sh kindly informs you that it hasn't found rpm by saying /bin/sh: 1: rpm: not found. (There are ways to install rpm packages on Debian, e.g. converting them with the alien tool, but it's rarely worth it.)
Anyway, why don't you use a deb file and the dpkg package manager? You can find a memcached package for debian jessie in the Debian package archive. You can do something like this in your dockerfile:
ADD http://ftp.us.debian.org/debian/pool/main/m/memcached/memcached_1.4.21-1.1+deb8u1_amd64.deb /tmp/
RUN dpkg -i /tmp/memcached_1.4.21-1.1+deb8u1_amd64.deb
And remember, you will need to copy the dependencies too.
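A rough sketch of handling those dependencies offline (the dependency list below is illustrative, not exact; check apt-cache depends memcached on a matching system):

# on a machine with internet access, fetch memcached and its runtime deps
apt-get download memcached libevent-2.0-5 libsasl2-2

# copy the downloaded .deb files next to the Dockerfile, then in the Dockerfile:
COPY debs/ /tmp/debs/
RUN dpkg -i /tmp/debs/*.deb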
Alternatively, why don't you build your docker image on a machine with internet access, save it using docker save, and then copy and load it on your destination machine? That way is simpler, and apt-get will resolve and install all of memcached's dependencies for you.
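For example (the image name is a placeholder):

# on the machine with internet access
docker save -o myimage.tar myimage:latest
# copy myimage.tar to the offline machine, then:
docker load -i myimage.tar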
I am new to Debian 8, and still very much a Linux beginner. For reference, I am currently running Debian 8 in Oracle VM VirtualBox on Windows 10.
For a project I am working on, my task is installing Debian 8 packages from the source package to a specified rootfs folder. After getting the source files (.tar.gz, .diff.gz, .dsc) and extracting them, I run:
dpkg-source -x <package>.dsc
Which extracts the source to the working directory.
The issue I'm having is generating the .deb files from the extracted source. The standard way is to let apt handle the installation of the build dependencies from the online repository via:
apt-get build-dep <package>
then generate the .deb files via:
dpkg-buildpackage -b
But this will install the dependencies to my root filesystem. In addition, since I downloaded the majority of the packages to my local machine, I'd like to manually install each dependency from my local source packages rather than from online.
From my understanding, I was tasked this to avoid polluting the specified fs with documentation and non-essential files, since the number of Debian 8 packages that will be added to this fs is >700.
If there are any mistakes / misunderstandings with my knowledge of Linux & Debian 8, please let me know.
You can create a docker container, install your dependencies in it, and do all your work in there. You can configure docker to put its containers on any filesystem you like.
Any approach that does not use containers is unlikely to work because AFAIK most Linux distributions, including Debian, do not support dependency relocation. Nix is an exception. So containers are a way around that.
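A minimal sketch of that approach, assuming a Debian 8 base image is available and deb-src entries are enabled in the container's apt sources:

# start a throwaway Debian 8 container with the source tree mounted
docker run -it --rm -v "$PWD:/src" -w /src debian:8 bash

# inside the container: install build deps and build, leaving the host untouched
apt-get update
apt-get build-dep -y <package>
dpkg-buildpackage -b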
I am running Jenkins in docker from the official Docker Hub image.
I created a job which runs my own shell script; however, I see some binaries are missing in the docker image, e.g. the file command.
The Docker Hub page mentions that one can install additional binaries via apt, but I don't know which package to install to get e.g. the file command working.
Unless Ubuntu did something different than the base Debian environment, file is included in the file package.
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y file
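If you'd rather bake the tool into the image, a small derived Dockerfile (the lts tag is just an example) would be:

FROM jenkins/jenkins:lts
# the official image runs as the jenkins user, so switch to root to install packages
USER root
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y file
# drop back to the unprivileged user
USER jenkins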