When creating a Docker image, I am currently running pip install -r requirements.txt.
Instead of running pip install, can I just copy all the already-installed modules from my project's venv on the local host into the Docker image? Is that equivalent, or is there a difference? I am assuming here that the local host is the same as the Docker container in terms of image and configuration.
It is not recommended to copy the installed modules from your host machine into the container. The code might not work if your host OS is different from the container's base OS. Moreover, you may also copy unwanted cache files, which will increase the Docker image size.
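By contrast, the usual approach is to keep the pip install in the Dockerfile and let pip resolve the packages inside the container. A minimal sketch (the python:3.11-slim base image is only an example; pick one matching your project):
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
# --no-cache-dir keeps pip's download cache out of the image layers
RUN pip install --no-cache-dir -r requirements.txt
COPY . .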
I need to install Python 3 on my virtual machine (I have Python 2.7), but I don't have internet access from my VM. Is there any way to do that without using the internet? I do have access to a private GitLab repository and a private Docker Hub.
Using GitLab
Ultimately, you can put whatever resources you need to install Python 3 directly in GitLab.
For example, you could use the generic packages registry to upload the files you need and then download them from GitLab on your VM; the installers from python.org/downloads can be redistributed this way.
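As a rough sketch of the generic package workflow (the GitLab hostname, project ID, access token, and the exact Python tarball are all placeholders for your own setup):
# upload the installer to the project's generic package registry
curl --header "PRIVATE-TOKEN: <your_access_token>" \
     --upload-file Python-3.10.13.tgz \
     "https://gitlab.example.com/api/v4/projects/<project_id>/packages/generic/python/3.10.13/Python-3.10.13.tgz"
# later, download it from the VM (which can reach GitLab but not the internet)
curl --header "PRIVATE-TOKEN: <your_access_token>" \
     --output Python-3.10.13.tgz \
     "https://gitlab.example.com/api/v4/projects/<project_id>/packages/generic/python/3.10.13/Python-3.10.13.tgz"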
If you're using a Debian-based Linux distribution like Ubuntu, you could even provide the necessary packages in the GitLab Debian registry (disabled by default, but it can be enabled by an admin) and then simply use your package manager, e.g. apt install python3-dev, after configuring your apt lists to point to the GitLab Debian repo.
Using docker
If you have access to Docker Hub, technically you can access files from Docker images as well. Here I'll assume you're using Ubuntu or some Debian-based distribution, but the same principle applies to any OS.
Suppose you build an image:
FROM ubuntu:<a tag that matches your VM version>
# downloads all the `.deb` files you need to install python3
RUN apt update && apt install -y --download-only python3-dev
You can push this image to your Docker registry.
Then, on your VM, you can pull this image, extract the downloaded .deb files from /var/cache/apt/archives in the image, and install them using dpkg.
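For example, to build and push it (using the same example registry path as the commands below):
docker build -t myprivateregistry.example.com/myrepo/myimage .
docker push myprivateregistry.example.com/myrepo/myimage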
Extract the files from the image (in this case, into a temporary directory):
image=myprivateregistry.example.com/myrepo/myimage
source_path=/var/cache/apt/archives
destination_path=$(mktemp -d)
docker pull "$image"
container_id=$(docker create "$image")
docker cp "$container_id:$source_path" "$destination_path"
docker rm "$container_id"
Install python3 using dpkg:
dpkg --force-all -i "${destination_path}"/archives/*.deb
History:
My docker build file worked for years, without any problem, on my Linux Mint VM. When I needed to recreate the VM, I installed everything again, including docker.io.
I'm struggling with this error. I have already verified that the final file is inside the Docker image, but when I try to copy it to a directory outside the container, it says that the file does not exist.
I followed the guidelines at Exploring Docker container's file system and verified that the file was in fact in the container.
Environment:
Linux Mint 19 (Tricia)
Docker installed by snap
Command:
docker cp {CONTAINER_ID}:/container_path /local_path
Problem:
stat /container_path: no such file or directory
The solution was simply to uninstall the snap version of Docker and install it again via apt. This solution still lacks a full explanation, as it is not known whether the problem was really caused by the snap-installed version of Docker.
sudo snap remove docker
sudo apt install docker.io
I have a machine which has no internet connectivity and no access to any docker repository (so no image pulls are possible). I want to install memcached and I have the .rpm file available.
When I install it on the host machine, I execute the command rpm -ivh memcached-1.4.15-10.el7_3.1.x86_64.rpm and it works, but I suppose that is because the rpm package manager comes pre-installed on the host OS.
In the Docker container I copy in the .rpm file, and in the Dockerfile I include the command RUN rpm -ivh memcached-1.4.15-10.el7_3.1.x86_64.rpm. The build then fails with the following error:
/bin/sh: 1: rpm: not found
The command '/bin/sh -c rpm -ivh /home/memcached-1.4.15-10.el7_3.1.x86_64.rpm' returned a non-zero code: 127
I suppose that is because in a Docker container the OS has only a bare-minimum installation. So how do I install the rpm package manager inside the container without an internet connection? Is there an installable file for it?
I understand it is not best practice to avoid a central repository for images; I just want to know whether installing without internet access is even possible.
I am creating the container on a CentOS machine right now, and the following is the Dockerfile:
FROM microsoft/dotnet:2.0-runtime
WORKDIR /home
#mempkgtest folder contains the .rpm file
COPY ${source:-mempkgtest} .
RUN rpm -ivh /home/memcached-1.4.15-10.el7_3.1.x86_64.rpm
ENTRYPOINT ["dotnet", "--info"]
The Docker image microsoft/dotnet is built on top of buildpack-deps:jessie-scm (from here), which is in turn built on top of debian:jessie (from here and here).
Debian does not use the rpm package manager; it uses the deb format for packages. /bin/sh kindly informs you that it hasn't found rpm by saying /bin/sh: 1: rpm: not found. You can read how to install rpm packages on Debian here.
Anyway, why don't you use a deb file and the dpkg package manager? You can find the memcached package for Debian jessie here. You can do something like this in your Dockerfile:
ADD http://ftp.us.debian.org/debian/pool/main/m/memcached/memcached_1.4.21-1.1+deb8u1_amd64.deb /home/
RUN dpkg -i memcached_1.4.21-1.1+deb8u1_amd64.deb
And remember, you will need to copy the dependencies too.
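On a machine (or container) with internet access that matches the base image, one way to gather memcached together with its dependencies is apt's download-only mode, as in the earlier answer (just a sketch):
apt-get update
apt-get install --download-only -y memcached
# the .deb files for memcached and its dependencies are now cached here:
ls /var/cache/apt/archives/*.deb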
Alternatively, why don't you build your Docker image on a machine with internet access, save it with docker save, then copy the archive to your destination machine and load it there with docker load? That way is simpler, and apt-get will resolve and install all of memcached's dependencies for you.
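A rough sketch of that workflow (the image and file names are placeholders):
# on the machine with internet access
docker build -t memcached-image .
docker save -o memcached-image.tar memcached-image
# copy memcached-image.tar to the offline machine, then:
docker load -i memcached-image.tar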
To develop a driver, we need the /lib/modules/<version>/build directory. But I found that in a CentOS Docker image, even after I run
yum install kernel-devel
there is still no such directory with all its contents. Questions:
(1) How can I develop a kernel driver in a Docker Linux environment?
(2) Is it possible to load a module developed this way?
Docker is not a virtual machine.
An Ubuntu container in Docker is not a full, real Ubuntu system.
If you want to develop with Ubuntu, you should use VirtualBox or VMware.
Check this link for more information
Docker uses the kernel of the host machine.
After reading this page, I almost gave up on building a kernel module in Docker, so I'm adding this answer hoping it helps somebody. See also what-is-the-difference-between-kernel-drivers-and-kernel-modules.
You can build kernel modules in Docker as long as the kernel source required for the build is available inside the container. Let's say you want to build against the latest kernel source available in your yum repos: you can install it using yum install kernel-devel. The source will be in the /usr/src/kernels/<version> directory. You can also install a specific version of kernel-devel from your repo if that is what you want.
Then build the module using make -C <path_to_kernel_src> M=$PWD, where the path to the kernel source would be /usr/src/kernels/<version>.
Read - Kernel Build System » Building External Modules
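Putting those steps together, a rough sketch inside a CentOS container might look like this (the module name hello and the assumption that only one kernel source tree is installed are mine):
yum install -y kernel-devel gcc make elfutils-libelf-devel
# pick the kernel source tree that kernel-devel installed
KSRC=/usr/src/kernels/$(ls /usr/src/kernels | head -n 1)
# build the out-of-tree module in the current directory
# (expects a Makefile/Kbuild with: obj-m := hello.o)
make -C "$KSRC" M="$PWD" modules
# check which kernel the resulting module was built for
modinfo hello.ko | grep vermagic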
A Docker container uses the kernel of the host machine, so if you want to build against the running kernel, i.e., the kernel of the Docker host machine, you could try running the container in privileged mode and mounting the modules directory: docker run --name container_name --privileged --cap-add=ALL -v /dev:/dev -v /lib/modules:/lib/modules image_id. See this.
You should not load a module on a kernel that is not the same as the one the module was built for. You could force-install it, but that is highly discouraged. Remember that your running kernel, i.e., the Docker host kernel, is the kernel of the Docker container, irrespective of which kernel-devel version you installed.
To see which kernel the module was built for (or built using), run modinfo <module> and look at the vermagic value.
Dynamic Kernel Module Support is also worth a read.
How do I add a couple of packages to the default Ubuntu installation in lxc, so that the results are cached?
Currently my script creates containers like this:
lxc-create -t ubuntu -n foo -- --packages "firefox,python2.7,python-pip"
It works but is very slow, as it downloads installation packages with massive dependencies every single time I create a container. Is there a way to include these in the default Ubuntu installation, so that they would be downloaded once and then cached, speeding up creation of consecutive containers?
I would recommend looking at the apt-cacher-ng package: https://launchpad.net/ubuntu/+source/apt-cacher-ng.
I found a guide on how to install it here: http://www.distrogeeks.com/install-apt-cacher-ng-ubuntu/.
Apt on your host machine "should" already cache packages downloaded on the host, but there is no harm in configuring the host's apt to use the apt-cacher-ng cache as well.
But configuring the container "machines" to use apt-cacher-ng on the host can greatly reduce the time spent downloading the same packages in different containers.
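As a rough sketch (assuming apt-cacher-ng listens on its default port 3142 and the containers reach the host on the default lxcbr0 address 10.0.3.1; adjust both for your setup), each container can be pointed at the cache like this:
echo 'Acquire::http::Proxy "http://10.0.3.1:3142";' > /etc/apt/apt.conf.d/01proxy
apt-get update
apt-get install -y firefox python2.7 python-pip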