Adding installation packages to Linux container (LXC) caches - linux

How do I add a couple of packages to the default Ubuntu installation in lxc, so that the results are cached?
Currently my script creates containers like this:
lxc-create -t ubuntu -n foo -- --packages "firefox,python2.7,python-pip"
It works but is very slow, as it downloads the installation packages and their massive dependencies every single time I create a container. Is there a way to include these in the default Ubuntu installation, so that they are downloaded once and then cached, speeding up creation of subsequent containers?

I would recommend looking at the apt-cacher-ng package: https://launchpad.net/ubuntu/+source/apt-cacher-ng.
I found a guide on how to install it here: http://www.distrogeeks.com/install-apt-cacher-ng-ubuntu/.
Apt on your host machine "should" already cache the packages it downloads, but there is no harm in configuring apt on the host to use the apt-cacher-ng cache as well.
Configuring the container "machines" to use apt-cacher-ng on the host, however, can greatly reduce the time spent downloading the same packages in different containers.
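For example, here is a minimal sketch of pointing a container's apt at the host's cache, assuming apt-cacher-ng listens on its default port 3142 and the host is reachable from the containers at 10.0.3.1 (the default lxcbr0 address; adjust for your setup):
# run inside each container, or bake it into the template you create containers from
echo 'Acquire::http::Proxy "http://10.0.3.1:3142";' > /etc/apt/apt.conf.d/01proxy
After that, every apt-get install in a container goes through the host cache, so the second container that asks for firefox or python-pip gets it from the local cache instead of the mirror.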

Related

Docker - pip install from requirements or copy installed modules

When creating a Docker image, I currently run pip install -r requirements.txt.
Instead of running pip install, can I just copy all the modules already installed in my project's venv on the local host into the Docker image? Is it equivalent, or is there a difference? I am assuming here that the local host is the same as the Docker container in terms of image and configuration.
It is not recommended to copy the installed modules from your host machine to the container. The code might not work if your host OS is different from the container's base OS. Moreover, you may be copying unwanted cache files, which will increase the Docker image size.
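For reference, a minimal sketch of the recommended pattern; the python:3.10-slim base image and app.py entry point are assumptions, not part of the original question:
# Dockerfile (sketch)
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
# --no-cache-dir avoids shipping pip's download cache inside the image
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
Copying requirements.txt and installing before copying the rest of the code also lets Docker reuse the cached dependency layer when only the application code changes.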

Install ansible off-line from binaries [duplicate]

This question already has answers here:
How to install packages offline?
(12 answers)
Closed 2 years ago.
We have a RHEL Linux machine without network access, and we want to install Ansible on that machine.
We want to install Ansible from binaries (not via pip/yum install), because we want to avoid any pip dependency issues.
Is there any approach that works for this?
Example of the legacy way:
Step 1: Update your Control Node
Any time you are installing new software, it is a good idea to ensure your existing operating system software is up to date. Let’s start with that task first.
yum update
Step 2: Install the EPEL Repository
Installing Ansible is pretty straightforward. First, we’ll need to install the CentOS 7 EPEL repository.
yum install epel-release
Step 3: Install Ansible
Next, we install the Ansible package from the EPEL repository.
yum install ansible
Perhaps not ideal, but you can just run from source. I've done it that way for years without any problems. I just put the initialization routine in my .bashrc file, so it's always ready to use.
Running Ansible from source (devel)
Once you clone the repository on a machine that has internet access, sneakernet it over to the machine you want it on.
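Roughly, the steps look like this (a sketch based on the linked documentation; run the clone on the machine with internet access and copy the checkout over):
git clone https://github.com/ansible/ansible.git
cd ansible
# the "initialization routine" mentioned above; add this line to ~/.bashrc to make it permanent
source ./hacking/env-setup
Note that running from source still needs Ansible's Python dependencies (PyYAML, Jinja2, etc.) to be present on the offline machine.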
As mentioned in the official documentation, you can use the RPM available in the official release repo. Since you don't have internet access, you will have to download it somewhere else and copy it over to the control node.
RPMs for currently supported versions of RHEL, CentOS, and Fedora are available from EPEL as well as releases.ansible.com.
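A sketch of the download-and-copy approach, assuming a second CentOS/RHEL machine with internet access (the staging directory /tmp/ansible-rpms is just an arbitrary path):
# on the machine with internet access (on RHEL, install the EPEL release RPM directly instead)
yum install -y yum-utils epel-release
yumdownloader --resolve --destdir=/tmp/ansible-rpms ansible
# copy /tmp/ansible-rpms to the offline control node (USB, scp, ...), then on that node:
yum localinstall -y --disablerepo='*' /path/to/ansible-rpms/*.rpm
The --resolve flag pulls in the dependency RPMs as well, so the offline install does not need to reach any repository.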
Or
You can also build an RPM yourself. From the root of a checkout or tarball, use the make rpm command to build an RPM you can distribute and install.
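A sketch of that route, assuming a tagged release where the make rpm target is still present (the tag shown is just an example; see the note below about avoiding devel):
git clone https://github.com/ansible/ansible.git
cd ansible
git checkout v2.9.27
make rpm
# copy the built RPM (under the build output directory, rpm-build/ in older releases) to the offline machine, then:
rpm -Uvh ansible-*.rpm
Installing with rpm -Uvh does not pull in dependencies, so those still need to be present on the offline machine or copied over alongside it.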
However, I would not recommend running Ansible from source (devel) because, as already mentioned in the docs, this could be unstable.
Note
You should only run Ansible from devel if you are actively developing content for Ansible. This is a rapidly changing source of code and can become unstable at any point.
If you would like to build the RPM on your own, you should probably use the tagged releases.
These are available both on GitHub and from Ansible releases.

Installing Debian 8 packages & dependencies to a specified fs directory

I am new to Debian 8, and still very much a Linux beginner. For reference, I am currently running Debian 8 in Oracle VM VirtualBox on Windows 10.
For a project I am working on, my task is installing Debian 8 packages from source packages to a specified rootfs folder. After getting the source files (.tar.gz, .diff.gz, .dsc) and extracting them, I run:
dpkg-source -x <package>.dsc
which extracts the source into the working directory.
The issue I'm having is generating the .deb files from the extracted source. The standard way to do it is to let apt handle the installation of the dependencies from the online repository via:
apt-get build-dep <package>
then generate the .deb files via:
dpkg-buildpackage -b
But this will install the dependencies to my rootfs. In addition, since I downloaded the majority of the packages to my local machine, I'd like to be able to manually install each dependency from my local source packages rather than online.
From my understanding, I was tasked this to avoid polluting the specified fs with documentation and non-essential files, since the number of Debian 8 packages that will be added to this fs is >700.
If there are any mistakes / misunderstandings with my knowledge of Linux & Debian 8, please let me know.
You can create a Docker container, install your dependencies in it, and do all your work in there. You can configure Docker to put its containers on any filesystem you like.
Any approach that does not use containers is unlikely to work because, AFAIK, most Linux distributions, including Debian, do not support dependency relocation. Nix is an exception. So containers are a way around that.
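A rough sketch of that workflow, assuming the debian:8 image and that the extracted source lives in the current directory (image tag, package names, and paths are placeholders):
# start a throwaway Debian 8 container with the source tree mounted
docker run -it --rm -v "$PWD":/build -w /build debian:8 bash
# inside the container (build-dep needs deb-src lines in /etc/apt/sources.list)
apt-get update
apt-get install -y build-essential
apt-get build-dep -y <package>
cd <package>-<version> && dpkg-buildpackage -b -uc   # the .deb files land one level up, in /build
# back on the host: install the resulting .deb into the target rootfs instead of /
dpkg --root=/path/to/rootfs -i <package>_*.deb
This keeps the build dependencies inside the disposable container, while dpkg --root writes only the package itself into the target rootfs. Note that dpkg --root does not resolve dependencies and expects the rootfs to already have a dpkg database (e.g. one created by debootstrap), so dependency .debs have to be installed the same way, in order.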

docker linux container doesn't support driver development?

To develop a driver, we need the /lib/modules/<kernel-version>/build directory. But I found that in the CentOS Docker image, even after I run
yum install kernel-devel
there is still no such directory with all its contents. Question:
(1) how do I make it possible to develop a driver in a Docker Linux environment?
(2) is it possible to load a module developed this way?
Docker is not a virtual machine.
Ubuntu running in Docker is not a real Ubuntu.
If you want to develop for Ubuntu, you should use VirtualBox or VMware.
Check this link for more information.
Docker uses the kernel of the host machine.
After reading this page, I almost gave up on building a kernel module in Docker, so I'm adding this answer hoping it helps somebody. See also what-is-the-difference-between-kernel-drivers-and-kernel-modules.
You can build kernel modules in Docker as long as the kernel source required for the build is available inside Docker. Let's say you want to build against the latest kernel source available in your yum repos; you could install it using yum install kernel-devel. The source will be in the /usr/src/kernels/<version> directory. You could also install a specific version of kernel-devel from your repo if that is what you want.
Then build the module using make -C <path_to_kernel_src> M=$PWD, where the path to the kernel source would be /usr/src/kernels/<version>.
Read - Kernel Build System » Building External Modules
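As a concrete sketch, assuming a CentOS-based container where kernel-devel has already been installed and the module source (with its Makefile) sits in the current directory:
# pick the kernel source tree that kernel-devel installed
KSRC=$(ls -d /usr/src/kernels/* | head -n 1)
# build the external module against that tree
make -C "$KSRC" M="$PWD" modules
Running modinfo on the resulting .ko will show which kernel it was built for (see the vermagic note below).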
A Docker container uses the kernel of the host machine, so if you want to build against the running kernel, i.e., the kernel of the Docker host machine, you could try running the container in privileged mode and mounting the modules directory:
docker run --name container_name --privileged --cap-add=ALL -v /dev:/dev -v /lib/modules:/lib/modules image_id
See this.
You should not load a module on a kernel that is not the same as the one the module was built for. You could force-install it, but that is highly discouraged. Remember that your running kernel, i.e., the Docker host kernel, is the kernel of the Docker container, irrespective of which kernel-devel version you installed.
To see the kernel a module was built for (or built using), run modinfo <module> and look for the vermagic value.
Dynamic Kernel Module Support is also worth a read.

Package manager on the Docker Machine default VM?

I'm developing on OS X using Docker Machine. I used the quickstart terminal to let it create the default VM, which is extremely minimal:
In an OS X installation, the docker daemon is running inside a Linux VM called default. The default is a lightweight Linux VM made specifically to run the Docker daemon on Mac OS X. The VM runs completely from RAM, is a small ~24MB download, and boots in approximately 5s.
I want to install dnsmasq, but none of these instructions could work. I expect to come across this kind of problem again, so beyond installing dnsmasq I want to have some tool such as apt-get to be able to easily install things. With so few commands available I don't know how to get started. I have curl, wget, sh, git, and other very basic commands. I don't have any of the following:
apt
apt-get
deb
pkg
pkg_add
yum
make
gcc
g++
python
bash
What can I do? Should I just download a more complete VM such as Ubuntu? My laptop is not very fast so a very lightweight VM was very appealing to me, but this is starting to seem like a bit much.
The docker-machine VM is based on Tiny Core Linux. To install extra packages, use tce or tce-load, TinyCore's counterpart of apt-get.
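For example (a sketch; whether a given extension exists depends on the TinyCore repository for your boot2docker version):
# -w downloads the extension, -i installs it for the current session
tce-load -wi dnsmasq
Because the VM runs from RAM, packages installed this way typically do not survive a reboot unless you re-run the install from a persisted boot script (e.g. /var/lib/boot2docker/bootlocal.sh).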
A word of warning: you shouldn't treat the docker-machine VM as a regular VM where you install tons of packages and customize things. It's only meant to run containers, and it's best to keep it that way.
