What Linux version does CircleCI use? Can it be modified?

We've connected our Node.js app to CircleCI. I understand how to control which services are running on the machine, but not how to identify the OS version, or whether there's a way to change it so that the unit tests run on the same OS version as the production machine.

Per the documentation, CircleCI currently uses Ubuntu 12.04.
You can check for yourself by running a build with ssh enabled and examining one of the build instances:
$ ssh -p 64538 ubuntu@54.205.50.104 cat /etc/os-release
NAME="Ubuntu"
VERSION="12.04.5 LTS, Precise Pangolin"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu precise (12.04.5 LTS)"
VERSION_ID="12.04"
The question of whether you can use a different OS has already been answered here. The answer is that you can use a Docker image with a different OS, but you can't replace the build container's base OS.
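For illustration, a minimal circle.yml sketch (classic CircleCI 1.0 format, as used below) that runs a command inside a container with a different OS; the image and command here are only examples:
machine:
  services:
    - docker
test:
  override:
    - docker run ubuntu:14.04 cat /etc/os-release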

Another way to detect OS version, other than via ssh, is adding the same command to the circle.yml file. For example:
machine:
  pre:
    - cat /etc/os-release
That way, the OS version will show in the log of every build.

Related

Are there any limitations regarding the age of a linux distribution which can be used to create a docker base-image?

I'm wondering if it's possible to use a very old Linux distribution like Debian GNU/Linux 3.1 (Sarge) to create a base image for running legacy code that doesn't work under "younger" distros.
The only thing I found about it was somebody successfully using Ubuntu Feisty: Run old Linux release in a Docker container?
Are there any known limitations?
Your host needs to have a minimum version of the Linux kernel, and that version is 3.10.
See:
Docker minimum kernel version 3.8.13 or 3.10
An extract from that link:
There's also a shell script to check if your system has the required dependencies in place and to check which features are available:
https://github.com/docker/docker/blob/master/contrib/check-config.sh
So you can use this to check whether you will be able to use Docker on that host.
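For example, a quick way to fetch and run that checker on the prospective host (the URL is the docker/docker path cited above; the repository has since been renamed moby/moby, and GitHub redirects accordingly):
curl -fsSL https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh -o check-config.sh
bash ./check-config.sh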
From
https://wiki.debian.org/DebianSarge?action=show&redirect=Sarge
I see:
kernel: Linux 2.4.27 and 2.6.8
Sarge shipped with those kernels, far older than Docker's 3.10 minimum, so its userland may not work correctly under a modern host kernel.
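If you want to experiment anyway, one way to try it (sarge-rootfs.tar.gz is a hypothetical tarball you would have to build yourself, e.g. with debootstrap against archive.debian.org):
# import an old root filesystem tarball as a Docker image, then try a shell
docker import sarge-rootfs.tar.gz debian-sarge:3.1
docker run --rm -it debian-sarge:3.1 /bin/sh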

Oracle on lxc in ubuntu

I'm currently trying to install an Oracle server (11g) in a Linux container on Ubuntu, following this tutorial: http://www2.hawaii.edu/~lipyeow/ics321/2014fall/installoracle11g.html
When I try to change the file-handle limits with sysctl, the modification doesn't persist inside my container. Moreover, when I make the change in the host's kernel, it propagates to the containers, so my question is as follows:
How can I modify the file-handle limits only in my Oracle container?
Thanks.
Try out the Orabuntu-LXC project code. It supports Ubuntu 16.04, 17.04, 17.10 and is purpose-built for running Any Oracle on Any Linux, including Ubuntu Linux. Note that as you probably already know, Oracle Corp does not formally support or certify Oracle on Ubuntu Linux.
As far as your question about the file handles goes, some sysctl values can only be set at the LXC host level, and some can be set in the container.
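As a sketch of that split, assuming Oracle's usual fs.file-max requirement (the values are illustrative): fs.file-max is a host-global sysctl, not namespaced, so it can only be raised on the LXC host, while per-user limits can still be set inside the container.
# On the LXC host: raise the global file-handle limit; all containers see it
echo "fs.file-max = 6815744" | sudo tee /etc/sysctl.d/99-oracle.conf
sudo sysctl --system
# Inside the container: per-user open-file limits for the oracle user
echo "oracle soft nofile 1024"  | sudo tee -a /etc/security/limits.conf
echo "oracle hard nofile 65536" | sudo tee -a /etc/security/limits.conf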
https://sites.google.com/site/nandydandyoracle/oracle-rac-in-lxc-linux-containers/oracle-lxc-vlc#TOC-Install-the-etc-sysctl.conf-File-Required-for-Oracle
https://github.com/gstanden/orabuntu-lxc
https://sites.google.com/site/nandydandyoracle/
Please note that the step-by-step guides are quite old, and that the basic LXC infrastructure, together with OpenvSwitch, an LXC-containerized DNS/DHCP, and an optional SCST Linux SAN, can all be installed on Ubuntu 16.04, 17.04 and 17.10 with one command:
./anylinux-services.sh
after which all you would need to do is download your Oracle database installation media and install.

Which commands of the defined Linux Distribution are available in a Docker container?

I'm new to Docker and understand that the Linux kernel is shared between the host OS and the containers. But I don't really understand how deeply Docker emulates a specific Linux distribution. Let's say we have a simple Dockerfile like this:
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y nginx
It will give me a Docker container with nginx installed in an Ubuntu 16.10 environment, so I should be able to use apt-get as Ubuntu's default package manager. But how deep does this go? Can I assume that typical commands of that distribution, like lsb_release, are available as in a full VM with Ubuntu 16.10 installed?
The reason behind my question is that Linux distributions are different. I need to know which commands are available, for example when I run a container with Ubuntu 16.10 like the one above on a host with a different distribution installed (like Red Hat, CentOS, etc.).
An Ubuntu image in Docker is about 150 MB, so I don't think all the tools of a real installation are included. But how can I know which commands I can count on being there?
Base OS images for Docker are deliberately stripped down, and for Ubuntu they remove more commands with each new release. The image is meant as the base for a dedicated application to run; you wouldn't typically connect to the container and run commands inside it, and a smaller image is easier to move around and has a smaller attack surface.
There isn't a list of commands in each image version that I know of, you'll only know by building your image. But when images are tagged you can assume a future minor update will not break downstream images - a good argument for explicitly specifying a tag in your Dockerfile.
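As a quick probe, you can also check whether a given command exists in a base image without building anything (lsb_release is just an example):
docker run --rm ubuntu:16.10 bash -c 'command -v lsb_release || echo "lsb_release: not installed"'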
E.g., this Dockerfile builds correctly:
FROM ubuntu:trusty
RUN ping -c 1 127.0.0.1
This one fails:
FROM ubuntu:xenial
RUN ping -c 1 127.0.0.1
That's because ping was removed from the image for the xenial release. If you just used FROM ubuntu then the same Dockerfile would have built correctly when trusty was the latest tag and then failed when it was replaced by xenial.
A container presents you with the same software environment as the non-containerized distribution. It may not have (in fact, probably does not have) all the same packages installed by default, but you can install whatever you need using the appropriate package manager. The availability of software in the container has nothing to do with the distribution running on your host (the Ubuntu image will be the same regardless of whether you are running Docker under CentOS, Fedora, Ubuntu, Arch, etc.).
If you require certain commands to be available, just ensure that they are installed in your Dockerfile.
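For instance, a sketch that makes the two commands discussed here available (iputils-ping and lsb-release are the Ubuntu packages providing ping and lsb_release):
FROM ubuntu:16.10
RUN apt-get update && apt-get install -y iputils-ping lsb-release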
One of the few things that works differently inside a container is that there is typically no service-management process running (like init or systemd), so you cannot start services the same way you would on the host without a bit of extra work.
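The usual workaround is to run the service itself in the foreground as the container's main process; for nginx, for example (my-nginx-image is a placeholder for an image with nginx installed):
# run nginx as PID 1 in the foreground instead of `service nginx start`
docker run --rm -p 8080:80 my-nginx-image nginx -g 'daemon off;'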

Linux dev environment in osx (docker as mv or any other)

I'd love some advice on setting up what I'm looking for.
I'm using OS X and I need to develop some code on a Linux machine; the thing is, I was looking for a VM alternative, since a VM drains too much battery power.
The first thing I came across was a Docker container. I know it's not what Docker was designed for, but I thought it might work anyway. So I tried running a container as
docker run -i -t ubuntu /bin/bash
and it worked well. However, all the changes I make are gone and I can't find a way to solve it. I also tried
docker run -i -v /Users/JaimehRubiks/test:/home/Jaime -t ubuntu /bin/bash
and all files in there are saved (also very interesting because I can share my files with the host), but it's kind of tedious having to commit to the Docker image if I change anything in the config files of my Ubuntu.
What I'm looking for is just a simple way to run Linux on my Mac, and then access it somehow, like I did in Docker or via SSH.
Docker currently does not run natively on OS X, as Docker relies on the Linux kernel for its isolation features. In fact, the Docker Toolbox uses a VirtualBox virtual machine running the boot2docker Linux distro to run the Docker daemon on OS X. See the official documentation on Mac OS X installation.
The boot2docker Linux image is quite lightweight, but I'm not sure you will get much benefit from running Docker on OS X for Linux development over simply running a full VirtualBox machine with Ubuntu (or another distro). If you want to run a virtual machine, Vagrant is a good tool to help you set that up. It lets you easily pull down images from an image repo, set up the image, and ssh into it. It also makes host -> guest-machine folder sharing and port forwarding quite simple.
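A minimal sketch of that workflow, assuming Vagrant and VirtualBox are already installed (ubuntu/trusty64 is just one publicly available box):
vagrant init ubuntu/trusty64   # write a Vagrantfile referencing an Ubuntu box
vagrant up                     # download the box on first run and boot the VM
vagrant ssh                    # open a shell inside the VM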
but it's kind of boring having to commit to the docker image if I change anything in the config files of my ubuntu.
You don't have to docker commit anything: any file change made on the host (/Users/JaimehRubiks/test) will be visible in the container (/home/Jaime).
What about using Vagrant to run Ubuntu or CentOS? You can access the system via the vagrant ssh command, configure it with a configuration file (the Vagrantfile), and share folders much as you would with Docker.

Emulating Linux binaries under Mac OS X

How do I run Linux binaries under Mac OS X?
Googling around I found a couple of emulators but none for running Linux binaries on a Mac. There are quite a few posts about running Mac OS X on Linux and that kind of stuff - but that's the opposite of what I want to do.
Update:
Thanks for all the answers! I am fully aware of MacPorts, Fink, and the other such tools; and no, I do not want any of those utilities or package managers, as I prefer to compile things myself. I also have Parallels and could set up virtual machines and all that jazz...
The only thing I want to do is to find a way to run a binary that I do not have the source code for and has been compiled for Linux, but I do not want to run it under Linux but under Mac OS X. Therefore my question about emulators.
Well, there is a project introducing something like Linux's binfmt_misc to OS X, so what you need now is an ELF loader, a dynamic linker that can load both Mach-O and ELF, and some mechanism to translate Linux syscalls to OS X ones.
Just for inspiration: you could implement the dynamic linker so that it ignores the filename extension - both libfoo.so.1 (a Linux ELF) and libfoo.1.dylib (a Mach-O) could be loaded. That way the OS X versions of system libraries can be reused, so you do not need to write a "hosted on OS X" libc.so, and syscalls can be handled by a kext that translates Linux calls to OS X ones in the kernel.
Or, more elegantly, implement a stripped-down Linux kernel as a kext that makes the OS X kernel dual-purpose. However, that would require you to use two sets of libraries. (Binaries do not clash, so it is largely okay.)
Set up a virtual machine (I personally use VMware Fusion) and then install whatever distro of Linux you desire on the virtual machine.
Or, if you have the source to the Linux program, chances are you can recompile it on a Mac and run it natively. If you install Fink or MacPorts, you can install a lot of open source programs without much trouble.
I recently found Noah, which you can use to run Linux binaries on macOS. You can install Noah via Homebrew (brew install linux-noah/noah/noah). Then you should be able to do this:
noah linux_binary
In my experience the behavior of the binary matches what I see on my Ubuntu machine.
You might have some luck running Linux executables under Mac OS X using QEMU's user-space emulator.
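A sketch of what that looks like, assuming a QEMU build with user-mode emulation available and a Linux sysroot on hand (/path/to/linux-sysroot is a placeholder for a directory containing the Linux dynamic loader and libraries):
qemu-x86_64 -L /path/to/linux-sysroot ./linux_binary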
If you decide to go the virtualization route, consider also VirtualBox.
Also, if you only need UNIX-like command-line tools, there is the MacPorts project. This is basically how I set up git on my Mac: after installing MacPorts, you just have to run sudo port install git to install git on your system.
Noah does not run the binaries properly for me. Use Docker Desktop for Mac instead.
Just do:
docker pull centos:latest # 73MB CentOS docker image
Make a folder for what is needed to run your binary, and in your Dockerfile:
FROM centos
COPY your_binary /bin/
ENTRYPOINT ["your_binary"]
and you can build it with
docker build -t image_name .
then execute with
docker run image_name
as if it were the binary itself. Worked for me; hope it helps someone else. And if you need specific outputs or to store files somewhere, you can mount volumes into the container with -v, for example:
docker run -v path_to_my_stuff:/docker_stuff image_name
though adding a WORKDIR /docker_stuff line to the Dockerfile before ENTRYPOINT is probably best.
If you change ENTRYPOINT to
ENTRYPOINT ["bash", "-c"]
and add
CMD ["your_binary"]
underneath it, you can actually pass the command into the image like
docker run -v path_on_local:/in_container_path image_name "your_binary some_parameters -optionrequiringzerowhitespacebeforeinputvalue"
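Putting those pieces together, the resulting Dockerfile would look roughly like this (your_binary and /docker_stuff are the placeholders used above):
FROM centos
COPY your_binary /bin/
WORKDIR /docker_stuff
ENTRYPOINT ["bash", "-c"]
CMD ["your_binary"]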
