My EC2 instance runs the CentOS 7 HVM community image, which uses XFS as its default file system. I had a 10GB root volume that I extended to 15GB by detaching > creating a snapshot > reattaching to /dev/sda1. I noticed that I did not have to run any resize command, the way you must run resize2fs with an ext file system; the partition now shows 15GB without my doing anything. There is an xfs_growfs command, but I did not run even that.
So, is this normal behavior of how XFS is expanded in the EC2 world (and elsewhere), or is something else going on?
Regards,
Farmi
This is normal behaviour on the Amazon Linux AMI.
From Amazon Linux AMI 2014.03 Release Notes:
Cloud-Init 0.7.2
Cloud-Init has been updated to the 0.7 series, adding a number of useful features. One example is dracut-modules-growroot, which automatically resizes your root filesystem on boot.
I note that CentOS 7 (x86_64) with Updates HVM says:
Starting with CentOS-7 we now include cloud-init support in all CentOS AMI's
So, it is likely that your disk image is using this version of cloud-init that does the automatic resize.
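For reference, here is a hedged sketch of the manual equivalent of what the cloud-init/growroot machinery does at boot, assuming the root device is /dev/xvda with the root file system on partition 1 (the device name is an assumption; check yours with lsblk):

# grow the partition to fill the enlarged volume (growpart comes from the cloud-utils/cloud-utils-growpart package)
sudo growpart /dev/xvda 1
# grow the mounted XFS file system; XFS uses xfs_growfs where ext file systems use resize2fs
sudo xfs_growfs /

Note that xfs_growfs operates on a mounted file system, which is why the root file system can be grown online.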
Please note that Amazon Linux AMIs share the same RHEL/CentOS lineage, so you may want to use them instead of the CentOS AMI, since they are updated directly by AWS.
Related
I configured Windows Subsystem for Linux and installed Microsoft's packaged Ubuntu on Windows 10 to get my hands on Docker using Linux. From what I understand, Docker does not need a guest OS, unlike VMware; that's one of its main advantages.
I browsed Docker Hub and found an official Ubuntu image. What is it for, if there is no need for a guest OS?
"Shared OS" is probably the wrong term here, because many people include the Linux distribution and filesystem as part of the OS. Containers run on a shared Linux kernel, but in namespaces isolated from the host and from each other. One of those namespaces is the mount namespace, which includes your root filesystem. Therefore, when you enter a container, the files in /bin and other directories are assembled from the image (plus volume mounts and changes made within the container).
The Ubuntu Docker image is an initial filesystem containing a minimal Ubuntu environment that you can use to create other images for running your containers. If you were to start a container without it, you wouldn't have anything: no /bin/sh, no apt, no libraries, and you would first have to supply every binary and library needed to run commands inside the container.
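As a quick illustration (a hedged sketch, assuming Docker is installed and using the default latest tag), you can see that a container gets the distribution's filesystem but keeps the host's kernel:

# prints Ubuntu release information from the image's /etc/os-release
docker run --rm ubuntu cat /etc/os-release
# prints the host's kernel version, since the kernel is shared
docker run --rm ubuntu uname -r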
I'm new to Docker and understand that the Linux kernel is shared between the host OS and the containers. But I don't really understand how deeply Docker emulates a specific Linux distribution. Let's say we have a simple Dockerfile like this:
FROM ubuntu:16.10
# update first, since the image ships with empty package lists; -y avoids an interactive prompt
RUN apt-get update && apt-get install -y nginx
It will give me a Docker container with nginx installed in an Ubuntu 16.10 environment, so I should be able to use apt-get as Ubuntu's default package manager. But how deep does this go? Can I assume that typical commands of the distribution, like lsb_release, work as in a full VM with Ubuntu 16.10 installed?
The reason behind my question is that Linux distributions are different. I need to know which commands are available, for example when I run a container with Ubuntu 16.10 like the one above on a host with a different distribution installed (like Red Hat, CentOS, etc.).
An Ubuntu image in Docker is about 150 MB, so I suspect it does not include all the tools of a real installation. But how can I know which ones I can rely on being there?
Base OS images for Docker are deliberately stripped down, and for Ubuntu more commands are removed with each new release. The image is meant as the base for a dedicated application to run; you wouldn't typically connect to the container and run commands inside it, and a smaller image is easier to move around and presents a smaller attack surface.
There isn't a list of commands in each image version that I know of; you'll only find out by building your image. But once images are tagged, you can assume a future minor update will not break downstream images, which is a good argument for explicitly specifying a tag in your Dockerfile.
E.g., this Dockerfile builds correctly:
FROM ubuntu:trusty
RUN ping -c 1 127.0.0.1
This one fails:
FROM ubuntu:xenial
RUN ping -c 1 127.0.0.1
That's because ping was removed from the image for the xenial release. If you had just used FROM ubuntu, the same Dockerfile would have built correctly while trusty was the latest tag and then failed once it was replaced by xenial.
A container presents you with the same software environment as the non-containerized distribution. It may not have (in fact, it probably does not have) all the same packages installed by default, but you can install whatever you need using the appropriate package manager. The availability of software in the container has nothing to do with the distribution running on your host (the Ubuntu image will be the same regardless of whether you are running Docker under CentOS, Fedora, Ubuntu, Arch, etc.).
If you require certain commands to be available, just ensure that they are installed in your Dockerfile.
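For example, a minimal sketch of pinning down the commands you depend on (package names taken from Ubuntu's archive: lsb-release provides lsb_release, iputils-ping provides ping):

FROM ubuntu:16.10
# install the specific commands this application relies on
RUN apt-get update && apt-get install -y lsb-release iputils-ping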
One of the few things that works differently inside a container is that there is typically no service-management process running (like init or systemd), so you cannot start services the same way you do on the host without a little extra work.
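For instance, instead of something like service nginx start (which needs an init system), the usual pattern is to run the daemon in the foreground as the container's main process; a hedged sketch for the nginx image built above:

# run nginx in the foreground as the container's main process
CMD ["nginx", "-g", "daemon off;"]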
I am trying to boot a cloned image on KVM. The image is in raw format and is a clone of an AWS Ubuntu 14.04 LTS HVM instance. It gives me an error saying no bootable device was found. The same image boots up when I specify the kernel path explicitly while creating the VM.
I am using virt-manager to create the VM, and the QEMU version is 2.0.0.
I have tried changing the disk bus but nothing helped.
Can anyone help?
It was because of the partitioning on the Ubuntu 14.04 LTS image: cloning just a partition leaves out the partition table and boot loader, which is why no bootable device was found. I cloned the entire disk instead, and it boots up fine now.
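A hedged sketch of the difference (device and file names are assumptions):

# cloning only a partition misses the MBR, partition table and boot loader:
sudo dd if=/dev/xvda1 of=root-partition.raw bs=1M
# cloning the entire disk keeps them, so the image is bootable:
sudo dd if=/dev/xvda of=whole-disk.raw bs=1M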
Are Docker images portable across different Linux flavours? Let's say I have an OEL-based Docker image with a database installed in it; can I run this in boot2docker on a Mac?
Yes. You can archive an image (docker save), copy it to your Mac, load it there (docker load), and run a container in a boot2docker Tiny Core VM.
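A hedged sketch of that round trip (the image name my-oel-db is an assumption):

# on the source machine
docker save -o my-oel-db.tar my-oel-db
# copy my-oel-db.tar to the Mac (e.g. with scp), then inside boot2docker:
docker load -i my-oel-db.tar
docker run -d my-oel-db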
The only case where an image might not be portable is if its OS filesystem depends on a certain patch level of the kernel.
In that case, a container from that image would only run on the right kernel.
hek2mgl mentions in the comments that a feature like inotify works only on Linux (it should work in the Tiny Core VM of boot2docker), but would not work when sharing a folder from the (non-Linux) host (see VBox ticket 10660 or the boot2docker PR 284 comment).
I attached three 3GB EBS volumes to an Amazon EC2 micro instance and mounted the disks xvdd, xvdc, and xvdb.
My aim was to create a ZFS pool using these three disks.
I updated and upgraded Ubuntu 12.04, installed the zfs-linux dependencies, added the zfs-native PPA, and then issued the ZFS install command, which is
sudo apt-get install ubuntu-zfs
After this, I get the console output below; past the "run-parts:" line the install process never proceeds any further. I waited for 20+ minutes and got this:
Setting up zfs-dkms (0.6.0.91-0ubuntu1~precise1) ...
Loading new zfs-0.6.0.91 DKMS files...
First Installation: checking all kernels...
Building only for 3.2.0-31-virtual
Module build for the currently running kernel was skipped since the
kernel source for this kernel does not seem to be installed.
Setting up linux-headers-3.2.0-35 (3.2.0-35.55) ...
Setting up linux-headers-3.2.0-35-generic (3.2.0-35.55) ...
Examining /etc/kernel/header_postinst.d.
run-parts: executing /etc/kernel/header_postinst.d/dkms 3.2.0-35-generic /boot/vmlinuz-3.2.0-35-generic
Is this issue related to the EC2 kernel for Ubuntu, or does a machine running ZFS need to be of higher capacity?
With hosting providers, the kernel is usually the culprit. My provider (OVH) delivers its own customized (and allegedly more secure) kernel, alas without sources, though it reluctantly permits installing the generic kernel, which solved the problem for me. I don't know about Amazon; perhaps their customized kernel is crucial for their EC2 service. On the other hand, I very much doubt any hosting provider would publish the source code of their kernel.
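In any case, your console output points at the immediate problem: the DKMS build for the running kernel (3.2.0-31-virtual) was skipped because its kernel headers were not installed. A hedged sketch of the usual fix on Ubuntu, assuming the headers follow the standard linux-headers naming scheme:

# install headers matching the running kernel so DKMS can build the zfs module
sudo apt-get install linux-headers-$(uname -r)
# re-run the build by reinstalling the DKMS package
sudo apt-get install --reinstall zfs-dkms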