How to list pools in Linux Red Hat?

In Solaris, the zpool list command is used to list the available pools.
What is the command for listing pools in Linux Red Hat 7.6?

If you're using LVM-based libvirt storage pools, you can use something like virsh pool-list --all
There's extensive documentation at docs.redhat.com.
There's also ZFS on Red Hat, but you have to install it separately.
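For quick reference, a minimal sketch of what the listing commands look like (vgs is my assumption for plain LVM volume groups; zpool list only exists once ZFS on Linux has been installed):
virsh pool-list --all   # libvirt storage pools (LVM-backed or otherwise)
vgs                     # plain LVM volume groups, if that is what you mean by pools
zpool list              # ZFS pools, after installing ZFS on Linux separately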

Related

Oracle on LXC in Ubuntu

I'm currently trying to install an Oracle server (11g) in a Linux container on Ubuntu, following this tutorial: http://www2.hawaii.edu/~lipyeow/ics321/2014fall/installoracle11g.html.
When I try to change the file handle limits with sysctl, the modifications don't persist in my container. Moreover, when I make the modification in the main Ubuntu kernel, it propagates to the containers, so my question is as follows:
How can I modify the file handle limits only in my Oracle container?
Thanks.
Try out the Orabuntu-LXC project code. It supports Ubuntu 16.04, 17.04, and 17.10 and is purpose-built for running Any Oracle on Any Linux, including Ubuntu Linux. Note that, as you probably already know, Oracle Corp does not formally support or certify Oracle on Ubuntu Linux.
As far as your question about the file handles goes, some sysctl values can only be set at the LXC host level, and some can be set in the container (see the sketch after the links below).
https://sites.google.com/site/nandydandyoracle/oracle-rac-in-lxc-linux-containers/oracle-lxc-vlc#TOC-Install-the-etc-sysctl.conf-File-Required-for-Oracle
https://github.com/gstanden/orabuntu-lxc
https://sites.google.com/site/nandydandyoracle/
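As a rough illustration of that split (the fs.file-max value is the usual Oracle 11g recommendation rather than something from the tutorial, and the conf file name is my own choice):
# on the LXC host: host-wide keys such as fs.file-max can only be raised here
echo 'fs.file-max = 6815744' | sudo tee /etc/sysctl.d/60-oracle.conf
sudo sysctl -p /etc/sysctl.d/60-oracle.conf
# inside the container: try the same write; if /proc/sys rejects it or the value
# does not stick, the key is host-scoped and has to be changed on the LXC host
sudo sysctl -w fs.file-max=6815744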
Please note that the step-by-step guides are quite old, and that the basic LXC infrastructure, together with OpenvSwitch, an LXC-containerized DNS/DHCP, and an optional SCST Linux SAN, can all be installed on Ubuntu 16.04, 17.04 and 17.10 with one command:
./anylinux-services.sh
after completion of which all you would need to do is download your Oracle database installation media and install.

Is there a way to share the host (Ubuntu) file system with a guest (CentOS 7)?

I am trying to use virsh and a domain XML to launch a CentOS 7 guest from an Ubuntu 16.04 LTS host.
The "filesystem" node that I am using in the domain XML is as below:
<filesystem type='mount' accessmode='passthrough'>
  <driver type='path' wrpolicy='immediate'/>
  <source dir='/opt/test'/>
  <target dir='testlabel'/>
</filesystem>
With the above config, "testlabel" is not visible in the guest and hence I am not able to mount it. Is there anything that I am missing?
I tried to get the 9p modules into the guest but they don't seem to be available in CentOS 7.
I do not want to use network based file sharing like NFS or glusterfs either.
RHEL-7 (and thus CentOS-7) explicitly does not support the 9p filesystem. It is disabled in guest kernel builds and also disabled in QEMU builds for RHEL hosts. The reason is that 9p support in QEMU has been largely unmaintained upstream and the QEMU community doesn't have confidence in its security or performance.
If you want to share filesystem locations, pretty much your only choice is to use a traditional network filesystem, whether NFS, SAMBA, or something tunnelled like SSHFS.
Work is ongoing upstream to support a new technology called virtio-vsock, which will allow running NFS-over-vsock, bypassing the need for networking - think of it as akin to NFS over UNIX sockets. This is not ready for use yet, though, so it is not an option for an Ubuntu/RHEL-7 pair.
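If you go the NFS route, a minimal sketch would look something like the following (the 192.168.122.x addresses assume the default libvirt NAT network, and /mnt/hostshare is just an example mount point; /opt/test is the directory from the question):
# on the Ubuntu host: export the directory
sudo apt-get install nfs-kernel-server
echo '/opt/test 192.168.122.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
# in the CentOS 7 guest: mount it over the virtual network
sudo yum install nfs-utils
sudo mount -t nfs 192.168.122.1:/opt/test /mnt/hostshare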
For the guest (CentOS 7), use the kernel from the CentOSPlus repository (see the CentOSPlus wiki). The CentOSPlus kernel has 9p file system support built in. You can install the "kernel-plus" kernel with
yum --enablerepo=centosplus install kernel-plus
Start the guest with the "kernel-plus" kernel and
mount -t 9p -o trans=virtio {sharetarget} {mountpoint}
works. I use it this way on CentOS 7 guest systems.
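For reference, with the testlabel mount tag from the question's domain XML this would look like the following (the /mnt/hostshare mount point is just an example):
mount -t 9p -o trans=virtio testlabel /mnt/hostshare
# or persistently via /etc/fstab
testlabel  /mnt/hostshare  9p  trans=virtio  0  0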

Can run ARM/RPi images in Docker on Windows but not Linux

I'm able to run ARM images (e.g. hypriot/rpi-node) in Docker on Windows (64-bit), but on all the Linux x86_64 machines I've tried (Debian, CoreOS, Alpine, etc.) I get the following error. The error itself makes sense to me, but I don't get why the image runs in Docker on Windows, and I wonder whether I'm missing some opportunity to use an x86 machine as a build server for ARM images (i.e. in the Google/AWS/Azure cloud). Any ideas how I might be able to?
docker run -ti hypriot/rpi-node ls
standard_init_linux.go:175: exec user process caused "exec format error"
Docker for Windows (and Docker for Mac) both use a Linux VM to host containers. However, the difference between the Linux VM they use and your Linux machines is that their VM has a kernel facility called binfmt_misc set up to call qemu whenever it encounters a binary for a foreign architecture (https://github.com/linuxkit/linuxkit/blob/1c552f7a9db7f0660d3c83362d241e54142323ca/pkg/binfmt/etc/binfmt.d/00_linuxkit.conf).
If you were to configure your linux machine appropriately, it could be used as a build server for ARM images. Google qemu-user-static for some ideas of how to set it up.
Note that the linuxkit VM uses the 'F' flag, which doesn't seem to be standard when configuring a typical Linux environment. Without it, you need to put the qemu binary inside the container. I'm not sure why it isn't standard practice to use 'F' in more places (there does seem to be a Debian bug to do so: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=868030).
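To check how a given host is configured, something like this works (the qemu-arm entry name is what Debian's qemu-user-static registers; it may differ elsewhere):
ls /proc/sys/fs/binfmt_misc/            # list registered binfmt_misc handlers
cat /proc/sys/fs/binfmt_misc/qemu-arm   # look for "flags: F" (fix-binary)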
On Windows and Mac, Docker runs inside a Linux VM. So I think that for your container on Windows, an ARM Linux VM was started, whereas under native Linux the native architecture is used.
The "exec format error" confirms that you are not running your docker image on the correct architecture.
I had this error trying to run an x86 Docker image on a Raspberry Pi 2 (which has an ARM architecture). I am pretty sure it would be the same error the other way round.
So, as Kulti said, Windows/Mac must have started an ARM Linux VM.
If you wish to work with ARM Docker images on Linux, you may want to try running a Linux Docker VM manually. I think you can do it using "docker-machine", even on Linux: see the Docker documentation for docker-machine. (I haven't done it myself, so I am not sure.)
Hope this helps.
Docker on Windows uses a Linux VM which has been configured such that it can run images of other architectures through qemu user-mode emulation. You can configure native Linux in a similar way and it too will then run ARM images. There is a well-written three-part series that describes it all in detail.
The main thing to take away from Part #1 is that any file on Linux is executed through an interpreter (even binary files). The choice of interpreter is configurable through binfmt_misc, based on byte patterns at the beginning of the file, the filename extension, etc.
Part #2 builds on Part #1 to show how to configure the Linux kernel (on any architecture) to interpret ARM binaries using qemu user emulation.
Finally, Part #3 shows how to apply the same trick to a Linux setup inside a Docker container, which means that a Linux Docker container (for any architecture) will be able to execute ARM binaries.
The important thing to note here is that there is nothing special about the Docker implementation or containerization that allows Docker on Windows to execute ARM binaries. Rather, any Linux setup (whether on bare metal or in a container) can be configured to execute ARM binaries through qemu's user-mode emulation of an ARM CPU.
I know this post is old, but I will post my solution here in case someone comes here through Google.
This happens because your Docker host is not able to run images with the ARM architecture. To enable this in your Docker host, just run:
docker run --rm --privileged hypriot/qemu-register
You can find more info in this post.
You need the kernel configured for qemu's binfmt_misc module, and the container needs to have the static binaries used by qemu available inside the container filesystem.
You can load the files on the host with the hypriot/qemu-register image; however, I prefer the distribution vendor packages when available (this ensures that I get patches when I update). For Debian, the important package is qemu-user-static, which you can install as root with:
apt-get update && apt-get install qemu-user-static
Ensure the kernel module is loaded (as root):
modprobe binfmt_misc
Then when running the container, you can mount the static qemu binaries into your container rather than packaging them inside your image, e.g. for the arm arch:
docker run -it --rm \
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
hypriot/rpi-node /bin/sh
Docker includes binfmt_misc in the embedded Linux VMs used by Docker for Desktop, and there appears to be some additional functionality to avoid the need to manually mount the static qemu files inside the container.
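Once the handler is registered, a quick sanity check is to ask an ARM image for its architecture; under qemu emulation it should report an ARM machine type (e.g. armv7l) instead of failing with "exec format error". The bind-mount is only needed when the 'F' flag setup described above is not in place:
docker run --rm \
-v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static:ro \
hypriot/rpi-node uname -m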

EC2 root volume expansion for CentOS7

My EC2 instance is running the CentOS 7 HVM community image, which comes with XFS as the default file system. I had a 10 GB root volume which I extended to 15 GB by detaching it, creating a snapshot, and reattaching it to /dev/sda1. I noticed that I did not have to run any kind of resize command (the way we have to run resize2fs with an ext file system), and the partition now shows 15 GB without my doing anything. There is a command, xfs_growfs, but I did not run even that.
So, is this normal behavior for how XFS is expanded in the EC2 world (and elsewhere), or is it something else?
Regards,
Farmi
This is normal behaviour on the Amazon Linux AMI.
From Amazon Linux AMI 2014.03 Release Notes:
Cloud-Init 0.7.2
Cloud-Init has been updated to the 0.7 series, adding a number of useful features. One example is dracut-modules-growroot, which automatically resizes your root filesystem on boot.
I note that CentOS 7 (x86_64) with Updates HVM says:
Starting with CentOS-7 we now include cloud-init support in all CentOS AMI's
So, it is likely that your disk image is using this version of cloud-init that does the automatic resize.
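Had the automatic grow not happened, a rough sketch of the manual steps on an XFS root would be (device and partition names are typical for EC2 and may differ on your instance):
lsblk                       # check that the block device shows the new 15 GB size
sudo growpart /dev/xvda 1   # from cloud-utils-growpart: extend the partition if needed
sudo xfs_growfs /           # grow the mounted XFS filesystem to fill the partition
df -hT /                    # confirm the new size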
Please note that Amazon Linux AMIs are based on CentOS, so you may want to use them instead of the CentOS AMI, since they are directly updated by AWS.

zfs installation issue Ubuntu 12.04 on Amazon EC2

I attached three EBS volumes of 3 GB each to an Amazon EC2 micro-instance and mounted the disks xvdd, xvdc and xvdb.
My aim was to create a zfs pool using these 3 disks.
I updated and upgraded Ubuntu 12.04, installed the zfs-linux dependencies, added the zfs-native PPA, and then issued the ZFS install command, which is
sudo apt-get install ubuntu-zfs
After this, I got the console output shown below, and after the "run-parts:" line the install process never proceeded any further. I waited for 20+ minutes and got this:
Setting up zfs-dkms (0.6.0.91-0ubuntu1~precise1) ...
Loading new zfs-0.6.0.91 DKMS files...
First Installation: checking all kernels...
Building only for 3.2.0-31-virtual
Module build for the currently running kernel was skipped since the
kernel source for this kernel does not seem to be installed.
Setting up linux-headers-3.2.0-35 (3.2.0-35.55) ...
Setting up linux-headers-3.2.0-35-generic (3.2.0-35.55) ...
Examining /etc/kernel/header_postinst.d.
run-parts: executing /etc/kernel/header_postinst.d/dkms 3.2.0-35-generic /boot/vmlinuz-3.2.0-35-generic
Is this issue related to the EC2 kernel for Ubuntu? Or does the machine running ZFS need to have higher capacity?
Usually with hosting providers the kernel is the issue. My provider (OVH) delivers their own customized (and allegedly more secure) kernel (alas, without sources), although they reluctantly permit installing the generic kernel, which solved the problem for me. I don't know about Amazon - perhaps their customized kernel is crucial for their EC2 service. On the other hand, I very much doubt any hosting provider would publish the source code of their kernel.
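For what it's worth, the DKMS output above says the module build was skipped because the headers for the running 3.2.0-31-virtual kernel are not installed. A rough sketch of the usual fix, assuming the matching headers package is still available in the archives:
sudo apt-get install linux-headers-$(uname -r)   # headers matching the running kernel
sudo dpkg-reconfigure zfs-dkms                   # rebuild the ZFS DKMS module against them
sudo modprobe zfs && lsmod | grep zfs            # confirm the module now loads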
