OpenMPI process affinity

When I do mpirun --map-by node --bind-to numa --report-bindings ./out 26, the --bind-to option causes the following error:
On Linux, lack of the functionality can mean that you are on a
platform where processor and memory affinity is not supported in Linux
itself, or that hwloc was built without NUMA and/or processor affinity
support. When building hwloc (which, depending on your Open MPI
installation, may be embedded in Open MPI itself), it is important to
have the libnuma header and library files available. Different linux
distributions package these files under different names; look for
packages with the word "numa" in them. You may also need a developer
version of the package (e.g., with "dev" or "devel" in the name) to
obtain the relevant header files.
ompi_info | grep hwloc shows:
MCA hwloc: hwloc1117 (MCA v2.1.0, API v2.0.0, Component v3.0.0)
MCA rtc: hwloc (MCA v2.1.0, API v1.0.0, Component v3.0.0)
So I guess process affinity is supported. I have 26 dual-core nodes and I want to use only one CPU per node. Why can't I bind processes?

Are you sure Open MPI cannot bind processes?
Note that your command line tries to bind an MPI process to a NUMA domain (a socket, most of the time). If you want to bind to a core, then
mpirun --bind-to core ...
In order to check process bindings:
mpirun --report-bindings ...
You might be able to set process affinity but not memory affinity because of a missing library.
sudo yum install -y numactl-devel
should do the trick on RedHat-based systems.
You will then need to configure and make install Open MPI again.
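Putting it together, a minimal sketch of the rebuild and the corrected launch (the source directory and install prefix below are assumed example paths; adjust them to your installation):

# After installing numactl-devel, rebuild Open MPI so its (possibly
# embedded) hwloc picks up libnuma
cd openmpi-src
./configure --prefix=/opt/openmpi
make -j 4 && sudo make install

# Launch one process per node, bound to a single core, and print the bindings
mpirun --map-by node --bind-to core --report-bindings ./out 26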

Related

How to list pools in Linux Red Hat?

In Solaris, I use the zpool list command to list available pools.
What is the command for listing pools in Red Hat Linux 7.6?
If you're using LVM-based storage pools, you can use something like virsh pool-list --all
There's extensive documentation at docs.redhat.com.
There's also ZFS on RedHat, but you have to install it separately.
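Depending on what kind of pool you mean, one of the following should apply (a sketch; vgs and zpool require LVM and ZFS to be installed, respectively):

# libvirt storage pools (including LVM-backed ones)
virsh pool-list --all

# plain LVM volume groups
vgs

# ZFS pools, if ZFS on Linux is installed
zpool list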

Do containers depend on a specific host?

I have an application which has kernel-space code, compiled against Linux kernel 2.6.32-431.el6.x86_64 (CentOS 6.5). It is then installed as a kernel module to run the application. When I tried to containerize my application with Docker (installed on an Amazon Linux AWS EC2 instance), it complained that the higher kernel version is incompatible with the module. Does that mean I have to install Docker on a host which has kernel version 2.6.32-431.el6.x86_64? If yes, do our containers depend on a specific host machine?
Containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in userspace on the host operating system.
https://www.docker.com/what-docker
Docker containers use the host's kernel. You could in principle install the kernel version the container requires as the host's kernel, but you will not be able to run a kernel compiled for CentOS 6 on Amazon Linux.
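A quick way to see the shared kernel (the image tag is just an example):

# The kernel version reported inside a container matches the host,
# regardless of which distribution the image is based on
uname -r                            # on the host
docker run --rm centos:6 uname -r   # inside a CentOS 6 container: same version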
Considering your specific application requirements, I would suggest using a "real" virtualization solution such as Xen or KVM that allows you to run your own kernel in a VM.

Linux Kernel Modules between different kernel patches

I am running into an issue with RH7 kernels. We are running RH6/CentOS 6 based systems; we normally compile the kernel module once (on CentOS 6.6) and we can then install that kernel module on another CentOS kernel in the same series (say CentOS 6.4).
With CentOS 7 (3.10 kernel) I cannot build the kernel module against, say, the 3.10.0-329 (CentOS 7.2) kernel and install it on kernel version 3.10.0-227 (CentOS 7.1); insmod returns "invalid module format".
Has anyone run into similar issues? Are there any workarounds?
Thanks
--
Jimmy
Probably you want a binary blob, i.e. a ready-made object file which is part of the module. Look into the kernel documentation to learn how to build a module which uses binary blobs. – Tsyvarev
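A quick way to diagnose this kind of mismatch (mymodule.ko is a placeholder name):

# Compare the kernel the module was built for with the running kernel
modinfo -F vermagic mymodule.ko
uname -r

# dmesg usually shows the exact reason insmod rejected the module
sudo insmod mymodule.ko
dmesg | tail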

Understanding qemu-kvm

On a remote machine, I have qemu-x86_64 installed. Upon trying to find the version of the same I'm presented with the following information.
$ qemu-x86_64 -version
qemu-x86_64 version 1.0 (qemu-kvm-1.0), Copyright (c) 2003-2008 Fabrice Bellard
I'm trying to understand what qemu-kvm is. We will not discuss whole-system emulation here, only QEMU user-level emulation.
QEMU supports two kinds of emulation: system-level and user-level. In system-level emulation the whole system is emulated, and an OS can be booted up with it. In user-level emulation, I'm able to run binaries compiled for one architecture on another architecture, e.g. I end up being able to run Linux MIPS binaries on an x86-64 machine.
The version information for qemu-x86_64 on my machine is as follows.
qemu-x86_64 version 2.2.0 (Debian 1:2.2+dfsg-5expubuntu9.2), Copyright (c) 2003-2008 Fabrice Bellard
I'm trying to understand what kind of qemu-x86_64 is running on the remote machine. Where does KVM fit in? The remote machine is also a 64-bit machine.
When I run a 64-bit binary on the remote machine using its qemu-x86_64, is there no binary translation going on; is QEMU instead using KVM to execute the instructions on the actual hardware? If so, what part does QEMU play? Does it handle privileged instructions? I'm trying to understand where exactly KVM comes into the picture.
In essence, binary translation allows you to run instructions of another architecture (e.g., MIPS) on your physical machine. The target architecture is simulated; for example, the registers of the simulated MIPS machine are just some variables in the QEMU process.
It is true that QEMU can use binary translation to simulate an x86_64 machine on your computer. However, because it is simulating the same architecture, the instructions can actually be executed directly by the host machine without translation. QEMU employs techniques that use hardware support from the CPU and OS/software support such as KVM/Xen. It's still simulation, or you can call it virtualization.
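A quick way to check whether KVM acceleration is available for system emulation (disk.img below is a placeholder image; KVM is exposed through /dev/kvm):

# If /dev/kvm exists, the CPU and kernel support hardware virtualization
ls -l /dev/kvm

# System emulation can request KVM explicitly; this fails if KVM is unavailable
qemu-system-x86_64 -enable-kvm -m 512 disk.img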

How to distribute kernel modules using a RPM?

What is the recommended approach for distributing a kernel module using an RPM? Ideally, for portability, I would like the RPM to be able to build the module against the running kernel's headers before installing.
I've used Dynamic Kernel Module Support (DKMS) before to distribute a Linux driver targeting multiple kernel versions (2.6.31-37). DKMS itself is a collection of bash scripts that can automate both the building and rebuilding of a kernel module based on the currently installed version of Linux. You can distribute drivers as either RPM or DEB files which contain the driver source, the DKMS scripts, and optionally, binary versions of the driver tied to particular kernel versions.
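As a sketch, the DKMS workflow looks roughly like this (the module name mymodule, the version 1.0, and the destination directory are hypothetical; dkms.conf is sourced as shell by the DKMS scripts):

# /usr/src/mymodule-1.0/dkms.conf
PACKAGE_NAME="mymodule"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="mymodule"
DEST_MODULE_LOCATION[0]="/kernel/drivers/misc"
AUTOINSTALL="yes"

# Register the source tree, build against the running kernel, and install
dkms add -m mymodule -v 1.0
dkms build -m mymodule -v 1.0
dkms install -m mymodule -v 1.0

# Package the source as an RPM that rebuilds the module on the target system
dkms mkrpm -m mymodule -v 1.0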
