Which Linux kernels support POSIX message queues?

I successfully implemented a POSIX message queue on Ubuntu 10.04 (kernel version 2.6.38).
But the same code, built and run on the same Ubuntu 10.04 (kernel version 2.6.37) on an ARM processor (a thin-client device like the HP T410), fails.
The failure occurs on any of the message-queue calls (e.g. mq_open, unlink_message_queue()):
OSError: [Errno 38] Function not implemented
Online information shows that POSIX MQ has been supported since Linux kernel version 2.6.6.
This is very confusing to me (being new to the Linux world).
How can the functionality work on an x86 machine with kernel 2.6.38 but not on kernel 2.6.37 running on an ARM processor, when the documentation says support arrived in 2.6.6?
Is there a better way to verify whether the current OS supports it?
Is it possible the kernel is trimmed down on thin-client devices?
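If it helps, ENOSYS (errno 38) specifically means the kernel was built without the feature (CONFIG_POSIX_MQUEUE), so a runtime probe can distinguish that case from ordinary failures. A minimal C sketch, assuming glibc and linking with -lrt; the queue name /mq_probe is just a placeholder:

    #include <stdio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <mqueue.h>

    int main(void)
    {
        /* Try to create a throwaway queue; ENOSYS means the kernel
         * was built without CONFIG_POSIX_MQUEUE. */
        mqd_t q = mq_open("/mq_probe", O_CREAT | O_RDWR, 0600, NULL);
        if (q == (mqd_t)-1) {
            if (errno == ENOSYS)
                puts("POSIX message queues are not supported by this kernel");
            else
                perror("mq_open failed for another reason");
            return 1;
        }
        mq_close(q);
        mq_unlink("/mq_probe");
        puts("POSIX message queues are supported");
        return 0;
    }

On distributions that ship /proc/config.gz or /boot/config-$(uname -r), grepping those files for CONFIG_POSIX_MQUEUE answers the same question without running any code.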

Is it possible to get OpenCL on Windows Subsystem for Linux?

I've been trying for the past day to get TensorFlow built with OpenCL on the Windows Subsystem for Linux.
I followed this guide, but when I type clinfo it says
Number of platforms 0
Then typing /usr/local/computecpp/bin/computecpp_info gives me
OpenCL error -1001: Unable to retrieve number of platforms. Device Info:
Cannot find any devices on the system. Please refer to your OpenCL vendor documentation.
Note that OPENCL_VENDOR_PATH is not defined. Some vendors may require this environment variable to be set.
Am I doing anything wrong? Is it even possible to install OpenCL on the Windows Subsystem for Linux?
Note:
I'm using an AMD R9 390X from MSI, on 64-bit Windows Home Edition.
With the launch of WSL2, CUDA programs are now supported in WSL (more information here); however, there is still no support for OpenCL as of this writing: https://github.com/microsoft/WSL/issues/6951.
According to a Microsoft representative in this forum post, Windows Subsystem for Linux does not support OpenCL or CUDA GPU programs, and support is not currently planned. To experiment with TensorFlow/OpenCL it would probably be easiest to install Linux in a dual-boot configuration.
You could use the Intel OpenCL SDK for the CPU, https://software.intel.com/en-us/articles/opencl-drivers.
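For what it's worth, "Number of platforms 0" simply means the ICD loader found no vendor drivers, which you can reproduce with a few lines of host code. A minimal C sketch, assuming the OpenCL headers and an ICD loader are installed (build with gcc probe.c -lOpenCL; the file name is arbitrary):

    /* Ask the ICD loader how many OpenCL platforms are visible;
     * this is essentially the first thing clinfo does. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_uint n = 0;
        cl_int err = clGetPlatformIDs(0, NULL, &n);
        if (err != CL_SUCCESS || n == 0) {
            printf("No OpenCL platforms found (err = %d)\n", (int)err);
            return 1;
        }
        printf("Number of platforms: %u\n", (unsigned)n);
        return 0;
    }

Until WSL gains OpenCL support this will report zero platforms there regardless of which GPU SDK is installed; a CPU-only implementation such as the Intel SDK mentioned above should still be able to register a platform, since it needs no GPU access.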

Linux porting for a multi-core RISC-V processor

We are developing a multi-core processor with the RISC-V architecture.
We have already ported Linux to a single-core RISC-V processor, and it works on our own FPGA-based board with a BusyBox rootfs.
I now want to port Linux to the multi-core RISC-V processor.
My questions are:
Does the gnu-riscv-gcc toolchain available now support multi-core?
Does the spike simulator available now support multi-core?
Do I need to make any changes to the bbl bootloader (Berkeley Boot Loader) to support multi-core?
What changes should I make to my single-core Linux kernel to support multi-core?
The current RISC-V ecosystem already supports SMP Linux.
No changes to the compiler are required for multicore.
Spike can simulate multicore when using the '-p' flag.
BBL supports multicore.
Before building Linux, configure it to support SMP (see the sketch below).
Any hiccups are probably due to the toolchain being out of sync with the newest privileged-spec changes. Last fall, users successfully built and ran multicore Linux on RISC-V.
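A sketch of those two knobs; the option names below are the standard ones, but verify them against your kernel tree and your spike version:

    # kernel .config: enable SMP before building
    CONFIG_SMP=y
    CONFIG_NR_CPUS=4
    # simulate four harts in spike, booting a bbl built with the kernel as its payload
    spike -p4 bbl

CONFIG_NR_CPUS only caps how many CPUs the kernel will bring up; the -p count and the hart count in your device tree should agree with it.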
This is all expected to work out of the box. My standard testing flow for Linux and QEMU pull requests is to boot a Fedora root filesystem on QEMU via Linux+BBL. Instructions can be found on the QEMU Wiki Article about RISC-V. This will boot in our "virt" board, which uses VirtIO based devices. These devices have standard upstream Linux drivers that are very well supported, so there isn't really any platform-level work to be done.
In addition to the standard VirtIO-based devices, SiFive has devices that are part of the Freedom SoC platform. If your platform differs significantly from SiFive's Freedom platform, then you'll need some additional drivers in both Linux and BBL.
We maintain an out-of-tree version of the drivers we haven't cleaned up for upstream yet in freedom-u-sdk, which should give you a rough idea of how much work it is. Running make qemu in that repository will boot Linux on QEMU via BBL, and running make will show you how to flash an SD card image for the HiFive Unleashed board.

On which CPU core is an OpenCL kernel running?

I want to determine exactly how AMD schedules its OpenCL kernels on the CPU, but I could not find any OpenCL function that reports the physical processor/core ID a kernel is running on.
I could only find the following links related to my problem:
Getting the machine serial number and CPU ID using C/C++ in Linux
How to know on which physical processor and on which physical core my code is running
NUMA Get Current Node/Core
I tried the above, but none of the solutions worked. I found that OpenCL kernels do not support C99 headers like stddef.h (which sched.h requires), nor calls like fopen().
Is there any way I can see exactly how the OpenCL kernels have been assigned to each CPU core/processor?
Note: I am using Ubuntu 14.04, gcc version 4.8.2 and AMD APP SDK 3.0.
Thanks for your help!
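One hedged avenue, since ordinary OpenCL C kernels cannot call sched_getcpu(): CPU devices usually report the CL_EXEC_NATIVE_KERNEL capability, which lets you enqueue a plain C function through the runtime's scheduler and ask the OS where it landed. A minimal sketch, assuming a CPU device with that capability (error checking mostly omitted; build with gcc probe.c -lOpenCL):

    #define _GNU_SOURCE        /* for sched_getcpu() */
    #include <stdio.h>
    #include <sched.h>
    #include <CL/cl.h>

    /* Runs inside the OpenCL runtime's worker thread pool. */
    static void CL_CALLBACK report_core(void *args)
    {
        (void)args;
        printf("native task ran on core %d\n", sched_getcpu());
    }

    int main(void)
    {
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        /* Enqueue the native function a few times to sample the cores used. */
        for (int i = 0; i < 4; i++)
            clEnqueueNativeKernel(q, report_core, NULL, 0,
                                  0, NULL, NULL, 0, NULL, NULL);
        clFinish(q);

        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
        return 0;
    }

This samples where native tasks land, not the per-work-group placement of compiled kernels, but it at least makes the runtime's thread-to-core mapping observable.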

Does installing a driver increase the Linux kernel's memory footprint?

I have read that the Linux kernel is a monolithic kernel, with its drivers contained within it in a single file.
For example, say I have two Linux kernels, 3.16.0.40 and 3.16.0.50.
I have currently booted the system with 3.16.0.40 and installed, for example, the nvidia driver.
Is the driver pushed into the kernel?
If so, and I then select 3.16.0.50 from GRUB and boot the system,
can it access the driver installed under the previous kernel?
The Linux kernel is indeed monolithic, but it also borrows some micro-kernel features. One of those is support for loadable kernel modules. So the Linux kernel has two options for a driver:
a driver can be built-in; those drivers reside inside the kernel image file, which is /boot/vmlinuz-$(uname -r)
a driver can be loadable; those drivers are separate files; look at /lib/modules/$(uname -r)/kernel/*
So in your case you will have two video driver files on your system, one for each kernel version, and only one will be used: the one matching the kernel you are running at the moment.
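To make the per-version split concrete, here is a small C sketch that prints both locations for the running kernel (the paths follow the convention above; exact layouts can vary by distribution):

    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        /* Each kernel version gets its own image and module tree,
         * which is why a module installed under 3.16.0.40 is not
         * seen when 3.16.0.50 is booted. */
        printf("kernel image: /boot/vmlinuz-%s\n", u.release);
        printf("modules dir : /lib/modules/%s/kernel/\n", u.release);
        return 0;
    }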

Running x86 printer driver binaries on ARMv6

We are porting a solution to ARM that was originally designed to run on x86/x64 Debian-based systems.
So far so good; however, along with this solution we ship a printer that comes with Linux drivers (x86 and x64 only). Unfortunately, the manufacturer does not provide ARM drivers for it, nor is it able to compile any from source (I don't know why).
I've installed the printer with CUPS using the x86 binary, but of course, whenever I send a job to the printer, the ARM system cannot execute the binary, and CUPS reports:
/usr/lib/cups/filter/rastertotg2460 failed
How can I run x86 binaries on ARMv6-based systems?
The ARM operating system is Raspbian, running on a Raspberry Pi B+ board, and the binaries (if you want to take a look) are here.
EDIT:
I was also made aware of this proprietary solution that claims to make it possible to run x86 binaries on ARM systems, but all the demonstrations are on ARMv7; I'm not sure it will work on Raspbian on a Raspberry Pi B+ board.
I think this is going to require some serious work, but I had it the wrong way around initially.
Since you want to drive the printer, you're going to have to do the x86 emulation "inside" the CUPS system. A stand-alone x86 emulator is not enough, since those aim to give you a full x86 system with peripheral hardware and the like. You don't need that; you just need to drive the printer.
I can imagine using some kind of x86 emulation library inside a CUPS "virtual" driver, which in turn loads the x86 binary you have and feeds it into the emulator. It would then need to expose the expected CUPS environment to the x86 code inside the emulator.
Something like Soft86 might be a good starting-point.
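As a hedged variation on that idea: rather than an emulation library, a thin native CUPS filter could delegate to a user-mode emulator such as qemu-i386 (from qemu-user), which translates x86 Linux syscalls directly, so no full PC emulation is needed. The two paths below are hypothetical placeholders for wherever the emulator and the renamed vendor filter actually live:

    /* Thin wrapper filter: CUPS invokes filters as
     *   job-id user title copies options [file]
     * with the raster stream on stdin and the printer on stdout,
     * so we only need to forward the arguments and exec. */
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        char *args[16];
        int n = 0;

        args[n++] = "/usr/bin/qemu-i386";                       /* emulator (assumed path) */
        args[n++] = "/usr/lib/cups/filter/rastertotg2460.x86";  /* renamed x86 filter (assumed) */
        for (int i = 1; i < argc && n < 15; i++)
            args[n++] = argv[i];    /* forward the CUPS arguments */
        args[n] = NULL;

        execv(args[0], args);       /* stdin/stdout stay wired to CUPS */
        perror("execv");
        return 1;
    }

In practice, registering qemu-user (or qemu-user-static) with the kernel's binfmt_misc facility makes this redirection happen transparently, with no wrapper at all; whether an emulated raster filter is fast enough on an ARMv6 Pi is a separate question.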
