Creating threads and sockets from within QEMU - multithreading

I noticed that QEMU 1.0.1 provides API functions such as qemu_thread_create() and qemu_mutex_init() from qemu-thread.h, etc., as well as objects like QemuTcpServerSocket from qemu_socket.h.
What is the purpose of these API functions? Do you have to use them to avoid problems, or are they just abstractions over the operating system?
In recent QEMU versions, the QEMU socket and thread API functions seem to have changed.
For QEMU 2.0 and higher, is it necessary to use the QEMU functions for thread and socket creation? Where can I find them?
My goal:
I would like to write a dynamic library that is linked into the QEMU code. This library should be able to spawn a thread which opens a listening socket.
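For illustration only, here is a minimal sketch of what such a library's entry point could look like using plain POSIX threads and sockets (the file name listener.c, the port, and the loopback address are arbitrary; it deliberately does not use the QEMU wrappers, since whether those are required is exactly the question):

    /* listener.c - hypothetical sketch, not using QEMU's qemu_thread_* wrappers.
     * Build e.g.: gcc -shared -fPIC -o liblistener.so listener.c -lpthread */
    #include <pthread.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static void *listener_thread(void *arg)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        if (srv < 0)
            return NULL;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(5555);              /* arbitrary example port */

        if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) == 0 &&
            listen(srv, 1) == 0) {
            int client = accept(srv, NULL, NULL); /* blocks only this thread */
            if (client >= 0)
                close(client);
        }
        close(srv);
        return NULL;
    }

    /* Runs automatically when the shared library is loaded into the QEMU process. */
    __attribute__((constructor))
    static void start_listener(void)
    {
        pthread_t tid;
        if (pthread_create(&tid, NULL, listener_thread, NULL) == 0)
            pthread_detach(tid);
    }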

Related

Is it possible to run node.js on an RTOS?

I have an 8-core ARM device and I was wondering whether I could use it to build a drone. Does a real-time operating system require a specific type or method of programming? Is it possible to use node.js with any of these systems?
In short, yes, it is possible to run node.js on an RTOS.
About RTOS
You should avoid buffering delays. For example, don't block the Node.js event loop and don't use the Node.js process.nextTick function.
Use an event-based approach for better code architecture.
Think like an embedded developer, not like a web developer.
This is an interesting and not trivial job.
About node.js details
As you can see at the link, the device runs an OS with Linux kernel 4.9 LTS.
You can install Node.js and npm modules on such a Linux system.
There can be an issue with running native functionality from Node.js: you need a Node.js wrapper module written in C. A good example for the Raspberry Pi is wiringpi-node.
Python can be used as an alternative to node.js.

Linux Kernel device driver needs access to shared object in userspace

I am trying to write a network device driver for Linux. The device that I have has an API available that allows me to access all of the features I need through a shared object that exists in userspace.
I want to write a network driver such that I can make the device show up as a CAN interface. However, in order to interact with the device I need to use a specific shared object that exists in userspace.
The reason that I need a network device driver is to expose a CAN Interface that can be interacted with via the SocketCAN utilities.
Is there a way that I can write a network device driver in userspace? Or what would be the best way for me to architect a solution?
TL;DR
I need to write a device driver for a device which can only be interacted with from userspace via a supplied shared object that exposes the API. I need the device to show up as a network interface in order to use the SocketCAN utilities and other applications that communicate with CAN interfaces on Linux.
What are my options here? What can I do?
Thanks!
So you are saying that there is no driver for your network device in the kernel at all, and it can only be accessed via some user-space library? In that case, the shared library you mentioned is probably communicating with your network device by memory-mapping the /dev/mem file, in order to be able to read/write hardware registers, or perhaps by using some UIO driver.
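For reference, such a user-space library typically maps the device's register window roughly like the sketch below (REG_BASE and REG_SIZE are hypothetical values for your particular hardware):

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REG_BASE 0x40000000UL   /* hypothetical physical base of the device */
    #define REG_SIZE 0x1000UL       /* hypothetical size of the register window */

    /* Returns a pointer through which hardware registers can be read/written. */
    volatile uint32_t *map_device_registers(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        void *regs;

        if (fd < 0)
            return NULL;
        regs = mmap(NULL, REG_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, REG_BASE);
        close(fd);                  /* the mapping stays valid after close() */
        return regs == MAP_FAILED ? NULL : (volatile uint32_t *)regs;
    }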
So your driver should also be developed in user space, then... The actual question to ask is: how do you use the kernel CAN API from user space, and is that possible at all in the first place? For answers, I guess you should look at Documentation/networking/can.txt. And if the answer is "no" (meaning you can't expose a CAN interface from user space), then you should also develop a kernel driver which would interact with your user-space part, exposing the CAN interface.
In an ideal world, the whole driver architecture would look like this:
But you need to use some (proprietary, if I understand correctly) shared-library API to interact with your device. So I propose you use the driver architecture depicted in the image below:
blue stands for parts that need to be developed
magenta is for already existing code
In a nutshell, your app and driver together form a shim between the SocketCAN API and the shared-library API.
So you need to develop two components:
Driver (on the kernel side); a skeleton is sketched after this list. It's in charge of:
talking to the SocketCAN utilities
talking to your user-space application
Application (in user space); it should probably be a daemon, since it will be running constantly. It's in charge of:
talking to the shared library
talking to your driver
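As an illustration of the driver component, here is a minimal, hypothetical skeleton using the standard CAN device helpers from linux/can/dev.h (all my_can_* names are placeholders; a real driver must also fill in the bit-timing fields of struct can_priv, and the xmit path would hand frames to the IPC channel instead of dropping them):

    #include <linux/module.h>
    #include <linux/netdevice.h>
    #include <linux/can/dev.h>

    struct my_can_priv {
        struct can_priv can;    /* must be the first member for the CAN core */
        /* IPC state towards the user-space daemon would live here */
    };

    static netdev_tx_t my_can_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        /* A real driver would queue the CAN frame to the user-space daemon
         * over the IPC channel here instead of just dropping it. */
        dev_kfree_skb(skb);
        return NETDEV_TX_OK;
    }

    static int my_can_open(struct net_device *dev) { return open_candev(dev); }
    static int my_can_stop(struct net_device *dev) { close_candev(dev); return 0; }

    static const struct net_device_ops my_can_netdev_ops = {
        .ndo_open       = my_can_open,
        .ndo_stop       = my_can_stop,
        .ndo_start_xmit = my_can_xmit,
    };

    static struct net_device *my_can_dev;

    static int __init my_can_init(void)
    {
        my_can_dev = alloc_candev(sizeof(struct my_can_priv), 0);
        if (!my_can_dev)
            return -ENOMEM;
        my_can_dev->netdev_ops = &my_can_netdev_ops;
        return register_candev(my_can_dev);   /* shows up as can0, can1, ... */
    }

    static void __exit my_can_exit(void)
    {
        unregister_candev(my_can_dev);
        free_candev(my_can_dev);
    }

    module_init(my_can_init);
    module_exit(my_can_exit);
    MODULE_LICENSE("GPL");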
The last remaining question is which kernel API to use for communication between your kernel-space driver and the user-space application (marked as IPC in the picture). It depends strictly on what kind of data you are going to send between the two, how much data you will want to send, and which way of sending is most appropriate for your task. It may also depend on your shared-library API: you probably don't want to spend much CPU time converting message formats (as you already have triple context switching with this driver architecture, which is not really nice for performance). So it should probably be something packet-oriented, like Netlink.
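As a sketch of the user-space side of such an IPC channel, a raw netlink socket could be opened roughly like this (NETLINK_USERSOCK is used only as a placeholder protocol number; a real driver would define its own netlink or generic-netlink family and message format):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    /* Open a netlink socket that the daemon can use to exchange CAN frames
     * with the kernel driver. Returns a file descriptor, or -1 on error. */
    int open_driver_channel(void)
    {
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_USERSOCK);
        struct sockaddr_nl addr;

        memset(&addr, 0, sizeof(addr));
        addr.nl_family = AF_NETLINK;
        addr.nl_pid = getpid();          /* unicast address of this process */

        if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            if (fd >= 0)
                close(fd);
            return -1;
        }
        return fd;   /* exchange nlmsghdr-framed messages carrying CAN frames */
    }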
The following reading can be useful to figure out which IPC to use:
Kernel Space - User Space Interfaces
Linux kernel interfaces

Events that we register with the OS for any files added to the System

I want to process certain types of files, let's say PDFs, whenever they get copied/downloaded to the system.
Is there any way that we can register with the OS to listen for this kind of event?
I am ready to implement separate solutions for Windows, Mac and Linux if required.
Windows has the concept of filesystem filter drivers (kernel-mode ones). Using them, your software can intercept any filesystem operation and alter the data or just perform some action (or even prevent the operation). You can write such a driver yourself or use our CallbackFilter library, which includes a pre-created driver and provides an API for use in user mode.
The alternative approach on Windows is to use the FindFirstChangeNotification system function to register for notifications. This function works differently from a filter driver.
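A minimal sketch of that approach, watching a single example directory (note the notification only says that something changed, so you still have to rescan the directory yourself to find the new PDF):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Watch C:\Downloads (an example path) for file creations/renames. */
        HANDLE h = FindFirstChangeNotificationA("C:\\Downloads", FALSE,
                                                FILE_NOTIFY_CHANGE_FILE_NAME);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        for (;;) {
            if (WaitForSingleObject(h, INFINITE) != WAIT_OBJECT_0)
                break;
            printf("Directory contents changed - rescan for new .pdf files\n");
            if (!FindNextChangeNotification(h))   /* re-arm the notification */
                break;
        }
        FindCloseChangeNotification(h);
        return 0;
    }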
Mac OS X doesn't have the concept of filter drivers, but it has the FSEvents API.
Update (I missed the Linux part): on Linux, inotify exists.
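A minimal inotify sketch watching one example directory for newly written files (IN_CLOSE_WRITE fires once a copied/downloaded file has been fully written):

    #include <stdio.h>
    #include <string.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        int fd = inotify_init1(0);

        if (fd < 0)
            return 1;
        /* /home/user/Downloads is just an example path to watch. */
        if (inotify_add_watch(fd, "/home/user/Downloads",
                              IN_CLOSE_WRITE | IN_MOVED_TO) < 0)
            return 1;

        for (;;) {
            ssize_t len = read(fd, buf, sizeof(buf));
            char *p;

            if (len <= 0)
                break;
            for (p = buf; p < buf + len;
                 p += sizeof(struct inotify_event)
                      + ((struct inotify_event *)p)->len) {
                struct inotify_event *ev = (struct inotify_event *)p;
                if (ev->len && strstr(ev->name, ".pdf"))
                    printf("New PDF: %s\n", ev->name);
            }
        }
        return 0;
    }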

RPC from Windows to Linux

Is there some (working) example of how to do RPC from Windows to Linux?
The client should be a Windows NT application; the server is Linux.
It needs to be MSRPC.
No CORBA, no XML-RPC, no SUN RPC, etc.
MSDN says this:
RPC can be used in all client/server applications based on Windows
operating systems. It can also be used to create client and server
programs for heterogeneous network environments that include such
operating systems as Unix and Apple.
Unfortunately, after spending a few hours on Google, I'm giving up.
My expectation:
The Linux node should have Samba installed, because its MSRPC implementation works.
Using an IDL file, I generate stubs for both client and server.
The client is built using MSVC.
The server is built using gcc with some includes/libraries from Samba (or other libs).
The Linux node must have some kind of RPC port mapper.
Can someone point me in the right direction?
I think you have two possible ways to deal with this:
1- You can try using DCOM with Wine, which means that you will actually write your code for Windows, but at the same time you can test your results along the way and avoid using WinAPI calls that Wine is not able to handle properly. This approach will allow you to generate stub code from your IDL files.
2- You can try using the Samba RPC Pluggable Modules, but I am afraid that in this case the RPC communication will be more primitive.
Edit:
It seems there are many other ways. I found a list of libraries in the DCOM Wikipedia article; j-Interop, for example, looks particularly promising.

Implementing a kernel debugging module for a Linux guest OS inside a VMware VM

Sorry for the rather long post.
I need some input regarding a project that I am going to undertake.
I am trying to make an application that collects kernel debugging information from a guest Linux OS, located inside a VMware virtual machine, and sends it to a host OS efficiently.
So far, I have found a similar project, but written for Windows[1].
The author of the project wrote a DLL that is loaded into memory and replaces the implementation of the KdSendPacket and KdReceivePacket functions to use the VMware GuestRpc[2] mechanism instead of the slow serial port.
The data is then sent to a debugging application on the host (KD or WinDbg) through a named pipe.
The author claims a speed-up of up to 45% by avoiding the serial-port transmission.
I am trying to achieve something similar, but for Linux, to make the debugging process a little faster than using the serial port.
My concrete questions are:
Do any similar applications exist?
I didn't manage to find any.
Would such an application be worth it, compared for example with netconsole[3]?
What method of intercepting printk messages would you suggest?
Is there an equivalent of KdSendPacket/KdReceivePacket on Linux?
[1]. http://virtualkd.sysprogs.org/dox/operation.html
[2]. http://articles.sysprogs.org/kdvmware/guestrpc.shtml
[3]. http://www.kernel.org/doc/Documentation/networking/netconsole.txt
Using the serial port is really suboptimal... even the (virtual) network would be preferable to that, but getting back to host-guest IPC channels, VMware's VMCI comes to mind.
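On reasonably recent guest kernels, VMCI is exposed through vSockets (the AF_VSOCK address family), so the guest side of such a channel could be opened roughly as sketched below (that assumption and the port number are mine; check what your kernel and VMware Tools actually provide):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    /* Open a stream channel from the guest to a listener on the host. */
    int open_host_channel(void)
    {
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        struct sockaddr_vm addr;

        memset(&addr, 0, sizeof(addr));
        addr.svm_family = AF_VSOCK;
        addr.svm_cid = VMADDR_CID_HOST;   /* talk to the hypervisor host */
        addr.svm_port = 12345;            /* arbitrary example port */

        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            if (fd >= 0)
                close(fd);
            return -1;
        }
        return fd;   /* stream the collected debug data to the host */
    }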
Many approaches can be used to achieve your goal; the methods below can be applied if the network is connected:
Use the syslog service and transfer the log over the network to your server:
syslogd and syslog-ng seem to support sending logs to a log server with some filter criteria.
Directly call TCP/UDP socket functions in your kernel module to send your collected data back to the server (a sketch of this follows below).
As another approach, you may write an application on the host machine that calls the hypervisor's shared-memory access functions to read the memory buffer of your kernel module. The Xen/KVM hypervisors both support such APIs, but I am not sure whether VMware has this kind of library.
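A rough sketch of the in-kernel UDP approach (it assumes a reasonably recent kernel where sock_create_kern() takes a struct net * as its first argument; the collector address 192.168.122.1:6666 is an arbitrary example):

    #include <linux/module.h>
    #include <linux/net.h>
    #include <linux/inet.h>
    #include <linux/in.h>
    #include <linux/uio.h>
    #include <net/net_namespace.h>

    /* Send one collected log buffer to a UDP collector on the host/server. */
    static int send_log(const char *buf, size_t len)
    {
        struct socket *sock;
        struct sockaddr_in dst;
        struct msghdr msg;
        struct kvec vec = { .iov_base = (void *)buf, .iov_len = len };
        int ret;

        ret = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
        if (ret < 0)
            return ret;

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(6666);
        dst.sin_addr.s_addr = in_aton("192.168.122.1");

        memset(&msg, 0, sizeof(msg));
        msg.msg_name = &dst;
        msg.msg_namelen = sizeof(dst);

        ret = kernel_sendmsg(sock, &msg, &vec, 1, len);
        sock_release(sock);
        return ret;
    }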
