Can shared object have its own thread running in the background? - linux

I am a beginner in Linux programming and I am not sure whether what I expect is feasible. I would appreciate it if someone could give me some tips.
What I want to do is develop a shared object (.so file) which could be used by multiple applications. If one of those applications calls an initialize function in the shared object, a new thread will be created to run an infinite loop accepting incoming events. This thread will keep running even after the initialize function returns, so all applications could keep sending events to this thread for processing.
I wonder whether this can be achieved? Any ideas would be appreciated.

As has been noted in the comments, this cannot be done across processes: you cannot directly invoke a function in another process. A shared library is mapped separately into each process that loads it, so a thread started by your initialize function lives only inside the process that called it; the other applications have no way to post events to it directly. This is why RPC, IPC, web services etc. were invented.

Related

Is ESPAsyncWebServer Thread safe

I would like to use the web server library ESPAsyncWebServer to create an API on my ESP32. But I was wondering: since the web server most likely runs on the secondary processor, do I need to make access to shared data thread safe, so that the main loop and the web server can both access it?
Or are they actually running on the same thread, and I don't need to worry about it?
Yes, you need to implement thread safe access to any shared data. Asynchronous processing means it runs in its own thread. See the documentation for clarification.

Communication between multiple programs

I am currently planning a complicated networking project on Windows IoT Enterprise. Basically, I will have a C program keeping alive a special network interface. This C program should receive tasks in some way from other programs on the same host, which will generally be written in all sorts of languages (e.g. node.js). I have never done this kind of cooperation between tasks. Do you have any advice on how a node.js server can pass information to an already running C program, and preferably receive a success code or an error message?
It is very important to me that this process is as fast as possible, as this solution will handle several thousand requests per second.
In one of the comments I was pointed towards zeroMQ, and I am now using it successfully in my application, thank you for the help!

How do worker threads work in Node.js?

Nodejs can not have a built-in thread API like java and .net do. If threads are added, the nature of the language itself will change. It's not possible to add threads as a new set of available classes or functions.
Node.js 10.x added worker threads as an experimental feature, and they have been stable since 12.x. I have gone through a few blogs but did not understand much, maybe due to a lack of knowledge. How are they different from ordinary threads?
Worker threads in Javascript are somewhat analogous to WebWorkers in the browser. They do not share direct access to any variables with the main thread or with each other and the only way they communicate with the main thread is via messaging. This messaging is synchronized through the event loop. This avoids all the classic race conditions that multiple threads have trying to access the same variables because two separate threads can't access the same variables in node.js. Each thread has its own set of variables and the only way to influence another thread's variables is to send it a message and ask it to modify its own variables. Since that message is synchronized through that thread's event queue, there's no risk of classic race conditions in accessing variables.
Java threads, on the other hand, are similar to C++ or native threads in that they share access to the same variables and the threads are freely timesliced so right in the middle of functionA running in threadA, execution could be interrupted and functionB running in threadB could run. Since both can freely access the same variables, there are all sorts of race conditions possible unless one manually uses thread synchronization tools (such as mutexes) to coordinate and protect all access to shared variables. This type of programming is often the source of very hard to find and next-to-impossible to reliably reproduce concurrency bugs. While powerful and useful for some system-level things or more real-time-ish code, it's very easy for anyone but a very senior and experienced developer to make costly concurrency mistakes. And, it's very hard to devise a test that will tell you if it's really stable under all types of load or not.
node.js attempts to avoid the classic concurrency bugs by separating the threads into their own variable spaces and forcing all communication between them to be synchronized via the event queue. This means that threadA/functionA is never arbitrarily interrupted while some other code in your process changes the shared variables it was accessing behind its back.
node.js also has a backstop: it can run a child_process that can be written in any language and can use native threads if needed, or one can hook native code and real system-level threads right into node.js using the add-on SDK (which communicates with node.js Javascript through the SDK interface). In fact, a number of node.js built-in libraries do exactly this to surface functionality that requires that level of access to the node.js environment. For example, the implementation of file access uses a pool of native threads to carry out file operations.
So, with all that said, there are still some types of race conditions that can occur and this has to do with access to outside resources. For example if two threads or processes are both trying to do their own thing and write to the same file, they can clearly conflict with each other and create problems.
So, using Workers in node.js still has to be aware of concurrency issues when accessing outside resources. node.js protects the local variable environment for each Worker, but can't do anything about contention among outside resources. In that regard, node.js Workers have the same issues as Java threads and the programmer has to code for that (exclusive file access, file locks, separate files for each Worker, using a database to manage the concurrency for storage, etc...).
This comes under the Node.js architecture. Whenever a request reaches Node, it is placed on the event queue and then picked up by the event loop. The event loop checks whether the request is blocking or non-blocking I/O (blocking I/O meaning operations that take time to complete, e.g. fetching data from somewhere else). The event loop hands blocking I/O off to the thread pool, which is a collection of worker threads. The blocking operation is assigned to one of the worker threads, which carries it out (e.g. fetching data from a database); on completion the result is sent back to the event loop and then on to execution.

rpcgen for Linux

We have used rpcgen to create an RPC server on a Linux machine (C language). Even when there are many calls to our program, requests are still handled single-threaded.
I see that this has been a common problem since 2004; is there a new rpcgen (or another generator) that solves it?
Thanks,
Kobi
rpcgen will simply generate the serialization routines. Your server might be coded to have several threads. Learn more about pthreads.
You probably should not have too many threads (e.g. at most a dozen, not thousands). You could design your program to use some thread pool, or simply to have a fixed set of worker threads which are continuously handling RPC requests (with the main thread just in charge of accepting connections, etc).
Read rpc(3). You might consider not using svc_run in your server, but instead doing it your own way with threads. Beware that if you use threads, you'll need to synchronize, perhaps with mutex.
You could also consider JSONRPC, or perhaps making your C program a specialized HTTP server (e.g. using libonion) and having your clients make HTTP requests (maybe with libcurl). And you might consider a message-passing architecture, perhaps with Open MPI.
Beware that the Sun RPC version is being abandoned; look for tirpc instead.

Tibco RV and threads

I'm facing a difficult situation while running an application built with IBM Informix 4GL and Tibco RV library (libtibrv.so).
Informix 4GL is not thread safe, and Tibco always creates a thread (I think it creates it as soon as we call tibrv_Open(), but maybe it's after the creation of the transport).
Due to something that 4GL does with signals, this leads to application crashes (a 4GL signal handler is run while the process is running the Tibco thread).
With a debugger I noticed this thread seems to be in a loop... it calls select() with a timeout of 10s.
My questions are:
- Is there a way to avoid the thread creation? (I assume not)
- Is there a way to configure the timeout I mention above?
- If anybody can explain the purpose of this thread to me, I'd be thankful. I'm assuming we'll have to live with it, but it would be nice to understand why it's there. Maybe it exists to check server timeouts?
P.S.: The application uses C to interface with Tibco. I don't think it is very relevant, but the current scenario is on Tru64 and I believe Tibco Rendezvous is 6.9. The environment uses the pthread library. These are all very old versions, but the customer is moving to newer versions.
Many thanks in advance for any comments.
I've not come across Tibco, so I'm not sure that I can help, but...
I suggest creating a separate process to run the Tibco code, with the I4GL calling on the same C interface it currently uses to talk to the Tibco library, but gutting the implementation so that the functions send messages across a pipe or socket to the Tibco process (which would be started by an initialization function). The advantage of this is that it gets the thread out of the I4GL code (where it is causing trouble) into a pure C and Tibco process which can be written to ensure that it doesn't cause trouble.
