I'm facing a difficult situation while running an application built with IBM Informix 4GL and the Tibco RV library (libtibrv.so).
Informix 4GL is not thread safe, and Tibco always creates a thread (I think it creates it as soon as we call tibrv_Open(), but maybe it's after the creation of the transport).
Due to something that 4GL does with signals, this leads to application crashes (a 4GL signal handler is run while the process is executing the Tibco thread).
With a debugger I noticed this thread seems to be in a loop: it calls select() with a timeout of 10s.
My questions are:
- Is there a way to avoid the thread creation? (I assume not)
- Is there a way to configure the timeout I mention above?
- If anybody can explain the purpose of this thread to me, I'd be thankful. I'm assuming we'll have to live with it, but it would be nice to understand why it's there. Maybe it exists to check server timeouts?
P.S.: The application uses C to interface with Tibco. I don't think it is very relevant, but the current scenario is on Tru64 and I believe Tibco Rendezvous is 6.9. The environment uses the pthread library. These are all very old versions, but the customer is moving to newer ones.
Many thanks in advance for any comments.
I've not come across Tibco, so I'm not sure that I can help, but...
I suggest creating a separate process to run the Tibco code. The I4GL would call the same C interface it currently uses to talk to the Tibco library, but the implementation would be gutted so that the functions send messages across a pipe or socket to the Tibco process (which would be started by an initialization function). The advantage of this is that it gets the thread out of the I4GL code (where it is causing trouble) and into a pure C and Tibco process that can be written to ensure it doesn't cause trouble.
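A minimal sketch of the shape that gutted C interface could take, assuming hypothetical tib_init()/tib_publish() wrappers and a simple line-oriented protocol over a pipe; the real function names and message format would mirror whatever the existing C layer already exposes, and the helper binary started here with execl() would be the only program that links against libtibrv.so:

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Hypothetical replacement for the existing C wrapper: instead of
     * calling the Tibco library directly (which would pull its thread into
     * the I4GL process), each function writes a request down a pipe to a
     * helper process that links against libtibrv.so and does the real work. */

    static int to_helper = -1;            /* write end of the pipe */

    int tib_init(const char *helper_path)
    {
        int fds[2];
        pid_t pid;

        if (pipe(fds) != 0)
            return -1;

        pid = fork();
        if (pid < 0)
            return -1;

        if (pid == 0) {
            /* child: become the pure C + Tibco process, reading
             * requests from the pipe on its stdin */
            dup2(fds[0], STDIN_FILENO);
            close(fds[0]);
            close(fds[1]);
            execl(helper_path, helper_path, (char *)0);
            _exit(127);                   /* exec failed */
        }

        close(fds[0]);
        to_helper = fds[1];
        return 0;
    }

    int tib_publish(const char *subject, const char *payload)
    {
        /* one request per line; the helper parses it and makes the
         * actual tibrv calls itself */
        char buf[512];
        int n = snprintf(buf, sizeof(buf), "%s %s\n", subject, payload);

        if (n < 0 || n >= (int)sizeof(buf))
            return -1;
        return write(to_helper, buf, (size_t)n) == n ? 0 : -1;
    }

The helper reads its stdin in a loop and makes the real Tibco calls, so the Tibco background thread and the 4GL signal handlers never share a process.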
I am currently planning a complicated networking project on Windows IoT Enterprise. Basically, I will have a C program keeping a special network interface alive. This C program should receive tasks in some way from other programs on the same host, which will generally be written in all sorts of languages (e.g. node.js). I have never done this kind of cooperation between processes. Do you have any advice on how a node.js server can pass information to an already running C program, and preferably receive a success code or an error message back?
It is very important to me that this is as fast as possible, as the solution will have to handle several thousand requests per second.
In one of the comments I was pointed towards ZeroMQ, and I am now using it successfully in my application. Thank you for the help!
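For anyone who finds this later, the C side of a ZeroMQ-based solution can be as small as a REP socket in a loop. This is only a sketch: the endpoint, the 256-byte task buffer, and the "OK" status string are placeholders, and a node.js client would connect with a matching REQ socket:

    #include <zmq.h>
    #include <stdio.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *responder = zmq_socket(ctx, ZMQ_REP);

        /* node.js clients connect to this endpoint with a REQ socket */
        if (zmq_bind(responder, "tcp://127.0.0.1:5555") != 0) {
            perror("zmq_bind");
            return 1;
        }

        for (;;) {
            char task[256];
            int n = zmq_recv(responder, task, sizeof(task) - 1, 0);
            if (n < 0)
                break;
            if (n > (int)sizeof(task) - 1)
                n = (int)sizeof(task) - 1;   /* message was truncated */
            task[n] = '\0';

            /* ... hand the task to the code driving the network interface ... */

            /* reply with a status the caller can interpret */
            zmq_send(responder, "OK", 2, 0);
        }

        zmq_close(responder);
        zmq_ctx_destroy(ctx);
        return 0;
    }

REQ/REP gives the request/response semantics asked for (a success code or error message going back to the caller); if fire-and-forget were enough, PUSH/PULL would avoid the round trip.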
I am using a BPMN process which is already running in its own thread, and also Spring FTP, where the task scheduler thread is running, but I found that the application cannot switch between the threads. Is there any way to invoke the task-scheduler process without any interruption? I am using an InboundChannelAdapter to copy files from FTP. Please suggest a feasible way to resolve the issue.
I don't see any concrete issue described in your question, and to be honest it isn't fully clear.
Please be more specific; sharing some code/config/logs/stack-trace is often really useful. The more info you give, the better the chance of a quick and proper answer.
I guess your problem is that you download files from FTP and, in the same thread, run a BPM process which might eventually block waiting for some actor action.
For this purpose you should shift the Spring Integration flow on the <poller> to a different thread and not steal task-scheduler resources; they are really expensive for the whole system. Consider using a large enough ThreadPoolTaskExecutor for the task-executor reference on the <poller>. There is also an ExecutorChannel for you, with similar thread-shifting capabilities.
We have used rpcgen to create an RPC server on a Linux machine (C language).
When there are many calls to our program, they are still handled by a single thread.
I see that this has been a common problem since 2004; is there a new rpcgen (or another generator) that solves it?
Thanks,
Kobi
rpcgen will simply generate the serialization routines. Your server might be coded to have several threads. Learn more about pthreads.
You probably should not have too many threads (e.g. at most a dozen, not thousands). You could design your program to use some thread pool, or simply to have a fixed set of worker threads which are continuously handling RPC requests (with the main thread just in charge of accepting connections, etc).
Read rpc(3). You might consider not using svc_run in your server, but instead doing it your own way with threads. Beware that if you use threads, you'll need to synchronize, perhaps with a mutex.
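As a minimal sketch of the fixed-worker-thread idea with pthreads: the struct job and handle_request() below are hypothetical stand-ins for one decoded RPC request and whatever processing your server does, and the mutex plus condition variable are the synchronization mentioned above:

    #include <pthread.h>
    #include <stdlib.h>

    #define NWORKERS 8                 /* a handful, not thousands */

    struct job { struct job *next; /* ... request data ... */ };

    static struct job *queue_head = NULL;
    static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  queue_cond  = PTHREAD_COND_INITIALIZER;

    /* hypothetical: services one decoded RPC request */
    extern void handle_request(struct job *j);

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            struct job *j;

            pthread_mutex_lock(&queue_mutex);
            while (queue_head == NULL)
                pthread_cond_wait(&queue_cond, &queue_mutex);
            j = queue_head;
            queue_head = j->next;
            pthread_mutex_unlock(&queue_mutex);

            handle_request(j);         /* runs outside the lock */
            free(j);
        }
        return NULL;
    }

    /* called by the code that accepts requests (e.g. instead of doing
     * the work inline where svc_run would have dispatched it) */
    void enqueue_request(struct job *j)
    {
        pthread_mutex_lock(&queue_mutex);
        j->next = queue_head;          /* LIFO push for brevity; keep a tail pointer for FIFO order */
        queue_head = j;
        pthread_mutex_unlock(&queue_mutex);
        pthread_cond_signal(&queue_cond);
    }

    void start_workers(void)
    {
        int i;
        for (i = 0; i < NWORKERS; i++) {
            pthread_t t;
            pthread_create(&t, NULL, worker, NULL);
            pthread_detach(t);
        }
    }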
You could also consider JSONRPC, or perhaps making your C program some specialized HTTP server (e.g. using libonion) and have your clients do HTTP requests (maybe with libcurl). See also this. And you might consider a message passing architecture, perhaps with Open-MPI.
Beware that the Sun version is being abandoned; look for tirpc.
I am a beginner in Linux programming and I am not sure if what I expect is feasible. I would appreciate it if someone could give me some tips.
What I want to do is develop a shared object (.so file) which could be used by multiple applications. If one of those applications calls an initialize function in the shared object, a new thread will be created to run an infinite loop that accepts incoming events. This thread will keep running even after the initialize function returns. Thus all applications could keep sending events to this thread for processing.
I wonder if this could be achieved? Any idea will be appreciated.
As has been noted in the comments, this cannot be done: you cannot directly invoke a function in another process. This is why RPC, IPC, web services etc. were invented.
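Within a single process, though, the first half of the idea works fine: an initialization function in the .so can spawn a thread that keeps running after the function returns, and any code in that same process can then post events to it. Below is a minimal pthreads sketch (mylib_initialize, mylib_post_event and the event struct are hypothetical names); other processes would still have to reach this thread through a socket, pipe, or another IPC mechanism:

    #include <pthread.h>
    #include <stdlib.h>

    struct event { struct event *next; int code; };

    static struct event *pending = NULL;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static pthread_t loop_thread;

    static void *event_loop(void *arg)
    {
        (void)arg;
        for (;;) {                     /* runs for the life of the process */
            struct event *ev;

            pthread_mutex_lock(&lock);
            while (pending == NULL)
                pthread_cond_wait(&cond, &lock);
            ev = pending;
            pending = ev->next;
            pthread_mutex_unlock(&lock);

            /* ... process ev->code ... */
            free(ev);
        }
        return NULL;
    }

    /* exported from the shared object; call once per process */
    int mylib_initialize(void)
    {
        return pthread_create(&loop_thread, NULL, event_loop, NULL);
    }

    /* exported: any code in the same process can post an event */
    void mylib_post_event(int code)
    {
        struct event *ev = malloc(sizeof *ev);

        if (ev == NULL)
            return;
        ev->code = code;
        pthread_mutex_lock(&lock);
        ev->next = pending;            /* LIFO for brevity */
        pending = ev;
        pthread_mutex_unlock(&lock);
        pthread_cond_signal(&cond);
    }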
There is a lot about multithreading on the CORBA server side, but I'm interested in the client side. We have a multithreaded client (Solaris, Orbix 6.3) with a CORBA singleton "manager" that initialises the ORB. During runtime, 'lsof' shows only one TCP connection to the CORBA server, so all synchronous calls made from the client worker threads are presumably serialised.
We would like to change this arrangement to take advantage of parallelism, with each thread managing its own connection. I've changed the setup so that, instead of using a singleton, each worker thread calls ORB_init(), etc.
I'm totally puzzled now: 'lsof' shows 2 TCP connections, but there are 6 worker threads.
Something is not right; I would have expected as many TCP connections as there are worker threads. Maybe the approach is naive - does it make sense, for example, to call ORB_init() per thread?
I'd appreciate someone's opinion on this. Sample code for a multithreaded client would help greatly. Again, this is Orbix 6.3 on Solaris.
Kind regards,
Adrian
The management of connections is implementation specific for plain CORBA. Each vendor has its own proprietary way of configuring this behaviour. If you check the RTCORBA specification, it has a standardized way to configure how connections between client and server will be used.
I don't know how Orbix works and whether it supports RTCORBA; that is something you could probably get from its manuals. I do know that TAO has a lot of support for threading on the client side. By default, when multiple threads make an invocation to the same server, multiple TCP/IP transports can be opened at the same moment.
Thank you guys for your answers. I found, as Johnny says, that this is indeed implementation specific.
omniORB, for example, has maxGIOPConnectionPerServer - default 5. That's:
The maximum number of concurrent connections the ORB will open to a single server. If multiple threads on the client call the same server, the ORB opens additional connections to the server, up to the maximum specified by this parameter. If the maximum is reached, threads are blocked until a connection becomes free for them to use.
Unfortunately I haven't yet found out what's the equivalent (if any) for Orbix. It's definitely defaulting to 1 connection. Still googling...
I found out, though, that as part of a Solaris -> Linux migration we will be moving from Orbix to TAO within a number of months. I'm hoping TAO will be more friendly and customizable.
Orbix internally uses a lot of optimization routines to ensure that connections are used efficiently. Specifically, it's not going to open up multiple connections to the same server endpoint because it's able to multiplex multiple concurrent GIOP requests over the same TCP connection. CORBA deliberately hides connection management from client and server programmers.
I don't believe this is controllable through configuration. Send a support ticket to Progress Support to confirm. You might be able to force it to happen if you move away from the singleton model and initialize a different ORB for each client (each with its own unique ID), but that would be a very heavy-handed and costly solution to a problem that is a little vague. The underlying ORB is already built to optimize for concurrent requests, so I'm not sure what problem you're trying to solve.
In my honest opinion, I don't think there is such a concept as a multi-threaded client for CORBA applications, because on the server side there is only one object registered with the naming service, and it is available to all clients. If you look at the IOR of the object, it will be the same for all clients, so at most one connection can be established to that object. This also means that you cannot get more than one remote object (however many times you look the object up from different clients, they all get the same reference) for any number of clients. So, in order to support multi-threading, the server actually has to support different thread policies; with a POA the server can have different thread policies. Please go through Java Programming with CORBA for more.
I don't know exactly how Orbix works, but normally ORB initialization is done only once, even for a multithreaded setup. The multithreaded (server-side) ORB will start a number of worker threads (on demand, or a fixed number if so configured) to handle incoming connections. Each connection is handled by a worker, which looks up the servant that can handle the request. Normally this (the real call to the servant) is also performed in an extra thread, but you won't see that thread with lsof. Try using ps -eLf or top -H with thread support enabled.
EDIT:
On the client side it depends on how many objects you want to call. For each object a caller thread is possible. It is also possible to have more than one caller thread per remote object, but only if it is called from different threads in the client-side logic. (Imagine having multiple threads, with the remote object shared across them.)