Get thread handle from memory address - multithreading

My multithreaded Delphi application has a VEH exception handler. (http://msdn.microsoft.com/en-us/library/windows/desktop/ms681420(v=vs.85).aspx) I can get the memory address, exception type, etc. when it is triggered, but I can't get thread information.
Is it possible to get the thread ID from a memory address?

Is it possible to get thread ID from a memory address?
If by memory address you mean the address of code, then the answer is no: multiple threads can be simultaneously executing at the same address.
However, I see no evidence that these exception handlers are called in a thread other than the one that raised the exception, so the handler can simply query the current thread.
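A cross-language sketch of that point (hedged: the original question is about Windows VEH in Delphi, where the equivalent call would be GetCurrentThreadId; here Python's threading.excepthook plays the role of the exception handler). The handler runs on the thread that raised the exception, so querying the current thread ID from inside it gives the faulting thread:

```python
import threading

# The exception-handling callback runs on the thread that raised the
# exception, so the handler can record the current thread's ID directly.
seen = {}

def handler(args):
    # Runs in the context of the raising thread (analogous to a VEH handler).
    seen["handler_thread"] = threading.get_ident()

threading.excepthook = handler

def worker():
    seen["worker_thread"] = threading.get_ident()
    raise RuntimeError("boom")

t = threading.Thread(target=worker)
t.start()
t.join()

# The handler saw the same thread ID as the worker that raised.
assert seen["handler_thread"] == seen["worker_thread"]
```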

Related

Why does a thread pool create a lot of unique threads?

A COM application based on the 'free' threading model subscribes to events published from another COM application that runs out of process.
The application works normally, but in some cases (or configurations?) it burns through a lot of so-called Tpp worker threads.
These threads apparently belong to a thread pool managed by Windows/COM. And they are at least used by COM to deliver incoming events to the application.
When the application receives events, that always happens in the context of one of these worker threads.
In the normal situation, updates are coming in from at most 2 or 3 unique worker threads.
But in the abnormal situation the application sees new & unique worker thread IDs appear every 3-8 minutes. Over the course of a week the application has seen about 1000 unique threads (!).
I strongly suspect something is wrong here, because surely the thread pool doesn't need so many different threads, right?
What could be a reason for the thread pool behavior I'm seeing? Is it just normal that it creates different threads from time to time? Are the old threads still sticking around doing nothing? What action could trigger this while the application is running in the context of the worker thread?
Notes:
Our application is an OPC DA client (and the other application is the Siemens OPC-DA server)
The OS is Windows 10
I do not yet know whether the worker threads have exited or whether they stick around doing nothing
As an experiment, I have tried several bad/illegal things to see whether it is possible for our application to somehow break a worker thread
- which would then explain why the thread pool has to create a new one (we might have destroyed the old one). But that turned out to be more difficult than I expected:
When running in the context of the worker thread, I have tried...
- deliberately hanging with while (true) {}; result: the event delivery process just stalls, but no new worker thread is created for us
- throwing a deliberate uncaught C++ exception; no new worker thread is created
- triggering a deliberate (read) access violation; no new thread either...
And that got me thinking: if our application can't kill that worker thread in an obvious way, what else could, and why would the thread pool behave like this?

What happens to the data that will be returned when a thread is waiting for other threads

Let's say I create many, many threads on a one-core CPU. Each thread does an IO operation, for example reading data from a database or another microservice.
What happens if I create thousands of threads that read something from a DB?
How does this communication work?
I assume that in a thread we send some request to a DB or some HTTP call to another service. After that, the CPU is used by another thread. How is this communication handled? Does the OS hold messages for the other threads and wait until those threads get CPU time to pass them the data?
Let's say I make 1000 calls in 1000 threads and each response will be 1 MB of data. Where is this data buffered until the correct thread becomes active? (For example, we are spawning the tenth thread and have already got a response for the first one.)
Or maybe someone could point me to some nice articles about this topic?
Every time a thread makes an I/O request the OS (kernel) queues that I/O and puts the thread to sleep (assuming we're talking about a synchronous I/O call).
"Queues that I/O" means setting up some link between the socket through which the I/O is performed and the network card queue, and setting up an internal OS buffer to hold request and response data.
When a response arrives at the network card, the OS adds the data to the socket's buffer and, typically, wakes up the thread that made the associated I/O request.
Note that while an HTTP or DB query response can be 1 MB, it is usually carried over a TCP/IP connection, which usually has a much smaller MTU. The TCP/IP implementation will therefore require the server to slice the response into multiple small packets.
If 1000 responses arrive at the same time, and the hardware cannot handle such a load, each server will have to send its packets more slowly, but the OS will still likely handle all such "streams" of responses in parallel.
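A minimal sketch of the mechanism described above, under simplified assumptions: each "client" thread blocks in a synchronous recv() call, the kernel parks the thread, holds any data that arrives in that connection's receive buffer, and wakes the matching thread. socket.socketpair() stands in for a real DB or HTTP connection:

```python
import socket
import threading

results = {}

def client(idx, sock):
    # Blocking read: the OS puts this thread to sleep until data is
    # available in the connection's receive buffer.
    results[idx] = sock.recv(1024)

# One "connection" per thread; all threads block waiting for their response.
pairs = [socket.socketpair() for _ in range(5)]
threads = [threading.Thread(target=client, args=(i, a))
           for i, (a, b) in enumerate(pairs)]
for t in threads:
    t.start()

# "Responses" arrive while the threads may be asleep; the kernel buffers each
# one in its own connection's receive buffer and wakes the matching thread.
for i, (a, b) in enumerate(pairs):
    b.sendall(b"response %d" % i)

for t in threads:
    t.join()

# Every thread received exactly the data sent on its own connection.
assert results == {i: b"response %d" % i for i in range(5)}
```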
I assume that in a thread we send some request to a DB or some HTTP call to another service. After that, the CPU is used by another thread. How is this communication handled? Does the OS hold messages for the other threads and wait until those threads get CPU time to pass them the data?
It depends on the exact communication method used. Most commonly, it will be some kind of byte stream connection such as a TCP connection. In that case, the thread typically makes a blocking read operation that causes the kernel to mark that thread as waiting for I/O. It attaches the thread to a data structure associated with the TCP connection and does whatever is needed to make that I/O take effect.
When a response is received, the kernel code notices the thread waiting for activity. It then marks that original thread ready-to-run and the scheduler will eventually schedule it. When it runs, it resumes in the kernel's blocking I/O code, but this time there is data waiting for it, so it returns to user space and resumes execution.
Let's say I make 1000 calls in 1000 threads and each response will be 1 MB of data. Where is this data buffered until the correct thread becomes active? (For example, we are spawning the tenth thread and have already got a response for the first one.)
It depends on exactly what communication method is used. If it's a TCP connection, then there are buffers associated with that connection. If it uses shared memory, then the other process just writes into that shared memory page.
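A small sketch of the TCP case, assuming a response that fits in the kernel's receive buffer (for a genuinely large response, TCP flow control would throttle the sender until the reader drains the buffer). Data that arrives before the thread is scheduled simply waits in that connection's buffer:

```python
import socket

# Two connected endpoints; a stands in for the sleeping client thread's
# socket, b for the server sending the response.
a, b = socket.socketpair()

# The "response" arrives before anyone is reading: the kernel keeps it
# in the connection's receive buffer.
b.sendall(b"response payload (abridged)")

# Later, when the thread gets CPU time and calls recv(), the buffered
# data is still there waiting for it.
assert a.recv(1024) == b"response payload (abridged)"
```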

What happens when a thread pool is exhausted?

In a recent course at school about networking / operating systems I learned about thread pools. The basic functionality is pretty straightforward, and I understand it.
However, what's not specified in my book is what happens when the thread pool is exhausted. For example, you have a pool with 20 threads in it and 20 connected clients. Another client tries to connect, but there are no threads left in the pool - what happens then? Does the client go into a queue? Does the system make another thread to put in the pool? Something else?
The answer depends highly on your language, your operating system, and your pool implementation.
What happens when the thread pool is exhausted? Another client tries to connect but there are no threads left in the pool - what happens then? Does the client go into a queue?
Typically in a server situation, it depends on the socket settings. Either the socket connection gets queued by the OS or the connection gets refused. This is usually not handled by the thread pool. On Unix-like operating systems, this queue or "backlog" is configured via the listen call.
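A sketch of that backlog, using Python's socket API: the argument to listen() caps how many established connections the OS will queue on the server's behalf before refusing or ignoring new ones. The application (and its thread pool) never sees a queued connection until accept() is called:

```python
import socket

# Server socket bound to an ephemeral local port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(5)          # OS-level queue of pending connections
port = server.getsockname()[1]

# A client can complete its connection even though the server has not yet
# called accept(): the connection simply waits in the kernel's backlog.
client = socket.create_connection(("127.0.0.1", port), timeout=2)

# accept() dequeues the pending connection and hands it to the application.
conn, addr = server.accept()

conn.close()
client.close()
server.close()
```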
Does the system make another thread to put in the pool?
This depends on the thread-pool. Some pools are fixed size so no more threads will be added. Other thread-pools are "cached" thread pools so it will reuse a free thread or will create a new one if none are available. Many web servers have max thread settings on their pools so remote users don't thrash the system by starting too many concurrent connections.
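The fixed-size case can be sketched with Python's ThreadPoolExecutor (hedged: other pool implementations may refuse work or grow instead). Submitting more tasks than there are workers does not fail; the extra tasks wait in the pool's internal queue until a thread frees up:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

thread_ids = []

def task():
    # Record which pool thread ran this task, then simulate some work.
    thread_ids.append(threading.get_ident())
    time.sleep(0.05)

# Fixed pool of 2 threads; 6 tasks are submitted, so 4 of them queue up.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(task) for _ in range(6)]
    for f in futures:
        f.result()   # wait for completion

# All six tasks ran, but on at most two distinct threads.
assert len(thread_ids) == 6
assert len(set(thread_ids)) <= 2
```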
It depends on the policy used by the thread pool:
- the pool size can be static, and when a new thread is requested the caller will wait on a synchronization primitive such as a semaphore, or the request can be pushed onto a queue
- the pool size can be unlimited, but this may be dangerous because creating too many threads can greatly reduce performance; more often than not it is bounded between a min and a max set by the pool user
- the pool can use a dynamic policy depending on the context: hardware resources such as CPU or RAM, OS resources such as synchronization primitives and threads, current process resources (memory, threads, handles...)
An example of a smart thread-pool: http://www.codeproject.com/Articles/7933/Smart-Thread-Pool
It depends on the thread pool implementation. They might be put on a queue, they might get a new thread created for them, or they might even just get an error message saying come back later. Or if you are the one implementing the thread pool, you can do whatever you want.

Error report to external BMC inside interrupt handler

We have a system and an external Baseboard Management Controller (BMC) to monitor it. When a critical error occurs in the system, the error should be logged and sent to the external BMC. Sending the error message to the BMC may take a long time, as we need to compose the log entry and send the event out via the I2C bus. The error is captured inside the interrupt handler, which must process the event very quickly and in a non-blocking manner. On the other hand, if the error is non-recoverable, the system may reboot immediately.
Could you recommend a good way to handle the error reporting inside the interrupt handler, or is there any standard way for this procedure? Any suggestions are appreciated. Thanks in advance.
There is no good way.
If your BMC communications sleep, you cannot do them from inside the interrupt handler and must move them to a workqueue.
If your system reboots immediately after the interrupt handler, you cannot communicate with the BMC.
If your interrupt handler actually knows that the system will reboot, then you could change the I²C driver to add some method to send data from inside an interrupt handler, by busy-polling instead of sleeping.
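The workqueue suggestion above follows the classic top-half/bottom-half split. A user-space sketch of that pattern (hedged: the real thing is Linux kernel code using a workqueue, not Python threads): the "interrupt handler" does only the minimal, non-blocking work of recording the error, and a worker thread performs the slow, sleep-capable BMC/I2C communication later:

```python
import queue
import threading

errors = queue.Queue()
sent = []

def slow_bmc_send(event):
    # Stand-in for composing the log entry and writing it over I2C;
    # this is the part that is allowed to sleep.
    sent.append(event)

def bmc_worker():
    # Bottom half: drains the queue in process context.
    while True:
        event = errors.get()
        if event is None:        # shutdown sentinel
            break
        slow_bmc_send(event)

def interrupt_handler(error_code):
    # Top half: fast and non-blocking - just queue the event and return.
    errors.put_nowait({"code": error_code})

worker = threading.Thread(target=bmc_worker)
worker.start()
interrupt_handler(0xDEAD)
interrupt_handler(0xBEEF)
errors.put(None)
worker.join()

# Both errors were delivered, in order, without blocking the handler.
assert [e["code"] for e in sent] == [0xDEAD, 0xBEEF]
```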

"First-chance exception" meaning in MFC Application?

When I run my Windows application (MFC), I get two warnings:
First-chance exception at 0x01046a44 in XXX.exe: 0xC0000005: Access violation reading location 0x00000048.
First-chance exception at 0x75fdb9bc (KernelBase.dll) in XXX.exe: 0x000006BA: The RPC server is unavailable.
May I know what they mean?
What is a first chance exception?
When an application is being debugged, the debugger gets notified whenever an exception is encountered. At this point, the application is suspended and the debugger decides how to handle the exception. The first pass through this mechanism is called a "first chance" exception. Depending on the debugger's configuration, it will either resume the application and pass the exception on or it will leave the application suspended and enter debug mode. If the application handles the exception, it continues to run normally.
See this article for more details.
This error means that code from ntdll tried to access virtual address 0x00000048, which is not accessible. Maybe you call some function from ntdll and pass an invalid pointer as a parameter.
An access violation is where you're trying to read a memory address that isn't yours; given that the read address is very low in memory, I would guess that you've got a pointer to a class or struct that is actually null, and your code is attempting to access one of its members.
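That guess can be illustrated concretely (the struct layout below is hypothetical, chosen only so the member sits at offset 0x48): dereferencing a member of a null object pointer faults at address null + member offset, which is exactly why the reported address 0x00000048 is so low:

```python
import ctypes

# Hypothetical object layout: some earlier fields occupy the first 0x48
# bytes, and the member being read lives right after them.
class Obj(ctypes.Structure):
    _fields_ = [
        ("header", ctypes.c_char * 0x48),  # padding up to offset 0x48
        ("member", ctypes.c_int),          # the member being accessed
    ]

# With a null `this` pointer, accessing .member reads from
# 0 + offsetof(Obj, member) == 0x00000048 - the faulting address.
assert Obj.member.offset == 0x48
assert 0 + Obj.member.offset == 0x00000048
```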
