Identify threads in a Delphi application outside a debugging environment - multithreading

I have found an application which requests process information using WMI queries (all threads and more info on each thread). I modified this application to determine the CPU usage per thread.
(if my application is called 'appy', then the threads are named 'appy/0', 'appy/1', ...)
My question: is there a way to easily identify these threads outside of an IDE or another debugging environment?
I know there is the NameThreadForDebugging method, but this isn't accessible outside the debugging environment.
Is there a way to assign your own thread ID upon creating that thread?
Or is the only way to know which thread is which to create a dictionary and write that dictionary to a file so it is externally accessible?
Thanks in advance!

No, you cannot assign your own thread ID; the thread ID is assigned to a thread by the CreateThread function and cannot be changed during its lifetime. And, as you said, the only way to identify a thread in an external application (not a debugger) is to share the thread identification with that application somehow.
However, it's not necessary to share the information through a file; you can use a shared memory block, for instance. It will be much more efficient than using files.
As a reference on thread IDs, you can take the remark from the GetCurrentThreadId documentation:
Until the thread terminates, the thread identifier uniquely identifies the thread throughout the system.
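To make the shared-memory idea concrete, here is a minimal sketch in C using the Win32 API (the same calls are available from Delphi's Windows unit). The section name "Local\AppyThreadNames", the table layout, and the PublishThreadName helper are all made up for illustration; the point is simply that each thread records its GetCurrentThreadId value together with a name, and an external tool can map the same section and match those IDs against the threads it sees.

    /* Sketch: each worker thread publishes its own thread ID and a
     * human-readable name into a named shared-memory section that an
     * external monitoring tool can open and read. The section name and
     * layout are illustrative only. */
    #include <windows.h>
    #include <string.h>

    #define MAX_ENTRIES 64

    typedef struct {
        DWORD threadId;            /* value returned by GetCurrentThreadId() */
        char  name[32];            /* e.g. "appy/0", "appy/1", ... */
    } ThreadNameEntry;

    typedef struct {
        volatile LONG   count;     /* number of used entries */
        ThreadNameEntry entries[MAX_ENTRIES];
    } ThreadNameTable;

    static ThreadNameTable *OpenTable(void)
    {
        /* Both the application and the external tool map the same named section. */
        HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                         0, sizeof(ThreadNameTable),
                                         "Local\\AppyThreadNames");
        if (hMap == NULL)
            return NULL;
        return (ThreadNameTable *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0,
                                                sizeof(ThreadNameTable));
    }

    /* Called once by each worker thread, right after it starts. */
    static void PublishThreadName(ThreadNameTable *table, const char *name)
    {
        LONG slot = InterlockedIncrement(&table->count) - 1;
        if (slot >= MAX_ENTRIES)
            return;
        table->entries[slot].threadId = GetCurrentThreadId();
        strncpy(table->entries[slot].name, name, sizeof(table->entries[slot].name) - 1);
        table->entries[slot].name[sizeof(table->entries[slot].name) - 1] = '\0';
    }

The external tool (for example the WMI-based monitor) maps the same section by name and looks up each thread ID it observes in this table to recover the "appy/0", "appy/1", ... names.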

Related

How to get OS thread id (with uint64_t) in userspace with Linux OS

I am working on a project where I operate on an arbitrary multithreaded application. I want to point out that I have no way to see the source code of that multithreaded app; it can be any multithreaded app. My objective in this project is that, while the multithreaded app is running, I retrieve its threads from the CFS scheduler and move them onto other schedulers. To achieve this I simply read the entries under /proc/parent_pid/task/, where parent_pid is the PID of the multithreaded app. My problem is that what I get from /proc/parent_pid/task/ is not sufficient for me; I need the OS thread identifier of each thread. Since I cannot access the source code of the multithreaded app, my question is: how can I get the OS thread identifiers of those threads while they are running? I think it is not possible for me to make syscalls such as syscall(__NR_gettid), gettid(), or pthread_self(), because those functions are generally called from within the thread's own source code. Is it possible to get that value from some file located under /proc/parent_pid/? I use Ubuntu 20.04.
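For reference, the enumeration described above looks roughly like the following sketch in C. Per proc(5), the subdirectory names under /proc/<pid>/task are the kernel thread IDs (the same values gettid() would return inside each thread); the target PID is taken as a command-line argument here, and the program is only an illustration, not part of any existing tool.

    /* Sketch: list the thread IDs of a target process by reading
     * /proc/<pid>/task. Each subdirectory is named after a kernel
     * thread ID. */
    #include <dirent.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <parent_pid>\n", argv[0]);
            return 1;
        }

        char path[64];
        snprintf(path, sizeof(path), "/proc/%s/task", argv[1]);

        DIR *dir = opendir(path);
        if (dir == NULL) {
            perror("opendir");
            return 1;
        }

        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL) {
            if (entry->d_name[0] == '.')      /* skip "." and ".." */
                continue;
            uint64_t tid = strtoull(entry->d_name, NULL, 10);
            printf("thread id: %llu\n", (unsigned long long)tid);
        }
        closedir(dir);
        return 0;
    }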

How do worker threads work in Node.js?

Nodejs can not have a built-in thread API like java and .net do. If threads are added, the nature of the language itself will change. It's not possible to add threads as a new set of available classes or functions.
Node.js 10.x added worker threads as an experimental feature, and they have been stable since 12.x. I have gone through a few blogs but did not understand much, maybe due to a lack of knowledge. How are they different from threads?
Worker threads in Javascript are somewhat analogous to WebWorkers in the browser. They do not share direct access to any variables with the main thread or with each other and the only way they communicate with the main thread is via messaging. This messaging is synchronized through the event loop. This avoids all the classic race conditions that multiple threads have trying to access the same variables because two separate threads can't access the same variables in node.js. Each thread has its own set of variables and the only way to influence another thread's variables is to send it a message and ask it to modify its own variables. Since that message is synchronized through that thread's event queue, there's no risk of classic race conditions in accessing variables.
Java threads, on the other hand, are similar to C++ or native threads in that they share access to the same variables and the threads are freely timesliced so right in the middle of functionA running in threadA, execution could be interrupted and functionB running in threadB could run. Since both can freely access the same variables, there are all sorts of race conditions possible unless one manually uses thread synchronization tools (such as mutexes) to coordinate and protect all access to shared variables. This type of programming is often the source of very hard to find and next-to-impossible to reliably reproduce concurrency bugs. While powerful and useful for some system-level things or more real-time-ish code, it's very easy for anyone but a very senior and experienced developer to make costly concurrency mistakes. And, it's very hard to devise a test that will tell you if it's really stable under all types of load or not.
node.js attempts to avoid the classic concurrency bugs by separating the threads into their own variable space and forcing all communication between them to be synchronized via the event queue. This means that threadA/functionA is never arbitrarily interrupted while some other code in your process changes the shared variables it was accessing behind its back.
node.js also has a backstop that it can run a child_process that can be written in any language and can use native threads if needed or one can actually hook native code and real system level threads right into node.js using the add-on SDK (and it communicates with node.js Javascript through the SDK interface). And, in fact, a number of node.js built-in libraries do exactly this to surface functionality that requires that level of access to the nodejs environment. For example, the implementation of file access uses a pool of native threads to carry out file operations.
So, with all that said, there are still some types of race conditions that can occur and this has to do with access to outside resources. For example if two threads or processes are both trying to do their own thing and write to the same file, they can clearly conflict with each other and create problems.
So, code using Workers in node.js still has to be aware of concurrency issues when accessing outside resources. node.js protects the local variable environment for each Worker, but can't do anything about contention among outside resources. In that regard, node.js Workers have the same issues as Java threads and the programmer has to code for that (exclusive file access, file locks, separate files for each Worker, using a database to manage the concurrency for storage, etc...).
This comes down to the Node.js architecture. Whenever a request reaches Node, it is passed to the event queue and then to the event loop. The event loop checks whether the request is blocking or non-blocking I/O (blocking I/O meaning operations that take time to complete, e.g. fetching data from somewhere else). The event loop passes blocking I/O to the thread pool, which is a collection of worker threads. The blocking operation gets attached to one of the worker threads, which performs the work (e.g. fetching data from a database); after completion the result is sent back to the event loop and then on to execution.

Restricting memory regions to threads

Is there an operating system-specific way in Linux/Darwin/Windows, to restrict access to certain virtual memory pages to only one thread, so that when another thread tries to access it, the OS would intercept and report an error?
I'm trying to emulate the behavior of fork with multiple processes, where each process has its own memory except for some shared memory, mainly to avoid all programming errors where one worker would access memory belonging to another worker.
As a general proposition, this is not possible. The whole idea of threads is to have multiple streams of execution that share the same address space. If you're a kernel-mode kommando, you might be able to come up with some modification of the page tables a thread uses that makes certain pages inaccessible from user mode and then unlocks them.

Access a specific thread from Grand Central Dispatch

For a Mac application I'm using an external (C++) library that has built-in memory management. A drawback of that memory manager is that memory needs to be deleted on the same thread as the new call.
Currently I'm using GCD to run code concurrently, but I run into the problem that objects of that library get allocated on various threads and I can't correctly delete them.
Is there a way to call the delete operator on the original thread that called new? I realise that GCD wants to abstract the underlying threads away from me, but otherwise I'll have to write a custom GCD-like implementation where I have full control over the threads.

Who can share shared memory in Linux?

I am working on hardening a sandbox for student code execution. I think I'm satisfied that students can't share data on the file system or via signals, because I've found express rules dictating those and the students execute as different unprivileged users. However, I am having a really hard time determining from the documentation who can see shared memory (or IPC more generally - queues or semaphores) once it is created. If you create shared memory, can anyone on the same machine open it, or is there a way to control that? Does the control lie with the program that creates the memory, or can the sysadmin limit it?
Any process in the same ipc namespace can see and (potentially) access ipc objects created by other processes in the same ipc namespace. Each ipc object has the same user/group/other-rwx permissions as file system objects -- see the svipc(7) manual page.
You can create a new ipc namespace by using the clone(2) system call with the CLONE_NEWIPC flag. You can use the unshare(1) program to do a clone+exec of another program with this or certain other CLONE flags.
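As a minimal sketch of the permission side of this, the following C program creates a System V shared-memory segment with mode 0600, so only processes running as the owning user can attach it; the size and usage here are placeholders, not a recommendation for the sandbox itself.

    /* Sketch: create a System V shared-memory segment whose permission
     * bits (0600) allow only the owning user to attach it, analogous to
     * file-system permissions as described in svipc(7). */
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* IPC_PRIVATE yields a new segment; 0600 = read/write for owner only. */
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        if (shmid == -1) {
            perror("shmget");
            return EXIT_FAILURE;
        }

        void *addr = shmat(shmid, NULL, 0);   /* attach it in this process */
        if (addr == (void *)-1) {
            perror("shmat");
            return EXIT_FAILURE;
        }

        printf("created segment %d, attached at %p\n", shmid, addr);

        shmdt(addr);                          /* detach ... */
        shmctl(shmid, IPC_RMID, NULL);        /* ... and mark for removal */
        return EXIT_SUCCESS;
    }

For stronger isolation along the lines of the namespace suggestion above, each student process could be launched under unshare --ipc (util-linux, typically requiring root), giving it a private ipc namespace so that segments, queues and semaphores created outside it are simply not visible to it.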
