Can CUDA unified memory be written to by another CPU thread? - multithreading

I am writing a program that retrieves images from a camera and processes them with CUDA. In order to gain the best performance, I'm passing a CUDA unified memory buffer to the image acquisition library, which writes to the buffer in another thread.
This causes all sorts of weird results where the program hangs in library code that I do not have access to. If I use a normal memory buffer and then copy it to CUDA, the problem goes away. So I became suspicious that writing from another thread might not be allowed, but however much I googled, I could not find a definitive answer.
So, is accessing the unified memory buffer from another CPU thread allowed or not?

There should be no problem writing to a unified memory buffer from multiple threads.
However, keep in mind the restrictions imposed when the concurrentManagedAccess device property is not true. In that case, once you launch a kernel, no CPU/host thread access of any kind is allowed to that managed buffer, or to any other managed buffer, until you perform a cudaDeviceSynchronize() after the kernel call.
In a multithreaded environment, this might take some explicit effort to enforce.
I think this is similar to this recital, if that is also your posting. Note that the TX2 should have this property set to false.
Note that this general rule in the non-concurrent case can be modified through careful use of streams. However, the restrictions still apply to buffers attached to streams that have a kernel launched in them (or to buffers not explicitly attached to any stream): when the property mentioned above is false, access by any CPU thread is not possible.
The motivation for this behavior is roughly as follows. The CUDA runtime does not know the relationship between managed buffers, regardless of where those buffers were created. A buffer created in one thread could easily have objects in it with embedded pointers, and there is nothing to prevent or restrict those pointers from pointing to data in another managed buffer. Even a buffer that was created later. Even a buffer that was created in another thread.

The safe assumption is that any linkages could be possible, and therefore, without any other negotiation, the managed memory subsystem in the CUDA runtime must move all managed buffers to the GPU when a kernel is launched. This makes all managed buffers, without exception, inaccessible to CPU threads (any thread, anywhere).

In the normal program flow, access is restored at the next occurrence of a cudaDeviceSynchronize() call. Once the CPU thread that issues that call completes the call and moves on, then managed buffers are once again visible to (all) CPU threads. Another kernel launch (anywhere) repeats the process, and interrupts the accessibility. To repeat, this is the mechanism that is in effect when the concurrentManagedAccess property on the GPU is not true, and this behavior can be somewhat modified via the aforementioned stream attach mechanism.
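A minimal sketch of that discipline in practice (not the asker's code; the buffer size and the invert kernel are placeholders):

#include <cuda_runtime.h>
#include <cstdio>

__global__ void invert(unsigned char *img, size_t n)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) img[i] = 255 - img[i];
}

int main()
{
    int concurrent = 0;
    cudaGetDeviceAttribute(&concurrent, cudaDevAttrConcurrentManagedAccess, 0);
    printf("concurrentManagedAccess = %d\n", concurrent);

    const size_t n = 1920 * 1080;
    unsigned char *buf = nullptr;
    cudaMallocManaged(&buf, n);

    // Host threads (e.g. a camera library) may fill buf here, BEFORE the launch.
    buf[0] = 42;

    invert<<<(unsigned)((n + 255) / 256), 256>>>(buf, n);

    // If concurrent == 0, no CPU thread may touch buf (or any other managed
    // buffer) from the launch above until this call returns.
    cudaDeviceSynchronize();

    printf("first pixel after kernel: %d\n", (int)buf[0]);
    cudaFree(buf);
    return 0;
}

The key point is the window between the launch and cudaDeviceSynchronize(): on a device where the property is 0, the library's writer thread must not touch the managed buffer during that window.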

Related

What happens to expected memory semantics (such as read after write) when a thread is scheduled on a different CPU core?

Code within a single thread has certain memory guarantees, such as read after write (i.e. writing some value to a memory location, then reading it back should give the value you wrote).
What happens to such memory guarantees if a thread is rescheduled to execute on a different CPU core? Say a thread writes 10 to memory location X, then gets rescheduled to a different core. That core's L1 cache might have a different value for X (from another thread that was executing on that core previously), so now a read of X wouldn't return 10 as the thread expects. Is there some L1 cache synchronization that occurs when a thread is scheduled on a different core?
All that is required in this case is that the writes performed while on the first processor become globally visible before the process begins executing on the second processor. In the Intel 64 architecture this is accomplished by including one or more instructions with memory fence semantics in the code that the OS uses to transfer the process from one core to another. An example from the Linux kernel:
/*
 * Make previous memory operations globally visible before
 * sending the IPI through x2apic wrmsr. We need a serializing instruction or
 * mfence for this.
 */
static inline void x2apic_wrmsr_fence(void)
{
        asm volatile("mfence" : : : "memory");
}
This ensures that the stores from the original core are globally visible before execution of the inter-processor interrupt that will start the thread running on the new core.
Reference: Sections 8.2 and 8.3 of Volume 3 of the Intel Architectures Software Developer's Manual (document 325384-071, October 2019).
TL;DR: It depends on the architecture and the OS. On x86, this type of read-after-write hazard is mostly not an issue that has to be considered at the software level, except for weakly-ordered WC stores, which require a store fence to be executed in software on the same logical core before the thread is migrated.
Usually the thread migration operation includes at least one memory store. Consider an architecture with the following property:
The memory model is such that memory stores may not become globally observable in program order. This Wikipedia article has a not-accurate-but-good-enough table that shows examples of architectures that have this property (see the row "Stores can be reordered after stores").
The ordering hazard you mentioned may be possible on such an architecture because even if the thread migration operation completes, it doesn't necessarily mean that all the stores that the thread has performed are globally observable. On architectures with strict sequential store ordering, this hazard cannot occur.
On a completely hypothetical architecture where it's possible to migrate a thread without doing a single memory store (e.g., by directly transferring the thread's context to another core), the hazard can occur even if all stores are sequential, provided the architecture has the following property:
There is a "window of vulnerability" between the time when a store retires and when it becomes globally observable. This can happen, for example, due to the presence of store buffers and/or MSHRs. Most modern processors have this property.
So even with sequential store ordering, it may be possible that the thread running on the new core may not see the last N stores.
Note that on a machine with in-order retirement, the window of vulnerability is a necessary but insufficient condition for a memory model that supports stores that may not be sequential.
Usually a thread is rescheduled to run on a different core using one of the following two methods:
A hardware interrupt, such as a timer interrupt, occurs that ultimately causes the thread to be rescheduled on a different logical core.
The thread itself performs a system call, such as sched_setaffinity, that ultimately causes it to run on a different core.
The question is at which point does the system guarantee that retired stores become globally observable? On Intel and AMD x86 processors, hardware interrupts are fully serializing events, so all user-mode stores (including cacheable and uncacheable) are guaranteed to be globally observable before the interrupt handler is executed, in which the thread may be rescheduled to run a different logical core.
On Intel and AMD x86 processors, there are multiple ways to perform system calls (i.e., change the privilege level), including INT, SYSCALL, SYSENTER, and far CALL. None of them guarantee that all previous stores become globally observable. Therefore, the OS is supposed to do this explicitly when scheduling a thread on a different core, by executing a store fence operation. This is done as part of saving the thread context (architectural user-mode registers) to memory and adding the thread to the queue associated with the other core. These operations involve at least one store that is subject to the sequential ordering guarantee. So when the scheduler runs on the target core, the full register and memory architectural state of the thread (at the point of its last retired instruction) is visible on that core.
On x86, if the thread uses stores of type WC, which do not guarantee sequential ordering, the OS may not guarantee in this case that it will make these stores globally observable. The x86 spec explicitly states that in order to make WC stores globally observable, a store fence has to be used (either in the thread on the same core or, much simpler, in the OS). An OS generally should do this, as mentioned in @JohnDMcCalpin's answer. Otherwise, if the OS doesn't provide the program order guarantee to software threads, then the user-mode programmer may need to take this into account. One way would be the following:
Save a copy of the current CPU mask and pin the thread to the current core (or any single core).
Execute the weakly-ordered stores.
Execute a store fence.
Restore the CPU mask.
This temporarily disables migration to ensure that the store fence is executed on the same core as the weakly-ordered stores. After executing the store fence, the thread can safely migrate without possibly violating program order.
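A rough sketch of that recipe on Linux/x86 (the write-combining destination wc_dst is hypothetical here; a real one would come from a driver mapping):

#include <sched.h>        // sched_getaffinity/sched_setaffinity, CPU_* macros (GNU)
#include <unistd.h>
#include <immintrin.h>    // _mm_sfence
#include <cstddef>

// wc_dst is assumed to point into a WC mapping set up elsewhere.
void publish_wc(volatile char *wc_dst, const char *src, size_t len)
{
    cpu_set_t saved, pinned;
    CPU_ZERO(&saved);
    sched_getaffinity(0, sizeof(saved), &saved);      // 1. save the current CPU mask

    CPU_ZERO(&pinned);
    CPU_SET(sched_getcpu(), &pinned);
    sched_setaffinity(0, sizeof(pinned), &pinned);    //    ...and pin to this core

    for (size_t i = 0; i < len; ++i)                  // 2. the weakly-ordered stores
        wc_dst[i] = src[i];

    _mm_sfence();                                     // 3. store fence on the same core

    sched_setaffinity(0, sizeof(saved), &saved);      // 4. restore the CPU mask
}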
Note that user-mode sleep instructions, such as UMWAIT, cannot cause the thread to be rescheduled on a different core because the OS does not take control in this case.
Thread Migration in the Linux Kernel
The code snippet from @JohnDMcCalpin's answer falls on the path to send an inter-processor interrupt, which is achieved using a WRMSR instruction to an APIC register. An IPI may be sent for many reasons, for example to perform a TLB shootdown operation. In that case, it's important to ensure that the updated paging structures are globally observable before invalidating the TLB entries on the other cores. That's why x2apic_wrmsr_fence may be needed; it is invoked just before sending an IPI.
That said, I don't think thread migration requires sending an IPI. Essentially, a thread is migrated by removing it from some data structure that is associated with one core and adding it to the one associated with the target core. A thread may be migrated for numerous reasons, such as when the affinity changes or when the scheduler decides to rebalance the load. As mentioned in the Linux source code, all paths of thread migration end up executing the following:
stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg)
where arg holds the task to be migrated and the destination core identifier. migration_cpu_stop is a function that does the actual migration. However, the task to be migrated may be currently running or waiting in some runqueue on the source core (i.e., the core on which the task is currently scheduled). It's required to stop the task before migrating it. This is achieved by adding the call to the function migration_cpu_stop to the queue of the stopper task associated with the source core. stop_one_cpu then sets the stopper task as ready for execution. The stopper task has the highest priority. So on the next timer interrupt on the source core (which could be the same as the current core), one of the tasks with the highest priority will be selected to run. Eventually, the stopper task will run and it will execute migration_cpu_stop, which in turn performs the migration. Since this process involves a hardware interrupt, all stores of the target task are guaranteed to be globally observable.
There appears to be a bug in x2apic_wrmsr_fence
The purpose of x2apic_wrmsr_fence is to make all previous stores globally observable before sending the IPI. As discussed in this thread, SFENCE is not sufficient here. To see why, consider the following sequence:
store
sfence
wrmsr
The store fence here can order the preceding store operation, but not the MSR write. The WRMSR instruction doesn't have any serializing properties when writing to an APIC register in x2APIC mode. This is mentioned in the Intel SDM volume 3 Section 10.12.3:
To allow for efficient access to the APIC registers in x2APIC mode, the serializing semantics of WRMSR are relaxed when writing to the APIC registers.
The problem here is that MFENCE is also not guaranteed to order the later WRMSR with respect to previous stores. On Intel processors, it's documented to order only memory operations. Only on AMD processors is it guaranteed to be fully serializing. So to make it work on Intel processors, there needs to be an LFENCE after the MFENCE (SFENCE is not ordered with LFENCE, so MFENCE must be used even though we don't need to order loads). Section 10.12.3 actually mentions this.
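In the style of the snippet quoted earlier, the ordering this argument calls for would look something like this (a sketch of the idea, not the actual kernel patch):

static inline void x2apic_wrmsr_fence_fixed(void)
{
        /* mfence orders the earlier stores; lfence keeps the following
         * non-serializing WRMSR from starting before the fences complete. */
        asm volatile("mfence; lfence" : : : "memory");
}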
If a platform is going to support moving a thread from one core to another, whatever code does that moving must respect whatever guarantees a thread is allowed to rely on. If a thread is allowed to rely on the guarantee that a read after a write will see the updated value, then whatever code migrates a thread from one core to another must ensure that guarantee is preserved.
Everything else is platform specific. If a platform has an L1 cache then hardware must make that cache fully coherent or some form of invalidation or flushing will be necessary. On most typical modern processors, hardware makes the cache only partially coherent because reads can also be prefetched and writes can be posted. On x86 CPUs, special hardware magic solves the prefetch problem (the prefetch is invalidated if the L1 cache line is invalidated). I believe the OS and/or scheduler has to specifically flush posted writes, but I'm not entirely sure and it may vary based on the exact CPU.
The CPU goes to great cost to ensure that a read will always see a previous write in the same instruction stream. For an OS to remove this guarantee and require all user-space code to work without it would be a complete non-starter, since user-space code has no way to know where in its code it might get migrated.
Adding my two bits here. At first glance, a barrier seems like overkill (see the answers above).
Consider this logic: when a thread wants to write to a cache line, HW cache coherence kicks in and we need to invalidate all other copies of the cache line held by other cores in the system; the write doesn't proceed without the invalidations. When a thread is then re-scheduled to a different core, it will have to fetch the cache line from the L1 cache that has write permission, thereby maintaining read-after-write sequential behavior.
The problem with this logic is that invalidations from cores aren't applied immediately, hence it is possible to read a stale value after being rescheduled (the read from the new L1 cache somehow beats the pending invalidation sitting in a queue on that core). This is OK for different threads, because they are allowed to slip and slide, but for the same thread a barrier becomes essential.

sandboxing threads without separate processes

In the interest of ease of programming (local function calls instead of IPC) and performance (e.g. avoiding copies of large buffers), I'd like to have a Java VM call native code using JNI instead of through interprocess communication. There would be lots of worker threads, each doing computer vision on some image and sending back a list of detected features.
I've found a few other posts about this topic:
How to implement a native code sandbox?
Linux: Is it possible to sandbox shared library code
but in all cases, the agreed upon solution is to use multiple processes.
But I would like to explore the feasibility of partly sandboxing threads. Clearly, this goes against common sense, but I think if your client processes aren't malicious, if you can recover from faults, and in the worst case are willing to tolerate a whole system crash once in a blue moon, it might work.
There are some hints that this is possible, such as from jmajnert in #2. You would have to capture segfaults and other crashes, and terminate and restart the crashed thread. But I also want to reset the heap of the thread. That means each thread should have a private heap, but I don't know of any common malloc implementation that lets you create multiple heaps (AIX seems to).
Then I would want to close all files opened by the thread when it gets restarted.
Also, if Java objects get compromised by the native code, would it be practical to provide some fault tolerance like recreating them?
Because of the complexity of hopping between models, between some native code and the JVM, the idea itself is really not even feasible.
To be feasible, you'd need to be within a single machine/threading model.
Let's assume you're in POSIX/ANSI C.
You'd need to write a custom allocator that allocates from pools. Each time you launched a thread you'd allocate a new pool and set that pool as a thread-local variable that all your custom_malloc() functions would allocate from. This way, when your thread died you could crush all of its memory along with it.
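A bare-bones sketch of that per-thread pool idea (custom_malloc, pool_create and pool_destroy are made-up names; a real allocator would also need free lists, alignment and growth):

#include <cstddef>
#include <cstdlib>

struct Pool {
    char*  base = nullptr;
    size_t size = 0;
    size_t used = 0;
};

static thread_local Pool tls_pool;           // one pool per worker thread

void pool_create(size_t bytes) {
    tls_pool.base = static_cast<char*>(std::malloc(bytes));
    tls_pool.size = bytes;
    tls_pool.used = 0;
}

void* custom_malloc(size_t bytes) {          // bump allocation from this thread's pool
    if (tls_pool.used + bytes > tls_pool.size) return nullptr;
    void* p = tls_pool.base + tls_pool.used;
    tls_pool.used += bytes;
    return p;
}

void pool_destroy() {                        // "crush" all of the thread's memory at once
    std::free(tls_pool.base);
    tls_pool = Pool{};
}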
Next, you'll need to set up some niftiness with setjmp/longjmp and signal to catch all those segfaults etc., exit the thread, crush its memory, and restart.
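A sketch of that setjmp/signal part. Note that recovering from SIGSEGV via siglongjmp is not something the standard guarantees; this only illustrates the flow described above, and run_native_vision_code is a hypothetical stand-in for the JNI worker:

#include <setjmp.h>
#include <signal.h>
#include <cstddef>
#include <cstdio>

void pool_create(size_t bytes);   // from the allocator sketch above
void pool_destroy();

static thread_local sigjmp_buf recovery_point;

static void fault_handler(int)
{
    siglongjmp(recovery_point, 1);               // unwind back to the restart point
}

void worker_loop()
{
    signal(SIGSEGV, fault_handler);
    for (;;) {
        if (sigsetjmp(recovery_point, 1) != 0) {
            pool_destroy();                      // crush the crashed iteration's memory
            std::fprintf(stderr, "worker crashed, restarting\n");
        }
        pool_create(16 * 1024 * 1024);
        // run_native_vision_code();             // hypothetical untrusted JNI work
        pool_destroy();
    }
}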
If you have objects from the "parent process" that you don't want to get corrupted, you'd have to create some custom mutexes with rollback functions that could be triggered when a thread's signal handler was triggered to destroy the thread.

Does v4l2 support multi-map?

I'm trying to share frames (images) that I receive from a USB camera (Logitech C270) between two processes so that I can avoid a memcpy. I'm using the memory-mapped streaming I/O method described here, and I can successfully get frames from the camera after using v4l2_mmap. However, I have another process (for image processing) which has to use the image buffers after the dequeue and signal the first process to queue the buffer again.
Searching online, I found that opening a video device multiple times is allowed, but when I try to map (I tried both v4l2_mmap and plain mmap) in the second process after a successful v4l2_open, I get an EINVAL error.
I found this PDF, which talks about implementing multi-map in v4l2 (not official), and was wondering if this has been implemented. I have also tried the user-pointer streaming I/O method, whose documentation explicitly states that shared memory can be implemented this way, but I get EINVAL when I request buffers (according to the documentation on linuxtv.org this means the camera doesn't support user-pointer streaming I/O).
Note: I want to keep the code modular, hence two processes. If this is not possible, doing all the work in a single process(multiple threads & global frame buffer) is still possible.
Using standard shared memory function calls is not possible, as the two processes have to map the video device file (/dev/video0) and I cannot have it under /dev/shm.
The main problem with multi-consumer mmap is that this needs to be implemented on the device driver side. That is: even if some devices might support multi-map, others might not.
So unless you can control the camera that is being used with your application, you will eventually come across one that does not, in which case your application would not work.
So in any case, your application should provide means to handle non multi-map devices.
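As a concrete illustration, a rough probe like the following (standard V4L2 ioctls; the device path is assumed to be /dev/video0) is one way an application can detect at startup whether the driver accepts user-pointer buffers and fall back accordingly:

#include <fcntl.h>
#include <unistd.h>
#include <cerrno>
#include <cstring>
#include <cstdio>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

bool supports_userptr(const char* dev = "/dev/video0")
{
    int fd = open(dev, O_RDWR);
    if (fd < 0) return false;

    v4l2_requestbuffers req;
    std::memset(&req, 0, sizeof(req));
    req.count  = 4;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_USERPTR;

    bool ok = (ioctl(fd, VIDIOC_REQBUFS, &req) == 0);
    if (!ok && errno == EINVAL)
        std::fprintf(stderr, "driver does not support USERPTR streaming\n");

    close(fd);
    return ok;
}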
Btw, you do not need multiple processes to keep your code modular.
Multiple processes have their merits (e.g. privilege separation, crash resilience, ...), but might also encourage code duplication...
This may not be relevant now.....
You don't need to use the full-monty multi-consumer thing to do this. I have used Python to hand off the processing of the mmap buffers to multiple processes (Python multi-threading only allows one thread at a time to execute).
If you're running multi-threaded, then worker threads can pick up the buffer and process it independently when triggered by the master thread.
Since the code is obviously very pythonesque I won't post it here, as it wouldn't make sense in other languages; it uses Python multiprocessing support.

C++/Linux: Using c++11 atomic to avoid partial read on dual-mapped mmap region

I have a program which has two threads. One thread (Writer Thread) writes to a file while the other (Reader Thread) consumes the data from the first. In the program, the same region of the file is mapped twice: once with read & write permission for the Writer Thread, and once with read-only permission for the Reader Thread. (The two mapped regions have different pointers/virtual addresses from mmap, as expected.) I attempt to use a C++11 atomic to control the memory order.
Here is what I have in my mind:
Writer Thread:
Create the data content (fixed size) in the memory mapped region with write permission.
Update the atomic variable with release memory order.
Reader Thread:
Continuously poll on the atomic variable with acquire memory order until there is a new message.
If there is an outstanding message, read the data from the read only memory mapped region.
Questions
Even though the read-only mmap region and the writable mmap region refer to the same file region, they have different virtual memory addresses. Could the atomic variable protect against a partial read here? (I.e., if the reader thread sees that the atomic variable has been updated with acquire semantics, could the read-only memory region still contain only a partial message, or could the message not be visible at all?) (It seems to me that if the two virtual regions are mapped to the same physical memory page(s), it should work.)
What if the Reader Thread uses the read system call instead of the read-only mmap region? Could the atomic variable still prevent a partial read?
I have written a test program that seems to work. However, I would like to be advised by a more experienced programmer/Linux expert whether it should work. Thanks!
Using different virtual memory ranges does not change things here. For proof note that atomic operations work just fine between two processes using the same shared memory. Each may have it mapped at a different virtual address.
What is important is that it references the same piece of physical memory.
The read() system call does not do anything to lock memory or access it atomically. It is simply a memcpy() done in the kernel from the file cache to user space. If another CPU is writing into that memory it can get a partial read.
Your scenario is perfectly valid and safe. The release in the writer thread and the acquire in the reader thread guarantee a happens-before relation. To quote the standard:
29.3.2 2 An atomic operation A that performs a release operation on an atomic object M synchronizes with an atomic operation B that performs an acquire operation on M and takes its value from any side effect in the release sequence headed by A
As a side note, your bottleneck would most probably be the file operations, so using atomics instead of a mutex probably wouldn't have an observable effect on performance.
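A stripped-down sketch of that release/acquire scheme (single fixed-size slot; in the real program the two Shared pointers would come from the writable and read-only mmap regions, and std::atomic<uint64_t> is assumed to be lock-free so it is usable over shared memory):

#include <atomic>
#include <cstdint>
#include <cstring>

struct Shared {
    char data[4096];                     // message payload
    std::atomic<uint64_t> seq{0};        // message counter
};

void writer(Shared* rw, const char* msg, size_t len)
{
    std::memcpy(rw->data, msg, len);                     // 1. write the payload
    rw->seq.fetch_add(1, std::memory_order_release);     // 2. publish it
}

uint64_t reader(const Shared* ro, uint64_t last_seen, char* out, size_t len)
{
    uint64_t s;
    while ((s = ro->seq.load(std::memory_order_acquire)) == last_seen)
        ;                                                // poll for a new message
    std::memcpy(out, ro->data, len);                     // acquire saw the release, so the payload is complete
    return s;
}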
It does not seem like you need atomics here at all. What you need is a volatile variable. Atomicity would be ensured by the OS itself, since the memory is backed by a file.
EDIT.
I see a bunch of people downvoted this answer, and in the absence of meaningful comments I will assume people didn't understand it and just reacted to the suggested usage of volatile in the context of a multithreaded application. I will try to explain my position.
Reads and writes to memory-backed files are atomic as long as the corresponding read() and write() system calls are atomic, and those are atomic as long as the memory buffer size does not exceed PIPE_BUF (4K on Linux, if memory serves). They also guarantee ordering. So, as long as you are memcpying chunks of less than 4K, you are good, as long as the call has made it through compiler optimizations.
Volatile is needed exactly for this - to prevent the compiler from optimizing away reads and writes to this memory - and is used exactly as prescribed.
On a side note, with exactly the same design on AIX, we've seen a huge performance degradation compared to a slightly modified design in which the writer uses write() to update the memory-mapped file directly. Not sure if it is an AIX quirk, but if performance is important, you might want to do some benchmarking.

Memory addressing in assembly / multitasking

I understand how programs in machine code can load values from memory into registers, perform jumps, or store values in registers to memory, but I don't understand how this works for multiple processes. A process is allocated memory on the fly, so must it use relative addressing? Is this done automatically (meaning there are assembly instructions that perform relative jumps, etc.), or does the program have to "manually" add the correct offset to every memory position it addresses?
I have another question regarding multitasking that is somewhat related. How does the OS, which isn't running, stop a thread and move on to the next? Is this done with timed interrupts? If so, then how can the values in registers be preserved for a thread? Are they saved to memory before control is given to a different thread? Or, rather than timed interrupts, does the thread simply choose a good time to give up control? In the case of timed interrupts, what happens if a thread is given processor time and it doesn't need it? Does it have to waste it, can it call the interrupt manually, or does it alert the OS that it doesn't need much time?
Edit: Or are executables edited before being run to compensate for the correct offsets?
That's not how it works. All modern operating systems virtualize the available memory, giving every process the illusion that it has 2 gigabytes of memory (or more) and doesn't have to share it with anybody. The key component in a machine that does this is the MMU, nowadays built into the processor itself. Another core feature of this virtualization is that it isolates processes: one misbehaving process cannot bring another one down with it.
Yes, a clock tick interrupt is used to interrupt the currently running code. Processor state is simply saved on the stack. The operating system scheduler then checks if any other thread is ready to run and has a high enough priority to get first in line. Some extra code ensures that everybody gets a fair share. Then it is just a matter of setting the MMU to resume execution on the other thread. If no thread is ready to run, the CPU is effectively turned off with the HALT instruction, to be woken again by the next clock interrupt.
This is a ten-thousand-foot view; it is well covered in any book about operating system design.
A process is allocated memory on the fly, so must it use relative addressing?
No, it can use relative or absolute addressing depending on what it is trying to address.
At least historically, the various different addressing modes were more about local versus remote memory. Relative addressing was for memory addresses close to the current address, while absolute was more expensive but could address anything. With modern virtual memory systems, these distinctions may no longer be necessary.
A process is allocated memory on the fly, so must it use relative addressing? Is this done automatically (meaning there are assembly instructions that perform relative jumps, etc.), or does the program have to "manually" add the correct offset to every memory position it addresses.
I'm not sure about this one. This is normally taken care of by the compiler. Again, modern virtual memory systems make this complexity unnecessary.
Are they saved to memory before control is given to a different thread?
Yes. Typically all of the state (registers, etc.) is stored in a process control block (PCB), a new context is selected, the registers and other context are loaded from the new PCB, and execution begins in the new context. The PCB can be stored on the stack or in kernel memory, or the kernel can utilize processor-specific operations to optimize this process.
Or, rather than timed interrupts, does the thread simply choose a good time to give up control.
The thread can yield control -- put itself back at the end of the run queue. It can also wait for some IO or sleep. Thread libraries then put the thread in wait queues and switch to another context. When the IO is ready or the sleep expires, the thread is put back into the run queue. The same happens with mutex locks. It waits for the lock in a wait queue. Once the lock is available, the thread is put back into the run queue.
In the case of timed interrupts, what happens if a thread is given processor time and it doesn't need it. Does it have to waste it, can it call the interrupt manually, or does it alert the OS that it doesn't need much time?
Either the thread can run (perform CPU instructions) or it is waiting -- either on IO or a sleep. It can ask to yield, but typically it does so by [again] sleeping or waiting on IO.
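For illustration, the two cooperative paths mentioned above look like this with the standard C++ thread facilities (the work_available flag is just a stand-in):

#include <thread>
#include <chrono>

void cooperative_step(bool work_available)
{
    if (!work_available) {
        std::this_thread::yield();                                   // go to the back of the run queue
        std::this_thread::sleep_for(std::chrono::milliseconds(5));   // or block in a wait queue until the timer expires
    }
}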
I probably walked into this question quite late, but then, it may be of use to some other programmers. First - the theory.
The modern-day operating system virtualizes memory, and to do so it maintains, within its system memory area, a series of page pointers. Each page is of a fixed size (usually 4K), and when any program asks for memory, it is allocated memory addresses that are virtualized through the page pointers. This approximates the behaviour of the "segment" registers in the prior generation of processors.
Now, when the scheduler decides to get another process running, it may or may not keep the previous process in memory. If it keeps it in memory, then all the scheduler does is save the entire register snapshot (now including the YMM registers - this was a complex issue earlier, as there was no single instruction that saved the entire context: read up on XSAVE), and this has a fixed format (available in the Intel SW manual). This is stored in the memory space of the scheduler itself, along with the information on the memory pages that were being used.
If, however, the scheduler needs to "dump" to the hard disk the context of the process that is about to go to sleep - this situation usually arises when the process that is waking up needs an extraordinary amount of memory - then the scheduler writes the memory pages to disk blocks (the pagefile - a reserved area of disk - also the source of the "old grandmother wisdom" that the pagefile must be equal to the size of real memory) and preserves the memory page pointer addresses as offsets in the pagefile. When the process wakes up, the scheduler reads the offset addresses from the pagefile, allocates real memory, populates the memory page pointers, and then loads the contents from the disk blocks.
Now, to answer your specific questions:
1. Do you need to use only relative addressing, or can you use absolute?
Ans. You may use either - whatever you perceive to be absolute is also relative, as the memory page pointer relativizes that address invisibly. There is no truly absolute memory address anywhere (including the I/O device memories) except in the kernel of the operating system itself. To test this, you may disassemble any .EXE program to see that the entry point is always CALL 0010, which clearly implies that each thread gets a different "0010" at which to start execution.
2. How do threads get time slices, and what if a thread surrenders its unused slice?
Ans. Threads usually get a slice - modern systems use 20 ms as the usual standard, though this is sometimes changed in special-purpose builds for servers that do not have many hardware interrupts to deal with - in order of their position in the process queue. A thread usually surrenders its slice by calling sleep(), which is a formal (and very nice) way to surrender the remaining part of your time slice. Most libraries implementing asynchronous reads, or interrupt actions, call sleep() internally, but in many instances top-level programs also call sleep() - e.g. to create a time gap. An invocation of sleep() will certainly change the process context - the CPU is not given the liberty to sleep using NOP.
The other method is to wait for an IO to complete, and this is handled differently. The program, on asking for an IO operation, will cede its time slice, and the process scheduler flags this thread as being in a "WAITING FOR IO" state - this thread will not be given a time slice until its intended IO has completed or timed out. This feature helps programmers, as they do not have to explicitly write a sleep_until_IO() kind of interface.
Trust this sets you going further in your explorations.

Resources