Can InterlockedIncrement64 and __sync_add_and_fetch result in a torn read?

Suppose we have a 64-bit global variable, initially zero.
volatile uint64_t gDest = 0;
Store (Atomic) – in one thread.
At some point, we atomically increment a 64-bit value of this variable.
AtomicIncrement64 (&gDest);
Here AtomicIncrement64 can be one of the compiler intrinsics or builtins, such as InterlockedIncrement64 (MSVC) or __sync_add_and_fetch (GCC).
Load – in another thread
uint64_t a = gDest;
Suppose the compiler implements this load using two machine instructions: the first reads the lower 32 bits into eax, and the second reads the upper 32 bits into edx. If a concurrent atomic store to gDest becomes visible between the two instructions, will the load be torn?
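For contrast, here is a minimal sketch (not from the original post) of the same pattern written with std::atomic, which guarantees an untorn 64-bit load: on 32-bit x86 a lock-free std::atomic<uint64_t> is typically loaded with a single 8-byte access (e.g. via SSE/x87 or lock cmpxchg8b) rather than two 32-bit moves.

#include <atomic>
#include <cstdint>

std::atomic<uint64_t> gDest{0};      // instead of volatile uint64_t

void Increment()                     // atomic read-modify-write, like InterlockedIncrement64
{
    gDest.fetch_add(1, std::memory_order_relaxed);
}

uint64_t Load()                      // guaranteed untorn 64-bit load
{
    return gDest.load(std::memory_order_relaxed);
}

With a plain volatile uint64_t, by contrast, nothing stops a 32-bit compiler from splitting the load, which is exactly the scenario described above.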

Related

Results of doing += on a double from multiple threads

Consider the following code:
void add(double& a, double b) {
    a += b;
}
which, according to Godbolt, compiles for Skylake to:
add(double&, double):
vaddsd xmm0, xmm0, QWORD PTR [rdi]
vmovsd QWORD PTR [rdi], xmm0
ret
If I call add(a, 1.23) and add(a, 2.34) from different threads (for the same variable a), will a definitely end up as either a+1.23, a+2.34, or a+1.23+2.34?
That is, will one of these results definitely happen given this assembly, and a will not end up in some other state?
Here is the question that is relevant to me:
Does the CPU fetch the word you are dealing with in a single operation?
Some processors allow access to a variable that happens to be misaligned in memory by doing two fetches, one after the other - not atomically, of course.
In that case, problems arise if another thread writes to that area of memory while the first thread has already fetched the first part of the word and then fetches the second part after the other thread has modified it.
thread 1 fetches first part of a XXXX
thread 1 fetches second part of a YYYY
thread 2 fetches first part of a XXXX
thread 1 increments double represented as XXXXYYYY that becomes ZZZZWWWW by adding b
thread 1 writes back in memory ZZZZ
thread 1 writes back in memory WWWW
thread 2 fetches second part of a that is now WWWW
thread 2 increments double represented as XXXXWWWW that becomes VVVVPPPP by adding b
thread 2 writes back in memory VVVV
thread 2 writes back in memory PPPP
To keep it compact, I used one character to represent 8 bits.
Now XXXXWWWW and VVVVPPPP are representations of totally different floating-point values than the ones you would have expected. That is because you ended up mixing two halves of two different binary representations (IEEE-754) of double values.
That said, I know that on certain ARM-based architectures such misaligned accesses are not allowed (they would cause a trap to be generated), but I suspect that Intel processors do allow them.
Therefore, if your variable a is aligned, your result can be any of
a+1.23, a+2.34, a+1.23+2.34
if your variable might be misaligned (i.e. its address is not a multiple of 8), your result can be any of
a+1.23, a+2.34, a+1.23+2.34 or a rubbish value
As a further note, bear in mind that even if in your environment alignof(double) == 8, that is not necessarily enough to conclude you are not going to have misalignment issues. It all depends on where your particular variable comes from. Consider the following (or run it here):
#pragma pack(push, 1)
struct Packet
{
    unsigned char val1;
    unsigned char val2;
    double val3;
    unsigned char val4;
    unsigned char val5;
};
#pragma pack(pop)
int main()
{
    static_assert(alignof(double) == 8);
    double d;
    add(d, 1.23);      // your a parameter is aligned
    Packet p;
    add(p.val3, 1.23); // your a parameter is now NOT aligned
    return 0;
}
Therefore asserting alignof() doesn't necessarily guarantee your variable is aligned. If your variable is not involved in any packing then you should be OK.
Please allow me just a disclaimer for whoever else is reading this answer: using std::atomic<double> in these situations is the best compromise in terms of implementation effort and performance to achieve thread safety. There are CPU architectures that have special, efficient instructions for dealing with atomic variables without injecting heavy fences, and that might already satisfy your performance requirements.
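As a rough sketch of that suggestion (not part of the original answer): before C++20 there is no fetch_add for std::atomic<double>, so the increment is usually written as a compare-exchange loop.

#include <atomic>

std::atomic<double> total{0.0};

void add(std::atomic<double>& target, double b)
{
    double expected = target.load();
    // compare_exchange_weak refreshes 'expected' on failure, so just retry
    while (!target.compare_exchange_weak(expected, expected + b))
    {
    }
}

Concurrent calls then always leave the variable at one of the expected sums, with no torn values.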

Function for unaligned memory access on ARM

I am working on a project where data is read from memory. Some of this data consists of integers, and there was a problem accessing them at unaligned addresses. My idea would be to use memcpy for that, i.e.
uint32_t readU32(const void* ptr)
{
    uint32_t n;
    memcpy(&n, ptr, sizeof(n));
    return n;
}
The solution from the project source I found is similar to this code:
uint32_t readU32(const uint32_t* ptr)
{
    union {
        uint32_t n;
        char data[4];
    } tmp;
    const char* cp = (const char*)ptr;
    tmp.data[0] = *cp++;
    tmp.data[1] = *cp++;
    tmp.data[2] = *cp++;
    tmp.data[3] = *cp;
    return tmp.n;
}
So my questions:
Isn't the second version undefined behaviour? The C standard says in 6.3.2.3 Pointers, paragraph 7:
A pointer to an object or incomplete type may be converted to a pointer to a different object or incomplete type. If the resulting pointer is not correctly aligned for the pointed-to type, the behavior is undefined.
As the calling code has, at some point, used a char* to handle the memory, there must be some conversion from char* to uint32_t*. Isn't the result of that undefined behaviour, then, if the uint32_t* is not correctly aligned? And if it is correctly aligned, there is no point to the function, as you could simply write *(uint32_t*) to fetch the memory. Additionally, I think I read somewhere that the compiler may expect an int* to be aligned correctly, and that any unaligned int* would mean undefined behaviour as well, so the generated code for this function might take shortcuts because it expects the function argument to be aligned properly.
The original code has volatile on the argument and all variables because the memory contents could change (it's a data buffer (no registers) inside a driver). Maybe that's why it does not use memcpy since it won't work on volatile data. But, in which world would that make sense? If the underlying data can change at any time, all bets are off. The data could even change between those byte copy operations. So you would have to have some kind of mutex to synchronize access to this data. But if you have such a synchronization, why would you need volatile?
Is there a canonical/accepted/better solution to this memory access problem? After some searching I came to the conclusion that you need a mutex, do not need volatile, and can use memcpy.
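A minimal sketch of that conclusion (the buffer and names here are hypothetical, not from the project): protect the shared buffer with a mutex and let memcpy do the actual, alignment-safe read; volatile is then not needed.

#include <cstdint>
#include <cstring>
#include <mutex>

std::mutex bufferMutex;           // guards every access to sharedBuffer
unsigned char sharedBuffer[256];  // filled elsewhere by the driver

uint32_t readU32At(std::size_t offset)
{
    std::lock_guard<std::mutex> lock(bufferMutex);
    uint32_t n;
    std::memcpy(&n, sharedBuffer + offset, sizeof n);  // works for any alignment
    return n;
}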
P.S.:
# cat /proc/cpuinfo
processor : 0
model name : ARMv7 Processor rev 10 (v7l)
BogoMIPS : 1581.05
Features : swp half thumb fastmult vfp edsp neon vfpv3 tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x2
CPU part : 0xc09
CPU revision : 10
This code
uint32_t readU32(const uint32_t* ptr)
{
    union {
        uint32_t n;
        char data[4];
    } tmp;
    const char* cp = (const char*)ptr;
    tmp.data[0] = *cp++;
    tmp.data[1] = *cp++;
    tmp.data[2] = *cp++;
    tmp.data[3] = *cp;
    return tmp.n;
}
passes the pointer as a uint32_t *. If it's not actually a uint32_t, that's UB. The argument should probably be a const void *.
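A sketch of that adjustment (assuming the rest of the code stays byte-wise): with const void* no misaligned uint32_t* is ever formed, and the union/byte-copy body is unchanged.

uint32_t readU32(const void* ptr)
{
    union {
        uint32_t n;
        char data[4];
    } tmp;
    const char* cp = (const char*)ptr;   // byte pointer: valid at any alignment
    tmp.data[0] = cp[0];
    tmp.data[1] = cp[1];
    tmp.data[2] = cp[2];
    tmp.data[3] = cp[3];
    return tmp.n;
}

Strictly speaking, reading a union member other than the one last written is guaranteed in C but not in C++, so in C++ the memcpy version at the top of the question is the safer equivalent.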
The use of a const char * in the conversion itself is not undefined behavior. Per 6.3.2.3 Pointers, paragraph 7 of the C Standard (emphasis mine):
A pointer to an object type may be converted to a pointer to a
different object type. If the resulting pointer is not correctly
aligned for the referenced type, the behavior is undefined.
Otherwise, when converted back again, the result shall compare
equal to the original pointer. When a pointer to an object is
converted to a pointer to a character type, the result points to the
lowest addressed byte of the object. Successive increments of the
result, up to the size of the object, yield pointers to the remaining
bytes of the object.
The use of volatile with respect to the correct way to access memory/registers directly on your particular hardware would have no canonical/accepted/best solution. Any solution for that would be specific to your system and beyond the scope of standard C.
Implementations are allowed to define behaviors in cases where the Standard does not, and some implementations may specify that all pointer types have the same representation and may be freely cast among each other regardless of alignment, provided that pointers which are actually used to access things are suitably aligned.
Unfortunately, because some obtuse compilers compel the use of "memcpy" as an escape valve for aliasing issues even when pointers are known to be aligned, the only way compilers can efficiently process code which needs to make type-agnostic accesses to aligned storage is to assume that any pointer of a type requiring alignment will always be aligned suitably for such a type. As a result, your instinct that the approach using uint32_t* is dangerous is spot on. It may be desirable to have compile-time checking to ensure that a function is either passed a void* or a uint32_t*, and not something like a uint16_t* or a double*, but there's no way to declare a function that way without allowing a compiler to "optimize" the function by consolidating the byte accesses into a 32-bit load that will fail if the pointer isn't aligned.

Is it possible for a binary integer operation that results in an overflow to overwrite adjacent memory?

This question is not about any error I'm currently seeing, it's more about theory and getting educated on the variations in HW architecture design and implementation.
Scenario 1: Assuming a 16-bit processor with 16-bit registers, 16-bit addressing, and sizeof(int) = 16 bits:
unsigned int a, b, c, d;
a=0xFFFF;
b=0xFFFF;
c=a+b;
Is it possible for the memory location next to c to be overwritten? (In this case I would expect that an overflow flag would be raised during the add operation, and c either remain unchanged or be filled with undefined data.)
Scenario 2: Assuming a 32-bit processor with 32-bit registers, 32-bit addressing, sizeof(int) = 32 bits, and sizeof(short int)=16 bits:
unsigned int a, b;
unsigned short int c, d;
a=0xFFFF;
b=0xFFFF;
c=a+b;
Is it possible for the memory location next to c to be overwritten? (I would expect that no overflow flag be raised during the add operation, but whether or not a memory access or overflow flag be raised during the assignment operation would depend on the actual design and implementation of the HW. If d was located in the upper 16 bits of the same 32-bit address location (probably not even possible with 32-bit addressing), it might be overwritten.)
Scenario 3: Assuming a 32-bit processor with 32-bit registers, 16-bit addressing, sizeof(int) = 32 bits, and sizeof(short int)=16 bits:
unsigned int a, b;
unsigned short int c, d;
a=0xFFFF;
b=0xFFFF;
c=a+b;
Is it possible for the memory location next to c to be overwritten? (I would expect some overflow flag or memory violation flag to be raised during the type conversion and assignment operation.)
Scenario 4: Assuming a 32-bit processor with 32-bit registers, 32-bit addressing, and sizeof(int) = 32 bits:
unsigned int a, b;
struct {
unsigned int c:16;
unsigned int d:16;
} e;
a=0xFFFF;
b=0xFFFF;
e.c=a+b;
Is it possible for the memory location next to c, namely d, to be overwritten? (In this case, since c and d are expected to reside in the same 32-bit address and both are technically 32-bit integers, it's conceivable that no overflow flags be raised during addition or assignment, and d could be affected.)
I have not tried to test this on actual hardware because my question is more about theory and possible variations in design and implementation. Any insight would be appreciated.
Is it possible for a binary integer operation that results in an overflow to overwrite adjacent memory?
Are there currently any HW implementations that suffer from similar memory overwrite problems, or have there been systems in the past that had this problem?
What devices do typical processors use to guard against neighboring memory from being overwritten by arithmetic and assignment operations?
None of your scenarios 1, 2, 3, or 4 will result in any memory corruption or in writing "past" the storage for the integer location you are performing arithmetic on. C specifies what should happen on overflow of unsigned integers. This assumes the compiler produces the code it's supposed to; nothing will guard you against a buggy compiler that generates code which copies 4 bytes into a 2-byte variable.
Here's what C99 (6.2.5) says:
"A computation involving unsigned operands can never overflow,
because a result that cannot be represented by the resulting unsigned integer type is
reduced modulo the number that is one greater than the largest value that can be
represented by the resulting type."
So, it's well defined what will happen when you try to "overflow" an unsigned integer.
Now, if your integers had been signed integers, it is another story. According to C, overflowing a signed integer results in undefined behavior, and undefined behavior means anything, including memory corruption, could occur. I have not yet seen a C compiler that would corrupt anything on signed integer overflow, though.
What devices do typical processors use to guard against neighboring memory from being overwritten by arithmetic and assignment operations?
There are no guards protecting neighboring memory from assignment and arithmetic operations. The processor simply executes the machine code instructions given to it (and if those instructions overwrite memory they were not supposed to, as expressed in a higher-level language, the processor does not care).
At a slightly different level, the CPU might issue a trap if it cannot carry out an operation (e.g. the operation specifies a memory location that does not exist), or if the code tries to do something illegal (e.g. division by zero, an opcode the processor does not understand, or an unaligned data access).
The addition itself is carried out inside the processor, so whatever you do, the add operation will be done inside the CPU (in the ALU, more precisely).
The overflow flag would be set in case of overflow, and the result will still be in a register; it is then copied back to your memory location without any risk of corrupting adjacent memory.
This is how the code would (sort of) be translated in asm:
mov ax, ptr [memory location of a]
mov bx, ptr [memory location of b]
add ax,bx
mov ptr [memory location of c], ax
so as you can see, c would only hold what was in ax (which is of a known and fixed size) no matter if an overflow occurred or not
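To make the same point at the C level, here is a small self-contained check (mine, not from the original answers) covering scenarios 2 and 4: the sum is reduced or truncated on the way into the narrower destination, and nothing next to c or e.c is touched.

#include <cstdio>

int main()
{
    // Scenario 2/3: 32-bit unsigned arithmetic stored into a 16-bit variable
    unsigned int a = 0xFFFF, b = 0xFFFF;
    unsigned short c = a + b;            // 0x1FFFE converted to unsigned short -> 0xFFFE
    std::printf("c   = 0x%X\n", (unsigned)c);

    // Scenario 4: two 16-bit bit-fields sharing one 32-bit word
    struct Fields { unsigned int c : 16; unsigned int d : 16; };
    Fields e = {0, 0x1234};
    e.c = a + b;                         // truncated to 16 bits; e.d is left alone
    std::printf("e.c = 0x%X, e.d = 0x%X\n", (unsigned)e.c, (unsigned)e.d);  // 0xFFFE, 0x1234
    return 0;
}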
In C, the behavior of overflows with unsigned types is well-defined. Any implementation in which overflow causes any result other than what the standard predicts is non-compliant.
The behavior of overflows with signed types is undefined. While the most common effects of an overflow would either be the assignment of some erroneous value or an outright crash, nothing in the C standard guarantees that the processor won't trigger an overflow fault in a way that the compiled code tries to recover from, but which for some reason trashes the contents of a stack variable or a register. It's also possible that an overflow could cause a flag to get set in a way that code doesn't expect, and that such a flag might cause erroneous behavior on future computations.
Other languages may have different semantics for what happens when an overflow occurs.
note: I've seen processors which trap on overflow, processors where unexpected traps that happen at the same time as an external interrupt may cause data corruption, and processors where an unexpected unsigned overflow could cause a succeeding computation to be off by one. I don't know of any where a signed overflow would latch a flag that would interfere with subsequent calculations, but some DSP's have interesting overflow behaviors, so I wouldn't be surprised if one exists.

x86 equivalent for LWARX and STWCX

I'm looking for an equivalent of LWARX and STWCX (as found on the PowerPC processors) or a way to implement similar functionality on the x86 platform. Also, where would be the best place to find out about such things (i.e. good articles/web sites/forums for lock/wait-free programming)?
Edit
I think I might need to give more details as it is being assumed that I'm just looking for a CAS (compare and swap) operation. What I'm trying to do is implement a lock-free reference counting system with smart pointers that can be accessed and changed by multiple threads. I basically need a way to implement the following function on an x86 processor.
int* IncrementAndRetrieve(int **ptr)
{
    int val;
    int *pval;
    do
    {
        // fetch the pointer to the value
        pval = *ptr;

        // if it's NULL, then just return NULL; the smart pointer
        // will then become NULL as well
        if(pval == NULL)
            return NULL;

        // Grab the reference count
        val = lwarx(pval);

        // make sure the pointer we grabbed the value from
        // is still the same one referred to by 'ptr'
        if(pval != *ptr)
            continue;

        // Increment the reference count via 'stwcx'; if any other thread
        // has done anything that could potentially break this, it should
        // fail and try again
    } while(!stwcx(pval, val + 1));
    return pval;
}
I really need something that mimics LWARX and STWCX fairly accurately to pull this off (I can't figure out a way to do this with the CompareExchange, swap or add functions I've so far found for the x86).
Thanks
As Michael mentioned, what you're probably looking for is the cmpxchg instruction.
It's important to point out though that the PPC method of accomplishing this is known as Load Link / Store Conditional (LL/SC), while the x86 architecture uses Compare And Swap (CAS). LL/SC has stronger semantics than CAS in that any change to the value at the conditioned address will cause the store to fail, even if the other change replaces the value with the same value that the load was conditioned on. CAS, on the other hand, would succeed in this case. This is known as the ABA problem (see the CAS link for more info).
If you need the stronger semantics on the x86 architecture, you can approximate them by using x86's double-width compare-and-swap (DWCAS) instruction, cmpxchg8b, or cmpxchg16b under x86_64. This lets you atomically swap two consecutive 'natural-sized' words at once, instead of just the usual one. The basic idea is that one of the two words contains the value of interest and the other one contains an always-incrementing 'mutation count'. Although this does not technically eliminate the problem, the likelihood of the mutation counter wrapping between attempts is so low that it's a reasonable substitute for most purposes.
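As an illustration only (the types and names here are mine, not a standard API), the pointer-plus-mutation-count idea can be expressed with std::atomic on x86_64; when the 16-byte type is lock-free (e.g. compiled with -mcx16), the compare-exchange maps onto cmpxchg16b.

#include <atomic>
#include <cstdint>

struct VersionedPtr
{
    int*     ptr;       // the value of interest
    uint64_t version;   // always-incrementing mutation count
};

std::atomic<VersionedPtr> gHead{VersionedPtr{nullptr, 0}};

void StoreWithVersion(int* newPtr)
{
    VersionedPtr expected = gHead.load();
    VersionedPtr desired;
    do
    {
        desired.ptr     = newPtr;
        desired.version = expected.version + 1;   // every successful store bumps the count
    } while (!gHead.compare_exchange_weak(expected, desired));
}

A reader that reloads gHead and sees the same version knows no intervening update happened, barring the unlikely counter wrap-around mentioned above.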
x86 does not directly support "optimistic concurrency" like PPC does -- rather, x86's support for concurrency is based on a "lock prefix", see here. (Some so-called "atomic" instructions such as XCHG actually get their atomicity by intrinsically asserting the LOCK prefix, whether the assembly code programmer has actually coded it or not). It's not exactly "bomb-proof", to put it diplomatically (indeed, it's rather accident-prone, I would say;-).
You're probably looking for the cmpxchg family of instructions.
You'll need to precede these with a lock prefix to get the equivalent behaviour.
Have a look here for a quick overview of what's available.
You'll likely end up with something similar to this:
mov ecx,dword ptr [esp+4]
mov edx,dword ptr [esp+8]
mov eax,dword ptr [esp+12]
lock cmpxchg dword ptr [ecx],edx
ret 12
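In compiler-level code, that sequence is usually reached through an intrinsic or std::atomic rather than hand-written assembly; a hedged sketch:

#include <atomic>

// compare_exchange_strong on a std::atomic<int> compiles to lock cmpxchg on x86,
// which is essentially the sequence shown above.
bool CompareAndSwap(std::atomic<int>& target, int expected, int desired)
{
    return target.compare_exchange_strong(expected, desired);
}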
You should read this paper...
Edit
In response to the updated question, are you looking to do something like the Boost shared_ptr? If so, have a look at that code and the files in that directory - they'll definitely get you started.
If you are on 64 bits and limit yourself to, say, 1 TB of heap, you can pack the counter into the 24 unused top bits. If your pointers are word-aligned, the bottom few bits are also available.
int* IncrementAndRetrieve(int **ptr)
{
    uintptr_t val;
    int *unpacked;
    do
    {
        val = (uintptr_t)*ptr;   // the packed word: pointer bits plus counter bits
        unpacked = unpack(val);  // pseudocode: strips the counter bits
        if(unpacked == NULL)
            return NULL;
        // pointer is on the bottom; the CAS operates on the packed word at *ptr
    } while(!cas((uintptr_t*)ptr, val, val + 1));
    return unpacked;
}
I don't know whether LWARX and STWCX invalidate the whole cache line; CAS and DCAS do. This means that unless you are willing to throw away a lot of memory (64 bytes for each independent "lockable" pointer), you won't see much improvement if you are really pushing your software under stress. The best results I've seen so far were when people consciously sacrificed 64 bytes, planned their structures around it (packing stuff that won't be subject to contention), kept everything aligned on 64-byte boundaries, and used explicit read and write data barriers. Cache-line invalidation can cost approximately 20 to 100 cycles, making it a bigger real performance issue than just lock avoidance.
Also, you'd have to plan a different memory allocation strategy to manage either controlled leaking (if you can partition code into logical "request processing" - one request "leaks" and then releases all of its memory in bulk at the end) or detailed allocation management, so that one structure under contention never receives memory released by elements of the same structure/collection (to prevent ABA). Some of that can be very counter-intuitive, but it's either that or paying the price for GC.
What you are trying to do will not work the way you expect. What you implemented above can be done with the InterlockedIncrement function (Win32 function; assembly: XADD).
The reason that your code does not do what you think it does is that another thread can still change the value between the second read of *ptr and stwcx without invalidating the stwcx.
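A hedged sketch of that simpler route (signature adapted from the question's int** to an atomic counter): once the reference count is an atomic integer, the increment itself is a single fetch_add, which x86 implements with lock xadd, the instruction the answer names.

#include <atomic>

std::atomic<int>* IncrementAndRetrieve(std::atomic<int>** ptr)
{
    std::atomic<int>* pval = *ptr;   // note: this read of *ptr is still the racy part
    if (pval == nullptr)
        return nullptr;              // the smart pointer becomes NULL as well
    pval->fetch_add(1);              // atomic increment; no retry loop needed
    return pval;
}

As the answer notes, the remaining difficulty is the window between reading *ptr and touching the count, which the increment alone does not close.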

Memory allocation in VC++

I am working with VC++ 2005
I have overloaded the new and delete operators.
All is fine.
My question is related to some of the magic VC++ is adding to the memory allocation.
When I use the C++ call:
data = new _T [size];
The return (for example) from the global memory allocation is 071f2ea0 but data is set to 071f2ea4
When overloaded delete [] is called the 071f2ea0 address is passed in.
Another note is when using something like:
data = new _T;
both data and the return from the global memory allocation are the same.
I am pretty sure Microsoft is adding something at the head of the memory allocation to use for bookkeeping. My question is: does anyone know the rules Microsoft is using?
I want to pass the value of "data" into some memory-testing routines, so I need to get back to the original memory reference from the global allocation call.
I could assume the 4 bytes are an index, but I wanted to make sure. It could easily be a flag plus an offset, a count, an index into some other table, or just alignment to the CPU's cache line. I need to find out for sure, and I have not been able to find any references that outline the details.
I also think that on one of my other runs the offset was 6 bytes, not 4.
The 4 bytes most likely contain the total number of objects in the allocation, so that delete [] will be able to loop over all the objects in the array, calling their destructors.
To get back the original address, you could keep a lookup-table keyed on address / 16, which stores the base address and length. This will enable you to find the original allocation. You need to ensure that your allocation+4 doesn't cross a 16-byte boundary, however.
EDIT:
I went ahead and wrote a test program that creates 50 objects with a destructor via new, and calls delete []. The destructor just calls printf, so it won't be optimized away.
#include <stdio.h>

class MySimpleClass
{
public:
    ~MySimpleClass() { printf("Hi\n"); }
};

int main()
{
    MySimpleClass* arr = new MySimpleClass[50];
    delete [] arr;
    return 0;
}
The partial disassembly is below, cleaned up to be a bit more legible. As you can see, VC++ stores the array count in the initial 4 bytes.
; Allocation
mov ecx, 36h ; Size of allocation
call scratch!operator new
test rax,rax ; Don't write 4 bytes if NULL.
je scratch!main+0x25
mov dword ptr [rax],32h ; Store 50 in first 4 bytes
add rax,4 ; Increment pointer by 4
; Free
lea rdi,[rax-4] ; Grab previous 4 bytes of allocation
mov ebx,dword ptr [rdi] ; Store in loop counter
jmp StartLoop ; Jump to beginning of loop
Loop:
lea rcx,[scratch!`string' (00000000`ffe11170)] ; 1st param to printf
call qword ptr [scratch!_imp_printf] ; Destructor
StartLoop:
sub ebx,1 ; Decrement loop counter
jns Loop ; Loop while not negative
This bookkeeping is distinct from the bookkeeping that malloc or HeapAlloc does. Those allocators don't care about objects and arrays; they only see blobs of memory with a total size. VC++ can't query the heap manager for the total size of the allocation, because that would mean the heap manager would be bound to allocate a block of exactly the size you requested. The heap manager shouldn't have this limitation - if you ask for 240 bytes to hold twenty 12-byte objects, it should be free to return a 256-byte block that it has immediately available.
The 4-byte offset is for the number of elements. When delete[] is invoked, it needs to know the exact number of elements so it can call the destructors for exactly that many objects.
Since the memory allocator could have returned a bigger block than necessary for storing all the objects, the only sure way to know the number of elements is to store it at the beginning of the block.
For sure, memory allocation does need some bookkeeping info stored together with the actual memory.
In addition to that, heap-allocated blocks will also be 'decorated' with some magic values, used to easily detect buffer overruns, double deletion, and so on. Take a look at this CodeGuru site for more info. If you want the full details of heap debugging, take a look at the MSDN documentation.
The final answer is:
malloc (which new uses under the hood) allocates more memory than requested so that the system can manage the allocation. That management/debug information is not what I am interested in. What I am interested in is the difference between the pointer returned by malloc and the pointer produced by a C++ array allocation.
What I have been able to determine is that C++ sometimes adds/uses 4 additional bytes to keep a count of the objects being allocated. The twist is that these 4 bytes are only added if the objects being allocated need to be destructed.
So given:
void* _cdecl operator new(size_t size)
{
    void *ptr = malloc(size);
    return(ptr);
}
for the case of:
object * data = new object [size]
data will be ptr plus 4 bytes (assuming object requires a destructor)
while:
char *data = new char [size]
data will equal ptr because no destructor is required.
Again, I am not interested in the memory tracking malloc adds to manage memory.
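A small, hedged sketch (names mine, error handling omitted) that reproduces the observation by overloading the array forms and printing both pointers; with MSVC the type with a destructor typically shows the extra count prefix, while the trivially destructible one does not.

#include <cstdio>
#include <cstdlib>
#include <new>

void* operator new[](std::size_t size)
{
    void* ptr = std::malloc(size);
    std::printf("operator new[](%zu) -> %p\n", size, ptr);
    return ptr;
}

void operator delete[](void* ptr) noexcept
{
    std::printf("operator delete[](%p)\n", ptr);
    std::free(ptr);
}

struct NeedsDtor { char c; ~NeedsDtor() {} };
struct NoDtor    { char c; };

int main()
{
    NeedsDtor* a = new NeedsDtor[10];   // 'a' is usually offset past the stored count
    std::printf("new NeedsDtor[10] -> %p\n", static_cast<void*>(a));
    delete[] a;

    NoDtor* b = new NoDtor[10];         // no destructor, so no count is stored
    std::printf("new NoDtor[10]    -> %p\n", static_cast<void*>(b));
    delete[] b;
    return 0;
}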
