Custom allocators vs. promises and packaged tasks - visual-c++

Are the allocator-taking constructors of standard promise/packaged_task supposed to use the allocator for just the state object itself, or should this be guaranteed for all (internal) related objects?
[futures.promise]: "...allocate memory for the shared state"
[futures.task.members]: "...allocate memory needed to store the internal data structures"
In particular, are the behaviours below bugs or features?
(MSVC 2013.4, Boost 1.57, short_alloc.h by Howard Hinnant)
Example 1
#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include "short_alloc.h"
#include <cstdio>
void *operator new( std::size_t s ) {
    printf( "alloc %Iu\n", s );
    return malloc( s );
}

void operator delete( void *p ) {
    free( p );
}

int main() {
    const int N = 1024;
    arena< N > a;
    short_alloc< int, N > al( a );

    printf( "[promise]\n" );
    auto p = boost::promise< int >( std::allocator_arg, al );
    p.set_value( 123 );

    printf( "[packaged_task]\n" );
    auto q = boost::packaged_task< int() >( std::allocator_arg, al, [] { return 123; } );
    q();

    return 0;
}
Output:
...
[promise]
alloc 8
alloc 12
alloc 8
alloc 24
[packaged_task]
alloc 8
alloc 12
alloc 8
alloc 24
FWIW, the output with the default allocator is
...
[promise]
alloc 144
alloc 8
alloc 12
alloc 8
alloc 16
[packaged_task]
alloc 160
alloc 8
alloc 12
alloc 8
alloc 16
Example 2
AFAICT, MSVC's std::mutex does an unavoidable heap allocation, and therefore, so does std::promise which uses it. Is this a conformant behaviour?

N.B. there are a couple of issues with your code. In C++14, if you replace operator delete(void*) then you must also replace operator delete(void*, std::size_t). You can use a feature-test macro to see if the compiler requires that:
void operator delete( void *p ) {
    free( p );
}

#if __cpp_sized_deallocation
// Also define the sized-deallocation function:
void operator delete( void *p, std::size_t ) {
    free( p );
}
#endif
Secondly, the standard printf format specifier for size_t is %zu, not %u; the %Iu used above is an MSVC extension, so portable code should use %zu.
AFAICT, MSVC's std::mutex does an unavoidable heap allocation, and therefore, so does std::promise which uses it. Is this a conformant behaviour?
It's certainly questionable whether std::mutex should use dynamic allocation. Its constructor can't, because it must be constexpr. It could delay the allocation until the first call to lock() or try_lock() but lock() doesn't list failure to acquire resources as a valid error condition, and it means try_lock() could fail to lock an uncontended mutex if it can't allocate the resources it needs. That's allowed, if you squint at it, but is not ideal.
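To make the constraint concrete: because std::mutex's default constructor is constexpr, a namespace-scope mutex is constant-initialized before any dynamic initialization runs, so the constructor never gets a chance to call new or malloc. A minimal illustration (the constinit line is C++20-only, shown only as an aside):
#include <mutex>

std::mutex g_m;                // constant-initialized; no constructor code runs at startup
// constinit std::mutex g_m2;  // C++20: fails to compile if constant initialization were impossible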
But regarding your main question, as you quoted, the standard only says this for promise:
The second constructor uses the allocator a to allocate memory for the shared state.
That doesn't say anything about other resources needed by the promise. It's reasonable to assume that any synchronization objects such as mutexes are part of the shared state, not the promise, but that wording doesn't require the allocator to be used for memory required by the shared state's members, only for the memory of the shared state itself.
For packaged_task the wording is broader and implies that all internal state should use the allocator, although it could be argued that it only means the allocator is used to obtain memory for the stored task and the shared state, and that, again, members of the shared state don't have to use the allocator.
In summary, I don't think the standard is 100% clear whether the MSVC implementation is allowed, but IMHO an implementation that does not need additional memory from malloc or new is better (and that's how the libstdc++ <future> implementation works).
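If you want to see for yourself which allocations go through the allocator you supply and which fall through to global operator new, a small counting allocator works with plain std::promise too. This is only an illustrative sketch (the CountingAlloc name and the counter are mine, not part of any library); combined with the global operator new replacement from the question it makes it easy to match up the two kinds of allocations.
#include <cstddef>
#include <cstdio>
#include <future>
#include <memory>

static std::size_t g_alloc_calls = 0;   // allocations made through the supplied allocator

template <class T>
struct CountingAlloc {
    using value_type = T;
    CountingAlloc() = default;
    template <class U> CountingAlloc(const CountingAlloc<U>&) {}

    T* allocate(std::size_t n) {
        ++g_alloc_calls;
        std::printf("allocator: %zu bytes\n", n * sizeof(T));
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};
template <class T, class U>
bool operator==(const CountingAlloc<T>&, const CountingAlloc<U>&) { return true; }
template <class T, class U>
bool operator!=(const CountingAlloc<T>&, const CountingAlloc<U>&) { return false; }

int main() {
    std::promise<int> p(std::allocator_arg, CountingAlloc<int>{});  // shared state should come from the allocator
    p.set_value(123);
    std::printf("allocator was used %zu time(s)\n", g_alloc_calls);
}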


Where does "Freeing unused kernel memory" come from?

I often see Freeing unused kernel memory: xxxK (......) from dmesg, but I can never find this log from kernel source code with the help of grep/rg.
Where does it come from?
That line of text does not exist as a single, complete string, hence your failure to grep it.
This all gets rolling when free_initmem() in init/main.c calls free_initmem_default().
The line in question originates from free_initmem_default() in include/linux/mm.h:
/*
 * Default method to free all the __init memory into the buddy system.
 * The freed pages will be poisoned with pattern "poison" if it's within
 * range [0, UCHAR_MAX].
 * Return pages freed into the buddy system.
 */
static inline unsigned long free_initmem_default(int poison)
{
        extern char __init_begin[], __init_end[];

        return free_reserved_area(&__init_begin, &__init_end,
                                  poison, "unused kernel");
}
The rest of that text is from free_reserved_area() in mm/page_alloc.c:
unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
{
        void *pos;
        unsigned long pages = 0;
        ...
        if (pages && s)
                pr_info("Freeing %s memory: %ldK\n",
                        s, pages << (PAGE_SHIFT - 10));

        return pages;
}
(Code excerpts from v5.2)
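A trivial way to convince yourself why grepping for the full sentence fails: the visible line is assembled at run time from the format string in free_reserved_area() and the "unused kernel" argument passed in by free_initmem_default(). A stand-alone illustration (not kernel code; the size is made up for the demo):
#include <stdio.h>

int main(void)
{
        const char *s = "unused kernel";   /* argument supplied by free_initmem_default() */
        long kbytes = 108;                 /* made-up number, just for the demo */
        printf("Freeing %s memory: %ldK\n", s, kbytes);
        return 0;
}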
From my answer here:
Some functions in the kernel source code are marked with __init because they run only once during initialization. This tells the compiler to place the function in a special section; the linker collects all such functions and puts them together near the end of the final binary image so they can be discarded later.
Example method signature:
static int __init clk_disable_unused(void)
{
// some code
}
When the kernel starts, this code runs only once during initialization. After it runs, the kernel can free this memory to reuse it, and you will see the kernel message:
Freeing unused kernel memory: 108k freed

Handling dynamic allocation in Frama-C

I am trying to use Frama-C to verify safety properties of C code that includes dynamic memory allocation. The current version of the ACSL specification language (1.8) seems to be able to express a lot about dynamically allocated memory. However, most of that stuff is not yet implemented in Frama-C Neon.
Suppose we take the following snippet of code:
#include <stdlib.h>
/*@ requires \valid(p) && \valid(q) && \separated(p, q);
  @ ensures \valid(q);
  @*/
void test(char *p, char *q) {
    free(p);
}

int main(void) {
    char *p = (char *) malloc(10);
    char *q = (char *) malloc(10);
    test(p, q);
    return 0;
}
So, main allocates two blocks of memory, and passes pointers to them to function test. Test frees the block pointed to by p, but not the block pointed to by q. Suppose I want to prove that at the end of test, pointer q is still valid. How would I proceed?
It seems that I have to model the heap on my own: axiomatize a few predicates that talk about the heap, and use them to specify malloc and free, mimicking the non-implemented parts of ACSL. What would be the simplest approach to do that, so that I can verify the example above?

Why accessing pthread keys' sequence number is not synchronized in glibc's NPTL implementation?

Recently, while looking into how thread-local storage is implemented in glibc, I found the following code, which implements the API pthread_key_create():
int
__pthread_key_create (key, destr)
     pthread_key_t *key;
     void (*destr) (void *);
{
  /* Find a slot in __pthread_kyes which is unused.  */
  for (size_t cnt = 0; cnt < PTHREAD_KEYS_MAX; ++cnt)
    {
      uintptr_t seq = __pthread_keys[cnt].seq;

      if (KEY_UNUSED (seq) && KEY_USABLE (seq)
          /* We found an unused slot.  Try to allocate it.  */
          && ! atomic_compare_and_exchange_bool_acq (&__pthread_keys[cnt].seq,
                                                     seq + 1, seq))
        {
          /* Remember the destructor.  */
          __pthread_keys[cnt].destr = destr;

          /* Return the key to the caller.  */
          *key = cnt;

          /* The call succeeded.  */
          return 0;
        }
    }

  return EAGAIN;
}
__pthread_keys is a global array accessed by all threads. I don't understand why the read of its member seq is not synchronized as in the following:
uintptr_t seq = __pthread_keys[cnt].seq;
although it is synchronized when modified later.
FYI, __pthread_keys is an array of type struct pthread_key_struct, which is defined as follows:
/* Thread-local data handling. */
struct pthread_key_struct
{
  /* Sequence numbers.  Even numbers indicated vacant entries.  Note
     that zero is even.  We use uintptr_t to not require padding on
     32- and 64-bit machines.  On 64-bit machines it helps to avoid
     wrapping, too.  */
  uintptr_t seq;

  /* Destructor for the data.  */
  void (*destr) (void *);
};
Thanks in advance.
In this case, the loop can avoid an expensive lock acquisition. The atomic compare and swap operation done later (atomic_compare_and_exchange_bool_acq) will make sure only one thread can successfully increment the sequence value and return the key to the caller. Other threads reading the same value in the first step will keep looping since the CAS can only succeed for a single thread.
This works because the sequence value alternates between even (empty) and odd (occupied). Incrementing the value to odd prevents other threads from acquiring the slot.
Just reading the value typically takes fewer cycles than the CAS instruction, so it makes sense to peek at the value before attempting the CAS.
There are many wait-free and lock-free algorithms that take advantage of the CAS instruction to achieve low-overhead synchronization.
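The same "peek cheaply, then CAS" pattern is easy to reproduce outside glibc. Below is a small illustrative sketch in C++ (names like kSlots and allocate_slot are mine, not glibc's): slots with even sequence numbers are free, odd ones are taken, and the relaxed load is just an inexpensive filter before the acquire CAS that actually claims the slot. If the relaxed load sees a stale value, the worst that happens is a failed CAS or a wasted loop iteration, which is exactly why the unsynchronized read is safe here.
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <cstdio>

constexpr std::size_t kSlots = 128;          // stand-in for PTHREAD_KEYS_MAX
std::atomic<std::uintptr_t> g_seq[kSlots];   // static storage: zero-initialized, so all even (free)

// Returns a slot index, or -1 if every slot is taken (glibc returns EAGAIN instead).
int allocate_slot() {
    for (std::size_t i = 0; i < kSlots; ++i) {
        std::uintptr_t s = g_seq[i].load(std::memory_order_relaxed);   // cheap unsynchronized peek
        if (s % 2 == 0 &&                                              // looks free...
            g_seq[i].compare_exchange_strong(s, s + 1,
                                             std::memory_order_acquire,
                                             std::memory_order_relaxed)) {
            return static_cast<int>(i);   // we won the race; slot is now odd (occupied)
        }
        // Otherwise the slot was occupied, or another thread won the CAS; keep scanning.
    }
    return -1;
}

int main() {
    std::printf("got slot %d\n", allocate_slot());
}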

Is clock_gettime() adequate for submicrosecond timing?

I need a high-resolution timer for the embedded profiler in the Linux build of our application. Our profiler measures scopes as small as individual functions, so it needs a timer precision of better than 25 nanoseconds.
Previously our implementation used inline assembly and the rdtsc operation to query the high-frequency timer from the CPU directly, but this is problematic and requires frequent recalibration.
So I tried using the clock_gettime function instead to query CLOCK_PROCESS_CPUTIME_ID. The docs allege this gives me nanosecond timing, but I found that the overhead of a single call to clock_gettime() was over 250ns. That makes it impossible to time events 100ns long, and having such high overhead on the timer function seriously drags down app performance, distorting the profiles beyond value. (We have hundreds of thousands of profiling nodes per second.)
Is there a way to call clock_gettime() that has less than ¼μs overhead? Or is there some other way that I can reliably get the timestamp counter with <25ns overhead? Or am I stuck with using rdtsc?
Below is the code I used to time clock_gettime().
// calls gettimeofday() to return wall-clock time in seconds:
extern double Get_FloatTime();

enum { TESTRUNS = 1024*1024*4 };

// time the high-frequency timer against the wall clock
{
    double fa = Get_FloatTime();
    timespec spec;
    clock_getres( CLOCK_PROCESS_CPUTIME_ID, &spec );
    printf( "CLOCK_PROCESS_CPUTIME_ID resolution: %ld sec %ld nano\n",
            spec.tv_sec, spec.tv_nsec );

    for ( int i = 0 ; i < TESTRUNS ; ++i )
    {
        clock_gettime( CLOCK_PROCESS_CPUTIME_ID, &spec );
    }

    double fb = Get_FloatTime();
    printf( "clock_gettime %d iterations : %.6f msec %.3f microsec / call\n",
            TESTRUNS, ( fb - fa ) * 1000.0, (( fb - fa ) * 1000000.0) / TESTRUNS );
}
// and so on for CLOCK_MONOTONIC, CLOCK_REALTIME, CLOCK_THREAD_CPUTIME_ID.
Results:
CLOCK_PROCESS_CPUTIME_ID resolution: 0 sec 1 nano
clock_gettime 8388608 iterations : 3115.784947 msec 0.371 microsec / call
CLOCK_MONOTONIC resolution: 0 sec 1 nano
clock_gettime 8388608 iterations : 2505.122119 msec 0.299 microsec / call
CLOCK_REALTIME resolution: 0 sec 1 nano
clock_gettime 8388608 iterations : 2456.186031 msec 0.293 microsec / call
CLOCK_THREAD_CPUTIME_ID resolution: 0 sec 1 nano
clock_gettime 8388608 iterations : 2956.633930 msec 0.352 microsec / call
This is on a standard Ubuntu kernel. The app is a port of a Windows app (where our rdtsc inline assembly works just fine).
Addendum:
Does x86-64 GCC have some intrinsic equivalent to __rdtsc(), so I can at least avoid inline assembly?
No. You'll have to use platform-specific code to do it. On x86 and x86-64, you can use 'rdtsc' to read the Time Stamp Counter.
Just port the rdtsc assembly you're using.
__inline__ uint64_t rdtsc(void) {
    uint32_t lo, hi;
    __asm__ __volatile__ (   // serialize
        "xorl %%eax,%%eax \n cpuid"
        ::: "%rax", "%rbx", "%rcx", "%rdx");
    /* We cannot use "=A", since this would use %rax on x86_64 and
       return only the lower 32bits of the TSC */
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return (uint64_t)hi << 32 | lo;
}
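For what it's worth, newer GCC and Clang releases do expose an __rdtsc() intrinsic through <x86intrin.h> (MSVC has one in <intrin.h>), which lets you drop the hand-written asm for the read itself; you still need cpuid or __rdtscp if you want serialization. A minimal sketch, assuming a compiler recent enough to ship that header:
#include <cstdint>
#include <cstdio>
#include <x86intrin.h>   // GCC/Clang: provides __rdtsc()

int main() {
    std::uint64_t t0 = __rdtsc();
    // ... code being timed ...
    std::uint64_t t1 = __rdtsc();
    std::printf("elapsed cycles: %llu\n",
                static_cast<unsigned long long>(t1 - t0));
}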
This is what happens when you call the clock_gettime() function. Based on the clock you choose, it calls the corresponding helper (excerpt from the kernel's vclock_gettime.c):
int clock_gettime(clockid_t, struct __kernel_old_timespec *)
        __attribute__((weak, alias("__vdso_clock_gettime")));

notrace int
__vdso_clock_gettime_stick(clockid_t clock, struct __kernel_old_timespec *ts)
{
        struct vvar_data *vvd = get_vvar_data();

        switch (clock) {
        case CLOCK_REALTIME:
                if (unlikely(vvd->vclock_mode == VCLOCK_NONE))
                        break;
                return do_realtime_stick(vvd, ts);
        case CLOCK_MONOTONIC:
                if (unlikely(vvd->vclock_mode == VCLOCK_NONE))
                        break;
                return do_monotonic_stick(vvd, ts);
        case CLOCK_REALTIME_COARSE:
                return do_realtime_coarse(vvd, ts);
        case CLOCK_MONOTONIC_COARSE:
                return do_monotonic_coarse(vvd, ts);
        }
        /*
         * Unknown clock ID ? Fall back to the syscall.
         */
        return vdso_fallback_gettime(clock, ts);
}
CLOCK_MONOTONIC is better (though I use CLOCK_MONOTONIC_RAW), since it is not affected by NTP time adjustments.
This is how do_monotonic_stick() is implemented inside the kernel:
notrace static __always_inline int do_monotonic_stick(struct vvar_data *vvar,
                                                       struct __kernel_old_timespec *ts)
{
        unsigned long seq;
        u64 ns;

        do {
                seq = vvar_read_begin(vvar);
                ts->tv_sec = vvar->monotonic_time_sec;
                ns = vvar->monotonic_time_snsec;
                ns += vgetsns_stick(vvar);
                ns >>= vvar->clock.shift;
        } while (unlikely(vvar_read_retry(vvar, seq)));

        ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
        ts->tv_nsec = ns;

        return 0;
}
And the vgetsns() helper, which provides nanosecond resolution, is implemented as:
notrace static __always_inline u64 vgetsns(struct vvar_data *vvar)
{
        u64 v;
        u64 cycles;

        cycles = vread_tick();
        v = (cycles - vvar->clock.cycle_last) & vvar->clock.mask;
        return v * vvar->clock.mult;
}
where vread_tick() reads the cycle-counter register for the CPU (this particular excerpt is the SPARC version, reading %tick; the x86 version reads the TSC):
notrace static __always_inline u64 vread_tick(void)
{
        register unsigned long long ret asm("o4");

        __asm__ __volatile__("rd %%tick, %L0\n\t"
                             "srlx %L0, 32, %H0"
                             : "=r" (ret));
        return ret;
}
A single call to clock_gettime() takes around 20 to 100 nanoseconds. Reading the TSC with rdtsc and converting the cycles to time is always faster.
I have done some experiments with CLOCK_MONOTONIC_RAW here: Unexpected periodic behaviour of an ultra low latency hard real time multi threaded x86 code
I need a high-resolution timer for the embedded profiler in the Linux build of our application. Our profiler measures scopes as small as individual functions, so it needs a timer precision of better than 25 nanoseconds.
Have you considered oprofile or perf? You can use the performance counter hardware on your CPU to get profiling data without adding instrumentation to the code itself. You can see data per-function, or even per-line-of-code. The "only" drawback is that it won't measure wall clock time consumed, it will measure CPU time consumed, so it's not appropriate for all investigations.
Give clockid_t CLOCK_MONOTONIC_RAW a try?
CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
Similar to CLOCK_MONOTONIC, but provides access to a
raw hardware-based time that is not subject to NTP
adjustments or the incremental adjustments performed by
adjtime(3).
From Man7.org
It's hard to give a globally applicable answer because the hardware and software implementation will vary widely.
However, yes, most modern platforms will have a suitable clock_gettime call that is implemented purely in user-space using the VDSO mechanism, and will in my experience take 20 to 30 nanoseconds to complete (but see Wojciech's comment below about contention).
Internally, this is using rdtsc or rdtscp for the fine-grained portion of the time-keeping, plus adjustments to keep this in sync with wall-clock time (depending on the clock you choose) and a multiplication to convert from whatever units rdtsc has on your platform to nanoseconds.
Not all of the clocks offered by clock_gettime will implement this fast method, and it's not always obvious which ones do. Usually CLOCK_MONOTONIC is a good option, but you should test this on your own system.
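If you want to check the per-call overhead on your own machine, a few lines are enough. The sketch below times CLOCK_MONOTONIC against itself (the loop count and clock choice are arbitrary, the loop overhead is included just as in the original test, and very old glibc may need -lrt at link time):
#include <stdio.h>
#include <time.h>

int main()
{
    enum { N = 10 * 1000 * 1000 };
    timespec t0, t1, tmp;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; ++i)
        clock_gettime(CLOCK_MONOTONIC, &tmp);   // the call being measured
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per clock_gettime(CLOCK_MONOTONIC) call\n", ns / N);
    return 0;
}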
You are calling clock_gettime() with a control parameter, which means the API is branching through an if-else tree to see what kind of time you want. I know you can't avoid that with this call, but see if you can dig into the system code and call what the kernel is eventually calling directly. Also, I note that you are including the loop overhead (i++ and the conditional branch) in your measurement.

Why would a VC++ program that is storing 5MB of data consume 64MB of system memory?

I have been working on trying to figure out why my program is consuming so much system RAM. I'm loading a file from disk into a vector of structs of several dynamically allocated arrays. A 16MB file ends up consuming 280MB of system RAM according to task manager. The types in the file are mostly chars with some shorts and a few longs. There are 331,000 records in the file containing on average about 5 fields. I converted the vector to a struct and that reduced the memory to about 255MB but that still seems very high. With the vector taking up so much memory the program is running out of memory so I need to find a way to get the memory usage more reasonable.
I wrote a simple program to just stuff a vector (or array) with 1,000,000 char pointers. I would expect it to allocate 4+1 bytes for each giving 5MB of memory required for storage, but in fact it is using 64MB (array version) or 67MB (vector version). When the program first starts up it only consumes 400K so why is there an additional 59MB for array or 62MB for vectors being allocated? This extra memory seems to be for each container, so if I create a size_check2 and copy everything and run it the program uses up 135MB for 10MB worth of pointers and data.
Thanks in advance,
size_check.h
#pragma once
#include <vector>

class size_check
{
public:
    size_check(void);
    ~size_check(void);

    typedef unsigned long size_type;

    void stuff_me( unsigned int howMany );

private:
    size_type** package;
    // std::vector<size_type*> package;
    size_type* me;
};
size_check.cpp
#include "size_check.h"
size_check::size_check(void)
{
}
size_check::~size_check(void)
{
}
void size_check::stuff_me( unsigned int howMany )
{
package = new size_type*[howMany];
for( unsigned int i = 0; i < howMany; ++i )
{
size_type *me = new size_type;
*me = 33;
package[i] = me;
// package.push_back( me );
}
}
main.cpp
#include "size_check.h"
int main( int argc, char * argv[ ] )
{
const unsigned int buckets = 20;
const unsigned int size = 50000;
size_check* me[buckets];
for( unsigned int i = 0; i < buckets; ++i )
{
me[i] = new size_check();
me[i]->stuff_me( size );
}
printf( "done.\n" );
}
In my test using VS2010, a debug build had a working set size of 52,500KB, but a release build had a working set size of 20,944KB.
Debug builds will usually use more memory than optimized builds due to the debug heap manager doing things like creating memory fences.
In release builds, I suspect that the heap manager reserves more memory than you are actually using as a performance optimization.
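If you would rather read the working set from inside the program than from Task Manager, the Win32 psapi call below is one way to do it (a sketch only; link against psapi.lib):
#include <windows.h>
#include <psapi.h>
#include <cstdio>

// Prints the current and peak working set of this process, in KB.
void print_working_set()
{
    PROCESS_MEMORY_COUNTERS pmc = {};
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
    {
        printf("working set: %llu KB (peak %llu KB)\n",
               (unsigned long long)(pmc.WorkingSetSize / 1024),
               (unsigned long long)(pmc.PeakWorkingSetSize / 1024));
    }
}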
Memory Leak
package = new size_type[howMany]; // instantiate 50,000 size_type's

for( unsigned int i = 0; i < howMany; ++i )
{
    size_type *me = new size_type; // Leak: results in an extra 50k size_type's being instantiated
    *me = 33;
    package[i] = *me;              // Set a non-pointer to what is at the address of pointer "me"
                                   // Would package[i] = 33; not suffice?
}
Furthermore, make sure you've compiled in release mode
There might be a couple reasons why you're seeing such a large memory footprint from your test program. Inside your
void size_check::stuff_me( unsigned int howMany )
{
This method is always getting called with howMany = 50000.
package = new size_type[howMany];
Assuming this is on a 32-bit setup the above statement will allocate 50,000 * 4 bytes.
for( unsigned int i = 0; i < howMany; ++i )
{
size_type *me = new size_type;
The above will allocate new storage on each iteration of the loop. Since this loops 50,000 times and the allocations never get deleted, that effectively takes up another 50,000 * 4 bytes by loop completion.
*me = 33;
package[i] = *me;
}
}
Lastly, since stuff_me() gets called 20 times from main(), your program would have allocated at least ~8 MB upon completion. If this is on a 64-bit system then the footprint will likely double, since sizeof(long) == 8 bytes.
The increase in memory consumption could have something to do with the way VS implements dynamic allocation. For performance reasons, it's possible that, due to the multiple calls to new, your program is reserving extra memory so as to avoid hitting up the OS every time it needs more.
FYI, when I ran your test program on mingw-gcc 4.5.2, the memory consumption was ~20Mbytes -- much lower than what you were seeing but still a substantial amount. If I changed the stuff_me method to this:
void size_check::stuff_me( unsigned int howMany )
{
    package = new size_type[howMany];
    size_type *me = new size_type;

    for( unsigned int i = 0; i < howMany; ++i )
    {
        *me = 33;
        package[i] = *me;
    }
    delete me;
}
memory consumption goes down quite a bit, to roughly 4-5 MB.
I think I found the answer by delving into the new statement. In debug builds there are two items that are created when you do a new. One is _CrtMemBlockHeader which is 32 bytes in length. The other is noMansLand (a memory fence) with a size of 4 bytes which gives us an overhead of 36 bytes for each new. In my case each individual new for a char was costing me 37 bytes. In release builds the memory usage is reduced to about 1/2 but I can't tell exactly how much is allocated for each new as I can't get to the new/malloc routine.
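As a rough sanity check using those numbers: 1,000,000 small allocations at about 4 bytes of payload plus ~36 bytes of debug bookkeeping (and heap rounding on top) is on the order of 40-48 MB, plus roughly 4 MB for the twenty 50,000-entry pointer arrays themselves, which lands close to the 52,500 KB debug working set reported above.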
So my work around is to allocate a large block of memory to hold the file in memory. Then parse the memory image filling in a vector of pointers to the beginning of each of the records. Then on demand, I build a record from the memory image using the pointer to the beginning of the selected record. Doing this reduced the memory footprint to <25MB.
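A heavily simplified sketch of that approach (the file name, the RecordView type, and the newline-delimited record format here are illustrative placeholders, not the actual code):
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

// A record is just a view into the shared file image: no per-field allocation.
struct RecordView {
    const char *begin;     // points into the image buffer below
    std::size_t length;
};

int main() {
    std::ifstream in("records.dat", std::ios::binary);            // hypothetical input file
    std::vector<char> image((std::istreambuf_iterator<char>(in)), // one big allocation for the whole file
                            std::istreambuf_iterator<char>());

    std::vector<RecordView> index;
    std::size_t start = 0;
    // Pretend records are newline-delimited; real parsing would follow the actual file format.
    for (std::size_t i = 0; i < image.size(); ++i) {
        if (image[i] == '\n') {
            RecordView r = { image.data() + start, i - start };
            index.push_back(r);
            start = i + 1;
        }
    }
    std::printf("%lu records, %lu bytes of image\n",
                (unsigned long)index.size(), (unsigned long)image.size());
    return 0;
}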
Thanks for all your help and suggestions.
