Memory consumption in Racket - multithreading

Is there a simple way to measure a Racket program's memory use? I'm trying to run many programs in parallel and I want to make sure each gets enough RAM.

There are a few ways to track the memory used by Racket programs from within Racket itself.
current-memory-use returns an estimate of the number of bytes of allocated memory that is currently reachable.
dump-memory-stats prints a memory-usage report to the current error port. Exactly what it prints depends on your Racket installation.
vector-set-performance-stats! takes a mutable vector and fills it with a variety of runtime statistics for your program, including memory figures you can't get from current-memory-use.
There are also a few options that don't go through Racket at all. For example, the top command can show you how much memory your racket process is using. If you go this route, be careful to also account for the memory of any subprocesses the racket process may have spawned. The details of this approach also vary greatly between operating systems.

Related

GNU C++: how to declare that heap can be swapped out when inactive?

My code is built with GNU C++. It's heavily multi-threaded and allocates a fairly large amount of RAM. The problem is that I routinely have a dozen different versions of the code "running" at the same time - "running" in quotes because only one is actually running at any given time: all the others are suspended, with all of their threads suspended.
With that many copies of the code in RAM, I sometimes run out of memory (with disastrous consequences)... while my page file always stays at a measly 2-3% of use. I would like to make my code swappable to the page file, keeping space in actual RAM available for other programs as soon as I suspend it... but I have no idea how to do that with g++.
Any idea how to do that? Many thanks!!

How to run many instances of the same process in a resource-constrained environment without duplicating memory content

I observe that each ffmpeg instance doing audio decoding takes about 50 MB of memory. If I record 100 stations, that's 5 GB of RAM.
Now, they all use more or less the same amount of RAM, and I suspect they contain the same information over and over again because they are spawned as new processes rather than forked.
Is there a way to avoid this duplication?
I am using Ubuntu 20.04, x64
Now, they all use more or less the same amount of RAM, and I suspect they contain the same information over and over again because they are spawned as new processes rather than forked.
Have you considered that the processes may use about the same amount of RAM because they are performing roughly the same computation, with similar parameters?
Have you considered that whatever means you are using to measure memory usage may not distinguish between memory that is private to a process and memory that is already shared with other processes?
Is there a way to avoid this duplication?
Programs that rely on shared libraries already share those libraries' executable code among them, saving memory.
Of course, each program does need its own copy of any writable data belonging to the library, some of which may turn out to be unused by a particular client program, and programs typically have memory requirements separate from those of any libraries they use, too. Whatever amount of that 50 MB per process is in fact additive across processes is going to be from these sources. Possibly you could reduce the memory load from these by changing program parameters (or by changing programs), but there's no special way to run the same number of instances of the program you're running now, with the same options and inputs, to reduce the amount of memory they use.
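One way to test the measurement point above is to compare each process's Rss against its Pss (proportional set size, which divides shared pages among all the processes that share them). Below is a minimal sketch, assuming a Linux kernel recent enough (4.14+) to provide /proc/<pid>/smaps_rollup; the file name pss.c is purely illustrative.

    /* pss.c - compare Rss and Pss for one pid (Linux 4.14+, same user or root).
     * If Pss is much lower than Rss, a large part of the "50 MB" per process
     * is pages shared with the other instances and is not really duplicated. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }

        char path[64], line[256];
        snprintf(path, sizeof path, "/proc/%s/smaps_rollup", argv[1]);

        FILE *f = fopen(path, "r");
        if (!f) {
            perror(path);
            return 1;
        }

        /* Print the totals we care about; all values are reported in kB. */
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "Rss:", 4) == 0 ||
                strncmp(line, "Pss:", 4) == 0 ||
                strncmp(line, "Shared_Clean:", 13) == 0 ||
                strncmp(line, "Private_Dirty:", 14) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }

Summing Pss rather than Rss over the 100 ffmpeg pids gives a much more honest estimate of the aggregate footprint.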

Following memory allocation in gdb

Why is memory consumption jumping unpredictably as I step through a program in the gdb debugger? I'm trying to use gdb to find out why a program is using far more memory than it should, and it's not cooperating.
I step through the source code while monitoring process memory usage, but I can't find what line(s) allocate the memory for two reasons:
Reported memory usage only jumps up in increments of (usually, but not always exactly) 64 MB. I suspect I'm seeing the effects of some memory manager I don't know about which reserves 64 MB at a time and masks multiple smaller allocations.
The jump doesn't happen at a consistent location in the code. Not only does it occur on different lines during different gdb runs; it also sometimes happens in illogical places like the closing bracket of a (C++) function. Is it possible that gdb itself is affecting memory allocations?
Any ideas/suggestions for more effective tools to help me drill down to the code lines that are really responsible for these memory allocations?
Here's some relevant system info: I'm running x86_64-redhat-linux-gnu version 7.2-64.el6-5.2 on a virtual CentOS Linux machine under Windows. The program is built on a remote server via a complicated build script, so tracking down exactly what options were used at any point is itself a bit of a chore. I'm monitoring memory usage both with the top utility ("virt" or virtual memory column) and by reading the real-time monitoring file /proc/<pid>/status, and they agree. Since this program uses a large suite of third-party libraries, there may be one or more overridden malloc() functions involved somewhere that I don't know about--hunting them down is part of this task.
gdb, left to its own devices, will not affect the memory use of your program, though a run under gdb may differ from a standalone run for other reasons.
However, this also depends on the way you use gdb. If you are just setting simple breakpoints, stepping, and printing things, then you are ok. But sometimes, to evaluate an expression, gdb will allocate memory in the inferior. For example, if you have a breakpoint condition like strcmp(arg, "string") == 0, then gdb will allocate memory for that string constant. There are other cases like this as well.
This answer is in several parts because there were several things going on:
Valgrind with the Massif module (a memory profiler) was much more helpful than gdb for this problem. Sometimes a quick look with the debugger works, sometimes it doesn't. http://valgrind.org/docs/manual/ms-manual.html
top turned out to be a poor tool for this kind of memory profiling, because the virtual-memory ("virt") figure it reports was about 3x the actual heap memory usage. Virtual memory is mapped and made available by the kernel when a process asks for a memory block, but it isn't necessarily backed by used memory; the underlying system call is mmap(). I still don't know how to check the block size. top can only tell you what the kernel knows about your memory consumption, which isn't enough to be helpful here. Don't use it (or the memory files under /proc/) for detailed memory profiling; one common source of this virt-versus-heap gap is sketched after this answer.
The memory allocation that appeared when stepping out of a function was caused by autolocks - a thread-lock class whose destructor releases the lock when it goes out of scope. As soon as the lock is released, a different thread goes into action and allocates some memory, leaving the operator (me) mystified. The non-repeatability is probably because some threads were waiting on external resources such as Internet connections.
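One plausible explanation for the 64 MB jumps described in the question is glibc's per-thread malloc arenas: on 64-bit systems each additional arena reserves a 64 MB heap of address space up front, which inflates the virt column without any corresponding heap use. That is only a hypothesis about this particular program, but it is easy to test with a small glibc-specific sketch like the one below (mallopt's M_ARENA_MAX and malloc_stats() are glibc extensions).

    /* arena_test.c - if the 64 MB virtual-size jumps disappear when the arena
     * count is capped, they were per-thread glibc malloc arenas, not real
     * allocations.  Build with: gcc -pthread arena_test.c */
    #include <malloc.h>
    #include <pthread.h>
    #include <stdlib.h>

    static void *worker(void *arg)
    {
        (void)arg;
        /* A small allocation from a new thread is typically enough to make
         * glibc attach a fresh arena (and map 64 MB of address space for it). */
        void *p = malloc(1024);
        free(p);
        return NULL;
    }

    int main(void)
    {
        mallopt(M_ARENA_MAX, 1);   /* comment this out and compare VIRT in top */

        pthread_t t[8];
        for (int i = 0; i < 8; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 8; i++)
            pthread_join(t[i], NULL);

        malloc_stats();            /* prints per-arena usage to stderr */
        return 0;
    }

On many glibc versions the same cap can be applied to an existing binary with the MALLOC_ARENA_MAX environment variable, which avoids recompiling anything.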

Memory Debugging

I am currently analyzing a C++ application and its memory consumption. Checking the memory consumption of the process before and after a certain function call is possible. However, it seems that, for technical reasons or for efficiency, the OS (Linux) assigns not just the requested number of bytes but always a few more, which the application can consume later. This makes it hard to analyze the memory behavior of the application.
Is there a workaround? Can one switch Linux to a mode where it assigns just the required number of bytes/pages?
If you use malloc/new, the allocator will always allocate a few more bytes than you requested, since it needs some room for its own housekeeping and may round the size up for alignment. The number of extra bytes is implementation-dependent.
You could consider using tools such as gperftools (from Google) to monitor the memory used.
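To see the first point concretely, glibc provides malloc_usable_size(), which reports how many bytes a returned block can actually hold; this is usually a little more than what was requested, and the allocator's bookkeeping header comes on top of that. A small glibc-specific sketch:

    /* usable.c - show that malloc hands back slightly larger blocks than
     * requested (malloc_usable_size() is a glibc extension in <malloc.h>). */
    #include <malloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t requests[] = { 1, 13, 24, 100, 4096 };

        for (size_t i = 0; i < sizeof requests / sizeof requests[0]; i++) {
            void *p = malloc(requests[i]);
            printf("requested %5zu bytes, usable %5zu bytes\n",
                   requests[i], malloc_usable_size(p));
            free(p);
        }
        return 0;
    }

On a typical 64-bit glibc the 1-byte request already reports around 24 usable bytes, which is part of why before/after process-level measurements never line up exactly with the bytes you asked for.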
I wanted to check a process for memory leaks some years ago.
What I did was the following: I wrote a very small debugger (it is easier than it sounds) that simply set breakpoints on malloc(), free(), mmap(), ... and similar functions (I did that under Windows, but under Linux it is simpler - I did it on Linux for another purpose!).
Whenever a breakpoint was reached I logged the function arguments and continued program execution...
By processing the log file (semi-automatically) I could find memory leaks.
Disadvantage: It is not possible to debug the program using another debugger in parallel.
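If relinking the program is an option, a lighter-weight variant of the same idea (log every allocation, then post-process the log) is GNU ld's --wrap feature rather than a hand-written debugger. This is not what the answer above did, just a related sketch; note that it only intercepts calls made from the objects you relink, not calls made inside shared libraries.

    /* mwrap.c - log allocations via link-time wrapping (GNU ld).
     * Build: gcc -o prog main.c mwrap.c -Wl,--wrap=malloc -Wl,--wrap=free
     * Every malloc/free call from the wrapped objects is logged to stderr;
     * post-process the log to pair allocations with frees. */
    #include <stdio.h>
    #include <stddef.h>

    void *__real_malloc(size_t size);
    void  __real_free(void *ptr);

    void *__wrap_malloc(size_t size)
    {
        void *p = __real_malloc(size);
        fprintf(stderr, "malloc %zu -> %p\n", size, p);
        return p;
    }

    void __wrap_free(void *ptr)
    {
        fprintf(stderr, "free %p\n", ptr);
        __real_free(ptr);
    }

Each additional function you want to log (calloc, realloc, and so on) needs its own -Wl,--wrap= option and wrapper.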

Preallocate memory for a program in Linux before it gets started

I have a program that repeatedly solves large systems of linear equations using Cholesky decomposition. A characteristic of the problem is that I sometimes need to store the complete factorisation, which can exceed about 20 GB of memory. The factorisation happens inside a library that I call. Furthermore, the matrix and the resulting factorisation change quite frequently, and with them the memory requirements.
I am not the only person to use this compute-node. Therefore, is there a way to start the program under Linux and preallocate free memory for the process?
Something like: $: prealloc -m 25G ./program
I'll stick my neck out and say that I don't think that there is such a way under Linux. I think that the philosophy of Linux (and every other multi-tasking o/s I've used or heard of) is to provide the programmer (and the program) with the illusion that they have the whole of the computer's memory to play with and to make it very difficult indeed for a programmer to interfere with the o/s.
Instead, I think that you should plan to modify your program to grab the memory it will (or may) require when it starts up, that is, do the memory management yourself in whatever language you've chosen. How easy this might be for you, considering the calls into a library, I don't know.
I've never heard of such a way. Usually it would be bad for other users on the node if one program went ahead and hogged all available memory. It's not good practice.
But opinions aside, I would probably write my program in such a way that it acts like a small environment that is able to make multiple runs of the routine in question without ending. It would allocate lots of memory on startup, then wait for user commands (through a minimal shell) and make the runs requested with the allocated memory pool. It would hold on to the pool until the user requests termination.
Of course this requires you to have an interactive session on the node, which you may not have.
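Both answers come down to grabbing the memory up front inside the program itself. A minimal sketch of what that might look like on 64-bit Linux follows; the 25 GB figure and the reserve_pool helper are purely illustrative, and the mlock() call is optional (it needs a sufficient RLIMIT_MEMLOCK or CAP_IPC_LOCK).

    /* prealloc.c - reserve a large pool at startup and force it to be backed
     * by real memory before the solver runs.  Purely a sketch. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define POOL_BYTES (25ULL * 1024 * 1024 * 1024)   /* adjust to your needs */

    static void *reserve_pool(size_t bytes)
    {
        void *pool = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (pool == MAP_FAILED) {
            perror("mmap");
            return NULL;
        }
        /* Touch every page so the kernel assigns memory now rather than
         * lazily on first use deep inside the factorisation
         * (MAP_POPULATE would achieve much the same on Linux). */
        memset(pool, 0, bytes);
        /* Optionally pin the pool so it cannot be swapped out. */
        if (mlock(pool, bytes) != 0)
            perror("mlock (continuing unpinned)");
        return pool;
    }

    int main(void)
    {
        void *pool = reserve_pool(POOL_BYTES);
        if (!pool)
            return 1;
        printf("pool ready at %p\n", pool);
        /* ... hand the pool to a custom allocator and run the solver ... */
        munmap(pool, POOL_BYTES);
        return 0;
    }

In practice the pool would be handed to a custom allocator, or to the library if it accepts user-supplied workspace, and released only when the session ends.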
