Is there a way to move memory that has been swapped out back into main memory (RAM)?
EDIT: I ran a process that ate all my memory, so now every other app I use has something in swap and takes time to load back into RAM. The memory-hungry process has since stopped, and I want to force everything back into memory now, so that I wait once for the swapped-out pages to be read back in, rather than waiting each time I return to an already-open app.
Not directly; moreover, you usually don't want to, as what gets swapped out is often exactly the part that is no longer needed (e.g. initialization code). The only way to force the issue is to ask the kernel to disable the swap area, and even that is not immediate.
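For reference, disabling and re-enabling the swap area (usually done as root via the swapoff and swapon command-line tools) is what forces the kernel to read every swapped page back into RAM. A minimal sketch using the underlying syscalls; the device path is a placeholder, so check /proc/swaps for yours:

    /* Force swapped pages back to RAM by disabling and re-enabling
     * the swap area. Requires root; the swapoff() call blocks until
     * everything has been read back in. */
    #include <stdio.h>
    #include <sys/swap.h>

    int main(void)
    {
        const char *swapdev = "/dev/sda2";  /* placeholder: see /proc/swaps */

        if (swapoff(swapdev) != 0) {        /* pulls all pages back into RAM */
            perror("swapoff");
            return 1;
        }
        if (swapon(swapdev, 0) != 0) {      /* re-enable swap for future use */
            perror("swapon");
            return 1;
        }
        return 0;
    }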
The kernel will automatically and transparently swap that data back into RAM when it's required.
You can keep a process from being swapped out using mlock() or mlockall(), but you probably shouldn't. If your app is getting swapped out, it may be using too much memory, there may be too little memory in the machine, or there may be too many running processes. None of those problems is improved by your app calling mlock().
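For illustration, a minimal sketch of what pinning a process in RAM looks like; note that unprivileged processes are bound by RLIMIT_MEMLOCK (ulimit -l, discussed below):

    /* Pin this process's pages in RAM so they can't be swapped out. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* MCL_CURRENT locks pages already mapped; MCL_FUTURE also
         * locks anything mapped from now on. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");   /* typically EPERM or ENOMEM over the limit */
            return 1;
        }

        /* ... latency-sensitive work; no page of this process will
         * be swapped out while the lock is held ... */

        munlockall();             /* release the locks when done */
        return 0;
    }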
Related
I just learned about the mlock() functions. I know they let you lock program memory into RAM (the physical address may change, but the memory cannot be evicted). I've read that newer Linux kernels enforce an mlock limit (ulimit -l), but that it only applies to unprivileged processes. If this is a per-process limit, couldn't an unprivileged process spawn a ton of processes by fork()-ing and have each one call mlock(), until all memory is locked up and the OS slows to a crawl because of constant swapping or OOM-killer activity?
It is possible that an attacker could cause problems with this, but not materially more problems than they could cause otherwise.
The default limit on my system is about 2 MB, which means a typical process can't lock more than 2 MB of data into memory. Note that locked memory is just normal memory that won't be swapped out; it is not an independent, special resource.
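You can also query this limit programmatically; a small sketch (this is the same value ulimit -l reports, but in bytes):

    /* Print the per-process locked-memory limit (RLIMIT_MEMLOCK). */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        /* rlim_cur may be RLIM_INFINITY for privileged users. */
        printf("RLIMIT_MEMLOCK soft limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
        return 0;
    }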
It is possible that a malicious process could spawn many other processes to use more locked memory, but because a process usually requires more than 2 MB of memory to run anyway, it isn't exhausting memory any more efficiently by locking it; in fact, starting a new process is itself a more effective way of consuming memory than locking it. It is true that a process could simply fork, lock memory, and sleep (sketched below), in which case most of its other pages would be shared due to copy-on-write, but it could just as easily allocate a decent chunk of memory and cause far more problems, and it will generally have permission to do so, since many legitimate processes require non-trivial amounts of memory.
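For concreteness, a bounded sketch of the fork-lock-sleep pattern the question asks about; it is purely illustrative, and the 2 MB figure is just the default limit mentioned above:

    /* Each child locks ~2 MB (about the default RLIMIT_MEMLOCK) and
     * sleeps; copy-on-write keeps the rest of its pages shared. */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define CHUNK (2 * 1024 * 1024)

    int main(void)
    {
        for (int i = 0; i < 8; i++) {      /* small, bounded demo */
            if (fork() == 0) {             /* in the child */
                char *p = malloc(CHUNK);
                if (!p)
                    _exit(1);
                memset(p, 1, CHUNK);       /* fault the pages in */
                mlock(p, CHUNK);           /* pin them; fails with ENOMEM or
                                              EPERM once over the limit */
                pause();                   /* hold the lock while sleeping */
                _exit(0);
            }
        }
        pause();
        return 0;
    }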
So, yes, it's possible that an attacker could use this technique to cause problems, but because there are many easier and more effective ways to exhaust memory or cause other problems, this seems like a silly way to go about doing it. I, for one, am not worried about this as a practical security problem.
When playing Minecraft with a heavyweight mod pack without increasing the memory allocation, you can sometimes see a lot of stuttering: it runs smoothly for about a second, then pauses for about a second. I assumed this was due to GC, with some part of the code creating a lot of temporary objects in the main loop.
Allocating more memory before launching the game cures it. My question is: what is going on under the hood such that extra memory prevents the stuttering entirely, rather than just increasing the interval between pauses?
I have a process that seems to be leaking memory: the longer it runs, the more memory it uses, despite the fact that it consists primarily of a loop iteratively calling a function that should not preserve any data between calls. When I use valgrind to check for leaks, everything comes back OK. When the process eventually exits after running for a few hours, there is a substantial delay at exit. All of this leads me to believe that memory is being allocated in that function and not freed immediately because it is still referenced, then subsequently freed at exit when that reference is finally released.
I'm wondering if there is a way with valgrind (or some other linux-compatible tool) to do a leak check between two code checkpoints. I'd like to get a leak report of all memory that was allocated but not freed between two code checkpoints.
I wrote an article on this a few years back.
In short, you include valgrind/memcheck.h and then you can use macros like
VALGRIND_DO_LEAK_CHECK
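A minimal sketch of checkpointed leak checking (run the binary under valgrind --leak-check=full; the macros are no-ops when not running under valgrind). VALGRIND_DO_ADDED_LEAK_CHECK reports only what has newly leaked since the previous check, which is exactly the between-two-checkpoints report you want:

    /* Leak-checking between two checkpoints with memcheck's
     * client requests. */
    #include <stdlib.h>
    #include <valgrind/memcheck.h>

    static void suspect_function(void)
    {
        void *leak = malloc(64);   /* deliberately leaked for the demo */
        (void)leak;
    }

    int main(void)
    {
        VALGRIND_DO_LEAK_CHECK;        /* checkpoint 1: baseline report */

        suspect_function();

        /* checkpoint 2: report only the 64 bytes leaked since the
         * previous check. */
        VALGRIND_DO_ADDED_LEAK_CHECK;
        return 0;
    }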
Alternatively, you can attach gdb through valgrind's gdbserver (vgdb) and issue the 'monitor leak_check' command; this can be done incrementally. See here
This may have been asked a few different ways, but this is a relatively new field to me so forgive me if it is redundant and point me on my way.
Essentially, I have created a data-collection engine that takes high-speed data (up to thousands of points a second) and stores it in a database.
The database is dynamic, so the statements being fed to it are dynamically created in code as well, which in turn requires a great deal of string manipulation. All of the strings, however, are declared within the scope of asynchronous event-handler methods, so they should fall out of scope as soon as each method completes.
As the application runs, its memory usage according to Task Manager / Process Explorer slowly but steadily increases, so it would seem that something is not being properly disposed of and/or collected.
If I attach CDB -p (yes, I am loading sos.dll from the CLR) and do a !dumpheap, I see that the majority of this memory is being used by System.String; likewise, if I run !dumpheap -type System.String and then !do the addresses, I see the exact strings (the SQL statements).
However, if I do a !gcroot on any of the addresses, I get "Found 0 unique roots (run '!GCRoot -all' to see all roots)"; and if I then try that as suggested, I get "Invalid argument -all". O.o
So, after some googling, and some arguments to the effect that unrooted objects will eventually be collected by the GC and that this is therefore not an issue, I looked further. It appears that 84% of my problem is sitting on the LOH (which, depending on which thread you read and where, may or may not get processed for GC unless there is memory pressure on the machine or I explicitly force a collection, which is considered bad practice according to everything I can find).
So what I need to know is: is this essentially true? Is this not a memory leak, but simply the system leaving things in place until it HAS to reclaim them? And if so, how do I tell whether or not I have a legitimate memory leak?
This is my first time working with a debugger external to the application, as I have never had to address this sort of issue before, so I am very new to that side of things; this is a learning experience.
The application is written in VS2012 Pro, in C#; it is multi-threaded, and a console application currently wraps the API for testing, but it will eventually be a Windows service.
What you read is true: managed applications use a memory model where objects pile up until a certain memory threshold is reached (calculated from the amount of physical memory on your system and your application's actual growth rate), after which all(*) "dead" objects are squeezed out by compacting the remaining live memory into one contiguous block, which keeps allocation fast.
So yes, don't worry about your memory steadily increasing until you're several tens of MB up and no collection has taken place.
(*) - it is actually more complicated than that, due to multiple memory pools (generations, based on object size and lifetime), which keep the system from constantly re-scanning very long-lived objects, and due to finalizers. When an object has a finalizer, instead of being freed it survives the collection and is moved to a special queue, the finalizer queue, where it waits for the dedicated finalizer thread to run its finalizer (keep in mind the GC itself runs on separate threads), and only after that does its memory finally get freed.
I want to dump the core of a running process that, according to /proc/<pid>/status, is currently blocking on disk activity. It is actually busy doing work on the GPU (it should take about 4 hours, but it has now been running significantly longer). I would like to know how much of the process's work has been done, so it would be good to be able to dump its memory. However, as far as I know, "blocking on disk activity" means the process cannot be interrupted in any way, and coredumping a process, e.g. with gdb, requires interrupting and temporarily stopping it in order to attach via ptrace, right?
I know that I could just read /proc/<pid>/{maps,mem} as root to get the (maybe inconsistent) memory state, but I don't know any way to get hold of the process's userspace CPU register values... they stay the same while the process is inside the kernel, right?
You can probably run gcore on your program. It is basically a wrapper around GDB that attaches, runs the gcore command, and detaches again.
This might interrupt your I/O (as if the process had received a signal, which it will), but a correctly written program can restart the interrupted operation (and such interruptions can occur in any case, due to default signal handling).