Tcl script aborts: unable to realloc xxx bytes [closed] - linux

My Tcl script aborts, saying that it is unable to realloc 2191392 bytes. This happens when the script runs for a long time, say, more than 10 hours. The script connects to devices over telnet and SSH and executes and verifies command output on them. The Linux machine has plenty of RAM (32 GB), and ulimit is unlimited for process, data, and file size. The script's process doesn't eat much memory; even in the worst case it stays under 1 GB. I just wonder why memory allocation fails when there is plenty of RAM.

That message is an indication that an underlying malloc() call returned NULL (in a non-recoverable location), and given that it's not for very much, it's an indication that the system is thoroughly unable to allocate much memory. Depending on how your system is configured (32-bit vs. 64-bit; check what parray tcl_platform prints to find out) that could be an artefact of a few things, but if you think that it shouldn't be using more than a gigabyte of memory, it's an indication of a memory leak.
Unfortunately, it's hard to chase down memory leaks in general. Tcl's built-in memory command (enabled via configure --enable-symbols=mem at build time) can help, as can a tool like Electric Fence, but both are imperfect and generally can't tell you where you're going wrong (you'll be looking for the absence of a call that releases memory). At the Tcl level, check whether each of the variables listed by info globals is of sensible size, and whether the number of globals keeps growing. You'll want tools like string length, array exists, array size and array names for this.
It's also possible for memory allocation to fail due to another process consuming so much memory that the OS starts to feel highly constrained. I hope this isn't happening in your case, since it's much harder to prevent.
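As a first check, here is a minimal sketch for confirming that the interpreter's footprint really does grow over the 10-hour run (TCL_PID and the log path are placeholders for your own values):
# sample the script's resident set size (in KB) every 5 minutes
while sleep 300; do
    ps -o rss= -p "$TCL_PID" >> /tmp/tcl_rss.log
done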

Related

Does Linux have a page file? [closed]

I have found in several places that Linux uses pages and a paging mechanism, but I couldn't find where this file is or how to configure it.
All the information I found is about the Linux swap file/partition. There is a difference between paging and swapping:
Paging moves individual pages (small fixed-size chunks of data, usually 4 KB, though the size varies between OSes) from main memory to backing storage; it happens continuously as a normal function of the operating system.
Swapping moves an entire process out to storage, and happens when the system is memory-stressed, or on Windows 8 when a new application is hibernating.
Does Linux use its swap file/partition for both cases?
If so, how can I see how many pages are currently paged out? This information does not appear in the vmstat, free, or swapon commands (or I fail to see it there).
Or is there another file used for paging?
If so, how can I configure it (and watch its usage)?
Or perhaps Linux does not use paging at all and I was misled?
I would appreciate answers specific to Red Hat Enterprise Linux versions 6 and 7, but a general answer about all Linux systems would also be good.
Thanks in advance.
On Linux, the swap partition(s) are used for paging.
Linux does not respond to memory pressure by swapping out whole processes. The virtual memory system does demand paging, page by page. Under extreme memory pressure, one or more processes will be killed by the OOM killer. (There are some useful links to documentation in the first NOTE in man malloc.)
There is a line in the top header which shows swap partition usage, but if that is all the information you want, use
swapon -s
See man swapon for more information.
The swap partition usage is not the same as the number of pages that have been paged out. A page might be memory-mapped to a file using the mmap call; since that page has backing store in the file, there is no need to also write it to a swap partition, and the system won't use swap space for it. But swap partition usage is a pretty good indicator.
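To the specific question of how many pages are currently paged out, a few standard places to look (the per-process VmSwap field only exists on reasonably recent kernels, so treat its presence on an older RHEL 6 kernel as an assumption):
# system-wide swap totals
grep -i swap /proc/meminfo
# pages swapped in/out per second (the si/so columns)
vmstat 1
# swap usage of a single process
grep VmSwap /proc/<pid>/status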
Also note that Linux (unlike Windows) does not allocate swap space for pages when they are allocated. Instead, it adds the new page to the virtual memory map without any backing store, and allocates the swap space only when the page needs to be swapped out. The consequence (as described in the malloc manpage referenced earlier) is that a malloc call may succeed in allocating virtual memory, but a subsequent attempt to use that virtual memory may fail.
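This overcommit behaviour is configurable through a sysctl; a quick way to inspect it:
# 0 = heuristic overcommit (default), 1 = always overcommit, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory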
Although Linux retains the term 'swap partition' as a historical relic, it actually performs paging. So your expectation is borne out; you were just thrown by the archaic terminology.

Who determines the block size when writing to a disk? [closed]

This might be a naive question but I can't find a straight answer for it.
While using I/O tools such as dd, fio, and bonnie++, one of the tools' parameters sets the block size used in the test. You can set the block size to 512 KB, 1 MB, or even more, and as the block size gets larger, the reported MB/s also gets higher. I believe that is logical, since the same amount of data is written in fewer, larger operations.
So my questions are:
-How does the process work when the default block size is 4 KB (or 32 KB in some kernels)?
-In any other application, who determines the block size used to write to disk? Is it the application itself or the operating system?
-What would be a typical block size for a database application, for instance?
Thanks in advance :)
If you use something like dd, you're doing a block-level operation, so you get to specify a block size. Up to a point, you'll get greater speed by using a bigger block size, but it will quickly tail off. It's very inefficient to read from a disk byte by byte, but by the time you've hit a few megabytes, you won't notice any further speed increase.
When an application writes to disk, it is generally not doing block-level access, but reading and writing files. It's the operating system that is responsible for turning this file-level access into block-level access. An application, unless it's a specialised one running as root, won't care about block-level access, and won't be involved in determining block sizes for this kind of thing.
It's further complicated by the disk cache: when you read something at the application level, if you're lucky, you won't touch the disk at all: it'll be something already cached, and you'll retrieve it from there (without being aware of it). When you write, you will hopefully find that you write into the cache and appear to finish immediately, and then the operating system will do the actual write when it gets round to it. It's only if you're doing lots of writing, or if the cache is turned off, that you will exhaust the cache and the writes will need to happen before control gets passed back to your application.
In short: unless you muck about at a fairly low level, you don't need to worry about block sizes.
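To make the first point concrete, here is a hedged sketch with dd (the path and sizes are illustrative; oflag=direct bypasses the page cache discussed above, so the chosen block size actually reaches the disk):
# write 1 GiB in 4 KiB blocks
dd if=/dev/zero of=/tmp/testfile bs=4k count=262144 oflag=direct
# write 1 GiB in 1 MiB blocks; typically reports a much higher MB/s
dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 oflag=direct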

What is the Linux Stack?

I recently ran into a bug with the "linux stack" and the "linux stack size". I came across a blog directing me to try
ulimit -a
to see what the limit for my box was; it was set to 8192 KB, which seems to be the default.
What is the "linux stack"? How does it work, what does it store, what does it do?
The short answer is:
When programs on your Linux box run, they add and remove data from the stack regularly as they execute. The stack size refers to how much memory is reserved for a process's stack. Increasing the stack size allows the program to make deeper chains of function calls: each time a function is called, its data is pushed onto the stack, on top of the previous routine's data.
Unless the program is very complex or designed for a special purpose, a stack size of 8192 KB is normally fine. Some programs, such as graphics-processing applications, require you to increase the stack size to function, as they may store a lot of data on the stack.
Feel free to increase the stack size for those applications; it's not a problem. To do so, use
ulimit -s size_in_kilobytes
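For example (the limit applies to processes started from the current shell; the value is illustrative):
# show the current soft stack limit, in KB
ulimit -s
# raise it to 16 MB
ulimit -s 16384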

Alternative to valgrind (memcheck) for finding leaks on linux? [closed]

I have a linux x86 application that makes use of various third-party shared-object libraries. I suspect these libraries are leaking memory (since it can't possibly be my code ;-)
I tried the trusty valgrind, but it died a horrible death because one of the third-party libraries is using an obscure x86 instruction that valgrind doesn't implement.
I found a recommendation for DUMA and gave it a try (using the LD_PRELOAD technique to bring DUMA in at run-time), but it aborted complaining about a free operation on memory that wasn't allocated via DUMA (almost certainly by some constructor of a static object in one of the previously mentioned third-party libraries).
Are there other run-time-linkable (or otherwise not requiring a recompilation/relink) tools around that will work on linux?
Give Dr. Memory a try. It is based on DynamoRIO and shares many features with Valgrind.
The TotalView debugger (or, more precisely, its Memscope) has a feature set similar to the one of Valgrind.
You can also try Electric Fence (original author's link) (the origin of DUMA) for buffer overflows or touch-after-free cases (but not for memleaks, though).
In 2020, to find memory leaks on Linux, you may try:
AddressSanitizer (ASan)
The address sanitizer can be used with both GCC (4.8 and above) and Clang (3.1 and above), and it's great:
The tool has proved useful in large projects such as Chromium and Firefox.
It's much faster than older alternatives like Valgrind.
ASan provides very detailed memory-region information, which is very helpful when analyzing a leak.
The drawbacks of ASan: you need to build your program with the option -fsanitize=address, and the extra memory cost is much bigger.
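A minimal sketch of the workflow (leaky.c stands in for your own source file):
# rebuild with ASan instrumentation and debug symbols
gcc -fsanitize=address -g leaky.c -o leaky
# the leak report is printed to stderr when the process exits
./leaky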
TCMalloc
TCMalloc can be used either via LD_PRELOAD or by linking it directly into your program. The result can be visualized with the pprof program, which has both a nice web UI and a console text mode. I suggest using it if the address sanitizer is not applicable in your environment (if you have a very old compiler, or your machine has too little memory to run ASan).
TCMalloc is also used in large-scale production and has proved to be robust.
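A hedged sketch of the gperftools heap-profiler workflow (the library path varies by distribution, and ./myapp is a placeholder for your binary):
# preload tcmalloc and enable heap profiling, no relink needed
LD_PRELOAD=/usr/lib/libtcmalloc.so HEAPPROFILE=/tmp/myapp.hprof ./myapp
# summarize the recorded heap profile
pprof --text ./myapp /tmp/myapp.hprof.0001.heap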
Linux Perf tools and BCC
Linux perf can also be used to find memory leaks. It is a sampling-based tool, so it cannot be precise, but it is still a great help when analyzing memory usage.
There is also the memleak script among BCC's tools:
# trace allocations and display a summary of "leaked" (outstanding) allocations every 5 seconds
./memleak -p $(pidof allocs)
# trace allocations and display each individual allocator function call
./memleak -p $(pidof allocs) -t
# trace allocations and display allocated addresses, sizes, and stacks every 10 seconds for outstanding allocations
./memleak -ap $(pidof allocs) 10
# run the specified command and trace its allocations
./memleak -c "./allocs"
# trace allocations in kernel mode and display a summary of outstanding allocations every 5 seconds
./memleak
# trace allocations in kernel mode and display a summary of outstanding allocations that are at least one minute (60 seconds) old
./memleak -o 60000
# trace roughly every 5th allocation, to reduce overhead
./memleak -s 5
The advantage of such tools: we don't need to rebuild the program, which makes them handy for analyzing live services.
Heapusage is a simple run-time tool for finding memory leaks on Linux and macOS. The output logging format for leaks is quite similar to Valgrind's, but it only logs definite leaks (i.e. allocations not freed at termination).
Full disclosure: I wrote Heapusage for usage in situations when Valgrind is inadequate (high performance applications, and also for CPU architectures not supported by Valgrind).

How do I find which process is leaking memory? [closed]

I have a system (Ubuntu) with many processes, and one (or more) of them has a memory leak. Is there a good way to find the process that has the leak? Some of the processes are JVMs, some are not. Some are home-grown, some are open source.
You can run the top command (to run non-interactively, type top -b -n 1). To spot applications that are leaking memory, watch the following columns:
VIRT - total virtual memory used by the process
RES - resident (physical) memory size
SHR - shared memory size
%MEM - the process's share of physical memory
In interactive mode, press Shift+M to sort processes by memory usage.
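A quick snapshot of the biggest memory consumers is also easy to get with ps (a sketch; adjust the columns to taste):
# top 10 processes by resident memory, plus the header row
ps aux --sort=-rss | head -n 11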
If the program leaks over a long time, top might not be practical. I would write a simple shell script that appends the result of ps aux to a file every X seconds, depending on how long it takes to leak a significant amount of memory. Something like:
while true
do
    echo "---------------------------------" >> /tmp/mem_usage
    date >> /tmp/mem_usage
    ps aux >> /tmp/mem_usage
    sleep 60
done
I suggest the use of htop, as a better alternative to top.
In addition to top, you can use System Monitor (System - Administration - System Monitor, then select the Processes tab). Select View - All Processes, go to Edit - Preferences, and enable the Virtual Memory column. Sort either by that column or by the Memory column.
If you can't do it deductively, consider the Signal Flare debugging pattern: Increase the amount of memory allocated by one process by a factor of ten. Then run your program.
If the amount of the memory leaked is the same, that process was not the source of the leak; restore the process and make the same modification to the next process.
When you hit the process that is responsible, you'll see the size of your memory leak jump (the "signal flare"). You can narrow it down still further by selectively increasing the allocation size of separate statements within this process.
A difficult task. I would normally suggest grabbing a debugger/memory profiler like Valgrind and running the programs in it one after another. Sooner or later you will find the program that leaks, and you can report it to the developer or fix it yourself.
As suggested, the way to go is Valgrind. It's a profiler that checks many aspects of your application's runtime behavior, including memory usage.
Running your application through Valgrind will let you verify whether you forget to release memory allocated with malloc, whether you free the same memory twice, and so on.
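A typical invocation (./myprogram is a placeholder for the binary under test):
# report definite leaks with full stack traces
valgrind --leak-check=full ./myprogram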
