I am wondering whether there is a way to make Valgrind show the value of the leaked memory, such as (NOT real Valgrind output!):
==15060== 12 bytes (***HERE***) in 1 blocks are definitely lost in loss record 1 of 1
==15060== at 0x4C2AAA4: operator new[](unsigned long) (in vgpreload_memcheck-amd64-linux.so)
==15060== by 0x5DC8236: char* allocate(unsigned long, char const*, long) (mem.h:149)
==15060== by 0x5EAC286: trim(char const*, nap_compiler const*) (file.cpp:107)
Where ***HERE*** would show the exact value of the string that is being leaked. I've been looking all over the documentation but have found nothing. Maybe someone more familiar with the tool can point out what to do to achieve this! (I'm not afraid of compiling it myself :) )
The GDB server in Valgrind (version >= 3.8.0) provides the monitor command
block_list
which will output the addresses of the leaked blocks.
You can then examine the leaked memory content using GDB commands such as x.
For more information, see
http://www.valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.gdbserver
and
http://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
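A rough session sketch (the program name, breakpoint location, loss record number, and address below are purely illustrative): start the program under Valgrind with the embedded gdbserver, attach GDB from a second terminal, let the program run past the point where the leak has happened, then run a leak check, list the blocks of the interesting loss record, and examine their contents with x.
$ valgrind --vgdb=yes --vgdb-error=0 ./myprog     (terminal 1)
$ gdb ./myprog                                    (terminal 2)
(gdb) target remote | vgdb
(gdb) break some_later_function
(gdb) continue
(gdb) monitor leak_check full
(gdb) monitor block_list 1
(gdb) x/12c 0x51f2040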
I am trying to reproduce a problem.
My C code is giving SIGABRT; I traced it back to line 3174 of glibc's malloc.c:
https://elixir.bootlin.com/glibc/glibc-2.27/source/malloc/malloc.c
/* Little security check which won't hurt performance: the allocator
never wrapps around at the end of the address space. Therefore
we can exclude some size values which might appear here by
accident or by "design" from some intruder. We need to bypass
this check for dumped fake mmap chunks from the old main arena
because the new malloc may provide additional alignment. */
if ((__builtin_expect ((uintptr_t) oldp > (uintptr_t) -oldsize, 0)
|| __builtin_expect (misaligned_chunk (oldp), 0))
&& !DUMPED_MAIN_ARENA_CHUNK (oldp))
malloc_printerr ("realloc(): invalid pointer");
My understanding is that memory gets allocated when I call calloc, and then when I call realloc to try to grow the memory area, the heap is not available for some reason, giving SIGABRT.
My other question is: how can I limit the heap area to a few bytes, say 10 bytes, to replicate the problem? On Stack Overflow, RLIMIT and setrlimit are mentioned, but no sample code is given. Can you provide sample code where the heap size is 10 bytes?
How can I limit the heap area to some bytes say, 10 bytes
Can you provide sample code where heap size is 10 Bytes ?
From How to limit heap size for a c code in linux, you could do:
You could use (inside your program) setrlimit(2), probably with RLIMIT_AS (as cited by Ouah's answer).
#include <sys/resource.h>

int main() {
    /* Cap the address space (and hence the heap) at 10 bytes;
       later allocations should fail. */
    setrlimit(RLIMIT_AS, &(struct rlimit){10, 10});
}
Better yet, make your shell do it. With bash it is the ulimit builtin.
$ ulimit -v 10
$ ./your_program.out
to replicate the problem
Most probably, limiting the heap size will just result in a different problem, one caused by the limit itself; it is most likely unrelated to your original problem and will not help you debug it. Instead, I would suggest looking into AddressSanitizer and Valgrind.
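To make that caveat concrete, here is a minimal sketch (my own example, not code from the question) of what limiting RLIMIT_AS typically produces: allocations simply start failing with ENOMEM, rather than reproducing the "realloc(): invalid pointer" abort, which is glibc's complaint about a corrupted or non-heap pointer, not about running out of memory.
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void) {
    /* RLIMIT_AS caps the whole address space; existing mappings stay
       valid, but the heap can no longer grow. */
    struct rlimit rl = { 10, 10 };
    if (setrlimit(RLIMIT_AS, &rl) != 0)
        perror("setrlimit");

    char *p = calloc(1, 1024 * 1024);      /* likely NULL, errno == ENOMEM */
    if (p == NULL)
        printf("calloc failed: %s\n", strerror(errno));

    char *q = realloc(p, 8 * 1024 * 1024); /* fails the same way, no abort */
    if (q == NULL)
        printf("realloc failed: %s\n", strerror(errno));

    free(q != NULL ? q : p);
    return 0;
}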
When using io_uring_queue_init, it calls io_uring_setup. ENOMEM is returned when there is an insufficient amount of locked memory available for the process.
A strace will look something like:
[pid 37480] io_uring_setup(2048, {flags=0, sq_thread_cpu=0, sq_thread_idle=0}) = -1 ENOMEM (Cannot allocate memory)
What is the formula for how much locked memory is required per entry (the first argument)? And, if possible, based on the sq_entries/cq_entries in the params structure? Kernel code for the particularly keen. Please don't expand the kernel page size in the formula, as I do want this to be an architecture-dependent answer (if it is).
I don't want a dodgy "just set ulimit -l to unlimited" as an answer. There's this outstanding feature request that would help when implemented.
Thanks to Jens Axboe, the following two liburing library calls were added (>= liburing-2.1), returning the size in bytes, 0 if not required, or -errno for errors.
ssize_t io_uring_mlock_size(unsigned entries, unsigned flags);
ssize_t io_uring_mlock_size_params(unsigned entries, struct io_uring_params *p);
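A minimal usage sketch (the entry count and the -luring link flag are my assumptions; this needs liburing >= 2.1):
#include <stdio.h>
#include <liburing.h>

int main(void) {
    /* How many bytes of locked memory would a 2048-entry ring need?
       A return of 0 means no RLIMIT_MEMLOCK charge is required. */
    ssize_t need = io_uring_mlock_size(2048, 0);
    if (need < 0) {
        fprintf(stderr, "io_uring_mlock_size: %zd\n", need);
        return 1;
    }
    printf("locked memory needed: %zd bytes\n", need);
    return 0;
}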
I wonder what the unit is for the memory usage returned by vtkPVMemoryUseInformation.GetProcMemoryUse (reference)? Is it a bit, byte, or kilobyte? Where can I find this in the documentation?
Update 1
I'm calling the mentioned function from a Python script with servermanager.vtkPVMemoryUseInformation().GetProcMemoryUse(<index>). We don't have size_t in Python, right? The main question is: how can I convert the returned value into a human-readable value like MB or GB?
This method internally uses vtksys::SystemInformation, which returns system RAM used in units of KiB; divide by 1024 for MiB or by 1024 * 1024 for GiB.
https://github.com/Kitware/VTK/blob/master/Utilities/KWSys/vtksys/SystemInformation.hxx.in
The documentation should be improved here.
Using Cassandra 2.2.8.
I'm in a situation where too many SSTables (98,000+) have been created for a single table, and many more for other CFs. The node keeps crashing, complaining about insufficient memory for the JRE. I've tried increasing the Linux nofile limit to 200K and max_heap_size to 16G, but to no avail!
Looking for help with ways to reduce the number of SSTables (compaction?) and keep the node up long enough to do the maintenance.
Thanks in advance!
errors:
There is insufficient memory for the Java Runtime Environment to continue.
Out of Memory Error (os_linux.cpp:2627), pid=22667, tid=139622017013504
--------------- T H R E A D ---------------
Current thread (0x00007efc78b83000): JavaThread "MemtableFlushWriter:2" daemon [_thread_in_vm, id=22726, stack(0x00007efc48b61000,0x00007efc48ba2000)]
Stack: [0x00007efc48b61000,0x00007efc48ba2000], sp=0x00007efc48b9f730, free space=249k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0xab97ea] VMError::report_and_die()+0x2ba
V [libjvm.so+0x4f9dcb] report_vm_out_of_memory(char const*, int, unsigned long, VMErrorType, char const*)+0x8b
V [libjvm.so+0x91a7c3] os::Linux::commit_memory_impl(char*, unsigned long, bool)+0x103
V [libjvm.so+0x91ad19] os::pd_commit_memory(char*, unsigned long, unsigned long, bool)+0x29
V [libjvm.so+0x91502a] os::commit_memory(char*, unsigned long, unsigned long, bool)+0x2a
JRE version: Java(TM) SE Runtime Environment (8.0_65-b17) (build 1.8.0_65-b17)
I would treat this as a dead node situation:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsReplaceNode.html
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsReplaceNode.html
After you finish the procedure, the node will have far fewer SSTables. The thing that bothers me is how you got into this situation in the first place. Can you provide some schema, insert, delete, and TTL-related info, and describe the workload?
In Linux, at least in SUSE/SLES version 11, if you type 'limit' it will respond with:
cputime unlimited
filesize unlimited
datasize unlimited
stacksize 8192 kbytes
coredumpsize 1 kbytes
memoryuse 449929820 kbytes
vmemoryuse 423463360 kbytes
descriptors 1024
memorylocked 64 kbytes
maxproc 4135289
maxlocks unlimited
maxsignal 4135289
maxmessage 819200
maxnice 0
maxrtprio 0
maxrttime unlimited
Those are my default settings from /etc/security/limits.conf.
I have a C program that is around 5500 lines of code, minimal comments.
I declare some big arrays. The program deals with mesh structures, so there is a struct "nodes" array with x, y, z variables of type double, along with some other integer and double variables as needed, and a struct "elements" array with n1, n2, n3 variables of type integer. In many of the functions I'll declare something like:
struct NodeTYPE nodes[200000];
struct ElemTYPE elements[300000];
When running the program, it prints a menu of things to do, and you enter a number. Entering 1 calls some function, entering 2 calls a different function, and so on. Most of them work, but one did not: debugging the program shows it segfaults as soon as that function is called.
If I modify /etc/security/limits.conf and add
* hard stacksize unlimited
* soft stacksize unlimited
then the program works without changing anything in the code and without recompiling. The program is compiled via
gcc myprogram.c -O2 -o myprogram.x -lm
Can someone explain in some detail why this happens,
and what the best method is for fixing this type of problem?
Is it poor programming on my part that is causing the segmentation fault?
I want to say that in the past, when I figured out how to make the stack size unlimited, if I put my large arrays globally in the program, outside of main(), the program would not segfault; it was only when those big array declarations were inside functions that this problem occurred.
Is having a stack size limit of 8 MB ridiculously small (my system has over 128 GB of RAM)?
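For reference, roughly 200,000 nodes at three or more doubles each is at least ~4.8 MB, and 300,000 elements at three ints each adds another ~3.6 MB, so two such locals already overrun a default 8 MB stack. Below is a minimal sketch of the usual fix, using hypothetical stand-ins for the structs described above: allocate the arrays on the heap (or make them static/global) so only pointers live on the stack.
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the structs described in the question. */
struct NodeTYPE { double x, y, z; };
struct ElemTYPE { int n1, n2, n3; };

void process_mesh(void) {
    /* Heap allocation: only the two pointers live on the stack,
       so the 8 MB stack limit is no longer an issue. */
    struct NodeTYPE *nodes = malloc(200000 * sizeof *nodes);
    struct ElemTYPE *elements = malloc(300000 * sizeof *elements);
    if (nodes == NULL || elements == NULL) {
        fprintf(stderr, "out of memory\n");
        free(nodes);
        free(elements);
        return;
    }

    /* ... fill and use nodes/elements as before ... */

    free(elements);
    free(nodes);
}

int main(void) {
    process_mesh();
    return 0;
}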