When I extract and iterate over the cashflows in a fixed rate bond, Valgrind reports a memory leak. I am using the following code:
FixedRateBond fixedRateBond(
    settlementDays,
    faceAmount,
    fixedBondSchedule,
    std::vector<Rate>(1, couponRate),
    ActualActual(ActualActual::Bond),
    BusinessDayConvention::Unadjusted,
    redemption,
    issueDate
);
vector<boost::shared_ptr<CashFlow>> cashFlows = fixedRateBond.cashflows();
for (size_t i = 0; i != cashFlows.size(); ++i) {
    cout << "Date: " << cashFlows[i]->date() << " Amount: " << cashFlows[i]->amount() << endl;
}
Edit: Looks like it's an OSX issue as the same code doesn't raise any issues when run in Linux. For posterity, here is the report I was getting on OSX:
==62096== 148 (80 direct, 68 indirect) bytes in 1 blocks are definitely lost in loss record 170 of 208
==62096== at 0x1001F8EA1: malloc (vg_replace_malloc.c:303)
==62096== by 0x102D3D4A2: __Balloc_D2A (in /usr/lib/system/libsystem_c.dylib)
==62096== by 0x102D3DDEB: __d2b_D2A (in /usr/lib/system/libsystem_c.dylib)
==62096== by 0x102D3A443: __dtoa (in /usr/lib/system/libsystem_c.dylib)
==62096== by 0x102D6307A: __vfprintf (in /usr/lib/system/libsystem_c.dylib)
==62096== by 0x102D8C35C: __v2printf (in /usr/lib/system/libsystem_c.dylib)
==62096== by 0x102D705A8: _vsnprintf (in /usr/lib/system/libsystem_c.dylib)
==62096== by 0x102D70607: vsnprintf_l (in /usr/lib/system/libsystem_c.dylib)
==62096== by 0x102D60AB1: snprintf_l (in /usr/lib/system/libsystem_c.dylib)
==62096== by 0x102ACD752: std::__1::num_put<char, std::__1::ostreambuf_iterator<char, std::__1::char_traits<char> > >::do_put(std::__1::ostreambuf_iterator<char, std::__1::char_traits<char> >, std::__1::ios_base&, char, double) const (in /usr/lib/libc++.1.dylib)
==62096== by 0x102AB3B33: std::__1::basic_ostream<char, std::__1::char_traits<char> >::operator<<(double) (in /usr/lib/libc++.1.dylib)
==62096== by 0x10000A2D1: main (main.cpp:31)
This seems to be related to OS X, as the issue does not occur on Linux and, as (the always helpful) Luigi states, it is not in QuantLib code itself.
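If you just want Valgrind to stay quiet about this on OS X, a suppression rule along these lines should match the stack trace above (a sketch of my own; the rule name and wildcards are arbitrary, adjust them to your Valgrind version):
{
   osx_libsystem_dtoa_leak
   Memcheck:Leak
   fun:malloc
   ...
   obj:*libsystem_c.dylib*
   ...
}
Pass it to Valgrind with --suppressions=<file>.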
I am working on a project that uses OpenMP and the exact kernel of CGAL. While running my code in debug mode, thanks to Visual Studio Leak Detector, I discovered a lot of unexpected memory leaks that are evidently related to the combination of those features. I know that an instance of CGAL::Lazy_exact_nt<CGAL::Gmpq> (a.k.a. FT) should not be shared between multiple threads, but since that is not the case here, I thought it was safe. Is there any way to fix these leaks?
Here is a minimal reproducible example:
#include <sstream>
#include <CGAL/Exact_predicates_exact_constructions_kernel.h>

// Assuming the exact kernel (Epeck); its FT is CGAL::Lazy_exact_nt<CGAL::Gmpq> when GMP is used.
typedef CGAL::Exact_predicates_exact_constructions_kernel::FT FT;

int main()
{
    double xs_h = 1.2;
    #pragma omp parallel
    {
        FT xs;
        std::stringstream stream;
        stream << xs_h;
        stream >> xs;
    }
}
And here is (part of) the output of VLD (7 memory leaks in total, all pointing to the same line of code):
---------- Block 26 at 0x00000000FEED2E50: 40 bytes ----------
Leak Hash: 0x6B0C3782, Count: 1, Total 40 bytes
Call Stack (TID 15080):
ucrtbased.dll!malloc()
D:\agent\_work\9\s\src\vctools\crt\vcstartup\src\heap\new_scalar.cpp (35): Test.exe!operator new() + 0xA bytes
D:\CGAL-4.13\include\CGAL\Lazy.h (818): Test.exe!CGAL::Lazy<CGAL::Interval_nt<0>,CGAL::Gmpq,CGAL::Lazy_exact_nt<CGAL::Gmpq>,CGAL::To_interval<CGAL::Gmpq> >::zero() + 0x77 bytes
D:\CGAL-4.13\include\CGAL\Lazy.h (766): Test.exe!CGAL::Lazy<CGAL::Interval_nt<0>,CGAL::Gmpq,CGAL::Lazy_exact_nt<CGAL::Gmpq>,CGAL::To_interval<CGAL::Gmpq> >::Lazy<CGAL::Interval_nt<0>,CGAL::Gmpq,CGAL::Lazy_exact_nt<CGAL::Gmpq>,CGAL::To_interval<CGAL::Gmpq> >() + 0x5 bytes
D:\CGAL-4.13\include\CGAL\Lazy_exact_nt.h (365): Test.exe!CGAL::Lazy_exact_nt<CGAL::Gmpq>::Lazy_exact_nt<CGAL::Gmpq>() + 0x28 bytes
D:\projects\...\src\main.cpp (20): Test.exe!main$omp$1() + 0xA bytes
VCOMP140D.DLL!vcomp_fork() + 0x2E5 bytes
VCOMP140D.DLL!vcomp_fork() + 0x2A1 bytes
VCOMP140D.DLL!vcomp_atomic_div_r8() + 0x20A bytes
KERNEL32.DLL!BaseThreadInitThunk() + 0x14 bytes
ntdll.dll!RtlUserThreadStart() + 0x21 bytes
I am using CGAL 4.13, and my compiler is Visual Studio 2019. I tried recompiling this code with the macro CGAL_HAS_THREADS defined, but it does not change the result (the memory leaks do not vanish). Thanks for your attention.
My goal is to find out the process id of pages that are being swapped out. The Linux kernel function swap_writepage() takes a pointer to struct page as a formal argument while swapping a page to the backing store. All swap-out operations are done by the "kswapd" process. I need to find the pid(s) of the processes whose page is passed as the argument to swap_writepage(). To get there, I was able to find all page table entries associated with that page using the rmap structures.
How can I get a pid from a pte or from a struct page? I have used SystemTap to get the value of the struct page pointer received as an argument in swap_writepage(). However, SystemTap's pid() function prints the pid of the currently running process, not the pid of the process to which the page belongs, so it always gives the kswapd process.
Here is an example of how reverse mapping is used in modern Linux (copied from LXR):
static int try_to_unmap_anon(struct page *page, enum ttu_flags flags)
{
    struct anon_vma *anon_vma;
    struct anon_vma_chain *avc;
    int ret = SWAP_AGAIN;

    anon_vma = page_lock_anon_vma(page);
    if (!anon_vma)
        return ret;

    list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
        struct vm_area_struct *vma = avc->vma;
        unsigned long address;

        /*
         * During exec, a temporary VMA is setup and later moved.
         * The VMA is moved under the anon_vma lock but not the
         * page tables leading to a race where migration cannot
         * find the migration ptes. Rather than increasing the
         * locking requirements of exec(), migration skips
         * temporary VMAs until after exec() completes.
         */
        if (PAGE_MIGRATION && (flags & TTU_MIGRATION) &&
            is_vma_temporary_stack(vma))
            continue;

        address = vma_address(page, vma);
        if (address == -EFAULT)
            continue;
        ret = try_to_unmap_one(page, vma, address, flags);
        if (ret != SWAP_AGAIN || !page_mapped(page))
            break;
    }

    page_unlock_anon_vma(anon_vma);
    return ret;
}
This example shows rmap being used to unmap pages. Each anonymous page holds an anon_vma object in its ->mapping field. The anon_vma holds a list of the VMAs the page is mapped into. Having a vma, you have the mm; having the mm, you have a task_struct. That's it. If you have any doubts, here is the illustration:
Daniel P. Bovet, Marco Cesati Understanding Linux Kernel chapter 17.2
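To make that walk concrete, here is a rough, untested sketch of the same traversal for the kernel version shown above. The function name is mine, and it assumes CONFIG_MM_OWNER is enabled so that mm->owner points at a task_struct; without it you would have to scan the task list and compare task->mm instead.
#include <linux/mm.h>
#include <linux/rmap.h>
#include <linux/sched.h>

static void print_pids_mapping_page(struct page *page)
{
    struct anon_vma *anon_vma;
    struct anon_vma_chain *avc;

    if (!PageAnon(page))
        return;

    /* Same locking helper as in try_to_unmap_anon() above. */
    anon_vma = page_lock_anon_vma(page);
    if (!anon_vma)
        return;

    list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
        struct mm_struct *mm = avc->vma->vm_mm;
        struct task_struct *task = mm ? mm->owner : NULL;

        if (task)
            printk(KERN_INFO "page %p mapped by pid %d (%s)\n",
                   page, task->pid, task->comm);
    }

    page_unlock_anon_vma(anon_vma);
}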
I am confused about the behaviour of malloc_trim as implemented in the glibc.
man malloc_trim
[...]
malloc_trim - release free memory from the top of the heap
[...]
This function cannot release free memory located at places other than the top of the heap.
When I look up the source of malloc_trim() (in malloc/malloc.c), I see that it calls mtrim(), which uses madvise(x, MADV_DONTNEED) to release memory back to the operating system.
So I wonder if the man-page is wrong or if I misinterpret the source in malloc/malloc.c.
Can malloc_trim() release memory from the middle of the heap?
There are two usages of madvise with MADV_DONTNEED in glibc now: http://code.metager.de/source/search?q=MADV_DONTNEED&path=%2Fgnu%2Fglibc%2Fmalloc%2F&project=gnu
arena.c:643    __madvise ((char *) h + new_size, diff, MADV_DONTNEED);
malloc.c:4535  __madvise (paligned_mem, size & ~psm1, MADV_DONTNEED);
There was a commit by Ulrich Drepper on 16 Dec 2007 (part of glibc 2.9 and newer), https://sourceware.org/git/?p=glibc.git;a=commit;f=malloc/malloc.c;h=68631c8eb92ff38d9da1ae34f6aa048539b199cc:
malloc/malloc.c (public_mTRIm): Iterate over all arenas and call
mTRIm for all of them.
(mTRIm): Additionally iterate over all free blocks and use madvise
to free memory for all those blocks which contain at least one
memory page.
The mTRIm (now mtrim) implementation was changed. Unused parts of chunks that are page-aligned and at least a page in size may now be marked with MADV_DONTNEED:
  /* See whether the chunk contains at least one unused page.  */
  char *paligned_mem = (char *) (((uintptr_t) p
                                  + sizeof (struct malloc_chunk)
                                  + psm1) & ~psm1);

  assert ((char *) chunk2mem (p) + 4 * SIZE_SZ <= paligned_mem);
  assert ((char *) p + size > paligned_mem);

  /* This is the size we could potentially free.  */
  size -= paligned_mem - (char *) p;

  if (size > psm1)
    madvise (paligned_mem, size & ~psm1, MADV_DONTNEED);
The man page of malloc_trim is here: https://github.com/mkerrisk/man-pages/blob/master/man3/malloc_trim.3 and it was committed by kerrisk in 2012: https://github.com/mkerrisk/man-pages/commit/a15b0e60b297e29c825b7417582a33e6ca26bf65
As far as I can grep glibc's git, there are no man pages in glibc itself and no commit to the malloc_trim man page documenting this patch, so the best (and only) documentation of glibc malloc is its source code: https://sourceware.org/git/?p=glibc.git;a=blob;f=malloc/malloc.c
Additional functions:
malloc_trim(size_t pad);
/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative
  arguments to sbrk) if there is unused memory at the `high' end of
  the malloc pool. You can call this after freeing large blocks of
  memory to potentially reduce the system-level memory requirements
  of a program. However, it cannot guarantee to reduce memory. Under
  some allocation patterns, some large free blocks of memory will be
  locked between two used chunks, so they cannot be given back to
  the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero,
  only the minimum amount of memory to maintain internal data
  structures will be left (one page or less). Non-zero arguments
  can be supplied to maintain enough trailing space to service
  future expected allocations without having to re-obtain memory
  from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
  On systems that do not support "negative sbrks", it will always
  return 0.
*/
int __malloc_trim(size_t);
Freeing from the middle of a chunk is not documented as text in malloc/malloc.c (the malloc_trim description in the comment was not updated in 2007), nor is it documented in the man-pages project. The man page from 2012 may be the first man page of the function, and it was not written by the glibc authors. The glibc info page only mentions the M_TRIM_THRESHOLD of 128 KB:
https://www.gnu.org/software/libc/manual/html_node/Malloc-Tunable-Parameters.html#Malloc-Tunable-Parameters and it does not list the malloc_trim function https://www.gnu.org/software/libc/manual/html_node/Summary-of-Malloc.html#Summary-of-Malloc (it also does not document memusage/memusagestat/libmemusage.so).
You may ask Drepper and the other glibc developers again, as you already did in https://sourceware.org/ml/libc-help/2015-02/msg00022.html "malloc_trim() behaviour", but there is still no reply from them (only incorrect answers from other users, like https://sourceware.org/ml/libc-help/2015-05/msg00007.html and https://sourceware.org/ml/libc-help/2015-05/msg00008.html).
Or you may test malloc_trim with this simple C program (test_malloc_trim.c) and strace/ltrace:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <malloc.h>

int main()
{
    int *m1, *m2, *m3, *m4;
    printf("%s\n", "Test started");
    m1 = (int *) malloc(20000);
    m2 = (int *) malloc(40000);
    m3 = (int *) malloc(80000);
    m4 = (int *) malloc(10000);
    printf("1:%p 2:%p 3:%p 4:%p\n", m1, m2, m3, m4);
    free(m2);
    malloc_trim(0); // 20000, 2000000
    sleep(1);
    free(m1);
    free(m3);
    free(m4);
    // malloc_stats(); malloc_info(0, stdout);
    return 0;
}
gcc test_malloc_trim.c -o test_malloc_trim, strace ./test_malloc_trim
write(1, "Test started\n", 13Test started
) = 13
brk(0) = 0xcca000
brk(0xcef000) = 0xcef000
write(1, "1:0xcca010 2:0xccee40 3:0xcd8a90"..., 441:0xcca010 2:0xccee40 3:0xcd8a90 4:0xcec320
) = 44
madvise(0xccf000, 36864, MADV_DONTNEED) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
nanosleep({1, 0}, 0x7ffffafbfff0) = 0
brk(0xceb000) = 0xceb000
So, there is a madvise with MADV_DONTNEED for 9 pages after the malloc_trim(0) call, when there was a hole of 40008 bytes in the middle of the heap.
... utilizing madvise(x, MADV_DONTNEED) to release memory back to the operating system.
madvise(x, MADV_DONTNEED) does not release memory. man madvise:
MADV_DONTNEED
Do not expect access in the near future. (For the time being,
the application is finished with the given range, so the kernel
can free resources associated with it.) Subsequent accesses of
pages in this range will succeed, but will result either in
reloading of the memory contents from the underlying mapped file
(see mmap(2)) or zero-fill-on-demand pages for mappings without
an underlying file.
So, the usage of madvise(x, MADV_DONTNEED) does not contradict man malloc_trim's statement:
This function cannot release free memory located at places other than the top of the heap.
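To see this behaviour for yourself, here is a small standalone program (my own sketch, not from the original posts) that fills an anonymous mapping, drops it with madvise(MADV_DONTNEED), and reads it again: the address range stays mapped, and the pages simply come back zero-filled on the next access.
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4 * 4096;
    char *p = (char *) mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    memset(p, 'A', len);
    printf("before madvise: %c\n", p[0]);   /* prints 'A' */

    /* Tell the kernel we no longer need the contents of this range. */
    madvise(p, len, MADV_DONTNEED);

    /* The mapping is still valid; accessing it refaults zero-filled pages. */
    printf("after madvise: %d\n", p[0]);    /* prints 0 */

    munmap(p, len);
    return 0;
}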
I have a very simple CUDA component in my application. Valgrind reports a lot of leaks and still-reachables, all related to the cudaMalloc calls.
Are these leaks real? I call cudaFree for every cudaMalloc. Is this valgrind's inability to interpret GPU memory allocation? If these leaks are not real, can I suppress them and have valgrind only analyse the non-gpu part of the application?
extern "C"
unsigned int *gethash(int nodec, char *h_nodev, int len) {
unsigned int *h_out = (unsigned int *)malloc(sizeof(unsigned int) * nodec);
char *d_in;
unsigned int *d_out;
cudaMalloc((void**) &d_in, sizeof(char) * len * nodec);
cudaMalloc((void**) &d_out, sizeof(unsigned int) * nodec);
cudaMemcpy(d_in, h_nodev, sizeof(char) * len * nodec, cudaMemcpyHostToDevice);
int blocks = 1 + nodec / 512;
cube<<<blocks, 512>>>(d_out, d_in, nodec, len);
cudaMemcpy(h_out, d_out, sizeof(unsigned int) * nodec, cudaMemcpyDeviceToHost);
cudaFree(d_in);
cudaFree(d_out);
return h_out;
}
Last bit of the Valgrind output:
...
==5727== 5,468 (5,020 direct, 448 indirect) bytes in 1 blocks are definitely lost in loss record 506 of 523
==5727== at 0x402B965: calloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
==5727== by 0x4843910: ??? (in /usr/lib/nvidia-319-updates/libcuda.so.319.60)
==5727== by 0x48403E9: ??? (in /usr/lib/nvidia-319-updates/libcuda.so.319.60)
==5727== by 0x498B32D: ??? (in /usr/lib/nvidia-319-updates/libcuda.so.319.60)
==5727== by 0x494A6E4: ??? (in /usr/lib/nvidia-319-updates/libcuda.so.319.60)
==5727== by 0x4849534: ??? (in /usr/lib/nvidia-319-updates/libcuda.so.319.60)
==5727== by 0x48191DD: cuInit (in /usr/lib/nvidia-319-updates/libcuda.so.319.60)
==5727== by 0x406B4D6: ??? (in /usr/lib/i386-linux-gnu/libcudart.so.5.0.35)
==5727== by 0x406B61F: ??? (in /usr/lib/i386-linux-gnu/libcudart.so.5.0.35)
==5727== by 0x408695D: cudaMalloc (in /usr/lib/i386-linux-gnu/libcudart.so.5.0.35)
==5727== by 0x804A006: gethash (hashkernel.cu:36)
==5727== by 0x804905F: chkisomorphs (bdd.c:326)
==5727==
==5727== LEAK SUMMARY:
==5727== definitely lost: 10,240 bytes in 6 blocks
==5727== indirectly lost: 1,505 bytes in 54 blocks
==5727== possibly lost: 7,972 bytes in 104 blocks
==5727== still reachable: 626,997 bytes in 1,201 blocks
==5727== suppressed: 0 bytes in 0 blocks
It's a known issue that valgrind reports false-positives for a bunch of CUDA stuff. The best way to avoid seeing it would be to use valgrind suppressions, which you can read all about here:
http://valgrind.org/docs/manual/manual-core.html#manual-core.suppress
If you want to jumpstart into something a little closer to your specific issue, an interesting post is this one on the Nvidia dev forums. It has a link to a sample suppression rule file.
https://devtalk.nvidia.com/default/topic/404607/valgrind-3-4-suppressions-a-little-howto/
Try using cuda-memcheck --leak-check full. Cuda-memcheck is a set of tools that provides similar functionality to Valgrind for CUDA applications. It is installed as part of the CUDA toolkit. You can get more documentation about how to use cuda-memcheck here : http://docs.nvidia.com/cuda/cuda-memcheck/
Note that cuda-memcheck is not a direct replacement for valgrind and can't be used to detect host side memory leaks or buffer overflows.
To add to scarl3tt's answer, this may be overly general for some applications, but if you want to use valgrind while ignoring most of the cuda issues, use the option --suppressions=valgrind-cuda.supp where valgrind-cuda.supp is a file with the following rules:
{
   alloc_libcuda
   Memcheck:Leak
   match-leak-kinds: reachable,possible
   fun:*alloc
   ...
   obj:*libcuda.so*
   ...
}
{
   alloc_libcufft
   Memcheck:Leak
   match-leak-kinds: reachable,possible
   fun:*alloc
   ...
   obj:*libcufft.so*
   ...
}
{
   alloc_libcudart
   Memcheck:Leak
   match-leak-kinds: reachable,possible
   fun:*alloc
   ...
   obj:*libcudart.so*
   ...
}
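Invoke Valgrind with the suppression file, for example: valgrind --leak-check=full --suppressions=valgrind-cuda.supp ./myapp (the binary name here is just a placeholder).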
I wouldn't trust valgrind or any other leak detector (like VLD) with CUDA. I'm sure they weren't designed with GPU allocations in mind. I don't know whether Nvidia's Nsight has the capability these days (I haven't done GPU programming for almost 6 months now), but that's the best thing I used for CUDA debugging, and to be quite honest, it was buggy as hell.
The code you've posted shouldn't create a leak.
Since I don't have 50 reputation, I cannot leave a comment on @Vyas's answer.
I find it strange that cuda-memcheck cannot observe a CUDA memory leak.
I wrote a very simple program with a CUDA memory leak, but when run under cuda-memcheck --leak-check full it reports no leak. Here it is:
#include <iostream>
#include <cstdlib>
#include <cuda_runtime.h>

using namespace std;

int main() {
    float *cpu_data;
    float *gpu_data;
    int buf_size = 10 * sizeof(float);

    cpu_data = (float *) malloc(buf_size);
    for (int i = 0; i < 10; i++) {
        cpu_data[i] = 1.0f * i;
    }

    cudaError_t cudaStatus = cudaMalloc(&gpu_data, buf_size);
    cudaMemcpy(gpu_data, cpu_data, buf_size, cudaMemcpyHostToDevice);

    free(cpu_data);
    //cudaFree(gpu_data);
    return 0;
}
Note the commented-out line of code, which I think makes this program leak CUDA memory. However, when executing cuda-memcheck ./a.out it gives:
========= CUDA-MEMCHECK
========= ERROR SUMMARY: 0 errors
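One detail that may explain this (not mentioned in the original post): the CUDA-MEMCHECK documentation requires the application to call cudaDeviceReset() before exiting for leak checking to work, since allocations are only reported as leaked when the CUDA context is torn down explicitly. A sketch of the end of the program above with that call added (my modification, not part of the original code):
    free(cpu_data);
    //cudaFree(gpu_data);   // still intentionally leaked
    cudaDeviceReset();      // required so cuda-memcheck --leak-check full can report the leak
    return 0;
}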
I've been chasing a memory leak (reported by 'valgrind --leak-check=yes') and it appears to be coming from ALSA. This code has been in the free world for some time so I'm guessing that it's something I'm doing wrong.
#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>

int main(int argc, char *argv[])
{
    snd_ctl_t *handle;
    int err = snd_ctl_open(&handle, "hw:1", 0);
    printf("snd_ctl_open: %d\n", err);
    err = snd_ctl_close(handle);
    printf("snd_ctl_close: %d\n", err);
}
The output looks like this:
[root@aeolus alsa]# valgrind --leak-check=yes ./test2
==16296== Memcheck, a memory error detector
==16296== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==16296== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==16296== Command: ./test2
==16296==
snd_ctl_open: 0
snd_ctl_close: 0
==16296==
==16296== HEAP SUMMARY:
==16296== in use at exit: 22,912 bytes in 1,222 blocks
==16296== total heap usage: 1,507 allocs, 285 frees, 26,236 bytes allocated
==16296==
==16296== 4 bytes in 2 blocks are possibly lost in loss record 1 of 62
==16296== at 0x4007100: malloc (vg_replace_malloc.c:270)
==16296== by 0x340F7F: strdup (in /lib/libc-2.5.so)
==16296== by 0x624C6B5: ??? (in /lib/libasound.so.2.0.0)
==16296== by 0x624CA5B: ??? (in /lib/libasound.so.2.0.0)
==16296== by 0x624CD81: ??? (in /lib/libasound.so.2.0.0)
==16296== by 0x624F311: snd_config_update_r (in /lib/libasound.so.2.0.0)
==16296== by 0x624FAD7: snd_config_update (in /lib/libasound.so.2.0.0)
==16296== by 0x625DA22: snd_ctl_open (in /lib/libasound.so.2.0.0)
==16296== by 0x804852F: main (test2.cpp:9)
and continues for some pages to
==16296== 2,052 bytes in 57 blocks are possibly lost in loss record 62 of 62
==16296== at 0x4005EB4: calloc (vg_replace_malloc.c:593)
==16296== by 0x624A268: ??? (in /lib/libasound.so.2.0.0)
==16296== by 0x624A38F: ??? (in /lib/libasound.so.2.0.0)
==16296== by 0x624CA33: ??? (in /lib/libasound.so.2.0.0)
==16296== by 0x624CCC9: ??? (in /lib/libasound.so.2.0.0)
==16296== by 0x624CD81: ??? (in /lib/libasound.so.2.0.0)
==16296== by 0x624F311: snd_config_update_r (in /lib/libasound.so.2.0.0)
==16296== by 0x624FAD7: snd_config_update (in /lib/libasound.so.2.0.0)
==16296== by 0x625DA22: snd_ctl_open (in /lib/libasound.so.2.0.0)
==16296== by 0x804852F: main (test2.cpp:9)
==16296==
==16296== LEAK SUMMARY:
==16296== definitely lost: 0 bytes in 0 blocks
==16296== indirectly lost: 0 bytes in 0 blocks
==16296== possibly lost: 22,748 bytes in 1,216 blocks
==16296== still reachable: 164 bytes in 6 blocks
==16296== suppressed: 0 bytes in 0 blocks
==16296== Reachable blocks (those to which a pointer was found) are not shown.
==16296== To see them, rerun with: --leak-check=full --show-reachable=yes
==16296==
==16296== For counts of detected and suppressed errors, rerun with: -v
==16296== ERROR SUMMARY: 56 errors from 56 contexts (suppressed: 19 from 8)
This came about as I'm using ALSA in a project and started seeing this huge leak...or at least the report of said leak.
So the question is: is it me, ALSA or valgrind that's having issues here?
http://git.alsa-project.org/?p=alsa-lib.git;a=blob;f=MEMORY-LEAK;hb=HEAD says:
Memory leaks - really?
----------------------
Note that some developers are thinking that the ALSA library has some memory
leaks. Sure, it can be truth, but before contacting us, please, be sure that
these leaks are not forced.
The biggest reported leak is that the global configuration is cached for
next usage. If you do not want this feature, simply, call
snd_config_update_free_global() after all snd_*_open*() calls. This function
will free the cache.
"The biggest reported leak is that the global configuration is cached for next usage. If you do not want this feature, simply call snd_config_update_free_global() after all snd_*_open*() calls. This function will free the cache."
Even so, Valgrind still detects leaks. This can be fixed by calling snd_config_update_free_global() after snd_pcm_close(handle).
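Applied to the test program from the question above (which uses snd_ctl_* rather than snd_pcm_*), that would look roughly like this sketch, with the cache freed after the close call:
#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>

int main(int argc, char *argv[])
{
    snd_ctl_t *handle;
    int err = snd_ctl_open(&handle, "hw:1", 0);
    printf("snd_ctl_open: %d\n", err);
    err = snd_ctl_close(handle);
    printf("snd_ctl_close: %d\n", err);

    /* Drop the cached global configuration so Valgrind no longer
       reports it as possibly lost / still reachable. */
    snd_config_update_free_global();
    return 0;
}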
Perhaps this will work (source):
diff --git a/src/pcm/pcm.c b/src/pcm/pcm.c
--- a/src/pcm/pcm.c
+++ b/src/pcm/pcm.c
@@ -2171,7 +2171,12 @@ static int snd_pcm_open_conf(snd_pcm_t **pcmp, const char *name,
 	if (open_func) {
 		err = open_func(pcmp, name, pcm_root, pcm_conf, stream, mode);
 		if (err >= 0) {
-			(*pcmp)->open_func = open_func;
+			if ((*pcmp)->open_func) {
+				/* only init plugin (like empty, asym) */
+				snd_dlobj_cache_put(open_func);
+			} else {
+				(*pcmp)->open_func = open_func;
+			}
 			err = 0;
 		} else {
 			snd_dlobj_cache_put(open_func);
I tried it myself, but to no avail. My core temperature rises by ~10 °F, most likely due to a similar memory leak. Here's some of what valgrind gave me, even after using the patch above:
==869== 16,272 bytes in 226 blocks are possibly lost in loss record 103 of 103
==869== at 0x4C28E48: calloc (vg_replace_malloc.c:566)
==869== by 0x5066E61: _snd_config_make (in /usr/lib64/libasound.so.2)
==869== by 0x5066F58: _snd_config_make_add (in /usr/lib64/libasound.so.2)
==869== by 0x50673B9: parse_value (in /usr/lib64/libasound.so.2)
==869== by 0x50675DE: parse_array_def (in /usr/lib64/libasound.so.2)
==869== by 0x5067680: parse_array_defs (in /usr/lib64/libasound.so.2)
==869== by 0x5067A8E: parse_def (in /usr/lib64/libasound.so.2)
==869== by 0x5067BC7: parse_defs (in /usr/lib64/libasound.so.2)
==869== by 0x5067A6F: parse_def (in /usr/lib64/libasound.so.2)
==869== by 0x5067BC7: parse_defs (in /usr/lib64/libasound.so.2)
==869== by 0x5067A6F: parse_def (in /usr/lib64/libasound.so.2)
==869== by 0x5067BC7: parse_defs (in /usr/lib64/libasound.so.2)
The number of bytes lost just keeps going up and up.