Memory leaks in MSVC 2017 - visual-c++

My dev environment is MSVC 2017. I also have VLD installed and linked into my project.
Right now, on one of the paths I run, MSVC reports memory leaks, but VLD says there are no memory leaks and exits successfully.
Some leaks were reported by VLD and I fixed them, but there are some that are reported by MSVC only.
Does anyone have any suggestions? MSVC does not provide the backtrace to the allocation like VLD does, and as far as I know VLD is the only such external tool available for MSVC.
I did some searching and there is an article on MSDN about memory leaks, but:

1. I'm afraid it will interfere with VLD.
2. I didn't have any success with it (maybe because of 1).

I don't want to get rid of VLD, as it is still a great tool for identifying leaks, just not in these cases.
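For reference, the approach in that MSDN article boils down to enabling the CRT debug heap. A minimal sketch (assuming a plain console project; how this interacts with VLD's own hooks is exactly the open question above) looks like this:

    // Minimal sketch of the CRT debug-heap technique from the MSDN article.
    #define _CRTDBG_MAP_ALLOC   // map malloc/free to their _dbg variants with file/line info
    #include <stdlib.h>
    #include <crtdbg.h>

    int main()
    {
        // Ask the CRT to dump any blocks still allocated when the process exits.
        _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

        char *p = (char *)malloc(16);   // deliberately leaked for demonstration
        (void)p;
        return 0;                       // leak report appears in the debugger Output window
    }

Note that VLD installs its own allocation hooks, so whether the CRT report and the VLD report agree when both are active is part of what is unclear here.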

Related

Valgrind shows memory leak but no memory allocation took place

This is a rather simple question.
At my school we use a remote CentOS server to compile and test our programs. For some reason, Valgrind always shows a 4096-byte leak even though no malloc was used. Does anyone here have any idea where this problem might stem from?
Your program makes a call to printf. The C library might allocate memory for its own use. More generally, depending on the OS/libc/..., various allocations might be done just to start a program.
Note also that in this case you see that there is one block still allocated at exit, and that this block is part of the suppressed count. That means the default Valgrind suppression file already ensures that this memory does not appear in the list of leaks to be examined.
In summary: no problem.
In any case, when you suspect you have a leak, you can look at the details of the leaks, e.g. their allocation stack traces, to see whether they are triggered by your application.
In addition to #phd's answer, there are a few things you can do to see more clearly what is going on.
If you run Valgrind with -s or -v it will show details of the suppressions used.
You can use --trace-malloc=yes to see all calls to allocation functions (only do that for small applications). Similarly you can run with --default-suppressions=no, and then you will see the details of the memory (with --leak-check=full --show-reachable=yes in this case).
Finally, are you using an old CentOS / GNU libc? A few years ago Valgrind got a mechanism to clean up things like I/O buffers, so you shouldn't get this sort of message with a recent Valgrind and a recent Linux + libc.
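For illustration, a program as small as this sketch (mine, not the original poster's code) can already show a still-reachable block under Valgrind, because libc may allocate its stdio buffer on the first printf:

    /* Minimal repro sketch: no malloc in user code, yet Valgrind may report
       one block still allocated at exit -- libc's stdio buffer for stdout.
       Run with e.g. valgrind --leak-check=full --show-reachable=yes ./a.out */
    #include <stdio.h>

    int main(void)
    {
        printf("hello\n");   /* first call may allocate a ~4096-byte buffer inside libc */
        return 0;            /* the buffer is still reachable at exit; not a real leak */
    }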

newlib and C++ causing hard faults

I am relatively new to newlib, so my questions may sound stupid...
I have an ARM Cortex-M4 project done in C and added a C++ library to the project. I changed the linker executable from arm-none-eabi-gcc to arm-none-eabi-g++. In doing so I noticed that I started getting hard faults on the processor.
I found that __libc_init_array() was allocating around 2500 bytes from the heap, which I had limited to 4 KB in the linker script, and soon after that my heap ran out of space. My sbrk() does return -1 when the heap is exceeded; however, during a call to sprintf() using floating point I noticed that dtoa() appears to try to allocate 4096 bytes from the heap, which fails, and shortly after that I get the hard fault in d2b(). Note that the hard fault appears to come from trying to read/write address -1, which might be a bug in dtoa() not checking malloc's return value.
If I change my linker back to gcc it all works, and I don't get the large heap allocations.
I was wondering whether this is normal or a bug; if it is normal, could someone explain what is happening with the large heap allocations?
Note I am using GNU Tools for ARM version 5.2 2015q4, but I am unsure how to check the newlib version.
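For reference, my _sbrk is essentially the usual bare-metal sketch below (symbol names are illustrative, not my exact linker script):

    /* Typical bare-metal _sbrk for newlib on Cortex-M (illustrative names).
       'end' is provided by the linker script; the heap is capped at 4 KB as
       described above, and -1 is returned when an allocation would exceed it. */
    #include <errno.h>
    #include <stddef.h>

    extern char end;                         /* start of heap, set by the linker script */
    #define HEAP_SIZE 4096                   /* the 4 KB limit mentioned above */

    void *_sbrk(ptrdiff_t incr)
    {
        static char *heap_ptr = &end;

        if (heap_ptr + incr > &end + HEAP_SIZE) {
            errno = ENOMEM;
            return (void *)-1;               /* newlib's malloc should then return NULL */
        }
        char *prev = heap_ptr;
        heap_ptr += incr;
        return prev;
    }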

Avoiding memory leaks using malloc() in the Code::Blocks IDE

I use Code::Blocks to write my code in C. As far as I know, it combines a text editor, compiler and debugger.
My concern is whether using the malloc command without using the free function will lead to memory leaks or whether Code::Blocks will clean up by itself after each time I run my program from Code::Blocks?
Well, Code::Blocks is just an IDE, which means you can edit, compile, debug and run your code using it. However, the software itself (I mean Code::Blocks) cannot interfere with or have any impact on the program you write at runtime.
After you "build and run" your code, the operating system gives your program resources (memory, CPU, etc.), but the OS cannot "rewrite" your program either.
To avoid memory leaks, you should remember to free the memory after you call the allocator (calloc or malloc) and are done using it.
To learn more about memory usage in C, you can read Chapter 9, Virtual Memory, of CSAPP.
You are right: Code::Blocks is an integrated development environment, but it is not a C++ runtime. It only integrates with the compiler, and has no control over the execution of your code.
Whenever you call malloc you must call free. The platform executing your code will reclaim the leaked memory after your program terminates, but cleaning up after you is not the responsibility of Code::Blocks, or even of the operating system.
Never call malloc without calling free.
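A minimal sketch of the pairing described above (mine, and entirely independent of Code::Blocks):

    /* The IDE plays no part in any of this: the allocation comes from the C
       runtime / operating system, and only free() returns it while running. */
    #include <stdlib.h>

    int main(void)
    {
        int *values = malloc(100 * sizeof *values);
        if (values == NULL)
            return 1;          /* allocation can fail; always check */

        /* ... use the memory ... */

        free(values);          /* every successful malloc needs a matching free */
        return 0;
    }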

Which methods can be used to detect memory overwrites in the Linux kernel?

If there is memory-overwriting code, such as a buffer overflow, in the Linux kernel or a driver, it is very hard to debug and find the root cause.
I know I can enable SLAB debugging to get some information. If something is written to slab memory after it is freed, we will see warnings. But this method has limitations and sometimes we still can't get useful clues.
Are there any other kernel debugging methods to detect memory overwrites?
Take a look at the Kmemcheck tool. You may enable it in your kernel configuration and rebuild the kernel.
Kmemcheck may slow the system down significantly, but it can detect incorrect memory accesses that would be very hard to find otherwise.
For kernel 4.1 or newer on the x86_64 architecture, Kernel Address Sanitizer (KASan) may also be an option. It should be much faster than Kmemcheck.
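As a hypothetical illustration (not from the question), these are the kinds of accesses that SLAB debugging, Kmemcheck, or KASan are designed to catch:

    /* Hypothetical driver snippet; both accesses below are bugs that KASan
       (or slab poisoning) would flag, but that often go unnoticed otherwise. */
    #include <linux/slab.h>

    static void demo_bad_accesses(void)
    {
        char *buf = kmalloc(16, GFP_KERNEL);
        if (!buf)
            return;

        buf[16] = 0;    /* out-of-bounds write: reported as slab-out-of-bounds */

        kfree(buf);
        buf[0] = 0;     /* write after free: reported as use-after-free */
    }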

clGetPlatformIDs Memory Leak

I'm testing my code on Ubuntu 12.04 with NVIDIA hardware.
No actual OpenCL processing takes place, but my initialization code still runs. This code calls clGetPlatformIDs. However, Valgrind reports a memory leak:
==2718== 8 bytes in 1 blocks are definitely lost in loss record 4 of 74
==2718== at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2718== by 0x509ECB6: ??? (in /usr/lib/nvidia-current/libOpenCL.so.1.0.0)
==2718== by 0x50A04E1: ??? (in /usr/lib/nvidia-current/libOpenCL.so.1.0.0)
==2718== by 0x509FE9F: clGetPlatformIDs (in /usr/lib/nvidia-current/libOpenCL.so.1.0.0)
I was unaware this was even possible. Can this be fixed? Note that no special deinitialization currently takes place. Do I need to call something after this? The docs don't mention anything about having to deallocate anything.
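For context, the initialization boils down to something like this sketch (simplified from my code, not the exact source):

    /* Simplified sketch of the initialization path: a single platform query,
       with no buffers allocated on our side. Valgrind still reports the
       8-byte block shown above from inside libOpenCL. */
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_uint num_platforms = 0;
        cl_int err = clGetPlatformIDs(0, NULL, &num_platforms);
        if (err != CL_SUCCESS) {
            fprintf(stderr, "clGetPlatformIDs failed: %d\n", (int)err);
            return 1;
        }
        printf("%u platform(s) found\n", (unsigned)num_platforms);
        return 0;
    }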
regarding: "Check this out: devgurus.amd.com/thread/136242. valgrind cannot deal with custom memory allocators by design, which OpenCL is likely using"
to quote from the link given: "The behaviour not to free pools at the exit could be called a bug of the library though."
If you want to create a pool of memory and allocate from that, go ahead; but you should still deallocate it properly. A memory pool as a whole is no less complex than a regular allocation, and it deserves at least the same attention, if not more, as regular references. Also, an 8-byte structure is highly unlikely to be a memory pool.
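To make the point concrete, even a hypothetical pool of this kind (names and layout are mine, nothing to do with libOpenCL) has an obvious place to release everything:

    /* Toy pool allocator: individual allocations are never freed one by one,
       but releasing the pool itself at shutdown is what keeps tools like
       Valgrind from reporting the whole thing as lost. */
    #include <stdlib.h>
    #include <stddef.h>

    typedef struct {
        char  *base;
        size_t size;
        size_t used;
    } pool_t;

    int pool_init(pool_t *p, size_t size)
    {
        p->base = malloc(size);
        p->size = size;
        p->used = 0;
        return p->base != NULL;
    }

    void *pool_alloc(pool_t *p, size_t n)
    {
        if (p->used + n > p->size)
            return NULL;            /* pool exhausted */
        void *out = p->base + p->used;
        p->used += n;
        return out;
    }

    void pool_destroy(pool_t *p)
    {
        free(p->base);              /* the one call that makes the "leak" go away */
        p->base = NULL;
    }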
Tim Child would have a point about how you use clGetPlatformIDs if it were designed to return allocated memory. However, reading http://www.khronos.org/registry/cl/sdk/1.0/docs/man/xhtml/clGetPlatformIDs.html, I am not sufficiently convinced this is the case.
The leak in question may or may not be serious, and may or may not accumulate over successive calls, but you might be left only with the option of reporting the bug to NVIDIA in the hope that they fix it, or of finding a different OpenCL implementation for development. Still, there might be reasons for an OpenCL library to create references to data which, from Valgrind's point of view, are no longer in use.
Sadly, this still leaves us with a memory leak caused by an external factor we cannot control, and it still leaves us with excess Valgrind output.
Say you are sufficiently sure you are not responsible for this leak (say, we know for a fact that an NVIDIA engineer allocated a random value in libOpenCL.so which he didn't deallocate just to spite you). Valgrind has a flag, --gen-suppressions=yes, which generates suppression entries for particular warnings; you can feed these back to Valgrind using --suppressions=$filename. Read the Valgrind manual for more details about how this works.
Be very wary of using suppressions, though. Obviously, suppressing errors does not fix them, and liberal use of the mechanism will lead to situations where you suppress errors made by your own code rather than by NVIDIA or Valgrind. Do not suppress warnings when you are not absolutely sure where they come from, and regularly re-examine your suppressions.

Resources