Memory leaks in code when using Embarcadero 10.3.1 [closed] - memory-leaks

My C++ code is written in Embarcadero 10.3.1. I am facing a lot of memory leaks and resource leaks, and I am unable to identify where they come from.
When I use CodeGuard, the application freezes, so I'm unable to draw any conclusions.
My application is a background job which continuously processes files and generates labels. It works fine for a couple of hours and generates around 3000 labels, then goes into a hung/non-responsive state.
Can anyone suggest a solution?

Memory leaks can be difficult to track down. In your case I suspect that you are using a label printer with its own library or driver, and the leaks could be anywhere.
First, try to understand which memory management models exist in the application. With C++Builder code you are generally responsible for allocating and freeing memory yourself, so every object you create with new should have a corresponding delete - make sure you understand which part of the code is responsible for freeing each object. (C++Builder 10.3.1 does support standard smart pointers such as std::unique_ptr, but you may not be using them, and you can't guarantee that any library code you have linked in will honour their ownership semantics.)
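To make the ownership point concrete, here is a minimal sketch (not taken from the question's code; the TLabelJob type is invented for the example) contrasting a manual new/delete pair with a std::unique_ptr that frees the object on every exit path:

```cpp
#include <memory>
#include <string>

// Invented stand-in for whatever the real application allocates per label.
struct TLabelJob {
    std::string fileName;
    explicit TLabelJob(std::string f) : fileName(std::move(f)) {}
};

// Manual ownership: every 'new' needs exactly one matching 'delete', on every
// return path and exception path - this is where leaks usually creep in.
void processManually(const std::string& file)
{
    TLabelJob* job = new TLabelJob(file);
    // ... generate the label ...
    delete job;                   // skipped (leaked) if an exception is thrown above
}

// RAII ownership: the unique_ptr destroys the object automatically,
// including when an exception escapes the function.
void processWithRaii(const std::string& file)
{
    auto job = std::make_unique<TLabelJob>(file);
    // ... generate the label ...
}   // job is freed here on every path
```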
If you are passing information into code that uses another memory management model (a COM object is a good example), then make sure you understand the implications for memory management. If you pass it a pointer, is it expecting to free it, or is it expecting you to free it - and if it is you, how do you know when it has finished with it?
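For instance, the documented COM convention is that out-parameters allocated by the callee are freed by the caller: strings with CoTaskMemFree, interface pointers with Release. A small sketch of that convention (IFileDialog/IShellItem are used here only because their ownership rules are well documented; they are not from the question):

```cpp
#include <windows.h>
#include <shobjidl.h>   // IFileDialog, IShellItem

// COM out-parameters allocated by the callee are freed by the caller:
// strings with CoTaskMemFree, interface pointers with Release.
void printChosenPath(IFileDialog* dialog)
{
    IShellItem* item = nullptr;
    if (SUCCEEDED(dialog->GetResult(&item)) && item)
    {
        PWSTR path = nullptr;
        if (SUCCEEDED(item->GetDisplayName(SIGDN_FILESYSPATH, &path)) && path)
        {
            // ... use path ...
            CoTaskMemFree(path);   // COM allocated the string; we must free it
        }
        item->Release();           // drop the reference GetResult gave us
    }
}
```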
Try a smaller run and see whether CodeGuard can get all the way through it and report anything useful.
If your system is in production you will want to keep it running. One option would be to run it as a Windows Scheduled Task: it processes a set number of files and then exits. The OS will free the resources the process had in use (but not any that are being leaked at the system level, perhaps by a buggy driver). That may allow you to keep the system running all day while you continue to hunt for the leaks.
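A minimal sketch of that batch-and-exit structure (processNextFile and the batch size are placeholders, not from the question):

```cpp
#include <cstdlib>

// Placeholder for the real per-file work; returns false when no work is left.
static bool processNextFile()
{
    // ... pick up one input file, generate its label ...
    return false;   // stub so the sketch compiles on its own
}

int main()
{
    // Process a bounded batch, then exit so the OS reclaims everything the
    // process held - including whatever it leaked.
    const int kMaxFilesPerRun = 500;     // assumed batch size
    for (int i = 0; i < kMaxFilesPerRun; ++i)
    {
        if (!processNextFile())
            break;
    }
    return EXIT_SUCCESS;                 // the Scheduled Task starts a fresh process later
}
```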
Good Luck!

Related

Google Cloud Profiling: how to optimize my code by analyzing it?

I am trying to analyse Google Cloud Stackdriver's Profiling. Can anyone please tell me how I can optimize my code using it?
Also, I cannot see any of my function names, and I don't know what this _tickCallback is or which part of the code it is executing.
Please help.
When looking at Node.js heap profiles, I find that it's really helpful to know whether the code is busy and allocating memory (i.e. is the web server under load?).
This is because the heap profiler takes a snapshot of everything that is in the heap at the time of profile collection, including allocations which are no longer in use but have not yet been garbage collected.
Garbage collection doesn't happen very often if the code isn't allocating memory. So, when the code isn't very busy, the heap profile will show a lot of memory allocations which are in the heap but no longer really in use.
Looking at your flame graph, I would guess that your code isn't very busy (or isn't allocating very much memory), so memory allocations from the profiler itself dominate the heap profiles. If your code is a web server and you profiled it when it wasn't under load, it may help to generate some load for it while profiling.
To answer the secondary question: _tickCallback is a Node.js internal function used for running scheduled callbacks, for example anything set up with setTimeout. Anything scheduled on a timer will have _tickCallback at the bottom of its stack.
Elsewhere in the picture, you can see some green and cyan 'end' functions on the stack. I suspect those are the places where you are calling response.end in your Express request handlers.

Multithreading behaviour change when linking a static library to a program

I have been developing an efficient sparse matrix solver that uses multithreading (C++11 std::thread) for the past year. In a stand-alone test my code works perfectly and all expectations were exceeded. However, when linking the code (as a static library) into the software I am developing for, the performance is far worse, and from the CPU load in Task Manager it looks like all threads are running on the same core, which was not the case during the stand-alone testing.
Does system loading have anything to do with this?
I don't have access to the software's source code.
Does anyone have any advice or an explanation?
Have you considered the trade-off between a context switch and the actual workload of each thread? This problem can occur when the context switching costs more CPU than the work each thread actually performs. Try increasing the work each thread does and see if the problem goes away.
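As a rough illustration of that advice (this is not the asker's solver), here is a sketch that splits the same total amount of work either into a handful of large tasks or into thousands of tiny ones, so the scheduling and thread-creation overhead becomes visible:

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

// A deliberately simple unit of work.
static double busyWork(std::size_t iterations)
{
    double local = 0.0;
    for (std::size_t i = 0; i < iterations; ++i)
        local += static_cast<double>(i) * 1e-9;
    return local;
}

// Split 'totalWork' iterations across 'numTasks' threads, launched in waves of
// at most 'width' at a time.  Many tiny tasks spend most of their time on
// thread creation and context switches; a few large tasks amortise that cost.
static double timeSplit(std::size_t totalWork, std::size_t numTasks, unsigned width)
{
    const auto start = std::chrono::steady_clock::now();
    const std::size_t perTask = totalWork / numTasks;
    std::vector<double> results(numTasks, 0.0);     // one slot per task, nothing shared

    std::size_t next = 0;
    while (next < numTasks)
    {
        std::vector<std::thread> wave;
        for (unsigned t = 0; t < width && next < numTasks; ++t, ++next)
            wave.emplace_back([&results, next, perTask] {
                results[next] = busyWork(perTask);
            });
        for (auto& th : wave)
            th.join();
    }
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main()
{
    const unsigned width = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t totalWork = 100000000;
    std::printf("%u big tasks:     %.3f s\n", width, timeSplit(totalWork, width, width));
    std::printf("10000 tiny tasks: %.3f s\n", timeSplit(totalWork, 10000, width));
}
```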

Delphi 5 App crashed with 'EInvalidPointer' when Hyper-Threading enabled, upgrade IDE will work?

I'm designing an application for an asphalt batch mix plant, using a thread to run the mixing process and several timers to read system states and perform control actions.
If the "Hyper-Threading" feature is disabled, the application runs smoothly and everything is OK; otherwise it brings up a dialog complaining that a memory access is invalid and aborts immediately after I click "OK".
I don't know why. Maybe something is wrong with the IDE version, since Delphi 5 was released on 10 August 1999; maybe the thread unit in Delphi 5.0 cannot deal with newer CPU technology?
Maybe the memory manager has bugs, or maybe the threading model is not suited to the new era?
I want to upgrade the IDE, but since so many years have passed I have no idea which would be the best choice:
Delphi 7? Delphi 2007 (which supports OmniThreadLibrary)? RAD Studio XE6/7? I hope someone can help.
The most plausible explanation is that your program has a bug related to threading. You happen to get away with the flaw in your code when hyperthreading is disabled, but enabling it is sufficient to make the error in your code manifest.
Threading bugs are just like this. They will manifest if threads execute specific code in a particular order, with respect to the other threads. And the relative ordering is unpredictable. Which is part and parcel of parallel computation. Code that is broken can appear to be correct when running under one environment, but then fail under another. Whilst it is tempting to blame the tools, always check in the mirror first.
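Purely as an illustration of that class of bug (in C++ rather than Delphi, and not related to the asker's code), here is an unsynchronised shared counter: it often "works" when the threads barely overlap, and visibly loses updates once they genuinely run in parallel:

```cpp
#include <iostream>
#include <thread>

// Two threads increment the same counter without any synchronisation.
// '++count' is a read-modify-write, so increments can be lost; whether you
// ever see it go wrong depends entirely on how the threads are scheduled.
int main()
{
    long long count = 0;                   // shared, unprotected
    auto work = [&count] {
        for (int i = 0; i < 1000000; ++i)
            ++count;                       // data race: undefined behaviour
    };

    std::thread a(work), b(work);
    a.join();
    b.join();

    std::cout << "expected 2000000, got " << count << '\n';
    return 0;
}
```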
Changing development environment is not the solution. What you need to do is to find and then fix the error in your code. Getting a good stack trace will help, and I can recommend a tool like madExcept for that.

Tips for debugging hard-to-reproduce concurrency bugs?

What are some tips for debugging hard to reproduce concurrency bugs that only happen, say, once every thousand runs of a test? I have one of these and I have no idea how to go about debugging it. I can't put print statements or debugger watches all over the place to observe internal state, because that would change timings and produce overwhelming amounts of information when the bug is not successfully reproduced.
Here is my technique: I generally use a lot of assert() calls to check data consistency/validity as often as possible. When an assert fails, the program crashes and generates a core file. Then I use a debugger with the core file to understand what thread configuration led to the data corruption.
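A minimal sketch of that style of checking, with an invented invariant for a small ring buffer (none of this is from the question):

```cpp
#include <cassert>
#include <cstddef>

// Invented example structure: a fixed-capacity ring buffer.
struct RingBuffer {
    static constexpr std::size_t kCapacity = 64;
    std::size_t head = 0;
    std::size_t tail = 0;
    std::size_t size = 0;
    int         data[kCapacity] = {};

    // Check internal consistency at every entry point.  If another thread has
    // corrupted the state, the assert fires and the core dump shows which
    // threads were where at that moment.
    void checkInvariants() const
    {
        assert(size <= kCapacity);
        assert(head < kCapacity);
        assert(tail < kCapacity);
        assert((head + size) % kCapacity == tail);
    }

    void push(int value)
    {
        checkInvariants();
        assert(size < kCapacity);          // caller must not overflow the buffer
        data[tail] = value;
        tail = (tail + 1) % kCapacity;
        ++size;
        checkInvariants();
    }
};
```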
This might not help you but will probably help someone seeing this question in the future.
If you're using a .Net language you can use the CHESS project from Microsoft research. It runs unit tests with every kind of thread interleaving and shows you which ones cause the bug to happen.
There may be a similar tool for the language you're using.
It highly depends on the nature of the problem. Commonly useful are bisection (to narrow down the search space) + code "instrumentation" with assertions for accessing thread IDs, lock/unlock counts, locking order, etc. in the hope that when the problem will reproduce next time the application will either log a verbose message or will core-dump giving you the solution.
One method for finding data corruption caused by a concurrency bug (a sketch in code follows below):
Add an atomic counter for the data or buffer in question.
Leave all the existing synchronizing code as is - don't modify it, on the assumption that you will fix the bug in the existing code and remove the new atomic counter once the bug is fixed.
When starting to modify the data, increment the atomic counter; when finished, decrement it.
Core-dump as soon as you find that the counter is greater than one (using something similar to InterlockedIncrement, whose return value gives you the new count).
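A minimal C++ sketch of that detector, using std::atomic instead of the Win32 InterlockedIncrement mentioned above (the buffer and its existing lock are stand-ins, not from the question):

```cpp
#include <atomic>
#include <cstdlib>
#include <mutex>
#include <vector>

// Stand-ins for the data suspected of being corrupted and its existing lock.
static std::vector<int> sharedBuffer;
static std::mutex bufferLock;                 // the existing (possibly buggy) synchronization

// Temporary diagnostic: how many threads are inside the modify section right now.
static std::atomic<int> writersInside{0};

void modifyBuffer(int value)
{
    std::lock_guard<std::mutex> guard(bufferLock);   // existing code, left untouched

    // If more than one thread ever gets in here at once, the synchronization
    // is broken: abort immediately so the core dump captures every thread.
    if (writersInside.fetch_add(1) + 1 > 1)          // fetch_add returns the old value
        std::abort();

    sharedBuffer.push_back(value);                   // ... the real modification ...

    writersInside.fetch_sub(1);
}
```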
Targeted unit test code is time-consuming but effective, in my experience.
Narrow down the failing code as much as you can. Write test code that's specific to the apparent culprit code and run it in a debugger for as long as it takes to reproduce the problem.
One of the strategies I use to simulate different interleavings of the threads is introducing spin waits. The caveat is that you should not use the standard spin-wait mechanisms for your platform, because they will likely introduce memory barriers. If the issue you are trying to troubleshoot is caused by a missing memory barrier (because it is difficult to get the barriers correct when using lock-free strategies), then the standard spin-wait mechanisms will just mask the problem. Instead, place an empty loop at the points where you want your code to stall for a moment. This can increase the probability of reproducing a concurrency bug, but it is not a magic bullet.
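A hedged sketch of that empty-loop stall (the function name and the call sites in the usage comment are made up; the volatile counter exists only to stop the compiler from deleting the loop, and it introduces no memory barrier, which is the point):

```cpp
// Deliberately stall the current thread without touching any synchronization
// primitive, so no extra memory barrier masks the bug you are hunting.
inline void debugStall(long iterations)
{
    volatile long spin = 0;            // volatile: the loop must not be optimized away
    for (long i = 0; i < iterations; ++i)
        ++spin;
}

// Hypothetical usage: sprinkle calls at suspicious points to widen race windows.
//
//   writeSharedStateWithoutBarrier();  // made-up function names
//   debugStall(100000);                // give other threads time to see a torn state
//   publishFlag = true;
```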
If the bug is a deadlock, simply attaching a debugging tool (like gdb or strace) to the program after the deadlock happens, and observing where each thread is stuck, can often get you enough information to track down the source of the error quickly.
A little chart I've made with some debugging techniques to keep in mind when debugging multithreaded code. The chart is growing; please leave comments and tips to be added. http://adec.altervista.org/blog/multithreading-debugging-chart/

Runtime integrity check of executed files

I just finished writing a Linux security module which verifies the integrity of executable files at the start of their execution (using digital signatures). Now I want to dig a little bit deeper and check the files' integrity during run-time (i.e. periodically check them, since I am mostly dealing with processes that get started and run forever) so that an attacker is not able to change the file within main memory without being identified (at least after some time).
The problem here is that I have absolutely no clue how I can check the file's current memory image. My authentication method mentioned above makes use of a mmap-hook which gets called whenever a file is mmaped before its execution, but as far as I know the LSM framework does not provide tools for periodical checks.
So my question: are there any hints on how I should start this? How can I read a memory image and check its integrity?
Thank you
I understand what you're trying to do, but I'm really worried that this may be a security feature that gives you a warm fuzzy feeling for no good reason; and those are the most dangerous kinds of security features to have. (Another example of this might be the LSM sitting right next to yours, SELinux. Although I think I'm in the minority on this opinion...)
The program data of a process is not the only thing that affects its behavior. Stack overflows, where malicious code is written into the stack and jumped into, make integrity checking of the original program text moot. Not to mention the fact that an attacker can use the original unchanged program text to his advantage.
Also, there are probably some performance issues you'll run into if you are constantly computing DSA inside the kernel. And you're adding that much more to the long list of privileged kernel code that could possibly be exploited later on.
In any case, to address the question: You can possibly write a kernel module that instantiates a kernel thread that, on a timer, hops through each process and checks its integrity. This can be done by using the page tables for each process, mapping in the read only pages, and integrity checking them. This may not work, though, as each memory page probably needs to have its own signature, unless you concatenate them all together somehow.
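As a rough, user-space-only illustration of that page-hashing idea (not the kernel-thread design the answer describes), here is a sketch that parses /proc/self/maps and hashes its own read-only executable mappings with FNV-1a; in the real design the digest would of course be a signature verification, and the sweep would live in the kernel:

```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <cstring>

// FNV-1a, used here only as a placeholder for a real digest/signature check.
static std::uint64_t fnv1a(const unsigned char* data, std::size_t len)
{
    std::uint64_t h = 1469598103934665603ULL;
    for (std::size_t i = 0; i < len; ++i) {
        h ^= data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

// Walk /proc/self/maps and hash every readable, executable, non-writable
// mapping (perms "r-x.").  A periodic sweep would compare these digests
// against values recorded when the file was first mapped.
int main()
{
    std::FILE* maps = std::fopen("/proc/self/maps", "r");
    if (!maps)
        return 1;

    char line[512];
    while (std::fgets(line, sizeof line, maps)) {
        if (std::strstr(line, "[vsyscall]"))
            continue;                          // special mapping; reads may fault
        std::uintptr_t start = 0, end = 0;
        char perms[8] = {0};
        if (std::sscanf(line, "%" SCNxPTR "-%" SCNxPTR " %7s", &start, &end, perms) != 3)
            continue;
        if (std::strncmp(perms, "r-x", 3) != 0)
            continue;                          // only executable, non-writable pages

        const unsigned char* base = reinterpret_cast<const unsigned char*>(start);
        const std::uint64_t digest = fnv1a(base, end - start);
        std::printf("%#" PRIxPTR "-%#" PRIxPTR "  %s  digest=%016" PRIx64 "\n",
                    start, end, perms, digest);
    }
    std::fclose(maps);
    return 0;
}
```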
A good thing to note is that shared libraries only need to be integrity-checked once per sweep, since they are mapped into all the processes that use them. It takes some sophistication to implement this, though, so maybe put it in the "nice-to-have" section of your design.
If you disagree with my rationale that this may not be a good idea, I'd be very interested in your thoughts. I ran into this idea at work a while ago, and it would be nice to bring fresh ideas to our discussion.
