A .NET (C#) application using ODBC connections to SAP HANA is leaking memory, consuming all available memory until it crashes. The memory profiler shows the leaks in the unmanaged module odbc32. Two SAP HANA drivers were tested (HDBODBC 1.00.120.24 and 1.0.0.120.100); both leak memory.
Calling OdbcConnection.ReleaseObjectPool() (suggested in ODBC leaking memory in c# application) doesn't solve the problem.
How to solve this memory leak?
The solution is to use pooled connections by selecting "Pool Connections to this driver" on the Connection Pooling tab of the ODBC Data Source Administrator. By default, HDBODBC is set to <not pooled>.
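With driver-level pooling enabled, connections handed back by Dispose() are returned to the pool instead of being torn down by the driver on every open/close cycle. A minimal sketch of the usage pattern (the DSN name HANA_DSN and the credentials are placeholders for illustration):

    using System;
    using System.Data.Odbc;

    class HanaQuery
    {
        static void Main()
        {
            // "HANA_DSN" is a hypothetical DSN; replace with your own.
            // With "Pool Connections to this driver" enabled, Dispose()
            // returns the underlying ODBC connection to the pool instead
            // of destroying it on every open/close cycle.
            const string connectionString = "DSN=HANA_DSN;UID=user;PWD=secret";

            using (var connection = new OdbcConnection(connectionString))
            using (var command = new OdbcCommand("SELECT CURRENT_TIMESTAMP FROM DUMMY", connection))
            {
                connection.Open();
                Console.WriteLine(command.ExecuteScalar());
            } // both objects disposed here, even if an exception is thrown
        }
    }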
All our packages are published in CQM mode. CQM queries use the 32-bit version of the BI Bus process, which is limited to 2 GB of memory. This causes some large reports to fail with out-of-memory errors:
CCL-SRV-0513 The BIBusTKServer process ran out of memory
The solution would be to publish in DQM, but that throws another error that requires an upgrade, and we are not ready for an upgrade right now.
So my question is: is there a way to use the Cognos SDK to manipulate the BIBusTKServer? We want to be able to change the maximum memory allowed for CQM queries from 2 GB to some other value.
Usually, if the BIBus runs out of memory while generating output, you would see a CCL OOM error; the error you are getting might instead be related to tuning. If you are on Linux, ensure your ulimits are set correctly.
https://www.ibm.com/support/pages/ccl-srv-0514-bibustkserver-process-ran-out-thread-resource
You may also want to watch the process while the report runs to see how much memory it actually uses.
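If you have nothing better at hand than Task Manager, a small polling loop works too. A rough sketch (the process name BIBusTKServer is an assumption; adjust it to whatever the process is actually called on your system):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class MemoryWatcher
    {
        static void Main()
        {
            // "BIBusTKServer" is assumed; substitute the real process name.
            const string processName = "BIBusTKServer";

            while (true)
            {
                foreach (var p in Process.GetProcessesByName(processName))
                {
                    p.Refresh(); // re-read counters; Process caches them
                    Console.WriteLine("{0:T}  PID {1}  WorkingSet {2:N0} MB  PrivateBytes {3:N0} MB",
                        DateTime.Now, p.Id,
                        p.WorkingSet64 / (1024 * 1024),
                        p.PrivateMemorySize64 / (1024 * 1024));
                }
                Thread.Sleep(5000); // sample every 5 seconds while the report runs
            }
        }
    }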
I have been trying to understand the cause of high memory usage by processes on my Windows server. I installed the DebugDiag 1.2 tool to try to find the problem.
Here is what runs in my server:
I have an IIS server with a sizable number of application pools (68), and each pool hosts at least 4 applications.
Recently, I have had problems with high memory usage, with the server running at 97% memory usage or higher.
It was working fine when I took the screenshot below, but memory usage easily climbs higher than that.
[Task Manager screenshot]
With that being said, I have been trying to understand how to use Microsoft's DebugDiag 1.2 tool to find something (a piece of the source code, a SQL procedure) that might help me locate what is causing the problem.
I have read that you can't limit the memory of an individual IIS application pool, so I guess the solution is to optimize the applications. But first I need to know where to start.
I hope someone can help me out.
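A cheap first step, before digging into DebugDiag dumps, is to identify which worker process is the heavy one, so you know which application pool to investigate. A minimal sketch, assuming the workers run as the usual w3wp.exe; the PID can then be mapped back to its pool with appcmd list wp:

    using System;
    using System.Diagnostics;
    using System.Linq;

    class HeaviestWorker
    {
        static void Main()
        {
            // List all IIS worker processes, largest private memory first.
            // Map a PID back to its application pool with:
            //   %windir%\system32\inetsrv\appcmd list wp
            var workers = Process.GetProcessesByName("w3wp")
                .OrderByDescending(p => p.PrivateMemorySize64);

            foreach (var p in workers)
            {
                Console.WriteLine("PID {0,6}  PrivateBytes {1,8:N0} MB",
                    p.Id, p.PrivateMemorySize64 / (1024 * 1024));
            }
        }
    }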
I found the following post in the Redis Google Group, Verify Redis on Windows memory consumption, in which a Microsoft Open Tech team member states:
In order to implement persistence and simulate the fork() copy-on-write mechanism, the Windows port of Redis places the Redis heap in a memory mapped file that can be shared with child processes. Data is definitely stored in memory but because of the memory-mapped file the working set will be accounted for under "shared working set" instead of "private working set". You can inspect the shared working set of redis-server.exe using Task Manager or Windows Performance Monitor. You should see values that much closer reflect "used_memory_human".
Why am I asking this question? Because I found that the redis-server process takes significantly less memory than what the INFO command reports (for example, INFO shows that Redis is using 148 MB while the shared working set in Task Manager shows 48 MB).
Since the MS Open Tech member says that Redis for Windows uses memory-mapped files, does this mean that Redis on Windows uses less RAM than the Linux version?
For those of you coming to this question in 2016 or later: open Process Hacker (a process explorer) and look at, or add, the Working Set column.
The memory usage shown there reflects what Redis is actually using.
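If you want the numbers programmatically rather than from Task Manager, the private/shared split described in the MS Open Tech post can be read from the standard Process performance counters. A sketch under the assumption that a single redis-server instance is running (with several instances, the counter instance name carries a suffix):

    using System;
    using System.Diagnostics;

    class RedisWorkingSet
    {
        static void Main()
        {
            // Assumes one running redis-server process.
            var redis = Process.GetProcessesByName("redis-server")[0];

            // "Working Set - Private" is a standard Process counter on
            // Vista/2008 and later; total minus private = shared pages,
            // which is where the memory-mapped Redis heap shows up.
            using (var privateWs = new PerformanceCounter(
                "Process", "Working Set - Private", redis.ProcessName))
            {
                long total = redis.WorkingSet64;
                long priv = (long)privateWs.NextValue();
                Console.WriteLine("Total working set:   {0:N0} MB", total / (1024 * 1024));
                Console.WriteLine("Private working set: {0:N0} MB", priv / (1024 * 1024));
                Console.WriteLine("Shared working set:  {0:N0} MB", (total - priv) / (1024 * 1024));
            }
        }
    }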
According to nodetime, my memory leak persists even across Node application restarts. Look at the following "OS - Free Memory" graph; notice how free memory decreases steadily (despite the node app restarting dozens and dozens of times) until I restart the whole server:
How is this possible? Am I fundamentally misunderstanding something? I don't understand how a memory leak in one process could survive and continue to affect the OS...
Machine Info:
Amazon EC2 (m1.large) running CentOS
A memory leak in one process (one that is actually killed) can't do this.
Are you using third-party systems to provide shared state? For example, a database, or something like Redis for sessions? In that case, restarting your node process just reconnects to the same shared state, where whatever leak was started initially continues to grow.
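To make that failure mode concrete: data the app keeps pushing into a shared store outlives every restart of the app itself, so the box keeps losing free memory even though the leaky process died. An illustrative sketch using the StackExchange.Redis client, where the key name sessions:log and the append-per-run behavior are made up:

    using System;
    using StackExchange.Redis;

    class SharedStateGrowth
    {
        static void Main()
        {
            // Reconnecting after a restart lands on exactly the same data.
            var redis = ConnectionMultiplexer.Connect("localhost");
            IDatabase db = redis.GetDatabase();

            // Hypothetical bug: every run appends and nothing ever expires,
            // so the Redis server's memory grows no matter how often the
            // *client* application is restarted.
            db.ListRightPush("sessions:log", DateTime.UtcNow.ToString("o"));

            Console.WriteLine("entries so far: {0}", db.ListLength("sessions:log"));
        }
    }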
I have a web application that eventually runs out of memory when running on IIS 7 on Windows Server 2008. When I try to run a memory profiler against the application to find the leak, the problem is not reproducible on my development workstation (Windows Vista).
The GC collection cycles are not consistent between the server and the workstation, and it appears the server's collections are not reclaiming all of the memory, so the process eventually runs out. The server becomes unresponsive and throws out-of-memory exceptions.
We have tried setting objects that were surviving too many generations to null; some improvement was noticed.
Any assistance/recommendations would be greatly appreciated.
Tess Ferrandez's blog has some great information on debugging memory leaks using Windbg.
By taking a dump of the running application and then analysing it in Windbg, you should be able to find the source of the leaks you are seeing.
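The rough flow once you have a dump of the leaking w3wp.exe process looks like this (WinDbg commands; the .loadby line assumes the .NET 2.0/3.5-era runtime that ships on Server 2008, while on .NET 4 it would be .loadby sos clr):

    .loadby sos mscorwks    $$ load the SOS managed-debugging extension
    !dumpheap -stat         $$ histogram of the managed heap, grouped by type
    !dumpheap -mt <MT>      $$ list every instance of a suspicious type's method table
    !gcroot <address>       $$ show what is keeping a given instance alive

Types whose instance counts keep climbing between two dumps taken some minutes apart are the usual suspects, and !gcroot tells you what is rooting them.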
The following entries are probably a good starting point:
Setup (including links for configuring Windbg)
Memory Leak Lab 1
Memory Leak Lab 2
Good luck!