All our packages are published in CQM mode. CQM queries use the 32-bit BIBus process, which is limited to 2 GB of memory. This causes some large reports to fail with out-of-memory errors:
CCL-SRV-0513 The BIBusTKServer process ran out of memory
The solution would be to publish in DQM, but that throws another error which requires an upgrade, and we are not ready for an upgrade right now.
So my question is: is there a way to use the Cognos SDK to manipulate a BIBusTKServer process? We want to be able to change the maximum memory allowed for CQM queries from 2 GB to some other value.
Usually, if the BIBus runs out of memory while generating output, you would see a CCL out-of-memory error like this; the one you are getting might be related to tuning. If you are on Linux, make sure your ulimits are set correctly:
https://www.ibm.com/support/pages/ccl-srv-0514-bibustkserver-process-ran-out-thread-resource
You may also want to watch the process while running the report to see how much memory it actually uses.
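If you want to do that programmatically on Linux, here is a rough sketch that polls the resident set size (RSS) of the BIBus processes by reading /proc. The process name "BIBusTKServer" appearing in /proc/&lt;pid&gt;/comm is an assumption - adjust it to match your installation:

    // watch-bibus.ts - poll the RSS of matching processes on Linux.
    // Assumption: the process shows up as "BIBusTKServer" in /proc/<pid>/comm.
    import { readdirSync, readFileSync } from "fs";

    const TARGET = "BIBusTKServer";

    function rssByName(name: string): Array<{ pid: number; rssMb: number }> {
      const results: Array<{ pid: number; rssMb: number }> = [];
      for (const entry of readdirSync("/proc")) {
        if (!/^\d+$/.test(entry)) continue; // only numeric PID directories
        try {
          const comm = readFileSync(`/proc/${entry}/comm`, "utf8").trim();
          if (!comm.startsWith(name)) continue;
          // The VmRSS line in /proc/<pid>/status looks like: "VmRSS:   123456 kB"
          const status = readFileSync(`/proc/${entry}/status`, "utf8");
          const match = status.match(/^VmRSS:\s+(\d+)\s+kB/m);
          if (match) results.push({ pid: Number(entry), rssMb: Number(match[1]) / 1024 });
        } catch {
          // process exited between readdir and read - ignore
        }
      }
      return results;
    }

    // Print a sample every 5 seconds while the report runs.
    setInterval(() => {
      for (const p of rssByName(TARGET)) {
        console.log(`${new Date().toISOString()} pid=${p.pid} rss=${p.rssMb.toFixed(0)} MB`);
      }
    }, 5000);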
Related
I installed the Tabnine extension in VS Code. The tool is great, but it occupies over 1 GB of memory and my computer is lagging. Is there any way to limit Tabnine's memory usage, or the memory usage of VS Code extensions in general?
On Linux this is possible with a tool called "timeout". When you start VS Code, a separate process is automatically created for Tabnine, and with this tool it is possible to limit the RAM usage of a single process.
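The timeout tool itself is a shell utility, so note that this is a swapped-in approach: util-linux's prlimit can apply the same kind of cap to an already-running process, and you can drive it from a small script. A sketch, where the 1 GB cap and the PID-as-argument are purely illustrative:

    // cap-process-memory.ts - apply an address-space limit to a running process
    // using util-linux's `prlimit` (Linux only). Pass the target PID as argv[2].
    // Assumption: a 1 GB cap is what you want; adjust LIMIT_BYTES to taste.
    import { execFileSync } from "child_process";

    const pid = Number(process.argv[2]);
    if (!Number.isInteger(pid)) {
      console.error("usage: node cap-process-memory.js <pid>");
      process.exit(1);
    }

    const LIMIT_BYTES = 1 * 1024 * 1024 * 1024; // 1 GB address-space cap

    // Equivalent to running: prlimit --pid <pid> --as=<bytes>
    execFileSync("prlimit", [`--pid=${pid}`, `--as=${LIMIT_BYTES}`], { stdio: "inherit" });
    console.log(`Applied ${LIMIT_BYTES} byte address-space limit to pid ${pid}`);

Be aware that a hard address-space cap makes allocations beyond the limit fail, which will likely crash the Tabnine process rather than make it slim down.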
If you choose to use Tabnine Cloud, Tabnine will send blocks of code from your edited files to our server, allowing us to provide deep completion suggestions. These blocks of code will never be stored - they are used to calculate predictions and then immediately discarded.
We recommend the cloud version to optimize RAM usage if you have a stable internet connection.
Ref: Optimize RAM usage
I've got an App Service Plan with 14 GB of memory - it should be plenty for my application's needs. There are two App Services running on it, each identical - the private memory consumption of these hovers around 1 GB but can spike to 4 GB during periods of high usage. One app has a heavier usage pattern than the other.
Lately, during periods of high usage, I've noticed that the heavily used service can become unresponsive, and memory usage stays at 100% in the App Service Plan.
The high-traffic service is using 4 GB of private memory and starting to massively slow down. When I head over to the /scm.../ProcessExplorer/ page, I can see that the low-traffic service has 1 GB of private memory used and 10 GB of 'Working Set'.
As I understand it, on a single machine at least, the working set should be freed up when that memory is needed on another process. Does this happen naturally when two App Services share a single Plan?
It looks to me like the working set on the low-traffic instance is not being freed up to supply the needs of the high-traffic App Service.
If this is indeed the case, the simple fix is to move them to separate App Service Plans, each with 7 GB of memory. However, this seems like it might just be shifting the problem around - has anyone else noticed similar issues with multiple Apps on a single App Service Plan? As far as I understand it, these shouldn't interfere with one another to the extent that they all need to be separated. Or have I got the wrong diagnosis?
In some high memory-consumption scenarios, your app might truly require more computing resources. In that case, consider scaling to a higher service tier so the application gets all the resources it needs. Other times, a bug in the code might cause a memory leak, or a coding practice might increase memory consumption. Getting insight into what's triggering high memory consumption is a two-part process: first create a process dump, then analyze it. Crash Diagnoser from the Azure Site Extension Gallery can efficiently perform both of these steps. For more information, refer to Capture and analyze a dump file for intermittent high memory for Web Apps.
In the end we solved this one via mitigation, rather than getting to the root cause.
We found a mitigation strategy for our previous memory issues several months ago, which was simply to restart the server each night using a PowerShell script. This seems to prevent the memory from building up over time, and only costs us a few seconds of downtime. Our system doesn't have much overnight traffic, as our users are all based in the same geographic location.
However, we recently found that the overnight restart was reporting 'success' but actually failing each night due to expired credentials. This meant that the memory issues described in the question I posted were actually exacerbated by server uptimes of several weeks. Restoring the overnight restart resolved the memory issues we were seeing, and we certainly don't see our system using 10 GB+ anymore.
We'll investigate the memory issues if they rear their heads again. KetanChawda-MSFT's suggestion of using memory dumps to analyse the memory usage will be employed for this investigation when it's needed.
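For reference, the original job is a PowerShell script, but a minimal sketch of the same nightly restart through the @azure/arm-appservice SDK could look like the following - the subscription, resource group, and app names are placeholders, and using DefaultAzureCredential with a managed identity sidesteps the expired-credential failure mode we hit:

    // restart-app.ts - restart an Azure App Service (schedule externally,
    // e.g. via cron or an Automation job). Sketch only; names are placeholders.
    import { DefaultAzureCredential } from "@azure/identity";
    import { WebSiteManagementClient } from "@azure/arm-appservice";

    const subscriptionId = "<subscription-id>"; // placeholder
    const resourceGroup = "<resource-group>";   // placeholder
    const appName = "<app-service-name>";       // placeholder

    async function restart(): Promise<void> {
      // DefaultAzureCredential prefers managed identity / environment credentials,
      // avoiding the stored-password expiry problem described above.
      const client = new WebSiteManagementClient(new DefaultAzureCredential(), subscriptionId);
      await client.webApps.restart(resourceGroup, appName);
      console.log(`${new Date().toISOString()} restarted ${appName}`);
    }

    restart().catch((err) => {
      // Surface failures loudly - a silent "success" is what bit us before.
      console.error("Restart failed:", err);
      process.exit(1);
    });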
I have been trying to understand the cause of high memory usage by processes on my Windows server. I installed DebugDiag 1.2 to try to find the problem.
Here is what runs on my server:
I have an IIS server with a decent number of application pools (68). Each application pool contains at least 4 applications.
Recently, I have faced problems related to high memory usage, causing the server to run at 97% memory usage or higher.
It was working fine when I took the screenshot below. However, the memory usage easily climbs higher.
Task Manager:
That being said, I have been trying to understand how to use Microsoft's DebugDiag 1.2 tool to find something (a part of the source code, an SQL procedure) that might help me locate what is causing the problem.
I read that we can't limit the memory for each IIS application pool, so I guess the solution would be to try to optimize the application. But first I need to know where to start.
I hope someone can help me out.
I'm executing a UDF a few thousand times per second. This causes Node.js's RSS memory usage to climb slowly, seemingly without limit, by a few KB per execution. The problem persists even if I periodically close the connection and open a new client.
Reproduction is very easy: just execute a UDF (that returns a few values) on random keys a thousand times a second over the same connection. Cluster configuration doesn't affect it.
Any insights or advice to debug this problem?
This issue is fixed and the change has been pushed to the npm repo. Please get the latest version (1.0.25). Thanks for giving details that helped us isolate the problem.
As a side note...
Memory growth during execution in Node.js should not be a concern as long as it does not cross the default limit of a Node.js process, after which the process will crash. We typically see memory grow steadily at first and then stabilize near the limit. The default limit is 1 GB on a 64-bit machine, which can be extended to 1.7 GB. Read this for more info.
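If you want to see where your process sits relative to that limit, it is easy to log memory usage from inside the process itself; a minimal sketch (the 10-second interval is arbitrary):

    // memwatch.ts - periodically log Node.js memory usage while the UDF loop runs.
    // To raise the heap limit, start with e.g.: node --max-old-space-size=1700 app.js
    // (app.js is a placeholder for your entry point).
    setInterval(() => {
      const m = process.memoryUsage();
      const mb = (n: number) => (n / 1024 / 1024).toFixed(1);
      // rss covers the whole process; heapUsed is what counts against the V8 limit.
      console.log(`rss=${mb(m.rss)} MB heapTotal=${mb(m.heapTotal)} MB heapUsed=${mb(m.heapUsed)} MB`);
    }, 10_000).unref(); // unref so this timer alone doesn't keep the process alive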
I have a web site and I am using IIS as my web server. I noticed that on the production server, the CPU reaches 95% usage pretty quickly with very few users. I don't see this behaviour on my development server. I am using Visual Studio to develop, with IIS as my local web server as well.
How much more traffic does production get compared to the development server? How do their parameters compare? Before starting a deep analysis of the application itself, I would identify all the infrastructure and environmental differences. Sometimes such problems happen because of other software, like antivirus software running in the background...
Nevertheless, because it sounds more like an application problem, I would first check the Event Viewer for errors. Then I would start by monitoring a few Performance Counters to correlate the % Processor Time counter with Current Connections, Available Memory, # of Exceps Thrown / sec, % Time in GC and so on (a sketch for watching these counters follows the list below). This kind of behavior usually has one of the following causes:
excessive looping due to some logic error, like calling the same service again and again, or trying to load or parse a malformed file, etc. This can be analyzed with dump analysis (see below).
high CPU usage due to the Garbage Collector - when memory usage is heavy (or there is even a memory leak), the GC may start to consume more and more CPU fighting the memory shortage. You will see this in the memory-related performance counters.
a considerable number of exceptions being thrown (for example, due to environmental problems like network unavailability or production data differences) can also consume a lot of CPU. The Event Viewer and the exception-related performance counters (since exceptions can be handled silently by your application) should be an indicator here.
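As promised above, here is a quick sketch for watching those counters from the command line by shelling out to Windows's built-in typeperf. The counter paths are the standard ones, but verify the instance names on your machine:

    // watch-counters.ts - stream a few performance counters via Windows's typeperf.
    // Counter paths below are the standard ones; verify instance names locally.
    import { spawn } from "child_process";

    const counters = [
      String.raw`\Processor(_Total)\% Processor Time`,
      String.raw`\Memory\Available MBytes`,
      String.raw`\Web Service(_Total)\Current Connections`,
      String.raw`\.NET CLR Exceptions(_Global_)\# of Exceps Thrown / sec`,
      String.raw`\.NET CLR Memory(_Global_)\% Time in GC`,
    ];

    // -si 5 samples every 5 seconds; output is CSV on stdout.
    const proc = spawn("typeperf", [...counters, "-si", "5"], { stdio: "inherit" });
    proc.on("exit", (code) => console.log(`typeperf exited with code ${code}`));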
To further analyze your application, I suggest taking a full memory dump during high CPU usage. You can do that with the DebugDiag tool. Please refer to this IIS troubleshooting guide for details.