Force Ninject to clear cache - memory-leaks

I am processing objects in a loop, and for each object I am loading some data from CSV files. I am disposing of the object after it has been processed, but it is still kept on the heap, and I am getting an OutOfMemoryException after a few runs.
How can I force Ninject to remove that InstanceReference from the cache?
.NET console app

Related

Apollo Client + NextJS memory leak, InMemoryCache

The official Apollo and Next.js recommendation for SSR is to create a new ApolloClient instance each time a GraphQL request needs to be executed.
This gives reasonable results for memory usage: memory grows by some amount and is then reset to the initial level by the garbage collector.
The problem is that this initial memory level itself grows constantly, and the debugger shows the leak is caused by the InMemoryCache object attached to each ApolloClient instance as cache storage.
We tried reusing the same InMemoryCache instance for all new ApolloClient instances, and we tried disabling caching by customizing the policies in defaultOptions, but the leak is still present.
Is it possible to turn the cache off completely, for example by passing false for the cache option when initializing ApolloClient? Or is this a known problem with a known solution, perhaps solvable by customizing InMemoryCache?
We tried numerous other options, such as forcing cache garbage collection and evicting objects from the cache, but nothing helped; the leak is still there.
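For reference, a minimal sketch of the per-request client creation described above (the endpoint URI, helper name, and exact options are illustrative, not our actual code):

import { ApolloClient, InMemoryCache } from "@apollo/client";

// A new client (and a new InMemoryCache) is created for every SSR GraphQL request.
function createApolloClient() {
  return new ApolloClient({
    ssrMode: true,
    uri: "https://example.com/graphql", // illustrative endpoint
    cache: new InMemoryCache(),         // fresh cache per request; this is what keeps accumulating
    defaultOptions: {
      query: { fetchPolicy: "no-cache" },      // caching disabled via defaultOptions,
      watchQuery: { fetchPolicy: "no-cache" }, // yet the leak persists
    },
  });
}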
Thank you!

How to fix a memory leak when switching between databases with Mongoose & MongoDB?

I've identified a memory leak in an application I'm working on, which causes it to crash after a while due to being out of memory. Fortunately we're running it on Kubernetes, so the other replicas and an automatic reboot of the crashed pod keep the software running without downtime. I'm worried about potential data loss or data corruption though.
The memory leak is seemingly tied to HTTP requests. According to the memory usage graph, memory usage increases more rapidly during the day when most of our users are active.
In order to find the memory leak, I've attached the Chrome debugger to an instance of the application running on localhost. I made a heap snapshot and then I ran a script to trigger 1000 HTTP requests. Afterwards I triggered a manual garbage collection and made another heap snapshot. Then I opened a comparison view between the two snapshots.
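For illustration, a load script of this kind could look roughly like the following (the URL and route are hypothetical placeholders):

import * as http from "http";

// Fire 1000 sequential HTTP requests against the locally running app.
async function run(): Promise<void> {
  for (let i = 0; i < 1000; i++) {
    await new Promise<void>((resolve, reject) => {
      http
        .get("http://localhost:3000/items", (res) => {
          res.resume();            // drain the body so the socket is released
          res.on("end", resolve);
        })
        .on("error", reject);
    });
  }
}

run().catch(console.error);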
According to the debugger, the increase of memory usage has been mainly caused by 1000 new NativeConnection objects. They remain in memory and thus accumulate over time.
I think this is caused by our architecture. We're using the following stack:
Node 10.22.0
Express 4.17.1
MongoDB 4.0.20 (hosted by MongoDB Atlas)
Mongoose 5.10.3
Depending on the request origin, we need to connect to a different database. To achieve that we added some Express middleware that switches between databases, like so (see the sketch after this list):
On boot we connect to the database cluster with mongoose.createConnection(uri, options). This sets up a connection pool.
On every HTTP request we obtain a connection to the right database with connection.useDb(dbName).
After obtaining the connection we register the Mongoose models with connection.model(modelName, modelSchema).
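A minimal sketch of that middleware (the schema, model name, and resolveDbName helper are hypothetical stand-ins for our actual code):

import express, { Request } from "express";
import mongoose, { Schema } from "mongoose";

const userSchema = new Schema({ name: String }); // illustrative model schema

// Hypothetical helper: derive the tenant database name from the request origin.
function resolveDbName(req: Request): string {
  return req.hostname.split(".")[0];
}

const app = express();

// 1. On boot: one connection (pool) to the database cluster.
const baseConnection = mongoose.createConnection(process.env.MONGO_URI || "");

// 2. On every HTTP request: switch to the right database.
app.use((req, res, next) => {
  const connection = baseConnection.useDb(resolveDbName(req));
  // 3. Register the Mongoose models on the switched connection.
  res.locals.User = connection.model("User", userSchema);
  next();
});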
Do you have any ideas on how we can fix the memory leak, while still being able to switch between databases? Thanks in advance!

ArangoDB Foxx application poor performance

I have a serious issue with a custom Foxx application.
About the app
The application is a customized algorithm for finding paths in a graph, optimized for public transport. On init it loads all the necessary data into a JavaScript variable and then traverses it there, which is faster than accessing the database each time.
The issue
When I access the application through the API for the first time, it is fast, e.g. 300 ms. But when I make exactly the same request a second time, it is very slow, e.g. 7000 ms.
Can you please help me with this? I have no idea where to look for bugs.
Without knowing more about the app & the code, I can only speculate about reasons.
Potential reason #1: development mode.
If you are running ArangoDB in development mode, then the init procedure is run for each Foxx route request, making precalculation of values useless.
You can spot whether or not you're running in development mode by inspecting the arangod logs. If you are in development mode, there will be a log message about that.
Potential reason #2: JavaScript variables are per thread
You can run ArangoDB and thus Foxx with multiple threads, each having thread-local JavaScript variables. If you issue a request to a Foxx route, then the server will pick a random thread to answer the request.
If the JavaScript variable is still empty in this thread, it may need to be populated first (this will be your init call).
For the next request, again a random thread will be picked for execution. If the JavaScript variable is already populated in this thread, then the response will be fast. If the variable needs to be populated, then the response will be slow.
After a few requests (at least as many as configured in --server.threads startup option), the JavaScript variables in each thread should have been initialized and the response times should be the same.
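To illustrate the per-thread behaviour described above, here is a rough sketch of the lazy initialization pattern (loadGraphData is a hypothetical stand-in for the actual init routine):

// Module-level state; each server thread ends up with its own copy of this variable.
let graphData: Map<string, unknown> | null = null;

// Hypothetical stand-in for the expensive init that loads all stops/routes from the DB.
function loadGraphData(): Map<string, unknown> {
  return new Map();
}

function getGraphData(): Map<string, unknown> {
  if (graphData === null) {
    // First request handled by this thread: slow, the variable must be populated first.
    graphData = loadGraphData();
  }
  // Subsequent requests on the same thread reuse the populated variable: fast.
  return graphData;
}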

How to run a time-consuming task on startup in a web application while deploying it

We are facing an issue with initializing our cache at server startup or application deployment. Initializing the cache involves:
Querying a database to get the list of items
Making an RMI call for each item
Listening to the data on a JMS queue/topic
Constructing the cache
This initialization process is in the startup code. All of this takes a lot of time, which makes deployment slow and increases server start time.
So what I proposed is to create a thread at startup and run the initialization code in it. I wrote a sample application to demonstrate this.
It involves a ServletContextListener and a filter. In the listener I create a new thread in which the HeavyProcess runs. When it finishes, an event is fired, for which the filter is listening. On receiving the event, the filter starts allowing incoming HTTP requests. Until then, the filter redirects all clients to a default page showing a message that the application is initializing.
I presented this approach and a few concerns were raised.
We should ideally not create a thread ourselves, because managing the thread will be difficult.
My question is: why can't we create threads like this in web applications?
If this is not good, then what is the best approach?
If you can use managed threads, avoid unmanaged ones. The container has no control over unmanaged threads, and unmanaged threads survive redeployments if you do not terminate them properly. So you have to keep track of unmanaged threads and terminate them somehow (which is not easy either, because you have to handle race conditions carefully).
So one solution is to use a @Singleton @Startup bean with a self-cancelling @Schedule timer, something like this (class name is illustrative):
import javax.ejb.*; // Singleton, Startup, Schedule, Timer

@Singleton
@Startup
public class CacheInitializer {
    @Schedule(second = "*/45", minute = "*", hour = "*")
    protected void asyncInit(final Timer timer) {
        timer.cancel(); // the timer only needs to fire once
        // Do init here
        // Set flag that init has been completed
    }
}
I have learned about this method here: Executing task after deployment of Java EE application
So this gives you an async managed thread, and deployment will not be delayed by @PostConstruct. Note the timer.cancel().
Looking at your actual problem: I suggest using a cache which supports "warm starts".
For example, Infinispan supports cache stores so that the cache content survives restarts. If you have a cluster, there are distributed or replicated caching modes as well.
JBoss 7 embeds Infinispan (it's an integrated service in the same JVM), but it can be operated independently as well.
Another candidate is Redis (and any other key/value store with persistence will do as well).
In general, creating unmanaged threads in a Java EE environment is a bad idea. You will lose container-managed transactions, the user context and many other Java EE concepts within your unmanaged thread. Additionally, unmanaged threads may block the container on shutdown if your thread handling isn't appropriate.
Which Java EE Version are you using? Perhaps you can use Servlet 3.0's async feature?
Or call an asynchronous EJB method to do the heavy lifting at startup (from @PostConstruct). The call then sets a flag when its job is done.

ColdFusion singleton object pool

In our ColdFusion application we have stateless model objects.
All the data I want I can get with one method call (it calls others internally without saving state).
Methods usually ask the database for the data. All methods are read-only, so I don't have to worry about thread safety (please correct me if I'm wrong).
So there is no need to instantiate objects at all. I could call them statically, but ColdFusion doesn't have static methods, so calling a method means instantiating the object first.
To improve performance I have created singletons for every Model object.
So far it works great - each object is created once and then accessed as needed.
Now my worry is that all requests for data would go through only one model object.
Should I be worried? Say I have a method getOfferData() on my object, and it is time-consuming.
What if a couple of clients want to access it at the same time?
Will the second one wait for the first request to finish, or is it executed in a separate thread?
It's the same object after all.
Should I implement some kind of object pool for this?
The singleton pattern you are using won't cause the problem you are describing. If getOfferData() is still running when another call to that function is made on a different request, the second call will not queue unless you do one of the following:
Use cflock to grant an exclusive lock
Hit queueing when connecting to your database because of locking/transactions
Have too many things running and use up all the concurrent request threads available to ColdFusion
So the way you are going about it is fine.
Hope that helps.
