Can Castle Windsor Be Used to Keep Static References in Memory?

Background...
I have to build a new ASP.NET MVC app that uses an existing class library that is complex and can't be rewritten at this stage. The main problem is that this class library has a huge initialisation hit: it takes up to 10 minutes to load all its data into memory. This is fine for the production environment, where it performs fast once IIS has started up. For development, however, it's a nightmare, because every time you build the solution and start it up in a browser, it takes ages.
Possible Solution?
So the idea was that a Castle Windsor IoC lifestyle could be used to hold this in memory, so that only recycling the application pool would force an expensive reload. I remember a problem I had before where Windsor kept code in memory, so that even after changing it and recompiling, IIS was still running the old code. In that scenario it was a problem, but in my new scenario it's exactly what I'd like.
Does anyone know how this can be done? I have tried a dummy project using the Singleton lifestyle, but after changing the MVC project it still reloads the class library.

If the data serializes, you could store it in a cache that keeps its state when you recompile. For example, memcached runs as a separate process: you could change the bin folder or restart the dev server process and the cache would keep its state. There's a provider for accessing memcached on CodePlex.
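The cache-aside pattern this answer describes looks roughly like this (a minimal sketch; a plain Map stands in for a real out-of-process memcached client, and the function and key names are illustrative):

```javascript
// Cache-aside sketch: on startup, try the external cache first and only run
// the expensive initialisation on a miss. A plain Map stands in for a real
// memcached client here; with memcached, the cache process outlives your
// app restarts, so the 10-minute hit is paid only once.
function makeLoader(cache, expensiveInit) {
  return function load(key) {
    const hit = cache.get(key);
    if (hit !== undefined) return hit; // fast path: survives a recompile
    const data = expensiveInit();      // slow path: the expensive load
    cache.set(key, data);
    return data;
  };
}
```

With a real memcached client the get/set calls become network round-trips, so this only pays off when the data serializes cheaply relative to rebuilding it.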

Maybe you could serialize the contents of the loaded library and save it in binary form on disk. This could potentially speed up the load. It's a crazy idea, but then again, having a class library that takes 10 minutes to load is crazy, too.

Related

Hot reload an ES6 module without a memory leak

I have a Node server, and I want to hot reload some modules that get sent to the server. Right now I just use a dynamic await import() and swap out the reference to the module. Once a module is loaded, though, it stays in the cache forever, and memory slowly increases over time whenever a new version of the module is reloaded. In practice it's a small amount of memory, but having a memory leak on the server leaves a bad taste in my mouth.
I considered using a worker, but you can't pass functions between threads, and the module I need exposes functions.
I've done some research and so far I can't find any way to do this. Is it possible? I expect the answer is no, and I'm planning to just restart the server on reloads and find some way to make the deploy zero-downtime, but at this point I'm also just curious about the question.

Why is my ASP.NET website very slow at start-up? I use ResellerClub shared hosting. How do I fix this?

I'm hosting an ASP.NET MVC website and it is really slow at the start; it takes 40 seconds to 1 minute. How do I fix this?
I have tried bundling and minification, and setting debug mode = false in web.config.
My website is www.mmfujistore.com
My first thought on seeing this is that your hosting provider shuts down your server when it has been idle for more than a few seconds, in order to make room for other customers. It takes so long to start up because your shared application is literally booting up when you make a request after the site has been down for a while.
If it works fine locally, I'd bet you are best off finding a better hosting provider. If you update to .NET Core you can use any Linux host out there. I doubt this is a .NET issue.
Edit: I agree. I used to work there and can confirm that they do this to manage load, by considerably limiting the application pool memory for each client. Considering the number of customers on a shared server, it gets really difficult to manage. It doesn't cause issues for most apps, but if your application is memory-hungry, they won't do anything to fix it permanently. The best they can do is free your app's memory pool temporarily, but it will fill up again. Your best option is to move to another provider if this persists.

Electron running multiple main processes vs multiple browser windows

I'm running Electron on a Linux server for web scraping, and currently I run a new electron command for each task. But this results in high CPU usage. I'm now thinking about running a single Electron instance and creating a new BrowserWindow for each task. It will take some time to adapt the code base to this style, so I wanted to ask here first: will it make a difference in CPU usage, and how much?
Basically, creating a new NodeJS process means re-parsing your application's code, which strongly affects CPU usage. Creating a new BrowserWindow only spawns a new renderer process, which is far more efficient.
If your application is packaged, e.g. with electron-packager, then creating a new instance costs as much as creating another NodeJS process, because the packaged (aka compiled) application carries its own copy of NodeJS, which is enough to run your code but still adds CPU load.
But the decision depends on how you use the server. If you only run the Electron application to carry out tasks you have defined yourself, adapting your working code would bring little to no benefit. If you want to release this application, and/or the server is used for other tasks, e.g. a web server, adapting your code would be a real benefit.
Running multiple instances of the main NodeJS process with the default configuration is not actually supported or tested. You'll find that any feature that persists data to disk either doesn't work, or doesn't work as expected (i.e. localStorage, IndexedDB, sessions, etc.).
https://github.com/electron/electron/issues/2493
You can work around this by changing the data directory for each instance so they don't trample over each other, but this is likely to use a lot of disk space and you'd need a way to keep track of all those data directories.
A single main process with multiple renderers is nearly always the answer.
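The single-main-process design can be sketched like this. The window factory is injected so the scheduling logic is shown outside Electron; in real Electron main-process code it would be something like () => new BrowserWindow({ show: false }), and the scrape/teardown steps are illustrative:

```javascript
// One main process, one renderer (BrowserWindow) per scraping task.
// The window factory is injected: inside Electron it would construct a
// BrowserWindow, which only spawns a renderer process rather than a
// whole new application instance.
function createTaskRunner(createWindow) {
  const active = new Set();
  return {
    run(url) {
      const win = createWindow(); // cheap: a renderer, not a new app
      active.add(win);
      win.loadURL(url);           // scrape on 'did-finish-load', then finish()
      return win;
    },
    finish(win) {
      win.destroy();              // releases the renderer process
      active.delete(win);
    },
    count: () => active.size,
  };
}
```

Destroying each window when its task completes is what keeps memory bounded; the shared main process and data directory are reused across all tasks.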

IBM WebSphere 8 memory leaks with Axis2 Web Services

I migrated an application to WebSphere v8 from v6 and started getting memory leaks. The primary suspect is org.apache.axis2. It looks like each time the application calls a web service, an object called ServiceClient is created by WAS8 and stored in something called ClientConfigurationContextStore, and then never garbage collected. Has anybody had a similar issue?
I fixed the problem by forcing the original Axis 1.4 over the supplied SOAP implementation. This was done by placing two files in the application's WEB-INF/services directory. The first file is called javax.xml.soap.MessageFactory and contains 'org.apache.axis.soap.MessageFactoryImpl'; the second is called javax.xml.soap.SOAPConnectionFactory and contains 'org.apache.axis.soap.SOAPConnectionFactoryImpl'. So now javax.xml.soap.SOAPConnectionFactory.newInstance() returns the org.apache.axis classes, whereas before it returned the com.ibm.ws.webservices ones. No more memory leaks.
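In file-layout form, the override amounts to two one-line resource files, with the paths and contents exactly as described in the answer:

```text
WEB-INF/services/javax.xml.soap.MessageFactory
    org.apache.axis.soap.MessageFactoryImpl

WEB-INF/services/javax.xml.soap.SOAPConnectionFactory
    org.apache.axis.soap.SOAPConnectionFactoryImpl
```

Each file name is the factory interface being overridden, and its single line is the implementation class the JAXM factory-finder should load.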
If you don't have the problem in WebSphere v6, it's possible it is a leak in v8 itself. But it's also possible that v8 is being more strict about something that v6 was letting you get away with.
Have you checked that you're reusing all the Axis2 client objects you can, rather than recreating ones on every call that don't need to be recreated? I recall having some leakage in Axis2 client code under WAS v6.1 and realizing we were recreating objects we could have been reusing instead.
In one of our projects, we used Axis2 1.6.2 as a service client. The application server was WebSphere 7, and in the test environment it ran out of memory from time to time. When I examined the heap dump, the AxisConfiguration class held lots of AxisService instances. I was instantiating a ServiceClient for every request, and I saw that garbage collection sometimes ran too late to finalize these objects. So we made the ServiceClient a singleton, and that solved our problem.

After enabling IIS 7.5 autostart, first request is still slow

I've set startMode="AlwaysRunning" attribute on my application pool and serviceAutoStartEnabled="true" attribute on my application in IIS configuration. I've even set up serviceAutoStartProvider and can see that "warm up" code is being executed. I also can see that w3wp process auto-starts after iisreset. Still, the first request to my ASP.NET MVC application is exactly as slow as without auto-start. Is there any point I'm missing or any way to easily debug this without a profiler?
Is this feature expected to affect first request performance at all? What is actually the bulk of work to do on the first request, given that the worker process is ready, .NET appdomain and even all .NET assemblies have been loaded?
I've been looking into this recently.
As far as I can tell, the autoStart feature causes your IIS worker process (by default, just one per pool) to JIT-compile before the first request.
However, what gets compiled appears to be just the bulk of the assemblies and their dependencies, not necessarily all of your methods.
When that first request happens and the methods you've written get called for the first time, the JITter performs a final compile of those methods that have not yet been compiled.
The benefit of autoStart appears to be that it lets .NET do 90% of the work up front, but the last 10% is still paid when the first request arrives and the methods that had not yet been touched run for the first time.
