I have a lot of Singleton implementations in an ASP.NET application and I want to move the application to an IIS Web Garden environment for performance reasons.
Correct me if I'm wrong: moving to an IIS Web Garden with n worker processes, one singleton object will be created in each worker process, which makes it no longer a single object because n > 1.
Can I make all those singleton objects behave as a single instance again in an IIS Web Garden?
I don't believe you can (unless you can somehow get those IIS workers to use objects in shared memory).
This is a scope issue. Your singleton instance uses the process space as its scope, and as you've said, your implementation now spans multiple processes. On most operating systems a singleton is, by definition, tied to a single process space, since it is a single class instance or object.
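To make the scope issue concrete, here is a minimal sketch of a classic C# singleton (the class name is an illustrative assumption). The static field lives in one process's memory, so n worker processes mean n independent instances:

    // Classic singleton: "Instance" is a static field, i.e. per-process state.
    // In a web garden, each IIS worker process loads its own copy of this
    // class, so each process ends up with its own GameServer instance.
    public sealed class GameServer
    {
        private static readonly GameServer instance = new GameServer();

        public static GameServer Instance
        {
            get { return instance; }
        }

        private GameServer() { }
    }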
Do you really need a singleton? That's a very important question to ask before using that pattern. As Wikipedia notes, some consider it an anti-pattern (or a code smell, etc.).
Examples of alternate designs that may work include...
You can have multiple objects synchronize against a central store or with each other.
Use object serialization if applicable.
Use a Windows Service and some form of IPC, e.g. System.Runtime.Remoting.Channels.Ipc
I like option 3 for large websites. A companion Windows Service is very helpful in general: lots of things like sending mail, batch jobs, etc. should already be decoupled from the frontend worker process. You can host the singleton server object in that process and use client proxy objects in your IIS worker processes, as in the sketch below.
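A rough sketch of that setup using .NET remoting over an IPC channel; the GameState class, port name, and object URI are illustrative assumptions, not a definitive implementation:

    // Windows Service side: publish one server-activated Singleton object.
    using System;
    using System.Runtime.Remoting;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Ipc;

    public class GameState : MarshalByRefObject
    {
        private int counter;
        public int Increment() { return ++counter; }
    }

    public static class ServiceHost
    {
        public static void Main()
        {
            // Register an IPC channel and expose GameState in Singleton mode:
            // every client proxy talks to the same server-side instance.
            ChannelServices.RegisterChannel(new IpcChannel("GameStatePort"), false);
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(GameState), "GameState.rem", WellKnownObjectMode.Singleton);
            Console.ReadLine(); // keep the host process alive
        }
    }

    // IIS worker process side: obtain a proxy to the shared instance.
    // var state = (GameState)Activator.GetObject(
    //     typeof(GameState), "ipc://GameStatePort/GameState.rem");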
If your singleton class works with multiple objects that share state or just share initial state, then options 1 and 2 should work respectively.
Edit
From your comments it sounds like the first option in the form of a Distributed Cache should work for you.
There are lots of distributed cache implementations out there.
Microsoft AppFabric (formerly called Velocity) is their very recent move into this space.
Memcached ASP.NET Provider
NCache (MSDN article): a custom ASP.NET Cache provider with OutProc support. There should be other custom Cache providers out there.
Roll your own distributed cache using a Windows Service and IPC (option 3).
PS: Since you're specifically looking into chat, I'd definitely recommend researching Comet ("Comet implementation for ASP.NET?", WebSync, etc.).
I have read a lot of documents and articles about DbContext in EF Core and its lifetime; however, I still have some questions.
According to https://learn.microsoft.com/en-us/ef/core/dbcontext-configuration/, the recommended lifetime for DbContext, and the default lifetime used by AddDbContext, is Scoped, but the two sentences below from that document seem to contradict each other.
"DbContext is not thread-safe. Do not share contexts between
threads. Make sure to await all async calls before continuing to use
the context instance."
On the other hand, it also says:
"Dbcontext is safe from concurrent access issues in most
ASP.NET Core applications because there is only one thread executing
each client request at a given time, and because each request gets
a separate dependency injection scope (and therefore a separate
DbContext instance)."
I am confused about whether registering DbContext as a scoped service is thread-safe or not.
What, in detail, are the problems with registering DbContext as a singleton service?
In addition, I have read some docs that prohibit registering a singleton DbContext; however, AddDbContextPool seems to register a singleton DbContext.
So I have some questions about the DbContext pool:
What are the impacts of using a DbContext pool instead of a plain DbContext?
When should we use it, and what should be considered when we use a context pool?
Is the DbContext pool thread-safe?
Does it have memory issues because it stores a number of DbSet instances throughout the application's lifetime?
Would change tracking or any other part of EF fail when using a pooled DbContext?
I have already read these related answers:
One DbContext per web request... why?
.NET Entity Framework and transactions
I understand why you think the language in the Microsoft documents is confusing. I'll unravel it for you:
"DbContext is not thread-safe." This statement means that it's not safe to access a DbContext from multiple threads in parallel. The stack overflow answers you already referenced, explain this.
"Do not share contexts between threads." This statement is confusing, because asynchronous (async/await) operations have the tendency to run across multiple threads, although never in parallel. A simpler statement would be: "do not share contexts between web requests," because a single web request typically runs a single unit of work and although it might run its code asynchronously, it typically doesn't run its code in parallel.
"Dbcontext is safe from concurrent access issues in most ASP.NET Core applications": This text is a bit misleading, because it might make the reader believe that DbContext instances are thread-safe, but they aren't. What the writers mean to say here is that, with the default configuration (i.e. using AddDbContext<T>(), ASP.NET Core ensures that each request gets its own DbContext instance, making it, therefore, "safe from concurrent access" by default.
1. I am confused about whether registering DbContext as a scoped service is thread-safe or not.
DbContext instances are not thread-safe by themselves, which is why you should register them as Scoped: that prevents them from being accessed by multiple requests at once, which in turn makes their use thread-safe.
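A minimal sketch of that default Scoped registration (ApplicationDbContext and the "Default" connection string name are assumptions):

    using Microsoft.EntityFrameworkCore;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        private readonly IConfiguration configuration;

        public Startup(IConfiguration configuration)
        {
            this.configuration = configuration;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            // AddDbContext registers the context with a Scoped lifetime by
            // default, so each web request gets its own instance.
            services.AddDbContext<ApplicationDbContext>(options =>
                options.UseSqlServer(configuration.GetConnectionString("Default")));
        }
    }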
2. What, in detail, are the problems with registering DbContext as a singleton service?
This is already described in detail in this answer, which you already referenced; I won't repeat it here.
In addition, I have read some docs that prohibit registering a singleton DbContext; however, AddDbContextPool seems to register a singleton DbContext. So I have some questions about the DbContext pool.
The DbContext pooling feature is very different from registering DbContext as singleton, because:
The pooling mechanism ensures that parallel requests get their own DbContext instance.
Therefore, multiple DbContext instances exist with pooling, while only a single instance for the whole application exists when using the Singleton lifestyle.
Using the singleton lifestyle, therefore, ensures that one single instance is reused, which causes the myriad problems laid out (again) here.
The pooling mechanism ensures that, when a DI scope ends, the DbContext is 'cleaned' and brought back to the pool, so it can be reused by a new request.
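In code, switching to pooling is a one-line change at startup (a sketch reusing the hypothetical Startup example shown earlier):

    // Replaces AddDbContext in ConfigureServices; contexts are now rented
    // from a pool per request and reset when the request's scope ends.
    services.AddDbContextPool<ApplicationDbContext>(options =>
        options.UseSqlServer(configuration.GetConnectionString("Default")));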
What are the impacts of using a DbContext pool instead of a plain DbContext?
More information about this is given in this document.
When should we use it, and what should be considered when we use a context pool?
When your application requires the performance benefits that it brings. This is something you might want to benchmark before deciding to add it.
Is the DbContext pool thread-safe?
Yes, in the same way that registering a DbContext as Scoped is thread-safe. If you accidentally hold on to a DbContext instance inside an object that is reused across requests, this guarantee is broken. You have to take good care of Scoped objects to prevent them from becoming Captive Dependencies, as illustrated below.
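A hypothetical illustration of such a Captive Dependency (ReportCache is a made-up example):

    // ReportCache is registered as a Singleton but receives a Scoped (or
    // pooled) DbContext, so a single context instance is held captive and
    // reused across requests; that breaks the thread-safety guarantee.
    public class ReportCache
    {
        private readonly ApplicationDbContext db; // captive: outlives its scope

        public ReportCache(ApplicationDbContext db)
        {
            this.db = db;
        }
    }

    // services.AddSingleton<ReportCache>(); // don't do this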
Does it have memory issues because it stores a number of DbSet instances throughout the application's lifetime?
The memory penalty will hardly ever be noticeable. The so-called first-level cache is cleared for every DbContext that is brought back to the pool after a request ends. This prevents the DbContext from becoming stale and prevents memory issues.
Would change tracking or any other part of EF fail when using a pooled DbContext?
No, it won't. For the most part, making your DbContext pooled only requires infrastructural changes (changes to the application's startup path) and is transparent to the rest of your application. But again, make sure to read this to familiarize yourself with the consequences of using DbContext pooling.
I have a working Java EE application that runs a multitude of threads. I want to move these threads off my application and simply have access to their data (Strings and ints).
How should I achieve this if I want to, say, call a method on my web application that accesses a thread's data on a different server/JVM?
Say you wanted to separate these layers (perhaps to put them on different machines, or for scalability): you would separate the data layer from your presentation layer and put them in different JVMs, making the data layer provide a service to the presentation layer. How you do this depends on your preferred transport; e.g. it could be a web service, or you could use RMI, JMS, TCP, or shared memory.
In any case, one JVM can only access the data of another process through services that JVM exposes. (Except in the case of shared memory, but it's not easy to get this working unless your data model is very simple)
accesses a thread's data
In Java almost all data is on the heap, which is shared by many threads. Very little of it is scoped completely to an individual thread. So the very idea of moving a thread and only "its data" does not make much sense.
And it's not just Java, pretty much any language with shared mutable state will face the same issues.
Of course your application can have a concept of thread-owned data, but that would be application logic and not part of java itself or its Thread class.
I have a naïve version of a PokerApp running as an Azure Website.
The server stores the state of the tables in its memory (whose turn it is, blind values, cards…), etc.
The problem here is that I don't know how much I can rely on the WebServer's memory to be "permanent". A simple restart of the server would cause that memory to be lost and therefore all the games in progress before the restart would get lost / cause trouble.
I've read about using Table Storage to keep session data and share it between instances, but in my case it's not just a string of text that I want to share but, let's say for example, a Lobby object which contains all the info associated with the games.
This is very roughly the structure of the object I have in memory
After some of your comments, you can see the object that needs to be stored is quite big and is updated almost constantly. I don't know how well serializing and deserializing is going to work for me here...
Should I consider an Azure VM, which I'm hoping would have persistent memory, instead of a Website?
Or is there a better approach to achieve something like this?
Thanks all for the answers and comments, you've made it clear that one can't rely on local memory when working on the cloud.
I'm going to do some refactoring and optimize the "state" object and then use a caching service.
Two questions come to my mind though, and once you throw some light on these I promise I will shut up and accept #astaykov's great answer.
CONCURRENCY AT INSTANCE LEVEL - I have classic thread locks in my app to avoid concurrency problems, so I'm hoping there is something equivalent in the caching services you guys propose?
Also, I have a few timeouts per table (increase blinds, number of seconds the players have to act…). Let's say a user has just folded a hand; he's finished interacting with the state object, so I update the cache. But while that state object (to which the timers belong) sits in the cache, my timers will stop ticking…
I know I'm not explaining myself very well here but I hope you guys see my point.
I'd suggest using the Azure Redis Cache.
Here is a nice sample of how to build an MVC app with Redis Cache in 15 minutes.
You can, of course, use the Azure Managed Cache, or end up with Azure Tables. Azure Tables can hold much more than just a string, but I believe the caching solutions will have lower communication latency.
Either way, your objects have to be serializable. And yes, the objects will get serialized/deserialized on every access. You can do it manually, or let a framework do it for you. From what I've read, Newtonsoft.Json is a good, optimized JSON serializer/deserializer.
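As a rough sketch (assuming StackExchange.Redis and Newtonsoft.Json; the Lobby type comes from the question, while the key naming and connection string format are assumptions):

    using Newtonsoft.Json;
    using StackExchange.Redis;

    public class LobbyStore
    {
        private readonly IDatabase cache;

        public LobbyStore(string connectionString)
        {
            // e.g. "<yourcache>.redis.cache.windows.net,ssl=true,password=<key>"
            cache = ConnectionMultiplexer.Connect(connectionString).GetDatabase();
        }

        public void Save(string lobbyId, Lobby lobby)
        {
            // Serialize the whole state object on every update.
            cache.StringSet("lobby:" + lobbyId, JsonConvert.SerializeObject(lobby));
        }

        public Lobby Load(string lobbyId)
        {
            // Any instance (or a freshly restarted one) can rebuild the state.
            return JsonConvert.DeserializeObject<Lobby>(cache.StringGet("lobby:" + lobbyId));
        }
    }

In a real application you would share a single ConnectionMultiplexer across the app rather than creating one per store, since it is designed to be reused.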
UPDATE
As for a VM running in the cloud: a VM will be restarted sooner or later! The application pool will recycle, planned maintenance will occur, unplanned maintenance will occur, a hard disk will fail, a memory module will fail, an unforeseen disaster will happen.
Only one thing is for sure: if you want your data to survive server crashes, change the way you think about and design software, and take the data out of (local) memory. Or just live with the fact that the application may lose state sometimes.
Second update: for the clocks
Well, you have to use your imagination and experience. I would question whether your clocks work in the context of an ASP.NET app anyway (unless all of them are static properties of a static type, which would be a little hell). My approach would be to heavily extend my app to the client as well (JavaScript). There are a lot of great frameworks out there (SignalR, AngularJS, KnockoutJS), none of them to be underestimated! By extending your object model to the client, you can maintain the player objects' lifetime on the client (keeping the clock ticking) and send updates from the client to the server for all those events. If you take a look at SignalR, you can keep real-time communication between multiple clients (say, players) and the server. And the server side of SignalR scales out nicely with Azure Service Bus and even Redis. A rough hub sketch follows.
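For illustration only, a SignalR 2 hub sketch; TableHub, the group naming, and the client method names are assumptions:

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class TableHub : Hub
    {
        // A player joins the SignalR group for their table so the server
        // can push state changes to everyone seated there.
        public Task JoinTable(string tableId)
        {
            return Groups.Add(Context.ConnectionId, tableId);
        }

        // Called by a client when a player folds; the hub broadcasts the
        // event so every client (and its local clock) stays in sync.
        public void Fold(string tableId, string playerId)
        {
            Clients.Group(tableId).playerFolded(playerId);
        }
    }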
I have a Node application which persists data to a MongoDB database. Most of this data is at hand, such as the data for the User collection. However, the application also has the concept of a Website collection, and for this collection data must first be downloaded from somewhere before it is saved.
I am wondering how I should separate the above concerns in my application. At the service layer, I have things like User and Website, which provide basic CRUD operations. At completely the opposite end of the spectrum, there is a user interface whereby users can input a website URL. Somewhere between this UI and the application persisting the data to MongoDB (the service layer), the application must make a request to this URL to gather some data. Once the data has been fetched, the Website service will persist it.
Potentially, there could be thousands of these URLs entered at once, and I do not want to bring down the Node process that handles the web server due to load issues. Therefore I think it would be a good idea to abstract the work out to a different process and use some sort of messaging bus to tie the application together.
It seems that you've decomposed the system correctly (and have created that separation at the persistence "service" layer), but I'd take this separation a bit further by moving toward a distributed system architecture (i.e. SOA / microservices).
The initial step of building a distributed system is identifying each of the functions necessary to meet the overall business goal of the application and mapping these to service endpoints. Each loosely coupled service endpoint will then serve a small isolated job/function and it will act as an abstraction for that business goal.
By continuing the separation of responsibilities all the way to the service endpoint you create small independent boundaries for scalability, throughput, fault tolerance, security, deployment, etc.
For example (RESTfully speaking), this might mean service endpoints for both Users (e.g. /users/{userid}) and Websites (e.g. /websites/{websiteid|url})... and perhaps an additional resource to maintain the relationship/link between the two (e.g. /users/{userid}/userwebsites : {websiteid:1234, url:blah.com}).
This separation would mean you can handle the website-processing responsibility independently, which would have a number of benefits (beyond just handling the different load characteristics).
I believe that the MVC mini profiler is a bit of a godsend.
I have incorporated it into a new MVC project which is targeting the Azure platform.
My question is: how do I handle profiling across server (role instance) boundaries?
Is this even possible?
I don't understand why you would need to profile these apps any differently. You want to profile how your app behaves on the production server - go ahead and do it.
A single request will still be executed on a single instance, and you'll get the data from that same instance. If you want to profile services located on a different physical tier as well, that would require a different approach, involving communication through internal endpoints, which I'm sure the mini profiler doesn't support out of the box. However, the modification shouldn't be that complicated.
If you did want to profile physically separated tiers, however, I would go about it in a different way: profile each tier independently, because that's how I would go about optimizing it. If you wrap the call to your other tier in a profiler step, you can see where the problem lies and still be able to solve it.
By default the mvc-mini-profiler stores and delivers its results using HttpRuntime.Cache. This is going to cause some problems in a multi-instance environment.
If you are using multiple instances, then some ways you might be able to make this work are:
to change the Http Cache to an AppFabric Cache implementation (or some MemCached implementation)
to use an alternative storage strategy for your profile results (the code includes SqlServerStorage as an example; see the sketch after this list)
Obviously, whichever strategy you choose will require more time/resources than just the single instance implementation.
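As a hedged sketch of the second option, here is the SqlServerStorage swap in Global.asax. The API names below are as in the StackExchange.Profiling builds (older mvc-mini-profiler builds used the MvcMiniProfiler namespace instead, and member names vary by version); the "ProfilerDb" connection string name is an assumption:

    using System.Configuration;
    using StackExchange.Profiling;
    using StackExchange.Profiling.Storage;

    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Persist profiler results to SQL Server instead of
            // HttpRuntime.Cache, so they survive recycles and are visible
            // across instances.
            MiniProfiler.Settings.Storage = new SqlServerStorage(
                ConfigurationManager.ConnectionStrings["ProfilerDb"].ConnectionString);
        }
    }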