ColdFusion singleton object pool - multithreading

In our ColdFusion application we have stateless model objects.
All the data I need can be fetched with a single method call (which calls other methods internally without saving any state).
Methods usually ask the database for the data. All methods are read only, so I don't have to worry about thread safety (please correct me if I'm wrong).
So there is no need to instantiate objects at all. I could call them statically, but ColdFusion doesn't have static methods - calling the method would mean instantiating the object first.
To improve performance I have created singletons for every Model object.
So far it works great - each object is created once and then accessed as needed.
Now my worry is that all requests for data go through only one model object.
Should I be worried? Say my object has a method getOfferData() that is time-consuming.
What if a couple of clients want to call it at the same time?
Will the second one wait for the first request to finish, or is it executed in a separate thread?
It's the same object after all.
Should I implement some kind of object pool for this?

The singleton pattern you are using won't cause the problem you are describing. If getOfferData() is still running when another request calls the same function, the second call will not queue unless one of the following applies:
- You use cflock to take an exclusive lock
- Your database connections queue up because of locking / transactions
- You have so much running that you exhaust the concurrent request threads available to ColdFusion
So the way you are going about it is fine.
Hope that helps.

Related

Delphi/Indy multithreading Server

I am trying to make my app multithreaded. What I want to achieve is:
- Receive a command via TIdHTTPServer
- Execute a local action (which might involve using TIdHTTP to send/receive data to/from other services)
- Return the execution result to the original caller
Since I am pretty new to multithreading, I would like to know if my design idea is correct:
TMsgHandler = class(TThread)
In TIdHTTPServer.OnCommandGet I create a new instance of TMsgHandler and pass it ARequestInfo and AResponseInfo.
TMsgHandler.Execute interprets the data.
Can TMsgHandler.Execute use objects (descendants of TIdHTTP) in my main thread to communicate with other services?
TMsgHandler sends the answer back through AResponseInfo and terminates.
Will this work?
This is not the correct design.
TIdHTTPServer is a multi-threaded component. Its OnCommand... events are fired in the context of worker threads that Indy creates for you.
As such, you do not need to derive your TMsgHandler from TThread. Do your TIdHTTP work directly in the context of the OnCommand... thread instead. A response will not be sent back to the client until your event handler exits (unless you send one manually). However, you should not share a single TIdHTTP from the main thread (unless you absolutely need to, in which case you would need to synchronize access to it). You should create a new TIdHTTP dynamically in your OnCommand.../TMsgHandler code as needed.

Is there a recommended approach for object creation in Node.js?

I know that in PHP objects are created for each request and destroyed when the processing is finished.
And in Java, depending on configuration, objects can remain in memory and be either associated with a single user (through server sessions) or shared between multiple users.
Is there a general rule for this in Node.js?
I see many projects instantiating all app objects in the entry script, in which case they will be shared between requests.
Others will keep object creation inside functions, so AFAIK objects are destroyed after processing each request.
What are the downsides of each approach? Obviously, things like memory usage and information sharing should be considered, but are there any other things specific to Node.js that we should pay attention to?
Javascript has no such thing as objects that are tied to a given request. The language is garbage collected and all objects are garbage collected when there are no more references to them and no code can reach them. This has absolutely nothing to do with request handlers.
so AFAIK objects are destroyed after processing each request.
No. The lifetime of objects in Javascript has absolutely nothing to do with requests.
Instead, think of function scopes. If you create an object in a request handler and use it in that request handler and don't store it somewhere that creates a long lasting reference to the object, then just like ANY other function in Javascript, when that request handler function finishes and returns and has no more asynchronous operations still in-flight, then any objects created within that function that are not stored in some other scope will be cleaned up by the garbage collector.
It is the exact same rules for a request handler as it is for any other function call in the language.
So, please forget anything you know about PHP as its request-specific architecture will only mess you up in Javascript/node.js. There is no such thing in node.js.
Instead, think of a node.js server as one, long running process with a garbage collector. All objects that are created will be garbage collected when they are no longer reachable by live code (e.g. there are no live references to them that any code can get to). This is the same whether the object is created at startup of the server, in a request handler on the server, in a recurring timer on the server or any other event on the server. The language has one garbage collector that works the same everywhere and has no special behavior for server requests.
The usual way to do things in a node.js server is to create objects that are local variables in the request handler function (or in any functions that it calls), or maybe even occasionally assigned as properties of the request or response objects (middleware will often do this). Since everything is scoped to a function call in the request chain, when that function call is done the things you created as local variables in those functions become eligible for garbage collection.
In general, you do not use many higher scoped variables outside the request handler except for purposeful long term storage (session state, database connections, or other server-wide state).
Is there a general rule for this in Node.js?
Not really in the sense you were asking since Javascript is really just about the scope that a variable is declared in and then garbage collection from there, but I will offer some guidelines down below.
If data is stored in a scope higher than the request handler (module scope or global scope), then it probably lasts for a long time because there is a lasting reference that future request handlers can access so it will not be garbage collected.
If objects are created and used within a request handler and not attached to any higher scope, then they will be garbage collected by the language automatically when the function is done executing.
Session frameworks typically provide a specific mechanism for storing server-side state that persists on the server on a per-user basis. A popular node.js session manager, express-session, does exactly this. There, you follow the rules of the session framework for how to store or remove data from each user's session. This isn't really a language feature so much as a specific library built in the language. Even the session manager relies on the garbage collector: data persists in the session manager when desired because there are lasting references to it that make it available to future requests.
node.js has no such thing as "per-user" or "per-request" data built into the language or the environment. A session manager builds "per-user" data artificially by making persistent data that can be requested or accessed on a per-user basis.
Some general rules for node.js:
Decide, in your head and in your design, which data is local to a specific request handler, which data is meant for long-term storage, and which data is meant for user-specific sessions. You should be very clear about that.
Don't ever put request-specific variables into any higher scope that any other request handler can access unless they are purposeful shared variables that are meant to be accessed by multiple requests. Accidentally sharing variables between requests creates concurrency issues, race conditions and very hard-to-track-down server bugs, as one request may write to that variable while doing its work and then another request may come along and also write to it, trouncing what the first request was working on. Keep these kinds of request-specific variables local to the request handler (local to the function for the request handler) so that this can never happen.
If you are storing data for long-term use (beyond the lifetime of a specific request), which would generally mean storing it in a module-scoped variable or a global variable (you should generally avoid globals), then be very, very careful about how the data is stored and accessed to avoid race conditions or inconsistent state that might mess up some other request handler reading/writing that data. node.js makes this simpler because it runs your Javascript single-threaded, but once your request handler makes some sort of asynchronous call (like a database call), other request handlers get to run, so you have to be careful about modifications to shared state across asynchronous boundaries.
I see many projects instantiating all app objects in the entry script, in which case they will be shared between requests.
In the example of a web server using the Express framework, there is one app object that all requests have access to. The only request-specific variables are the request and response objects that are created by the web server framework and passed into your request handler. Those will be unique to each new request. All other server state is accessible by all requests.
What are the downsides of each approach?
If you're asking for a comparison of the Apache/PHP web server model to the node.js/Express web server model, that's a really giant question. They are very different architectures and the topic has been widely discussed and debated before. I'd suggest you do some searching on that topic, read what has been previously written and then ask a more specific question about things you don't quite understand or need some clarification on.

Is ASP.NET Core Session implementation thread safe?

I know that there is an analogous question, but it is about ASP.NET rather than ASP.NET Core. The answers there are 7-9 years old and mix discussion of ASP.NET and ASP.NET Core, which may not be a good idea.
What I mean by thread safe in this case:
Is it safe to use the read/write methods (like Set(...)) of the Session (accessed via HttpContext, which is accessed via an injected IHttpContextAccessor) in multiple requests belonging to the same session?
The obvious answer would be yes, because if it were not safe, then every developer would have to make their session-accessing code thread safe...
I took a look at the DistributedSession source code, which seems to be the default (my session in the debugger, accessed as described above, is an instance of DistributedSession), and found no trace of any synchronization techniques such as locks... even the private _store member is a plain Dictionary...
How could this be thread safe for concurrent modification? What am I missing?
DistributedSession is created by DistributedSessionStore which is registered as a transient dependency. That means that the DistributedSessionStore itself is implicitly safe because it isn’t actually shared between requests.
The session uses a dictionary as the underlying data store, which is also local to the DistributedSession object. The _store dictionary is initialized lazily, when the session is first accessed, by deserializing the data held in the cache. That looks like this:
var data = _cache.Get(_sessionKey);
if (data != null)
{
    Deserialize(new MemoryStream(data));
}
So the access to _cache here is a single operation. The same applies when writing to the cache.
As for IDistributedCache implementations, you can usually expect them to be thread-safe to allow parallel access. The MemoryCache for example uses a concurrent collection as the backing store.
What all this means for concurrent requests is basically that you should not expect one request to directly affect the session another request sees. Sessions are usually deserialized only once per request, so updates made in the meantime (by other requests) will not appear.
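To make this concrete, here is a minimal sketch, assuming session middleware is already configured (AddDistributedMemoryCache/AddSession/UseSession); the route, key name and counter logic are illustrative, not taken from the question:

// Minimal sketch of two concurrent requests in the same session, each working
// on its own deserialized copy of the session dictionary.
app.MapGet("/increment", async context =>
{
    // The first access deserializes this request's own copy of the session data.
    var counter = context.Session.GetInt32("counter") ?? 0;

    // A concurrent request in the same session reads its own copy too,
    // so neither request sees the other's write.
    context.Session.SetInt32("counter", counter + 1);

    // The whole dictionary is serialized back to the distributed cache at the end
    // of the request; if both requests read 0, the stored value ends up as 1, not 2.
    await context.Response.WriteAsync($"counter = {counter + 1}");
});

In other words, the safety comes from each request operating on its own copy; what you give up is read-modify-write consistency across concurrent requests in the same session.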

WCF - spawn a new worker thread and return to the caller without waiting for it to finish

I have a WCF web service hosted in IIS. This service has a method - let's call it DoSomething() - which is called from a client-side application.
DoSomething() performs some work and returns the answer to the user. Now I need to log how often DoSomething() is called. I could add code to DoSomething() so that every call writes to an SQL database and updates a counter, but this would slow the method down because the user has to wait for the extra database call.
Is it a good option to let DoSomething() spawn a new thread that updates the counter in the database, and then return the answer to the user without waiting for the thread to finish? I will not know if the database update fails, but that is not critical.
Are there any problems with spawning a new background thread and not waiting for it to finish in WCF? Or is there a better way to solve this?
Update: To ask the question in a slightly different way - is it a bad idea to spawn new threads inside a WCF web service method?
The main issue is one of reliability. Is this a call you care about? If the IIS process crashes after you return the response but before your thread completes, does it matter? If not, then you can use in-process C# tools. If it does matter, then you must use a reliable queuing technology.
If you go the in-process route, then spawning a new thread just to block on a DB call is never the correct answer. What you want is to make the call asynchronous, and for that you use SqlCommand.BeginExecuteNonQuery after ensuring that AsynchronousProcessing is enabled on the connection.
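A minimal sketch of that approach, assuming a connectionString variable and an illustrative counter table (neither is from the question):

// Sketch only: AsynchronousProcessing must be enabled for the BeginExecute* methods.
var builder = new SqlConnectionStringBuilder(connectionString) { AsynchronousProcessing = true };
var connection = new SqlConnection(builder.ConnectionString);
connection.Open();
var command = new SqlCommand(
    "UPDATE CallCounters SET Hits = Hits + 1 WHERE Method = 'DoSomething'", connection);

// Begin the write and return to the caller immediately; the callback only
// completes the operation and releases the connection.
command.BeginExecuteNonQuery(asyncResult =>
{
    try { command.EndExecuteNonQuery(asyncResult); }
    catch (SqlException) { /* a failed counter update is not critical, per the question */ }
    finally { connection.Dispose(); }
}, null);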
If you need reliable processing then you can use a pattern like Asynchronous procedure execution which relies on persisted queues.
As a side note, things like logging and hit counts become a huge performance bottleneck if done the naive way, i.e. writing to the database on every single HTTP request. You must batch and flush.
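A rough sketch of what batch-and-flush can look like for a hit counter (the class name, interval and flush target are illustrative, not part of the original answer):

using System;
using System.Threading;

// Sketch: accumulate hits in memory and write them out once per interval
// instead of once per request. The actual database write is stubbed out.
public static class HitCounter
{
    private static int _pending;
    private static readonly Timer FlushTimer =
        new Timer(_ => Flush(), null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));

    // Called from DoSomething(); cheap and thread safe.
    public static void Record() => Interlocked.Increment(ref _pending);

    private static void Flush()
    {
        // Atomically take the accumulated count so concurrent Record() calls are not lost.
        var count = Interlocked.Exchange(ref _pending, 0);
        if (count == 0) return;

        // One database write per interval, e.g.
        // UPDATE CallCounters SET Hits = Hits + @count WHERE Method = 'DoSomething'
    }
}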
If you only want to track a single method like DoSomething() in the service, then you can create a custom operation behavior and apply it to that method.
The operation behavior will contain the code that logs the info to the database. Inside that operation behavior you can use .NET 4.0's Task Parallel Library (TPL) to create a task that takes care of the database logging. If you use the TPL you don't need to worry about creating threads directly.
The advantage of using an operation behavior is that if tomorrow you need to track another method, then instead of duplicating the code you simply mark that method with the custom operation behavior. If you want to track all the methods, you should go for a service behavior instead.
To learn more about operation behaviors, see http://msdn.microsoft.com/en-us/library/system.servicemodel.operationbehaviorattribute.aspx
To learn more about the TPL (Task Parallel Library), see http://msdn.microsoft.com/en-us/library/dd460717.aspx
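As a rough sketch of the TPL part, shown inline in the service method for brevity (ComputeAnswer and LogCallToDatabase are hypothetical helpers; in the operation-behavior version this code would live in the behavior's invoker rather than in the method itself):

public string DoSomething(string input)
{
    var answer = ComputeAnswer(input);   // the work the caller actually waits for

    // Queue the database logging on the thread pool and return without waiting for it.
    Task.Factory.StartNew(() => LogCallToDatabase("DoSomething", DateTime.UtcNow))
        .ContinueWith(
            t => Trace.TraceWarning("Logging failed: {0}", t.Exception),
            TaskContinuationOptions.OnlyOnFaulted);   // a lost log entry is acceptable here

    return answer;
}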

Async thread from WCF RESTful Service

We have created a WCF RESTful service for a WPF (UI) application. The UI sends a request to the WCF service, which invokes a suitable method in the BLL, which in turn invokes a method in the DAL. All these layers are separated using IoC/DI.
Now, for a new feature, we want that when a new object of a certain type is added to the database, it should go through 3 steps performed in a separate thread.
That is, when the service asks the BLL to add a new object OBJ to the database, the BLL should save the object to the database through the DAL and then start a new thread to perform some actions on the object without blocking the WCF request.
But whenever we try to do this by starting a new thread in the BLL, the application crashes. This is because the 'InRequestScope' database context object has already been disposed, so the thread cannot update the database. Also, the WCF request does not end until the thread completes, even though the return value has been provided and the BLL method has finished executing.
Any help would be much valued.
I have figured out the solution and the explanation for this behavior. It turns out to be a rather silly one.
Since I was creating a thread from the BLL (with IsBackground = true), the parent thread (originated by the service request) waited for this thread to end, and only when both threads had finished was the response sent back to the client. The solution: use a BackgroundWorker instead - no rocket science, just common sense.
As for the disposal of the context: the objects were InRequestScope and the request had ended, so every time a repository required a UnitOfWork (uow/context) it would create a new context and dispose of it as soon as the database call completed. The solution was to create one uow instance, store it in a variable, pass it to each repository that needed it, and force all repositories to use that same uow instance rather than each creating its own.
This seems more of a client-side concern than a service-side concern. Why not have the client make asynchronous requests to the WCF service? That automatically provides multi-threaded access to the service.
The built-in System.Net.WebClient (since you're accessing a webHttpBinding or WCF Web API endpoint) can be used asynchronously. This blog post gives a quick overview of how that is done. Although this MSDN article seems to apply to file I/O, about three quarters of the way down there is a detailed explanation of coding asynchronous WebClient usage.
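A minimal sketch of the asynchronous WebClient usage from the WPF client (the URL and the result handling are illustrative):

// Sketch only: DownloadStringCompleted is raised back on the UI thread in a WPF app,
// so the call never blocks the UI.
var client = new WebClient();
client.DownloadStringCompleted += (sender, e) =>
{
    if (e.Error == null)
        resultTextBlock.Text = e.Result;   // hypothetical UI element showing the response
};
client.DownloadStringAsync(new Uri("http://localhost/MyService/objects/123"));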
