A WCF service is configured as follows:
InstanceContextMode = InstanceContextMode.PerCall
ConcurrencyMode = ConcurrencyMode.Multiple
I am using Entity Framework 3.1. Only under load tests, and only once I reached five concurrent users, did I get an OptimisticConcurrencyException.
I will either synchronize the BLL.Update method or use ConcurrencyMode.Single. I cannot use the ClientWins or StoreWins techniques.
I will define a private static Object instance and lock on it to synchronize access to the method. How do I prevent one of the threads from being starved? Is there a way to make the locking fair? And is it a good idea to lock on a static reference?
The exception you are getting is OptimisticConcurrencyException. You are getting it because your transaction uses optimistic concurrency and two users are changing the same data.
There are at least three ways to fix it:
Design level: Why are different users changing the same data?
Database level: use a transaction scope that does not use optimistic concurrency for database access
WCF level: use ConcurrencyMode.Single for the WCF service
Your idea with the private static Object instance would have the same effect as running the WCF service in single mode.
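For illustration, a minimal sketch of that static-lock approach (the Bll class, Update signature, and Order type are placeholders, not from the original post). Note that the CLR's Monitor, which backs the lock statement, makes no fairness guarantee, so a thread can in principle be starved under sustained contention:

public class Bll
{
    // One gate shared by every PerCall service instance.
    private static readonly object UpdateGate = new object();

    public void Update(Order order) // placeholder entity type
    {
        lock (UpdateGate)
        {
            // All updates are serialized here, which avoids the
            // OptimisticConcurrencyException but also removes the
            // concurrency, just like ConcurrencyMode.Single would.
        }
    }
}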
I'm using Azure Functions with queue triggers for part of our workload. The specific function queries the database, and this creates problems with scaling: the large number of concurrent function instances pinging the database means the maximum allowed number of Azure SQL Database connections is hit constantly.
This article https://learn.microsoft.com/en-us/azure/azure-functions/manage-connections lists HttpClient as one of those resources that should be made static.
Should database access also be made static with static SqlConnection to resolve this issue, or would that cause some other problems by keeping the constant connection object?
Should database access also be made static with static SqlConnection
Definitely not. Each function invocation should open a new SqlConnection, with the same connection string, in a using block. It's not really clear how many concurrent function invocations the runtime will make to a single instance of your application, but if it's more than one, a singleton SqlConnection is a bad thing.
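A minimal sketch of that pattern (the setting name, table, and query here are placeholders):

using System;
using System.Data.SqlClient;

public static class ItemQueries
{
    public static int CountItems()
    {
        // Open a fresh connection per invocation; ADO.NET pools connections
        // with the same string, so Open() usually just leases a pooled one.
        using (var conn = new SqlConnection(
            Environment.GetEnvironmentVariable("SqlConnectionString")))
        {
            conn.Open();
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Items", conn))
            {
                // Disposing the connection returns it to the pool
                // rather than physically closing it.
                return (int)cmd.ExecuteScalar();
            }
        }
    }
}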
I wonder exactly which limit you're hitting in SQL Database: the connection limit or the concurrent request limit? In either case I'm a bit surprised (I'm not a Functions expert) that you get that many concurrent function invocations, so there might be something else going on, like leaking SqlConnections.
But reading the Functions docs, my guess is that the Functions runtime scales by launching multiple instances of your function app. Your .NET app could scale in a single process, but that's apparently not the way Functions works. Each instance of your Functions app has its own connection pool for SQL Server, and by default each pool can hold 100 connections.
Perhaps if you sharply limit the Max Pool Size in your connection string, you won't have so many connections open. When you hit the Max Pool Size, new calls to SqlConnection.Open() will block for up to 30 seconds waiting for a pooled SqlConnection to become available. So this not only limits the connection use for each instance of your application, it also throttles your throughput under load.
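For example (server, database, and credentials below are placeholders):

// Capping Max Pool Size limits how many connections each app instance
// can hold open; the ADO.NET default is 100.
var connectionString =
    "Server=tcp:myserver.database.windows.net,1433;" +
    "Database=MyDb;User ID=appUser;Password=<secret>;" +
    "Max Pool Size=20;";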
You can use the configuration settings in host.json to control the level of concurrency your functions execute at per instance and the max scaleout setting to control how many instances you scale out to. This will let you control the total amount of load put on your database.
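For instance, in a v2 function app, something like the following host.json caps how many queue messages each instance processes in parallel (the numbers are illustrative; per-instance concurrency is batchSize plus newBatchThreshold), while the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting caps the number of instances:

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 8,
      "newBatchThreshold": 4
    }
  }
}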
For future readers, the documentation has been updated with some information about the SQL connection stating:
Your function code may use the .NET Framework Data Provider for SQL Server (SqlClient) to make connections to a SQL relational database. This is also the underlying provider for data frameworks that rely on ADO.NET, such as Entity Framework. Unlike HttpClient and DocumentClient connections, ADO.NET implements connection pooling by default. However, because you can still run out of connections, you should optimize connections to the database. For more information, see SQL Server Connection Pooling (ADO.NET).
So, as David Browne already mentioned, you shouldn't make your SqlConnection static.
Can I just have one global instance of the Azure Storage client and table, or do I need a separate instance per thread?
reference: https://msdn.microsoft.com/en-us/library/azure/microsoft.windowsazure.storage.table.aspx
Each individual class should describe whether it is thread-safe or not. I know some are. I suspect some are not.
Example: CloudTable's public static methods are thread-safe but instances of the class are not.
More than likely, you'll want multiple instances of these. If you have some strange scaling issue where this is problematic, consider creating a "ClientPool" somewhat akin to a Connection Pool to lease and reuse instances.
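A very rough sketch of what such a pool could look like (a generic illustration, not an SDK type):

using System;
using System.Collections.Concurrent;

// Lease/return pool for client instances, loosely modeled on a
// connection pool. A leased instance must not be shared across threads.
public sealed class ClientPool<T>
{
    private readonly ConcurrentBag<T> _idle = new ConcurrentBag<T>();
    private readonly Func<T> _factory;

    public ClientPool(Func<T> factory)
    {
        _factory = factory;
    }

    public T Lease()
    {
        // Reuse an idle client if one exists; otherwise create a new one.
        return _idle.TryTake(out T client) ? client : _factory();
    }

    public void Return(T client)
    {
        _idle.Add(client);
    }
}

A caller would Lease() a client, use it on one thread, and Return() it when done.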
This is a best practices question.
Per this best practices article and per MSDN, the OrganizationServiceProxy is not thread safe.
If you have a multi-threaded client application in which you create an instance of an OrganizationServiceContext on a per-thread basis (its constructor accepts an IOrganizationService instance), and you pass in a global instance of the OrganizationServiceProxy (i.e., a static instance allocated once at the process level), will this cause threading issues? And if the OrganizationServiceProxy instance faults, will it affect operations that the threads try to perform on their own "local" instances of the OrganizationServiceContext?
My belief is that it will, that an OrganizationServiceProxy instance needs to be created on a per-thread basis, and that each OrganizationServiceContext in a multi-threaded application would need its own corresponding OrganizationServiceProxy instance.
I'm posting this to get confirmation of the above.
Also, the article indicates
The service proxy class performs the metadata download and user authentication by using the following class methods:

IServiceManagement<IOrganizationService> orgServiceManagement =
    ServiceConfigurationFactory.CreateManagement<IOrganizationService>(
        new Uri(organizationUrl));

AuthenticationCredentials authCredentials = orgServiceManagement.Authenticate(credentials);
By caching the service management and authenticated credential objects, your application can more efficiently construct the service proxy objects more than one time per application session
If I try to execute the above API calls manually in Active Directory authentication mode, authCredentials.SecurityTokenResponse is null, as indicated by MSDN.
Is there a way to perform the authentication just once for AD mode and pass an authenticated SecurityTokenResponse to a newly created OrganizationServiceProxy via the following constructor?
OrganizationServiceProxy(IServiceConfiguration, SecurityTokenResponse)
so that you don't have to take the authentication and metadata download hit on a per-thread basis when constructing each thread's OrganizationServiceProxy instance, and instead take the hit just once?
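In other words, the hoped-for pattern would look something like this (a sketch building on the calls quoted above; it only works when a SecurityTokenResponse is actually issued):

// Cache orgServiceManagement and authCredentials once per process,
// then, on each worker thread, build a cheap proxy from them:
using (var proxy = new OrganizationServiceProxy(
    orgServiceManagement, authCredentials.SecurityTokenResponse))
{
    // Use this proxy on the current thread only.
}
// Note: under Active Directory authentication SecurityTokenResponse is
// null, which is exactly the catch raised above.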
Yes, you will definitely have issues if you attempt multi-threaded operations on a single IOrganizationService.
We have two basic multi-threaded CRM applications: batch processors, and a web app. For the batch programs I've found it works better to have only 10 threads and to split the work among them. So if you're inserting 100,000 records, split them into 10 batches of 10,000, with a single organization service for each thread.
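That batching pattern could be sketched like this (CreateProxy() is a placeholder for however you construct an authenticated OrganizationServiceProxy, and records is assumed to be a collection of Entity objects):

const int threadCount = 10;

// Deal the records into threadCount batches, round-robin.
var batches = records
    .Select((record, index) => new { record, index })
    .GroupBy(x => x.index % threadCount, x => x.record)
    .ToList();

Parallel.ForEach(batches, batch =>
{
    // One proxy per batch, so no proxy is ever shared across threads.
    using (OrganizationServiceProxy service = CreateProxy())
    {
        foreach (var record in batch)
        {
            service.Create(record);
        }
    }
});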
We also have a website that does a lot of CRM interactions so there is no real way to batch the requests, so we created a CRM connection pool to reuse any open, already authenticated connections.
Of course this won't work at all if you're not using some system service account.
I recently saw a project that uses a background worker to perform some operations (getting data from other web services) and push the data to the client using events. The project is a WCF service consumed by an ASP.NET web site through another class library that plays the WCF client role and in turn raises events to the application. This whole multithreaded arrangement made me curious enough to examine it. I've seen that it uses basicHttpBinding, and the only behavior on the service is UseSynchronizationContext = false, which I found out was added after an unexplained exception, which is normal :)
Now I'm asking about the default ConcurrencyMode for basicHttpBinding. Shouldn't they make it Reentrant, or is that the default behavior?
Will this scenario keep failing? They already have an unexplained "object reference not set to an instance of an object" on the client side when the WCF service is down.
I believe using multithreaded operations in a WCF service consumed by an ASP.NET project, which relies on IIS request handling, is bad, because the page could be sent to the client before the WCF service returns its data to the client class library and appends it to the page.
Can you discuss the above and explain your thoughts?
When you need such an asynchronous programming style, wouldn't it be better to notify WCF consumers after the long operation using callback contracts and built-in WCF mechanisms, rather than multithreaded operations?
I need clarification to correct the design, and some proof that this is bad service architecture, if it really is, which I suspect!
Thank you.
It is not inherently bad architecture, but it sounds like it does create a number of possible pitfalls.
The WCF client library is leaving all the coordination up to the ASP.NET application. If the ASP.NET app isn't checking that a call to the WCF service has completed, then it risks using variables before they have been set with values from the service, and other such race conditions, unless it explicitly coordinates the initial call with the completion events.
My recommendation would be to rewrite the WCF client asynchronous methods to return Task objects, from the System.Threading.Tasks namespace (MSDN reference). In this way you can spin off the background processing calling the WCF service, and use the Result property of the Task to ensure the service has completed.
An example:
protected void Page_Load(object sender, EventArgs e)
{
    Task<string> t = Task<string>.Factory.StartNew(() =>
    {
        // Call the WCF client method on a background thread.
        return MyWcfClientClass.StaticAsyncMethod(MyArguments);
    });

    /* other control initialization stuff here, while the task
       and WCF call continue processing in the background */

    /* Calling Result causes the thread to wait for the task to
       complete as necessary, to ensure we have our correct value */
    MyLabel1.Text = t.Result;
}
New to MSMQ and WCF.
I want to be able to process incoming MSMQ messages at a high rate. I want to make it multithreaded (and transactional).
What is the best way of doing this? Any examples, code snippets, theories are very much welcome.
Also, how does WCF know when there is a message in the MSMQ? Or would I have to create a Windows service that polls the MSMQ and then, for each message found, starts a new thread and invokes the WCF service, passing the message to it?
What is the best way?
Many thanks
The answer here was to use WCF and create a data contract with service known types.
These known types are the objects the service expects from the queue being read.
To make it multithreaded and transactional, not only does the queue need to be transactional, but you must also decorate the service with this attribute:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.PerSession, ReleaseServiceInstanceOnTransactionComplete = false)]
The InstanceContextMode is PerSession by default.
You also need to set up the bindings in your config file.
example: http://msdn.microsoft.com/en-us/library/ms751493.aspx
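Putting those pieces together, a service along these lines would process queued messages concurrently and transactionally (the contract, method, and known type are illustrative, not from the original post):

using System.ServiceModel;

[ServiceContract]
[ServiceKnownType(typeof(OrderPlaced))] // a known type expected off the queue
public interface IQueueProcessor
{
    [OperationContract(IsOneWay = true)] // queued operations must be one-way
    void Process(object message);
}

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple,
                 InstanceContextMode = InstanceContextMode.PerSession,
                 ReleaseServiceInstanceOnTransactionComplete = false)]
public class QueueProcessor : IQueueProcessor
{
    // TransactionScopeRequired enlists the dequeue in the transaction:
    // if Process throws, the message rolls back onto the transactional queue.
    [OperationBehavior(TransactionScopeRequired = true,
                       TransactionAutoComplete = true)]
    public void Process(object message)
    {
        // Handle the strongly typed message here.
    }
}

public class OrderPlaced { /* illustrative known type */ }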