C#, Linq2Sql, TransactionScope, "Transaction has aborted.", msdtc, sqlserver 2005 - c#-4.0

we're having issues with TransactionScope in a .NET 4 project.
We have segmented our DAL's into domains, that is we have different Linq2Sql DataContexts pointing to the same database.
The issue arises when, within the same TransactionScope, we insert/update on more than one DataContext: an MSDTC transaction immediately appears, both locally and on the server, and then hangs for 1-2 minutes (presumably a timeout). The code then continues to run until t.Complete() and the subsequent implied .Dispose() throw an exception: "Transaction has aborted.".
We have configured MSDTC both locally and on the server to allow everything, with no authentication and full trace levels, yet no relevant information shows up in dtctrace.log.
I guess it is standard procedure for MSDTC to kick in when more than one database connection is initiated (even if it is against the same database), but why the timeout? The operations are not conflicting, and there is no possible way for a deadlock to occur in our domain.
Have googled and tested extensively, hope for some seasoned experience here :)

With SQL Server 2005, any transaction spanning multiple connections will be escalated to DTC. With SQL Server 2008, several connections with the same connection string can participate in the same transaction without the need for DTC. With the architecture you've chosen I'd strongly suggest upgrading to SQL Server 2008 if that is an option. DTC can be a pain to get working correctly.
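If upgrading is not immediately an option, one workaround is to make all the DataContexts share a single open connection, so the transaction never spans more than one physical connection and is not promoted to DTC. A minimal sketch, assuming Linq2Sql contexts generated with the usual IDbConnection constructor (the context, entity and connection-string names are placeholders):

```csharp
using System.Data.SqlClient;
using System.Transactions;

// Sketch: avoid DTC escalation even on SQL Server 2005 by letting every
// DataContext share one open SqlConnection. CustomersDataContext,
// OrdersDataContext and the entities are placeholders for your own
// generated contexts.
public static class SingleConnectionExample
{
    public static void InsertAcrossContexts(string connectionString)
    {
        using (var scope = new TransactionScope())
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            using (var customers = new CustomersDataContext(connection))
            using (var orders = new OrdersDataContext(connection))
            {
                customers.Customers.InsertOnSubmit(new Customer { Name = "Test" });
                customers.SubmitChanges();

                orders.Orders.InsertOnSubmit(new Order { CustomerName = "Test" });
                orders.SubmitChanges();
            }

            scope.Complete(); // one physical connection -> no promotion to MSDTC
        }
    }
}
```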

Related

Is SearchLight Package using Connection Pools?

SearchLight is a Julia package that provides the ORM (object-relational mapping) layer in Genie (a web development framework in Julia).
Right now I am building a website backend and I decided to use SearchLight for data storage, because someone told me to use an ORM instead of "just" writing SQL queries - sorry, I am so not a web developer and know basically nothing.
Unfortunately, SearchLight is (i) unbelievably badly documented (i.e. in most cases not at all) and (ii) missing important functionalities, like setting foreign keys in a DB table. I found several work-arounds, even if they are not pretty and kind of counter the ORM idea.
Actual Question
Does SearchLight use connection pools?
The web service I'm building receives a JSON request, queries the database, gets the result and sends it back to the client. This might take several seconds, and multiple clients making requests should not "stand in line" and wait for other requests to finish. Will SearchLight do that, i.e. open several connections and use them when they're needed? (Or does it do something else that leads to what I want?)

MongoDB - how to make Replica set Step down truly seamless

The problem is that when your Replica Set is forced to step down while your application is running, all mainstream Mongo clients will throw at least one exception per connection. This happens because their database connections are hardwired to the physical server which used to be the primary and no longer accepts queries. So, while MongoDB architects might think that the step-down process does not create any downtime, in reality, if you handle connections according to their documentation, each step down will cause a full-blown crash for at least one user, and might even create a data integrity issue. I hope this can be avoided with a simple wrapper that captures some specific Mongo exceptions and handles them by automatically re-connecting to the Replica Set and re-running the failed query. If you already have a solution for this, please share! I am particularly interested in a solution that works with any major Mongo driver for Node.JS.
You are correct -- this is the exact behavior I experienced with both mainstream ODMs as well as the official native MongoDB driver for Node.js.
Replica set step-downs would cause my outstanding queries to fail with "Could not locate any valid servers in initial seed list", "sockets closed", and "ECONNRESET" before additional queries would get buffered up even though bufferMaxEntries is correctly configured.
Therefore, I developed Monkster to provide seamless replica set step-down and overall high-availability for MongoDB clusters for Node.js developers using the popular Monk ODM.
Monkster is a Node.js package that provides high availability for Monk, the wise MongoDB API. It implements smart error handling and retry logic to handle temporary network connectivity issues and replica set step-downs seamlessly.
https://www.npmjs.com/package/monkster
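The wrapper pattern itself is not tied to any particular driver. As a rough sketch of the idea in C# with the official MongoDB .NET driver (the exception types caught and the back-off are assumptions for illustration, not Monkster's actual logic):

```csharp
using System;
using System.Threading.Tasks;
using MongoDB.Driver;

// Sketch of the retry pattern described above: catch the exceptions a
// replica-set step-down typically surfaces, back off briefly, re-run the query.
// The exception list and delays are illustrative assumptions.
public static class MongoRetry
{
    public static async Task<T> ExecuteAsync<T>(Func<Task<T>> operation, int maxAttempts = 5)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (MongoConnectionException) when (attempt < maxAttempts)
            {
                // The connection was hard-wired to the old primary; wait and retry.
                await Task.Delay(TimeSpan.FromSeconds(attempt));
            }
            catch (MongoNotPrimaryException) when (attempt < maxAttempts)
            {
                await Task.Delay(TimeSpan.FromSeconds(attempt));
            }
        }
    }
}
```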

Azure Websites and stateful webApp

I have a naïve version of a PokerApp running as an Azure Website.
The server stores in its memory the state of the tables, (whose turn it is, blinds value, cards…) etc.
The problem here is that I don't know how much I can rely on the WebServer's memory to be "permanent". A simple restart of the server would cause that memory to be lost and therefore all the games in progress before the restart would get lost / cause trouble.
I've read about using Table Storage to keep session data and share it between instances, but in my case it's not just a string of text that I want to share but, for example, a Lobby object which contains all the info associated with the games.
This is very roughly the structure of the object I have in memory
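A purely hypothetical sketch of such a Lobby object (every type and property name below is illustrative, not the actual structure):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch only; the real structure is not shown in the question.
[Serializable]
public class Lobby
{
    public List<Table> Tables { get; set; }
}

[Serializable]
public class Table
{
    public List<Player> Players { get; set; }
    public int SeatToAct { get; set; }          // whose turn it is
    public decimal SmallBlind { get; set; }
    public decimal BigBlind { get; set; }
    public List<Card> CommunityCards { get; set; }
}

[Serializable]
public class Player
{
    public string Name { get; set; }
    public decimal Chips { get; set; }
    public List<Card> HoleCards { get; set; }
}

[Serializable]
public class Card
{
    public int Rank { get; set; }
    public char Suit { get; set; }
}
```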
After some of your comments, you can see that the object that needs to be stored is quite big and is being modified almost constantly. I don't know how well serializing and deserializing is going to work for me here...
Should I consider an azure VM which I'm hoping is going to have persistent memory instead of a Website?
Or is there a better approach to achieve something like this?
Thanks all for the answers and comments, you've made it clear that one can't rely on local memory when working on the cloud.
I'm going to do some refactoring and optimize the "state" object and then use a caching service.
Two questions come to my mind though, and once you throw some light on these I promise I will shut up and accept #astaykov's great answer.
CONCURRENCY AT INSTANCE LEVEL - I have classic thread locks in my app to avoid concurrency problems, so I'm hoping there is something equivalent in the caching services you propose?
Also, I have a few timers per table (increase blinds, number of seconds the players have to act…). Let's say a user has just folded a hand; he's finished interacting with the state object, so I update the cache. While that state object (to which the timers belong) is cached, my timers will stop ticking…
I know I'm not explaining myself very well here but I hope you guys see my point.
I'd suggest using the Azure Redis Cache.
Here is a nice sample of how to build an MVC app with Redis Cache in 15 minutes.
You can, of course, use the Azure Managed Cache instead, or end up with Azure Tables. And Azure Tables can hold much more than just a string. But I believe the caching solutions would have lower latency in communication.
Either way, your objects have to be serializable. And yes - the objects will get serialized/deserialized on every access. You can do it manually, or let the framework do it for you. From what I've read, Newtonsoft.Json is a good, well-optimized JSON serializer/deserializer.
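For example, a minimal sketch of keeping the state in Azure Redis Cache with StackExchange.Redis and Json.NET (the connection string, cache key and the Lobby type are placeholders):

```csharp
using Newtonsoft.Json;
using StackExchange.Redis;

// Sketch: persist the lobby state in Azure Redis Cache instead of local memory,
// serializing it with Json.NET. Connection string, key and Lobby are placeholders.
public static class LobbyCache
{
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("mycache.redis.cache.windows.net:6380,ssl=True,password=<key>");

    public static void Save(Lobby lobby)
    {
        IDatabase cache = Redis.GetDatabase();
        cache.StringSet("lobby:1", JsonConvert.SerializeObject(lobby));
    }

    public static Lobby Load()
    {
        IDatabase cache = Redis.GetDatabase();
        string json = cache.StringGet("lobby:1");
        return json == null ? null : JsonConvert.DeserializeObject<Lobby>(json);
    }
}
```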
UPDATE
As for your question about a VM running in the cloud - a VM will be restarted sooner or later! The application pool will recycle, planned maintenance will occur, unplanned maintenance will occur, a hard disk will fail, a memory module will fail, an unforeseen disaster will happen.
Only one thing is for sure - if you want your data to survive server crashes, change the way you think about and design software, and take the data out of (local) memory. Or just live with the fact that the application may lose state sometimes.
Second update - for the clocks
Well, you have to use your imagination and experience here. I would question whether your clocks work at all in the context of an ASP.NET app (unless all of them are static properties of a static type, which would be a little hell). My approach would be to heavily extend the app to the client side as well (JavaScript). There are a lot of great frameworks out there - SignalR, AngularJS, KnockoutJS - none of them to be underestimated! By extending your object model to the client, you can maintain the player objects' lifetime on the client (keeping the clocks ticking) and send updates from the client to the server for all those events. If you take a look at SignalR, you can keep real-time communication between multiple clients (say, players) and the server. And the server side of SignalR scales out nicely with Azure Service Bus and even Redis.
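As a rough illustration of that direction, a minimal SignalR hub sketch (the hub, group and method names are made up for the example):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Sketch: a SignalR hub that pushes table events to all connected players,
// so per-player clocks can run in the browser. Names and payloads are illustrative.
public class TableHub : Hub
{
    public Task JoinTable(string tableId)
    {
        return Groups.Add(Context.ConnectionId, tableId);
    }

    // Called from the client when a player acts (fold, call, raise...).
    public void PlayerActed(string tableId, string playerName, string action)
    {
        // Update the shared state (e.g. in the cache) here, then notify everyone
        // at the table so their client-side clocks restart for the next player.
        Clients.Group(tableId).playerActed(playerName, action);
    }
}
```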

Azure Web Site Migrations & Concurrency

I have two Azure Websites set up - one that serves the client application with no database, another with a database and WebApi solution that the client gets data from.
I'm about to add a new table to the database and populate it with data using a temporary Seed method that I only plan on running once. I'm not sure what the best way to go about it is though.
Right now I have the database initializer set to MigrateDatabaseToLatestVersion and I've tested this update locally several times. Everything seems good to go but the update / seed method takes about 6 minutes to run. I have some questions about concurrency while migrating:
What happens when someone performs CRUD operations against the database while business logic and tables are being updated in this 6-minute window? I mean - the time between when I hit "publish" from VS, and when the new bits are actually deployed. What if the seed method modifies every entry in another table, and a user adds some data mid-seed that doesn't get hit by this critical update? Should I lock the site while doing it just in case (far from ideal...)?
Any general guidance on this process would be fantastic.
Operations like creating a new table or adding new columns should have only minimal impact on performance and be transparent, especially if the application applies the recommended pattern for dealing with transient faults (for instance by leveraging the Enterprise Library).
Mass updates or reindexing could cause contention and affect the application's performance or even cause errors. Depending on the case, transient fault handling could work around that as well.
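For reference, a minimal sketch of wrapping a data access call in a retry policy from the Transient Fault Handling Block (the retry values and the data access call are placeholders):

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

// Sketch: retry a database call on transient errors during the upgrade window
// instead of surfacing them to the user. Retry values and LoadCustomers() are
// placeholders.
public static class TransientRetryExample
{
    public static void Run()
    {
        var strategy = new FixedInterval(5, TimeSpan.FromSeconds(2));
        var retryPolicy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(strategy);

        retryPolicy.ExecuteAction(() => LoadCustomers()); // placeholder data access call
    }

    private static void LoadCustomers() { /* your query here */ }
}
```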
Concurrent modifications to data that is being upgraded could cause problems that would be more difficult to deal with. These are some possible approaches:
Maintenance window
The simplest and safest approach is to take the application offline, back up the database, upgrade the database, update the application, test, and bring the application back online.
Read-only mode
This approach avoids making the application completely unavailable, by keeping it online but disabling any feature that changes the database. The users can still query and view data while the application is updated.
Staged upgrade
This approach is based on carefully planned sequences of changes to the database structure and data and to the application code so that at any given stage the application version that is online is compatible with the current database structure.
For example, let's suppose we need to introduce a "date of last purchase" field to a customer record. This sequence could be used:
Add the new field to the customer record in the database (without updating the application). Set the new field default value as NULL.
Update the application so that for each new sale, the date of last purchase field is updated. For old sales the field is left unchanged, and the application at this point does not query or show the new field.
Execute a batch job on the database to update this field for all customers where it is still NULL. A delay could be introduced between updates so that the system is not overloaded.
Update the application to start querying and showing the new information.
There are several variations of this approach, such as the concept of "expansion scripts" and "contraction scripts" described in Zero-Downtime Database Deployment. This could be used along with feature toggles to change the application's behavior dynamically as the upgrade stages are executed.
New columns could be added to records to indicate that they have been converted. The application logic could be adapted to deal with records in the old version and in the new version concurrently.
The Entity Framework may impose some additional limitations in the options, because it generates the SQL statements on behalf of the application, so you would have to take that into consideration when planning the stages.
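For the first step of the example above (adding the nullable "date of last purchase" column), an Entity Framework code-first migration could look roughly like this; the migration class, table and column names are illustrative:

```csharp
using System.Data.Entity.Migrations;

// Sketch of the "expansion" step: add the new column as nullable so the
// currently deployed application keeps working against the upgraded schema.
public partial class AddDateOfLastPurchase : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Customers", "DateOfLastPurchase", c => c.DateTime(nullable: true));
    }

    public override void Down()
    {
        DropColumn("dbo.Customers", "DateOfLastPurchase");
    }
}
```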
Staging environment
Changing the production database structure and executing mass data changes is risky business, especially when it must be done in a specific sequence while data is being entered and changed by users. Your options to revert mistakes can be severely limited.
It would be necessary to do extensive testing and simulation in a separate staging environment before executing the upgrade procedures on the production environment.
I agree with the maintenance window idea from Fernando. But here is the approach I would take given your question.
Make sure your database is backed up before doing anything (I am assuming it's SQL Azure)
Put up a maintenance page on the Client Application
Run the migration against your database via Visual Studio (I am assuming you are doing this through the console) or via a unit test
Publish the website / Web API sites
Verify your changes.
The main thing about working with the seed method via Entity Framework is that it's easy to get it wrong, and without a proper backup while running against Prod you could get yourself into trouble real fast. I would probably run it through your test database/environment first (if you have one) to verify that what you want is happening.

Recurring Timeout on Sql-Azure

On our system, which is implemented as a web role that uses a SQL Azure database, we are experiencing recurring timeouts on a specific query.
These timeouts occur for a few hours during the day and then do not show up anymore.
The query joins two tables, neither of which has a very high number of rows (about 800,000), using primary keys.
The execution plan is ok, the indexes are used properly, the query normally takes two seconds to be performed.
Tests without EntityFramework give the same result.
Transient fault handling is not applicable in the case of a timeout.
What can be the cause of this behavior?
We have experienced similar issues in the past using SQL Azure; frequently queries running against tables with fewer than 10 rows, and even the standard .NET membership provider queries, all failed intermittently with timeouts. This usually happens when we have little to no activity on our service, mostly at night.
In commonly used areas where it is safe to retry on a SQL timeout (usually read operations), we have added the timeout exception to our custom error detection strategy, taken from the Transient Fault Handling Block; however, as you stated, this is not appropriate in most cases.
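A minimal sketch of what such a custom detection strategy might look like (the exact checks we use are not shown; treating SqlException number -2 as the timeout marker is an assumption of this sketch):

```csharp
using System;
using System.Data.SqlClient;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

// Sketch: treat SQL timeouts as transient, in addition to the standard
// SQL Azure transient error checks, so safe read operations can be retried.
public class SqlAzureWithTimeoutDetectionStrategy : ITransientErrorDetectionStrategy
{
    private readonly SqlDatabaseTransientErrorDetectionStrategy _inner =
        new SqlDatabaseTransientErrorDetectionStrategy();

    public bool IsTransient(Exception ex)
    {
        // SqlException error number -2 is a client-side timeout.
        var sqlEx = ex as SqlException;
        if (sqlEx != null && sqlEx.Number == -2)
            return true;

        return _inner.IsTransient(ex);
    }
}
```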
The best explanation we have received from Azure support thus far is that, as SQL Azure is really a shared SQL Server instance used by multiple clients, one user performing an intensive operation can affect other users in this way. However, believing this not to be acceptable, we are still in contact with SQL Azure support to ascertain why throttling is not stopping this sort of activity from affecting us.
Your best bet is to:
Contact SQL Azure support, either through the forums or directly (if you have a support package)
If possible, try setting up a new SQL Azure instance and migrating your database across
Whilst we get this issue intermittently on one SQL Azure instance, we have never experienced it on our other two instances.
As a side note; we are still waiting on Azure Support to get back to us regarding why we were still receiving timeout exceptions.