I'm having a little trouble getting a multi-delete transaction working using SubSonic in an ASP.NET/SQL Server 2005 environment. It seems it always makes the change(s) in the database even without a call to the Complete() method on the TransactionScope object.
I've been reading through the posts regarding this and have tried various alternatives (switching the order of my using statements, using DTC, not using DTC, etc.) but no joy so far.
I'm going to assume it's my code that's the problem, but I can't spot the issue - is anyone able to help? I'm using SubSonic 2.2. Code sample below:
using (TransactionScope ts = new TransactionScope())
{
    using (SharedDbConnectionScope sts = new SharedDbConnectionScope())
    {
        foreach (RelatedAsset raAsset in relAssets)
        {
            // grab the asset id:
            Guid assetId = new Select(RelatedAssetLink.AssetIdColumn)
                .From<RelatedAssetLink>()
                .Where(RelatedAssetLink.RelatedAssetIdColumn).IsEqualTo(raAsset.RelatedAssetId)
                .ExecuteScalar<Guid>();

            // step 1 - delete the related asset:
            new Delete().From<RelatedAsset>().Where(RelatedAsset.RelatedAssetIdColumn).IsEqualTo(raAsset.RelatedAssetId).Execute();

            // more deletion steps...
        }

        // complete the transaction:
        ts.Complete();
    }
}
The order of your using statements is correct (I remember the order myself with this trick: the connection needs to know about the transaction when it is created, and it does that by checking `System.Transactions.Transaction.Current`).
One hint: you don't need the nested braces, and you don't need to keep a reference to the SharedDbConnectionScope. This looks far more readable:
using (var ts = new TransactionScope())
using (new SharedDbConnectionScope())
{
    // some db stuff
    ts.Complete();
}
Anyway, I don't see why this shouldn't work.
If the problem were related to MSDTC, an exception would occur.
I can only imagine that there is a problem in the SQL Server 2005 configuration, but I am not a SQL Server expert.
Maybe you should try some demo code to verify that transactions work:
using (var conn = new SqlConnection("your connection string"))
{
    conn.Open();
    var tx = conn.BeginTransaction();

    // each command must be explicitly enlisted in the transaction
    using (var cmd = new SqlCommand("DELETE FROM table WHERE id = 1", conn, tx))
        cmd.ExecuteNonQuery();

    using (var cmd2 = new SqlCommand("DELETE FROM table WHERE id = 2", conn, tx))
        cmd2.ExecuteNonQuery();

    tx.Commit();
}
And SubSonic supports native transactions without using TransactionScope: http://subsonicproject.com/docs/BatchQuery
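For reference, a rough sketch of what a batched version of the deletes above might look like. The BatchQuery type and its Queue/Execute method names are my assumption from memory of the linked docs, not verified against SubSonic 2.2, so check them against the documentation:

// Hedged sketch only: BatchQuery and its Queue/Execute methods are assumed
// from the linked docs; the Delete query itself is taken from the code above.
BatchQuery batch = new BatchQuery();
foreach (RelatedAsset raAsset in relAssets)
{
    batch.Queue(new Delete()
        .From<RelatedAsset>()
        .Where(RelatedAsset.RelatedAssetIdColumn)
        .IsEqualTo(raAsset.RelatedAssetId));
}
batch.Execute();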
I am using Node.js with a SQL Server database. I am using the node-mssql package for writing queries from Node.js.
I have a route which has an if condition and query structure as below:
let checkPartExists = await pool.query(`Select * from Parts WHERE partID = ${partID}`);
if (checkPartExists.recordset.length == 0) {
    await pool.query(`INSERT INTO Parts(PartID, Quantity) VALUES(${partID}, ${quantity})`)
} else {
    await pool.query(`UPDATE Parts SET Quantity = ${quantity} WHERE PartID = ${partID}`)
}
Now, if single-threaded Node.js didn't have an event loop, I could safely assume that this would always work. But I know that is not the case: I just had an instance where the same partID was inserted twice.
My understanding is that:
User 1 makes a post request to that route
It executes the select query, finds that this partID does not exist in the parts table and reaches the insert portion
However, before it finishes the insert, User 2 (or maybe the same user) makes a post request, and the select query runs again and also finds that no part with that partID exists.
This will insert the same partID twice. Is this called a race condition? How do I prevent such a situation?
I know I can make PartID a unique key in the database and throw an error when this happens, but I feel like there has to be a way of handling this through code as well.
Please let me know how you guys/girls are handling such situations.
This is a job for a transaction. Something like this:
const transaction = new sql.Transaction()
await transaction.begin()
const request = new sql.Request(transaction)

let checkPartExists = await request.query(`Select * from Parts WHERE partID = ${partID}`);
if (checkPartExists.recordset.length == 0) {
    await request.query(`INSERT INTO Parts(PartID, Quantity) VALUES(${partID}, ${quantity})`)
} else {
    await request.query(`UPDATE Parts SET Quantity = ${quantity} WHERE PartID = ${partID}`)
}
await transaction.commit()
This serializes access to the rows involved, provided the transaction actually holds a lock across the check and the write (for example by running at SERIALIZABLE isolation, or by adding an UPDLOCK, HOLDLOCK hint to the SELECT). Transactions used this way are the standard way of avoiding race conditions.
If only a single Node.js process is running, then async-mutex can be a solution. If you are running multiple processes (e.g. via cluster), then distributed locking could be a solution.
A mutex ensures that the guarded resource cannot be used by more than one caller at a time.
I'm attempting to create a logger for an application in Azure using the new Azure append blobs and the Azure Storage SDK 6.0.0. So I created a quick test application to get a better understanding of append blobs and their performance characteristics.
My test program simply loops 100 times and appends a line of text to the append blob. If I use the synchronous AppendText() method everything works fine, however, it appears to be limited to writing about 5-6 appends per second. So I attempted to use the asynchronous AppendTextAsync() method; however, when I use this method, the loop runs much faster (as expected) but the append blob is missing about 98% of the appended text without any exception being thrown.
If I add a Thread.Sleep and sleep for 100 milliseconds between each append operation, I end up with about 50% of the data. Sleep for 1 second and I get all of the data.
This seems similar to an issue that was discovered in v5.0.0 but was fixed in v5.0.2: https://github.com/Azure/azure-storage-net/releases/tag/v5.0.2
Here is my test code if you'd like to try to reproduce this issue:
static void Main(string[] args)
{
    var accountName = "<account-name>";
    var accountKey = "<account-key>";
    var credentials = new StorageCredentials(accountName, accountKey);
    var account = new CloudStorageAccount(credentials, true);
    var client = account.CreateCloudBlobClient();

    var container = client.GetContainerReference("<container-name>");
    container.CreateIfNotExists();

    var blob = container.GetAppendBlobReference("append-blob.txt");
    blob.CreateOrReplace();

    for (int i = 0; i < 100; i++)
        blob.AppendTextAsync(string.Format("Appending log number {0} to an append blob.\r\n", i));

    Console.WriteLine("Press any key to exit.");
    Console.ReadKey();
}
Does anyone know if I'm doing something wrong with my attempt to append lines of text to an append blob? Otherwise, any idea why this would just lose data without throwing some kind of exception?
I'd really like to start using this as a repository for my application logs (since it was largely created for that purpose). However, it would be quite unreliable if logs would just go missing without warning if the rate of logging went above 5-6 logs per second.
Any thoughts or feedback would be greatly appreciated.
I now have a working solution based upon the information provided by #ZhaoxingLu-Microsoft. According to the API documentation, the AppendTextAsync() method should only be used in a single-writer scenario, because the API internally uses the append-offset conditional header to avoid duplicate blocks, which does not work in a multiple-writer scenario.
Here is the documentation that specifies this behavior is by design:
https://msdn.microsoft.com/en-us/library/azure/mt423049.aspx
So the solution is to use the AppendBlockAsync() method instead. The following implementation appears to work correctly:
var tasks = new Task[100];
for (int i = 0; i < 100; i++)
{
    var message = string.Format("Appending log number {0} to an append blob.\r\n", i);
    var bytes = Encoding.UTF8.GetBytes(message);
    var stream = new MemoryStream(bytes);
    tasks[i] = blob.AppendBlockAsync(stream);
}
Task.WaitAll(tasks);
Please note that I am not explicitly disposing the memory stream in this example, as doing so would entail a using block with an await inside it, so that the async append operation finishes before the memory stream is disposed... but that causes a completely unrelated issue.
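For illustration only, the disposal-friendly variant I'm alluding to would look roughly like this; it assumes an async calling context and awaits each append sequentially instead of collecting tasks:

// Illustrative sketch only: awaiting inside the using block keeps the stream
// alive until the append completes, so it can then be disposed safely.
// Note this awaits each append sequentially rather than running them in parallel.
for (int i = 0; i < 100; i++)
{
    var message = string.Format("Appending log number {0} to an append blob.\r\n", i);
    var bytes = Encoding.UTF8.GetBytes(message);
    using (var stream = new MemoryStream(bytes))
    {
        await blob.AppendBlockAsync(stream);
    }
}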
You are using the async method incorrectly. blob.AppendTextAsync() is non-blocking, but the operation hasn't really finished when the call returns. You should wait for all the async tasks before exiting the process.
The following code is the correct usage:
var tasks = new Task[100];
for (int i = 0; i < 100; i++)
    tasks[i] = blob.AppendTextAsync(string.Format("Appending log number {0} to an append blob.\r\n", i));
Task.WaitAll(tasks);

Console.WriteLine("Press any key to exit.");
Console.ReadKey();
I'm using Rhino ETL to handle both SQL Server and Excel file extraction/load.
I've put more than one ETL process (which insert data into SQL Server tables) in a transaction scope (C#) so that I can roll back the whole process in case of any errors or exceptions.
I'm not sure whether a C# TransactionScope can be used along with ETL processes, but the way I did it is as follows:
private void ELTProcessesInTransaction(string a, string b, int check)
{
    using (var scope = new TransactionScope())
    {
        using (ETLProcess1 etlProcess1 = new ETLProcess1(a, b))
        {
            etlProcess1.Execute();
        }

        using (ETLProcess2 etlProcess2 = new ETLProcess2(a, b))
        {
            etlProcess2.Execute();
        }

        if (!_InSeverDB.HasError(check))
            scope.Complete(); // Commits based on a condition
    }
}
This does not roll back the transaction at all. I tried removing the scope.Complete() line; the process still gets committed.
Please let me know what I need to correct, or whether the whole approach is incorrect.
I have a (fairly large) Azure application that uploads (fairly large) files in parallel to Azure blob storage.
In a few percent of uploads I get an exception:
The specified block list is invalid.
System.Net.WebException: The remote server returned an error: (400) Bad Request.
This happens when we run a fairly innocuous-looking bit of code to upload a blob in parallel to Azure storage:
public static void UploadBlobBlocksInParallel(this CloudBlockBlob blob, FileInfo file)
{
    blob.DeleteIfExists();
    blob.Properties.ContentType = file.GetContentType();
    blob.Metadata["Extension"] = file.Extension;

    byte[] data = File.ReadAllBytes(file.FullName);
    int numberOfBlocks = (data.Length / BlockLength) + 1;
    string[] blockIds = new string[numberOfBlocks];

    Parallel.For(
        0,
        numberOfBlocks,
        x =>
        {
            string blockId = Convert.ToBase64String(Guid.NewGuid().ToByteArray());
            int currentLength = Math.Min(BlockLength, data.Length - (x * BlockLength));

            using (var memStream = new MemoryStream(data, x * BlockLength, currentLength))
            {
                var blockData = memStream.ToArray();
                var md5Check = System.Security.Cryptography.MD5.Create();
                var md5Hash = md5Check.ComputeHash(blockData, 0, blockData.Length);

                blob.PutBlock(blockId, memStream, Convert.ToBase64String(md5Hash));
            }

            blockIds[x] = blockId;
        });

    byte[] fileHash = _md5Check.ComputeHash(data, 0, data.Length);
    blob.Metadata["Checksum"] = BitConverter.ToString(fileHash).Replace("-", string.Empty);
    blob.Properties.ContentMD5 = Convert.ToBase64String(fileHash);

    data = null;
    blob.PutBlockList(blockIds);
    blob.SetMetadata();
    blob.SetProperties();
}
All very mysterious; I'd have thought the algorithm we're using to generate the block IDs should produce strings that are all the same length...
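For what it's worth, the GUID-based block IDs above should indeed all be the same length; a quick illustrative check:

// Quick sanity check: a GUID is always 16 bytes, so its Base64 form is
// always 24 characters - every block ID generated above has the same length.
var blockId = Convert.ToBase64String(Guid.NewGuid().ToByteArray());
Console.WriteLine(blockId.Length); // prints 24 every time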
We ran into a similar issue; however, we were not specifying any block IDs or even using block IDs anywhere. In our case, we were using:
using (CloudBlobStream stream = blob.OpenWrite(condition))
{
    //// [write data to stream]
    stream.Flush();
    stream.Commit();
}
This would cause "The specified block list is invalid." errors under parallelized load. Switching this code to use the UploadFromStream(…) method while buffering the data into memory fixed the issue:
using (MemoryStream stream = new MemoryStream())
{
    //// [write data to stream]
    stream.Seek(0, SeekOrigin.Begin);
    blob.UploadFromStream(stream, condition);
}
Obviously this could have negative memory ramifications if too much data is buffered into memory, but this is a simplification. One thing to note is that UploadFromStream(...) uses Commit() in some cases, but checks additional conditions to determine the best method to use.
This exception can also happen when multiple threads open a stream to a blob with the same file name and try to write to it simultaneously.
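A minimal sketch of one way to avoid that situation: give each parallel writer its own blob name so that no two threads commit blocks against the same blob. The container and files variables here are illustrative, not taken from the question's code:

// Illustrative sketch: each parallel writer targets its own blob, so no two
// threads write block lists against the same blob name at the same time.
// 'container' (CloudBlobContainer) and 'files' (FileInfo[]) are assumed here.
Parallel.ForEach(files, file =>
{
    var blob = container.GetBlockBlobReference(file.Name + "-" + Guid.NewGuid().ToString("N"));
    using (var stream = file.OpenRead())
    {
        blob.UploadFromStream(stream);
    }
});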
NOTE: this solution is based on the Azure SDK for Java, but I think we can safely assume the pure REST version behaves the same way (as will any other language, actually).
Since I spent an entire work day fighting this issue, and even though it is actually a corner case, I'll leave a note here; maybe it will help someone.
I did everything right. I had block IDs in the right order, I had block IDs of the same length, I had a clean container with no leftovers of some previous blocks (these three reasons are the only ones I was able to find via Google).
There was one catch: I had been building my block list for commit via
CloudBlockBlob.commitBlockList(Iterable<BlockEntry> blockList)
with use of this constructor:
BlockEntry(String id, BlockSearchMode searchMode)
passing
BlockSearchMode.COMMITTED
in the second argument. And THAT proved to be the root cause. Once I changed it to
BlockSearchMode.UNCOMMITTED
and eventually landed on the one-parameter constructor
BlockEntry(String id)
which uses UNCOMMITTED by default, committing the block list worked and the blob was successfully persisted.
I have the following code, which fires a query against each database in SQL Server 2008 R2:
public DataTable GetResultsOfAllDB(string query)
{
    SqlConnection con = new SqlConnection(_ConnectionString);
    string locleQuery = "select name from [master].sys.sysdatabases";
    DataTable dtResult = new DataTable("Result");

    SqlCommand cmdData = new SqlCommand(locleQuery, con);
    cmdData.CommandTimeout = 0;
    SqlDataAdapter adapter = new SqlDataAdapter(cmdData);
    DataTable dtDataBases = new DataTable("DataBase");
    adapter.Fill(dtDataBases);

    // This is implemented for sequential
    foreach (DataRow drDB in dtDataBases.Rows)
    {
        locleQuery = " Use [" + Convert.ToString(drDB[0]) + "]; " + query;
        cmdData = new SqlCommand(locleQuery, con);
        adapter = new SqlDataAdapter(cmdData);
        DataTable dtTemp = new DataTable();
        adapter.Fill(dtTemp);
        dtResult.Merge(dtTemp);
    }

    // Parallel implementation
    Parallel.ForEach(dtDataBases.AsEnumerable(), drDB =>
    {
        locleQuery = " Use [" + Convert.ToString(drDB[0]) + "]; " + query;
        con = new SqlConnection(_ConnectionString);
        cmdData = new SqlCommand(locleQuery, con);
        cmdData.CommandTimeout = 0;
        adapter = new SqlDataAdapter(cmdData);
        DataTable dtTemp = new DataTable();
        adapter.Fill(dtTemp);
        dtResult.Merge(dtTemp);
    });

    return dtResult;
}
Now the problem is that when I use the second loop, i.e. the Parallel.ForEach loop, it gives me different errors at the line adapter.Fill(dtTemp); such as the following (yes, of course these are expected errors):
Connection is closed
Connection is opening
DataReader is closed
Reader is connecting
...and other connection-related errors.
Note: sometimes it works like a charm, i.e. no errors at all.
And of course the first loop, i.e. the sequential foreach loop, works fine, but its performance is not the kind I can fall in love with :)
Now my question is: if I want to use a parallel foreach loop for the same job, how should I do it? Is there any cosmetic change that will make the Parallel.ForEach loop look (and behave) good? ;)
Thanks in advance.
A database connection can only run one query at a time, so when a thread tries to run a query while the connection is busy, you get an error. If you want to run queries in parallel, each thread needs its own database connection (note that in your parallel loop the con, cmdData and adapter variables are shared across all iterations).
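A minimal sketch of how the parallel loop could look with a connection per iteration, assuming the same _ConnectionString, query and dtResult from the question (the lock around Merge is there because DataTable is not thread-safe):

Parallel.ForEach(dtDataBases.AsEnumerable(), drDB =>
{
    // Each iteration gets its own connection, command and adapter,
    // so no SqlConnection is shared between threads.
    string dbQuery = " Use [" + Convert.ToString(drDB[0]) + "]; " + query;
    using (var localCon = new SqlConnection(_ConnectionString))
    using (var localCmd = new SqlCommand(dbQuery, localCon) { CommandTimeout = 0 })
    using (var localAdapter = new SqlDataAdapter(localCmd))
    {
        var dtTemp = new DataTable();
        localAdapter.Fill(dtTemp);

        // DataTable.Merge is not thread-safe, so serialize the merge step.
        lock (dtResult)
        {
            dtResult.Merge(dtTemp);
        }
    }
});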
Though you are creating a new SqlConnection object for each iteration, the objects may end up sharing the same physical database connection because of the connection pooling used by the .NET Framework. In that case you need to manually configure the behavior of the connection pool; for example, you can disable connection pooling, although this will have performance implications. Read about connection pooling on MSDN.
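If you do want to experiment with the pooling route, it is controlled from the connection string; a small illustrative sketch (per-thread connections, as shown above, are usually the better fix):

// Illustrative only: Pooling=false disables ADO.NET connection pooling,
// so every SqlConnection gets its own physical connection (at a cost).
var noPoolingConnectionString = _ConnectionString + ";Pooling=false";
using (var con = new SqlConnection(noPoolingConnectionString))
{
    con.Open();
    // ... run the per-database query here ...
}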