I have the following code, which runs a query against each database on a SQL Server 2008 R2 instance:
public DataTable GetResultsOfAllDB(string query)
{
    SqlConnection con = new SqlConnection(_ConnectionString);
    string localQuery = "select name from [master].sys.sysdatabases";
    DataTable dtResult = new DataTable("Result");
    SqlCommand cmdData = new SqlCommand(localQuery, con);
    cmdData.CommandTimeout = 0;
    SqlDataAdapter adapter = new SqlDataAdapter(cmdData);
    DataTable dtDataBases = new DataTable("DataBase");
    adapter.Fill(dtDataBases);

    // Sequential implementation
    foreach (DataRow drDB in dtDataBases.Rows)
    {
        localQuery = " Use [" + Convert.ToString(drDB[0]) + "]; " + query;
        cmdData = new SqlCommand(localQuery, con);
        adapter = new SqlDataAdapter(cmdData);
        DataTable dtTemp = new DataTable();
        adapter.Fill(dtTemp);
        dtResult.Merge(dtTemp);
    }

    // Parallel implementation
    Parallel.ForEach(dtDataBases.AsEnumerable(), drDB =>
    {
        localQuery = " Use [" + Convert.ToString(drDB[0]) + "]; " + query;
        con = new SqlConnection(_ConnectionString);
        cmdData = new SqlCommand(localQuery, con);
        cmdData.CommandTimeout = 0;
        adapter = new SqlDataAdapter(cmdData);
        DataTable dtTemp = new DataTable();
        adapter.Fill(dtTemp);
        dtResult.Merge(dtTemp);
    });

    return dtResult;
}
Now the problem: when I use the second loop, i.e. the Parallel.ForEach loop, it gives me various errors at the line adapter.Fill(dtTemp), such as:
Connection is closed
Connection is opening
Data reader is closed
...and other connection-related errors.
Yes, of course these are expected errors.
Note: sometimes it works like a charm, I mean no errors at all.
And the first loop, i.e. the sequential foreach loop, absolutely works fine, but the performance is not the good-looking kind I fall in love with :)
Now my question: if I want to use a Parallel.ForEach loop for the same job, how should I do it? Is there any cosmetic touch that will make the Parallel.ForEach loop look good? ;)
Thanks in advance.
The database connection can only run one query at a time, so when a thread tries to run a query while the connection is busy, you get an error. If you want to run queries in parallel each thread needs its own database connection.
Though you are creating a new SqlConnection object for each iteration, all the objects are using the same physical database connection. This is because of connection pooling used by the .NET framework. In your case you need to manually configure the behavior of the connection pool. For example you can disable connection pooling; this will have performance implications. Read about connection pooling at MSDN.
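As a sketch of the per-iteration-connection approach (reusing the `_ConnectionString`, `query`, and `dtDataBases` names from the question; this is illustrative, not tested against a live server), each iteration gets its own local connection, command, and adapter, and the merge into the shared result table is locked:

```csharp
// Sketch: each parallel iteration owns its SqlConnection/command/adapter,
// so no ADO.NET object is shared between threads.
var dtResult = new DataTable("Result");
object mergeLock = new object();

Parallel.ForEach(dtDataBases.AsEnumerable(), drDB =>
{
    string localQuery = "USE [" + Convert.ToString(drDB[0]) + "]; " + query;
    using (var localCon = new SqlConnection(_ConnectionString))
    using (var localCmd = new SqlCommand(localQuery, localCon))
    using (var localAdapter = new SqlDataAdapter(localCmd))
    {
        localCmd.CommandTimeout = 0;
        var dtTemp = new DataTable();
        localAdapter.Fill(dtTemp);   // Fill opens and closes the connection itself
        lock (mergeLock)             // DataTable.Merge is not thread-safe
        {
            dtResult.Merge(dtTemp);
        }
    }
});
```

Note that the captured `con`/`cmdData`/`adapter` variables in the question's parallel loop are overwritten concurrently by every thread, which is exactly the kind of sharing this version avoids.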
Related
Basically, I want to fetch a lot of pages from the same URL, but I don't want to hammer it all in the same second, and I want the whole run finished within 30 minutes.
Does Thread.Sleep() inside the parallel loop work correctly?
I ask because it only finishes after about 90 minutes.
var pages = new List<string>(urls);
var sources = new BlockingCollection<string>();
string htmlCode = "";
Random randomN = new Random();

Parallel.ForEach(pages, x =>
{
    x = x.Replace("//", "http://");
    int sleep = randomN.Next(0, 1800000);
    Thread.Sleep(sleep);

    HttpWebResponse response = null;
    CookieContainer cookie = new CookieContainer();
    HttpWebRequest newRequest = (HttpWebRequest)WebRequest.Create(x);
    newRequest.Timeout = 15000;
    newRequest.Proxy = null;
    newRequest.CookieContainer = cookie;
    newRequest.UserAgent = "Mozilla / 5.0(Windows NT 5.1) AppleWebKit / 537.11(KHTML, like Gecko) Chrome / 23.0.1300.0 Iron / 23.0.1300.0 Safari / 537.11";
    newRequest.ContentLength = 0;

    response = (HttpWebResponse)newRequest.GetResponse();
    htmlCode = new StreamReader(response.GetResponseStream(), Encoding.GetEncoding(1252)).ReadToEnd();
    sources.Add(htmlCode);
});
return sources;
The requirement is not very clear as to what you are trying to achieve, but Thread.Sleep inside Parallel.ForEach is probably not the right option. Try going through the threads below; they may help solve your issue.
[1]. Parallel.ForEach using Thread.Sleep equivalent
[2]. Thread.Sleep blocking parallel execution of tasks
[3]. Parallel ForEach wait 500 ms before spawning
You don't want to sleep inside your Parallel.ForEach body; that just ties up worker threads in the .NET thread pool.
This is what Task.Delay is for. Just create a bunch of delayed tasks and then do a WaitAll on them.
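A hedged sketch of that approach, reusing the `pages` list from the question and assuming a `DownloadPage(string url)` helper that wraps the HttpWebRequest logic shown there (the helper name is hypothetical):

```csharp
// Sketch: start each download after a random delay without blocking pool threads.
// Task.Delay yields the thread back to the pool while waiting.
var random = new Random();
var tasks = pages.Select(async url =>
{
    await Task.Delay(random.Next(0, 1800000)); // spread starts over 30 minutes
    return DownloadPage(url);                  // assumed request logic from the question
}).ToArray();

Task.WaitAll(tasks);
```

Because the `Select` is enumerated once by `ToArray()` on the calling thread, the non-thread-safe `Random` is only ever used from that thread.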
I'm attempting to create a logger for an application in Azure using the new Azure append blobs and the Azure Storage SDK 6.0.0. So I created a quick test application to get a better understanding of append blobs and their performance characteristics.
My test program simply loops 100 times and appends a line of text to the append blob. If I use the synchronous AppendText() method everything works fine; however, it appears to be limited to about 5-6 appends per second. So I attempted to use the asynchronous AppendTextAsync() method; when I use this method, the loop runs much faster (as expected), but the append blob is missing about 98% of the appended text, without any exception being thrown.
If I add a Thread.Sleep and sleep for 100 milliseconds between each append operation, I end up with about 50% of the data. Sleep for 1 second and I get all of the data.
This seems similar to an issue that was discovered in v5.0.0 but was fixed in v5.0.2: https://github.com/Azure/azure-storage-net/releases/tag/v5.0.2
Here is my test code if you'd like to try to reproduce this issue:
static void Main(string[] args)
{
    var accountName = "<account-name>";
    var accountKey = "<account-key>";
    var credentials = new StorageCredentials(accountName, accountKey);
    var account = new CloudStorageAccount(credentials, true);
    var client = account.CreateCloudBlobClient();
    var container = client.GetContainerReference("<container-name>");
    container.CreateIfNotExists();

    var blob = container.GetAppendBlobReference("append-blob.txt");
    blob.CreateOrReplace();

    for (int i = 0; i < 100; i++)
        blob.AppendTextAsync(string.Format("Appending log number {0} to an append blob.\r\n", i));

    Console.WriteLine("Press any key to exit.");
    Console.ReadKey();
}
Does anyone know if I'm doing something wrong with my attempt to append lines of text to an append blob? Otherwise, any idea why this would just lose data without throwing some kind of exception?
I'd really like to start using this as a repository for my application logs (since it was largely created for that purpose). However, it would be quite unreliable if logs would just go missing without warning if the rate of logging went above 5-6 logs per second.
Any thoughts or feedback would be greatly appreciated.
I now have a working solution based upon the information provided by @ZhaoxingLu-Microsoft. According to the API documentation, the AppendTextAsync() method should only be used in a single-writer scenario, because the API internally uses the append-offset conditional header to avoid duplicate blocks, which does not work in a multiple-writer scenario.
Here is the documentation that specifies this behavior is by design:
https://msdn.microsoft.com/en-us/library/azure/mt423049.aspx
So the solution is to use the AppendBlockAsync() method instead. The following implementation appears to work correctly:
var tasks = new Task[100];
for (int i = 0; i < 100; i++)
{
    var message = string.Format("Appending log number {0} to an append blob.\r\n", i);
    var bytes = Encoding.UTF8.GetBytes(message);
    var stream = new MemoryStream(bytes);
    tasks[i] = blob.AppendBlockAsync(stream);
}
Task.WaitAll(tasks);
Please note that I am not explicitly disposing the memory stream in this example, since doing so would require a using block with async/await inside it in order to wait for the append operation to finish before disposing the stream... but that causes a completely unrelated issue.
You are using the async method incorrectly. blob.AppendTextAsync() is non-blocking, but it hasn't actually finished when it returns. You should wait for all the async tasks to complete before exiting the process.
The following code is the correct usage:
var tasks = new Task[100];
for (int i = 0; i < 100; i++)
    tasks[i] = blob.AppendTextAsync(string.Format("Appending log number {0} to an append blob.\r\n", i));
Task.WaitAll(tasks);

Console.WriteLine("Press any key to exit.");
Console.ReadKey();
I am using the Transient Fault Handling Application Block with SQL Azure.
In the sample below, is specific retry logic for oCmd.ExecuteReader() mandatory, or does the ReliableSqlConnection take care of it?
Using oCmd As New SqlCommand()
    strSQL = "SELECT xxxxxx.* FROM xxxxxx"
    oCmd.CommandText = strSQL
    Using oConn As New ReliableSqlConnection(Cs, retryPolicy)
        oConn.Open()
        oCmd.Connection = oConn
        Using oDR As SqlDataReader = oCmd.ExecuteReader()
            If oDR.Read() Then
                sb1.Append(oDR("xxxxxx").ToString)
            End If
        End Using
    End Using
End Using
* UPDATE *
From the response below: if I create the SqlCommand object from the context of the ReliableSqlConnection object, I can expect the retry behaviour to be extended to the command as well, as stated on this page:
http://geekswithblogs.net/ScottKlein/archive/2012/01/27/understanding-sql-azure-throttling-and-implementing-retry-logic.aspx
"This next code example below illustrates creating a retry policy using the RetryPolicy class, specifying the retry attempts and the fixed time between retries. This policy is then applied to the ReliableSqlConnection as both a policy to the connection as well as the policy to the command."
RetryPolicy myretrypolicy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(3, TimeSpan.FromSeconds(30));

using (ReliableSqlConnection cnn = new ReliableSqlConnection(connString, myretrypolicy, myretrypolicy))
{
    try
    {
        cnn.Open();
        using (var cmd = cnn.CreateCommand())
        {
            cmd.CommandText = "SELECT * FROM HumanResources.Employee";
            using (var rdr = cnn.ExecuteCommand<IDataReader>(cmd))
            {
                //
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message.ToString());
    }
}
ReliableSqlConnection applies retry logic only to the process of establishing a connection. Try using something like this:
Using oDR As SqlDataReader = oCmd.ExecuteReaderWithRetry(retryPolicy)
    If oDR.Read() Then
        sb1.Append(oDR("xxxxxx").ToString)
    End If
End Using
For further information on usage you can refer to MSDN, or there are some additional examples here.
I am calling a WCF method in a loop. I have a couple of questions in this regard:
1) If an error occurs while it's in the loop, where should I re-open the connection?
2) Where should I close the connection?
MyProxy.DemoServiceClient wsDemo = new MyProxy.DemoServiceClient();
foreach (DataRow dataRow in dataTABLE.Rows)
{
    Product product = new Product();
    // Populate product using DataRow.
    try
    {
        wsDemo.CreateProduct(product);
    }
    catch (Exception exc)
    {
    }
}
Abort and re-open the connection in the catch block.
You can close the connection outside the loop. However, if you anticipate being in the loop for long, I would prefer using a counter and recycling the connection every time the counter reaches, say, 50. And use a finally block to close the connection if it is not already aborted or closed.
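A sketch of that pattern, reusing the MyProxy.DemoServiceClient, Product, and dataTABLE names from the question (illustrative, not a definitive implementation):

```csharp
// Sketch: abort a faulted WCF client in the catch, recycle it every 50 calls,
// and clean up in a finally block.
var wsDemo = new MyProxy.DemoServiceClient();
int callCount = 0;
try
{
    foreach (DataRow dataRow in dataTABLE.Rows)
    {
        Product product = new Product();
        // Populate product from dataRow...
        try
        {
            wsDemo.CreateProduct(product);
            if (++callCount % 50 == 0)
            {
                // Recycle the channel periodically on long runs.
                wsDemo.Close();
                wsDemo = new MyProxy.DemoServiceClient();
            }
        }
        catch (Exception)
        {
            // A faulted channel cannot be reused: abort it and open a fresh one.
            wsDemo.Abort();
            wsDemo = new MyProxy.DemoServiceClient();
        }
    }
}
finally
{
    if (wsDemo.State != CommunicationState.Closed)
        wsDemo.Abort(); // or Close() when the channel is still healthy
}
```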
I'm having a little trouble getting a multi-delete transaction working with SubSonic in ASP.NET against SQL Server 2005. It seems the changes are always committed to the database, even without a call to Complete() on the TransactionScope object.
I've read through the posts on this and tried various alternatives (switching the order of my using statements, using DTC, not using DTC, etc.) but no joy so far.
I'll assume the problem is in my code, but I can't spot the issue; is anyone able to help? I'm using SubSonic 2.2. Code sample below:
using (TransactionScope ts = new TransactionScope())
{
    using (SharedDbConnectionScope sts = new SharedDbConnectionScope())
    {
        foreach (RelatedAsset raAsset in relAssets)
        {
            // grab the asset id:
            Guid assetId = new Select(RelatedAssetLink.AssetIdColumn)
                .From<RelatedAssetLink>()
                .Where(RelatedAssetLink.RelatedAssetIdColumn).IsEqualTo(raAsset.RelatedAssetId)
                .ExecuteScalar<Guid>();

            // step 1 - delete the related asset:
            new Delete().From<RelatedAsset>()
                .Where(RelatedAsset.RelatedAssetIdColumn).IsEqualTo(raAsset.RelatedAssetId)
                .Execute();

            // more deletion steps...
        }

        // complete the transaction:
        ts.Complete();
    }
}
The order of your using statements is correct. (I remember the order with this trick: the connection needs to know about the transaction while it is being created, and it finds it by checking `System.Transactions.Transaction.Current`.)
One hint: you don't need the nested braces, and you don't need to keep a reference to the SharedDbConnectionScope.
This looks far more readable:
using (var ts = new TransactionScope())
using (new SharedDbConnectionScope())
{
// some db stuff
ts.Complete();
}
Anyway, I don't see why this shouldn't work.
If the problem were related to MSDTC, an exception would occur.
I can only imagine that there is a problem in the SQL Server 2005 configuration, but I am not a SQL Server expert.
Maybe you should try some demo code to verify that transactions work:
using (var conn = new SqlConnection("your connection string"))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        using (var cmd = new SqlCommand("DELETE FROM table WHERE id = 1", conn, tx))
            cmd.ExecuteNonQuery();

        using (var cmd2 = new SqlCommand("DELETE FROM table WHERE id = 2", conn, tx))
            cmd2.ExecuteNonQuery();

        tx.Commit();
    }
}
And SubSonic supports native transactions without using TransactionScope: http://subsonicproject.com/docs/BatchQuery