Multiple databases (DataContext) on the same server without MS DTC - c#-4.0

I'm using EF 5.0 with SQL Server 2008. I have two databases on the same server instance. I need to update tables in both databases and want the updates to happen in the same transaction, so I used TransactionScope. Below is the code:
public void Save()
{
    var MSObjectContext = ((IObjectContextAdapter)MSDataContext).ObjectContext;
    var AWObjectContext = ((IObjectContextAdapter)AwContext).ObjectContext;

    using (var scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadUncommitted
        }))
    {
        MSObjectContext.SaveChanges(SaveOptions.DetectChangesBeforeSave);
        AWObjectContext.SaveChanges(SaveOptions.DetectChangesBeforeSave);
        scope.Complete();
    }
}
When I use the above code, the transaction gets promoted to DTC. After searching on the internet I found that this happens because of the two different connection strings / connections. But what I don't understand is that if I write a stored procedure in one database which updates a table in a different database (on the same server), no DTC is required. So why does EF or TransactionScope promote this to DTC? Is there any other workaround for this?
Please advise
Thanks in advance
Sai

With plain DbConnections, you can prevent DTC escalation for multiple databases on the same server by using the same connection string (with any database you like) and manually changing the database on the opened connection object, like so:
using (var tx = new TransactionScope())
{
    using (var conn = new SqlConnection(connectStr))
    {
        conn.Open();
        new SqlCommand("INSERT INTO atest VALUES (1)", conn).ExecuteNonQuery();
    }

    using (var conn = new SqlConnection(connectStr))
    {
        conn.Open();
        conn.ChangeDatabase("OtherDB");
        new SqlCommand("INSERT INTO btest VALUES (2)", conn).ExecuteNonQuery();
    }

    tx.Complete();
}
This will not escalate to DTC, but it would if you used different values for connectStr.
I'm not familiar with how EF manages connections and contexts, but using the above insight, you might be able to avoid DTC escalation by calling conn.ChangeDatabase(..) and then creating your context with new DbContext(conn, ...).
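An untested sketch of that idea follows. It assumes your contexts (MsContext and AwContext here are placeholder names) expose a constructor that forwards to DbContext(DbConnection, bool), and that EF 5 accepts the already-open connection without reopening it; depending on how EF manages the connection internally, this may still escalate, so treat it as something to experiment with rather than a guaranteed fix:
// Untested sketch: both contexts share one open connection, so the transaction
// (in theory) never enlists a second resource and stays local.
using (var tx = new TransactionScope())
using (var conn = new SqlConnection(connectStr))
{
    conn.Open();

    // contextOwnsConnection: false -- we dispose the connection ourselves
    using (var msContext = new MsContext(conn, contextOwnsConnection: false))
    {
        // ... make changes on the first database ...
        msContext.SaveChanges();
    }

    conn.ChangeDatabase("OtherDB");   // switch the same connection to the second database

    using (var awContext = new AwContext(conn, contextOwnsConnection: false))
    {
        // ... make changes on the second database ...
        awContext.SaveChanges();
    }

    tx.Complete();
}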
But please note that even with a shared connection string, as soon as you have more than one connection open at the same time, the DTC will get involved, as in this modified example:
using (var tx = new TransactionScope())
{
    using (var conn = new SqlConnection(mssqldb))
    {
        conn.Open();
        new SqlCommand("INSERT INTO atest VALUES (1)", conn).ExecuteNonQuery();

        using (var conn2 = new SqlConnection(mssqldb))
        {
            conn2.Open();
            conn2.ChangeDatabase("otherdatabase");
            new SqlCommand("INSERT INTO btest VALUES (2)", conn2).ExecuteNonQuery();
        }
    }
    tx.Complete();
}

Related

Azure Cosmos DB Connection Slow

I am finding that connecting to and querying Azure Cosmos DB with C# .NET is very slow. It takes about 2.5 seconds to connect and 8 seconds to run a simple query returning about 600 rows (no WHERE clause). Is there a more efficient way to do this? Or is it best to use a client-side connection pool so connections are re-used and we don't have to connect as many times? This will mainly be used from an Azure Web Service (ASP.NET).
Interestingly, if I don't call the ReadThroughputAsync() method after getting the Cosmos container, then Initialize() only takes 438 ms but the query takes longer (8.9 seconds). Does anyone know why this is?
With calling await _container.ReadThroughputAsync():
Initialize() in 2482 ms
Found 598 results in 8171 ms
Without calling await _container.ReadThroughputAsync():
Initialize() in 438 ms
Found 598 results in 8937 ms
private const string ContainerId = "Items";
private const string DatabaseId = "Results";
private const string EndpointUri = "https://myServer.documents.azure.com:443/";
private const string PrimaryKey = "xxxxxxx==";

private Container _container;
private CosmosClient _cosmosClient;
private Database _database;

public async Task Initialize()
{
    var tickCount = Environment.TickCount;
    _cosmosClient = new CosmosClient(EndpointUri, PrimaryKey,
        new CosmosClientOptions { ApplicationName = "DataImporter" });
    _database = _cosmosClient.GetDatabase(DatabaseId);
    _container = _database.GetContainer(ContainerId);
    var throughput = await _container.ReadThroughputAsync();
    tickCount = Environment.TickCount - tickCount;
    WriteLine($"Initialize() in {tickCount} ms");

    var tickCount2 = Environment.TickCount;
    var sqlQueryText = "SELECT * FROM c";
    var queryDefinition = new QueryDefinition(sqlQueryText);
    var queryResultSetIterator = _container.GetItemQueryIterator<SampleData>(queryDefinition);
    var results = new List<SampleData>();
    while (queryResultSetIterator.HasMoreResults)
    {
        var currentResultSet = await queryResultSetIterator.ReadNextAsync();
        foreach (var result in currentResultSet)
            results.Add(result);
    }
    tickCount2 = Environment.TickCount - tickCount2;
    WriteLine($"Found {results.Count} results in {tickCount2} ms");
}
Thank you Gaurav Mantri, David Makogon and Mark Brown. Posting your suggestions as an answer to help other community members.
Practices that you can adopt to make the connection faster:
Initialize the Cosmos connection at startup. You would need to use the CreateAndInitializeAsync method of the CosmosClient (see the sketch after the reference below).
Make sure to run the code in the same region as the Cosmos DB account.
Always keep the references to the Cosmos client and container alive. This will ensure that subsequent calls (after the first one) are faster.
Reference:
Why the first request takes so much time and how you can speed that up.
https://stackoverflow.com/questions/67943528/asp-net-core-3-application-slow-to-load-cosmos-db-query#:~:text=First%2C%20I%20would,from%2Dportal%22%2C%0AcontainersToInitialize)
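A minimal sketch of that startup initialization, reusing the EndpointUri/PrimaryKey/DatabaseId/ContainerId constants from the question (CreateAndInitializeAsync requires a recent Microsoft.Azure.Cosmos SDK; it opens connections and caches container metadata up front so the first real query does not pay that cost):
// Hypothetical startup code: create the client once, warm up the container,
// and keep the references alive for the lifetime of the application.
private static CosmosClient _cosmosClient;
private static Container _container;

public static async Task InitializeAtStartupAsync()
{
    _cosmosClient = await CosmosClient.CreateAndInitializeAsync(
        EndpointUri,
        PrimaryKey,
        new List<(string databaseId, string containerId)> { (DatabaseId, ContainerId) },
        new CosmosClientOptions { ApplicationName = "DataImporter" });

    _container = _cosmosClient.GetContainer(DatabaseId, ContainerId);
}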

How do I re-use Azure Cosmos UDFs?

I am using a User Defined Function across multiple collections within multiple Cosmos databases. Is there a way to store it somewhere and deploy it to all of these collections/databases at once? Or a way to update them all at the same time? Currently I am having to go through and manually update each UDF within each collection within each database.
You can write a console application to update the UDFs:
private async Task<string> CreateUDFAsync(string collectionUri, string udfName, string udfBody)
{
    ResourceResponse<UserDefinedFunction> response = null;
    try
    {
        // Try to replace the existing UDF first
        var existingUdf = await this.cosmosDbClient.ReadUserDefinedFunctionAsync($"{collectionUri}/udfs/{udfName}");
        existingUdf.Resource.Body = udfBody;
        response = await this.cosmosDbClient.ReplaceUserDefinedFunctionAsync(existingUdf.Resource);
    }
    catch (DocumentClientException ex)
    {
        // UDF does not exist yet -- create it
        response = await this.cosmosDbClient.CreateUserDefinedFunctionAsync(collectionUri,
            new UserDefinedFunction
            {
                Id = udfName,
                Body = udfBody
            });
    }
    return response.Resource.AltLink;
}
It will replace an existing UDF and create a new one if it is missing.
In the Cosmos DB resource model, stored procedures, UDFs, merge procedures, triggers and conflicts are container-level resources.
You have to create them for each container.
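So the closest thing to "deploy everywhere at once" is to loop over the containers yourself. Here is a rough sketch on top of the CreateUDFAsync helper above, using the same legacy DocumentDB SDK; the database/collection names and the UDF itself are just placeholders:
// Hypothetical deployment loop: push one UDF definition to every collection
// in every database that should receive it.
var udfName = "toUpperCase";                                       // example UDF name
var udfBody = "function (value) { return value.toUpperCase(); }";  // example UDF body

var targets = new[]
{
    ("Database1", "Collection1"),
    ("Database1", "Collection2"),
    ("Database2", "Collection1"),
};

foreach (var (databaseId, collectionId) in targets)
{
    // UriFactory builds the "dbs/{db}/colls/{coll}" collection link
    var collectionUri = UriFactory.CreateDocumentCollectionUri(databaseId, collectionId).ToString();
    var altLink = await CreateUDFAsync(collectionUri, udfName, udfBody);
    Console.WriteLine($"Deployed {udfName} to {altLink}");
}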

Kusto query from C#

I want to retrieve data from a Kusto database from a C# app; can anyone help me with this?
I have knowledge of writing Kusto queries, but I need some help pulling data from an Azure Kusto database hosted in Azure.
I tried the following code but it's not working:
var client = Kusto.Data.Net.Client.KustoClientFactory.CreateCslQueryProvider("https://help.kusto.windows.net/Samples;Fed=true");
var reader = client.ExecuteQuery("MyTable | count");
// Read the first row from the reader -- its 0th column is the count of records in MyTable.
// Don't forget to dispose of the reader when done.
Could you please elaborate on what's not working (what error message you're getting) with the code above?
In addition, a full (though simple) example can be found below:
// This sample illustrates how to query Kusto using the Kusto.Data .NET library.
//
// For the purpose of demonstration, the query being sent retrieves multiple result sets.
//
// The program should execute in an interactive context (so that on first run the user
// will get asked to sign in to Azure AD to access the Kusto service).
using System;
using Kusto.Data;              // KustoConnectionStringBuilder
using Kusto.Data.Common;       // ClientRequestProperties
using Kusto.Data.Net.Client;   // KustoClientFactory

class Program
{
    const string Cluster = "https://help.kusto.windows.net";
    const string Database = "Samples";

    static void Main()
    {
        // The query provider is the main interface to use when querying Kusto.
        // It is recommended that the provider be created once for a specific target database,
        // and then be reused many times (potentially across threads) until it is disposed of.
        var kcsb = new KustoConnectionStringBuilder(Cluster, Database)
            .WithAadUserPromptAuthentication();

        using (var queryProvider = KustoClientFactory.CreateCslQueryProvider(kcsb))
        {
            // The query -- note that for demonstration purposes, we send a query that asks for two different
            // result sets (HowManyRecords and SampleRecords).
            var query = "StormEvents | count | as HowManyRecords; StormEvents | limit 10 | project StartTime, EventType, State | as SampleRecords";

            // It is strongly recommended that each request has its own unique
            // request identifier. This is mandatory for some scenarios (such as cancelling queries)
            // and will make troubleshooting easier in others.
            var clientRequestProperties = new ClientRequestProperties() { ClientRequestId = Guid.NewGuid().ToString() };

            using (var reader = queryProvider.ExecuteQuery(query, clientRequestProperties))
            {
                // Read HowManyRecords
                while (reader.Read())
                {
                    var howManyRecords = reader.GetInt64(0);
                    Console.WriteLine($"There are {howManyRecords} records in the table");
                }

                // Move on to the next result set, SampleRecords
                reader.NextResult();
                Console.WriteLine();
                while (reader.Read())
                {
                    // Important note: for demonstration purposes we show how to read the data
                    // using the "bare bones" IDataReader interface. In a production environment
                    // one would normally use some ORM library to automatically map the data from
                    // IDataReader into a strongly-typed record type (e.g. Dapper.Net, AutoMapper, etc.)
                    DateTime time = reader.GetDateTime(0);
                    string type = reader.GetString(1);
                    string state = reader.GetString(2);
                    Console.WriteLine("{0}\t{1,-20}\t{2}", time, type, state);
                }
            }
        }
    }
}

How to properly use Redis with ServiceStack in a multi-threaded environment?

I assumed that we should use BasicRedisClientManager or PooledRedisClientManager?
I tried this
private void dddddd()
{
    for (int i = 0; i <= 1000; i++)
    {
        var client = new BasicRedisClientManager(new string[] { "host1", "host2", "host3" }).GetClient();
        //do something with client
    }
}
This loop runs fine for the first 100-plus iterations, but after that I always get an "Unknown Command Role" error. What is that and how do I fix it? I need help!
I also tried to make a new class called MyRedisMgr and created a static property for some sort of singleton, but it didn't work either.
public static BasicRedisClientManager MyMgr = new BasicRedisClientManager(new string[] { "host1", "host2", "host3" });
And then I use it like
for (int i = 0; i <= 1000; i++)
{
    var client = MyRedisMgr.MyMgr.GetClient();
    //do something with client
}
Please read the documentation on the proper usage of the Redis Client Managers, which should only be used as singletons.
The BasicRedisClientManager doesn't have any connection pooling, so every time you call GetClient() you're opening a new TCP connection to the redis-server. Unless you understand the implications, you should be using one of the pooled Redis Client Managers, e.g. RedisManagerPool.
You also need to always dispose of the client after it's used so that the connection can either be re-used or the TCP connection disposed of properly.
So your code sample should look like:
//Always use the same singleton instance of a Client Manager
var redisManager = new RedisManagerPool(masterHost);
for(int i=0;i<=1000;i++)
{
using (var redis = redisManager.GetClient())
{
//do something with client
}
}
The "Unknown Command Role" error is due to using an old version of Redis Server. The ROLE command was added in redis 2.8.12 but this API should only be used if your using redis-server v2.8.12+, so you shouldn't be getting this error by default. You can avoid this error by upgrading to either the stable v3.0 or old 2.8 versions of redis-server which has this command.
If you want to continue using an older version, use the INFO command to check what version you're running then tell ServiceStack.Redis what the version is with:
RedisConfig.AssumeServerVersion = 2600; //e.g. v2.6
RedisConfig.AssumeServerVersion = 2612; //e.g. v2.6.12

How can I create a new database in MongoDB after dropping it?

In my end to end tests, I want to drop the "test" database, and then create a new test db. Dropping an entire database is simple:
mongoose.connection.db.command({ dropDatabase: 1 }, function (err, result) {
    console.log(err);
    console.log(result);
});
But now how do I create the test db? I can't find a createDatabase or useDatabase command in the docs. Do I have to disconnect and reconnect, or is there a proper command? It would be bizarre if the only way to create a db was as a side effect of connecting to a server.
update
I found some C# code that appears to create a database, and it looks like it connects, drops the database, connects again (without disconnecting?) and that creates the new db. This is what I will do for now.
public static MongoDatabase CreateDatabase()
{
    return GetDatabase(true);
}

public static MongoDatabase OpenDatabase()
{
    return GetDatabase(false);
}

private static MongoDatabase GetDatabase(bool clear)
{
    var connectionString = ConfigurationManager.ConnectionStrings["MongoDB"].ConnectionString;
    var databaseName = GetDatabaseName(connectionString);
    var server = new MongoClient(connectionString).GetServer();
    if (clear)
        server.DropDatabase(databaseName);
    return server.GetDatabase(databaseName);
}
MongoDB will create (or re-create) the database automatically the next time a document is saved to a collection in that database. You shouldn't need any special code, and I don't think you need to reconnect either; just save a document in your tests and you should be good to go. FYI, the same pattern applies to MongoDB collections: they are implicitly created on write.
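To illustrate in the same legacy C# driver style as the question (the "test" database and "people" collection names are just placeholders): dropping the database and then writing a document is enough to bring it back, with no explicit create call.
// Rough sketch: drop the database, then recreate it implicitly by inserting a document.
var server = new MongoClient(connectionString).GetServer();
server.DropDatabase("test");

var database = server.GetDatabase("test");                        // nothing is created yet
var collection = database.GetCollection<BsonDocument>("people");
collection.Insert(new BsonDocument { { "name", "Alice" } });      // database and collection now exist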
