Azure: Using System.Diagnostics.PerformanceCounter

I'm aware of the Microsoft.WindowsAzure.Diagnostics performance monitoring. I'm looking for something more real-time, though, like using System.Diagnostics.PerformanceCounter directly.
The idea is that the real-time information will be sent in response to an AJAX request.
I'm using the performance counters available in Azure: http://msdn.microsoft.com/en-us/library/windowsazure/hh411520
The following code works (at least in the Azure Compute Emulator; I haven't tried it in an actual Azure deployment):
protected PerformanceCounter FDiagCPU = new PerformanceCounter("Processor", "% Processor Time", "_Total");
protected PerformanceCounter FDiagRam = new PerformanceCounter("Memory", "Available MBytes");
protected PerformanceCounter FDiagTcpConnections = new PerformanceCounter("TCPv4", "Connections Established");
Further down in the MSDN page is another counter I would like to use:
Network Interface(*)\Bytes Received/sec
I tried creating the performance counter:
protected PerformanceCounter FDiagNetSent = new PerformanceCounter("Network Interface", "Bytes Received/sec", "*");
But then I receive an exception saying that "*" isn't a valid instance name.
This also doesn't work:
protected PerformanceCounter FDiagNetSent = new PerformanceCounter("Network Interface(*)", "Bytes Received/sec");
Is using performance counters directly in Azure frowned upon?

The issue you're having here isn't related to Windows Azure, but to performance counters in general. Network Interface(*)\Bytes Received/sec is a per-instance counter: each network interface is its own instance, and the (*) in the name is a wildcard over those instances, not a literal instance name you can pass to the constructor.
To initialize the performance counter, you need to give it the name of the instance (the network interface) you want to get the metrics from:
var counter = new PerformanceCounter("Network Interface",
"Bytes Received/sec", "Intel[R] WiFi Link 1000 BGN");
As you can see from the code, I'm specifying the name of the network interface. In Windows Azure you don't control the server configuration (the hardware, the Hyper-V virtual network card, ...), so I wouldn't advise hard-coding the name of the network interface.
That's why it might be safer to enumerate the instance names to initialize the counter(s):
var category = new PerformanceCounterCategory("Network Interface");
foreach (var instance in category.GetInstanceNames())
{
    var counter = new PerformanceCounter("Network Interface",
        "Bytes Received/sec", instance);
    ...
}
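If you want one machine-wide number instead of per-adapter values, a minimal sketch along these lines (the console wrapper and the one-second sampling interval are just illustrative assumptions) sums the counter over whatever instances the role happens to expose:

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;

class NetworkThroughputSample
{
    static void Main()
    {
        // One read-only counter per network interface instance; no hard-coded adapter names.
        var category = new PerformanceCounterCategory("Network Interface");
        var counters = category.GetInstanceNames()
            .Select(name => new PerformanceCounter("Network Interface", "Bytes Received/sec", name, readOnly: true))
            .ToList();

        // Rate counters need two samples: the first NextValue() call always returns 0.
        counters.ForEach(c => c.NextValue());
        Thread.Sleep(1000);

        // Sum across all interfaces to get a machine-wide bytes-received rate.
        float totalBytesPerSec = counters.Sum(c => c.NextValue());
        Console.WriteLine("Total Bytes Received/sec: {0}", totalBytesPerSec);
    }
}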

Related

Cosmos DB Query Intermittent latency

I have a Cosmos DB client registered as a singleton with default options, in a .NET 6.0 Web API project running in an Azure App Service with "Always On" enabled. The App Service and the Cosmos account are in the same region (UE2). The API queries a Cosmos container and returns the result.
I've noticed that the latency of the first query is always slow (4-6 seconds); subsequent queries are much faster (~100ms) but also sometimes show random high latency. This is not a cold-start scenario: the client has already been initialized by the DI pipeline, and I'm not being rate limited.
Here is my singleton client
public CosmosDbService(IConfiguration configuration)
{
    var account = configuration.GetSection("CosmosDb")["Account"];
    var key = configuration.GetSection("CosmosDb")["Key"];
    var databaseName = configuration.GetSection("CosmosDb")["DatabaseName"];
    var containerName = configuration.GetSection("CosmosDb")["Container"];

    CosmosClient client = new(account, key);
    _myContainer = client.GetContainer(databaseName, containerName);
}
Here is the meat of the query, where a LINQ query is passed in:
public class RetrieveCarRepository : IRetrieveCarRepository
{
    public async Task<List<CarModel>> RetrieveCars(IQueryable<CarModel> querydef)
    {
        var query = querydef.ToFeedIterator();
        List<CarModel> cars = new();
        while (query.HasMoreResults)
        {
            var response = await query.ReadNextAsync();
            foreach (var car in response)
            {
                // ...do a thing
            }
        }
        return cars;
    }
}
I've been through several Cosmos DB training videos and courses but still haven't been able to figure out what is happening.
From the comments.
For query performance using the .NET SDK please see: https://learn.microsoft.com/en-us/azure/cosmos-db/performance-tips-query-sdk?tabs=v3&pivots=programming-language-csharp#use-local-query-plan-generation
Query plan generation can affect latency and can be avoided if:
The query is reworked to target a single partition (instead of a cross-partition query), as sketched below.
The workload runs on Windows, is compiled as x64, and has the NuGet DLLs co-located with the application, which lets the SDK do local query plan generation through ServiceInterop.dll.
In both cases the query plan request is removed and latency improves.
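For the first option, a minimal sketch (the partition key value and the OwnerId property are assumptions, since the container's partition key path isn't shown in the question) scopes the LINQ query to one logical partition so no query plan request is needed:

var options = new QueryRequestOptions
{
    // Assumed partition key value; keeps the query inside a single logical partition.
    PartitionKey = new PartitionKey(ownerId)
};

IQueryable<CarModel> querydef = _myContainer
    .GetItemLinqQueryable<CarModel>(requestOptions: options)
    .Where(c => c.OwnerId == ownerId);   // hypothetical filter on the partition key property

List<CarModel> cars = await carRepository.RetrieveCars(querydef);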
As a general rule, latency should be investigated at the P99 over a one-hour window to understand how it is impacted; a couple of higher-latency requests can always happen.
Also keep in mind that query latency will vary based on the type of query, the volume of data to transfer, and the number of pages. You can capture the diagnostics and use: https://learn.microsoft.com/azure/cosmos-db/troubleshoot-dot-net-sdk-slow-request
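To act on that last point, here is a hedged sketch that captures diagnostics only for slow pages (the 500 ms threshold and the _logger field are assumptions) inside the paging loop from the question:

var stopwatch = new System.Diagnostics.Stopwatch();
while (query.HasMoreResults)
{
    stopwatch.Restart();
    FeedResponse<CarModel> response = await query.ReadNextAsync();
    stopwatch.Stop();

    // CosmosDiagnostics includes contacted regions, retries, and per-request timings.
    if (stopwatch.ElapsedMilliseconds > 500)
    {
        _logger.LogWarning("Slow Cosmos page ({ElapsedMs} ms): {Diagnostics}",
            stopwatch.ElapsedMilliseconds, response.Diagnostics.ToString());
    }

    cars.AddRange(response);
}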

Azure CosmosDB: Bulk deletion using SDK

I want to delete 20-30k items in bulk. Currently I am using the method below to delete these items, but it's taking 1-2 minutes.
private async Task DeleteAllExistingSubscriptions(string userUUId)
{
    var subscriptions = await _repository
        .GetItemsAsync(x => x.DistributionUserIds.Contains(userUUId), o => o.PayerNumber);

    if (subscriptions.Any())
    {
        List<Task> bulkOperations = new List<Task>();
        foreach (var subscription in subscriptions)
        {
            bulkOperations.Add(_repository
                .DeleteItemAsync(subscription.Id.ToString(), subscription.PayerNumber)
                .CaptureOperationResponse(subscription));
        }
        await Task.WhenAll(bulkOperations);
    }
}
Cosmos client: as you can see, I have already set AllowBulkExecution = true.
private static void RegisterCosmosClient(IServiceCollection serviceCollection, IConfiguration configuration)
{
    string cosmosDbEndpoint = configuration["CosmoDbEndpoint"];
    Ensure.ConditionIsMet(cosmosDbEndpoint.IsNotNullOrEmpty(),
        () => new InvalidOperationException("Unable to locate configured CosmosDB endpoint"));

    var cosmosDbAuthKey = configuration["CosmoDbAuthkey"];
    Ensure.ConditionIsMet(cosmosDbAuthKey.IsNotNullOrEmpty(),
        () => new InvalidOperationException("Unable to locate configured CosmosDB auth key"));

    serviceCollection.AddSingleton(s => new CosmosClient(cosmosDbEndpoint, cosmosDbAuthKey,
        new CosmosClientOptions { AllowBulkExecution = true }));
}
Is there any way to delete these items in a batch with Cosmos DB SDK 3.0 in less time?
Please check the metrics to understand whether the volume of operations you are trying to send is being throttled because your provisioned throughput is not enough.
Bulk mode only improves the client-side aspect of sending that data by optimizing how it flows from your machine to the account; if your container is not provisioned to handle that volume of operations, the operations will get throttled and the total time to complete will be longer.
As with any data-flow scenario, the possible bottlenecks are:
The source environment cannot process the data as fast as you want, which shows up as a bottleneck/spike on the machine's CPU (processing more data requires more CPU).
The network's bandwidth has limitations; in some cases the network has limits on the amount of data it can transfer or even on the number of connections it can open. If the machine you are running the code on has such limitations (for example, Azure VMs have SNAT limits and Azure App Service has TCP connection limits) and you are hitting them, new connections might get delayed, increasing latency.
The destination has limits on the number of operations it can process (in the form of provisioned throughput in this case); the sketch below shows one way to surface throttling on the client.
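As a rough illustration of that last point, the sketch below reuses the question's _repository wrapper (and assumes a _logger field) to count deletes that still fail with HTTP 429 after the SDK's own retries; a non-zero count means provisioned throughput, not the client, is the bottleneck:

private async Task DeleteAllExistingSubscriptionsWithThrottleCheck(string userUUId)
{
    var subscriptions = await _repository
        .GetItemsAsync(x => x.DistributionUserIds.Contains(userUUId), o => o.PayerNumber);

    int throttled = 0;
    var bulkOperations = subscriptions.Select(async subscription =>
    {
        try
        {
            await _repository.DeleteItemAsync(subscription.Id.ToString(), subscription.PayerNumber);
        }
        catch (CosmosException ex) when (ex.StatusCode == System.Net.HttpStatusCode.TooManyRequests)
        {
            // The SDK already retried; a 429 surfacing here means the container ran out of RU/s.
            Interlocked.Increment(ref throttled);
        }
    }).ToList();

    await Task.WhenAll(bulkOperations);

    if (throttled > 0)
    {
        _logger.LogWarning("{Count} deletes were throttled; consider raising the container's RU/s.", throttled);
    }
}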

Checking connection to Azure Service Bus

I have some code dependent of Azure Service Bus. I've created an endpoint that checks the availability of my Azure Service Bus topic using the following code:
var connectionString = CloudConfigurationManager.GetSetting("servicebusconnectionstring");
var manager = NamespaceManager.CreateFromConnectionString(connectionString);
var sub = manager.GetSubscription("mytopic", "mysubscription");
var count = sub.MessageCount;
This actually works, but I have two questions (since I'm constantly experiencing timeouts using this code).
Question 1: Is there an easier/better way of checking Service Bus connectivity from C#?
Question 2: When using the code above, which instances should I configure as singletons in my IoC container? I suspect that creating all the instances every time I ping this endpoint causes the timeouts, since I don't see problems in my other endpoints, where I reuse a TopicClient.
Getting MessageCount is potentially an expensive operation, especially if the value is high.
You could run a simple operation like a check whether the topic exists:
var ns = NamespaceManager.CreateFromConnectionString("...");
ns.TopicExists("mytopic");
which will throw an exception (probably MessagingCommunicationException) if communication to Service Bus fails.
It's OK to reuse the NamespaceManager between requests, so you can make it a singleton. I'm not sure whether that brings any measurable performance benefit, though.
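A minimal sketch of that approach, assuming the same Microsoft.ServiceBus (WindowsAzure.ServiceBus) SDK as the question, wraps the existence check in a small health-check class that can be registered as a singleton:

public class ServiceBusHealthCheck
{
    private readonly NamespaceManager _manager;

    public ServiceBusHealthCheck(string connectionString)
    {
        _manager = NamespaceManager.CreateFromConnectionString(connectionString);
    }

    public bool CanReachServiceBus(string topicPath)
    {
        try
        {
            // Lightweight management call; throws if the namespace can't be reached.
            return _manager.TopicExists(topicPath);
        }
        catch (MessagingCommunicationException)
        {
            return false;
        }
        catch (TimeoutException)
        {
            return false;
        }
    }
}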

Azure DNS issues after destroying/recreating VM

Like many people, we have Azure VMs that we want to destroy when not in use so that we don't have to pay for their core usage. All of the VMs in question are on the same domain and the DC/DNS server is never destroyed/recreated and has a static IP. After successfully using a combination of Export/Remove/Import-AzureVM, however, all of the IP settings for the network adapter (DNS is my primary concern) are gone because a new network adapter is created each time I reconstruct the VM using Import-AzureVM.
I initially tried using NETSH to set my DNS entry at startup, but it depends on knowing the name of the adapter and the adapter name changes daily (since we're taking the machines down for the evening and recreating them in the morning). My next not-so-brilliant idea was to include a VBScript that renamed the adapter to the same name on startup so that NETSH would always have the same adapter name to deal with. However, it was at that point that I discovered that all of the old adapters still exist, but are simply hidden and not in use, rendering my plan moot.
Here are the test NETSH command and VBScript I was attempting to use, just for the sake of reference:
' This script was modified from one I got from the Scripting Guys
Const NETWORK_CONNECTIONS = &H31&

Set objShell = CreateObject("Shell.Application")
Set objFolder = objShell.Namespace(NETWORK_CONNECTIONS)
Set colItems = objFolder.Items

For Each objItem in colItems
    ' Only one adapter is ever returned by this query, but it didn't seem like a bad idea to leave the loop alone just in case
    objItem.Name = "testlan"
    wscript.echo objItem.Name
Next
NETSH
netsh interface ip add dns name="testlan" 10.0.0.4
I know I can't be the only person trying to solve this issue, but I've been unable to find the solution through a significant amount of Googling and trial and error on my part. Many thanks!
Ben
@Nathan's comment is incorrect. When a VM is "Stopped" it is still being billed; only when it is "Stopped (Deallocated)" does the billing stop. From Azure's Pricing Details FAQ:
To ensure that you are not billed, stop the VM from the management portal. You can also stop the VM through PowerShell by calling ShutdownRoleOperation with 'PostShutdownAction' equal to "StoppedDeallocated". However, you will continue to be billed if you shut down a VM from inside (e.g. using power options in Windows) or through PowerShell by calling ShutdownRoleOperation with 'PostShutdownAction' equal to "Stopped".
Instead of destroying the VM, you can get to the deallocated state using the Azure control panel, or use the Azure cmdlets to force-stop the VM. This deallocates the VM and you won't have the networking problems. Unfortunately, this can't currently be done with the REST API.
I use the following in an app to stop the service:
RunPowerShellScript(@"Stop-AzureVM -ServiceName " + cloudServiceName + " -Name " + vmName + " -Force");
Hook that line up to a button, or use the REST API to query your cloud services, and then use the following function to run your PowerShell. Be sure to go through the Azure PowerShell getting-started setup first.
private string RunPowerShellScript(string scriptText)
{
    // Create a PowerShell runspace and open it.
    Runspace runspace = RunspaceFactory.CreateRunspace();
    runspace.Open();

    // Create a pipeline and feed it the script text.
    Pipeline pipeline = runspace.CreatePipeline();
    pipeline.Commands.AddScript(scriptText);

    // Add an extra command to transform the script output objects into nicely
    // formatted strings. Remove this line to get the actual objects that the
    // script returns. For example, the script "Get-Process" returns a
    // collection of System.Diagnostics.Process instances.
    pipeline.Commands.Add("Out-String");

    // Execute the script.
    Collection<PSObject> results = pipeline.Invoke();

    // Close the runspace.
    runspace.Close();

    // Convert the script result into a single string.
    StringBuilder stringBuilder = new StringBuilder();
    foreach (PSObject obj in results)
    {
        stringBuilder.AppendLine(obj.ToString());
    }
    return stringBuilder.ToString();
}
Try this...
Set-ExecutionPolicy Unrestricted
$wmi = Get-WmiObject win32_networkadapterconfiguration -filter "ipenabled = 'true'"
$wmi.SetDNSServerSearchOrder("10.0.2.6")

How to manage centralized values in a sharded environment

I have an ASP.NET app being developed for Windows Azure. It's been deemed necessary that we use sharding for the DB to improve write times since the app is very write heavy but the data is easily isolated. However, I need to keep track of a few central variables across all instances, and I'm not sure the best place to store that info. What are my options?
Requirements:
Must be durable, can survive instance reboots
Must be synchronized. It's incredibly important to avoid conflicting updates or at least throw an exception in such cases, rather than overwriting values or failing silently.
Must be reasonably fast (2,000+ reads/writes per second)
I thought about writing a separate component to run on a worker role that simply reads/writes the values in memory and flushes them to disk every so often, but I figure there's got to be something already written for that purpose that I can appropriate in Windows Azure.
I think what I'm looking for is a system like Apache ZooKeeper, but I don't want to have to deal with installing the JRE during worker role startup and all that jazz.
Edit: Based on the suggestion below, I'm trying to use Azure Table Storage using the following code:
var context = table.ServiceClient.GetTableServiceContext();
var item = context.CreateQuery<OfferDataItemTableEntity>(table.Name)
    .Where(x => x.PartitionKey == Name)
    .FirstOrDefault();

if (item == null)
{
    item = new OfferDataItemTableEntity(Name);
    context.AddObject(table.Name, item);
}

if (item.Allocated < Quantity)
{
    allocated = ++item.Allocated;
    context.UpdateObject(item);
    context.SaveChanges();
    return true;
}
However, the context.UpdateObject(item) call fails with "The context is not currently tracking the entity." Doesn't querying the context for the item initially add it to the context tracking mechanism?
Have you looked into SQL Azure Federations? It seems like exactly what you're looking for:
sharding for SQL Azure.
Here are a few links to read:
http://msdn.microsoft.com/en-us/library/windowsazure/hh597452.aspx
http://convective.wordpress.com/2012/03/05/introduction-to-sql-azure-federations/
http://searchcloudapplications.techtarget.com/tip/Tips-for-deploying-SQL-Azure-Federations
What you need is Table Storage, since it matches all your requirements:
Durable: yes. Table Storage is part of a Storage Account, which isn't tied to a specific Cloud Service or instance, so it survives instance reboots.
Synchronized: yes, for the same reason; every instance reads and writes the same Storage Account.
Avoiding conflicting updates: yes, this is possible with the use of ETags (optimistic concurrency).
Reasonably fast: very fast, up to 20,000 entities/messages/blobs per second.
Update:
Here is some sample code that uses the new storage SDK (2.0):
var storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
var table = storageAccount.CreateCloudTableClient()
.GetTableReference("Records");
table.CreateIfNotExists();
// Add item.
table.Execute(TableOperation.Insert(new MyEntity() { PartitionKey = "", RowKey ="123456", Customer = "Sandrino" }));
var user1record = table.Execute(TableOperation.Retrieve<MyEntity>("", "123456")).Result as MyEntity;
var user2record = table.Execute(TableOperation.Retrieve<MyEntity>("", "123456")).Result as MyEntity;
user1record.Customer = "Steve";
table.Execute(TableOperation.Replace(user1record));
user2record.Customer = "John";
table.Execute(TableOperation.Replace(user2record));
First it adds the item 123456.
Then I'm simulating 2 users getting that same record (imagine they both opened a page displaying the record).
User 1 is fast and updates the item. This works.
User 2 still had the window open. This means he's working on an old version of the item. He updates the old item and tries to save it. This causes the following exception (this is possible because the SDK matches the ETag):
The remote server returned an error: (412) Precondition Failed.
I ended up with a hybrid cache / table storage solution. All instances track the variable via Azure caching, while the first instance spins up a timer that saves the value to table storage once per second. On startup, the cache variable is initialized with the value saved to table storage, if available.
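For reference, here is a hedged sketch of that write-back timer using the classic storage SDK from the answer above; the table entity keys, the one-second interval, and the cache-read delegate are illustrative assumptions:

public class CentralValueWriter : IDisposable
{
    private readonly CloudTable _table;
    private readonly Func<long> _readCachedValue;   // reads the current value from the cache layer
    private readonly System.Threading.Timer _timer;

    public CentralValueWriter(CloudTable table, Func<long> readCachedValue)
    {
        _table = table;
        _readCachedValue = readCachedValue;
        // Flush the cached value to Table Storage once per second.
        _timer = new System.Threading.Timer(_ => Flush(), null, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));
    }

    private void Flush()
    {
        var entity = new DynamicTableEntity("central", "allocated");
        entity.Properties["Value"] = new EntityProperty(_readCachedValue());

        // InsertOrReplace is last-write-wins, which is safe here because only the
        // single "first instance" runs this timer.
        _table.Execute(TableOperation.InsertOrReplace(entity));
    }

    public void Dispose()
    {
        _timer.Dispose();
    }
}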
