How to get x-ms-request-id from Azure table storage api call - azure

I'm getting slow behavior from my Azure Table Storage API calls in a Windows Azure app. I need to get the request id (x-ms-request-id in the response header) for a particular call. Is there a way I can get it using the storage client API? Does the storage client API even expose this id? If not, is there any other way to get this id?
I am using the api in the following way:
public UserDataModel GetUserData(String UserId)
{
    UserDataModel osudm = null;
    try
    {
        var result = (from c in GetServiceContext().OrgUserIdTable
                      where (c.RowKey == UserId)
                      select c).FirstOrDefault();

        UserDataSource osuds = new UserDataSource(this.account);
        osudm = osuds.GetUserData(result.PartitionKey, result.UserName);
    }
    catch (Exception e)
    {
    }
    return osudm;
}

What you're asking here is more related to WCF Data Services than to Windows Azure (the storage client API uses WCF Data Services under the hood). Here is some example code showing how you can access the response headers:
var tableContext = new MyTableServiceContext(...);
DataServiceQuery<Order> query = tableContext.Orders.Where(o => o.RowKey == "1382898382") as DataServiceQuery<Order>;
IEnumerable<Order> result = query.Execute();
QueryOperationResponse response = result as QueryOperationResponse;
string requestId;
response.Headers.TryGetValue("x-ms-request-id", out requestId);
So what you'll do first is simply create your query and cast it to a DataServiceQuery of TType. Then you can call the Execute method on that query and cast the result to a QueryOperationResponse. This class gives you access to all headers, including x-ms-request-id.
Note that in this case you won't be able to use FirstOrDefault, since that doesn't return an IQueryable and you can't cast it to a DataServiceQuery of TType (unless there's another way to do it using WCF Data Services).
Note: the reason why the call is so slow might be the query itself. When you query the OrgUserIdTable table, you only filter on the RowKey. I don't know how much data or how many partitions you have in that table, but if you don't include the PartitionKey this can have a significant performance impact. By not including the PartitionKey, you force a scan over all partitions (possibly spread over multiple servers), which may well be what's making the call so slow.
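For illustration, a point query that supplies both keys could look roughly like this in your context (the PartitionKey value here is just an assumption about what you store in it):
var user = (from c in GetServiceContext().OrgUserIdTable
            where c.PartitionKey == organizationId   // assumed: whatever you actually store as PartitionKey
                  && c.RowKey == UserId
            select c).FirstOrDefault();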
I suggest you take a look at the following real world guidance to get a better insight on how and why partitioning relates to performance in Windows Azure Storage: Designing a Scalable Partitioning Strategy for Windows Azure Table Storage

Related

How to get the Azure Cosmos DB Request Units after each operation in Entity Framework

I can see in the logs/traces that the Entity Framework Cosmos DB provider displays the request units after each operation. Is there an easy way to get at that RU number programmatically? It can be pretty useful in integration and benchmark tests, CI/CD gates, etc. It should be easy, right? It is in the header of the response to the HttpClient.
This might not be the easiest method, but with the lack of other responses I thought it was worth mentioning.
You could add a custom HttpClientFactory using the CosmosDbContextOptionsBuilder class. Then let your factory return an instance of HttpClient with a custom DelegatingHandler. In the handler you can override SendAsync and inspect the response, whose headers should include the RU charge where applicable.
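As a rough sketch of that idea (the handler name, wiring, and logging are my assumptions; Cosmos DB reports the charge in the x-ms-request-charge response header):
using System;
using System.Globalization;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Surfaces the "x-ms-request-charge" header that Cosmos DB returns on each response.
public class RequestChargeHandler : DelegatingHandler
{
    public RequestChargeHandler() : base(new HttpClientHandler()) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);

        if (response.Headers.TryGetValues("x-ms-request-charge", out var values))
        {
            double charge = double.Parse(values.First(), CultureInfo.InvariantCulture);
            // Record the charge however suits your tests: a counter, a collector, logging, ...
            Console.WriteLine($"Request charge: {charge} RU");
        }

        return response;
    }
}

// Wiring (assumed to go in OnConfiguring or AddDbContext):
// optionsBuilder.UseCosmos(endpoint, key, databaseName,
//     cosmos => cosmos.HttpClientFactory(() => new HttpClient(new RequestChargeHandler())));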
You can also get the Request Units consumed in Azure Cosmos DB for each operation programmatically via the response object.
// For point reads (ItemResponse<T>)
double readRequestUnits = readResponse.RequestCharge;
// For queries (FeedResponse<T>)
double queryRequestUnits = feedResponse.RequestCharge;
Please go through the object model; there are similar APIs in the other language SDKs as well.
https://learn.microsoft.com/dotnet/api/microsoft.azure.cosmos.response-1?view=azure-dotnet

ServiceStack: business logic that depends on the database itself

I'm exploring ServiceStack and I'm not sure of the best way to implement some business logic.
Using the "Bookings CRUD" example I would like to enforce the following rule:
a given Booking can only be saved (either created or updated) if the hotel has enough free rooms for the particular dates of that booking
Please note that I'm not asking how to calculate "free rooms".
What I'm asking is, from the architectural point of view, how should this be done.
For example, one way would be:
create a request DTO to query the number of configured rooms (let's call it "QueryRooms")
use the existing "QueryBookings" to query current bookings present in database
create a " : Service" class to customize the Booking Service, in order to intercept the "CreateBooking" and "UpdateBooking" requests
inside the custom methods for "CreateBooking" and "UpdateBooking", somehow get the results of "QueryRooms" and "QueryBookings", check if there are enough free rooms for the current request, and proceed only if so
This doesn't look very clean, because the "CreateBooking" and "UpdateBooking" services would depend on "QueryRooms" and "QueryBookings".
What would be an elegant and efficient solution, using ServiceStack?
You can override AutoQuery CRUD operations with your own Service implementation using the AutoQuery DTO.
There you can use the Service Gateway to call existing Services, which lets you perform any additional validation and modify the request DTO before executing the AutoQuery operation that implements the API, e.g.:
public class MyCrudServices : Service
{
    public IAutoQueryDb AutoQuery { get; set; }

    public object Post(CreateBooking request)
    {
        var response = Gateway.Send(new QueryRooms
        {
            From = request.BookingStartDate,
            To = request.BookingEndDate,
        });

        if (response.Results.Count == 0)
            throw new Exception("No rooms available during those dates");

        request.RoomNumber = response.Results[0].Id;
        return AutoQuery.Create(request, base.Request);
    }
}
Note: calling in-process Services with the Service Gateway is efficient as it calls the C# method implementation directly, i.e. without incurring any HTTP overhead.

Timer based Azure function with Table storage, HTTP request, and Azure Service Bus

I have a process, currently written as a console application, that fires on a scheduled task. It reads data from Azure Table Storage and, based on that data, makes API calls to a third-party vendor we use, deserializes the response, and loops over an array in the results, saving each iteration of the loop into a different table in Azure Table Storage and then publishing a message for each iteration to Azure Service Bus, where those messages are consumed by another client.
In an effort to get more of our tasks into the cloud, I've done some research and it seems that an Azure function would be a good candidate to replace my console application. I spun up a new Azure function project in Visual Studio 2019 as a "timer" function and then dove into some reading where I got lost really fast.
The reading I've done talks about using "bindings" in my Run() method arguments, decorated with attributes for connection strings etc., but I'm not sure that is the direction I should be heading. It sounds like that would make authentication to my table storage easier, but I can't figure out how to use those "hooks" to query my table and then perform inserts. I haven't even gotten to the Service Bus part yet, nor looked into making HTTP calls to our third-party vendor's API.
I know this is a very broad question and I don't have any code to post because I'm having a tough time even getting out of the starting blocks with this. The MS documentation is all over the map and I can't find anything specific to my needs and I promise I've spent a fair bit of time trying.
Are Azure functions even the right path I should be travelling? If not, what other options are out there?
TIA
You should stick with Azure Functions, using a Timer Trigger to replace your console app.
The bindings (which can be used for input/output) are helpers that save you some lines of code. For example, rather than using the following code to insert data into an Azure table:
// Retrieve storage account information from the connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(storageConnectionString);

// Create a table client for interacting with the Table service
CloudTableClient tableClient = storageAccount.CreateCloudTableClient(new TableClientConfiguration());

// Get a reference to the target table
CloudTable table = tableClient.GetTableReference("MyTable");

// Populate an entity (MyPoco must implement ITableEntity, e.g. by deriving from TableEntity)
var entity = new MyPoco { PartitionKey = "Http", RowKey = Guid.NewGuid().ToString(), Text = input.Text };

// Create the InsertOrMerge table operation
TableOperation insertOrMergeOperation = TableOperation.InsertOrMerge(entity);

// Execute the operation
TableResult result = await table.ExecuteAsync(insertOrMergeOperation);
you would use:
[FunctionName("TableOutput")]
[return: Table("MyTable")]
public static MyPoco TableOutput([HttpTrigger] dynamic input, ILogger log)
{
log.LogInformation($"C# http trigger function processed: {input.Text}");
return new MyPoco { PartitionKey = "Http", RowKey = Guid.NewGuid().ToString(), Text = input.Text };
}
PS: the trigger in the previous code is an HTTP Trigger, but it's only there to illustrate how to use an output binding.
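To bring it closer to your scenario, here is a rough sketch of a timer-triggered function that reads rows via a Table input binding and publishes one Service Bus message per row (the function name, schedule, table/queue names, and connection-setting names are all assumptions; adapt them to your app):
public static class ScheduledSync
{
    [FunctionName("ScheduledSync")]
    public static async Task Run(
        [TimerTrigger("0 */15 * * * *")] TimerInfo timer,                                                // assumed schedule: every 15 minutes
        [Table("SourceTable", Connection = "StorageConnection")] CloudTable sourceTable,                 // Table Storage input binding
        [ServiceBus("my-queue", Connection = "ServiceBusConnection")] IAsyncCollector<string> messages,  // Service Bus output binding
        ILogger log)
    {
        // Read a batch of rows from the source table
        var query = new TableQuery<DynamicTableEntity>().Take(100);
        var segment = await sourceTable.ExecuteQuerySegmentedAsync(query, null);

        foreach (var row in segment.Results)
        {
            // ...call the third-party API and save its results to another table here...
            await messages.AddAsync(row.RowKey);    // publish one message per row
            log.LogInformation($"Queued message for {row.RowKey}");
        }
    }
}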
You can find more information here:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-table
and you should also work through this module: https://learn.microsoft.com/en-us/learn/modules/chain-azure-functions-data-using-bindings/

Azure Blob Storage to host images / media - fetching with blob URL (without intermediary controller)

In this article, the author provides a way to upload via a WebAPI controller. This makes sense to me.
He then recommends using an API Controller and a dedicated service method to deliver the blob:
public async Task<HttpResponseMessage> GetBlobDownload(int blobId)
{
    // IMPORTANT: This must return HttpResponseMessage instead of IHttpActionResult
    try
    {
        var result = await _service.DownloadBlob(blobId);
        if (result == null)
        {
            return new HttpResponseMessage(HttpStatusCode.NotFound);
        }

        // Reset the stream position; otherwise, download will not work
        result.BlobStream.Position = 0;

        // Create response message with blob stream as its content
        var message = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StreamContent(result.BlobStream)
        };

        // Set content headers
        message.Content.Headers.ContentLength = result.BlobLength;
        message.Content.Headers.ContentType = new MediaTypeHeaderValue(result.BlobContentType);
        message.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
        {
            FileName = HttpUtility.UrlDecode(result.BlobFileName),
            Size = result.BlobLength
        };

        return message;
    }
    catch (Exception ex)
    {
        return new HttpResponseMessage
        {
            StatusCode = HttpStatusCode.InternalServerError,
            Content = new StringContent(ex.Message)
        };
    }
}
My question is - why can't we just reference the blob URL directly after storing it in the database (instead of fetching via Blob ID)?
What's the benefit of fetching through a controller like this?
You can certainly deliver a blob directly, which then avoids using resources of your app tier (vm, app service, etc). Just note that, if blobs are private, you'd have to provide a special signed URI to the client app (e.g. adding a shared access signature) to allow this URI to be used publicly (for a temporary period of time). You'd generate the SAS within your app tier.
You'd still have all of your access control logic in your controller, to decide who has the rights to the object, for how long, etc. But you'd no longer need to stream the content through your app (consuming cpu, memory, & network resources). And you'd still be able to use https with direct storage access.
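As a minimal sketch of handing out a SAS from the app tier (the container/blob names and the 15-minute window are assumptions, and this uses the classic WindowsAzure.Storage SDK; adjust if you're on the newer Azure.Storage.Blobs package):
public string GetBlobSasUri(CloudStorageAccount account, string containerName, string blobName)
{
    CloudBlobClient blobClient = account.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference(containerName);
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    // Read-only access, valid for 15 minutes
    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
    };

    // The SAS token is appended to the blob's public URI
    return blob.Uri.AbsoluteUri + blob.GetSharedAccessSignature(policy);
}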
Quite simply, you can enforce access control centrally when you use a controller. You have way more control over who/what/why is accessing the file. You can also log requests pretty easily too.
Longer term, you might want to change the locations of your files, add a partitioning strategy for scalability, or do something else in your app that requires a change that you don't see right now. When you use a controller you can isolate the client code from all of those inevitable changes.

Handling large number of same requests in Azure/IIS WebRole

I have an Azure Cloud Service based HTTP API which is currently serving its data out of an Azure SQL database. We also have an in-role cache on the WebRole side.
Generally this model is working fine for us, but sometimes what happens is that we get a large number of requests for the same resource within a short span of time, and if that resource is not in the cache, all the requests go directly to our DB, which is a problem for us as many times the DB is not able to take that much load.
Looking at the nature of the problem, it seems like it should be a pretty common one that most people building APIs would face. I was thinking that somehow I could send only the first request to the DB and hold all the remaining ones until the first one completes, to control the load going to the DB, but I didn't find a good way of doing it. Is there any standard/recommended way of doing this in Azure/IIS?
The way we're handling this kind of scenario is by putting calls to the DB in a lock statement. That way only one caller will hit the DB. Here's pseudo code that you can try:
// A lock object shared by all callers
private static readonly object _cacheLock = new object();

var cachedItem = ReadFromCache();
if (cachedItem != null)
{
    return cachedItem;
}

lock (_cacheLock)
{
    // Check again: another caller may have filled the cache while we were waiting on the lock
    cachedItem = ReadFromCache();
    if (cachedItem != null)
    {
        return cachedItem;
    }

    var itemsFromDB = ReadFromDB();
    PutItemsInCache(itemsFromDB);
    return itemsFromDB;
}
