I want to show a message to the end user when a blob upload or download is taking too long. I found a useful blog here.
A simple linear retry policy:
public static RetryPolicy LinearRetry(int retryCount, TimeSpan intervalBetweenRetries)
{
return () =>
{
return (int currentRetryCount, Exception lastException, out TimeSpan retryInterval) =>
{
// Do custom work here
// Set backoff
retryInterval = intervalBetweenRetries;
// Decide if we should retry, return bool
return currentRetryCount < retryCount;
};
};
}
But I don't understand how to send a response back to the user while retrying. Is this the right way, or is there something else I should do? Please suggest.
The OperationContext class in the Storage Client Library has an event called Retrying that you can handle to send a message back to the client.
For example, I created a simple console application that tries to create a blob container. When I ran it, I deliberately turned off Internet access so that I could simulate a situation where the operation would be retried. In the event handler I simply write something back to the console; you could raise another event from there that sends a message back to your client.
var requestOptions = new BlobRequestOptions()
{
RetryPolicy = new ExponentialRetry(),
};
var operationContext = new OperationContext();
operationContext.Retrying += (sender, args) =>
{
Console.WriteLine("I'm retrying ....");
};
var cloudStorageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
var blobClient = cloudStorageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("test");
container.CreateIfNotExists(requestOptions, operationContext);
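As a follow-up, here is a rough sketch of how the same Retrying hook might be wired into a blob upload so the end user sees a notice while the client retries. The method name UploadWithRetryNoticeAsync and the IProgress<string> callback are illustrative assumptions, not part of the Storage Client Library:
public static async Task UploadWithRetryNoticeAsync(
    CloudBlobContainer container, string blobName, Stream content, IProgress<string> progress)
{
    var options = new BlobRequestOptions
    {
        // 2-second backoff, up to 5 attempts (values are just examples)
        RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(2), 5)
    };

    var context = new OperationContext();
    context.Retrying += (sender, args) =>
        progress.Report("The upload is taking longer than expected; retrying...");

    var blob = container.GetBlockBlobReference(blobName);
    await blob.UploadFromStreamAsync(content, null, options, context);
}
In a web scenario, the progress.Report call could instead push a notification to the client (for example over SignalR) rather than writing to a console.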
We need to write some software that receives events one at a time, and we need to ingest them into ADX (Azure Data Explorer). We are struggling to understand how the Kusto client is meant to be used.
public async Task SaveEventAsync(object evt)
{
    var _kcsb = new KustoConnectionStringBuilder(Uri).WithAadApplicationKeyAuthentication(
        applicationClientId: "{}",
        applicationKey: "{}",
        authority: TenantId);
    using var ingestClient = KustoIngestFactory.CreateQueuedIngestClient(_kcsb);
    // Create your custom implementation of IRetryPolicy, which will affect how the ingest client handles retrying on transient failures
    IRetryPolicy retryPolicy = new NoRetry();
    // This line sets the retry policy on the ingest client that will be enforced on every ingest call from here on
    ((IKustoQueuedIngestClient)ingestClient).QueueOptions.QueueRequestOptions.RetryPolicy = retryPolicy;
    var ingestProperties = new KustoIngestionProperties(DatabaseName, TableName)
    {
        Format = DataSourceFormat.json,
        IngestionMapping = new IngestionMapping
        {
            IngestionMappingKind = Kusto.Data.Ingestion.IngestionMappingKind.Json,
            IngestionMappingReference = MappingName
        }
    };
    // Serialize the event into an in-memory stream
    var stream = new MemoryStream();
    using var streamWriter = new StreamWriter(stream: stream, encoding: Encoding.UTF8, bufferSize: 4096, leaveOpen: true);
    using var jsonWriter = new JsonTextWriter(streamWriter);
    var serializer = new JsonSerializer();
    serializer.Serialize(jsonWriter, evt);
    jsonWriter.Flush();
    streamWriter.Flush();
    stream.Seek(0, SeekOrigin.Begin);
    // Tell the client to ingest this
    await ingestClient.IngestFromStreamAsync(stream, ingestProperties);
}
Now I have several concerns with this. We are calling this function 300 to 500 times a second. I believe the queued ingest client has built-in batching, but don't we then need to use a singleton instance of the client?
The next thing is that I am creating a stream per event and then calling ingest. This feels wrong. Is there no way to set up the client once and then just enqueue each event into it as we receive them?
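For illustration, here is a minimal sketch of the long-lived, shared client arrangement being asked about. It assumes the queued ingest client is meant to be created once and reused for the lifetime of the process; the wrapper type EventIngestor and the method SaveEventAsync are hypothetical names, not part of the Kusto SDK:
public sealed class EventIngestor : IDisposable
{
    private readonly IKustoQueuedIngestClient _ingestClient;
    private readonly KustoIngestionProperties _ingestProperties;

    public EventIngestor(string clusterUri, string appId, string appKey, string tenantId,
                         string databaseName, string tableName, string mappingName)
    {
        var kcsb = new KustoConnectionStringBuilder(clusterUri)
            .WithAadApplicationKeyAuthentication(appId, appKey, tenantId);

        // Created once; every SaveEventAsync call reuses the same client.
        _ingestClient = KustoIngestFactory.CreateQueuedIngestClient(kcsb);

        _ingestProperties = new KustoIngestionProperties(databaseName, tableName)
        {
            Format = DataSourceFormat.json,
            IngestionMapping = new IngestionMapping
            {
                IngestionMappingKind = Kusto.Data.Ingestion.IngestionMappingKind.Json,
                IngestionMappingReference = mappingName
            }
        };
    }

    public async Task SaveEventAsync(object evt)
    {
        // One small stream per event; with queued ingestion the payload is only
        // staged here, and the service aggregates payloads into batches according
        // to the table's ingestion batching policy.
        using var stream = new MemoryStream(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(evt)));
        await _ingestClient.IngestFromStreamAsync(stream, _ingestProperties);
    }

    public void Dispose() => _ingestClient.Dispose();
}
The idea is that a single EventIngestor instance (for example, registered as a singleton in a DI container) avoids paying the client construction cost on every one of the 300 to 500 calls per second.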
I have the following workflow:
1. Service Bus receives messages.
2. An Azure Function triggers and tries to deliver these messages via HTTP to some service.
3. If delivery fails, the function throws a custom exception and disables the topic subscription via the code below.
4. Another function in parallel pings a special health-check endpoint of the service, and if it gets a 200, it tries to enable the subscription again so the flow resumes.
5. These steps can be reproduced N times, because the health check keeps returning 200 while the delivery URL from point 2 keeps returning a 4xx code.
After each new attempt to enable the subscription and deliver the message, I expect the delivery count to increase, so that eventually (after 10 delivery attempts) the message gets dead-lettered.
In reality, it stays at 1.
I assume it may be reset when I call CreateOrUpdate with the status changed.
If so, what other way is there to manage the subscription status (instead of the Microsoft.Azure.Management package) so that the messages' delivery count is not reset?
UPDATE: Function code
public static class ESBTESTSubscriptionTrigger
{
private static readonly HttpClient Client = new HttpClient();
private static IDatabase redisCache;
[FunctionName("ESBTESTSubscriptionTrigger")]
[Singleton]
public static async Task Run([ServiceBusTrigger("Notifications", "ESBTEST", AccessRights.Listen, Connection = "NotificationsBusConnectionString")]BrokeredMessage serviceBusMessage, TraceWriter log, [Inject]IKeyVaultSecretsManager keyVaultSecretsManager)
{
var logicAppUrl = await keyVaultSecretsManager.GetSecretAsync("NotificationsLogicAppUrl");
if (redisCache == null)
{
redisCache = RedisCacheConnectionManager.GetRedisCacheConnection(
keyVaultSecretsManager.GetSecretAsync("RedisCacheConnectionString").GetAwaiter().GetResult());
}
if (string.IsNullOrWhiteSpace(logicAppUrl))
{
log.Error("Logic App URL should be provided in Application settings of function App.");
throw new ParameterIsMissingException("Logic App URL should be provided in Application settings of function App.");
}
var applicationId = serviceBusMessage.Properties["applicationId"].ToString();
var eventName = serviceBusMessage.Properties.ContainsKey("Event-Name") ? serviceBusMessage.Properties["Event-Name"].ToString() : string.Empty;
if (string.IsNullOrWhiteSpace(applicationId))
{
log.Error("ApplicationId should be present in service bus message properties.");
throw new ParameterIsMissingException("Application id is missing in service bus message.");
}
Stream stream = serviceBusMessage.GetBody<Stream>();
StreamReader reader = new StreamReader(stream);
string s = reader.ReadToEnd();
var content = new StringContent(s, Encoding.UTF8, "application/json");
content.Headers.Add("ApplicationId", applicationId);
HttpResponseMessage response;
try
{
response = await Client.PostAsync(logicAppUrl, content);
}
catch (HttpRequestException e)
{
log.Error($"Logic App responded with {e.Message}");
throw new LogicAppBadRequestException($"Logic App responded with {e.Message}", e);
}
if (!response.IsSuccessStatusCode)
{
log.Error($"Logic App responded with {response.StatusCode}");
var serviceBusSubscriptionsSwitcherUrl = await keyVaultSecretsManager.GetSecretAsync("ServiceBusTopicSubscriptionSwitcherUri");
var sbSubscriptionSwitcherResponse = await Client.SendAsync(
    new HttpRequestMessage(HttpMethod.Post, serviceBusSubscriptionsSwitcherUrl)
    {
        Content = new StringContent(
            $"{{\"Action\":\"Disable\",\"SubscriptionName\":\"{applicationId}\"}}",
            Encoding.UTF8,
            "application/json")
    });
if (sbSubscriptionSwitcherResponse.IsSuccessStatusCode == false)
{
throw new FunctionNotAvailableException($"ServiceBusTopicSubscriptionSwitcher responded with {sbSubscriptionSwitcherResponse.StatusCode}");
}
throw new LogicAppBadRequestException($"Logic App responded with {response.StatusCode}");
}
if (!string.IsNullOrWhiteSpace(eventName))
{
redisCache.KeyDelete($"{applicationId}{eventName}DeliveryErrorEmailSent");
}
}
}
I am making a small App that should list the number of items in my Azure queues.
When I use FetchAttributesAsync and ApproximateMessageCount in a Console App, I get the expected result in ApproximateMessageCount after a call to FetchAttributesAsync (or FetchAttributes).
When I use the same in a Universal Windows app, ApproximateMessageCount remains stuck at null after a call to FetchAttributesAsync (FetchAttributes is not available there).
Console code:
CloudStorageAccount _account;
if (CloudStorageAccount.TryParse(_connectionstring, out _account))
{
var queueClient = _account.CreateCloudQueueClient();
Console.WriteLine(" {0}", _account.QueueEndpoint);
Console.WriteLine(" ----------------------------------------------");
var queues = (await queueClient.ListQueuesSegmentedAsync(null)).Results;
foreach (CloudQueue q in queues)
{
await q.FetchAttributesAsync();
Console.WriteLine($" {q.Name,-40} {q.ApproximateMessageCount,5}");
}
}
Universal App code:
IEnumerable<CloudQueue> queues;
CloudStorageAccount _account;
CloudQueueClient queueClient;
CloudStorageAccount.TryParse(connectionstring, out _account);
queueClient = _account.CreateCloudQueueClient();
queues = (await queueClient.ListQueuesSegmentedAsync(null)).Results;
foreach (CloudQueue q in queues)
{
await q.FetchAttributesAsync();
var count = q.ApproximateMessageCount;
// count is always null here!!!
}
I have tried all kinds of alternatives, like Wait()s and such on the awaitables. Whatever I try, the ApproximateMessageCount stays null with determination :-(.
Am I missing something?
I think you have discovered a bug in the storage client library. I looked up the code on GitHub, and essentially, instead of reading the value of the Approximate Message Count header, the code is reading the value of the Lease Status header.
In the QueueHttpResponseParsers.cs class:
public static string GetApproximateMessageCount(HttpResponseMessage response)
{
return response.Headers.GetHeaderSingleValueOrDefault(Constants.HeaderConstants.LeaseStatus);
}
This method should have been:
public static string GetApproximateMessageCount(HttpResponseMessage response)
{
return response.Headers.GetHeaderSingleValueOrDefault(Constants.HeaderConstants.ApproximateMessagesCount);
}
I have submitted a bug for this: https://github.com/Azure/azure-storage-net/issues/155.
We are getting the below exception while reading data using JsonTextReader
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
The JsonTextReader jsonReader is passed in as a parameter:
while (hasRecords(jsonReader, JsonToken.StartObject, null, null)) //Row
{
...
//it's ok to read this all into memory - it's just one row's worth of data
JArray values = (JArray)JToken.ReadFrom(jsonReader);
Also including the code for the HTTP POST implementation, for better clarity:
HttpClientHandler handler = new HttpClientHandler() { Credentials = taskProfileInfo.Credential };
HttpClient httpClient = new HttpClient(handler) { Timeout = TimeSpan.FromSeconds(taskProfileInfo.CommandTimeout) };
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
HttpResponseMessage response;
HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post, url);
request.Content = new StringContent(postBody, Encoding.UTF8, "application/json");
response = await httpClient.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
response.EnsureSuccessStatusCode();
//using (var responseStream = await response.Content.ReadAsStreamAsync())
//{
// using (var reader = new StreamReader(responseStream))
// {
// responseFromAPI = reader.ReadToEnd();
// }
//}
return new JsonTextReader(new StreamReader(await response.Content.ReadAsStreamAsync()));
I'd appreciate it if anyone can help us.
Edit: Please note that we are able to debug it locally and it works fine. The problem only occurs when we run this as a Worker Role in an Azure Cloud Service.
I finally addressed this issue. Just to close this out (it might help someone):
After doing remote debugging, we found the inner exception below:
{"The request was aborted: The request was canceled."}
The root cause of this issue is that we set the timeout to less than the time the actual read (JsonTextReader) operation takes. The line of code below sets the timeout:
HttpClient httpClient = new HttpClient(handler) { Timeout = TimeSpan.FromSeconds(taskProfileInfo.CommandTimeout) };
So the fix is to increase the timeout value so that the request is not cancelled while the data is being read.
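For example (the values here are illustrative), the client can be given a much larger, or effectively unlimited, timeout:
HttpClient httpClient = new HttpClient(handler)
{
    // Pick a value comfortably larger than the slowest expected read,
    // or disable the client-side timeout entirely.
    Timeout = System.Threading.Timeout.InfiniteTimeSpan // or e.g. TimeSpan.FromMinutes(10)
};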
As an alternative, I fixed it by increasing the timeout in the web.config:
<sessionState timeout="50" />
I have a very basic question about Windows Azure Storage Queue errors/access.
I am trying to find out whether the given storage account already contains a queue with a given name, say "queue1". I do not want to create the queue if it does not exist, so I am not keen on using the CreateIfNotExist method. The permissions I have given the SAS token are Process and Add (since all I want to do is add a new message to the queue only if it already exists, and throw an error otherwise).
The problem is that when I try to get reference to a fake named queue, and add a message to it, I get a 403. 403 can also occur when the SAS token does not have permissions, so I cannot be sure what is causing the error.
Is there a way I could explicitly know if the queue exists or not?
I have tried the BeginExists and EndExists methods, but they always return false even when I can see that the queue is there.
Any suggestions?
The Get Queue Metadata REST API operation will return status code 200 if the queue exists or a Queue Service Error Code otherwise.
Regarding authorization:
This operation can be performed by the account owner and by anyone with a shared access signature that has permission to perform this operation.
A GET request to
https://myaccount.queue.core.windows.net/myqueue?comp=metadata
will return a response like:
Response Status:
HTTP/1.1 200 OK
Response Headers:
Transfer-Encoding: chunked
x-ms-approximate-messages-count: 0
Date: Fri, 16 Sep 2011 01:27:38 GMT
Server: Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0
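For illustration, a rough C# sketch of that check (an assumption here: the SAS token used has Read permission on the queue; a token with only Process and Add permissions may not be authorized for Get Queue Metadata and could still come back as 403):
// Hypothetical helper: 200 OK means the queue exists, 404 means it does not.
private static async Task<bool> QueueExistsViaRestAsync(string accountName, string queueName, string sasToken)
{
    var uri = $"https://{accountName}.queue.core.windows.net/{queueName}?comp=metadata&{sasToken.TrimStart('?')}";
    using (var httpClient = new HttpClient())
    using (var response = await httpClient.GetAsync(uri))
    {
        if (response.StatusCode == HttpStatusCode.NotFound)
        {
            return false; // queue does not exist
        }
        response.EnsureSuccessStatusCode(); // surfaces 403 (authorization) and other errors
        return true; // queue exists
    }
}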
Are you sure you're getting a 403 error even when the queue does not exist? Based on what you described above, I created a simple console app. The queue does not exist in my storage account. When I try to add a message with a valid SAS token, I get a 404 error:
CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials("account", "key"), false);
CloudQueueClient client = storageAccount.CreateCloudQueueClient();
CloudQueue queue = client.GetQueueReference("non-existent-queue");
var queuePolicy = new SharedAccessQueuePolicy();
var sas = queue.GetSharedAccessSignature(new SharedAccessQueuePolicy()
{
SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30),
Permissions = SharedAccessQueuePermissions.Add | SharedAccessQueuePermissions.ProcessMessages | SharedAccessQueuePermissions.Update
}, null);
StorageCredentials creds = new StorageCredentials(sas);
var queue1 = new CloudQueue(queue.Uri, creds);
try
{
queue1.AddMessage(new CloudQueueMessage("This is a test message"));
}
catch (StorageException excep)
{
//Get 404 error here
}
Next, I made the SAS token invalid by setting its expiry to 30 minutes before the current time. Now when I run the application, I get a 403 error as expected.
CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials("account", "key"), false);
CloudQueueClient client = storageAccount.CreateCloudQueueClient();
CloudQueue queue = client.GetQueueReference("non-existent-queue");
var queuePolicy = new SharedAccessQueuePolicy();
var sas = queue.GetSharedAccessSignature(new SharedAccessQueuePolicy()
{
SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(-30),//-30 to ensure SAS is invalid
Permissions = SharedAccessQueuePermissions.Add | SharedAccessQueuePermissions.ProcessMessages | SharedAccessQueuePermissions.Update
}, null);
StorageCredentials creds = new StorageCredentials(sas);
var queue1 = new CloudQueue(queue.Uri, creds);
try
{
queue1.AddMessage(new CloudQueueMessage("This is a test message"));
}
catch (StorageException excep)
{
//Get 403 error here
}
There are now Exists and ExistsAsync methods (with various overloads).
Example of the former in use:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference(queueName);
bool doesExist = queue.Exists();
You will want a reference to Microsoft.Azure.Storage.Queue (I believe older 'cloud' assemblies may not have had these methods; initially I could only access ExistsAsync, but once I had added the right package via NuGet, Exists was also available).
For more details see the following links:
Exists
ExistsAsync
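If you prefer the asynchronous call, the equivalent with the same types would be:
// Same setup as above, just awaiting the asynchronous check.
bool doesExist = await queue.ExistsAsync();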
There is no Exists method in v12 either, so I wrote a simple helper method to do the check:
private async Task<bool> QueueExistsAsync(QueueClient queue)
{
try
{
await queue.GetPropertiesAsync();
return true;
}
catch (RequestFailedException ex)
{
if (ex.Status == (int) HttpStatusCode.NotFound)
{
return false;
}
throw;
}
}