My function stores data in Azure Data Lake Storage Gen1, but I get the error "An error occurred while sending the request."
When I investigated, I found that the number of connections from my Azure Function climbs past 8k and then it breaks.
Here is my code (appending to a file in Azure Data Lake Storage Gen1):
// This authorizes against Azure Data Lake Storage Gen1
await InitADLInfo(adlsAccountName);
DataLakeStoreFileSystemManagementClient _adlsFileSystemClient;
// Here is the append to Data Lake Storage Gen1
using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(buffer)))
{
await _adlsFileSystemClient.FileSystem.AppendAsync(_adlsAccountName, path, stream);
}
How can I dispose of the client when each append finishes? I tried
_adlsFileSystemClient.Dispose();
but it didn't dispose anything; my connection count kept going up.
I read this:
https://www.troyhunt.com/breaking-azure-functions-with-too-many-connections/
and I brought the connection count down by following its advice: DO NOT create a new client with every function invocation.
Example code:
// Create a single, static HttpClient
private static HttpClient httpClient = new HttpClient();
public static async Task Run(string input)
{
var response = await httpClient.GetAsync("http://example.com");
// Rest of function
}
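Applied to the Data Lake client, the same idea might look like the sketch below. This is only a sketch, not the original code: InitADLInfoAsync is a hypothetical helper standing in for your InitADLInfo that authorizes once and returns the client.

private static DataLakeStoreFileSystemManagementClient _adlsFileSystemClient;

public static async Task AppendToAdlsAsync(string adlsAccountName, string path, string buffer)
{
    // Create the client once per host instance and reuse it, so the
    // connection count no longer grows with every append.
    // InitADLInfoAsync is a hypothetical stand-in for the original InitADLInfo.
    if (_adlsFileSystemClient == null)
        _adlsFileSystemClient = await InitADLInfoAsync(adlsAccountName);

    using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(buffer)))
    {
        await _adlsFileSystemClient.FileSystem.AppendAsync(adlsAccountName, path, stream);
    }
}

Note that concurrent invocations can race on the null check; if that matters, wrap the initialization in a Lazy<Task<DataLakeStoreFileSystemManagementClient>> instead.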
Related
I want to download a storage blob from Azure and stream it to a client via a .NET web app. The blob was uploaded correctly and is visible in my Azure storage account.
Surprisingly, the following throws an exception within HttpBaseStream:
[...]
var blobClient = _containerClient.GetBlobClient(Path.Combine(fileName));
var stream = await blobClient.OpenReadAsync();
return stream;
-> When I step further and return a File (return File(stream, MediaTypeNames.Application.Octet);), the download works as intended.
I tried to push the stream into a MemoryStream, which also fails with the same exception:
[...]
var blobClient = _containerClient.GetBlobClient(Path.Combine(fileName));
var stream = new MemoryStream();
await blobClient.DownloadToAsync(stream);
return stream;
-> When I step further, returning the file results in a timeout.
How can I fix that? Why do I get this exception? I followed the official quickstart guide from Microsoft.
the following throws an exception within HttpBaseStream
It looks like the HTTP result type is attempting to set the Content-Length header and is reading Length to do so. That would be the natural thing to do. However, it would also be natural to handle the NotSupportedException and just not set Content-Length at all.
If the NotSupportedException only shows up when running in the debugger, then just ignore it.
If the exception is actually thrown to your code (i.e., causing the request to fail), then you'll need to follow the rest of this answer.
First, create a minimal reproducible example and report a bug to the .NET team.
To work around this issue in the meantime, I recommend writing a stream wrapper that returns an already-determined length, which you can get from the Azure blob attributes. E.g.:
public sealed class KnownLengthStreamWrapper : Stream
{
private readonly Stream _stream;
public KnownLengthStreamWrapper(Stream stream, long length)
{
_stream = stream;
Length = length;
}
public override long Length { get; private set; }
... // override all other Stream members and forward to _stream.
}
That should be sufficient to get your app working.
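For illustration, a hypothetical usage sketch, assuming the same blobClient as in the question; GetPropertiesAsync supplies the blob's ContentLength:

// Fetch the length from the blob's attributes, then hand the HTTP result
// a stream whose Length is already known.
BlobProperties properties = await blobClient.GetPropertiesAsync();
Stream blobStream = await blobClient.OpenReadAsync();
return new KnownLengthStreamWrapper(blobStream, properties.ContentLength);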
I tried to push the stream into a MemoryStream
This didn't work because you'd need to "rewind" the MemoryStream at some point, e.g.:
var stream = new MemoryStream();
await blobClient.DownloadToAsync(stream);
stream.Position = 0;
return stream;
Check this sample of all the blob options, which I have already posted on GitHub and which works as expected. Reference
public void DownloadBlob(string path)
{
storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudBlobClient client = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("images");
CloudBlockBlob blockBlob = container.GetBlockBlobReference(Path.GetFileName(path));
using (MemoryStream ms = new MemoryStream())
{
blockBlob.DownloadToStream(ms);
HttpContext.Current.Response.ContentType = blockBlob.Properties.ContentType.ToString();
HttpContext.Current.Response.AddHeader("Content-Disposition", "Attachment; filename=" + Path.GetFileName(path).ToString());
HttpContext.Current.Response.AddHeader("Content-Length", blockBlob.Properties.Length.ToString());
HttpContext.Current.Response.BinaryWrite(ms.ToArray());
HttpContext.Current.Response.Flush();
HttpContext.Current.Response.Close();
}
}
We are migrating a transaction-processing service, which processed messages from MSMQ and stored transactions in a SQL Server database, to the Azure Storage Queue (storing the IDs of the messages in the queue and placing the actual messages in Azure Blob Storage).
We need to be able to process at least 200,000 messages per hour, but at the moment we barely reach 50,000 messages per hour.
Our application requests batches of 250 messages from the queue (which now takes about 2 seconds to get the IDs from the Azure queue and about 5 seconds to get the actual data from Azure Blob Storage), and we store this data into the database in one go using a stored procedure that accepts a datatable.
Our service also runs in Azure on a virtual machine, and we use the NuGet libraries Azure.Storage.Queues and Azure.Storage.Blobs suggested by Microsoft to access the Azure Storage queue and blob storage.
Does anyone have suggestions on how to improve the speed of reading messages from the Azure queue and then retrieving the data from Azure Blob Storage?
var managedIdentity = new ManagedIdentityCredential();
UriBuilder fullUri = new UriBuilder()
{
Scheme = "https",
Host = string.Format("{0}.queue.core.windows.net",appSettings.StorageAccount),
Path = string.Format("{0}", appSettings.QueueName),
};
queue = new QueueClient(fullUri.Uri, managedIdentity);
queue.CreateIfNotExists();
...
var result = await queue.ReceiveMessagesAsync(1);
...
UriBuilder fullUri = new UriBuilder()
{
Scheme = "https",
Host = string.Format("{0}.blob.core.windows.net", storageAccount),
Path = string.Format("{0}", containerName),
};
_blobContainerClient = new BlobContainerClient(fullUri.Uri, managedIdentity);
_blobContainerClient.CreateIfNotExists();
...
public async Task<BlobMessage> GetBlobByNameAsync(string blobName)
{
Ensure.That(blobName).IsNotNullOrEmpty();
var blobClient = _blobContainerClient.GetBlobClient(blobName);
if (!blobClient.Exists())
{
_log.Error($"Blob {blobName} not found.");
throw new InfrastructureException($"Blob {blobName} not found.");
}
BlobDownloadInfo download = await blobClient.DownloadAsync();
return new BlobMessage
{
BlobName = blobClient.Name,
BaseStream = download.Content,
Content = await GetBlobContentAsync(download)
};
}
Thanks,
Vincent.
Based on the code you posted, I can suggest two improvements:
Receive 32 messages at a time instead of 1: Currently you're getting just one message at a time (var result = await queue.ReceiveMessagesAsync(1);). You can receive a maximum of 32 messages from the top of the queue, so just change the code to var result = await queue.ReceiveMessagesAsync(32);. This saves you 31 round trips to the storage service, which should lead to a noticeable performance improvement.
Don't try to create the blob container every time: Currently you're trying to create the blob container every time you process a message (_blobContainerClient.CreateIfNotExists();). That is really unnecessary. Fetching 32 messages already reduces this call by a factor of 31, but you can simply move it to your application startup so that it runs only once during the application's lifetime. A sketch of both changes follows.
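A minimal sketch, reusing the names from the question (queue, GetBlobByNameAsync, BlobMessage) and assuming each queue message's text is the blob name; _blobContainerClient.CreateIfNotExists() is assumed to have moved to startup:

public async Task ProcessBatchAsync()
{
    // One round trip now returns up to 32 messages instead of 1.
    QueueMessage[] messages = (await queue.ReceiveMessagesAsync(maxMessages: 32)).Value;

    foreach (QueueMessage message in messages)
    {
        // Assumption: the message text carries the blob name.
        BlobMessage blobMessage = await GetBlobByNameAsync(message.MessageText);

        // ... hand blobMessage to the stored-procedure/datatable pipeline ...

        await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
    }
}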
I have to write an Azure Function with a timer trigger that hits an API every weekend, fetches the data from that API, and stores that data in my Azure SQL database.
I am not sure how to call an API from a timer-triggered Azure Function to fetch the data, nor how to store that data in my Azure SQL database.
You can follow this link to get started: Azure Function timer trigger.
You have to use HttpClient to call the API.
static HttpClient client = new HttpClient();

static async Task<Product> GetProductAsync(string path)
{
    // One-time setup; BaseAddress can only be set before the first request.
    // Update the port number in the following line.
    if (client.BaseAddress == null)
    {
        client.BaseAddress = new Uri("http://localhost:64195/");
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));
    }

    Product product = null;
    HttpResponseMessage response = await client.GetAsync(path);
    if (response.IsSuccessStatusCode)
    {
        product = await response.Content.ReadAsAsync<Product>();
    }
    return product;
}
Note: If you have to call lots of APIs/endpoints, you may run into port exhaustion. HttpClientFactory is recommended for that scenario.
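Putting the pieces together, here is a minimal sketch of a timer-triggered function that calls an API on the weekend and stores the result in Azure SQL. Everything in it is illustrative: the CRON expression (Saturday 09:30 UTC), the endpoint URL, the SqlConnectionString app setting, and the ApiData table are assumptions, not your actual names.

using System;
using System.Data.SqlClient;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class WeekendFetch
{
    // Single static client, per the note above.
    private static readonly HttpClient client = new HttpClient();

    [FunctionName("WeekendFetch")]
    public static async Task Run(
        [TimerTrigger("0 30 9 * * 6")] TimerInfo timer, ILogger log)
    {
        // Call the API (hypothetical endpoint).
        string json = await client.GetStringAsync("https://example.com/api/data");

        // Store the payload in Azure SQL (hypothetical table ApiData).
        var connectionString = Environment.GetEnvironmentVariable("SqlConnectionString");
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO ApiData (FetchedAtUtc, Payload) VALUES (@now, @payload)", conn))
        {
            cmd.Parameters.AddWithValue("@now", DateTime.UtcNow);
            cmd.Parameters.AddWithValue("@payload", json);
            await conn.OpenAsync();
            await cmd.ExecuteNonQueryAsync();
        }

        log.LogInformation("Stored API data at {time}", DateTime.UtcNow);
    }
}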
We use an Azure Service Bus to post all of our requests from our Xamarin mobile app. The Azure Service Bus is bound to an Azure Function which is triggered each time a requests hits the Azure Service Bus.
We have found that we get errors from this Azure Function when we send data above a certain size. We can send up to 800 records without a problem, but when we send >=850 records we get the following error:
[Error] Exception while executing function: Functions.ServiceBusQueueTrigger. mscorlib: Exception has been thrown by the target of an invocation. mscorlib: One or more errors occurred. A task was canceled.
The service that is being invoked is an ASP.NET Web API RESTful service that saves the data records into a database. This doesn't generate any errors at all.
Here is my Azure Function code.
#r "JWT.dll"
#r "Common.dll"
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using Microsoft.ServiceBus.Messaging;
public static void Run(BrokeredMessage message, TraceWriter log)
{
log.Info($"C# ServiceBus queue trigger function processed message: {message.MessageId}");
if (message != null)
{
Common.Entities.MessageObjectEntity messageObject = message?.GetBody<Common.Entities.MessageObjectEntity>();
string msgType = messageObject?.MessageType;
var msgContent = messageObject?.MessageContent;
log.Info($"Message type: {msgType}");
double timestamp = (DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalSeconds;
string subscriber = "MYSUBSCRIBER";
string privatekey = "MYPRIVATEKEY";
Dictionary<string, object> payload = new Dictionary<string, object>()
{
{"iat", timestamp},
{"subscriber", subscriber}
};
string token = JWT.JsonWebToken.Encode(payload, privatekey, JWT.JwtHashAlgorithm.HS256);
using (var client = new HttpClient())
{
string url = $"http://myexamplewebservices.azurewebsites.net/api/routingtasks?formname={msgType}";
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(subscriber, token);
HttpContent content = new StringContent((string)msgContent, Encoding.UTF8, "application/json");
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
var response = client.PostAsync(new Uri(url), content);
if (response == null)
{
log.Info("Null response returned from request.");
}
else
{
if (response.Result.IsSuccessStatusCode && response.Result.StatusCode == System.Net.HttpStatusCode.OK)
{
log.Info("Successful response returned from request.");
}
else
{
log.Info($"Unsuccessful response returned from request: {response.Result.StatusCode}.");
}
}
}
log.Info("Completing message.");
}
}
This code has been working for several years and works across all our other apps / web sites.
Any ideas why we're getting errors when we post large amounts of data to our Azure Service Bus / Azure Function?
It may be caused by the new HttpClient per invocation: there is a limit to how quickly the system can open new sockets, so if you exhaust the connection pool you may get errors like this. You can refer to this link: https://aspnetmonsters.com/2016/08/2016-08-27-httpclientwrong/
Could you please share some more of the error message?
I can see that you are creating an HttpClient connection on each request, which is possibly causing this issue. HttpClient creates a socket connection underneath, and there is a hard limit on those sockets. Even when you dispose of the client, the socket lingers for a couple of minutes and can't be reused. A good practice is to create a single static HttpClient and reuse it; a sketch follows the links below. I am attaching some documents for you to go through:
AzFunction Static HttpClient, HttpClient Working, Improper instantiation
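For illustration, a sketch of the function reworked along those lines. It is not a drop-in replacement: the message/JWT code is elided (it stays exactly as in the original), the function signature becomes async Task so the POST can be awaited instead of read via .Result, and per-request headers go on an HttpRequestMessage so the shared client stays safe to use concurrently.

private static readonly HttpClient client = new HttpClient();

public static async Task Run(BrokeredMessage message, TraceWriter log)
{
    // ... build msgType, msgContent, subscriber and token exactly as in the original ...
    string url = $"http://myexamplewebservices.azurewebsites.net/api/routingtasks?formname={msgType}";

    var request = new HttpRequestMessage(HttpMethod.Post, url)
    {
        Content = new StringContent((string)msgContent, Encoding.UTF8, "application/json")
    };
    // Authorization travels on the request, not on the shared client's
    // DefaultRequestHeaders, which would race across invocations.
    request.Headers.Authorization = new AuthenticationHeaderValue(subscriber, token);

    HttpResponseMessage response = await client.SendAsync(request);
    log.Info(response.IsSuccessStatusCode
        ? "Successful response returned from request."
        : $"Unsuccessful response returned from request: {response.StatusCode}.");
}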
I created a web API which allows users to send files and upload them to Azure Storage. The way it works is that the client app connects to the API to send one or more files to the file-upload controller, and the controller takes care of the rest, such as:
Upload the file to Azure Storage
Update the database
It works great, but I don't think it is the right way to do this, because now there are two separate uploads:
Upload the file from the client's file system to my web API (server)
Upload the file to Azure Storage from the API (server)
It gives me the feeling that I am duplicating the upload process, as the same file first travels from the client (file system) to the API (server) and then to Azure (destination). I feel the need to show the client two progress bars for the upload (from client to server and then from server to Azure); that just doesn't make sense to me, and I feel my approach is incorrect.
My API accepts files of up to 250 MB, so you can imagine the overhead.
What do you guys think?
//// API Controller
if (!Request.Content.IsMimeMultipartContent("form-data"))
{
throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}
var provider = new RestrictiveMultipartMemoryStreamProvider();
var contents = await Request.Content.ReadAsMultipartAsync(provider);
int Total_Files = contents.Contents.Count();
foreach (HttpContent ctnt in contents.Contents)
{
await storageManager.AddBlob(ctnt);
}
////// Stream
#region StreamHelper
public class RestrictiveMultipartMemoryStreamProvider : MultipartMemoryStreamProvider
{
public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
{
var extensions = new[] { "pdf", "doc", "docx", "cab", "zip" };
var filename = headers.ContentDisposition.FileName.Replace("\"", string.Empty);
if (filename.IndexOf('.') < 0)
return Stream.Null;
var extension = filename.Split('.').Last();
return extensions.Any(i => i.Equals(extension, StringComparison.InvariantCultureIgnoreCase))
? base.GetStream(parent, headers)
: Stream.Null;
}
}
#endregion StreamHelper
///// AddBlob
public async Task<string> AddBlob(HttpContent _Payload)
{
CloudStorageAccount cloudStorageAccount = KeyVault.AzureStorage.GetConnectionString();
CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("somecontainer");
cloudBlobContainer.CreateIfNotExists();
try
{
byte[] fileContentBytes = await _Payload.ReadAsByteArrayAsync();
CloudBlockBlob blob = cloudBlobContainer.GetBlockBlobReference("SomeBlob");
blob.Properties.ContentType = _Payload.Headers.ContentType.MediaType;
blob.UploadFromByteArray(fileContentBytes, 0, fileContentBytes.Length);
var B = await blob.CreateSnapshotAsync();
B.FetchAttributes();
return "Snapshot ETAG: " + B.Properties.ETag.Replace("\"", "");
}
catch (Exception X)
{
return ($"Error : " + X.Message);
}
}
It gives me the feeling that I am duplicating the upload process, as the same file first travels from the client (file system) to the API (server) and then to Azure (destination).
I think you're correct. One possible solution would be to have your API generate a Shared Access Signature (SAS) token and return that SAS token/URI to the client whenever a client wishes to upload a file.
Using this SAS URI, the client can upload the file directly to Azure Storage without sending it to your API first. Once the file is uploaded successfully, the client can send a message to the API to update the database.
You can read more about SAS here: https://learn.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1.
I also wrote a blog post on using SAS a long time ago that you may find useful: https://gauravmantri.com/2013/02/13/revisiting-windows-azure-shared-access-signature/.
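For illustration, a minimal sketch of the SAS approach using the same WindowsAzure.Storage types as the question. The container name and the KeyVault.AzureStorage helper are taken from the code above; the 15-minute expiry is an arbitrary choice.

// Returns a short-lived, write-only SAS URI for one blob. The client
// uploads straight to this URI, then calls the API to update the database.
public string GetUploadSasUri(string blobName)
{
    CloudStorageAccount account = KeyVault.AzureStorage.GetConnectionString();
    CloudBlobContainer container = account.CreateCloudBlobClient()
                                          .GetContainerReference("somecontainer");
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Create | SharedAccessBlobPermissions.Write,
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
    };

    // GetSharedAccessSignature returns the query string ("?sv=..."),
    // so appending it to the blob URI yields the full upload URI.
    return blob.Uri + blob.GetSharedAccessSignature(policy);
}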