How to monitor progress of an asynchronous upload using the Azure BlobServiceClient

This seems like it should be really simple but, for whatever reason, the options just don't make sense to me.
I'm instantiating a BlobServiceClient:
this.FileClient = new BlobServiceClient(new Uri(ENDPOINT), credentials);
Then I get an instance of BlobContainerClient:
var container = this.FileClient.GetBlobContainerClient(this.GetContainerName(format));
Then I call UploadBlobAsync:
await container.UploadBlobAsync(filePath, content);
All I want to do is monitor the progress of the upload.
If you look up the BlobClient class, you'll find there's a BlobUploadOptions class that has a ProgressHandler property of type IProgress&lt;long&gt;, which is exactly what I'm looking for... but the BlobContainerClient doesn't seem to use that class in any way (why?). Also, if you look at the UploadAsync methods of the BlobClient class, there are 2 overloads of the method that accept a parameter of type BlobUploadOptions: one allows you to specify where to put the blob you're uploading and the other allows you to specify the content you're uploading, but there doesn't appear to be an overload that allows you to specify both what you're uploading and, at the same time, where to put it. I'm so confused.
Also...I'm using 1 instance of the BlobServiceClient for the life of the application. Is that proper?

First of all, you should use BlobClient to monitor the upload progress.
For your questions:
but it doesn't seem the BlobContainerClient uses that class in any way (why?)
This is by design. Currently, the BlobContainerClient does not support this parameter. You can raise a feature request on the GitHub page.
Also, if you look at the UploadAsync methods of the BlobClient class, there are 2 overloads of the method that accept a parameter of type BlobUploadOptions...
I think you're referring to these two methods:
UploadAsync(string path, BlobUploadOptions options, CancellationToken cancellationToken = default)
and
UploadAsync(Stream content, BlobUploadOptions options, CancellationToken cancellationToken = default).
Once you know when to use each of them, there should be no confusion.
When using UploadAsync(string path, ...), you specify the local path (like d:\myfolder\test.txt) of the file you're trying to upload.
When using UploadAsync(Stream content, ...), it means you have already obtained a stream for the file. For example, you use var mystream = File.OpenRead("file path") to get the stream of the file content, then pass that stream to UploadAsync(Stream content, ...).
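A quick sketch of both overloads (blobClient here stands for any BlobClient instance, options is a BlobUploadOptions, and the paths are just placeholders):
// Overload taking a local file path:
await blobClient.UploadAsync(@"d:\myfolder\test.txt", options);

// Overload taking a stream you already opened:
await blobClient.UploadAsync(File.OpenRead(@"d:\myfolder\test.txt"), options);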
Also...I'm using 1 instance of the BlobServiceClient for the life of
the application. Is that proper?
Yes, it's ok.
For monitoring upload progress in code, you can refer to this blog for more details.
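As a minimal sketch (assuming the Azure.Storage.Blobs v12 package, and reusing the container, filePath and content variables from the question), you can get a BlobClient from the container and pass a BlobUploadOptions whose ProgressHandler reports the bytes transferred:
// Namespaces assumed: Azure.Storage.Blobs, Azure.Storage.Blobs.Models.
BlobClient blobClient = container.GetBlobClient(filePath);

var options = new BlobUploadOptions
{
    // ProgressHandler is an IProgress<long> that receives the total number
    // of bytes transferred so far.
    ProgressHandler = new Progress<long>(bytesTransferred =>
        Console.WriteLine($"Uploaded {bytesTransferred} bytes"))
};

await blobClient.UploadAsync(content, options);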
Please let me know if you still have more issues.

Related

Azure Search: Cache ServiceClient without depending on index name in v11

I currently use v10 of the Azure Search APIs and create a static variable like so:
private static SearchServiceClient SearchServiceClient = new SearchServiceClient(searchServiceName, credentials);
On the server side, this variable is re-used between requests so I don't have to initialize it over and over. I got the idea from https://learn.microsoft.com/en-us/azure/azure-functions/manage-connections to improve performance.
Now, v11 has completely different data types. The new SearchClient type's constructor expects an index name as a parameter. I have many indexes and I'd like to avoid creating a static variable for each index.
In v11, is it possible to re-use a search client in the way I used to be able to do?
In v11, the SearchServiceClient is actually split into 3 different clients: SearchClient, SearchIndexClient, and SearchIndexerClient, each with a different purpose. You can see here for the details.
So when you use SearchClient, it is defined to require an index name parameter. You cannot re-use it the way SearchServiceClient could be re-used in v10.
But you can do it like below:
SearchIndexClient adminClient = new SearchIndexClient(serviceEndpoint, credential);
SearchClient ingesterClient = adminClient.GetSearchClient(indexName);
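If you want something closer to the single static variable from v10, one possible sketch (the class name, endpoint and key below are placeholders, not part of the original answer) is to cache a single SearchIndexClient and create per-index SearchClient instances lazily:
using System;
using System.Collections.Concurrent;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Indexes;

// Hypothetical helper: one shared SearchIndexClient, with per-index
// SearchClients created on demand and cached for reuse across requests.
public static class SearchClientCache
{
    // Placeholder endpoint and key; substitute your own configuration.
    private static readonly SearchIndexClient IndexClient =
        new SearchIndexClient(
            new Uri("https://<your-service>.search.windows.net"),
            new AzureKeyCredential("<your-api-key>"));

    private static readonly ConcurrentDictionary<string, SearchClient> Clients =
        new ConcurrentDictionary<string, SearchClient>();

    public static SearchClient Get(string indexName) =>
        Clients.GetOrAdd(indexName, name => IndexClient.GetSearchClient(name));
}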

Model Binding Issue in Azure Function After Switching to Azure.Storage.Queues

I use Azure Functions with Queue triggers in my backend and, up to this point, I'd been using the Microsoft.WindowsAzure.Storage package to handle all Azure Storage operations, i.e. queues, blobs, etc. With this package, I'd simply send a MyQueueRequest object to my queue and everything worked fine.
Because the Microsoft.WindowsAzure.Storage package has been deprecated, I switched to Azure.Storage.Queues and my Azure Function started throwing the following error:
Microsoft.Azure.WebJobs.Host: Exception binding parameter 'message'.
System.Private.CoreLib: The input is not a valid Base-64 string as it
contains a non-base 64 character, more than two padding characters, or
an illegal character among the padding characters.
I've found this article that suggests that the new library requires JSON objects to be encoded in Base64 (https://briancaos.wordpress.com/2020/10/16/sending-json-with-net-core-queueclient-sendmessageasync/).
Up to this point, I actually never even serialized my MyQueueRequest object to JSON. The model binder took care of that for me automatically.
Does this mean, going forward, before sending the message to my queue, I need to first serialize MyQueueRequest object and then Base64 encode it and then reverse the process in my Azure Functions?
Yes, for this new package you will need to do this. I ran into the same problem when trying to add POCOs to a queue. I use similar code to the article you cite.
I use the following code to handle this:
await queue.SendMessageAsync(Base64Encode(JsonSerializer.Serialize(myObject)));
Where Base64Encode is:
private string Base64Encode(string plainText)
{
    var plainTextBytes = System.Text.Encoding.UTF8.GetBytes(plainText);
    return Convert.ToBase64String(plainTextBytes);
}
Be sure it's also UTF-8. I'm using System.Text.Json for this example, but Newtonsoft would work just as well.
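For the reverse direction in the function, a sketch under the assumption that the message arrives as the Base64 string produced above (MyQueueRequest is your own type):
// Decode the Base64 payload back to the original JSON string.
private string Base64Decode(string base64EncodedData)
{
    var base64EncodedBytes = Convert.FromBase64String(base64EncodedData);
    return System.Text.Encoding.UTF8.GetString(base64EncodedBytes);
}

// Usage inside the queue-triggered function:
// MyQueueRequest request = JsonSerializer.Deserialize<MyQueueRequest>(Base64Decode(message));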

Azure Logic App, Can't get data from Create File Function

So I've noticed a strange behavior which I would like to share and see if anyone has had a similar problem.
We are using an on-prem solution where we pick up a file or an HTTP event request, map it to an outgoing XML XSD/schema, and then create the file on-prem.
The problem is that the system where we save the file does not cooperate well with the Logic App; the Logic App sometimes fails because the system takes the file before the Logic App can finish writing the full content.
The system receiving the files only reads .xml files, so we thought we would first give the files a temporary name, let the Logic App create them, and then rename them.
This solution sounded quite simple before we actually started applying it to the Logic App.
If we take the File System connector, which has a Rename File action, and use the "Name" parameter from the on-prem Create File action, we get:
{
    "statusCode": 404,
    "message": "Resource not found"
}
We get a 404 message that the resource is not found, which complicates a lot of things. I've checked the privileges on the account, so that should not be an issue.
What we have also tried is listing all files in the folder, creating a foreach, and then adding a rule and the Rename File action. This makes it work, but the Logic App does not cope well with receiving a lot of files at once with that solution.
So Rename File does work when it's inside a foreach loop and we extract the file names from a listing of the root folder or a normal folder.
But why does it not work when just using the Rename File action directly? Is this perhaps a bug in the Logic App Rename File action?
So after discussing this with Microsoft Azure support, they have confirmed that there is a bug with the "Create File" function.
It looks like all the data and information is actually lost during that function; the support technicians do not know why that is happening, but they have had similar cases reported by other people.
I have not stumbled across any of those posts, but I will post how we solved the problem with a workaround.
FYI, the support team has taken the case further so that the Azure developers can look into it, because it's not just the "Name" tag that is lost from Create File (all of its output values are actually lost).
So first we initialize a variable, and then we set the variable (the temp file name) in two steps before we create the file:
The name is set to a temp name plus a GUID.
The next step is creating the file with the temp name set in the "Set Variable Temp FileName" step.
In the Rename File action we then use the path where we store the temp file, followed by \ and the file name,
and add the "New Name" we want to use.
This proved to work, but it is a workaround; support confirmed that you should be able to simply use "Rename File" after creating the file with a temp name, changing it to the desired name.
But since Create File does not send or pass any information at all from its output list, we have to initialize variables to make it work.
If anyone has stumbled on the same problem, where the backend system reads the files before the Logic App has finished creating them, and you need a workaround, this worked well for me.
Hope it helps!
We recently had the same issue, and the workaround of renaming the file also failed.
The cause seems to be that the Azure on-prem gateway creates a file (or renames a file), then releases its lock, before checking that the file exists. In the gap between releasing the lock and checking that the file exists, the file may be picked up (deleted), causing Logic Apps to think the step failed (reporting a 404 error), hence the confusion.
Our workaround was to create a Windows service which we hosted on the file servers (so it would be able to respond to file changes before anything else on the network). This service has a configuration file which accepts a list of paths and file filters, and it uses a FileSystemWatcher to monitor for new or renamed files. When it detects a match, it takes out a read lock on the file. This ensures it's not blocked by anything writing to the file (i.e. it doesn't have to wait for the on-prem gateway's write action to complete before obtaining its own lock), but while our service holds its lock the file can't be deleted (so the consumer can't remove the file, buying time for the on-prem gateway to perform its post-write read and report success). Our service releases its own lock after a defined period (we've gone with 30 seconds, though you could likely get away with much less). At that point, the consumer can successfully consume the file.
Basic code for the file watch & locking logic below:
using System;
using System.IO;
using System.Diagnostics;
using System.Threading.Tasks;

namespace AzureFileGatewayHelper
{
    public class Interceptor : IDisposable
    {
        readonly object lockable = new object();
        bool disposed = false;
        readonly FileSystemWatcher watcher;
        readonly int lockTimeInMS;

        public Interceptor(string path, string filter, int lockTimeInSeconds)
        {
            lockTimeInMS = lockTimeInSeconds * 1000;
            watcher = new FileSystemWatcher();
            watcher.Path = path;
            watcher.Filter = filter;
            watcher.NotifyFilter = NotifyFilters.LastAccess
                                 | NotifyFilters.LastWrite
                                 | NotifyFilters.FileName
                                 | NotifyFilters.DirectoryName;
            watcher.Created += OnIntercept;
            watcher.Renamed += OnIntercept;
        }

        public Interceptor(InterceptorConfigElement config)
            : this(config.Path, config.Filter, config.TimeToLockInSeconds)
        {
            Debug.WriteLine($"Loaded config {config.Key}: Path: '{config.Path}'; Filter: '{config.Filter}'; LockTime: '{config.TimeToLockInSeconds}'.");
        }

        public void Start()
        {
            watcher.EnableRaisingEvents = true;
        }

        public void Stop()
        {
            if (watcher != null)
                watcher.EnableRaisingEvents = false;
        }

        private async void OnIntercept(object source, FileSystemEventArgs e)
        {
            // FileShare.ReadWrite lets the gateway finish its own write, but the open
            // handle prevents the file from being deleted while we hold it.
            using (var fs = new FileStream(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                Debug.WriteLine($"Locked: {e.FullPath} {e.ChangeType}");
                await Task.Delay(lockTimeInMS);
            }
            Debug.WriteLine($"Unlocked {e.FullPath} {e.ChangeType}");
        }

        public void Dispose()
        {
            if (disposed) return;
            lock (lockable)
            {
                if (disposed) return;
                Stop();
                watcher?.Dispose();
                disposed = true;
            }
        }
    }
}

Does Azure Function method name matter?

I am currently developing some C# Azure Functions. The naming convention I use is Process[ThingIWantToProcess]() like so...
public static void ProcessRequest([TimerTrigger("00:00:10", RunOnStartup = true, UseMonitor = false)] TimerInfo timer, ILogger logger)
{
    // Do function things
}
A few days ago, all of the functions (currently, 6 of them) stopped running when they were deployed, but no code had been changed that I am aware of or can see.
The console, both locally and the Kudu console, say "Found the following functions:" and display all the expected functions; however, those functions are never run.
I tried all sorts of things, including re-deploys, restarting the Azure Web Job, and changing the contents of the methods, but still nothing fired. And then, I changed the name of the function, and suddenly it started working!
So instead of ProcessRequest it was now ProcessRequest1, and the function fired successfully. I changed the name several different ways, and all of them worked, but when I changed back to ProcessRequest, it stopped working again.
I can't find anything explaining this behavior in the docs or internet search, and I'm concerned it will happen again during future maintenance.
Has anyone else experienced this, and if so, can you point me to some kind of explanation?
Hey! This is due to the lock behavior that TimerTrigger employs to ensure that only a single instance of your function is running across scaled-out instances. So if you are using the same storage account for multiple WebJobs, you will face this issue.
To resolve this issue, I would suggest creating a separate storage account for your job, and it should then work as-is.
For more information please visit: https://github.com/Azure/azure-webjobs-sdk/issues/614
No, it does not matter.
The FunctionName attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, _, and -, up to 127 characters in length. Project templates often create a method named Run, but the method name can be any valid C# method name.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-dotnet-class-library#methods-recognized-as-functions
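To illustrate with the timer function from the question (a sketch, not the poster's actual code): the C# method name and the registered function name are independent; only the FunctionName value must follow the naming rules and be unique:
// The method keeps the ProcessRequest name; "process-request" is the name the
// runtime registers and displays.
[FunctionName("process-request")]
public static void ProcessRequest(
    [TimerTrigger("00:00:10", RunOnStartup = true, UseMonitor = false)] TimerInfo timer,
    ILogger logger)
{
    logger.LogInformation("Timer fired.");
}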
It seems you are a bit confused about whether there is any Azure Function naming convention. Okay, let's make it clearer.
Azure functions are like C# methods:
As you know, Azure functions are essentially C# methods, so it is good to follow C# method naming conventions here as well.
A commonly suggested pattern for the function name is <name>-func.
Example: azureexample-func
public static class AzureFunctionExampleClass
{
    [FunctionName("azureexample-func")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        // Read the request body.
        var content = await new StreamReader(req.Body).ReadToEndAsync();

        // Deserialize the request body into the Users class.
        Users objUsers = JsonConvert.DeserializeObject<Users>(content);

        // Since we have to return an IActionResult, wrap the object in an
        // OkObjectResult (OkResult would work as well).
        var result = new OkObjectResult(objUsers);
        return result;
    }
}
Note: You can even use PascalCase, like AzureExampleFunc, as there are no strict restrictions.
Casing: case insensitive
Valid characters: alphanumeric and hyphen
Length: 1-60 characters
Avoid reserved keywords:
Every language has its own reserved keywords; when naming your function, it's good to avoid them so the compiler never gets confused.
Make the function name readable:
Although no naming convention is strictly enforced, it's recommended to use a readable name so it's easy to understand what the function does.
If you still have any questions regarding Azure naming conventions, you can check this docs page.
Thank you and happy coding!

How to get FileInfo objects from the Orchard Media folder?

I'm trying to create a custom ImageFilter that requires me to temporarily write the image to disk, because I'm using a third party library that only takes FileInfo objects as parameters. I was hoping I could use IStorageProvider to easily write and get the file but I can't seem to find a way to either convert an IStorageFile to FileInfo or get the full path to the Media folder of the current tenant to retrieve the file myself.
public class CustomFilter : IImageFilterProvider
{
    public void ApplyFilter(FilterContext context)
    {
        if (context.Media.CanSeek)
        {
            context.Media.Seek(0, SeekOrigin.Begin);
        }

        // Save temporary image
        var fileName = context.FilePath.Split(new char[] { '\\' }, StringSplitOptions.RemoveEmptyEntries).LastOrDefault();
        if (!string.IsNullOrEmpty(fileName))
        {
            var tempFilePath = string.Format("tmp/tmp_{0}", fileName);
            _storageProvider.TrySaveStream(tempFilePath, context.Media);
            IStorageFile temp = _storageProvider.GetFile(tempFilePath);
            FileInfo tempFile = ???

            // Do all kinds of things with the temporary file
            // Convert back to Stream and pass along
            context.Media = tempFile.OpenRead();
        }
    }
}
FileSystemStorageProvider does a ton of heavy lifting to construct paths to the Media folder so it's a shame that they aren't publicly accessible. I would prefer not to have to copy all of that initialization code. Is there an easy way to directly access files in the Media folder?
I'm not using multitenancy, so forgive me if this is inaccurate, but this is the method I use for retrieving the full storage path and then selecting FileInfo objects from that:
_storagePath = HostingEnvironment.IsHosted
    ? HostingEnvironment.MapPath("~/Media/") ?? ""
    : Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Media");

files = Directory.GetFiles(_storagePath, "*", SearchOption.AllDirectories)
    .AsEnumerable()
    .Select(f => new FileInfo(f));
You can, of course, filter down the list of files using either Path.Combine with subfolder names, or a Where clause on that GetFiles call.
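For example (a sketch building on the snippet above; the subfolder name and extension are placeholders):
// Restrict to a subfolder and filter by extension with a Where clause.
var subfolder = Path.Combine(_storagePath, "tmp");
var tempImages = Directory.GetFiles(subfolder, "*", SearchOption.AllDirectories)
    .Where(f => Path.GetExtension(f).Equals(".jpg", StringComparison.OrdinalIgnoreCase))
    .Select(f => new FileInfo(f));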
This is pretty much exactly what FileSystemStorageProvider uses, but I haven't had need of the other calls it makes outside of figuring out what _storagePath should be.
In short, yes, you will likely have to re-implement whatever private functions of FileSystemStorageProvider you need for the task. But you may not need all of them.
I was struggling with a similar issue too, and I can say that the IStorageProvider abstraction is pretty restrictive.
You can see this when viewing the code of FileSystemStorageFile. The class already uses FileInfo to return data, but the underlying FileInfo isn't exposed, and other code is built on top of this. Therefore you would basically have to reimplement everything from scratch (your own implementation of IStorageProvider). The easiest option is to simply call
FileInfo fileInfo = new FileInfo(tempFilePath);
but this would break setups where a non-file-system storage provider is used, like AzureBlobStorageProvider.
The proper way for this task would be to get your hands dirty, extend the storage provider interfaces, and update all the code that builds on them. But as far as I can remember, the issue here is that you then need to update the Azure implementation as well, and things get really messy. For that reason I abandoned that approach on my project.
