Azure Storage Queue message to Azure Blob Storage

I have access to an Azure Storage Queue using a connection string that was provided to me (it is not a queue I created). Messages are sent to it once every minute. I want to take all the messages and place them in Azure Blob Storage.
My issue is that I haven't been successful in getting the messages from the attached Storage Queue. What is the "easiest" way of doing this data storage?
I've tried accessing the external queue using Logic Apps and then placing the messages in my own queue before moving them to Blob Storage, but without luck.

If you want to access an external storage account in the Logic App, you will need the name of the storage account and its key.
You have to choose the trigger for Azure Queues and then click "Manually enter connection information".
In the next step you will be able to choose the queue you want to listen to.
I recommend you use an Azure Function instead, something like in this article:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-output?tabs=csharp
First you can try only reading the messages, and then add the output binding that creates your blob:
[FunctionName("GetMessagesFromQueue")]
public IActionResult GetMessagesFromQueue(
[QueueTrigger("%ExternalStorage.QueueName%", Connection = "ExternalStorage.StorageConnection")ModelMessage modelmessage,
[Blob("%YourStorage.ContainerName%/{id}", FileAccess.Write, Connection = "YourStorage.StorageConnection")] Stream myBlob)
{
//put the modelmessage into the stream
}
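One minimal way to fill in that body, assuming ModelMessage is your own POCO and that Newtonsoft.Json and System.IO are referenced (both are assumptions, not from the linked article), is to serialize the queue message straight into the output stream:
// Sketch only: write the deserialized queue message back out as JSON into the blob.
using (var writer = new StreamWriter(myBlob))
{
    writer.Write(JsonConvert.SerializeObject(modelmessage));
}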
You can bind to many types, not only Stream. The link above has all the details.
I hope I've helped


Azure Function Storage Account Blob Container Trigger
In one of our use cases, I am looking for an Azure Function trigger for any activity on storage account containers, with the following conditions:
Container with a specific naming convention (name like xxxx-input)
It should automatically detect if a new container (with the specific naming convention) is created
Currently, the following events are supported, per the documentation:
BlobCreated
BlobDeleted
BlobRenamed
DirectoryCreated (Data Lake Gen2)
DirectoryRenamed (Data Lake Gen2)
DirectoryDeleted (Data Lake Gen2)
This means that it is not possible to subscribe to such an event, but you can try to change the approach (if feasible for your use case) from 'push' to 'pull'.
I suggest writing a time-triggered function that checks whether containers with the given naming scheme have been created. You can leverage the Blob Storage v12 SDK for this task and get the list of containers.
Save the list to some database (for example Cosmos DB), and every time the function gets triggered, compare the current state with the last saved state from the database.
If there is a difference, you can push a message to Event Hubs, which triggers another function that actually reacts to this 'new event type'.
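As a rough sketch of that pull approach (the function name, the timer schedule, the "StorageConnection" app setting and the "-input" suffix check are placeholders; the comparison and Event Hubs steps are left as comments):
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

[FunctionName("CheckForNewContainers")]
public static async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
{
    var serviceClient = new BlobServiceClient(
        Environment.GetEnvironmentVariable("StorageConnection"));

    // Collect the containers that match the naming convention.
    var current = new List<string>();
    await foreach (BlobContainerItem container in serviceClient.GetBlobContainersAsync())
    {
        if (container.Name.EndsWith("-input"))
            current.Add(container.Name);
    }

    // Compare 'current' with the last saved list (e.g. from Cosmos DB); for every
    // name that is new, save it and push a message to Event Hubs so another
    // function can react to the 'container created' event.
    log.LogInformation("Found {Count} matching containers", current.Count);
}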
You should use Azure Event Grid, subscribing to the resource group of your storage account, and use, for example, advanced filtering on:
"operationName":"Microsoft.Storage/storageAccounts/blobServices/containers/write",
"subject":"/subscriptions/<yourId>/resourcegroups/<yourRG>/providers/Microsoft.Storage/storageAccounts/<youraccount>/blobServices/default/containers/xxxx-input",
"eventType":"Microsoft.Resources.ResourceWriteSuccess",

Re-play/Repeat/Re-Fire Azure BlobStorage Function Triggers for existing files

I've just uploaded several 10s of GBs of files to Azure CloudStorage.
Each file should get picked up and processed by a FunctionApp, in response to a BlobTrigger:
[FunctionName(nameof(ImportDataFile))]
public async Task ImportDataFile(
    // Raw JSON text file containing data updates in the expected schema
    [BlobTrigger("%AzureStorage:DataFileBlobContainer%/{fileName}", Connection = "AzureStorage:ConnectionString")]
    Stream blobStream,
    string fileName)
{
    //...
}
This works in general, but foolishly I did not do a final test of that Function prior to uploading all the files to our UAT system ... and there was a problem with the uploads :(
The upload took a few days (running over my domestic internet uplink due to CoViD-19), so I really don't want to have to redo it.
Is there some way to "replay" the blob upload triggers, so that the function fires again as if I'd just re-uploaded the files ... without having to transfer any data again?
As per this link
Azure Functions stores blob receipts in a container named
azure-webjobs-hosts in the Azure storage account for your function app
(defined by the app setting AzureWebJobsStorage).
To force reprocessing of a blob, delete the blob receipt for that blob
from the azure-webjobs-hosts container manually. While reprocessing
might not occur immediately, it's guaranteed to occur at a later point
in time. To reprocess immediately, the scaninfo blob in
azure-webjobs-hosts/blobscaninfo can be updated. Any blobs with a last
modified timestamp after the LatestScan property will be scanned
again.
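If there are many receipts to clear, a small sketch with the Azure.Storage.Blobs v12 package could delete them in bulk. Note that the "blobreceipts" prefix is an assumption about the folder layout inside azure-webjobs-hosts; verify it in Storage Explorer for your app first:
// Sketch: delete blob receipts so that the BlobTrigger reprocesses the blobs.
var receipts = new BlobContainerClient("<AzureWebJobsStorage connection string>", "azure-webjobs-hosts");
await foreach (var item in receipts.GetBlobsAsync(prefix: "blobreceipts"))
{
    await receipts.DeleteBlobAsync(item.Name);
}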
I found a hacky-AF workaround that re-processes the existing file:
If you add metadata to a blob, that appears to re-trigger the BlobStorage Function Trigger.
Access it in Azure Storage Explorer by right-clicking on a blob > Properties > Add Metadata.
I was setting Key: "ForceRefresh", Value: "test".
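If there are too many blobs to touch by hand, a sketch along the same lines with the Azure.Storage.Blobs v12 package (the connection string and container name are placeholders) can set the metadata programmatically:
// Sketch: add a dummy metadata value to every blob in the container,
// which re-fires the BlobTrigger just like doing it manually in Storage Explorer.
var container = new BlobContainerClient("<connection string>", "<data-file-container>");
await foreach (var item in container.GetBlobsAsync())
{
    await container.GetBlobClient(item.Name)
        .SetMetadataAsync(new Dictionary<string, string> { ["ForceRefresh"] = "test" });
}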
I had a problem with the processing of blobs in my code, which meant that there were a bunch of messages in the webjobs-blobtrigger-poison queue. I had to move them back to azure-webjobs-blobtrigger-name-of-function-app. Removing the blob receipts and adjusting the scaninfo blob did not work without the above step.
Fortunately, Azure Storage Explorer has a menu option to move the messages from one queue to another.
I found a workaround, if you aren't invested in the file name:
Azure Storage Explorer has a "Clone with new name" button in the top bar, which will add a new file (and trigger the Function) without transferring the data via your local machine.
Note that "Copy" followed by "Paste" also re-triggers the blob, but appears to transfer the data down to your machine and then back up again ... incredibly slowly!

Why do I see a FunctionIndexingException when creating a QueueTrigger WebJob Function?

I created a function like this
public static Task HandleStorageQueueMessageAsync(
    [QueueTrigger("%QueueName%", Connection = "%ConnectionStringName%")] string body,
    TextWriter logger)
{
    if (logger == null)
    {
        throw new ArgumentNullException(nameof(logger));
    }

    logger.WriteLine(body);
    return Task.CompletedTask;
}
The queue name and the connection string name come from my configuration, which has an INameResolver to get the values. The connection string itself I put from my secret store into the app config at app start. If the connection string is a normal storage connection string granting all permissions for the whole account, the method works as expected.
However, in my scenario I am getting a SAS from a partner team that only offers read access to a single queue. I created a storage connection string from that, which looks similar to this:
QueueEndpoint=https://accountname.queue.core.windows.net;SharedAccessSignature=st=2017-09-24T07%3A29%3A00Z&se=2019-09-25T07%3A29%3A00Z&sp=r&sv=2018-03-28&sig=token
(I tried successfully to connect using this connection string in Microsoft Azure Storage Explorer)
The queue name used in the QueueTrigger attribute is also gathered from the SAS.
However, now I am getting the following exceptions:
$exception {"Error indexing method 'Functions.HandleStorageQueueMessageAsync'"} Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexingException
InnerException {"No blob endpoint configured."} System.Exception {System.InvalidOperationException}
If you look at the connection string, you can see the exception is right. I did not configure the blob endpoint. However I also don't have access to it and neither do I want to use it. I'm using the storage account only for this QueueTrigger.
I am using Microsoft.Azure.WebJobs v2.2.0. Other dependencies prevent me from upgrading to a v3.x
What is the recommended way for consuming messages from a storage queue when only a SAS URI with read access to a single queue is available? If I am already on the right path, what do I need to do in order to get rid of the exception?
As you have seen, the v2 WebJobs SDK requires access to the blob endpoint as well. I'm afraid it's by design; using a connection string without full access, such as a SAS, is an improvement that is tracked but not realized yet.
Here are the permissions required by the v2 SDK. It needs to get the Blob service properties (Blob, Service, Read) and to get queue metadata and process messages (Queue, Container & Object, Read & Process).
The Queue trigger gets messages and deletes them after processing, so the SAS requires the Process permission. This means the SAS string you got is not authorized correctly, even where the SDK doesn't require blob access.
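For reference, a correctly scoped connection string would look much like the one in the question, only with Process added to the queue permissions (all values below are hypothetical):
QueueEndpoint=https://accountname.queue.core.windows.net;SharedAccessSignature=st=2017-09-24T07%3A29%3A00Z&se=2019-09-25T07%3A29%3A00Z&sp=rp&sv=2018-03-28&sig=token
Here sp=rp grants Read and Process on the queue.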
You could ask the partner team to generate a SAS connection string in the Azure portal with the minimum permissions above. If they can't provide blob access, the v3 SDK seems an option to try.
But there are some problems: 1. Other dependencies prevent updating, as you mentioned. 2. The v3 SDK is based on .NET Core, which means code changes can't be avoided. 3. The v3 SDK documentation and samples are still under construction right now.
I was having a load of issues getting a SAS token to work for a QueueTrigger.
Not having blob included was my problem. Thanks Jerry!

Azure Blob Storage - Send List of image names and get List of images

I am facing a problem. I have to implement a solution to load multiple images from Azure Blob Storage in a Xamarin.Forms app.
I would like to know how to send a list of image names and get a list of images back from an Azure Blob Storage account.
Thank you!
Azure Blob Storage doesn't have a strong query/search story. If you know the container, you could always iterate through the blobs in that container, but this is going to be very inefficient and less than ideal. You can also use Azure Search to index blob storage (and its contents), but I find that overkill for your use case.
My suggestion is to store a reference to the blob in a searchable data source. If you already have a data source in place for your app (such as SQL), I would use that.
One inexpensive way to get this into your data source for querying later is to use an Azure Function or Logic App that triggers when a new blob is created and stores the data you need to find it later (e.g. the filename) along with the blob reference. Table storage is a very inexpensive option, or you can use any data store you like. You can then host an API endpoint in Azure Functions (or your host of choice) where your Xamarin app can pass in the filenames and get the results back.
With Azure Functions, the code for a blob trigger that writes to table storage is pretty minimal and would follow something like the pattern below:
public static async Task Run(
    [BlobTrigger("mycollection/{name}")] ICloudBlob blob,
    [Table("blobdata")] IAsyncCollector<MyBlobDataPoco> tableData,
    string name, TraceWriter log)
{
    // RowKey (and PartitionKey, set in the POCO) are required by the Table binding.
    var myData = new MyBlobDataPoco() { RowKey = name, Filename = name, Container = "mycollection" };
    await tableData.AddAsync(myData);
}
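The POCO itself isn't shown above; a minimal sketch would need PartitionKey and RowKey for the Table output binding to accept it (the property names beyond those two are just examples):
// Hypothetical entity matching the binding above.
public class MyBlobDataPoco
{
    public string PartitionKey { get; set; } = "blobs";
    public string RowKey { get; set; }   // e.g. the blob name
    public string Filename { get; set; }
    public string Container { get; set; }
}
A simple HTTP-triggered function can then query this table by Filename and return the matching blob references to the Xamarin app.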

C# Azure Function: How to log a message to Queue storage when calling an "HTTP Trigger" Azure Function?

I have an Azure Function with an "HTTP Trigger" in C#. I want to log a message to Queue storage when I invoke the Azure Function with the "HTTP trigger".
When the "HTTP trigger" is invoked it logs a message (log.Info("C# Http trigger....");) somewhere else, but I want this log message, along with the IP address of the user making the request to the "HTTP trigger" function, to be stored in Queue storage
(log.Info("C# Queue trigger ", ipaddress)) so that I can block a client IP address for a day if the user calls the API beyond the allowed number of attempts, and the next day unblock the client IP address again by taking all IP addresses from Queue storage with a timer trigger running in the cloud.
Is there a possibility to achieve this, or is there any alternative way to store IP addresses other than Queue storage?
From what you wrote, I would guess that you want to offer some kind of API to your users and want to limit access to it. To achieve this, I would rather suggest using Azure API Management (https://azure.microsoft.com/en-us/services/api-management/). It offers an access quota, which limits the user's access to the API endpoint based on different criteria. It also offers much more: user management, payment, subscriptions, security, and much more.
For retrieving the client IP in your HTTP-triggered Azure Function, you could use the following code snippet:
run.csx:
#r "System.Web"
using System.Web;
using System.Net;
public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
var address = ((HttpContextWrapper)req.Properties["MS_HttpContext"]).Request.UserHostAddress;
log.Info($"Your client IP: {address}");
return req.CreateResponse(HttpStatusCode.OK, $"Your client IP: {address}");
}
For more details, you could refer to this issue.
so that I can block client ipaddress for a day, if user makes calls to api beyond the number of attempts and next day again I can unblock client ipaddress by taking all ipaddress from queue storage running timer trigger in cloud.
I would prefer to use table storage for storing the access logs for the specific IPs. You could set the PartitionKey column to the date, set the RowKey to the ClientIP or ClientIP + ApiName, and also add other columns (Timestamp, TotalRequests, etc.). You could also refer to the Azure Storage Table Design Guide for designing your storage table.
For your Azure Function, you could use the Storage table bindings and the Table Storage SDK to read the request log for a specific IP and update its total request count. Moreover, here is a similar blog you could refer to.
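A rough sketch of that per-IP counter with the classic Microsoft.WindowsAzure.Storage Table SDK (the entity shape and the day-based PartitionKey simply follow the layout suggested above):
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical entity: one row per client IP per day.
public class RequestLogEntity : TableEntity
{
    public RequestLogEntity() { }
    public RequestLogEntity(string date, string clientIp)
    {
        PartitionKey = date;   // e.g. "2018-06-01"
        RowKey = clientIp;     // or clientIp + "_" + apiName
    }
    public int TotalRequests { get; set; }
}

// Read the current count for this IP today and increment it.
public static async Task LogRequestAsync(CloudTable table, string clientIp)
{
    string today = DateTime.UtcNow.ToString("yyyy-MM-dd");
    var result = await table.ExecuteAsync(TableOperation.Retrieve<RequestLogEntity>(today, clientIp));
    var entity = result.Result as RequestLogEntity ?? new RequestLogEntity(today, clientIp);
    entity.TotalRequests++;
    await table.ExecuteAsync(TableOperation.InsertOrMerge(entity));
}
A timer-triggered function can then scan today's partition and block any IP whose TotalRequests exceeds your limit.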
UPDATE:
You could configure the logs to be stored in the file system or blob storage under "MONITORING > Diagnostic logs":
For Application Logging (Filesystem), you can find the logs via Kudu.
For Application Logging (Blob), you can leverage Storage Explorer to retrieve your logs.
