What is the difference between BlobAttribute vs BlobTriggerAttribute? - azure

Can anyone elaborate on the difference between BlobAttribute vs BlobTriggerAttribute?
[FunctionName(nameof(Run))]
public async Task Run(
    [BlobTrigger("container/{name}")] byte[] data,
    [Blob("container/{name}", FileAccess.Read)] byte[] data2,
    string name)
{
}
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob?tabs=csharp#trigger
It seems BlobTrigger has all the functionality.

From the docs, the main difference is that with BlobTrigger the blob contents are provided as input: it can only read the blob, not write to it.
BlobAttribute, on the other hand, supports binding to single blobs, blob containers, or collections of blobs, and supports both read and write access.
Also, BlobTrigger only fires when a new or updated blob is detected, whereas the Blob binding can be used in any kind of function.
For more information about these two bindings, you can check the binding source code: BlobAttribute and BlobTriggerAttribute.
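As an illustration, a minimal sketch (container names and the function name are hypothetical) that reads via a blob trigger and writes a copy through a Blob binding opened with FileAccess.Write:

```csharp
[FunctionName("CopyOnUpload")]
public static async Task Run(
    // Fires when a new or updated blob lands; contents arrive as read-only input.
    [BlobTrigger("incoming/{name}")] Stream source,
    // A Blob binding with FileAccess.Write lets the function create/write a blob.
    [Blob("archive/{name}", FileAccess.Write)] Stream destination,
    string name)
{
    await source.CopyToAsync(destination);
}
```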

Related

Azure Durable orchestrator pass multiple zip files

I am accepting multiple zip files which I want to process in an orchestrator. My durable orchestrator is HTTP-triggered.
I am able to access the files in the HTTP trigger as a MultipartMemoryStreamProvider, but when I pass the same to the durable orchestrator, the orchestrator triggers but I am unable to get the files for further processing.
Below is my HTTP trigger function code to read the multiple files and pass them to the orchestrator:
var data = await req.Content.ReadAsMultipartAsync();
string instanceId = await starter.StartNewAsync("ParentOrchestrator", data);
Orchestrator Trigger code:
public static async Task<List<string>> RunOrchestrator(
[OrchestrationTrigger] IDurableOrchestrationContext context
)
{
var files = context.GetInput<System.Net.Http.MultipartMemoryStreamProvider>();
To read the input I also tried creating a class and passing the stream to a property so the data could be serialized as JSON, but that did not work.
Is there anything I am missing in the code?
The issue is how to get the zip files for processing.
I checked the raw input under the orchestrator context; there I can see the file name and other details.
Passing files as input seems like a bad idea to me.
Those inputs will be loaded by the orchestrator from Table Storage/Blob Storage each time it replays.
Instead I would recommend that you upload the Zip files to Blob Storage and pass the blob URLs as input to the orchestrator.
Then you use the URLs as inputs to activities where the files are actually processed.
The orchestrator accepts only data which can be serialized. As a memory stream is not serializable, it was not possible to retrieve the data using GetInput<MultipartMemoryStreamProvider>().
I converted the memory stream to a byte array, since a byte array can be serialized.
I read multiple articles claiming that if we convert the file to a byte array we lose the file metadata. Actually, if you read the file as a stream and then convert it to a byte array, the file data along with its metadata gets converted into the byte array.
Here:
1) Read the HttpRequestMessage as multipart; this gives a MultipartMemoryStreamProvider object.
2) Convert the data to byte arrays.
3) Pass them to the orchestrator.
4) Receive the files as byte arrays using GetInput<byte[]>().
5) In the orchestrator, convert each byte array back to a stream: MemoryStream ms = new MemoryStream(inputByteArray);
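The steps above can be sketched as follows (assumes the Durable Functions 2.x API; the function name HttpStart is illustrative):

```csharp
[FunctionName("HttpStart")]
public static async Task<HttpResponseMessage> HttpStart(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient starter)
{
    // 1) Read the multipart body.
    var provider = await req.Content.ReadAsMultipartAsync();

    // 2) Copy each part into a serializable byte array.
    var files = new List<byte[]>();
    foreach (var part in provider.Contents)
    {
        files.Add(await part.ReadAsByteArrayAsync());
    }

    // 3) Byte arrays serialize to JSON, so they survive orchestrator replays.
    string instanceId = await starter.StartNewAsync("ParentOrchestrator", files);
    return starter.CreateCheckStatusResponse(req, instanceId);
}
```

Note that large files will bloat the orchestration history; as suggested above, uploading to Blob Storage and passing URLs scales better.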

Optionally generate output with an Azure Function

I currently have a Timer triggered Azure Function that checks a data endpoint to determine if any new data has been added. If new data has been added, then I generate an output blob (which I return).
However, returning output appears to be mandatory: although I'd only like to generate an output blob under specific conditions, I must do it every time, clogging up my storage.
Is there any way to generate output only under specified conditions?
If you have the blob output binding set to your return value, but you do not want to generate a blob, simply return null to ensure the blob is not created.
You're free to execute whatever logic you want in your functions. You may need to remove the output binding from your function (this is what is making the output required) and construct the connection to blob storage in your function instead. Then you can conditionally create and save the blob.
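A minimal sketch of the return-null approach (the timer schedule, blob path, and the CheckForNewDataAsync helper are hypothetical):

```csharp
[FunctionName("ConditionalReport")]
[return: Blob("reports/{rand-guid}.json")]
public static async Task<string> Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
{
    // Hypothetical helper that returns null when the endpoint has no new data.
    string report = await CheckForNewDataAsync();

    // Returning null makes the blob output binding skip writing a blob.
    return report;
}
```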

How to convert from Azure Append Blob to Azure Block Blob

Is there any way to convert an Append Blob to a Block Blob?
Regards,
C
For a blob conversion, I am using a
--blob-type=BlockBlob
option at the end of my azcopy.exe statement. So far it works well.
Good luck!
Is there any way to convert an Append Blob to a Block Blob?
Once a blob has been created, its type cannot be changed, and it can be updated only by using operations appropriate for that blob type, i.e., writing a block or list of blocks to a block blob, appending blocks to an append blob, and writing pages to a page blob.
More information please refer to this link: Understanding Block Blobs, Append Blobs, and Page Blobs
Is there any way to convert an Append Blob to a Block Blob?
Automatic conversion between blob types is not allowed. What you would need to do is download the blob and reupload it as Block Blob.
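A sketch of that download-and-reupload approach with the Azure.Storage.Blobs SDK (the connection string and blob names are illustrative):

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

// Download the append blob's content into memory.
var container = new BlobContainerClient(connectionString, "my-container");
var appendBlob = container.GetAppendBlobClient("source.log");

using var content = new MemoryStream();
appendBlob.DownloadTo(content);
content.Position = 0;

// Re-upload through a BlockBlobClient, which creates the copy as a block blob.
var blockBlob = container.GetBlockBlobClient("source-as-block.log");
blockBlob.Upload(content);
```

For large blobs you would stream to a temporary file instead of buffering everything in a MemoryStream.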
Given: I have a source blob which is an append blob.
And: I have to copy the source to a new blob container as a block blob.
When: I use the CopyBlobToBlockBlobContainer function.
Then: the destination container will have the same blob as the source, but as a block blob.
public void CopyBlobToBlockBlobContainer(string sourceBlobName)
{
    var sourceContainerClient = new BlobContainerClient(sourceConnectionString, BlobContainerName);
    var destinationContainerClient = new BlobContainerClient(destinationConnectionString, OutBlobContainerName);
    destinationContainerClient.CreateIfNotExists();

    var sourceBlobClient = sourceContainerClient.GetBlockBlobClient(sourceBlobName);
    var sourceUri = sourceBlobClient.GenerateSasUri(BlobSasPermissions.Read, ExpiryOffset);

    var destBlobClient = destinationContainerClient.GetBlockBlobClient(sourceBlobName);
    var result = destBlobClient.SyncUploadFromUri(sourceUri, overwrite: true);
    var response = result.GetRawResponse();
    if (response.Status != 201) throw new BlobCopyException(response.ReasonPhrase);
}
Use the below command on azure cli.
azcopy copy 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<append-or-page-blob-name>' 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<name-of-new-block-blob>' --blob-type BlockBlob --block-blob-tier <destination-tier>
The --block-blob-tier parameter is optional. If you omit that parameter, then the destination blob infers its tier from the default account access tier setting. To change the tier after you've created a block blob, see Change a blob's tier.

Is it possible to generate a unique BlobOutput name from an Azure WebJobs QueueInput item?

I have a continuous Azure WebJob that is running off of a QueueInput, generating a report, and outputting a file to a BlobOutput. This job will run for differing sets of data, each requiring a unique output file. (The number of inputs is guaranteed to scale significantly over time, so I cannot write a single job per input.) I would like to be able to run this off of a QueueInput, but I cannot find a way to set the output based on the QueueInput value, or any value except for a blob input name.
As an example, this is basically what I want to do, though it is invalid code and will fail.
public static void Job(
    [QueueInput("inputqueue")] InputItem input,
    [BlobOutput("fileoutput/{input.Name}")] Stream output)
{
    // job work here
}
I know I could do something similar if I used BlobInput instead of QueueInput, but I would prefer to use a queue for this job. Am I missing something or is generating a unique output from a QueueInput just not possible?
There are two alternatives:
Use IBinder to generate the blob name, as shown in these samples.
Have an autogenerated property in the queue message object and bind the blob name to that property. See here (the BlobNameFromQueueMessage method) for how to bind a queue message property to a blob name.
Found the solution at Advanced bindings with the Windows Azure Web Jobs SDK via Curah's Complete List of Web Jobs Tutorials and Videos.
Quote for posterity:
One approach is to use the IBinder interface to bind the output blob and specify the name that equals the order id. The better and simpler approach (SimpleBatch) is to bind the blob name placeholder to the queue message properties:
public static void ProcessOrder(
    [QueueInput("orders")] Order newOrder,
    [BlobOutput("invoices/{OrderId}")] TextWriter invoice)
{
    // Code that creates the invoice
}
The {OrderId} placeholder in the blob name gets its value from the OrderId property of the newOrder object. For example, if newOrder is (JSON) {"CustomerName":"Victor","OrderId":"abc42"}, then the output blob name is "invoices/abc42". The placeholder is case-sensitive.
So, you can reference individual properties from the QueueInput object in the BlobOutput string and they will be populated correctly.
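For completeness, the IBinder alternative can be sketched using the newer WebJobs SDK attribute names ([QueueTrigger] and BlobAttribute rather than the older QueueInput/BlobOutput shown above; treat the exact names as version-dependent):

```csharp
public static async Task ProcessOrder(
    [QueueTrigger("orders")] Order newOrder,
    IBinder binder)
{
    // Compute the blob path at runtime from the queue message.
    var attribute = new BlobAttribute($"invoices/{newOrder.OrderId}", FileAccess.Write);
    using (TextWriter invoice = await binder.BindAsync<TextWriter>(attribute))
    {
        await invoice.WriteAsync("invoice contents");
    }
}
```

This is more verbose than the placeholder binding, but it lets you compute arbitrary blob names at runtime.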

How to use Codename one Storage?

I am trying to port my LWUIT application to Codename One.
I used RMS in LWUIT and now obviously I have to transform this to Storage.
I don't understand how the Storage class works in Codename One, and the Codename One documentation says nothing about it either.
1) What is the structure of a storage file?
--> In a J2ME RecordStore, you have records bunched together like a table. Every row corresponds to a record. Each record has a unique record ID, and you can access the record with this record ID. Every record can have some data stored in it.
How does this map to the Storage class?
2) I wish to store some records in my storage; how do I do it?
The documentation says:
static Storage getInstance()
Returns the storage instance or null if the storage wasn't initialized using a call to init(String) first.
--> In LWUIT it was something like Storage.init(storageName);. However, there is no init in Codename One! How do I open a Storage in Codename One?
3) If I try to open a storage file which does not exist, what will happen (RMS gives an exception)?
The easiest way to think about Storage is as a flat file system (without directories/folders).
When running on top of RMS this file system abstraction is mapped to the RMS database seamlessly for you.
Notice that init() for Storage in Codename One is no longer necessary, under LWUIT it only performed basic initialization and the name was usually ignored.
The Storage class has several methods:
InputStream createInputStream(String name)
Creates an input stream to the given storage source file
OutputStream createOutputStream(String name)
Creates an output stream to the storage with the given name
boolean exists(String name)
Returns true if the given storage file exists
String[] listEntries()
Lists the names of the storage files
You can use these to just store and check if data exists. However you can also store complex objects in storage without using input/output streams by using these two methods:
Object readObject(String name)
Reads the object from the storage, returns null if the object isn't there
boolean writeObject(String name, Object o)
Writes the given object to storage assuming it is an externalizable type or one of the supported types
So to simulate something like byte[] storage you can do something like this:
Vector p = new Vector();
byte[] myData = ...;
p.addElement(myData);
p.addElement(additionalData);
Storage.getInstance().writeObject("myStore", p);
Then just read it back as:
Vector p = (Vector)Storage.getInstance().readObject("myStore");
// p will be null if nothing was written
