Trying to write a file from an Azure WebJob

My plan is to have the WebJob send the newly generated CSV to a blob storage where it's picked up by an Azure Function and emailed out to a group of recipients.
I'm using the CsvWriter library to generate the CSV. I'm told that ideally I should upload straight to the blob storage container; however, I'm not sure how to do that when CsvWriter can only write to the filesystem via whatever TextWriter we pass to its constructor.
So what I'm trying to do instead is let CsvWriter write the file to the filesystem, then pick it up and upload it to blob storage. The problem is that every directory I try to write to denies access to the WebJob. I have tried Environment.GetEnvironmentVariable("WEBJOBS_ROOT_PATH") as well as %HOME% and d:\site\wwwroot.
What directory do I need to be writing to?

You can avoid the file system altogether by using a StringWriter:
using (var stringWriter = new StringWriter())
using (var csvWriter = new CsvHelper.CsvWriter(stringWriter))
{
    // Write the CSV content into the in-memory StringWriter.
    csvWriter.WriteComment("Test");
    csvWriter.Flush();

    // Upload the resulting text straight to the blob container.
    var blob = container.GetBlockBlobReference("test.csv");
    await blob.UploadTextAsync(stringWriter.ToString());
}
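The container variable above is assumed to already be a CloudBlobContainer. If it isn't, a minimal sketch of obtaining it with the WindowsAzure.Storage SDK could look like this (the connection string setting and container name are assumptions, not part of the original answer):

// Assumes the WindowsAzure.Storage package; names below are hypothetical.
var storageAccount = CloudStorageAccount.Parse(
    Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("csv-exports");
await container.CreateIfNotExistsAsync();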

Related

How to give template file reference from the azure storage in Adobe.PDFServicesSDK document merge operation

I am trying to pass a file reference from cloud storage. The code below takes a file reference from a local path. I am writing an Azure Function, so I need to replace this logic with Azure Storage. Can you please help me out?
// Create a new DocumentMerge Options instance
DocumentMergeOptions documentMergeOptions = new DocumentMergeOptions(jsonDataForMerge, OutputFormat.PDF);
// Create a new DocumentMerge Operation instance with the DocumentMerge Options instance
DocumentMergeOperation documentMergeOperation = DocumentMergeOperation.CreateNew(documentMergeOptions);
// Set the operation input document template from a source file.
//documentMergeOperation.SetInput(FileRef.CreateFromLocalFile("C:\\Work\\Study\\C# Tutorial\\Tekura.PDFSigning\\Tekura.PDFSigning.DocumentCreate\\" + @"salesOrderTemplate.docx"));
If you are asking how to use a file that's external to your function as input: you cannot do so directly. You must provide a local file or a stream. If you can stream from Azure Storage, you could use that, or you could download the file into temp storage and use that. But, in general, our services cannot directly use cloud-based file storage.
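For example, one way to adapt the commented-out line above is to download the template from Blob Storage to a temp file first and then hand that local path to the SDK. This is only a sketch using the classic WindowsAzure.Storage client; the connection string variable, container name, and blob name are assumptions:

// Download the template blob to a local temp file (names here are hypothetical).
var storageAccount = CloudStorageAccount.Parse(connectionString);
var container = storageAccount.CreateCloudBlobClient().GetContainerReference("templates");
var templateBlob = container.GetBlockBlobReference("salesOrderTemplate.docx");

string localPath = Path.Combine(Path.GetTempPath(), "salesOrderTemplate.docx");
await templateBlob.DownloadToFileAsync(localPath, FileMode.Create);

// Now the SDK can read it as a local file.
documentMergeOperation.SetInput(FileRef.CreateFromLocalFile(localPath));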

Create text file in memory and upload to azure blob storage using python

So I want to create a Python function that creates an in-memory text file that I can pass a string, or multiple strings, to and upload to Azure Blob Storage. Doing it with a normal text file on disk is straightforward, but I can't seem to make it happen using the io module.
Please check out https://pypi.org/project/azure-storage-blob/, which explains in detail how to upload a file to Azure Blob Storage.
You can easily pass a string to the upload_blob method as below:
blob = BlobClient.from_connection_string(conn_str="<connection_string>", container_name="my_container", blob_name="my_blob")
data = "test"
blob.upload_blob(data)

Azure Logic App: Create CSV File in Blob

I am having a problem updating a CSV file in my blob container. I have an existing CSV file in the blob, and when I press the download button it automatically downloads the file to my machine.
I have already created a Logic App that updates the CSV file. When I run the app's trigger, it updates the file, but when I press download it opens a new tab where the CSV content is displayed.
I want it to behave like the original: when I press download, the file should download to my machine.
Any help or confirmation that this is possible would be appreciated.
I already tried "Compose" and "Create CSV table", but that way the result is not stored in a blob.
As I have tested, when you create a .csv file with the "Create blob" action in a Logic App, you will always hit this problem, because the blob content type is "text/plain", which the browser displays in another tab.
So I suggest you use an Azure Function to create the blob. In the Azure Function you can set: blob.Properties.ContentType = "application/octet-stream";
Here is the create-blob code in the Azure Function:
var storageAccount = CloudStorageAccount.Parse(connectionString);
var client = storageAccount.CreateCloudBlobClient();
var container = client.GetContainerReference("data");
await container.CreateIfNotExistsAsync();

var blob = container.GetBlockBlobReference(name);
// application/octet-stream forces the browser to download instead of displaying the content.
blob.Properties.ContentType = "application/octet-stream";

using (Stream stream = new MemoryStream(Encoding.UTF8.GetBytes(data)))
{
    await blob.UploadFromStreamAsync(stream);
}
For more detailed code, you could refer to this article. After following it, you will be able to download your blob to your local machine.
Note: when you create the Azure Function action in the Logic App, it will show an error. Just delete the "AzureWebJobsSecretStorageType" app setting that is set to "blob".
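For context, here is a rough sketch of how the snippet above might sit inside an HTTP-triggered function that the Logic App calls. The function name, parameters, and connection string setting are assumptions, not the article's exact code:

// Hypothetical HTTP-triggered function; the Logic App posts the CSV text and a blob name to it.
[FunctionName("CreateCsvBlob")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
{
    string name = req.Query["name"];
    string data = await new StreamReader(req.Body).ReadToEndAsync();
    string connectionString = Environment.GetEnvironmentVariable("AzureWebJobsStorage");

    var storageAccount = CloudStorageAccount.Parse(connectionString);
    var container = storageAccount.CreateCloudBlobClient().GetContainerReference("data");
    await container.CreateIfNotExistsAsync();

    var blob = container.GetBlockBlobReference(name);
    blob.Properties.ContentType = "application/octet-stream";
    using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(data)))
    {
        await blob.UploadFromStreamAsync(stream);
    }
    return new OkObjectResult($"Created blob {name}");
}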

Avoid over-writing blobs AZURE on the server

I have a .NET app which uses the WebClient and the SAS token to upload a blob to the container. The default behaviour is that a blob with the same name is replaced/overwritten.
Is there a way to change this on the server, i.e. prevent an already existing blob from being replaced?
I've seen Avoid over-writing blobs AZURE, but it is about the client side.
My goal is to secure the server against overwriting blobs.
AFAIK the file is uploaded directly to the container, without a chance to intercept the request and check e.g. the existence of the blob.
Edited
Let me clarify: My client app receives a SAS token to upload a new blob. However, an evil hacker can intercept the token and upload a blob with an existing name. Because of the default behavior, the new blob will replace the existing one (effectively deleting the good one).
I am aware of different approaches to deal with the replacement on the client. However, I need to do it on the server, somehow even against the client (which could be compromised by the hacker).
You can issue the SAS token with "create" permissions and without "write" permissions. This will allow the user to upload blobs up to 64 MB in size (the maximum allowed for a single Put Blob) as long as they are creating a new blob and not overwriting an existing one. See the explanation of SAS permissions for more information.
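As a rough illustration with the classic WindowsAzure.Storage SDK (the policy lifetime, container, and blob name are assumptions), a create-only SAS for a container might be issued like this:

// Issue a SAS that permits creating new blobs but not overwriting existing ones.
var policy = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Create,   // deliberately no Write permission
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
};
string sasToken = container.GetSharedAccessSignature(policy);
string uploadUri = container.Uri + "/" + blobName + sasToken;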
There is no configuration on the server side, but you can implement it in code using the storage client SDK.
// Retrieve a reference to a previously created container.
var container = blobClient.GetContainerReference(containerName);
// Retrieve a reference to the blob.
var blobReference = container.GetBlockBlobReference(blobName);
// If the blob already exists, do nothing; otherwise upload it.
if (!await blobReference.ExistsAsync())
{
    await blobReference.UploadFromFileAsync(filePath);
}
You could do something similar using the REST API:
https://learn.microsoft.com/en-us/rest/api/storageservices/fileservices/blob-service-rest-api
Get Blob Properties will return 404 if the blob does not exist.
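A minimal sketch of that existence check over REST, assuming the blob URI already carries a SAS token with read permission (the URI variable is hypothetical and this is illustration, not the linked documentation's code):

// Get Blob Properties is a HEAD request; 404 means the blob does not exist yet.
using (var http = new HttpClient())
{
    var request = new HttpRequestMessage(HttpMethod.Head, blobUriWithSas);
    var response = await http.SendAsync(request);
    bool blobExists = response.StatusCode != HttpStatusCode.NotFound;
    if (!blobExists)
    {
        // Safe to upload without overwriting anything.
    }
}

Note that a separate check-then-upload is not atomic; the conditional-header approach in the next answer avoids that race.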
Is there a way to change this on the server, i.e. prevent an already existing blob from being replaced?
Azure Storage exposes the Blob Service REST API for operations against blobs. To upload or update a blob (file), you invoke the Put Blob REST API, which states the following:
The Put Blob operation creates a new block, page, or append blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with Put Blob; the content of the existing blob is overwritten with the content of the new blob.
In order to avoid overwriting existing blobs, you need to explicitly specify conditional headers for your blob operations. As a simple approach, you could leverage the Azure Storage SDK for .NET (essentially a wrapper over the Azure Storage REST API) and upload your blob (file) as follows:
try
{
    var container = new CloudBlobContainer(new Uri($"https://{storageName}.blob.core.windows.net/{containerName}{containerSasToken}"));
    var blob = container.GetBlockBlobReference("{blobName}");
    //bool isExist = blob.Exists();

    // The access condition adds an If-None-Match: * header, so the upload fails if the blob already exists.
    blob.UploadFromFile("{filepath}", accessCondition: AccessCondition.GenerateIfNotExistsCondition());
}
catch (StorageException se)
{
    var requestResult = se.RequestInformation;
    if (requestResult != null)
    {
        // 409: The specified blob already exists.
        Console.WriteLine($"HttpStatusCode:{requestResult.HttpStatusCode},HttpStatusMessage:{requestResult.HttpStatusMessage}");
    }
}
Also, you could combine your blob name with the MD5 hash of the blob file before uploading to Azure Blob Storage, so different content never maps to an existing name.
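A small sketch of that naming idea (purely illustrative; the helper and variable names are made up):

// Derive a content-based name so uploads with different content never collide with an existing blob.
string HashedBlobName(string filePath, string originalName)
{
    using (var md5 = MD5.Create())
    using (var stream = File.OpenRead(filePath))
    {
        string hash = BitConverter.ToString(md5.ComputeHash(stream)).Replace("-", "").ToLowerInvariant();
        return $"{hash}-{originalName}";
    }
}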
As far as I know, there is no configuration in the Azure Portal or the Storage tools that achieves this on the server side. You could post your feedback to the Azure Storage team.

Azure blob storage split blobs and store on Cloud file share

I am uploading zip files to Azure Blob Storage, and they are relatively huge.
Now I need to go to those containers, get the blob reference, and store that zip file as multiple zips on a cloud file share. I am not sure how to proceed with that.
var storageAccount = AzureUtility.CreateStorageAccountFromConnectionString();
var container = AzureUtility.GetAzureCloudBlobContainerReference("fabcd");

CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
CloudFileShare share = fileClient.GetShareReference("sample-share");
share.CreateIfNotExists();

CloudBlockBlob sourceBlob = container.GetBlockBlobReference("Test.zip");
bool fileExists = share.GetRootDirectoryReference().GetFileReference("Test.zip").Exists();
if (fileExists)
{
    //split and share
}
Any suggestions?
My understanding is that you're trying to download a blob and then divide it and upload the pieces as multiple files.
There are two options here, both of which are exposed through the .NET API. The blob API exposes both DownloadRangeTo* and DownloadTo* methods, and the file API exposes UploadFrom* methods. If you know in advance the divisions you want to make, you can download each range and upload it to a file. Otherwise you can download the entire blob, divide it client side, and then upload the divisions. A sketch of the range-based approach follows.
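Here is a rough sketch of the range-based option with the classic storage SDK, reusing the sourceBlob and share references from the question; the chunk size and file-naming scheme are assumptions:

// Split a large blob into fixed-size chunks and upload each chunk as its own file on the share.
const long chunkSize = 100 * 1024 * 1024; // 100 MB per piece (arbitrary choice)

await sourceBlob.FetchAttributesAsync();              // populates sourceBlob.Properties.Length
long blobLength = sourceBlob.Properties.Length;
var rootDir = share.GetRootDirectoryReference();

int partNumber = 0;
for (long offset = 0; offset < blobLength; offset += chunkSize, partNumber++)
{
    long length = Math.Min(chunkSize, blobLength - offset);

    using (var buffer = new MemoryStream())
    {
        // Download just this byte range of the blob.
        await sourceBlob.DownloadRangeToStreamAsync(buffer, offset, length);
        buffer.Position = 0;

        // Upload the range as a separate file, e.g. Test.part0.zip, Test.part1.zip, ...
        CloudFile partFile = rootDir.GetFileReference($"Test.part{partNumber}.zip");
        await partFile.UploadFromStreamAsync(buffer);
    }
}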
