How to create a zip file from Azure blob storage container files using Pipeline - azure

I have some dynamically created files in a blob storage container, and I want to send them through email as a single attachment.
The total file size is less than 5 MB.
The difficulty I am facing is that when I try to compress the files using the Copy Data activity's compression options, the zipped file is not created properly when it should contain multiple files.
If I zip a single file by giving its full path and filename, it works fine. But when I give a folder name to compress all the files in that folder, it does not work correctly.
Please note that I am not using any external C# code or libraries here.
Any help appreciated
Thank you

You can reference my settings in the Data Factory Copy activity (screenshots): source settings, source dataset settings, sink settings, and sink dataset settings.
The pipeline runs successfully, and the zip file is created in the container containerleon.
Hope this helps.

Related

Azure Load Testing: How to add 20k test files into my test?

I have created a JMeter test that randomly selects orders from a pool of 20K .json data files.
I need to upload the .json files along with the .jmx file; however, the Azure Load Testing UI allows uploading at most 10 files.
I have read the documentation and could not find anything relevant on how to upload the 20k data files.
Is there a way to upload the 20k files to my test in one go?
Thanks,
P.
I just found out that there is an Azure Load Testing API that I can use to upload my data files.
More information can be found in the following link:
https://learn.microsoft.com/en-us/rest/api/loadtesting/dataplane/test/upload-test-file?tabs=HTTP
Thanks,
P.
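That REST API can be scripted to push all 20k files in a loop. Below is a minimal Python sketch, assuming the data-plane "Upload Test File" endpoint described at that link (PUT /tests/{testId}/files/{fileName}); the data-plane URL, test id, api-version, token scope, and local folder name are placeholders/assumptions to verify against the current documentation.

import pathlib
import requests
from azure.identity import DefaultAzureCredential

DATAPLANE_URL = "<resource>.<region>.cnt-prod.loadtesting.azure.com"  # from the Load Testing resource overview
TEST_ID = "my-load-test"                                              # hypothetical test id
API_VERSION = "2022-11-01"                                            # assumption: check the docs for the current version
SCOPE = "https://cnt-prod.loadtesting.azure.com/.default"             # assumption: data-plane token scope

token = DefaultAzureCredential().get_token(SCOPE).token
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/octet-stream"}

# Upload every .json data file in the local "data" folder to the test.
for path in sorted(pathlib.Path("data").glob("*.json")):
    url = f"https://{DATAPLANE_URL}/tests/{TEST_ID}/files/{path.name}?api-version={API_VERSION}"
    with open(path, "rb") as f:
        requests.put(url, headers=headers, data=f).raise_for_status()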

XLSX files in azure blob storage get downloaded as zip files

We have some files in our Azure blob storage - they are all xlsx files.
When we download them via Azure portal (we navigate to the storage account, then to the container, and then select a file and download it) it downloads and saves as zip file.
If, after downloading, we change its extension back to .xlsx, Excel recognizes it and opens it without issues. However, something is forcing that extension to change from .xlsx (as we see it in the container) to .zip while it is downloaded.
The same happens when we access the files programmatically (via c# code) or generate a shared access signature.
What could it be and how to fix it?
Thanks!
My workaround when accessing xlsx files programmatically with C# is to manually add the MIME type specifically for the xlsx file type, as those were the ones giving me issues (PDFs and pictures work fine). PS: I store the file data in my DB with a corresponding filename, i.e.
// Serve the stored bytes with the Office Open XML spreadsheet MIME type
if (YourModel.FileName.EndsWith("xlsx"))
{
    return File(YourModel.FileData, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
}
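A complementary approach is to fix the Content-Type on the blob itself, so that portal and SAS downloads also keep the .xlsx extension; .xlsx files are zip packages internally, so a generic application/zip or application/octet-stream content type is what makes browsers save them as .zip. Below is a hedged sketch with the azure-storage-blob Python SDK; the connection string, container, and blob names are placeholders.

from azure.storage.blob import BlobServiceClient, ContentSettings

# Placeholders: point these at your storage account, container, and blob.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="reports", blob="report.xlsx")

# Set the Office Open XML MIME type so downloads keep the .xlsx extension.
blob.set_http_headers(content_settings=ContentSettings(
    content_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"))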

Unzip and Rename underlying File using Azure Logic App

Is it possible to rename the underlying file while unzipping using a Logic App? I am calling an HTTP activity to download a ZIP file. That zip contains only one underlying file, with some value appended to its name. I want to store the unzipped file with a better name so that it can be used further. Is it possible?
Incoming ZIP File --> SAMPLEFile.ZIP
Underlying File --> SampleTextFile20200824121212.TXT
Desired File --> SampleTextFile.TXT
Suggestions ?
As far as I know, we can't implement this requirement directly in the "Extract archive to folder" action. We can only rename the file by copying it from one folder to another folder (shown below).
You can create a new ticket on the feedback page to ask the Azure team for this feature.

How can I decompress my .zip file and store in ADL/Blob storage?

I have an FTP source connection where some files are zip files and others are not compressed. I want to copy the files from FTP, decompress the zip files, and put all the files into Azure Data Lake or Azure Blob Storage, whichever allows them to land decompressed.
I'm using the Copy Data activity with FTP as the source, the compression properties set to ZipDeflate/Fastest, and binary copy enabled; on the sink side I'm just defining the destination ADL path. The files are getting copied to ADL, but only in compressed form.
Please let me know if it's possible to achieve the above objective using the Copy activity.
Using binary copy is your issue here; Data Factory won't inspect the data it is moving, so it cannot decompress it. Try the same setup without binary copy!
Hope this helped!

Databricks File Save

I'm using Databricks on Azure and am using a library called OpenPyXl.
I'm running the sample code shown here: and the last line of the code is:
wb.save('document.xlsx', as_template=False)
The code seems to run, so I'm guessing it's storing the file somewhere on the cluster. Does anyone know where, so that I can then transfer it to Blob storage?
To save a file to the FileStore, put it in the /FileStore directory in DBFS:
dbutils.fs.put("/FileStore/my-stuff/my-file.txt", "Contents of my file")
Note: The FileStore is a special folder within the Databricks File System (DBFS) where you can save files and have them accessible from your web browser.
For more details, refer to "Databricks - The FileStore".
Hope this helps.
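To connect this back to the question: openpyxl's wb.save('document.xlsx') writes to the driver node's local filesystem (the current working directory), not to DBFS. Below is a hedged sketch of getting the file into DBFS/FileStore and on to Blob storage, meant to run in a Databricks notebook (where dbutils is available); the container and account names are placeholders, and the wasbs copy assumes the storage credentials are already configured or the container is mounted.

from openpyxl import Workbook

wb = Workbook()                      # stands in for the workbook built in the question
wb.save("/tmp/document.xlsx")        # openpyxl writes to the driver's local disk

# Copy from the driver's local filesystem into DBFS/FileStore.
dbutils.fs.cp("file:/tmp/document.xlsx", "dbfs:/FileStore/document.xlsx")

# Or copy straight to Blob storage (placeholders; assumes the container is mounted
# or the storage credentials are set in the Spark conf).
dbutils.fs.cp("file:/tmp/document.xlsx",
              "wasbs://<container>@<account>.blob.core.windows.net/document.xlsx")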
