Is there a way to programmatically open and read zip files uploaded by PXUploadDialog? (Acumatica)

I have a user that wants to upload a zip file that contains multiple images and a single CSV file with data related to the images. They want to be able to upload the zip file and have the program dissect it by finding and processing the data within the CSV file and then storing the images within the zip to their appropriate locations.
I'm trying to figure out how to open the zip so I can cycle through each file in there to find what I need. Is there any way to do this?

You can use the ZipArchive class from the Acumatica Framework:
// The uploaded file must first be attached to a DAC record
Guid[] files = PXNoteAttribute.GetFileNotes(DACCache, DACRecord);
UploadFileMaintenance upload = PXGraph.CreateInstance<UploadFileMaintenance>();

foreach (Guid fileID in files)
{
    FileInfo fileInfo = upload.GetFile(fileID);
    if (fileInfo != null)
    {
        using (MemoryStream stream = new MemoryStream(fileInfo.BinData))
        {
            // Open the archive and decompress it into a unique temp directory
            ZipArchive zip = ZipArchive.OpenReadonly(stream);
            string tempDirectory = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
            Directory.CreateDirectory(tempDirectory);
            ZipFolder.Decompress(zip, tempDirectory, true);

            foreach (string filePath in Directory.GetFiles(tempDirectory))
            {
                // Enumerating decompressed files
            }
        }
    }
}
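
If you'd rather not extract to a temp directory at all, here is a minimal alternative sketch that uses the standard System.IO.Compression.ZipArchive (a different class from the Acumatica one above) to find and read the CSV entry in memory; fileInfo is the same object as in the loop above:

using System;
using System.IO;
using System.IO.Compression;

// Sketch: enumerate the zip entries in memory with the standard .NET
// ZipArchive instead of decompressing everything to disk.
using (var stream = new MemoryStream(fileInfo.BinData))
using (var zip = new System.IO.Compression.ZipArchive(stream, ZipArchiveMode.Read))
{
    foreach (var entry in zip.Entries)
    {
        if (Path.GetExtension(entry.Name).Equals(".csv", StringComparison.OrdinalIgnoreCase))
        {
            // Found the CSV: read it, then decide where each image goes
            using (var reader = new StreamReader(entry.Open()))
            {
                string csvContent = reader.ReadToEnd();
                // parse csvContent here
            }
        }
    }
}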

Related

Change zip contents from a zip data stream without extraction (Node.js)

I have an endpoint in a NestJS server (TypeScript) that receives a zip file from another service.
I want to change the file names inside this zip without extracting it locally, but I can't find a way.
I also can't find a way to create a zip file from a stream of data.
// get data from the stream
formFileContent = downloadFileResponse.data;
// here I need to change the zip file names
Any idea how to do this? Any library or something?
Found the answer:
const admZip = require('adm-zip');

// Read the incoming zip buffer, then rebuild a new zip with renamed entries
const zip = new admZip(data);
const newZip = new admZip();

zip.getEntries().forEach((zipEntry, i) => {
    const entryData = zipEntry.getData();
    const newFileName = cart[i] + '.pdf'; // cart comes from the surrounding code and holds the new base names
    newZip.addFile(newFileName, entryData);
});

return new StreamableFile(newZip.toBuffer());

How to read downloaded Excel file content from the Google Drive API in Dart/Flutter?

I am using the Google Drive API to download an Excel file in my Flutter app, and I want to store the downloaded file content in a File and then perform some update operations using the excel Dart package. Below is the code for reading an xlsx file from a path location:
var file = "Path_to_pre_existing_Excel_File/excel_file.xlsx"; // here I want to store the response from the Drive API
var bytes = File(file).readAsBytesSync();
var excel = Excel.decodeBytes(bytes);

// Do some logic here
for (var table in excel.tables.keys) {
  print(table); // sheet name
  print(excel.tables[table].maxCols);
  print(excel.tables[table].maxRows);
  for (var row in excel.tables[table].rows) {
    print("$row");
  }
}

// Then save the Excel file by updating it on Drive
updateToDrive(excel, fileId);
I have created all the required auth functions and Drive scopes, and my download function looks like this:
Future<void> downloadFile() async {
  String fileId = '1TOa4VKfZBHZe######WLA4M95nOWp';
  final response = await driveApi.files.get(
    fileId,
    downloadOptions: drive.DownloadOptions.fullMedia,
  );
  print(response);
}
This function executes correctly and returns a Media type response, but I am not able to read that response so that I can store it in a file.
Any help would be truly appreciated, thanks.
I changed my download function as follows: since driveApi.files.get() returns a Future<Object>, I cast the result so it yields a Future<Media?>.
String fileId = "19jF3lOVW563LU6m########jXVLNQ7poXY1Z";
drive.Media? response = (await driveApi.files.get(
  fileId,
  downloadOptions: drive.DownloadOptions.fullMedia,
)) as drive.Media?;
Now response is a Media object whose stream we can listen to in order to store the response in a file.
To do that, we first need to get the app directory via path_provider:
final String path = (await getApplicationSupportDirectory()).path;
final String fileName = '$path/Output.xlsx';
File file = File(fileName);
Now we want to write the response stream (a Stream<List<int>>) into our file object:
List<int> dataStore = [];
// listen returns a StreamSubscription, not a Future, so there is nothing to await here
response!.stream.listen((data) {
  print("DataReceived: ${data.length}");
  dataStore.insertAll(dataStore.length, data);
}, onDone: () {
  print("Task Done");
  file.writeAsBytes(dataStore);
  OpenFile.open(file.path);
  print("File saved at ${file.path}");
}, onError: (error) {
  print("Some Error: $error");
});
Now we can make whatever changes we want through the excel package.

Azure Storage zip file corrupted after upload (using Azure Storage Blob client library for Java)

Problem: a zip file containing CSV files generated from data seems to be corrupted after upload to Azure Blob Storage.
Before the upload the zip file opens fine and everything works; after the upload, that same zip file is corrupted (screenshots of both omitted).
For the upload I use the Azure Storage Blob client library for Java (v12.7.0, but I also tried previous versions). This is the code I use (similar to the example provided in the SDK readme):
public void uploadFileFromPath(String pathToFile, String blobName) {
    BlobClient blobClient = blobContainerClient.getBlobClient(blobName);
    blobClient.uploadFromFile(pathToFile);
}
The upload completes and the file shows up in storage, but when I download it directly from Storage Explorer, it is already corrupted.
What am I doing wrong?
According to your description, I suggest you use the following overload to upload your zip file:
public void uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String,String> metadata, AccessTier tier, BlobRequestConditions requestConditions, Duration timeout)
This overload lets us set the content type. For example:
BlobHttpHeaders headers = new BlobHttpHeaders()
    .setContentType("application/x-zip-compressed");
Integer blockSize = 4 * 1024 * 1024; // 4 MB
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions(blockSize, null, null);
blobClient.uploadFromFile(pathToFile, parallelTransferOptions, headers, null, AccessTier.HOT, null, null);
For more details, please refer to the documentation.
Eventually it was my fault: I didn't close the ZipOutputStream before uploading the file. That is not much of a problem when you use try-with-resources and just want to generate a local file, but in my case I wanted to upload the file to Blob Storage while still inside the try section. The file was incomplete (not closed), so it landed in storage with corrupted data. This is what I should have done from the very beginning:
private void addZipEntryAndDeleteTempCsvFile(String pathToFile, ZipOutputStream zipOut,
        File file) throws IOException {
    LOGGER.info("Adding zip entry: {}", pathToFile);
    zipOut.putNextEntry(new ZipEntry(pathToFile));
    try (FileInputStream fis = new FileInputStream(file)) {
        byte[] bytes = new byte[1024];
        int length;
        while ((length = fis.read(bytes)) >= 0) {
            zipOut.write(bytes, 0, length);
        }
        zipOut.closeEntry();
        file.delete();
    }
    zipOut.close(); // the missing part: close the stream before uploading
}
Thank you for your help, @JimXu. I really appreciate it.

How do I unzip one large zip file (100 GB+) from one blob container to another blob container? I get a System.OutOfMemoryException

StorageCredentials creden = new StorageCredentials(AccountNameAzure, AccessKeyAzure);
CloudStorageAccount storageAccount = new CloudStorageAccount(creden, useHttps: true);
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

// Get a reference to the container that holds our zip file, as well as a reference to the file itself.
CloudBlobContainer container = blobClient.GetContainerReference(SourceContainerName);
Console.WriteLine("SourceContainerName: " + SourceContainerName + "\n");

// Retrieve a reference to the blob - the zip file we want to extract.
CloudBlockBlob blockBlob = container.GetBlockBlobReference("ABCD.zip");

// Get a reference to the container we want to extract the files to,
// and create it in the Storage Account if it does not already exist.
Console.WriteLine("TargetContainerName: " + TargetContainerName);
var extractcontainer = blockBlob.ServiceClient.GetContainerReference(TargetContainerName);
extractcontainer.CreateIfNotExists();

// With both source and target container references set up, download the zip blob
// into a memory stream and pass it to the ZipArchive class from the
// System.IO.Compression namespace. ZipArchive takes this stream as input and
// exposes an Entries collection, where each entry represents an individual file.
Console.WriteLine("Starting unzip: " + DateTime.Now);

// Save the blob (zip file) contents to a MemoryStream.
using (var zipBlobFileStream = new MemoryStream())
{
    blockBlob.DownloadToStream(zipBlobFileStream);
    zipBlobFileStream.Flush();
    zipBlobFileStream.Position = 0;

    // Use ZipArchive from System.IO.Compression to extract all the files from the zip.
    using (var zip = new ZipArchive(zipBlobFileStream))
    {
        // Each entry here represents an individual file or folder.
        foreach (var entry in zip.Entries)
        {
            Console.WriteLine(entry.FullName + "\n");
            Console.WriteLine("File unzip start: " + DateTime.Now + "\n");

            // Create an empty blob (block blob) with the same name as the file.
            var blob = extractcontainer.GetBlockBlobReference(entry.FullName);
            using (var stream = entry.Open())
            {
                // Check for file vs. folder and fill the blob with the actual content from the stream.
                if (entry.Length > 0)
                    blob.UploadFromStream(stream);
            }
            Console.WriteLine("File unzip end: " + DateTime.Now + "\n");
        }
    }
}
I found no other solution yet. Instead of the MemoryStream I used

using (var zipBlobFileStream = new FileStream(Path.GetTempFileName(), FileMode.Create, FileAccess.ReadWrite, FileShare.None, 4096, FileOptions.DeleteOnClose))

and downloaded the blob stream into that temp file.
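
Pieced together, a minimal sketch of that temp-file approach, reusing blockBlob and extractcontainer from the question's code; the FileStream simply replaces the MemoryStream, so the 100 GB archive is spooled to disk instead of RAM:

// Sketch: spool the blob to a temp file (removed automatically via
// FileOptions.DeleteOnClose) so the archive never has to fit in memory,
// then stream each entry straight to the target container.
using (var zipBlobFileStream = new FileStream(
    Path.GetTempFileName(), FileMode.Create, FileAccess.ReadWrite,
    FileShare.None, 4096, FileOptions.DeleteOnClose))
{
    blockBlob.DownloadToStream(zipBlobFileStream);
    zipBlobFileStream.Position = 0;

    using (var zip = new ZipArchive(zipBlobFileStream))
    {
        foreach (var entry in zip.Entries)
        {
            if (entry.Length == 0) continue; // skip folder entries

            var blob = extractcontainer.GetBlockBlobReference(entry.FullName);
            using (var stream = entry.Open())
            {
                blob.UploadFromStream(stream);
            }
        }
    }
}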

Azure: How to reliably download a file from blob storage?

We create a lot of CSV files and store them in an Azure blob container. When all files have been created, they are downloaded to a certain location within a VPN. This works fine most of the time, but it can happen that some files end up much smaller than expected.
The files have an average size of ~40 MB; sometimes a downloaded file is only between 7 and 30 MB. This is the code so far:
private void DownloadBlobToFile(ICloudBlob blob, string fileName)
{
    _log.Debug("DownloadBlobToFile");
    // Contract.Requires expects the condition that must hold, so the checks must not be inverted
    Contract.Requires<ArgumentNullException>(blob != null, "blob must not be null.");
    Contract.Requires<ArgumentNullException>(!string.IsNullOrWhiteSpace(fileName), "fileName must not be null or empty.");

    using (var fileStream = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.Write, FileShare.Read))
    {
        blob.DownloadToStream(fileStream, null, GetBlobRequestOptions());
        _log.Info(string.Format("File {0} successfully downloaded.", fileStream.Name));
    }
}

private BlobRequestOptions GetBlobRequestOptions()
{
    if (_blobRequestOptions != null)
    {
        return _blobRequestOptions;
    }

    _blobRequestOptions = new BlobRequestOptions
    {
        ServerTimeout = TimeSpan.FromSeconds(60),
        MaximumExecutionTime = TimeSpan.FromSeconds(180),
        RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(3), 5)
    };
    return _blobRequestOptions;
}
When the files are downloaded incompletely, there is no error message at all.
What is the best procedure to check if files have been successfully and completely downloaded?
Not sure if you have control over the name of the file, but if you do, you can name the file with the MD5 hash of its content. Then you can compute the MD5 hash when the file downloads and make sure it matches the file name; that way you know you have all of the content. You could also store this hash as metadata somewhere else and associate it with the file, but the key is using an MD5 hash to validate that you received everything you expect.
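
As a variant of that idea: the classic storage SDK also exposes the MD5 the service keeps for a blob (blob.Properties.ContentMD5, normally populated on upload), so a download can be verified without encoding the hash in the file name. A minimal sketch, assuming ContentMD5 was actually set at upload time (the method name is mine):

using System;
using System.IO;
using System.Security.Cryptography;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch: compare the local file's MD5 with the MD5 stored for the blob.
// Call blob.FetchAttributes() first if the properties are not yet populated.
private static bool IsDownloadComplete(ICloudBlob blob, string fileName)
{
    using (var md5 = MD5.Create())
    using (var fileStream = File.OpenRead(fileName))
    {
        string localHash = Convert.ToBase64String(md5.ComputeHash(fileStream));
        return string.Equals(localHash, blob.Properties.ContentMD5, StringComparison.Ordinal);
    }
}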
