Read a directory as bytes - io

I'm trying to build an archive file using the Archive package and I need to archive a whole directory.
So far I see that I can archive a single file by doing this:
io.File file = new io.File(pathToMyFile);
List<int> bytes = await file.readAsBytes();
io.FileStat stats = await file.stat();
Archive archive = new Archive();
archive.addFile(new ArchiveFile.noCompress('config.yaml', stats.size, bytes)
  ..mode = stats.mode
  ..lastModTime = stats.modified.millisecond);
List<int> data = new ZipEncoder().encode(archive, level: Deflate.NO_COMPRESSION);
await new io.File('output.war').writeAsBytes(data, flush: true);
But to create an ArchiveFile I need the bytes representing the file, and it seems like it would be convenient to read the whole directory as bytes for this. Is there a way to do that? The Dart API on Directory seems pretty limited.
How does one usually go about, say, copying a directory? Just shell out to cp? I would like a solution that works across platforms.

You do it file by file, recursively. Grinder has a task for this; take a look at the implementation: https://github.com/google/grinder.dart/blob/devoncarew_0.7.0-dev.1/lib/grinder_files.dart#L189
You get all files of a directory with
var fileList =
    new io.Directory(path.join(io.Directory.current.absolute.path, 'lib'))
        .list(recursive: true, followLinks: false);
and then you process one file after the other.
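Putting those pieces together, a minimal sketch of archiving a directory file by file (assuming the archive and path packages; zipDirectory is a hypothetical helper name, not part of either package):
import 'dart:async';
import 'dart:io' as io;

import 'package:archive/archive.dart';
import 'package:path/path.dart' as path;

Future<List<int>> zipDirectory(String dirPath) async {
  final archive = new Archive();
  final dir = new io.Directory(dirPath);

  // Walk the tree and add each regular file under its path relative
  // to the directory being archived.
  await for (final entity in dir.list(recursive: true, followLinks: false)) {
    if (entity is io.File) {
      final bytes = await entity.readAsBytes();
      final stats = await entity.stat();
      final relative = path.relative(entity.path, from: dirPath);
      archive.addFile(new ArchiveFile(relative, stats.size, bytes)
        ..mode = stats.mode);
    }
  }
  return new ZipEncoder().encode(archive);
}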


Azure File Share Check if multiple files exist

We are using Azure File Shares (File Shares, not GPv2; that is, we're not using blobs or queues, just File Shares) to store our files.
We need to check whether a list of file paths exists or not.
Is there a "bulk" version of ShareFileClient.ExistsAsync?
What's the best workaround otherwise?
We tried calling Exists on each path, each call in its own task, but it takes too long to return (around 25 seconds for 250 paths):
var tasks = paths.AsParallel().Select(p => Task.Run(() =>
{
    // share is a captured variable of type ShareClient
    var dir = share.GetDirectoryClient(GetDirName(p));
    var file = dir.GetFileClient(GetFileName(p));
    var result = file.Exists();
    return result.Value;
}));
As such, there's no direct way to check the existence of multiple files.
However, there is a workaround:
What you can do is list the files and subdirectories in a directory using the GetFilesAndDirectoriesAsync(ShareDirectoryGetFilesAndDirectoriesOptions, CancellationToken) method. Once you have the list, you can loop over it and check whether a file with a particular name exists in the directory.
This will be much faster and more cost-efficient, because you make a single request to get the list of files instead of calling ShareFileClient.Exists on each file, where each method call is a separate network request.
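For illustration, a sketch of that approach (assuming the Azure.Storage.Files.Shares SDK; share, GetDirName, and GetFileName are the variables and helpers from the question; missing collects the paths that do not exist):
// Group the paths by directory so each directory is listed exactly once.
var pathsByDir = paths.GroupBy(GetDirName);
var missing = new List<string>();

foreach (var group in pathsByDir)
{
    var dir = share.GetDirectoryClient(group.Key);
    var existing = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

    // One listing call per directory instead of one Exists call per file.
    await foreach (ShareFileItem item in dir.GetFilesAndDirectoriesAsync())
    {
        if (!item.IsDirectory)
            existing.Add(item.Name);
    }

    foreach (var p in group)
    {
        if (!existing.Contains(GetFileName(p)))
            missing.Add(p);
    }
}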

how to prompt where to download zip file created with archiver in node

I am trying to create a zip file in node using the code provided in "how to create a zip file in node given multiple downloadable links", as shown below:
var fs = require('fs');
var archiver = require('archiver');

var output = fs.createWriteStream('./example.zip');
var archive = archiver('zip', {
  gzip: true,
  zlib: { level: 9 } // Sets the compression level.
});

archive.on('error', function(err) {
  throw err;
});

// pipe archive data to the output file
archive.pipe(output);

// append files
archive.file('/path/to/file0.txt', {name: 'file0-or-change-this-whatever.txt'});
archive.file('/path/to/README.md', {name: 'foobar.md'});

archive.finalize();
When I use this suggestion, the zip file is saved without any kind of prompt asking me where I would like to save it. Is there any way to show a prompt asking where to save the file, as is quite normal these days?
If that is absolutely not possible, would it be possible to always save the file in the Downloads folder (regardless of whether I'm on Mac, Windows, or any other operating system)?
So there are a couple of things here. In terms of a 'prompt' or 'pop-up', you won't find anything along the lines of WinForms out of the box, but there are options for the command line such as the prompts package, which you can use for your user input:
https://www.npmjs.com/package/prompts
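For example, a minimal sketch that asks for the destination before writing the archive (the question text and default path are made up for illustration):
var prompts = require('prompts');
var fs = require('fs');
var archiver = require('archiver');

(async function () {
  // Ask the user where the zip should be written.
  var answer = await prompts({
    type: 'text',
    name: 'dest',
    message: 'Where should the zip be saved?',
    initial: './example.zip'
  });

  var output = fs.createWriteStream(answer.dest);
  var archive = archiver('zip', { zlib: { level: 9 } });
  archive.pipe(output);
  archive.file('/path/to/file0.txt', { name: 'file0.txt' });
  archive.finalize();
})();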
You'll want to use path and more specifically path.join() to combat the mac/windows/linux issue.
Do you need to use path.join in node.js?
You can run an express server and create a route that uses res.download() to serve the zipped file:
https://expressjs.com/en/api.html#res.download
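A minimal sketch of that route (the '/download' path and port number are placeholders):
var express = require('express');
var path = require('path');
var app = express();

app.get('/download', function (req, res) {
  // res.download sends the file as an attachment, so the browser applies
  // its usual save/prompt behavior on the client side.
  res.download(path.join(__dirname, 'example.zip'), 'example.zip', function (err) {
    if (err) console.error(err);
  });
});

app.listen(3000);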

How to avoid performing a firebase function on folders on cloud storage events

I'm trying to organize assets (images) into folders with a unique id for each asset, the reason being that each asset will have multiple formats (thumbnails, and formats optimized for the web and different viewports).
So every asset that I upload to the folder assets-temp/ is then moved and renamed by the function into assets/{unique-id}/original{extension}.
example: assets-temp/my-awesome-image.jpg should become assets/489023840984/original.jpg.
note: I also keep track of the files with their original name in the DB and in the original's file metadata.
The issue: The function runs and performs what I want, but it also adds a folder named assets/{uuid}/original/ with nothing in it...
The function:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const path = require('path');

admin.initializeApp();
const bucket = admin.storage().bucket();

exports.process_new_assets = functions.storage.object().onFinalize(async (object) => {
  // Run this function only for files uploaded to the "assets-temp/" folder.
  if (!object.name.startsWith('assets-temp/')) return null;

  const file = bucket.file(object.name);
  const fileExt = path.extname(object.name);
  // `id` is the generated unique id (its generation is elided in the question).
  const destination = bucket.file(`assets/${id}/original${fileExt}`);
  const metadata = {
    id,
    name: object.name.split('/').pop()
  };

  // Move the file to the new location.
  return file.move(destination, {metadata});
});
I am guessing that this might happen if the operation of uploading the original image triggers two separate events: one that creates the directory object assets-temp/ and one that creates the file assets-temp/my-awesome-image.jpg.
If I guessed right, the first event will trigger your function with a directory object (named "assets-temp/"). This passes your first if, so the code will proceed and do
destination = bucket.file(`assets/${id}/original`) // fileExt being empty
and then call file.move - this will create the assets/{id}/original/ directory.
Simply improve your if to exclude the directory placeholder object named "assets-temp/".
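A minimal sketch of that guard (treating any object whose name ends with a slash as one of these folder placeholders):
// Skip the folder placeholder object ("assets-temp/") as well as
// anything outside the watched prefix.
if (!object.name.startsWith('assets-temp/') || object.name.endsWith('/')) {
  return null;
}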
According to the documentation there is no such thing as folders in Cloud Storage; however, it is possible to emulate them, as the console GUI does. When you create a folder, what really happens is that an empty object (zero bytes of space) is created whose name ends with a forward slash. Folder names can also end with _$folder$, but it is my understanding that that is how things worked in older versions, so for newer buckets the forward slash is enough.
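For illustration, a sketch of what "creating a folder" amounts to (assuming the @google-cloud/storage bucket object used above):
// A "folder" is just a zero-byte object whose name ends with a slash.
await bucket.file('assets-temp/').save('');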

FileSystemWatcher reports file available on network share but file cannot be found

BACKGROUND
I have a server that has a shared folder \\Server\Share with 4 subfolders:
OutboundFinal
OutboundStaging
InboundFinal
InboundStaging
All folders reside on the same physical disk and partition; no junction points are used.
I also have several WinForms clients (up to 10) that write and read files on this share, each client working on multiple threads (up to 5). Files are written by clients (up to 50 threads altogether) into the \\Server\Share\OutboundStaging folder. Each file has the name of a GUID, so there's no overwriting. Once a file is completely written, it is moved by the client to the \\Server\Share\OutboundFinal folder. A Windows service running on the same server will pick it up, delete it, process it, then write the file with the same name into the \\Server\Share\InboundStaging folder. Once the file is completely written, it is moved to the \\Server\Share\InboundFinal folder by the service.
This \\Server\Share\InboundFinal folder is monitored by each thread of each WinForms client using a FileSystemWatcher.WaitForChanged(WatcherChangeTypes.Changed | WatcherChangeTypes.Created, timeOut);
The FileSystemWatcher.Filter is set to the GUID filename of the file a certain thread expects to see in the \\Server\Share\InboundFinal folder, so the FileSystemWatcher waits until a specific file shows up in the folder.
I have read several SO questions about FileSystemWatcher behaving erratically and not reporting changes on UNC shares. This is however not the case for me.
The code I use looks like this:
FileSystemWatcher fileWatcher = new FileSystemWatcher();
fileWatcher.Path = InboundFinalFolder;
fileWatcher.Filter = GUIDFileName; // contains full UNC path AND the file name
fileWatcher.EnableRaisingEvents = true;
fileWatcher.IncludeSubdirectories = false;
var res = fileWatcher.WaitForChanged(WatcherChangeTypes.Changed | WatcherChangeTypes.Created, timeOut);
if (!fileWatcher.TimedOut)
{
    using (FileStream stream = fi.Open(FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        byte[] res = new byte[stream.Length];
        stream.Read(res, 0, stream.Length);
        return res;
    }
It's the using line that throws the exception.
THE PROBLEM
I would assume that fileWatcher.WaitForChanged would only return (without timing out) once the file with the proper GUID name is in the \\Server\Share\InboundFinal folder. This is exactly how FileSystemWatcher works on local folders, but not with file shares accessed over the network (local files, even when accessed via a share, also tend to work). FileSystemWatcher reports that the file the thread is waiting for is in the \\Server\Share\InboundFinal folder. However, when I try to read the file, I get a FileNotFoundException. The reading thread has to wait 3-15 seconds before the file can be read. I try to open the file with a FileStream with Read sharing.
What could cause this behavior? How do I work around it? Ideally the FileSystemWatcher.WaitForChanged(WatcherChangeTypes.Changed | WatcherChangeTypes.Created, timeOut); should only continue execution if the file can be read or timeout happens.
The FileSystemWatcher has a bad reputation, but actually, it is not that bad...
1.)
Your code sample does not compile. I tried this:
FileSystemWatcher fileWatcher = new FileSystemWatcher();
fileWatcher.Path = "X:\\temp";
fileWatcher.Filter = "test.txt";
fileWatcher.EnableRaisingEvents = true;
fileWatcher.IncludeSubdirectories = false;
var res = fileWatcher.WaitForChanged(WatcherChangeTypes.Changed |
    WatcherChangeTypes.Created, 20000);
if (!res.TimedOut)
{
    FileInfo fi = new FileInfo(Path.Combine(fileWatcher.Path, res.Name));
    using (FileStream stream = fi.Open(FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        byte[] buf = new byte[stream.Length];
        stream.Read(buf, 0, (int)stream.Length);
    }
    Console.WriteLine("read ok");
}
else
{
    Console.WriteLine("time out");
}
I tested this where X: is an SMB share. It worked without problems for me (but see below).
But:
You should open / read the file with retries (sleeping for 100 ms after every unsuccessful open). This is because you may run into a situation where the FileSystemWatcher detects a file, but the move (or another write operation) has not yet finished, so you have to wait until the file create / move is really done. A sketch of such a retry loop follows below.
Or you do not wait for the "real" file but for a flag file which the file-move task creates after closing the "real" file.
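A minimal sketch of the retry variant (the helper name and attempt limit are made up for illustration):
static byte[] ReadWithRetries(string fullPath, int maxAttempts = 50)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (var stream = new FileStream(fullPath, FileMode.Open,
                                               FileAccess.Read, FileShare.Read))
            {
                byte[] buf = new byte[stream.Length];
                stream.Read(buf, 0, buf.Length);
                return buf;
            }
        }
        catch (IOException) when (attempt < maxAttempts)
        {
            // The writer may not have finished moving / closing the file yet.
            System.Threading.Thread.Sleep(100);
        }
    }
}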
2.)
Could it be that the move task did not close the file correctly?
3.)
Some years ago I had some tools (written in perl) where one script created a flag file and another script waited for it.
I had some nasty problems on an SMB 2 share. I found out that this was due to SMB caching.
See
https://bogner.sh/2014/10/how-to-disable-smb-client-side-caching/
File open fails initially when trying to open a file located on a win2k8 share but eventually can succeed
https://technet.microsoft.com/en-us/library/ff686200.aspx
Try this (on the client):
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanWorkstation\Parameters]
"DirectoryCacheLifetime"=dword:00000000
"FileNotFoundCacheLifetime"=dword:00000000
Save this as disablecache.reg and run regedit disablecache.reg
Then reboot.

Azure Web Sites - how to write to a file

I am using abcPDF to dynamically create PDFs.
I want to save these PDFs for clients to retrieve any time they want. The easiest way (and the way I do it now on my current server) is to simply save the finished PDF to the file system.
Seems I am stuck with using blobs. Luckily abcPDF can save to a stream as well as to a file. Now, how do I wire up a stream to a blob? I have found code that shows the blob taking a stream, like:
blob.UploadFromStream(theStream, options);
The abcPDF function looks like this:
theDoc.Save(theStream)
I do not know how to bridge this gap.
Thanks!
Brad
As an alternative that doesn't require holding the entire file in memory, you might try this:
using (var stream = blob.OpenWrite())
{
    theDoc.Save(stream);
}
EDIT
Adding a caveat here: if the save method requires a seekable stream, I don't think this will work.
Given the situation, and not knowing the full list of overloads of abcPDF's Save() method, it seems that you need a MemoryStream. Something like:
using (MemoryStream ms = new MemoryStream())
{
    theDoc.Save(ms);
    ms.Seek(0, SeekOrigin.Begin);
    blob.UploadFromStream(ms, options);
}
This should do the job. But if you are dealing with big files and expecting a lot of traffic (lots of simultaneous PDF creations), you might just go for a temp file: write the PDF to a temp file, then immediately upload the temp file to the blob.
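A minimal sketch of the temp-file variant (assuming theDoc, blob, and options from above, and that abcPDF's Save() also accepts a file path):
string tempPath = Path.GetTempFileName();
try
{
    // Write the PDF to local temp storage first...
    theDoc.Save(tempPath);

    // ...then stream the finished file up to the blob.
    using (var fileStream = File.OpenRead(tempPath))
    {
        blob.UploadFromStream(fileStream, options);
    }
}
finally
{
    File.Delete(tempPath);
}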
