FileSavePicker Contract Implementation - visual-c++

I have implemented the FileSavePicker contract in my app, so when a user selects an attachment in the Mail app and chooses to save it to my app, the OnTargetFileRequested(FileSavePickerUI^ sender, TargetFileRequestedEventArgs^ e) method gets triggered:
void OnTargetFileRequested(FileSavePickerUI^ sender, TargetFileRequestedEventArgs^ e)
{
    auto request = e->Request;
    auto deferral = request->GetDeferral();
    create_task(ApplicationData::Current->LocalFolder->CreateFileAsync(sender->FileName, CreationCollisionOption::GenerateUniqueName)).then([request, deferral](StorageFile^ file)
    {
        // Assigning the resulting file to the TargetFile property indicates success.
        request->TargetFile = file;
        // Complete the deferral to let the picker know the request is finished.
        deferral->Complete();
        return file;
    }).then([=](StorageFile^ file)
    {
        // here I will upload the file to my Metro app
    });
}
Now, whatever file was created needs to be uploaded to my Metro app, but I am facing an issue with deferral->Complete(): does it need to be called after uploading the file to my app, or is calling it earlier, as in the code above, correct?
When I call deferral->Complete() after uploading the file, a 0-byte file is always uploaded.
When I call deferral->Complete() inside the CreateFileAsync() continuation, as shown in the code above, the file is not uploaded at all. Please help me.
Can you tell me whether this is the correct approach?
Thanks in advance.

You should call deferral->Complete() after the last await call in your method. The purpose of the deferral is to inform the caller that even though the called method has returned, an async action is still in progress. Once the deferral is completed, the caller knows everything is done.
So you should probably call deferral->Complete() after uploading the file, or after copying the file to your cache. If no bytes are transferred, make sure you transfer the file correctly: you have to open the original file using OpenReadAsync and copy the stream either to a memory stream (not recommended for large files) or to a cache file or somewhere, and then send it.
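As a minimal sketch of that ordering, assuming the upload happens in a hypothetical UploadFileAsync helper (shown in C#; the same WinRT calls are projected into C++/CX):
using System;
using System.Threading.Tasks;
using Windows.Storage;
using Windows.Storage.Pickers.Provider;

async void OnTargetFileRequested(FileSavePickerUI sender, TargetFileRequestedEventArgs e)
{
    var deferral = e.Request.GetDeferral();
    // Create the target file in local app storage.
    StorageFile file = await ApplicationData.Current.LocalFolder.CreateFileAsync(
        sender.FileName, CreationCollisionOption.GenerateUniqueName);
    e.Request.TargetFile = file;
    // Hypothetical upload step: transfer the file contents first...
    await UploadFileAsync(file);
    // ...and only complete the deferral after the last await, as described above.
    deferral.Complete();
}

// Placeholder for the app's real upload logic (an assumption, not a real API).
Task UploadFileAsync(StorageFile file) => Task.CompletedTask;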

Related

Flutter Web - get asset image as File()

I am using a third-party library that requires I pass a Uint8List to display an image in a PDF. Their example has me obtain the image as a File and read the bytes out.
PdfBitmap(file.readAsBytesSync())
This system is great when I am obtaining an image from a server, but I want to display an image stored in local assets.
What I tried to implement was this code:
import 'dart:io';

import 'package:flutter/services.dart' show rootBundle;
import 'package:path_provider/path_provider.dart';

Future<File> getImageFileFromAssets(String path) async {
  final byteData = await rootBundle.load('assets/$path');
  final file = File('${(await getTemporaryDirectory()).path}/$path');
  await file.writeAsBytes(byteData.buffer.asUint8List(byteData.offsetInBytes, byteData.lengthInBytes));
  return file;
}
This returns the error 'No implementation found for method getTemporaryDirectory on channel plugins.flutter.io/path_provider'.
If anyone knows how to get an Asset Image as File on web it would be greatly appreciated.
Why would you want to write byte data to a file just to read it again? Just pass your byte data directly to the constructor that requires it. This should be changed in both your web and mobile implementations, as it will end up being far faster.
final byteData = await rootBundle.load('assets/$path');
PdfBitmap(byteData.buffer.asUint8List())

FileSystemWatcher reports file available on network share but file cannot be found

BACKGROUND
I have a server that has a shared folder \\Server\Share with 4 subfolders:
OutboundFinal
OutboundStaging
InboundFinal
InboundStaging
All folders reside on the same physical disk and partition, no junction points used.
I also have several WinForms clients (up to 10) that write and read files to this share; each client works on multiple threads (up to 5). Files are written by the clients (up to 50 threads altogether) into the \\Server\Share\OutboundStaging folder. Each file is named with a GUID, so there is no overwriting. Once a file is completely written, the client moves it to the \\Server\Share\OutboundFinal folder. A Windows service running on the same server will pick it up, delete it, process it, then write a file with the same name into the \\Server\Share\InboundStaging folder. Once the file is completely written, it is moved to the \\Server\Share\InboundFinal folder by the service.
This \\Server\Share\InboundFinal folder is monitored by each thread of each WinForms client using FileSystemWatcher.WaitForChanged(WatcherChangeTypes.Changed | WatcherChangeTypes.Created, timeOut);
The FileSystemWatcher.Filter is set to the GUID file name that a certain thread expects to see in the \\Server\Share\InboundFinal folder, so the FileSystemWatcher waits until a specific file shows up in the folder.
I have read several SO questions about FileSystemWatcher behaving erratically and not reporting changes on UNC shares. This is however not the case for me.
The code I use looks like this:
FileSystemWatcher fileWatcher = new FileSystemWatcher();
fileWatcher.Path = InboundFinalFolder;
fileWatcher.Filter = GUIDFileName; // contains full UNC path AND the file name
fileWatcher.EnableRaisingEvents = true;
fileWatcher.IncludeSubdirectories = false;
var res = fileWatcher.WaitForChanged(WatcherChangeTypes.Changed | WatcherChangeTypes.Created, timeOut);
if (!fileWatcher.TimedOut)
{
using (FileStream stream = fi.Open(FileMode.Open, FileAccess.Read, FileShare.Read)) {
byte[] res = new byte[stream.Length];
stream.Read(res, 0, stream.Length);
return res;
}
It's the using line that throws the exception.
THE PROBLEM
I would assume that fileWatcher.WaitForChanged would only return once the file with the proper GUID name is in the \\Server\Share\InboundFinal folder. This is exactly how FileSystemWatcher works on local folders, but not with file shares accessed over the network (local files, even when accessed via a share, also tend to work). FileSystemWatcher reports that the file the thread is waiting for is in the \\Server\Share\InboundFinal folder. However, when I try to read the file, I get a FileNotFoundException. The reading thread has to wait 3-15 seconds before the file can be read. I try to open the file with a FileStream with Read sharing.
What could cause this behavior? How do I work around it? Ideally, FileSystemWatcher.WaitForChanged(WatcherChangeTypes.Changed | WatcherChangeTypes.Created, timeOut); should only continue execution if the file can be read or a timeout happens.
The FileSystemWatcher has a bad reputation, but actually, it is not that bad...
1.)
Your code sample does not compile. I tried this:
using System;
using System.IO;

FileSystemWatcher fileWatcher = new FileSystemWatcher();
fileWatcher.Path = "X:\\temp";
fileWatcher.Filter = "test.txt";
fileWatcher.EnableRaisingEvents = true;
fileWatcher.IncludeSubdirectories = false;
var res = fileWatcher.WaitForChanged(WatcherChangeTypes.Changed | WatcherChangeTypes.Created, 20000);
if (!res.TimedOut)
{
    FileInfo fi = new FileInfo(Path.Combine(fileWatcher.Path, res.Name));
    using (FileStream stream = fi.Open(FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        byte[] buf = new byte[stream.Length];
        stream.Read(buf, 0, (int)stream.Length);
    }
    Console.WriteLine("read ok");
}
else
{
    Console.WriteLine("time out");
}
I tested this where X: is a SMB share. It worked without problems (for me, see below).
But:
You should open / read the file with retries (sleeping for 100 ms after every unsuccessful open), as sketched below. This is because you may run into a situation where the FileSystemWatcher detects a file, but the move (or another write operation) has not yet finished, so you have to wait until the file creation / move is really done.
Or you do not wait for the "real" file but for a flag file which the file-move task creates after closing the "real" file.
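A minimal retry sketch of the first option (the helper name and retry limit are illustrative, not part of the original answer; FileNotFoundException and sharing violations both derive from IOException):
using System;
using System.IO;
using System.Threading;

static byte[] ReadWithRetries(string path, int maxAttempts = 50)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            // Opening throws while the writer still holds the file
            // or while the move has not finished yet.
            using (FileStream stream = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                byte[] buf = new byte[stream.Length];
                stream.Read(buf, 0, buf.Length);
                return buf;
            }
        }
        catch (IOException)
        {
            if (attempt >= maxAttempts)
                throw; // give up after ~5 seconds of retries
            Thread.Sleep(100);
        }
    }
}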
2.)
Could it be that the move task did not close the file correctly?
3.)
Some years ago I had some tools (written in Perl) where one script created a flag file and another script waited for it.
I had some nasty problems on an SMB 2 share. I found out that this was due to SMB caching.
See
https://bogner.sh/2014/10/how-to-disable-smb-client-side-caching/
File open fails initially when trying to open a file located on a win2k8 share but eventually can succeed
https://technet.microsoft.com/en-us/library/ff686200.aspx
Try this (on the client):
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanWorkstation\Parameters]
"DirectoryCacheLifetime"=dword:00000000
"FileNotFoundCacheLifetime"=dword:00000000
Save this as disablecache.reg and run regedit disablecache.reg
Then reboot.
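Should you prefer setting the same two values from code instead of a .reg file, a hedged C# equivalent (must run elevated) might look like:
using Microsoft.Win32;

// Disable SMB client-side directory and file-not-found caching.
using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
    @"SYSTEM\CurrentControlSet\services\LanmanWorkstation\Parameters"))
{
    key.SetValue("DirectoryCacheLifetime", 0, RegistryValueKind.DWord);
    key.SetValue("FileNotFoundCacheLifetime", 0, RegistryValueKind.DWord);
}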

Event handler on file change and get changes in file using NodeJS File System

I want to watch a specific file to look for specific changes using Node, and also fetch those changes from the file. Is there any way of doing that using fs?
Yes, you can do it by using fs.
How to do it:
Load file content
Watch file
Compare content with the previous one when the change event is triggered
Update file content
It should look like this:
const fs = require('fs');
let previousContent = fs.readFileSync('foo', 'utf8');
fs.watch('foo', function (event, filename) {
  if (event === 'change') {
    // Load file content
    const newContent = fs.readFileSync('foo', 'utf8');
    // Compare to the one previously loaded
    if (newContent !== previousContent) {
      // ...handle the changed content here...
      previousContent = newContent;
    }
  }
});
See:
fs.watch documentation: https://nodejs.org/api/fs.html#fs_fs_watchfile_filename_options_listener
Event change documentation: https://nodejs.org/docs/latest/api/fs.html#fs_event_change

Express res.download() not actually downloading file

I'm attempting to return generated files to the front end through Express' res.download function. I'm using Chrome, but whenever I call the API that executes the following code, all that is returned is the same value that the Express res.sendFile() function returns.
I know that res.download uses res.sendFile, but I would like the download function to actually save the file to the file system instead of just returning it in the body of the response.
This is my code:
exports.download = function(req, res) {
  var filePath; // some file that I want to download
  res.download(filePath, 'response.txt', function(err) {
    if (err) throw err;
  });
};
I know that the above code at least partly works, because I'm getting back the contents of the file in the response. However, I want it to be saved onto the file system.
Am I misunderstanding what the download function is supposed to do? Do I just need to take the response data and write it to the file system manually?
res.download adds headers that suggest to the browser that the file should be downloaded rather than opened. However, there's no way to force the browser to do this; ultimately it's typically the user's choice whether to download a particular file.
If you're triggering this request with AJAX, that's not going to cause a download, because it is your JavaScript that is requesting (and receiving) the data.
Do I just need to take the response data and write it to the file system manually?
You don't have file system access in browser-side JavaScript. I'm not sure how you intend to do this.

SharePoint ItemAdded and SPFile.OpenBinary(), zero bytes

I have an event receiver tied to a SharePoint 2010 picture library. When a picture is uploaded, I want to open it for processing. Files uploaded through the web interface work fine, but files copied via the Windows Explorer view return zero bytes. Simplified code below.
public override void ItemAdded(SPItemEventProperties properties)
{
SPListItem item = properties.ListItem;
SPFile file = item.File;
byte[] buffer = file.OpenBinary(); //buffer has zero bytes for files copied in Windows Explorer!
}
If I insert a delay before opening, it works.
public override void ItemAdded(SPItemEventProperties properties)
{
SPListItem item = properties.ListItem;
SPFile file = item.File;
System.Threading.Thread.Sleep(2000);
byte[] buffer = file.OpenBinary(); //buffer now populated correctly
}
But I thought that ItemAdded was only called after everything was done, including the file upload.
I also found that file.CanOpenFile(true) always returns true, whether or not OpenBinary works.
How can I make sure the file is ready to open before I call OpenBinary()?
I don't like the Thread.Sleep solution, because I'm sure larger files or a busier server would require a longer wait. The time required can't be predicted, and I don't want to loop and retry forever.
Update: I originally thought the failure to open was caused by larger files. The question has been updated to reflect the Explorer view as the cause. I also find that a Windows Explorer copy also triggers ItemUpdated (twice), and I am able to open the file there. It's a little messy to have 3 triggers and 2 calls to do 1 thing, so I am still open to suggestions.
I just encountered this issue today on SharePoint 2013. I've taken the suggestions listed here and improved upon them.
It's fine making the thread sleep for 2 seconds, but what happens when you have a large file? You're going to run into the same issue.
My code fix is as follows:
//Check if the SPContext is null since Explorer View isn't within the context
if (SPContext.Current == null)
{
//If the initial file length is 0, pause the thread for 2 seconds
if (properties.ListItem.File.Length == 0)
{
System.Threading.Thread.Sleep(2000);
//Since our item exists, run the GetItemById to instantiate a new and updated SPListItem object
var spFile = properties.List.GetItemById(properties.ListItemId);
//SharePoint places an Exclusive lock on the file while the data is being loaded into the file
while (spFile.File.LockType != SPFile.SPLockType.None)
{
System.Threading.Thread.Sleep(2000);
spFile = properties.List.GetItemById(properties.ListItemId);
//We need to check if the file exists, otherwise it will loop forever if someone decides to cancel the upload
if (!spFile.File.Exists)
return;
}
//If someone thought it was a good idea to actually load a 0 byte file, don't do anything else
if (spFile.File.Length == 0)
return;
}
}
I face the same problem with SP2010 and SP2013. Any idea how to solve this?
Somehow this has something to do with bigger files. Small files work without any problems; bigger files (400 KB) don't always work.
I have only one hint. If you copy & paste a file over Windows Explorer (WebDAV) into the library, the event handler (ItemAdded) will trigger as soon as the file is created. But this doesn't mean the file is already filled with data. I saw this once: my debugger hit my breakpoint while Windows was still busy with the copy process.
It would be great to know when the copy process is finished. I thought I could do this by just calling spfile.OpenBinary() and, if it's empty, waiting 2 seconds and trying again until it returns something bigger than 0 bytes. But this doesn't work! It only works if you wait BEFORE you call OpenBinary() the first time; all subsequent calls to OpenBinary() lead to the same result.
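That matches the accepted answer's approach of re-fetching the item via GetItemById before each retry, instead of calling OpenBinary() again on the same SPFile object. A hedged sketch of that idea (the method name and retry limit are illustrative):
using System.Threading;
using Microsoft.SharePoint;

// Re-fetch the list item on each attempt so OpenBinary() is never
// retried on the same, possibly stale, SPFile object.
static byte[] OpenBinaryWhenReady(SPList list, int itemId, int maxAttempts = 10)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        SPListItem item = list.GetItemById(itemId); // fresh object every time
        if (!item.File.Exists)
            return null; // upload was cancelled mid-copy
        byte[] buffer = item.File.OpenBinary();
        if (buffer.Length > 0)
            return buffer;
        Thread.Sleep(2000); // the WebDAV copy may still be writing data
    }
    return null; // still 0 bytes after all retries
}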
