I am creating a Windows Phone 8 application. I have an image that is currently on my computer's disk, and I want to store it in the app's isolated storage. The problem I am facing is how to load the image in my application so that I can store the image data in isolated storage.
The problem is that the image is located on my computer's disk, so when I attach the device and run the application, it looks for the file in the local sandbox, and I get the exception:
System.IO.DirectoryNotFoundException
Any help will be great.
If I understand you correctly, you want to upload files from your computer to the isolated storage of your app?
I haven't tried it myself, but you might want to take a look at Windows Phone Power Tools. According to their own site, they let you "Upload / Download Files from IsolatedStorage".
Another option might be ISE (Isolated Storage Explorer).
If you want to access files on your computer at runtime, though, I do not believe that is possible.
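If the image only needs to ship with the app rather than be pulled from the PC at runtime, a common workaround is to bundle it in the XAP (Build Action set to Content) and copy it into isolated storage on first run. A minimal C# sketch, where "Assets/myImage.jpg" is a hypothetical path standing in for your image:

    using System;
    using System.IO;
    using System.IO.IsolatedStorage;
    using System.Windows;
    using System.Windows.Resources;

    // Copy an image bundled with the XAP (Build Action = Content) into
    // isolated storage. "Assets/myImage.jpg" is a placeholder path.
    StreamResourceInfo resource =
        Application.GetResourceStream(new Uri("Assets/myImage.jpg", UriKind.Relative));

    using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
    using (IsolatedStorageFileStream target = store.CreateFile("myImage.jpg"))
    {
        resource.Stream.CopyTo(target);
    }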
I am using the Xamarin Azure SDK to download and manage the local database for my Xamarin.Forms app.
We are facing download-time issues because we have a lot of data, so I am thinking of taking a backup of the SQLite file from one device once and using it to restore the same database on other devices.
I plan to use Azure Blob Storage to store the backup of the SQLite file, and on each other device to download that blob and restore it.
Any help will be appreciated.
Thanks :)
An approach I have used in the past is to create a controller method on the Azure end that the client app can call, which generates a pre-filled SQLite database or "snapshot" on the server (making sure to include all the extra Azure tables and columns) and then returns a download URL for the file to the client. We also zip up the snapshot database to reduce download times. You could store this snapshot in Azure Blob Storage if you wanted.
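On the client side, consuming such a URL could look roughly like the sketch below. The method name, the URL parameter, and the assumption that the zip holds a single database file are all illustrative, not part of any SDK:

    using System.IO;
    using System.IO.Compression;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Sketch: fetch the zipped snapshot from the URL returned by the server
    // and unpack it over the local database file.
    public static async Task DownloadSnapshotAsync(string snapshotUrl, string dbPath)
    {
        using (var http = new HttpClient())
        {
            byte[] zipped = await http.GetByteArrayAsync(snapshotUrl);
            using (var ms = new MemoryStream(zipped))
            using (var archive = new ZipArchive(ms, ZipArchiveMode.Read))
            {
                // Assumes the archive contains exactly one file: the database.
                archive.Entries[0].ExtractToFile(dbPath, overwrite: true);
            }
        }
    }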
Please refer to the given link. The only thing SQLite does not support here is relationships, such as foreign keys.
Memory Stream as DB
You can upload the backup file to Blob Storage along with the respective user details, and when there is a call with the same user details, download it from the blob.
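A sketch of that idea using the classic WindowsAzure.Storage client library; the "backups" container and the "{userId}/backup.db3" naming scheme are assumptions made up for the example:

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage.Blob;

    // Sketch: per-user backup/restore of the SQLite file.
    public static class BackupStore
    {
        public static async Task UploadAsync(CloudBlobClient client, string userId, string dbPath)
        {
            CloudBlobContainer container = client.GetContainerReference("backups");
            await container.CreateIfNotExistsAsync();
            CloudBlockBlob blob = container.GetBlockBlobReference(userId + "/backup.db3");
            using (Stream db = File.OpenRead(dbPath))
            {
                await blob.UploadFromStreamAsync(db);
            }
        }

        public static async Task DownloadAsync(CloudBlobClient client, string userId, string dbPath)
        {
            CloudBlockBlob blob = client.GetContainerReference("backups")
                                        .GetBlockBlobReference(userId + "/backup.db3");
            using (Stream target = File.Create(dbPath))
            {
                await blob.DownloadToStreamAsync(target);
            }
        }
    }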
These links provide the code and knowledge required to use Azure Blob Storage with Xamarin:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-xamarin-blob-storage
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-dotnet-how-to-use-blobs
As this question is very general, I can only provide these general links. There are many details involved in implementing this in your case; if you face a specific problem, I recommend asking another question with an exact description of it.
EDIT: According to your comment, you are having problems replacing the local file. The only requirement is that you replace it before you initialize SQLite; apart from that, it is a simple file operation.
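In code, the order would be roughly as follows. MobileServiceSQLiteStore is the usual offline-sync store class; the file names are placeholders:

    using System.IO;
    using Microsoft.WindowsAzure.MobileServices.SQLiteStore;

    // Placeholder paths: wherever the downloaded backup was saved, and the
    // file name your app normally passes to the local store.
    string snapshotPath = "snapshot.db3";
    string localDbPath = "localstore.db3";

    // Replace the database file first...
    File.Copy(snapshotPath, localDbPath, overwrite: true);

    // ...and only then create and initialize the SQLite-backed offline store
    // (followed by your usual DefineTable<T>() / InitializeAsync calls).
    var store = new MobileServiceSQLiteStore(localDbPath);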
I've written a PowerShell script to upload files from a Windows system to an Amazon S3 bucket. The script successfully uploads all files except those over 5 GB. I have a CloudBerry Explorer Pro license, which allows multipart upload for files up to 5 TB, but there is no flag for multipart in the PowerShell snap-in documentation. CloudBerry support directed me here, as they only support the GUI, not the PowerShell snap-in. When running my script I get the error:
"WARNING: Your proposed upload exceeds the maximum allowed object size (5 Gb)".
So the question is: does anyone know of a command-line option, or another way, to enable multipart upload to Amazon S3 using CloudBerry Explorer Pro's PowerShell snap-in?
Set-CloudOption UseChunks=true
I'm looking for the same thing in PowerShell.
I believe the original chunking mechanism has been deprecated in the GUI. I have not tested this myself, but I assume the PowerShell option UseChunks=true still uses the old mechanism? If so, files may be split into multiple parts and not automatically recombined when they arrive in S3. The new GUI multipart upload facility sorts this all out for you.
It is annoying that CloudBerry still advertises PowerShell as a component of Explorer (Free & Pro) but doesn't support it, even for fully paid-up Pro support customers.
We purchased the CloudBerry Explorer Pro license for the native multipart upload capability, but we wanted to automate it. Based on their documentation, I believe the old chunk method is deprecated in favor of the new multipart functionality. We wound up testing the options listed in the PowerShell documentation. Those options are as follows:
Set-CloudOption -UseChunks -ChunkSizeKB <sizeInKB>
"Defines a size of chunk in KB; files larger than a chunk will be divided into chunks."
We verified that this successfully uploaded files beyond the 5 GB restriction to our S3 bucket. I attempted to get a response from CloudBerry as to whether this was the old chunking method or the new multipart method, but I was unable to get a straight answer. They confirmed that because we were using Pro, this PowerShell option was supported, but they would not confirm which mechanism the PowerShell command was using.
From what I can tell, CloudBerry's legacy chunking mechanism simply breaks the file into individual files, which therefore appear in S3 as multiple objects. The chunk-transparency mechanism in CloudBerry Explorer makes those chunks appear as a single file in the CloudBerry Explorer GUI only. Since I can see the file as a single object on the S3 side, I'm assuming the PowerShell option uses the new multipart functionality and not the legacy chunking functionality. Again, I was not able to confirm this through CloudBerry, so it's speculation on my part.
However, I can confirm that the PowerShell options above will get you past the 5 GB upload limit when using PowerShell.
I am developing an app with NWJS, and now I am thinking about the deployment process. I need to install the app on the different machines that will use it, and the problem I see is that if I change some file, I will need to install it again on each machine. I was reading about Docker, and if I understood correctly, I can build an image and download the latest version of the app onto each machine that uses it.
The question is: can I upload the app into a container and download it onto each machine? And where should I look in the documentation to do that?
Thanks for any help.
I think I've cheated my way into a solution; this could work for you, depending on what your exact requirements are.
In one scenario, I have a shared network folder that lets machines launch the NWJS app via the network share, so every time I update the file and someone relaunches their shortcut, they get a fresh copy.
Remote users, who are not directly on our network, have their copy in a Dropbox folder, which of course updates automatically as I drop the new copy into that folder.
None of these solutions is as "clean" as an installer, but for our use case they work rather well. It's a bonus that Dropbox handles downloading the new copy of the file automatically.
I am developing an open-source application that mounts a WebDAV share to a local drive letter, just like NetDrive and WebDrive, using Node.js and Electron. At present my application downloads all files from the WebDAV share, which takes a lot of time and is not reliable for heavy data. Is there another approach, so that when a user accesses a file, only that particular file is fetched from the WebDAV share? I have tried displaying a dummy metadata structure for the files in a directory and keeping that directory under a file watcher, so that when the user tries to open a file, the watcher captures the file-open event and identifies which file the user is trying to access; a background service would then fetch that particular file using its path as a reference. However, none of the watchers were able to capture the file-open event. Is there another approach? Correct me if I am heading in the wrong direction.
Thanks
I think you want a virtual file system, and I recommend the Dokan library.
Dokan is the starting point for Windows virtual file-system applications.
Open source: Dokan (https://en.wikipedia.org/wiki/Dokan_Library)
Commercial: EldoS CBFS (https://www.eldos.com/cbfs)
Google and Naver use Dokan; NetDrive and RaiDrive (mine) use CBFS.
I'm working on a C++ Windows Store DirectX app and I'm trying to save screenshots to disk every so often.
I am using the DirectX Tool Kit (DirectXTK) and the function SaveDDSTextureToFile, which returns an HRESULT.
The problem is that the returned HRESULT is always:
E_ACCESSDENIED General access denied error.
I assume this is a permissions/capabilities issue (it being a Windows Store app), but I can't find out what I need to request permission for in order to save files to disk.
The DirectX Tool Kit says it supports Windows Store applications as well as desktop applications, but I can't find any information about this on its CodePlex page either.
Does anyone know what I need to have permission to do for this to work?
Thanks for your time.
Windows Store apps are sandboxed and have fewer permissions than desktop apps, especially when it comes to file access. By default, apps only have access to write to the local storage directory, which isn't easily accessible from the shell. If you want to save to the Pictures or Documents library, you will need to specify this access in the package manifest. Additionally, you will need to use the WinRT file APIs to write the DDS files. To do this, use SaveDDSTextureToMemory, then write the resulting raw DDS data to the StorageFile. Check out the File access sample for more info on the WinRT APIs involved in writing this data as a file.
I've managed to find a way to do it. Basically, as MooseBoys says, you cannot save just anywhere because the app is sandboxed.
You can, however, save to the TempState folder of your app's package in AppData, which is all I need because I'm using this feature for debugging. (Rather than hardcoding the path, you can also obtain this folder at runtime via Windows::Storage::ApplicationData::Current->TemporaryFolder.)
So the line I call is:
DirectX::SaveWICTextureToFile(deviceContext, texture2D, GUID_ContainerFormatPng, L"C:\\Users\\USERNAME\\AppData\\Local\\Packages\\PACKAGENAME\\TempState\\test.png");
And this works great.