Is there a way to add an observer for the notification that Core Data logs when it changes the local storage?
[PFUbiquitySwitchboardEntryMetadata setUseLocalStorage:](754): CoreData: Ubiquity: mobile~E358C294-F23E-4FBA-8DA6-869A3A848906:R63H5582LG~com~Test~Tester
Using local storage: 0
I am asking because I want to know which local store Core Data has switched to.
Thanks a lot!
Unfortunately, the transition from Using local storage: 1 to Using local storage: 0 is not as simple as it looks, because it depends on whether Core Data had to create a brand-new store and import the initial store from iCloud, or whether it is opening an already existing iCloud store. I have posted a more detailed explanation, with logs showing the different behaviours.
So the best way is to check whether the iCloud file exists. If it does not, you know to expect two storesDidChange notifications, the second of which means you are now using a store into which the initial iCloud store has been imported (note this may not be all the iCloud data).
http://ossh.com.au/design-and-technology/software-development/sample-library-style-ios-core-data-app-with-icloud-integration/sample-apps-explanations/handling-icloud-account-transitions/
There are sample apps that implement this here:
http://ossh.com.au/design-and-technology/software-development/sample-library-style-ios-core-data-app-with-icloud-integration/
Hope these help
I have a strange problem with my S3 storage.
I am developing a web application which needs image storage.
In the application we need to make many updates to the images and display them live, so we disabled version management on the bucket.
We store the URLs in a Postgres DB and display them on the website.
But sometimes (we can't really tell under which conditions, because it seems to be random) the app displays the old version of an image.
I tried setting some things in the metadata of my update request, but it doesn't seem to do anything.
We also tried adding a ?versionId=null parameter at the end of the URL, but we keep having issues.
Does anyone have an idea about this?
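When an object is overwritten under the same key, browsers and CDNs that cached the old bytes can keep serving them, which matches the random stale images described above. One common workaround is to cache-bust the URL with a token that changes whenever the content changes. A minimal sketch, assuming nothing beyond the standard library (the helper name and hashing scheme here are my own, not part of any SDK):

```python
import hashlib

def cache_busted_url(base_url: str, content: bytes) -> str:
    """Append a short content hash as a query parameter so a replaced
    image gets a new URL and bypasses stale browser/CDN caches."""
    digest = hashlib.sha256(content).hexdigest()[:12]
    separator = "&" if "?" in base_url else "?"
    return f"{base_url}{separator}v={digest}"

# Store this URL in Postgres instead of the bare object URL; it changes
# automatically whenever the underlying image bytes change.
url_v1 = cache_busted_url("https://bucket.s3.amazonaws.com/img.png", b"old bytes")
url_v2 = cache_busted_url("https://bucket.s3.amazonaws.com/img.png", b"new bytes")
assert url_v1 != url_v2
```

Setting Cache-Control metadata on upload also helps, but a URL that changes with the content is the most reliable fix, since it does not depend on intermediate caches honouring headers.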
I want to save my files to Google Cloud Storage. I have named my files like doc_personId_fileId, but now, if my user uploads another file, the old file will be replaced. I want to keep revisions. What is the best approach to keep a record of all revisions? For example:
I have a file named doc_1_1. Now, if the user uploads another file, the old file should be renamed doc_1_1_revision_1, then doc_1_1_revision_2, and so on, and the new file should be doc_1_1.
What is the best method to achieve this?
Or is there anything provided by Google to handle this type of scenario?
Thanks.
You want to upload doc_1_1 a few times, for example 3 times, and expect your bucket to look like:
doc_1_1
doc_1_1_revision_3
doc_1_1_revision_2
. . .
In short, you cannot achieve this automatically with what GCP provides; it requires you to adapt your upload code to do two operations:
move the old file, renaming it with a revision suffix
upload the new file
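The two operations above amount to a rename-then-upload rotation. A minimal sketch of the naming logic, using a plain dict to stand in for the bucket (a real version would use the storage client's copy/delete and upload calls instead):

```python
def upload_with_revisions(bucket: dict, name: str, content: bytes) -> None:
    """Rotate the existing object to the next free revision name,
    then store the new content under the original name."""
    if name in bucket:
        revision = 1
        while f"{name}_revision_{revision}" in bucket:
            revision += 1
        # "move" the old file to its revision name
        bucket[f"{name}_revision_{revision}"] = bucket[name]
    # upload the new file under the original name
    bucket[name] = content

bucket = {}
upload_with_revisions(bucket, "doc_1_1", b"v1")
upload_with_revisions(bucket, "doc_1_1", b"v2")
upload_with_revisions(bucket, "doc_1_1", b"v3")
# bucket now holds doc_1_1 (newest), doc_1_1_revision_1, doc_1_1_revision_2
```

Note that the while-loop scan makes each upload cost a lookup per existing revision; with many revisions you would track the latest revision number in your database instead.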
Alternatively, GCP supports object revisions using two concepts: generation, on the object itself, and metageneration, on the metadata associated with the object. So you can simply keep uploading new files without paying attention to older revisions and leave it to GCP to handle them. Listing files with the option to show generations and metadata will give you all files and their revisions.
Of course, you can restore / retrieve a file by specifying the revision.
Your goal is:
I have a file named doc_1_1. Now if user uploads another file. Old
file should be named as doc_1_1_revision_1 and after that
doc_1_1_revision_2 and so on and new file should be doc_1_1.
Google Cloud Storage does not support this naming technique. You will have to implement this on the client side as part of your upload process.
Another option is to enable "Object Versioning" where previous objects with the same name still persist. The last uploaded instance is the "current" version.
This link will help you understand object versions:
Object Versioning
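As a rough mental model of Object Versioning: every overwrite keeps the previous object as a noncurrent version, and the newest upload becomes the "current" one. A toy sketch of that behaviour (plain Python, not the GCS client; real GCS identifies each version by a generation number):

```python
class VersionedBucket:
    """Toy model of a bucket with Object Versioning enabled."""

    def __init__(self):
        self.versions = {}  # name -> list of contents, oldest first

    def upload(self, name: str, content: bytes) -> None:
        # An overwrite does not delete anything; it appends a new version.
        self.versions.setdefault(name, []).append(content)

    def current(self, name: str) -> bytes:
        # The last uploaded instance is the "current" version.
        return self.versions[name][-1]

    def noncurrent(self, name: str) -> list:
        # Previous objects with the same name still persist.
        return self.versions[name][:-1]

b = VersionedBucket()
b.upload("doc_1_1", b"v1")
b.upload("doc_1_1", b"v2")
assert b.current("doc_1_1") == b"v2"
assert b.noncurrent("doc_1_1") == [b"v1"]
```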
I need to store photos in a datastore and store their location in Mongo so I can retrieve them later for display. What is a good way to go about doing this? If I just google it, I end up getting information on accessing the files in Node itself...
At this point I am not sure which datastore I will use; for now it is just another server I have, running Ubuntu Server.
I made something like what you need. I had to store photos on S3 and store the path in Mongo. If you use Amazon too, they provide an SDK that contains all the functions you might need for interacting with the cloud. You'll also need a way to handle asynchronous tasks when storing the data in the cloud; I used the async module to manage the functions. I hope I gave you some direction on where to go. I can't help you with code, since I don't know your business rules. Good luck!
Here is a common scenario: app is installed the first time and needs some initial data. You could bundle it in the app and have it load from a plist or something, or a CSV file. Or you could go get it from a remote store.
I want to get it from CloudKit. Yes, I know that CloudKit is not to be treated as a remote database but rather a hub. I am fine with that. Frankly I think this use case is one of the only holes in that strategy.
Imagine I have an object graph I need to get that has one class at the base and then 3 or 4 related classes. I want the new user to install the app and then get the latest version of this class. If I use CloudKit, I have to load each entity with a separate fetch and assemble the whole. It's ugly and not generic. Once I do that, I will go into change tracking mode. Listening for updates and syncing my local copy.
In some ways this is similar to the challenge that you have using Services on Android: suppose I have a service for the weather forecast. When I subscribe to it, I will not get the weather until tomorrow when it creates its next new forecast. To handle the deficiency of this, the Android Services SDK allows me to make 'sticky' services where I can get the last message that service produced upon subscribing.
I am thinking of doing something similar in a generic way: making it possible to hold a snapshot of some object graph, probably in JSON, with a version token, and then for initial loads, just being able to fetch those and turn them into CoreData object graphs locally.
Question is does this strategy make sense or should I hold my nose and write pyramid of doom code with nested queries? (Don't suggest using CoreData syncing as that has been deprecated.)
Your question is a bit old, so you probably already moved on from this, but I figured I'd suggest an option.
You could create a record type called Data in the Public database in your CloudKit container. Within Data, you could have a field named structure that is a String (or a CKAsset if you wanted to attach a JSON file).
Then on every app load, you query the public database and pull down the structure string that has your classes definitions and use it how you like. Since it's in the public database, all your users would have access to it. Good luck!
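The snapshot approach from the question pairs naturally with this answer: the structure field can hold one versioned JSON blob for the whole object graph, and the app only re-imports it when the version token changes. A language-agnostic sketch of that idea in Python (make_snapshot and needs_refresh are illustrative names, not a CloudKit API; record and field names are assumptions):

```python
import json
from typing import Optional

def make_snapshot(version: str, graph: dict) -> str:
    """Serialize an object graph plus a version token into one string,
    suitable for a single 'structure' field on a public record."""
    return json.dumps({"version": version, "graph": graph})

def needs_refresh(snapshot: str, local_version: Optional[str]) -> bool:
    """Re-import only when the remote version token differs from ours."""
    return json.loads(snapshot)["version"] != local_version

# One snapshot carries the base class and its related classes together,
# so the first launch needs a single fetch instead of one per entity.
snapshot = make_snapshot("2024-01", {
    "categories": [{"id": 1, "name": "Weather"}],
    "items": [{"id": 10, "categoryId": 1, "title": "Forecast"}],
})
assert needs_refresh(snapshot, None)           # fresh install: import
assert not needs_refresh(snapshot, "2024-01")  # up to date: skip
```

On the device you would then walk the decoded graph and create the local Core Data objects, which keeps the pyramid-of-doom fetches entirely out of the first-launch path.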
I am using Azure Storage for Node.js, and what I need to do is copy an image from one blob to another.
First I tried getBlobToFile to get the image to a temp location on disk and then just createBlockBlobFromFile from that temp location. That method did the task, but for some reason the copy was incomplete in 10% of cases.
Then I tried using getBlobToText and putting the result into createBlockBlobFromText; I also tried passing options saying the blob needs to be an image container. That method failed completely: the image would not even open after the copy.
Perhaps there is a way to copy a blob file and paste it into another blob, but I didn't find such a method.
What else can I do?
I'm not sure what your particular copy-error is, but... with getLocalBlobToFile(), you're actually physically moving blob content from blob storage to your VM (or local machine), and then with createBlockBlobFromLocalFile() you're pushing the entire contents back to blob storage, which is resulting in two physical network moves.
The Azure Storage system supports blob-copy as a 1st-class operation. While it's available via REST API call, it's also wrapped in the same SDK you're using, in the method BlobService.startCopyBlob() (source code here). This will instruct the storage to initiate an async copy operation, completely within the storage system (meaning no download+upload on your side). You'll be able to set source and destination, set timeouts, etc. (all parameters are fully documented in the source code).
The link in the accepted answer is broken, although the method is correct: the method startCopyBlob is documented here
(Updated: Jan 3, 2020) https://learn.microsoft.com/en-us/javascript/api/azure-storage/BlobService?view=azure-node-latest#azure_storage_BlobService_createBlockBlobFromLocalFile