Difference between UIManagedDocument saveToURL and UIManagedDocumentContext save - core-data

I'm debugging a Core Data issue in my app (running on iOS 5), where changes to existing managed objects are not being persisted between app sessions. If I call save: on the document's managedObjectContext, nothing happens, and my changes are lost the next time I restart the app. If I call saveToURL:forSaveOperation:completionHandler: on the UIManagedDocument itself, however, the changes do get saved. The Apple docs are unclear as to which is the right way to persist the updates. Can someone explain the difference between the two, and when to use each method?
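For reference, here is a minimal sketch (in modern Swift, since no code was posted) of the two calls being compared; it assumes document is a UIManagedDocument that has already been opened or created:

```swift
import UIKit
import CoreData

// Sketch only: `document` is assumed to be an already-opened UIManagedDocument.
func saveChanges(in document: UIManagedDocument) {
    // Option 1: save the document's managed object context directly.
    // This pushes changes up to the document's parent context, but writing to disk
    // is generally left to UIManagedDocument's autosave machinery.
    let context = document.managedObjectContext
    if context.hasChanges {
        try? context.save()
    }

    // Option 2: ask the document itself to save.
    // This writes the document (including its Core Data store) to its file URL immediately.
    document.save(to: document.fileURL, for: .forOverwriting) { success in
        print("Document save finished: \(success)")
    }
}
```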

Related

iOS App going into background - need to call removeFilePresenter when using Graph

I have a timer running in my app. When the app goes into the background, I realized I need to call removeFilePresenter, because otherwise the process is killed and when I come back to the foreground my table cannot reload its data.
How can I do that, and reload my Graph database as soon as the app comes back to the foreground?
Thanks!
You can reload your data in the viewDidLoad or viewDidAppear lifecycle methods of any view controller. Even with many instances of Graph, it will always reference a single instance under the hood, so you don't have to worry about duplication or poorly managed resources.
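A minimal sketch of the reload pattern described above, assuming a plain UITableViewController; Item and fetchItems() are placeholders for whatever model and Graph query your app actually uses:

```swift
import UIKit

// Placeholder model; in practice this is whatever your Graph query returns.
struct Item { let title: String }

class ItemsViewController: UITableViewController {

    private var items: [Item] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        // viewDidAppear is not called when the app returns from the background,
        // so also refresh when the foreground notification fires.
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(reloadItems),
            name: UIApplication.willEnterForegroundNotification,
            object: nil)
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        reloadItems()
    }

    @objc private func reloadItems() {
        items = fetchItems()   // hypothetical helper wrapping your Graph search
        tableView.reloadData()
    }

    // Stand-in for the real Graph query, which depends on your schema.
    private func fetchItems() -> [Item] {
        return []
    }
}
```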

Core Data with iCloud import changes takes randomly long time

Before you down-vote this question, please note that I have already searched Google and asked on Apple Developer Forums but got no solution.
I am making an app that uses Core Data with iCloud. Everything is set up fine, and the app is saving Core Data records to the persistent store in the ubiquity container and fetching them just fine.
My problem is that to test whether syncing is working between two devices (on the same iCloud ID), I depend on NSPersistentStoreDidImportUbiquitousContentChangesNotification being fired so that my app (in the foreground) can update the table view.
Now it takes a random amount of time for this to happen. Sometimes it takes a few seconds, and at times even 45 minutes is not enough! I have checked my broadband speed several times and everything is fine there.
I have a simple NSLog statement in the notification handler that prints to the console when the notification fires; the handler then proceeds to update the UI.
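For context, here is a minimal Swift sketch of the kind of handler being described; the question doesn't include code, so the coordinator, context, and table view parameters are assumptions:

```swift
import UIKit
import CoreData

// Sketch: returns the observer token, which the caller should keep and remove later.
// The ubiquitous-content notification is deprecated in newer SDKs, but it is the API
// the question is built around.
func observeUbiquitousImports(coordinator: NSPersistentStoreCoordinator,
                              context: NSManagedObjectContext,
                              tableView: UITableView) -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(
        forName: .NSPersistentStoreDidImportUbiquitousContentChanges,
        object: coordinator,
        queue: .main
    ) { note in
        // The log line from the question, just to confirm the notification fired.
        print("iCloud import notification fired: \(note)")
        // Fold the imported changes into the main context, then refresh the UI.
        context.mergeChanges(fromContextDidSave: note)
        tableView.reloadData()
    }
}
```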
With this unpredictably long wait before changes are imported, I am not able to test my app at all!
Does anyone know what can be done here?
Already checked out related threads...
More iCloud Core Data synching woes
App not syncing Core Data changes with iCloud
PS: I also have 15 GB free space in my iCloud Account.
Unfortunately, testing with Core Data + iCloud can be difficult, precisely because iCloud transfers data asynchronously, and you have little influence over when that transfer will take place.
If you are working with small changes, it is usually just 10-20 seconds, sometimes faster. But larger changes may be delayed and batch-uploaded by the system. It is also possible that if you constantly hit iCloud with new changes (which is common in testing), it will throttle back the transfers.
There isn't much you can really do about it. Try to keep your test data small where possible, and don't forget the Xcode debug menu items to force iCloud to sync up in the simulator.
This aspect of iCloud file sync is driving a lot of developers to CloudKit, where at least you have a synchronous line of communication with the server, which removes some of the uncertainty. But that means writing a lot of custom code, or moving to a non-Apple sync solution.
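For comparison, a minimal sketch of what talking to CloudKit directly can look like; the "Note" record type, the "text" field, and the use of the private database are all assumptions:

```swift
import CloudKit

// Sketch only: saves a single record and reports the outcome.
func saveNote(text: String) {
    let record = CKRecord(recordType: "Note")
    record["text"] = text as NSString

    CKContainer.default().privateCloudDatabase.save(record) { savedRecord, error in
        // Unlike the ubiquitous-content import notification, you get an explicit
        // callback here, so you know right away whether the server accepted the change.
        if let error = error {
            print("CloudKit save failed: \(error)")
        } else {
            print("CloudKit saved record \(savedRecord?.recordID.recordName ?? "?")")
        }
    }
}
```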

How do I share a cache across Node workers with Redis?

Forgive me if this is a really dumb question. I have been googling for the past hour and can't seem to find it answered anywhere.
Our application needs to query our CMS database every hour or so to update all of its non-user-specific CMS content. I would like to store that data in one place and have all the workers have access to it, without each worker having to call the API every hour. I would also like this cache to persist in the event of a node worker crash. Since we're pretty new to Node here, I predict we might have some of those.
I will handle all the cache expiration logic. I just want a store that can be shared between users, can handle worker crashing and restarting, and is at the application level - not the user level. So user sessions are no good for this.
Is Redis even what I'm looking for? Sadly it may be too late to install mongo on our web layer for this release anyway. Pub/sub looks promising but really seems like it's made for messaging - not a shared cache. Maybe I am reading that wrong though.
Thank you so much stack overflow! I promise to be a good citizen now that I have registered.
Redis is a great solution for your problem. I'm not sure why you are considering pub/sub, though. It doesn't sound like the workers need to be notified when the cache is updated; they just need to be able to read the latest value written to the cache. You can use a simple string value in Redis for this, stored under a consistent key.
In summary, you'd have a process that updates a Redis key (say, cms-cache-stuff) every hour. Each worker that needs the data just GETs cms-cache-stuff from Redis whenever it needs the cached info.
This solution will survive both the cache-refresh process crashing and workers crashing, since the key in Redis will always have data in it (though that data will be stale if the refresh process doesn't come back up).
If for some wild reason you don't want the workers continually reading from Redis (why not? it's plenty fast enough), you could still store the latest cached data in cms-cache-stuff and then publish a message through pub/sub to your workers letting them know the cache has been updated, so they can read cms-cache-stuff again. This gives you durability and recovery, since crashed workers can just read cms-cache-stuff again at startup and then start listening on the pub/sub channel for further updates.
Pub/sub alone is pretty useless for caching since it provides no durability. If a worker has crashed and is not listening on the channel, the messages are simply discarded.
Well, as I suspected, my problem was a super-basic noob mistake that's hard to even explain well enough to get the "duh" answer. I was using the connect-redis package, which is really designed for sessions, not a cache. Once someone pointed me to the node_redis client, I was able to set it up pretty easily and do what I wanted to do.
Thanks a lot - hopefully this helps some redis noob in the future!

How do Azure Websites with EF migrations ensure integrity when updating

The scenario is simple: using EF code-first migrations, with multiple Azure website instances, a decent-size DB of around 100 GB (assuming Azure SQL), and lots of active concurrent users, say 20k for the heck of it.
Goal: push out update, with active users, keep integrity while upgrading.
I've sifted through all the docs I can find, but the core details seem to be missing, or I'm blatantly overlooking them. When Azure receives an update request via FTP/git/TFS, how does it process the update? What does it do with active users? For example, does it freeze incoming requests to all instances, let requests already processing finish, upgrade/replace each instance, let EF migrations run, and then let traffic start again? If it upgrades/refreshes all instances simultaneously, how does it ensure EF migrations run only once? If it refreshes instances in a rolling upgrade (one at a time, with no freeze on inbound traffic), how can it ensure integrity, since instances in the older state could potentially break?
The main question, what is the real process after it receives the request to update? What are the recommendations for updating a live website?
To put it simply, it doesn't.
EF Migrations and Azure deployment are two very different beasts. Azure deployment gives you a number of options, including update and staging slots. You've probably seen
Deploy a web app in Azure App Service; for other readers, this is a good starting point.
In general, the Azure deployment model is concerned with the active connections to the IIS/website stack: an update ensures uninterrupted user access by taking the instance being deployed out of the load-balancer pool and redirecting traffic to the other instances. It then cycles through the instances, updating them one by one.
This means that at any point during an update deployment, there will be multiple versions of your code running at the same time.
If your EF model has not changed between code versions, Azure deployment works like a charm; users won't even know it is happening. But if you need to apply a migration as part of the deployment, BEWARE.
In general, EF will only load the model if the code and database versions match. It is very hard to use EF Migrations and support multiple code versions of the model at the same time.
EF Migrations are largely controlled by the Database Initializer.
See Upgrade the database using migrations for details.
As a developer you get to choose how and when the database will be upgraded, but know that if you are using Migrations and deployment updates:
1. New code will not easily run against the old schema.
2. If the old code/app restarts, many default initialization strategies will attempt to roll the schema back; if this happens, refer to point 1. ;)
3. If the EF model does load against the wrong version of the schema, you will experience exceptions and general failures when the code tries to use schema elements that are not there.
The simplest way to manage an EF migration on a live site is to take all instances of the site down for deployments that include a migration.
- You can use a maintenance page or a redirect; that's up to you.
If you are going to this much trouble, it is probably best to apply the DB update manually; then, if it fails, you can easily abort the deployment, because it hasn't started yet!
Otherwise, deploy the update and the first instance to spin up will run the migration, if the initializer has been configured to do so...
If you absolutely must have continuous deployment of both site code/content and model updates, then EF Migrations might not be the best tool to start with, as you will find it very restrictive out of the box for this scenario.
I was watching a "Fundamentals" course on Pluralsight and this was touched upon.
If you have 3 instances, Azure will take one offline and upgrade it, and then, when it is ready, restart it. At that point the other 2 instances are taken offline and your upgraded instance starts, thus running your schema changes.
When those 2 come back, the EF migrations will already have run, so your site is back.
In theory it all sounds like it should work, although depending on how much migration work needs to run, requests may be delayed.
However, the author's comment was that in this scenario (i.e. making schema changes) you should consider whether your website can run at all during the transition. The suggestion was that you either need to make your code work with both the old and new schemas, or show a "system down for maintenance" page.
The summary seems to be that what you are actually upgrading will shape your choices and method of deployment.
Generally speaking, if you want to support active upgrades you need to support multiple versions of your application simultaneously. This is really the only way to reliably stay live while you migrate/upgrade. Also consider feature switches so you can roll out the conversion in a controlled manner.

Can Castle Windsor Be Used to Keep Static References in Memory?

Background...
I have to build a new ASP.NET MVC app that uses an existing class library which is complex and can't be rewritten at this stage. The main problem is that this class library has a huge initialisation hit: it takes up to 10 minutes to load all its data into memory. This is fine for the production environment, where it performs fast once IIS has started up. For development, however, it is a nightmare, because every time you build the solution and start it up in a browser, it takes ages.
Possible Solution?
So the idea was that Castle Windsor, or an IoC lifestyle, could be used to hold this in memory so that only recycling the application pool would force an expensive reload. I remember having a problem before where Windsor was keeping code in memory, so even after changing it and recompiling, IIS was still running the old code. In that scenario it was a problem, but in my new scenario this is exactly what I'd like.
Does anyone know how this can be done? I have tried a dummy project using the Singleton lifestyle, but after changing the MVC project it still reloads the class library.
If the data can be serialized, then you could store it in a cache that keeps its state when you recompile. For example, memcached runs as a separate process: you can change the bin folder or restart the dev server process and the cache keeps its state. There's a provider for accessing memcached on CodePlex.
Maybe you could serialize the contents of the loaded library and save it in binary form on disk. That could potentially speed up the load. It's a crazy idea, but then again, having a class library that takes 10 minutes to load is crazy, too.
