Fetching Initial Data from CloudKit

Here is a common scenario: an app is installed for the first time and needs some initial data. You could bundle it in the app and have it load from a plist, a CSV file, or something similar. Or you could go get it from a remote store.
I want to get it from CloudKit. Yes, I know that CloudKit is not to be treated as a remote database but rather as a hub. I am fine with that. Frankly, I think this use case is one of the few holes in that strategy.
Imagine I have an object graph I need to fetch that has one class at the base and then 3 or 4 related classes. I want the new user to install the app and then get the latest version of this graph. If I use CloudKit, I have to load each entity with a separate fetch and assemble the whole. It's ugly and not generic. Once I do that, I will go into change-tracking mode, listening for updates and syncing my local copy.
In some ways this is similar to the challenge you have using Services on Android: suppose I have a service for the weather forecast. When I subscribe to it, I will not get the weather until tomorrow, when it creates its next forecast. To handle this deficiency, the Android Services SDK allows me to make 'sticky' services where I can get the last message that service produced upon subscribing.
I am thinking of doing something similar in a generic way: making it possible to hold a snapshot of some object graph, probably in JSON, with a version token, and then for initial loads, just being able to fetch those and turn them into CoreData object graphs locally.
The question is: does this strategy make sense, or should I hold my nose and write pyramid-of-doom code with nested queries? (Don't suggest using CoreData syncing, as that has been deprecated.)

Your question is a bit old, so you probably already moved on from this, but I figured I'd suggest an option.
You could create a record type called Data in the Public database in your CloudKit container. Within Data, you could have a field named structure that is a String (or a CKAsset if you wanted to attach a JSON file).
Then on every app load, you query the public database and pull down the structure string that has your class definitions, and use it how you like. Since it's in the public database, all your users would have access to it. Good luck!
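For example, a minimal Swift sketch of that fetch, assuming the record type and field name described above (error handling and pagination omitted):

import CloudKit

let publicDB = CKContainer.default().publicCloudDatabase
let query = CKQuery(recordType: "Data", predicate: NSPredicate(value: true))

publicDB.perform(query, inZoneWith: nil) { records, error in
    guard error == nil, let record = records?.first else { return }
    if let structure = record["structure"] as? String {
        // Parse the JSON snapshot and build the Core Data graph locally.
    }
}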

Related

MERN Stack, how big of a role can Redux play?

My project involves somewhat of a checklist. Initially, I used Redux to keep track of the state (whether something is checked off or not). Later I implemented a backend node server and a mongo database, and I load data from the database every time I fire up or refresh localhost. Since the checkoffs directly modify the elements in the database, there's not a whole lot Redux is doing that pre-emptive loading isn't already doing.
So my main question is: if the data is fetched from the backend the moment I start everything up, what else can I use Redux for in this case? I know my project might be too small and simple to give a good answer, but I'd still like to know the possibilities.
Even though your data comes from the backend, you still need Redux for many reasons. Redux is not just for storing data; it is also good for performance. Let's discuss it with a use case.
Suppose you have a main COMPANY component that fetches data from your API/backend. The same data is also required by your ADMIN component, so you make another network call for it. Fetching data from the backend separately for each component is expensive and makes your application slow.
The best solution is to fetch all your data one time, save it in the Redux store, and distribute it to your components (see the sketch after the list below).
MAIN ROLE:
1- Easy to manage data and state
2- Optimization and performance improvements with selectors
3- Debugging is very easy
4- Easy to track data
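As a rough illustration, here's a minimal plain-Redux sketch of the fetch-once-and-share idea (the endpoint, action type, and state shape are all made up):

var redux = require('redux');

var initialState = { companies: [] };

function reducer(state, action) {
  state = state || initialState;
  switch (action.type) {
    case 'companies/loaded':
      return Object.assign({}, state, { companies: action.payload });
    default:
      return state;
  }
}

var store = redux.createStore(reducer);

// Selector: both COMPANY and ADMIN read from the store instead of refetching.
function selectCompanies(state) {
  return state.companies;
}

// Fetch once; afterwards every component gets the data from the store.
fetch('/api/companies')
  .then(function (res) { return res.json(); })
  .then(function (companies) {
    store.dispatch({ type: 'companies/loaded', payload: companies });
    console.log(selectCompanies(store.getState()));
  });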

Real-Time Database Messaging

We've got an application in Django running against a PGSQL database. One of the functions we've grown to support is real-time messaging to our UI when data is updated in the backend DB.
So... for example we show the contents of a customer table in our UI, as records are added/removed/updated from the backend customer DB table we echo those updates to our UI in real-time via some redis/socket.io/node.js magic.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models. That actually works pretty well for our current needs, but as tables grow into GBs of data it is starting to slow down on some of the larger ones, as our engine digs through the currently 'subscribed' UIs and works out which updates need to go to which clients.
Curious what other options might exist here. I believe MongoDB and other no-sql type engines support some constructs like this out of the box but I'm not finding an exact hit when Googling for better solutions.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models.
Instead of working on the app level you might want to work on the lower, database level.
Add a PostgreSQL trigger after row insertion, and use pg_notify to notify external apps of the change.
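For instance, a sketch of such a trigger, assuming a customer table and a channel called channelName (both names are illustrative):

CREATE OR REPLACE FUNCTION notify_customer_change() RETURNS trigger AS $$
BEGIN
  -- Send the freshly inserted row to any listeners as JSON
  PERFORM pg_notify('channelName', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customer_notify
AFTER INSERT ON customer
FOR EACH ROW EXECUTE PROCEDURE notify_customer_change();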
Then in NodeJS:
var PGPubsub = require('pg-pubsub');

// Note: "@" separates the user from the host, and the path is the database name
var pubsubInstance = new PGPubsub('postgres://username@localhost/databasename');

pubsubInstance.addChannel('channelName', function (channelPayload) {
  // Handle the notification and its payload
  // If the payload was JSON it has already been parsed for you
});
And you will be able to do the same in Python: https://pypi.python.org/pypi/pgpubsub/0.0.2.
Finally, you might want to use data partitioning in PostgreSQL. Long story short, PostgreSQL already has everything you need :)

PouchDB - start local, replicate later

Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1. You're building an accommodation booking service such as a hotel search or Airbnb.
2. You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list (i.e. the idea is to not break their flow by making them create an account when it isn't strictly necessary).
3. If users wish to opt in, they can later create an account and receive credentials for a "server side" database to sync with.
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, therefore pushing all existing data up to the hosted CouchDB database, or...
Instead do we need to create a new PouchDB database and then copy over all docs from the existing (non-replicated) one to this new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
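A minimal sketch of that flow (the database names, remote URL, and document shape are all illustrative):

var PouchDB = require('pouchdb');

var local = new PouchDB('favourites');

// Conflict-safe ID scheme for locally created docs
local.put({ _id: 'star_' + Date.now(), propertyId: 'abc123' });

// Later, once the user opts in and receives credentials:
var remote = new PouchDB('https://user:pass@host.example.com/userdb');
local.replicate.to(remote).on('complete', function () {
  // All pre-existing local docs are now on the server;
  // switch to live two-way sync from here on.
  local.sync(remote, { live: true, retry: true });
});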

Syncing Local Domain Entity Changes When Using CQRS

Let's suppose I have a basic CustomerEntity which has the following attributes:
Name
Surname
IsPreferred
Taking CQRS in its simplest form, I would have the following services:
CustomerCommandService
CustomerQueryService
If on the CustomerCommandService I call UpgradeToPreferred(CustomerEntity), the store behind it will update and any queries will reflect this. So far so good.
My question is, how do I sync this back to the local entity I have? I have called the UpgradeToPreferred() method on the service, not on the entity, so it will not be reflected in the local copy unless I query the CustomerQueryService and get the update, which seems a tad redundant.
..Or am I doing it wrong?
EDIT:
To clarify, the question is: if I am going through a command service to modify the entity in storage, and not calling the command on the entity directly or editing its properties, how should I handle the same modification on the entity I have in memory?
A few things are wrong here. Your command service takes a command, not an entity. So if you want to upgrade that customer to preferred, the command would be the intent (MakeCustomerPreferred) plus the data needed to perform the command (a customer identification would suffice). The service would load the entity using that identification and invoke the MakePreferred behavior on the entity. The entity would be changed internally, and persistence would map it back to the database. Ergo, no need to resync with the database.
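In rough C#, the shape would be something like this (the repository abstraction and all names are illustrative, not a prescribed API):

using System;

// The command: intent plus just enough data to identify the customer
public class MakeCustomerPreferred
{
    public Guid CustomerId { get; set; }
}

public class Customer
{
    public bool IsPreferred { get; private set; }

    // Behavior lives on the entity; it changes itself internally
    public void MakePreferred() { IsPreferred = true; }
}

// Assumed persistence abstraction
public interface ICustomerRepository
{
    Customer GetById(Guid id);
    void Save(Customer customer);
}

public class CustomerCommandService
{
    private readonly ICustomerRepository _repository;

    public CustomerCommandService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public void Handle(MakeCustomerPreferred command)
    {
        var customer = _repository.GetById(command.CustomerId);
        customer.MakePreferred();    // entity mutates itself
        _repository.Save(customer);  // persistence maps it back to the store
    }
}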

Getting Started With Subsonic Repository for a 3 tier app

I was able to get ActiveRecord running right away. The instructions for getting started were great, and in no time I had built a web service that would let me create and read widgets in my existing db. It was awesome. When it came to updating, though, things fell apart. I would edit the object on the client and send it back to the service, but when the service saved it, it would just create a new one. I reasoned that this meant I would need to re-query the db and assign the values sent up to the service from the client, but my boss said that would be hacky and that the repository pattern would be better because we could use POCOs. Unfortunately, that's the extent of the guidance I've gotten. So here are my questions.
1. Are the T4 templates only good for ActiveRecord, or will they build up your simple repository for you too? E.g., is there something that will gen up my POCOs too, or are they all "roll your own"?
2. Has anyone seen a working example of a SubSonic 3-tier solution? I've read about them, but are there any samples floating around?
The ActiveRecord samples/screencasts were really easy to follow because they started at the same point I was starting from. The simple repository ones seemed to focus more on migrations and other advanced features, and since this stuff is new to me, I don't know enough to connect the dots.
Ugh. There's nothing quite like having a deadline to learn something and have it running by the end of the week. Any advice would be welcome, even if it's rtfm with a link to the manual I should have read.
Thanks in advance
If you want to use a repository pattern you can either use the LINQ templates or the simple repository, which does not require any T4 templates.
With the simple repository you create the POCOs yourself. SubSonic can create or update the database schema for you if you choose:
var repository = new SimpleRepository(SimpleRepositoryOptions.RunMigrations);
But if you ask me, I would choose SimpleRepositoryOptions.None and update the database myself.
Anyway, your initial problem with the ActiveRecord templates can be fixed pretty easily.
I suspect your ActiveRecord object is serialized on the client side and deserialized on the server.
The default constructor of an ActiveRecord object calls the Init function, which looks like this:
void Init(){
    TestMode = this._db.DataProvider.ConnectionString.Equals("test", StringComparison.InvariantCultureIgnoreCase);
    _dirtyColumns = new List<IColumn>();
    if(TestMode){
        <#=tbl.ClassName#>.SetTestRepo();
        _repo = _testRepo;
    }else{
        _repo = new SubSonicRepository<<#=tbl.ClassName#>>(_db);
    }
    tbl = _repo.GetTable();
    SetIsNew(true); // every freshly constructed object is flagged as new
    OnCreated();
}
As you can see, the internal repository is created and SetIsNew(true) is executed.
The only thing you have to do is call myitem.SetIsNew(false) after the object gets populated with the deserialized values. I suppose that is sufficient to tell SubSonic to do an update query during save.
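Something along these lines on the service side (DeserializeWidget is a made-up stand-in for however your service turns the client payload back into an ActiveRecord object):

public void UpdateWidget(string payload)
{
    var widget = DeserializeWidget(payload); // hypothetical helper
    widget.SetIsNew(false); // mark as existing so Save() issues an UPDATE
    widget.Save();          // updates instead of inserting a new row
}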
