Syncing Local Domain Entity Changes When Using CQRS - domain-driven-design

Let's suppose I have a basic CustomerEntity which has the following attributes:
Name
Surname
IsPreferred
Taking CQRS in its simplest form, I would have the following services:
CustomerCommandService
CustomerQueryService
If I call UpgradeToPreferred(CustomerEntity) on the CustomerCommandService, the store behind it will update and any queries will reflect this. So far so good.
My question is: how do I sync this back to the local entity I have? I called the UpgradeToPreferred() method on the service, not on the entity, so the change will not be reflected in my local copy unless I query the CustomerQueryService and fetch the update, which seems a tad redundant.
..Or am I doing it wrong?
EDIT:
To clarify, the question is: if I am going through a command service to modify the entity in storage, and not calling the command on the entity directly or editing its properties, how should I handle the same modification on the entity I have in memory?

A few things are wrong here. Your command service takes a command, not an entity. So if you want to upgrade that customer to be preferred, the command would be the intent (MakeCustomerPreferred) plus the data needed to perform the command (a customer identifier would suffice). The service would load the entity using that identifier and invoke the MakePreferred behavior on the entity. The entity would be changed internally, and persistence would map it back to the database. Ergo, no need to resync with the database.
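The flow the answer describes can be sketched roughly as follows. This is an illustrative in-memory sketch, not any particular framework's API; the class and method names (MakeCustomerPreferred, handle, register) are made up for the example:

```java
import java.util.HashMap;
import java.util.Map;

// A command carries the intent plus the minimal data needed to act on it.
class MakeCustomerPreferred {
    final String customerId;
    MakeCustomerPreferred(String customerId) { this.customerId = customerId; }
}

class CustomerEntity {
    final String id;
    String name;
    String surname;
    private boolean preferred;

    CustomerEntity(String id, String name, String surname) {
        this.id = id; this.name = name; this.surname = surname;
    }

    // Behavior lives on the entity; the service merely invokes it.
    void makePreferred() { this.preferred = true; }
    boolean isPreferred() { return preferred; }
}

class CustomerCommandService {
    private final Map<String, CustomerEntity> store = new HashMap<>();

    void register(CustomerEntity c) { store.put(c.id, c); }

    // Load the entity by identifier, invoke the behavior, persist the change.
    void handle(MakeCustomerPreferred cmd) {
        CustomerEntity customer = store.get(cmd.customerId);
        customer.makePreferred();
        store.put(customer.id, customer); // persistence maps it back
    }

    CustomerEntity load(String id) { return store.get(id); }
}
```

Because the handler loads and mutates the stored entity itself, the caller never holds a stale copy to resync: it sends an identifier, not an object.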

Send a PUT/GET/POST request to JHipster in one single transaction

I am quite new to JHipster and have problems understanding some of its functionality, hence my question.
I have the following two microservices.
Microservice 1 (MS1) has the following data structures in Java:
Lead {
    Customer customer;
    Deal deal;
}
Customer {
    Integer phoneNumber;
    etc...
}
Deal {
    Integer value;
    etc...
}
Microservice 2 (MS2) is a JHipster generated database.
The DB only has the following SQL tables :
CUSTOMER
LEAD
When changes happen in Microservice 1, I send 2 separate PUT requests from MS1 to MS2.
first a request to update CUSTOMER through the /customer API in MS2
if the update is OK, then a request to update DEAL through the /deal API in MS2
For a successful update of a Lead, the PUT requests to Customer and Deal must both succeed; if updating one table fails, both should fail.
Hence, I would like to avoid sending 2 separate requests, to avoid a case where the CUSTOMER request is OK and the DEAL request fails for whatever reason.
If possible, I would like to send one single transaction through an API such as /lead that updates the two tables.
What is the best way I can achieve this without creating an extra table for LEAD?
e.g., a layer/service that I should generate using JHipster.
If possible (but not necessary), I would like to avoid touching code that is frequently regenerated (e.g., Customer, Deal).
Please kindly direct me to documentation too if any already exists. The docs are quite hard to understand, so I am not sure whether any current one specifically addresses this problem. Thank you.
This is a common issue when directly exposing JPA entities from a CRUD REST API.
Your persistence model does not need to be your API model.
If 2 entities are related and should be updated within the same transaction, it means they should be updated with one atomic API request.
So, you could define a new resource with a DTO combining your 2 entities, exposed by a new API that you would code manually (so no need for an additional table).
As you are using a microservices architecture, you could have a similar situation between MS1 and MS2 as well, and there you cannot use a transaction; you would then have to implement remediation (compensating actions).
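The answer's suggestion can be sketched like this. LeadDTO and LeadService are hypothetical names; in a real JHipster app the method would live in a hand-written Spring service annotated with @Transactional, so that a failure in either update rolls both back. Here the rollback is simulated by hand to keep the sketch self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical API model combining the two entities into one atomic request.
class LeadDTO {
    String customerId; int phoneNumber;
    String dealId; int value;
}

class LeadService {
    final Map<String, Integer> customerTable = new HashMap<>();
    final Map<String, Integer> dealTable = new HashMap<>();

    // With Spring this would be @Transactional; here the previous value is
    // restored manually if the second update fails, so the pair stays atomic.
    void updateLead(LeadDTO dto) {
        Integer previousPhone = customerTable.put(dto.customerId, dto.phoneNumber);
        try {
            if (dto.value < 0) throw new IllegalArgumentException("invalid deal");
            dealTable.put(dto.dealId, dto.value);
        } catch (RuntimeException e) {
            // roll back the CUSTOMER change so both tables fail together
            if (previousPhone == null) customerTable.remove(dto.customerId);
            else customerTable.put(dto.customerId, previousPhone);
            throw e;
        }
    }
}
```

The point of the sketch is the shape, not the storage: one /lead endpoint accepts the combined DTO, and the service is the single place where both tables change.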

Fetching Initial Data from CloudKit

Here is a common scenario: app is installed the first time and needs some initial data. You could bundle it in the app and have it load from a plist or something, or a CSV file. Or you could go get it from a remote store.
I want to get it from CloudKit. Yes, I know that CloudKit is not to be treated as a remote database but rather a hub. I am fine with that. Frankly I think this use case is one of the only holes in that strategy.
Imagine I have an object graph I need to get that has one class at the base and then 3 or 4 related classes. I want the new user to install the app and then get the latest version of this class. If I use CloudKit, I have to load each entity with a separate fetch and assemble the whole. It's ugly and not generic. Once I do that, I will go into change tracking mode. Listening for updates and syncing my local copy.
In some ways this is similar to the challenge that you have using Services on Android: suppose I have a service for the weather forecast. When I subscribe to it, I will not get the weather until tomorrow when it creates its next new forecast. To handle the deficiency of this, the Android Services SDK allows me to make 'sticky' services where I can get the last message that service produced upon subscribing.
I am thinking of doing something similar in a generic way: making it possible to hold a snapshot of some object graph, probably in JSON, with a version token, and then for initial loads, just being able to fetch those and turn them into CoreData object graphs locally.
Question is does this strategy make sense or should I hold my nose and write pyramid of doom code with nested queries? (Don't suggest using CoreData syncing as that has been deprecated.)
Your question is a bit old, so you probably already moved on from this, but I figured I'd suggest an option.
You could create a record type called Data in the Public database in your CloudKit container. Within Data, you could have a field named structure that is a String (or a CKAsset if you wanted to attach a JSON file).
Then on every app load, you query the public database and pull down the structure string that holds your class definitions, and use it however you like. Since it's in the public database, all your users would have access to it. Good luck!

PouchDB - start local, replicate later

Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1. You're building an accommodation booking service such as a hotel search or Airbnb
2. You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list (the idea is to not break their flow by making them create an account when it isn't strictly necessary)
3. If users wish to opt in, they can later create an account and receive credentials for a "server side" database to sync with
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, therefore pushing all existing data up to the hosted CouchDB database, or..
Instead do we need to create a new PouchDB database and then copy over all docs from the existing (non-replicated) one to this new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
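The ID scheme itself is independent of PouchDB, so it can be sketched generically (the names below are illustrative, and the aggregation stands in for what a map/reduce view would do over the replicated documents):

```java
import java.util.ArrayList;
import java.util.List;

class StarDocs {
    // IDs like "star_<timestamp>" sort chronologically and are unlikely to
    // collide with server-side document IDs when the databases later replicate.
    static String newStarId(long timestampMillis) {
        return "star_" + timestampMillis;
    }

    // The "map" step of the aggregation: keep only the star documents.
    static List<String> starsOnly(List<String> allDocIds) {
        List<String> out = new ArrayList<>();
        for (String id : allDocIds) {
            if (id.startsWith("star_")) out.add(id);
        }
        return out;
    }
}
```

With non-conflicting IDs in place, the pre-existing local database can simply be replicated up to the per-user CouchDB database once credentials exist, with no copy-and-delete step.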

RestKit and Core Data - How to POST data?

I am using RestKit 0.22.0 with Core Data integration, both of which I'm pretty unfamiliar with. I followed the RKGist tutorial and was able to learn how to get objects from a REST endpoint, set up object mappings, add routes, and see the data from the web service correctly inserted into the Core Data sqlite database.
Now I'm starting to work on persisting objects to the web service, but can't find any information on how best to do this. It seems like there are multiple ways to skin a cat with RestKit, so I wanted to see what the best practices are for POST/PUTing data.
1. When POSTing a new object, do you usually save the object in the managed object context first, then call [[RKObjectManager sharedManager] postObject:path:parameters:success:failure:]? Or is there some RestKit method that performs both of these operations at once?
2. If you first save the object in Core Data and then POST it to the web service, will RestKit be able to update the already-inserted object with the service's database identification attributes? Does the [[RKObjectManager sharedManager] postObject:path:parameters:success:failure:] method do this for you?
3. If there was an error POSTing the object, what is the typical way to retry the POST? Would you look for some sort of flag in the Core Data managed object and retry on a separate thread?
Thanks!
1. Yes, then the response from the POST updates that same object (perhaps filling in the server-specified unique id).
2. Yes, updating the POSTed object is the default behaviour (you need to specify the response mapping, and the response must be a single object).
3. Generally no separate thread, and it depends what caused the error. Have a flag that indicates whether it has been uploaded, and retry when the network connection is reestablished.
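The flag-and-retry bookkeeping in the last point can be sketched generically. This is not RestKit API, just the pattern: record each pending POST, mark it uploaded on success, and sweep the queue when connectivity returns (the send callback stands in for the actual network call):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

class PendingPost {
    final String payload;
    boolean uploaded = false;   // the flag persisted alongside the object
    PendingPost(String payload) { this.payload = payload; }
}

class UploadQueue {
    final List<PendingPost> queue = new ArrayList<>();

    void enqueue(PendingPost p) { queue.add(p); }

    // Called when the network connection is reestablished: retry everything
    // not yet marked uploaded. 'send' stands in for the real POST call and
    // returns whether it succeeded.
    int retryPending(Predicate<String> send) {
        int succeeded = 0;
        for (PendingPost p : queue) {
            if (!p.uploaded && send.test(p.payload)) {
                p.uploaded = true;
                succeeded++;
            }
        }
        return succeeded;
    }
}
```

In a Core Data setting the uploaded flag would be an attribute on the managed object itself, so the pending set survives app restarts.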

Restkit: GET remote linked object when foreign key refers to missing local object in Core Data

I cannot figure out if Restkit is able to download an object not present locally, in particular when a foreign key is referring to that missing object in a to-one relationship in Core Data.
Take the following example:
contact 1 refers to company 2
contact 1 is present in the local db but company 2 is not
when in the UI the user inspects the details of contact 1, then a GET for the contact 1 is performed
the GET returns a JSON containing among other contact details the property company_id = 2
I have already setup Restkit via the foreign key convention where I store the foreign key in the contact object (companyID) and I link it to the remote identifier (identifier) stored in the company object:
[contactResponseMapping addConnectionForRelationship:@"forCustomer" connectedBy:@{ @"companyID" : @"identifier" }];
In this case I cannot manage to configure RestKit to download the entity (referred to by company_id) if it is not present in the local db.
Is this possible?
The alternative workaround would be to override the companyID setter, check whether the entity exists, and download it if not, although this is not desirable to me, as I have set up an engine that, every time an object is edited, receives a notification and posts it to the server. This means I'd have to block the thread execution until the linked object is downloaded.
Also, is this mechanism called lazy loading, or hydrating entities?
I cannot find any other similar cases around.
Hope you can help, going a bit crazy on this.
Thanks a lot.
PS: I am using RestKit 0.21.0 (i.e., the latest development release, which Blake Watters confirmed to be stable).
This is not a feature that RestKit currently offers (probably because of the types of issues you discuss).
For your workaround, consider what your engine is doing in relation to relationship edits - how are they pushed back to the server? Are they always pushed?
Also, think about creating stub objects for your 'foreign' objects so that at least some representation always exists (there are a couple of ways to do this; you can set up mappings to accomplish the task). Then when you come to use one of these objects you can hydrate / lazy load it.
See also Clarifying terminology : "Hydrating" an entity : Fetching properties from the DB.
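The stub-object suggestion can be sketched generically (plain Java, not RestKit mapping configuration; the names are illustrative): create a placeholder that holds only the remote identifier, so the relationship can always be connected, and fill in the rest when the real record is eventually fetched.

```java
import java.util.HashMap;
import java.util.Map;

class Company {
    final String identifier;   // the remote identifier the foreign key refers to
    String name;               // null until hydrated
    boolean stub = true;
    Company(String identifier) { this.identifier = identifier; }
}

class CompanyStore {
    final Map<String, Company> local = new HashMap<>();

    // Return the local company, creating a stub when only the foreign key
    // (e.g. company_id = 2 from the contact JSON) is known.
    Company findOrStub(String identifier) {
        return local.computeIfAbsent(identifier, Company::new);
    }

    // Later, when the real record arrives, hydrate the stub in place, so every
    // object already pointing at it sees the full data.
    void hydrate(String identifier, String name) {
        Company c = findOrStub(identifier);
        c.name = name;
        c.stub = false;
    }
}
```

The stub flag also gives the UI a natural trigger: when the user inspects a contact whose company is still a stub, fetch and hydrate it then.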
