Merge policy on a to-one relationship with uniquing constraints erases the object instead of updating it - core-data

I've just encountered an odd Core Data behaviour. For context, my app consumes a JSON API by making my various Core Data entities decodable. Users can track TV Shows they're interested in so I have a Show entity with a to-many relationship to Season. The inverse is a to-one relationship since a season can only belong to a single TV show. I have a uniquing constraint on a remoteID property for both Show and Season entities.
At launch, I request updates for the user's TV shows. Due to the way I ingest the data, the JSON response essentially creates duplicates; on save, NSMergePolicy.mergeByPropertyObjectTrump then ensures the duplicates are merged thanks to the uniquing constraint.
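Roughly, that save path looks like the simplified sketch below; the container name, the use of a background context, and the elided decoding step are assumptions for illustration, not the actual code.

// Simplified sketch of the ingestion path described above; container name
// and the background context are assumptions, not the real project code.
import CoreData

let container = NSPersistentContainer(name: "Model")
container.loadPersistentStores { _, error in
    if let error = error { fatalError("Failed to load store: \(error)") }
}

let context = container.newBackgroundContext()
// Duplicates created while decoding are collapsed on save thanks to the
// uniquing constraint on remoteID plus this merge policy.
context.mergePolicy = NSMergePolicy.mergeByPropertyObjectTrump

context.performAndWait {
    // Decoding the JSON inserts Show/Season objects, possibly duplicating
    // ones that already exist in the store.
    // let shows = try decoder.decode([Show].self, from: data)
    try? context.save()
}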
This has been working fine for regular properties and to-many relationships but now that I'm syncing seasons, the to-one relationship deletes seasons with every other sync. For example:
Sync 1:
Show A has 4 seasons locally.
Show A is fetched with 4 seasons.
After the context is saved, Show A now has 0 seasons.
Sync 2:
Show A now has 0 seasons locally.
Show A is fetched with 4 seasons.
After the context is saved, Show A now has 4 seasons.
Note that if I set my inverse relationship to be to-many (from Season to Show), this behaviour does not happen. It's ok for my purposes but not ideal as it could create issues down the line.
This is my interpretation of what happens:
Core Data first merges the duplicated Season objects.
Because a Season can only have one Show, one of the duplicated Shows loses its link to that Season.
Core Data then merges the duplicated Show objects.
At that point, one of them has seasons and the other one doesn't.
Core Data decides that the one without seasons is the most recent "change" and thus the one that should be committed to the store.
I'm unsure about that last point and it doesn't quite make sense to me. I guess my question is: does anyone know precisely what is happening here and can confirm or correct my explanation? Also, if there's a way around it that I don't know about, I'd be extremely grateful!

Related

How to ensure data consistency between two different aggregates in an event-driven architecture?

I will try to keep this as generic as possible using the “order” and “product” example, to try and help others that come across this question.
The Structure:
In the application we have three different services: two that follow the event-sourcing pattern and one that is read-only, giving us the separation between our read and write views:
- Order service (write)
- Product service (write)
- Order details service (Read)
The Background:
We currently store the relationship between the order and the product in only one of the write services. For example, within Order we have a property called ‘productItems’ which contains a list of the aggregate IDs from Product for the products that have been added to the order. Each product added to an order is emitted onto Kafka, where the read service updates the view and forms the relationships between the data.
 
The Problem:
Since we pull back the order and the product by aggregate ID to update them, if a product is deleted there is no way to disassociate the product from the order on the write side.
 
This in turn means we have an inconsistency: the order holds a reference to a product that no longer exists within the product service.
The Ideas:
Master the relationship on both sides, which means that when the product is deleted we can look at the associated orders and trigger an update to remove it from each order (this would duplicate the reference).
Create another view of the data that shows the relationships and use a saga to do the clean-up. When a delete is triggered, it looks up the view database, sees the relationships within the data, and then triggers an update for each of the orders that have the product associated (a rough sketch of this follows below).
Does it really matter having the inconsistencies if the Product details service shows the correct information? Because the view database consumes the product-deleted event, it can safely remove the relationship, which means clients will still get the correct view of the data even if the write models appear inconsistent. Based on the order of the events, the state will always appear correct in the read view.
Another thought: as the aggregate ID is deleted it should never be reused, which means checks on the aggregate such as "is this product already in the order?" will never trigger, since the aggregate ID will never be repurposed; the inconsistency should therefore not cause an issue when running commands in the future.
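To make the second idea a bit more concrete, here is a minimal sketch of the saga-style clean-up handler; every type and name in it is invented for illustration and not part of our actual system.

// Hypothetical sketch of idea 2: clean-up driven by the relationship view.
struct ProductDeleted { let productId: String }
struct RemoveProductFromOrder { let orderId: String; let productId: String }

protocol RelationshipView {
    // Returns the IDs of the orders that currently reference the product.
    func orders(containing productId: String) -> [String]
}

protocol CommandBus {
    func send(_ command: RemoveProductFromOrder)
}

struct ProductDeletedSaga {
    let view: RelationshipView
    let commands: CommandBus

    func handle(_ event: ProductDeleted) {
        // Look up the relationships in the read-side view, then trigger an
        // update for each order that still references the deleted product.
        for orderId in view.orders(containing: event.productId) {
            commands.send(RemoveProductFromOrder(orderId: orderId,
                                                 productId: event.productId))
        }
    }
}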
Sorry for the long read, but these are all the ideas we have thought of so far, and I am keen to gain some insight from the community, to make sure we are on the right track or if there is another approach to consider.
 
Thank you in advance for your help.
Event sourcing suits human, and specifically human-paced, processes very well. It helps a lot to imagine that every event in an event-sourced system is delivered by some clerk, printed on a sheet of paper. Then it will be much easier to figure out the suitable solution.
What's the purpose of an order? So that your back-office personnel can secure the necessary units at a warehouse, then the customer makes a payment and you start the shipping process.
So, I guess, after an order is placed, some back-office system can process it and confirm that it can be taken into work and invoicing. Or it can return the order with remarks that this and that line are no longer available, so that the customer can agree to the reduced order or pick other options.
Another option, since the probability of a customer ordering a discontinued item is low, is simply not to do this check. If it still surfaces at shipping time, issue a refund and a coupon for the inconvenience. Why is the probability low? Because the goods are added from an online catalogue, which reflects the current state, and the availability check can be done when the 'Submit' button is clicked. So an inconsistency can only occur if an item is discontinued in the same minute (or second) the order is submitted. And usually the actual decision to discontinue is made well before the information is updated in the Product service, for external reasons.
Hence, I suggest using eventual consistency, since an event-sourced entity should only be responsible for its own consistency and not try to fulfil someone else's responsibility.

Creating a viewing registry using UML

I would like to create a registry of the times a user has viewed audio visual content. I created the following diagram and was wondering if it would be a good way to achieve it.
Note: The AudiovisualContent is connected to DateTimeStamp just as a way to record when it was added to the platform.
Is the diagram correct and is it meaningful?
The diagram seems formally correct and has the advantage of being very clear on multiplicity and role names for the DateTimeStamp.
It is difficult to say if this approach is correct for a "registry". But it makes sense at first sight; I understand from the diagram that:
a Profile (user?) can perform several Viewings and each Viewing is about one Audiovisual Content. Conversely, an Audiovisual Content can be the subject of several Viewings, and each Viewing is performed by one Profile
Each Viewing (of a given content by a given user) has a DateTimeStamp
Each Audiovisual Content has a DateTimeStamp corresponding to the moment the content was added.
If a user views the same content several times at different moments, each of these Viewings may have a different rating, and the rating is optional.
What I can further infer from the multiplicities is that the timestamp corresponds to the beginning of the viewing act (because if it were the end of the viewing, there would be no timestamp while the viewing is still in progress, so the multiplicity would have been 0..1).
Areas of concern
The DateTimeStamp is a class according to your diagram. The fact that you have a 1 multiplicity on the side of the Viewing and on the side of Audiovisual Content means that every single timestamp must be associated with BOTH. I doubt that this is correct.
You could consider using 0..1 instead, which would leave the possibility of having a time stamp associated with only one of the two or none at all. But still the timestamp could possibly have both, with the risk of inconsistency between them.
Personally, I'd go for * to clarify that many viewings and uploads could happen at the same time. I'd probably show it as a property, -addedOn: DateTimeStamp and -viewedOn: DateTimeStamp.
In reality, the time stamp is very probably a value object. You could then consider making it a «dataType»; showing it as a property may then seem even more intuitive.
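To see what that property/«dataType» suggestion would look like in code, a rough, purely illustrative Swift equivalent could be:

// Rough code equivalent of the suggestion above: the timestamp becomes a
// value («dataType») exposed as a property rather than a shared class.
import Foundation

struct DateTimeStamp { let value: Date }        // value object, not an entity

struct Profile { let name: String }

struct AudiovisualContent {
    let title: String
    let addedOn: DateTimeStamp                  // -addedOn: DateTimeStamp
}

struct Viewing {
    let viewer: Profile                         // the Profile performing the viewing
    let content: AudiovisualContent             // the content being viewed
    let viewedOn: DateTimeStamp                 // -viewedOn: DateTimeStamp
    let rating: Int?                            // optional, may differ per viewing
}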
Unrelated: while your current way of modeling Viewing as a class is perfectly fine, you may be interested in showing it as an association class between many Profiles and many Audiovisual Contents.

Core Data multiple relationships to similar objects, can't inverse

I have an entity Storm, which has two one-to-many relationships, "history" and "forecast". Both of these are NSSets that contain StormPosition entities, which contain time, latitude and longitude.
I am able to build this, but while I can set up the "history" and "forecast" relationships, they can't seem to both point to objects of type StormPosition, because the inverse relationships can't both point back to the Storm entity.
I assume this is because when I do:
myStormPosition.owner = self
it needs to know which NSSet (history or forecast) to place it into.
Do I need to mix these into one "track" relationship? I'd rather not since it is nice to have one set for history and one set for forecast without having to examine the date property.
Also, elsewhere in the program I'd like to be able to work with just a StormPosition type instead of a HistoricPosition and a PredictedPosition type, which would effectively be the same but make for difficult type casting, unless I gave them both a parent class that was identical.
This does feel to me like one of the rare occasions when you would benefit from using a parent entity/class. Be careful with this, because all instances of both HistoricPosition and PredictedPosition will eat space in the persistent store for all of the properties of both (because they have a common parent).
Will you have multiple Forecasts per storm? E.g. predicted track as of day 1, predicted track as of day 2, …?
It does feel like a heavyweight solution, though. Perhaps a protocol to expose the location and timestamp? Get away from the notion of a Position entity completely? A forecast position has a valid time and the time it was issued, while a historic track position has only the time it was recorded.
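A rough sketch of that protocol idea (the attribute names and non-optional types here are assumptions):

// Sketch of the protocol approach: two separate entities, so each to-many
// relationship has an unambiguous inverse, while the rest of the program
// only depends on the shared protocol.
import CoreData

protocol StormTrackPoint {
    var latitude: Double { get }
    var longitude: Double { get }
    var time: Date { get }
}

final class HistoricPosition: NSManagedObject, StormTrackPoint {
    @NSManaged var latitude: Double
    @NSManaged var longitude: Double
    @NSManaged var time: Date          // when the position was recorded
}

final class PredictedPosition: NSManagedObject, StormTrackPoint {
    @NSManaged var latitude: Double
    @NSManaged var longitude: Double
    @NSManaged var time: Date          // when the forecast position is valid
    @NSManaged var issuedAt: Date      // when the forecast was issued
}

// Code elsewhere can work with either kind of point uniformly:
func mostRecent(_ points: [StormTrackPoint]) -> StormTrackPoint? {
    points.max { $0.time < $1.time }
}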

CQRS Read Model Projections: How complex is too complex a data transformation

I want to sanity check myself on a view projection, with regard to whether an intermediary concept can exist purely in the read model while providing a bridge between commands.
Let me use a contrived example to explain.
We place an order which raises an OrderPlaced event. The workflow then involves generating a picking slip, which is used to prepare a shipment.
A picking slip can be generated from an order (or group of orders) without any additional information being supplied from any external source or user. Is it acceptable then that the picking slip can be represented purely as a read model?
So:
PlaceOrderCommand -> OrderPlacedEvent
OrderPlacedEvent -> PickingSlipView
The warehouse manager can then view a picking slip, select the lines they would like to ship, and then perform a PrepareShipment command. A ShipmentPrepared event will then update the original order, and remove the relevant lines from the PickingSlipView.
I know it's a toy example, but I have a conceptually similar use case where a colleague believes the PickingSlip should be a domain entity/aggregate in its own right, as it's conceptually different to order. So you have PlaceOrder, GeneratePickingSlip, and PrepareShipment commands.
The GeneratePickingSlip command however simply takes an order number (identifier), transforms the order data into a picking slip entity, and persists the entity. You can't modify or remove a picking slip or perform any action on it, apart from using it to prepare a shipment.
This feels like introducing unnecessary overhead on the write model, for what is ultimately just a transformation of existing information to enable another command.
So (and without delving deeply into the problem space of warehouses and shipping)...
Is what I'm proposing a legitimate use case for a read model?
Acting as an intermediary between two commands, via transformation of some data into a different view. Or, as my colleague proposes, should every concept be represented in the write model in all cases?
I feel my approach is simpler, and avoiding unneeded complexity, but I'm new to CQRS and so perhaps missing something.
Edit - Alternative Example
Providing another example to explore:
We have a book of record for categories, where each record is information about products and their location. The book of record is populated by an external system, and contains SKU numbers, mapped to available locations:
Book of Record (Electronics)
SKU# Location1 Location2 Location3 ... Location 10
XXXX Introduce Remove Introduce ... N/A
YYYY N/A Introduce Introduce ... Remove
Each book of record is an entity, and each line is a value object.
The book of record is used to generate different Tasks (which are grouped in a TaskPlan to be assigned to a person). The plan may only cover a subset of locations.
There are different types of Tasks: one TaskPlan is for the individual who is at a location to add or remove stock from shelves; call this an AllocateStock task. Another type of Task exists for a regional supervisor managing multiple locations, to check that shelving properly follows store guidelines, say a CheckDisplay task. For allocating stock, we are interested in both introduced and removed SKUs. For checking the displays, we're only interested in newly Introduced SKUs, etc.
We are exploring two options:
Option 1
The person creating the tasks has a View (read model) that allows them to select Book of Records. Say they select Electronics and Fashion. They then select one or more locations. They could then submit a command like:
GenerateCheckDisplayTasks(TaskPlanId, List<BookOfRecordId>, List<Locations>)
The command would then orchestrate going through the records, filtering out locations we don't need, processing only the 'Introduced' items, and creating the corresponding CheckDisplayTasks for each SKU in the TaskPlan.
Option 2
The other option is to shift the filtering to the read model before generating the tasks.
When a book of record is added, a view model for each type of task is maintained. The data might be transposed and would only include relevant info, i.e. the CheckDisplayScopeView might project the book of record to:
Category SKU Location
Electronics (BookOfRecordId) XXXX Location1
Electronics (BookOfRecordId) XXXX Location3
Electronics (BookOfRecordId) YYYY Location2
Electronics (BookOfRecordId) YYYY Location3
Fashion (BookOfRecordId) ... ... etc
When generating tasks, the view enables the user to select the category and locations they want to generate the tasks for. Perhaps they select the Electronics category and Location 1 and 3.
The command is now:
GenerateCheckDisplayTasks(TaskPlanId, List<BookOfRecordId, SKU, Location>)
The command is now no longer responsible for the logic needed to filter out the locations, the Removed and N/A items, etc.
So the command for the first option just submits the ID of the entity that is being converted to tasks, along with the filter options, and does all the work internally, likely utilizing domain services.
The second option offloads the filtering aspect to the view model, and now the command submits values that will generate the tasks.
Note: In terms of the guidance that Aggregates shouldn't appear out of thin air, the Task Plan aggregate will create the Tasks.
I'm trying to determine if option 2 is pushing too much responsibility onto the read model, or whether this filtering behavior is more applicable there.
Sorry, I attempted to use the PickingSlip example as I thought it would be a more recognizable problem space, but realize now that there are connotations that go along with the concept that may have muddied the waters.
The answer to your question, in my opinion, very much depends on how you design your domain, not how you implement CQRS. The way you present it, it seems that all these operations and aggregates are in the same Bounded Context but at first glance, I would think that there are 3 (naming is difficult!):
Order Management or Sales, where orders are placed
Warehouse Operations, where goods are packaged to be shipped
Shipments, where packages are put in trucks and leave
When an Order is Placed in Order Management, Warehouse reacts and starts the Packaging workflow. At this point, Warehouse should have all the data required to perform its logic, without needing the Order anymore.
The warehouse manager can then view a picking slip, select the lines they would like to ship, and then perform a PrepareShipment command.
To me, this clearly indicates the need for an aggregate that will ensure the invariants are respected. You cannot select items not present in the picking slip, you cannot select more items than the quantities specified, you cannot select items that have already been packaged in a previous package and so on.
A ShipmentPrepared event will then update the original order, and remove the relevant lines from the PickingSlipView.
I don't understand why you would modify the original order. Also, removing lines from a view is not a safe operation per se. You want to guarantee that concurrency doesn't cause a single item to be placed in multiple packages, for example. You guarantee that by using an aggregate that contains all the items, generates the packaging instructions, and marks the items of each package safely and transactionally.
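As a purely illustrative sketch of such an aggregate (the names and rules are invented for the example, not a prescription for your domain):

// Illustrative PickingSlip aggregate guarding the invariants described above.
struct PickingSlipLine {
    let sku: String
    var quantityToPick: Int
    var quantityPacked: Int = 0
}

struct ShipmentPrepared {
    let pickingSlipId: String
    let lines: [(sku: String, quantity: Int)]
}

enum PickingSlipError: Error {
    case unknownLine(String)
    case quantityExceeded(String)
}

final class PickingSlip {
    let id: String
    private var lines: [String: PickingSlipLine]

    init(id: String, lines: [PickingSlipLine]) {
        self.id = id
        self.lines = Dictionary(uniqueKeysWithValues: lines.map { ($0.sku, $0) })
    }

    // The aggregate is the single place where the selection is validated and
    // recorded, so concurrent shipments cannot pack the same item twice.
    func prepareShipment(selecting selection: [(sku: String, quantity: Int)]) throws -> ShipmentPrepared {
        // Validate the whole selection against the invariants first...
        for item in selection {
            guard let line = lines[item.sku] else {
                throw PickingSlipError.unknownLine(item.sku)
            }
            guard line.quantityPacked + item.quantity <= line.quantityToPick else {
                throw PickingSlipError.quantityExceeded(item.sku)
            }
        }
        // ...then record the packed quantities and emit the event.
        for item in selection {
            lines[item.sku]!.quantityPacked += item.quantity
        }
        return ShipmentPrepared(pickingSlipId: id, lines: selection)
    }
}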
Acting as an intermediary between two commands
Aggregates execute the commands, they are not in between.
Viewing it from another angle, an indication that you need that aggregate is that the PrepareShippingCommand needs to create an aggregate (Shipping), and according to Udi Dahan, you should not create aggregate roots out of thin air. Instead, other aggregate roots create them. So it seems fair to say that there needs to be some aggregate which ensures that the policies to create shippings are applied.
As a final note, domain design is difficult and you need to know the domain very well, so it is very likely that my proposed solution is not correct, but I hope the considerations I made on each step are helpful to you to come up with the right solution.
UPDATE after question update
I read the updated question a couple of times and updated my answer several times, but every time I ended up with answers that were again very specific to your example, and I'm most likely missing a lot of details to actually be helpful (I'd be happy to discuss it on another channel though). Therefore, I want to go back to the first sentence of your question to add an important comment that I missed:
an intermediary concept can purely exist in the read model, while providing a bridge between commands.
In my opinion, read models are disposable. They are not a single source of truth. They are a representation of the data to easily fulfil the current query needs. When these query needs change, old read models are deleted and new ones are created based on the data from the write models.
So, based on this alone, I would recommend not preparing a read model to facilitate your command operations.
I think that your solution is here:
When a book of record is added a view model for each type of task is maintained. The data might be transposed, and would only include relevant info.
If I understand it correctly, what you should do here is not create a view model, but create an Aggregate (or multiple). This aggregate can then receive the commands, apply the business rules and mutate the state. So, instead of having a domain service reading data from "clever" read models and putting it all together, you have an aggregate which encapsulates the data it needs and the business logic.
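For instance (a purely illustrative sketch, with invented names and only one of many ways to slice it), the aggregate could own the book-of-record lines and encapsulate the filtering rule itself, instead of a read model doing it:

// Illustrative only: the aggregate owns the lines and applies the
// CheckDisplay filtering rule when asked to generate tasks.
enum StockChange { case introduce, remove, notApplicable }

struct RecordLine {                          // value object: one SKU across locations
    let sku: String
    let changes: [String: StockChange]       // location -> introduce / remove / N/A
}

struct CheckDisplayTask { let sku: String; let location: String }

final class BookOfRecord {
    let id: String
    let category: String
    private let lines: [RecordLine]

    init(id: String, category: String, lines: [RecordLine]) {
        self.id = id
        self.category = category
        self.lines = lines
    }

    // The business rule lives inside the aggregate: CheckDisplay tasks only
    // cover newly introduced SKUs, restricted to the requested locations.
    func generateCheckDisplayTasks(for locations: Set<String>) -> [CheckDisplayTask] {
        lines.flatMap { line in
            line.changes
                .filter { locations.contains($0.key) && $0.value == .introduce }
                .map { CheckDisplayTask(sku: line.sku, location: $0.key) }
        }
    }
}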
I hope it makes sense. It's a broad topic and we could talk about it for hours probably.

Spotify API, same track with different IDs in the app gets the same ID from the API

Title says almost everything.
I found that the music "Boom - 2006 Remastered Version" has two different IDs that can be found in the App:
3EKjTDAEIdyQqsA9qtb5P2
0zlAqnRv07p9ezzFf3k2ky
But when using the API to get information about each one, it returns the same ID:
3EKjTDAEIdyQqsA9qtb5P2
Is this a bug?
It is unfortunately not a bug, but it is indeed very annoying, and your code needs to be able to handle it.
"Give me info for track A! Ok, here is info for track B, just like you asked".
It is a legacy thing still left in the Spotify metadata model called track redirects (the same concept exists for albums and artists too, but is less of a problem there). It was made so that we could quickly merge duplicate albums. It means that once upon a time, there were two "different" tracks on different albums that were identical. We had lots of them on artist pages for popular artists. Labels would very often upload one album for one country and another identical one for another country, instead of just saying that one album was available in two countries. Sometimes by mistake, most often because of cross-licensing issues between labels and countries.
Track redirects are quite rare, though, if you look at the entire catalog. Most of these redirect tracks are only surfaced in old playlists and are, for instance, never returned in search results or on artist pages. These days we never merge duplicates like this, but instead make sure only one is shown on artist pages, etc., and link to the other in case one is unavailable in your country. That is the concept called Track Relinking in the docs. https://developer.spotify.com/web-api/track-relinking-guide/
I work at Spotify and bump into this problem every now and then. I want to change this so the tracks and albums become just regular duplicates, because that is much easier to reason about, but it will take a while to fix. I guess I can update my answer here in a few years when it is done.
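In the meantime, handling it on the client side mostly means not assuming that the ID you requested is the ID you get back. A minimal sketch (using the public GET /v1/tracks/{id} endpoint; a valid OAuth access token is assumed and error handling is kept to a bare minimum):

// Minimal sketch: detect a redirect by comparing the requested ID with the
// ID that comes back in the response.
import Foundation

struct Track: Decodable {
    let id: String
    let name: String
}

func fetchTrack(id requestedId: String, accessToken: String) async throws -> Track {
    var request = URLRequest(url: URL(string: "https://api.spotify.com/v1/tracks/\(requestedId)")!)
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")

    let (data, _) = try await URLSession.shared.data(for: request)
    let track = try JSONDecoder().decode(Track.self, from: data)

    if track.id != requestedId {
        // The requested ID was a redirect; keep a mapping so lookups keyed
        // by the old ID still resolve to the canonical track.
        print("Track \(requestedId) redirects to \(track.id)")
    }
    return track
}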
