Core Data multiple relationships to similar objects, can't inverse

I have an entity Storm, which has two one-to-many relationships, "history" and "forecast". Both of these are NSSets that contain StormPosition entities, each of which has time, latitude, and longitude.
I am able to build this, but while I can set up the "history" and "forecast" relationships, they can't both point to objects of type StormPosition, because the inverse relationships can't both point back to the Storm entity.
I assume this is because when I do:
myStormPosition.owner = self
it needs to know which NSSet (history or forecast) to place it into.
Do I need to merge these into a single "track" relationship? I'd rather not, since it is nice to have one set for history and one for forecast without having to examine the date property.
Also, elsewhere in the program I'd like to be able to work with just a StormPosition type instead of separate HistoricPosition and PredictedPosition types, which would be effectively identical but would make for difficult type casting unless I gave them both an identical parent class.

This does feel to me like one of the rare occasions when you would benefit from using a parent entity/class. Be careful with this, because all instances of both HistoricPosition and PredictedPosition will eat space in the persistent store for all of the properties of both (because they have a common parent).
Will you have multiple Forecasts per storm? E.g. predicted track as of day 1, predicted track as of day 2, …?
It does feel like a heavyweight solution, though. Perhaps a protocol to include the location and timestamp? Get away from the notion of a Position entity completely? A forecast position has a valid time and the time it was issued, while a historic track position has only the time it was recorded.
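For what it's worth, a rough Swift sketch of the protocol route might look like this (all names here are illustrative guesses at your model, not a definitive design):

import CoreData

protocol StormPosition {
    var latitude: Double { get }
    var longitude: Double { get }
    var time: Date { get }
}

final class Storm: NSManagedObject {
    @NSManaged var history: Set<HistoricPosition>
    @NSManaged var forecast: Set<PredictedPosition>
}

final class HistoricPosition: NSManagedObject, StormPosition {
    @NSManaged var latitude: Double
    @NSManaged var longitude: Double
    @NSManaged var time: Date          // when the position was recorded
    @NSManaged var storm: Storm?       // inverse of Storm.history
}

final class PredictedPosition: NSManagedObject, StormPosition {
    @NSManaged var latitude: Double
    @NSManaged var longitude: Double
    @NSManaged var time: Date          // the forecast's valid time
    @NSManaged var issued: Date        // when the forecast was issued
    @NSManaged var storm: Storm?       // inverse of Storm.forecast
}

Each position entity keeps its own inverse to Storm, so the two to-many relationships no longer compete for a single inverse, and code that only cares about time and location can accept any StormPosition.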

Creating a viewing registry using UML

I would like to create a registry of the times a user has viewed audiovisual content. I created the following diagram and was wondering if it would be a good way to achieve it.
Note: The AudiovisualContent is connected to DateTimeStamp just as a way to record when it was added to the platform.
Is the diagram correct and is it meaningful?
The diagram seems formally correct and has the advantage of being very clear on multiplicity and role names for the DateTimeStamp.
It is difficult to say if this approach is correct for a "registry". But it makes sense at first sight; I understand from the diagram that:
a Profile (user?) can perform several Viewings, and each Viewing is about one Audiovisual Content. Conversely, an Audiovisual Content can be the subject of several Viewings, and each Viewing is performed by a Profile
Each Viewing (of a given content by a given user) has a DateTimeStamp
Each Audiovisual Content has a DateTimeStamp corresponding to the moment the content was added.
If a user views the same content several times at different moments, each of these Viewings may have a different rating, and the rating is optional.
What I can further infer from the multiplicities is that the timestamp corresponds to the beginning of the viewing act (if it were the end of the viewing, there would be no timestamp while the viewing is in progress, so the multiplicity would have been 0..1).
Areas of concern
The DateTimeStamp is a class according to your diagram. The fact that you have a 1 multiplicity on the side of the Viewing and on the side of Audiovisual Content means that every single timestamp must be associated with BOTH. I doubt that this is correct.
You could consider using 0..1 instead, which would leave the possibility of having a timestamp associated with only one of the two, or with none at all. But a timestamp could then still be associated with both, with the risk of inconsistency between them.
Personally, I'd go for * to clarify that many viewings and uploads could happen at the same time. I'd probably show it as a property -addedOn: DateTimeStamp and -viewedOn: DateTimeStamp.
In reality, the time stamp is very probably a value object. You could then consider making it a «dataType»; showing it as a property may then seem even more intuitive.
Unrelated: while your current way of modeling Viewing as a class is perfectly fine, you may be interested in showing it as an association class on a many-to-many association between Profile and Audiovisual Content.
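To make the property/value-object suggestion concrete, here is a rough transposition to code (Swift purely for illustration; the field names are assumptions):

import Foundation

struct DateTimeStamp {              // a «dataType»: a value, compared by content
    let value: Date
}

final class AudiovisualContent {
    let addedOn: DateTimeStamp      // when the content was added to the platform
    init(addedOn: DateTimeStamp) { self.addedOn = addedOn }
}

final class Viewing {
    let viewedOn: DateTimeStamp     // the timestamp shown as a plain property
    var rating: Int?                // optional, one per viewing
    init(viewedOn: DateTimeStamp, rating: Int? = nil) {
        self.viewedOn = viewedOn
        self.rating = rating
    }
}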

How to properly render multiple 3D models in Direct3D11?

After some searching I've learned that it is possible to create multiple vertex buffers, each for a specific 3D model, and set them in the Input Assembler to be read by my shaders, or at least this is what I could understand. But reading Microsoft's documentation left me very confused about how to do this the right way. This is what I was reading, and it says I can pass an array of vertex buffers to the IA stage, but also that the maximum number of vertex buffers my Input Assembler can take in D3D11 is 32. What would I do if I needed 50 different models rendered at the same time?
It would also help if someone could clarify how pOffset works in this situation with multiple models. As far as I could understand, it should always be assigned a value of 0, since the beginning of my buffers is always vertex data, but I may have understood this wrong.
Lastly, I'll add that I've already rendered buffers consisting of multiple models together, but I don't know exactly how I could deal with many individual models.
The short answer is: You don't try to draw all your models in one Draw call.
You are free to organize rendering in many ways, but here is one approach:
A 'model' consists of one or more 'meshes'. Each mesh is a collection of vertices (in a VB), indices (in an IB), and some material information associated with each 'subset' of indices.
To draw:
foreach M in models
    foreach mesh in M
        foreach part in mesh
            Set shaders based on material
            Set VB/IB based on mesh
            DrawIndexed
Since this is a number of nested loops, there are several ways to improve the performance. For example, you might just queue up the information instead of actually calling DrawIndexed, then sort by material. Then call DrawIndexed from the sorted queue.
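To sketch the queue-and-sort idea (Swift purely to illustrate the pattern, not the D3D11 API; the IDs are assumed opaque handles):

struct DrawItem {
    let materialID: Int   // key for shader/texture state
    let meshID: Int       // key for VB/IB binding
    let startIndex: Int
    let indexCount: Int
}

var queue: [DrawItem] = []
// ...while walking models/meshes/parts, append DrawItems here instead
// of issuing draw calls...

// Sort so draws sharing a material (then a mesh) are adjacent,
// minimizing state changes:
queue.sort { ($0.materialID, $0.meshID) < ($1.materialID, $1.meshID) }

func draw(_ item: DrawItem) {
    // Bind shaders for item.materialID and VB/IB for item.meshID if they
    // changed, then issue DrawIndexed(item.indexCount, item.startIndex, 0).
}
queue.forEach(draw)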
For alpha-blending to appear correct, you have to do at least two rendering passes: First to render opaque things, then the second to render alpha-blended things.
You may also want to combine all the content in a given model into one VB and one IB with offsets rather than use individual resources.
You may have the same model in multiple locations in the world, so you may have many model instances sharing the same mesh data. In this case, sorting by VB/IB as well as material could be useful. If you are drawing the same model in many locations (100s or 1000s), then you should look into hardware instancing.
An example implementation of this can be found in DirectX Tool Kit as Model, ModelMesh, and ModelMeshPart.

CQRS Read Model Projections: How complex is too complex a data transformation

I want to sanity check myself on a view projection, with regard to whether an intermediary concept can exist purely in the read model while providing a bridge between commands.
Let me use a contrived example to explain.
We place an order which raises an OrderPlaced event. The workflow then involves generating a picking slip, which is used to prepare a shipment.
A picking slip can be generated from an order (or group of orders) without any additional information being supplied from any external source or user. Is it acceptable then that the picking slip can be represented purely as a read model?
So:
PlaceOrderCommand -> OrderPlacedEvent
OrderPlacedEvent -> PickingSlipView
The warehouse manager can then view a picking slip, select the lines they would like to ship, and then perform a PrepareShipment command. A ShipmentPrepared event will then update the original order, and remove the relevant lines from the PickingSlipView.
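A minimal sketch of that projection (all types here are hypothetical, Swift purely for illustration):

import Foundation

struct OrderPlaced {                 // the event
    let orderID: UUID
    let lines: [String]              // say, the ordered SKUs
}

struct PickingSlipView {             // the read model
    let orderID: UUID
    var openLines: [String]
}

final class PickingSlipProjection {
    private(set) var slips: [UUID: PickingSlipView] = [:]

    // The view is derived purely from events; no extra input is needed.
    func apply(_ event: OrderPlaced) {
        slips[event.orderID] = PickingSlipView(orderID: event.orderID,
                                               openLines: event.lines)
    }
}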
I know it's a toy example, but I have a conceptually similar use case where a colleague believes the PickingSlip should be a domain entity/aggregate in its own right, as it's conceptually different from an order. So you have PlaceOrder, GeneratePickingSlip, and PrepareShipment commands.
The GeneratePickingSlip command however simply takes an order number (identifier), transforms the order data into a picking slip entity, and persists the entity. You can't modify or remove a picking slip or perform any action on it, apart from using it to prepare a shipment.
This feels like introducing unnecessary overhead on the write model, for what is ultimately just a transformation of existing information to enable another command.
So (and without delving deeply into the problem space of warehouses and shipping)...
Is what I'm proposing a legitimate use case for a read model?
Acting as an intermediary between two commands, via transformation of some data into a different view. Or, as my colleague proposes, should every concept be represented in the write model in all cases?
I feel my approach is simpler, and avoiding unneeded complexity, but I'm new to CQRS and so perhaps missing something.
Edit - Alternative Example
Providing another example to explore:
We have a book of record for categories, where each record is information about products and their location. The book of record is populated by an external system, and contains SKU numbers, mapped to available locations:
Book of Record (Electronics)
SKU#   Location1   Location2   Location3   ...   Location10
XXXX   Introduce   Remove      Introduce   ...   N/A
YYYY   N/A         Introduce   Introduce   ...   Remove
Each book of record is an entity, and each line is a value object.
The book of record is used to generate different Tasks (which are grouped in a TaskPlan to be assigned to a person). The plan may only cover a subset of locations.
There are different types of Tasks: one TaskPlan is for the individual who is at a location to add or remove stock from shelves; call this an AllocateStock task. Another type of Task exists for a regional supervisor managing multiple locations, to check that shelving is properly following store guidelines, say a CheckDisplay task. For allocating stock, we are interested in both introduced and removed SKUs. For checking the displays, we're only interested in newly Introduced SKUs, etc.
We are exploring two options:
Option 1
The person creating the tasks has a View (read model) that allows them to select Book of Records. Say they select Electronics and Fashion. They then select one or more locations. They could then submit a command like:
GenerateCheckDisplayTasks(TaskPlanId, List<BookOfRecordId>, List<Locations>)
The command would then orchestrate going through the records, filtering out locations we don't need, processing only the 'Introduced' items, and creating the corresponding CheckDisplayTasks for each SKU in the TaskPlan.
Option 2
The other option is to shift the filtering to the read model before generating the tasks.
When a book of record is added, a view model for each type of task is maintained. The data might be transposed, and would only include the relevant info, i.e. the CheckDisplayScopeView might project the book of record to:
Category                       SKU    Location
Electronics (BookOfRecordId)   XXXX   Location1
Electronics (BookOfRecordId)   XXXX   Location3
Electronics (BookOfRecordId)   YYYY   Location2
Electronics (BookOfRecordId)   YYYY   Location3
Fashion (BookOfRecordId)       ...    ...        etc.
When generating tasks, the view enables the user to select the category and locations they want to generate the tasks for. Perhaps they select the Electronics category and Location 1 and 3.
The command is now:
GenerateCheckDisplayTasks(TaskPlanId, List<BookOfRecordId, SKU, Location>)
The command is now no longer responsible for the logic needed to filter out the locations, the Removed and N/A items, etc.
So the command for the first option just submits the ID of the entity that is being converted to tasks, along with the filter options, and does all the work internally, likely utilizing domain services.
The second option offloads the filtering aspect to the view model, and now the command submits values that will generate the tasks.
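For contrast, the two command payloads might look roughly like this (hypothetical Swift types mirroring the signatures above, purely for illustration):

import Foundation

struct GenerateCheckDisplayTasksV1 {       // Option 1: filtering happens inside the domain
    let taskPlanID: UUID
    let bookOfRecordIDs: [UUID]
    let locations: [String]
}

struct GenerateCheckDisplayTasksV2 {       // Option 2: the view already did the filtering
    struct Line {
        let bookOfRecordID: UUID
        let sku: String
        let location: String
    }
    let taskPlanID: UUID
    let lines: [Line]                      // already reduced to Introduced SKUs in scope
}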
Note: In terms of the guidance that Aggregates shouldn't appear out of thin air, the Task Plan aggregate will create the Tasks.
I'm trying to determine if option 2 is pushing too much responsibility onto the read model, or whether this filtering behavior is more applicable there.
Sorry, I attempted to use the PickingSlip example as I thought it would be a more recognizable problem space, but realize now that there are connotations that go along with the concept that may have muddied the waters.
The answer to your question, in my opinion, very much depends on how you design your domain, not how you implement CQRS. The way you present it, it seems that all these operations and aggregates are in the same Bounded Context but at first glance, I would think that there are 3 (naming is difficult!):
Order Management or Sales, where orders are placed
Warehouse Operations, where goods are packaged to be shipped
Shipments, where packages are put in trucks and leave
When an Order is Placed in Order Management, Warehouse reacts and starts the Packaging workflow. At this point, Warehouse should have all the data required to perform its logic, without needing the Order anymore.
The warehouse manager can then view a picking slip, select the lines they would like to ship, and then perform a PrepareShipment command.
To me, this clearly indicates the need for an aggregate that will ensure the invariants are respected. You cannot select items not present in the picking slip, you cannot select more items than the quantities specified, you cannot select items that have already been packaged in a previous package and so on.
A ShipmentPrepared event will then update the original order, and remove the relevant lines from the PickingSlipView.
I don't understand why you would modify the original order. Also, removing lines from a view is not a safe operation per se. You want to guarantee that concurrency doesn't cause a single item to be placed in multiple packages, for example. You guarantee that using an aggregate that contains all the items, generates the packaging instructions, and marks the items of each package safely and transactionally.
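A rough sketch of such an aggregate (hypothetical Swift types, purely for illustration; the guard is where the invariants above are enforced):

enum PickingError: Error {
    case lineNotAvailable   // unknown line, or already packaged
}

final class PickingSlip {                  // the aggregate root
    private var remaining: Set<String>     // line identifiers not yet packaged

    init(lines: Set<String>) {
        self.remaining = lines
    }

    // Every selection goes through the aggregate, so two concurrent
    // commands cannot place the same line in two packages.
    func prepareShipment(of lines: Set<String>) throws {
        guard lines.isSubset(of: remaining) else {
            throw PickingError.lineNotAvailable
        }
        remaining.subtract(lines)
        // ...record/emit a ShipmentPrepared event here...
    }
}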
Acting as an intermediary between two commands
Aggregates execute the commands; they are not in between.
Viewing it from another angle, an indication that you need that aggregate is that the PrepareShippingCommand needs to create an aggregate (Shipping), and according to Udi Dahan, you should not create aggregate roots (out of thin air). Instead, other aggregate roots create them. So, it seems fair to say that there needs to be some aggregate, which ensures that the policies to create shippings are applied.
As a final note, domain design is difficult and you need to know the domain very well, so it is very likely that my proposed solution is not correct, but I hope the considerations I made on each step are helpful to you to come up with the right solution.
UPDATE after question update
I read the updated question a couple of times and updated my answer several times, but every time I ended up with answers that were again very specific to your example, and I'm most likely missing a lot of details to actually be helpful (I'd be happy to discuss it on another channel, though). Therefore, I want to go back to the first sentence of your question to add an important comment that I missed:
an intermediary concept can purely exist in the read model, while providing a bridge between commands.
In my opinion, read models are disposable. They are not a single source of truth. They are a representation of the data to easily fulfil the current query needs. When these query needs change, old read models are deleted and new ones are created based on the data from the write models.
So, based on this alone, I would recommend not preparing a read model to facilitate your command operations.
I think that your solution is here:
When a book of record is added a view model for each type of task is maintained. The data might be transposed, and would only include relevant info.
If I understand it correctly, what you should do here is not create a view model, but create an Aggregate (or multiple). This aggregate can then receive the commands, apply the business rules and mutate the state. So, instead of having a domain service reading data from "clever" read models and putting it all together, you have an aggregate which encapsulates the data it needs and the business logic.
I hope it makes sense. It's a broad topic and we could talk about it for hours probably.

Correct structure for MongoDB

I am fairly new to MongoDB and databases in general, and I am not sure what the correct/typical structure is for setting up different attributes.
For example, let's say I have a person named Tim and he can play basketball, soccer, and tennis. How do you go about stating this? Do you use booleans, or store an array of strings?
This is how I think the format would look. Is this the correct way to think about it?
{
  name: 'Tim',
  sports: {
    soccer: true,
    tennis: true,
    basketball: true,
    football: false
  }
}
Data modeling in MongoDB works differently than with an RDBMS. A typical workflow with an RDBMS is that you define your entities and their properties as well as their relations, and then bang your head against the wall to get your "upper left above and beyond"™ JOINs right so that the data gives you the answers you need.
The typical workflow for NoSQL databases in general and MongoDB in particular is different. You identify the questions you need to get answered by your data first, and model your data so that these questions can be answered in the most efficient way. Hence, let us play this through.
You obviously have people, for whom the sports they participate in should be recorded. So the first question we have to ask ourselves is whether we embed the data or whether we reference it. Since it is rather unlikely that we hit the 16MB document limit, embedding seems to be a good choice. Now, by staying with an object holding Boolean values for every sport imaginable (later down the road), not only do we introduce unnecessary verbosity, we add exactly zero informational value in comparison to holding an array of sports:
{
  Name: "Tim",
  Sports: ["Soccer", "Tennis", "Basketball"]
}
This makes it much easier to introduce new sports later down the road (simply append the new sport to the sports array of each person who plays it), does not pollute each and every document with irrelevant data, and holds the same informational value.
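To make the array shape concrete, here is the same document modeled as a plain value type (a Swift sketch with lower-cased field names; the point is the array, not the language):

struct Person: Codable {
    let name: String
    var sports: [String]
}

var tim = Person(name: "Tim", sports: ["Soccer", "Tennis", "Basketball"])

// Introducing a new sport later is a simple append, with no schema change:
if !tim.sports.contains("Football") {
    tim.sports.append("Football")
}

// And "does Tim play tennis?" becomes a membership test rather than a flag:
let playsTennis = tim.sports.contains("Tennis")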

Difference between element, item and entry when it comes to programming?

Naming variables is quite important, and as a non-native English speaker I wonder what the difference is between using element, item, and entry to name things within data structures or variables/parameters.
Let us start with the plain English meaning of each of these:
Element: a part or aspect of something abstract, especially one that is essential or characteristic.
Thus, elements can be thought of as logically connected atomic parts of a whole, e.g. the elements (nodes) of a tree, or the elements of HTML code (opening tag, inner HTML content, and closing tag).
Item: an individual article or unit, especially one that is part of a list, collection, or set.
I prefer this when the things are logically independent, like items in a shopping cart, items in a bag, etc.
Entry: an item written or printed in a diary, list, ledger, or reference book.
I usually use this for tables, like a hash table, accounts (transaction entries), or records (entries in a sales ledger, etc.).
Note that we can't refer to the items in a bag (considered as an object in the object-oriented paradigm) as entries, and probably not as elements either, because the items are not constituents of the bag itself.
However, in some cases, like an array, we can use element, item, or entry interchangeably too :)
Had to think on this for a few minutes, interesting :)
Note I'm not a native English speaker either so my opinions are just that, opinions.
I use 'element' for things that have some connection with each other, like nodes in a graph or tree. I use 'item' for individual elements in a list (i.e. that don't necessarily have a connection to each other). I don't use 'entry' because I don't like it in this context, but it's just a matter of preference.
Since I'm primarily a C# dev, this is apparent in .Net's naming too: a List<T> has Items, but WPF building blocks in XAML, or XML tags, are Elements (and many more similar examples); that's probably at least part of the reason why I formed this habit.
I don't think there would be anything very wrong with switching things around though; it would certainly be understandable enough from my point of view.
