Can someone explain to me in detail what it means when something is delivered and when something is derived?
Thanks buddies.
I assume we are talking with respect to PeopleSoft. We call an object 'Delivered' if it comes as part of the PeopleSoft suite and has not been customized or modified. A 'Derived Record' is a type of record in PeopleSoft that we use online when we don't want to store the value in the database.
Assuming you're really talking about PeopleSoft (as you tagged your question), a Derived (aka Derived Record) is a non-persistent data structure in a component buffer, so its content will not be committed to the database when the save process occurs.
Concerning the 'delivered' part of your question, it doesn't really make sense to compare it with a derived record. You deliver a package, a project, or a pizza :)
What is the best way to import an Excel file (or do a mass insert) containing entities using the Axon Framework?
Should we use a command with the Excel file as a byte array and then parse the file in the aggregate, sending each line as an event? Or create an event containing the list of entities (but then how do we update the aggregates)? Or parse the Excel file outside of the aggregate and then create a command for each row?
Thanks for the help.
Technically speaking, you have two options (each with many variations):
Parse the file on the client-side and issue a command per record
Send one command with the entire content (the file itself or the content converted to another format). The command handler (aggregate or not) then iterates over the records and performs the required action per each.
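The second option can be sketched roughly like this (a minimal sketch in Python for brevity; `ImportEntitiesCommand` and `EntityImportedEvent` are made-up names for illustration, not Axon API):

```python
import uuid
from dataclasses import dataclass

# Hypothetical messages -- the names are illustrative only.
@dataclass
class ImportEntitiesCommand:
    rows: list          # parsed rows from the uploaded file

@dataclass
class EntityImportedEvent:
    entity_id: str
    data: dict

def handle_import(command):
    """One command carries the entire content; the handler iterates
    over the records and emits one event per row."""
    events = []
    for row in command.rows:
        events.append(EntityImportedEvent(entity_id=str(uuid.uuid4()), data=row))
    return events

events = handle_import(ImportEntitiesCommand(rows=[{"name": "a"}, {"name": "b"}]))
```

In a real Axon application the handler would live on an aggregate or an external command handler and the events would be applied/published rather than returned; this only shows the iterate-inside-the-handler shape.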
Which one you choose and how exactly you implement it depend on a lot of factors, such as:
whether or not the data is about "entities" that belong to the same aggregate
the size of the data (both in bytes and number of records)
the performance and security requirements and constraints
what information needs to be stored (namely, should the system "remember" there was a "mass insert")
whether the aggregates are event-sourced or state-stored
...
As you can see, there are way too many possibilities for anyone to be able to give you a generic "best way".
That said, it is a very interesting question that can spark some architectural discussions. Unfortunately, StackOverflow is not the right place to have those (see What topics can I ask about here? and What types of questions should I avoid asking?).
If you would like to discuss those options in more detail I suggest posting the question on AxonIQ's Discuss platform.
It pretty much depends on what kind of Event is valuable for your business.
But in general, parsing the file outside of the Aggregate (that is not what the Aggregate is for) and firing multiple commands (one for each line) would be my choice.
In that case, you will have an Event in your Event Store for each line, which makes it much more explicit what happened. Also, note that this way your Events will be granular and not that big (large, coarse-grained events are usually a code smell, in my view) =)
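The parse-outside-then-fire-per-line approach might look like this (a sketch in Python for brevity; `CreateEntityCommand` is a hypothetical name, and a real Excel upload would need a spreadsheet parser rather than the stdlib `csv` module used here):

```python
import csv
import io
import uuid
from dataclasses import dataclass

@dataclass
class CreateEntityCommand:   # hypothetical command, one per file row
    command_id: str
    name: str

def commands_from_csv(text):
    """Parse the upload outside the aggregate and build one
    fine-grained command per line, each to be dispatched separately."""
    reader = csv.DictReader(io.StringIO(text))
    return [CreateEntityCommand(command_id=str(uuid.uuid4()), name=row["name"])
            for row in reader]

cmds = commands_from_csv("name\nalice\nbob\n")
```

Each command would then be sent through the command gateway individually, producing one event per line in the store.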
KR,
I am currently just trying to learn some new programming patterns and I decided to give event sourcing a shot.
I have decided to model a warehouse as my aggregate root in the domain of shipping/inventory, where the number of warehouses is generally pretty constant (i.e. a company won't be adding warehouses too often).
I have run into the question of how to set my aggregateId, which should correspond to a warehouse, on my server. Most examples I have seen, including this one, show the aggregate ID being generated server side when a new aggregate is being created (in my case a warehouse), and then passed in the command request when referring to that aggregate for subsequent commands.
Would you say this is the correct approach? Can I expect the user to know and pass aggregate IDs when issuing commands? I realize this is probably domain dependent and could also be a UI/UX choice, just wondering what others have done. It would make more sense to me if my event-sourced aggregates were created more frequently, as with meal tabs or shopping carts.
Thanks!
Heuristic: aggregate id, in many cases, is analogous to the primary key used to distinguish entities in a database table. Many of the lessons of natural vs surrogate keys apply.
Can I expect the user to know and pass aggregate Ids when issuing commands?
You probably can't depend on the human to know the aggregate ids. But the client that the human operator is using can very well know them.
For instance, if an operator is going to be working in a single warehouse during a session, then we might look up the appropriate identifier, cache it, and use it when constructing messages on behalf of the user.
Analogy: when you fill in a web form and submit it, the browser does the work of looking at the form action and using that information to construct the correct URI, and similarly the correct HTTP request.
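The look-it-up-once-and-cache idea can be sketched like this (Python for brevity; `ReceiveStockCommand` and the directory lookup are hypothetical, standing in for whatever query your read side offers):

```python
from dataclasses import dataclass

@dataclass
class ReceiveStockCommand:   # hypothetical warehouse command
    warehouse_id: str
    sku: str
    quantity: int

class WarehouseClient:
    """The human picks a warehouse by name once per session; the client
    caches the aggregate id and stamps it on every subsequent command."""
    def __init__(self, directory):
        self.directory = directory      # name -> aggregate id, from a query
        self.warehouse_id = None

    def select_warehouse(self, name):
        self.warehouse_id = self.directory[name]

    def receive_stock(self, sku, quantity):
        return ReceiveStockCommand(self.warehouse_id, sku, quantity)

client = WarehouseClient({"north": "wh-001"})
client.select_warehouse("north")
cmd = client.receive_stock("SKU-9", 10)
```

The operator only ever sees "north"; the id travels in the messages.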
The client will normally know what the ID is, because it just got it during a previous query.
Creation patterns are weird. It can, in some circumstances, make sense for the client to choose the identifier to be used when creating a new aggregate. In others, it makes sense for the client to provide an identifier for the command message, and the server decides for itself what the aggregate identifier should be.
It's messaging, so you want to be careful about coupling the client directly to your internal implementation details -- especially if that client is under a different development schedule. If you get the message contract right, then the server and client can evolve in any way consistent with the contract at any time.
You may want to review Greg Young's 10 year retrospective, which includes a discussion of warehouse systems. TL;DR - in many cases the messages coming from the human operators are events, not commands.
Would you say this is the correct approach?
You're asking if one of Greg Young's Event Sourcing samples represents the correct approach... Given that the combination of CQRS and Event Sourcing was essentially (re)invented by Greg, I'd say there's a pretty good chance of that.
In general, letting the code that implements the Command-side generate a GUID for every Command, Event, or other persistent object that it needs to write is by far the simplest implementation, since GUIDs are guaranteed to be unique. In a distributed system, uniqueness without coordination is a big thing.
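As a sketch of that (Python's `uuid.uuid4()` standing in for whatever GUID facility your platform has; the command shape is made up):

```python
import uuid

def new_warehouse_command():
    """Each node generates its own id locally; random version-4 UUIDs
    make collisions negligible, so no coordination is needed."""
    aggregate_id = str(uuid.uuid4())
    return {"type": "CreateWarehouse", "aggregate_id": aggregate_id}

a = new_warehouse_command()
b = new_warehouse_command()
```

Two calls on two different machines (or the same one) yield distinct ids without ever talking to the database.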
Can I expect the user to know and pass aggregate Ids when issuing commands?
No, and you particularly can't expect a user to know the GUID of their assets. What you may be able to do is to present the user with a list of his or her assets. Each item in the list will have the GUID associated, but it may not be necessary to surface that ID in the user interface. It's just data that the underlying UI object carries around internally.
In some cases, users do need to know the ID of some of their assets (e.g. if it involves phone support). In that case, you can add a lookup API to address that concern.
I was reading 'CouchDB: The Definitive Guide' and I was confused by this paragraph:
For demoing purposes, having CouchDB assign a UUID is fine. When you write your first programs, we recommend assigning your own UUIDs. If you rely on the server to generate the UUID and you end up making two POST requests because the first POST request bombed out, you might generate two docs and never find out about the first one because only the second one will be reported back. Generating your own UUIDs makes sure that you'll never end up with duplicate documents.
I thought that uuids (specifically the _id) were saved only when the document creation was successful. That is, when I "post" an insert request for a new document, the _id is generated automatically. If the document is saved then the field is kept, otherwise discarded. Is that not the case?
Can you please explain what is the correct way to generate _id fields in CouchDB?
I think this quote is not really about UUIDs but about using PUT (which is idempotent) instead of POST.
Check this thread for more information : Consequences of POST not being idempotent (RESTful API)
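To make that concrete, here is a sketch of the client-assigned-id approach (Python; the request is only constructed, not sent, and the database URL is illustrative):

```python
import uuid

def idempotent_put(base_url, doc):
    """Client assigns the _id, so the write is a PUT to a known URL.
    Retrying the same PUT can at worst fail with a 409 conflict --
    it can never silently create a second document, unlike a retried POST."""
    doc_id = doc.get("_id") or uuid.uuid4().hex
    doc["_id"] = doc_id
    return ("PUT", f"{base_url}/{doc_id}", doc)   # method, URL, body to send

method, url, body = idempotent_put("http://localhost:5984/mydb", {"title": "x"})
```

Because the URL is fixed before the first attempt, a retry targets the same document instead of minting a new one.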
I think that quote is wrong or out of date, and it's fine to rely on CouchDB for ID generation. I've used this at work a lot and have never really run into any issues.
I'm designing an academic decision support system. I have to write documentation for that project. The part I am stuck on is writing contracts.
I've a use case Generate custom reports.
The interaction the user has with the system is setParametersforReport().
In this function he sets attributes, like student_rollNumber, marks, warning count, or anything else he wants to see on the report.
However I am confused what to write in the contract's post condition.
The 3 things that I should mention are:
Instances created
Associations formed or broken
Attributes changed
I don't get what to write there or how to explain it, since nothing is actually being created. I have all the data I want in the database and I am accessing it without classes. I am confused because a database instance can't be created.
Please any help will be appreciated.
Postconditions are used to specify the state of the system at the end of the operation's execution. In your case it looks like the state of the system at the end is the same as the state at the beginning, since you're not modifying the database (and you're not storing the report instance either). Therefore I don't see the point of defining a contract for this operation.
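If your documentation template forces you to write one anyway, a read-only operation's contract can simply state "none" for each of the three items (a sketch in the usual contract layout; the precondition is an assumption about your use case):

```
Contract: setParametersforReport(attributes)
  Cross-reference: use case "Generate custom reports"
  Precondition:   the user has selected a report to configure
  Postconditions:
    - Instances created: none
    - Associations formed/broken: none
    - Attributes changed: none (the parameters only filter what is
      read; no persistent state is modified)
```

Stating "none" explicitly documents that the operation is a pure query.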
Having watched this video by Greg Young on DDD:
http://www.infoq.com/interviews/greg-young-ddd
I was wondering how you could implement Command-Query Separation (CQS) with DDD when you have in memory changes?
With CQS you have two repositories, one for commands, one for queries.
As well as two object groups, command objects and query objects.
Command objects only have methods, and no properties that could expose the shape of the objects, and aren't to be used to display data on the screen.
Query objects on the other hand are used to display data to the screen.
In the video the commands always go to the database, so you can use the query repository to fetch the updated data and redisplay it on the screen.
Could you use CQS with something like an edit screen in ASP.NET, where changes are made in memory and the screen needs to be updated several times with the changes before they are persisted to the database?
For example
I fetch a query object from the query repository and display it on the screen
I click edit
I refetch a query object from the query object repository and display it on the form in edit mode
I change a value on the form, which autoposts back and fetches the command object and issues the relevant command
WHAT TO DO: I now need to display the updated object, because the command changed the calculated fields. Since the command object has not been saved to the database, I can't use the query repository. And with CQS I'm not meant to expose the shape of the command object to display it on the screen. How would you get a query object back with the updated changes to display on the screen?
A couple of possible solutions I can think of are to have a session repository, or a way of getting a query object from the command object.
Or does CQS not apply to this type of scenario?
It seems to me that in the video changes get persisted straight away to the database, and I haven't found an example of DDD with CQS that addresses the issue of batching changes to a domain object and updating the view of the modified domain object before finally issuing a command to save the domain object.
So what it sounds like you want here is a more granular command.
E.g.: the user interacts with the web page (let's say doing a checkout with a shopping cart).
The multiple pages gathering information are building up a command. The command does not get sent until the user actually checks out, at which point all the information is sent up in a single command to the domain; let's call it a "CheckOut" command.
Presentation models are quite helpful at abstracting this type of interaction.
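A rough sketch of that interaction (Python for brevity; the field names and `CheckOutCommand` are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class CheckOutCommand:      # the single command sent at the very end
    cart_id: str
    shipping_address: str
    payment_token: str

class CheckOutPresentationModel:
    """Each screen writes into this model in memory; the view can
    re-render from it as often as needed. Only check_out() produces
    a command for the domain."""
    def __init__(self, cart_id):
        self.cart_id = cart_id
        self.shipping_address = None
        self.payment_token = None

    def set_shipping(self, address):
        self.shipping_address = address

    def set_payment(self, token):
        self.payment_token = token

    def check_out(self):
        return CheckOutCommand(self.cart_id, self.shipping_address,
                               self.payment_token)

model = CheckOutPresentationModel("cart-1")
model.set_shipping("1 Main St")
model.set_payment("tok-42")
cmd = model.check_out()
```

All the intermediate edits stay in the presentation model; the domain only ever sees the final CheckOut.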
Hope this helps.
Greg
If you really want to use CQS for this, I would say that the Query repo and the Write repo both have a reference to the same backing store. Usually this reference is to an external database, but in your case it could be a List&lt;T&gt; or similar.
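The shared-backing-store idea in miniature (a sketch in Python standing in for the C# List&lt;T&gt; case; the class names are illustrative):

```python
class InMemoryStore:
    """The single backing store both repositories point at."""
    def __init__(self):
        self.items = {}

class WriteRepository:
    """Command side: only mutates, never exposes shape for display."""
    def __init__(self, store):
        self.store = store
    def save(self, item_id, item):
        self.store.items[item_id] = item

class QueryRepository:
    """Query side: only reads, used to (re)display on screen."""
    def __init__(self, store):
        self.store = store
    def get(self, item_id):
        return self.store.items.get(item_id)

store = InMemoryStore()
writes, queries = WriteRepository(store), QueryRepository(store)
writes.save("42", {"total": 10})
```

Because both repos share `store`, a read issued right after the command sees the updated calculated fields without any database round trip.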
Also for the rest of your concerns ...
These are more concerns with eventual consistency than with CQRS as such. You do not need to be eventually consistent with CQRS: you can make the processing of the command also write to the reporting store (or use the same physical store for both, as mentioned) in a consistent fashion. I actually recommend people do this as their base architecture and later come through and introduce eventual consistency where needed, as there are costs associated with it.
In memory, you would usually use the Observer design pattern.
Actually, you always want to use this pattern but most databases don't offer an efficient way to call a method in your app when something in the DB changes.
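A minimal Observer sketch for this in-memory case (Python for brevity; the domain object and view model are invented examples):

```python
class DomainObject:
    """Subject: notifies observers whenever a command mutates it."""
    def __init__(self, price, quantity):
        self.price, self.quantity = price, quantity
        self.observers = []

    def change_quantity(self, quantity):    # a command-side method
        self.quantity = quantity
        for observer in self.observers:
            observer.updated(self)

class ViewModel:
    """Observer: the screen re-reads calculated fields on each change."""
    def __init__(self):
        self.total = None
    def updated(self, obj):
        self.total = obj.price * obj.quantity

item = DomainObject(price=5, quantity=1)
view = ViewModel()
item.observers.append(view)
item.change_quantity(3)
```

Each in-memory command pushes the recalculated state to the view, so the screen stays current without touching the database.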
The Unit of Work design pattern from Patterns of Enterprise Application Architecture matches CQS very well - it is basically a big Command that persist stuff in the database.
JdonFramework is a CQRS DDD Java framework; it supplies a domain events + asynchronous pattern. More details: https://jdon.dev.java.net/