How can I re-initialize a value in Axon with the same InitAvailableQuantityCommand? - domain-driven-design

Assume we have a stock. This stock should persist the product id and the available quantity. The user of this stock can frequently update the available quantity (InitAvailableQuantityCommand).
If some product has been sold, our system will get a sold event (DecreaseAvailableQuantityCommand) and the available quantity for the sold product should be decreased.
It works well with the aggregate below, except for one thing:
if I try to re-initialize the stock again with another InitAvailableQuantityCommand, the command fails and an error is thrown:
An event for aggregate [3333] at sequence [0] was already inserted
What I try to achieve is the following:
InitAvailableQuantityCommand (productId = 1, quantity = 10)
DecreaseAvailableQuantityCommand (productId = 1, quantity = 1)
DecreaseAvailableQuantityCommand (productId = 1, quantity = 1)
Now we have 8 available products left.
At this point the user wants to re-initialize the stock with 20 available products for productId 1. The user sends a new InitAvailableQuantityCommand (productId = 1, quantity = 20), and at this point it fails.
What am I doing wrong?
Thanks.
@NoArgsConstructor
@Aggregate
@Data
public class AvailableQuantityAggregate {

    private String partnerId;
    private String productId;
    @AggregateIdentifier
    private String productVariationId;
    private int quantity;

    @CommandHandler
    public AvailableQuantityAggregate(InitAvailableQuantityCommand cmd) {
        AggregateLifecycle.apply(AvailableQuantityInitializedEvent.builder()
                .partnerId(cmd.getPartnerId())
                .productId(cmd.getProductId())
                .productVariationId(cmd.getProductVariationId())
                .quantity(cmd.getQuantity())
                .build());
    }

    @CommandHandler
    public void handle(DecreaseAvailableQuantityCommand cmd) {
        AggregateLifecycle.apply(AvailableQuantityDecreasedEvent.builder()
                .productVariationId(cmd.getProductVariationId())
                .quantity(cmd.getQuantity())
                .build());
    }

    @EventSourcingHandler
    protected void on(AvailableQuantityInitializedEvent event) {
        this.productVariationId = event.getProductVariationId();
        this.partnerId = event.getPartnerId();
        this.productId = event.getProductId();
        this.quantity = event.getQuantity();
    }

    @EventSourcingHandler
    protected void on(AvailableQuantityDecreasedEvent event) {
        this.quantity = this.quantity - event.getQuantity();
    }
}

The InitAvailableQuantityCommand instantiates an Aggregate. Aggregates inherently have identity. As such, the Aggregate Identifier is in place to denote whom/what it is. When you are event sourcing, which you are by default in Axon, the Event Store will ensure that you will not add events with the same aggregate id and sequence number. When you are publishing the InitAvailableQuantityCommand a second time however, you are telling the framework to publish an event with the same aggregate id and sequence number.
Hence, your modelling solution should be a little different. The action (that is, the command) of instantiating the aggregate is different from resetting it. Thus I'd suggest adding a different command to reset your aggregate to its initial values.

Judging from your code snippet, the InitAvailableQuantityCommand is handled by a constructor. This means that Axon expects to need to create a new instance of an aggregate. But as you are expecting to load an instance, there is a collision of identifiers (fortunately).
What you'd need to do is create a different command that contains the same information, but is handled by an instance method. This might be what you want to do anyway, because there is a conceptual/functional difference between first-time initialization and "resetting".
In future versions of Axon, we will support "create-or-update"-style functionality, where a single command could fulfill both roles.

Related

How can I set the LocationID on the 'Create Shipment' process from the Sales Orders screen?

I have a process that creates records in the Sales Orders screen's Details grid based on two Header user fields: SiteID (Warehouse) and LocationID.
When the 'Create Shipment' process is initiated, the shipment is created and contains the SiteID from the Sales Orders grid - but since there is no LocationID in the grid, the 'Create Shipment' process uses some default(?) LocationID, whereas I'd like it to use the Header user field's LocationID.
My question is, how would I intercept this process to set the LocationID to something other than what it's defaulting to?
Thanks...
Update:
Using the virtual method:
SetShipmentFieldsFromOrder(SOOrder order, SOShipment shipment, Nullable<Int32> siteID, Nullable<DateTime> shipDate, String operation, SOOrderTypeOperation orderOperation, Boolean newlyCreated, SetShipmentFieldsFromOrderDelegate baseMethod)
I don't see any way to set the grid value for LocationID (i.e., there is no SOShipLine record to set a value on in the virtual method). How would I do this?
There's a virtual method on the SOShipmentEntry graph called SetShipmentFieldsFromOrder; you can override that to update the CustomerLocationID as needed. The Create Shipment action calls SOShipmentEntry.CreateShipment, which inserts the shipment and then calls the SetShipmentFieldsFromOrder method.
The system should be pulling the SOShipment.CustomerLocationID from the SOOrder.CustomerLocationID field by default though.
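If it really is the header's customer location you need, a rough (untested) sketch of such an override could look like the following. SOOrderExt and UsrLocationID are hypothetical names standing in for whatever DAC extension and user field actually hold the value on the order header; the delegate signature mirrors the virtual method quoted in the question:

using System;
using PX.Data;
using PX.Objects.SO;

// Hypothetical DAC extension holding the header user field mentioned in the question.
public sealed class SOOrderExt : PXCacheExtension<SOOrder>
{
    public abstract class usrLocationID : IBqlField { }
    [PXDBInt]
    [PXUIField(DisplayName = "Location ID")]
    public virtual int? UsrLocationID { get; set; }
}

public class SOShipmentEntryExt : PXGraphExtension<SOShipmentEntry>
{
    // Delegate matching the virtual method quoted in the question.
    public delegate void SetShipmentFieldsFromOrderDelegate(SOOrder order, SOShipment shipment,
        int? siteID, DateTime? shipDate, string operation, SOOrderTypeOperation orderOperation,
        bool newlyCreated);

    [PXOverride]
    public void SetShipmentFieldsFromOrder(SOOrder order, SOShipment shipment,
        int? siteID, DateTime? shipDate, string operation, SOOrderTypeOperation orderOperation,
        bool newlyCreated, SetShipmentFieldsFromOrderDelegate baseMethod)
    {
        // Let the base logic populate the shipment first.
        baseMethod(order, shipment, siteID, shipDate, operation, orderOperation, newlyCreated);

        // Then copy the header user field into the shipment, if it has a value.
        var orderExt = PXCache<SOOrder>.GetExtension<SOOrderExt>(order);
        if (orderExt?.UsrLocationID != null)
        {
            shipment.CustomerLocationID = orderExt.UsrLocationID;
        }
    }
}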
I believe the question is about the defaulting of warehouse locations into the shipment lines and allocations rather than customer locations.
Currently, a shipment line selects its location(s) in the following way:
First (SelectLocationStatus), it selects locations based on their pick priority (a smaller value means higher priority).
After this method, ResortStockForShipmentByDefaultItemLocation is executed. This method puts the default issue location for the item-warehouse combination (INItemSite) at the top of the list regardless of its pick priority.
I believe you should override this method to put the needed location at the top of the list instead of (or ahead of) the default issue location. Here is the code of this method of the SOShipmentEntry class for reference:
protected virtual void ResortStockForShipmentByDefaultItemLocation(SOShipLine newline, List<PXResult> resultset)
{
    if (INSite.PK.Find(this, newline.SiteID)?.UseItemDefaultLocationForPicking != true)
        return;

    var dfltShipLocationID = INItemSite.PK.Find(this, newline.InventoryID, newline.SiteID)?.DfltShipLocationID;
    if (dfltShipLocationID == null)
        return;

    var listOrderedByDfltShipLocationID = resultset.OrderByDescending(
        r => PXResult.Unwrap<INLocation>(r).LocationID == dfltShipLocationID).ToList();
    resultset.Clear();
    resultset.AddRange(listOrderedByDfltShipLocationID);
}
Important! If we are talking about the 2021 R2 version, there is the "Project-Specific Inventory" (materialManagement) feature, which has its own extension of SOShipmentEntry where some of the shipment creation methods (including SelectLocationStatus) are overridden. ResortStockForShipmentByDefaultItemLocation is not overridden, but if the customer uses this feature, I suggest extending this extension rather than the base SOShipmentEntry:
namespace PX.Objects.PM.MaterialManagement
{
public class SOShipmentEntryMaterialExt : PXGraphExtension<SOShipmentEntry>
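Building on that suggestion, a rough (untested) sketch of an override that forces a specific location to the top of the list might look like the one below. It targets the base SOShipmentEntry for brevity; if the Project-Specific Inventory feature is enabled, the same override would instead go into a second-level extension on SOShipmentEntryMaterialExt. GetPreferredLocationID is a placeholder for however you look up the desired location (for example, from a header user field):

using System.Collections.Generic;
using System.Linq;
using PX.Data;
using PX.Objects.IN;
using PX.Objects.SO;

public class SOShipmentEntryLocationExt : PXGraphExtension<SOShipmentEntry>
{
    public delegate void ResortStockDelegate(SOShipLine newline, List<PXResult> resultset);

    [PXOverride]
    public void ResortStockForShipmentByDefaultItemLocation(SOShipLine newline,
        List<PXResult> resultset, ResortStockDelegate baseMethod)
    {
        int? preferredLocationID = GetPreferredLocationID(newline);
        if (preferredLocationID == null)
        {
            // No preference: fall back to the stock behaviour shown above.
            baseMethod(newline, resultset);
            return;
        }

        // Same reordering trick as the base method, keyed on the preferred location.
        var reordered = resultset
            .OrderByDescending(r => PXResult.Unwrap<INLocation>(r).LocationID == preferredLocationID)
            .ToList();
        resultset.Clear();
        resultset.AddRange(reordered);
    }

    // Placeholder: resolve the location you want picked first (e.g. from a user field).
    private int? GetPreferredLocationID(SOShipLine line) => null;
}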

What is the proper way to update values of DACs retrieved via a PXResultset?

We have a business requirement to set the SO return COST to the original cost issued without invoicing if possible. We determined that Sales Orders are necessary to track issuing materials to our client, and we are cost driven rather than price driven. We use FIFO costing, but SO return orders do not seem to return at the original COST unless invoiced (which we also don't do in a traditional manner).
I found that setting the unit/ext cost on the SO Shipment Line directly in the database before Confirm Shipment and Update IN appears to provide the results desired. Applying a custom menu option to streamline and strongly control the return, I cloned nearby code as a base. The section between the === is where I set the unit/ext cost. The PXTrace shows the expected value, but it is coming out as $0 on the shipment record. I thought I might need "docgraph.Update(sOShipmentLine)" to save it, but that's not accessible in this scope.
using (var ts = new PXTransactionScope())
{
    PXTimeStampScope.SetRecordComesFirst(typeof(SOOrder), true);

    //Reminder - SOShipmentEntry docgraph = PXGraph.CreateInstance<SOShipmentEntry>();
    docgraph.CreateShipment(order, SiteID, filter.ShipDate, adapter.MassProcess, SOOperation.Receipt, created, adapter.QuickProcessFlow);

    PXTrace.WriteError("Setting Cost");

    //Set Cost on Shipment to Cost On SO Line
    PXResultset<SOShipment> results =
        PXSelectJoin<SOShipment,
            InnerJoin<SOShipLine, On<SOShipLine.shipmentNbr, Equal<SOShipment.shipmentNbr>>,
            InnerJoin<SOLine, On<SOLine.orderType, Equal<SOShipLine.origOrderType>,
                And<SOLine.orderNbr, Equal<SOShipLine.origOrderNbr>, And<SOLine.lineNbr, Equal<SOShipLine.origLineNbr>>>>
            >>,
            Where<SOShipment.shipmentNbr, Equal<Required<SOShipment.shipmentNbr>>>>
        .Select(docgraph, docgraph.Document.Current.ShipmentNbr);

    PXTrace.WriteError("Shipment {0} - Records {1}", docgraph.Document.Current.ShipmentNbr, results.Count);

    foreach (PXResult<SOShipment, SOShipLine, SOLine> record in results)
    {
        SOShipment shipment = (SOShipment)record;
        SOShipLine shipmentLine = (SOShipLine)record;
        SOLine sOLine = (SOLine)record;

        //==============================================
        shipmentLine.UnitCost = GetReturnUnitCost(sOLine.OrigOrderType, sOLine.OrigOrderNbr, sOLine.OrigLineNbr, sOLine.CuryInfoID);
        shipmentLine.ExtCost = shipmentLine.Qty * shipmentLine.UnitCost;
        PXTrace.WriteError(string.Format("{0} {1}-{2} = {3} / {4}", shipmentLine.LineType, shipmentLine.ShipmentNbr, shipmentLine.LineNbr, shipmentLine.Qty, shipmentLine.UnitCost));
        //==============================================
    }

    PXAutomation.CompleteSimple(docgraph.Document.View);

    var items = new List<object> { order };
    PXAutomation.RemovePersisted(docgraph, typeof(SOOrder), items);
    PXAutomation.RemoveProcessing(docgraph, typeof(SOOrder), items);

    ts.Complete();
}
Still on the learning curve so I'm expecting the solution is likely simple and obvious to someone more experienced.
There are three phases to it:
Changing the value
Updating the cache
Persisting the cache
I think you are changing the value but not persisting it. The reason it works after invoking the Confirm Shipment or Update IN action is probably that those actions persist all changes by calling the graph's Save action.
To change a field value in a data view you would do:
DACRecord.Field = value;
DataView.Update(DACRecord);
The particularity of your example is that the request is not bound to a data view.
When you have a loose BQL request you can do the same operation with a cache object. In your example the Caches context is available from docGraph:
DACRecord.Field = value;
graph.Caches[typeof(DACType)].Update(DACRecord);
graph.Caches[typeof(DACType)].Persist(DACRecord, PXDBOperation.Update);
Update and Persist are often omitted because in many scenarios they will be called later on by other framework mechanisms. For example, if you were to do only Update on a UI field, the record won't be persisted until the user clicks the Save button.
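Applied to the loop in the question, that would look roughly like this (a sketch only; the variables come from the foreach block over the PXResultset shown above):

foreach (PXResult<SOShipment, SOShipLine, SOLine> record in results)
{
    SOShipLine shipmentLine = (SOShipLine)record;
    SOLine sOLine = (SOLine)record;

    // 1. Change the values on the record.
    shipmentLine.UnitCost = GetReturnUnitCost(sOLine.OrigOrderType, sOLine.OrigOrderNbr,
        sOLine.OrigLineNbr, sOLine.CuryInfoID);
    shipmentLine.ExtCost = shipmentLine.Qty * shipmentLine.UnitCost;

    // 2. Push the change through the graph's cache so it is tracked...
    docgraph.Caches[typeof(SOShipLine)].Update(shipmentLine);

    // 3. ...and persist it (alternatively, call docgraph.Persist() once after the loop).
    docgraph.Caches[typeof(SOShipLine)].Persist(shipmentLine, PXDBOperation.Update);
}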
Updating a value in the UI is a bit different from updating it in the cache.
The recommended approach for UI fields is to use SetValue:
cache.SetValue<DAC.DacField>(DACRecord, fieldValue);
Or use SetValueExt when you want to trigger the framework events like FieldUpdated when changing the field value:
cache.SetValueExt<DAC.DacField>(DACRecord, fieldValue);
You'll still have to update and persist the changes in cache for these too if you want the changes to stick without requiring the user to manually save the document.

Am I designing and constructing my value objects correctly?

Sorry in advance if this question is unclear. Please tell me what to change to make it a better question.
I am currently maintaining a C# WinForm system where I'm trying to learn and use DDD and CQRS principles. Vaughn Vernon's Implementing Domain Driven Design is my main DDD reference literature.
The system currently uses legacy code which makes use of Data Aware Controls.
In the Asset Inventory Context, I have designed my aggregate root Asset, which is composed of multiple value objects that are standard entries in the system.
In this Context, I'm trying to implement a use case where the user can manually register an Asset in the system.
My current implementation is the following:
From Presentation Layer:
Upon loading, the RegisterAssetForm.cs loads the existing standard entry lists of Group, ItemName, etc. through the Data Aware controls, all consisting of data rows with columns id: int and name: string.
When the user selects the desired ItemName, Group, PropertyLevel, Department, and Category, then clicks save, a command is performed:
RegisterAssetForm.cs
...
AssetInventoryApplicationService _assetInventoryServ;
...
void btnSave_Click(object sender, EventArgs e)
{
    int itemNameId = srcItemName.Value; // srcItemName is a custom control whose Value = datarow["id"]
    int groupId = srcGroup.Value;
    int categoryId = srcCategory.Value;
    int departmentId = srcDepartment.Value;
    int propLvlId = srcPropLevel.Value;
    ...
    RegisterAssetCommand cmd = new RegisterAssetCommand(itemNameId, groupId, categoryId, departmentId, propLvlId);
    _assetInventoryServ.RegisterAsset(cmd);
    ...
}
From Application Layer:
The AssetInventoryApplicationService depends on domain services.
AssetInventoryApplicationService.cs
...
IAssetRepository _assetRepo;
...
public void RegisterAsset(RegisterAssetCommand cmd)
{
    ...
    AssetFactory factory = new AssetFactory();
    AssetID newId = _assetRepo.NextId();
    Asset asset = factory.CreateAsset(newId, cmd.ItemNameId, cmd.PropertyLevelId,
        cmd.GroupId, cmd.CategoryId, cmd.DepartmentId);
    _assetRepo.Save(asset);
    ...
}
From Domain Layer:
AssetFactory.cs //not my final implementation
...
public class AssetFactory
{
    ...
    public Asset CreateAsset(AssetID id, int itemNameId, int propLvlId, int groupId, int categoryId, int departmentId)
    {
        ItemName itemName = new ItemName(itemNameId);
        PropertyLevel propLvl = new PropertyLevel(propLvlId);
        Group group = new Group(groupId);
        Category category = new Category(categoryId);
        Department department = new Department(departmentId);
        return new Asset(id, itemName, propLvl, group, category, department);
    }
    ...
}
Sample table of what fills my value objects
+------------+--------------+
| CategoryID | CategoryName |
+------------+--------------+
|          1 | Category1    |
|          2 | Category2    |
|          3 | Category3    |
|          4 | Category4    |
|          5 | Category5    |
+------------+--------------+
I know domain models must be persistence-ignorant; that's why I intend to use surrogate identities (an id field) in a Layer Supertype with my value objects, to separate the persistence concern from the domain.
The main property used to distinguish my value objects is their Name.
From the presentation layer, I send the standard entry value as an integer id corresponding to the primary key, through a command, to the application layer, which uses domain services.
Problem
* Is it right for me to pass the standard entry's id when creating the command, or should I pass the string name?
* If the id is passed, how do I construct the standard entry value object if the name is needed?
* If the name is passed, do I need to figure out the id from a repository?
* Or am I simply designing my standard entry value objects incorrectly?
Thanks for your help.
It looks to me like you may be confusing a Value Object and an Entity.
The essential difference is that an Entity needs an Id but a VO is a thing (rather than a specific thing). A telephone number in a CRM would likely be a VO, but it would likely be an Entity if you are a telephone company.
I have an example of a VO in this post, which you may find helpful.
To answer your 'Problems' more specifically:
* If you are creating some entity then it can be advantageous to pass the id in to a command. That way you already know what the id will be.
* You shouldn't be able to create an invalid value object (see the sketch after this list).
* Why can't you pass in both the name and the id? Again - not sure this is relevant to a Value Object.
* I think you have designed them incorrectly. But I can't be sure because I don't know your specific domain.
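For reference, a minimal sketch of a name-based value object (assuming the name really is what distinguishes a Category in your domain) could look like this:

using System;

// A standard-entry value object whose identity is its name.
// Equality is structural: two Category values with the same name are the same value.
public sealed class Category : IEquatable<Category>
{
    public string Name { get; }

    public Category(string name)
    {
        // An invalid value object cannot be constructed.
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("A category must have a name.", nameof(name));
        Name = name;
    }

    public bool Equals(Category other) => other != null && Name == other.Name;

    public override bool Equals(object obj) => Equals(obj as Category);

    public override int GetHashCode() => Name.GetHashCode();
}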
Hope this helps!

DDD: How do you structure or resolve more complex behaviour in a domain entity?

Assume the classic Order/OrderLine scenario.
public class Order {
    ...
    public void AddOrderLine(OrderLine ol) {
        this.OrderLines.Add(ol);
        UpdateTaxes();
    }

    private void UpdateTaxes() {
        // Traverse the order lines,
        // collect the VAT amounts etc.,
        // and update the totals.
        var newTaxes = OrderLines.SelectMany(ol => ol.GetTaxes());
        Taxes.Clear();
        Taxes.AddRange(newTaxes);
    }
}
Now, we figure that we need to handle taxes better, in different ways for customers in various countries, etc., where some require VAT to be collected and others do not.
In short, the tax rules will depend on the customer's location, the items purchased, etc. How would we do this? Should we put a lot of code into UpdateTaxes? Could we use a tax calculator factory and reference it in UpdateTaxes?
private void UpdateTaxes() {
    var taxRuleCalc = TaxRulesFactory.Get(this);
    var taxes = taxRuleCalc.Apply(this);
    Taxes.Clear();
    Taxes.AddRange(taxes);
}
Considering your broader question regarding complex behaviour in ARs, the preferred way to handle this would be to use double dispatch. Bear in mind that complex behaviour certainly can be included in the AR if that behaviour is cohesive.
However, for functionality that varies to the degree of tax or even discount calculation, where one would implement various strategies, you could opt for double dispatch:
public class Order
{
    public void ApplyTax(ITaxService taxService)
    {
        _totalTax = taxService.Calculate(TotalCost());
    }

    public void ApplyDiscount(IDiscountService discountService)
    {
        _discount = discountService.GetDiscount(_orderLines.Count, TotalCost());
    }

    public Money TotalCost()
    {
        // return sum of Cost() of order lines
    }
}
These services also should not be injected into the AR but rather passed into the relevant method.
Maybe you could extract UpdateTaxes into a separate class that is responsible for tax calculation against a particular order, and that itself chooses an appropriate strategy (a separate strategy class) depending on the customer, the order, etc. I feel that tax calculation is a separate responsibility here.
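As a rough illustration of that idea (the names and rates below are made up, not taken from the question), the calculator could select a strategy and be handed the order's data rather than being injected into the aggregate:

public interface ITaxStrategy
{
    decimal Calculate(decimal taxableAmount);
}

public sealed class VatStrategy : ITaxStrategy
{
    private readonly decimal _rate;
    public VatStrategy(decimal rate) => _rate = rate;
    public decimal Calculate(decimal taxableAmount) => taxableAmount * _rate;
}

public sealed class NoTaxStrategy : ITaxStrategy
{
    public decimal Calculate(decimal taxableAmount) => 0m;
}

public sealed class TaxCalculator
{
    // Chooses a strategy from whatever facts matter (country, item types, ...).
    public decimal CalculateFor(string customerCountry, decimal taxableAmount)
    {
        ITaxStrategy strategy = customerCountry == "DE"
            ? new VatStrategy(0.19m)             // example rate only
            : (ITaxStrategy)new NoTaxStrategy(); // e.g. a customer outside the VAT area
        return strategy.Calculate(taxableAmount);
    }
}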
You might also have a think about whether the concept of Tax and the concept of Orders need to be located within the same bounded context. It perhaps seems logical or at least implicit that when you're in the process of creating an Order you will want to know Tax due, but does this necessarily apply in your domain?
I'm not saying it does or it doesn't, by the way -- I'm simply saying think about it for your particular domain. Very often when the model seems awkward to represent it's because it mixes concerns that don't belong together.

CRM PlugIn Pass Variable Flag to New Execution Pipeline

I have records that have an index attribute to maintain their position in relation to each other.
I have a plugin that performs a renumbering operation on these records when an index is changed or a new record is created. There are specific rules that apply to the items at the first and last positions in the list.
If a new (or changed existing) item is inserted into the middle of the list (not technically the middle, just somewhere between start and end), a renumbering kicks off to make room for the record.
This renumbering process fires in a new execution pipeline: we are updating record D, and when I tell record E to change (to make room for D), that of course fires the plugin on the Update message.
This renumbering is fine until we reach the end of the list, where the plugin then gets into a loop with the first business rule, which maintains the first and last records differently.
So I am trying to think of ways to pass a flag to the execution context spawned by the renumbering process so the recursion skips the boundary edge business rules if IsRenumbering == true.
My thoughts / ideas:
I have thought of using a Depth check > 1, but that isn't a reliable value, as I can't explicitly turn it on or off. It may happen to work, but that is not engineering a solid solution - it is hoping nothing goes bump. Further, a colleague far more knowledgeable than I said that when a workflow calls a plugin the depth value is off and can't be trusted.
All my variables are scoped at the Execute level so as to avoid variable pollution at the class level. However, if I had a dictionary, tuple, or something at the class level where one value was the thread id and the other the flag value, then perhaps my subsequent execution context could check whether the same owning thread id had any values entered.
Any thoughts or other ideas on how to pass context information to a new pipeline would be greatly appreciated.
Per Nicknow's suggestion I tried SharedVariables, but they seem to be going out of scope:
First time firing, post-op:
if (base.Stage == EXrmPluginStepStage.PostOperation)
{
    ...snip...
    foreach (var item in RenumberSet)
    {
        Context.ParentContext.SharedVariables[recordrenumbering] = "googly";
        Entity renumrec = new Entity("abcd") { Id = item.Id };

        #region We either add or subtract indexes based upon sortdir
        ...snip...
        renumrec["abc_indexfield"] = TmpIdx + 1;
        break;
        .....snip.....
        #endregion

        OrganizationService.Update(renumrec);
    }
}
Now we come into the pre-operation stage of the recursion kicked off by the post-op OrganizationService.Update(renumrec) above, and based upon this check it seems the shared variable didn't carry over:
if (!Context.SharedVariables.Contains(recordrenumbering))
{
    //Trace.Trace("Null Set");
    //Context.SharedVariables[recordrenumbering] = IsRenumbering;
    Context.SharedVariables[recordrenumbering] = "Null Set";
}
Throwing an InvalidPluginExecutionException reveals:
Sanity Checks:
Depth : 2
Entity: ...
Message: Update
Stage: PreOperation [20]
User: 065507fe-86df-e311-95fe-00155d050605
Initiating User: 065507fe-86df-e311-95fe-00155d050605
ContextEntityName: ....
ContextParentEntityName: ....
....
IsRenumbering: Null Set
What you are looking for is IExecutionContext.SharedVariables. Whatever you add here is available throughout the entire transaction. Since you'll have child pipelines, you'll want to look at the ParentContext for the value. This can all get a little tricky, so be sure to do a lot of testing - I've run into many issues with SharedVariables and looping operations in Dynamics CRM.
Here is some sample (very untested) code to get you started.
public static bool GetIsRenumbering(IPluginExecutionContext pluginContext)
{
    var keyName = "IsRenumbering";
    var ctx = pluginContext;
    while (ctx != null)
    {
        if (ctx.SharedVariables.Contains(keyName))
        {
            return (bool)ctx.SharedVariables[keyName];
        }
        else ctx = ctx.ParentContext;
    }
    return false;
}

public static void SetIsRenumbering(IPluginExecutionContext pluginContext)
{
    var keyName = "IsRenumbering";
    var ctx = pluginContext;
    ctx.SharedVariables.Add(keyName, true);
}
A very simple solution: add a bit field to the entity called "DisableIndexRecalculation." When your first plugin runs, make sure to set that field to true for all of your updates. In the same plugin, check to see if "DisableIndexRecalculation" is set to true: if so, set it to null (by removing it from the TargetEntity entirely) and stop executing the plugin. If it is null, do your index recalculation.
Because you are immediately removing the field from the TargetEntity if it is true the value will never be persisted to the database so there will be no performance penalty.
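A sketch (untested) of that approach; "new_disableindexrecalculation" is a hypothetical boolean attribute added to the entity for this purpose, and the helper would be called from the plugin's Execute method:

using Microsoft.Xrm.Sdk;

public static class IndexRecalculationGuard
{
    private const string Flag = "new_disableindexrecalculation";

    // Call at the top of the plugin with the Target entity. Returns true when the update
    // was issued by the renumbering loop itself; the flag is stripped so it is never
    // persisted, and the caller should return without recalculating.
    public static bool ShouldSkip(Entity target)
    {
        if (target.GetAttributeValue<bool?>(Flag) == true)
        {
            target.Attributes.Remove(Flag);
            return true;
        }
        return false;
    }

    // Call on each record the renumbering loop updates, before IOrganizationService.Update.
    public static void MarkAsRenumbering(Entity updateRecord)
    {
        updateRecord[Flag] = true;
    }
}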
