Performance issue when validating an entity using a value object - domain-driven-design

I have the following value object, which validates CustCode with some expensive database operations.
public class CustCode : ValueObject<CustCode>
{
    private CustCode(string code) { Value = code; }

    public static Result<CustCode> Create(string code)
    {
        if (string.IsNullOrWhiteSpace(code))
            return Result.Failure<CustCode>("Code should not be empty");

        // Validate that the value is still valid against the database. Expensive and slow.
        if (!ValidateDB(code)) // or web API calls
            return Result.Failure<CustCode>("Database validation failed.");

        return Result.Success<CustCode>(new CustCode(code));
    }

    public string Value { get; }

    // other methods omitted ...
}
public class MyEntity
{
    CustCode CustCode { get; }
    // ....
}
It works fine when there are only one or a few entity instances of the type. However, it becomes very slow for methods like GetAll(), which return many entities of the type.
public async IAsyncEnumerable<MyEntity> GetAll()
{
    string line;
    using var sr = File.OpenText(_config.FileName);
    while ((line = await sr.ReadLineAsync()) != null)
    {
        yield return new MyEntity(CustCode.Create(line).Value); // CustCode.Create called many times
    }
}
Since the data in the file was already validated before saving, it isn't actually necessary to validate it again. Should another Create function be added that skips the validation? What's the idiomatic DDD way to do this?

I generally attempt not to have the domain call out to retrieve any additional data. Everything the domain needs to do its job should be passed in.
Since value objects represent immutable state, it stands to reason that once one has been created its values are fine. To this end, the initial database validation can be performed in the integration/application "layer", and the CustCode can then be created using only the value(s) provided.
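A minimal sketch of that separation, assuming a hypothetical ICustCodeChecker port over the database/web API lookup and the Result type from the question - the use case runs the expensive check once, and CustCode.Create keeps only the checks it can do locally:

using System.Threading.Tasks;

// Hypothetical application-layer port over the expensive lookup.
public interface ICustCodeChecker
{
    Task<bool> ExistsAsync(string code);
}

public class RegisterCustomerUseCase
{
    private readonly ICustCodeChecker _checker;

    public RegisterCustomerUseCase(ICustCodeChecker checker) => _checker = checker;

    public async Task<Result<MyEntity>> Execute(string code)
    {
        // The expensive, infrastructure-dependent check runs once, up front.
        if (!await _checker.ExistsAsync(code))
            return Result.Failure<MyEntity>("Database validation failed.");

        // CustCode.Create now only enforces invariants it can verify locally.
        Result<CustCode> custCodeOrError = CustCode.Create(code);
        if (custCodeOrError.IsFailure)
            return Result.Failure<MyEntity>(custCodeOrError.Error);

        return Result.Success(new MyEntity(custCodeOrError.Value));
    }
}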

Just wanted to add an additional point to @Eben Roux's answer:
In many cases the validation result from a database query is dependent on when you run the query.
For example, when you want to check whether a phone number exists or whether some product is in stock: the answers to those queries can change any second, and thus they are not suited to allow or prevent the creation of a value object.
You may create a "valid" object that (unknowingly) becomes invalid the very next second (or the other way around). So why bother running an expensive validation if its result is not reliable?

What type of data should be passed to domain events?

I've been struggling with this for a few days now, and I'm still not clear on the correct approach. I've seen many examples online, but each one does it differently. The options I see are:
Pass only primitive values
Pass the complete model
Pass new instances of value objects that refer to changes in the domain/model
Create a specific DTO/object for each event with the data.
This is what I am currently doing, but it doesn't convince me. The example is in PHP, but I think it's perfectly understandable.
MyModel.php
class MyModel {
    //...
    private MediaId $id;
    private Thumbnails $thumbnails;
    private File $file;
    //...
    public function delete(): void
    {
        $this->record(
            new MediaDeleted(
                $this->id->asString(),
                [
                    'name' => $this->file->name(),
                    'thumbnails' => $this->thumbnails->toArray(),
                ]
            )
        );
    }
}
MediaDeleted.php
final class MediaDeleted extends AbstractDomainEvent
{
    public function name(): string
    {
        return $this->payload()['name'];
    }

    /**
     * @return array<ThumbnailArray>
     */
    public function thumbnails(): array
    {
        return $this->payload()['thumbnails'];
    }
}
As you can see, I am passing the ID as a string, the filename as a string, and an array of the Thumbnail value object's properties to the MediaDeleted event.
How do you see it? What type of data is preferable to pass to domain events?
Updated
@pgorecki's answer has convinced me, so I will put up an example to confirm whether this way is correct, in order not to change too much.
It would now look like this.
public function delete(): void
{
    $this->record(
        new MediaDeleted(
            $this->id,
            new MediaDeletedEventPayload($this->file->copy(), $this->thumbnail->copy())
        )
    );
}
I'll explain a bit:
The aggregate's ID is still outside the DTO, because MediaDeleted extends an abstract class that needs the ID parameter. So the only thing I'm changing is the $payload array, which becomes the MediaDeletedEventPayload DTO. To this DTO I pass a copy of the value objects related to the change in the domain; this way I'm passing objects reliably and avoiding strange behaviour if the same instance gets passed around.
What do you think about it?
A domain event is simply a data-holding structure or class (DTO), with all the information related to what just happened in the domain, and no logic. So I'd say "Create a specific DTO/object for each event with the data" is the best choice. Why don't you start with a less-is-more approach? Think about the consumers of the event and what data they might need.
Also, being able to serialize and deserialize the event objects is a good practice, since you could want to send them via a message broker (although this relates more to integration events than domain events).
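As a minimal sketch (in C# for brevity, though the shape is the same in PHP), a dedicated event class with its own payload DTO might look like this - all names here are hypothetical:

using System;
using System.Collections.Generic;

// Hypothetical payload DTO: carries copies of exactly the data consumers need.
public sealed class MediaDeletedPayload
{
    public string FileName { get; }
    public IReadOnlyList<string> ThumbnailUrls { get; }

    public MediaDeletedPayload(string fileName, IReadOnlyList<string> thumbnailUrls)
    {
        FileName = fileName;
        ThumbnailUrls = thumbnailUrls;
    }
}

// The event itself stays a plain, serializable data holder with no logic.
public sealed class MediaDeleted
{
    public string MediaId { get; }
    public MediaDeletedPayload Payload { get; }
    public DateTime OccurredOn { get; } = DateTime.UtcNow;

    public MediaDeleted(string mediaId, MediaDeletedPayload payload)
    {
        MediaId = mediaId;
        Payload = payload;
    }
}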

DDD: Should business logic which needs infra-layer access live in the application service layer, a domain service, or domain objects?

For an attribute which needs to be validated: let's say, for an entity, we have a country field as a VO.
This country field needs to be validated to be an alpha-3 code, as per some business logic required by the domain expert.
NOTE:
We need to persist this country data, since it can hold other values as well, and in the future the persisted country data may be added to, updated, or deleted.
This is just one example using a country code, which may rarely change; there can be other fields which need to be validated against persistence (e.g. validating some quantity against data in persistence), and it won't be efficient to store them all in memory or prefetch them.
Another valid example is user creation with a unique and valid domain email check, which needs a uniqueness check against persistence.
Case 1.
Doing validation in the application layer:
If we call countryRepo.getCountryByCountryAlpha3Code() in the application layer, then if the value is correct and a valid part of the system we can call createValidEntity(), and if not we can throw the error directly in the application-layer use case.
Issue:
This validation will be repeated across multiple use cases if the same check is needed elsewhere, since it is treated as an application-layer concern.
The business logic is now part of the application service layer.
Case 2
Validating the country code in its value object class, or in a domain service in the domain layer:
Doing this keeps the business logic inside the domain layer and also won't violate the DRY principle.
import { ValueObject } from '@shared/core/domain/ValueObject';
import { Result } from '@shared/core/Result';
import { Utils } from '@shared/utils/Utils';

interface CountryAlpha3CodeProps {
  value: string;
}

export class CountryAlpha3Code extends ValueObject<CountryAlpha3CodeProps> {
  // Case-insensitive string. Only printable ASCII allowed. (Non-printable
  // characters like carriage returns, tabs, line breaks, etc. are not allowed.)
  get value(): string {
    return this.props.value;
  }

  private constructor(props: CountryAlpha3CodeProps) {
    super(props);
  }

  public static create(value: string): Result<CountryAlpha3Code> {
    return Result.ok<CountryAlpha3Code>(new CountryAlpha3Code({ value: value }));
  }
}
Is it good to call the repository from inside the domain layer (a domain service, or a VO (not recommended)), given that the dependency flow would change?
If we trigger an event, how do we make it synchronous?
What are some better ways to solve this?
export default class UseCaseClass implements IUseCaseInterface {
  constructor(private readonly _repo: IRepo, private readonly countryCodeRepo: ICountryCodeRepo) {}

  async execute(request: dto): Promise<dtoResponse> {
    const someOtherKeyorError = KeyEntity.create(request.someOtherDtoKey);
    const countryOrError = CountryAlpha3Code.create(request.country);
    const dtoResult = Result.combine([someOtherKeyorError, countryOrError]);
    if (dtoResult.isFailure) {
      return left(Result.fail<void>(dtoResult.error)) as dtoResponse;
    }
    try {
      // -> Here we are just calling the repo
      const isValidCountryCode = await this.countryCodeRepo.getCountryCodeByAlpha2Code(countryOrError.getValue()); // returns a boolean value
      if (!isValidCountryCode) {
        return left(new ValidCountryCodeError.CountryCodeNotValid(countryOrError.getValue())) as dtoResponse;
      }
      const dataOrError = MyEntity.create({
        ...request,
        key: someOtherKeyorError.city.getValue(),
        country: countryOrError.getValue(),
      });
      const commandResult = await this._repo.save(dataOrError.getValue());
      return right(Result.ok<any>(commandResult));
    } catch (err: any) {
      return left(new AppError.UnexpectedError(err)) as dtoResponse;
    }
  }
}
In the above application-layer code, consider this part:

const isValidCountryCode = await this.countryCodeRepo.getCountryCodeByAlpha2Code(countryOrError.getValue()); // returns a boolean value
if (!isValidCountryCode) {
  return left(new ValidCountryCodeError.CountryCodeNotValid(countryOrError.getValue())) as dtoResponse;
}

Is it right to call the countryCodeRepo and fetch the result here, or should this part be moved to a domain service which then checks the validity of the CountryCode VO?
UPDATE:
After exploring, I found this article by Vladimir Khorikov, which seems close to what I was looking for.
In his view some domain-logic leakage is fine, but I feel it can still leave the value object validation in an invalid state if some other use case creates the VO/entity without knowing that a persistence check is necessary for it.
I am still confused about the right approach.
In my opinion, the conversion from String to ValueObject does not belong to the Business Logic at all. The Business Logic has a public contract that is invoked from the outside (API layer or presentation layer maybe). The contract should already expect Value Objects, not raw strings. Therefore, whoever is calling the business logic has to figure out how to obtain those Value Objects.
Regarding the implementation of the Country Code value object, I would question if it is really necessary to load the country codes from the database. The list of country codes very rarely changes. The way I've solved this in the past is simply hardcoding the list of country codes inside the value object itself.
Sample code in pseudo-C#, but you should get the point:
public class CountryCode : ValueObject
{
    // Static definitions to be used in code like:
    // var myCountry = CountryCode.France;
    public static readonly CountryCode France = new CountryCode("FRA");
    public static readonly CountryCode China = new CountryCode("CHN");
    [...]

    public static readonly CountryCode[] AllCountries = new[] {
        France, China, ...
    };

    public string ThreeLetterCode { get; }

    private CountryCode(string threeLetterCountryCode)
    {
        ThreeLetterCode = threeLetterCountryCode;
    }

    public static CountryCode Parse(string code)
    {
        // [...] handle nulls, empties, etc.
        var exists = AllCountries.FirstOrDefault(c => c.ThreeLetterCode == code);
        if (exists == null)
            throw new ArgumentException("Unknown country code: " + code);
        return exists;
    }
}
Following this approach, you can make a very useful and developer-friendly CountryCode value object. In my actual solution, I had both the 2- and 3-letter codes, plus display names in English for logging purposes only (for presentation purposes, the presentation layer can look up the translation based on the code).
If loading the country codes from the DB is valuable for your scenario, it's still very likely that the list changes very rarely, so you could for example load a static list in the value object itself at application start up and then refresh it periodically if the application runs for very long.
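A minimal sketch of that start-up load and periodic refresh, assuming a hypothetical ICountryCodeSource abstraction over the database query:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical abstraction over the country-code query.
public interface ICountryCodeSource
{
    Task<IReadOnlyCollection<string>> LoadAllThreeLetterCodesAsync();
}

public static class CountryCodeCache
{
    // The whole set is swapped atomically on refresh, so readers never see a partial list.
    private static volatile HashSet<string> _codes = new HashSet<string>();

    // Call at application start-up, and again from a timer if the app runs for very long.
    public static async Task RefreshAsync(ICountryCodeSource source)
    {
        var fresh = await source.LoadAllThreeLetterCodesAsync();
        _codes = new HashSet<string>(fresh, StringComparer.OrdinalIgnoreCase);
    }

    // The value object can then validate cheaply and synchronously.
    public static bool IsKnown(string threeLetterCode) => _codes.Contains(threeLetterCode);
}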

How do I get a parameter to not just display in a component, but also be recognized inside of OnInitializedAsync()?

I'm working on a Blazor server-side project, and I have a component that gets passed a model (pickedWeek) as a parameter. I can use the model fine inline with the HTML, but OnInitializedAsync always sees the model as null.
I have passed native types in as parameters this way, from the page into a component, without an issue. I use a NullWeek as a default parameter, so the number used in OnInitializedAsync only ever appears to come from the NullWeek. In case it's related: there is a sibling component that returns the Week model to the page through an .InvokeAsync call, where StateHasChanged() is called after the update. It appears that the new Week does get updated on the problem component, but OnInitializedAsync either doesn't see it or just never fires again - which maybe is my problem, but I didn't think it worked that way.
For instance, the code below will always show "FAILURE", yet it shows the correct Week.Number:
<div>@pickedWeek.Number</div>

@if (dataFromService != null)
{
    <div>SUCCESS</div>
}
else
{
    <div>FAILURE</div>
}

@code {
    [Parameter]
    public Week pickedWeek { get; set; }

    protected IEnumerable<AnotherModel> dataFromService { get; set; }

    protected override async Task OnInitializedAsync()
    {
        if (pickedWeek.Number > 0)
        {
            dataFromService = await _injectedService.MakeACall(pickedWeek.Id);
        }
    }
}
@robsta has this correct in the comments: you can use OnParametersSet for this. Then you will run into another issue, in that each rerender will set your parameters again and generate another call to your service. I've gotten around this by using a flag field along with the OnParametersSet method. Give this a shot and report back.
private bool firstRender = true;

protected override async Task OnParametersSetAsync()
{
    if (pickedWeek.Number > 0 && firstRender)
    {
        dataFromService = await _injectedService.MakeACall(pickedWeek.Id);
        firstRender = false;
        // MAYBE call this if it doesn't work without it
        StateHasChanged();
    }
}
Another alternative is to use the OnAfterRender override, which supplies a firstRender bool in the method signature, and you can apply similar logic there. I tend to prefer the first way, though, as this second way lets the component render, THEN sets the value of your list, THEN causes another rerender, which seems like more chatter than is needed. However, if your task is long-running, use this second version and build up a loading message to display while the list is null, and another to display if the service call fails. "FAILURE" is a bit misleading as you have it, since it's displayed before the call completes.
I've also found that a call to await Task.Delay(1); placed before your service call can be useful in that it breaks the UI thread loose from the service call awaiter and allows your app to render in a loading state until the data comes back.
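Combining the flag approach above with that yield might look like the following sketch (same hypothetical _injectedService as in the question):

private bool firstRender = true;

protected override async Task OnParametersSetAsync()
{
    if (pickedWeek.Number > 0 && firstRender)
    {
        firstRender = false;
        // Yield so the UI thread can render a loading state before the service call.
        await Task.Delay(1);
        dataFromService = await _injectedService.MakeACall(pickedWeek.Id);
    }
}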

EF 5.0 new object: assigning the foreign key property does not set the foreign key ID or add to the collection

EF 5.0, using code-first on existing database workflow.
The database has your basic SalesOrder and SalesOrderLine tables, with a required foreign key on the SalesOrderLine, as follows:
public class SalesOrder
{
    public SalesOrder()
    {
        this.SalesOrderLines = new List<SalesOrderLine>();
    }

    public int SalesOrderID { get; set; }
    public int CustomerID { get; set; }
    public virtual Customer Customer { get; set; }
    public virtual ICollection<SalesOrderLine> SalesOrderLines { get; set; }
}

public class SalesOrderLine
{
    public SalesOrderLine()
    {
    }

    public int SalesOrderLineID { get; set; }
    public int SalesOrderID { get; set; }
    public virtual SalesOrder SalesOrder { get; set; }
}
public SalesOrderLineMap()
{
    // Primary Key
    this.HasKey(t => t.SalesOrderLineID);

    // Table & Column Mappings
    this.ToTable("SalesOrderLine");
    this.Property(t => t.SalesOrderLineID).HasColumnName("SalesOrderLineID");
    this.Property(t => t.SalesOrderID).HasColumnName("SalesOrderID");

    // Relationships
    this.HasRequired(t => t.SalesOrder)
        .WithMany(t => t.SalesOrderLines)
        .HasForeignKey(d => d.SalesOrderID);
}
Now according to this page:
http://msdn.microsoft.com/en-us/data/jj713564
...we are told that:
The following code removes a relationship by setting the foreign key to null. Note, that the foreign key property must be nullable.

course.DepartmentID = null;

Note: If the reference is in the added state (in this example, the course object), the reference navigation property will not be synchronized with the key values of a new object until SaveChanges is called. Synchronization does not occur because the object context does not contain permanent keys for added objects until they are saved. If you must have new objects fully synchronized as soon as you set the relationship, use one of the following methods.

By assigning a new object to a navigation property. The following code creates a relationship between a course and a department. If the objects are attached to the context, the course is also added to the department.Courses collection, and the corresponding foreign key property on the course object is set to the key property value of the department.

course.Department = department;
...sounds good to me!
Now my problem:
I have the following code, and yet both of the Asserts fail - why?
using (MyContext db = new MyContext())
{
    SalesOrder so = db.SalesOrders.First();
    SalesOrderLine sol = db.SalesOrderLines.Create();
    sol.SalesOrder = so;
    Trace.Assert(sol.SalesOrderID == so.SalesOrderID);
    Trace.Assert(so.SalesOrderLines.Contains(sol));
}
Both objects are attached to the context - are they not? Do I need to do a SaveChanges() before this will work? If so, that seems a little goofy and it's rather annoying that I need to set all of the references on the objects by hand when a new object is added to a foreign-key collection.
-- UPDATE --
I should mark Gert's answer as correct, but I'm not very happy about it, so I'll wait a day or two. ...and here's why:
The following code does not work either:
SalesOrder so = db.SalesOrders.First();
SalesOrderLine sol = db.SalesOrderLines.Create();
db.SalesOrderLines.Add(sol);
sol.SalesOrder = so;
Trace.Assert(so.SalesOrderLines.Contains(sol));
The only code that does work is this:
SalesOrder so = db.SalesOrders.First();
SalesOrderLine sol = db.SalesOrderLines.Create();
sol.SalesOrder = so;
db.SalesOrderLines.Add(sol);
Trace.Assert(so.SalesOrderLines.Contains(sol));
...in other words, you have to set all of your foreign-key relationships first and then call TYPE.Add(newObjectOfTYPE); none of the relationships and foreign-key fields are wired up until that point. This means that from the time Create() is done until the time you call Add(), the object is basically in a half-baked state. I had (mistakenly) thought that since I used Create(), and since Create() returns a sub-classed dynamic object (as opposed to using "new", which returns a POCO object), the relationship wire-ups would be handled for me. It's also odd to me that you can call Add() on an object created with the new operator and it will work, even though the object is not a sub-classed type...
In other words, this will work:
SalesOrder so = db.SalesOrders.First();
SalesOrderLine sol = new SalesOrderLine();
sol.SalesOrder = so;
db.SalesOrderLines.Add(sol);
Trace.Assert(sol.SalesOrderID == so.SalesOrderID);
Trace.Assert(so.SalesOrderLines.Contains(sol));
...I mean, that's cool and all, but it makes me wonder; what's the point of using "Create()" instead of new, if you're always going to have to Add() the object in either case if you want it properly attached?
Most annoying to me is that the following fails:
SalesOrder so = db.SalesOrders.OrderBy(p => p.SalesOrderID).First();
SalesOrderLine sol = db.SalesOrderLines.Create();
sol.SalesOrder = so;
db.SalesOrderLines.Add(sol);
// NOTE: at this point in time, the SalesOrderId field has indeed been set to the SalesOrderId of the SalesOrder, and the Asserts will pass...
Trace.Assert(sol.SalesOrderID == so.SalesOrderID);
Trace.Assert(so.SalesOrderLines.Contains(sol));
sol.SalesOrder = db.SalesOrders.OrderBy(p => p.SalesOrderID).Skip(5).First();
// NOTE: at this point in time, the SalesOrderId field is ***STILL*** set to the SalesOrderId of the original SO, so the relationships are not being maintained!
// The Exception will be thrown!
if (so.SalesOrderID == sol.SalesOrderID)
    throw new Exception("salesorderid not changed");
...that seems like total crap to me, and makes me feel like Entity Framework, even in version 5, is a minefield on a rice-paper bridge. Why would the above code not be able to sync the SalesOrderID on the second assignment of the SalesOrder property? What essential trick am I missing here?
I've found what I was looking for! (and learned quite a bit along the way)
What I thought EF was generating with its dynamic proxies were "Change-Tracking Proxies". These proxy classes behave more like the old EntityObject-derived partial classes from the ADO.NET Entity Data Model.
By doing some reflection on the dynamically generated proxy classes (thanks to the information I found in this post: http://davedewinter.com/2010/04/08/viewing-generated-proxy-code-in-the-entity-framework/ ), I saw that the "get" of my relationship properties was being overridden to do lazy loading, but the "set" was not being overridden at all, so of course nothing was happening until DetectChanges was called, and DetectChanges was using the "compare to snapshot" method of detecting changes.
Further digging ultimately led me to this pair of very informative posts, and I recommend them for anyone using EF:
http://blog.oneunicorn.com/2011/12/05/entity-types-supported-by-the-entity-framework/
http://blog.oneunicorn.com/2011/12/05/should-you-use-entity-framework-change-tracking-proxies/
Unfortunately, in order for EF to generate Change-Tracking Proxies, the following must occur (quoted from the above):
The rules that your classes must follow to enable change-tracking proxies are quite strict and restrictive. This limits how you can define your entities and prevents the use of things like private properties or even private setters. The rules are: the class must be public and not sealed; all properties must have public/protected virtual getters and setters; collection navigation properties must be declared as ICollection<T> - they cannot be IList<T>, List<T>, HashSet<T>, and so on.
Because the rules are so restrictive it's easy to get something wrong, and the result is you won't get a change-tracking proxy - for example, missing a virtual, or making a setter internal.
...he goes on to mention other things about Change-Tracking proxies and why they may show better or worse performance.
In my opinion, the change-tracking proxy classes would be nice, as I'm coming from the ADO.NET Entity Model world and I'm used to things working that way, but I've also got some rather rich classes and I'm not sure I will be able to meet all of the criteria. Additionally, that second bullet point makes me rather nervous (although I suppose I could just create a unit test that loops through all of my entities, does a Create() on each, and then tests the resulting object for the IEntityWithChangeTracker interface).
By setting all of my properties to virtual in my original example I did indeed get IEntityWithChangeTracker typed proxy classes, but I felt a little ... I don't know... "dirty" ...for using them, so I think I will just have to suck it up and remember to always set both sides of my relationships when doing assignments.
Anyway, thanks for the help!
Cheers,
Chris
No, SalesOrderLine sol is not attached to the context (although it is created by a DbSet). You must do
db.SalesOrderLines.Add(sol);
to have it attached to the context in a way that the ChangeTracker executes DetectChanges() (DbSet.Add() is one of the methods that trigger this) and, thus, also executes relationship fixup, which sets sol.SalesOrderID and ensures that so.SalesOrderLines contains the new object.
So, no, you don't need to execute SaveChanges(), but the object must be added to the context and relationship fixup must have been triggered.
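As a sketch against the question's model: Add() attaches the new line and triggers fixup, and after any later re-assignment of the navigation property you can force fixup yourself with DbContext.ChangeTracker.DetectChanges():

SalesOrder so = db.SalesOrders.First();
SalesOrderLine sol = db.SalesOrderLines.Create();
sol.SalesOrder = so;
db.SalesOrderLines.Add(sol);      // attaches sol; triggers DetectChanges() and relationship fixup

sol.SalesOrder = db.SalesOrders.OrderBy(p => p.SalesOrderID).Skip(5).First();
db.ChangeTracker.DetectChanges(); // re-runs snapshot change detection, so the FK is fixed up again
Trace.Assert(sol.SalesOrderID == sol.SalesOrder.SalesOrderID); // back in sync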

IEnumerable<T>.ConvertAll & DDD

I have an interesting need for an extension method on the IEnumerable interface - the same thing as List.ConvertAll. This has been covered before here, and I found one solution here. What I don't like about that solution is that it builds a List to hold the converted objects and then returns it. I suspect LINQ wasn't available when the author wrote his article, so my implementation is this:
public static class IEnumerableExtension
{
    public static IEnumerable<TOutput> ConvertAll<T, TOutput>(this IEnumerable<T> collection, Func<T, TOutput> converter)
    {
        if (null == converter)
            throw new ArgumentNullException("converter");

        return from item in collection
               select converter(item);
    }
}
What I like better about this is that I convert 'on the fly', without having to load the entire list of TOutputs. Note that I also changed the type of the delegate - from Converter to Func. The compiled result is the same, but I think it makes my intent clearer: I don't mean for this to be ONLY type conversion.
Which leads me to my question: In my repository layer I have a lot of queries that return lists of ID's - ID's of entities. I used to have several classes that 'converted' these ID's to entities in various ways. With this extension method I am able to boil all that down to code like this:
IEnumerable<Part> GetBlueParts()
{
    IEnumerable<int> keys = GetBluePartKeys();
    return keys.ConvertAll<Part>(PartRepository.Find);
}
where the 'converter' is really the repository's Find-by-ID method. In my case, the 'converter' is potentially doing quite a bit. Does anyone see any problems with this approach?
The main issue I see with this approach is that it's completely unnecessary.
Your ConvertAll method is no different from Enumerable.Select<TSource,TResult>(IEnumerable<TSource>, Func<TSource,TResult>), which is a standard LINQ operator. There's no reason to write an extension method for something that already exists in the framework.
You can just do:
IEnumerable<Part> GetBlueParts()
{
    IEnumerable<int> keys = GetBluePartKeys();
    return keys.Select<int,Part>(PartRepository.Find);
}
Note: your method would require <int,Part> as well to compile, unless PartRepository.Find only works on int, and only returns Part instances. If you want to avoid that, you can probably do:
IEnumerable<Part> GetBlueParts()
{
    IEnumerable<int> keys = GetBluePartKeys();
    return keys.Select(i => PartRepository.Find<Part>(i)); // I'm assuming that fits your "Find" syntax...
}
Why not utilize the yield keyword (and only convert each item as it is needed)?
public static class IEnumerableExtension
{
    public static IEnumerable<TOutput> ConvertAll<T, TOutput>
        (this IEnumerable<T> collection, Func<T, TOutput> converter)
    {
        // Check eagerly: if this method itself were an iterator block, the
        // exception would not be thrown until the sequence is first enumerated.
        if (null == converter)
            throw new ArgumentNullException("converter");
        return Iterate(collection, converter);
    }

    private static IEnumerable<TOutput> Iterate<T, TOutput>(IEnumerable<T> collection, Func<T, TOutput> converter)
    {
        foreach (T item in collection)
            yield return converter(item);
    }
}
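Either way the conversion stays deferred; as a usage sketch with the question's PartRepository.Find (assumed to map an int key to a Part), keys are only looked up as the sequence is consumed:

IEnumerable<int> keys = GetBluePartKeys();
IEnumerable<Part> blueParts = keys.ConvertAll<int, Part>(PartRepository.Find);
foreach (Part part in blueParts.Take(10)) // only the first 10 keys hit the repository
{
    Console.WriteLine(part);
}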
