DDD - Repository Pattern returning db keys?

There is a big design flaw here, but I'm having trouble solving it:
The business need is a little involved so I'll try to keep this simple.
We have a table with purchases, and a table for returns. When a return is made, we have to match that return to the oldest purchase in the db, and record that in a "returns applied" table.
So, when I insert a Return, within that transaction, I need to apply the return to a purchase record.
As it stands now, we have a service that calls the repository for the insert. The service needs to know what the key is of that inserted record, so that it can finish the transaction by inserting an "applied" record using that key.
We're basically stuck because my understanding is that a repository should not return this kind of data. Doesn't this defeat the idea of the Repository being a collection?
What is the alternative?
CLARIFICATION:
We have a Purchase table, a Return table, and an Applied table
The applied table looks like this
purchaseId returnId qtyReturned
So when a return is inserted I need the id of a purchase (decided by some business rules) and the id of the newly inserted return.
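For illustration only, here is a minimal C# sketch of that flow (every type and member name below is made up, not our actual code); it marks the exact point where the service needs the generated key of the inserted Return:

// Hypothetical sketch of the flow described above; all names are illustrative.
public class Return        { public int Id; public int Quantity; }
public class Purchase      { public int Id; }
public class AppliedReturn { public int PurchaseId; public int ReturnId; public int QtyReturned; }

public interface IReturnRepository   { void Add(Return ret); }             // assumed to generate/populate Return.Id
public interface IPurchaseRepository { Purchase GetOldestOpenPurchase(); } // "oldest purchase" per the business rules
public interface IAppliedRepository  { void Add(AppliedReturn applied); }

public class ReturnService
{
    private readonly IReturnRepository _returns;
    private readonly IPurchaseRepository _purchases;
    private readonly IAppliedRepository _applied;

    public ReturnService(IReturnRepository returns, IPurchaseRepository purchases, IAppliedRepository applied)
    {
        _returns = returns;
        _purchases = purchases;
        _applied = applied;
    }

    // Everything below runs inside one transaction.
    public void RecordReturn(Return ret)
    {
        _returns.Add(ret);                                    // after this, the service needs ret's generated key
        Purchase oldest = _purchases.GetOldestOpenPurchase();
        _applied.Add(new AppliedReturn
        {
            PurchaseId = oldest.Id,
            ReturnId = ret.Id,                                // <-- the key the question is about
            QtyReturned = ret.Quantity
        });
    }
}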

Based on your question, I'd suggest something like the following:
public class Purchase {
    // PurchaseRepository plays the role of collaborator
    public Return returnMe(PurchaseRepository purchaseRepository, int quantity) {
        return purchaseRepository.returnPurchase(this, quantity);
    }
}
public class PurchaseRepositoryImpl implements PurchaseRepository {
    // the returned Purchase object has its id set
    public Purchase getOldestPurchase() {
        // logic to get the oldest purchase
    }
    public Return returnPurchase(Purchase purchase, int quantity) {
        // logic to save a return record
        // Some ORM frameworks return ids when a record is saved; Hibernate or NHibernate (.NET) fulfills this requirement
        // Then purchaseId, returnId and quantity are saved in the "returns applied" table
    }
}
public class PurchaseServiceImpl implements PurchaseService {
    // Injected through dependency injection
    private PurchaseRepository purchaseRepository;
    // Spring transaction boundary, for example
    // Notice that returnPurchase returns a Return object
    public Return returnPurchase(int quantity) {
        Purchase purchase = purchaseRepository.getOldestPurchase();
        return purchase.returnMe(purchaseRepository, quantity);
    }
}

Related

How to extend the core customer table?

I created a custom table with additional settings for customers. Next, I added a field to the customer core table in which I'd like to store the chosen id per customer. I extended the CustomerDefinition with EntityExtensionInterface:
public function extendFields(FieldCollection $collection): void
{
    $collection->add(
        (new OneToOneAssociationField(
            'customerSetting',
            'customer_setting',
            'id',
            WdtCustomerSettingsDefinition::class,
            true
        ))->addFlags(new Inherited())
    );
}

public function getDefinitionClass(): string
{
    return CustomerDefinition::class;
}
When I manually manipulate the customer table, with an id from my custom table in the added field, I can retrieve and use the settings from my custom table in the storefront.
For the backend I created a single select to the entity custom_table,
<sw-entity-single-select entity="wdt_customer_settings" v-model="customer.extensions.customerSetting.id" >
</sw-entity-single-select>
and with the manually 'injected' id from the custom table, this choice indeed shows as 'selected'. However, changing to another choice and saving results in an error: Customer could not be saved.
What am I missing?
You should always look at the ajax responses; they contain the explicit error that occurred. Did you add some boilerplate code to check that your extension is always available? Otherwise it would cause issues on new entities.

Document not available in query directly after store

I'm trying to store a "Role" object and then get a list of Roles, as shown here:
public class Role
{
    public Guid RoleId { get; set; }
    public string RoleName { get; set; }
    public string RoleDescription { get; set; }
}

// Store function:
private void StoreRole(Role role)
{
    using (var docSession = docStore.OpenSession())
    {
        docSession.Store(role);
        docSession.SaveChanges();
    }
}

// then StoreRole returns, and another function calls this:
public List<Role> GetRoles()
{
    using (var docSession = docStore.OpenSession())
    {
        var Roles = from roles in docSession.Query<Role>() select roles;
        return Roles.ToList();
    }
}
However, in GetRoles I am missing the last inserted record/document. If I wait 200 ms and then call this function, the item is there.
So I am out of sync?!
How can I solve this, or alternately how could I know when the result is in the document store for querying?
I've used transactions, but cannot figure this out. Update and delete are just fine, but when inserting I need to delay my 'List' call.
You are treating RavenDB as if it is a relational database, and it isn't. Load and Store are ACID operations in RavenDB, Query is not. Indexes (necessary for queries) are updated asynchronously, and in fact, temporary indexes may have to be built from scratch when you do a session.Query<T>() without a durable index specified. So, if you are trying to query for information you JUST stored, or if you are doing the FIRST query that requires a temporary index to be created, you probably won't get the data you expect.
There are methods of customizing your query to wait for non-stale results but you shouldn't lean on these too much because they're indicative of a bad design - it is better to figure out a better way to do the same thing in a way that embraces eventual consistency, either changing your model (so you get consistency via Load/Store - perhaps you could have one document that defines ALL of the roles in a list?) or by changing the application flow so you don't need to Store and then immediately Query.
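For example, here is a minimal sketch of the "all roles in one document" idea (the RolesDocument type and the "roles/all" id are made up for illustration; docStore and Role come from the question). Load and Store by id are ACID and need no index:

// Hypothetical single document holding every role.
public class RolesDocument
{
    public string Id { get; set; }          // a well-known id such as "roles/all" (illustrative)
    public List<Role> Roles { get; set; }

    public RolesDocument()
    {
        Roles = new List<Role>();
    }
}

private void StoreRole(Role role)
{
    using (var docSession = docStore.OpenSession())
    {
        var doc = docSession.Load<RolesDocument>("roles/all");
        if (doc == null)
        {
            doc = new RolesDocument { Id = "roles/all" };
            docSession.Store(doc);
        }
        doc.Roles.Add(role);
        docSession.SaveChanges();           // the session tracks the loaded/stored document, so the change is persisted
    }
}

public List<Role> GetRoles()
{
    using (var docSession = docStore.OpenSession())
    {
        var doc = docSession.Load<RolesDocument>("roles/all");
        return doc != null ? doc.Roles : new List<Role>();
    }
}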
An additional way of solving this is to query the index with WaitForNonStaleResultsAsOfLastWrite() turned on inside the save function. That way when the save is completed the index will be updated to at least include the change you just made.
You can read more about this here
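As a sketch of that second approach (reusing docStore and Role from the question; double-check that this customization exists in your RavenDB client version):

public List<Role> GetRolesWaitingForIndex()
{
    using (var docSession = docStore.OpenSession())
    {
        return docSession.Query<Role>()
            // Block until the index has caught up with the last write seen by this document store.
            .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
            .ToList();
    }
}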

To aggregate or not - order/orderline

In Domain-Driven Design, Order and OrderLines are usually treated as an aggregate, with Order as the root. Normally, once an order is created, it cannot be changed. In my case, however, it can: each order has a state that determines whether the order can be changed or not.
In this case, are both Order and OrderLines their own “aggregate root”? I need to be able to update order lines, so I figure they should have their own repository. But I do not want to retrieve order lines, or persist them, without the order. So this indicates that there is still an aggregate where Order is the root, with a factory method to create order lines (Order.CreateOrderLine(quantity, text, …)).
Another approach could be to update the Order when the order lines collection has been modified, and then call UpdateOrder(Order). I would need some way of detecting that only the collection should be updated, and not the Order itself (using Entity Framework).
What do you think?
Order lines shouldn't be an aggregate of their own, and don't need their own repository. Your aggregate should be set up something like this:
public class Order
{
    private List<OrderLine> _orderLines;
    private OrderState _orderState;

    public IEnumerable<OrderLine> OrderLines
    {
        get { return _orderLines.AsReadOnly(); }
    }

    public OrderState Status
    {
        get { return _orderState; }
    }

    public void DeleteOrderLine(Guid orderLineID)
    {
        if (Status.IsProcessed)
            throw new InvalidOperationException("You cannot delete items from a processed order");

        OrderLine lineToRemove = _orderLines.Find(ol => ol.Id == orderLineID);
        _orderLines.Remove(lineToRemove);
    }

    public void AddOrderLine(Product product, int quantity)
    {
        if (Status.IsProcessed)
            throw new InvalidOperationException("You cannot add items to a processed order");

        OrderLine line = new OrderLine(product.ProductID, (product.Price * quantity), quantity);
        _orderLines.Add(line);
    }
}
Entity framework has some built in features to detect changes to your object. This is explained here (conveniently with an order/order lines example): http://msdn.microsoft.com/en-us/library/dd456854.aspx
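As a rough sketch of that change-detection idea (StoreContext, OrderId and the EF mapping are assumptions here, not taken from the answer above):

// Hypothetical usage; assumes Order/OrderLine are mapped in an EF context called StoreContext.
public void AddLineToExistingOrder(Guid orderId, Product product)
{
    using (var db = new StoreContext())
    {
        Order order = db.Orders
            .Include("OrderLines")            // load the aggregate (order + lines) in one query
            .Single(o => o.OrderId == orderId);

        order.AddOrderLine(product, 2);       // mutate only through the aggregate root

        // EF's change tracker records which entities were added or modified,
        // so SaveChanges writes only the new/changed order lines.
        db.SaveChanges();
    }
}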

Subsonic - Where do I include my business logic or custom validation?

I'm using Subsonic 2.2.
I tried asking this question another way but didn't get the answer I was looking for.
Basically, I usually include validation at page level or in the code-behind for my user controls or aspx pages. However, I have seen some small bits of info advising that this can be done within the partial classes generated by Subsonic.
So my question is: where do I put these? Are there particular events I should add my validation / business logic into, such as inserting or updating? If so, and validation isn't met, how do I stop the insert or update? And if anyone has a code example of how this looks, it would be great to start me off.
Any info greatly appreciated.
First you should create a partial class for the DAL object you want to use.
In my project I have a folder Generated where the generated classes live, and another folder Extended.
Let's say you have a Subsonic-generated class Product. Create a new file Product.cs in your Extended (or whatever) folder and create a partial class Product, ensuring that the namespace matches the namespace of the Subsonic-generated classes.
namespace Your.Namespace.DAL
{
    public partial class Product
    {
    }
}
Now you have the ability to extend the Product class. The interesting part is that Subsonic offers some methods to override.
namespace Your.Namespace.DAL
{
    public partial class Product
    {
        public override bool Validate()
        {
            ValidateColumnSettings();
            if (string.IsNullOrEmpty(this.ProductName))
                this.Errors.Add("ProductName cannot be empty");
            return Errors.Count == 0;
        }

        // another way
        protected override void BeforeValidate()
        {
            if (string.IsNullOrEmpty(this.ProductName))
                throw new Exception("ProductName cannot be empty");
        }

        protected override void BeforeInsert()
        {
            this.ProductUUID = Guid.NewGuid().ToString();
        }

        protected override void BeforeUpdate()
        {
            this.Total = this.Net + this.Tax;
        }

        protected override void AfterCommit()
        {
            DB.Update<ProductSales>()
                .Set(ProductSales.ProductName).EqualTo(this.ProductName)
                .Where(ProductSales.ProductId).IsEqualTo(this.ProductId)
                .Execute();
        }
    }
}
In response to Dan's question:
First, have a look here: http://github.com/subsonic/SubSonic-2.0/blob/master/SubSonic/ActiveRecord/ActiveRecord.cs
In this file lives the whole logic I showed in my other post.
Validate: Called during Save(); if Validate() returns false, an exception is thrown.
It is only called if the property ValidateWhenSaving (which is a constant, so you have to recompile SubSonic to change it) is true (the default).
BeforeValidate: Called during Save() when ValidateWhenSaving is true. Does nothing by default.
BeforeInsert: Called during Save() if the record is new. Does nothing by default.
BeforeUpdate: Called during Save() if the record is not new (i.e. it already exists). Does nothing by default.
AfterCommit: Called after successfully inserting/updating a record. Does nothing by default.
In my Validate() example, I first let the default ValidateColumnSettings() method run, which will add errors like "Maximum string length exceeded for column ProductName" if the product name is longer than the value defined in the database. Then I add another error string if ProductName is empty, and return false if the overall error count is greater than zero.
This will throw an exception during Save() so you can't store the record in the DB.
I would suggest you call Validate() yourself; if it returns false, you can display the elements of this.Errors at the bottom of the page (the easy way), or (more elegantly) you can create a Dictionary<string, string> where the key is the column name and the value is the reason.
private Dictionary<string, string> CustomErrors = new Dictionary<string, string>();

public override bool Validate()
{
    this.CustomErrors.Clear();
    ValidateColumnSettings();
    if (string.IsNullOrEmpty(this.ProductName))
        this.CustomErrors.Add(this.Columns.ProductName, "cannot be empty");
    if (this.UnitPrice < 0)
        this.CustomErrors.Add(this.Columns.UnitPrice, "has to be 0 or bigger");
    return this.CustomErrors.Count == 0 && Errors.Count == 0;
}
Then if Validate() returns false you can add the reason directly beside/below the right field in your web page.
If Validate() returns true you can safely call Save(), but keep in mind that Save() could still throw other errors during persistence, like "Duplicate Key ...".
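A hypothetical usage sketch from a page's code-behind (the button handler and control names are made up, and it assumes the CustomErrors dictionary above is exposed publicly):

protected void btnSave_Click(object sender, EventArgs e)
{
    Product product = new Product();
    product.ProductName = txtProductName.Text;
    product.UnitPrice = decimal.Parse(txtUnitPrice.Text);

    if (!product.Validate())
    {
        // Show each validation reason (column name + message) on the page
        // and stop: Save() is never called, so nothing is inserted or updated.
        foreach (KeyValuePair<string, string> error in product.CustomErrors)
            lblErrors.Text += error.Key + " " + error.Value + "<br/>";
        return;
    }

    try
    {
        product.Save();
    }
    catch (Exception ex)
    {
        // Save() can still fail for persistence reasons, e.g. a duplicate key.
        lblErrors.Text = ex.Message;
    }
}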
Thanks for the response, but can you confirm this for me, as I'm a little confused: if you're validating that the column (ProductName) value is an empty or NULL string within Validate() or BeforeValidate(), doesn't that mean the insert/update has already been actioned? Otherwise how would it know that you've tried to insert or update a null value from the UI/aspx fields on the page into that column?
Also, within ASP.NET insert or update events we use e.Cancel = true to stop the insert/update; if BeforeValidate fails, does it automatically stop the insert or update?
If that is the case, isn't it easier to add page-level validation to stop the insert or update being fired in the first place?
I guess I'm a little confused about the lifecycle of these methods and when they come into play.

Is it necessary to cache the data for a lazy loaded property with Subsonic 3 simple repository?

I have added a lazy-loaded property called Orders to my Customer class. Do you think it's wise to cache the data in a private field?
private IList<Order> _orders;

[SubSonicIgnore]
public IList<Order> Orders
{
    get
    {
        if (_orders == null)
        {
            var repository = new SimpleRepository("MyConnectionString", SimpleRepositoryOptions.None);
            _orders = repository.Find<Order>(x => x.CustomerId == this.CustomerId);
        }
        return _orders;
    }
}
Or is it better to not cache it like so:
[SubSonicIgnore]
public IList<Order> Orders
{
    get
    {
        var repository = new SimpleRepository("MyConnectionString", SimpleRepositoryOptions.None);
        return repository.Find<Order>(x => x.CustomerId == this.CustomerId);
    }
}
The reason I'm asking is that I think it's a good idea to cache the data for performance's sake, but at the same time I'm afraid that caching the data can cause it to become out of sync if some other process inserts/deletes records in the database.
In your case, your cached Orders will exist for the lifetime of your Customer object. If you needed to clear the cached orders, you could simply requery for your Customer.
If I were you, I'd add an additional property whose name makes it clear that there is caching, add a custom cacheScope object (like TransactionScope: the cache only exists as long as the scope exists), or specify in the documentation which properties will cache child objects and for how long.
I would not remove caching. I'd leave it in there as an additional property. You'll have it if you need it.
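A minimal sketch of that suggestion (the CachedOrders name is made up), keeping the uncached property from the question and adding a clearly named cached variant:

private IList<Order> _cachedOrders;

// Uncached: always hits the database.
[SubSonicIgnore]
public IList<Order> Orders
{
    get
    {
        var repository = new SimpleRepository("MyConnectionString", SimpleRepositoryOptions.None);
        return repository.Find<Order>(x => x.CustomerId == this.CustomerId);
    }
}

// Cached for the lifetime of this Customer instance; the name makes the caching explicit.
[SubSonicIgnore]
public IList<Order> CachedOrders
{
    get
    {
        if (_cachedOrders == null)
            _cachedOrders = this.Orders;
        return _cachedOrders;
    }
}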
Thanks for showing your caching logic. Here's mine. In my case, the life expectancy of my parent object is short, I don't expect >100 records of total parent/child data, and I do expect that all the child data will be used. If my data changes, then I'll need to readdress the caching logic I use in this particular instance:
private static List<HostHeader> _cachedHostHeaders;

public List<HostHeader> CachedHostHeaders
{
    get
    {
        if (_cachedHostHeaders == null)
            _cachedHostHeaders = this.HostHeaders.ToList();
        return _cachedHostHeaders.Where(i => i.SiteID == this.ID).ToList();
    }
}
