Orchard CMS: tracking changes to record repositories like Audit Trail tracks content items

I'm using Orchard's Audit Trail module to track changes to content items.
However, in my project I have some objects that only have a record model (no part model); I query and save them using Repository.Fetch().
The Audit Trail module overrides the Updating() and Updated() methods to track changes to a content item.
I want to know whether there is something similar for the record Repository, because I also want to track objects that only use a record model.
Example:
I have these models: InvoicePart, InvoicePartRecord, DetailRecord, ProductPart, ProductPartRecord.
An Invoice has many Details.
A Detail stores Invoice_Id and Product_Id; a salesman can add more Details to an Invoice, delete them from an Invoice, or change the Product_Id of a Detail.
Orchard Audit Trail can only track changes for Invoice and Product, not for Detail, because Detail is not a content item, so I cannot know when a Detail was created, modified or deleted.

To do this, you must add a new repository implementation that inherits from the original one, like this:
public class MyRepository<T> : Repository<T>, IRepository<T> where T : class
Here you can trigger your own events whenever you want to track changes.
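For example, a minimal sketch of such a repository, assuming Orchard 1.x where Repository<T> exposes virtual Create/Update/Delete methods; the IRecordAuditEventHandler used here is a hypothetical interface you would define and implement yourself to receive the audit events:
public class MyRepository<T> : Repository<T>, IRepository<T> where T : class {
    // Hypothetical event handler interface you define yourself; it receives the record changes to audit.
    private readonly IRecordAuditEventHandler _auditEvents;

    // Note: the base constructor signature depends on the Orchard version
    // (ISessionLocator in older releases, ITransactionManager in newer ones).
    public MyRepository(ITransactionManager transactionManager, IRecordAuditEventHandler auditEvents)
        : base(transactionManager) {
        _auditEvents = auditEvents;
    }

    public override void Create(T entity) {
        base.Create(entity);
        _auditEvents.Created(entity); // record "created" in your audit trail
    }

    public override void Update(T entity) {
        _auditEvents.Updating(entity); // capture the state before the change if needed
        base.Update(entity);
        _auditEvents.Updated(entity);
    }

    public override void Delete(T entity) {
        base.Delete(entity);
        _auditEvents.Deleted(entity);
    }
}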
Then, to replace the original implementation, you must add a new Autofac Module class:
public class DataModule : Module {
    protected override void Load(ContainerBuilder builder) {
        builder.RegisterGeneric(typeof(MyRepository<>)).As(typeof(IRepository<>)).InstancePerDependency();
    }
}
After that, you can handle these events and audit them; you will find a good sample of this in the Orchard.AuditTrail module with its ContentDefinitionProviders.
Note: if you want to specify which records have auditing enabled (this is useful if you don't want to audit all the changes on all records in your system), you can add a new class attribute:
[AttributeUsage(AttributeTargets.Class)]
public sealed class AuditTrailEnabledAttribute : Attribute {
}
Then use this attribute to check whether the record class has it, and do your auditing based on that.
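Building on the repository sketch above, the check could look like this (a sketch; you may want to cache the reflection lookup per type):
public override void Update(T entity) {
    // Only audit record types explicitly marked with [AuditTrailEnabled].
    bool auditingEnabled = typeof(T).IsDefined(typeof(AuditTrailEnabledAttribute), inherit: false);
    if (auditingEnabled) {
        _auditEvents.Updating(entity);
    }
    base.Update(entity);
    if (auditingEnabled) {
        _auditEvents.Updated(entity);
    }
}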

Related

Pull Attributes from Stock Item to SO Line custom field

I'm relatively new to customization programming in Acumatica and I have a question about the best way to pull in attribute values from the stock item screen to a custom field on the SO Line in the sales order screen.
I have done something similar on another screen using a FieldSelecting event and a PXSelect statement to pull data from another screen and update my custom field.
Needing to pull the values from the attribute is what's stumping me; should I just join to the CSAnswers table in the PXSelect?
I also wanted to ask if there is a better overall approach for pulling data from another screen into a custom field in an Acumatica customization.
Update:
I'm attempting to use the PXDBScalar attribute, but I cannot figure out how to join multiple tables. Here's what I've tried:
[PXDBScalar(typeof(
    Search2<CSAnswers.value,
        InnerJoin<InventoryItem,
            On<SOLine.inventoryID, Equal<InventoryItem.inventoryID>>>,
        InnerJoin<CSAnswers,
            On<InventoryItem.noteID, Equal<CSAnswers.refNoteID>>>,
        Where<CSAnswers.attributeID, Like<QHOLDAttr>>
    >))]
Thanks
Scott
Since you asked for "a better overall approach", I'll share my preferred way of handling this. Not to say it is "better" but perhaps just "different".
I try to keep my data tied to the object where it is related directly. In the case of an attribute of an InventoryItem, I would elevate that attribute value to InventoryItem in a DAC extension so that it can be leveraged anywhere the InventoryItem is used. You seem to want to use it in conjunction with the SOLine record, but since the attribute is not tied to the SOLine, I would not extend SOLine to add it. There is nothing wrong with adding it there if your business requirement mandates it, but it keeps me more sane to know that an InventoryItem's related data comes from InventoryItem rather than trying to remember where I put it and possibly duplicate the effort elsewhere (like on a POLine) later.
Here is an example of how I've done it, pulled straight from my project but changing the Attribute references to be more generic.
public sealed class InventoryItemExt : PXCacheExtension<PX.Objects.IN.InventoryItem>
{
    #region MyAttribute
    [PXString]
    [PXUIField(DisplayName = Messages.MyAttribDisplayName)]
    [PXDBScalar(typeof(Search<CSAnswers.value,
        Where<CSAnswers.refNoteID, Equal<InventoryItem.noteID>,
            And<CSAnswers.attributeID, Equal<myAttributeConstant>>>>))]
    public string MyAttribute { get; set; }
    public abstract class myAttribute : PX.Data.BQL.BqlString.Field<myAttribute> { }
    #endregion

    // Constant holding the AttributeID to look up. It cannot share the name of the
    // MyAttribute property above, or the extension class will not compile.
    public class myAttributeConstant : PX.Data.BQL.BqlString.Constant<myAttributeConstant>
    {
        public myAttributeConstant() : base("MYATTRIB") { }
    }
}
Notice the constant defined to access the AttributeID "MYATTRIB" which is attached to the Item Class of the InventoryItem record.
Now that the attribute is pulled into the DAC extension on InventoryItem, I can use it on any screen related to an InventoryItem with ease as long as InventoryItem (and the associated DAC extension) have been made available to that screen.
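For instance, here is a hedged sketch of reading the elevated value from a graph extension; SOOrderEntryExt, the event chosen, and the local names are assumptions for illustration, not part of the original answer:
using PX.Data;
using PX.Objects.IN;
using PX.Objects.SO;

public class SOOrderEntryExt : PXGraphExtension<SOOrderEntry>
{
    protected virtual void _(Events.FieldUpdated<SOLine, SOLine.inventoryID> e)
    {
        if (e.Row == null) return;

        // Resolve the InventoryItem referenced by the line through its selector.
        InventoryItem item = PXSelectorAttribute.Select<SOLine.inventoryID>(e.Cache, e.Row) as InventoryItem;
        if (item == null) return;

        // Read the elevated attribute value from the DAC extension.
        InventoryItemExt itemExt = PXCache<InventoryItem>.GetExtension<InventoryItemExt>(item);
        string attributeValue = itemExt.MyAttribute;

        // ... use attributeValue here, e.g. copy it into a custom SOLine field if your requirement demands it.
    }
}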
Without testing I may be off on this, but with regard to your attempt to join two tables in the PXDBScalar...
When you say Search2<CSAnswers.value..., you have indicated that you want to search the CSAnswers DAC and retrieve the value field. By subsequently trying to InnerJoin back to CSAnswers, I believe you effectively have said:
Select CSAnswers.Value From CSAnswers
Inner Join InventoryItem On SOLine.InventoryID = InventoryItem.InventoryID
Inner Join CSAnswers...
I'm not sure if the SOLine reference is valid here, but it may be if you are defining this directly in the SOLine DAC extension. However, you have tried to query CSAnswers Inner Join CSAnswers. Not sure if this will fix your PXDBScalar, but if you really want to use your method to attach this to SOLine, try:
[PXDBScalar(typeof(
    Search2<CSAnswers.value,
        InnerJoin<InventoryItem,
            On<InventoryItem.inventoryID, Equal<SOLine.inventoryID>>>,
        Where<InventoryItem.noteID, Equal<CSAnswers.refNoteID>,
            And<CSAnswers.attributeID, Equal<QHOLDAttr>>>
    >))]
(Notice that I swapped the order of the relations in On clauses.)

Handling soft deletes with jhipster

I need to perform soft deletion using JHipster.
Example:
If the user clicks on the delete button, I need to set the is_active flag to false.
Is writing custom code the only solution, or is there another way to do it?
Things I have done so far:
I have followed the JHipster website and created a monolithic application.
Then I created a new entity called 'Student' using the entity sub-generator (field names: id, name, age, address and is_active).
The sub-generator generated files containing all the basic CRUD operations and provided a CRUD front end for the Student entity.
The front end contains buttons to create, view, edit and delete a student.
When I click on a delete button, the student record is permanently deleted from the database.
But my requirement is to keep the record in the database and only set the is_active field to false.
Basically, I need to keep the student's historical data without permanently deleting it.
You will have to edit it yourself. First of all, I recommend extending all the classes generated by JHipster (except for Resources, where you will have to create another class and use a custom mapping, thus changing the service in the frontend).
Extend your StudentRepository and create a method that updates the is_active value (it would be better if it receives the id and the value: true or false).
Extend your StudentService and create a new method (e.g. call it public void disableStudent(Long id)).
In that method, call your newly created method in the repository layer.
Edit StudentResource, change the code inside the deleteStudent method, and call your newly created method in the service layer.
If you created a new resource, change the frontend service so that it calls your new API.

Make a Field Mandatory on the Graph Level

Newbie to Acumatica here. I've performed a small amount of customization to our system, and am now diving into adding custom data fields.
My goal is to synchronize hardware shipment information from Acumatica into our legacy (outdated and proprietary) hardware management system, as we will need to continue using this system for the time being for warranty calculations. I plan to eventually build this into Acumatica.
My current issue is that I need a method of associating Customer Locations with the customer locations in our legacy system. Adding the field DCL_ID was easy enough to accomplish following the To Add a Custom Data Field documentation. I made the column required by adding
[PXDefault]
[PXUIField(DisplayName="DCL Account ID", Required = true)]
to the attributes section of the Data Access class as outlined here. I then added the field to my form using the Layout Editor.
At this point all seemed well. The field shows an asterisk in the UI and also validates that a value is provided. Then I realized that Customer Locations is not the only place that uses CR.Location -- it is also used by Account Locations. Doing some digging I've found that Account Locations can include many more account types than Customer Locations. I only need this attribute to be required for Customer Locations. Thus, I have opted to follow the To Make a Field Mandatory on the Graph Level documentation.
Here is my CustomerLocationMaint code:
using System;
using PX.Data;
using PX.Objects.CR;
using System.Collections.Generic;
using PX.Objects;
using PX.Objects.AR;

namespace PX.Objects.AR
{
    public class CustomerLocationMaint_Extension : PXGraphExtension<CustomerLocationMaint>
    {
        #region Event Handlers
        [PXDefault]
        [PXCustomizeBaseAttribute(typeof(PXUIFieldAttribute), "Required", true)]
        protected virtual void SelectedCustomerLocation_UsrDCL_ID_CacheAttached(PXCache cache)
        {
        }
        #endregion
    }
}
After I save and publish the customization, the field does not function as a required field, as it did when I defined the requirements at the DAC level.
So, what have I done wrong? I've read and re-read the documentation multiple times, but cannot find my mistake.
My thought is that the underscore in the field name is causing the CacheAttached handler to not properly register the graph-level attribute change. A field name without an underscore is the preferred naming convention for tables and columns.
The Acumatica documentation mentions that this should be avoided, as noted here:
Database Design Guidelines
Found under Table and Column Naming Conventions:
Do not use the underscore symbol (_) in table or column names, because
it is a reserved symbol in Acumatica Framework. For example,
CompanyType is a valid column name, while Company_Type is invalid.
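As a hedged illustration (assuming the custom column is recreated without the underscore, e.g. as UsrDCLID), the CacheAttached handler name then parses unambiguously as DACName_FieldName_EventName:
[PXDefault]
[PXCustomizeBaseAttribute(typeof(PXUIFieldAttribute), "Required", true)]
protected virtual void SelectedCustomerLocation_UsrDCLID_CacheAttached(PXCache cache)
{
}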

Should I add item using repository pattern or a create event if I am using domain events?

I am trying to understand the Domain Event pattern illustrated by Udi Dahan with regard to adding new domain entities in a certain situation.
Now, normally with entities I would create them and then add them via a repository. I assume I would still do this?
My example is that we normally add assets to the system, like this:
var asset = new Asset();
/* bunch of prop setting */
_assetRepository.Add(asset);
However, asset creation is an event that we want certain processes to follow from. Therefore it was suggested by a developer that we no longer need to do this, as it could be handled by a domain event:
var asset = new Asset();
/* bunch of prop setting */
asset.Create(location);
Now the Create method would raise an event that is handled by a create event handler, which basically just inserts the asset into the repository and does some other stuff, such as emailing the warehouse manager of the creation location.
However, having a Create method on the asset looks pretty Active Record to me. On the other hand, in the domain, people do talk about new assets being "created", so we were not sure.
Thoughts?
The created domain event should be raised in the constructor of the Asset class because that is when that particular entity is created. In your current implementation, this would be erroneous because the Asset entity provides a parameterless constructor. Instead, create a constructor which has all required properties as parameters thereby preventing creation of an Asset entity in an inconsistent state. It could look like this:
public class Asset
{
    public Asset(string prop1, decimal prop2)
    {
        this.Prop1 = prop1;
        this.Prop2 = prop2;
        DomainEvents.Raise(new AssetCreated(prop1, prop2));
    }

    public string Id { get; private set; }
    public string Prop1 { get; private set; }
    public decimal Prop2 { get; private set; }
}
You still have to persist the entity using the repository after creating it. This can be problematic because the handlers for the AssetCreated cannot reference its ID since it is not yet assigned when they are notified. If using event sourcing, then the creation event would be explicitly stored in the underlying event store.
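For example, continuing the sketch above (the argument values are made up):
var asset = new Asset("serial-123", 99.95m); // the constructor raises AssetCreated
_assetRepository.Add(asset);                 // persistence still goes through the repository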
I've been struggling with this problem for quite a long time, but with no good solution. I think:
A domain event shouldn't be published or handled before the aggregate it belongs to has been successfully persisted.
It's not the application layer's responsibility to publish any domain events.
So far, I think the best approach is to take advantage of AOP. We can "fire" events in the aggregate, but instead of dispatching them instantly, we keep them in a queue and really dispatch them only after the corresponding transaction succeeds. We can define a custom @Transactional interceptor to achieve this, thus keeping the application service from knowing any concept of "event publishing".
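A minimal C# sketch of that idea (queue events on the aggregate and dispatch them only after a successful commit); all names here are illustrative, not taken from a specific framework:
using System.Collections.Generic;
using System.Linq;

public interface IEventDispatcher
{
    void Dispatch(object domainEvent);
}

public abstract class AggregateRoot
{
    private readonly List<object> _pendingEvents = new List<object>();

    // Called from inside the aggregate (e.g. its constructor) instead of publishing immediately.
    protected void Raise(object domainEvent) => _pendingEvents.Add(domainEvent);

    public IReadOnlyList<object> DequeuePendingEvents()
    {
        var events = _pendingEvents.ToList();
        _pendingEvents.Clear();
        return events;
    }
}

public class UnitOfWork
{
    private readonly IEventDispatcher _dispatcher;

    public UnitOfWork(IEventDispatcher dispatcher)
    {
        _dispatcher = dispatcher;
    }

    public void Commit(AggregateRoot aggregate)
    {
        // 1. Persist the aggregate and commit the transaction here.
        // 2. Only after the commit succeeds, dispatch the queued events.
        foreach (var domainEvent in aggregate.DequeuePendingEvents())
            _dispatcher.Dispatch(domainEvent);
    }
}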

Getting Additional Data for a Domain Entity

I have a domain Aggregate, call it "Order", that contains a list of OrderLines. The Order keeps track of the sum of the Amount of its OrderLines. The customer has a running "credit" balance that they can order from, which is calculated by summing the history of their database transactions. Once they use up all the money in the "pool" they can't order any more products.
So every time a line is added to the order, I need to check how much is left in the pool and whether the order pushes them over it. The amount in the pool is continually changing because other related customers are continually using it.
The question is, thinking in terms of DDD, how do I get that amount since I don't want to pollute my Domain Layer with DataContext concerns (using L2S here). Since I can't just query out to the database from the domain, how would I get that data so I can validate the business rule?
Is this an instance where Domain Events are used?
Your Order aggregate should be fully encapsulated. It therefore needs to be able to determine whether it's valid to add an item, i.e. whether or not the customer credit is exceeded. There are various ways to do this but they all depend on the Order repository returning a specific aggregate that knows how to do this particular thing. This will probably be a different Order aggregate from one you'd use for satisfying orders, for example.
You have to recognise, then capture in code, the fact that you're expecting the order to fulfil a particular role in this case, i.e. the role of adding additional line items. You do this by creating an interface for this role and a corresponding aggregate that has the internal support for the role.
Then, your service layer can ask your Order repository for an order that satisfies this explicit role interface and the repository thus has enough information about what you need to be able to build something that can satisfy that requirement.
For example:
public interface IOrder
{
    IList<LineItem> LineItems { get; }
    // ... other core order "stuff"
}

public interface IAddItemsToOrder : IOrder
{
    void AddItem( LineItem item );
}

public interface IOrderRepository
{
    T Get<T>( int orderId ) where T : IOrder;
}
Now, your service code would look something like:
public class CartService
{
    public void AddItemToOrder( int orderId, LineItem item )
    {
        var order = orderRepository.Get<IAddItemsToOrder>( orderId );
        order.AddItem( item );
    }
}
Next, your Order class that implements IAddItemsToOrder needs a customer entity so that it can check the credit balance. So you just cascade the same technique by defining a specific interface. The order repository can call on the customer repository to return a customer entity that fulfils that role and add it to the order aggregate.
Thus you'd have a base ICustomer interface and then an explicit role in the form of an ICustomerCreditBalance interface that descends from it. The ICustomerCreditBalance acts both as a marker interface to your Customer repository to tell it what you need the customer for, so it can create the appropriate customer entity, and it has the methods and/or properties on it to support the specific role. Something like:
public interface ICustomer
{
    string Name { get; }
    // core customer stuff
}

public interface ICustomerCreditBalance : ICustomer
{
    decimal CreditBalance { get; }
}

public interface ICustomerRepository
{
    T Get<T>( int customerId ) where T : ICustomer;
}
Explicit role interfaces give repositories the key information they need to make the right decision about what data to fetch from the database, and whether to fetch it eagerly or lazily.
Note that I've put the CreditBalance property on the ICustomerCreditBalance interface in this case. However, it could just as well be on the base ICustomer interface and ICustomerCreditBalance then becomes an empty "marker" interface to let the repository know that you're going to be querying the credit balance. It's all about letting the repository know just what role you want for the entity it returns.
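As a hedged illustration (not part of the original answer; LineItem, OrderForAddingItems and the exception choice are assumptions), an Order aggregate fulfilling the IAddItemsToOrder role might consult the ICustomerCreditBalance role like this:
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch only: one way an Order aggregate could fulfil the
// IAddItemsToOrder role using the ICustomerCreditBalance role described above.
public class LineItem
{
    public decimal Amount { get; set; }
}

public class OrderForAddingItems : IAddItemsToOrder
{
    private readonly ICustomerCreditBalance _customer;

    public OrderForAddingItems( ICustomerCreditBalance customer )
    {
        _customer = customer;
    }

    public IList<LineItem> LineItems { get; } = new List<LineItem>();

    public void AddItem( LineItem item )
    {
        // Enforce the invariant inside the aggregate: the running total of the
        // order may not exceed the customer's remaining credit.
        decimal currentTotal = LineItems.Sum( li => li.Amount );
        if ( currentTotal + item.Amount > _customer.CreditBalance )
            throw new InvalidOperationException( "Customer credit balance exceeded." );

        LineItems.Add( item );
    }
}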
The final part which brings this all together, as you mentioned in your question, is domain events. The order can raise a failure domain event if the customer's credit balance would be exceeded, to notify the service layer that the order is invalid. If the customer has enough credit, on the other hand, it can either update the balance on the customer object or raise a domain event to notify the rest of the system that the balance needs to be reduced.
I've not added the domain event code to the CartService class since this answer is already rather long! If you want to know more about how to do that, I suggest you post another question targeting that specific issue and I'll expand on it there ;-)
In such a scenario, I off-load responsibility using events or delegates. Maybe the easiest way to show you is with some code.
Your Order class will have a Predicate<T> that is used to determine if the customer's credit line is big enough to handle the order line.
public class Order
{
    public Predicate<decimal> CanAddOrderLine;

    // more Order class stuff here...

    public void AddOrderLine(OrderLine orderLine)
    {
        if (CanAddOrderLine(orderLine.Amount))
        {
            OrderLines.Add(orderLine);
            Console.WriteLine("Added {0}", orderLine.Amount);
        }
        else
        {
            Console.WriteLine(
                "Cannot add order. Customer credit line too small.");
        }
    }
}
You will probably have a CustomerService class or something like that to pull the available credit line. You set the CanAddOrderLine predicate before adding any order lines. This will perform a check of the customer's credit each time a line is added.
// App code.
var customerService = new CustomerService();
var customer = new Customer();
var order = new Order();
order.CanAddOrderLine =
amount => customerService.GetAvailableCredit(customer) >= amount;
order.AddOrderLine(new OrderLine { Amount = 5m });
customerService.DecrementCredit(5m);
No doubt your real scenario will be more complicated than this. You may also want to check out the Func<T> delegate. A delegate or event could be useful for decrementing the credit amount after the order line is placed or firing some functionality if the customer goes over their credit limit in the order.
Good luck!
In addition to the problem of getting the "pool" value (where I would query the value using a method on an OrderRepository), have you considered the locking implications for this problem?
If the "pool" is constantly changing, is there a chance that someone elses transaction creeps in just after your rule passes, but just before you commit your changes to the db?
Eric Evans refers to this very problem in Chapter 6 of his book ("Aggregates").
