Is it necessary to cache the data for a lazy-loaded property with SubSonic 3 SimpleRepository?

I have added a lazy-loaded property called Orders on my Customer class. Do you think it's wise to cache the data in a private field?
private IList<Order> _orders;

[SubSonicIgnore]
public IList<Order> Orders
{
    get
    {
        if (_orders == null)
        {
            var repository = new SimpleRepository("MyConnectionString", SimpleRepositoryOptions.None);
            _orders = repository.Find<Order>(x => x.CustomerId == this.CustomerId);
        }
        return _orders;
    }
}
Or is it better to not cache it like so:
[SubSonicIgnore]
public IList<Order> Orders
{
    get
    {
        var repository = new SimpleRepository("MyConnectionString", SimpleRepositoryOptions.None);
        return repository.Find<Order>(x => x.CustomerId == this.CustomerId);
    }
}
The reason I'm asking is that I think caching the data is a good idea for performance's sake, but at the same time I'm afraid that caching can cause the data to become out of sync if some other process inserts or deletes records in the database.

In your case, the cached Orders will exist for the lifetime of your Customer object. If you needed to clear the cached orders, you could simply requery for your Customer.
If I were you, I'd either add an additional property whose name makes the caching explicit, add a custom cacheScope object (like TransactionScope: the cache only exists as long as the scope exists), or specify in the documentation which properties cache child objects and for how long.
I would not remove caching. I'd leave it in there as an additional property. You'll have it if you need it.
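For example, a minimal sketch of that first suggestion (the names here are illustrative, not from the question's code): a separate, explicitly-named cached property backed by the same query, plus an invalidation method for when another process may have changed the data.
private IList<Order> _cachedOrders;

[SubSonicIgnore]
public IList<Order> CachedOrders
{
    get
    {
        if (_cachedOrders == null)
        {
            var repository = new SimpleRepository("MyConnectionString", SimpleRepositoryOptions.None);
            _cachedOrders = repository.Find<Order>(x => x.CustomerId == this.CustomerId);
        }
        return _cachedOrders;
    }
}

// Drop the cached list; the next read of CachedOrders requeries the database.
public void InvalidateOrdersCache()
{
    _cachedOrders = null;
}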
Thanks for showing your caching logic. Here's mine. In my case, the life expectancy of my parent object is short, I don't expect >100 records of total parent/child data, and I do expect that all the child data will be used. If my data changes, then I'll need to readdress the caching logic I use in this particular instance:
// Note: the cache field is static, so it is shared by every instance of this class.
private static List<HostHeader> _cachedHostHeaders;

public List<HostHeader> CachedHostHeaders
{
    get
    {
        if (_cachedHostHeaders == null)
            _cachedHostHeaders = this.HostHeaders.ToList();
        return _cachedHostHeaders.Where(i => i.SiteID == this.ID).ToList();
    }
}

Related

EF Core - why is Explicit Loading terribly slow?

I am working on microservices in .NET Core with domain-driven design. The infrastructure layer has an EF Core DbContext to access the database, and in my repositories I have async methods to retrieve data.
Because Include/ThenInclude does not support filtering (at least not up to EF Core 2.1), I have tried all the possible approaches I found when googling how to replace Include. I watched the Pluralsight videos about EF Core too, and when I saw the Explicit Loading option I was really happy about its ability to filter related objects, but when I rewrote one of the methods into the explicit version, a query that ran in a couple of milliseconds went up to a couple of minutes!
In my entity configurations I set up all the navigations and foreign keys, but I am not sure whether explicit loading requires any additional setup. Before recommending global query filters, please note that the Where clause is usually longer; the example below is just a shortened version of the actual filtering.
This is what my methods look like (TransferService serves as the aggregate root; TransferServiceDetail and the other classes are entities within the TransferService domain):
public async Task<IEnumerable<TransferService>> GetAllAsync(
    TransferServiceFilter transferServiceFilter)
{
    int? pageIndex = null;
    int? itemsPerPage = null;
    IEnumerable<TransferService> filteredList = DBContext.TransferServices.Where(
        ts => !ts.IsDeleted); //This one itself is quick.

    //This is just our filtering, it does not affect performance.
    if (transferServiceFilter != null)
    {
        pageIndex = transferServiceFilter.PageIndex;
        itemsPerPage = transferServiceFilter.ItemsPerPage;
        filteredList = filteredList.Where(f =>
            (transferServiceFilter.TransferSupplierId == null ||
             f.TransferSupplierId == transferServiceFilter.TransferSupplierId) &&
            (transferServiceFilter.TransferDestinationId == null ||
             f.TransferDestinationId == transferServiceFilter.TransferDestinationId) &&
            (transferServiceFilter.TransferSupplierId == null ||
             f.TransferSupplierId == transferServiceFilter.TransferSupplierId) &&
            (string.IsNullOrEmpty(transferServiceFilter.TransportHubRef) ||
             f.NormalizeReference(f.TransportHubRef) ==
             f.NormalizeReference(transferServiceFilter.TransportHubRef)));
    }

    //This is just for paging and again, this is quick.
    return await FilterList(filteredList.AsQueryable(), pageIndex, itemsPerPage);
}
public async Task<IEnumerable<TransferService>> GetAllWithServiceDetailsAsync(
    TransferServiceFilter transferServiceFilter)
{
    IEnumerable<TransferService> returnList = await GetAllAsync(
        transferServiceFilter);

    //This might be the problem as I need to iterate through my TransferServices
    //to be able to load all TransferServiceDetails that belong to each individual
    //Service.
    foreach (TransferService service in returnList)
    {
        await DBContext.Entry<TransferService>(service)
            .Collection(ts => ts.TransferServiceDetails.Where(
                tsd => !tsd.IsDeleted)).LoadAsync();
    }
    return returnList;
}
In my repository I have other methods as well, similarly referring to a previous GetAllXY... method (TransferServiceDetails have Rates, Rates have Periods, etc...).
My idea was to simply call GetAllAsync when I only need TransferService data (on its own this method is lightning quick), or call GetAllWithServiceDetailsAsync when I also need the Details of the selected Services, and so on; but the lower I go in this parent-child hierarchy, the slower the execution becomes, and I am talking about minutes, not just a couple of extra milliseconds or, in the worst case, seconds.
So my question again: is there any additional setting that I might have missed in the entity configurations that explicit loading requires, or are my queries simply incorrect? Or is explicit loading only good when there is a single parent TransferService instead of a list of them (50-100 in my case), with only a few related child entities (I usually have 5-10 Details, each Detail has 2-3 Rates, each Rate has exactly 1 Period, and so on)?
I guess your filtering can't be converted to a SQL WHERE, so all of the filtering happens client-side: EF loads ALL TransferService entities into memory, filters them in memory, and discards the mismatches. (In fact, declaring filteredList as IEnumerable<TransferService> is enough on its own to push every subsequent Where into LINQ to Objects, i.e. client-side evaluation, even for conditions that could have been translated.)
You can check this by enabling detailed (debug) logging - EF will dump the generated SQL into the log.
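For an EF Core 2.1-era DbContext, one way to wire this up is a console logger factory (a sketch; adjust to your own logging infrastructure, and note the ConsoleLoggerProvider constructor used here is the 2.x-era one):
// using Microsoft.Extensions.Logging;
// using Microsoft.Extensions.Logging.Console;
public static readonly LoggerFactory ConsoleLoggerFactory =
    new LoggerFactory(new[]
    {
        new ConsoleLoggerProvider((category, level) => level >= LogLevel.Debug, true)
    });

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    => optionsBuilder.UseLoggerFactory(ConsoleLoggerFactory);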
Once you have confirmed it, you should make these improvements:
First, move the ifs out of the Where. Instead of:
filteredList = filteredList.Where(f => transferServiceFilter.TransferSupplierId == null ||
    f.TransferSupplierId == transferServiceFilter.TransferSupplierId);
use:
if (transferServiceFilter.TransferSupplierId != null)
{
    filteredList = filteredList.Where(f =>
        f.TransferSupplierId == transferServiceFilter.TransferSupplierId);
}
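Applied to the method in the question, a sketch of the same idea; note the variable stays typed as IQueryable<TransferService> so each Where composes into the SQL query (the original IEnumerable<TransferService> declaration by itself pushes all later Where calls to the client):
IQueryable<TransferService> filteredList = DBContext.TransferServices
    .Where(ts => !ts.IsDeleted);

if (transferServiceFilter != null)
{
    if (transferServiceFilter.TransferSupplierId != null)
        filteredList = filteredList.Where(
            f => f.TransferSupplierId == transferServiceFilter.TransferSupplierId);

    if (transferServiceFilter.TransferDestinationId != null)
        filteredList = filteredList.Where(
            f => f.TransferDestinationId == transferServiceFilter.TransferDestinationId);
}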
Second, you should rethink NormalizeReference. It can't be executed server-side, because the SQL server knows nothing about that implementation. You should pre-normalize TransportHubRef, store it in the DB (say, as NormalizedTransportHubRef), and use a simple equality in the Where.
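A sketch, assuming such a persisted NormalizedTransportHubRef column exists and that NormalizeReference can be called client-side:
if (!string.IsNullOrEmpty(transferServiceFilter.TransportHubRef))
{
    // Normalize the parameter once on the client; the comparison then translates to SQL.
    string normalizedRef = NormalizeReference(transferServiceFilter.TransportHubRef);
    filteredList = filteredList.Where(f => f.NormalizedTransportHubRef == normalizedRef);
}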
(Also, don't forget about indexes).
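Separately, the explicit-load loop in the question filters inside Collection(...), which EF Core cannot translate; the supported way to filter an explicit load is to go through Query(). A sketch:
foreach (TransferService service in returnList)
{
    await DBContext.Entry(service)
        .Collection(ts => ts.TransferServiceDetails)
        .Query()
        .Where(tsd => !tsd.IsDeleted)
        .LoadAsync();
}
Note that this still issues one query per TransferService (the classic N+1 problem), which is exactly why explicit loading gets slow with 50-100 parents; loading the details for the whole batch in a single query and stitching them to their parents in memory avoids that.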

Data View Customization in Extension

I have overridden the data view for a custom graph in an extension, which returns the correct data without issue, both by re-declaring the view and by using the delegate object technique. The issue is that when I do, the AllowSelect/AllowDelete modifications on the view in the primary graph stop working; once I comment out the override, the logic works as normal.
Not sure what I'm missing, but any thoughts would be appreciated.
Edit: To clarify, on the main graph, without the extension, the data retrieval and the Allow... settings work without issue.
public class FTTicketEntry : PXGraph<FTTicketEntry, UsrFTHeader>
{
    public PXSelect<UsrFTHeader> FTHeader;
    public PXSelect<UsrFTGridLabor,
        Where<UsrFTGridLabor.ticketNbr, Equal<Current<UsrFTHeader.ticketNbr>>>> FTGridLabor;
And with the extension, the data is returned correctly from the modified view, but the Allow... settings from the main graph do not work; they only take effect when set in the extension.
public class FTTicketEntryExtension : PXGraphExtension<FTTicketEntry>
{
    public PXSelect<UsrFTGridLabor,
        Where<UsrFTGridLabor.ticketNbr, Equal<Current<UsrFTHeader.ticketNbr>>,
            And<UsrFTGridLabor.projectID, Equal<Current<UsrFTHeader.projectID>>,
            And<UsrFTGridLabor.taskID, Equal<Current<UsrFTHeader.taskID>>>>>> FTGridLabor;
I have also tried the other approach (below) on the extension, with the same results: the data is filtered correctly, but the Allow... commands fail.
public PXSelect<UsrFTGridLabor,
    Where<UsrFTGridLabor.ticketNbr, Equal<Current<UsrFTHeader.ticketNbr>>>> FTGridLabor;

public virtual IEnumerable fTGridLabor()
{
    foreach (PXResult<UsrFTGridLabor> record in Base.FTGridLabor.Select())
    {
        UsrFTGridLabor p = (UsrFTGridLabor)record;
        if (p.ProjectID == Base.FTHeader.Current.ProjectID && p.TaskID == Base.FTHeader.Current.TaskID)
        {
            yield return record;
        }
    }
}
My main concern with not wanting to use PXSelectReadonly is that there is a status field on the header which drives when certain combinations of the conditions are required; these are set in the RowSelected events, sometimes all of them and sometimes none. The main issue is that I obviously don't want to have to replicate all of the UI logic in the extension, when overriding the view was the main intent of the extension for this screen.
Appreciate the assistance, and hopefully you see something I'm overlooking or have missed
Thanks
Every BLC instance stores all of its actual data views and actions within 2 collections: Views and Actions. Whenever you customize a data view or an action with a BLC extension, the original data view / action gets replaced in the appropriate collection by the custom object declared within the extension class. After the original data view or action has been removed from the collection, it's quite obvious that changes made to the original object will no longer have any effect, since the original object is not used by the BLC anymore.
The easiest way to access the actual object in either of these 2 collections is as follows: Views["FTGridLabor"].Allow... = value;
Alternatively, you might operate with the AllowInsert, AllowUpdate and AllowDelete properties on the cache level: FTGridLabor.Cache.Allow... = value;
By changing the AllowXXX properties on the cache level, you completely eliminate the need to set AllowXXX on the data view, since the PXCache.AllowXXX properties take priority over their counterparts on the data view level:
public class PXView
{
    ...
    protected bool _AllowUpdate = true;
    public bool AllowUpdate
    {
        get
        {
            if (_AllowUpdate && !IsReadOnly)
            {
                return Cache.AllowUpdate;
            }
            return false;
        }
        set
        {
            _AllowUpdate = value;
        }
    }
    ...
}
With all that said, to resolve your issue with the UI logic not applying to the modified view, please consider one of the following options:
Set the AllowXXX property values in both the original BLC and its extensions via the object obtained from the Views collection:
Views["FTGridLabor"].Allow... = value;
Or operate with the AllowXXX property values on the cache level:
FTGridLabor.Cache.Allow... = value;
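For instance, a minimal sketch of the first option from within a graph extension's RowSelected handler (the handler itself is assumed; only the Views["FTGridLabor"] access comes from the answer above):
protected virtual void UsrFTHeader_RowSelected(PXCache sender, PXRowSelectedEventArgs e)
{
    if (e.Row == null) return;
    // Go through the Views collection so the setting reaches whichever view
    // instance is actually registered - the original or the extension's copy.
    Base.Views["FTGridLabor"].AllowInsert = false;
    Base.Views["FTGridLabor"].AllowDelete = false;
}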
First, check whether your data view should or should not be a variant of PXSelectReadonly.
Without more information, my advice would be to set the Allow properties in the Initialize method of your extension:
public override void Initialize()
{
    // This is similar to PXSelectReadonly
    DataView.AllowDelete = false;
    DataView.AllowInsert = false;
    DataView.AllowUpdate = false;
}

Document not available in query directly after store

I'm trying to store a "Role" object and then get a list of Roles, as shown here:
public class Role
{
    public Guid RoleId { get; set; }
    public string RoleName { get; set; }
    public string RoleDescription { get; set; }
}

// Store function:
private void StoreRole(Role role)
{
    using (var docSession = docStore.OpenSession())
    {
        docSession.Store(role);
        docSession.SaveChanges();
    }
}

// StoreRole then returns, and another function calls this:
public List<Role> GetRoles()
{
    using (var docSession = docStore.OpenSession())
    {
        var Roles = from roles in docSession.Query<Role>() select roles;
        return Roles.ToList();
    }
}
However, in GetRoles I am missing the last inserted record/document. If I wait 200 ms and then call the function, the item is there.
So I am out of sync?!
How can I solve this, or alternatively, how can I know when the result is in the document store and ready for querying?
I've used transactions, but cannot figure this out. Update and delete are fine, but when inserting I need to delay my 'List' call.
You are treating RavenDB as if it is a relational database, and it isn't. Load and Store are ACID operations in RavenDB, Query is not. Indexes (necessary for queries) are updated asynchronously, and in fact, temporary indexes may have to be built from scratch when you do a session.Query<T>() without a durable index specified. So, if you are trying to query for information you JUST stored, or if you are doing the FIRST query that requires a temporary index to be created, you probably won't get the data you expect.
There are methods of customizing your query to wait for non-stale results but you shouldn't lean on these too much because they're indicative of a bad design - it is better to figure out a better way to do the same thing in a way that embraces eventual consistency, either changing your model (so you get consistency via Load/Store - perhaps you could have one document that defines ALL of the roles in a list?) or by changing the application flow so you don't need to Store and then immediately Query.
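A sketch of that modeling idea (the RolesConfig class and its well-known id are illustrative, not from the question): keep every role in a single document, so reads go through Load by id, which is ACID and never stale.
public class RolesConfig
{
    public const string DocumentId = "config/roles";
    public List<Role> Roles { get; set; }
}

// Writing: load (or create) the single document, modify, save.
using (var session = docStore.OpenSession())
{
    var config = session.Load<RolesConfig>(RolesConfig.DocumentId)
        ?? new RolesConfig { Roles = new List<Role>() };
    config.Roles.Add(newRole);
    session.Store(config, RolesConfig.DocumentId);
    session.SaveChanges();
}

// Reading: a Load by id hits no index, so there is nothing to be stale.
using (var session = docStore.OpenSession())
{
    var roles = session.Load<RolesConfig>(RolesConfig.DocumentId).Roles;
}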
An additional way of solving this is to query the index with WaitForNonStaleResultsAsOfLastWrite() turned on, inside the save function. That way, when the save completes, the index will have been updated to at least include the change you just made.
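For example (a sketch against the older RavenDB client API, where this customization lives on the query):
using (var docSession = docStore.OpenSession())
{
    var roles = docSession.Query<Role>()
        .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
        .ToList();
}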

What is the best way to recycle Domino objects in Java Beans

I use a function to get access to a configuration document:
private Document lookupDoc(String key1) {
    try {
        Session sess = ExtLibUtil.getCurrentSession();
        Database wDb = sess.getDatabase(sess.getServerName(), this.dbname1);
        View wView = wDb.getView(this.viewname1);
        Document wDoc = wView.getDocumentByKey(key1, true);
        this.debug("Got a doc for key: [" + key1 + "]");
        return wDoc;
    } catch (NotesException ne) {
        if (this.DispLookupErrors)
            ne.printStackTrace();
        this.lastErrorMsg = ne.text;
        this.debug(this.lastErrorMsg, "error");
    }
    return null;
}
In another method I use this function to get the document:
Document wDoc = this.lookupDoc(key1);
if (wDoc != null) {
    // do things with the document
    wDoc.recycle();
}
Should I be recycling the Database and View objects when I recycle the Document object? Or should those be recycled before the function returns the Document?
The best practice is to recycle all Domino objects during the scope within which they are created. However, recycling any object automatically recycles all objects "beneath" it. Hence, in your example method, you can't recycle wDb, because that would cause wDoc to be recycled as well, so you'd be returning a recycled Document handle.
So if you want to make sure that you're not leaking memory, it's best to recycle objects in reverse order (e.g., document first, then view, then database). This tends to require structuring your methods such that you do whatever you need to/with a Domino object inside whatever method obtains the handle on it.
For instance, I'm assuming the reason you defined a method to get a configuration document is so that you can pull the value of configuration settings from it. So, instead of a method to return the document, perhaps it would be better to define a method to return an item value:
private Object lookupItemValue(String configKey, String itemName) {
    Object result = null;
    Database wDb = null;
    View wView = null;
    Document wDoc = null;
    try {
        Session sess = ExtLibUtil.getCurrentSession();
        wDb = sess.getDatabase(sess.getServerName(), this.dbname1);
        wView = wDb.getView(this.viewname1);
        wDoc = wView.getDocumentByKey(configKey, true);
        this.debug("Got a doc for key: [" + configKey + "]");
        if (wDoc != null) {
            result = wDoc.getItemValue(itemName);
        }
    } catch (NotesException ne) {
        if (this.DispLookupErrors)
            ne.printStackTrace();
        this.lastErrorMsg = ne.text;
        this.debug(this.lastErrorMsg, "error");
    } finally {
        incinerate(wDoc, wView, wDb);
    }
    return result;
}
There are a few things about the above that merit an explanation:
Normally in Java, we declare variables at first use, not Table of Contents style. But with Domino objects, it's best to revert to TOC so that, whether or not an exception was thrown, we can try to recycle them when we're done... hence the use of finally.
The return Object (which should be an item value, not the document itself) is also declared in the TOC, so we can return that Object at the end of the method - again, whether or not an exception was encountered (if there was an exception, presumably it will still be null).
This example calls a utility method that allows us to pass all Domino objects to a single method call for recycling.
Here's the code of that utility method:
private void incinerate(Object... dominoObjects) {
    for (Object dominoObject : dominoObjects) {
        if (null != dominoObject) {
            if (dominoObject instanceof Base) {
                try {
                    ((Base) dominoObject).recycle();
                } catch (NotesException recycleSucks) {
                    // optionally log exception
                }
            }
        }
    }
}
It's private, as I'm assuming you'll just define it in the same bean, but lately I tend to define this as a public static method of a Util class, allowing me to follow this same pattern from pretty much anywhere.
One final note: if you'll be retrieving numerous item values from a config document, it would obviously be expensive to establish a new database, view, and document handle for every item value you want to return. So I'd recommend overloading this method to accept a List<String> (or String[]) of item names and return a Map<String, Object> of the resulting values. That way you can establish a single handle for the database, view, and document, retrieve all the values you need, then recycle the Domino objects before actually making use of the item values returned.
Here's an idea I'm experimenting with. Tim's answer is excellent; however, in my case I really needed the document for other purposes, so I've tried this:
Document doc = null;
View view = null;
try {
    Database database = ExtLibUtil.getCurrentSessionAsSigner().getCurrentDatabase();
    view = database.getView("LocationsByLocationCode");
    doc = view.getDocumentByKey(code, true);
    //need to get this via the db object directly so we can safely recycle the view
    String id = doc.getNoteID();
    doc.recycle();
    doc = database.getDocumentByID(id);
} catch (Exception e) {
    log.error(e);
} finally {
    JSFUtils.incinerate(view);
}
return doc;
You then have to make sure you're recycling the doc object safely in whatever method calls this one
I have some temporary documents which exist for a while as config docs and are then no longer needed, so they get deleted. This is kind of enforced by an existing Notes client application: they have to exist to keep it happy.
I've written a class which has a HashMap of Java Date, String and Double values, with the item name as the key. So now I have a serializable representation of the document, plus the original doc's noteID so that it can be found quickly and amended/deleted when it's not needed anymore.
That means the config doc can be collected, and a standard routine creates a map of all the items for the Java representation, taking the item type into account. The doc object can then be recycled right away.
The return object is the Java class representation of the document, with getValue(String name) and setValue(String name, val), where val can be a Double, String or Date. NB: this structure has no need for rich text or attachments, so it's kept to simple field values.
It works well, although if the config doc has lots of items, it could mean holding a lot of info in memory unnecessarily. Not so in my particular case, though.
The point is: the Java object is now serializable, so it can remain in memory, and, as Tim's brilliant reply suggests, the document can be recycled right away.

To aggregate or not - order/orderline

In Domain Driven Design, Order and OrderLines are always seen as an aggregate, where Order is the root. Normally, once an order is created, it cannot be changed. In my case, however, it can: each order has a state determining whether the order can be changed or not.
In this case, are Order and OrderLine each their own “aggregate root”? I need to be able to update order lines, so I figured they should have their own repository. But I do not want to retrieve order lines and persist them without the order. So this indicates that there's still an aggregate where Order is the root, with a factory method to create order lines (Order.CreateOrderLine(quantity, text, …)).
Another approach could be to update the Order when the order lines collection has been modified, and then call UpdateOrder(Order). I would need some way of detecting that only the collection should be updated, and not the Order itself (using Entity Framework).
What do you think?
Order lines shouldn't be an aggregate of their own, and don't need their own repository. Your aggregate should be set up something like this...
public class Order
{
    private List<OrderLine> _orderLines;
    private OrderState _orderState;

    public IEnumerable<OrderLine> OrderLines
    {
        get { return _orderLines.AsReadOnly(); }
    }

    public OrderState Status
    {
        get { return _orderState; }
    }

    public void DeleteOrderLine(Guid orderLineID)
    {
        if (Status.IsProcessed)
            throw new InvalidOperationException("You cannot delete items from a processed order");

        OrderLine lineToRemove = _orderLines.Find(ol => ol.Id == orderLineID);
        _orderLines.Remove(lineToRemove);
    }

    public void AddOrderLine(Product product, int quantity)
    {
        if (Status.IsProcessed)
            throw new InvalidOperationException("You cannot add items to a processed order");

        OrderLine line = new OrderLine(product.ProductID, (product.Price * quantity), quantity);
        _orderLines.Add(line);
    }
}
Entity Framework has some built-in features to detect changes to your objects. This is explained here (conveniently with an order/order lines example): http://msdn.microsoft.com/en-us/library/dd456854.aspx
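As a rough usage sketch (the context and property names are illustrative, and this assumes the order lines collection is mapped so the change tracker sees it): load the aggregate, mutate it only through its methods, and let change detection persist just the delta.
using (var context = new StoreContext())
{
    Order order = context.Orders.Include("OrderLines").Single(o => o.Id == orderId);
    order.DeleteOrderLine(orderLineId); // the aggregate enforces the processed-state rule
    context.SaveChanges();              // change detection issues only the order-line delete
}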
