What does the "check in memory" comment mean in Orchard CMS?

I tried to customize the queries executed by Orchard.ContentManagement.DefaultContentManager, but the following piece of code (*1) renders my efforts useless:
class DefaultContentManager
{
    ...
    public virtual ContentItem Get(int id, VersionOptions options, QueryHints hints) {
        ...
        // implementation of the query comes here
        ...
*1 ->   // no record means content item is not in db
        if (versionRecord == null) {
            // check in memory
            var record = _contentItemRepository.Get(id);
            if (record == null) {
                return null;
            }

            versionRecord = GetVersionRecord(options, record);
            if (versionRecord == null) {
                return null;
            }
        }
The query is executed correctly and it does not return any data (which was my goal), but afterwards a second attempt (*1) is made to fetch the content item anyway.
Why is this piece of code there? What is its purpose? And why does the comment say "check in memory" when what is queried is the repository (a DB table)?

It's already been verified at this point that the item doesn't exist in the database, but it may have just been created from code during the same request. In that case, the NHibernate session has the item, but the database doesn't have it yet. The repository hits the session, not the DB directly, so if the item is there, it will be retrieved, but that happens in memory.
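To illustrate, here is a minimal sketch of that situation, assuming a class with an injected IContentManager and a hypothetical "Page" content type (both placeholders, not from the original question):

// Earlier in the same request, some other code creates an item:
var newItem = _contentManager.New("Page");
_contentManager.Create(newItem);   // registered with the NHibernate session, not yet flushed to the DB

// Later in the same request, DefaultContentManager.Get(newItem.Id) runs:
// the version query against the database returns nothing, but the
// _contentItemRepository.Get(id) fallback resolves the record from the
// session cache ("check in memory") and the freshly created item is returned.
var fetched = _contentManager.Get(newItem.Id);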

How do I attach data to custom fields on INTranCost during release of POReceiptEntry?

I need to attach custom data to new fields added to INTranCost when a PO receipt is released.
Following the breadcrumbs, it seems that the POReceiptEntry Release action eventually calls INDocumentRelease.ReleaseDoc, which eventually creates INTranCost. I tried extending both POReceiptEntry and INDocumentRelease to add an INTranCost_RowInserted event that publishes a PXTrace message, but the trace doesn't appear, telling me that I'm not hitting the event I expected. (Which explains why the real business logic I need didn't fire.)
protected virtual void _(Events.RowInserted<INTranCost> e)
{
    PXTrace.WriteInformation("This is it!");
}
Of course, I want to put real code in this spot, but I am just trying to make sure I'm hitting the event properly. This approach has worked on pretty much everything else I've done, including attaching similar data to INTranExt fields, but I cannot get it to work for INTranCost so that I can write to INTranCostExt. At this point, I can't determine whether it's a matter of location (which graph to extend) or of a special methodology required for this case.
I also tried overriding events and putting a breakpoint on the code, but it's like I'm not even in the same process. (Yes, I checked that I am connected to the right Acumatica instance and that I have no errors.)
What event in which graph is required to capture the creation of an INTranCost record for a PO receipt, so I can update custom fields in INTranCostExt?
Using the Request Profiler, I was able to determine that I was close but not deep enough. While the INTranCost object to insert is built in the INDocumentRelease file, the actual insert is processed by the INReleaseProcess graph in that same file.
I only need to execute this "push" of the data captured on the POLine when the INTranCost record is created, and LineNbr is a key field that is never updated after it is set. I need to be sure I have enough data to make the connection back: the primary key links me back to the INTran easily, which subsequently gets back to the POReceiptLine and then to the POLine, where the data whose "current value" must be captured when the transaction is posted is maintained. Since I need to update the DAC extension, I need to use an event that will allow the existing DAC.Update to apply my values. Therefore, I added an event handler on INTranCost_LineNbr_FieldUpdated, since that value should not be "updated" after it is set initially.
Code that accomplished the task:
public class INReleaseProcess_Extension : PXGraphExtension<INReleaseProcess>
{
    public override void Initialize()
    {
        base.Initialize();
    }

    protected virtual void _(Events.FieldUpdated<INTranCost.lineNbr> e)
    {
        INTranCost row = (INTranCost) e.Row;
        INTran tran = PXSelect<INTran,
            Where<INTran.docType, Equal<Required<INTran.docType>>,
                And<INTran.refNbr, Equal<Required<INTran.refNbr>>,
                And<INTran.lineNbr, Equal<Required<INTran.lineNbr>>
            >>>>
            .SelectSingleBound(Base, null, row.DocType, row.RefNbr, (int?) e.NewValue);

        if (tran?.POReceiptType != null && tran?.POReceiptNbr != null)
        {
            PXResultset<POReceiptLine> Results = PXSelectJoin<POReceiptLine,
                InnerJoin<POLine, On<POLine.orderType, Equal<POReceiptLine.pOType>,
                    And<POLine.orderNbr, Equal<POReceiptLine.pONbr>,
                    And<POLine.lineNbr, Equal<POReceiptLine.pOLineNbr>>>>,
                InnerJoin<POOrder, On<POOrder.orderType, Equal<POLine.orderType>,
                    And<POOrder.orderNbr, Equal<POLine.orderNbr>>>>>,
                Where<POReceiptLine.receiptType, Equal<Required<POReceiptLine.receiptType>>,
                    And<POReceiptLine.receiptNbr, Equal<Required<POReceiptLine.receiptNbr>>,
                    And<POReceiptLine.lineNbr, Equal<Required<POReceiptLine.lineNbr>>>>>>
                .SelectSingleBound(Base, null, tran.POReceiptType, tran.POReceiptNbr, tran.POReceiptLineNbr);

            if (Results != null)
            {
                foreach (PXResult<POReceiptLine, POLine, POOrder> result in Results)
                {
                    POReceiptLine receipt = result;
                    POLine line = result;
                    POOrder order = result;
                    POLineExt pOLineExt = PXCache<POLine>.GetExtension<POLineExt>(line);
                    INTranCostExt iNTranCostExt = PXCache<INTranCost>.GetExtension<INTranCostExt>(row);
                    if (pOLineExt != null && iNTranCostExt != null)
                    {
                        Base.Caches[typeof(INTranCost)].SetValueExt<INTranCostExt.usrField>(row, pOLineExt.UsrField);
                    }
                }
            }
        }
    }
}

Azure Mobile Services Soft Delete Issue / Practices

With soft delete turned on, if I add a single record on the client, push, delete the added record, push, and then attempt to add (and push) a new record with the same primary key as the initial record, I get an exception. It would appear that EntityDomainManager just attempts a new insert without checking whether the record should be 'updated' instead of inserted.
However, if I turn off soft delete in the domain manager constructor, everything works fine.
We are using incremental sync, so soft delete, as I understand it, is required to make this work so that we don't end up with different pictures of what's right between mobile and server.
What is the recommended approach? A custom EntityDomainManager (or other DomainManager)? If so, it would be useful to have more clarity on the interactions between the table controller and the domain manager.
I have constructed this custom domain manager which seems to work, but would appreciate any guidance/suggestions.
public class CustomEntityDomainManager<TData> : EntityDomainManager<TData> where TData : class, ITableData
{
    public CustomEntityDomainManager(DbContext context, HttpRequestMessage request, ApiServices services)
        : base(context, request, services)
    {
    }

    public CustomEntityDomainManager(DbContext context, HttpRequestMessage request, ApiServices services, bool enableSoftDelete)
        : base(context, request, services, enableSoftDelete)
    {
    }

    public async override Task<TData> InsertAsync(TData data)
    {
        if (data == null)
        {
            throw new ArgumentNullException("data");
        }

        // now then, if we have soft delete enabled & data has been provided with an id in it
        if (EnableSoftDelete && data.Id != null)
        {
            // now look to see if the record exists and if it is deleted
            // if so we look to remove the record before then attempting the insert
            // record old value of IncludeDeleted, since we need to query to see if it is deleted.
            var oldIncludeDeleted = IncludeDeleted;
            try
            {
                IncludeDeleted = true;
                var existingData = await this.Lookup(data.Id).Queryable.FirstOrDefaultAsync();
                // if the record exists and it is soft deleted, then truly delete it
                if (existingData != null && existingData.Deleted)
                {
                    // now we need to remove this record...
                    this.Context.Set<TData>().Remove(existingData);
                }
            }
            finally
            {
                IncludeDeleted = oldIncludeDeleted;
            }
        }

        if (data.Id == null)
        {
            data.Id = Guid.NewGuid().ToString("N");
        }

        return await base.InsertAsync(data);
    }
}
This behavior is by design: we require that you do an explicit undelete before doing the update.
The solution you've presented is fine. You can also move the code to your table controller, assuming you only need this behavior in one table. If you need it in multiple tables, then the custom domain manager is the best approach.
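For the single-table case, a minimal sketch of doing the same purge in the table controller might look like the following. It assumes the standard quickstart scaffolding (a TodoItem entity deriving from EntityData, a MobileServiceContext, and the "Tables" route); those names are placeholders for your own entity and context.

// requires: using System.Data.Entity;  (for FirstOrDefaultAsync)
public class TodoItemController : TableController<TodoItem>
{
    private MobileServiceContext context;

    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        context = new MobileServiceContext();
        DomainManager = new EntityDomainManager<TodoItem>(context, Request, Services, enableSoftDelete: true);
    }

    // POST tables/TodoItem
    public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
    {
        // If a soft-deleted row with the same id still exists, hard-delete it
        // first so the insert below doesn't violate the primary key.
        if (item.Id != null)
        {
            var existing = await context.Set<TodoItem>()
                .FirstOrDefaultAsync(t => t.Id == item.Id && t.Deleted);
            if (existing != null)
            {
                context.Set<TodoItem>().Remove(existing);
                await context.SaveChangesAsync();
            }
        }

        TodoItem current = await InsertAsync(item);
        return CreatedAtRoute("Tables", new { id = current.Id }, current);
    }
}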

Getting an error creating a Query object in SubSonic

I am getting the following error in one of our environments. It seems to occur when IIS is restarted, but we haven't narrowed down the specifics to reproduce it.
A DataTable named 'PeoplePassword' already belongs to this DataSet.
at System.Data.DataTableCollection.RegisterName(String name, String tbNamespace)
at System.Data.DataTableCollection.BaseAdd(DataTable table)
at System.Data.DataTableCollection.Add(DataTable table)
at SubSonic.SqlDataProvider.GetTableSchema(String tableName, TableType tableType)
at SubSonic.DataService.GetSchema(String tableName, String providerName, TableType tableType)
at SubSonic.DataService.GetTableSchema(String tableName, String providerName)
at SubSonic.Query..ctor(String tableName)
at Wad.Elbert.Data.Enrollment.FetchByUserId(Int32 userId)
Based on the stacktrace, I believe the error is happening on the second line of the method while creating the query object.
Please let me know if anyone else has this problem.
Thanks!
The code for the function is:
public static List<Enrollment> FetchByUserId(int userId)
{
    List<Enrollment> enrollments = new List<Enrollment>();
    SubSonic.Query query = new SubSonic.Query("Enrollment");
    query.SelectList = "userid, prompt, response, validationRegex, validationMessage, responseType, enrollmentSource";
    query.QueryType = SubSonic.QueryType.Select;
    query.AddWhere("userId", userId);
    DataSet dataset = query.ExecuteDataSet();

    if (dataset != null &&
        dataset.Tables.Count > 0)
    {
        foreach (DataRow dr in dataset.Tables[0].Rows)
        {
            enrollments.Add(new Enrollment(
                (int)dr["userId"],
                dr["prompt"].ToString(),
                dr["response"].ToString(),
                dr["validationRegex"] != null ? dr["validationRegex"].ToString() : string.Empty,
                dr["validationMessage"] != null ? dr["validationMessage"].ToString() : string.Empty,
                (int)dr["responseType"],
                (int)dr["enrollmentSource"]));
        }
    }
    return enrollments;
}
This is a threading issue.
SubSonic loads its schema on the first call of SubSonic.DataService.GetTableSchema(...), but this call is not thread-safe.
Let me demonstrate this with a little example:
private static Dictionary<string, DriveInfo> drives = new Dictionary<string, DriveInfo>();

private static DriveInfo GetDrive(string name)
{
    if (drives.Count == 0)
    {
        Thread.Sleep(10000); // fake delay
        foreach (var drive in DriveInfo.GetDrives())
            drives.Add(drive.Name, drive);
    }

    if (drives.ContainsKey(name))
        return drives[name];

    return null;
}
This illustrates well what happens: on the first call to this method the dictionary is empty. If that's the case, the method preloads all drives.
On every call, the requested drive (or null) is returned.
But what happens if you call the method twice right after startup? Then both executions try to load the drives into the dictionary. The first one to add a drive wins; the second will throw an ArgumentException (element already exists).
After the initial preload, everything works fine.
Long story short, you have two choices:
1. Modify the SubSonic source to make SubSonic.DataService.GetTableSchema(...) thread-safe:
http://msdn.microsoft.com/de-de/library/c5kehkcz(v=vs.80).aspx
2. "Warm up" SubSonic before accepting requests. The technique to achieve this depends on your application design. For ASP.NET you have an Application_Start method that is executed only once during your application lifecycle:
http://msdn.microsoft.com/en-us/library/ms178473(v=vs.100).aspx
So you can basically put a
var count = new SubSonic.Query("Enrollment").GetRecordCount();
in that method to force SubSonic to initialize the table schema itself, as sketched below.
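A minimal sketch of that warm-up, assuming a Global.asax in the ASP.NET application and that touching a single query is enough to let SubSonic build its schema cache before concurrent requests arrive:

// Global.asax.cs
public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Executed exactly once per application lifecycle, before any request
        // threads can race into SubSonic.DataService.GetTableSchema(...).
        var count = new SubSonic.Query("Enrollment").GetRecordCount();
    }
}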

How can I update a content item (draft) from a background task in Orchard?

I have a simple IBackgroundTask implementation that performs a query and then either performs an insert or one or more updates depending on whether a specific item exists or not. However, the updates are not persisted, and I don't understand why. New items are created just as expected.
The content item I'm updating has a CommonPart and I've tried authenticating as a valid user. I've also tried flushing the content manager at the end of the Sweep method. What am I missing?
This is my Sweep, slightly edited for brevity:
public void Sweep()
{
    // Authenticate as the site's super user
    var superUser = _membershipService.GetUser(_orchardServices.WorkContext.CurrentSite.SuperUser);
    _authenticationService.SetAuthenticatedUserForRequest(superUser);

    // Create a dummy "Person" content item
    var item = _contentManager.New("Person");
    var person = item.As<PersonPart>();
    if (person == null)
    {
        return;
    }
    person.ExternalId = Random.Next(1, 10).ToString();
    person.FirstName = GenerateFirstName();
    person.LastName = GenerateLastName();

    // Check if the person already exists
    var matchingPersons = _contentManager
        .Query<PersonPart, PersonRecord>(VersionOptions.AllVersions)
        .Where(record => record.ExternalId == person.ExternalId)
        .List().ToArray();

    if (!matchingPersons.Any())
    {
        // Insert the new person and quit
        _contentManager.Create(item, VersionOptions.Draft);
        return;
    }

    // There is at least one matching person; update them
    foreach (var updatedPerson in matchingPersons)
    {
        updatedPerson.FirstName = person.FirstName;
        updatedPerson.LastName = person.LastName;
    }

    _contentManager.Flush();
}
Try to add _contentManager.Publish(updatedPerson). If you do not want to publish but just to save, you don't need to do anything more, as changes in Orchard are saved automatically unless the ambient transaction is aborted. The call to Flush is not necessary at all. This is the case both during a regular request and in a background task.
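Applied to the Sweep method above, a minimal sketch of the update loop with the publish call added (updatedPerson is a PersonPart, so its ContentItem is what gets published):

foreach (var updatedPerson in matchingPersons)
{
    updatedPerson.FirstName = person.FirstName;
    updatedPerson.LastName = person.LastName;

    // Promote the modified version to published; if a draft-only save is
    // enough, this call (and the Flush) can simply be omitted.
    _contentManager.Publish(updatedPerson.ContentItem);
}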

What is the best way to recycle Domino objects in Java Beans

I use a function to get access to a configuration document:
private Document lookupDoc(String key1) {
    try {
        Session sess = ExtLibUtil.getCurrentSession();
        Database wDb = sess.getDatabase(sess.getServerName(), this.dbname1);
        View wView = wDb.getView(this.viewname1);
        Document wDoc = wView.getDocumentByKey(key1, true);
        this.debug("Got a doc for key: [" + key1 + "]");
        return wDoc;
    } catch (NotesException ne) {
        if (this.DispLookupErrors)
            ne.printStackTrace();
        this.lastErrorMsg = ne.text;
        this.debug(this.lastErrorMsg, "error");
    }
    return null;
}
In another method I use this function to get the document:
Document wDoc = this.lookupDoc(key1);
if (wDoc != null) {
    // do things with the document
    wDoc.recycle();
}
Should I be recycling the Database and View objects when I recycle the Document object? Or should those be recycled before the function returns the Document?
The best practice is to recycle all Domino objects during the scope within which they are created. However, recycling any object automatically recycles all objects "beneath" it. Hence, in your example method, you can't recycle wDb, because that would cause wDoc to be recycled as well, so you'd be returning a recycled Document handle.
So if you want to make sure that you're not leaking memory, it's best to recycle objects in reverse order (e.g., document first, then view, then database). This tends to require structuring your methods such that you do whatever you need to/with a Domino object inside whatever method obtains the handle on it.
For instance, I'm assuming the reason you defined a method to get a configuration document is so that you can pull the value of configuration settings from it. So, instead of a method to return the document, perhaps it would be better to define a method to return an item value:
private Object lookupItemValue(String configKey, String itemName) {
    Object result = null;
    Database wDb = null;
    View wView = null;
    Document wDoc = null;
    try {
        Session sess = ExtLibUtil.getCurrentSession();
        wDb = sess.getDatabase(sess.getServerName(), this.dbname1);
        wView = wDb.getView(this.viewname1);
        wDoc = wView.getDocumentByKey(configKey, true);
        this.debug("Got a doc for key: [" + configKey + "]");
        result = wDoc.getItemValue(itemName);
    } catch (NotesException ne) {
        if (this.DispLookupErrors)
            ne.printStackTrace();
        this.lastErrorMsg = ne.text;
        this.debug(this.lastErrorMsg, "error");
    } finally {
        incinerate(wDoc, wView, wDb);
    }
    return result;
}
There are a few things about the above that merit an explanation:
Normally in Java, we declare variables at first use, not Table of Contents style. But with Domino objects, it's best to revert to TOC so that, whether or not an exception was thrown, we can try to recycle them when we're done... hence the use of finally.
The return Object (which should be an item value, not the document itself) is also declared in the TOC, so we can return that Object at the end of the method - again, whether or not an exception was encountered (if there was an exception, presumably it will still be null).
This example calls a utility method that allows us to pass all Domino objects to a single method call for recycling.
Here's the code of that utility method:
private void incinerate(Object... dominoObjects) {
    for (Object dominoObject : dominoObjects) {
        if (null != dominoObject) {
            if (dominoObject instanceof Base) {
                try {
                    ((Base) dominoObject).recycle();
                } catch (NotesException recycleSucks) {
                    // optionally log exception
                }
            }
        }
    }
}
It's private, as I'm assuming you'll just define it in the same bean, but lately I tend to define this as a public static method of a Util class, allowing me to follow this same pattern from pretty much anywhere.
One final note: if you'll be retrieving numerous item values from a config document, it would obviously be expensive to establish new database, view, and document handles for every item value you want to return. So I'd recommend overloading this method to accept a List<String> (or String[]) of item names and return a Map<String, Object> of the resulting values. That way you can establish a single handle for the database, view, and document, retrieve all the values you need, then recycle the Domino objects before actually making use of the item values returned.
Here's an idea I'm experimenting with. Tim's answer is excellent; however, in my case I really needed the document itself for other purposes, so I've tried this:
Document doc = null;
View view = null;
try {
    Database database = ExtLibUtil.getCurrentSessionAsSigner().getCurrentDatabase();
    view = database.getView("LocationsByLocationCode");
    doc = view.getDocumentByKey(code, true);
    // need to get this via the db object directly so we can safely recycle the view
    String id = doc.getNoteID();
    doc.recycle();
    doc = database.getDocumentByID(id);
} catch (Exception e) {
    log.error(e);
} finally {
    JSFUtils.incinerate(view);
}
return doc;
You then have to make sure you're recycling the doc object safely in whatever method calls this one.
I have some temporary documents which exist for a while as config docs and are then no longer needed, so they get deleted. This is kind of enforced by an existing Notes client application: they have to exist to keep that happy.
I've written a class which holds a HashMap of Java Date, String, and Double values with the item name as the key. So now I have a serializable representation of the document, plus the original doc noteID so that it can be found quickly and amended/deleted when it's no longer needed.
That means the config doc can be collected, a standard routine creates a map of all items for the Java representation, taking the item type into account, and the doc object can then be recycled right away.
The return object is the Java class representation of the document, with getValue(String name) and setValue(String name, val), where val can be a Double, String, or Date. NB: this structure has no need for rich text or attachments, so it's kept to simple field values.
It works well, although if the config doc has lots of items it could mean holding a lot of info in memory unnecessarily. Not so in my particular case though.
The point is: the Java object is now serializable so it can remain in memory, and as Tim's brilliant reply suggests, the document can be recycled right away.
