I have records that have an index attribute to maintain their position in relation to each other.
I have a plugin that performs a renumbering operation on these records when an index is changed or a new record is created. Specific rules apply to the items at the first and last positions in the list.
If a new (or existing changed) item is inserted into the middle (not technically the middle...just somewhere between start and end) of the list a renumbering kicks off to make room for the record.
This renumbering process fires in a new execution pipeline. Say we are updating record D: when I tell record E to change (to make room for D), that of course fires the plugin on the Update message.
This renumbering is fine until we reach the end of the list, where the plugin gets into a loop with the business rule that maintains the first and last records differently.
So I am trying to think of ways to pass a flag to the execution context spawned by the renumbering process so the recursion skips the boundary edge business rules if IsRenumbering == true.
My thoughts / ideas:
I have thought of checking Depth > 1, but that isn't a reliable value since I can't explicitly turn it on or off. It may happen to work, but that is hoping nothing goes bump rather than engineering a solid solution. Further, a colleague far more knowledgeable than I said that when a workflow calls a plugin the Depth value is off and can't be trusted.
All my variables are scoped at the Execute level so as to avoid variable pollution at the class level. However, if I had a dictionary, tuple, or something similar at the class level, with the thread id as one value and the flag as the other, then perhaps the subsequent execution context could check whether the same owning thread id had any values entered (sketched below).
Any thoughts or other ideas on how to pass context information to a new pipeline would be greatly appreciated.
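Roughly what I have in mind (untested sketch with hypothetical names; the dictionary would have to be static and thread-safe, since plugin instances are cached and shared across requests, and it relies on the child pipeline running synchronously on the same thread):
private static readonly ConcurrentDictionary<int, bool> IsRenumberingByThread =
    new ConcurrentDictionary<int, bool>();

public void Execute(IServiceProvider serviceProvider)
{
    int threadId = Thread.CurrentThread.ManagedThreadId;
    bool isRenumbering;
    // A synchronous child pipeline runs on the same thread as its parent,
    // so a flag set by the parent execution should be visible here.
    if (IsRenumberingByThread.TryGetValue(threadId, out isRenumbering) && isRenumbering)
        return; // or just skip the first/last boundary business rules

    IsRenumberingByThread[threadId] = true;
    try
    {
        // ...renumbering logic that calls OrganizationService.Update(...)...
    }
    finally
    {
        IsRenumberingByThread.TryRemove(threadId, out isRenumbering);
    }
}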
Per Nicknow's suggestion I tried SharedVariables, but they seem to be going out of scope:
First time firing, post-operation:
if (base.Stage == EXrmPluginStepStage.PostOperation)
{
    ...snip...
    foreach (var item in RenumberSet)
    {
        Context.ParentContext.SharedVariables[recordrenumbering] = "googly";
        Entity renumrec = new Entity("abcd") { Id = item.Id };
        #region We either add or subtract indexes based upon sortdir
        ...snip...
        renumrec["abc_indexfield"] = TmpIdx + 1;
        break;
        .....snip.....
        #endregion
        OrganizationService.Update(renumrec);
    }
}
Now we come into the pre-operation stage of the recursion kicked off by the post-operation OrganizationService.Update(renumrec) above, and based upon this check it seems the shared variable didn't carry over:
if (!Context.SharedVariables.Contains(recordrenumbering))
{
    //Trace.Trace("Null Set");
    //Context.SharedVariables[recordrenumbering] = IsRenumbering;
    Context.SharedVariables[recordrenumbering] = "Null Set";
}
Throwing an InvalidPluginExecutionException reveals:
Sanity Checks:
Depth : 2
Entity: ...
Message: Update
Stage: PreOperation [20]
User: 065507fe-86df-e311-95fe-00155d050605
Initiating User: 065507fe-86df-e311-95fe-00155d050605
ContextEntityName: ....
ContextParentEntityName: ....
....
IsRenumbering: Null Set
What you are looking for is IExecutionContext.SharedVariables. Whatever you add here is available throughout the entire transaction. Since you'll have child pipelines you'll want to look at the ParentContext for the value. This can all get a little tricky, so be sure to do a lot of testing - I've run into many issues with SharedVariables and looping operations in Dynamics CRM.
Here is some sample (very untested) code to get you started.
public static bool GetIsRenumbering(IPluginExecutionContext pluginContext)
{
    var keyName = "IsRenumbering";
    // Walk up the chain of parent contexts until the flag is found.
    var ctx = pluginContext;
    while (ctx != null)
    {
        if (ctx.SharedVariables.Contains(keyName))
        {
            return (bool)ctx.SharedVariables[keyName];
        }
        ctx = ctx.ParentContext;
    }
    return false;
}
public static void SetIsRenumbering(IPluginExecutionContext pluginContext)
{
    var keyName = "IsRenumbering";
    // Use the indexer rather than Add so a repeat call doesn't throw on a duplicate key.
    pluginContext.SharedVariables[keyName] = true;
}
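And roughly how the plugin would use them (equally untested; assumes the Context and OrganizationService properties from your snippets):
// At the top of Execute, before the boundary business rules run:
if (GetIsRenumbering(Context))
    return; // a parent pipeline is already renumbering; skip the first/last rules

// In the post-operation stage, before kicking off the child updates:
SetIsRenumbering(Context);
foreach (var item in RenumberSet)
{
    // ...build renumrec as above...
    OrganizationService.Update(renumrec); // the child pipeline finds the flag via ParentContext
}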
A very simple solution: add a bit field to the entity called "DisableIndexRecalculation." When your first plugin runs, make sure to set that field to true for all of your updates. In the same plugin, check to see if "DisableIndexRecalculation" is set to true: if so, set it to null (by removing it from the TargetEntity entirely) and stop executing the plugin. If it is null, do your index recalculation.
Because you are immediately removing the field from the TargetEntity when it is true, the value will never be persisted to the database, so there will be no performance penalty.
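For illustration, the pattern could look roughly like this (untested sketch; "abc_disableindexrecalculation" is a hypothetical field name, and the Target is retrieved the usual way from InputParameters):
var target = (Entity)Context.InputParameters["Target"];

// GetAttributeValue<bool> returns false when the attribute is absent.
if (target.GetAttributeValue<bool>("abc_disableindexrecalculation"))
{
    // Strip the flag so it is never persisted, then stop executing.
    target.Attributes.Remove("abc_disableindexrecalculation");
    return;
}

// Otherwise do the index recalculation, setting the flag on every record
// we update so the nested pipelines skip this logic:
renumrec["abc_disableindexrecalculation"] = true;
OrganizationService.Update(renumrec);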
I want to get all the attributes from my "Actual Item Inventory" (from the Stock Items form), so I have:
PXResultset<CSAnswers> res = PXSelectJoin<CSAnswers,
    InnerJoin<InventoryItem,
        On<CSAnswers.refNoteID, Equal<Current<InventoryItem.noteID>>>>>
    .Select(new PXGraph());
But this returns 0 rows.
Where is my error?
UPDATED:
My loop is like this:
foreach (PXResult<CSAnswers> record in res)
{
    CSAnswers answers = (CSAnswers)record;
    string refnoteid = answers.RefNoteID.ToString();
    string value = answers.Value;
}
...but I cannot get inside the foreach.
Sorry for the English.
You should use an initialized graph rather than just "new PXGraph()" for the select. This can be as simple as "this" or "Base", depending on where this code is located. There are times when it is OK to initialize a new graph instance, and times when it is not. Not knowing the context of your code sample, let's assume that "this" and "Base" are insufficient and you need to initialize a new graph. If you need to work within another graph instance, this is how your code would look:
InventoryItemMaint graph = PXGraph.CreateInstance<InventoryItemMaint>();
PXResultset<CSAnswers> res = PXSelectJoin<CSAnswers,
    InnerJoin<InventoryItem, On<CSAnswers.refNoteID, Equal<Current<InventoryItem.noteID>>>>>
    .Select(graph);
foreach (PXResult<CSAnswers> record in res)
{
    CSAnswers answers = (CSAnswers)record;
    string refnoteid = answers.RefNoteID.ToString();
    string value = answers.Value;
}
However, since you should be initializing graph within a graph or graph extension, you should be able to use:
.Select(this) // To use the current graph containing this logic
or
.Select(Base) // To use the base graph that is being extended if in a graph extension
Since you are referring to:
Current<InventoryItem.noteID>
...but are using "new PXGraph()", there is no "InventoryItem" record in the Current data cache of the generic base object PXGraph. Hence the need to reference a fully defined graph.
Another syntax for specifying exactly what value you want to pass in is to use a parameter like this:
var myNoteIdVariable = ...
InventoryItemMaint graph = PXGraph.CreateInstance<InventoryItemMaint>();
PXResultset<CSAnswers> res = PXSelectJoin<CSAnswers,
    InnerJoin<InventoryItem, On<CSAnswers.refNoteID, Equal<Required<InventoryItem.noteID>>>>>
    .Select(graph, myNoteIdVariable);
foreach (PXResult<CSAnswers> record in res)
{
    CSAnswers answers = (CSAnswers)record;
    string refnoteid = answers.RefNoteID.ToString();
    string value = answers.Value;
}
Notice the "Required" and the extra value in the Select() section. A quick and easy way to check whether you have a value for your parameter is to use PXTrace to write to the Trace, which you can check after refreshing the screen and performing whatever action executes your code:
PXTrace.WriteInformation(myNoteIdVariable.ToString());
...to see if there is a value in myNoteIdVariable to retrieve a result set. Place that outside of the foreach block or you will only get a value in the trace when you actually get records... which is not happening in your case.
If you want to get deep into what SQL statements are being generated and executed, look for Request Profiler in the menus and enable SQL logging while you run a test. Then come back to check the results. (Remember to disable the SQL logging when done or you can generate a lot of unnecessary data.)
We have a business requirement to set the SO return COST to the original cost issued without invoicing if possible. We determined that Sales Orders are necessary to track issuing materials to our client, and we are cost driven rather than price driven. We use FIFO costing, but SO return orders do not seem to return at the original COST unless invoiced (which we also don't do in a traditional manner).
I found that setting the unit/ext cost on the SO Shipment Line directly in the database before Confirm Shipment and Update IN appears to provide the results desired. Applying a custom menu option to streamline and strongly control the return, I cloned nearby code as a base. The section between the === is where I set the unit/ext cost. The PXTrace shows the expected value, but it is coming out as $0 on the shipment record. I thought I might need "docgraph.Update(sOShipmentLine)" to save it, but that's not accessible in this scope.
using (var ts = new PXTransactionScope())
{
    PXTimeStampScope.SetRecordComesFirst(typeof(SOOrder), true);
    //Reminder - SOShipmentEntry docgraph = PXGraph.CreateInstance<SOShipmentEntry>();
    docgraph.CreateShipment(order, SiteID, filter.ShipDate, adapter.MassProcess, SOOperation.Receipt, created, adapter.QuickProcessFlow);

    PXTrace.WriteError("Setting Cost");

    // Set cost on shipment line to cost on SO line
    PXResultset<SOShipment> results =
        PXSelectJoin<SOShipment,
            InnerJoin<SOShipLine, On<SOShipLine.shipmentNbr, Equal<SOShipment.shipmentNbr>>,
            InnerJoin<SOLine, On<SOLine.orderType, Equal<SOShipLine.origOrderType>,
                And<SOLine.orderNbr, Equal<SOShipLine.origOrderNbr>, And<SOLine.lineNbr, Equal<SOShipLine.origLineNbr>>>>>>,
            Where<SOShipment.shipmentNbr, Equal<Required<SOShipment.shipmentNbr>>>>
        .Select(docgraph, docgraph.Document.Current.ShipmentNbr);

    PXTrace.WriteError("Shipment {0} - Records {1}", docgraph.Document.Current.ShipmentNbr, results.Count);

    foreach (PXResult<SOShipment, SOShipLine, SOLine> record in results)
    {
        SOShipment shipment = (SOShipment)record;
        SOShipLine shipmentLine = (SOShipLine)record;
        SOLine sOLine = (SOLine)record;
        //==============================================
        shipmentLine.UnitCost = GetReturnUnitCost(sOLine.OrigOrderType, sOLine.OrigOrderNbr, sOLine.OrigLineNbr, sOLine.CuryInfoID);
        shipmentLine.ExtCost = shipmentLine.Qty * shipmentLine.UnitCost;
        PXTrace.WriteError(string.Format("{0} {1}-{2} = {3} / {4}", shipmentLine.LineType, shipmentLine.ShipmentNbr, shipmentLine.LineNbr, shipmentLine.Qty, shipmentLine.UnitCost));
        //==============================================
    }

    PXAutomation.CompleteSimple(docgraph.Document.View);
    var items = new List<object> { order };
    PXAutomation.RemovePersisted(docgraph, typeof(SOOrder), items);
    PXAutomation.RemoveProcessing(docgraph, typeof(SOOrder), items);
    ts.Complete();
}
Still on the learning curve so I'm expecting the solution is likely simple and obvious to someone more experienced.
There are three phases to it:
Changing the value
Updating the cache
Persisting the cache
I think you are changing the value but not persisting it. The reason it works after invoking the Confirm Shipment or Update IN actions is probably that those actions persist all changes by calling the graph's Save action.
To change a field value in a data view you would do:
DACRecord.Field = value;
DataView.Update(DACRecord);
The particularity of your example is that the request is not bound to a data view.
When you have a loose BQL request you can do the same operation with a cache object. In your example the Caches context is available from docgraph:
DACRecord.Field = value;
graph.Caches[typeof(DACType)].Update(DACRecord);
graph.Caches[typeof(DACType)].Persist(DACRecord, PXDBOperation.Update);
Update and Persist are often omitted because in many scenarios they will be called later by other framework mechanisms. For example, if you only Update a UI field, the record won't be persisted until the user clicks the Save button.
Updating a value in the UI is a bit different from updating it in the cache.
The recommended approach for UI fields is to use SetValue:
cache.SetValue<DAC.DacField>(DACRecord, fieldValue);
Or use SetValueExt when you want changing the field value to trigger framework events like FieldUpdated:
cache.SetValueExt<DAC.DacField>(DACRecord, fieldValue);
You'll still have to update and persist the changes in cache for these too if you want the changes to stick without requiring the user to manually save the document.
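Applied to the shipment loop in your question, it would look something like this (untested sketch, reusing docgraph and the loop variables from that code):
shipmentLine.UnitCost = GetReturnUnitCost(sOLine.OrigOrderType, sOLine.OrigOrderNbr, sOLine.OrigLineNbr, sOLine.CuryInfoID);
shipmentLine.ExtCost = shipmentLine.Qty * shipmentLine.UnitCost;

// Push the modified record through the cache so the change is tracked...
docgraph.Caches[typeof(SOShipLine)].Update(shipmentLine);
// ...and persist it, since nothing later in this scope calls the graph's Save.
docgraph.Caches[typeof(SOShipLine)].Persist(shipmentLine, PXDBOperation.Update);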
I seem to be having a problem with assigning values to fields of a content item with a custom content part and the values not persisting.
I have to create the content item (OrchardServices.ContentManager.Create) first before calling the following code which modifies a field value:
var fields = contentItem.As<MyPart>().Fields;
var imageField = fields.FirstOrDefault(o => o.Name.Equals("Image"));
if (imageField != null)
{
    ((MediaLibraryPickerField)imageField).Ids = new int[] { imageId };
}
The above code works perfectly against an item that already exists, but the imageId value is lost if this is done before creating the item.
Please note, this is not exclusive to MediaLibraryPickerFields.
I noticed that other people have reported this as well:
https://orchard.codeplex.com/workitem/18412
Is it simply the case that an item must be created prior to amending its field values?
That would be a shame, as I'm assigning these fields as part of a large import process, and having to create the item and then modify it, only to update it again, would hurt performance.
As the comments on this issue explain, you do need to call Create. I'm not sure I understand why you think that is an issue however.
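For reference, the working order of operations is roughly this (sketch; assumes an injected IContentManager in _contentManager and the type/field names from the question):
var item = _contentManager.New("MyContentType");
_contentManager.Create(item); // create the item first

var imageField = item.As<MyPart>().Fields
    .FirstOrDefault(f => f.Name.Equals("Image")) as MediaLibraryPickerField;
if (imageField != null)
{
    imageField.Ids = new int[] { imageId }; // assigned after Create, so the value persists
}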
I have the following c# code:
private XElement BuildXmlBlob(string id, Part part, out int counter)
{
    // return some unique xml particular to the parameters passed
    // remember to increment the counter also before returning.
}
Which is called by:
var counter = 0;
result.AddRange(from rec in listOfRecordings
                from par in rec.Parts
                let id = GetId("mods", rec.CKey + par.UniqueId)
                select BuildXmlBlob(id, par, counter));
Above code samples are symbolic of what I am trying to achieve.
According to Eric Lippert, the out keyword and LINQ do not mix. OK, fair enough, but can someone help me refactor the above so it does work? A colleague at work mentioned accumulator and aggregate functions, but I am a novice at LINQ and my Google searches weren't bearing any real fruit, so I thought I would ask here :).
To Clarify:
I am counting the parts, and there could be any number of them each time the code is called. Every time BuildXmlBlob() is called, the resulting XML will have a unique element denoting the 'partNumber'.
So if the counter is currently on 7, we are processing the 7th part so far, and the XML returned from BuildXmlBlob() will have that counter value embedded somewhere. That's why I need it to be passed in and incremented every time BuildXmlBlob() is called.
If you want to keep this purely in LINQ and you need to maintain a running count for use within your queries, the cleanest way is to use the Select() overload that passes the current element's index into the projection.
In this case, it would be cleaner to do a query which collects the inputs first, then use the overload to do the projection.
var inputs =
from recording in listOfRecordings
from part in recording.Parts
select new
{
Id = GetId("mods", recording.CKey + part.UniqueId),
Part = part,
};
result.AddRange(inputs.Select((x, i) => BuildXmlBlob(x.Id, x.Part, i)));
Then you wouldn't need to use the out/ref parameter.
XElement BuildXmlBlob(string id, Part part, int counter)
{
// implementation
}
Below is what I managed to figure out on my own:
result.AddRange(listOfRecordings.SelectMany(rec => rec.Parts, (rec, par) => new { rec, par })
    .Select(@t => new
    {
        @t,
        Id = GetStructMapItemId("mods", @t.rec.CKey + @t.par.UniqueId)
    })
    .Select((@t, i) => BuildPartsDmdSec(@t.Id, @t.@t.par, i)));
I used ReSharper to convert it into a method chain, which constructed the basics of what I needed, and then I simply tacked the Select statement on at the end.
I am trying to execute parallel functions on a list of objects using the new .NET 4.0 Parallel.ForEach method. This is a very long maintenance process. I would like to make it execute in the order of the list so that I can stop and continue execution at the previous point. How do I do this?
Here is an example. I have a list of objects: a1 to a100. This is the current order:
a1, a51, a2, a52, a3, a53...
I want this order:
a1, a2, a3, a4...
I am OK with some objects being run out of order, as long as I can find a point in the list where I can say that all objects before it have been run. I read the C# parallel programming whitepaper and didn't see anything about this, and there isn't a setting for it in the ParallelOptions class.
Do something like this:
int current = 0;
object lockCurrent = new object();
Parallel.For(0, list.Count,
    new ParallelOptions { MaxDegreeOfParallelism = MaxThreads },
    (ii, loopState) =>
    {
        // The way Parallel.For works is that it chunks the task list up, with each thread getting a chunk to work on...
        // e.g. [1-1,000], [1,001-2,000], [2,001-3,000] etc...
        // We have prioritized our job queue such that more important tasks come first, so we don't want the task
        // list to be broken up; we want it run in roughly the same order we started with. So we ignore the
        // passed-in loop variable and just increment our own counter.
        int thisCurrent = 0;
        lock (lockCurrent)
        {
            thisCurrent = current;
            current++;
        }
        dothework(list[thisCurrent]);
    });
You can see how, when you break out of the parallel for loop, you will know the last list item to be executed, assuming you let all threads finish before breaking. I'm not a big fan of PLINQ or LINQ; I honestly don't see how writing LINQ/PLINQ leads to maintainable or readable source code. Parallel.For is a much better solution.
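As a side note, the lock can be replaced with Interlocked.Increment, which hands each thread a unique index without blocking. An equivalent sketch:
int current = -1; // Increment returns the new value, so the first thread gets index 0
Parallel.For(0, list.Count,
    new ParallelOptions { MaxDegreeOfParallelism = MaxThreads },
    (ii, loopState) =>
    {
        // Atomically claim the next unprocessed index, ignoring the partitioned loop variable.
        int thisCurrent = Interlocked.Increment(ref current);
        dothework(list[thisCurrent]);
    });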
If you use Parallel.Break to terminate the loop then you are guaranteed that all indices below the returned value will have been executed. This is about as close as you can get. The example here uses For, but ForEach has similar overloads.
int n = ...
var result = new double[n];
var loopResult = Parallel.For(0, n, (i, loopState) =>
{
if (/* break condition is true */)
{
loopState.Break();
return;
}
result[i] = DoWork(i);
});
if (!loopResult.IsCompleted &&
loopResult.LowestBreakIteration.HasValue)
{
Console.WriteLine("Loop encountered a break at {0}",
loopResult.LowestBreakIteration.Value);
}
In a ForEach loop, an iteration index is generated internally for each element in each partition. Execution takes place out of order but after break you know that all the iterations lower than LowestBreakIteration will have been completed.
Taken from "Parallel Programming with Microsoft .NET" http://parallelpatterns.codeplex.com/
Available on MSDN. See http://msdn.microsoft.com/en-us/library/ff963552.aspx. The section "Breaking out of loops early" covers this scenario.
See also: http://msdn.microsoft.com/en-us/library/dd460721.aspx
For anyone else who comes across this question - if you're looping over an array or list (rather than an IEnumerable), you can use the overload of Parallel.ForEach that gives the element index, to maintain the original order too.
string[] MyArray; // array of stuff to do parallel tasks on
string[] ProcessedArray = new string[MyArray.Length];
Parallel.ForEach(MyArray, (ArrayItem, loopstate, ArrayElementIndex) =>
{
    string ProcessedArrayItem = TaskToDo(ArrayItem);
    ProcessedArray[ArrayElementIndex] = ProcessedArrayItem;
});
As an alternate suggestion, you could record which objects have been run and then, when you resume execution, filter the list to exclude the objects that have already run.
If this needs to persist across application restarts, you can store the IDs of the already-executed objects (I assume here the objects have some unique identifier).
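A rough sketch of that bookkeeping (assumes each object exposes a unique Id; DoWork, list, and the log file name are stand-ins for your own processing):
// Load the IDs processed by a previous run, if any.
var done = new HashSet<string>(
    File.Exists("processed.log") ? File.ReadLines("processed.log") : Enumerable.Empty<string>());

var remaining = list.Where(x => !done.Contains(x.Id)).ToList();

Parallel.ForEach(remaining, item =>
{
    DoWork(item);
    lock (done) // serialize writes; File.AppendAllText is not thread-safe
    {
        File.AppendAllText("processed.log", item.Id + Environment.NewLine);
    }
});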
For anybody looking for a simple solution, I have posted 2 extension methods (one using PLINQ and one using Parallel.ForEach) as part of an answer to the following question:
Ordered PLINQ ForAll
Not sure if the question was altered, as my comment seems wrong now; here is an improved answer.
The basic reminder is that parallel jobs run in an order outside your control: printing 10 numbers might result in 1, 4, 6, 7, 2, 3, 9, 0.
If you would like to stop your program and continue later, problems like this usually end up being solved by batching the workload and keeping a log of what was done.
Say you had to check 10,000 numbers for primality. You could loop in batches of size 100 and keep a prime log per batch:
log1 = 0..99
log2 = 100..199
Be sure to set some marker recording when a batch job has finished.
It's a general approach, since the question isn't that exact either; see the sketch below.
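For example (sketch; IsPrime and the file names are hypothetical):
const int batchSize = 100;
const int total = 10000;
for (int batch = 0; batch * batchSize < total; batch++)
{
    int start = batch * batchSize;

    // Check one batch in parallel; order within the batch doesn't matter.
    var primes = Enumerable.Range(start, batchSize)
        .AsParallel()
        .Where(IsPrime)
        .OrderBy(n => n)
        .ToList();

    File.WriteAllLines("prime_log" + batch + ".txt", primes.Select(n => n.ToString()));
    // Marker so a restart knows this batch is finished.
    File.AppendAllText("completed_batches.txt", batch + Environment.NewLine);
}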