I have a graph used by a screen that I'm accessing via a webservice endpoint, e.g.
http://myserver/entity/MyEndpoints/17.200.001/MyEndpoint
In the graph I'm defining
public PXSelectReadonly<MyDAC> Items;
and then a delegate to serve the Items
protected virtual IEnumerable items()
When I try to access the endpoint, I get an error:
"There is a BqlDelegate in view Items"
and an exception of type PX.Api.ContractBased.OptimizedExport.CannotOptimizeException
I can't find anything on this anywhere, so I'm a bit stumped.
The reason I didn't use a generic inquiry for this is the logic in the delegate, which is a bit more intricate than what GIs are capable of.
Any ideas?
I used JetBrains dotPeek to dig into Acumatica's source code and found an answer to my own question, which at least allows me to use the graph as-is.
Adding the following attribute declaration to the graph class allows the delegate to be used:
[NonOptimizable(IgnoreOptimizationBehavior = true)]
Acumatica tries to optimize the API call by "translating" it into a single SQL query. It does this by looking at your endpoint definition, the fields you want returned, and the actual BQL queries behind them. If you have a delegate, it can't and won't do this, unless you tell the system that it is "safe" to do so.
In the past, Acumatica would end up silently executing the delegates, leading to very poor performance when doing a GetList() on a large dataset. A good example is the Sales Order screen; there's a delegate behind the Transactions view (SOLine). Without this optimization, returning all orders and order lines in a single GetList() call would force the system to invoke the delegate for every single order, resulting in hundreds or even thousands of SQL queries!
In this case, however, the delegate is only used to cache some data related to the item cost, so the Transactions view delegate was marked with the following attribute:
[Api.Export.PXOptimizationBehavior(IgnoreBqlDelegate = true)]
protected virtual IEnumerable transactions()
That tells the system that it is safe to ignore the delegate, and optimize the Transactions view like any other PXSelect. If the delegates must be invoked for your data to be returned properly, then you must use another attribute, this time on the graph itself:
[Api.Export.NonOptimizable(IgnoreOptimizationBehavior = true)]
public class SomeGraph : PXGraph<SomeGraph>
No optimization will be done by the system, and the system will invoke the delegate at all times. This will more than likely result in a slower-than-expected response, especially when the delegate is not on the primary view. There's another variant of this attribute that lets you identify specific fields that can't be optimized; optimization then stays enabled unless you include those fields as part of your endpoint:
[Api.Export.NonOptimizable(new Type[] { typeof(ARPayment.adjDate), typeof(ARPayment.curyApplAmt) })]
In the past, Acumatica silently reverted to non-optimized behaviour, but this was very problematic because small changes to a graph would wreak havoc on API performance, and problems would get detected too late.
Note that these attributes are undocumented and, for now, used internally only. My advice is to avoid them as much as you can, and stick to simple graphs and selects without delegates if possible...
In the article "Improve XPages Application Performance with JSON-RPC" Brad Balassaitis writes:
For example, if you have a repeat control with a collection named myRepeat and
a property named myProperty, you could pass/retrieve it in client-side JavaScript
with this syntax:
'#{javascript: myRepeat.myProperty}'
Then your call to the remote method would look like this:
myRpcService.setScopeVar('#{javascript: myRepeat.myProperty}');
If I look at the xp:repeat control, where should I set this myProperty property?
My idea is to display values from another source within a repeat control. So for each entry in the repeat control I would like to make a call via the Remote Service control and add additional information received from the service.
Anyone achieved this before?
JSON-RPC is just a mechanism that allows you to trigger server-side code without needing a full partial refresh. myProperty is not an actual property, just as myRepeat would not, in practice, be the name of your repeat; it's an example.
Do you want the user to click on something in the row in order to load additional information? That's the only use case for going down the RPC route.
If you want to show additional information not available in the current entry but based on a property of that entry, just add a control and compute the value.
In terms of optimisation, unless you're displaying hundreds of rows at a time, or loading data from lots of different databases or views based on a property in the current row, it should be pretty quick. I would recommend getting it working first, then optimising if you find server-side performance is an issue. view.isRenderingPhase() is a good option for optimising performance of read-only data within a repeat, as is custom language to minimise the amount of HTML pushed to the browser, and so is using a dataContext to ensure you only do the lookup to e.g. another document once. But if the network requests to and from the server are slow, optimising the server-side code to shave a fraction of a second off processing will not have a very perceptible impact.
1. iOS 8 introduced NSAsynchronousFetchRequest, and we can also create a 'private context' to fetch results. So what is the difference between an asynchronous fetch request and creating a private context?
2. Must the context of an NSFetchedResultsController be of MainQueueConcurrencyType (blocking the UI)? Is there any solution to this?
An asynchronous fetch request will create another context and perform the fetch on that context. Creating a private context yourself means that you can work on managed objects on the background context without having to block the main thread while that work is being performed. If you have your own context, though, you're going to have to transfer the managed objects onto the main thread yourself, whereas an asynchronous fetch request already does that for you.
Fetched results controllers do not necessarily need a context that is MainQueueConcurrencyType, but do remember that if it's PrivateQueueConcurrencyType, then the cache won't work and you will need to use the performBlock: method in order to work with the objects. Your UI can get blocked while fetching objects for an FRC, but it shouldn't take a long time. If you need speed from Core Data, index your entities first. If you want to make sure you have data before a fetch, you can use an asynchronous fetch request with countForFetchRequest: to have just a number returned and act accordingly.
In our Wicket application I need to start a long-running operation. It will communicate with an external device and provide a result after some time (up to a few minutes).
Java-wise, the long-running operation is started by a method to which I can provide a callback.
public interface LegacyThingy {
void startLegacyWork(WorkFinished callback);
}
public interface WorkFinished {
public void success(Whatever ...);
// failure never happens
}
On my Wicket page I plan to add an Ajax button to invoke startLegacyWork(...), providing an appropriate callback. For the result I'd display a panel that polls using an AbstractAjaxTimerBehavior.
What boggles my mind is the following problem:
To keep state Wicket serializes the component tree along with the data, thus the data needs to be wrapped in serializable models (or detachable models).
So to keep the "connection" between the result panel and the WorkFinished callback I'd need some way to create a link between the "we serialize everything" world of Wicket and the "Hey I'm a Java Object and nobody manages my lifetime" world of the legacy interface.
Of course I could store ongoing operations in a kind of global map and use a Wicket detachable model that looks them up by id ... but that feels dirty and I don't assume that's the correct way. (It opens up a whole can of worms regarding lifetime of such things).
Or am I looking at how to do long-running operations from Wicket from a completely wrong angle?
I think the approach with the global map is good. Wicket also uses something similar internally - org.apache.wicket.protocol.http.StoredResponsesMap. This is a special map that keeps the generated responses for the REDIRECT_TO_BUFFER strategy. It has logic to keep entries for at most some pre-configured duration, and it can also have an upper limit on the number of entries.
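To make that concrete, here is a minimal sketch of such a registry; the names (LegacyWorkRegistry, Entry) are made up, and it simply expires entries after a fixed duration, loosely mirroring what StoredResponsesMap does for buffered responses:
import java.time.Duration;
import java.time.Instant;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical global registry for in-flight legacy operations.
public final class LegacyWorkRegistry {
    private static final Duration TTL = Duration.ofMinutes(10);
    private static final ConcurrentMap<String, Entry> ENTRIES = new ConcurrentHashMap<>();

    private record Entry(Object result, Instant created) {}

    // Called when startLegacyWork(...) is kicked off; returns the id the page keeps.
    public static String register() {
        String id = UUID.randomUUID().toString();
        ENTRIES.put(id, new Entry(null, Instant.now()));
        return id;
    }

    // Called from the WorkFinished callback.
    public static void complete(String id, Object result) {
        ENTRIES.computeIfPresent(id, (k, e) -> new Entry(result, e.created()));
    }

    // Called from the polling behavior; null means "not done yet".
    public static Object poll(String id) {
        purgeExpired();
        Entry e = ENTRIES.get(id);
        return e == null ? null : e.result();
    }

    private static void purgeExpired() {
        Instant cutoff = Instant.now().minus(TTL);
        ENTRIES.values().removeIf(e -> e.created().isBefore(cutoff));
    }
}
The page then stores only the String id (trivially serializable), and the AbstractAjaxTimerBehavior calls poll(id) until it returns a result; the TTL puts a bound on the lifetime worries mentioned in the question.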
I understand that the app server takes care of the threading so the developer should only concentrate on the business logic...
but consider an example. A stateless EJB has a member of type CountManager.
@WebService
@Stateless
public class StatelessEJB {
private CountManager countManager;
...
public void incrementCount() {countManager.incrementCount();}
public int getCount(){return countManager.getCount();}
}
And the CountManager
public class CountManager {
public void incrementCount() {
// read count from database
// increase count
// save the new count in database table.
}
public int getCount() {
// returns the count value from database.
}
}
The developer should think about multi-threading here. If you make CountManager an EJB as well, I guess the problem won't go away.
What would be the general guideline for developer to watch out for?
Update:
Changed the code. Assume that the methods of the EJB are exposed as a web service, so we have no control over the order in which clients call them. The transaction attribute is the default. Does this code behave correctly in a multi-threaded scenario?
The fact that EJBs are thread-safe doesn't mean that different method invocations will give you consistent results.
EJB gives you the certainty that every method of your particular EJB instance will be executed by exactly one thread at a time. This doesn't save you from multiple users accessing different instances of your EJB, nor from the danger of inconsistent results.
Your CountManager seems to be a regular Java class, which means that you hold state in a stateless EJB. This is not good, and EJB thread-safety won't protect you from anything in this case. Your object can be accessed through multiple EJB instances at the same time.
Between your client's first method invocation, StatelessEJB.incrementCount() (which starts a transaction - the default TransactionAttribute), and the second method invocation, StatelessEJB.getCount() (which starts a new transaction), many things might happen and the value of the count could change.
If you changed it to be an EJB, I don't think you'd be any safer. If it's an SLSB, then it still can't have any state. If the state is realized not as an EJB field variable but as database-fetched data, then it's definitely better, but still - the transaction is not a real help for you, because your WebService client still executes these two methods separately, therefore landing in two different transactions.
The simple solution would be to:
use the database (no state in the SLSB), which can be synchronized with your EJB transaction,
execute both of these methods within a single transaction (e.g. an incrementAndGet(-) method for the WebService client), as sketched below.
Then you can be fairly sure that the results you get are consistent.
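A minimal sketch of that second point, assuming a JPA-backed counter (the Counter entity and CounterService names are made up for illustration):
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jws.WebService;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;

@Entity
class Counter {            // hypothetical single-row counter table
    @Id long id;
    int count;
}

@WebService
@Stateless
public class CounterService {

    @PersistenceContext
    private EntityManager em;

    // Increment and read happen inside the SAME container-managed
    // transaction; the pessimistic lock keeps concurrent clients from
    // interleaving between the two steps.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public int incrementAndGet() {
        Counter c = em.find(Counter.class, 1L, LockModeType.PESSIMISTIC_WRITE);
        c.count++;
        return c.count;
    }
}
Because the web service client now makes a single call, there is no window between the increment and the read for another transaction to slip into.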
Notice that this is not really a problem of synchronization or multi-threading, but of transactional behavior.
The above code, if run inside an EJB, will take care of race conditions by delegating transaction support to the database. Depending on the isolation level and transaction attributes, the database can lock the underlying tables to ensure that the information remains consistent, even in the face of concurrent access and/or modification.
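To make that delegation to the database concrete: the increment can also be pushed down as a single atomic UPDATE, so the count is never read and rewritten in separate steps (a sketch reusing the hypothetical Counter entity from above):
// The database reads and writes the row in one statement; its row
// locking and isolation level guarantee consistency under concurrency.
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void incrementCount() {
    em.createQuery("UPDATE Counter c SET c.count = c.count + 1 WHERE c.id = 1")
      .executeUpdate();
}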
I am working on a SharePoint application that supports importing multiple documents in a single operation. I also have an ItemAdded event handler that performs some basic maintenance of the item metadata. This event fires for both imported documents and manually created ones. The final piece of the puzzle is a batch operation feature that I implemented to kick off a workflow and update another metadata field.
I am able to cause a COMException 0x81020037 by extracting the file data of an SPListItem. This file is just an InfoPath form/XML document. I am able to modify the XML and successfully push it back into the SPListItem. When I fire off the custom feature immediately afterwards and modify metadata, it occasionally causes the COM error.
The error message basically indicates that the file was modified by another thread. It would seem that the ItemAdded event is still writing the file back to the database while the custom feature is changing metadata. I have tried putting in delays and error catching loops to try to detect that the SPListItem is safe to modify with little success.
Is there a way to tell if another thread has a lock on a document?
Sometimes I see the ItemAdded or ItemUpdated firing twice for a single operation.
You can try to put a breakpoint in the ItemAdded() method to confirm that.
The solution in my case was to single-thread the ItemAdded() method:
private static object myLock = new object();
public override void ItemAdded(SPItemEventProperties properties)
{
    // Wait up to 30 seconds to acquire the lock.
    if (System.Threading.Monitor.TryEnter(myLock, TimeSpan.FromSeconds(30)))
    {
        try
        {
            // do your stuff here
        }
        finally
        {
            System.Threading.Monitor.Exit(myLock); // always release the lock
        }
    }
}
I'll have to look into that and get back to you. The problem on my end seems to be that there is code running in a different class, in a different feature, being controlled by a different thread, all of which are trying to access the same record.
I am trying to avoid using a fixed delay. With any threading issue, there is the pathological possibility that one thread can delay or block beyond what we expect. With deployments on different server hardware with different loads, this is a very real possibility. On the other end of the spectrum, even if I were to go with a delay, I don't want it to be very high, especially not 30 seconds. My client will be importing tens of thousands of documents, and a delay of any significant length will cause the import to take literally all day.