I've got a service call which then saves a lot of data after returning:
[MagicalRecord saveWithBlockAndWait:^(NSManagedObjectContext *localContext) {
    for (NSDictionary *dictionary in result) {
        // create managed object, set parameters
    }
}];
Now if the user logs out during this for-loop, I would like to cancel the save - how is this achieved?
You can't use saveWithBlockAndWait for this, because it will still save when the block returns even if you exit the loop early. You could potentially reset the context before you exit, but you would need to consider carefully what side effects that could cause.
So, you want to run a block whose contents you control, so that you can check a flag while looping and exit the block without saving if the flag indicates a cancel.
Also, with that many items to save, you should batch the save operation.
So, use MR_saveToPersistentStoreAndWait to save in batches, but check a cancel flag first and return from the block without saving if it's set:
NSManagedObjectContext *localContext = [NSManagedObjectContext MR_contextForCurrentThread];

[localContext performBlock:^{
    NSUInteger i = 0;
    for (NSDictionary *dictionary in result) {
        if (cancelled) {
            [localContext reset]; // discard any unsaved objects
            return;
        }
        // create managed object, set parameters
        if (++i % 100 == 0) {
            [localContext MR_saveToPersistentStoreAndWait];
        }
    }
    [localContext MR_saveToPersistentStoreAndWait]; // save the final partial batch
}];
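If it helps to see the shape of the pattern outside Core Data, here is a minimal language-neutral sketch of the same batch-and-cancel loop; `importWithCancel`, `isCancelled`, and `save` are stand-ins invented for illustration:

```javascript
// Process items one by one, persisting every batchSize items; if the
// cancel flag is raised, bail out and abandon any unsaved work.
function importWithCancel(items, batchSize, isCancelled, save) {
  let processed = 0;
  for (const item of items) {
    if (isCancelled()) return false;   // exit without the final save
    processed++;                       // "create managed object" stand-in
    if (processed % batchSize === 0) save(processed);
  }
  if (processed % batchSize !== 0) save(processed); // final partial batch
  return true;
}
```

The batching keeps memory bounded and means a cancel only loses the current, unsaved batch rather than the whole import.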
I have a Document/Transaction (FormGrid) structure on a custom screen. I need to validate the values at the Transaction level when the Hold box is unchecked in the header. Fundamentally, my challenge seems to come from the order of operations of the event handlers (FieldVerifying -> FieldUpdating -> FieldUpdated -> RowUpdating -> RowUpdated) combined with the order in which the views are processed (it seems to be the primary view before the others).
What has me completely confused right now is that if I leave the field in the transaction (grid) section of my form, CommitChanges fires, and everything is ok. However, if I set my field (OrderQty) to 0 and then immediately go uncheck Hold in the header without "leaving" the OrderQty field first, the OrderQty value is not committed until AFTER the Hold checkbox is processed. That means that I cannot validate that OrderQty is greater than 0. Literally, I cannot see my OrderQty value in the Cache or any form of the view because the hold box in the header is being processed before the OrderQty in the grid.
I have tried simple means to do the tests, and this is a snippet of something a bit more complex to try to grab the data. Caches[].Updated holds the new values AFTER the Hold checkbox is processed, not before. AddWithoutDuplicates is a method that simply makes sure I don't already have the record in the list (it allows looking at the updated cache values rather than the old values not yet committed).
List<SSRQLine> list = new List<SSRQLine>();
foreach (SSRQLine line in Caches[typeof(SSRQLine)].Updated)
{
    AddWithoutDuplicates(list, line);
}
foreach (SSRQLine line in Caches[typeof(SSRQLine)].Inserted)
{
    AddWithoutDuplicates(list, line);
}
foreach (SSRQLine line in Lines.Select())
{
    AddWithoutDuplicates(list, line);
}
foreach (SSRQLine line in list)
    ...
Is there a way to validate the grid data from the Hold checkbox reliably during entry? Or must I handle the validation in a RowPersisting event later? Any other suggestions on how to validate the user's entry in the grid from Hold_FieldVerifying?
The parent DAC is SSRQRequisition, and the child DAC is SSRQLine.
As the goal was to monitor taking the request off of hold, the original effort was attached to the header DAC's Hold field. The problem is that it required validating child DAC values in a one-to-many relationship in which the child DAC had not yet processed its field events. To warn the user without preventing the request from being saved, create the following event handler:
#region SSRQLine_OrderQty_FieldVerifying
protected virtual void _(Events.FieldVerifying<SSRQLine.orderQty> e)
{
    SSRQLine row = (SSRQLine)e.Row;
    if ((decimal?)e.NewValue == 0)
    {
        e.Cache.RaiseExceptionHandling<SSRQLine.orderQty>(e.Row, row.OrderQty,
            new PXSetPropertyException(Messages.InvalidQuantity, PXErrorLevel.Warning));
    }
}
#endregion
To re-validate the entire request before allowing the user to save it upon taking it off of hold, perform the validation in the header DAC's RowPersisting event handler. Note that validation should not be performed if the request is being deleted, so the operation is tested as well. Additionally, the test is only needed when the user is taking the request off of hold, so there is no need to validate when placing it back on hold.
#region SSRQRequisition_RowPersisting
protected void _(Events.RowPersisting<SSRQRequisition> e)
{
    SSRQRequisition row = e.Row;
    if (e.Operation != PXDBOperation.Delete && row?.Hold == false)
    {
        ValidateConsignment();
    }
}
#endregion
The ValidateConsignment method was created to keep the code readable; it is called from the SSRQRequisition RowPersisting event handler. By raising exception handling as an Error this time and throwing an exception, the user is alerted that the condition which previously produced only a warning now prevents saving the record unless it is on hold.
#region ValidateConsignment
protected virtual void ValidateConsignment()
{
    int lineCounter = 0;
    SSRQRequisition req = Requisitions.Current;
    List<SSRQLine> list = new List<SSRQLine>();
    foreach (SSRQLine line in Caches[typeof(SSRQLine)].Updated)
    {
        AddWithoutDuplicates(list, line);
    }
    foreach (SSRQLine line in Caches[typeof(SSRQLine)].Inserted)
    {
        AddWithoutDuplicates(list, line);
    }
    foreach (SSRQLine line in Lines.Select())
    {
        AddWithoutDuplicates(list, line);
    }
    foreach (SSRQLine line in list)
    {
        lineCounter++;
        if (line.SiteID == null)
        {
            throw new PXSetPropertyException(Messages.NoWhse, PXErrorLevel.Error);
        }
        if (line.OrderQty == null || line.OrderQty == 0)
        {
            this.Lines.Cache.RaiseExceptionHandling<SSRQLine.orderQty>(line, line.OrderQty,
                new PXSetPropertyException(Messages.InvalidQuantity, PXErrorLevel.Error));
            throw new PXSetPropertyException(Messages.InvalidQuantity, PXErrorLevel.Error);
        }
    }
    if (lineCounter == 0)
    {
        // raise the error on the header's Hold field via the header cache
        this.Requisitions.Cache.RaiseExceptionHandling<SSRQRequisition.hold>(req, req.Hold,
            new PXSetPropertyException(Messages.NoLines, PXErrorLevel.Error));
        throw new PXSetPropertyException(Messages.NoLines, PXErrorLevel.Error);
    }
}
#endregion
The ValidateConsignment method currently cycles through the Inserted and Updated copies of the cache, but I may be able to remove these since the validation no longer runs before the updates are applied to the cache. As it is now called from RowPersisting, all field and row event handlers for inserting and updating records in the cache should already have been applied. However, I'll test further before removing them. Initial testing confirms that the code provided achieves the goal. To provide the desired end-user experience, the code simply needed to be spread across three different events rather than crammed into the event on the Hold checkbox that was originally intended to initiate the checks. The behavior ends up slightly different than originally intended, but the result is more user friendly and achieves the same goal.
I am creating an unpartitioned change feed reader that I want to resume, e.g. poll for new changes every X seconds. The checkpoint variable below holds the continuation returned by the last response.
private string checkpoint;

private async Task ReadEvents()
{
    FeedResponse<dynamic> feedResponse;
    do
    {
        feedResponse = await client.ReadDocumentFeedAsync(commitsLink, new FeedOptions
        {
            MaxItemCount = this.subscriptionOptions.MaxItemCount,
            RequestContinuation = checkpoint
        });
        if (feedResponse.ResponseContinuation != null)
        {
            checkpoint = feedResponse.ResponseContinuation;
        }
        // Code to process docs goes here...
    } while (feedResponse.ResponseContinuation != null);
}
Note the "if" block around the checkpoint assignment. This is needed because, without it, the checkpoint would eventually be set to null, which restarts the polling cycle: a null request continuation pulls the first set of documents in the change feed again.
However, the downside is that each polling loop replays the previous set of documents rather than just any additional changes. Is there anything I can do to optimize this further, or is this a limitation of the change feed API?
To read the change feed, you must use CreateDocumentChangeFeedQuery (which never resets ResponseContinuation) instead of ReadDocumentFeed (which sets it to null when there are no more results).
See https://learn.microsoft.com/en-us/azure/documentdb/documentdb-change-feed#working-with-the-rest-api-and-sdk for a sample.
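The replay described in the question can be simulated with a toy in-memory feed; `makeFeed` and its numeric token format are invented for this sketch and merely mimic the ReadDocumentFeed behaviour of returning a null continuation on the last page:

```javascript
// A toy paged "feed": the token is the start offset, and the
// continuation goes null once the last page has been served.
function makeFeed(docs, pageSize) {
  return function read(token) {
    const start = token === null ? 0 : Number(token);
    const page = docs.slice(start, start + pageSize);
    const next = start + pageSize < docs.length ? String(start + pageSize) : null;
    return { docs: page, continuation: next };
  };
}

// Keeping only the last non-null token, as the question's loop does,
// leaves the checkpoint pointing at the final page's request, so the
// next polling cycle replays that page instead of resuming after it.
const read = makeFeed(["a", "b", "c", "d"], 2);
let checkpoint = null;
let resp;
do {
  resp = read(checkpoint);
  if (resp.continuation !== null) {
    checkpoint = resp.continuation;
  }
} while (resp.continuation !== null);

const replay = read(checkpoint); // replays ["c", "d"]
```

A change feed query, by contrast, returns an advanced continuation even when there are no new results, so the checkpoint always moves forward and nothing is replayed.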
Here's my problem: I'm doing background work, where I parse some JSON and write some objects into my Realm, and in the main thread I try to update the UI (reloading the table view, which is backed by an array of objects). But when I reload the UI, my table view doesn't update, as if my Realm hadn't been updated. I have to reload my view to see the updates. Here's my code:
if (Realm().objects(Objects).filter("...").count > 0)
{
    var results = Realm().objects(Objects) // I get the existing objects but it's empty
    tableView.reloadData()
}
request(.GET, url).responseJSON() {
    (request, response, data, error) in
    let priority = DISPATCH_QUEUE_PRIORITY_DEFAULT
    dispatch_async(dispatch_get_global_queue(priority, 0)) {
        // Parsing my JSON
        Realm().write {
            Realm().add(object)
        }
        dispatch_sync(dispatch_get_main_queue()) {
            // Updating the UI
            if (Realm().objects(Objects).filter("...").count > 0)
            {
                results = Realm().objects(Objects) // I get the existing objects but it's empty
                tableView.reloadData()
            }
        }
    }
}
I must be doing something wrong with my threads, but I can't find what. Does anyone know what's wrong?
Thank you for your answer!
A workflow like this makes more sense to me for your case:
let priority = DISPATCH_QUEUE_PRIORITY_DEFAULT
dispatch_async(dispatch_get_global_queue(priority, 0)) {
    // Parsing my JSON
    Realm().write {
        Realm().add(object)
        dispatch_sync(dispatch_get_main_queue()) {
            // Updating the UI
            if (Realm().objects(Objects).filter("...").count > 0)
            {
                results = Realm().objects(Objects)
                tableView.reloadData()
            }
        }
    }
}
NOTE: you have a timing problem in your original workflow: the UI update may run before the write block has executed, which is why your UI looks out of date. The approach above keeps the tasks in sync, because the UI update is only dispatched after the write has happened.
You are getting some new objects and storing them in "results".
How is tableView.reloadData() supposed to access that variable? You must update something that your table view's data source actually reads.
PS: Every dispatch_sync() is a potential deadlock, and you are using one that is completely unnecessary. Avoid dispatch_sync unless you have a very, very good reason to use it.
I have a child window derived from CFormView. Under a certain condition in the OnCreate() function, I want to close this window.
I tried two options:
int CFilterWindow::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CFormView::OnCreate(lpCreateStruct) == -1)
        return -1;

    // Trial 1: destroy the window but report success
    if (!IsInitialized())
    {
        DestroyWindow();
        return 0;
    }

    // Trial 2: report creation failure
    if (!IsInitialized())
    {
        return -1;
    }

    return 0;
}
In both scenarios, the window is closed but my system returns "Failed to create empty document."
How do I avoid this message?
This is completely normal behavior.
The document, frame, and view are created in one go. First the document is created, then the frame, and then the inner view. If one of these operations fails, all the others are rolled back and fail too.
So in the MDI case, OnFileNew calls OpenDocumentFile on your document template.
This function creates the new CDocument, followed by a new frame window, and the frame window creates the view. That last step fails because of your code.
Your error message comes from CMultiDocTemplate::OpenDocumentFile, because CreateNewFrame fails.
Let MFC create your window, and destroy the view in OnInitialUpdate instead. This should work without the message.
What is the best way to deal with document locking in xPages? Currently we use the standard soft locking and it seems to work fairly well in the Notes client.
In xPages I considered using the "Allow Document Locking" feature, but I am worried that people would close the browser without using a close or save button, and then the lock would never be cleared.
Is there a way to clear the locks when the user has closed his session? I am seeing no such event.
Or is there an easier way to have document locking?
I realize I can clear the locks using an agent, but when should it run? Sometime at night, I would think, when I can be fairly certain the locks are no longer really active.
Here is code I'm using:
/* DOCUMENT LOCKING */
/*
use the global object "documentLocking" with:
.lock(doc) -> locks a document
.unlock(doc) -> unlocks a document
.isLocked(doc) -> returns true/false
.lockedBy(doc) -> returns name of lock holder
.lockedDT(doc) -> returns datetime stamp of lock
*/
function ynDocumentLocking() {
    /*
    a lock is an entry in the application scope
    with key = "$ynlock_"+UNID
    containing an array with
    (0) = username of lock holder
    (1) = timestamp of lock
    */
    var lockMaxAge = 60 * 120; // in seconds, default 120 min

    this.getUNID = function(v) {
        if (!v) return null;
        if (typeof v === "string") return v;
        // a NotesXspDocument wraps the back-end NotesDocument
        if (v.getDocument) return v.getDocument().getUniversalID();
        return v.getUniversalID();
    }

    /* puts a lock into the application scope */
    this.lock = function(doc:NotesDocument) {
        var a = new Array(2);
        a[0] = @UserName();
        a[1] = @Now();
        applicationScope.put("$ynlock_" + this.getUNID(doc), a);
        // print("SET LOCK $ynlock_" + doc.getUniversalID() + " / " + a[0] + " / " + a[1]);
    }

    /* removes a lock from the application scope */
    this.unlock = function(doc:NotesDocument) {
        applicationScope.put("$ynlock_" + this.getUNID(doc), null);
        // print("REMOVED LOCK for $ynlock_" + doc.getUniversalID());
    }

    this.isLocked = function(doc:NotesDocument) {
        try {
            var v = applicationScope.get("$ynlock_" + this.getUNID(doc));
            if (!v) {
                // no lock found
                return false;
            }
            // if the lock holder is the current user, treat as not locked
            if (v[0] == @UserName()) {
                return false;
            }
            var dLock:NotesDateTime = session.createDateTime(v[1]);
            var dNow:NotesDateTime = session.createDateTime(@Now());
            // timeDifference is in seconds; a lock older than maxAge is treated as expired
            if (dNow.timeDifference(dLock) > lockMaxAge) {
                return false;
            }
            return true;
        } catch (e) {
            print("ynDocumentLocking.isLocked: " + e);
            return false;
        }
    }

    this.lockedBy = function(doc:NotesDocument) {
        try {
            var v = applicationScope.get("$ynlock_" + this.getUNID(doc));
            if (!v) return "";
            return v[0];
        } catch (e) {
            print("ynDocumentLocking.lockedBy: " + e);
            return "";
        }
    }

    this.lockedDT = function(doc:NotesDocument) {
        try {
            var v = applicationScope.get("$ynlock_" + this.getUNID(doc));
            if (!v) return "";
            return v[1];
        } catch (e) {
            print("ynDocumentLocking.lockedDT: " + e);
            return "";
        }
    }
}
var documentLocking = new ynDocumentLocking();
You could take a page from the way webDAV works. There, a servlet manages a lock list of locked documents. The locks automatically expire after 10 minutes, and they can be renewed or terminated through calls. So when you edit a document you would request a lock, then kick off a CSJS timer that calls the relocking function every 8 minutes (so you have some margin for error), and the postSave calls the unlock (unless you stay in edit mode).
If a user closes the browser, the document is automatically unlocked after 10 minutes. Since you are free to implement the locking function however you like, you can capture the user and location and use that information in the "lock failed" display (you could even push that further and let the original author know about it, or offer some retry option).
It isn't simple to implement, but once implemented it is simple to use.
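The expiring lock list described above can be sketched like this; `LockList`, `ttlMs`, and the explicit `now` parameter are invented for the sketch (a real servlet would just read the clock):

```javascript
// A lock list where locks die on their own after a TTL unless renewed.
class LockList {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.locks = new Map(); // unid -> { user, expires }
  }

  // Grant or refresh a lock; fails if another user holds a live lock.
  lock(unid, user, now) {
    const entry = this.locks.get(unid);
    if (entry && entry.expires > now && entry.user !== user) return false;
    this.locks.set(unid, { user, expires: now + this.ttlMs });
    return true;
  }

  // Renewing follows the same rule: only the holder can extend the lock.
  renew(unid, user, now) {
    return this.lock(unid, user, now);
  }

  // Only the holder may release the lock explicitly.
  unlock(unid, user) {
    const entry = this.locks.get(unid);
    if (entry && entry.user === user) this.locks.delete(unid);
  }

  isLocked(unid, now) {
    const entry = this.locks.get(unid);
    return !!entry && entry.expires > now;
  }
}
```

With a 10-minute TTL and a CSJS timer renewing every 8 minutes, a closed browser simply stops renewing and the lock expires on its own; no session event or cleanup agent is needed.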
ApplicationScope may be a good place to capture "locked" documents. After all, for applicationScope to expire, all users' sessions have to have expired, so anyone with the page open will not be able to save anyway.
Maybe capture UNID, user and time when someone edits a doc. Clear the value when the document is saved. Bear in mind that the user might close the browser etc. I've been discussing this approach internally and if we end up building this I would look to add it to OpenNTF. But we're unlikely to get onto it within the next month.
I prefer a solution similar to Mr. Withers' answer. The main issue is how to deal with the unwanted and dreaded back button. It is easy to lock a document when it is opened, but there are many ways to close the XPage; the user is not limited to the navigation you provide and can also, as he stated, close the browser completely, use the back button, etc. So, the best way I can think of is to create a few Java objects to be used in the application and session scopes.
The first step is to create a "LockedDocument" class. As we know, documents are not serializable, and we do not want to store the document itself in this object anyway; we want to store its UNID and the time it was locked. We do it this way so that we can clear the object after a given time (say thirty minutes to an hour). This class should also implement the Comparable interface so that the collection can be sorted by this time, oldest documents first and newest last.
Next we create another class that holds a list or a map of these LockedDocuments. This class must also have a thread (implement Runnable) that checks all documents every five minutes or so (I have not tested this yet, but it should work). Any document that was locked more than the predefined thirty to sixty minutes ago is unlocked (deleted from the list). It is important that the list is sorted as described above and that the loop breaks as soon as it reaches a lock younger than the cutoff, to avoid unwanted processing.
The next step is to keep a user-specific list in the sessionScope. This list holds the LockedDocuments that the current user has. It is set when the user makes a document editable, and it is checked before the document is made editable, to prevent the same user from opening one document in multiple tabs. The lock is checked again in onquerysave(). Once a main page is opened, the lock is automatically released. The onquerysave() must also verify that the document's UNID is in the sessionScope list, or that the document is new, before allowing a save.
Quick recap:
Any UNID saved in the applicationScope LockedDocumentList would not be editable by anyone unless it exists in their own sessionScope list.
It is possible to warn a user that their lockedTime is approaching and reset the timer.
The class containing a list with the locked documents must be a singleton
There are probably ways to improve this answer, and I am sure I am missing something. It is just a thought.
There might be a better way to handle this, but it is the best I found.
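The early-break cleanup pass over a time-sorted list can be sketched like this; `sweep`, the `lockedAt` field, and the millisecond units are assumptions (the real version would live inside the singleton's Runnable):

```javascript
// Remove expired locks from a list sorted oldest-first.
function sweep(sortedLocks, maxAgeMs, now) {
  let removed = 0;
  // Because the list is sorted by lock time, we can stop at the first
  // lock that is still fresh instead of scanning the whole list.
  while (sortedLocks.length > 0 && now - sortedLocks[0].lockedAt > maxAgeMs) {
    sortedLocks.shift();
    removed++;
  }
  return removed;
}
```

This is why the Comparable ordering matters: the sweep touches only the expired prefix of the list, so it stays cheap no matter how many live locks there are.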
You can remove the Domino lock in window.onunload event:
window.onunload = function(){
dojo.xhrGet(...
}
No need to reinvent the wheel.