Core Data and CloudKit sync (WWDC 2019) not working in beta 3 - core-data

I am trying to replicate the result of the WWDC 2019 talk on syncing Core Data with CloudKit automatically.
I tried three approaches:
Making a new Master-Detail app and following the steps in the WWDC 2019 talk; in this case no syncing happens.
Downloading the WWDC 2019 sample app; in this case no syncing happens either.
Making a small app with a small Core Data model and a CloudKit container; in this case syncing happens, but only after I restart the app. I suspected it had to do with history management, so I observed the NSPersistentStoreRemoteChange notification, but nothing is ever received.
Appreciate any help.

I also played around with Core Data and iCloud and it works perfectly. I would like to list some important points that may help you go further:
You have to run the app on a real device with an iCloud account. (Update: we can now test iCloud sync on the Simulator, but it will not receive notifications automatically; you have to trigger a sync manually via Debug > Trigger iCloud Sync.)
Make sure you added the Push Notifications and iCloud capabilities to your app, and make sure that you don't have an issue with the iCloud container (in that case, you will see red text in the iCloud section in Xcode).
In order to refresh the view automatically, you need to add this line to your Core Data stack: container.viewContext.automaticallyMergesChangesFromParent = true.
Code:
public lazy var persistentContainer: NSPersistentCloudKitContainer = {
    /*
     The persistent container for the application. This implementation
     creates and returns a container, having loaded the store for the
     application to it. This property is optional since there are legitimate
     error conditions that could cause the creation of the store to fail.
     */
    let container = NSPersistentCloudKitContainer(name: self.modelName)
    container.viewContext.automaticallyMergesChangesFromParent = true
    container.loadPersistentStores(completionHandler: { (storeDescription, error) in
        if let error = error as NSError? {
            // Replace this implementation with code to handle the error appropriately.
            // fatalError() causes the application to generate a crash log and terminate.
            // You should not use this function in a shipping application, although it
            // may be useful during development.
            /*
             Typical reasons for an error here include:
             * The parent directory does not exist, cannot be created, or disallows writing.
             * The persistent store is not accessible, due to permissions or data protection when the device is locked.
             * The device is out of space.
             * The store could not be migrated to the current model version.
             Check the error message to determine what the actual problem was.
             */
            fatalError("Unresolved error \(error), \(error.userInfo)")
        }
    })
    return container
}()
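Note also that Apple's SynchronizingALocalStoreToTheCloud sample project opts in to history tracking and remote-change posting on the store description before loading the stores; without the remote-change option, the .NSPersistentStoreRemoteChange notification is never posted (iOS 13+ / macOS 10.15+). A minimal sketch of those two lines, to be placed before the loadPersistentStores call above:
guard let description = container.persistentStoreDescriptions.first else {
    fatalError("Failed to retrieve a persistent store description.")
}
// Opt in to persistent history tracking and to posting of the
// .NSPersistentStoreRemoteChange notification.
description.setOption(true as NSNumber, forKey: NSPersistentHistoryTrackingKey)
description.setOption(true as NSNumber, forKey: NSPersistentStoreRemoteChangeNotificationPostOptionKey)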
When you add some data, you should normally see console logs beginning with CloudKit: CoreData+CloudKit: ...
Sometimes the data is not synced immediately; in that case, I force-quit and relaunch the app, and then the data syncs.
One time, the data only got synced after a few hours :(

I found that the NSPersistentStoreRemoteChange notification is posted by the NSPersistentStoreCoordinator and not by the NSPersistentCloudKitContainer, so the following code solves the problem:
// Observe Core Data remote change notifications.
NotificationCenter.default.addObserver(
    self, selector: #selector(self.storeRemoteChange(_:)),
    name: .NSPersistentStoreRemoteChange, object: container.persistentStoreCoordinator)

I also ran into the issue of the .NSPersistentStoreRemoteChange notification not being sent.
Code from Apple's example:
// Observe Core Data remote change notifications.
NotificationCenter.default.addObserver(
    self, selector: #selector(type(of: self).storeRemoteChange(_:)),
    name: .NSPersistentStoreRemoteChange, object: container)
The solution for me was to not set the container as the object for the notification, but nil instead. The object is not used anyway, and passing the container prevents the notification from being received:
// Observe Core Data remote change notifications.
NotificationCenter.default.addObserver(
    self, selector: #selector(type(of: self).storeRemoteChange(_:)),
    name: .NSPersistentStoreRemoteChange, object: nil)
Update:
As per this answer: https://stackoverflow.com/a/60142380/3187762
The correct way would be to set container.persistentStoreCoordinator as the object:
// Observe Core Data remote change notifications.
NotificationCenter.default.addObserver(
    self, selector: #selector(type(of: self).storeRemoteChange(_:)),
    name: .NSPersistentStoreRemoteChange, object: container.persistentStoreCoordinator)
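For completeness, the storeRemoteChange(_:) selector referenced above needs a matching handler that processes the persistent history. A minimal sketch, loosely following the pattern from Apple's sample project (the lastHistoryToken property and the merge strategy here are illustrative assumptions, not the sample's exact code):
@objc func storeRemoteChange(_ notification: Notification) {
    // Process persistent history on a background context so the
    // view context is not blocked.
    let context = container.newBackgroundContext()
    context.perform {
        // lastHistoryToken is an assumed stored property holding the last
        // NSPersistentHistoryToken this app has already processed.
        let request = NSPersistentHistoryChangeRequest.fetchHistory(after: self.lastHistoryToken)
        guard let result = try? context.execute(request) as? NSPersistentHistoryResult,
              let transactions = result.result as? [NSPersistentHistoryTransaction],
              !transactions.isEmpty else { return }

        // Merge each remote transaction into the view context on the main queue.
        DispatchQueue.main.async {
            for transaction in transactions {
                self.container.viewContext.mergeChanges(
                    fromContextDidSave: transaction.objectIDNotification())
            }
        }
        self.lastHistoryToken = transactions.last?.token
    }
}
Persisting lastHistoryToken somewhere (e.g. UserDefaults or a file) is what prevents reprocessing the entire history on every launch.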

I had the same problem; the reason was that iCloud Drive must be enabled on your devices. Check it in the Settings of every device.

I understand this answer comes late and is not actually specific to the WWDC 19 SynchronizingALocalStoreToTheCloud sample project the OP refers to. But I had syncing issues in a project that uses Core Data + CloudKit with NSPersistentCloudKitContainer (not upon launch, when it synced fine, but while the app was active but idle, which seems to be case 3 of the original question), and I believe the problems I had, and have now apparently solved, might affect other users reading this question in the future.
My app was built using Xcode 11's Master-Detail template with Core Data + CloudKit from the start, so I had to do very little to get syncing working initially:
Enable the Remote Notifications Background Mode in Signing & Capabilities for my target;
Add the iCloud capability for CloudKit;
Select the container iCloud.com.domain.AppName;
Add viewContext.automaticallyMergesChangesFromParent = true.
Basically, I followed Getting Started With NSPersistentCloudKitContainer by Andrew Bancroft, and this was enough to have the MVP sync between devices (Catalina, iOS 13, iPadOS 13) not only upon launch, but also when the app was running and active (thanks to step 4 above) and another device edited/added/deleted an object. Being the Xcode template, it did not have the additional customisations / advanced behaviours of WWDC 2019's sample project, but it accomplished the goal pretty well and I was satisfied, so I moved on to other parts of this app's development and stopped thinking about sync.
A few days ago, I noticed that the iOS/iPadOS app was now only syncing upon launch, and not while the app was active and idle on screen; on macOS the behaviour was slightly different, because a simple command-tab triggered a sync when reactivating the app, but again, if the Mac app was frontmost, no syncing for changes coming from other devices.
I initially blamed a couple of modifications I did in the meantime:
In order to have the sqlite file accessible from a Share Extension, I moved the container into an app group by subclassing NSPersistentCloudKitContainer (see the sketch after this list);
I changed the capitalisation of the app's name and, since I could not delete the CloudKit database, I created a new container named iCloud.com.domain.AppnameApp (CloudKit is case-insensitive, apparently, and yes, I should really start to care less about such things).
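For reference, the subclassing mentioned in point 1 can be as small as overriding defaultDirectoryURL(); the group identifier and subdirectory below are placeholders, not my actual values:
class SharedCloudKitContainer: NSPersistentCloudKitContainer {
    // Point Core Data at the shared app-group container so the
    // Share Extension can open the same sqlite file.
    override class func defaultDirectoryURL() -> URL {
        // "group.com.domain.AppName" is a placeholder identifier.
        FileManager.default
            .containerURL(forSecurityApplicationGroupIdentifier: "group.com.domain.AppName")!
            .appendingPathComponent("CoreData", isDirectory: true)
    }
}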
While I was pretty sure that I saw syncing work as well as before after each of these changes, having sync (apparently) suddenly break convinced me, for at least a few hours, that one of those deviations from the default path had caused the notifications to stop being received while the app was active, and that the merge would then only happen upon launch, as I was seeing, because the running app was not made aware of changes.
I should mention, because this could help others in my situation, that I was sure notifications were triggered upon Core Data context saves, because CloudKit Dashboard was showing the notifications being sent.
So, I tried a few times clearing Derived Data (one never knows), deleting the apps on all devices and resetting the Development Environment in CloudKit's Dashboard (something I already did periodically during development), but I still had the issue of the notifications not being received.
Finally, I realised that resetting the CloudKit environment and deleting the apps was indeed useful (and I actually rebooted everything just to be safe ;) but I also needed to delete the app data from iCloud (from iCloud's Settings screen on the last iOS device where the app was still installed, after deleting from the others) if I really wanted a clean slate; otherwise, my somewhat messed up database would sync back to the newly installed app.
And indeed, a truly clean slate with a fresh Development Environment, newly installed apps and rebooted devices brought back the notifications being received on the devices even when the apps are frontmost. So, if you feel your setup is correct and have already read enough times that viewContext.automaticallyMergesChangesFromParent = true is the only thing you need, but still can't see changes coming from other devices, don't exclude that something could have been messed up beyond your control (don't get me wrong: I'm 100% sure it must have been something that I did!) and try a fresh start... it might seem obscure, but what isn't with the syncing method we are choosing for our apps?

Related

Azure WebJobs getting initialized randomly

We have WebJobs consisting of several methods in a single Functions.cs file. They have Service Bus triggers on topics/queues, and hence keep listening to the topic/queue for a BrokeredMessage. As soon as a message arrives, we have processing logic that does a lot of stuff. But we find that sometimes all the WebJobs suddenly get reinitialized. I found a few articles saying that WebJobs do get reinitialized and that this is usual.
But is that the only behaviour, and can we prevent them from getting reinitialized? We call brokeredMessage.Complete() as soon as we get the BrokeredMessage, since we do not want it to keep being processed again and again.
Also, we have a few WebJobs in one App Service and a few in another, and we find that all of the WebJobs from both App Services get reinitialized at the same time. Not sure why.
You should design your process to be able to deal with occasional disconnects and failures, since this is a "feature" of applications living in the cloud.
Use a transaction to manage the critical area of your code.
Pseudo/commented code below; a link to the Microsoft documentation is here.
var msg = receiver.Receive();
using (var scope = new TransactionScope())
{
    // Do whatever work is required,
    // starting with computation and business logic
    // and finishing with any persistence or new message generation,
    // giving your application the best chance of success.
    // Keep in mind that all BrokeredMessage operations are enrolled in
    // the transaction. They will all succeed or fail.
    // If you have multiple data stores to update, you can use brokered messages
    // to send new individual messages to do the operation on each store,
    // giving eventual consistency.
    msg.Complete(); // mark the message as done
    scope.Complete(); // declare the transaction done
}

Azure function goes idle when running in Consumption Plan with ServiceBus Queue trigger

I have also asked this question in the MSDN Azure forums, but have not received any guidance as to why my function goes idle.
I have an Azure function running on a Consumption plan that goes idle (i.e. does not respond to new messages on the ServiceBus trigger queue) despite following the instructions outlined in this GitHub issue:
The configuration for the function is the following json:
{
  "ConnectionStrings": {
    "MyConnectionString": "Server=tcp:project.database.windows.net,1433;Database=myDB;User ID=user#project;Password=password;Encrypt=True;Connection Timeout=30;"
  },
  "Values": {
    "serviceBusConnection": "Endpoint=sb://project.servicebus.windows.net/;SharedAccessKeyName=SharedAccessKeyName;SharedAccessKey=KEY_HERE"
  }
}
And the function signature is:
public static void ProcessQueue([ServiceBusTrigger("queueName", AccessRights.Listen, Connection = "serviceBusConnection")] ...)
Based on the discussion in the GitHub issue, I believed that having either a serviceBusConnection entry OR an AzureWebJobsServiceBus entry should be enough to ensure that the central listener triggers the function when a new message is added to the Service Bus queue, but that is proving not to be the case.
Can anyone clarify the difference between how those two settings are used, or notice anything else with the settings I provided that might be causing the function to not properly be triggered after a period of inactivity?
I suggest there are several possible causes for this behavior. I have several Azure subs, and only one of them had issues with Storage/Service Bus-based triggers only firing while the app is not idle. So far I have observed that the actions listed below will prevent triggers from working correctly:
Creating any Storage-based trigger, then deleting the triggering object (for any reason) and re-creating it.
Corrupting an Azure Function's input parameters by deleting/altering associated objects without recompiling the function.
Restarting the Functions app while one of the functions fails to compile/bind to its trigger or input parameter and hangs.
It has also been observed that using the legacy Connection Strings setting for trigger binding will not work.
A clean deploy of an affected Function app will most likely solve the problem if it was caused by any of the actions described above.
EDIT:
It looks like this can also be caused by setting Authorization/Authentication on the Functions app, but I have not yet figured out whether it happens in general or only with a specific auth configuration. I tested on the affected Azure sub by disabling auth entirely: the function still goes idle after 30-40 minutes, but the queue trigger still initiates an execution, though with a delay, as expected. I found an old bug report related to this, but it says the issue was resolved.

Azure webjob - QueueTrigger stops triggering

I am running an Azure WebJobs SDK console application (continuous) with the recommended setup:
public static void ProcessQueueMessage([QueueTrigger("logqueue")] string logMessage, TextWriter logger)
The Azure queue I am running against has ~6000 messages in it, and I am running the WebJob locally as a console application.
The problem I'm having is that the processing randomly stops after processing between zero and ~30 messages. The console stays open, but no more console messages are displayed.
For example, it might just process 2 messages:
Executing: 'Functions.ProcessQueueMessage' - Reason: 'New queue message detected on 'QueueName'.'
Executed: 'Functions.ProcessQueueMessage' (Succeeded)
Executing: 'Functions.ProcessQueueMessage' - Reason: 'New queue message detected on 'QueueName'.'
Executed: 'Functions.ProcessQueueMessage' (Succeeded)
And then, nothing. There doesn't seem to be anything wrong with my internet connection and I can't trace the issues down to any particular messages.
Has anyone else had issues with this SDK?
Update:
I made sure that I was using the right versions of all of the dependencies by removing the NuGet packages and then re-running Install-Package Microsoft.Azure.WebJobs. I am now using WebJobs version 1.1.0, which has pulled in version 4.3 of Azure Storage.
As recommended by Matthew, I have pulled down the source code for Azure WebJobs to determine where the process is freezing up. Once the freeze-up occurred, I paused execution and checked the running threads for what I believe is the culprit, within Microsoft.Azure.WebJobs.Host.CompositeTraceWriter:
protected virtual void InvokeTextWriter(TraceEvent traceEvent)
{
    if (_innerTextWriter != null)
    {
        string message = traceEvent.Message;
        if (!string.IsNullOrEmpty(message) &&
            message.EndsWith("\r\n", StringComparison.OrdinalIgnoreCase))
        {
            // remove any terminating return+line feed, since we're
            // calling WriteLine below
            message = message.Substring(0, message.Length - 2);
        }
        _innerTextWriter.WriteLine(message);
        if (traceEvent.Exception != null)
        {
            _innerTextWriter.WriteLine(traceEvent.Exception.ToDetails());
        }
    }
}
The line it freezes on is line 66 : _innerTextWriter.WriteLine(message);
_innerTextWriter is an instance of System.IO.TextWriter.SyncTextWriter
Is it possible there is some deadlock issue with this class or the way it is being used?
Some notes:
I am running in the debugger, so in this case I believe the textwriter is forwarding to the console internally
I have my batchsize set to 1 via config.Queues.BatchSize = 1;, not sure if that could matter
I'm currently working on setting up an environment on another computer so that I can see if it is reproducible somewhere other than this machine (surface book).
Update
The issue was me not understanding how the new Windows 10 command prompt works. Any time you click on the command window, it goes into "select" mode, which completely pauses execution of the process.
Basically: https://superuser.com/questions/419717/windows-command-prompt-freezing-randomly?newreg=ece53f5584254346be68f85d1fd2f18d
You can tell it is in this state because it will prefix the window title with the word "Select":
You have to press enter or click again to get it going once again.
So, two final comments:
1) What an incredibly confusing and un-intuitive behavior for a command window!
2) I hope some admin will come take pity on the shame I have brought upon myself and my family by deleting this question.
To get rid of this strange behavior, you can disable QuickEdit mode.
Strange. When it is in this stuck state, can you try adding a new queue message to the queue and see if that triggers? Are you sure your function isn't hanging internally? What version of the SDK are you using? You might also try upgrading to v1.1.0 which we just released last week. If there are really a bunch of messages in the queue waiting to be processed, I can't think of anything that would cause this. The queue listener in the SDK should chug along, reading batches of messages in parallel and dispatching them to your function. Have you changed any of the JobHostConfiguration.Queues configuration knobs? You haven't force updated the version of the Azure SDK have you to something higher than the WebJobs SDK supports?
Another option if you can't figure this out might be to clone the SDK, build it and debug it locally. The repo is here. The main queue processing loop is here.

Leap Listener controller stops working after some time in VS2012

We have integrated a Flash game (crazytaxi.swf) inside a Windows Forms application in VS Express 2012 for Windows Desktop. We are controlling the game with gestures (left, right, etc.) using a Leap Motion controller.
What happens is that when we run the project in VS2012, the game starts normally and we can play it using gestures. But after some time the controller stops listening, because its thread exits. We can see that in the output window:
"The thread '' (0x1b50) has exited with code 0 (0x0)." is what we get in the output window.
We are not sure how to overcome this challenge.
You didn't provide enough information, but from what I gather, your Leap thread ran to completion and closed. This can occur if you declare and initialize your listener and controller variables in a function, so the variables go out of scope when the function finishes executing. This causes the garbage collector to dispose both controller and listener at what might seemingly appear to be random (non-reproducible) moments. To fix this issue, create a singleton for the controller and listener; this can boil down to simply defining static controller and listener variables in some class. This way the listener and controller objects will never go out of scope and get disposed by the GC.
"The thread '' (0x1b50) has exited with code 0 (0x0) basically says that:
The thread with the ID 6992 ran and completed the operation successfully.
System Error Codes (0-499)
The question is, does the device stop tracking?
Here are my 0x0's from my Leap Motion app (it's working fine):
The thread 'vshost.NotifyLoad' (0x364) has exited with code 0 (0x0).
The thread '' (0x3348) has exited with code 0 (0x0).
The thread 'vshost.LoadReference' (0x37c8) has exited with code 0 (0x0).
Also, on a side note, because it has nothing to do with the error code - are you removing the listener from the controller, and then disposing of them both when the application is being closed? Not properly disposing of objects will cause problems.
A second note - onExit and onDisconnect are two different things.
onDisconnect(controller:Controller):void
Called when the Controller object disconnects from the Leap Motion software.
onExit(controller:Controller):void
Called when this Listener object is removed from the Controller or the Controller instance is destroyed.
In case somebody is having similar problems, here is the reply I sent over email after having a look at the code:
I've looked at the code; since you only sent me a few files, I had to comment out the references to classes that were not included in the zip.
The code runs fine with 3 different Leap Motion devices once I comment out the missing pieces. What I suggest is:
Update the SDK. I used the .NET 4 DLL and the latest version of the SDK, which today is v1.0.8.7665.
Dispose of the objects once you are done using them: dispose of Frames after use, remove the listener from the Controller, and dispose of the Controller when the device isn't used anymore or the application is being closed.
I noticed some Timers and DispatcherTimers being created, but I couldn't find any references to where they were used. What are these for? DispatcherTimer doesn't belong in a Windows Forms application.
My best guess - and I hate to guess - is that there is a threading problem, OR that the objects aren't being disposed correctly, OR that you are using an SDK version that had bugs.
I have two applications on GitHub - feel free to use the code as you want. There is one for WPF and one for Windows Forms; both need to be updated for the latest SDK, as some things (such as the Screen class) have been deprecated in later versions.
WPF:
https://github.com/IrisClasson/Leap-Motion
Windows Forms:
https://github.com/IrisClasson/LeapMotion_WinForms_Demo_OLD_SDK
Disclaimer: I don't do much, if any, WinForms development
I had the same issue with my WPF application and Leap Motion code, but the code works fine in a console application.
In WPF, when I declared the Leap listener and the controller objects globally, I did not face the error any more and the listener stays active the whole time.
Just declare the listener and controller in the class (globally) from which you call the listener:
public static LeapListener listener;
public static Controller controller;
Now use this listener object: add it to the controller and enable the gesture properties of the controller in a function or a constructor.
listener = new LeapListener();
controller = new Controller();
// The controller must exist before the listener is added to it.
controller.AddListener(listener);
This should solve the issue. If the problem still persists (detection problems), simply add the same listener object to the controller again in the onExit event. The gestures and other properties remain set, as you are not creating a new instance of the controller.

Getting Azure Logs from role process

I've configured the Azure Diagnostics so that the logs get uploaded to a storage table. I'm using Trace.TraceXxx from my code and all works well.
Now I'm trying to add tracing from the Role's OnStart() and OnStop() methods. I know the tracing works, as I see the lines in the Debug window when running in the emulator. But in the cloud deployment, it seems these trace lines never get uploaded to the table. My guess is that it is somehow related to TraceSources, as the only trace lines I have in the table come from the w3wp.exe source... Any hint?
Thanks
Like you said, you can add the trace listener using WaIISHost.exe.config, but besides that you can also add the trace listener in code (you'll need a reference to Microsoft.WindowsAzure.Diagnostics.dll):
public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        var listener = new Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener();
        Trace.Listeners.Add(listener);
        ...
    }
}
Another way of setting up diagnostics is through the configuration file. If you created a VS solution recently, it will automatically create the diagnostics plug-in and configuration for the trace listener. With the config file (diagnostics.wadcfg) there is no code that needs to be written for the different data sources. Here is a link where you can get started and a sample file:
http://msdn.microsoft.com/en-us/library/gg604918.aspx
You cannot include custom performance counters right now, and you need to make sure that you don't try to allocate more than 4GB of buffer to anything (you can leave it at 0), or it tends to fail.
Note the time interval format (e.g. PT1M). That is an ISO 8601 duration serialization format: PTxM is x minutes, while PTxS is x seconds. You need to mark the file as Content with Copy Always in Visual Studio (and place it at the root of the project) so it gets packaged.
And here is a link to the three ways to setup diagnostics
http://msdn.microsoft.com/en-us/library/windowsazure/hh411541.aspx
Ranjith
http://www.opstera.com
