Core Data and CloudKit integration issue when renaming relationship (code 134110) - core-data

I currently have an app using Core Data on the App Store: it lets people record their water and sailing activities (think of it as Strava for sailors). I have not updated the app for 3 years; it still seems to work fine on the latest iOS versions, but I recently planned to improve it.
I am currently working on an update for this app and need to change the data model and schema. I would like to rely on an automatic lightweight migration. I renamed some entities, properties and relationships, but I made sure to put the previous names in the Renaming ID field of the model editor.
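As I understand it, the Renaming ID field in the model editor corresponds to the renamingIdentifier property on the model objects. Here is a purely illustrative sketch of the mapping I set up (my model is actually configured in the editor, not in code):
import CoreData

// Illustrative only: what the editor's Renaming ID field maps to in code.
let sailorEntity = NSEntityDescription()
sailorEntity.name = "Sailor"
sailorEntity.renamingIdentifier = "SKPRCrewMemberMO" // previous entity name

let sailorsRelationship = NSRelationshipDescription()
sailorsRelationship.name = "sailors"
sailorsRelationship.renamingIdentifier = "crewMembers" // previous relationship name
sailorsRelationship.isOptional = true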
I want to take this opportunity to sync the updated schema with CloudKit. I followed the instructions in the Apple Developer documentation to set up the sync, and I made sure to initialize the schema using initializeCloudKitSchema(). When I visit the CloudKit dashboard, I see the correct schema. The container is only in development mode and has not been pushed to production.
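For what it's worth, I believe initializeCloudKitSchema() also accepts options; a small sketch of a dry-run validation I could run during development (assuming the stores are already loaded):
do {
    // Validate that the Core Data model can be mapped to a CloudKit schema
    // and print the generated record types, without exporting anything.
    try container.initializeCloudKitSchema(options: [.dryRun, .printSchema])
} catch {
    print("CloudKit schema validation failed: \(error)")
}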
When I launch the app with a SQLite file generated by the version currently on the App Store, the migration seems to work: the data is still there and correct. I can navigate the app normally, and when I visit the CloudKit dashboard, the data is correctly saved.
But sometimes, the app crashes at launch with the following error:
Fatal error: Unresolved error Error Domain=NSCocoaErrorDomain Code=134110 "An error occurred during persistent store migration."
UserInfo={reason=CloudKit integration forbids renaming 'crewMembers' to 'sailors'. Older devices can't process the new relationships., NSUnderlyingException=CloudKit integration forbids renaming 'crewMembers' to 'sailors'. Older devices can't process the new relationships.}
The entities concerned were renamed, as were the relationships; the relationship is a many-to-many, optional on both sides. This occurs even if I reset the CloudKit development container. I don't really have a clear idea of when it appears (it seems random, sometimes after I update some data, sometimes after I update the Core Data model). Any idea why the app is crashing? I would like, as much as possible, to keep the new naming for my entities and relationships.
SKPRCrewMemberMO renamed to Sailor
SKPRTrackMO renamed to Activity
crewMembers <<--->> tracks renamed to sailors <<--->> activities
Here are some screenshots of the previous and updated data models for the entity causing the migration issue, as well as some code from my Core Data stack initialization and the console error I'm getting.
PS: the app is used by a few hundred people. That's not a lot, but some of them have dozens of recorded activities, and I don't want to break anything or lose or corrupt data. I could launch a new app, but users would lose their progress since it's only saved locally in a shared container (I used an app group because I wanted to share the Core Data store with an Apple Watch extension), and I would lose the user base and everything related to the App Store listing.
private init() {
    container = NSPersistentCloudKitContainer(name: "Skipper")

    guard let description = container.persistentStoreDescriptions.first else {
        fatalError("###\(#function): Failed to retrieve a persistent store description.")
    }
    description.setOption(true as NSNumber, forKey: NSPersistentHistoryTrackingKey)
    description.setOption(true as NSNumber, forKey: NSPersistentStoreRemoteChangeNotificationPostOptionKey)

    let id = "iCloud.com.alepennec.sandbox20201013"
    let options = NSPersistentCloudKitContainerOptions(containerIdentifier: id)
    description.cloudKitContainerOptions = options

    container.loadPersistentStores(completionHandler: { (storeDescription, error) in
        if let error = error as NSError? {
            fatalError("Unresolved error \(error), \(error.userInfo)")
        }
    })

    do {
        try container.initializeCloudKitSchema()
    } catch {
        print("Unable to initialize CloudKit schema: \(error.localizedDescription)")
    }

    container.viewContext.automaticallyMergesChangesFromParent = true
    container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
}
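Since not losing or corrupting data is my main worry, I am also thinking about copying the store file aside before the new model gets a chance to migrate it, so I could roll back if something goes wrong. A rough sketch of what I have in mind (the appGroupURL parameter and file names are placeholders, not code from the app):
import CoreData

// Hypothetical pre-migration backup: copy the SQLite file aside before the
// store is loaded with the new model. The URL and file names are placeholders.
func backUpStoreIfNeeded(appGroupURL: URL) throws {
    let fileManager = FileManager.default
    let storeURL = appGroupURL.appendingPathComponent("Skipper.sqlite")
    let backupURL = appGroupURL.appendingPathComponent("Skipper.backup.sqlite")

    guard fileManager.fileExists(atPath: storeURL.path) else { return }

    // Only back up when the existing store is incompatible with the new model,
    // i.e. when a migration is about to happen on the next load.
    let metadata = try NSPersistentStoreCoordinator.metadataForPersistentStore(
        ofType: NSSQLiteStoreType, at: storeURL, options: nil)
    guard let model = NSManagedObjectModel.mergedModel(from: [Bundle.main]),
          !model.isConfiguration(withName: nil, compatibleWithStoreMetadata: metadata) else { return }

    if fileManager.fileExists(atPath: backupURL.path) {
        try fileManager.removeItem(at: backupURL)
    }
    try fileManager.copyItem(at: storeURL, to: backupURL)
    // Note: a SQLite store in WAL mode also has -wal and -shm sidecar files
    // that would need to be copied as well (omitted here for brevity).
}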

Related

Revit model locked after relinquishing ownership through Revit API with a BIM360 project

We have a process where we check out all elements in the Revit model using worksharing to make sure everything is available when processing the data. After processing the data, it's time to relinquish all the elements again, but here we get an error when doing this on larger projects on BIM360.
The method used to relinquish the elements is this one:
var relinquishOptions = new RelinquishOptions(true);
try
{
    // Relinquish everything
    WorksharingUtils.RelinquishOwnership(doc, relinquishOptions, null);
}
catch (Exception e)
{
    MsgBox.ShowMsg(e.ToString(), "Relinquish ownership failed");
    throw;
}
This triggers this error message on the "sync with central" dialog from BIM360:
After this the model can't be synced with the central model on BIM360. Using the Revit API to sync with central we get this exception:
Autodesk.Revit.Exceptions.CentralModelContentionException: The central model is being accessed by another client.
The model is locked by our user and the only way to "unblock" it is by reuploading the file to BIM360.
This only happens on files on BIM360 and only on larger files. The file is around 350MB.
Are we missing something?
I can't seem to find anything in the Revit API documentation mentioning restrictions on the integration with BIM360 that could explain the issue.
Do any of you know why this is happening?
Thank you in advance!

Problem with JHipster app generation - conflict between two generated apps on the server port

I generated two apps using the jhipster command: one for a JHipster demo called Blog and the other called MyFarm.
In Blog, there were three entities: Blog, Entry and Tag.
In MyFarm there are two entities: Farm and Product.
The first app, Blog, works properly. Then I stop it, open and run the new app MyFarm, and it keeps trying to reach the Blog entities, which it obviously doesn't find, and then I get an error.
To generate the entities I imported a .jh file containing the following:
entity Farm {
    name String required minlength(3),
    details TextBlob required
}
entity Product {
    type String required,
    quality Quality required,
    quantity Integer required,
    date Instant required
}
relationship ManyToOne {
    Farm {user(login)} to User,
    Product{farm(name)} to Farm
}
paginate Product with infinite-scroll
enum Quality {
    MAUVAISE, BONNE, EXCELLENTE
}
The entities were generated properly, but they appear in red in my IDE and the app doesn't try to reach those entities when running.
Does somebody have a clue, please?
The port is configured in the .yo-rc.json file in each project; edit one of them, change the value of the serverPort property, and then regenerate the app by running jhipster.
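For example, the relevant part of .yo-rc.json looks roughly like this (the port value is just an example; pick any free port for the second app):
{
  "generator-jhipster": {
    "baseName": "myFarm",
    "serverPort": 8081
  }
}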

Azure DocumentDB Owner resource does not exist

I am having the same error, Microsoft.Azure.Documents.DocumentClientException: Message: {"Errors":["Owner resource does not exist"]}. This is my scenario: when I deploy my web app to Azure and try to get a document from DocumentDB, it throws this error. The DocumentDB database exists in Azure and contains the document I am looking for.
The weird thing is that from my local machine (running from Visual Studio) this works fine. I'm using the same settings in Azure and locally. Does somebody have an idea about this?
Thanks
"Owner resource does not exist" occurs when you have given a wrong database name.
For example, while reading a document with client.readDocument(..), where the client is a DocumentClient instance, the database name given in docLink is wrong.
This error definitely appears to be related to reading a database/collection/document which does not exist. I got the exact same error for a database that did exist but whose name I entered in lower case; this error appears to happen regardless of your partition key.
The best solution I could come up with for now is to wrap the
var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(database, collection, "documentid"));
call in a try/catch. It's not very elegant, and I would much rather the response came back with some more details, but this is Microsoft for ya.
Something like the below should do the trick.
Model myDoc = null;
try
{
    var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(database, collection, document));
    myDoc = (Model)(dynamic)response.Resource;
}
catch { }
if (myDoc != null)
{
    // do your work here
}
That way you get a better idea of the error, and you can then create the missing resource so you don't get the error anymore.
A few resources I had to go through before coming to this conclusion:
https://github.com/DamianStanger/DocumentDbDemo
Azure DocumentDB Read Document Resource Not Found
I had the same problem. I found that Visual Studio 2017 was publishing using my Release configuration instead of the Test configuration I had selected. In my case the Release configuration had a different Cosmos DB database name that didn't exist, which resulted in the "owner resource does not exist" error when I published to my Azure test server. Super frustrating and a terrible error message.
It may also be caused by an attachment of a document that cannot be found.
This is a common scenario when you move Cosmos DB content using the Azure Cosmos DB Data Migration Tool, which moves all the documents with their complete definitions but unfortunately not the actual attachment content.
This results in a document that states it has an attachment, and even states the attachment link, but at that link no attachment can be found because the tool has not moved it.
Now I wrap my code as follows
try
{
    var attachments = client.CreateAttachmentQuery(attacmentLink, options);
    [...]
}
catch (DocumentClientException ex)
{
    throw new Exception("Cannot retrieve attachment of document", ex);
}
to have a meaningful hint of what is going on.
I ran into this because my dependency injection configuration had two instances of CosmosClient being created for different databases. Thus any code trying to query the first database was running against the second database.
My fix was to create a CosmosClientCollection class with named instances of CosmosClient.

ios8 Core Data iCloud Today Widget not synchronizing

I have been unsuccessful in getting Core Data to work across an app and its Today widget on my device.
let url = NSFileManager.defaultManager().containerURLForSecurityApplicationGroupIdentifier("group.mygroup.name").URLByAppendingPathComponent("fileName.sqllite")
var error: NSError? = nil
let options = [NSMigratePersistentStoresAutomaticallyOption: true,
               NSInferMappingModelAutomaticallyOption: true,
               NSPersistentStoreUbiquitousContentNameKey: "SharedContainerName"]
let s = coordinator?.addPersistentStoreWithType(NSSQLiteStoreType, configuration: nil, URL: url, options: options, error: &error)
I have added a group container that I use for the URL of the stores. I have noticed on the simulators that my persistent coordinator points to the same sqllite file
(URL: file:///Users/xxxx/Library/Developer/CoreSimulator/Devices/6Cxxxxxx/data/Containers/Shared/AppGroup/E65xxxxxx/fileName.sqllite))
This seems to work fine on the simulator: I can store data in my main app and fetch it in the Today widget. When I run the code on my device, the files are at different locations and the databases are not synchronized (no data in the Today widget).
My Main App
(URL: file:///private/var/mobile/Containers/Shared/AppGroup/2CCXXX/CoreDataUbiquitySupport/mobile~F74XXX/SharedContainerName/0E8XXXX/store/fileName.sqllite))
Today Widget
(URL: file:///private/var/mobile/Containers/Shared/AppGroup/2CCXXX/CoreDataUbiquitySupport/mobile~F74XXX/SharedContainerName/2FBYYYY/store/fileName.sqllite))
I am assuming this should be fine, as they should be synchronized by iCloud. The widget runs fine, however it has no data (as if it has not been synchronized). Debugging this has been tricky, as I am unable to get console output while running the Today widget. When I run the widget from Xcode, as opposed to attaching to the running process (the only way I can get any output in the console), I receive the error: core data iCloud: Error: initial sync notification returned an error BRCloudDocsErrorDomain error 12. I receive no notifications. Maybe iCloud and Core Data do not work with a Today widget at this time? The Core Data code in my app and extension is identical, so I do not think I have a bug.
According to this Apple Developer Forum message from an Apple employee:
None of iCloud is accessible from within an Extension in iOS 8.0
and he adds in another message:
Document syncing, I should clarify, or anything which requires file coordination. I'm not sure about KVS or CloudKit
The recommendation is to expose the application state to extensions using some other method (plist, separate files, etc), which is a bit of a bummer.
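For example, a rough sketch of that approach using the app group from the question ("group.mygroup.name") and shared UserDefaults as the "other method" (shown in current Swift syntax; on iOS 8 the equivalent NSUserDefaults(suiteName:) API applies, and the key name is a placeholder):
import Foundation

// Hypothetical: the main app writes a small snapshot of its state into the
// shared app group; the Today widget reads it back instead of relying on
// iCloud/Core Data sync.
let sharedDefaults = UserDefaults(suiteName: "group.mygroup.name")

// In the containing app, after saving to Core Data:
sharedDefaults?.set(["title": "Latest entry", "savedAt": Date()],
                    forKey: "widgetSnapshot")

// In the Today widget:
if let snapshot = sharedDefaults?.dictionary(forKey: "widgetSnapshot") {
    print(snapshot) // render this in the widget's view
}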

Project Server 2010 Event Handler

I am working on an OnPublished event handler that will update one custom field of a project based on a change in another field.
I am getting this error:
Event Handler for event \ProjectPublished\ of type \PS.UpdateProjectStatusChangeDate.EventHandlerUpdateField\ threw an exception: ProjectServerError(s) LastError=CICOCheckedOutToOtherUser Instructions: Pass this into PSClientError constructor to access all error information
This is the code:
// loading project data from server
// Every change on this dataset will be updated on the server!
ProjectDataSet projectDs = projectClient.ReadProject(projectId, projectSvc.DataStoreEnum.WorkingStore);
foreach (projectSvc.ProjectDataSet.ProjectRow row in projectDs.Project)
{
    if (row.PROJ_SESSION_UID != null)
    {
        sessionId = row.PROJ_SESSION_UID;
        break;
    }
}

// send the dataset to the server to update the database
bool validateOnly = false;
Guid jobId = Guid.NewGuid();
projectClient.QueueUpdateProject(jobId, sessionId, projectDs, validateOnly);
Unlike other answers, where the code runs while the project is in a checked-in state, here we are checking the project out and assigning a new SessionID.
But when the event handler fires, the project is already checked out, so how do I get the SessionID? I think that is where the code is breaking.
Logically that makes sense: while the project is checked out, somebody can change it at any time and in any way.
So even if your idea works, your update can be overwritten by the next save done from Project Pro, because Project Pro knows nothing about your manipulations.
I don't know anything about your system, so let me guess that your users mostly work with Project Pro. In that case you can add your event handler to the Application.ProjectBeforePublish event (see the MSDN link) and update the field from Project Pro. But please keep in mind that your users will be asked to save the project before publishing.
If the solution with Project Pro does not work for you, you can flag published projects somehow and, as soon as the project is checked in, check it out, update the field, save, and publish the project again.
