Revit model locked after relinquishing ownership through Revit API with a BIM360 project - revit-api

We have a process that checks out all elements in the Revit model through worksharing to make sure everything is available while processing the data. After processing, we relinquish all the elements again, but on larger BIM360 projects this step fails with an error.
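The checkout step is roughly the equivalent of the following (a simplified sketch; the actual element filtering may differ):
// Simplified sketch of the checkout step; the element filter here is illustrative.
var elementIds = new FilteredElementCollector(doc)
    .WhereElementIsNotElementType()
    .ToElementIds();
WorksharingUtils.CheckoutElements(doc, elementIds);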
The method used to relinquish the elements is this one:
var relinquishOptions = new RelinquishOptions(true);
try
{
    // Relinquish everything
    WorksharingUtils.RelinquishOwnership(doc, relinquishOptions, null);
}
catch (Exception e)
{
    MsgBox.ShowMsg(e.ToString(), "Relinquish ownership failed");
    throw;
}
This triggers this error message in the "Sync with Central" dialog from BIM360:
After this the model can no longer be synced with the central model on BIM360. Syncing with central through the Revit API, we get this exception:
Autodesk.Revit.Exceptions.CentralModelContentionException: The central model is being accessed by another client.
The model is locked by our user and the only way to "unblock" it is by reuploading the file to BIM360.
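For reference, the synchronization call itself is roughly the following (a simplified sketch; the option values are illustrative, and the relinquish could alternatively be folded into the sync options instead of the separate RelinquishOwnership call above):
var transactOptions = new TransactWithCentralOptions();
var syncOptions = new SynchronizeWithCentralOptions();
syncOptions.SetRelinquishOptions(new RelinquishOptions(true));
syncOptions.SaveLocalBefore = true;
syncOptions.SaveLocalAfter = true;
doc.SynchronizeWithCentral(transactOptions, syncOptions);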
This only happens with files on BIM360, and only with larger files; the file in question is around 350 MB.
Are we missing something?
I can't find anything in the Revit API documentation mentioning restrictions on the BIM360 integration that could explain the issue.
Do any of you know why this is happening?
Thank you in advance!

Related

Core Data and CloudKit integration issue when renaming relationship (code 134110)

I currently have an app using Core Data in the App Store: the app lets people record their water and sailing activities (think of it as Strava for sailors). I have not updated the app for three years and it still seems to work fine on the latest iOS versions, but I recently decided to improve it.
I am currently working on an update for this app and need to change the data model and schema. I would like an automatic lightweight migration. I renamed some entities, properties and relationships, but I made sure to put the previous identifiers in the Renaming ID field in the model editor.
I want to take advantage of the opportunity to sync the updated schema to CloudKit. I followed the instructions in the Apple Developer documentation to set up the sync, and made sure to initialize the schema using initializeCloudKitSchema(). When I visit the dashboard, I see the correct schema. The container is only in development mode, not pushed to production.
When I launch the app with a sqlite file generated by the version currently in the App Store, the migration seems to work: the data is still there and correct. I can navigate the app normally, and when I visit the CloudKit dashboard the data is correctly saved.
But sometimes, the app crashes at launch with the following error:
Fatal error: Unresolved error Error Domain=NSCocoaErrorDomain Code=134110 "An error occurred during persistent store migration."
UserInfo={reason=CloudKit integration forbids renaming 'crewMembers' to 'sailors'. Older devices can't process the new relationships., NSUnderlyingException=CloudKit integration forbids renaming 'crewMembers' to 'sailors'. Older devices can't process the new relationships.}
The entities concerned were renamed, as were the relationships; the relationship is many-to-many and optional on both sides. This occurs even if I reset the CloudKit development container. I don't have a clear idea of when it appears (it seems random, after I update some data or after I update the Core Data model). Any idea why the app is crashing? I would like to keep the new naming for my entities and relationships as much as possible.
SKPRCrewMemberMO renamed to Sailor
SKPRTrackMO renamed to Activity
crewMembers <<--->> tracks renamed to sailors <<--->> activities
Here are some screenshots of the previous and updated data models for the entity at the origin of the migration issue, as well as some code for my Core Data stack initialization and the console error I'm getting.
PS: the app is used by a few hundred people. That's not a lot, but some of them have dozens of recorded activities and I don't want to break anything or lose or corrupt data. I could launch a new app, but users would lose their progress, as it's only saved locally in a shared container (an app group was used because I wanted to share the Core Data store with an Apple Watch extension), and I would lose the user base and everything tied to the App Store listing.
private init() {
    container = NSPersistentCloudKitContainer(name: "Skipper")
    guard let description = container.persistentStoreDescriptions.first else {
        fatalError("###\(#function): Failed to retrieve a persistent store description.")
    }
    description.setOption(true as NSNumber, forKey: NSPersistentHistoryTrackingKey)
    description.setOption(true as NSNumber, forKey: NSPersistentStoreRemoteChangeNotificationPostOptionKey)
    let id = "iCloud.com.alepennec.sandbox20201013"
    let options = NSPersistentCloudKitContainerOptions(containerIdentifier: id)
    description.cloudKitContainerOptions = options
    container.loadPersistentStores(completionHandler: { (storeDescription, error) in
        if let error = error as NSError? {
            fatalError("Unresolved error \(error), \(error.userInfo)")
        }
    })
    do {
        try container.initializeCloudKitSchema()
    } catch {
        print("Unable to initialize CloudKit schema: \(error.localizedDescription)")
    }
    container.viewContext.automaticallyMergesChangesFromParent = true
    container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
}

Files in server root not loading properly in Azure

I'm working on an app that uses the file system in the server's root directory instead of a database. It's basically a note application that allows me to save notes. Each note is a serialized object of the Note class, stored at a path like \Data\Notes\MyUsername\Title.txt.
When I'm testing this on localhost through IIS Express, everything works fine and I can easily step through it there.
However, once I publish the app to Azure, the folder structure is still there (I made a test controller that uses Directory.GetFiles() and Directory.GetDirectories() to simulate folder browsing, so I'm sure the files are there), but the file simply doesn't get loaded.
Loading script that's being called:
public T Load<T>(string filePath) where T : new()
{
    StreamReader reader = null;
    try
    {
        reader = new StreamReader(filePath);
        var RawDB = reader.ReadToEnd();
        return JsonConvert.DeserializeObject<T>(RawDB);
    }
    catch
    {
        return default(T);
    }
    finally
    {
        if (reader != null)
            reader.Dispose();
    }
}
Since I can't normally debug the app on Azure, I tried to dump as much info as I can through ViewData. Even there everything looks okay and the paths match, but the deserialized object is still null. This only happens when opening an existing note WITHOUT creating a new one first (more on that later).
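To get more detail than a silent default(T) while debugging this, one option (just a sketch, not what is deployed) is a variant of Load<T> that lets the real exception surface; writing ex.ToString() into ViewData from the existing catch block would serve the same purpose:
public T LoadOrThrow<T>(string filePath) where T : new()
{
    // Same as Load<T>, but lets the real exception surface (missing file, access denied,
    // malformed JSON, ...) instead of silently returning default(T).
    using (var reader = new StreamReader(filePath))
    {
        var rawDb = reader.ReadToEnd();
        return JsonConvert.DeserializeObject<T>(rawDb);
    }
}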
Additionally, like I said, those new notes get saved in the folder structure, and there's a note sidebar on the left that allows users to switch between notes. The note browser is nothing more than a list collected with a .GetFiles() of that folder.
On Azure, this works normally and if I were to delete one manually it'd be removed from the sidebar as well.
Now here's the kicker. On localhost, adding a note adds it to the sidebar and I can switch between them normally.
Adding a note on Azure makes all views display only that new note regardless of which note I open, and the new note does NOT get stored in the structure (I don't know where it ended up at all!), even though the path is defined normally at that point and it should save just like it does on localhost.
var model = new ViewNoteModel()
{
    Note = Load<Note>($@"{NotePath}\{Title}.txt"), // Works on localhost, fails on Azure on many levels. Title is a URL param.
    MyNotes = GetMyNotes() // Works fine, reads the right directory both locally and on Azure
};
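I'm also not sure whether the relative NotePath resolves to the same physical folder on Azure App Service as it does under IIS Express; anchoring it to the site root would presumably look something like this (an untested sketch, and deriving the username folder from User.Identity.Name is just an example):
// Untested sketch: build the note path from the site root rather than a relative path.
var notesRoot = Server.MapPath("~/Data/Notes");  // inside an MVC controller
var notePath = Path.Combine(notesRoot, User.Identity.Name, $"{Title}.txt");
var note = Load<Note>(notePath);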
To summarize:
Everything works fine on localhost; the important part doesn't work on Azure.
If no new note is created and an existing note is opened, the correct note (based on the URL param) gets loaded on localhost; on Azure it breaks and loads a default Note object (not null, just the default constructor data, since that's required by JsonConvert).
If a new note is created, on localhost you'll see it and you can still open all the other notes; on Azure you will see only the new note, regardless of which note is picked.
It's really strange and I have no idea what could cause this. I thought it had something to do with Azure handling requests differently, so maybe the controller pushes the view before the model is completely initialized, but that doesn't make sense since there's nothing async here.
However, the fact that it loads a note that doesn't exist on the server is even more absurd, and I have no explanation for that.
Additionally, this issue is not tied to a session: I logged in from my phone and it showed the phantom note there right away as well.
P.S. Before you say anything about storage, please note this: our university grants us a very limited Azure subscription, a simple lowest-tier App Service and a 5 DTU SQL server, and 99% of the rest is locked out of our subscription. This is why I'm storing stuff on the server, not because I believe it's the smart thing to do.

Azure DocumentDB Owner resource does not exist

I'm having the same error, Microsoft.Azure.Documents.DocumentClientException: Message: {"Errors":["Owner resource does not exist"]}, and this is my scenario: when I deployed my web app to Azure and try to get a document from DocumentDB, it throws this error. The database exists in Azure and contains the document I'm looking for.
The weird thing is that from my local machine (running from VS) this works fine, using the same settings in Azure and locally. Does anybody have an idea about this?
Thanks
Owner resource does not exist
occurs when you have given a wrong database name.
For example, while reading a document with client.readDocument(..), where client is a DocumentClient instance, the database name given in the docLink is wrong.
This error definitely appears to be related to reading a database/collection/document that does not exist. I got the exact same error for a database that did exist but whose name I had typed in lower case; the error appears to happen regardless of your partition key.
The best solution I could come up with for now is to wrap the
var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(database, collection, "documentid"));
call in a try catch. Not very elegant, and I would much rather the response came back with some more details, but this is Microsoft for ya.
Something like the below should do the trick.
Model myDoc = null;
try
{
    var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(database, collection, document));
    myDoc = (Model)(dynamic)response.Resource;
}
catch { }
if (myDoc != null)
{
    // do your work here
}
That is, get a better idea of the error, then create the missing resource so you don't get the error anymore.
A few resources I had to go through before coming to this conclusion:
https://github.com/DamianStanger/DocumentDbDemo
Azure DocumentDB Read Document Resource Not Found
I had the same problem. I found that Visual Studio 2017 is publishing using my Release configuration instead of the Test configuration I've selected. In my case the Release configuration had a different CosmosDB database name that doesn't exist, which resulted in the "owner resource does not exist" error when I published to my Azure test server. Super frustrating and a terrible error message.
It may also be caused by an attachment on a document that cannot be found.
This is a common scenario when you move Cosmos DB contents using the Azure Cosmos DB Data Migration Tool, which moves all the documents with their complete definitions but, unfortunately, not the actual attachment content.
This results in a document that states it has an attachment, and even states the attachment link, but no attachment can be found at that link because the tool has not moved it.
Now I wrap my code as follows
try
{
    var attachments = client.CreateAttachmentQuery(attacmentLink, options);
    [...]
}
catch (DocumentClientException ex)
{
    throw new Exception("Cannot retrieve attachment of document", ex);
}
to have a meaningful hint of what is going on.
I ran into this because my dependency injection configuration had two instances of CosmosClient being created for different databases, so any code trying to query the first database was actually running against the second one.
My fix was to create a CosmosClientCollection class with named instances of CosmosClient (a rough sketch of the idea is below).
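A minimal sketch of that idea (the property names, connection-string variables and DI registration below are made up for illustration):
using Microsoft.Azure.Cosmos;

public class CosmosClientCollection
{
    public CosmosClient CustomersClient { get; }
    public CosmosClient ReportingClient { get; }

    public CosmosClientCollection(CosmosClient customersClient, CosmosClient reportingClient)
    {
        CustomersClient = customersClient;
        ReportingClient = reportingClient;
    }
}

// Registration (ASP.NET Core DI), so each consumer asks explicitly for the client it needs:
services.AddSingleton(sp => new CosmosClientCollection(
    new CosmosClient(customersDbConnectionString),
    new CosmosClient(reportingDbConnectionString)));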

Uploading photos using Grails Services

I would like to ask: what would be the most suitable scope for my photo upload service in Grails? I created this PhotoService in my Grails 2.3.4 web app; all it does is take request.getFile("myfile") and perform the necessary steps to save the file on the hard drive whenever a user wants to upload an image. To illustrate what it looks like, here is a skeleton of these classes.
class PhotoPageController {
    def photoService

    def upload() {
        ...
        photoService.upload(request.getFile("myfile"))
        ...
    }
}

class PhotoService {
    static scope = "request"

    def upload(def myFile) {
        ...
        // I do a bunch of tasks to save the photo
        ...
    }
}
The code above isn't the exact code, I just wanted to show the flow. But my question is:
Question:
I couldn't find an exact definition of the different Grails scopes; they each have a one-liner explanation, but I couldn't figure out whether request scope means one bean is injected for every request to the controller, or only for each request that reaches the upload action of the controller.
Thoughts:
Basically, since many users might upload at the same time, it's not a good idea to use singleton scope, so my options would be prototype or request, I guess. So which one of them works well here, and which one only gets created when the PhotoService is actually accessed?
I'm trying to minimize the number of services that are injected into the application context and stay alive for as long as the web app does; basically I want the service instance to die or get garbage collected at some point during the web app's lifetime rather than hang around in memory while there is no use for it. I was thinking about making it session scoped, so that when the user's session is terminated the service is cleaned up too, but in some cases a user might never upload a photo and the service would get created for no reason.
P.S.: If I move the "def photoService" inside upload(), does that make it get injected only when the request to upload is invoked? I assume that might throw an exception, because there would be a delay until Spring injects the service and, until then, the reference to photoService would be null.
I figured out that singleton scope would be fine, since I'm not maintaining state for each request/user. Only if the service is supposed to maintain state should we go ahead and use prototype or another suitable scope. Using prototype is safer if you think the singleton might cause unexpected behavior, but that is left to testing.

Project Server 2010 Event Handler

I am working on an OnPublished event handler that will update one custom field of a project based on a change in another field.
I am getting an error:
Event Handler for event "ProjectPublished" of type "PS.UpdateProjectStatusChangeDate.EventHandlerUpdateField" threw an exception: ProjectServerError(s) LastError=CICOCheckedOutToOtherUser Instructions: Pass this into PSClientError constructor to access all error information
This is the code
// Loading project data from the server.
// Every change to this dataset will be updated on the server!
ProjectDataSet projectDs = projectClient.ReadProject(projectId, projectSvc.DataStoreEnum.WorkingStore);
foreach (projectSvc.ProjectDataSet.ProjectRow row in projectDs.Project)
{
    if (row.PROJ_SESSION_UID != null)
    {
        sessionId = row.PROJ_SESSION_UID;
        break;
    }
}

// Send the dataset to the server to update the database.
bool validateOnly = false;
Guid jobId = Guid.NewGuid();
projectClient.QueueUpdateProject(jobId, sessionId, projectDs, validateOnly);
Unlike other answers where the code runs while the project is in a checked-in state, we are checking the project out and assigning a new session ID.
But when the event handler fires, the project is already checked out, so how do I get the session ID? I think that is where the code is breaking.
Logically that makes sense: while the project is checked out, somebody can change it at any time and in any way.
So even if your idea works, your update can be overwritten by the next save done from Project Pro, because Project Pro knows nothing about your manipulations.
I don't know anything about your system, so let me guess that your users mostly work with Project Pro. In that case you can add an event handler for the Application.ProjectBeforePublish event (documented on MSDN) and update the field from Project Pro. But please keep in mind that your users will be asked to save the project before publishing.
If the solution with Project Pro does not work for you, you can flag published projects somehow and, as soon as the project is checked in, check it out, update the field, save, and publish the project again (a rough sketch of that flow is below).
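A rough sketch of that flow against the PSI Project service (the queue-waiting helper is hypothetical and error handling is omitted):
// Check out, update, check in and publish once the project is back in a checked-in state.
Guid sessionUid = Guid.NewGuid();
projectClient.CheckOutProject(projectId, sessionUid, "Update custom field");

ProjectDataSet projectDs = projectClient.ReadProject(projectId, projectSvc.DataStoreEnum.WorkingStore);
// ... modify the custom field values in projectDs here ...

Guid updateJob = Guid.NewGuid();
projectClient.QueueUpdateProject(updateJob, sessionUid, projectDs, false);
// WaitForQueueJob(updateJob);   // hypothetical helper that polls the queue until the job completes

Guid checkInJob = Guid.NewGuid();
projectClient.QueueCheckInProject(checkInJob, projectId, false, sessionUid, "Update custom field");
// WaitForQueueJob(checkInJob);

Guid publishJob = Guid.NewGuid();
projectClient.QueuePublish(publishJob, projectId, false, null);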
