I have an external command that creates about 40 (or even more) generic model family instances in Revit. I want Revit to run the command asynchronously to speed up the process, following this flow:
load command -> run command -> load families -> generate family instances asynchronously -> end command.
I've read some ideas about using a modeless dialog as a workaround, but that is not what I need. How can I do this?
Meiki is completely correct. The Revit API can only be used within a valid Revit API context, and such a context is provided exclusively by Revit events. You can however implement an external event and trigger that from outside to obtain access to a valid Revit API context. This is discussed in detail with many solutions provided by The Building Coder in the topic group on Idling and External Events for Modeless Access and Driving Revit from Outside.
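For instance, here is a minimal sketch of the external event approach (the class, transaction, and event names are mine, not from any official sample):

```csharp
// An external event handler runs in a valid Revit API context when raised.
using Autodesk.Revit.DB;
using Autodesk.Revit.UI;

public class PlaceInstancesHandler : IExternalEventHandler
{
    public void Execute(UIApplication app)
    {
        Document doc = app.ActiveUIDocument.Document;
        using (Transaction t = new Transaction(doc, "Place family instances"))
        {
            t.Start();
            // Load families and create the ~40 generic model instances here.
            t.Commit();
        }
    }

    public string GetName()
    {
        return "Place family instances";
    }
}

// Set up once, in a valid API context (e.g. inside the external command):
//   ExternalEvent placeEvent = ExternalEvent.Create(new PlaceInstancesHandler());
// Later, from a modeless dialog or another thread:
//   placeEvent.Raise();
```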
Another approach might be to make use of the DocumentOpened Event. You could use that to trigger the execution flow you desire.
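A minimal sketch of that, assuming the subscription happens in an external application (the handler body is a placeholder):

```csharp
// Subscribe to DocumentOpened; the event fires in a valid Revit API context.
using Autodesk.Revit.DB.Events;
using Autodesk.Revit.UI;

public class FamilyLoaderApp : IExternalApplication
{
    public Result OnStartup(UIControlledApplication application)
    {
        application.ControlledApplication.DocumentOpened += OnDocumentOpened;
        return Result.Succeeded;
    }

    public Result OnShutdown(UIControlledApplication application)
    {
        application.ControlledApplication.DocumentOpened -= OnDocumentOpened;
        return Result.Succeeded;
    }

    private void OnDocumentOpened(object sender, DocumentOpenedEventArgs e)
    {
        // e.Document is the newly opened document: load families and
        // generate instances here, inside a Transaction.
    }
}
```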
A third but unsupported approach might be to make use of a journal file, as in the IFC Import and Conversion Journal Script.
I would start out reading the numerous solutions listed in the topic group, and probably end up making use of an external event.
Good luck and have fun!
You can't run an external command asynchronously to create (or modify) elements, because of transactions, and remember that Revit doesn't support async methods or approaches. Can you describe exactly what you are trying to do? Maybe there is another approach.
According to the docs:
The Autodesk Revit API supports single threaded access only. This means that your API application must perform all Autodesk Revit API calls in the main thread (which is called by the Autodesk Revit process at various API entry points), and your API application cannot maintain operations in other threads and expect them to be able to make calls to Autodesk Revit at any time.
However, I believe you could create an external API and consume it from your command.
We are building a work order management integration layer on top of the base Maximo, communicating via the provided REST/OSLC API, but we are stuck when it comes to finding all possible statuses a given work order could transition to.
Is there a REST/OSLC API, or some other way to expose it externally (e.g. some kind of one-time config export), that returns the possible status transitions for a given work order?
This should consider all the customizations we've made to Maximo including additional statuses, extra conditions, etc. We are targeting version 7.6.1.
IBM seems to have dropped some things from the new NextGen REST/JSON API documentation. There is almost no mention of the "getlist" action anymore, something I have really enjoyed using for domain controlled fields. This should give you exactly what you are looking for, a list of the possible statuses that a given work order could go into. I was unable to verify this call today, but I remember it working as desired when I last used it (many months ago).
<hostname>/maximo/oslc/os/mxwo/<href_value_of_a_specific_wo>?action=getlist&attribute=status
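For example, issuing that call from C# might look like this (a hedged sketch: the hostname, the credentials, and the work order href are placeholders; substitute the href you got back from a prior query against /oslc/os/mxwo):

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class GetListDemo
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Maximo native authentication; adjust for LDAP/SAML setups.
            string creds = Convert.ToBase64String(
                Encoding.UTF8.GetBytes("maxadmin:password"));
            client.DefaultRequestHeaders.Add("maxauth", creds);

            string url = "https://hostname/maximo/oslc/os/mxwo/" +
                         "HREF_OF_A_SPECIFIC_WO" +          // placeholder
                         "?action=getlist&attribute=status";
            string json = await client.GetStringAsync(url);
            Console.WriteLine(json); // expected: the valid status values
        }
    }
}
```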
The method you're looking for is psdi.mbo.StatefulMbo.getValidStatusList
See details here:
https://developer.ibm.com/assetmanagement/7609-maximo-javadoc/
Now, you want to expose the result through a REST API. You could create an automation script that, given the WONUM, returns the allowed status list. You can leverage the new REST API to achieve that quite easily.
See how you can call an automation script with a REST call here:
https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html#_automation_scripts
Last part: you will need to build the request response from the MboSet returned by getValidStatusList.
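Calling such a script from a client might then look like this (a hedged sketch: the script name WOSTATUSLIST and its wonum parameter are my inventions; check the linked guide for the exact /oslc/script/<scriptname> invocation pattern):

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ScriptDemo
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            string creds = Convert.ToBase64String(
                Encoding.UTF8.GetBytes("maxadmin:password"));
            client.DefaultRequestHeaders.Add("maxauth", creds);

            // Server side, the script would call getValidStatusList on the
            // StatefulMbo and serialize the MboSet result to JSON.
            string json = await client.GetStringAsync(
                "https://hostname/maximo/oslc/script/WOSTATUSLIST?wonum=1000");
            Console.WriteLine(json);
        }
    }
}
```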
Are there best practices for implementing event sourcing gateways? By gateway I mean infrastructure or a service that generates a set of events based on the state returned by some external service.
Even if an application is based on event sourcing, some external, uncontrollable entities can still be present. For example, you want to synchronize the user list from Azure AD, so you query the service, which returns the user list. Then you read the user list from a projection, diff it against the external state, and produce events to cover the difference.
Or your application is an online shop, and you need to import current USD/EUR/bitcoin exchange rates to show prices. A gateway can poll some currency provider and produce events. In the simple case this is very easy, but if the projection state is a more complex structure, a trivial import is not obvious.
Is there perhaps a common approach for this case?
Building integration adapters that use poll-emit is normal and I personally prefer this way of doing integrations in general.
However, this has little to do with event sourcing, since what you actually need to solve your integration problem is to simulate the external system emitting events on its own, so that you can build a reactive system that consumes those events.
When these events come into your system from the adapter, you can do whatever you want with them. Essentially, though, event sourcing assumes that you store your own objects' state in event streams; when an event comes from some external system, it is not your state. You can derive your own system state from external events, but the resulting events will be your own.
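A minimal sketch of such a poll-emit adapter for the Azure AD example above (all types and delegates hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record UserAddedFromDirectory(string UserId);
public record UserRemovedFromDirectory(string UserId);

public class DirectorySyncGateway
{
    private readonly Func<IReadOnlyCollection<string>> _pollExternalUsers; // e.g. Azure AD query
    private readonly Func<IReadOnlyCollection<string>> _readProjection;    // currently known users
    private readonly Action<object> _publish;                              // append to event stream

    public DirectorySyncGateway(
        Func<IReadOnlyCollection<string>> pollExternalUsers,
        Func<IReadOnlyCollection<string>> readProjection,
        Action<object> publish)
    {
        _pollExternalUsers = pollExternalUsers;
        _readProjection = readProjection;
        _publish = publish;
    }

    public void SyncOnce()
    {
        var external = _pollExternalUsers().ToHashSet();
        var known = _readProjection().ToHashSet();

        // Emit one event per difference; these are *our* events derived
        // from external state, not the external system's events.
        foreach (var id in external.Except(known))
            _publish(new UserAddedFromDirectory(id));
        foreach (var id in known.Except(external))
            _publish(new UserRemovedFromDirectory(id));
    }
}
```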
Can anyone explain why and when I should use PublishOnBackgroundThread instead of PublishOnUIThread?
I cannot find any use cases for PublishOnBackgroundThread, and I am not sure which method I should use.
It really depends on the type of the message you're publishing.
If you're using the EventAggregator to surface a message from a low-lying service back to the UI, then PublishOnUIThread makes the most sense, since you'll be updating the UI when handling the message. The same applies when you're using it to communicate between view models.
Conversely, it sometimes gets used by view models to publish events that an underlying service is listening to (rather than the view model depending on that service).
That service may perform some expensive work, which makes sense to happen on a background thread. Personally, I'd have the service itself push that work onto a background thread, but different people want different options.
Ultimately the method was included for completeness.
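To illustrate both directions, here is a sketch assuming Caliburn.Micro 3.x (the message and class names are made up):

```csharp
using Caliburn.Micro;

public class WorkRequested { public string Payload { get; set; } }
public class WorkCompleted { public string Result { get; set; } }

public class ShellViewModel : Screen, IHandle<WorkCompleted>
{
    private readonly IEventAggregator _events;

    public ShellViewModel(IEventAggregator events)
    {
        _events = events;
        _events.Subscribe(this);
    }

    public void StartWork()
    {
        // Handled by a service doing expensive work, so keep it off the UI thread.
        _events.PublishOnBackgroundThread(new WorkRequested { Payload = "data" });
    }

    public void Handle(WorkCompleted message)
    {
        // Safe to touch bound properties: this was published on the UI thread.
        DisplayName = message.Result;
    }
}

public class WorkerService : IHandle<WorkRequested>
{
    private readonly IEventAggregator _events;

    public WorkerService(IEventAggregator events)
    {
        _events = events;
        _events.Subscribe(this);
    }

    public void Handle(WorkRequested message)
    {
        var result = message.Payload.ToUpperInvariant(); // expensive work stand-in
        _events.PublishOnUIThread(new WorkCompleted { Result = result });
    }
}
```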
I have a VoIP application I'd like to implement that requires me to process the audio from a call in real time, during the call.
Currently I'm using Asterisk to handle my calls, and it looks like there's built-in functionality called audiohooks, which is designed to let me access the audio stream and process it from the dialplan.
However, I cannot find any documentation whatsoever on how to actually create my own audio hook, nor any recent examples of how it should be done. Are there resources that show how I could use this?
Thanks
That API is available when you write C/C++ modules for Asterisk. There is no external API.
For examples, you can check MixMonitor, func_volume, app_conference, and other similar applications that have already been developed.
Hint: after the work is done, you have to test for memory leaks and for high/concurrent load. The code must be thread-safe.
Do you do automated testing on a complex workflow system like K2?
We are building a system with extensive integration between SharePoint 2007 and K2. I can't even imagine where to start with automated testing, as the workflow involves multiple users interacting with SharePoint, K2 workflows, and custom web pages.
Has anyone done automated testing on a workflow server like K2? Is it more effort than it's worth?
I'm having a similar problem testing a workflow-heavy MOSS-based application. Workflows in our case are based on Windows Workflow Foundation (WF).
My idea is to mock pretty much everything that you can't control from unit tests: document storage, authentication, user rights and actions, and the SharePoint-specific parts of the workflows (these mocks should themselves be thoroughly tested to mirror the behavior of the real components).
You use inversion of control to make the code choose which component to use at runtime, real or mock, as in the sketch below.
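For example (all names hypothetical):

```csharp
// Workflow code depends on an interface; the IoC container decides at
// runtime whether to supply the real SharePoint-backed component or an
// in-memory test double.
using System.Collections.Generic;

public interface IDocumentStore
{
    void Save(string listName, string documentId, byte[] content);
    byte[] Load(string listName, string documentId);
}

// Production implementation wrapping the real SharePoint object model.
public class SharePointDocumentStore : IDocumentStore
{
    public void Save(string listName, string documentId, byte[] content)
    {
        // SPSite/SPWeb calls would go here.
    }

    public byte[] Load(string listName, string documentId)
    {
        // SPSite/SPWeb calls would go here.
        return new byte[0];
    }
}

// In-memory mock used by the automated tests.
public class InMemoryDocumentStore : IDocumentStore
{
    private readonly Dictionary<string, byte[]> _docs =
        new Dictionary<string, byte[]>();

    public void Save(string listName, string documentId, byte[] content)
    {
        _docs[listName + "/" + documentId] = content;
    }

    public byte[] Load(string listName, string documentId)
    {
        return _docs[listName + "/" + documentId];
    }
}
```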
Then you can write system-wide tests of workflow behavior: setting up your own environment and checking how the workflow engine reacts. These tests are too big to call unit tests, but it is still automated testing.
This approach seems to work in trivial cases, but I still have to prove it is worth using in real-world workflows.
Here's the solution I use. It is a simple wrapper around the runtime that allows executing a single activity, simplifies passing the parameters, blocks the invoking thread until the workflow or activity is done, and translates/rethrows exceptions, if any. Since my workflow only sends or waits for messages through a custom workflow service, I can mock out the service to expect certain messages from the workflow and post certain messages to it, and there I have real unit tests for my WF! The credit for the technique goes to Michael Kennedy.
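I can't reproduce that wrapper here, but a minimal equivalent for WF 3.x would look roughly like this (a sketch under those assumptions, not the original code):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Workflow.Runtime;

public static class BlockingWorkflowRunner
{
    // Runs a workflow type to completion, blocking the calling (test) thread,
    // and rethrows any exception that terminated the instance.
    public static Dictionary<string, object> Run(
        Type workflowType, Dictionary<string, object> inputs)
    {
        using (var runtime = new WorkflowRuntime())
        {
            var done = new ManualResetEvent(false);
            Dictionary<string, object> outputs = null;
            Exception error = null;

            runtime.WorkflowCompleted += (s, e) =>
            {
                outputs = e.OutputParameters;
                done.Set();
            };
            runtime.WorkflowTerminated += (s, e) =>
            {
                error = e.Exception;
                done.Set();
            };

            WorkflowInstance instance = runtime.CreateWorkflow(workflowType, inputs);
            instance.Start();
            done.WaitOne();

            if (error != null)
                throw error; // surface the failure on the test thread
            return outputs;
        }
    }
}
```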
If you are going to do unit testing, Typemock Isolator is the only tool that can currently mock SharePoint objects.
And by the way, Richard Fennell is working on a workflow mocking solution here.
We've just today written an application that monitors our K2 worklist, picks up certain tasks from it, fills in some data, and submits the tasks for completion. This is allowing us to perform automated testing, find regressions, and run through many different paths of the workflow in a fraction of the time it would take people to do it. I'd imagine a similar program could be written to pretend to be SharePoint.
As for the unit testing of the workflow items themselves, we have a DLL referenced from K2 which contains all of our line rule and processing logic. We don't have any code in the K2 workflows themselves; it is all referenced from these DLLs. This allows us to easily write unit tests against them to cover all of the individual line rules.
I've done automated integration testing on K2 workflows using the K2ROM API (probably SourceCode.Workflow.Client if you're using K2 blackpearl).
Basically you start a process on a test server with a known folio (I generate a GUID), then use the management API to delete it afterwards. I wrote helper methods like AssertAtClientActivity (basically calls ProvideWorkItem with criteria).
Use the IsSynchronous parameter to StartProcessInstance, WorklistItem.Finish, etc. so that relevant method calls will not return until the process instance has reached a stable state.
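Roughly, such a test looks like this. This is a hedged sketch built around the calls mentioned above (SourceCode.Workflow.Client on K2 blackpearl); treat it as pseudocode and verify the exact signatures against your K2 version, and note the server and process names are placeholders:

```csharp
using System;
using SourceCode.Workflow.Client;

class K2IntegrationTest
{
    static void Main()
    {
        // Known folio so the instance can be found and deleted afterwards.
        string folio = Guid.NewGuid().ToString();

        Connection connection = new Connection();
        connection.Open("k2-test-server");
        try
        {
            ProcessInstance instance =
                connection.CreateProcessInstance(@"MyProject\MyProcess");
            instance.Folio = folio;
            // Second argument is IsSynchronous: don't return until the
            // process instance has reached a stable state.
            connection.StartProcessInstance(instance, true);

            // AssertAtClientActivity-style check: look for a work item
            // belonging to our folio.
            WorklistCriteria criteria = new WorklistCriteria();
            criteria.AddFilterField(WCField.ProcessFolio, WCCompare.Equal, folio);
            Worklist worklist = connection.OpenWorklist(criteria);

            bool found = false;
            foreach (WorklistItem item in worklist)
            {
                found = true; // item.Finish() would complete it synchronously
            }
            if (!found)
                throw new Exception("Expected a work item at the client activity.");
        }
        finally
        {
            connection.Close();
            // Clean up: delete the test instance via the management API.
        }
    }
}
```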
Expect tests to be slow and to occasionally fail. These are not unit tests.
If you want to write unit tests against other systems, you'll probably want to wrap the K2 API.
Consider looking at Windows Workflow 4 and the new workflow features in SharePoint 2010. You may not need K2.