In the latest requirements document for the Execution Management module of Adaptive AUTOSAR,
I am confused about Function Groups and Applications.
The document describes Function Groups and Applications as follows
(from https://www.autosar.org/fileadmin/user_upload/standards/adaptive/19-11/AUTOSAR_SWS_ExecutionManagement.pdf):
Function Group
A Function Group is a set of coherent Processes, which need to be controlled consistently. Depending on the state of the Function Group, Processes are started or terminated. Processes can belong to more than one Function Group State (but at exactly one Function Group).
"MachineState" is a Function Group with a predefined name, which is mainly used to control Machine lifecycle and Processes of platform level Applications. Other Function Groups are sort of general purpose tools used (for example) to control Processes of user level Applications.
Application
An implementation that resolves a set of coherent functional requirements and is the result of functional development. An Application is the unit of delivery for Machine specific configuration and integration.
These concepts are really confusing to me.
Based on my understanding, I categorized applications this way:
system applications
user applications - multiple applications in function groups
user applications - other applications
But I'm not sure whether this is right or not. Please help me fully understand
Function Groups and Applications so that I can categorize applications the right way.
In the end, it is all about functions in the vehicle, which are distributed over multiple ECUs, including supporting functions and ECUs in between.
In order to save battery power, not all ECUs need to be running all the time. But it may be that some ECUs implement multiple functions.
e.g.:
SRR (Short Range Radar, well nowadays also more than 150m range!) ECUs in the rear do BSD (BlindSpotDetection), LCA (LaneChangeAssist), RCTA (RearCrossTrafficAssist Alert/Brake), Freespace Detection, Collision Avoidance, OccupantSafeExit, Object Detection Output for 360° Vision and Fusion e.g. for AutomatedDriving ...
CentralECU, like a 360° Vision/Fusion ECU for Automated Driving, which has several sensoric ECUs connected, like a front LRR (Long Range Radar), front cameras, and front and rear SRRs. If this ECU is also the gateway for the sensoric ECUs, and the rear SRRs are used for OSE, then the front ECUs can be shut down, and the CentralECU can at least shut down several high performance cores/processors, except the ones needed for routing between the vehicle and the rear SRRs. After the driver/passengers leave the car, these can then shut down shortly afterwards too.
For the above scenarios, other gateways might also be involved. The SRRs and the CentralECU also need to be aware that other ECUs are off and no longer providing data like vehicle speed, yaw rate, steering angle etc., and that those messages are therefore no longer transmitted on the networks. Rx/Tx deadline monitoring should be disabled for functions which are turned off. Functions that are shut down in these SRRs or the CentralECU should likewise stop transmitting their own functional messages.
That is why one Application can contain multiple functions grouped into one or more Function Groups: the ECU could be involved in several of them.
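To make this grouping concrete, here is a purely illustrative Python sketch; it is not the ARXML manifest format or the ara::exec API, and all process and state names are invented. It only shows the relationship "Processes belong to Function Group states; a state change starts/stops Processes":

```python
# Purely illustrative: hypothetical names, NOT the ARXML manifest format or
# the ara::exec API. It shows only which Processes run in which state of
# each Function Group, and what a state change implies.
FUNCTION_GROUPS = {
    "MachineState": {            # the predefined Function Group
        "Startup":  {"platform_logger", "diag_manager"},
        "Driving":  {"platform_logger", "diag_manager", "comm_stack"},
        "Shutdown": {"platform_logger"},
    },
    "RearSensing": {             # a user-defined Function Group
        "Off":      set(),
        "Parked":   {"ose_monitor"},                     # OccupantSafeExit only
        "Driving":  {"bsd", "lca", "rcta", "ose_monitor"},
    },
}

def state_change(group: str, old: str, new: str) -> tuple[set, set]:
    """Return (processes_to_start, processes_to_stop) for a state switch."""
    before = FUNCTION_GROUPS[group][old]
    after = FUNCTION_GROUPS[group][new]
    return after - before, before - after

start, stop = state_change("RearSensing", "Driving", "Parked")
print("start:", start, "stop:", stop)  # stops bsd/lca/rcta, keeps ose_monitor
```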
AUTOSAR Adaptive is at least relevant for the CentralECU; the SRR ECUs are usually low-price ECUs which just run AUTOSAR Classic, where similar handling exists through PartialNetworking, VirtualFunctionClusters and NetworkManagement.
I am currently working on a web-based MMORPG game and would like to set up an auto-scaling strategy based on Docker and DigitalOcean droplets.
However, I am wondering how I could manage to do so:
My game server would have to be splittable across different Docker containers, BUT every game server instance should act as if it were one gigantic game server. That means that every modification happening in one (e.g. a character moving) should also be mirrored in every other game server.
I am trying to get this to work (at least conceptually) but can't find a way to synchronize all my instances properly. Should I use a master that only broadcasts events, or is there an alternative?
I was wondering the same thing about my MySQL database: since every game server would have to read/write from/to the DB, how would I make it scale properly as the game gets bigger and bigger? The best solution I could think of was to keep the database on a single, very powerful server.
I understand that this could be easy if the game servers didn't have to "share" their state, but the sharing is intended precisely so that I can scale quickly in case of a sudden spike of activity.
(There will be different "global" game servers like A, B, C... but each of those global game servers should be, behind the scenes, composed of 1-X Docker containers running the "real" game server, so that the "global" game server is only a concept.)
The problem you state is too generic and it's difficult to give a concrete response. However, let me be reckless and give you some general-purpose scaling advice:
Remove counters from databases. Instead of primary keys that are auto-incremented IDs, try to assign random UUIDs.
Replace data that must be validated against a central point with data that is self-contained. For example, for authentication, instead of having the user credentials in a DB, use JSON Web Tokens that can be verified by any host.
Use techniques such as consistent hashing to balance the load without the need for load balancers (see the sketch after this list). Of course, use hashing functions that distribute well, to avoid/minimize collisions.
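As a concrete illustration of the third point, here is a minimal consistent-hashing sketch; the node names are made up, and a production ring would also use virtual nodes and replication:

```python
# Minimal consistent-hashing sketch (hypothetical node names; a production
# ring would add virtual nodes and replication).
import bisect
import hashlib

def _hash(key: str) -> int:
    # md5 distributes well enough for illustration
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes):
        self._ring = sorted((_hash(n), n) for n in nodes)
        self._keys = [h for h, _ in self._ring]

    def node_for(self, key: str) -> str:
        """Pick the first node clockwise from the key's hash position."""
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["game-1", "game-2", "game-3"])
print(ring.node_for("player:42"))  # the same player always routes to the same node
```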
The advice above is basically about changing the design to migrate from stateful to stateless in as many aspects as you can. If you do need to provide stateful parts, try to guess which entities will have the best chance of sharing stateful data and allocate them on the same (or a nearby) server. For example, if there are cities in your game, try to allocate the users that are in the same city on the same server, since they are more likely to interact with each other (and share stateful data) than users that are in different cities.
Of course, if a city is too big and very crowded, you will probably need to partition it across more servers to avoid overloading any one of them.
Your question is too broad and a general scaling problem, as others have mentioned. It would have been helpful if you had stated your system requirements more clearly.
If it has to be real-time, then you could choose Redis as your main DB, but then you'd need slaves (for replication) and you would not be able to scale automatically as you go*, since Redis doesn't support that. I assume that's not a good option when you're working with games (sudden spikes are probable).
*there seem to be some managed solutions; you'd need to check them out
If it can be near real-time, using Apache Kafka can prove to be useful.
There's also a highly scalable DB which has everything you need called CockroachDB (I'm a contributor, yay!) but you need to run tests to see if it meets your latency requirements.
Overall, going with a very powerful server is a bad choice, since there's a ceiling and it'd cost you more to scale vertically.
There's a great benefit in scaling horizontally such an application. I'll try to write down some ideas.
Option 1 (stateful):
When planning stateful applications you need to take care of synchronising the state (via Pub/Sub, network broadcasting or something else) and be aware that every synchronisation takes time to occur (when not blocking each operation). If this is OK for you, let's go ahead.
Let's say you have 80k operations per second on your whole cluster. That means that every process needs to synchronise 80k state changes per second. This will be your bottleneck. Handling 80k changes per second is quite a big challenge for a Node.js application (because it's single-threaded and therefore blocking).
In the end you'll need to work out precisely the maximum number of changes you want to be able to sync and perform some tests with different programming languages. The synchronisation overhead needs to be added to the general workload of the application. It could be beneficial to use a multithreaded language like C, Java/Scala or Go.
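To make the synchronisation idea concrete, here is a minimal sketch that assumes Redis Pub/Sub as the transport (using the redis-py package; the channel name and message shape are invented for illustration):

```python
# Minimal state-sync sketch via Redis Pub/Sub (assumes a local Redis and the
# redis-py package; channel/field names are made up for illustration).
import json
import redis

r = redis.Redis()
local_state = {}  # this instance's in-memory copy of the shared state

def publish_change(entity_id: str, data: dict) -> None:
    """Apply a change locally and broadcast it to all other instances."""
    local_state[entity_id] = data
    r.publish("state-changes", json.dumps({"id": entity_id, "data": data}))

def listen_for_changes() -> None:
    """Run in the background; mirrors changes made by other instances.
    (A real system would also filter out its own echoes.)"""
    pubsub = r.pubsub()
    pubsub.subscribe("state-changes")
    for msg in pubsub.listen():
        if msg["type"] == "message":
            change = json.loads(msg["data"])
            local_state[change["id"]] = change["data"]
```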
Option 2 (stateful with routing):
In some cases it's feasible to implement a different kind of scaling.
When, for example, your application can be broken down into areas of a map, you could start with one app replica which holds the full map, and as you scale up, split the map between replicas in a proportional way.
You'll need to implement some routing between the application servers, for example: to change state in city A of world B => call server xyz. This could be done automatically, but downscaling will be a challenge.
This solution requires more care and knowledge about the application and is not as fault-tolerant as option 1, but it could scale endlessly.
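A hypothetical sketch of such routing, with made-up region and server names (the actual transport to the owning server is omitted):

```python
# Hypothetical area-based routing sketch: map regions are assigned to
# servers, and state changes are forwarded to whichever server owns the region.
REGION_OWNERS = {
    ("world-B", "city-A"): "server-xyz",
    ("world-B", "city-C"): "server-abc",
}

def route_state_change(world: str, city: str, change: dict) -> str:
    """Return the server that must apply this change (transport omitted)."""
    owner = REGION_OWNERS.get((world, city))
    if owner is None:
        raise LookupError(f"no server owns {city} in {world}; rebalance needed")
    # here you would RPC/queue the change to `owner` instead of printing
    print(f"forwarding {change} to {owner}")
    return owner

route_state_change("world-B", "city-A", {"player": 42, "x": 10, "y": 7})
```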
Option 3 (stateless):
Move the state to some other application and solve the problem elsewhere (e.g. Redis, etcd, ...).
In other words: if I create a messaging layout which uses a rather large number of messaging entities (like several thousand) instead of a smaller number, is there something in Azure Service Bus that gets irritated by that and makes it perform less than ideally, or that generates significantly different costs? Let us assume that the number of messages will remain roughly the same in both scenarios.
So to make it clear: I am not asking whether a messaging layout with many entities is sound from the application's point of view, but rather whether there is something in Azure that performs badly in such situations. If there are advantages to it (perhaps Azure can scale it more easily), that would also be interesting.
I am aware of the 10,000-entity limit in a single Service Bus namespace.
It is more a matter of the programming and architecture of the solution, I think - for example, we saw problems with ACS (the authentication mechanism): SB started to throttle the client sometimes when there were many requests. Take a look at the guidance about SB high availability - there are some issues listed that should be considered when you have a lot of load.
And you always have other options that can be more suitable for high-load scenarios - for example, Azure Event Hubs, a more lightweight queue mechanism intended to be the service for extremely high amounts of messages.
I have a Node application which persists data to a MongoDB database. Most of this data is in hand, such as the data for the User collection. However, the application also has the concept of a Website collection, and for this collection, data must first be downloaded from somewhere before it is saved.
I am wondering how I should separate the above concerns in my application. At the service layer, I have things like User and Website. They provide basic CRUD operations. At completely the opposite end of the spectrum, there is a user interface whereby users can input a website URL. Somewhere between this UI and the application persisting the data to MongoDB (the service layer), the application must make a request to this URL to gather some data. Once the data has been fetched, the Website service will persist it.
Potentially, there could be thousands of these URLs entered at once, and I do not want to bring down the Node process that handles the web server due to load issues. Therefore I think it would be a good idea to abstract the work out to a different process and use some sort of messaging bus to tie the application together.
It seems that you've decomposed the system correctly - and have created that separation at the persistence "service" layer - but I'd take this separation a bit further by moving toward a distributed system architecture (i.e. SOA / micro-services).
The initial step in building a distributed system is identifying each of the functions necessary to meet the overall business goal of the application and mapping them to service endpoints. Each loosely coupled service endpoint then serves a small, isolated job/function and acts as an abstraction for that business goal.
By continuing the separation of responsibilities all the way to the service endpoint, you create small independent boundaries for scalability, throughput, fault tolerance, security, deployment, etc.
For example - RESTfully speaking - this might mean service endpoints for both Users (e.g. /users/{userid}) and Websites (e.g. /websites/{websiteid|url})... and perhaps an additional resource to maintain the relationship/link between the two (e.g. /users/{userid}/userwebsites : {websiteid:1234, url:blah.com}).
This separation would mean you can handle the website-processing responsibility independently, which would have a number of benefits beyond just handling the different load characteristics.
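For the original URL-fetching concern specifically, a hedged sketch of that queue-based decoupling could look like this (assuming Redis as the messaging bus; every name here is illustrative, and persistence is stubbed out):

```python
# Sketch of queue-based decoupling (assumes Redis as the bus; names are
# illustrative). The web process only enqueues; a separate worker process
# fetches the URL and persists it, so load spikes never block the web server.
import json
import urllib.request
import redis

bus = redis.Redis()

def enqueue_website(url: str) -> None:
    """Called from the web request handler; returns immediately."""
    bus.rpush("website-jobs", json.dumps({"url": url}))

def worker_loop() -> None:
    """Runs in a separate process; drains jobs one at a time."""
    while True:
        _, raw = bus.blpop("website-jobs")   # blocks until a job arrives
        job = json.loads(raw)
        body = urllib.request.urlopen(job["url"], timeout=10).read()
        save_website(job["url"], body)       # hypothetical persistence call

def save_website(url: str, body: bytes) -> None:
    print(f"would persist {len(body)} bytes for {url}")
```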
I have some questions regarding architecting enterprise applications using azure cloud services.
Back Story
We have a system made up of about a dozen WCF Windows Services on a SQL backend. We currently have about 10 clients but expect that to grow to potentially a hundred, with perhaps a hundredfold increase in the throughput demands on the system. The current system is poorly engineered and is simply not capable of scaling, so now appears to be the appropriate juncture to re-engineer on the Azure platform.
Process Flow
Let me briefly describe a simplified set of the services and the process flow, and then ask some questions I have regarding utilizing Azure cloud services to build the new system.
Service A is logged on to an external system and downloads data continuously
Service B is logged on to a second external system and downloads data continuously
There can only ever be one logged-in instance each of Services A and B.
Both A and B hand off their data to Service C, which reconciles the data from the two external sources.
Validated and reconciled data is then passed from C to Service D, which performs some accounting functions and then passes the resulting data to Services E and F.
Service E is continually logged in to an external system and uploads data to it.
Service F generates reports and publishes them to clients via FTP, etc.
The system is actually far more complex than this, but the above illustrates the processes involved. The system runs 24 hours a day, 6 days a week. Queues will be used to buffer messaging between all the services.
We could just build this system using Azure persistent VMs and utilise the Service Bus, queues, etc., but that would tie us into a vertical scaling strategy. How could we utilise cloud services to implement it, given the following questions?
Questions
Given that Services A, B and E are permanently logged in to external systems, there can only ever be one active instance of each. If we implement these as single-instance worker roles, there is the issue of downtime and patching (which is unacceptable). If we created two instances of each, is there a standard way to implement active-passive load balancing with worker roles on Azure, or would we have to build our own load balancer? Is there another solution to this problem that I haven't thought of?
Services C and D are good candidates for scaling using multiple worker role instances. However, each instance would have to process related data. For example, we could have 4 instances, each processing data for 5 individual clients. How can we get messages to be processed in groups (client-centric) by each instance? Also, how would we redistribute load from one instance to the remaining instances when patching takes place, etc.? For example, if instance 1, which processes data for 5 clients, goes down for OS patching, the data for its clients would then have to be processed by the remaining instances until it came back up again. Similarly, how could we redistribute the load if we decide to spin up additional worker roles?
Any insights or suggestions you are able to offer would be greatly appreciated.
Mat
Question #1: you will have to implement your own load balancing. This shouldn't be terribly complex, as you could use the Blob storage lease functionality to keep a mutex on some blob in storage from one instance while it holds the active connection to your external system. Every X period of time it could renew the lease, provided it knows the connection is still active and successful. Every other worker in the role would keep checking that lease to see if it expires. If it ever expires, the next worker would jump in, acquire the lease, and then open the connection to the external source.
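As a rough sketch of that lease pattern using the modern azure-storage-blob Python SDK (the original answer predates this SDK, so treat it as an illustration of the pattern rather than the setup the answerer had in mind; the connection string and blob names are placeholders, and the lock blob is assumed to already exist):

```python
# Hedged sketch of lease-based active/passive election with azure-storage-blob.
# Placeholders throughout; the lock blob must already exist in the container.
import time
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="locks", blob_name="service-a-lock")

def run_when_leader(do_work) -> None:
    """Loop forever; only the lease holder runs do_work()."""
    while True:
        try:
            lease = blob.acquire_lease(lease_duration=60)  # try to become active
        except HttpResponseError:
            time.sleep(15)          # someone else holds the lease; stay passive
            continue
        try:
            while True:
                do_work()           # e.g. keep the external connection open
                lease.renew()       # keep leadership while healthy
                time.sleep(30)
        except Exception:
            lease.release()         # let a passive instance take over
```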
Question #2: Look into Azure Service Bus. It has a capability that allows clients to process related messages. More info here: http://geekswithblogs.net/asmith/archive/2012/04/02/149176.aspx
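For illustration, here is roughly what session-based grouping looks like with the modern azure-servicebus Python SDK (again, the linked material predates this SDK; the queue must be session-enabled, and all names are placeholders):

```python
# Hedged sketch of session-based grouping. Messages sent with the same
# session_id (e.g. one per client) are handled by a single receiver, which
# gives the per-client grouping and ordering described above.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

with ServiceBusClient.from_connection_string("<connection-string>") as client:
    # Producer side: stamp each message with the client it belongs to.
    with client.get_queue_sender("work-queue") as sender:
        sender.send_messages(
            ServiceBusMessage("payload for client 5", session_id="client-5"))

    # Consumer side: this worker locks the whole "client-5" session, so all
    # of that client's messages are processed together and in order.
    with client.get_queue_receiver("work-queue", session_id="client-5") as receiver:
        for msg in receiver:
            print(str(msg))
            receiver.complete_message(msg)
```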
All queuing methodologies imply that if a message gets picked up but does not get processed within a configurable amount of time, it goes back onto the queue so that the next available instance can pick it up and process it.
You can use something like AzureWatch to monitor the depth of your queues (Storage or Service Bus) and auto-scale the number of instances in your C and D roles to match, and to monitor instance statuses for roles A, B and E to make sure there is always a healthy instance there, auto-scaling if the number of ready instances drops to 0.
HTH
First, back up a step. One of the first things I do when looking at application architecture on Windows Azure is to qualify whether or not the app is a good candidate for migration to Windows Azure. I particularly look at how much integration is in the application — integration is always more difficult than expected, doubly so when doing it in the cloud. If most of your workload needs to be done through a single, always-on connection, then you are going to struggle to get the availability and scalability that we turn to the cloud for.
Without knowing the detail of your application, but by way of example, assume Services A & B are feeds from a financial data provider. Providers of data feeds are really good at what they do, have high availability, and provide 'enterprise grade' (whatever that means) for enterprise-grade costs. Their architectures are also old-school and, in some cases, very rigid. So first off, consider asking your feed provider (who gives you a login/connection and expects you to pull data) to push data to you via a web service instead. Exposed web services are the solution to scaling and performance, and are used everywhere, from Table storage on Azure to high-throughput database services like DynamoDB. (I'll challenge any enterprise data provider to explain how a service like Amazon S3 is mickey-mouse.) If your data supplier pushed data to a web service via an agreed API, you could perform all sorts of scaling and availability work on the service for a low engineering cost.
Your alternative is, as you are discovering, to build a whole lot of stuff to make sure that your architecture fits in with the single-node model of your data supplier. While it can be done, you are going to spend a lot of engineering cash on hand-rolling a whole bunch of distributed computing principles. If you are going to have an active-passive architecture, you need to implement a leader election algorithm in order to determine when a passive node should become active. This is not as trivial as it sounds, as an active node may look like it has disappeared but still be processing — and you don't want to slot another one in its place. So then you will implement a heartbeat, or even a separate 'witness' node that does nothing other than keep an eye on which nodes are alive in order to do something about them. You mention that downtime and patching are unacceptable. So what is acceptable? A few minutes, a few seconds, or less than a second? Do you want the passive node to take over from where the other left off, or start again?
You will probably find that the development cost to implement all of this is higher than the cost of building and hosting a highly available physical server. Perhaps you can separate the loads and run the data feed services in a co-lo on a physical box, and have the heavy lifting of the processing done on Windows Azure. I wouldn't even look at Azure VMs, because although they don't recycle as much as roles, they are subject to occasional problems — at least more than enterprise-grade hardware. Start off with discussions with your supplier of the data feeds — they may have a solution, or one that can be cobbled together (e.g. two logins for the price of one, where the 'second' account/instance mostly throws away its data).
Be very careful of traditional enterprise integration. They ask for things that seem odd in today's cloud-oriented world. I've had a request that my calling service have a fixed IP address, for example. You may find that the money spent on code to work around someone else's architecture would be better spent buying physical servers. Push back on the data providers — it is time they got out of the 90s.
[Disclaimer] 'Enterprises', particularly those that are in financial services, keep saying that their requirements are special — higher throughput, higher security, heavier regulation and higher availability. With the exception of a very few cases (e.g. high frequency trading), I tend to call 'bull' on most of this. They are influenced by large IT budgets and vendors of expensive kit taking them to fancy lunches, and are indoctrinated in their server-hugging beliefs. My individual view on the enterprise hardware/software/services business has influenced this answer. Your mileage may vary.
I'd like to know about specific problems you - the SO reader - have solved using workflow engines, and what libraries/frameworks you used if you didn't roll your own. I'd also like to know when a workflow engine wasn't the best choice, and if/how you chose something simpler, like a task-list/work-list/task-management type application using state machines.
Questions:
What problems have you used workflow engines to solve?
What libraries/frameworks did you use?
When did a simpler State Machine/Task Management like system suffice?
Bonus: How did/do you make the distinction between Task Management and Workflow Engine?
I'm looking for first-hand experiences.
Some of the resources I've checked out:
Ruote and State Machine != Workflow Engine
StonePath and Docs
Creating and Managing Worklist Task Plans with Oracle
Design and Implementation of a Workflow Engine - Thesis
What to use Windows Workflow Foundation For
JBoss jBPM Docs
I'm biased as well, as I am the main author of StonePath.
I have developed workflow applications for the U.S. State Department, the Geneva Centre for Humanitarian Demining, several Fortune 500 clients, and most recently the Washington DC Public School System. Every time I have seen a 'workflow engine' that tried to be the one master reference for business processes, I have seen an organization fighting itself to work around the tool. This may be due to the fact that these solutions have always been vendor/product driven, which then ends up with a tactical team of 'consultants' constantly feeding the app... but because of this, I tend to react negatively when I hear the benefits of process-based tools that promise to 'centralize the workflow definitions in one place and make them repeatable'.
I very much like Ruote - I have been following that project for some time, and should I need that kind of solution, it will be the next tool I'll be willing to try. StonePath has a very different purpose than Ruote:
Ruote is useful to Ruby in general,
StonePath is aimed at Rails, the web framework written in Ruby.
Ruote is about long-lived business processes and their associated definitions (Note - active development on ruote ceased).
StonePath is about managing State-based workflow and tasking.
Frankly, I think the distinction from the outside looking in might be subtle - many times the same kinds of business processes can be represented either way - but the state-and-task-based model tends to map to my mental model.
Let me describe the highlights of a state-based workflow.
States
Imagine a workflow revolving around the processing of something like a mortgage loan or a passport renewal. As the document moves 'around the office', it travels from state to state.
If you are responsible for the document, and your boss asked you for a status update, you'd say things like:
"It is in data entry"...
"We are checking the applicant's credentials now"...
"We are awaiting quality review"...
"We are done"... and so on.
These are the states in a state-based workflow. We move from state to state via transitions - like "approve", "apply", "kickback", "deny", and so on. These tend to be action verbs. Things like this are modeled all the time in software as a state machine.
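A minimal sketch of such a state machine, using the passport/loan example; the states and transitions are illustrative, not StonePath's API:

```python
# Minimal sketch of the state-based workflow described above (states plus
# action-verb transitions); all names are purely illustrative.
TRANSITIONS = {
    # (current_state, action) -> next_state
    ("data_entry", "apply"):                "credential_check",
    ("credential_check", "approve"):        "awaiting_quality_review",
    ("credential_check", "kickback"):       "data_entry",
    ("awaiting_quality_review", "approve"): "done",
    ("awaiting_quality_review", "deny"):    "data_entry",
}

class WorkItem:
    def __init__(self):
        self.state = "data_entry"

    def fire(self, action: str) -> str:
        """Move to the next state, or fail loudly on an illegal transition."""
        try:
            self.state = TRANSITIONS[(self.state, action)]
        except KeyError:
            raise ValueError(f"cannot '{action}' while in '{self.state}'")
        return self.state

loan = WorkItem()
loan.fire("apply")     # "It is in data entry" -> checking credentials
loan.fire("approve")   # -> awaiting quality review
print(loan.state)
```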
Tasks
The next part of a state/task-based workflow is the creation of tasks.
A Task is a unit of work, typically with a due date and handling instructions, that connects a work item (the loan application or passport renewal, for instance) to a user's "inbox".
Tasks can happen in parallel with each other or sequentially.
Tasks can be created automatically when we enter states.
Tasks can be created manually as people realize work needs to get done.
Tasks can be required to be completed before we can move on to a new state.
This kind of behavior is optional, and part of the workflow definition; a small sketch follows.
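Continuing the illustrative sketch, tasks can be auto-created on state entry and used as a guard on transitions (again, hypothetical names, not StonePath's API):

```python
# Illustrative continuation: tasks auto-created on state entry, plus a guard
# that blocks the transition until they are complete (all names hypothetical).
TASKS_ON_ENTRY = {
    "credential_check": ["verify identity", "pull credit report"],
}

class TaskedWorkItem:
    def __init__(self):
        self.state = "data_entry"
        self.open_tasks: list[str] = []

    def enter(self, state: str) -> None:
        self.state = state
        # entering a state can automatically put tasks in someone's inbox
        self.open_tasks = list(TASKS_ON_ENTRY.get(state, []))

    def complete(self, task: str) -> None:
        self.open_tasks.remove(task)

    def advance(self, next_state: str) -> None:
        if self.open_tasks:  # the optional guard, part of the workflow definition
            raise RuntimeError(f"open tasks remain: {self.open_tasks}")
        self.enter(next_state)

item = TaskedWorkItem()
item.enter("credential_check")
item.complete("verify identity")
item.complete("pull credit report")
item.advance("awaiting_quality_review")
```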
The rabbit hole can go a lot deeper than this, and I wrote an article about it for Issue #4 of PragPub, the Pragmatic Programmer's Magazine. Check out the repo link above for an updated PDF of that article.
In working with StonePath over the last few months, I have found that the state-based model maps really well to RESTful web architectures - in particular, the tasks and state transitions map nicely onto nested resources. Expect to see future writing from me on this subject.
I'm biased, I'm one of the authors of ruote.
variant 1) state machine attached to a resource (document, order, invoice, book, piece of furniture).
variant 2) state machine attached to a virtual resource named a task
variant 3) workflow engine interpreting workflow definitions
Now, since your question is tagged "BPM", which we can expand into "Business Process Management", how does that kind of management occur in each of these variants?
In variant 1, the business process (or workflow) is scattered across the application. The state machine attached to the resource enforces some aspects of the workflow, but only those related to the resource. There may be other resources with their own state machines following the same business process.
In variant 2, the workflow can be concentrated around the task resource and represented by the state machine around that resource.
In variant 3, the workflow is enacted by interpreting a resource called a workflow definition (or business process definition).
What happens when the business process changes? Is it worth having a workflow engine where business processes are manageable resources?
Most state machine libraries have one set of states + transitions. Workflow engines are, for the most part, workflow definition interpreters, and they allow multiple different workflows to run together.
What will be the cost of changing the workflow ?
The variants are not mutually exclusive. I have seen many examples where a workflow engine changes the state of multiple resources, some of them guarded by state machines.
I also use variant 3 + 2 a lot for human tasks: the workflow engine, at some point when running a process instance, hands a task (workitem) to a human participant (a task resource is created and placed in the state 'ready').
You can go a long way with variant 2 alone (the task manager variant).
We could also mention variant 0), where there is no state machine and no workflow engine, and the business process(es) are scattered and/or hardcoded in the application. A toy illustration of variant 3 follows.
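Here is that toy illustration of variant 3, where the workflow definition is plain data interpreted by a tiny engine (all names invented; real engines like Ruote interpret far richer definitions). Changing the business process then means editing a definition, not code:

```python
# Toy sketch of variant 3: workflow definitions are manageable resources,
# and one small engine interprets many of them (all names are made up).
WORKFLOW_DEFINITIONS = {
    "invoice-approval-v1": ["enter_data", "review", "approve", "archive"],
    "invoice-approval-v2": ["enter_data", "auto_check", "approve", "archive"],
}

HANDLERS = {
    "enter_data": lambda doc: print("entering data for", doc),
    "auto_check": lambda doc: print("auto-checking", doc),
    "review":     lambda doc: print("manual review of", doc),
    "approve":    lambda doc: print("approving", doc),
    "archive":    lambda doc: print("archiving", doc),
}

def run(definition_name: str, doc: str) -> None:
    """Interpret a definition step by step; many definitions, one engine."""
    for step in WORKFLOW_DEFINITIONS[definition_name]:
        HANDLERS[step](doc)

run("invoice-approval-v2", "invoice-123")  # v1 instances keep running unchanged
```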
You can ask many questions, but if you don't take the time to read the answers, and don't take the time to try things out and experiment, you won't go very far, and will never acquire any flair for when to use this or that tool.
On a previous project I worked on, I added some workflow-type rules to a set of government forms in the healthcare industry.
Forms needed to be filled out by the end user, and depending on some answers, other forms were scheduled to be filled out at a later date. There were also external events that would cancel scheduled forms or schedule new ones.
Sample flow:
Patient Admitted -> Schedule Initial Assessment Form -> Schedule Quarterly Review Form -> Patient Died -> Cancel Review -> Schedule Discharge Assessment Form
Many other rules were based on things such as patient age, where they were being admitted, etc.
This was an ASP.NET app; the rules were basically a table in the database. I added scripting, so a script would run on form completion to determine what to do next. This was a horrid design, and would have been a perfect fit for a proper workflow engine.
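For illustration only, the rule table plus the on-completion script could be caricatured like this (hypothetical names, mirroring the "table in the database" approach):

```python
# Hypothetical sketch of the event -> scheduling rules described above
# (the rule table is data, as in the "table in the database" design).
RULES = {
    "patient_admitted": [("schedule", "Initial Assessment Form"),
                         ("schedule", "Quarterly Review Form")],
    "patient_died":     [("cancel",   "Quarterly Review Form"),
                         ("schedule", "Discharge Assessment Form")],
}

scheduled: set[str] = set()

def on_event(event: str) -> None:
    """Apply every rule registered for this event."""
    for action, form in RULES.get(event, []):
        if action == "schedule":
            scheduled.add(form)
        elif action == "cancel":
            scheduled.discard(form)

on_event("patient_admitted")
on_event("patient_died")
print(scheduled)  # {'Initial Assessment Form', 'Discharge Assessment Form'}
```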
I'm one of the authors of the open source Temporal workflow engine, which we initially developed at Uber as Cadence. The difference between Temporal and the majority of existing workflow engines is that it is developer-focused and extremely flexible and scalable (to tens of thousands of updates per second and up to billions of open workflows). Workflows are written as object-oriented programs, and the engine ensures that the state of the workflow objects, including thread stacks and local variables, is fully preserved in case of host failures.
What problems have you used workflow engines to solve?
Temporal is used for practically any backend application that lives beyond a single request-reply. Examples of usage are:
Distributed CRON jobs
Managing ML/Data pipelines
Reacting to business events, for example trip events at Uber. The workflow can accumulate state based on events received and execute activities when necessary.
Service deployment to Mesos/Kubernetes
CI Pipeline implementation
Ensuring that multiple service calls complete when a request is received, including SAGA pattern implementations
Managing human worker tasks (similar to Amazon MTurk)
Media processing
Customer Support Ticket Routing
Order processing
A testing service similar to ChaosMonkey
and many others
The other set of use cases is based on porting existing workflow engines to run on Temporal. Practically any existing engine's workflow specification language can be ported to run on Temporal. This way a single backend service can power multiple domain-specific workflow systems.
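To give a flavour of the workflow-as-code idea, here is a plain-Python caricature. This is deliberately not the Temporal SDK (per the note below, the Python SDK was still to come at the time of this answer); the durability Temporal actually provides is only simulated here by a dict standing in for a durable store:

```python
# NOT the Temporal SDK: a plain-Python caricature of "workflow as code".
# The engine's real value is that the position in this function and its local
# state survive process/host crashes; here that is merely simulated.
durable_store: dict = {}

def charge_card(order_id): print("charging", order_id)
def reserve_inventory(order_id): print("reserving", order_id)
def ship(order_id): print("shipping", order_id)

def order_workflow(order_id: str) -> None:
    state = durable_store.setdefault(order_id, {"step": 0})
    steps = [charge_card, reserve_inventory, ship]
    # resume from wherever we "crashed" last time, instead of restarting
    for i in range(state["step"], len(steps)):
        steps[i](order_id)
        state["step"] = i + 1  # checkpoint after every activity

order_workflow("order-7")
```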
What libraries/frameworks did you use?
Temporal is a self-contained service written in Go, with Go, Java, PHP, and TypeScript client-side SDKs (.NET and Python are coming in 2022). The only external dependency is storage: Cassandra, MySQL, and PostgreSQL are supported. Elasticsearch can be used for advanced indexing.
Temporal also supports asynchronous cross-region (using AWS terminology) replication.
When did a simpler State Machine/Task Management like system suffice?
The open source Temporal service can be self-hosted, or the temporal.io cloud offering can be used, so the overhead of building any custom state machine/task management is always higher than using Temporal. Outside of the cloud offering, the service and its storage need to be set up. If you already have an SQL database, the service deployment is trivial through a Docker image. Docker is also used to run a local Temporal service for development on a personal computer or laptop.
I am one of the authors of Imixs-Workflow. Imixs-Workflow is an open source workflow engine based on BPMN 2.0 and fully integrated into the Java EE technology stack.
I have been developing workflow engines myself for more than 10 years. I will try to answer your question briefly:
> What problems have you used workflow engines to solve?
My personal goal when I started to think about workflow engines was to avoid hard-coding the business logic within my application. Many things in a business application can be reused, so it makes sense to keep them configurable. For example:
sending out a notification
viewing open tasks
assigning a task to a person
describing the current task
From this function list you can see that I am talking about human-centric workflows. In short, a human-centric workflow engine answers the questions: who is responsible for a task, and who needs to be informed next? And these are the typical questions in business requirements.
>What libraries/frameworks did you use?
5 years ago we started reimplementing the Imixs-Workflow engine, focusing on BPMN 2.0. BPMN is the common standard for process modeling. And the surprising thing for me was that we were suddenly able to describe even highly complex business processes that could be both visualized and executed. I recommend that everyone use BPMN for modeling business processes.
> When did a simpler State Machine/Task Management like system suffice?
A simple state machine is sufficient if you just want to track the status of a business object. This is the case when you begin to introduce a 'status' attribute into your object model. But if you need business processes with responsibilities, logging and flow control, then a state machine is no longer sufficient.
> Bonus: How did/do you make the distinction between Task Management and Workflow Engine?
This is exactly the point where many of the workflow engines mentioned here differ. For a human-centric workflow, you typically need task management to distribute tasks between human actors. For process automation, this point is not as relevant; it is sufficient if the engine performs certain tasks. Task management and workflow engines cannot be compared directly, because task management is always a function of a workflow engine.
Check out the rails_workflow gem - I think it is close to what you are searching for.
I have experience using the Activiti BPMN 2.0 engine to handle high-performance, high-throughput data transfer processes in an infrastructure of network nodes. The basic task was to allow configuration and monitoring of such transfer processes and to control each network node (i.e. request node1 to send a data file to node2 via a specific transport layer).
There could be thousands of processes running at a time, and overall tens or low hundreds of thousands of processes per day.
There were a bunch of different process definitions, but it was not necessarily required that an operator of the system could create custom workflows. So the primary use case for the BPM engine itself was to be robust and scalable and to allow monitoring of each process flow.
In the end it basically worked, but what we learned from that project was that a BPMN platform, or rather the Activiti engine specifically, was not the best bet for such a high-throughput system.
The main challenges were task execution prioritization, DB locking and execution retries, to name a few concerning the BPM itself. So we had to develop custom handling of these, for example:
Handling of retries in the BPM for cases when a node had no free worker for a given task, or when the node was not running at all.
Execution of parallel transfer tasks in a single process and synchronization of the results (success/failure).
I don't know whether other BPMN engines would be more suitable for such a scenario, since BPMN is mostly intended for long-running business tasks involving user interaction, where performance is not the issue it was in our case.
I rolled my own workflow engine to support phased processing of documents - cataloging, sending for image processing (we work with redaction software), sending to validation if needed, then release, and finally shipping back to the client. In our case we have a truckload of documents to process, so sometimes we need to run each service separately to control delivery and resource usage. Simple in concept, but high performance and distributed processing were needed, and we couldn't find any off-the-shelf product that fit the bill for us.