I'm having difficulty finding the documentation I need to work with Resources and resourcePools - there is no "resource API documentation" that I can find.
I would like to programmatically create static resources (of a custom type) and then add these resources to a resourcePool. When creating the resources, I'd like to be able to specify their property values prior to adding them to the resourcePool. In my mind the code would look something like this:
Room myRoom;
myRoom = new Room("redRoom", 20);
addTo_myResourcePool(myRoom);
myRoom = new Room("greenRoom", 10);
addTo_myResourcePool(myRoom);
Does anyone know if there are ways to achieve this end?
This is a bit of a blind spot in AnyLogic. It can only be done indirectly:
Create an empty agent population with your agent type
Tell the resource pool to use that agent type
Set the resource pool capacity as you need. The pool will create agents for you in that population (if the capacity is larger than the current number of resources)
If you want to manually create a resource, you must call myResourcePool.set_capacity(myResourcePool.getCapacity() + 1)
Destroying one resource works the same way in reverse: decrease the capacity by one.
Also, make sure to tick "destroy resources on capacity decrease" so the agents are actually destroyed from the population
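In code, adding and removing a unit then looks something like this (a sketch, assuming your pool is named myResourcePool and its capacity is defined "Directly"):

// add one unit: the pool creates one new agent in the linked population
myResourcePool.set_capacity(myResourcePool.getCapacity() + 1);

// remove one unit: with "destroy resources on capacity decrease" ticked,
// the corresponding agent is destroyed as well
myResourcePool.set_capacity(myResourcePool.getCapacity() - 1);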
There's a trick I use for this that works sometimes, but I wouldn't generalize it as a final solution... I only do it when I have a few different unit characteristics, which seems to be your case.
step 1: create a population of resourcePools... each resourcePool will correspond to one set of agent characteristics, and all resourcePools use the same resource (agent) type
step 2: in the "On new unit" action of the resource pool, use the index of the resourcePool population to generate the unit with its particular characteristics... then you can just do something like resourcePool.get(i).set_capacity(whatever) in order to generate a resource unit with the exact characteristics you want
step 3: when you seize the resource, use alternatives... each resourcePool from the population will be one option among the alternatives... you will need to create a function that returns a ResourcePool[][] (see the sketch after these steps)
step 4: use conditions to select the unit based on its characteristics (customize resource selection)
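The function from step 3 could look roughly like this (a sketch; resourcePool is the population of pools from step 1, and the seize block's resource-sets expression would call it):

// one row per pool: each row of the array is one alternative way to satisfy the seize
ResourcePool[][] buildAlternatives() {
    ResourcePool[][] alternatives = new ResourcePool[resourcePool.size()][];
    for (int i = 0; i < resourcePool.size(); i++) {
        alternatives[i] = new ResourcePool[] { resourcePool.get(i) };
    }
    return alternatives;
}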
One option is to create a population of your resource agent, which I assume is of type Room based on your code.
Then you have a function that adds a new agent to the population and returns it to the caller (see the sketch below).
Now you only need to call this function in the "New resource unit" field of the resource pool object.
Do not change the "Add units to" option, since we are already adding the agent to the population inside the function.
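The function could look roughly like this (a sketch; rooms is a hypothetical custom population of Room agents, and add_rooms(...) is the function AnyLogic generates for that population):

// create a Room in the custom population and hand it to the resource pool
Room addRoom(String name, int beds) {
    // arguments follow the order of the parameters declared on the Room agent type
    return add_rooms(name, beds);
}

In the pool's "New resource unit" field you would then enter something like addRoom("redRoom", 20), or compute the arguments from wherever your room data lives.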
I tested this in a small model by having two buttons to increase and decrease the capacity during execution, using
// grow the pool by one; the pool calls "New resource unit" to create the agent
resourcePool.set_capacity(resourcePool.size() + 1);
and
// shrink: remove the last agent from the population, then match the capacity to it
remove_myResource(myResource.get(myResource.size() - 1));
resourcePool.set_capacity(max(0, myResource.size()));
There are several related points here.
AnyLogic is designed such that you only change the agents in a resource pool via changing the capacity of the pool. This can be done directly via the pool's set_capacity function (if the pool's capacity is set to be defined "Directly"), but is more commonly done indirectly by linking the pool's capacity to a schedule (to represent shift patterns or any other pre-defined changes over time in capacity).
Resource pools will keep agents by default when the capacity decreases (i.e., they exist but are 'unavailable'), but you can set it to delete them. What you'll want will depend on what the resources and any resource-specific attributes/state represent.
If you need to address/access the agents in the resource pool independently of their use as seized/released resources, have the pool store them in a custom population (rather than in the default population, alongside all other agents not created in a custom population) via the "Add units to" setting. Then you can access that population explicitly (e.g., loop through it) whenever you need to. Otherwise this aspect is not necessary.
If your resource pool is of custom agents with parameters (or other state that needs initialising in a particular way), the standard approach is to have direct/indirect capacity increases create the unit (with default parameter values) and then do any follow-on initialisation in the "On new unit" action (where you have access to the newly-created unit via the agent keyword).
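For example, in the pool's "On new unit" action (a sketch; Room and the generated setters set_roomName / set_numBeds are hypothetical names for your agent type and its parameters):

// "On new unit": the freshly created unit is available as 'agent'
Room room = (Room) agent;
room.set_roomName("redRoom");  // whatever parameter values this unit needs
room.set_numBeds(20);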
You can alternatively do it via a dynamic expression for the "New resource unit" (as in Jaco's answer) but this doesn't particularly gain you anything (though it's equally valid). Yes, it's a bit more 'object-oriented', but there are loads of barriers AnyLogic puts up to doing things in a proper OO way anyway — as you say, the more 'obvious' OO approach would just be to have methods on the resource pool which add/remove agents. Basically, the standard 'design pattern' on blocks which create agents, like a Resource Pool or a Source block, is to let it create them with default info and then customise it in the appropriate action.
You could also use the "On startup" action of the resource pool agent as well / instead, but normally you would be using information from the process-containing agent to determine what state the new resource agent should have, which makes the "On startup" action less useful.
Finally(!), if you have an unchanging set of resource pool agents and it's easier to initialise them together (e.g., because there is some database input data for all of them that can thus just be looped through once), then just do that in your "On startup" action of the agent containing the resource pool (which runs after the resource pool has initialised and created default-state agents). That will require them in a custom population so you can loop through just those agents. But your question seemed to imply you were concerned about dynamic additions of resource agents.
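That startup loop could be as simple as this (a sketch; rooms is the custom population, and roomNames / roomSizes stand in for whatever your input data source provides):

// "On startup" of the agent containing the pool; at this point the pool has
// already created its default-state units in the 'rooms' population
for (int i = 0; i < rooms.size(); i++) {
    Room room = rooms.get(i);
    room.set_roomName(roomNames[i]);
    room.set_numBeds(roomSizes[i]);
}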
But, as some of my other comments indicated, there are various subtleties/restrictions in how resource pools work (really based on them currently being designed as a pool where the explicit identity of each individual is not fully 'controllable') which mean that what you actually need may go beyond this.
A couple of examples:
If you wanted your resources to be explicit individual staff with info tracked across shifts (e.g., with their state changing to reflect experience or the history of tasks they've conducted), you have the complication that you can't control which resources 'come back' when the capacity changes (if you didn't delete them on capacity decrease). So you may need more complex workarounds, such as having separate resource pools for each distinct shift and seizing from resource sets that include all such pools; only one of them (the active shift's pool) will have non-zero capacity at any given time, unless the shifts overlap.
If capacity decreases (representing end-of-shift), you can't control or determine which resource agents are chosen to go off-shift. There are also issues with resource agents that are still active finishing a task before they actually 'leave' (given that shift-ends weren't set to pre-empt): because they still exist until they finish, if you increase the capacity again whilst they're working, AnyLogic won't create a new resource agent (it will just add the existing agent back as an 'active' agent in the pool). So that makes some other things harder...
Question: Is it thread-safe to use static variables (as a shared storage between orchestrations) or better to save/retrieve data to durable-entity?
There are a couple of Azure Functions in the same namespace: a hub-trigger, a durable entity, two orchestrations (the main process and one that monitors the whole process), and an activity.
They all need some shared variables. In my case I need to know the number of main orchestration instances (to start a new one or hold on), which is done in the monitor orchestration.
I've tried both options and ask because I see different results.
Static variables: in my case there is a generic List<SomeMyType>, where SomeMyType holds the id of the task, its state, the number of attempts, the records it processed, and other info.
When I need to start a new orchestration I call List.Add(), and when I need to retrieve and modify an item I use a simple List.First(id_of_the_task). With First() I know for sure the needed task is there.
With static variables I sometimes see that tasks become duplicated for some reason. All I do is retrieve the task with List.First(id_of_the_task) and change something on the result variable. Not a lot of code.
Durable entity: the major difference is that I keep the List on a durable entity, and each time I need it I call .CallEntityAsync("getTask") and .CallEntityAsync("saveTask"), which might slow down the app.
With this approach more code and more calls are required, but it looks more stable; I don't see any duplicates.
Please advise.
I can't answer why you would see duplicates with the static variables approach without seeing the code; it may be because List is not thread-safe and you'd need something like ConcurrentBag, but I'm not sure. The other issue with static variables is what happens if the function app is not always-on or can scale out to multiple instances: when the function unloads (or crashes), the state is lost, and static variables are not shared across instances either, so during high load it won't work (if there can be many instances).
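To illustrate the thread-safety half of that in plain Java (the same hazard applies to a static .NET List<T>): concurrent Add() calls on an unsynchronized list can interleave and lose or duplicate entries, whereas a concurrent map keyed by task id gives atomic per-key updates and a direct lookup instead of a First() scan:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TaskInfo {
    final String id;
    volatile String state;  // visible to all threads without extra locking
    TaskInfo(String id, String state) { this.id = id; this.state = state; }
}

class TaskRegistry {
    // unsafe alternative: static final List<TaskInfo> tasks = new ArrayList<>();
    static final Map<String, TaskInfo> tasks = new ConcurrentHashMap<>();

    static void upsert(TaskInfo t) { tasks.put(t.id, t); }     // atomic per key
    static TaskInfo byId(String id) { return tasks.get(id); }  // direct lookup
}

Even made thread-safe, though, this still disappears on restart and is not shared across scaled-out instances, which is the part the durable entity fixes.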
Durable entities seem better here. They can be shared across many concurrent function instances, and each entity executes only one operation at a time, so they are for sure the safer option. The performance cost is a bit higher, but they should not be slower than orchestrators, since they perform the same kind of common operations: writing to Table Storage, checking for events, etc.
I can't say if it's right for you, but instead of List.First(id_of_the_task) you should just be able to access the orchestrator's properties through the client, which can hold custom data. Another idea, depending on the usage, is to query the Table Storage directly with the CloudTable class for information about the running orchestrators.
Although not entirely related, you can also look at the settings for parallelism for durable functions: Azure (Durable) Functions - Managing parallelism.
Please ask any questions if I should clarify anything or if I misunderstood your question.
I have two processes I want to juxtapose. The first is a Manual workflow that is well represented by the Process library. The second is a software System that performs the same work, but is better modelled as a state transition system (e.g. s/w component level).
Now in AnyLogic, state charts belong to agents, which can run through processes (with animated counts) or move across space. What if I want a state chart that an agent runs through? So I'd have a System state chart/agent and a Job state chart/agent.
I want Jobs from Population A to go through the Manual process flow chart and Jobs from Population B to go through the System state flow chart, so I can juxtapose the processing costs. I then calculate various delays and resource allocations for each of the Jobs going through and compare them.
Can anyone explain how to set up a state chart as the base process that another agent will go through? Is this even possible?
Please help
Thanks
This will not work as you would like it to, for these reasons:
You can't send an agent into a statechart. (I'm not sure how AnyLogic handles statecharts internally; maybe a generic token, or no flow at all, just changes to the active state.)
In AnyLogic only one state (simple or composite) can be active per statechart, so you can't represent a population with several members.
Agents can't be in more than one flow at a time, so even if it were possible to insert an agent into a statechart, this limitation would still apply.
The conclusion: statecharts are suitable for modelling individual behaviour (inside one agent), whereas process flows can be used both for individual behaviour (inside one agent, running a dummy agent through) and for groups (multiple agents running through the process).
The normal use case would be to add the statechart to the agent type running through your process flow (as you already noted in your question), applying the changes caused by the statechart to the individual agent.
I've managed to use the job scheduling example for a project I'm working on. I have an additional constraint I would like to add: some resources should be blocked at certain times. For example, a global renewable resource shouldn't be used between minutes 10 and 20. Is this currently doable, and if not, how can it be done in the score calculation?
Thanks
Use a custom shadow variable listener to predict the starting time of each task.
Then simply have a hard constraint to check that the task won't overlap with its blocks.
Penalize the amount of overlap to avoid a "score trap".
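The overlap amount itself is plain interval arithmetic; a sketch of the helper the constraint could penalize by (in your example, blockStart/blockEnd would be 10 and 20):

// minutes of overlap between a task [start, end) and a blocked window [blockStart, blockEnd)
static long overlapMinutes(long start, long end, long blockStart, long blockEnd) {
    return Math.max(0, Math.min(end, blockEnd) - Math.max(start, blockStart));
}

Penalizing by this amount rather than a flat penalty per violating task is what avoids the score trap: every move that shrinks the overlap improves the score, so the solver gets a gradient toward a feasible schedule.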
We are using CQRS with EventSourcing.
In our application we can add resources (that is the business term for a single item) from the UI, and we send a command accordingly to add a resource.
So we have x number of resources present in application which were added previously.
Now, we have one special type of resource(I am calling it as SpecialResource).
When we add this SpecialResource, it needs to be linked with all existing resources in the application.
Linked means this SpecialResource should have a list of the ids (guids), i.e. a List<Guid>, of the existing resources.
The solution we tried: get all resource ids in the application before adding the special resource (i.e., before firing the AddSpecialResource command), assign this list to the SpecialResource, then send the AddSpecialResource command.
But we are not supposed to do so, because per CQRS a command should not query; i.e., a command can't depend on a query, as the query can return stale records.
How can we achieve this business scenario without querying the existing records in the application?
But we are not supposed to do so, because per CQRS a command should not query; i.e., a command can't depend on a query, as the query can return stale records.
This isn't quite right.
"Commands" run queries all the time. If you are using event sourcing, in most cases your commands are queries -- "if this command were permitted, what events would be generated?"
The difference between this, and the situation you described, is the aggregate boundary, which in an event sourced domain is a fancy name for the event stream. An aggregate is allowed to run a query against its own event stream (which is to say, its own state) when processing a command. It's the other aggregates (event streams) that are out of bounds.
In practical terms, this means that if SpecialResource really does need to be transactionally consistent with the other resource ids, then all of that data needs to be part of the same aggregate, and therefore part of the same event stream, and everything from that point is pretty straightforward.
So if you have been modeling the resources with separate streams up to this point, and now you need SpecialResource to work as you have described, then you have a fairly significant change to your domain model to do.
The good news: that's probably not your real requirement. Consider what you have described so far - if resourceId:99652 is created one millisecond before SpecialResource, then it should be included in the state of SpecialResource, but if it is created one millisecond after, then it shouldn't. So what's the cost to the business if the resource created one millisecond before the SpecialResource is missed?
Because, a priori, that doesn't sound like something that should be too expensive.
More commonly, the real requirement looks something more like "SpecialResource needs to include all of the resource ids created prior to close of business", but you don't actually need SpecialResource until 5 minutes after close of business. In other words, you've got an SLA here, and you can use that SLA to better inform your command.
How can we achieve this business scenario without querying the existing records in the application?
Turn it around; run the query, copy the results of the query (the resource ids) into the command that creates SpecialResource, then dispatch the command to be passed to your domain model. The CreateSpecialResource command includes within it the correct list of resource ids, so the aggregate doesn't need to worry about how to discover that information.
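In code, "turn it around" just means the command carries the ids it was built from (a sketch; the names are illustrative, not from your domain):

import java.util.List;

// assembled at the edge: query the read model first, then embed the result
final class CreateSpecialResource {
    final String specialResourceId;
    final List<String> linkedResourceIds;  // the resource ids known at dispatch time

    CreateSpecialResource(String specialResourceId, List<String> linkedResourceIds) {
        this.specialResourceId = specialResourceId;
        this.linkedResourceIds = List.copyOf(linkedResourceIds);
    }
}

The aggregate then simply records what it was told; whether the list was "fresh enough" becomes an explicit business decision made where the command is assembled, which is exactly where the SLA reasoning above belongs.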
It is hard to tell what your database is capable of, but the most consistent way of adding a "snapshot" is at the database layer, because there is no other common place in pure CQRS for that. (There are some articles on doing CQRS+ES snapshots, if that is what you are actually trying to achieve with SpecialResource.)
One way may be to materialize the list of ids using some kind of stored procedure upon the arrival of the AddSpecialResource command (at the database).
Another way is to capture "all existing resources (up to this moment)" with some marker (a timestamp), never delete old resources, and add the "SpecialResource" condition to the queries that will use the SpecialResource data.
Ok, one more option (it depends on your case at hand) is to always have the list of ids handy via the same query that served the UI. This way the definition of "all resources" changes to "all resources as seen by the user (at some moment)".
I do not think any computer system is ever going to be 100% consistent simply because life does not, and can not, work like this. Apparently we are all also living in the past since it takes time for your brain to process input.
The point is that you do the best you can with the information at hand but ensure that your system is able to smooth out any edges. So if you need to associate one or two resources with your SpecialResource then you should be able to do so.
So even if you could associate your SpecialResource with all existing entries in your data store, what is to say that there isn't another resource, not yet entered into the system, that also needs to be associated?
It all, as usual, will depend on your specific use-case. This is why process managers, along with their state, enable one to massage that state until the process can complete.
I hope I didn't misinterpret your question :)
You can do two things in order to solve that problem:
Make a distinction between the write model and the read model. You know what a read model is, right? The "write model" of the data, in contrast, is a combination of data structures and behaviors that is just enough to enforce all invariants and generate consistent event(s) as a result of every executed command.
Don't take the rule "the Event Store is a single source of truth" too literally. Consider the following interpretation: the ES is a single source of ALL truth for your application; however, for each specific command you can create "write models" which provide just enough "truth" to make that command consistent.
Greetings SO denizens!
I'm trying to architect an overhaul of an existing NodeJS application that has outgrown its original design. The solutions I'm working towards are well beyond my experience.
The system has ~50 unique async tasks defined as various finite state machines which it knows how to perform. Each task has a required set of parameters to begin execution which may be supplied by interactive prompts, a database or from the results of a previously completed async task.
I have a UI where the user may define a directed graph ("the flow"), specifying which tasks they want to run and the order they want to execute them in with additional properties associated with both the vertices and edges such as extra conditionals to evaluate before calling a child task(s). This information is stored in a third normal form PostgreSQL database as a "parent + child + property value" configuration which seems to work fairly well.
Because of the sheer number of permutations, conditionals, and the absurd number of possible points of failure, I'm leaning towards expressing "the flow" as a state machine. I have just enough knowledge of graph theory and state machines to implement them, but practically zero background beyond that.
What I'm trying to accomplish is, at flow run time, after user input for the root services has been received, to somehow compile the database representation of the graph + properties into a state machine of some variety.
To further complicate the matter in the near future I would like to be able to "pause" a flow, save its state to memory, load it on another worker some time in the future and resume execution.
I think I'm close to a viable solution but if one of you kind souls would take mercy on a blind fool and point me in the right direction I'd be forever in your debt.
I solved a similar problem a few years ago in my bachelor's and diploma theses. I designed the Cascade, an executable structure which forms a growing acyclic oriented graph. You can read about it in my paper "Self-generating Programs – Cascade of the Blocks".
The basic idea is that each block has inputs and outputs. Initially some blocks are inserted into the cascade, and their inputs are connected to outputs of other blocks to form an acyclic graph. When a block is executed, it reads its inputs (the cascade passes in the values from the connected outputs) and then sets its own outputs. It can also insert additional blocks into the cascade and connect their inputs to outputs of blocks already present. This should be equivalent to one of your tasks starting another task and passing some parameters to it. An alternative to setting an output to a value is forwarding a value from another output (in your case, waiting for the result of some other task, which makes it possible to launch helper sub-tasks).
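As a rough sketch of that structure (my reconstruction of the description above in Java, not the paper's actual API):

import java.util.List;

// an output either holds a value or forwards another block's output
// (forwarding = "wait for the result of some other task")
final class Output {
    private Object value;
    private Output forwardedFrom;

    void set(Object v)     { value = v; }
    void forward(Output o) { forwardedFrom = o; }
    Object get()           { return forwardedFrom != null ? forwardedFrom.get() : value; }
}

// a block reads its wired inputs, sets its outputs, and may grow the cascade
interface Block {
    void execute(Cascade cascade, List<Output> inputs, List<Output> outputs);
}

interface Cascade {
    // insert a new block and wire its inputs to outputs already in the graph,
    // keeping the graph acyclic
    void insert(Block block, List<Output> wiredInputs);
}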