Good day! I tried to build a network model in AnyLogic 8.7.6. I have two sources with different priorities (packets from the first source get priority 2, and packets from the second source get priority 1). Packets from both sources are sent to a Queue, which should sort them by priority.
The priority parameter is defined in the agents at the sources.
I ran a simple experiment: Source 1 generates 1 agent per second and Source 2 generates 10 agents per second. We see that the queue is empty :(
I have no idea why. The Queue doesn't sort them according to their priority.
P.S. Sorry, I have the Russian-language version of AnyLogic.
Without seeing your queue capacities it's hard to be sure, but if you have two queues connected to each other, agents will enter the first one and immediately move on to the next, so they are never prioritized: they never actually wait in the first queue, where I assume you set up the prioritization.
Try deleting the connection between the two queues and simply see if the agents get ordered according to your priority.
See a small test below.
I have a custom agent type with a variable priority, and a simple flowchart with two sources and a queue.
As per your example, I set the priority variable of the agents generated in Source1 to 2, and in Source2 to 1.
In the queue, I set my ordering to be based on priority and tell the block to use the priority variable inside the agents (the higher the value, the higher the priority).
For the example, I set Source1 to generate agents every minute and Source2 to generate every second.
The expectation is that as soon as an agent from Source1 is generated, it will jump the queue and stand first in line.
When I run the model and click to see the details of the queue, I can see that as soon as the agent from Source1 is created, it does jump the line.
You can always create a custom toString() function to determine what is displayed when you click on the queue block.
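For reference, here is a sketch of what the agent type could look like at the Java level (names are hypothetical; in practice you create the agent type and its priority variable in the AnyLogic UI rather than typing the class by hand):

```java
// Sketch of a custom agent type; AnyLogic generates a Java class like this
// for each agent type, extending its Agent base class.
public class Packet extends Agent {
    public int priority; // e.g. set to 2 in Source1's "On exit" action, 1 in Source2's

    // Controls the text shown when you click the queue block at runtime.
    @Override
    public String toString() {
        return "Packet(priority=" + priority + ")";
    }
}
```

With priority-based queuing selected in the Queue block, the agent priority expression can then simply be agent.priority.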
I have built a Logic App that makes an API call and gets back a JSON object. I have to do some manipulation on it to get a proper array out, in order to generate a good-looking e-mail.
I wrote a For each loop to do the manipulation, build each object, and generate an array at the end. But the array contains the same lines multiple times, and some lines are missing.
As you can see here, the data doesn't match even within a single iteration:
Any idea?
By the way, it takes 5 seconds to loop over 12 values! If someone knows why, I'm interested.
The problem is likely caused by the "For each" iterations running at the same time (in parallel). When one iteration executes the "Set variable" action, another iteration may be executing it at the same moment, which causes exactly this kind of problem.
To solve this, make the "For each" iterations run one by one. Click the "..." on the "For each" action and click "Settings".
Enable Concurrency Control and set Degree of Parallelism to 1.
Then run your Logic App again.
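For reference, the same setting is persisted in the workflow definition's code view as a runtimeConfiguration block on the For each action. A sketch of the relevant fragment (the action and body names are placeholders, and the loop's own actions are elided here):

```json
"For_each": {
    "type": "Foreach",
    "foreach": "@body('Parse_JSON')",
    "actions": {},
    "runtimeConfiguration": {
        "concurrency": {
            "repetitions": 1
        }
    }
}
```

Setting repetitions to 1 is the code-view equivalent of Degree of Parallelism = 1.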
For a certain internal endpoint I am working on for a Node.js API, I have been asked to dynamically change the value of a property status based on the value of a property visibility on the same object, just before sending down the response.
So, for example, let's say I have an object that represents a user's profile. The user can have visibility LIVE or HIDDEN, while status can be IDLE, CREATING, or UPDATING.
What's been asked of me is that when I send down the object response containing those two properties, I override the status value based on the current value of visibility: if visibility is LIVE then I should set status to ACTIVE, and if visibility is HIDDEN then status should be INACTIVE (two status values that do not exist internally in the database or in the list of enums for this object). And then, on top of that, if status is not IDLE I should change its value to BUSY.
So not only am I changing its value based on the value of visibility, but I'm also changing its value based on its own value not being a particular value!
I am just wondering whether this is good practice for an API in any way. Apart from being a weird extra layer of complexity, it is also inconsistent: the client will later ask for the same object based on status too, which means a reverse mapping.
status doesn't mean the same thing for different consumers; having the same name for both may be confusing, but it's not a problem if well documented.
If the mapping becomes too complex, you can always persist the two values, but then you will have to keep them in sync.
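If you do end up implementing it, keep the whole mapping in one small, well-named function at the response boundary. Here is a minimal sketch of one plausible reading of the rules (all names are hypothetical, and note that the requirement as stated leaves the precedence between the two rules ambiguous):

```java
// Internal values, as stored in the database.
enum Visibility { LIVE, HIDDEN }
enum Status { IDLE, CREATING, UPDATING }

final class ProfileResponseMapper {

    // ACTIVE, INACTIVE and BUSY exist only in the API response, never internally.
    // Reading chosen here: a non-IDLE status always wins and becomes BUSY;
    // otherwise visibility decides between ACTIVE and INACTIVE.
    static String externalStatus(Visibility visibility, Status status) {
        if (status != Status.IDLE) {
            return "BUSY";
        }
        return visibility == Visibility.LIVE ? "ACTIVE" : "INACTIVE";
    }
}
```

The reverse mapping the client will eventually need (querying by status=ACTIVE) is exactly where this design starts to hurt, since ACTIVE corresponds to a combination of two internal fields rather than a single stored value.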
We are using CQRS with Event Sourcing.
In our application we can add resources (our business term for a single item) from the UI, and we send a command accordingly to add a resource.
So we have some number of resources in the application that were added previously.
Now we have one special type of resource (I am calling it SpecialResource).
When we add this SpecialResource, it needs to be linked with all existing resources in the application.
Linked means this SpecialResource should hold the list of ids (GUIDs) of the existing resources, i.e. a List<Guid>.
The solution we tried: get all the resource ids in the application before adding the special resource (i.e. before firing the AddSpecialResource command), assign this list to the SpecialResource, then send the AddSpecialResource command.
But we are not supposed to do so, because per CQRS a command should not query; i.e. a command can't depend on a query, since a query can return stale records.
How can we achieve this business scenario without querying the existing records in the application?
But we are not supposed to do so, because per CQRS a command should not query; i.e. a command can't depend on a query, since a query can return stale records.
This isn't quite right.
"Commands" run queries all the time. If you are using event sourcing, in most cases your commands are queries -- "if this command were permitted, what events would be generated?"
The difference between this, and the situation you described, is the aggregate boundary, which in an event sourced domain is a fancy name for the event stream. An aggregate is allowed to run a query against its own event stream (which is to say, its own state) when processing a command. It's the other aggregates (event streams) that are out of bounds.
In practical terms, this means that if SpecialResource really does need to be transactionally consistent with the other resource ids, then all of that data needs to be part of the same aggregate, and therefore part of the same event stream, and everything from that point is pretty straightforward.
So if you have been modeling the resources with separate streams up to this point, and now you need SpecialResource to work as you have described, then you have a fairly significant change to make to your domain model.
The good news: that's probably not your real requirement. Consider what you have described so far - if resourceId:99652 is created one millisecond before SpecialResource, then it should be included in the state of SpecialResource, but if it is created one millisecond after, then it shouldn't. So what's the cost to the business if the resource created one millisecond before the SpecialResource is missed?
Because, a priori, that doesn't sound like something that should be too expensive.
More commonly, the real requirement looks something more like "SpecialResource needs to include all of the resource ids created prior to close of business", but you don't actually need SpecialResource until 5 minutes after close of business. In other words, you've got an SLA here, and you can use that SLA to better inform your command.
How can we achieve this business scenario without querying the existing records in the application?
Turn it around; run the query, copy the results of the query (the resource ids) into the command that creates SpecialResource, then dispatch the command to be passed to your domain model. The CreateSpecialResource command includes within it the correct list of resource ids, so the aggregate doesn't need to worry about how to discover that information.
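A minimal sketch of that flow, with hypothetical names (ResourceReadModel and CommandBus stand in for whatever your read side and dispatcher actually look like):

```java
import java.util.List;
import java.util.UUID;

// Hypothetical command: immutable, and it carries the resource ids with it,
// so the aggregate never has to look outside its own stream.
final class CreateSpecialResource {
    final UUID specialResourceId;
    final List<UUID> linkedResourceIds;

    CreateSpecialResource(UUID specialResourceId, List<UUID> linkedResourceIds) {
        this.specialResourceId = specialResourceId;
        this.linkedResourceIds = List.copyOf(linkedResourceIds);
    }
}

// Application-service side (sketch): the query runs *before* the command
// is built, outside the aggregate.
final class SpecialResourceService {
    private final ResourceReadModel readModel; // hypothetical read side
    private final CommandBus commandBus;       // hypothetical dispatcher

    SpecialResourceService(ResourceReadModel readModel, CommandBus commandBus) {
        this.readModel = readModel;
        this.commandBus = commandBus;
    }

    void addSpecialResource() {
        List<UUID> existingIds = readModel.allResourceIds(); // the query
        commandBus.dispatch(new CreateSpecialResource(UUID.randomUUID(), existingIds));
    }
}

// Minimal interfaces so the sketch is self-contained.
interface ResourceReadModel { List<UUID> allResourceIds(); }
interface CommandBus { void dispatch(Object command); }
```

The query still happens, but it happens before dispatch and outside the aggregate; the aggregate just validates and records what it is told.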
It is hard to tell what your database is capable of, but the most consistent way of adding a "snapshot" is at the database layer, because there is no other common place for it in pure CQRS. (There are some articles on doing CQRS+ES snapshots, if that is what you are actually trying to achieve with SpecialResource.)
One way may be to materialize the list of ids using some kind of stored procedure upon the arrival of the AddSpecialResource command (at the database).
Another way is to capture "all existing resources (up to this moment)" with some marker (a timestamp), never delete old resources, and add the "SpecialResource" condition to the queries that will use the SpecialResource data.
OK, one more option (it depends on your case at hand) is to always have the list of ids handy via the same query that served the UI. This way the definition of "all resources" changes to "all resources as seen by the user (at some moment)".
I do not think any computer system is ever going to be 100% consistent, simply because life does not, and cannot, work like this. Apparently we are all also living in the past, since it takes time for your brain to process input.
The point is that you do the best you can with the information at hand, but ensure that your system is able to smooth out any edges. So if you later need to associate one or two more resources with your SpecialResource, you should be able to do so.
So even if you could associate your SpecialResource with all existing entries in your data store, what is to say there isn't another resource, not yet entered into the system, that also needs to be associated?
It all, as usual, will depend on your specific use case. This is why process managers, along with their state, enable one to massage that state until the process can complete.
I hope I didn't misinterpret your question :)
You can do two things in order to solve that problem:
First, make a distinction between the write model and the read model. You know what a read model is, right? A "write model" of the data, in contrast, is a combination of data structures and behaviors that is just enough to enforce all invariants and generate consistent event(s) as a result of every executed command.
Second, don't take the rule that states "the Event Store is the single source of truth" too literally. Consider the following interpretation: the ES is the single source of ALL truth for your application; however, for each specific command you can create "write models" that provide just enough "truth" to make that command consistent.
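As an illustration only (hypothetical names, and assuming the resource ids arrive inside the command as suggested in the answer above), the write model for AddSpecialResource can be this small:

```java
import java.util.List;
import java.util.UUID;

// Hypothetical event emitted when the command is accepted.
record SpecialResourceAdded(UUID specialResourceId, List<UUID> linkedResourceIds) {}

// A write model that is "just enough": it only tracks whether this
// SpecialResource already exists, because that is the only invariant
// it has to enforce here.
final class SpecialResourceWriteModel {
    private boolean exists = false;

    // Rehydrate from the aggregate's own event stream.
    void apply(SpecialResourceAdded event) {
        exists = true;
    }

    // Decide: given the current state, which events does the command produce?
    List<SpecialResourceAdded> handle(UUID id, List<UUID> linkedResourceIds) {
        if (exists) {
            throw new IllegalStateException("SpecialResource already exists");
        }
        return List.of(new SpecialResourceAdded(id, List.copyOf(linkedResourceIds)));
    }
}
```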
Is it possible to have a plugin intervene when someone is editing an optionset?
I would have thought CRM would prevent the removal of option set values if there are entities that refer to them, but apparently this is not the case (there are a number of orphaned fields that refer to options that no longer exist). Is there a message/entity pair that I could use to check whether there are entities using the value that is about to be deleted/modified, and stop the operation if there are?
Not sure if this is possible, but you could attempt to create a plugin on the Execute message and check the input parameters in the context to determine what request type is being processed. Pretty sure you'll want to look for either UpdateAttributeRequest for local option sets, or potentially UpdateOptionSetRequest for both. Then you could run additional logic to determine which values are changing and ensure the database values are correct.
The big caveat: if you have even a moderate amount of data, I'm guessing you'll hit the 2-minute limit for plugin execution and it will fail.