Good day! I tried to build a network model in AnyLogic 8.7.6. I have 2 sources with different priorities (packets from the first source get priority 2, and packets from the second source get priority 1). The packets from the sources are transmitted to a Queue, which should sort them by priority.
The priority parameters are defined in Agents at the Sources.
I ran a simple experiment: Source 1 generates 1 agent per second and Source 2 generates 10 agents per second. We see that the queue is empty :(
I have no idea why. The Queue doesn't sort them according to their priority.
P.S. Sorry, I have the Russian-language version of AnyLogic.
Without seeing the queue capacities: if you have two queues connected to each other, agents will enter the first one and immediately go on to the next queue, so they are never prioritized, because they never actually wait in the first queue, where I assume you set up the prioritization.
Try deleting the connection between the two queues and simply see if the agents get ordered according to your priority.
See a small test below
I have a custom agent type with a variable priority and a simple flow chart with 2 sources and a queue
As per your example, I set the priority variable of the agents generated in Source 1 to 2, and in Source 2 to 1.
In the queue, I set the ordering to be priority-based and tell the block to use the priority variable inside the agents (the higher the value, the higher the priority).
For the example, I set Source 1 to generate agents every minute and Source 2 to generate one every second.
The expectation is that as soon as an agent from Source 1 is generated, it will jump the queue and stand first in line.
When I run the model and I click to see the details of the queue, I can see that as soon as the agent from source1 gets created it jumps the line.
You can always create a custom toString() function to determine what must be displayed when you click on the queue block
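The ordering behavior described above (a high-priority agent jumping ahead of waiting low-priority ones) can be sketched outside AnyLogic. This is a plain-Python illustration of the rule, not AnyLogic's actual Queue implementation; the class and agent names are made up for the example:

```python
import bisect

class PriorityFifo:
    """Toy model of a priority-ordered queue block: higher priority value
    goes first; ties keep FIFO (arrival) order."""
    def __init__(self):
        self._items = []       # kept sorted: highest priority first
        self._counter = 0      # arrival sequence number, used as a tie-breaker

    def enqueue(self, name, priority):
        key = (-priority, self._counter)   # negate so higher priority sorts first
        self._counter += 1
        bisect.insort(self._items, (key, name))

    def snapshot(self):
        return [name for _, name in self._items]

q = PriorityFifo()
for i in range(3):
    q.enqueue(f"src2-{i}", 1)   # low-priority agents arrive first
q.enqueue("src1-0", 2)          # the high-priority agent arrives last...
print(q.snapshot())             # ...but stands first in line
```

Running this prints `['src1-0', 'src2-0', 'src2-1', 'src2-2']`: the Source 1 agent jumps the line, exactly the behavior you should see in the queue block's details.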
I am exploring the use of SimPy to model the queue of elective surgery demand following Covid. I want to explore the effect of various strategies, such as the number of theatres, on cutting through the existing backlog. Is there any way to predefine a queue length and waiting-time distribution in SimPy? I imagine I can create a source of patients to build up the waiting list and hold off serving them until I reach the required queue size and waiting-time distribution, but I am wondering if there is a more elegant solution.
You can put whatever timestamp you want on your backlog objects and add them directly to the queue at start-up, but you will still need to delay your queue processing with a timeout so it starts at the right time.
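A minimal sketch of that prefill idea, in plain Python (no SimPy dependency, so the mechanics are visible): stamp each backlog entry with a past "joined" time drawn from an assumed waiting-time distribution, then sort so the longest waiters are served first. The exponential distribution and field names are assumptions to swap for your real data; in SimPy you could place entries like these into a `simpy.Store`'s `items` list at time 0, before starting the server processes.

```python
import random

random.seed(42)  # reproducible example

def prefill_backlog(n, mean_wait_days):
    """Create n backlog entries whose 'joined' timestamps lie in the past,
    sampled from an exponential waiting-time distribution (an assumption)."""
    now = 0.0
    backlog = []
    for i in range(n):
        waited = random.expovariate(1.0 / mean_wait_days)
        backlog.append({"patient": i, "joined": now - waited})
    # longest-waiting patients go to the front of the queue
    backlog.sort(key=lambda p: p["joined"])
    return backlog

waiting_list = prefill_backlog(100, mean_wait_days=90)
print(len(waiting_list), waiting_list[0]["joined"] <= waiting_list[-1]["joined"])
```

The servers (theatres) then start draining this pre-built list at simulation time 0, with no artificial warm-up phase needed.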
So I'm trying to understand Service Bus timings, especially how the locks work. One can choose to manually call CompleteAsync, which is what we're doing. It could also be the case that the processing takes some time. In these cases we want to make sure we don't get unnecessary MessageLockLostExceptions.
Seems there are a couple of numbers to relate to:
Lock duration (found in the Azure portal on the bus, currently set to 1 minute, which I think is the default)
AutoRenewTimeout (property on OnMessageOptions, currently set to 1 minute)
AutoComplete (property on OnMessageOptions, currently set to false)
Assume the processing runs for around 2 minutes and then either succeeds or crashes (it doesn't matter which for now). Let's say this is the normal scenario, so processing takes roughly 2 minutes per message.
Also, it's indeed a queue and not a topic. And we have only one consumer that asynchronously processes the messages, with MaxConcurrentCalls set to 100. We're using OnMessageAsync with ReceiveMode.PeekLock.
What should my settings now be as a single consumer to robustly process all messages?
I'm thinking that leaving the lock duration at 1 minute would be fine, as that's the default, and setting my AutoRenewTimeout to 5 minutes for safety, because as I've understood it, this value should be the maximum time it takes to process a message (at least according to this answer). Performance is not critical for this system, so my reasoning is that leaving a message locked for an unnecessary 1, 2, or 3 minutes is not evil, as long as we don't get lock-lost exceptions, because those give no real value.
This thread and this thread give great examples of how to manually renew the locks, but I thought there was a way to renew them automatically.
What should my settings now be as a single consumer to robustly process all messages?
Aside from LockDuration, MaxConcurrentCalls, AutoRenewTimeout, and AutoComplete, there are some configurations of the Azure Service Bus client you might want to look into. For example, create not a single client with MaxConcurrentCalls set to 100, but a few clients with the total concurrency level distributed among them. Note that you'd want to use different MessagingFactory instances to create those clients, to ensure you have more than a single "pipe" to receive messages. And even with that, it would be far better to scale out and have competing consumers than to have a single consumer handling all the load.
Now back to the settings. If your normal processing time is 2 minutes, it's better to set MaxLockDuration on the entities to this time and not 1 minute. This will remove unnecessary lock extension calls to the broker and eliminate MessageLockLostException.
Also, keep in mind that AutoRenewTimeout is a client-side operation, not a broker-side one, and is therefore not guaranteed. You will run into cases where the lock is lost even though the AutoRenewTimeout has not yet elapsed.
AutoRenewTimeout should always be set longer than MaxLockDuration; having them equal is counterproductive. Keep it somewhat larger than MaxLockDuration, as this is the client's "insurance" that the message lock won't be lost when processing takes longer than MaxLockDuration. Making the two equal, in essence, disables this fallback.
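One way to picture the renewal fallback discussed above is a background loop that extends the lock at half the lock duration until processing finishes. This is a language-agnostic sketch in Python, not the .NET SDK: `renew` is a hypothetical stand-in for the broker's renew-lock call, and because renewal is client-side and not guaranteed, `process` must still tolerate a lost lock.

```python
import threading
import time

def process_with_lock_renewal(process, renew, lock_duration_s):
    """Run process() while periodically calling renew() to extend the
    message lock. Renewing at half the lock duration leaves headroom
    for network latency before the lock would expire."""
    done = threading.Event()

    def renewer():
        while not done.wait(lock_duration_s / 2):
            renew()

    t = threading.Thread(target=renewer, daemon=True)
    t.start()
    try:
        return process()
    finally:
        done.set()   # stop renewing once processing succeeds or raises
        t.join()

# Demo: a 0.5 s "handler" under a 0.2 s lock gets renewed several times.
renewals = []
result = process_with_lock_renewal(
    process=lambda: time.sleep(0.5) or "done",
    renew=lambda: renewals.append(time.monotonic()),
    lock_duration_s=0.2,
)
print(result, len(renewals))
```

The real .NET client's AutoRenewTimeout does something similar internally, which is why it only helps while the client itself stays healthy.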
We have a requirement where we will have messages coming in 3 different queues.
I need to write code such that messages from Queue A are given priority over Queue B, followed by Queue C.
However, I cannot keep any of the queues waiting for too long, so there should be some dedicated receivers for each queue.
Can you please suggest any existing framework that can do this for me?
A possible solution is a higher number of dedicated receivers for queue A that also look at B and C if there are no messages in A.
A slightly lesser number of dedicated receivers for Queue B that also look at A and C if there are no messages in B.
A very few dedicated receivers for Queue C that also look at A and B if there are no messages in C.
Is it possible to implement this solution at the JMS consumer/receiver level, or do I need to write custom code for it?
JMS has no means to control the priority of message handling. I propose converting each message into a task (immediately, as it arrives) and submitting the tasks to a prioritized Executor. See Java Executors: how can I set task priority?
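The convert-to-task idea can be sketched with a priority queue feeding worker threads. This is illustrative Python rather than Java (the linked question covers the Java Executor version); lower numbers mean higher priority, and the sequence counter preserves FIFO order within a priority:

```python
import queue
import threading

tasks = queue.PriorityQueue()
seq = 0          # tie-breaker so equal-priority tasks stay FIFO
results = []

def submit(priority, label):
    """Wrap an incoming message as a (priority, seq, payload) task."""
    global seq
    tasks.put((priority, seq, label))
    seq += 1

def worker():
    while True:
        _, _, label = tasks.get()
        if label is None:          # sentinel: stop the worker
            tasks.task_done()
            return
        results.append(label)      # "handle" the message
        tasks.task_done()

# Enqueue a mixed batch before starting the worker, to show the ordering:
# queue A messages get priority 0, B gets 1, C gets 2.
for label, prio in [("C1", 2), ("A1", 0), ("B1", 1), ("A2", 0)]:
    submit(prio, label)
submit(99, None)                   # sentinel drains last

t = threading.Thread(target=worker)
t.start()
t.join()
print(results)                     # A-messages first, then B, then C
```

This prints `['A1', 'A2', 'B1', 'C1']`. In Java the same shape falls out of a `ThreadPoolExecutor` constructed over a `PriorityBlockingQueue`.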
If you control the queues (as in, the writing code can take a queue reference you provide), then you would use a single PriorityBlockingQueue with a comparator that sorts A, B, C.
If you cannot avoid 3 queues (as in, you only get queue references to read from), then you unfortunately have to poll each rather than take(). However, you cannot spin at full speed and must wait, so you should poll(timeout) on the A queue for as long as your minimal response time for servicing the B and C queues allows (which would be large anyway if A always has priority). You only block on A if the B and C queues are empty, of course (but don't rely on .size() if you don't know the queue implementation; just trust the outcome of the last poll() you tried).
Of course, you could spin up 3 threads that simply take() and put into a single priority queue that you control. But that is a bit overkill.
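The polling loop described above might look like the following, sketched in Python with `queue.Queue` standing in for the three JMS queues; the timeout value is an assumption you would tune to your acceptable B/C latency:

```python
import queue

def next_message(qa, qb, qc, a_timeout=0.05):
    """Fetch the next message with A > B > C priority: block briefly on A,
    and fall through to non-blocking checks of B and C only when A is empty.
    The timeout bounds how stale B and C can get while A is idle."""
    while True:
        try:
            return "A", qa.get(timeout=a_timeout)
        except queue.Empty:
            pass
        for name, q in (("B", qb), ("C", qc)):
            try:
                return name, q.get_nowait()
            except queue.Empty:
                continue

qa, qb, qc = queue.Queue(), queue.Queue(), queue.Queue()
qb.put("b1"); qc.put("c1"); qa.put("a1")
print([next_message(qa, qb, qc) for _ in range(3)])
```

This prints `[('A', 'a1'), ('B', 'b1'), ('C', 'c1')]`: A is always drained first, and B/C are serviced within one timeout of A going quiet. The Java equivalent uses `BlockingQueue.poll(timeout, unit)` in the same pattern.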
Use the JMSPriority property on the JMS message, dump the messages on the same queue and let the provider do the work of prioritizing.
I've just begun tinkering with Windows Azure and would appreciate help with a question.
How does one determine if a Windows Azure Queue is empty and that all work-items in it have been processed? If I have multiple worker processes querying a work-item queue, GetMessage(s) returns no messages if the queue is empty. But there is no guarantee that a currently invisible message will not be pushed back into the queue.
I need this functionality since follow-up behavior of my workflow depends on completion of all work-items in that particular queue. A possible way of tackling this problem would be to count the number of puts and deletes. But this will again require synchronization at a shared storage level and I would like to avoid it if possible.
Any ideas?
Take a look at the ApproximateMessageCount method. This should return the number of messages on the queue, including invisible messages (e.g. the ones being processed).
Mike Wood blogged about this subtlety, along with a tidbit about the queue's Clear method, here.
That said, you might want to choose a different mechanism for workflow management. Maybe a table row, where your row key equals some multi-queue-item transaction id and the individual properties are status flags. This lets you track failed parts of the transaction (say, 9 out of 10 queue items process OK and the 10th fails; you can still delete the 10th queue item, but set its status flag to failed, letting you deal with that scenario accordingly). Also, let's say you use the same queue to process another 'transaction' (meaning the queue is again non-zero in length). By using a separate object like a table row, you can still determine that your 'transaction' is complete even though there are additional queue messages.
The best way is to have another queue, call it a termination-indicator queue, and put a message in it for every message you process from your main queue. That is how it is done in research projects too. Check this out: http://www.cs.gsu.edu/dimos/content/gis-vector-data-overlay-processing-azure-platform.html
If multiple worker processes have to be called in order, each after every task by the previous worker is done (there is a queue containing pointers to blobs, and every worker has multiple instances; please see my previous questions), how should this be done?
Will the Azure fabric do this automatically, or is there a way to set this in the config file?
You just follow the same process that you've already got, but with more layers. If worker 1 reads something from queue 1 and needs to let worker 2 know that it's time to start processing the same file, worker 1 simply puts a message in queue 2.
Edit: OK, let me see if I fully understand what you're after here. It sounds like what you have is a batch of files that need to go through several processes, but they can't go on to the next step until they've all finished the previous one.
If that is the case then, no, there is nothing in Azure that will do that for you automatically.
Because of this, if possible I'd rework my workers so that each file could just be sent on without worrying about what state the other files were in.
If that is not possible, then you need some way of monitoring which files have been completed and which are still pending. One way to do this (and hopefully you can expand on it): the code that creates the batch also creates a progress row in a table somewhere (SQL Azure or Azure Tables, it doesn't really matter) for each file, sends a message to worker 1, and starts a background task to monitor this table.
When worker 1 finishes processing a file, it updates the relevant row in the monitoring table to say, "Worker 1 finished".
The background thread that was created above waits until all of the rows have "Worker 1 finished" set to true, then creates the messages for Worker 2 and starts looking at the "Worker 2 finished" flag. Rinse repeat for as many worker steps as you have.
When all steps are finished, you'll probably want the background task to clean up this table and also have some sort of timeout in case a message gets lost somewhere.
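The monitoring-table idea above can be sketched with an in-memory dict standing in for the table; the step names and helper functions are illustrative, and in production the rows would live in Azure Tables or SQL Azure rather than process memory:

```python
# One "row" per file, one boolean "column" per worker step.
STEPS = ["worker1", "worker2"]

def new_batch(files):
    """The batch creator writes one progress row per file, all steps pending."""
    return {f: {s: False for s in STEPS} for f in files}

def mark_done(table, file, step):
    """A worker flips its flag on the file's row when it finishes."""
    table[file][step] = True

def step_complete(table, step):
    return all(row[step] for row in table.values())

def next_pending_step(table):
    """What the background monitor checks: the first step not yet finished
    for every file. None means the whole batch is done (time to clean up)."""
    for s in STEPS:
        if not step_complete(table, s):
            return s
    return None

table = new_batch(["a.csv", "b.csv"])
mark_done(table, "a.csv", "worker1")
print(next_pending_step(table))   # still worker1: b.csv is pending
mark_done(table, "b.csv", "worker1")
print(next_pending_step(table))   # worker2: queue the worker-2 messages now
```

When `next_pending_step` advances, the monitor enqueues messages for the next worker; when it returns None, the batch is complete and the table rows can be deleted (with a timeout guarding against lost messages, as noted above).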
Although what #knightpfhor is suggesting would do the trick, I would go about this in a simpler way, without referencing the names of workers :-)
Specifically, if you already know how many docs need to be processed, I would first create N rows in a table, each holding some info relevant to the current batch and each with its partition key set to the batch id. I'd then put N messages in my queue and let the worker processes pick them up. When each worker is done, it deletes the corresponding row in the table as well. A monitoring process would simply know a batch has started and do a count every once in a while (if it is not critical; or each worker could do the count after it finishes removing its row) and spawn a new message in the relevant queue for the next worker role to process.
If you want even more control, you could have a row in your table storing the state of your process (processing files, post-processing, etc.). In that case, I'd store the state transitions in a queue and make sure you only make them once. But that's a whole new question altogether.
Hope it helps.