How can I run agents simultaneously in AnyLogic?

When I define some agents in AnyLogic and then run the program, the agents do not run simultaneously: all agents stay in the first state, then one of them moves to some state while the others wait for their turn. Is there a solution to my problem?
Thanks.

What you are asking will never happen, even if you do parallel processing of each agent.
Assume, for example, that each agent has an initial state X with a timeout transition that fires after 1 second and moves the agent to a state Y.
What's going to happen?
Well, you will see that each agent moves from X to Y one after the other... yes, but if you check the time at which each of them moves from X to Y, it will be exactly the same.
Use the time() function to confirm that all agents move to state "Y" at the exact same simulated millisecond.
In summary, your problem is probably not a problem and you are just confused about what is going on.
As an analogy, try to write code equivalent to this in which all the elements of "aux" become equal to 2 at the same time:
int[] aux = new int[100];
for (int i = 0; i < 100; i++) {
    aux[i] = 2;
}
How can you write code that makes every element of "aux" equal to 2 at the same time?
Well, it's not possible... because a computer runs everything sequentially, and even if you process everything in parallel, they will still not all be equal to "2" at the same time...
But in the virtual time of a simulation, YES, all the agents will run at the exact same time. In the real time of your wristwatch, NO, they won't run at the same time...
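To make the distinction between simulated time and real time concrete, here is a minimal sketch of a discrete-event queue in plain Python (not AnyLogic code, and the numbers are arbitrary): events scheduled for the same simulated instant are processed one after another in real time, yet every one of them happens at simulation time 1.0.

# Minimal discrete-event sketch: 100 agents all have a timeout event at sim time 1.0.
import heapq

event_queue = []
for agent_id in range(100):
    heapq.heappush(event_queue, (1.0, agent_id))  # every timeout fires at t = 1.0

while event_queue:
    sim_time, agent_id = heapq.heappop(event_queue)
    # The engine executes these sequentially in real time, but the simulated
    # clock does not advance between them: all 100 transitions occur at 1.0.
    print(f"agent {agent_id} moves from X to Y at simulated time {sim_time}")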

Related

Executing a function periodically on accurate and precise intervals

I want to implement an accurate and precise countdown timer for my application. I started with the simplest implementation, which was not accurate at all.
loop {
    // Code which can take up to 10 ms to finish
    ...
    let interval = std::time::Duration::from_millis(1000);
    std::thread::sleep(interval);
}
As the code before the sleep call can take some time to finish, I cannot run the next iteration at the intended interval. Even worse, if the countdown timer is run for 2 minutes, the 10 milliseconds from each iteration add up to 1.2 seconds. So, this version is not very accurate.
I can account for this delay by measuring how much time this code takes to execute.
loop {
    let start = std::time::Instant::now();
    // Code which can take up to 10 ms to finish
    ...
    let interval = std::time::Duration::from_millis(1000);
    std::thread::sleep(interval - start.elapsed());
}
Even though this seems to be precise down to the millisecond, I wanted to know whether there is a way to implement this that is even more accurate and precise, and/or how it is usually done in software.
For precise timing, you basically have to busy-wait: while start.elapsed() < interval {}. This is also called "spinning" (you might have heard of a "spin lock"). Of course, this is far more CPU intensive than using the OS-provided sleep functionality (which often transitions the CPU into a low-power mode).
To improve upon that slightly, instead of doing absolutely nothing in the loop body, you could:
Call thread::yield_now().
Call std::hint::spin_loop().
Unfortunately, I can't really tell you what timing guarantees these two functions give you. But from the documentation it seems like spin_loop will result in more precise timing.
Also, you very likely want to combine the "spin waiting" with std::thread::sleep so that you sleep the majority of the time with the latter method. That saves a lot of power/CPU-resources. And hey, there is even a crate for exactly that: spin_sleep. You should probably just use that.
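To illustrate the hybrid idea, here is a sketch in Python rather than Rust, since the technique itself is language-agnostic; the 2 ms spin margin is an arbitrary assumption, and the spin_sleep crate does essentially this for you in Rust.

# Hybrid approach: sleep for most of the interval, then busy-wait ("spin")
# for the final stretch to hit the deadline with sub-millisecond precision.
import time

def precise_sleep(interval_s, spin_margin_s=0.002):
    deadline = time.perf_counter() + interval_s
    coarse = deadline - time.perf_counter() - spin_margin_s
    if coarse > 0:
        time.sleep(coarse)      # cheap, OS-scheduled sleep for the bulk of the wait
    while time.perf_counter() < deadline:
        pass                    # spin for the last ~2 ms

start = time.perf_counter()
precise_sleep(1.0)
print(f"slept for {time.perf_counter() - start:.6f} s")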
Finally, just in case you are not aware: for several use cases of these "timings", there are other functions you can use. For example, if you want to render a frame every 60th of a second, you want to use some API that synchronizes your loop with the refresh rate/v-blanking of the monitor directly, instead of manually sleeping.

Celery chains: Is it necessary to wait before getting the results?

So, I have a chain of tasks in Python 3 that a Celery worker runs. Currently, I use the following piece of code to get and print the final result of the chain:
while not result.ready():
    pass
print(result.get())
I have run the code with and without the while-loop, and it seems that the while-loop is redundant.
My question is: "is it necessary to have that while-loop?"
If by redundant, you mean that the code works fine without the while loop, then I would venture to say that the loop is not necessary. If, however, you throw an error without the loop because you're trying to print something that doesn't exist yet, then you should keep it. This can be a problem, though, because an empty while loop means you're just checking the same variable as fast as your computer can physically handle it, which tends to eat up your CPU. I recommend something like the following:
import time

t = 1  # The number of seconds you want to wait between checking if the result is ready
while not result.ready():
    time.sleep(t)
print(result.get())
You can set t to whatever makes sense. If the task you're running takes several hours, maybe set it to 60, and you'll get the result within a minute. If you want the result faster, you can make the interval smaller. This will keep the program from dragging down the rest of your computer. However, if you don't mind your fans blowing and you absolutely need to know the moment the result is ready, ignore all of the above and leave your code the way it is :)
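If it turns out that result.get() already blocks until the chain finishes (the usual behaviour of a Celery AsyncResult), the loop can be dropped entirely. A minimal sketch, assuming a standard AsyncResult and treating the exact keyword arguments as something to check against your Celery version:

# Assumption: get() blocks until the result is available (or the timeout expires),
# and re-raises any exception the task raised, so no manual ready() polling is needed.
try:
    value = result.get(timeout=300)  # hypothetical limit: wait at most 5 minutes
    print(value)
except Exception as exc:
    print("chain failed:", exc)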

AnyLogic: Look ahead simulation

Is it possible to perform look ahead simulation in AnyLogic?
Specifically:
Simulate till time T.
Using 2 values of a variable, simulate for both values till T+t in parallel.
Evaluate the system state at T+t, choose the value of variable which leads to better performance.
Continue simulating from T using the selected value for the variable.
This is the basic functionality I am trying to implement. The variable values can be taken from a decision tree, which should not affect the implementation.
Please let me know if someone has done something like this.
Yes, it is possible with some Java code. You may:
Pause the parent experiment and save a snapshot at time T;
Create two new experiments from the parent experiment;
Load the snapshot into both new experiments;
Continue execution of both experiments until time T + t;
Send a notification to the parent experiment, compare the results, assign the better value, and continue the simulation.
Some of these steps can be done manually with UI controls or by code; some can be done by code only.
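Independent of AnyLogic's experiment API, the look-ahead pattern itself can be illustrated with a toy simulation in Python: the state is deep-copied at time T (the "snapshot"), run forward under each candidate value, and then the run continues with the better one. This is only an illustrative sketch; the state, dynamics, and scoring below are made up for the example.

import copy

# Toy "simulation": the state is a dict; step() advances it by one time unit.
def step(state, knob):
    state["t"] += 1
    state["output"] += knob * state["t"]   # made-up dynamics

def look_ahead(state, candidates, horizon):
    scores = {}
    for knob in candidates:
        trial = copy.deepcopy(state)       # the "snapshot" of the parent run
        for _ in range(horizon):
            step(trial, knob)              # simulate the copy till T + t
        scores[knob] = trial["output"]     # evaluate the state at T + t
    return max(scores, key=scores.get)     # pick the better-performing value

state = {"t": 0, "output": 0.0}
for _ in range(5):                         # simulate the parent run till time T
    step(state, knob=1.0)

best = look_ahead(state, candidates=[0.5, 2.0], horizon=3)
for _ in range(3):                         # continue from T with the chosen value
    step(state, knob=best)
print("chose", best, "final state:", state)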

Puzzled by the cpu-shares setting on Docker

I wrote a test program 'cputest.py' in Python like this:
import time

while True:
    for _ in range(10120*40):
        pass
    time.sleep(0.008)
This program costs about 80% CPU when running in a container (without interference from other running containers).
Then I ran this program in two containers by the following two commands:
docker run -d -c 256 --cpuset=1 IMAGENAME python /cputest.py
docker run -d -c 1024 --cpuset=1 IMAGENAME python /cputest.py
and used 'top' to view their CPU costs. It turned out that they used roughly 30% and 67% CPU respectively. I'm pretty puzzled by this result. Would anyone kindly explain it to me? Many thanks!
I sat down last night and tried to figure this out on my own, but ended up not being able to explain the 70 / 30 split either. So, I sent an email to some other devs and got this response, which I think makes sense:
I think you are slightly misunderstanding how task scheduling works - which is why the maths doesn't work. I'll try and dig out a good article but at a basic level the kernel assigns slices of time to each task that needs to execute and allocates slices to tasks with the given priorities.
So with those priorities and the tight looped code (no sleep) the kernel assigns 4/5 of the slots to a and 1/5 to b. Hence the 80/20 split.
However when you add in sleep it gets more complex. Sleep basically tells the kernel to yield the current task and then execution will return to that task after the sleep time has elapsed. It could be longer than the time given - especially if there are higher priority tasks running. When nothing else is running the kernel then just sits idle for the sleep time.
But when you have two tasks the sleeps allow the two tasks to interweave. So when one sleeps the other can execute. This likely leads to a complex execution which you can't model with simple maths. Feel free to prove me wrong there!
I think another reason for the 70/30 split is the way you are doing "80% load". The numbers you have chosen for the loop and the sleep just happen to work on your PC with a single task executing. You could try moving the loop to be based on elapsed time - so loop for 0.8 then sleep for 0.2. That might give you something closer to 80/20 but I don't know.
So in essence, your time.sleep() call is skewing your expected numbers; removing the time.sleep() brings the CPU load far closer to what you'd expect.
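The change suggested in the quoted reply, driving the busy phase by elapsed wall-clock time rather than by a fixed iteration count, might look something like the following sketch (the 0.8 s / 0.2 s split is just the assumed 80% target, not a measured value):

# Busy-loop for 0.8 s of wall-clock time, then sleep for 0.2 s, so the target
# 80% load no longer depends on a machine-specific count like range(10120*40).
import time

while True:
    busy_until = time.perf_counter() + 0.8
    while time.perf_counter() < busy_until:
        pass              # burn CPU for 0.8 s
    time.sleep(0.2)       # idle for 0.2 s, giving roughly 80% load per cycle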

UPPAAL: What causes the clock to stop running?

I am currently running my UPPAAL simulator. The simulator stops running the model after a certain point. This point varies depending on the declarations I provide. I would like to know, in general, what makes the clock stop running. Is there something that triggers this?
I'm not sure if I'm interpreting your question correctly; if I could read your model, I might be able to give some more precise advice.
Trying to guess what the problem is, I can say there are times when the Uppaal simulator takes infinitely many discrete steps (transitions) without increasing any of the clock variables.
The feeling is that "the clock is stopped" while the rest of the simulation goes on. In this case, time is not actually stopped: among all possible paths, Uppaal is just exploring one in which the clock does not evolve. If the simulator (or the model checker) can take infinitely many transitions without increasing the clock variables, that is an example of a "Zeno path".
It is the responsibility of the person who writes the model to rule out Zeno paths.
If you are not sure your model is free of Zeno paths, you can use known methods for verifying that a timed automaton has no Zeno paths (in Uppaal).
Another possibility is that the simulator stops running altogether, reporting a deadlock. In this case the problem is not that the clock stopped running, but that you have reached a situation where all possible transitions are disabled (perhaps because no guard ever becomes enabled, or because the target states of your enabled transitions have time invariants that are violated).
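Not UPPAAL syntax, but the Zeno situation described above can be mimicked with a toy event loop in Python: every event schedules a successor at the same timestamp, so discrete steps keep firing while the simulated clock never advances (the step cap is only there so the illustration terminates).

# Toy illustration of a Zeno run: zero-delay successor events keep the loop
# busy while simulated time stays frozen at 0.0.
import heapq

queue = [(0.0, 0)]           # (simulated_time, step_counter)
steps = 0
while queue and steps < 10:  # cap the loop; a real Zeno run would never stop
    sim_time, _ = heapq.heappop(queue)
    steps += 1
    heapq.heappush(queue, (sim_time, steps))  # successor at the same time
    print(f"discrete step {steps} at simulated time {sim_time}")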
