AnyLogic: Total Delay Time in a discrete event simulation - delay

Is there any function to measure the total delay time needed in an iteration of a DES? I want to do a Monte Carlo experiment with my DES and iterate the DES 1000 times. In every iteration I want to measure the total delay time needed in that iteration and plot it to a histogram. I have already implemented a Monte Carlo experiment. My idea was to have a variable totalDelayTime and set this variable to the total delay time needed in every iteration. In my Monte Carlo experiment I want to plot this variable via a histogram. Is there any solution or simple AnyLogic function to measure/get the total delay time? I tried to set my variable totalDelayTime in the sink with totalDelayTime = time(). But when I trace this variable via traceln(totalDelayTime) to the console, I get the exact same delay time for every iteration. However, when I just write traceln(time()) in the sink, I get different delay times for every iteration.

You can get the total simulation run time by calling time() in the "On destroy" code box of Main. It returns the total elapsed time in the model's time units.
If you need it in a specific unit, call time(MINUTE) or similar.
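For context, here is the overall workflow the question describes - run the model many times, record the total delay per run, and plot a histogram - sketched outside AnyLogic in plain Python (the model function, iteration count and bin count are made-up placeholders; in AnyLogic the Monte Carlo experiment and a histogram chart do this for you):

import random
import matplotlib.pyplot as plt   # assumes matplotlib is installed

def run_model_once():
    # Hypothetical stand-in for one DES replication; in AnyLogic this value
    # would be time() read in Main's "On destroy" code box.
    return sum(random.expovariate(1.0) for _ in range(50))

total_delay_times = [run_model_once() for _ in range(1000)]   # 1000 iterations
plt.hist(total_delay_times, bins=30)
plt.xlabel("total delay time per iteration")
plt.ylabel("frequency")
plt.show()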

Related

How to get number of milliseconds since epoch in ADF

I am trying hard to get a Unix timestamp for the current time in ADF. Basically I need the number of milliseconds (ms) since the epoch. I tried dabbling with the built-in ticks function in ADF, but it's not what I need. The documentation on ticks is also not really clear: it says the function returns the number of ticks since a specified timestamp, where 1 tick = 100 nanoseconds and 1,000,000 nanoseconds = 1 ms. With that in mind, I used the following expression in a Set Variable activity:
#{ div(ticks('1970-01-01T00:00:00.0000000Z'),10000) }
The expectation is that whenever I run it, it should give me the number of milliseconds since the epoch (up to the moment of execution), so by this definition it should return a different value on every run. But it returns the fixed value 62135568000000 every time. So either the documentation is not correct or it's not calculating what I need.
The ticks() function returns the number of ticks from '0001-01-01T00:00:00.0000000Z' to its parameter, not from '1970-01-01T00:00:00.0000000Z'. This is why you always get the fixed value 62135568000000.
I have tried #{ticks('0001-01-01T00:00:00.0000000Z')}, and the result is 0.
So if you want the number of ms from '1970-01-01T00:00:00.0000000Z' to the current time, you can try this:
#{div(sub(ticks(utcnow()),ticks('1970-01-01T00:00:00.0000000Z')),10000)}
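As a quick sanity check outside ADF, the value that expression should produce is simply the current Unix time in milliseconds, which you can compute directly, for instance in Python:

import time

# Milliseconds since 1970-01-01T00:00:00Z; the ADF expression above should
# return approximately the same number at the moment the pipeline runs.
ms_since_epoch = int(time.time() * 1000)
print(ms_since_epoch)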

How to wait x seconds in 6502 basic

How do I wait x amount of time in 6502 BASIC? I am on VICE xpet, and I would like to print a character, wait a little, then delete it, and repeat for a while, as a sort of status indicator. The only problem is that it deletes too fast, so the indicator never shows up. I've tried looking for such a command in the reference, but there is nothing for just flat-out waiting a little bit. I know that if I make a huge for loop I may be able to slow the machine down enough to do it by brute force, but I'd rather avoid that if possible. Is there not a better way?
Thanks!
You can use the system variable TI for timing purposes. Its value is incremented automatically every 1/60 of a second. It will not be perfect, but it works.
The example below prints the current value of TI once per second:
10 PRINT "START " TI
20 T0=TI
30 IF TI-T0>=60 THEN PRINT TI;TI-T0 : GOTO 20
40 GOTO 30
It's been decades since I've programmed on a 6502 (C-64/VIC-20), but I'm pretty sure even their version of BASIC had the keyword TIMER. If memory serves, it counts milliseconds, but I could be wrong; you might have to play with it. Set a variable equal to TIMER, run a FOR/NEXT loop to take up some time, then check its value again. Once you figure out how many ticks occur in a second, you can make that a constant and loop until the variable equals the timer's starting value plus that many ticks per second (setting the variable to TIMER before the loop, of course).
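The underlying idea in both answers - poll a free-running clock instead of burning a fixed number of loop iterations - is the same in any language. A minimal sketch of that pattern in Python (the blink count and delay are arbitrary):

import sys
import time

def wait_jiffies(n):
    # Poll a monotonic clock until n/60 of a second has elapsed,
    # analogous to watching TI advance by n ticks.
    start = time.monotonic()
    while time.monotonic() - start < n / 60.0:
        pass

for _ in range(10):            # blink a crude status indicator
    sys.stdout.write("*")
    sys.stdout.flush()
    wait_jiffies(30)           # roughly half a second
    sys.stdout.write("\b \b")  # erase the character again
    sys.stdout.flush()
    wait_jiffies(30)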

What unit of time does the delay refer to?

In the tracer(x, y) function of Python's standard turtle graphics, x is said to be the number of updates and y is said to be the delay time, and calling tracer turns off the tracing animation. For example, in the call turtle.tracer(1, 50), what unit of time does the delay of 50 refer to?
According to the docs:
Set or return the drawing delay in milliseconds. (This is approximately the time interval between two consecutive canvas updates.) The longer the drawing delay, the slower the animation.
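A minimal way to see the effect of that delay (assuming a standard CPython install with tkinter; the drawing itself is arbitrary):

import turtle

screen = turtle.Screen()
screen.tracer(1, 50)        # update the canvas every step, with a 50 ms drawing delay
pen = turtle.Turtle()
for _ in range(36):
    pen.forward(100)
    pen.backward(100)
    pen.left(10)
turtle.done()               # try delay=500 above to see the animation slow down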

Difference between Think time and Pacing Time in Performance testing

Pacing is used to achieve X iterations in Y minutes, but I'm able to achieve a given number of iterations in a given number of minutes (or hours, or seconds) by specifying only think time, without using pacing time.
I want to know the actual difference between think time and pacing time. Is it necessary to specify a pacing time between iterations, and what does the pacing time actually do?
Think time is a delay added after an iteration is complete and before the next one is started. The iteration request rate depends on the sum of the response time and the think time. Because the response time can vary with the load level, the iteration request rate will vary as well.
For a constant request rate, you need to use pacing. Unlike think time, pacing adds a dynamically determined delay that keeps the iteration request rate constant while the response time changes.
For example, to achieve 3 iterations in 2 minutes, the pacing time should be 2 x 60 / 3 = 40 seconds. Here's an example of how to use pacing in our tool: http://support.stresstimulus.com/display/doc46/Delay+after+the+Test+Case
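The difference is easy to see in a small sketch (Python here purely for illustration; the response time, think time and pacing values are made up, and a real load-testing tool would handle this for you). With think time the start-to-start interval stretches as the response time grows; with pacing the leftover part of a fixed interval is waited out, so the interval stays constant as long as the response time stays below it.

import random
import time

PACING = 2.0       # target start-to-start interval in seconds (hypothetical)
THINK_TIME = 0.5   # fixed pause after each response (hypothetical)

def send_request():
    # Stand-in for a real request; the response time varies with load.
    time.sleep(random.uniform(0.2, 1.0))

# Think time: interval = response time + think time, so it varies with load.
for _ in range(3):
    start = time.time()
    send_request()
    time.sleep(THINK_TIME)
    print("think-time iteration took %.2f s" % (time.time() - start))

# Pacing: wait out the remainder of the fixed interval, so start-to-start stays ~PACING.
for _ in range(3):
    start = time.time()
    send_request()
    remaining = PACING - (time.time() - start)
    if remaining > 0:
        time.sleep(remaining)
    print("paced iteration took %.2f s" % (time.time() - start))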
Think Time
It introduces an element of realism into the test execution. With think time removed, as is often the case in stress testing, execution speed and throughput can increase tenfold, rapidly bringing an application infrastructure that can comfortably deal with a thousand real users to its knees. Always include think time in a load test; think time influences the rate of transaction execution.
Pacing
Pacing is another way of affecting the execution of a performance test; it affects transaction throughput.

Bi-Threaded processing in Matlab

I have a large-scale gradient descent optimization problem that I am running in Matlab. The code has two parts:
A sequential update part that fires every iteration and updates the parameter vector.
A validation error computation part that fires every 10 iterations or so, using the parameter value at the end of the iteration in which it is fired.
The way I am running this now is to do (1) and (2) sequentially. But (2) takes a lot of time, and it's not the core part of my routine - I added it just to check the progress and plot the error of my model. Is it possible in Matlab to run (2) in parallel with (1)? Please note that (1) cannot be run in parallel since it performs a sequential update, so a simple 'parfor' is not a solution, unless there is a really smart way of doing that.
I don't think Matlab has any way of multi-threading outside of the (rather restricted) Parallel Computing Toolbox. There is a workaround which may help you, though:
Open two sessions of Matlab, sessions A and B (or instances, or workspaces, whatever you call them).
Matlab session A:
Calculates 10 iterations of your sequential process (1)
Saves the result in a file (adequately and uniquely named)
Goes on to calculate the next 10 iterations (back to the top of this loop basically)
In parallel:
Matlab session B:
Checks periodically for the existence of the file written by session A (define a timer that does this at whatever time interval makes sense for your process - a few seconds or a few minutes ...)
If the file exists, loads it, runs the validation computation (your process (2)) and displays/reports the results.
Note: this only works if process (1) doesn't need the result of process (2) to run its iterations; if it did, I don't see how you could parallelise it anyway.
If you have multiple cores on your machine this should run smoothly; if you have a single core, the two sessions will have to share it and you will see a performance impact.
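The session-B side of that workaround is essentially a polling loop. Sketched here in Python purely to illustrate the pattern (the file naming scheme, poll interval and validation routine are hypothetical; in Matlab it would typically be a timer callback plus load()):

import glob
import os
import time

CHECKPOINT_PATTERN = "checkpoint_iter*.txt"   # hypothetical naming scheme used by session A
POLL_INTERVAL = 30                            # seconds between checks

def validate(path):
    # Stand-in for the expensive validation-error computation (process (2)).
    with open(path) as f:
        params = [float(x) for x in f.read().split()]
    print(f"{path}: validated {len(params)} parameters")

# Runs until killed; each checkpoint is processed once and then removed.
while True:
    for path in sorted(glob.glob(CHECKPOINT_PATTERN)):
        validate(path)
        os.remove(path)
    time.sleep(POLL_INTERVAL)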
