How does the update diff work in AzerothCore?

The worldserver source code is full of Update methods that take a diff integer value as input:
How does it work?
And how is it linked to the "Update time diff" shown by the .server info command?

To fully understand how this works, it's necessary to have a look at the main World run process.
WorldRunnable::run()
File: src/server/worldserver/WorldThread/WorldRunnable.cpp
The method void WorldRunnable::run() is the "Main heartbeat for the World". This method runs the whole world process.
Inside it, there is a while loop that runs as long as the world is supposed to keep running:
void WorldRunnable::run()
{
    uint32 realCurrTime = 0;
    uint32 realPrevTime = getMSTime();

    ///- While we have not World::m_stopEvent, update the world
    while (!World::IsStopped())
    {
        ++World::m_worldLoopCounter;
        realCurrTime = getMSTime();

        uint32 diff = getMSTimeDiff(realPrevTime, realCurrTime);

        sWorld->Update( diff );
        realPrevTime = realCurrTime;

        uint32 executionTimeDiff = getMSTimeDiff(realCurrTime, getMSTime());
        devDiffTracker.Update(executionTimeDiff);
        avgDiffTracker.Update(executionTimeDiff > WORLD_SLEEP_CONST ? executionTimeDiff : WORLD_SLEEP_CONST);

        // ... some more code here
    }

    // at this point the world process is terminating
    // ... some more code here
}
What this loop does is basically:
1) calculate the elapsed time (in milliseconds) since the previous iteration; this will be the diff
2) call sWorld->Update( diff ), which contains all the world process logic (see below), passing the diff to it
3) measure how long sWorld->Update( diff ) took to run and update devDiffTracker and its average counterpart avgDiffTracker. These are the values displayed by the .server info command.
World::Update(uint32 diff)
File: src/server/game/World/World.cpp
The World::Update(uint32 diff) function is called continuously by the main worldserver loop, and on every call it receives as input the amount diff of time elapsed since the previous call.
This function is responsible for constantly updating the world; this is where all the magic happens.
Timers
There is a set of timers (defined in World.h) that are updated within the World::Update function:
/// Timers for different object refresh rates
enum WorldTimers
{
    WUPDATE_AUCTIONS,
    WUPDATE_WEATHERS,
    WUPDATE_UPTIME,
    WUPDATE_CORPSES,
    WUPDATE_EVENTS,
    WUPDATE_CLEANDB,
    WUPDATE_AUTOBROADCAST,
    WUPDATE_MAILBOXQUEUE,
    WUPDATE_PINGDB,
    WUPDATE_5_SECS,
    WUPDATE_COUNT
};
For example, WUPDATE_AUTOBROADCAST is responsible for the periodic global messages defined in the acore_auth.autobroadcast table.
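To make the mechanism concrete, here is a simplified sketch (not the literal World::Update code) of how one of these timers consumes the diff, assuming an IntervalTimer-style helper with Update(), Passed() and Reset(); the interval value and the body of the if-block are placeholders:

// at startup: configure how often this timer should fire
m_timers[WUPDATE_AUTOBROADCAST].SetInterval(autobroadcastIntervalMs);

// later, inside World::Update(uint32 diff):
m_timers[WUPDATE_AUTOBROADCAST].Update(diff);   // accumulate the elapsed milliseconds
if (m_timers[WUPDATE_AUTOBROADCAST].Passed())   // has the configured interval elapsed?
{
    m_timers[WUPDATE_AUTOBROADCAST].Reset();    // restart the interval
    // ... send the periodic broadcast message ...
}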
Tasks
The World::Update function also takes care of many timed tasks, for example:
/// Handle daily quests reset time
if (m_gameTime > m_NextDailyQuestReset)
    ResetDailyQuests();
Calling Update(diff) functions of Managers
In AzerothCore there are singleton classes called Managers (Mgr) that handle specific parts of the game; for example, BattlegroundMgr handles the Battlegrounds (BGs).
These classes have their own Update(uint32 diff) methods, which are called by World::Update, passing the diff down to them, for example:
sBattlegroundMgr->Update(diff);
sOutdoorPvPMgr->Update(diff);
sBattlefieldMgr->Update(diff);
/// ... there are more!
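Internally, a manager typically accumulates the incoming diff until its own refresh interval has elapsed. Here is a hypothetical sketch of that pattern (SomeMgr, m_updateTimer and UPDATE_INTERVAL_MS are illustrative names, not actual AzerothCore members):

// Hypothetical manager, shown only to illustrate how the diff is consumed downstream.
void SomeMgr::Update(uint32 diff)
{
    m_updateTimer += diff;                  // milliseconds elapsed since the last world tick
    if (m_updateTimer < UPDATE_INTERVAL_MS) // refresh interval not reached yet
        return;

    m_updateTimer = 0;
    // ... perform this subsystem's periodic work (update queues, expire timers, etc.) ...
}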
OnWorldUpdate hook
Last but not least, it calls sScriptMgr->OnWorldUpdate(diff);.
This is part of the AzerothCore Module System and defines a hook that third-party modules can use to attach custom logic to the World::Update function.
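For example, a module can subscribe to this hook by registering a WorldScript and overriding OnUpdate, which receives the same diff. A minimal sketch (MyWorldScript, the 1000 ms threshold and the registration function name are illustrative):

class MyWorldScript : public WorldScript
{
public:
    MyWorldScript() : WorldScript("MyWorldScript") { }

    void OnUpdate(uint32 diff) override
    {
        _timer += diff;        // diff comes straight from World::Update
        if (_timer < 1000)
            return;            // run the custom logic roughly once per second

        _timer = 0;
        // ... custom periodic logic here ...
    }

private:
    uint32 _timer = 0;
};

// called from the module's script loader
void AddMyModuleScripts()
{
    new MyWorldScript();       // registering makes ScriptMgr invoke OnUpdate every tick
}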

Related

Deleting/Disconnecting a SceneTreeTimer

I'm trying to disconnect a SceneTreeTimer to avoid a function being called on timeout
like this:
extends Node2D

onready var something = $Node2D
var timer

func abort():
    timer.disconnect("timeout", something, "queue_free")
    timer.emit_signal("timeout")
    print("timer=>", timer)

func _ready():
    timer = get_tree().create_timer(3)
    timer.connect("timeout", something, "queue_free")
    ...
    abort()
And while it does stop the timer from invoking the function, I'm still seeing the timer after aborting it. Output:
timer=>[SceneTreeTimer:1276]
Shouldn't it be something like this, since its time has elapsed?
timer=>[Deleted Object]
SceneTreeTimer, unlike Node, is a Reference.
If you have a look at the good old class diagram, you will see that some classes extend Node, others extend Reference (including Resource), and others extend Object directly.
The classes that extend Reference are reference counted: they won't be deleted as long as you hold a (non-WeakRef) reference to them.
The classes that extend Node use explicit memory management instead, so they are deleted by calling free or queue_free on them.
Thus, drop the reference once you are no longer using the SceneTreeTimer:
func abort():
    timer.disconnect("timeout", something, "queue_free")
    timer.emit_signal("timeout")
    timer = null
    print("timer=>", timer) # null, duh
Godot will still emit the "timeout" signal, and when it does it releases its internal reference. We find this in "scene_tree.cpp" (source):
if (time_left < 0) {
    E->get()->emit_signal("timeout");
    timers.erase(E);
}
We can also experiment using a WeakRef to get a result similar to the one you expect. However, remember that since Godot is holding a reference internally the timer won't be deleted before its normal timeout.
extends Node2D

onready var something = $Node2D
var timer_ref: WeakRef

func abort():
    var timer := timer_ref.get_ref() as SceneTreeTimer
    timer.disconnect("timeout", something, "queue_free")
    timer.emit_signal("timeout")

func _ready():
    var timer := get_tree().create_timer(3)
    # warning-ignore:return_value_discarded
    timer.connect("timeout", something, "queue_free")
    timer_ref = weakref(timer)
    abort()

func _process(_delta: float) -> void:
    print("timer=>", timer_ref.get_ref())
You should see it change from
timer=>[SceneTreeTimer:1234]
to
timer=>null
after 3 seconds, since that is the argument we gave to create_timer.
Trivia: here you will get some number where I put "1234"; that number is the instance id of the object. You can get it with get_instance_id, and you can get the instance back from the id with instance_from_id. We saw an example of instance_from_id in FauxBody2D.
You might also find it convenient to create an Autoload where you create, stop, and even pause your timers while keeping an API similar to create_timer; for example, see Godot 4.0. how stop a auto call SceneTreeTimer?.
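As an illustration, here is a minimal sketch of such an autoload for Godot 3.x; the autoload name TimerService and its API are hypothetical, not something built into Godot:

# timer_service.gd, registered as the autoload "TimerService" (hypothetical name).
# It hands out Timer nodes, which, unlike SceneTreeTimer, can be stopped or paused.
extends Node

func create_timer(wait_time: float) -> Timer:
    var timer := Timer.new()
    timer.one_shot = true
    timer.wait_time = wait_time
    add_child(timer)
    # free the Timer node once it has fired
    # warning-ignore:return_value_discarded
    timer.connect("timeout", timer, "queue_free")
    timer.start()
    return timer

A timer obtained from TimerService.create_timer(3) can then be cancelled with stop() followed by queue_free(), which is exactly what a SceneTreeTimer does not allow.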
Addendum:
DON'T DO THIS
You might actually mess things up inside Godot. Since the timer is reference counted, and we can freely change the count, we can make Godot release it early:
var timer := timer_ref.get_ref() as SceneTreeTimer
timer.disconnect("timeout",something,"queue_free")
timer.emit_signal("timeout")
timer.unreference()
I tested this both in the debugger and on a release export, with Godot 3.5.1, and it didn't crash the game nor output any errors.
For clarity, unreference is not the same as free; instead:
reference increases the count by one.
unreference decreases the count by one.
I'm calling unreference to cancel out the internal reference that Godot holds.
We can confirm that the timer is being freed, either by using a weak reference or by looking at Godot's profiler. However, Godot keeps an internal list with references to the timers, and those entries are not being cleared properly.
I wrote this code to test whether the timer loop was affected by timers being released early through the above means.
extends Node2D

var can_fire := true

func _process(_delta: float) -> void:
    var timer := get_tree().create_timer(60)
    # warning-ignore:return_value_discarded
    timer.unreference()
    if can_fire:
        can_fire = false
        print("CREATED")
        # warning-ignore:return_value_discarded
        get_tree().create_timer(2).connect("timeout", self, "fire")

func fire() -> void:
    print("FIRED")
    can_fire = true
You might expect it to output FIRED every couple of seconds. However, what I found is that by calling unreference on unrelated timers, we get the others to fire much faster.
My hypothesis is that Godot keeps the dead reference in its internal list; when another timer is allocated it takes the same memory, and that timer then counts down faster because it appears multiple times in the list.
Removing unreference results in the expected behavior.
It does seem to still exist because calling the disconnect function won't automatically free it. Try doing timer.stop() instead.

How to "simulate" processing time consuming tasks for FreeRTOS aimed to discuss real-time system topics using Linux simulator

I'm trying to use FreeRTOS to discuss real-time concepts with students, using the POSIX/Linux simulator framework. To accomplish this, I have to find a way of wasting processing time in a controlled way (simulating a task in the "Running" state for a predetermined period of processing time).
Delays are no good because they change the task state to "Blocked", and with a preemptive scheduler that means the scheduler can give the processor to other tasks. Linux-native time control approaches (e.g. building the logic around clock_gettime()) are not good either, because I have no control over the exact running time of a single task, especially with preemption. Plain loops (for, while) don't give me the control I need over processing time (my computer and my students' computers will have different processing speeds depending on their architectures).
While researching the FreeRTOS documentation, I found both the TaskStatus_t struct and the vTaskGetInfo() function, which were supposed to help me out. My problem is that when I implement something like:
// Creating the task
if (xTaskCreate(
        app_task,
        task1_info.name,
        configMINIMAL_STACK_SIZE,
        (void *) &task1_info,
        task1_info.priority,
        NULL
    ) != pdPASS) printf("Task create error %s\n", task1_info.name);
// ...
starting_time_ticks = xTaskGetTickCount(); // starting time in ticks
vTaskStartScheduler();
// ...

// The task itself
static void app_task( void *pvParameters )
{ // ...
    for( ;; )
    { // ...
        app_proc_ticks(
            pdMS_TO_TICKS( task_info.proc_time_ms ),
            task_info.name
        ); // consuming processing time...
        // ...
    }
    // ...
}

// Consuming the number of ticks in order to attain a certain processing time
static void app_proc_ticks( TickType_t proc_time_ticks, uint8_t name[APP_MAX_MSG_SIZE])
{
    TaskHandle_t xHandle;
    TaskStatus_t xTaskDetails;

    xHandle = xTaskGetHandle( name );
    configASSERT( xHandle );

    vTaskGetInfo( xHandle, &xTaskDetails, pdTRUE, eInvalid );
    TickType_t begin = xTaskDetails.ulRunTimeCounter;

    while((xTaskDetails.ulRunTimeCounter - begin) < proc_time_ticks)
    {
        vTaskGetInfo( xHandle, &xTaskDetails, pdTRUE, eInvalid );
    }
}
For a task_info.proc_time_ms equal to 25 ms, my code shows the task consuming around 250 ms worth of ticks, an error factor of 10x. The way I measure this is with the following "timestamp" strategy:
static TickType_t get_timestamp_ticks() {
    return xTaskGetTickCount() - starting_time_ticks;
}
As far as I can see, I'm having trouble understanding and properly converting the time unit of xTaskDetails.ulRunTimeCounter (ticks, ms, or probably something else); there is also probably some tick-to-ms constant I'm not aware of. Right now, to convert from ms to ticks I'm using the pdMS_TO_TICKS() macro, and to convert from ticks to ms I'm multiplying the number of ticks by portTICK_RATE_MS.
Also, after make, I'm running the executable with taskset -c 0 ./build/posix_demo to ensure it uses a single processor.
I'm not trying to hold on to this solution, though. If anyone could share how to do a time-controlled delay with "real consumption of processing time" for tasks in FreeRTOS, I would appreciate that as well.
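For reference, a small sketch of the ms/tick conversions mentioned above, using only stock FreeRTOS macros (the resulting values depend on configTICK_RATE_HZ in FreeRTOSConfig.h; portTICK_PERIOD_MS is the current name of the older portTICK_RATE_MS):

#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

static TickType_t ms_to_ticks(uint32_t ms)
{
    return pdMS_TO_TICKS(ms);           /* (ms * configTICK_RATE_HZ) / 1000 */
}

static uint32_t ticks_to_ms(TickType_t ticks)
{
    return ticks * portTICK_PERIOD_MS;  /* each tick lasts 1000 / configTICK_RATE_HZ ms */
}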

Thread pool with Apps Script on Spreadsheet

I have a Google Spreadsheet with internal Apps Script code that processes each row of the sheet and performs a UrlFetch call with the row data. The URL returns a value, which is collected together with the values produced by processing the other rows.
For now the code is processing 1 row at a time with a simple for:
var spreadsheet = SpreadsheetApp.getActiveSpreadsheet();
var sheet = spreadsheet.getActiveSheet();
var range = sheet.getDataRange();

for (var i = 1; i < range.getValues().length; i++) {
  var payload = {
    // retrieve data from the row and make payload object
  };
  var options = {
    "method": "POST",
    "payload": payload
  };
  var result = UrlFetchApp.fetch("http://.......", options);
  var text = result.getContentText();
  // Save result for final processing
  // (with multi-thread function this value will be the return of the function)
}
Please note that this is only a simple example; in the real case the working function will be more complex (something like 5-6 HTTP calls, where the output of some of them is used as input to the next one, ...).
For the example, let's say that there is a generic "function" which executes some sort of processing and provides a result as output.
In order to speed up the process, I'd like to implement some sort of "multi-thread" processing, so I can process multiple rows at the same time.
I already know that JavaScript does not offer multi-thread handling, but I read about WebWorker, which seems to allow asynchronous processing of a function.
My goal is to obtain some sort of ThreadPool (say 5 threads at a time), send every row that needs to be processed to the pool, and obtain as output the result of each function.
When all the rows have finished processing, a final action will be performed to gather the results of all the functions.
So the capabilities I'm looking for are:
a managed "ThreadPool" where I can submit an N amount of tasks to be performed
the possibility to obtain a resulting value from each task processed by the pool
the possibility to determine that all the tasks have been processed, so a final "event" can be executed
I already see that there are some ready-to-use libraries like:
https://www.hamsters.io/wiki#thread-pool
http://threadsjs.readthedocs.io/en/latest/
https://github.com/andywer/threadpool-js
but they work with NodeJS. Due to the nature of Apps Script, I need a simpler approach, one provided by native JS. Also, it seems that minified JS is not accepted by the Apps Script editor, so I also need the "expanded" version.
Do you know of a simple ThreadPool in JS where I can submit a function to be executed and get back a Promise with the result?
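For what it's worth, within Apps Script itself the per-row fetches of the simple example above can at least be issued as one concurrent batch with UrlFetchApp.fetchAll. A sketch only; the URL and the payload construction are placeholders, and this does not cover the chained-call case:

function fetchAllRows() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var rows = sheet.getDataRange().getValues();

  // Build one request object per data row (header row skipped).
  var requests = [];
  for (var i = 1; i < rows.length; i++) {
    requests.push({
      "url": "http://.......",   // placeholder, as in the question
      "method": "post",
      "payload": {}              // build the payload from rows[i] here
    });
  }

  // fetchAll issues the requests concurrently and returns the responses in order.
  var responses = UrlFetchApp.fetchAll(requests);
  var results = responses.map(function (r) { return r.getContentText(); });

  // ... final processing on results ...
  return results;
}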

How to save data using multiple threads in grails-2.4.4 application using thread pool

I have a multithreaded program running some logic to come up with rows of data that I need to save in my Grails (2.4.4) application. I am using a fixed thread pool with 30 threads. The skeleton of my program is below. My expectation is that each thread calculates all the attributes and saves one row in the table. However, the end result I am seeing is that some random rows are not saved. Upon repeating the exercise, a different set of rows turns out not to be saved in the table. So, overall, each time this is attempted a certain set of rows is NOT saved in the table at all. GORMInstance.errors did not reveal any errors, so I have no clue what is incorrect in this program.
ExecutorService exeSvc = Executors.newFixedThreadPool(30)

for (obj in list) {
    exeSvc.execute({-> finRunnable obj} as Callable)
}
Also, here's the runnable program that the above snippet invokes.
def finRunnable = { obj ->
    for (item in LIST-1) {
        for (it in LIST-2) {
            for (i in LIST-3) {
                rowdata = calculateValues(item, it, i);

                GORMInstance instance = new GORMInstance();
                instance.withTransaction {
                    instance.attribute1 = rowdata[0];
                    instance.attribute2 = rowdata[1];
                    // ... and so on ...
                    instance.save(flush: true) /* without flush:true, I am
                        running into a HeuristicCompletion exception, so I need
                        it here. */
                } // end transaction
            } // for loop 3
        } // for loop 2
    } // for loop 1
} // runnable closure

Parallel.ForEach Ordered Execution

I am trying to execute parallel functions on a list of objects using the new C# 4.0 Parallel.ForEach function. This is a very long maintenance process. I would like to make it execute in the order of the list so that I can stop and continue execution at the previous point. How do I do this?
Here is an example. I have a list of objects: a1 to a100. This is the current order:
a1, a51, a2, a52, a3, a53...
I want this order:
a1, a2, a3, a4...
I am OK with some objects being run out of order, as long as I can find a point in the list where I can say that all objects before this point have been run. I read the C# parallel programming whitepaper and didn't see anything about it. There isn't a setting for this in the ParallelOptions class.
Do something like this:
int current = 0;
object lockCurrent = new object();

Parallel.For(0, list.Count,
    new ParallelOptions { MaxDegreeOfParallelism = MaxThreads },
    (ii, loopState) =>
    {
        // The way Parallel.For works is that it chunks the task list up, with each thread getting a chunk to work on...
        // e.g. [1-1,000], [1,001-2,000], [2,001-3,000] etc...
        // We have prioritized our job queue so that more important tasks come first, so we don't want the
        // task list to be broken up; we want it run in roughly the same order we started with.
        // So we ignore the passed-in loop variable and just increment our own counter.
        int thisCurrent = 0;
        lock (lockCurrent)
        {
            thisCurrent = current;
            current++;
        }
        dothework(list[thisCurrent]);
    });
You can see how, when you break out of the parallel for loop, you will know the last list item to have been executed, assuming you let all threads finish before breaking. I'm not a big fan of PLINQ or LINQ; I honestly don't see how writing LINQ/PLINQ leads to maintainable or readable source code. Parallel.For is a much better solution.
If you use Break (via the ParallelLoopState) to terminate the loop, then you are guaranteed that all indices below the returned value will have been executed. This is about as close as you can get. The example here uses For, but ForEach has similar overloads.
int n = ...
var result = new double[n];

var loopResult = Parallel.For(0, n, (i, loopState) =>
{
    if (/* break condition is true */)
    {
        loopState.Break();
        return;
    }
    result[i] = DoWork(i);
});

if (!loopResult.IsCompleted &&
    loopResult.LowestBreakIteration.HasValue)
{
    Console.WriteLine("Loop encountered a break at {0}",
        loopResult.LowestBreakIteration.Value);
}
In a ForEach loop, an iteration index is generated internally for each element in each partition. Execution takes place out of order, but after a break you know that all iterations lower than LowestBreakIteration will have been completed.
Taken from "Parallel Programming with Microsoft .NET" http://parallelpatterns.codeplex.com/
Available on MSDN. See http://msdn.microsoft.com/en-us/library/ff963552.aspx. The section "Breaking out of loops early" covers this scenario.
See also: http://msdn.microsoft.com/en-us/library/dd460721.aspx
For anyone else who comes across this question: if you're looping over an array or list (rather than an IEnumerable), you can use the overload of Parallel.ForEach that gives the element index, to maintain the original order too.
string[] MyArray; // array of stuff to do parallel tasks on
string[] ProcessedArray = new string[MyArray.Length];

Parallel.ForEach(MyArray, (ArrayItem, loopstate, ArrayElementIndex) =>
{
    string ProcessedArrayItem = TaskToDo(ArrayItem);
    ProcessedArray[ArrayElementIndex] = ProcessedArrayItem;
});
As an alternate suggestion, you could record which objects have been run and then filter the list when you resume execution to exclude the objects which have already run.
If this needs to persist across application restarts, you can store the IDs of the already executed objects (I assume here the objects have some unique identifier).
For anybody looking for a simple solution, I have posted 2 extension methods (one using PLINQ and one using Parallel.ForEach) as part of an answer to the following question:
Ordered PLINQ ForAll
Not sure if the question was altered, as my comment seems wrong.
Here is an improved version. Basically, keep in mind that parallel jobs run in an order outside your control;
e.g. printing 10 numbers might result in 1, 4, 6, 7, 2, 3, 9, 0.
If you would like to stop your program and continue later,
problems like this usually end up being solved by batching the workload,
with some logging of what was done.
Say you had to check 10,000 numbers for primality or so.
You could loop in batches of size 100 and keep a log per batch: log1, log2, log3, ...
log1 = 0..99
log2 = 100..199
Be sure to set some marker to know whether a batch job was finished.
It's a general approach, since the question isn't that exact either.
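As a rough sketch of that batching idea (items, DoWork and the Load/SaveLastCompletedBatch helpers are hypothetical; the batch is the unit of resumable progress):

const int batchSize = 100;
int resumeFrom = LoadLastCompletedBatch();            // e.g. read the marker from disk

for (int batch = resumeFrom; batch * batchSize < items.Count; batch++)
{
    int start = batch * batchSize;
    int end = Math.Min(start + batchSize, items.Count);

    // Within a batch the order does not matter; only whole batches count as progress.
    Parallel.For(start, end, i => DoWork(items[i]));

    SaveLastCompletedBatch(batch + 1);                // marker: everything below this batch is done
}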
