Terminal limiting CPU usage while running Lua - node.js

I was learning Lua (specifically loops), and I need to run my code in cmd to be able to use LuaJIT. Doing so, I noticed that the loops were far too slow. I then recreated the loop in JS and ran it from VS Code, where everything was normal. Then I tried the same code from cmd and, not surprisingly, it was also slow. So I think something is limiting the CPU usage while running code in the terminal, but I have no idea what. If someone knows how to fix it, I would be delighted.
All I did was open the terminal and run these commands:
luajit <path-to-the-code>
node <path-to-the-code>
Lua:
VS Code: average 106 ms per test | 1.1 sec total
cmd: average 10 sec per test | 100 sec total
JS:
VS Code: average 288 ms per test | 3 sec total
cmd: average 10 sec per test | 100 sec total
Lua code:
function test()
    for x = 1, 100000 do
        print(x / 100)
    end
end

local totalTime = 0
for x = 1, 10 do
    local start = os.clock()
    test()
    totalTime = totalTime + os.clock() - start
end
print(totalTime / 10)
Js code:
function test() {
    for (let x = 1; x < 100000; x++) {
        console.log(x / 100);
    }
}

let totalTime = 0;
for (let x = 1; x <= 10; x++) {
    const start = Date.now();
    test();
    totalTime += Date.now() - start;
}
console.log(totalTime / 10);
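A quick way to check whether it is the terminal output itself (rather than the CPU) that dominates the timing is to build the output in memory and write it once. This is a sketch, not part of the original post; testBuffered is a made-up name:
function testBuffered() {
    // Collect all lines first, then write them in a single call,
    // so the measurement is not dominated by per-line console I/O.
    let lines = [];
    for (let x = 1; x < 100000; x++) {
        lines.push(x / 100);
    }
    process.stdout.write(lines.join("\n") + "\n");
}

const t0 = Date.now();
testBuffered();
console.error("buffered run took " + (Date.now() - t0) + " ms");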

Related

Is there any Node.js class/function which is similar to Environment.TickCount on C#?

This is the code in C#:
int x = Environment.TickCount;
From the docs for Environment.TickCount:
Gets the number of milliseconds elapsed since the system started. TickCount cycles between Int32.MinValue, which is a negative number, and Int32.MaxValue once every 49.8 days.
TickCount will increment from zero to Int32.MaxValue (2147483647) for approximately 24.9 days, then jump back to Int32.MinValue (-2147483648), which is a negative number, then increment back to zero during the next 24.9 days.
We can use int result = Environment.TickCount & Int32.MaxValue; to make it rotate between 0 and 2147483647 every 24.9 days.
I want an equivalent method in Node.js which would yield the same result.
I searched npmjs but didn't find a similar function.
os.uptime() is the closest method to what you need; according to the Node.js docs it "Returns the system uptime in number of seconds".
A valid follow-up question is what the maximum value of that method would be. In Node.js the maximum safe integer is Number.MAX_SAFE_INTEGER, which is 9007199254740991 - interpreted as seconds that is roughly 285 million years, so I guess we can assume this as the max value for said method.
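As a rough sanity check of that figure (a small sketch, just interpreting the maximum safe integer as seconds):
// Number.MAX_SAFE_INTEGER interpreted as seconds, expressed in 365-day years.
const years = Number.MAX_SAFE_INTEGER / (60 * 60 * 24 * 365);
console.log(years); // ~285616414.7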
If you want the same behaviour as C#'s TickCount, you will need to create your own custom method, maybe something like the ones given below:
const os = require('os');

// this method will cycle between 0 and 2147483647
function TickCount() {
    const milliseconds_elapsed = os.uptime() * 1000; // convert the uptime to milliseconds
    return milliseconds_elapsed % 2147483647;
}

// this method will cycle between -2147483648 and 2147483647
// note: it will not start from 0
function TickCount() {
    const milliseconds_elapsed = os.uptime() * 1000; // convert the uptime to milliseconds
    return (milliseconds_elapsed % 4294967296) - 2147483648;
}

// this method will cycle between -2147483648 and 2147483647
// note: it starts from 0, goes to 2147483647,
// then wraps to -2147483648 and the cycle continues
function TickCount() {
    const milliseconds_elapsed = os.uptime() * 1000; // convert the uptime to milliseconds
    if (milliseconds_elapsed <= 2147483647) {
        return milliseconds_elapsed;
    }
    return ((milliseconds_elapsed - 2147483648) % 4294967296) - 2147483648;
}
The Microsoft docs say Environment.TickCount is an integer that "contains the amount of time in milliseconds that has passed since the last time the computer was started".
While searching for that I found this question, and the answers suggest using process.uptime() or os.uptime().
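Both suggestions are easy to wrap into millisecond helpers (a sketch; the names processTickCount and systemTickCount are made up):
const os = require('os');

// Milliseconds since the Node.js process started
// (process.uptime() returns fractional seconds, so sub-second precision is kept).
function processTickCount() {
    return Math.floor(process.uptime() * 1000);
}

// Milliseconds since the operating system booted, closest to Environment.TickCount.
function systemTickCount() {
    return os.uptime() * 1000;
}

console.log(processTickCount(), systemTickCount());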

Rust lang thread::sleep() sleeping for almost twice the specified time during game loop on Windows

So I've written the following program to show what I mean:
use std::{thread, time};

const TARGET_FPS: u64 = 60;

fn main() {
    let mut frames = 0;
    let target_ft = time::Duration::from_micros(1000000 / TARGET_FPS);
    println!("target frame time: {:?}", target_ft);
    let mut time_slept = time::Duration::from_micros(0);
    let start = time::Instant::now();
    loop {
        let frame_time = time::Instant::now();
        frames += 1;
        if frames == 60 {
            break;
        }
        if let Some(i) = target_ft.checked_sub(frame_time.elapsed()) {
            time_slept += i;
            thread::sleep(i)
        }
    }
    println!("time elapsed: {:?}", start.elapsed());
    println!("time slept: {:?}", time_slept);
}
The idea of the program is to execute 60 cycles at 60 fps and then exit, printing the time elapsed and the total time spent sleeping during the loop. Ideally, since I'm executing 60 cycles at 60 fps with no real calculations happening in between, it should take about one second to execute and spend basically the entire second sleeping. But instead, when I run it, it returns:
target frame time: 16.666ms
time elapsed: 1.8262798s
time slept: 983.2533ms
As you can see, even though it was only told to sleep for a total of 983 ms, the 60 cycles took nearly 2 seconds to complete. Because of this nearly 50% inaccuracy, a loop told to run at 60 fps instead runs at only about 34 fps.
The docs say "The thread may sleep longer than the duration specified due to scheduling specifics or platform-dependent functionality. It will never sleep less." But is this really just from that? Am I doing something wrong?
I switched to using spin_sleep::sleep(i) from https://crates.io/crates/spin_sleep and it seems to have fixed it. I guess it must just be Windows timer inaccuracy then... still strange that thread::sleep on Windows would be that far off for something as simple as a game loop.

How to execute a while loop precisely every 10 seconds in Windows VC++

Please help me run the following loop precisely every 10 seconds in Windows VC++.
Initially it should start at something like, say, 12:12:40:000. It should neglect the milliseconds it takes to do some work (commented below), restart the next loop at 12:12:50:000, and so on every 10 seconds precisely.
void controlloop()
{
    struct timeb start, end;
    int elapsedtime = 0, sleeptime = 0;
    while(1)
    {
        ftime(&start);
        if(start.time % 10 == 0)
            break;
        else
            Sleep(100);
    }
    while(1)
    {
        ftime(&start);
        if(start.time % 10 == 0)
        {
            // some work here which will roughly take 100 ms
            ftime(&end);
            elapsedtime = (int)(1000.0 * (end.time - start.time) + (end.millitm - start.millitm));
            if(elapsedtime > 10000)
            {
                sleeptime = 0;
            }
            else
            {
                sleeptime = 10000 - elapsedtime;
            }
        }
        Sleep(sleeptime);
    }//1
}
The Sleep approach only guarantees that you sleep at least 10 seconds. After that, your thread is considered eligible for scheduling and will be picked up again on a later quantum. You are still subject to the priority of any other threads on the system, the number of logical cores, etc. You are also still subject to the resolution of the scheduler's quantum, which is ~15 ms by default. You can change it with timeBeginPeriod, but that has system-wide power implications.
For more information on Windows scheduling see Microsoft Docs. For more on the power issues, see this blog post.
For Windows the best option is to use the high-frequency performance counter via QueryPerformanceCounter. You use QueryPerformanceFrequency to convert between cycles and seconds.
LARGE_INTEGER qpcFrequency;
QueryPerformanceFrequency(&qpcFrequency);

LARGE_INTEGER startTime;
QueryPerformanceCounter(&startTime);

LARGE_INTEGER tenSeconds;
tenSeconds.QuadPart = startTime.QuadPart + qpcFrequency.QuadPart * 10;

while (true)
{
    LARGE_INTEGER currentTime;
    QueryPerformanceCounter(&currentTime);
    if (currentTime.QuadPart >= tenSeconds.QuadPart)
        break;
}
The timer resolution for QPC is typically close to the clock speed of your CPU.
If you want to run a thread for as close to 10 seconds as you can while still yielding the processor, use:
LARGE_INTEGER qpcFrequency;
QueryPerformanceFrequency(&qpcFrequency);

LARGE_INTEGER startTime;
QueryPerformanceCounter(&startTime);

LARGE_INTEGER tenSeconds;
tenSeconds.QuadPart = startTime.QuadPart + qpcFrequency.QuadPart * 10;

while (true)
{
    LARGE_INTEGER currentTime;
    QueryPerformanceCounter(&currentTime);
    if (currentTime.QuadPart >= tenSeconds.QuadPart)
    {
        // do a thing
        tenSeconds.QuadPart = currentTime.QuadPart + qpcFrequency.QuadPart * 10;
    }
    SwitchToThread();
}
This is not really the most efficient way to do a periodic timer, but you asked for precision not efficiency.
If you are using VS 2015 or later, you can use the C++11 type high_resolution_clock, which uses QPC for its implementation. Older versions of Visual C++ used 'file system time', which brings you back to the original resolution problem you have with ftime.

NodeJS, Promises and performance

My question is about performance in my NodeJS app...
If my program runs 12 iterations of 1,250,000 each = 15,000,000 iterations altogether, it takes the following time to process on dedicated Amazon servers:
r3.large: 2 vCPU, 6.5 ECU, 15 GB memory --> 123 minutes
4.8xlarge: 36 vCPU, 132 ECU, 60 GB memory --> 102 minutes
I have some code similar to the code below...
start();

function start() {
    for (var i = 0; i < 12; i++) {
        // function2 iterates over a collection which contains data split up in intervals (by date).
        // It is actually also recursive, because it runs through the data many times
        // (max 50-100 times) due to the different interval sizes...
        function2();
    }
}

function function2() {
    return new Promise(function (resolve) {
        for (var i = 0; i < 1250000; i++) {
            // Iterates through all possible combinations and calls function3
            // with all given values/combinations.
            function3();
        }
        resolve();
    });
}

function function3() {
    return new Promise(function (resolve) {
        // Makes some calculations based on the given values/combination and then returns
        // the result to function2, which in the end decides which result/combination was the best...
        resolve();
    });
}
This is equal to 0.411 milliseconds / 411 microseconds per iteration!
When I look at performance and memory usage in the Task Manager, the CPU is not running at 100% but more like 50% - the entire time?
The memory usage starts very low but keeps growing by gigabytes every minute until the process is done, BUT the (allocated) memory is only released when I press CTRL+C in the Windows cmd... so it's like the Node.js garbage collection doesn't work optimally - or maybe it's simply the design of the code again...
When I execute the app I use the memory option like:
node --max-old-space-size="50000" server.js
PLEASE tell me everything you think I can do to make my program FASTER!
Thank you all so much!
It's not that the garbage collector doesn't work optimally but that it doesn't work at all - you don't give it any chance to.
When developing the tco module that does tail call optimization in Node, I noticed a strange thing. It seemed to leak memory and I didn't know why. It turned out that it was because of a few console.log() calls in various places that I used for testing, to see what was going on, because seeing the result of a recursive call millions of levels deep took some time, so I wanted to see something while it was working.
Your example is pretty similar to that.
Remember that Node is single-threaded. When your computations run, nothing else can - including the GC. Your code is completely synchronous and blocking - even though it's generating millions of promises in a blocking manner. It is blocking because it never reaches the event loop.
Consider this example:
var a = 0, b = 10000000;
function numbers() {
    while (a < b) {
        console.log("Number " + a++);
    }
}
numbers();
It's pretty simple - you want to print 10 million numbers. But when you run it, it behaves very strangely - for example, it prints numbers up to some point and then stops for several seconds, then it keeps going, or maybe starts thrashing if you're using swap, or maybe gives you this error that I just got right after seeing Number 8486:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Aborted
What's going on here is that the main thread is blocked in a synchronous loop where it keeps creating objects but the GC has no chance to release them.
For such long running tasks you need to divide your work and get into the event loop once in a while.
Here is how you can fix this problem:
var a = 0, b = 10000000;
function numbers() {
    var i = 0;
    while (a < b && i++ < 100) {
        console.log("Number " + a++);
    }
    if (a < b) setImmediate(numbers);
}
numbers();
It does the same - it prints numbers from a to b but in bunches of 100 and then it schedules itself to continue at the end of the event loop.
Output of $(which time) -v node numbers1.js 2>&1 | egrep 'Maximum resident|FATAL'
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Maximum resident set size (kbytes): 1495968
It used 1.5GB of memory and crashed.
Output of $(which time) -v node numbers2.js 2>&1 | egrep 'Maximum resident|FATAL'
Maximum resident set size (kbytes): 56404
It used 56MB of memory and finished.
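On newer Node versions the same batching idea can be written with async/await (a sketch, assuming Node 15+ where the timers/promises API is available; yieldToEventLoop is just a local rename):
const { setImmediate: yieldToEventLoop } = require('timers/promises');

async function numbers() {
    for (let a = 0; a < 10000000; a++) {
        console.log("Number " + a);
        // Yield to the event loop every 100 iterations so the GC and I/O get a chance to run.
        if (a % 100 === 99) await yieldToEventLoop();
    }
}

numbers();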
See also those answers:
How to write non-blocking async function in Express request handler
How node.js server serve next request, if current request have huge computation?
Maximum call stack size exceeded in nodejs
Node; Q Promise delay
How to avoid jimp blocking the code node.js

Testing write speed: ArangoDB...Sync and MSync times

So I'm testing some extreme writes: 10k individual inserts in a simple for loop (let's keep the topic simple for a moment). I've got 'wait for sync' turned on for the collection (in this case we need a 100% commit when the call returns), and I have 2 machines. If I run the loop on my main machine, the one I'm running the actual unit tests from, it takes 3 minutes to write the 10k; if I write to my remote machine (same ArangoDB settings), it takes 9 seconds. Is the reason it's taking longer on my local machine that it is also running the unit tests? Or is it due to the sync/msync issues of the drive that the ArangoDB FAQ warns about?
"From the durability point of view, immediate synchronization is of course better, but it means performing an extra system call for each operation. On systems with slow sync/msync,"
Is there a setting or whatever to check on a drive or system to determine what my values are for the sync/msync of the device?
thanks for the help!!
First of all, the actual speed strongly depends on your hard disk.
For example, for a notebook with an SSD under Mac OS X I get:
arangod> t = time(); for (i = 0; i < 1000; i++) db.unsync.save({ name: "Hallo " + i }); time() - t;
0.03408193588256836
arangod> t = time(); for (i = 0; i < 1000; i++) db.sync.save({ name: "Hallo " + i }); time() - t;
6.904788970947266
So writing the 1000 documents without immediate sync is about 200 times faster.
For a desktop with a hard disk under Linux I get:
arangod> t = time(); for (i = 0; i < 1000; i++) db.unsync.save({ name: "Hallo " + i }); time() - t;
0.08486199378967285
arangod> t = time(); for (i = 0; i < 1000; i++) db.sync.save({ name: "Hallo " + i }); time() - t;
54.90065908432007
Here it is even worse: more than a factor of 600.
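For reference, the two collections used in the snippets above could be created like this (a sketch in arangosh; the names sync and unsync are just the ones used in the timings):
// Collection that waits for the data to be synced to disk before a write returns.
db._create("sync", { waitForSync: true });

// Collection whose writes return without waiting for a disk sync.
db._create("unsync", { waitForSync: false });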
Regarding the difference between local and remote: That sounds strange. How do you access the remote machine? Do you use arangosh?
