Is there any NodeJS class/function similar to Environment.TickCount in C#? - node.js

This code runs in C#:
int x = Environment.TickCount;
docs for Environment.TickCount
Gets the number of milliseconds elapsed since the system started. TickCount cycles between Int32.MinValue, which is a negative number, and Int32.MaxValue once every 49.8 days.
TickCount increments from zero to 2147483647 over approximately 24.9 days, then jumps back to -2147483648 (a negative number), and then increments back up to zero during the next 24.9 days.
We can use int result = Environment.TickCount & Int32.MaxValue; to make it rotate between 0 and 2147483647 every 24.9 days.
I want an equivalent method in NodeJS, which would yield the same result.
I searched on npmjs but didn't find a similar function.

os.uptime() is the closest method to what you need, which:
Returns the system uptime in number of seconds
NodeJS docs
But a valid question is: what is the max limit of this method?
In NodeJS the max safe integer is Number.MAX_SAFE_INTEGER, which is 9007199254740991. That many seconds is roughly 285 million years, so we can treat it as the practical upper limit for this method.
If you want the same behaviour as C#'s TickCount, you will need to write your own method, maybe something like the ones below:
const os = require('os');

// this method cycles between 0 and 2147483647
function TickCount() {
  const milliseconds_elapsed = Math.floor(os.uptime() * 1000); // convert the uptime to milliseconds
  return milliseconds_elapsed % 2147483648; // 2147483648 = 2^31, so the result spans 0..2147483647
}
// this method cycles between -2147483648 and 2147483647
// note: it will not start from 0
function TickCount() {
  const milliseconds_elapsed = Math.floor(os.uptime() * 1000); // convert the uptime to milliseconds
  return (milliseconds_elapsed % 4294967296) - 2147483648;
}
// this method cycles between -2147483648 and 2147483647
// note: it starts from 0, goes up to 2147483647,
// then wraps to -2147483648 and continues the cycle from there
function TickCount() {
  const milliseconds_elapsed = Math.floor(os.uptime() * 1000); // convert the uptime to milliseconds
  if (milliseconds_elapsed <= 2147483647) {
    return milliseconds_elapsed;
  }
  return ((milliseconds_elapsed - 2147483648) % 4294967296) - 2147483648;
}

The Microsoft docs say Environment.TickCount is an integer that "contains the amount of time in milliseconds that has passed since the last time the computer was started".
While searching for that I found this question, and the answers suggest using process.uptime() or os.uptime().
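If you only care about your own process, a minimal sketch along the same lines (note: process.uptime() measures the Node process, not the system, so this is only a rough stand-in for Environment.TickCount & Int32.MaxValue):
function tickCount() {
  const ms = Math.floor(process.uptime() * 1000); // elapsed process time in milliseconds
  return ms % 2147483648;                         // cycles 0..2147483647 every ~24.9 days
}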

Related

process.hrtime returns non-matching second and millisecond

I use process.hrtime() to calculate the time a process takes in seconds and milliseconds, as follows:
router.post(
  "/api/result-store/v1/indexing-analyzer/:searchID/:id",
  async (req, res) => {
    var hrstart = process.hrtime();
    // some code which takes time
    var hrend = process.hrtime(hrstart);
    console.info("Execution time (hr): %ds %dms", hrend[0], hrend[1] / 1000000);
  }
);
I followed this post for the code:
https://blog.abelotech.com/posts/measure-execution-time-nodejs-javascript/
So I expected the seconds and the milliseconds to match, but here is what I get:
Execution time (hr): 54s 105.970357ms
This is very strange, since when I convert 54 s to milliseconds I get 54000, so I do not see where this "105.970357ms" comes from. Is there anything wrong with my code? Why do I see this mismatch?
According to the process.hrtime() documentation, it returns an array [seconds, nanoseconds], where nanoseconds is the remaining part of the real time that can't be represented with second precision.
1 second = 10^9 nanoseconds
1 millisecond = 10^6 nanoseconds
In your case the execution took 54 seconds and 105.970357 milliseconds or
54000 milliseconds + 105.970357 milliseconds.
Or, if you need it in seconds: hrend[0] + hrend[1] / Math.pow(10, 9)
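So the two entries are parts of one duration, not two measurements of the same thing. A minimal sketch (using the same variable names as the question) that folds both parts into a single value in milliseconds:
var hrend = process.hrtime(hrstart);             // [seconds, nanoseconds]
var totalMs = hrend[0] * 1000 + hrend[1] / 1e6;  // here: 54 * 1000 + 105.970357 = 54105.970357 ms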

gstreamer read decibel from buffer

I am trying to get the dB level of incoming audio samples. On every video frame, I update the dB level and draw a bar representing a 0 - 100% value (0% being something arbitrary such as -20.0dB and 100% being 0dB.)
gdouble sum, rms;
sum = 0.0;
guint16 *data_16 = (guint16 *)amap.data;
for (gint i = 0; i < amap.size; i = i + 2)
{
    gdouble sample = ((guint16)data_16[i]) / 32768.0;
    sum += (sample * sample);
}
rms = sqrt(sum / (amap.size / 2));
dB = 10 * log10(rms);
This was adapted to C from a code sample, marked as the answer, from here. I am wondering what it is that I am missing from this very simple equation.
Answered: jacket was correct about the code losing the sign, so everything ended up being positive. Also, 10 * log10(rms) is incorrect; it should be 20 * log10(rms), since I am converting an amplitude to decibels (as a measure of output power).
The level element is best for this task (as @ensonic already mentioned); it is intended for exactly what you need.
So basically you add an element called "level" to your pipeline, then enable its message posting.
The level element then emits messages which contain the RMS, peak and decay values. RMS is what you need.
You can set up a callback function connected to such message events:
audio_level = gst_element_factory_make ("level", "audiolevel");
g_object_set(audio_level, "message", TRUE, NULL);
...
g_signal_connect (bus, "message::element", G_CALLBACK (callback_function), this);
The bus variable is of type GstBus. I hope you know how to work with buses.
Then, in the callback function, check for the element name and get the RMS as described here.
There is also a normalization algorithm using the pow() function to convert the value to the 0.0 -> 1.0 range, which you can use to get the % you mentioned in your question.
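A rough sketch of such a callback (assuming the message layout documented for the level element, where "rms" is a per-channel array of dB values; callback_function matches the g_signal_connect call above):
#include <gst/gst.h>
#include <math.h>

static void
callback_function (GstBus *bus, GstMessage *message, gpointer user_data)
{
    if (GST_MESSAGE_TYPE (message) == GST_MESSAGE_ELEMENT) {
        const GstStructure *s = gst_message_get_structure (message);

        /* only handle messages posted by the "level" element */
        if (gst_structure_has_name (s, "level")) {
            const GValue *array_val = gst_structure_get_value (s, "rms");
            GValueArray *rms_arr = (GValueArray *) g_value_get_boxed (array_val);

            /* dB value of the first channel */
            gdouble rms_dB = g_value_get_double (g_value_array_get_nth (rms_arr, 0));

            /* normalize to 0.0 .. 1.0 as mentioned above, then update the bar */
            gdouble rms_norm = pow (10, rms_dB / 20);
            (void) rms_norm;
        }
    }
}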

Why does a nodejs array shift/push loop run 1000x slower above array length 87369?

Why is the speed of nodejs array shift/push operations not linear in the size of the array? There is a dramatic knee at 87370 that completely crushes the system.
Try this, first with 87369 elements in q, then with 87370. (Or, on a 64-bit system, try 85983 and 85984.) For me, the former runs in .05 seconds; the latter, in 80 seconds -- 1600 times slower. (observed on 32-bit debian linux with node v0.10.29)
q = [];
// preload the queue with some data
for (i = 0; i < 87369; i++) q.push({});
// fetch oldest waiting item and push new item
for (i = 0; i < 100000; i++) {
    q.shift();
    q.push({});
    if (i % 10000 === 0) process.stdout.write(".");
}
64-bit debian linux v0.10.29 crawls starting at 85984 and runs in .06 / 56 seconds. Node v0.11.13 has similar breakpoints, but at different array sizes.
Shift is a very slow operation for arrays in general, as you need to move all the elements, but V8 is able to use a trick to perform it fast when the array contents fit in a page (1 MB).
Empty arrays start with 4 slots, and as you keep pushing, V8 resizes the backing store using the formula 1.5 * (old length + 1) + 16.
var j = 4;
while (j < 87369) {
    j = (j + 1) + Math.floor(j / 2) + 16;
    console.log(j);
}
Prints:
23
51
93
156
251
393
606
926
1406
2126
3206
4826
7256
10901
16368
24569
36870
55322
83000
124517
So your array's backing store actually ends up being 124517 slots, which makes it too large.
You can preallocate your array to just the right size, and it should be able to do the fast shift again:
var q = new Array(87369); // Fits in a page so fast shift is possible
// preload the queue with some data
for (i=0; i<87369; i++) q[i] = {};
If you need it to be larger than that, use a more suitable data structure.
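For example, a minimal sketch of a linked-list queue with O(1) enqueue/dequeue (the names are illustrative, not from any particular library):
function Queue() {
    this.head = null;
    this.tail = null;
}
Queue.prototype.enqueue = function (value) {
    var node = { value: value, next: null };
    if (this.tail) this.tail.next = node;
    else this.head = node;
    this.tail = node;
};
Queue.prototype.dequeue = function () {
    if (!this.head) return undefined;
    var value = this.head.value;
    this.head = this.head.next;
    if (!this.head) this.tail = null;
    return value;
};
// usage: var q = new Queue(); q.enqueue({}); q.dequeue();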
I started digging into the v8 sources, but I still don't understand it.
I instrumented deps/v8/src/builtins.cc:MoveElements (called from Builtin_ArrayShift, which implements the shift with a memmove), and it clearly shows the slowdown: only 1000 shifts per second, because each one takes 1 ms:
AR: at 1417982255.050970: MoveElements sec = 0.000809
AR: at 1417982255.052314: MoveElements sec = 0.001341
AR: at 1417982255.053542: MoveElements sec = 0.001224
AR: at 1417982255.054360: MoveElements sec = 0.000815
AR: at 1417982255.055684: MoveElements sec = 0.001321
AR: at 1417982255.056501: MoveElements sec = 0.000814
of which the memmove accounts for 0.000040 seconds; the bulk is heap->RecordWrites (deps/v8/src/heap-inl.h):
void Heap::RecordWrites(Address address, int start, int len) {
    if (!InNewSpace(address)) {
        for (int i = 0; i < len; i++) {
            store_buffer_.Mark(address + start + i * kPointerSize);
        }
    }
}
where Mark is (store-buffer-inl.h):
void StoreBuffer::Mark(Address addr) {
    ASSERT(!heap_->cell_space()->Contains(addr));
    ASSERT(!heap_->code_space()->Contains(addr));
    Address* top = reinterpret_cast<Address*>(heap_->store_buffer_top());
    *top++ = addr;
    heap_->public_set_store_buffer_top(top);
    if ((reinterpret_cast<uintptr_t>(top) & kStoreBufferOverflowBit) != 0) {
        ASSERT(top == limit_);
        Compact();
    } else {
        ASSERT(top < limit_);
    }
}
When the code is running slow, there are runs of shift/push ops followed by runs of 5-6 calls to Compact() for every MoveElements; when it's running fast, MoveElements isn't called until a handful of times at the end, with just a single compaction when it finishes.
I'm guessing memory compaction might be thrashing, but it hasn't fallen into place for me yet.
Edit: forget that last edit about output buffering artifacts, I was filtering duplicates.
This bug had been reported to Google, who closed it without studying the issue.
https://code.google.com/p/v8/issues/detail?id=3059
When shifting out and calling tasks (functions) from a queue (array)
the GC(?) is stalling for an inordinate length of time.
114467 shifts is OK
114468 shifts is problematic, symptoms occur
The response:
The GC has nothing to do with this, and nothing is stalling either.
Array.shift() is an expensive operation, as it requires all array
elements to be moved. For most areas of the heap, V8 has implemented a
special trick to hide this cost: it simply bumps the pointer to the
beginning of the object by one, effectively cutting off the first
element. However, when an array is so large that it must be placed in
"large object space", this trick cannot be applied as object starts
must be aligned, so on every .shift() operation all elements must
actually be moved in memory.
I'm not sure there's a whole lot we can do about this. If you want a
"Queue" object in JavaScript with guaranteed O(1) complexity for
.enqueue() and .dequeue() operations, you may want to implement your
own.
Edit: I just caught the subtle "all elements must be moved" part -- is RecordWrites not GC but an actual element copy then? The memmove of the array contents is 0.04 milliseconds. The RecordWrites loop is 96% of the 1.1 ms runtime.
Edit: if "aligned" means the first object must be at first address, that's what memmove does. What is RecordWrites?

Linux kernel: Why is add_timer() modifying my "expires" value?

I am trying to set up a periodic timer triggering a function every second, but there is a small drift between each call. After some investigation, I found that it is the add_timer() call which adds an offset of 2 to the expires field (~2 ms in my case).
Why is this drift added? Is there a clean way to prevent it? I am not trying to get accurate millisecond precision, and I have a vague understanding of the kernel's real-time limitations, but I would at least like to avoid this intentional delay on each call.
Here is the output from a test module. Each pair of numbers is the value of the expires field just before and after the call:
[100047.127123] Init timer 1000
[100048.127986] Expired timer 99790884 99790886
[100049.129578] Expired timer 99791886 99791888
[100050.131146] Expired timer 99792888 99792890
[100051.132728] Expired timer 99793890 99793892
[100052.134315] Expired timer 99794892 99794894
[100053.135882] Expired timer 99795894 99795896
[100054.137411] Expired timer 99796896 99796898
[...]
[100071.164276] Expired timer 99813930 99813932
[100071.529455] Exit timer
And here is the source:
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/jiffies.h>
#include <linux/time.h>

static struct timer_list t;

static void timer_func(unsigned long data)
{
    unsigned long pre, post;

    t.expires = jiffies + HZ;
    pre = t.expires;
    add_timer(&t);
    post = t.expires;
    printk("Expired timer %lu %lu\n", pre, post);
}

static int __init timer_init(void)
{
    init_timer(&t);
    t.function = timer_func;
    t.expires = jiffies + HZ;
    add_timer(&t);
    printk("Init timer %d\n", HZ);
    return 0;
}

static void __exit timer_exit(void)
{
    del_timer(&t);
    printk("Exit timer\n");
}

module_init(timer_init);
module_exit(timer_exit);
I found the cause. Let's trace the add_timer function:
The add_timer function calls:
mod_timer(timer, timer->expires);
The mod_timer function calls:
expires = apply_slack(timer, expires);
and then goes on to actually modify the timer.
The apply_slack function says:
/*
* Decide where to put the timer while taking the slack into account
*
* Algorithm:
* 1) calculate the maximum (absolute) time
* 2) calculate the highest bit where the expires and new max are different
* 3) use this bit to make a mask
* 4) use the bitmask to round down the maximum time, so that all last
* bits are zeros
*/
Before continuing, let's see what the timer's slack is. The init_timer macro eventually calls do_init_timer, which sets the slack to -1 by default.
With this knowledge, let's reduce apply_slack and see what remains of it:
static inline
unsigned long apply_slack(struct timer_list *timer, unsigned long expires)
{
    unsigned long expires_limit, mask;
    int bit;

    if (timer->slack >= 0) {
        expires_limit = expires + timer->slack;
    } else {
        long delta = expires - jiffies;

        if (delta < 256)
            return expires;

        expires_limit = expires + delta / 256;
    }
    mask = expires ^ expires_limit;
    if (mask == 0)
        return expires;

    bit = find_last_bit(&mask, BITS_PER_LONG);
    mask = (1 << bit) - 1;
    expires_limit = expires_limit & ~(mask);

    return expires_limit;
}
The first if, checking for timer->slack >= 0, fails, so the else branch is taken. In that branch the difference between expires and jiffies is roughly HZ (you just did t.expires = jiffies + HZ), which with HZ = 1000 is well above 256, so delta / 256 (about 3 with your data) is non-zero and gets added to expires_limit.
This in turn implies that mask (which is expires ^ expires_limit) is not zero. The rest really depends on the value of expires, but it definitely gets changed.
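For example, taking the first log line above (pre = 99790884, which happens to be a multiple of 4, and HZ = 1000):
delta ≈ 1000, so delta / 256 = 3
expires_limit = 99790884 + 3 = 99790887
mask = 99790884 ^ 99790887 = 3, so find_last_bit picks bit 1
mask = (1 << 1) - 1 = 1
expires_limit = 99790887 & ~1 = 99790886   (the post value printed in the log)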
So there you have it: since slack is automatically set to -1, the apply_slack function changes your expires time to align it with, I guess, the timer ticks.
If you don't want this slack, you can set t.slack = 0; when you are initializing the timer in timer_init.
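In the module above that would look something like this (a sketch; it relies on struct timer_list exposing the slack field, which the apply_slack code above reads as timer->slack):
init_timer(&t);
t.function = timer_func;
t.slack = 0;            /* opt out of the automatic slack applied by apply_slack */
t.expires = jiffies + HZ;
add_timer(&t);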
This was the old answer. It doesn't address the issue in your question, but it does point out a problem with what you are trying to achieve nonetheless: a periodic function.
Let's visualize your program in a timeline (assuming start time 1000 and HZ=50 with imaginary time units):
time (jiffies)   event
1000             in timer_init():  t.expires = jiffies + HZ;  // t.expires == 1050
1050             timer_func() is called by the timer
1052             in timer_func():  t.expires = jiffies + HZ;  // t.expires == 1102
1102             timer_func() is called by the timer
1104             in timer_func():  t.expires = jiffies + HZ;  // t.expires == 1154
I hope you see where this is going! The problem is that there is a delay between the time the timer expires and the time you calculate when the next expiration should be. That's where the drift comes from. The drift could get even larger, by the way, if the system is busy and your function call is delayed.
The way to fix it is very easy. The problem is that you update t.expires based on jiffies, which is the current time. What you should do instead is update t.expires based on the last time it expired (which is already in t.expires!).
So, in your timer_func function, instead of:
t.expires = jiffies + HZ;
simply do:
t.expires += HZ;

how to use pthread_cond_timedwait with milliseconds

I am trying to use pthread_cond_timedwait for a millisecond sleep interval, but I am not getting the expected sleep duration: my thread sleeps for longer than specified. Below is my implementation; let me know if I am doing anything wrong.
struct timeval tp;
struct timespec ts;
int rc = gettimeofday(&tp, NULL);
ts.tv_sec = tp.tv_sec;
ts.tv_nsec = tp.tv_usec * 1000;
ts.tv_nsec += 30 * 1000000; //30 is my milliseconds
pthread_mutex_lock(&mtxPlaybackWait);
pthread_cond_timedwait(&playbackSignal, &mtxPlaybackWait, &ts);
pthread_mutex_unlock(&mtxPlaybackWait);
The timespec might have overflowed (tv_nsec exceeding one second), causing the timeout to misbehave.
Try following:
ts.tv_sec = tp.tv_sec;
ts.tv_nsec = tp.tv_usec * 1000;
ts.tv_nsec += 30 * 1000000;
ts.tv_sec += ts.tv_nsec / 1000000000L;
ts.tv_nsec = ts.tv_nsec % 1000000000L;
You have an addition of seconds and microseconds on one side, and milliseconds on the other. The result is in seconds and nanoseconds.
If you try to express seconds in nanoseconds, this may overflow quickly: 1 second = 1,000,000,000 nanoseconds, which takes up ~30 bits. A 32-bit integer can hold only ~4 seconds' worth of nanoseconds if unsigned (~2 seconds if signed) and will overflow beyond that.
Also, I am not sure if all functions behave correctly under all circumstances when passed a struct where the fractional seconds amount to more than a second. I’d expect widely used standard libraries to have done their homework and normalize first (or otherwise ensure correct behavior), but some quickly assembled niche product might not handle such cases properly.
To prevent both the overflow and strange side effects of anomalies, shave off integer seconds wherever you can and store them in the seconds part rather than in the fractional seconds.
Here is a version of your calculation which avoids both these things:
gettimeofday(&tp, NULL);
/* if msec is 1 s or more, add its integer part to tv_sec */
ts.tv_sec = tp.tv_sec + floor(msec / 1000);
/* for now, these are really µsec, not nsec, to prevent overflow */
ts.tv_nsec = tp.tv_usec + (msec % 1000) * 1000;
/* if tv_nsec is 1s or more, move integer second part to tv_sec */
ts.tv_sec += floor(ts.tv_nsec / 1000000);
ts.tv_nsec %= 1000000;
/* and finally, convert µsec to nsec */
ts.tv_nsec *= 1000;
You might not need floor if you are certain that you are operating on integer types (i.e. for msec and ts.tv_nsec)—in that case, a simple division will do.
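Putting it together, a minimal sketch of a helper (wait_ms is a made-up name; it assumes the caller already holds the mutex, as in the question, and uses the default CLOCK_REALTIME-based timeout):
#include <sys/time.h>
#include <pthread.h>

/* wait on cond for up to msec milliseconds; returns 0 or ETIMEDOUT */
static int wait_ms(pthread_cond_t *cond, pthread_mutex_t *mtx, long msec)
{
    struct timeval tp;
    struct timespec ts;

    gettimeofday(&tp, NULL);

    ts.tv_sec  = tp.tv_sec + msec / 1000;                        /* whole seconds      */
    ts.tv_nsec = tp.tv_usec * 1000L + (msec % 1000) * 1000000L;  /* remainder, in nsec */

    ts.tv_sec += ts.tv_nsec / 1000000000L;                       /* normalize tv_nsec  */
    ts.tv_nsec %= 1000000000L;

    return pthread_cond_timedwait(cond, mtx, &ts);
}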
