I am currently working on some scientific calculations in which my base calculation loop is executed over and over again via recursive calls, as long as at least one parameter is false.
Currently my Node.js server stops at around the 905th - 915th recursive function call.
The weird thing is that it doesn't crash, nor output any error. It just stops doing anything: no more logs, etc.
Is this some protective behaviour from Node to avoid an overflow?
I have been struggling with this for a few weeks now while trying to limit the "loops" with software that is as intelligent as possible.
Thank you for your help & advice.
Greetings Noa.
As requested, I am providing an abstraction of my actual code.
I hope this helps. I cannot put my original code on here because it consists of more than 1.5k lines - there's a lot to check.
But the example below covers the base logic behind the recursive call.
// Contains one array per key; each entry represents
// the amount of the matching ex_obj term
var amount = {
    a: [10,10],
    b: [7.5,7.5],
    c: [2.5,2.5,2.5,2.5]
}
// Contains one array of objects per key, where each array
// represents some selection.
// Each object in an array holds a different value per attribute, per 1 amount
var ex_obj = {
    a: [
        {aa: 1.41, ab: 0.143, ac: 0.5},
        {aa: 1.30, ab: 1.43, ac: 0.42}
    ],
    b: [
        {aa: 0.32, ab: 5.54, ac: 1.3},
        {aa: 0.33, ab: 4.49, ac: 2.5}
    ],
    c: [
        {aa: 0.54, ab: 1.4, ac: 3.5},
        {aa: 0.39, ab: 1.434, ac: 3.91},
        {aa: 0.231, ab: 1.44324, ac: 2.91},
        {aa: 0.659, ab: 1.554, ac: 3.9124},
    ]
}
// Here we have an object that represents
// the "to be" state which should be achieved
var should_be = {
    aa: 14.534,
    ab: 3.43,
    ac: 5.534
}
function calculate(){
    // Multiply the amount object values with the ex_obj values
    // and sum the result per attribute (aa, ab, ac)
    let sums = {aa: 0, ab: 0, ac: 0};
    for(let prop in ex_obj){
        for(let i = 0; i < ex_obj[prop].length; i++){
            for(let propa in ex_obj[prop][i]){
                // here every aa, ab, ac gets multiplied with the
                // contents of the amount obj for the matching
                // property name
                sums[propa] += ex_obj[prop][i][propa] * amount[prop][i];
            }
        }
    }
    // the next step is to check whether the sum of all ex_obj property
    // child values per aa, ab and ac matches the should_be property values.
    // if everything is above the should_be values and not too high, the
    // program can stop here and print out the amount obj.
    // if the sum of the ex_obj properties is too low
    // compared to the should_be obj,
    // we need to check which property is too low,
    // e.g. aa is too low:
    // then we check where aa in the ex_obj per 1 value is
    // the highest,
    // then we increment the matching amount property child
    // and start calculate() again.
    // same procedure for when something is too high
}
Since your code isn't complete, it's hard to say exactly what's going wrong. If you exceed Node's call stack limit you will get an exception, although 1000 recursions isn't normally a problem.
It might be that you're choking the Node.js event loop. Instead of directly calling the recursive function, you can try calling
process.nextTick(function(){calculate()})
It's not surprising that a direct recursive call will result in a stack overflow; however, it seems that frequent function calls will also get Node to hang (BTW, I don't know why this happens :( ).
For example, the following script freezes at around 4k ~ 5k loops on my machine with a 1.6 GHz CPU and 3.3 GiB of memory, and node keeps swallowing my available memory after that.
var i = 0;
function cal() {
    console.log(++i);
    process.nextTick(cal);
}
cal();
But if I change process.nextTick(cal); to setTimeout(cal, 0);, everything works fine.
So as for your question, I think you can use something like setTimeout(calculate, 100) in your script to avoid the direct recursive call and defer it slightly.
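For example, here is a minimal sketch of that approach, assuming your real calculate() is adapted to return a hypothetical done flag that reports whether another pass is still needed:

// Sketch: defer each pass instead of recursing directly.
// 'done' is a hypothetical return value; the real calculate()
// would report whether the should_be targets have been met.
function calculateStep() {
    var done = calculate(); // one pass of the real work
    if (!done) {
        // schedule the next pass without growing the call stack
        setTimeout(calculateStep, 0);
    }
}
calculateStep();

Each pass then runs in its own event-loop turn, so the stack never grows and logs keep flowing between passes.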
Good day. I'm doing an assignment which involves writing a program that solves the Traveling Salesman Problem in parallel using the brute-force method. I managed to write a working version, but it was a lot slower than the sequential version due to numerous memory allocations at a high rate, which I attempted to limit using a buffered channel, as in the code below.
SO probably won't allow all the code to fit into this post, so please view the definitions and methods of the data structures on GitHub: https://github.com/telephrag/tsp_go
This version doesn't work at all.
At the end, t will contain an empty path and the maximum value of uint64 for the traveled distance, which is set at the beginning.
As can be seen through the debugger, the salesman with id 1 gets a node added to its path at sm.Path[1], but once <-t.RouteQueue occurs in the same call to travel(), the program stops, despite numerous goroutines supposedly waiting to write to t.RouteQueue at that moment. The debugger also confirms that the program never reaches the if-block responsible for setting a new shortest path in t.
If we create a sync.WaitGroup for each for-loop, the program crashes on deadlock. If we keep the sync.WaitGroup but remove everything related to the channel, the program works, but does so very slowly. You can get this version at the first commit to the main branch of the repository above.
Why does the program end prematurely?
package tsp

func (t *Tsp) Solve() {
    sm := NewSalesman(t)
    sm.Visit(0) // sets bit corresponding to given node in bitmask
    for nextNode := range graph[0] {
        t.RouteQueue <- true // book slot in route queue (buffered channel)
        go t.travel(sm.Copy(t), nextNode)
    }
}
func (t *Tsp) travel(sm *Salesman, node uint64) {
    sm.Distance += graph[sm.TailNode()][node] // increase traveled distance
    if sm.Distance > t.MinDist { // terminate if traveled distance is bigger than current minimal
        <-t.RouteQueue
        return
    }
    sm.Visit(node)
    sm.Count++               // increase amount of nodes traveled
    sm.Path[sm.Count] = node // add node to path
    if sm.Count == t.NodeCount-1 { // stop if t.NodeCount - 1 nodes traveled
        sm.Count++
        sm.Distance += graph[node][0] // return to zero-node
        t.Mu.Lock()
        if t.MinDist > sm.Distance { // set new min distance and path if they are shorter
            t.MinPath = sm.Path
            t.MinDist = sm.Distance
        }
        t.Mu.Unlock()
    }
    <-t.RouteQueue // free the slot for routes
    for nextNode := range graph[node] {
        if !sm.HasVisited(node) {
            t.RouteQueue <- true
            go t.travel(sm.Copy(t), nextNode)
        }
    }
}
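For context, the usual shape of the pattern this code attempts - a buffered channel as a concurrency limiter, plus something that waits for all workers before reading the result - looks roughly like the following standalone sketch (the names and the slot count are illustrative, not taken from the repository). Note that Solve() above returns as soon as its loop has spawned the goroutines, so unless its caller waits on something, the program is free to move on while workers are still in flight.

package main

import (
    "fmt"
    "sync"
)

func main() {
    sem := make(chan struct{}, 4) // buffered channel: at most 4 workers at once
    var wg sync.WaitGroup

    for i := 0; i < 16; i++ {
        wg.Add(1)
        sem <- struct{}{} // acquire a slot before spawning
        go func(n int) {
            defer wg.Done()
            defer func() { <-sem }() // release the slot when done
            fmt.Println("working on", n)
        }(i)
    }

    // Without this wait, main could return while goroutines still run.
    wg.Wait()
}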
I'm new to atomic techniques and am trying to implement a thread-safe version of the following code:
// say m_cnt is unsigned
void Counter::dec_counter()
{
    if (0 == m_cnt)
        return;
    --m_cnt;
    if (0 == m_cnt)
    {
        // Do something
    }
}
Every thread that calls dec_counter must decrement it by one, and "Do something" should be done only once: when the counter is decremented to 0.
After fighting with it, I came up with the following code, which does it well (I think), but I wonder if this is the way to do it, or whether there is a better way. Thanks.
// m_cnt is std::atomic<unsigned>
void Counter::dec_counter()
{
    // loop until the decrement is done
    unsigned uiExpectedValue;
    unsigned uiNewValue;
    do
    {
        uiExpectedValue = m_cnt.load();
        // if another thread already decremented it to 0, then do nothing
        if (0 == uiExpectedValue)
            return;
        uiNewValue = uiExpectedValue - 1;
        // in the short time since doing
        //     uiExpectedValue = m_cnt.load();
        // another thread may have decremented m_cnt, so it won't be equal
        // to uiExpectedValue here; thus the loop, to be sure we do a decrement
    } while (!m_cnt.compare_exchange_weak(uiExpectedValue, uiNewValue));
    // if we are here, we did the decrement; so if it went to 0, do something
    if (0 == uiNewValue)
    {
        // do something
    }
}
The thing with atomics is that only that one statement is atomic.
If you write
std::atomic<int> i {20};
...
if (!--i)
    ...
Then just 1 thread will enter the if.
However, if you split up the change and the test, then other threads can get into the gap, and you may get strange results:
std::atomic<int> i {20};
...
--i;
// other thread(s) can modify i just here
if (!i)
    ...
Of course you can split the condition test from the decrement by using a local variable:
std::atomic<int> i {20};
...
int j = --i;
// other thread(s) can modify i just here
if (!j)
    ...
All the simple math operations are generally efficiently supported for small atomics in C++.
For more complex types and expressions, you need to use the read/modify/write member methods.
These allow you to read the current value, calculate the new value, and then call compare_exchange_strong or compare_exchange_weak to say "if the value has not changed, then store my new value; otherwise give me the new current value" as a single atomic operation. You can stick this in a loop and keep recalculating the new value until you are lucky enough that your thread is the only writer. If there are not too many threads trying too often to change the value, this is reasonably efficient as well.
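As a concrete sketch of that loop, with halving-with-a-floor standing in for "an update too complex for a single atomic instruction" (the names here are illustrative):

#include <algorithm>
#include <atomic>

std::atomic<unsigned> value{20};

// Atomically replace 'value' with max(value / 2, floor).
void halve_with_floor(unsigned floor)
{
    unsigned expected = value.load();
    unsigned desired;
    do {
        desired = std::max(expected / 2, floor);
        // On failure, compare_exchange_weak reloads the current value
        // into 'expected', so the next pass recomputes from fresh data.
    } while (!value.compare_exchange_weak(expected, desired));
}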
It seems that dynamic variables don't always survive subroutine calls in threads:
sub foo($x, &y = &infix:<+>) {
    my &*z = &y;
    bar($x);
}

sub bar ($x) {
    say &*z($x,$x);
    my $promise = start { bar($x-1) if $x > 0 }
    await $promise;
    # bar($x-1) if $x > 0   # <-- provides the expected result: 6, 4, 2, 0
}
foo(3); # 6, 4, Dynamic variable &*z not found
Using a more globally scoped variable also works, so it's not that all variables are lost — it seems specific to dynamics:
our &b;

sub foo($a, &c = &infix:<+>) {
    &b = &c;
    bar($a);
}

sub bar ($a) {
    say &b($a,$a);
    my $promise = start { bar($a-1) if $a > 0 }
    await $promise;
}
foo(3); # 6, 4, 2, 0
Once the variable is set in foo(), it is read without problem in bar(). But when bar() is called from inside the promise, the value for &*z disappears not on the first layer of recursion but the second.
I'm sensing a bug, but maybe I'm doing something weird with the interaction between recursion, dynamic variables, and threading that's messing things up.
Under current semantics, start will capture the context it was invoked in. If dynamic variable lookup fails on the stack of the thread that the start executes on (one of those from the thread pool), then it will fall back to looking at the dynamic scope captured when the start block was scheduled.
When a start block is created during the execution of another start block, the same thing happens. However, there is no relationship between the two, meaning that the context captured by the "outer" start block will not be searched as well. While one could argue for that to happen, it seems potentially problematic. Consider this example:
sub tick($n = 1 --> Nil) {
    start {
        await Promise.in(1);
        say $n;
        tick($n + 1);
    }
}
tick();
sleep;
This is a (not entirely idiomatic) way to produce a tick every second. Were the inner start to retain a reference back to the state of the outer one, for the purpose of dynamic variable lookup, then this program would build up a chain of ever-increasing length in memory, which seems like undesirable behavior.
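If the callable in the original example only needs to be visible down the call chain, one workaround (a sketch, not the only option) is to pass it as an ordinary parameter, since lexical captures do survive nested start blocks:

# Sketch: thread the callable through explicitly instead of dynamically.
sub bar($x, &z) {
    say &z($x, $x);
    my $promise = start { bar($x - 1, &z) if $x > 0 }
    await $promise;
}
bar(3, &infix:<+>); # 6, 4, 2, 0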
What is the "right" way to stuff an arbitrary, odd-sized struct into a Swift 3 Data object?
I think that I have got there, but it seems horribly convoluted for what, from prior experience, was no more than
dataObject.append(&structInstance, sizeof(structInstance))
My case is as follows:
The structure of interest:
public struct CutEntry {
    var itemA : UInt64
    var itemB : UInt32
}
I have an array of these things that I want to stuff into a data object, in a specific manner as the data object becomes a file which is eventually read by a different application on a different architecture.
The function to put them into a Data object
open func encodeCutsData() -> Data
{
    var data = Data()
    for entry in cutsArray
    {
        // big-endian stuff, as a var, just so you can get the address
        var entryCopy = CutEntry(itemA: entry.itemA.bigEndian, itemB: entry.itemB.bigEndian)
        // step 1: get the address of the item as an UnsafePointer
        let d2 = withUnsafePointer(to: &entryCopy) { return $0 }
        // step 2: cast it to a raw pointer
        let d3 = UnsafeRawPointer(d2)
        // step 3: create a temp data object
        let d4 = Data(bytes: d3, count: MemoryLayout<CutEntry>.size)
        // step 4: add the temp to the main data object
        data.append(d4)
    }
    return data
}
Earlier when we only had NSMutableData it was
let item = NSMutableData()
for entry in cutsArray
{
    var entryCopy = CutEntry(cutPts: entry.cutPts.bigEndian, cutType: entry.cutType.bigEndian)
    item.append(&entryCopy, length: MemoryLayout<CutEntry>.size)
}
I've spent a few hours searching for examples of manipulating structs and Data objects. I thought that I was close when I found references to UnsafeBufferPointer. That blew up in my face when I discovered that the "buffer" bit uses core memory alignment (which can be useful), and it was stuffing 16 bytes into the data object instead of the expected 12.
I am quite prepared to accept that I have missed the blindingly obvious bit of RTFM somewhere. Can anyone offer a cleaner solution? Or has Swift really gone backwards here?
If I could find a way of getting a pointer to the item as a UInt8 pointer, that would remove a couple of lines, but that looks just as difficult.
Checking the reference for Data, I found two things which may be useful for you:
init(bytes: UnsafeRawPointer, count: Int)
func append(Data)
You can write something like this:
var data = Data()
for entry in cutsArray {
    var entryCopy = CutEntry(cutPts: entry.cutPts.bigEndian, cutType: entry.cutType.bigEndian)
    data.append(Data(bytes: &entryCopy, count: MemoryLayout<CutEntry>.size))
}
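One caveat: MemoryLayout<CutEntry>.size happens to be 12 here, but any approach that copies whole structs is at the mercy of the compiler's layout. If the file format must be exactly 12 bytes per entry, a layout-independent sketch is to append each field's bytes separately:

// Sketch: append each field individually, so the output is exactly
// 12 bytes per entry regardless of struct padding.
var data = Data()
for entry in cutsArray {
    var a = entry.itemA.bigEndian
    var b = entry.itemB.bigEndian
    withUnsafeBytes(of: &a) { data.append(contentsOf: $0) }
    withUnsafeBytes(of: &b) { data.append(contentsOf: $0) }
}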
Why is the speed of nodejs array shift/push operations not linear in the size of the array? There is a dramatic knee at 87370 that completely crushes the system.
Try this, first with 87369 elements in q, then with 87370. (Or, on a 64-bit system, try 85983 and 85984.) For me, the former runs in .05 seconds; the latter, in 80 seconds -- 1600 times slower. (observed on 32-bit debian linux with node v0.10.29)
q = [];
// preload the queue with some data
for (i=0; i<87369; i++) q.push({});
// fetch oldest waiting item and push new item
for (i=0; i<100000; i++) {
    q.shift();
    q.push({});
    if (i%10000 === 0) process.stdout.write(".");
}
On 64-bit Debian Linux, v0.10.29 crawls starting at 85984 and runs in .06 / 56 seconds. Node v0.11.13 has similar breakpoints, but at different array sizes.
Shift is a very slow operation for arrays, as you need to move all the elements, but V8 is able to use a trick to perform it fast when the array contents fit in a page (1 MB).
Empty arrays start with 4 slots, and as you keep pushing, V8 will resize the array using the formula 1.5 * (old length + 1) + 16.
var j = 4;
while (j < 87369) {
    j = (j + 1) + Math.floor(j / 2) + 16;
    console.log(j);
}
Prints:
23
51
93
156
251
393
606
926
1406
2126
3206
4826
7256
10901
16368
24569
36870
55322
83000
124517
So your array's backing store actually ends up with 124517 slots, which makes it too large.
You can actually preallocate your array to just the right size, and it should be able to fast-shift again:
var q = new Array(87369); // Fits in a page so fast shift is possible
// preload the queue with some data
for (i=0; i<87369; i++) q[i] = {};
If you need something larger than that, use the right data structure.
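For reference, a minimal sketch of such a structure: an O(1) queue that replaces shift() with a moving head index (the 1000-element threshold is an arbitrary illustrative choice):

function Queue() {
    this.items = [];
    this.head = 0;
}
Queue.prototype.enqueue = function (item) {
    this.items.push(item);
};
Queue.prototype.dequeue = function () {
    if (this.head === this.items.length) return undefined;
    var item = this.items[this.head++];
    // occasionally drop the consumed prefix so memory is reclaimed
    if (this.head > 1000 && this.head * 2 > this.items.length) {
        this.items = this.items.slice(this.head);
        this.head = 0;
    }
    return item;
};

Dequeue only advances an index, so no elements are moved per operation; the occasional slice() amortizes to O(1) per item.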
I started digging into the V8 sources, but I still don't understand it.
I instrumented deps/v8/src/builtins.cc:MoveElements (called from Builtin_ArrayShift, which implements the shift with a memmove), and it clearly shows the slowdown: only 1000 shifts per second, because each one takes 1 ms:
AR: at 1417982255.050970: MoveElements sec = 0.000809
AR: at 1417982255.052314: MoveElements sec = 0.001341
AR: at 1417982255.053542: MoveElements sec = 0.001224
AR: at 1417982255.054360: MoveElements sec = 0.000815
AR: at 1417982255.055684: MoveElements sec = 0.001321
AR: at 1417982255.056501: MoveElements sec = 0.000814
of which the memmove is 0.000040 seconds; the bulk is heap->RecordWrites (deps/v8/src/heap-inl.h):
void Heap::RecordWrites(Address address, int start, int len) {
    if (!InNewSpace(address)) {
        for (int i = 0; i < len; i++) {
            store_buffer_.Mark(address + start + i * kPointerSize);
        }
    }
}
which is (store-buffer-inl.h)
void StoreBuffer::Mark(Address addr) {
    ASSERT(!heap_->cell_space()->Contains(addr));
    ASSERT(!heap_->code_space()->Contains(addr));
    Address* top = reinterpret_cast<Address*>(heap_->store_buffer_top());
    *top++ = addr;
    heap_->public_set_store_buffer_top(top);
    if ((reinterpret_cast<uintptr_t>(top) & kStoreBufferOverflowBit) != 0) {
        ASSERT(top == limit_);
        Compact();
    } else {
        ASSERT(top < limit_);
    }
}
When the code is running slowly, there are runs of shift/push ops followed by runs of 5-6 calls to Compact() for every MoveElements. When it's running fast, MoveElements isn't called except a handful of times at the end, with just a single compaction when it finishes.
I'm guessing memory compaction might be thrashing, but it's not falling into place for me yet.
Edit: forget that last edit about output buffering artifacts, I was filtering duplicates.
This bug had been reported to Google, who closed it without studying the issue.
https://code.google.com/p/v8/issues/detail?id=3059
When shifting out and calling tasks (functions) from a queue (array)
the GC(?) is stalling for an inordinate length of time.
114467 shifts is OK
114468 shifts is problematic, symptoms occur
The response:
The GC has nothing to do with this, and nothing is stalling either.
Array.shift() is an expensive operation, as it requires all array
elements to be moved. For most areas of the heap, V8 has implemented a
special trick to hide this cost: it simply bumps the pointer to the
beginning of the object by one, effectively cutting off the first
element. However, when an array is so large that it must be placed in
"large object space", this trick cannot be applied as object starts
must be aligned, so on every .shift() operation all elements must
actually be moved in memory.
I'm not sure there's a whole lot we can do about this. If you want a
"Queue" object in JavaScript with guaranteed O(1) complexity for
.enqueue() and .dequeue() operations, you may want to implement your
own.
Edit: I just caught the subtle "all elements must be moved" part -- is RecordWrites not GC but an actual element copy then? The memmove of the array contents is 0.04 milliseconds. The RecordWrites loop is 96% of the 1.1 ms runtime.
Edit: if "aligned" means the first object must be at first address, that's what memmove does. What is RecordWrites?