How can I run processes in parallel in Node.js but limit the number of processes that exist at any one time?
For example, I have an array with 200 items ['word.doc', 'foo.pdf', 'a.txt', ...] and I need to run a Work process for each item, like so:
Work.exe word.doc
Work.exe foo.pdf
Work.exe a.txt
Work.exe ....
What I did is iterate over the array with forEach and call exec from the child_process lib for each item.
But I want only 5 processes at a time. When a process ends, a new item should be started, so that at any moment there are at most 5 processes running until all items are processed.
Not sure how it can be done.
The child_process lib has a "close" event.
You could create a counter and add a listener for this event. Each time you call exec, you increment the counter; once the number of running processes reaches your threshold (here, 5), you stop calling the function.
(You could just wrap the increment and the exec call in one function and call it multiple times.)
And when you receive a "close" event you decrement the counter; if the counter hits 0, you call the function again to spawn the next N child processes.
You would have to keep a global variable for the array index though.
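A rough sketch of that counter idea (here a freed slot is refilled as soon as a child closes, rather than waiting for the whole batch to finish; the function and variable names are just placeholders):
const { exec } = require('child_process');

// items = ['word.doc', 'foo.pdf', 'a.txt', ...]
const limit = 5;
let running = 0; // how many children are currently alive
let index = 0;   // next position in the items array

function runNext() {
  // start children until the limit is hit or the array is exhausted
  while (running < limit && index < items.length) {
    const item = items[index++];
    running++;
    exec(`Work.exe ${item}`).on('close', () => {
      running--;   // a slot freed up
      runNext();   // start the next item, if any
    });
  }
}

runNext();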
Hope this helps.
exec accepts a callback that will be invoked when the child process exits. You can easily combine this with async's eachLimit:
const async = require('async');
const child_process = require('child_process');
// items = ['word.doc', 'foo.pdf', 'a.txt', ...]
async.eachLimit(items, 5, (item, done) => {
child_process.exec(`Work.exe ${item}`, done);
});
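If you also want to know when the whole batch has finished (or bail out on the first failure), eachLimit accepts a final callback. Something along these lines, with the logging being just an example:
async.eachLimit(items, 5, (item, done) => {
  child_process.exec(`Work.exe ${item}`, (err) => done(err));
}, (err) => {
  if (err) {
    console.error('One of the commands failed:', err);
  } else {
    console.log('All items processed');
  }
});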
Related
Within my project I intend to send large volumes of transactions, so for simplicity I am building a wrapper function that executes the following two calls together: contractName.functions.functionName(params).transact() and w3.eth.wait_for_transaction_receipt(tx_hash). However, when I write the function transact_and_wait with the above implemented inside it, the transactions do not get executed!
Implementation of transact_and_wait:
def transact_and_wait(contract_function, transaction_params={"gas": 100000}):
    # Send the transaction
    if transaction_params != {"gas": 100000}:
        transaction_params["gas"] = 100000
    transaction_hash = contract_function.transact(transaction_params)
    # Wait for the transaction to be mined
    transaction_receipt = w3.eth.wait_for_transaction_receipt(transaction_hash)
    return transaction_receipt
Where it is called via: transact_and_wait(contractName.functions.functionName(account.address))
For example, this should set a role for the user defined via index 1.
However, when I call print(contractName.functions.stateVariable(account.address).call()) it returns 0.
If I do the same process as above, but not within a function:
tx_hash = contractName.functions.functionName(account.address).transact({"gas": 100000})
transaction_receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
Then I can call the same getter: print(contractName.functions.stateVariable(account.address).call())
It returns 1.
I have a Node.js script which uses an external library. The problem is that one of its functions freezes (probably some infinite loop inside) for specific arguments. Example code:
const Elements = [/*...*/];
for (let i = 0; i < Elements.length; i++) {
console.log(`Running ${i}/${Elements.length} ...`);
someExternalFunction(Elements[i]);
}
My console:
Running 1/352 ...
Running 2/352 ...
Running 3/352 ...
Running 4/352 ...
Running 5/352 ...
Running 6/352 ...
// the script freezes here
Is there any way to do something like:
if someExternalFunction takes longer than 10 seconds, break it and continue the loop
If the function cannot work properly for some arguments I can handle it. But I don't want one damaged element to freeze the entire loop.
If there is no way to solve it like this, maybe there is another approach to this problem?
Thanks
I have a Node file which contains many methods, including an async method which fetches data from the db. I need to find its exact execution time.
So I tried:
.
.
var start = Date.now();
await dbFetching()
var end = Date.now();
console.log(end - start)
.
Then I tried an external shell script which executes the file and measures the execution time of the entire run. But the issue is I need to calculate only the time taken for the async call (dbFetching). Below is my shell script:
#!/bin/sh
START=$(date +%s)
node ./s3-glacier.js
END=$(date +%s)
DIFF=$(( $END - $START ))
echo "It took $DIFF seconds"
But I thought that if I can run the entire Node script from the shell script, maybe I can calculate the time that way. So please suggest your thoughts on how to measure only the async call's time consumption.
Thanks in advance
If you have this possibility, use async/await. It would look like:
var start = Date.now();
await dbFetching(); // blocks execution of code
var end = Date.now(); // this happens AFTER you've fetched the data
console.log(end - start)
That would require your function to be marked as async though, or return a Promise.
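If dbFetching only takes a callback (assuming the usual Node error-first style), util.promisify can give you that Promise:
const { promisify } = require('util');
const dbFetchingAsync = promisify(dbFetching); // assumes dbFetching(callback(err, result))

var start = Date.now();
await dbFetchingAsync();
console.log(Date.now() - start); // time taken by the db fetch only, in ms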
Otherwise you'd need to pass a callback to the function, so that the function calls it when it's done and you know that it's now time to calculate the time. Something along the lines of:
var start = Date.now();
dbFetching(function done() { // done should be called AFTER the data is fetched
var end = Date.now();
console.log(end - start)
})
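Either way, console.time / console.timeEnd can save you the manual subtraction (the label is arbitrary):
console.time('dbFetching');
dbFetching(function done() {
  console.timeEnd('dbFetching'); // prints something like "dbFetching: 123.45ms"
});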
I am trying to build a small application with Yew (Rust/WASM). I would like to put a sleep function in the Yew app. When I use std::thread::sleep, I am getting the error below.
I am using sleep as below:
let mut index = 0;
sleep(Duration::new(1, 0));
if col < 3 {
    index = row * 4 + (col + 1);
    if self.cellule[index].value == 1 {
        sleep(Duration::new(1, 0));
wasm.js:314 panicked at 'can't sleep', src/libstd/sys/wasm/thread.rs:26:9
Stack:
Error
at imports.wbg.__wbg_new_59cb74e423758ede (http://127.0.0.1:8080/wasm.js:302:19)
at console_error_panic_hook::hook::hd38f373f442d725c (http://127.0.0.1:8080/wasm_bg.wasm:wasm-function[117]:0x16a3e)
at core::ops::function::Fn::call::hf1476807b3d9587d (http://127.0.0.1:8080/wasm_bg.wasm:wasm-function[429]:0x22955)
at std::panicking::rust_panic_with_hook::hb07b303a83b6d242 (http://127.0.0.1:8080/wasm_bg.wasm:wasm-function[211]:0x1ed0d)
at std::panicking::begin_panic::h97f15f2442acdda4 (http://127.0.0.1:8080/wasm_bg.wasm:wasm-function[321]:0x21ee0)
at std::sys::wasm::thread::Thread::sleep::hdd97a2b229644713 (http://127.0.0.1:8080/wasm_bg.wasm:wasm-function[406]:0x22829)
Methods like thread::sleep don't work because in the JS environment you have a single thread only. If you call that sleep you will block the app completely.
If you want to use an interval you should "order" a callback. You can check the following example of how to use TimeoutService or IntervalService for that: yew/examples/timer
The core idea is to create a service this way:
let handle = TimeoutService::spawn(
    Duration::from_secs(3),
    self.link.callback(|_| Msg::Done),
);
// Keep the task or timer will be cancelled
self.timeout_job = Some(handle);
Now you can use the handler of Msg::Done to react when that timer elapses.
Threads are actually available, but it's a complex topic and you have to use the Web Workers API to reach them. Anyway, it's useless for your case. There are also some proposals in the standards, but they aren't available in browsers yet.
I'd like to have Parallel::ForkManager use a callback to get something back from a child process and then also restart it. Is that possible? The following is from the Parallel::ForkManager docs:
use strict;
use Parallel::ForkManager;

my $max_procs = 5;
my @names = qw( Fred Jim Lily Steve Jessica Bob Dave Christine Rico Sara );
# hash to resolve PID's back to child specific information

my $pm = new Parallel::ForkManager($max_procs);

# Setup a callback for when a child finishes up so we can
# get it's exit code
$pm->run_on_finish(
    sub { my ($pid, $exit_code, $ident) = @_;
        print "** $ident just got out of the pool ".
            "with PID $pid and exit code: $exit_code\n";
    }
);

$pm->run_on_start(
    sub { my ($pid, $ident) = @_;
        print "** $ident started, pid: $pid\n";
    }
);

$pm->run_on_wait(
    sub {
        print "** Have to wait for one children ...\n"
    },
    0.5
);

foreach my $child ( 0 .. $#names ) {
    my $pid = $pm->start($names[$child]) and next;

    # This code is the child process
    print "This is $names[$child], Child number $child\n";
    sleep ( 2 * $child );
    print "$names[$child], Child $child is about to get out...\n";
    sleep 1;
    $pm->finish($child); # pass an exit code to finish

    ##### here is where I'd like each child process to restart
}
So when $pm->finish happens, the callback confirms the "child" is "out of the pool." How can I both get the callback to fire and immediately put each child back in the pool as it comes out, so that the whole thing runs forever?
I think you're misunderstanding what's happening. Under the covers, what Parallel::ForkManager is doing is calling a fork(). Two processes exist at this point, with only a single difference - different PIDs.
Your child process goes and runs some stuff, then exits, generating an exit status, which your parent then reaps.
Restarting the process... well, you just need to fork again and run your code.
Now, what you're doing is a foreach loop that, for each array element, forks and then has the fork exit.
So really, all you need to do is call $pm->start again. How you figure out which one exited (and thus the child name) is more difficult though: your callback runs in the parent process, so no data is being passed back aside from the exit status of your child. You'll need to figure out some sort of IPC to pass back the necessary details.
Although, I'd point out @names isn't a hash, so treating it like one is going to have strange behaviour :).
Have you considered threading as an alternative? Threads are good for shared memory operations, and passing data back from keyed subprocesses is something they're good at.