How can I set an interval or timeout - node.js

Error: 'The write action you are performing on the channel has hit the write rate limit.',
How can I make my loop send at a slower rate? It seems to be sending everything at once, even though I'm incrementing through the lines one by one.
It still causes the send rate to throttle and break. Is using an interval or timeout a good idea? I'm not sure how I should set it up.
Simple index.js using node.js
const autosend = require("discord-autosender");
const fs = require("fs");

function send() {
  var channelID = "";
  var tokenID = "";
  const data = fs.readFileSync('mr_robot.txt', 'UTF-8');
  const lines = data.split(/\r?\n/);
  for (let l_indx = 0; l_indx < lines.length; l_indx++) {
    var message = lines[l_indx];
    autosend.Post(message, channelID, tokenID);
  }
}
send();
mr_robot.txt
A dog of that house shall move me to stand. I
will take the wall of any man or maid of Montague’s.
That shows thee a weak slave, for the weakest
goes to the wall.
’Tis true, and therefore women, being the
weaker vessels, are ever thrust to the wall. Therefore
I will push Montague’s men from the wall and
thrust his maids to the wall.
The quarrel is between our masters and us
their men.

You can combine a Promise, setTimeout, and async/await to slow the loop down:
function send() {
  var channelID = '';
  var tokenID = '';
  const data = fs.readFileSync('mr_robot.txt', 'UTF-8');
  const lines = data.split(/\r?\n/);
  const newFun = async () => {
    for (let l_indx = 0; l_indx < lines.length; l_indx++) {
      var message = lines[l_indx];
      // wait 1 second before posting each line
      await new Promise((resolve) => setTimeout(resolve, 1000));
      autosend.Post(message, channelID, tokenID);
    }
  };
  newFun();
}
send();
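If you prefer, the delay can be factored into a small sleep helper so the intent of the loop stays obvious. A minimal sketch, assuming the same autosend module; the helper name, delayMs, and the parameters are just illustrative:

// hypothetical helper: resolves after delayMs milliseconds
const sleep = (delayMs) => new Promise((resolve) => setTimeout(resolve, delayMs));

async function sendSlowly(lines, channelID, tokenID, delayMs) {
  for (const message of lines) {
    await sleep(delayMs); // pause before each post
    autosend.Post(message, channelID, tokenID);
  }
}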

Related

discord websocket stopped giving messages after certain period of time

I have implemented the Discord websocket gateway to receive messages instantly. Everything works well, but after a certain period of time my log file stops recording messages, sometimes after an hour or two, but always within two hours. When I rerun the Node process it starts recording again. I'm not sure what the issue is.
const WebSocket = require('ws');
const ws = new WebSocket("wss://gateway.discord.gg/?v=6&encoding=json");

let interval = 0;
token = 'SOME tOKEN';
payload = {
  op: 2,
  d: {
    token: token,
    properties: {
      $os: 'linux',
      $browser: 'chrome',
      $device: 'chrome'
    }
  }
}

ws.on('open', function open() {
  ws.send(JSON.stringify(payload))
})

ws.on('message', function incoming(data) {
  let payload = JSON.parse(data)
  const {t, event, op, d} = payload;
  switch (op) {
    case 10:
      const {heartbeat_interval} = d;
      interval = heartbeat(heartbeat_interval)
      break;
  }
  switch (t) {
    case 'MESSAGE_CREATE':
      let author = d.author.username;
      let content = d.content;
      console.log(d);
  }
})

const heartbeat = (ms) => {
  return setInterval(() => {
    ws.send(JSON.stringify({op: 1, d: null}))
  }, ms)
}
Put everything after the first line into a function. Inside that function, add this line:
ws.on("close", functionName);
Use your function's name as the callback (without parentheses), so the connection is re-established whenever it closes. Outside the function, call the function once to start everything. Hope this helped!
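A minimal sketch of that reconnect-on-close pattern might look like this (connect is an illustrative name; the payload and the op 10 / MESSAGE_CREATE handling are the same as in the question):

const WebSocket = require('ws');

function connect() {
  const ws = new WebSocket("wss://gateway.discord.gg/?v=6&encoding=json");

  ws.on('open', () => ws.send(JSON.stringify(payload)));
  ws.on('message', (data) => {
    // ... same op 10 heartbeat / MESSAGE_CREATE handling as above ...
  });

  // when the gateway drops the connection, open a new one
  ws.on('close', connect);
}

connect();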

node js non blocking for loop

Please check whether my understanding of the following for loop is correct.
for (let i = 0; i < 1000; i++) {
  sample_function(i, function(result) {});
}
The moment the for loop is invoked, 1000 calls to sample_function will be queued in the event loop. About 5 seconds later a user sends an HTTP request, which is queued after those "1000 events".
Usually this would not be a problem, because the loop is asynchronous.
But let's say that sample_function is a CPU-intensive function. The "1000 events" are therefore completed consecutively and each takes about 1 second.
As a result, the for loop will block for about 1000 seconds.
Would there be a way to solve such a problem? For example, would it be possible to let the thread take a "break" every 10 iterations and allow other newly queued requests to run in between? If so, how would I do it?
Try this:
for (let i = 0; i < 1000; i++) {
  setTimeout(sample_function, 0, i, function(result) {});
}
or
function sample_function(elem, index) { /* ... */ }
var arr = Array(1000);
arr.forEach(sample_function);
There is a technique called partitioning, which you can read about in the Node.js documentation. But as the documentation states:
If you need to do something more complex, partitioning is not a good option. This is because partitioning uses only the Event Loop, and you won't benefit from multiple cores almost certainly available on your machine.
So you can also use another technique called offloading, e.g. using worker threads or child processes. Offloading has its own downsides, such as having to serialize and deserialize any objects you want to share between the event loop (the current thread) and a worker thread or child process.
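For completeness, a minimal sketch of offloading with the built-in worker_threads module might look like this (the file names and the heavyWork function are illustrative, not part of the original answer):

// worker.js (hypothetical file) - runs on a separate thread
const { parentPort, workerData } = require('worker_threads');

function heavyWork(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i; // stand-in for CPU-bound work
  return total;
}

parentPort.postMessage(heavyWork(workerData));

// main.js - the event loop stays free while the worker computes
const { Worker } = require('worker_threads');

function runWorker(n) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./worker.js', { workerData: n });
    worker.on('message', resolve);
    worker.on('error', reject);
  });
}

runWorker(10e6).then((result) => console.log(result));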
Following is an example of partitioning that I came up with, in the context of an Express application.
const express = require('express');
const crypto = require('crypto');
const randomstring = require('randomstring');

const app = express();
const port = 80;

app.get('/', async (req, res) => {
  res.send('ok');
})

app.get('/block', async (req, res) => {
  let result = [];
  for (let i = 0; i < 10; ++i) {
    result.push(await block());
  }
  res.send({result});
})

app.listen(port, () => {
  console.log(`Listening on port ${port}`);
  console.log(`http://localhost:${port}`);
})
/* takes around 5 seconds to run (varies depending on your processor) */
const block = () => {
  // promisifying just to get the result back to the caller in an async way; this is not part of the partitioning technique
  return new Promise((resolve, reject) => {
    /**
     * https://nodejs.org/en/docs/guides/dont-block-the-event-loop/#partitioning
     * using the partitioning technique (setImmediate/setTimeout) to prevent a long-running
     * operation from blocking the event loop completely;
     * there will be a breathing period between each time block is called
     */
    setImmediate(() => {
      let hash = crypto.createHash("sha256");
      const numberOfHashUpdates = 10e5;
      for (let iter = 0; iter < numberOfHashUpdates; iter++) {
        hash.update(randomstring.generate());
      }
      resolve(hash);
    })
  });
}
There are two endpoints, / and /block. If you hit /block and then hit the / endpoint, the / endpoint will take around 5 seconds to respond: it gets its turn during the breathing space (the thing you call a "break").
If setImmediate were not used, the / endpoint would only respond after approximately 10 * 5 seconds (10 being the number of times the block function is called in the for loop).
You can also do partitioning using a recursive approach like this:
/**
 * @param items array we need to process
 * @param chunk number of items to process on each iteration of the event loop before the breathing space
 */
function processItems(items, chunk) {
  let i = 0;
  const process = (done) => {
    let currentChunk = chunk;
    while (currentChunk > 0 && i < items?.length) {
      --currentChunk;
      syncBlock(); // placeholder for the synchronous, CPU-bound work done per item
      ++i;
    }
    if (i < items?.length) {
      // the key is to schedule the next recursive call (by passing the function to setImmediate)
      // instead of doing a plain recursive call (by simply invoking the process function)
      setImmediate(process);
    }
  }
  process();
}
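As a rough usage sketch (syncBlock is still just the synchronous placeholder from the snippet above), processing 100 items while yielding back to the event loop after every 10 would look like:

const items = Array.from({ length: 100 }, (_, i) => i);
processItems(items, 10); // fire-and-forget: no result is returned in this version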
And if you need to get back the processed data, you can promisify it like this:
function processItems(items, chunk) {
  let i = 0;
  let result = [];
  const process = (done) => {
    let currentChunk = chunk;
    while (currentChunk > 0 && i < items?.length) {
      --currentChunk;
      const returnedValue = syncBlock();
      result.push(returnedValue);
      ++i;
    }
    if (i < items?.length) {
      setImmediate(() => process(done));
    } else {
      done && done(result);
    }
  }
  const promisified = () => new Promise((resolve) => process(resolve));
  return promisified();
}
And you can test it by adding this route handler to the other route handlers provided above:
app.get('/block2', async (req, res) => {
  let result = [];
  let arr = [];
  for (let i = 0; i < 10; ++i) {
    arr.push(i);
  }
  result = await processItems(arr, 1);
  res.send({ result });
})

Set rate using RxJS5

I have this code which just reads data from a .csv file, converts it to JSON, and logs the data:
const fs = require('fs');
const path = require('path');
const sd = path.resolve(__dirname + '/fixtures/SampleData.csv');
const strm = fs.createReadStream(sd).setEncoding('utf8');
const Rx = require('rxjs/Rx');
const csv2json = require('csv2json');

const dest = strm
  .pipe(csv2json({
    separator: ','
  }));

dest.on('error', function(e) {
  console.error(e.stack || e);
})

const obs = Rx.Observable.fromEvent(dest, 'data')
  .flatMap(d => Rx.Observable.timer(100).mapTo(d))

obs.subscribe(v => {
  console.log(String(v));
})
What this code does is log all the data after a 100 ms delay. I actually want to delay each line of data and log each line after a small delay.
The above code doesn't achieve that. What is the best way to control the rate at which the data is logged?
Hypothesis: all the lines of data come in at approximately the same time, so all are delayed 100 ms and end up getting printed at pretty much the same time. I need to only start delaying the next line after the previous one has been logged.
The following code seems to do the same thing as using the timer above:
const obs = Rx.Observable.fromEvent(dest, 'data')
  .delay(100)
Hypothesis: All the lines of data come in approximately at the same time, so all are delayed 100 ms, so they end up getting printed at pretty much the same time. I need to only start delaying the next line after the previous one has been logged.
Your hypothesis is correct.
Solution
Swap out the .flatMap() in your original code for .concatMap():
Rx.Observable.from([1, 2, 3, 4])
  .mergeMap(i => Rx.Observable.timer(500).mapTo(i))
  .subscribe(val => console.log('mergeMap value: ' + val));

Rx.Observable.from([1, 2, 3, 4])
  .concatMap(i => Rx.Observable.timer(500).mapTo(i))
  .subscribe(val => console.log('concatMap value: ' + val));

<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.0.3/Rx.js"></script>
This will ensure that every emission completes before the next emission is subscribed to and starts delaying its value.
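Applied to the csv2json stream from the question, the fix would be roughly as follows (a sketch reusing the dest stream defined above):

const obs = Rx.Observable.fromEvent(dest, 'data')
  // each line waits for the previous line's 100 ms timer to complete
  .concatMap(d => Rx.Observable.timer(100).mapTo(d));

obs.subscribe(v => console.log(String(v)));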
I couldn't find the functionality I needed in the RxJS library (although it might be there, I just couldn't find it; let me know if there is a better, more idiomatic way).
So I wrote this, which seems to do the job:
const fs = require('fs');
const path = require('path');
const sd = path.resolve(__dirname + '/fixtures/SampleData.csv');
const strm = fs.createReadStream(sd).setEncoding('utf8');
const Rx = require('rxjs/Rx');
const csv2json = require('csv2json');

const p = Rx.Observable.prototype;

p.eachWait = function(timeout) {
  const source = this;
  const values = [];
  let flipped = true;

  const onNext = function(sub) {
    flipped = false;
    setTimeout(() => {
      var c = values.pop();
      if (c) sub.next(c);
      if (values.length > 0) {
        onNext(sub);
      } else {
        flipped = true;
      }
    }, timeout);
  }

  return Rx.Observable.create(sub => {
    return source.subscribe(
      function next(v) {
        values.unshift(v);
        if (flipped) {
          onNext(sub);
        }
      },
      sub.error.bind(sub),
      sub.complete.bind(sub)
    );
  });
}

const dest = strm
  .pipe(csv2json({
    separator: ','
  }));

dest.on('error', function(e) {
  console.error(e.stack || e);
});

const obs = Rx.Observable.fromEvent(dest, 'data')
  .eachWait(1000)

obs.subscribe(v => {
  console.log(String(v));
});
I assume this is about as performant as you can make it; only one timer should be running at any given moment.

Using Redis SCAN in NODE

I have Redis with a lot of keys in a certain format and I want to get the keys that match some pattern and do some operations on them. I don't use the KEYS method since it's not recommended in production. Using SCAN, I'm wondering what the best way to write it in code is. I have to do something like a while loop but using promises; my current solution looks like this (code is simplified a little):
'use strict'
const Promise = require('bluebird');
const config = require('./config');
const client = require('./client');

let iterator = 0;
Promise.coroutine(function* () {
  do {
    iterator = yield client.scanAsync(iterator, 'myQuery', 'COUNT', config.scanChunkSize)
      .then(data => {
        let nextIterator = data[0];
        let values = data[1];
        // do some magic with values
        return nextIterator;
      })
  } while (iterator !== '0');
})();
Is there a better way to do it that I'm missing?
I realize this is a really old question, but I found all of the other answers very unsatisfying. Here is yet another attempt to scan in a relatively clean way using async/await (WITHOUT the use of yet another external dependency). You can easily modify this to continuously delete each batch of found keys (you would want to tackle them in batches like this in case there are LOTS of them). Pushing them into an array just demonstrates one very basic thing you could do with them during this stage.
const redis = require('redis');
const { promisify } = require('util');
const client = redis.createClient({...opts});
const scan = promisify(client.scan).bind(client);

const scanAll = async (pattern) => {
  const found = [];
  let cursor = '0';
  do {
    const reply = await scan(cursor, 'MATCH', pattern);
    cursor = reply[0];
    found.push(...reply[1]);
  } while (cursor !== '0');
  return found;
}
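Usage would then be along these lines (the 'user:*' pattern is just an example):

scanAll('user:*')
  .then((keys) => console.log(`found ${keys.length} keys`, keys))
  .catch(console.error);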
You can use recursion to keep calling scan until done.
function scanAsync(cursor, pattern, returnSet) {
  return redisClient.scanAsync(cursor, "MATCH", pattern, "COUNT", "100").then(
    function (reply) {
      cursor = reply[0];
      var keys = reply[1];
      keys.forEach(function(key, i) {
        returnSet.add(key);
      });
      if (cursor === '0') {
        return Array.from(returnSet);
      } else {
        return scanAsync(cursor, pattern, returnSet)
      }
    });
}
Pass in a Set() to make sure keys aren't duplicated
myResults = new Set();
scanAsync('0', "NOC-*[^listen]*", myResults).map(
  function(myResults) { console.log(myResults); }
);
You can try this snippet to scan 1000 keys per iteration and delete them.
var cursor = '0';

function scan(pattern, callback) {
  redisClient.scan(cursor, 'MATCH', pattern, 'COUNT', '1000', function(err, reply) {
    if (err) {
      throw err;
    }
    cursor = reply[0];
    var keys = reply[1];
    // delete the keys from this batch before deciding whether to continue,
    // so the final batch is not skipped
    keys.forEach(function(key, i) {
      redisClient.del(key, function(deleteErr, deleteSuccess) {
        console.log(key);
      });
    });
    if (cursor === '0') {
      return callback();
    } else {
      return scan(pattern, callback);
    }
  });
}

scan(strkey, function() {
  console.log('Scan Complete');
});
A nice option for the node-redis module is to use scan iterators. Example:
const redis = require("redis");
const client = redis.createClient();
async function getKeys(pattern="*", count=10) {
const results = [];
const iteratorParams = {
MATCH: pattern,
COUNT: count
}
for await (const key of client.scanIterator(iteratorParams)) {
results.push(key);
}
return results;
}
(Of course, you can also process your keys on the fly in the for await loop, without storing them in an additional array, if that's enough for you.)
If you do not want to override the scan parameters (MATCH/COUNT) you can just skip them and call client.scanIterator() without a parameter (the defaults will then be used: MATCH="*", COUNT=10).
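For example, it could be called like this after connecting the client (the 'session:*' pattern and the count of 100 are illustrative):

// node-redis v4 clients must be connected before issuing commands
client.connect()
  .then(() => getKeys('session:*', 100))
  .then((keys) => console.log(keys));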
I think the node bindings for Redis are pushing too much responsibility to the caller here. So I created my own library for scanning as well, using generators in node:
const redis = require('redis')
const client = redis.createClient(…)

const generators = require('redis-async-gen')
const { keysMatching } = generators.using(client)

…

for await (const key of keysMatching('test*')) {
  console.info(key)
}
The last bit is obviously the thing you should care about. Instead of having to carefully control an iterator yourself, all you need to do is use a for await loop.
I wrote more about it here.
Go through this, it may help.
https://github.com/fritzy/node-redisscan
Do not use the library as is; go through the code available at
https://github.com/fritzy/node-redisscan/blob/master/index.js

Is it possible to register multiple listeners to a child process's stdout data event? [duplicate]

I need to run two commands in series that need to read data from the same stream.
After piping a stream into another, the buffer is emptied so I can't read data from that stream again, so this doesn't work:
var spawn = require('child_process').spawn;
var fs = require('fs');
var request = require('request');

var inputStream = request('http://placehold.it/640x360');
var identify = spawn('identify', ['-']);

inputStream.pipe(identify.stdin);

var chunks = [];

identify.stdout.on('data', function(chunk) {
  chunks.push(chunk);
});

identify.stdout.on('end', function() {
  var size = getSize(Buffer.concat(chunks)); // width
  var convert = spawn('convert', ['-', '-scale', size * 0.5, 'png:-']);
  inputStream.pipe(convert.stdin);
  convert.stdout.pipe(fs.createWriteStream('half.png'));
});

function getSize(buffer) {
  return parseInt(buffer.toString().split(' ')[2].split('x')[0]);
}
Request complains about this:
Error: You cannot pipe after data has been emitted from the response.
and changing the inputStream to fs.createReadStream yields the same issue, of course.
I don't want to write into a file but to reuse, in some way, the stream that request produces (or any other stream, for that matter).
Is there a way to reuse a readable stream once it finishes piping?
What would be the best way to accomplish something like the above example?
You have to create a duplicate of the stream by piping it into two streams. You can create a duplicate with a PassThrough stream; it simply passes the input through to the output.
const spawn = require('child_process').spawn;
const PassThrough = require('stream').PassThrough;

const a = spawn('echo', ['hi user']);
const b = new PassThrough();
const c = new PassThrough();

a.stdout.pipe(b);
a.stdout.pipe(c);

let count = 0;
b.on('data', function (chunk) {
  count += chunk.length;
});
b.on('end', function () {
  console.log(count);
  c.pipe(process.stdout);
});
Output:
8
hi user
The first answer only works if streams take roughly the same amount of time to process data. If one takes significantly longer, the faster one will request new data, consequently overwriting the data still being used by the slower one (I had this problem after trying to solve it using a duplicate stream).
The following pattern worked very well for me. It uses a library based on Stream2 streams, Streamz, and Promises to synchronize async streams via a callback. Using the familiar example from the first answer:
const spawn = require('child_process').spawn;
const pass = require('stream').PassThrough;
const streamz = require('streamz').PassThrough;
var Promise = require('bluebird');

const a = spawn('echo', ['hi user']);
const b = new pass;
const c = new pass;

a.stdout.pipe(streamz(combineStreamOperations));

function combineStreamOperations(data, next) {
  Promise.join(b, c, function(b, c) { // perform n operations on the same data
    next(); // request more
  });
}

let count = 0;
b.on('data', function(chunk) { count += chunk.length; });
b.on('end', function() { console.log(count); c.pipe(process.stdout); });
You can use this small npm package I created:
readable-stream-clone
With this you can reuse readable streams as many times as you need
For the general problem, the following code works fine:
var PassThrough = require('stream').PassThrough;

var a = new PassThrough();
var b1 = new PassThrough();
var b2 = new PassThrough();

a.pipe(b1);
a.pipe(b2);

b1.on('data', function(data) {
  console.log('b1:', data.toString());
});
b2.on('data', function(data) {
  console.log('b2:', data.toString());
});

a.write('text');
I have a different solution for writing to two streams simultaneously. Naturally, the time to write will be the sum of the two times, but I use it to respond to a download request where I want to keep a copy of the downloaded file on my server (actually I use an S3 backup, so I cache the most used files locally to avoid multiple file transfers).
/**
 * A utility class made to write to a file while answering a file download request
 */
class TwoOutputStreams {
  constructor(streamOne, streamTwo) {
    this.streamOne = streamOne
    this.streamTwo = streamTwo
  }

  setHeader(header, value) {
    if (this.streamOne.setHeader)
      this.streamOne.setHeader(header, value)
    if (this.streamTwo.setHeader)
      this.streamTwo.setHeader(header, value)
  }

  write(chunk) {
    this.streamOne.write(chunk)
    this.streamTwo.write(chunk)
  }

  end() {
    this.streamOne.end()
    this.streamTwo.end()
  }
}
You can then use this as a regular output stream:
const twoStreamsOut = new TwoOutputStreams(fileOut, responseStream)
and pass it to your method as if it were a response or a file output stream.
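As a rough usage sketch (sourceStream, fileOut, and res are illustrative names; since the class only exposes write/end rather than a real Writable interface, the source data has to be forwarded manually):

const fs = require('fs');

// e.g. inside an Express handler: (req, res) => { ... }
const fileOut = fs.createWriteStream('/tmp/cached-copy');
const twoStreamsOut = new TwoOutputStreams(fileOut, res);

sourceStream.on('data', (chunk) => twoStreamsOut.write(chunk)); // fan out each chunk
sourceStream.on('end', () => twoStreamsOut.end());              // close both targets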
If you have async operations on the PassThrough streams, the answers posted here won't work.
A solution that works for async operations involves buffering the stream content and then creating streams from the buffered result.
To buffer the result you can use concat-stream:
const Promise = require('bluebird');
const concat = require('concat-stream');

const getBuffer = function(stream) {
  return new Promise(function(resolve, reject) {
    var gotBuffer = function(buffer) {
      resolve(buffer);
    }
    var concatStream = concat(gotBuffer);
    stream.on('error', reject);
    stream.pipe(concatStream);
  });
}
To create streams from the buffer you can use:
const { Readable } = require('stream');

const getBufferStream = function(buffer) {
  const stream = new Readable();
  stream.push(buffer);
  stream.push(null);
  return Promise.resolve(stream);
}
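Putting the two helpers together might look roughly like this (sourceStream and the two consumers are illustrative):

const fs = require('fs');

getBuffer(sourceStream)
  .then((buffer) => Promise.all([getBufferStream(buffer), getBufferStream(buffer)]))
  .then(([copyA, copyB]) => {
    // each consumer gets its own independent readable stream over the same data
    copyA.pipe(process.stdout);
    copyB.pipe(fs.createWriteStream('copy.bin'));
  });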
What about piping into two or more streams not at the same time?
For example:
var PassThrough = require('stream').PassThrough;

var mybinaryStream = stream.start(); // never-ending audio stream
var file1 = fs.createWriteStream('file1.wav', {encoding: 'binary'});
var file2 = fs.createWriteStream('file2.wav', {encoding: 'binary'});
var mypass = new PassThrough();

mybinaryStream.pipe(mypass);
mypass.pipe(file1);

setTimeout(function() {
  mypass.pipe(file2);
}, 2000);
The above code does not produce any errors, but file2 is empty.

Resources