sqs.receiveMessage not receiving even when there are messages in the queue - node.js

So I have 3 Lambdas: one with an API event trigger that pulls down around 50,000 objects and pushes them all to a queue.
The second Lambda reads from the queue, 10 messages at a time, in a loop of 30 iterations - meaning it reads, does some work, invokes the third Lambda, returns a promise, then reads again - for a total of up to 300 messages in the time the Lambda executes.
The third Lambda takes the information from the queue and hits another endpoint with it.
The issue is in that second Lambda. First I call a function that returns the number of messages in the queue, and if it's more than zero I read them. However, even if there are 20,000 messages in the queue, it often comes back with nothing, and I'm not sure why.
I have WaitTimeSeconds set to 20 for long polling. Any help would be greatly appreciated; the docs claim I can read up to 3,000 messages/second from a FIFO queue and I'm having trouble getting anywhere near that performance.
Here's the code:
const AWS = require('aws-sdk');

exports.handler = (event, context, callback) => {
  const sqs = new AWS.SQS({ region: process.env.AWS_REGION });

  getMessageCount(sqs)
    .then((messageCount) => {
      if (messageCount > 0) {
        mapSeries(range(0, 30), getMessages(sqs))
          .then((messageRes) => {
            callback(null, messageRes);
          })
          .catch(e => Promise.reject(e));
      }
      callback(null, 'No more messages');
    })
    .catch((e) => {
      callback(e);
    });
};
getMessageCount makes a call to sqs.getQueueAttributes and returns a promise that receives the number of messages.
mapSeries allows the loop to wait for the previous promise to be resolved/rejected before iterating; on each iteration it calls getMessages, which calls sqs.receiveMessage and invokes the third Lambda with the data.
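For reference, based on that description, mapSeries behaves roughly like this simplified promise-sequential reducer (the actual helper may differ):

const mapSeries = (items, fn) =>
  items.reduce(
    (chain, item) =>
      // wait for the previous promise, then run fn on the next item and collect its result
      chain.then(results => fn(item).then(result => [...results, result])),
    Promise.resolve([])
  );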
Any perspective on this is appreciated, thank you!

As I understand your question, the problem lies with getting the number of messages in the queue. If you had also shared getMessageCount(sqs), we could have determined which attributes you are trying to retrieve from SQS.
There are three attributes relevant to getting the message count in SQS:
ApproximateNumberOfMessages - returns the approximate number of visible messages in the queue.
ApproximateNumberOfMessagesNotVisible - returns the approximate number of messages that have not timed out and aren't deleted (i.e., in flight).
If you want to include the messages that are waiting to be added, you can consider the following attribute as well:
ApproximateNumberOfMessagesDelayed - returns the approximate number of messages that are waiting to be added to the queue.
By considering these attributes, you can get a much more accurate count from SQS.
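For illustration, a getMessageCount that sums these attributes might look roughly like this (a sketch assuming the AWS SDK v2 and a QUEUE_URL environment variable; your helper may differ):

const getMessageCount = sqs =>
  sqs.getQueueAttributes({
    QueueUrl: process.env.QUEUE_URL, // assumed environment variable
    AttributeNames: [
      'ApproximateNumberOfMessages',
      'ApproximateNumberOfMessagesNotVisible',
      'ApproximateNumberOfMessagesDelayed',
    ],
  }).promise()
    .then(({ Attributes }) =>
      // attribute values come back as strings, so convert before summing
      Number(Attributes.ApproximateNumberOfMessages) +
      Number(Attributes.ApproximateNumberOfMessagesNotVisible) +
      Number(Attributes.ApproximateNumberOfMessagesDelayed));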
Also, if I may suggest: I implemented a similar system, but without checking the count. I retrieve 10 messages at a time via polling, process them, and delete them from the queue. As per your example, you can repeat this 30 times. But if the getMessages(sqs) function returns an empty set, we could assume that the queue is empty (this depends on whether you are using short polling or long polling). Nevertheless, checking the number of messages at every step seems redundant. This holds for this example, but it might differ according to the use case.

Read through the API documentation: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SQS.html#receiveMessage-property
Parameters:
MaxNumberOfMessages — (Integer)
The maximum number of messages to return. Amazon SQS never returns more messages than this value (however, fewer messages might be returned). Valid values are 1 to 10. Default is 1.
Wrap your code in a while loop and anticipate the frequent case of 0 messages, since receiveMessage may return fewer messages than requested, including none at all, even when the queue has messages.
Something like...
var messages = [];
while (messages.length < NUMBER_OF_MSGS_YOU_REALLY_WANT) {
  var new_messages = await getSQSMessages(NUMBER_OF_MSGS_YOU_REALLY_WANT - messages.length);
  // a receive can legitimately come back empty, so guard before reading .length
  if (new_messages.Data.Messages && new_messages.Data.Messages.length > 0) {
    // spread so the individual messages are appended, not the whole batch array
    messages.push(...new_messages.Data.Messages);
  }
}
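For completeness, the getSQSMessages helper assumed above could be sketched like this (hypothetical; it assumes an SQS client named sqs is in scope, a QUEUE_URL environment variable, and wraps the SDK response in the Data property used in the loop):

const getSQSMessages = max =>
  sqs.receiveMessage({
    QueueUrl: process.env.QUEUE_URL,          // assumed environment variable
    MaxNumberOfMessages: Math.min(max, 10),   // SQS caps a single receive at 10
    WaitTimeSeconds: 20,                      // long polling, as in the question
  }).promise()
    .then(data => ({ Data: data }));          // data.Messages holds the batch, if any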

Related

Node.js compute gets slow after querying a big list from MongoDB

I am using Mongoose to query a really big list from MongoDB:
const chat_list = await chat_model.find({}).sort({uuid: 1}); // uuid is a index
const msg_list = await message_model.find({}, {content: 1, xxx}).sort({create_time: 1}); // create_time is a index of message collection, time: t1
// chat_list length is around 2,000, msg_list length is around 90,000
compute(chat_list, msg_list); // time: t2

function compute(chat_list, msg_list) {
  for (let i = 0, len = chat_list.length; i < len; i++) {
    msg_list.filter(msg => msg.uuid === chat_list[i].uuid)
    // consistent handling for every message
  }
}
For the above code, t1 is about 46 s and t2 is about 150 s.
t2 is really too big, which is weird.
Then I cached these lists to local JSON files:
const chat_list = require('./chat-list.json');
const msg_list = require('./msg-list.json');
compute(chat_list, msg_list); // time: t2
This time, t2 is around 10 s.
So here comes the question: 150 seconds vs 10 seconds, why? What happened?
I tried using a worker to do the compute step after the Mongo query, but the time is still much bigger than 10 s.
The MongoDB query returns a FindCursor that includes array-like methods such as .filter(), but the result is not an Array.
Use .toArray() on the cursor before filtering so you process the MongoDB result set like for like. That might not make the overall process any faster, as the result set still needs to be fetched from MongoDB, but the compute() step will then be comparable.
const chat_list = await chat_model
  .find({})
  .sort({uuid: 1})
  .toArray()
const msg_list = await message_model
  .find({}, {content: 1, xxx})
  .sort({create_time: 1})
  .toArray()
Matt typed faster than I did, so some of what was suggested aligns with part of this answer.
I think you are measuring and comparing something different from what you are expecting and implying.
Your expectation is that the compute() function takes around 10 seconds once all of the data is loaded by the application. This is (mostly) demonstrated by your second test, apart from the fact that that test includes the time it takes to load the data from the local files. But you're seeing a difference of 104 seconds (150 - 46) between the completion of message_model.find() and compute(), hence the question.
The key thing is that successfully returning from the find against message_model is not the same as retrieving all of the results. As #Matt notes, find() will return a cursor object once the initial batch of results is ready. That is very different from retrieving all of the results, so there is more work (apparently ~94 seconds' worth) left to do from the two find() operations to further iterate the cursors and retrieve the rest of the results. This additional time is being reported inside of t2.
As suggested by #Matt, calling .toArray() should shift that time back into t1, as you are expecting. It also sounds like it may be more correct given the ambiguity around the .filter() functions.
There are two other things that catch my attention. The first is: why are you retrieving all of this data client-side to do the filtering there? Perhaps you would like to do this uuid matching inside of the database via $lookup?
Secondly, this comment isn't clear to me:
// create_time is a index of message collection, time: t1
create_time itself is a field here, existent or not, that you are requesting an ascending sort against.
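For illustration, the $lookup approach mentioned earlier could look roughly like this (a sketch; the foreign collection name is assumed from the model names in the question):

// Hypothetical aggregation doing the uuid matching server-side instead of in compute()
const chats_with_messages = await chat_model.aggregate([
  { $sort: { uuid: 1 } },
  { $lookup: {
      from: 'messages',        // assumed collection name behind message_model
      localField: 'uuid',
      foreignField: 'uuid',
      as: 'messages',          // each chat comes back with its matching messages embedded
  } },
]);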
You are taking data from two collections, then comparing IDs with a for loop and the filter function. Your loop executes 2,000 times, and each iteration runs filter over all 90,000 records.
Take the worst-case scenario: if none of the 2,000 uuids appear in msg_list, you still perform 2,000 * 90,000 comparisons even though you get no data back.
It shouldn't take more than 10 to 15 seconds if you use the code below.
// This will generate an array of the uuids present in message_model
const msg_list = await message_model.find({}, {content: 1, xxx}).sort({create_time: 1}).distinct("uuid");
// The query below will match all uuids present in the msg_list array against the chat_list uuid
const chat_list = await chat_model.find({uuid: {$in: msg_list}}).sort({uuid: 1});
The above does the same thing as your filter-and-loop code, but it is the proper and much faster way to retrieve the data you need.

Counting the number of values emitted before the Observable completes?

Attempting to verify that an observable emits a certain number of events before it completes. This is pseudo code:
o.pipe(count).subscribe(count =>
  expect(count).toEqual(4));
Thoughts?
The count operator works as follows:
Counts the number of emissions on the source and emits that number when the source completes (source)
So you can use it like so:
obs.pipe(count()).subscribe(totalEmissions => expect(totalEmissions).toEqual(4))
Note that you can't really measure how many events occurred before the original observable completed, because if it didn't complete then you didn't finish counting!
You can, however, take note of the "index" of each emission using tap:
let count = 0
obs.pipe(tap(() => console.log("emitted! Index: " + count++))).subscribe(obsValue => {/*...*/})
I'm not sure which is your use case, but that's how you can do it.

Thread pool with Apps Script on Spreadsheet

I have a Google Spreadsheet with Apps Script code that processes each row of the sheet and performs a UrlFetch with the row data. The URL provides a value which is added to the values returned by each row's processing.
For now the code processes one row at a time with a simple for loop:
var spreadsheet = SpreadsheetApp.getActiveSpreadsheet();
var sheet = spreadsheet.getActiveSheet();
var range = sheet.getDataRange();
for (var i = 1; i < range.getValues().length; i++) {
  var payload = {
    // retrieve data from the row and make payload object
  };
  var options = {
    "method": "POST",
    "payload": payload
  };
  var result = UrlFetchApp.fetch("http://.......", options);
  var text = result.getContentText();
  // Save result for final processing
  // (with multi-thread function this value will be the return of the function)
}
Please note that this is only a simple example; in the real case the working function will be more complex (something like 5-6 HTTP calls, where the output of some of them is used as input to the next one, and so on).
For the example, let's say that there is a generic "function" which executes some sort of processing and provides a result as output.
In order to speed up the process, I'd like to try to implement some sort of "multi-thread" processing, so I can process multiple rows at the same time.
I already know that JavaScript does not offer multi-threading, but I read about Web Workers, which seem to allow asynchronous processing of a function.
My goal is to obtain some sort of ThreadPool (say, 5 threads at a time), send every row that needs to be processed to the pool, and obtain as output the result of each function.
When all the rows have finished processing, a final action will be performed that gathers all the results of each function.
So the capabilities I'm looking for are:
a managed "ThreadPool" where I can submit N tasks to be performed
the possibility to obtain a resulting value from each task processed by the pool
the possibility to determine that all the tasks have been processed, so a final "event" can be executed
I already see that there are some ready-to-use libraries like:
https://www.hamsters.io/wiki#thread-pool
http://threadsjs.readthedocs.io/en/latest/
https://github.com/andywer/threadpool-js
but they work with NodeJS. Due to the nature of Apps Script, I need a simpler approach provided by native JS. Also, it seems that minified JS is not accepted by the Apps Script editor, so I also need the "expanded" version.
Do you know a simple ThreadPool in JS where I can submit a function to be execute and I get back a Promise for the result?
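One relevant primitive worth noting here: UrlFetchApp.fetchAll() issues a batch of HTTP requests in parallel. It is not the general thread-pool abstraction being asked for, but a rough sketch against the simple example above (the URL and payload construction remain placeholders) could look like this:

// Sketch only: parallelize the simple example above with UrlFetchApp.fetchAll()
var rows = sheet.getDataRange().getValues();
var requests = [];
for (var i = 1; i < rows.length; i++) {
  requests.push({
    url: "http://.......",                 // placeholder, as in the question
    method: "post",
    payload: { /* build payload from rows[i] */ }
  });
}
var responses = UrlFetchApp.fetchAll(requests);  // all requests go out in parallel
var texts = responses.map(function (r) { return r.getContentText(); });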

Use first() and repeat() without restarting the whole stream in RxJS

I am building a trading bot using RxJS. For that I have to convert ticker data from a socket connection into candles that are emitted every x seconds.
I created the socketObservable like this:
const subscribeObservable = Observable.fromEventPattern(h => bittrex.websockets.subscribe(['USDT-BTC'], h))
const clientCallBackObservable = Observable.fromEventPattern(h => bittrex.websockets.client(h))

const socketObservable = clientCallBackObservable
  .flatMap(() => subscribeObservable)
  .filter(subscriptionData => subscriptionData && subscriptionData.M === 'updateExchangeState')
  .flatMap(exchangeState => Observable.from(exchangeState.A))
  .filter(marketData => marketData.Fills.length > 0)
  .map(marketData => marketData && marketData.Fills)
This works fine: when I connect to the client I flatMap to the subscription connection.
Then I have the candleObservable that is causing problems:
export const candleObservable = (promise, timeFrame = TIME_FRAME) =>
  promise
    .scan((acc, curr) => [...acc, ...curr])
    .skipWhile(exchangeData => dateDifferenceInSeconds(exchangeData) < timeFrame)
    // take first after skipping
    .first()
    // first will complete the stream, so we repeat it
    .repeat()
    // we create candle data from the timeFrame array
    .map(fillsData => createCandle(fillsData))
    // accumulate candles
    .scan((acc, curr) => [...[acc], curr])
What I am trying to achieve is to accumulate data until I have enough for a full candle, which can span x seconds. Then I would like to take that emission and reset the scan function so I start on a new candle. Then I create the candle and accumulate it in another scan.
My problem is that when I call repeat(), my socketObservable also gets called again. I do not know whether this causes any overhead with node-bittrex-api, but I would like to avoid it.
I have tried putting the candle-accumulating part in a flatMap or similar, but couldn't get any of that to work.
Do you know how I can avoid repeat()ing the whole stream, or another way of making candles where I can accumulate and then reset the accumulator after the first emission?
From what you've described it sounds like you have an observable you want to cut up into buckets of some kind based on some condition. In general, the reduction of a stream to another stream with fewer elements (without filtering) is referred to as "backpressure". In your specific case, it sounds like the backpressure operator you'd be interested in is buffer. The buffer operator can accept an observable as an argument that functions as a "closing selector", i.e. emissions in this observable can be used to regulate when you tie off one buffer and start a new one.
I'd suggest replacing your scan, skipWhile, first, and repeat with a buffer call, passing in a closing selector that will yield a value when your "TIME_FRAME" expires. This should be easy to express as an observable either using timer (in the case of a fixed amount) or a debounced version of the driving stream (if you want to stop when there's a pause in the data). If your buffer is strictly time-based, there's even a specialization of buffer called bufferTime that handles this. Because you'll wind up with an observable of arrays (rather than raw values), you'll likely want to replace your final scan with a regular array reduce.
It's hard to give concrete code without a simpler example to work with. I'd urge you to consult the sample code for the various backpressure operators to see if you can find something similar to what you're attempting to achieve.
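That said, a rough sketch of the bufferTime variant, using the dot-chaining style from the question and assuming timeFrame is in seconds (createCandle and TIME_FRAME come from the question):

export const candleObservable = (source, timeFrame = TIME_FRAME) =>
  source
    .bufferTime(timeFrame * 1000)                          // one array of fill batches per window
    .filter(fills => fills.length > 0)                     // skip windows with no data
    .map(fills => createCandle([].concat(...fills)))       // flatten the batches, then build the candle
    .scan((candles, candle) => [...candles, candle], []);  // accumulate candles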

How can I implement an anti-spamming technique on my IRC bot?

I run my bot in a public channel with hundreds of users. Yesterday a person came in and just abused it.
I would like to let anyone use the bot, but if they spam commands consecutively and they aren't a bot "owner" like me (when I'm debugging), then I would like to add them to an ignore list which expires in an hour or so.
One way I'm thinking would be to save all commands by all users, in a dictionary such as:
({
'meder#freenode': [{command:'.weather 20851', timestamp: 209323023 }],
'jack#efnet': [{command:'.seen john' }]
})
I would set up a cron job to flush this out every 24 hours, but I would basically determine whether a person has made X commands within, say, 15 seconds, and add them to an ignore list.
Actually, as I'm writing this I thought of a better idea: maybe instead of storing each user's commands, just store the bot's commands in a list and keep pushing until it reaches a limit of, say, 15.
var lastCommands = [], limit = 5;

function handleCommand(timeObj, action) {
  if (lastCommands.length < limit) {
    action();
  } else {
    // enumerate through lastCommands and compare the timestamps of all 5 commands
    // if the user is the same for all 5 commands, and...
    // if the timestamps are all within the vicinity of 20 seconds
    // add the user to the ignoreList
  }
}

watch_for('command', function() {
  handleCommand({timestamp: 2093293032, user: user}, function() { message.say('hello there!'); });
});
I would appreciate any advice on the matter.
Here's a simple algorithm:
Every time a user sends a command to the bot, increment a number that's tied to that user. If this is a new user, create the number for them and set it to 1.
When a user's number is incremented to a certain value (say 15), set it to 100.
Every <period> seconds, run through the list and decrement all the numbers by 1. Zero means the user's number can be freed.
Before executing a command and after incrementing the user's counter, check to see if it exceeds your magic max value (15 above). If it does, exit before executing the command.
This lets you rate limit actions and forgive excesses after a while. Divide your desired ban length by the decrement period to find the number to set when a user exceeds your threshold (100 above). You can also add to the number if a particular user keeps sending commands after they've been banned.
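A minimal sketch of this counter scheme in JavaScript (the names and the one-second decrement period are illustrative):

var counts = {};                                  // user -> current counter
var MAX = 15, BAN_VALUE = 100, DECREMENT_PERIOD_MS = 1000;

function allowCommand(user) {
  counts[user] = (counts[user] || 0) + 1;         // increment on every command
  if (counts[user] === MAX) counts[user] = BAN_VALUE;  // trip the longer ban
  return counts[user] < MAX;                      // only execute the command below the threshold
}

setInterval(function () {                         // forgiveness: decrement everyone each period
  Object.keys(counts).forEach(function (user) {
    if (--counts[user] <= 0) delete counts[user]; // zero means the entry can be freed
  });
}, DECREMENT_PERIOD_MS);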
Well Nathon has already offered a solution, but it's possible to reduce the code that's needed.
var user = {};
user.lastCommandTime = new Date().getTime(); // time the user sent his last command
user.commandCount = 0;                       // command limit counter
user.maxCommandsPerSecond = 1;               // commands allowed per second

function handleCommand(obj, action) {
  var user = obj.user, now = new Date().getTime();
  var timeDifference = now - user.lastCommandTime;
  user.commandCount = Math.max(user.commandCount - (timeDifference / 1000 * user.maxCommandsPerSecond), 0) + 1;
  user.lastCommandTime = now;
  if (user.commandCount <= user.maxCommandsPerSecond) {
    console.log('command!');
  } else {
    console.log('flooding');
  }
}

var obj = {user: user};
var e = 0;
function foo() {
  handleCommand(obj, 'foo');
  e += 250;
  setTimeout(foo, 400 + e);
}
foo();
In this implementation, there's no need for a list or a global callback every X seconds; instead we just reduce the commandCount every time there's a new message, based on the time difference to the last command. It's also possible to allow different command rates for specific users.
All we need are 3 new properties on the user object :)
Redis
I would use the insanely fast advanced key-value store Redis to write something like this, because:
It is insanely fast.
There is no need for a cron job because you can set expiry on keys.
It has atomic operations to increment keys.
You could use redis-cli for prototyping.
I really like node_redis as a Redis client. It is very fast and can easily be installed using npm.
Algorithm
I think my algorithm would look something like this:
For each user, create a unique key which counts the consecutively executed commands. Also set an expiry for the time after which you no longer flag a user as a spammer. Let's assume the spammer has the nickname x and the expiry is 15 seconds.
Inside redis-cli
incr x
expire x 15
When you do a get x after 15 seconds, the key no longer exists.
If the value of the key is bigger than the threshold, flag the user as a spammer:
get x
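A small sketch of the same idea with the node_redis client mentioned above (older callback-style API assumed; the threshold and window are the example values from this answer):

const redis = require('redis');
const client = redis.createClient();
const THRESHOLD = 15;        // commands before a user is flagged
const WINDOW_SECONDS = 15;   // how long before the counter expires

function registerCommand(nick, callback) {
  client.incr(nick, function (err, count) {   // atomic increment per user
    if (err) return callback(err);
    client.expire(nick, WINDOW_SECONDS);      // (re)set the expiry on each command
    callback(null, count > THRESHOLD);        // true -> treat the user as a spammer
  });
}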
These answers seem to be going the wrong way about this.
IRC servers will disconnect your client, regardless of whether you're "debugging" or not, if the client or bot floods a channel or the server in general.
Implement blanket flood control, using the method #nmichaels has detailed, but on the bot's network connection to the server itself.
