AerospikeError Operation not allowed at this time while doing only reads - node.js

I am trying to load test my code with 50 parallel read requests.
I am querying data based on multiple indexes that I have created. The code looks something like this:
const fetchRecords = async (predicates) => {
  let query = aeroClient.query('test', 'mySet');
  let filters = [
    predexp.stringValue(predicates.load),
    predexp.stringBin('load'),
    predexp.stringEqual(),
    predexp.stringValue(predicates.disc),
    predexp.stringBin('disc'),
    predexp.stringEqual(),
    predexp.integerBin('date1'),
    predexp.integerValue(predicates.date2),
    predexp.integerGreaterEq(),
    predexp.integerBin('date2'),
    predexp.integerValue(predicates.date2),
    predexp.integerLessEq(),
    predexp.stringValue(predicates.column3),
    predexp.stringBin('column3'),
    predexp.stringEqual(),
    predexp.and(5),
  ];
  query.where(filters);
  let records = [];
  let stream = query.foreach();
  stream.on('data', record => {
    records.push(record);
  });
  // Reject the promise on a stream error instead of throwing inside the
  // event handler, so the error propagates to the caller.
  await new Promise((resolve, reject) => {
    stream.on('error', error => reject(error));
    stream.on('end', () => resolve());
  });
  return records;
}
This fails and I get the following error:
AerospikeError: Operation not allowed at this time.
    at Function.fromASError (/Users/.../node_modules/aerospike/lib/error.js:113:21)
    at QueryCommand.convertError (/Users/.../node_modules/aerospike/lib/commands/command.js:91:27)
    at QueryCommand.convertResponse (/Users/.../node_modules/aerospike/lib/commands/command.js:101:24)
    at asCallback (/Users/.../node_modules/aerospike/lib/commands/command.js:163:24)
My aerospike.conf content:
service {
    user root
    group root
    paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
    pidfile /var/run/aerospike/asd.pid
    # service-threads 6 # cpu x 5 in 4.7
    # transaction-queues 6 # obsolete in 4.7
    # transaction-threads-per-queue 4 # obsolete in 4.7
    proto-fd-max 15000
}
<...trimmed section>
namespace test {
    replication-factor 2
    memory-size 1G
    default-ttl 30d # 30 days, use 0 to never expire/evict.
    nsup-period 120
    # storage-engine memory
    # To use file storage backing, comment out the line above and use the
    # following lines instead.
    storage-engine device {
        file /opt/aerospike/data/test.dat
        filesize 4G
        data-in-memory true # Store data in memory in addition to file.
    }
}
From a similar question I found that this is happening due to low system configuration limits.
How can I modify these? Also, I believe 50 requests should have worked, given that I was able to insert around 12K records/sec.

Those are scans, I would guess, rather than individual reads, so they are subject to the scan thread limit. To increase scan-threads-limit dynamically:
asinfo -v "set-config:context=service;scan-threads-limit=128"
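If raising the server limit is not enough on its own, you can also cap how many of the 50 queries run concurrently on the client. A minimal sketch of a hypothetical runInBatches helper, reusing fetchRecords from the question (the batch size of 8 is an arbitrary assumption):
async function runInBatches(predicateList, batchSize = 8) {
  const results = [];
  for (let i = 0; i < predicateList.length; i += batchSize) {
    const batch = predicateList.slice(i, i + batchSize);
    // Each fetchRecords call runs one query/scan against the cluster,
    // so at most batchSize scans are in flight at a time.
    const records = await Promise.all(batch.map(fetchRecords));
    results.push(...records);
  }
  return results;
}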

Related

Why is my read query to Firebase Realtime Database so slow?

I have a database in a Firebase Realtime Database with data that looks like this:
root
|_ history
   |_ {userId}
      |_ {n1}
      |  |_ ...
      |_ {n2}
      |_ {n...}
Nodes n are keyed with a date integer value. Each n node has at least 60 keys, with some values being arrays, max 5 levels deep.
Query times were measured in a fashion similar to this:
const startTime = performance.now();
await query();
const endTime = performance.now();
logger.info(`Query completed in ${endTime - startTime} ms`);
I have a function that queries for n nodes under history/${userId} with keys between and inclusive of the start and end values:
await admin
  .database()
  .ref(`history/${userId}`)
  .orderByKey()
  .startAt(`${start}`)
  .endAt(`${end}`)
  .once("value")
This query is executed in a callable cloud function. This query currently takes approximately 2-3 seconds, returning approximately 225 nodes. The total number of n nodes is currently less than 300. Looking through my logs, it looks like query times that returned 0 nodes took approximately 500 milliseconds.
Why are the queries so slow? Am I misunderstanding something about Firebase's Realtime Database?
I've run a few performance tests so that you can compare your results against them.
I populated my database with this script:
for (var i = 0; i < 500; i++) {
  ref.push({
    refresh_at: Date.now() + Math.round(Math.random() * 60 * 1000)
  });
}
This led to a JSON of this form:
{
  "-MlWgH51ia7Iz7ubZb7K" : {
    "refresh_at" : 1633726623247
  },
  "-MlWgH534FgMlb7J4bH2" : {
    "refresh_at" : 1633726586126
  },
  "-MlWgH54gd-uW_M7e6J-" : {
    "refresh_at" : 1633726597651
  },
  ...
}
When retrieved in its entirety through the API, the snapshot.val() for this JSON is 26,001 characters long.
Client-side JavaScript SDK in jsbin
I first tested with the regular client-side JavaScript SDK in a jsbin, using a simple script similar to yours. Updated for jsbin, the code I ran is:
var startTime = performance.now(); // assumed: timing starts right before the query
ref.orderByChild("refresh_at")
  .endAt(Date.now())
  .limitToLast(1000) // 👈 This is what we'll vary
  .once("value")
  .then(function(snapshot) {
    var endTime = performance.now();
    console.log('Query completed in ' + Math.round(endTime - startTime) + 'ms, retrieved ' + snapshot.numChildren() + " nodes, for a total JSON size of " + JSON.stringify(snapshot.val()).length + " chars");
  });
Running it a few times, and changing the limit that I marked, leads to:
Limit | Snapshot size | Average time in ms
------|---------------|-------------------
500   | 26,001        | 350ms - 420ms
100   | 5,201         | 300ms - 350ms
10    | 521           | 300ms - 320ms
Node.js Admin SDK
I ran the same test with a local Node.js script against the exact same data set, with a modified script that runs 10 times:
for (var i = 0; i < 10; i++) {
  const startTime = Date.now();
  const snapshot = await ref.orderByChild("refresh_at")
    .endAt(Date.now())
    .limitToLast(10)
    .once("value");
  const endTime = Date.now();
  console.log('Query completed in ' + Math.round(endTime - startTime) + 'ms, retrieved ' + snapshot.numChildren() + " nodes, for a total JSON size of " + JSON.stringify(snapshot.val()).length + " chars");
}
The results:
Limit | Snapshot size | Time in ms
------|---------------|-----------
500   | 26,001        | 507ms, 78ms, 70ms, 65ms, 65ms, 61ms, 64ms, 65ms, 81ms, 62ms
100   | 5,201         | 442ms, 59ms, 56ms, 59ms, 55ms, 54ms, 54ms, 55ms, 57ms, 56ms
10    | 521           | 437ms, 52ms, 49ms, 52ms, 51ms, 51ms, 52ms, 50ms, 52ms, 50ms
So what you can see is that the first run is similar to (but slightly slower than) the JavaScript SDK, and subsequent runs are a lot faster. This makes sense, as on the initial run the client establishes its (web socket) connection to the database server, which includes a few round trips to determine the right server. Subsequent calls seem more bandwidth-constrained.
Ordering by key
I also tested with ref.orderByKey().startAt("-MlWgH5QUkP5pbQIkVm0").endAt("-MlWgH5Rv5ij42Vel5Sm") in Node.js and got very similar results to ordering by child.
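For reference, a sketch of that key-ordered test, reusing the 10-iteration timing loop from above (same async context assumed):
for (var i = 0; i < 10; i++) {
  const startTime = Date.now();
  // Same measurement as before, but ordering by key instead of by child.
  const snapshot = await ref.orderByKey()
    .startAt("-MlWgH5QUkP5pbQIkVm0")
    .endAt("-MlWgH5Rv5ij42Vel5Sm")
    .once("value");
  const endTime = Date.now();
  console.log('Query completed in ' + Math.round(endTime - startTime) + 'ms, retrieved ' + snapshot.numChildren() + ' nodes');
}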
Add the field that you are querying on to your Realtime Database rules with an .indexOn entry. Without an index, the database cannot order/filter on the server, so the data ends up being downloaded and filtered on the client (the SDK logs a warning about the missing index).
For example
{
  "rules": {
    ".read": "auth.uid != null",
    ".write": "auth.uid != null",
    "v1": {
      "history": {
        ".indexOn": "refresh_at"
      }
    }
  }
}

How can I run multiple instances of a for loop in NodeJS?

I have a function which returns the usage of a CPU core with the help of a library called cpu-stat:
const cpuStat = require('cpu-stat')
var coreCount = cpuStat.totalCores()
var memArr = []
function getCoreUsage(i) {
  return new Promise(async (resolve) => {
    if (i === 0) {
      cpuStat.usagePercent({ coreIndex: i, sampleMs: 1000 },
        async function (err, percent, seconds) {
          if (err) { resolve(console.log(err)) }
          x = await percent
          resolve("Core0: " + x.toFixed(2) + "%");
        });
    } else {
      cpuStat.usagePercent({ coreIndex: i, sampleMs: 1000 },
        async function (err, percent, seconds) {
          if (err) { resolve(console.log(err)) }
          x = await percent
          resolve(x);
        });
    }
  })
}
This function is called whenever a client requests a specific route:
function singleCore() {
  return new Promise(async (resolve) => {
    for (i = 0; i <= coreCount; i++) {
      if (i < coreCount) { core = await getCoreUsage(i), memArr.push(core) }
      else if (i === coreCount) { resolve(memArr), memArr = [] }
    }
  })
}
Now, this works just fine on machines that have fewer than 8 cores. The problem I am running into is that if I (hypothetically) use a high-core-count CPU like a Xeon or a Threadripper, the time it takes to get the usage will be close to a minute or so, because they can have 56 or 64 cores respectively. To solve this, I thought of executing the for loop for each core on different threads, so that the time comes down to one or two seconds (high-core-count CPUs have a lot of threads as well, so this probably won't be a problem).
But I can't figure out how to do this. I looked into the child_process documentation and I think this can probably be done. Please correct me if I am wrong. Also, please suggest a better way if you know one.
This usagePercent function works by:
1. looking at the cycle-count values in os.cpus()[index] in the object returned by the os package,
2. delaying for the chosen time, probably with setTimeout,
3. looking at the cycle counts again and computing the difference.
You'll get reasonably valid results if you use much shorter time intervals than one second.
Or you can rework the code in the package to do the computation for all cores in step 3 and return an array rather than just one number.
Or you can use Promise.all() to run these tests concurrently.
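For the Promise.all() route, here is a minimal sketch that reuses getCoreUsage and coreCount from the question; all cores sample their 1-second window concurrently, so the total wall time stays near one sampling interval regardless of core count:
async function allCores() {
  const promises = [];
  for (let i = 0; i < coreCount; i++) {
    // Kick off every core's measurement without awaiting it yet,
    // so all the 1-second sampling windows overlap.
    promises.push(getCoreUsage(i));
  }
  // Resolves once every core has reported its usage.
  return Promise.all(promises);
}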

How to extend time after max invalid attempt for Login (node-rate-limiter-flexible)

Basically I want to protect my login endpoint API from brute-force attacks. The idea is that once a user has consumed the maximum number of invalid attempts (say 5 retries), I want to lock the user out and extend the lockout by 30 seconds for each further invalid attempt.
I am protecting that endpoint with the node-rate-limiter-flexible package. (Feel free to suggest a better library for this.)
const opts = {
  points: 5,    // 5 attempts
  duration: 30, // per 30 seconds
};
const rateLimiter = new RateLimiterMemory(opts);
rateLimiter.consume(userid)
  .then((rateLimiterRes) => {
    // Login endpoint code
  })
  .catch((rateLimiterRes) => {
    // Too many invalid attempts
  });
The code above works fine for a maximum of 5 invalid attempts, after which it blocks the user for 30 seconds. But what I want is that once the user has consumed the maximum number of invalid attempts, each further invalid attempt extends the block by another 30 seconds (i.e., the block duration gradually increases with each invalid attempt, up to a maximum of 1 day).
Increase rateLimiterRes.msBeforeNext by 30 seconds every time the userId is blocked, and use the rateLimiter.block method to set up the new duration.
rateLimiter.consume(userid)
  .then((rateLimiterRes) => {
    // Login endpoint code
  })
  .catch((rateLimiterRes) => {
    const newBlockLifetimeSecs = Math.round(rateLimiterRes.msBeforeNext / 1000) + 30
    rateLimiter.block(userid, newBlockLifetimeSecs)
      .then(() => {
        // Too many invalid attempts
      })
      .catch(() => {
        // In case store limiter used (not in-memory)
      })
  });
There is also an example of Fibonacci-like increasing of block duration on the wiki.
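To honor the one-day maximum from the question, one option is to cap the computed lifetime with Math.min: a small sketch on top of the code above, where the 86,400-second cap is the only addition.
const ONE_DAY_SECS = 24 * 60 * 60; // 86,400 seconds: the asker's stated maximum
rateLimiter.consume(userid)
  .then((rateLimiterRes) => {
    // Login endpoint code
  })
  .catch((rateLimiterRes) => {
    // Extend the block by 30s per failed attempt, but never beyond 1 day.
    const newBlockLifetimeSecs = Math.min(
      Math.round(rateLimiterRes.msBeforeNext / 1000) + 30,
      ONE_DAY_SECS
    );
    rateLimiter.block(userid, newBlockLifetimeSecs)
      .then(() => {
        // Too many invalid attempts
      })
      .catch(() => {
        // In case a store-backed limiter is used (not in-memory)
      });
  });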

Node.js Calling functions as quickly as possible without going over some limit

I have multiple functions that call different API endpoints, and I need to call them as quickly as possible without going over some limit (20 calls per second, for example). My current solution is to have a delay and call a function once every 50 milliseconds for the example I gave, but I would like to call them as quickly as possible rather than just spacing the calls out evenly to match the rate limit.
function-rate-limit solved a similar problem for me. It spreads out calls to your function over time without dropping any of them. It still allows instantaneous calls to your function until the rate limit is reached, so it introduces no latency under normal circumstances.
Example from function-rate-limit docs:
var rateLimit = require('function-rate-limit');
// limit to 2 executions per 1000ms
var start = Date.now()
var fn = rateLimit(2, 1000, function (x) {
  console.log('%s ms - %s', Date.now() - start, x);
});
for (var y = 0; y < 10; y++) {
  fn(y);
}
results in:
10 ms - 0
11 ms - 1
1004 ms - 2
1012 ms - 3
2008 ms - 4
2013 ms - 5
3010 ms - 6
3014 ms - 7
4017 ms - 8
4017 ms - 9
You can try using queue from async. Be careful when doing this: it essentially behaves like a while(true) in other languages:
const async = require('async');
const concurrent = 10; // At most 10 concurrent ops
const tasks = Array(concurrent).fill().map((e, i) => i);
let pushBack; // let's create a ref to a lambda function
const myAsyncFunction = (task) => {
  // TODO: Swap with the actual implementation
  return Promise.resolve(task);
};
const q = async.queue((task, cb) => {
  myAsyncFunction(task)
    .then((result) => {
      pushBack(task);
      cb(null, result);
    })
    .catch((err) => cb(err, null));
}, tasks.length);
pushBack = (task) => q.push(task);
q.push(tasks);
What's happening here? We are saying "hey, run X tasks in parallel", and after each task completes we put it back in the queue, which is the equivalent of saying "run X tasks in parallel forever".
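If you would rather avoid a dependency, a token bucket gives the same "burst immediately, then throttle" behavior instead of evenly spaced calls. A minimal sketch, where callApi is a hypothetical placeholder for your endpoint call:
function tokenBucket(capacity, perSecond) {
  let tokens = capacity; // start full: allows an initial burst
  let last = Date.now();
  let timer = null;
  const queue = [];
  function pump() {
    const now = Date.now();
    // Refill tokens continuously based on elapsed time, capped at capacity.
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * perSecond);
    last = now;
    while (queue.length > 0 && tokens >= 1) {
      tokens -= 1;
      queue.shift()(); // fire the next waiting call immediately
    }
    if (queue.length > 0 && !timer) {
      timer = setTimeout(() => { timer = null; pump(); }, 20);
    }
  }
  return function limited(fn) {
    queue.push(fn);
    pump();
  };
}
// Usage: at most 20 calls per second, bursts allowed up to 20 at once.
const limited = tokenBucket(20, 20);
for (let i = 0; i < 100; i++) limited(() => callApi(i)); // callApi is hypothetical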

Inconsistent request behavior in Node when requesting large number of links?

I am currently using this piece of code to connect to a massive list of links (a total of 2458 links, dumped at https://pastebin.com/2wC8hwad) to get feeds from numerous sources and deliver them to users of my program.
It basically splits up one massive array into multiple batches (arrays), then forks a process to handle each batch, requesting every stored link and checking for a 200 status code. Only when a batch is complete is the next batch sent for processing, and when it's all done the forked process is disconnected. However, I'm facing issues concerning apparent inconsistency in how this performs with this logic, particularly the part where it requests the status code.
const req = require('./request.js')
const process = require('child_process')
const linkList = require('./links.json')
let processor
console.log(`Total length: ${linkList.length}`) // 2458 links
const batchLength = 400
const batchList = [] // Contains batches (arrays) of links
let currentBatch = []
for (var i in linkList) {
  if (currentBatch.length < batchLength) currentBatch.push(linkList[i])
  else {
    batchList.push(currentBatch)
    currentBatch = []
    currentBatch.push(linkList[i])
  }
}
if (currentBatch.length > 0) batchList.push(currentBatch)
console.log(`Batch list length by default is ${batchList.length}`)
// cutDownBatchList(1)
console.log(`New batch list length is ${batchList.length}`)
const startTime = new Date()
getBatchIsolated(0, batchList)
let failCount = 0
function getBatchIsolated (batchNumber) {
  console.log('Starting batch #' + batchNumber)
  let completedLinks = 0
  const currentBatch = batchList[batchNumber]
  if (!processor) processor = process.fork('./request.js')
  for (var u in currentBatch) { processor.send(currentBatch[u]) }
  processor.on('message', function (linkCompletion) {
    if (linkCompletion === 'failed') failCount++
    if (++completedLinks === currentBatch.length) {
      if (batchNumber !== batchList.length - 1) setTimeout(getBatchIsolated, 500, batchNumber + 1)
      else finish()
    }
  })
}
function finish() {
  console.log(`Completed, time taken: ${((new Date() - startTime) / 1000).toFixed(2)}s. (${failCount}/${linkList.length} failed)`)
  processor.disconnect()
}
function cutDownBatchList(maxBatches) {
  for (var r = batchList.length - 1; batchList.length > maxBatches && r >= 0; r--) {
    batchList.splice(r, 1)
  }
  return batchList
}
Below is request.js, using needle. (However, for some strange reason it may completely hang up on a particular site indefinitely - in that case, I just use this workaround)
const needle = require('needle')
function connect (link, callback) {
  const options = {
    timeout: 10000,
    read_timeout: 8000,
    follow_max: 5,
    rejectUnauthorized: true
  }
  const request = needle.get(link, options)
    .on('header', (statusCode, headers) => {
      if (statusCode === 200) callback(null, link)
      else request.emit('err', new Error(`Bad status code (${statusCode})`))
    })
    .on('err', err => callback(err, link))
}
process.on('message', function (linkRequest) {
  connect(linkRequest, function (err, link) {
    if (err) {
      console.log(`Couldn't connect to ${link} (${err})`)
      process.send('failed')
    } else process.send('success')
  })
})
In theory, I think this should perform perfectly fine - it spawns off a separate process to handle the dirty work in sequential batches so it's not overloaded, and it's super scalable. However, when using the full list of 2458 links with a total of 7 batches, I often get massive "socket hang up" errors on random batches on almost every trial, similar to what would happen if I requested all the links at once.
If I cut down the number of batches to 1 using the function cutDownBatchList, it performs perfectly fine on almost every trial. This is all happening on a Linux Debian VPS with two 3.1GHz vCores and 4 GB RAM from OVH, on Node v6.11.2.
One thing I also noticed is that if I increase the timeout to 30000 (30 sec) in request.js for 7 batches, it works as intended - however, it works perfectly fine with a much lower timeout when I cut it down to 1 batch. If I try to request all 2458 links at once with a higher timeout, I also face no issues (which basically makes this mini algorithm useless if I can't cut down the timeout via batch handling of links). This all goes back to the inconsistent-behavior issue.
The best TL;DR I can do: trying to request a bunch of links in sequential batches in a forked child process succeeds almost every time with a lower number of batches, but fails consistently with the full number of batches, even though the behavior should be the same since it's handling them in isolated batches.
Any help would be greatly appreciated in solving this issue, as I just cannot for the life of me figure it out!
