Invalid character in header content ["0"] - node.js

I am looking to implement a retry mechanism using retry-axios, and I have successfully installed the package in my Node project.
const baseUrl = `https://mock.codes/500`
const myAxiosInstance = axios.create();
myAxiosInstance.defaults.raxConfig = {
  retry: 5,
  retryDelay: 5000,
  backoffType: 'static',
  instance: myAxiosInstance,
  onRetryAttempt: err => {
    const cfg = rax.getConfig(err);
    console.log(`Retry attempt #${cfg.currentRetryAttempt}`);
  }
};
const interceptorId = rax.attach(myAxiosInstance);
const res = await myAxiosInstance.get(`${baseUrl}`);
The retry operation is attempted only once; afterward, I get an Invalid character in header content ["0"] error.
I need to retry the operation if the response is 500 or 400.
Thanks in advance.

As Phil mentioned in his comment, this is a bug in Axios itself. It is known to affect Axios 1.1.0 and 1.1.2 at the very least, and it was affecting me in 1.1.3 as well.
The fix is here but has not yet been approved: https://github.com/axios/axios/pull/5090
Commenters in the GitHub issues suggested downgrading Axios to pre-1.0.0. I am no longer experiencing this issue after switching to:
axios 0.27.2
axios-retry 3.3.1
Hope that helps!

Since the above answer highlights a bug in the axios package itself, I recommend manually resolving the issue as follows:
const axios = require("axios");
const baseUrl = `https://mock.codes/500`;
let retryCount = 0; // Track the number of retries so far

async function test() {
  try {
    const res = await axios.get(baseUrl); // Make the request
    console.log(res); // Log the successful response
  } catch (err) {
    // err.response is undefined for network errors, so guard before reading it
    if ((!err.response || err.response.status >= 400) && retryCount < 10) {
      console.log(`Retrying, attempt #${retryCount + 1}...`);
      retryCount += 1;
      await test();
    } else {
      console.log(`Retry count exceeded 10 times.`); // Stop retrying to avoid an infinite loop
    }
  }
}
test();
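The same idea can also be written as a small standalone helper with a delay between attempts, which the recursive version above lacks. This is only a sketch: retryWithDelay and its parameters are my own names, not from any library.

```javascript
// A generic retry helper (a sketch, not part of the original answer): it
// retries an async function with a fixed delay between attempts until it
// succeeds or the attempts run out, then rethrows the last error.
async function retryWithDelay(fn, retries = 5, delayMs = 1000) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of attempts, give up
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

For the question's case, something like `retryWithDelay(() => axios.get(baseUrl), 10, 5000)` would retry the request up to 10 times with 5 seconds between attempts.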

To add to what Appstronaut Studios said above,
axios version 1.3.2 just came out 2 days ago, and that seems to have fixed the issue as well. https://github.com/axios/axios/releases/tag/v1.3.2

Related

onResponse from RequestHook (testcafe) doesn't work

I use TestCafe with Cucumber.
After updating TestCafe to version 1.20.0, some requests don't get a response.
I implemented a request hook like this:
class CustomRequestHook extends RequestHook {
  constructor(requestFilterRules, responseEventConfigureOpts) {
    super(requestFilterRules, responseEventConfigureOpts);
    this.requests = {};
  }

  async onRequest(event) {
    let requestInfo = event._requestInfo;
    this.requests[requestInfo.requestId] = {
      url: requestInfo.url,
      method: requestInfo.method,
      contentType: requestInfo.headers['content-type'],
      body: requestInfo.body,
      startTime: process.hrtime(),
      startDate: new Date().toUTCString()
    };
  }

  async onResponse(event) {
    let request = this.requests[event.requestId];
    const diffArray = process.hrtime(request.startTime);
    request.duration = Math.ceil(diffArray[0] * 1000 + diffArray[1] / 1000000);
    request.statusCode = event.statusCode;
  }

  getRequests() {
    return Object.values(this.requests);
  }
}
At the end of each scenario I write info about the requests to a file, but statusCode and duration are undefined for some requests, so the UI page doesn't load and the test fails. This is an intermittent bug: for the same requests in different scenarios, the response may or may not be received.
With TestCafe 14.0.0 it works OK.
Has anyone encountered a similar problem?
I tried updating TestCafe to version 2.2.0, but it did not help.

Web3 BatchRequest always returning undefined, what am I doing wrong?

I'm trying to use the web3 BatchRequest in order to fetch token balances all together. When I call batch.execute() it returns undefined instead of the resolved requests that have been added to the batch.
Can someone enlighten me as to where I am messing things up?
Here is my code.
async generateContractFunctionList(
  address: Address,
  tokens: Token[],
  blockNumber: number
) {
  const batch = new this.web3.BatchRequest();
  for (let i = 0; i < tokens.length; i++) {
    const contract = new this.web3.eth.Contract(balanceABI as AbiItem[]);
    contract.options.address = tokens[i].address;
    batch.add(
      contract.methods
        .balanceOf(address.address)
        .call.request({}, blockNumber)
    );
  }
  return batch;
}

async updateBalances() {
  try {
    const addresses = await this.addressService.find();
    const tokens = await this.tokenService.find();
    const blockNumber = await this.web3.eth.getBlockNumber();
    for (let i = 0; i < addresses.length; i++) {
      const address = addresses[i];
      const batch = this.generateContractFunctionList(address, tokens, blockNumber);
      const response = await (await batch).execute();
      console.log(response); // returns undefined
    }
  } catch (error: unknown) {
    if (error instanceof Error) {
      console.log(`UpdateBalanceService updateBalances`, error.message);
    }
  }
}
Why does batch.execute() not return anything (it's void)? I went by the example from this article and modified it to my needs, but did not change much of the stuff that could be messing it up:
https://chainstack.com/the-ultimate-guide-to-getting-multiple-token-balances-on-ethereum/
When I add a callback function to batch.add and console.log in it, the balance gets logged to the console. But I am trying to use async/await on .execute(), so how can I get a result from the method calls with await batch.execute() and have all the callback results in there, as written in the blog post?
A quick and dirty solution is to use an outdated version of the package.
package.json:
....
"dependencies": {
  ...
  "web3": "^2.0.0-alpha.1",
  ...
}
....
I'm a developer advocate at Chainstack.
batch.add() requires a callback function as the last parameter. You can either change your code to pass a callback or, as mentioned above, use version 2.0.0-alpha as used in the article.
We'll update the article soon to use the latest version of web3.js.
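The callback requirement can be reconciled with the question's async/await style by wrapping each callback in a Promise. A sketch (promisifyBatchCall is my own helper name; the batch object shape is assumed from web3's API):

```javascript
// Wrap a batch request's node-style callback in a Promise. makeRequest
// receives the (error, result) callback and must return the request
// object that gets added to the batch.
function promisifyBatchCall(batch, makeRequest) {
  return new Promise((resolve, reject) => {
    batch.add(makeRequest((error, result) => (error ? reject(error) : resolve(result))));
  });
}
```

With the question's code, that would look roughly like `promisifyBatchCall(batch, (cb) => contract.methods.balanceOf(address.address).call.request({}, blockNumber, cb))` for each token, followed by a single `batch.execute()` and `await Promise.all(...)` over the collected promises.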

Google cloud pub/sub function gives "The requested snapshot version is too old" when querying firestore

I have a gcloud pub/sub function that performs a simple query on a collection. It was working fine before Oct 08; now I am seeing "The requested snapshot version is too old" error messages.
I created an HTTP function with the same code and ran it manually; it works perfectly fine.
Here is the function:
// 0 3 * * * - at 03:00 AM every day
exports.GenerateRankings = functions.pubsub.schedule('0 3 * * *')
  .onRun((context) => {
    console.log("GenerateRankings Task started")
    const playersCollection = admin.firestore().collection('players')
    playersCollection.orderBy("Coin", "desc").get()
      .then((qs) => {
        console.log("Fetching Players by Coin")
        // some stuff
        return true
      })
      .catch((error) => {
        console.error("Error fetching players", error)
        return false
      })
  })
And here is the error stack:
9 FAILED_PRECONDITION: The requested snapshot version is too old.
    at Object.callErrorFromStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client.js:327:49)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:305:181)
    at /workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:124:78
    at processTicksAndRejections (internal/process/task_queues.js:79:11)
Caused by: Error
    at Query._get (/workspace/node_modules/@google-cloud/firestore/build/src/reference.js:1466:23)
    at Query.get (/workspace/node_modules/@google-cloud/firestore/build/src/reference.js:1455:21)
    at /workspace/index.js:22:47
    at cloudFunction (/workspace/node_modules/firebase-functions/lib/cloud-functions.js:130:23)
    at /layers/google.nodejs.functions-framework/functions-framework/node_modules/@google-cloud/functions-framework/build/src/invoker.js:198:28
    at processTicksAndRejections (internal/process/task_queues.js:97:5) {
  code: 9,
  details: 'The requested snapshot version is too old.',
  metadata: Metadata { internalRepr: Map {}, options: {} }
}
I know there is another unanswered question
"The requested snapshot version is too old." error in Firestore
similar to this. I am facing this problem with pub/sub-functions.
Thanks for your help.
In case someone faces this problem, I'll answer my own question.
After a lot of reading and testing, I noticed there was a warning in my logs: "Function returned undefined, expected Promise or value". Because I thought I was returning a promise from my function, I had not paid attention to this.
Adding a return at the top of my function fixed the warning, and my function has been running successfully for 5 days:
return exports.GenerateRankings = functions.pubsub.schedule('0 3 * * *') ...
I encountered the same problem. In my case the code was as follows, and the error mentioned in this question was returned:
const userSnapshots = await this.firestore.collection('Users').where('archive', '==', false).get();
const users = userSnapshots.docs;
users.map(async (user) => {
  // code to update user documents
});
Then I changed the code to the following, and it worked without returning the error:
const userSnapshots = await this.firestore.collection('Users').where('archive', '==', false).get();
const users = userSnapshots.docs;
const markPromises = users.map(async (user) => {
  // code to update user documents
});
await Promise.all(markPromises);
I don't know if this is the correct answer to this question, but it worked for me.
In my case, I guess the problem was the size of the data and the time it took to process it. I split the data into chunks of 5000 elements:
const store = getFirestore(initializeApp(myConfig));
const collectionRef = store.collection("users");
const limit = 5000;
let docs: Array<QueryDocumentSnapshot<DocumentData>>;
let querySnapshot: QuerySnapshot<DocumentData>;
let query = collectionRef.limit(limit);
do {
  querySnapshot = await query.get();
  docs = querySnapshot.docs; // thanks to @prahack's answer
  for (const doc of docs) {
    await manageMyData(doc.id, doc.data());
  }
  if (docs.length > 0) {
    // Get the last visible document
    query = collectionRef.startAfter(docs[docs.length - 1]).limit(limit);
  }
} while (docs.length > 0);
Without the data processing, I can use chunks of 10,000 elements (limit = 10000), but with the processing I had to go down to 5000.
I spent hours trying to run a list of actions inside a pub/sub function, and then I decided to try the following code and it worked:
this.firestore.collection('Users').where('archive', '==', false).get().then(snap => {
  // Code here
});

How to make API requests with intervals with NodeJS

Sorry if this question has already been asked; I searched for it but couldn't find anything clear.
How can I make API requests at intervals with Node.js, for example one request every 20 seconds?
This is so I can respect the API limits; if I make all the requests at once, it'd crash.
Not sure if it matters, but I'm using Axios for the requests.
Please tell me if any other information is needed. Thank you!
It should be easy enough to do this with setInterval; the interval is specified in milliseconds.
If you need more control over when the api is called, e.g. using a cron expression to schedule calls, I'd suggest trying node-cron.
const axios = require("axios");

async function callApi() {
  let response = await axios({ url: "https://jsonplaceholder.typicode.com/users" });
  console.log("Response:", response.data);
}

function callApiEveryNSeconds(n) {
  setInterval(callApi, n * 1000);
}

callApiEveryNSeconds(20);
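If each request should only start after the previous one has finished (so a slow response can never overlap with the next call, which setInterval does not guarantee), a plain sequential loop with a delay also works. A sketch with my own helper names, callSequentially and delay:

```javascript
// Wait the given number of milliseconds.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Call fn for each item in order, pausing intervalMs between calls.
async function callSequentially(items, fn, intervalMs) {
  const results = [];
  for (const item of items) {
    results.push(await fn(item)); // next call starts only after this resolves
    await delay(intervalMs);
  }
  return results;
}
```

For the question's case, something like `callSequentially(urls, (url) => axios.get(url), 20000)` would fire one request every 20 seconds.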
This was the best solution in my case:
let i = 0;
let idInterval = setInterval(() => {
  if (i < dataFortheAPI.length) {
    functionThatMakesTheApiRequest(dataFortheAPI[i]);
    i++;
  } else {
    clearInterval(idInterval);
  }
}, 22000);
You can use https://www.npmjs.com/package/request for the API request and setInterval().

Throttle and queue up API requests due to per second cap

I use mikeal/request to make API calls. One of the APIs I use most frequently, the Shopify API, recently put out a new call limit, and I'm seeing errors like:
Exceeded 6.0 calls per second for api client. Slow your requests or contact support for higher limits.
I've already gotten an upgrade, but regardless of how much bandwidth I get, I have to account for this. A large majority of the requests to the Shopify API are within async.map() functions, which loop asynchronous requests and gather the bodies.
I'm looking for any help, perhaps a library that already exists, that would wrap around the request module and block, sleep, throttle, allocate, and manage the many simultaneous requests that fire off asynchronously, limiting them to, say, 6 requests at a time. I have no problem working on such a project if it doesn't exist. I just don't know how to handle this kind of situation, and I'm hoping for some kind of standard.
I made a ticket with mikeal/request.
For an alternative solution, I used the node-rate-limiter to wrap the request function like this:
var request = require('request');
var RateLimiter = require('limiter').RateLimiter;
var limiter = new RateLimiter(1, 100); // at most 1 request every 100 ms

var throttledRequest = function () {
  var requestArgs = arguments;
  limiter.removeTokens(1, function () {
    request.apply(this, requestArgs);
  });
};
The npm package simple-rate-limiter seems to be a very good solution to this problem.
Moreover, it is easier to use than node-rate-limiter and async.queue.
Here's a snippet that shows how to limit all requests to ten per second.
var limit = require("simple-rate-limiter");
var request = limit(require("request")).to(10).per(1000);
I've run into the same issue with various APIs. AWS is famous for throttling as well.
A couple of approaches can be used. You mentioned the async.map() function. Have you tried async.queue()? The queue method should allow you to set a solid limit (like 6), and anything over that amount will be placed in the queue.
Another helpful tool is oibackoff. That library will let you back off your request and try again if you get an error back from the server.
It can be useful to wrap the two libraries to make sure both your bases are covered: async.queue to ensure you don't go over the limit, and oibackoff to ensure you get another shot at getting your request in if the server tells you there was an error.
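The async.queue idea, capping the number of in-flight requests at a fixed limit while the rest wait, can be sketched with plain promises. This is only an illustration of the concept (createLimiter is my own name; async.queue's real API differs):

```javascript
// Return a function that runs async tasks with at most `limit` in flight;
// excess tasks queue up and start as running ones finish.
function createLimiter(limit) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= limit || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task().then(resolve, reject).finally(() => {
      active--;
      next(); // start the next queued task, if any
    });
  };
  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}
```

For the Shopify case, `const limit6 = createLimiter(6);` and then wrapping each call as `limit6(() => makeApiCall(...))` would keep at most 6 requests running at once.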
My solution using modern vanilla JS:
function throttleAsync(fn, wait) {
  let lastRun = 0;

  async function throttled(...args) {
    const currentWait = lastRun + wait - Date.now();
    const shouldRun = currentWait <= 0;
    if (shouldRun) {
      lastRun = Date.now();
      return await fn(...args);
    } else {
      return await new Promise(function (resolve) {
        setTimeout(function () {
          resolve(throttled(...args));
        }, currentWait);
      });
    }
  }

  return throttled;
}
// Usage:
const run = console.log.bind(console);
const throttledRun = throttleAsync(run, 1000);
throttledRun(1); // Will execute immediately.
throttledRun(2); // Will be delayed by 1 second.
throttledRun(3); // Will be delayed by 2 seconds.
In the async module, this requested feature was closed as "won't fix".
The reason given in 2016 is "managing that kind of construct properly is a hard problem"; see the right side of https://github.com/caolan/async/issues/1314
The reason given in 2013 is "wouldn't scale to multiple processes"; see https://github.com/caolan/async/issues/37#issuecomment-14336237
There is a solution using the leaky-bucket or token-bucket model; it is implemented in the "limiter" npm module as RateLimiter.
For a RateLimiter example, see https://github.com/caolan/async/issues/1314#issuecomment-263715550
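The token-bucket model behind RateLimiter can be sketched in a few lines of plain JavaScript; this is an illustration of the model, not the limiter module's real API (tokenBucket is my own name):

```javascript
// A minimal token bucket: the bucket holds up to `capacity` tokens and
// refills at `ratePerSec` tokens per second. Each call to take() consumes
// one token, waiting first if the bucket is empty.
function tokenBucket(capacity, ratePerSec) {
  let tokens = capacity;
  let last = Date.now();
  return async function take() {
    const now = Date.now();
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * ratePerSec);
    last = now;
    if (tokens >= 1) {
      tokens -= 1;
      return;
    }
    // Not enough tokens: wait until one has accumulated, then try again.
    const waitMs = ((1 - tokens) / ratePerSec) * 1000;
    await new Promise((resolve) => setTimeout(resolve, waitMs));
    return take();
  };
}
```

For the Shopify limit, `const take = tokenBucket(6, 6);` and calling `await take();` before each request would allow bursts of up to 6 and a sustained rate of 6 per second.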
Another way is to use PromiseThrottle. I used this; a working example is below:
var PromiseThrottle = require('promise-throttle');

let RATE_PER_SECOND = 5; // 5 = 5 per second, 0.5 = 1 per every 2 seconds

var pto = new PromiseThrottle({
  requestsPerSecond: RATE_PER_SECOND, // up to 5 requests per second
  promiseImplementation: Promise // the Promise library you are using
});

let timeStart = Date.now();

var myPromiseFunction = function (arg) {
  return new Promise(function (resolve, reject) {
    console.log("myPromiseFunction: " + arg + ", " + (Date.now() - timeStart) / 1000);
    let response = arg;
    return resolve(response);
  });
};

let NUMBER_OF_REQUESTS = 15;
let promiseArray = [];
for (let i = 1; i <= NUMBER_OF_REQUESTS; i++) {
  promiseArray.push(
    pto.add(myPromiseFunction.bind(this, i)) // passing an argument using bind()
  );
}

Promise
  .all(promiseArray)
  .then(function (allResponsesArray) { // [1 .. 15]
    console.log("All results: " + allResponsesArray);
  });
Output:
myPromiseFunction: 1, 0.031
myPromiseFunction: 2, 0.201
myPromiseFunction: 3, 0.401
myPromiseFunction: 4, 0.602
myPromiseFunction: 5, 0.803
myPromiseFunction: 6, 1.003
myPromiseFunction: 7, 1.204
myPromiseFunction: 8, 1.404
myPromiseFunction: 9, 1.605
myPromiseFunction: 10, 1.806
myPromiseFunction: 11, 2.007
myPromiseFunction: 12, 2.208
myPromiseFunction: 13, 2.409
myPromiseFunction: 14, 2.61
myPromiseFunction: 15, 2.811
All results: 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
We can clearly see the rate from the output, i.e. 5 calls per second.
The other solutions were not to my taste. Researching further, I found promise-ratelimit, which gives you an API that you can simply await:
var rate = 2000; // in milliseconds
var throttle = require('promise-ratelimit')(rate);

async function queryExampleApi() {
  await throttle();
  var response = await get('https://api.example.com/stuff');
  return response.body.things;
}
The above example ensures you only make queries to api.example.com every 2000 ms at most. In other words, the very first request does not wait 2000 ms.
Here's my solution: use a library like request-promise or axios and wrap the call in this promise.
var Promise = require("bluebird");
// http://stackoverflow.com/questions/28459812/way-to-provide-this-to-the-global-scope#28459875
// http://stackoverflow.com/questions/27561158/timed-promise-queue-throttle

module.exports = promiseDebounce;

function promiseDebounce(fn, delay, count) {
  var working = 0, queue = [];
  function work() {
    if ((queue.length === 0) || (working === count)) return;
    working++;
    Promise.delay(delay).tap(function () { working--; }).then(work);
    var next = queue.shift();
    next[2](fn.apply(next[0], next[1]));
  }
  return function debounced() {
    var args = arguments;
    return new Promise(function (resolve) {
      queue.push([this, args, resolve]);
      if (working < count) work();
    }.bind(this));
  };
}
I use the async-sema module to throttle HTTP requests, i.e. to send HTTP requests under a rate limit.
Here is an example:
A simple Node.js server adds the express-rate-limit middleware to an API so that the API has a rate-limit feature. Let's say this is the Shopify API in your case.
server.ts:
import express from 'express';
import rateLimit from 'express-rate-limit';
import http from 'http';

const port = 3000;
const limiter = rateLimit({
  windowMs: 1000,
  max: 3,
  message: 'Max RPS = 3',
});

async function createServer(): Promise<http.Server> {
  const app = express();
  app.get('/place', limiter, (req, res) => {
    res.end('Query place success.');
  });
  return app.listen(port, () => {
    console.log(`Server is listening on http://localhost:${port}`);
  });
}

if (require.main === module) {
  createServer();
}

export { createServer };
On the client side, we want to send HTTP requests with concurrency = 3 and a per-second cap between them. I put the client-side code inside a test case, so don't be surprised.
server.test.ts:
import { RateLimit } from 'async-sema';
import rp from 'request-promise';
import { expect } from 'chai';
import { createServer } from './server';
import http from 'http';

describe('20253425', () => {
  let server: http.Server;

  beforeEach(async () => {
    server = await createServer();
  });

  afterEach((done) => {
    server.close(done);
  });

  it('should throttle http request per second', async () => {
    const url = 'http://localhost:3000/place';
    const n = 10;
    const lim = RateLimit(3, { timeUnit: 1000 });
    const resArr: string[] = [];
    for (let i = 0; i < n; i++) {
      await lim();
      const res = await rp(url);
      resArr.push(res);
      console.log(`[${new Date().toLocaleTimeString()}] request ${i + 1}, response: ${res}`);
    }
    expect(resArr).to.have.lengthOf(n);
    resArr.forEach((res) => {
      expect(res).to.be.eq('Query place success.');
    });
  });
});
Test results (pay attention to the time of each request):
20253425
Server is listening on http://localhost:3000
[8:08:17 PM] request 1, response: Query place success.
[8:08:17 PM] request 2, response: Query place success.
[8:08:17 PM] request 3, response: Query place success.
[8:08:18 PM] request 4, response: Query place success.
[8:08:18 PM] request 5, response: Query place success.
[8:08:18 PM] request 6, response: Query place success.
[8:08:19 PM] request 7, response: Query place success.
[8:08:19 PM] request 8, response: Query place success.
[8:08:19 PM] request 9, response: Query place success.
[8:08:20 PM] request 10, response: Query place success.
✓ should throttle http request per second (3017ms)
1 passing (3s)
So many great options here. Here is the one that I am using in one of my projects:
axios-request-throttle
Usage:
import axios from 'axios';
import axiosThrottle from 'axios-request-throttle';
axiosThrottle.use(axios, { requestsPerSecond: 5 });
