I am wondering if it's possible to change the default timeout settings for the Azure Storage BlobService. From the documentation I can see that the default settings are:
Calls to get a blob, get page ranges, or get a block list are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out.
Calls to write a blob, write a block, or write a page are permitted 10 minutes per megabyte to complete. If an operation is taking longer than 10 minutes per megabyte on average, it will time out.
Looking through the source code I see that the BlobService.getServiceProperties and setServiceProperties are listed with these two parameters:
@param {int} [options.timeoutIntervalInMs] The server timeout interval, in milliseconds, to use for the request.
@param {int} [options.maximumExecutionTimeInMs] The maximum execution time, in milliseconds, across all potential retries, to use when making this request. The maximum execution time interval begins at the time that the client begins building the request. The maximum execution time is checked intermittently while performing requests, and before executing retries.
Are these two parameters equal to the items above?
Now, when I try to use getServiceProperties with the following code, I am not given any information other than the logging, metrics, and CORS data, which is what the GitHub page says it returns:
blobSvc.getServiceProperties(function(error, result, response) {
if (!error) {
console.log('Result: ', result);
console.log('Response: ', response);
} else {
console.log(error);
}
});
Result: { Logging:
{ Version: '1.0',
Delete: false,
Read: false,
Write: false,
RetentionPolicy: { Enabled: false } },
HourMetrics:
{ Version: '1.0',
Enabled: true,
IncludeAPIs: true,
RetentionPolicy: { Enabled: true, Days: 7 } },
MinuteMetrics:
{ Version: '1.0',
Enabled: false,
RetentionPolicy: { Enabled: false } },
Cors: {} }
Response: { isSuccessful: true,
statusCode: 200,
body:
{ StorageServiceProperties:
{ Logging: [Object],
HourMetrics: [Object],
MinuteMetrics: [Object],
Cors: '' } },
headers:
{ 'transfer-encoding': 'chunked',
'content-type': 'application/xml',
server: 'Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0',
'x-ms-request-id': '45a3cfeb-0001-0127-0cf7-0149a8000000',
'x-ms-version': '2015-02-21',
date: 'Thu, 08 Oct 2015 18:32:36 GMT',
connection: 'close' },
md5: undefined }
So really I am confused about the mismatch with the documentation and whether it is even possible to modify any timeout settings.
A sample call with the timeout option is:
var options = { maximumExecutionTimeInMs: 1000 };
blobSvc.createBlockBlobFromLocalFile('mycontainer', 'myblob', 'test.txt', options, function(error, result, response) {
if(!error) {
// file uploaded
}
});
You might also want to check the APIs and their options at: http://azure.github.io/azure-storage-node/BlobService.html
The timeout settings are not 'properties associated with the service', but instead they are 'properties associated with a call to the Storage Library'. The timeoutIntervalInMs setting and the maximumExecutionTimeInMs setting are parameters that you can set on the 'options' object that can be passed in with pretty much every operation (including uploading and downloading blobs). So, if you want to modify the timeouts for a given operation, just pass the desired setting on the 'options' object when you call into the library.
The 'timeoutIntervalInMs' is the timeout sent in a request to the Azure Storage service. This is the amount of time that the service will spend attempting to fulfill the request before it times out. This is the setting in the documentation you mentioned here -
https://msdn.microsoft.com/en-us/library/azure/dd179431.aspx
If a call into the Storage Client makes multiple HTTP(S) requests to the Storage Service, this value will be passed with each call.
The 'maximumExecutionTimeInMs' is a client timeout. This is tracked by the Storage Client across all Storage Requests made from that single API call. For example, if you have retries configured in the client, this value will be checked before every potential retry, and the retry will not continue if more than the 'maximumExecutionTimeInMs' has elapsed since the start of the first request.
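For example, both values can be set on the same options object (a minimal sketch; the timeout values, container, blob, and file names below are placeholders) and passed to a download call such as getBlobToLocalFile:

var options = {
  timeoutIntervalInMs: 90000,       // server-side timeout sent with each HTTP request
  maximumExecutionTimeInMs: 300000  // client-side cap across the call and all of its retries
};
blobSvc.getBlobToLocalFile('mycontainer', 'myblob', 'output.txt', options, function(error, result, response) {
  if (!error) {
    // download finished within both limits
  }
});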
Hope this makes sense.
Related
I have a lambda function which calls a 3rd party API using axios. The call creates a new entry in their database and works fine, but the lambda function returns 503 Service Unavailable.
Following is my code -
let algcon = {
method: 'post',
url: constants.API_URL,
timeout: 1000 * 7,
headers: {
'Content-Type': 'application/json',
"User-Agent": "axios 0.21.1",
'token': myToken ? JSON.stringify(myToken.access) : ''
},
data: invoiceData,
};
await axios(algcon).then(function (response) {
}).catch(function (error) {
console.log(error) //here it is throwing 503 service unavailable error
});
I have increased the lambda execution time but am still getting the same error. Please help!!
Your code looks fine to me. The default timeout of an API Gateway response is 6500 milliseconds.
You can update it as follows -
Go to API Gateway
Select the Lambda function
Click on integration
Click on Manage integration
Update default timeout
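If you would rather script the change, a minimal sketch with the AWS SDK for JavaScript might look like this (assuming an HTTP API in API Gateway v2; the region, API id, and integration id are placeholders you would look up first):

const AWS = require('aws-sdk');
const apigateway = new AWS.ApiGatewayV2({ region: 'us-east-1' }); // placeholder region

// raise the integration timeout to 10 seconds
apigateway.updateIntegration({
  ApiId: 'your-api-id',                 // placeholder
  IntegrationId: 'your-integration-id', // placeholder
  TimeoutInMillis: 10000
}).promise()
  .then(() => console.log('integration timeout updated'))
  .catch(console.error);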
In my application I am making authenticated requests to the GitHub search API with a token. I am making a request every 2s to stay within the primary rate limit of 30 reqs per minute (so not concurrently) and I am also validating every request with the GitHub rate limit API before I make the actual search API call.
Even in the rare case of accidental concurrent requests, they are not likely to be for the same token.
I seem to be following all the rules mentioned in the Primary and Secondary best practices documentation. Despite this, my application keeps getting secondary rate limited, and I have no idea why. Could anyone help me with why this may be happening?
EDIT:
Sample code:
const search = async function(query, token) {
var limitResponse;
try {
limitResponse = JSON.parse(await rp({
uri: "https://api.github.com/rate_limit",
headers: {
'User-Agent': 'Request-Promise',
'Authorization': 'token ' + token
},
timeout: 20000
}));
} catch (e) {
logger.error("error while fetching rate limit from github", token);
throw new Error(Codes.INTERNAL_SERVER_ERROR);
}
if (limitResponse.resources.search.remaining === 0) {
logger.error("github rate limit reached to zero");
throw new Error(Codes.INTERNAL_SERVER_ERROR);
}
try {
var result = JSON.parse(await rp({
uri: "https://api.github.com/search/code",
qs: {
q: query,
page: 1,
per_page: 50
},
headers: {
'User-Agent': 'Request-Promise',
'Authorization': 'token ' + token
},
timeout: 20000
}));
logger.info("successfully fetched data from github", token);
/// process response
} catch (e) {
logger.error("error while fetching data from github" token);
throw new Error(Codes.INTERNAL_SERVER_ERROR);
}
};
Sample Architecture:
A query string (from a list of query strings) and the appropriate token to make the API call with are inserted into a rabbitmq x-delayed queue, with a delay of index*2000 ms per message (hence they are spaced out by 2s), and the function above is the consumer for that queue. When the consumer throws an error, the message is nack'd and sent to a dead letter queue.
const { delayBetweenMessages } = require('../rmq/queue_registry').GITHUB_SEARCH;
await __.asyncForEach(queries, async (query, index) => {
await rmqManager.publish(require('../rmq/queue_registry').GITHUB_SEARCH, query, {
headers: { 'x-delay': index * delayBetweenMessages }
})
})
It looks like there is not an issue in your code. I was just browsing from my browser and using the GitHub search bar, and I hit the secondary rate limit just by searching. So it looks like the search API is internally using concurrency, and it might be GitHub's own bug.
You hardcoded a sleep time of 2s, but, according to the documentation, when you trigger the secondary rate limit you have to wait at least the time indicated in the Retry-After header of the response.
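Honoring that header with the request-promise client from your sample could look like this sketch (an assumption here: request-promise rejects non-2xx responses with a StatusCodeError exposing statusCode and response, and secondary-limit responses typically arrive as 403 with a Retry-After header):

async function waitForRetryAfter(e) {
  var retryAfter = e.response && e.response.headers['retry-after'];
  if (e.statusCode === 403 && retryAfter) {
    // GitHub asks clients to wait this many seconds before retrying
    await new Promise(function (resolve) {
      setTimeout(resolve, Number(retryAfter) * 1000);
    });
    return true; // waited; safe to retry
  }
  return false; // a different failure; don't retry
}

You could call this from the catch block around the search request and, when it returns true, requeue the message instead of nack'ing it straight to the dead letter queue.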
I'm using the Node.js request module to send emails via SendGrid. I am getting the error ETIMEDOUT. I intend to use the node-retry npm module to retry the sending, but how can I detect what the error code is? Does the SendGrid API return the error code somehow? Also, when I do detect the error code, is it just a matter of waiting X seconds to send the email again? If so, how do I determine what X is?
_makeAPIRequest (httpMethod, url, body) {
var defer = Q.defer();
var options = {
method: httpMethod,
url: this.SENDGRID_API_URL + url,
headers: {
'content-type': 'application/json',
authorization: 'Bearer ' + SENDGRID_API_KEY
},
body: body,
json: true
};
request(options, function (error, response, body) {
if (error) {
console.dir(error);
return defer.reject(error);
}
defer.resolve(body);
});
return defer.promise;
}
ETIMEDOUT is an OS error message. It indicates a failed attempt, at the TCP/IP level, to connect to a remote host, probably the one mentioned in SENDGRID_API_URL.
The default value for that is https://api.sendgrid.com/v3/. For some reason, possibly an outbound firewall or some sort of network configuration trouble, your Node.js program cannot reach that URL and waits for a response. You should check your value of that URL.
If this is intermittent (doesn't happen all the time) you probably can wait a few seconds and try again.
If it starts happening after you've sent a bunch of emails, you may be hitting a limit at sendgrid. Pace out your sending of emails; try putting a half-second delay between them.
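For the retry itself, here is a minimal sketch built on the _makeAPIRequest method above (the endpoint path, attempt count, and delays are placeholder choices, not SendGrid recommendations):

function sendWithRetry(apiClient, body, attemptsLeft, delayMs) {
  return apiClient._makeAPIRequest('POST', 'mail/send', body)
    .catch(function (error) {
      // connection-level failures from the request module carry error.code
      if (error.code === 'ETIMEDOUT' && attemptsLeft > 0) {
        // wait, then retry with an exponentially growing delay
        return Q.delay(delayMs).then(function () {
          return sendWithRetry(apiClient, body, attemptsLeft - 1, delayMs * 2);
        });
      }
      throw error; // a different error, or out of attempts
    });
}

// e.g. allow 3 retries, starting with a 500 ms wait
sendWithRetry(mailer, emailBody, 3, 500);

Here apiClient is whatever object owns _makeAPIRequest, and Q.delay comes from the Q library already used in your sample.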
I'm developing a NodeJS service that continuously polls a REST API to get state changes of a large number of resources in near real-time. No other protocol is available.
The polling interval is 5 seconds. The number of resources is usually between 100-500, so let's consider:
Usually it does 50 HTTP requests per second
Responses often take more than 5 seconds, so requests for the same resource may overlap
Often the load average gets to over 8.00 (on an 8-core VM) and/or the app crashes.
I want to ensure that the resource usage is as minimal as possible while handling this workload.
Here's what I've done:
HTTP2 is available (but unsupported by axios?)
Set process.env.UV_THREADPOOL_SIZE = 8
Use the axios library to make async requests
Reusing the same axios instance for all requests
Use an HTTPSAgent with keep-alive
The relevant code:
'use strict'
process.env.UV_THREADPOOL_SIZE = 8
const HTTPS = require('https')

const axios = require('axios').create({
  baseURL: 'https://api.example.com',
  httpsAgent: new HTTPS.Agent({
keepAlive: true,
//maxSockets: 256,
//maxFreeSockets: 128,
scheduling: 'fifo',
timeout: 1000 * 15
}),
timeout: 1000 * 15,
})
function loadResource(id){
  return axios.request({
    method: 'GET',
    url: `/resource/${id}`
  })
  .then((res) => {
    handle(res)
  })
  .catch((err) => {
    // log and swallow errors so one failed poll doesn't reject the chain
    process.logger.error('Error with request (...)')
  })
}
for(let id of [1,2,3,4,(...)]){
setInterval(() => loadResource(id), 5000)
}
function handle(res){...}
What I'm considering:
Waiting for a response before making the next request (and looking for the best approach to do this; see the sketch below)
What else can be done to handle the requests in an optimal way?
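For the first idea, one possible sketch (reusing loadResource and the 5000 ms interval from the question; the ids are placeholders) is to schedule the next poll only after the previous one settles, using a recursive setTimeout instead of setInterval:

function pollResource(id) {
  loadResource(id).finally(() => {
    // schedule the next poll only once this one has finished, so slow
    // responses can never stack concurrent requests for the same resource
    setTimeout(() => pollResource(id), 5000)
  })
}

for (let id of [1, 2, 3, 4]) { // placeholder ids
  pollResource(id)
}

This keeps at most one in-flight request per resource, at the cost of the effective polling interval stretching when the API is slow.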
Following is my node.js call to retrieve some data, which is taking more than 1 minute, so the request times out at 60 seconds. I put a console log for the latency as well. I have configured the timeout for 120 seconds, but it is not taking effect. I know the default Node.js server timeout is 120 seconds, but I still get the 60-second timeout from the request module for this call. Please provide your insights on this.
var options = {
method: 'post',
url:url,
timeout: 120000,
json: true,
headers: {
"Content-Type": "application/json",
"X-Authorization": "abc",
"Accept-Encoding":"gzip"
}
}
var startTime = new Date();
request(options, function(e, r, body) {
var endTime = new Date();
var latencyTime = endTime - startTime;
console.log("Ended. latencyTime:"+latencyTime/1000);
res.status(200).send(body);
});
From the request options docs, scrolling down to the timeout entry:
timeout - Integer containing the number of milliseconds to wait for a server to send response headers (and start the response body) before aborting the request. Note that if the underlying TCP connection cannot be established, the OS-wide TCP connection timeout will overrule the timeout option (the default in Linux can be anywhere from 20-120 seconds).
Note the last part "if the underlying TCP connection cannot be established, the OS-wide TCP connection timeout will overrule the timeout option".
There is also an entire section on Timeouts. Based on that, and your code sample, we can modify the request sample as such
request(options, function(e, r, body) {
  // guard against e being null on successful requests
  if (e && e.code === 'ETIMEDOUT' && e.connect === true){
    // when there's a timeout and connect is true, we're meeting the
    // conditions described for the timeout option where the OS governs
    console.log('bummer');
  }
});
If this is true, you'll need to decide if changing OS settings is possible and acceptable (this is beyond the scope of this answer and such a question would be better on Server Fault).