I am using SenecaJS to build a microservices-based application. So far, I have built one microservice which consists of just one action. This action, when called, executes a time-consuming shell command (approx. 3 minutes) and returns the output of the shell command as its response. My code files are available here: https://gist.github.com/ohmtrivedi/5a94841d25714f3cfd6aee260add97bb
So, I have been trying to make requests to this service in 2 different ways: First, I send a direct request to the service (which runs the plugin, osfp_tool) using cURL, as demonstrated here: http://senecajs.org/getting-started/#writing-microservices.
Second, by referencing this tutorial: http://senecajs.org/getting-started/#web-server-integration, I wrote an Express API which communicates with my service (osfp_service). So, I send HTTP requests (using POSTMAN) to the Express API.
I used to receive a Client request timeout error in both cases. After some research, I learned about the timeout configuration on the Seneca instance, so I added a timeout configuration in 2 places: in the Seneca service (osfp_service) as well as in the Express API (app.js). Note that I have set the timeout to 300000 ms, or 5 minutes. I have checked that the shell command takes about 3 minutes, so the timeout is set higher than that. However, I still face the Client request timeout error, as you can see below. I know there is no error in the shell command execution, because on my service log, even after I get the Client request timeout error, the action completes its execution successfully, which can be seen from the console.log messages.
Hope someone can help me resolve this issue; I have been stuck on it for a very long time now.
EDIT
So, I have been playing around with timeout configuration. I was able to resolve the timeout error from osfp_service.js script by setting the timeout in seneca instance at the topmost level (https://gist.github.com/ohmtrivedi/5a94841d25714f3cfd6aee260add97bb#file-osfp_service-js-L8).
If I set timeout configuration in app.js in the same way (https://gist.github.com/ohmtrivedi/5a94841d25714f3cfd6aee260add97bb#file-app2-js-L26), then I still get Error 504: Client request timeout/Gateway timeout (https://drive.google.com/open?id=1El2JCy047dnm6PHlvU33d_mKPuIWUlfX).
If I set timeout configuration in app.js inside the transport object in seneca instance (https://gist.github.com/ohmtrivedi/5a94841d25714f3cfd6aee260add97bb#file-app1-js-L26), then I get Error 503: Response timeout/Service Unavailable (https://drive.google.com/open?id=1u6w7XyK9-vAJVhna_JnIQ4imRzOm_51T). I cannot understand why it says Service Unavailable, because the action does get executed and it even completes successfully.
I can't seem to understand the different behavior.
I also worked on timeout problems with Seneca.
For my application, the solution was:
Set the timeout in require('seneca'):
let seneca = require('seneca')({
    timeout: config.request_timeout,
    tag: ...
});
Set the timeout in each act() call:
seneca.act({timeout$: config.request_timeout, role: ...});
Hope this helps.
EDIT:
As found in this post, the transport timeout can also be configured:
let seneca = require('seneca')({
    timeout: config.request_timeout,
    tag: ...,
    transport: {
        'web': { timeout: config.request_timeout },
        'tcp': { timeout: config.request_timeout }
    }
});
I've been using node-redis for a while and so far so good. However, upon setting up a new environment, I had a typo in the hostname (or password) and it wouldn't connect. Because this was an already working application I developed some time ago, it was hard to track down the actual issue. When you made requests against this server, it would just hang up to the server's timeout of 5 minutes and come back with error 500.
In the end I found out that it was the credentials for the redis server. I use redis to make my app faster by avoiding revalidation of security tokens for up to an hour (since the validation process can take up to 2000 ms), so I store the token in redis for future requests.
This has worked fine for years. However, just because this time I had a typo in the hostname or password, I noticed that if the redis server can't connect (for whatever reason) the whole application goes down. The idea is that redis should be used if available; if not, it should fall back to just taking the long route but fulfill the request anyway.
So my question is: how do I tell node-redis to throw an error as soon as possible, and not wait until the ETIMEOUT error comes?
For example:
const client = redis.createClient(6380, "redis.host.com", { password: "verystrongone" });
client.on("error", err => {
console.log(err)
})
Based on this code, I get the console.log error AFTER it reaches the timeout (around 30-40 seconds). This is not good, because my application is then unresponsive for AT LEAST 30 seconds. What I want to achieve is that if redis is down or something, it should just give up after 2-5 seconds. I use a very fast and reliable redis server from Azure. It takes less than a second to connect and has never failed, I believe, but if it does, it will take the whole application down with it.
I tried things like retry_strategy, but I believe that option only kicks in after the initial ~30-second attempt.
Any suggestions?
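For reference, the kind of fail-fast retry_strategy I mean would look roughly like this (a sketch assuming node-redis v2.x/v3.x; the 5-second budget and back-off numbers are illustrative, not recommendations):

```javascript
// Returning an Error from retry_strategy makes the node-redis client stop
// retrying and emit that Error, instead of waiting out the default timeouts.
function failFastStrategy(options) {
    if (options.error && options.error.code === "ECONNREFUSED") {
        // The server actively refused the connection: give up immediately
        return new Error("Redis unavailable");
    }
    if (options.total_retry_time > 5000) {
        // Stop after 5 seconds of total retry time (illustrative budget)
        return new Error("Redis retry time exhausted");
    }
    // Otherwise back off: 100 ms, 200 ms, ... capped at 1 s per attempt
    return Math.min(options.attempt * 100, 1000);
}
```

It would be wired up via `redis.createClient({ port: 6380, host: "...", password: "...", connect_timeout: 5000, retry_strategy: failFastStrategy })` — `connect_timeout` (milliseconds) also caps the initial connection attempt, which I believe defaults to a very large value in node-redis v2.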
So here's an interesting thing I observed.
When I connect to the redis cache instance using the following options, I am able to reproduce the error you're getting.
port: 6380,
host: "myaccount.redis.cache.windows.net",
auth_pass: "mysupersecretaccountkey"
When I specify an incorrect password, I get an error after 1 minute. However, if I specify the tls parameter, I get an error almost instantaneously:
port: 6380,
host: "myaccount.redis.cache.windows.net",
auth_pass: "mysupersecretaccountkey",
tls: {
    servername: "myaccount.redis.cache.windows.net"
}
Can you try with the tls option?
I am still not able to reproduce the error if I specify an incorrect account name. I get the following error almost instantaneously:
Redis connection to myincorrectaccountname.redis.cache.windows.net:6380 failed - getaddrinfo ENOTFOUND myincorrectaccountname.redis.cache.windows.net
I have a Node.js client making requests to an Azure API Management endpoint. I am sometimes getting the following exception:
ClientConnectionFailure: at transfer-response
So I am using the request package, and in the client I am doing a straightforward request:
request({
    method: "GET",
    headers: {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": key
    },
    uri: endpointUrl
}, function (error, response, body) {
    (...)
});
So is it a timeout eventually happening in the client, in the middle of the request, consequently failing the connection to the Azure APIM endpoint?
Or something else? And how do you think I could solve this problem? I thought about increasing the timeout in the request, but I am assuming that, in the absence of a timeout, it takes the default timeout of the server (Azure Function App), which is 120 seconds, right?
Thanks.
ClientConnectionFailure suggests that the client broke the connection while APIM was processing the request. at transfer-response means that this happened while APIM was sending the response to the client. APIM does not cache request/response bodies by default, so while it is sending the response to the client it is at the same time reading it from the backend. This may lead to the client breaking off the connection if the backend takes too long to respond with actual data.
This behavior is driven purely by the client deciding to stop waiting for data. Check how long it takes on the client between sending the request and seeing the error, and try adjusting the client timeout.
We have seen this behavior when the back end is taking too long to respond.
My approach would be to take a look at the backend first, which is a Function App in this case, see the time taken, and then check whether any timeout limit is set in APIM.
The timeout duration of a function app is defined by the functionTimeout property in the host.json project file. On the Consumption plan, for example, the default is 5 minutes and the maximum is 10 minutes; the defaults and maximums differ per hosting plan and runtime version.
For troubleshooting performance issues, follow Troubleshoot slow app performance issues in Azure App Service.
I currently have a request which is made from an Angular 4 app (which uses Electron, which uses Chromium) to a Node.js/Express bottleneck server. The server takes about 10 minutes to process the request.
The default timeout which I'm getting is 120 seconds.
I tried setting the timeout on the server using
app.use(timeout("1000s"));
On the client side I have used
const options = {
    url,
    method: "GET",
    timeout: 600 * 1000
};
let req = http.request(options, () => {});
req.end();
I have also tried to give the specific route a timeout.
Each time the request hits 120 seconds, the socket dies and I get a "socket timeout" error.
I have read many posts with the same question, but I didn't get any concrete answers. Is it possible to make a request with a long/no timeout using the tools above? Do I need to download a new library which handles long timeouts?
Any help would be greatly appreciated.
So after browsing the internet, I have discovered that there is no way to increase Chrome's timeout.
My solution to this problem was to open the request and immediately return a default answer (something like "started"), then ping the server to find out its status.
Another possible solution would be to put a route in the client (I'm using Electron and Node modules on the client side, so it is possible) and then let the server ping back to the client with the status of the query.
Writing this down so other people will have some possible patches. Will update if I'll find anything better.
Recently I've been trying to create a simple file server with Node.js, and it looks like I've run into some problems that I can't seem to overcome.
In short:
I configured iisnode to have 4 worker processes (there is a setting in web.config for this called nodeProcessCountPerApplication="4"). And it balances the load between these workers.
When there are 8 requests coming in, each worker has 2 requests to process, but when an exception happens in one of the requests being processed, the one that is waiting also fails.
For example:
worker 1 handling request 1, request 5 waiting
worker 2 handling request 2, request 6 waiting
worker 3 handling request 3, request 7 waiting
worker 4 handling request 4, request 8 waiting
If an exception happens while handling request 3, the server responds with my custom error code, shuts down and is restarted by iisnode. But the problem is that request 7 also fails, even though it hasn't been processed.
I tried to set maxConcurrentRequestsPerProcess="1" so that only 1 request at a time goes to each worker, but it does not work the way I want. Requests 5, 6, 7, 8 will be rejected with a 503 Service Unavailable response even though the maximum number of requests that will queue is set to 1000 (the IIS default).
The Question
These requests don't have anything to do with each other, so one failing should not take down the other.
Is there a setting in IIS that enables the behavior that I'm after? Or is this even possible to do with node and IIS?
In Long
Why?
I'm using node, because I have some other requirements (like logging, etc..) that I can do in JavaScript fairly easy.
Since I have an ASP.NET MVC background and I'm running Windows, after a few searches I found the iisnode module for IIS, which can be used to host a node app with IIS. This makes it easy for me to manage and deploy the application. I also read on many sites that node servers have good performance because of their async nature.
How?
I started with a very basic exception handling logic that catches exceptions using node's domain object:
var http = require('http');
var domain = require('domain');

var server = http.createServer(function (request, response) {
var d = domain.create();
d.on('error', function (err) {
try {
//stop taking new requests.
serverShutdown();
//send an error to the request that triggered the problem
response.statusCode = 500;
response.end('Oops, there was a problem! ;) \n');
}
catch (er2) {
//oh well, not much we can do at this point.
console.error('Error sending 500!', er2.stack);
process.exit(1);
}
});
d.add(request);
d.add(response);
d.run(function () {
router.route(request, response);
});
}).listen(process.env.PORT);
Since I could not find any best practices for gracefully shutting down the server when there is an unhandled exception, I decided to write my own logic. So after server.close() is called, I go through the sockets and wake them so the server can shut down:
function serverShutdown() {
server.close();
for (var s in sockets) {
sockets[s].setTimeout(1, function () { });
}
}
This works great!
What?
The problem comes when I try to stress-test this. For some reason the cluster module is not supported by iisnode, but it has a similar feature. I configured iisnode to have 4 worker processes (there is a setting in web.config for this called nodeProcessCountPerApplication="4"), and it balances the load between these workers.
I'm not entirely sure on how this works, but here's what I figured out from testing:
When there are 8 requests coming in, each worker has 2 requests to process, but when an exception happens in one of the requests being processed, the one that is waiting also fails.
For example:
worker 1 handling request 1, request 5 waiting
worker 2 handling request 2, request 6 waiting
worker 3 handling request 3, request 7 waiting
worker 4 handling request 4, request 8 waiting
If an exception happens while handling request 3, the server responds with my custom error code, shuts down and is restarted by iisnode. But the problem is that request 7 also fails, even though it hasn't been processed.
I tried to set maxConcurrentRequestsPerProcess="1" so that only 1 request at a time goes to each worker, but it does not work the way I want. Requests 5, 6, 7, 8 will be rejected with a 503 Service Unavailable response even though the maximum number of requests that will queue is set to 1000 (the IIS default).
The Question Again
These requests don't have anything to do with each other, so one failing should not take down the other.
Is there a setting in IIS that enables the behavior that I'm after? Or is this even possible to do with node and IIS?
Any help is appreciated!
Update
I managed to rule out iisnode by building the same server using cluster and worker processes.
The problem still persists, and requests that are queued to the worker that hits the exception are returned with 502 Bad Gateway.
Again, I don't know what is happening with the requests that are coming in to the server, or at which level they are at the time of the exception. I can't seem to find any info about this either...
Could anyone point me in the right direction? At least where to search for a solution?
Problem
Node's default configuration times out requests after 2 minutes. I would like to change the request timeouts to:
1 minute for 'normal' requests
5 minutes for requests that serve static files (big assets in this case)
8 hours for uploads (couple of thousand pictures per request)
Research
Reading through Node's documentation, I've discovered that there are numerous ways of defining timeouts.
server.setTimeout
socket.setTimeout
request.setTimeout
response.setTimeout
I'm using Express, which also provides middleware to define timeouts for (specific) routes. I've tried that, without success.
Question
I'm confused about how to properly configure the timeout limit globally and per route. Should I configure all of the above timeouts? How is setting the server's timeout different from setting the socket's or request's timeout?
As I saw in your other question concerning the usage of the timeout middleware, you are using it somewhat differently.
See documentation of timeout-connect middleware.
Add your errorHandler function as an event listener to the request, as it is an EventEmitter and the middleware causes it to emit the timeout event:
req.on("timeout", function (evt) {
    if (req.timedout) {
        if (!res.headersSent) {
            res
                .status(408)
                .send({
                    success: false,
                    message: 'Timeout error'
                });
        }
    }
});
This is called outside of the middleware stack, making the call to next(err) invalid. Also keep in mind that if the timeout happens while the request is hanging server-side, you have to prevent your server code from further processing the request (because the headers have already been sent and the underlying connection will no longer be available).
Summary

- The Node.js timeout APIs are all inactivity timeouts.
- The expressjs/timeout package is a response hard timeout.

nodejs timeout API

server.timeout
- inactivity/idle timeout
- equal to the socket timeout
- default 2 min

server.setTimeout
- inactivity/idle timeout
- equal to the socket timeout
- default 2 min
- has a callback

socket.setTimeout
- inactivity/idle timeout
- the callback is responsible for calling end() or destroy() on the socket
- default: no timeout

response.setTimeout
- front end to socket.setTimeout

request.setTimeout
- front end to socket.setTimeout

expressjs/timeout package
- response hard timeout (vs. inactivity)
- has a callback

Conclusion

For a maximum time allowed for an action (request + response), the express/timeout package is needed. This is probably what you need, but the callback has to end the request/response itself: the timeout only triggers the callback, it does not change the state of or interfere with the connection. That is the callback's job.

For idle timeouts, set the Node.js API request/response timeouts. I don't recommend touching these, as it is not necessary in most cases, unless you want to allow a connection to idle (no traffic) for over 2 minutes.
There is already Connect middleware for timeout support. You can try this middleware:
var timeout = express.timeout // express v3 and below
var timeout = require('connect-timeout'); //express v4
app.use(timeout(120000)); // change this to your desired time
app.use(haltOnTimedout);
function haltOnTimedout(req, res, next){
if (!req.timedout) next();
}