I'm at a bit of a standstill here.
I'm building an Express app that has to export a huge amount of data from a Shopify store in a single request.
The issue is that when the request hits the 2-minute mark, it is fired again (since the default timeout is 2 minutes), so I increased server.setTimeout() to account for the time the export needs.
That stopped the request from re-firing at the 2-minute mark, but once my request finishes, a second request is made again for some reason.
Here is a bare bone example of the issue:
const express = require('express');
const app = express();

function sleep(ms) {
  return new Promise(resolve => {
    setTimeout(resolve, ms);
  });
}

app.get('/export', async (req, res) => {
  console.log('Starting request');
  // Sleep for 2:03min
  await sleep(123000);
  res.status(200).send('Finish');
});

const server = app.listen(3000, () => {
  console.log('Example app listening on port 3000');
});

// Set timeout to 2:04min
server.setTimeout(124000);
If you open http://localhost:3000/export, the result from the above code is:
Starting request <- at the beginning
Starting request <- at the 2:04 mark
Is this some issue with async/await, since it seems that res.send() never fires?
Can someone clarify why this is happening and how to prevent it?
PS: I can't use a flag to check whether the request was already made, since the app needs to support multiple users exporting data at the same time, and the second request arrives as a brand new one.
Well, I lost around four hours on this today, and the conclusion is that my tunnel service (http://serveo.net/) was firing another request, perfectly timed for when my export finished (as for why, I don't have an answer). I'm still not sure this is the correct conclusion, but switching to a different option (or using localhost directly) didn't show any issues.
I moved to OpenVPN and all of my problems were gone. ngrok was OK as well, but since the free version doesn't have a fixed URL (and it's a pain to change all of the endpoints in the app setup every day I start the service), I went with OpenVPN.
The root of the problem was that res.status(200).send('Finish') never fired for some specific reason, or if it did, it sure didn't seem so.
Thanks for all of the help.
Problem:
I suspect that something in my network stack is slow, which is why responses from my Node.js server arrive late. I've noticed that the server is ready with the response fairly fast; however, it takes time to reach the web browser.
Question:
What are the ways I can confirm whether this is in fact happening? Basically, I want to capture the time when the response left my Node.js server, and then the physical server, and compare it with the time it arrived in the client web browser. How can I do this?
What I tried:
I tried putting a console.log after ctx.body to identify the point in time when the response left the server. But is this the correct point, or can I go further down? I am a little unsure. Please advise.
FYI, I am using Koa.js.
As we cannot always assume that the clock on the client is exactly in sync with the server's, what you can theoretically do is:
client side: before sending a request to the server, get the current time with
const startClient = Date.now();
on the server side you implement a small request-timing (Koa) middleware like this:
app.use(async (ctx, next) => {
  const startServer = Date.now();
  await next();
  const ms = Date.now() - startServer;
  ctx.set('X-Server-Time', String(ms));
});
Place this middleware before defining your routes. This returns the time the server took to complete its task to the client (in the X-Server-Time header). So this is basically the time from when the server receives the request until it is ready and sends the results back.
as soon as the client gets back any information from the server, get the current time again and calculate the overall time spent:
const msServer = parseInt(response.headers.get('X-Server-Time'), 10);
if (!Number.isNaN(msServer)) {
  const endClient = Date.now();
  const msOverall = endClient - startClient;
  // output time consumed
  console.log('Overall time in ms to complete : ' + msOverall);
  console.log('Server time in ms to complete : ' + msServer);
  console.log('Network time in ms to complete : ' + (msOverall - msServer));
}
Code not tested, but I hope this gives an idea of how to measure the timing...
Try Koa.js in debug mode and add a logger middleware to identify server-side issues.
Use the network inspector in Chrome, or the equivalent in other browsers, to identify client-side issues.
If none of the above helps, you can try debugging at the network level with tcpdump and Wireshark. This helps identify protocol and connection issues.
I am using the contentful-export library in my Express app like so:
const express = require('express');
const app = express();
...
app.get('/export', (req, res, next) => {
  const contentfulExport = require('contentful-export');
  const options = {
    ...
  };
  contentfulExport(options).then((result) => {
    res.send(result);
  });
});
Now, this does work, but the method takes a while and logs status/progress messages to the Node console, and I would like to keep the user updated as well. Is there a way I can forward those console progress messages to the client?
This is my first time using Node/Express, so any help would be appreciated. I'm not sure if this already has an answer, since I'm not entirely sure what to call it.
Looking at the documentation for contentful-export, I don't think this is possible. The way this usually works in Node is that you have an object (contentfulExport in this case); you call a method on this object, and the same object is also an EventEmitter. This way you get a hook to react to fired events.
// pseudo code
someLibrary.on('someEvent', (event) => { /* do something */ })
someLibrary.doLongRunningTask()
.then(/* ... */)
This is not documented for contentful-export, so I assume there is no way to hook into the log messages that are sent to the console.
Your question has another tricky angle, though. In the code you shared there is a single endpoint (/export). If you would like to display updates or show progress, you'd probably need a second endpoint giving information about the progress of your long-running task (which you cannot access with contentful-export, though).
The way this is usually handled is that you kick off a long-running task via one HTTP endpoint and then use another endpoint that serves progress info via polling or a WebSocket connection.
Sorry that I can't give a proper solution, but due to the limitations of contentful-export I don't think there is a clean/easy way to show the progress of the exported data.
Hope that helps. :)
I have an app made with React and Node.
The React app needs to make an API call to the Node app, which is running on port 5100. I'm facing a problem where I get net::ERR_EMPTY_RESPONSE in the console after waiting a long period of time. The thing is, my API takes 200 seconds to get the response from the server.
When I hit
http://localhost:5100/api/users/wait-ip
directly, I get a response after 200 seconds. But when I hit this in the React app:
fetch('/api/users/wait-ip')
I get the following error in the console:
GET http://localhost:3000/api/users/wait-ip net::ERR_EMPTY_RESPONSE
This is my function for the API:
router.get('/api/users/wait-ip', (req, res) => {
  // Other things happen here
  setTimeout(() => {
    return res.json({
      data: 1
    });
  }, 150000);
});
This is the response I get when hitting the endpoint directly in the browser after 150 seconds:
Any help on how to solve this will be appreciated
Using a Node.js API with React is a common use case. I think the reason you are facing this issue is that you are using the fetch call synchronously. Always use async/await for it.
async function getUsers() {
  const response = await fetch('/api/users/wait-ip');
  const users = await response.json();
  // ...
  return users;
}
Using the function:
getUsers().then(result => { console.log(JSON.stringify(result)); });
Hope that helps.
For me, the problem was on both the client and the server.
I read some posts on how to fix the timeout on the server. One of them suggested:
const server = http.listen(port, () => console.log(`Server running on port ${port}`));
server.timeout = 200000;
Well, this worked, but only for the direct browser call.
For the asynchronous call I needed to set it per route where I wanted it, like this:
router.get('/wait-ip', (req, res) => {
  req.setTimeout(160000); // note: assigning req.timeout has no effect; setTimeout() is the method
  setTimeout(() => {
    return res.json({
      data: 1
    });
  }, 150000);
});
And for the client part, it didn't work properly with the proxy. So what I did was use the full URL:
fetch('http://localhost:5100/api/users/wait-ip')
I hope this helps someone else too.
It may be because of the Cross-Origin Resource Sharing (CORS) headers. When calling from a browser, there is usually an OPTIONS call made first, followed by, in this case, the GET.
I would try
fetch('/api/users/wait-ip', {
mode: 'no-cors' // 'cors' by default
})
If this fixes the problem, either you force the client not to use CORS, or the server should handle it. Another option is to let the proxy set these headers.
ref: https://developers.google.com/web/ilt/pwa/working-with-the-fetch-api
section: Cross-origin requests
This is an API problem. I'd set breakpoints in your API and make sure the right fields are populated in the response.
Node.js Debugging Guide...
In an app I was working on, I encountered a "headers already sent" error when testing using concurrency and parallel request methods.
Ultimately I resolved the problem using !response.headersSent, but my question is: why am I forced to use it? Is Node caching similar requests and reusing them for the next repeated call?
if (request.headers.accept == "application/json") {
  if (!response.headersSent) {
    response.writeHead(200, { 'Content-Type': 'application/json' });
  }
  response.end(JSON.stringify({ result: { authToken: data.authToken } }));
}
Edit
var express = require('express');
var app = express();
var server = app.listen(process.env.PORT || 3000, function () {
console.log('Example app listening at http://%s:%s', server.address().address, server.address().port);
});
Edit 2:
Another problem: while testing using Mocha and SuperAgent, if I send another request through Postman on the side while the tests are in progress, one of the Mocha tests ends with a timeout error. I'm taking these steps to ensure the code is production-ready for simultaneous, parallel requests; please advise on what measures I can take to ensure Node and my code work under stress.
Edit 3:
app.use(function(request, response, next){
request.id = Math.random();
next();
});
OK, in an attempt to capture what solved this for you via our conversation in the comments, I will attempt to summarize here:
The message "headers already sent" is nearly always caused by improper async handling, which causes the code to call methods on the response object in the wrong sequence. The most common case is non-async code that ends the response, followed by an asynchronous operation that completes some time later and then tries to use the response again (but there are other ways to misuse it too).
Each request and response object is uniquely created at the time each individual HTTP request arrives at the node/express server. They are not cached or reused.
Because of asynchronous operations in the processing of a request, there may be more than one request/response object in use at any given time. Code that is processing these must not store these objects in any sort of single global variable because multiple ones can be in the state of processing at once. Because node is single threaded, code will only be running on any given request at any given moment, but as soon as that code hits an async operation (and thus has nothing to do until the async operation is done), another request could start running. So multiple requests can easily be "in flight" at the same time.
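The interleaving described above is easy to demonstrate without a server. This sketch simulates two overlapping requests flowing through the same async handler; the handler names and the sleep helper are made up for illustration:

```javascript
// Why per-request state must not live in a shared global: two overlapping
// "requests" run through the same async handler, and the global is
// overwritten by the second before the first resumes.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

let globalUser = null; // BAD: shared across all in-flight requests

async function handleBad(user, ms) {
  globalUser = user;   // a later request overwrites an earlier one's value
  await sleep(ms);     // handler yields; another request can run meanwhile
  return globalUser;   // may belong to a different request by now
}

async function handleGood(user, ms) {
  const localUser = user; // per-request state stays with this request
  await sleep(ms);
  return localUser;
}

async function demo() {
  // 'A' is slower, so 'B' overwrites the global while 'A' is sleeping.
  const bad = await Promise.all([handleBad('A', 20), handleBad('B', 5)]);
  const good = await Promise.all([handleGood('A', 20), handleGood('B', 5)]);
  return { bad, good };
}
```

The shared global ends up holding the second request's value by the time the first resumes, which is exactly the kind of bug that produces crossed-over responses and "headers already sent" errors.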
If you have a system where you need to keep track of multiple requests at once, you can coin a request id and attach it to each new request. One way to do that is with a few lines of express middleware that is early in the middleware stack that just adds a unique id property to each new request.
One simple way of coining a unique id is to just use a monotonically increasing counter.
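As a sketch of those last two points, assuming an Express-style (req, res, next) middleware signature, the id-assigning middleware with a monotonically increasing counter could look like:

```javascript
// Express-style middleware that tags every incoming request with a
// unique, monotonically increasing id. The synchronous increment is
// safe because node runs JavaScript on a single thread.
let nextRequestId = 1;

function requestId(req, res, next) {
  req.id = nextRequestId++;
  next();
}

// Usage, early in the middleware stack (before any routes):
// app.use(requestId);
```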
My problem is that I'm planning to use Express to collect all requests I receive for a certain amount of time, until I send all the responses at once.
But unfortunately I can't receive a second request until I've responded to the first one. So I guess Node/Express is somehow blocking the further processing of other requests.
I built a minimal working example for you, so you can see better what I'm talking about.
var express = require('express');
var app = express();
var ca = [];

app.get('/hello.txt', function(req, res) {
  ca.push(res);
  console.log("Push");
});

setInterval(function() {
  while (ca.length) {
    var res = ca.shift();
    res.send('Hello World');
    console.log("Send");
  }
}, 9000);

var server = app.listen(3000, function() {
  console.log('Listening on port %d', server.address().port);
});
When I send just one request to localhost:3000 and wait 9 seconds, I'm able to send a second one. But when I send both without waiting for the interval callback, the second one is blocked until the next interval fires.
Long story short: why is this blocking happening, and what ways are there to avoid it?
PS: It seems that the default http package shows different behavior: http://blog.nemikor.com/2010/05/21/long-polling-in-nodejs/
Try it with both Firefox and Chrome to rule out the browser serializing the requests...
OK, I've got the solution.
The issue wasn't in my code; it was caused by Chrome. It seems that Chrome serializes all requests that target the same URL. Nevertheless, it sends both requests and won't serve the second request with the response of the first.
Anyway, thanks for your help!
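For anyone who hits the same behavior: since the serialization is keyed on the URL, one workaround (an assumption on my side, not from any documentation) is to make every request URL unique with a throwaway query parameter, or to mark the response uncacheable on the server with Cache-Control: no-store. The helper name below is made up for illustration:

```javascript
// Defeat same-URL request serialization by making each request URL
// unique with a throwaway query parameter.
let bustCounter = 0;

function uncacheableUrl(url) {
  const sep = url.includes('?') ? '&' : '?'; // append correctly either way
  return `${url}${sep}_=${Date.now()}-${bustCounter++}`;
}

// Usage on the client: fetch(uncacheableUrl('/hello.txt'))
// Server-side alternative (Express): res.set('Cache-Control', 'no-store')
```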