res.end() not closing the script execution in Node.js

return res.end() is not closing script execution.
Initially it renders the form, and when I submit it, the code block below the closing brace of the if statement still executes.
I think execution should have ended on the line return res.end().
const http = require("http");

const server = http.createServer((req, res) => {
  const url = req.url;
  if (url === "/") {
    res.write("<html>");
    res.write("<head><title>FORM</title></head>");
    res.write(
      "<body><form action='/message' method='POST'><input type='text' name='message' /><button>Send</button></form></body>"
    );
    res.write("</html>");
    return res.end();
  }
  res.setHeader("Content-Type", "text/html");
  res.write("<html>");
  res.write("<head><title>APP</title></head>");
  res.write("<body><h1>Demo APP</h1></body>");
  res.write("</html>");
});

server.listen(4000);

Please add console.log(req.url) at the start of your request handler. You will find that:
return res.end();
does indeed stop the execution of that particular request by returning from the request-handler callback.
But you are then most likely receiving another incoming HTTP request, probably for favicon.ico. In general, you should never have code in a request handler that pays zero attention to which resource is being requested (unless it's sending back an error such as a 404), because lots of requests can potentially be made to your HTTP server for fairly standard resources such as robots.txt, favicon.ico, sitemap.xml, and so on, and you shouldn't answer those requests unless you are sending the appropriate data for them.

You might want to put the remainder of the code in an else block to ensure it runs only when url !== "/".

Related

server response to webform: how to answer duplicates?

I'm running a small server that needs to receive webforms. The server checks the request and sends back "success" or "fail" which is then displayed on the form (client screen).
Now, checking the form may take a few seconds, so the user may be tempted to send the form again.
What is the correct way to ignore the second request?
So far I have come up with these solutions for when the form is a duplicate of the previous one:
1) Don't check it, and send some error back (like 429, or 102, or some other one).
2) Close the connection directly: req.destroy(); res.destroy();
3) Ignore the request and return from the requestListener function.
With solutions 1 and 2 the form (in the client's browser) displays an error message, even though the first request they sent was correct, and so are its duplicates. So those are not good options.
Solution 3 gives the desired outcome... but I'm not sure it is the right way to go about it: basically leaving req and res alone instead of destroying them. Could this cause issues, or slow down the server (i.e. do they stack up)? Of course the first request, once it has been checked, will be answered with the outcome code. My concern is with the duplicate requests, which I neither destroy nor answer...
Some details on the setup: Nodejs application using the very default code by the http module.
const http = require("http");

const requestListener = function (req, res) {
  var requestBody = '';
  req.on('data', (data) => {
    requestBody += data;
  });
  req.on('end', () => {
    if (isduplicate(requestBody))
      return;
    else
      evalRequest(requestBody, res);
  });
};
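One further option worth sketching (an assumption on my part, not from the thread; pending, handleBody, and the callback-style evalRequest are hypothetical names): instead of ignoring duplicates, queue their response objects and answer all of them with the outcome of the first evaluation, so no connection is left hanging.

```javascript
// Hypothetical sketch: queue duplicate requests and answer every queued
// response once the first evaluation finishes.
const pending = new Map(); // request body -> responses waiting for the outcome

function handleBody(body, res, evalRequest) {
  if (pending.has(body)) {
    pending.get(body).push(res); // duplicate: just wait for the first result
    return;
  }
  pending.set(body, [res]);
  evalRequest(body, (statusCode, result) => {
    // Answer the original request and every duplicate that arrived meanwhile.
    for (const waiting of pending.get(body)) {
      waiting.writeHead(statusCode, { "Content-Type": "text/plain" });
      waiting.end(result);
    }
    pending.delete(body);
  });
}
```

This way the duplicate connections are neither destroyed nor silently dropped; they simply share the first request's answer.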

node request pipe hanging after a few hours

I have an endpoint in a node app which is used to download images
var images = {
  'car': 'http://someUrlToImage.jpg',
  'boat': 'http://someUrlToImage.jpg',
  'train': 'http://someUrlToImage.jpg'
};

app.get('/api/download/:id', function (req, res) {
  var id = req.params.id;
  res.setHeader("content-disposition", "attachment; filename=image.jpg");
  request.get(images[id]).pipe(res);
});
Now this code works fine, but after a few hours of the app running, the endpoint just hangs.
I am monitoring the memory usage of the app, which remains consistent, and any other endpoints which just return some JSON respond as normal so it is not as if the event loop is somehow being blocked. Is there a gotcha of some kind that I am missing when using the request module to pipe a response? Or is there a better solution to achieve this?
I am also using the Express module.
You should add an error listener on your request, because errors are not passed along pipes. That way, if your request has an error, it will close the connection and you'll get the reason.
request
  .get(...)
  .on('error', function (err) {
    console.log(err);
    res.end();
  })
  .pipe(res);

Node.js - Create a proxy, why is request.pipe needed?

Can someone explain this code to create a proxy server? Everything makes sense except the last block, request.pipe(proxy): I don't get it, because when proxy is declared it makes a request and pipes its response to the client's response. What am I missing here? Why would we need to pipe the original request to the proxy, when the http.request method already makes the request contained in the options var?
var http = require('http');

function onRequest(request, response) {
  console.log('serve: ' + request.url);
  var options = {
    hostname: 'www.google.com',
    port: 80,
    path: request.url,
    method: 'GET'
  };
  var proxy = http.request(options, function (res) {
    res.pipe(response, {
      end: true
    });
  });
  request.pipe(proxy, {
    end: true
  });
}

http.createServer(onRequest).listen(8888);
What am I missing here? [...] the http.request method already makes the request contained in the options var.
http.request() doesn't actually send the request in its entirety immediately:
[...] With http.request() one must always call req.end() to signify that you're done with the request - even if there is no data being written to the request body.
The http.ClientRequest it creates is left open so that body content, such as JSON data, can be written and sent to the responding server:
var req = http.request(options);

req.write(JSON.stringify({
  // ...
}));

req.end();
.pipe() is just one option for this, when you have a readable stream, as it will .end() the client request by default.
Although, since GET requests rarely have a body that would need to be piped or written, you can typically use http.get() instead, which calls .end() itself:
Since most requests are GET requests without bodies, Node provides this convenience method. The only difference between this method and http.request() is that it sets the method to GET and calls req.end() automatically.
http.get(options, function (res) {
  res.pipe(response, {
    end: true
  });
});
Short answer: the event loop. I don't want to overreach here, and this is where Node.js gets both beautiful and complicated, but the request isn't strictly made on the line declaring proxy: it's added to the event loop. So when you connect the pipe, everything works as it should, piping from the incoming request to the proxy to the outgoing response. It's the magic / confusion of asynchronous code!

Node.js and understanding how response works

I'm really new to node.js, so please bear with me if I'm making an obvious mistake.
To understand node.js, I'm trying to create a webserver that basically:
1) updates the page by appending "hello world" every time the root URL (localhost:8000/) is hit.
2) lets the user go to another URL (localhost:8000/getChatData) to display all the data built up from the root URL (localhost:8000/) being triggered.
Problem I'm experiencing:
1) I'm having an issue displaying that data on the rendered page. I have a timer that should call get_data() every second and update the screen with the data variable that stores the appended output. Specifically, the line response.simpleText(200, data); below isn't working correctly.
The file
// Load the node-router library by creationix
var server = require('C:\\Personal\\ChatPrototype\\node\\node-router').getServer();

var data = null;

// Configure our HTTP server to respond with Hello World to the root request
server.get("/", function (request, response) {
  if (data != null) {
    data = data + "hello world\n";
  } else {
    data = "hello world\n";
  }
  response.writeHead(200, {'Content-Type': 'text/plain'});
  console.log(data);
  response.simpleText(200, data);
  response.end();
});

// Configure our HTTP server to respond to /getChatData
server.get("/getChatData", function (request, response) {
  setInterval(function () { get_data(response); }, 1000);
});

function get_data(response) {
  if (data != null) {
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.simpleText(200, data);
    console.log("data:" + data);
    response.end();
  } else {
    console.log("no data");
  }
}

// Listen on port 8000 on localhost
server.listen(8000, "localhost");
If there is a better way to do this, please let me know. The goal is to basically have a way for a server to call a url to update a variable and have another html page to report/display the updated data dynamically every second.
Thanks,
D
The client-server model works by a client sending a request to the server, and the server in return sending a response. The server cannot send the client a response that the client hasn't asked for; the client initiates the request. Therefore you cannot have the server changing the response object on an interval, because the client will never receive those changes.
Something like this is usually handled with AJAX: the initial response from the server includes JavaScript code that the client then uses to issue requests to the server on an interval.
setInterval (like setTimeout) accepts a function without parameters, which makes sense, as it will be executed later in time; all the values that function needs must still be available at that point. In your case, the response object you are trying to pass is a local instance that is only in scope inside server.get's callback (where you set up the setInterval).
There are several ways to resolve this issue. You can keep a copy of the response instance in the outer scope where get_data lives, or you can move get_data entirely inside the callback and remove the setInterval. The first solution is not recommended: if getChatData is called several times within a second, only the last copy will prevail.
But my suggestion would be to keep the data in a database and show it once getChatData is called.

req.pause() on a finished request pauses the next request

I have a really simple file upload example
var http = require('http');

http.createServer(function (req, res) {
  console.log(req.method);
  if (req.method == 'GET') {
    res.writeHead(200);
    res.write('<html><head></head><body><form method="POST" enctype="multipart/form-data"><input type="file" id="f" name="f"><input type="submit"></form></body></html>');
    res.end();
  } else if (req.method == 'POST') {
    req.pause();
    res.writeHead(200);
    res.end();
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen('8081');
What I want to do is pause the upload. While it works fine with large files, small ones (<= 100 kB), which are probably sent along with the request in a single part, are not paused (that's fine and understandable). Instead, the next request to the server is paused: when I try to load the page, it never reaches the console.log(req.method) part, but when I refresh again it's back to normal. That is definitely not fine.
It seems like the kind of error that would pop up once in a while, so I was surprised that I didn't find any complaints about it. What are your thoughts/possible explanations/workarounds/fix suggestions? For now I check whether the file's size is below a certain threshold, but that doesn't look very safe or elegant.
I am of the opinion (I could be wrong) that req.pause() causes the underlying socket to be paused. A way to verify this claim would be to add the header Connection: close to the res.writeHead(200); that comes just after the req.pause(); line.
Now, if everything starts working fine (the next request is not paused), you can conclude that the underlying socket was paused and was being reused for the next request.
