How to stop res.writeHead() from outputting an extra number before the body? - node.js

I created an HTTP server with Express.
Below is the server code:
router.get('/', function(req, res, next) {
  // req.socket.setTimeout(Infinity);
  res.writeHead(200, {
    'Content-Type': 'text/plain; charset=utf-8', // <- Important headers
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  });
  res.write('hell');
  res.write('world');
  res.end();
  // res.write('\n\n');
  // response = res;
});
When I use netcat to GET the URL, the output looks like this:
GET /sse HTTP/1.1
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/plain; charset=utf-8
Cache-Control: no-cache
Expires: 0
Connection: keep-alive
Keep-Alive: timeout=5, max=97
Date: Fri, 30 Jun 2017 11:50:00 GMT
Transfer-Encoding: chunked
4
hell
5
world
0
My question is: why is there always a number before the output of every res.write()? The number seems to be the length of that output. How can I remove it?

This is how chunked encoding works. You don't have to declare up front how many bytes you are going to send; instead, every chunk is prefixed with its own length in bytes. For example: 4 for "hell", 5 for "world", and finally 0 for "no more bytes to send".
You can see that you have the chunked encoding header present:
Transfer-Encoding: chunked
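To make the framing concrete, here is a small sketch (not from the original answer) that builds the chunked wire format by hand: each chunk is its length in hex, a CRLF, the data, and a CRLF, and a zero-length chunk terminates the body:

```javascript
// Encode a list of body pieces using HTTP/1.1 chunked transfer encoding.
function chunkedEncode(pieces) {
  const chunks = pieces.map(
    (p) => Buffer.byteLength(p).toString(16) + '\r\n' + p + '\r\n'
  );
  return chunks.join('') + '0\r\n\r\n'; // zero-length chunk ends the body
}

console.log(chunkedEncode(['hell', 'world']));
// prints:
// 4
// hell
// 5
// world
// 0
```

This is exactly the framing netcat showed; real HTTP clients strip it before handing you the body.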
Now, to answer your question directly: to remove the numbers you'd have to switch off the chunked encoding. To do that you'd have to set the Content-Length header to the number of bytes that you are going to send in the body of the response.
For example:
app.get('/', function(req, res, next) {
  res.writeHead(200, {
    'Content-Type': 'text/plain; charset=utf-8',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'Content-Length': 9, // <<<--- NOTE HERE
  });
  res.write('hell');
  res.write('world');
  res.end();
});
(Of course, in real code you'd either calculate the length of the response, or build up a string or buffer and take its length, just before setting the Content-Length header.)
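For instance, a sketch of that calculation: Buffer.byteLength counts bytes rather than characters, which matters once the body contains multi-byte UTF-8:

```javascript
// Build the whole body first, then measure it in bytes.
const body = 'hell' + 'world';
console.log(Buffer.byteLength(body, 'utf-8')); // 9

// Byte length and string length differ for multi-byte characters:
// 'é' is 1 character but 2 bytes in UTF-8.
const utf8Body = 'héllo';
console.log(utf8Body.length);                      // 5
console.log(Buffer.byteLength(utf8Body, 'utf-8')); // 6
```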
But note that if you use curl or any other HTTP client then it will "remove" the numbers for you. It's just that with netcat you accidentally saw the underlying implementation detail of the chunked encoding but all of the real HTTP clients can handle that just fine.
Normally in HTTP you declare the length of the entire response in the headers and then send the body - in one piece, in multiple chunks, whatever - but it has to have the same length as what you declared. It means that you cannot start sending data before you know the length of everything that you want to send. With chunked encoding it's enough to declare the length of each chunk that you send but you don't have to know the length of the entire response - which could even be infinite, it's up to you. It lets you start sending as soon as you have anything to send which is very useful.

Related

nodejs https response is not received

I'm using nodejs on Windows 7, and I'm seeing strange behavior. The same HTTPS request, which always yields the same response, is most of the time not received by Node, whereas it is always received by a web browser.
The code is quite simple:
var url1 = 'https:<my url>'
require('request').get(url1).pipe(process.stdout)
The response is a chunked response containing a big JSON object.
If I try the request in a browser, I always receive the answer.
If I use the node request module (I also tried https directly), the body is empty most of the time, but sometimes it is returned.
I receive status 200 and response headers are:
{ 'content-type': 'application/json',
connection: 'close',
'transfer-encoding': 'chunked',
date: 'Wed, 27 Jan 2016 21:01:59 GMT',
server: 'My Server' }
Any idea?
I found two solutions that fix this issue:
1) Set the header 'Connection': 'keep-alive':
var opts = { url: 'my url', headers: { 'Connection': 'keep-alive' } };
request(opts).pipe(..)
2) Set the keepAlive option to true on https.globalAgent:
https.globalAgent.keepAlive = true;

non-chunked multipart/mixed POST?

I'm trying to send multiple binary files to a web service in a single multipart/mixed POST, but I can't seem to get it to work; the target server rejects it. One thing I noticed is that Node is trying to use chunked encoding, which I don't want:
POST /SomeFiles HTTP/1.1
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=123456789012345
Host: host.server.com
Connection: keep-alive
Transfer-Encoding: chunked
How do I get it to stop being chunked? The docs say chunking can be disabled by setting Content-Length in the main request headers, but I can't set Content-Length since, among other reasons, I'm loading one file at a time. It shouldn't be required anyway, since the request is multipart.
I think the rest is OK (apart from the chunking, unless req_post.write() is what causes it). For example, for the initial part I do:
var req_post = https.request({
  hostname: 'host.server.com',
  path: '/ManyFiles',
  method: 'POST',
  headers: {
    'MIME-Version': '1.0',
    'Content-Type': 'multipart/mixed; boundary=' + boundary
  }
}, ...);
and then pseudo-code:
while (true) {
  // { get next file here }
  req_post.write('\r\n--' + boundary + '\r\nContent-Type: ' + filetype +
                 '\r\nContent-Length: ' + filesize + '\r\n\r\n');
  req_post.write(chunk); // filesize
  if (end) {
    req_post.write('\r\n--' + boundary + '--\r\n');
    req_post.end();
    break;
  }
}
Any help/pointers is appreciated!
The quick answer is you cannot disable chunked without setting content-length. The chunked encoding was introduced for cases where you do not know the size of the payload when you start transmitting. Originally, content-length was required and the recipient knew it had the full message when it received content-length bytes. Chunked encoding removed that requirement by transmitting mini-payloads each with a content-length, followed by a zero-size to denote completion of the payload. So, if you do not set the content-length, and you do not use the chunked methodology, the recipient will never know when it has the full payload.
To help solve your problem, if you cannot send chunked and do not want to read all the files before sending, take a look at fs.stat. You can use it to get the file size without reading the file.

Jmeter test script doesn't work when node.js close connection?

I have a simple upload API that accepts file uploads from a client.
var flg = true;
app.post('/test', function(req, res){
  flg = !flg;
  var returnJson = '{';
  if (flg) {
    req.form.on('part', function (part) {
      if (part) {
        part.resume();
      }
      returnJson = returnJson + ',\"status\":\"0\"}';
      res.send(returnJson);
    });
  } else {
    console.log('close');
    returnJson = returnJson + ',\"status\":\"1\"}';
    res.header('Connection', 'close');
    res.send(413, returnJson);
  }
});
I'd like to test this API with JMeter. "status":"0" means success; "status":"1" means failure. I wrote a JMeter script like this:
http://i.imgur.com/vEUJKc8.jpg
JMeter only displays the samplers whose response contains "status":"0". It seems JMeter excludes the failing sampler responses that come from the else branch.
http://imgur.com/bkFSpK2
How can I see all samplers in JMeter, including both the successful and the failed ones?
A successful sampler result is:
Thread Name: API 1-1
Sample Start: 2013-12-18 11:46:08 PST
Load time: 7
Latency: 6
Size in bytes: 178
Headers size in bytes: 163
Body size in bytes: 15
Sample Count: 1
Error Count: 0
Response code: 200
Response message: OK
Response headers:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 15
Date: Wed, 18 Dec 2013 19:46:08 GMT
Connection: keep-alive
HTTPSampleResult fields:
ContentType: text/html; charset=utf-8
DataEncoding: utf-8
Any suggestion?
I don't like this stanza:
ContentType: text/html;
The correct Content-Type for JSON would be application/json.
You can try using an HTTP Header Manager to set the Content-Type header of your request to application/json and see what happens.
Also, there is a JSON plugin which provides a JSON Path Extractor and a JSON Path Assertion (select "Extras with libs set" from the download list).

Handling Accept headers in node.js restify

I am trying to properly handle Accept headers in a RESTful API in node.js/restify by using WrongAcceptError, as follows.
var restify = require('restify')
; server = restify.createServer()
// Write some content as JSON together with appropriate HTTP headers.
function respond(status,response,contentType,content)
{ var json = JSON.stringify(content)
; response.writeHead(status,
{ 'Content-Type': contentType
, 'Content-Encoding': 'UTF-8'
, 'Content-Length': Buffer.byteLength(json,'utf-8')
})
; response.write(json)
; response.end()
}
server.get('/api',function(request,response,next)
{ var contentType = "application/vnd.me.org.api+json"
; var properContentType = request.accepts(contentType)
; if (properContentType!=contentType)
{ return next(new restify.WrongAcceptError("Only provides "+contentType)) }
respond(200,response,contentType,
{ "uri": "http://me.org/api"
, "users": "/users"
, "teams": "/teams"
})
; return next()
});
server.listen(8080, function(){});
which works fine if the client provides the right Accept header, or no Accept header at all, as seen here:
$ curl -is http://localhost:8080/api
HTTP/1.1 200 OK
Content-Type: application/vnd.me.org.api+json
Content-Encoding: UTF-8
Content-Length: 61
Date: Tue, 02 Apr 2013 10:19:45 GMT
Connection: keep-alive
{"uri":"http://me.org/api","users":"/users","teams":"/teams"}
The problem is that if the client does indeed provide a wrong Accept header, the server will not send the error message:
$ curl -is http://localhost:8080/api -H 'Accept: application/vnd.me.org.users+json'
HTTP/1.1 500 Internal Server Error
Date: Tue, 02 Apr 2013 10:27:23 GMT
Connection: keep-alive
Transfer-Encoding: chunked
because the client is not assumed to understand the error message, which is in JSON, as can be seen here:
$ curl -is http://localhost:8080/api -H 'Accept: application/json'
HTTP/1.1 406 Not Acceptable
Content-Type: application/json
Content-Length: 80
Date: Tue, 02 Apr 2013 10:30:28 GMT
Connection: keep-alive
{"code":"WrongAccept","message":"Only provides application/vnd.me.org.api+json"}
My question is therefore, how do I force restify to send back the right error status code and body, or am I doing things wrong?
The problem is actually that you're returning a JSON object with a content-type (application/vnd.me.org.api+json) that Restify doesn't know, and it therefore raises a "no formatter found" error.
You need to tell Restify how your responses should be formatted:
server = restify.createServer({
  formatters: {
    '*/*': function(req, res, body) { // 'catch-all' formatter
      if (body instanceof Error) {    // see text
        body = JSON.stringify({
          code: body.body.code,
          message: body.body.message
        });
      }
      return body;
    }
  }
});
The body instanceof Error check is also required, because an error has to be converted to JSON before it can be sent back to the client.
The */* construction creates a 'catch-all' formatter, which is used for all mime-types that Restify can't handle itself (that list is application/javascript, application/json, text/plain and application/octet-stream). I can imagine that for certain cases the catch-all formatter could pose issues, but that depends on your exact setup.

Node.js: chunked transfer encoding

Is this code valid HTTP/1.1?
var fs = require('fs')
var http = require('http')

var buf = function(res, fd, i, s, buffer){
  if (i + buffer.length < s) {
    fs.read(fd, buffer, 0, buffer.length, i, function(e, l, b){
      res.write(b.slice(0, l))
      //console.log(b.toString('utf8', 0, l))
      i = i + buffer.length
      buf(res, fd, i, s, buffer)
    })
  } else {
    fs.read(fd, buffer, 0, buffer.length, i, function(e, l, b){
      res.end(b.slice(0, l))
      fs.close(fd)
    })
  }
}
var app = function(req, res){
  var head = {'Content-Type': 'text/html; charset=UTF-8'}
  switch (req.url.slice(-3)) {
    case '.js': head = {'Content-Type': 'text/javascript'}; break;
    case 'css': head = {'Content-Type': 'text/css'}; break;
    case 'png': head = {'Content-Type': 'image/png'}; break;
    case 'ico': head = {'Content-Type': 'image/x-icon'}; break;
    case 'ogg': head = {'Content-Type': 'audio/ogg'}; break;
    case 'ebm': head = {'Content-Type': 'video/webm'}; break;
  }
  head['Transfer-Encoding'] = 'chunked'
  res.writeHead(200, head)
  fs.open('.' + req.url, 'r', function(err, fd){
    fs.fstat(fd, function(err, stats){
      console.log('.' + req.url + ' ' + stats.size + ' ' + head['Content-Type'] + ' ' + head['Transfer-Encoding'])
      var buffer = new Buffer(100)
      buf(res, fd, 0, stats.size, buffer)
    })
  })
}
http.createServer(app).listen(8000,"127.0.0.1")
console.log('GET http://127.0.0.1:8000/appwsgi/www/index.htm')
Am I violating HTTP/1.1 here? Text files do seem to work fine, but that could be coincidental. Should my status be "200 OK", or does it need to be "100"? Is one header sufficient?
If you're doing chunked transfer encoding, you actually need to set that header:
Transfer-Encoding: chunked
You can see from the headers returned by google, which does chunked transfers for the homepage and most likely other pages:
HTTP/1.1 200 OK
Date: Sat, 04 Jun 2011 00:04:08 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: PREF=ID=f9c65f4927515ce7:FF=0:TM=1307145848:LM=1307145848:S=fB58RFtpI5YeXdU9; expires=Mon, 03-Jun-2013 00:04:08 GMT; path=/; domain=.google.com
Set-Cookie: NID=47=UiPfl5ew2vCEte9JyBRkrFk4EhRQqy4dRuzG5Y-xeE---Q8AVvPDQq46GYbCy9VnOA8n7vxR8ETEAxKCh-b58r7elfURfiskmrOCgU706msiUx8L9qBpw-3OTPsY-6tl; expires=Sun, 04-Dec-2011 00:04:08 GMT; path=/; domain=.google.com; HttpOnly
Server: gws
X-XSS-Protection: 1; mode=block
Transfer-Encoding: chunked
EDIT Yikes, that read is way too complicated:
var app = function(req, res){
  var head = {'Content-Type': 'text/html'}
  switch (req.url.slice(-3)) {
    case '.js': head = {'Content-Type': 'text/javascript'}; break;
    case 'css': head = {'Content-Type': 'text/css'}; break;
    case 'png': head = {'Content-Type': 'image/png'}; break;
    case 'ico': head = {'Content-Type': 'image/x-icon'}; break;
    case 'ogg': head = {'Content-Type': 'audio/ogg'}; break;
    case 'ebm': head = {'Content-Type': 'video/webm'}; break;
  }
  res.writeHead(200, head)
  var file_stream = fs.createReadStream('.' + req.url);
  file_stream.on("error", function(exception) {
    console.error("Error reading file: ", exception);
  });
  file_stream.on("data", function(data) {
    res.write(data);
  });
  file_stream.on("close", function() {
    res.end();
  });
}
There you go, a nice streamed buffer for you to write with. Here's a blog post I wrote on different ways to read in files. I recommend looking that over so you can see how to best work with files in node's asynchronous environment.
Since Node.js implicitly sets 'Transfer-Encoding: chunked', all I needed to send in headers was the content type with charset like:
'Content-Type': 'text/html; charset=UTF-8'
Initially it was:
'Content-Type': 'text/html'
... which didn't work. Specifying "charset=UTF-8" immediately forced Chrome to render chunked responses.
Why are you doing all the fs operations manually? You'd probably be better off using the fs.createReadStream() function.
On top of that, my guess is that Chrome is expecting you to return a 206 response code. Check req.headers.range, and see if Chrome is expecting a "range" of the media file to be returned. If it is, then you will have to only send back the portion of the file requested by the web browser.
But why reinvent the wheel? There are tons of node modules that do this sort of thing for you. Try Connect/Express' static middleware. Good luck!