Handling Accept headers in node.js restify

I am trying to properly handle Accept headers in a RESTful API in node.js/restify by using WrongAcceptError, as follows.
var restify = require('restify');
var server = restify.createServer();

// Write some content as JSON together with appropriate HTTP headers.
function respond(status, response, contentType, content) {
  var json = JSON.stringify(content);
  response.writeHead(status, {
    'Content-Type': contentType,
    'Content-Encoding': 'UTF-8',
    'Content-Length': Buffer.byteLength(json, 'utf-8')
  });
  response.write(json);
  response.end();
}

server.get('/api', function (request, response, next) {
  var contentType = "application/vnd.me.org.api+json";
  var properContentType = request.accepts(contentType);
  if (properContentType != contentType) {
    return next(new restify.WrongAcceptError("Only provides " + contentType));
  }
  respond(200, response, contentType, {
    "uri": "http://me.org/api",
    "users": "/users",
    "teams": "/teams"
  });
  return next();
});

server.listen(8080, function () {});
which works fine if the client provides the right Accept header, or no header at all, as seen here:
$ curl -is http://localhost:8080/api
HTTP/1.1 200 OK
Content-Type: application/vnd.me.org.api+json
Content-Encoding: UTF-8
Content-Length: 61
Date: Tue, 02 Apr 2013 10:19:45 GMT
Connection: keep-alive
{"uri":"http://me.org/api","users":"/users","teams":"/teams"}
The problem is that if the client does provide a wrong Accept header, the server will not send the error message:
$ curl -is http://localhost:8080/api -H 'Accept: application/vnd.me.org.users+json'
HTTP/1.1 500 Internal Server Error
Date: Tue, 02 Apr 2013 10:27:23 GMT
Connection: keep-alive
Transfer-Encoding: chunked
because the client is not assumed to understand the error message, which is in JSON, as seen here:
$ curl -is http://localhost:8080/api -H 'Accept: application/json'
HTTP/1.1 406 Not Acceptable
Content-Type: application/json
Content-Length: 80
Date: Tue, 02 Apr 2013 10:30:28 GMT
Connection: keep-alive
{"code":"WrongAccept","message":"Only provides application/vnd.me.org.api+json"}
My question is therefore, how do I force restify to send back the right error status code and body, or am I doing things wrong?

The problem is actually that you're returning a JSON object with a content type (application/vnd.me.org.api+json) that Restify doesn't know about (and it therefore raises a "no formatter found" error).
You need to tell Restify how your responses should be formatted:
server = restify.createServer({
  formatters: {
    '*/*': function (req, res, body) { // 'catch-all' formatter
      if (body instanceof Error) { // see text
        body = JSON.stringify({
          code: body.body.code,
          message: body.body.message
        });
      }
      return body;
    }
  }
});
The body instanceof Error check is also required, because an Error has to be converted to JSON before it can be sent back to the client.
The */* construction creates a 'catch-all' formatter, which is used for all mime-types that Restify can't handle itself (that list is application/javascript, application/json, text/plain and application/octet-stream). I can imagine that for certain cases the catch-all formatter could pose issues, but that depends on your exact setup.
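If the catch-all ever gets in the way, a narrower variant is to key the formatter to the vendor type itself (a sketch, assuming your restify version accepts formatters keyed by an exact content type):
var server = restify.createServer({
  formatters: {
    // Sketch: only handles the custom vendor type; restify's built-in
    // formatters keep handling everything else.
    'application/vnd.me.org.api+json': function (req, res, body) {
      if (body instanceof Error) {
        body = JSON.stringify({ code: body.body.code, message: body.body.message });
      } else if (typeof body === 'object') {
        body = JSON.stringify(body);
      }
      return body;
    }
  }
});
With that in place, errors for this content type get serialized the same way, without intercepting mime types you never emit.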

Related

how to disable res.writeHead() output extra number before output?

I created an HTTP server with Express.
Below is the server code:
router.get('/', function(req, res, next) {
  // req.socket.setTimeout(Infinity);
  res.writeHead(200, {
    'Content-Type': 'text/plain; charset=utf-8', // <- Important headers
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  });
  res.write('hell');
  res.write('world');
  res.end();
  // res.write('\n\n');
  // response = res;
});
When I use netcat to GET the URL, the output looks like this:
GET /sse HTTP/1.1
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/plain; charset=utf-8
Cache-Control: no-cache
Expires: 0
Connection: keep-alive
Keep-Alive: timeout=5, max=97
Date: Fri, 30 Jun 2017 11:50:00 GMT
Transfer-Encoding: chunked
4
hell
5
world
0
My question is: why is there always a number before the output of every res.write()? The number seems to be the length of that output.
How can I remove the number?
This is how chunked encoding works. You don't have to declare up front how many bytes you are going to send; instead, every chunk is prepended with the number of bytes it contains, written in hexadecimal. For example: 4 for "hell", 5 for "world", and finally 0 for "no more bytes to send".
You can see that you have the chunked encoding header present:
Transfer-Encoding: chunked
Now, to answer your question directly: to remove the numbers, you have to switch off chunked encoding. To do that, set the Content-Length header to the number of bytes you are going to send in the body of the response.
For example:
app.get('/', function(req, res, next) {
  res.writeHead(200, {
    'Content-Type': 'text/plain; charset=utf-8',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'Content-Length': 9, // <<<--- NOTE HERE
  });
  res.write('hell');
  res.write('world');
  res.end();
});
(Of course in the real code you'd have to either calculate the length of the response or build up a string or buffer and get its length just before you set the Content-length header.)
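For instance, a minimal sketch using Node's Buffer.byteLength, so multi-byte UTF-8 characters are counted as bytes rather than characters:
app.get('/', function (req, res, next) {
  // Build the body first, then measure it in bytes.
  var body = 'hell' + 'world';
  res.writeHead(200, {
    'Content-Type': 'text/plain; charset=utf-8',
    'Content-Length': Buffer.byteLength(body, 'utf-8')
  });
  res.end(body);
});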
But note that if you use curl or any other HTTP client, it will "remove" the numbers for you. With netcat you just happened to see the underlying implementation detail of chunked encoding; real HTTP clients handle it transparently.
Normally in HTTP you declare the length of the entire response in the headers and then send the body - in one piece, in multiple chunks, whatever - but it has to have the same length as what you declared. It means that you cannot start sending data before you know the length of everything that you want to send. With chunked encoding it's enough to declare the length of each chunk that you send but you don't have to know the length of the entire response - which could even be infinite, it's up to you. It lets you start sending as soon as you have anything to send which is very useful.
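That property is easy to see with a small sketch (a hypothetical /stream route; each write goes out as its own chunk, long before the total size is known):
app.get('/stream', function (req, res) {
  // No Content-Length, so the response is sent chunked.
  res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
  var count = 0;
  var timer = setInterval(function () {
    res.write('tick ' + count + '\n');
    if (++count === 5) {
      clearInterval(timer);
      res.end(); // sends the terminating zero-length chunk
    }
  }, 1000);
});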

Azure Functions: NodeJS - HTTP Response Renders as XML Rather than HTTP Response

I have an Azure function written in NodeJS where I'm attempting to cause an HTTP redirect with a 302. The documentation is very sparse on what the valid entries in the response are. I've created an object with what I feel should be the correct entries to generate the redirect, but all I get is an XML response. Even items like the status code are shown in the XML rather than changing the real status code.
What am I doing wrong?
My Code:
module.exports = function (context, req) {
  var url = "https://www.google.com";
  context.res = {
    status: 302,
    headers: {
      Location: url
    }
  };
  context.done();
};
This is the response I'm getting in the browser:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 1164
Content-Type: application/xml; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Wed, 18 Jan 2017 00:54:20 GMT
Connection: close
<ArrayOfKeyValueOfstringanyType xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays"><KeyValueOfstringanyType><Key>status</Key><Value xmlns:d3p1="http://www.w3.org/2001/XMLSchema" i:type="d3p1:int">302</Value></KeyValueOfstringanyType><KeyValueOfstringanyType><Key>headers</Key><Value i:type="ArrayOfKeyValueOfstringanyType"><KeyValueOfstringanyType><Key>Location</Key><Value xmlns:d5p1="http://www.w3.org/2001/XMLSchema" i:type="d5p1:string">https://www.google.com</Value></KeyValueOfstringanyType></Value></KeyValueOfstringanyType></ArrayOfKeyValueOfstringanyType>
The problem is that you're not defining the "body" in the response. It can be set to null, but it must be set for Azure Functions to interpret the response object properly.
e.g. Update your code to:
module.exports = function (context, req) {
  var url = "https://www.google.com";
  context.res = {
    status: 302,
    headers: {
      Location: url
    },
    body: {}
  };
  context.done();
};
You will then get the desired response:
HTTP/1.1 302 Found
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 0
Expires: -1
Location: https://www.google.com
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Wed, 18 Jan 2017 01:10:13 GMT
Connection: close
Edited 2/16/2017 - Using "null" for the body currently throws an error on Azure. As a result the answer was updated to use {} instead.
This is a bug with Azure Functions; see this content headers issue.
For now, a workaround is to remove any content related headers if the body of your response is null.
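A sketch of that workaround (hypothetical; it just strips content-related headers before completing the response):
module.exports = function (context, req) {
  var res = {
    status: 302,
    headers: { Location: "https://www.google.com" },
    body: null
  };
  // Workaround sketch: with a null body, drop content-related headers.
  if (res.body === null && res.headers) {
    delete res.headers['Content-Type'];
    delete res.headers['Content-Length'];
  }
  context.res = res;
  context.done();
};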

Emotion API Project Oxford base64 image

I am trying to make a call to the Emotion API via JavaScript within a PhoneGap app. I encoded the image into base64 and verified that the data can be decoded by one of the online tools. This is the code that I found on the web to use:
var apiKey = "e371fd4333ccad2"; // (you can get a free key on the site; this one is modified here)
//apiUrl: The base URL for the API. Find out what this is for other APIs via the API documentation
var apiUrl = "https://api.projectoxford.ai/emotion/v1.0/recognize";
"file" is the base64 string.
function CallAPI(file, apiUrl, apiKey) {
  // console.log("file=> " + file);
  $.ajax({
    url: apiUrl,
    beforeSend: function (xhrObj) {
      xhrObj.setRequestHeader("Content-Type", "application/octet-stream");
      xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key", apiKey);
    },
    type: "POST",
    data: file,
    processData: false
  })
  .done(function (response) {
    console.log("in call api a");
    ProcessResult(response);
  })
  .fail(function (error) {
    console.log(error.getAllResponseHeaders());
  });
}

function ProcessResult(response) {
  console.log("in ProcessResult");
  var data = JSON.stringify(response);
  console.log(data);
}
I got back this:
Expires: -1
Access-Control-Allow-Origin: *
Pragma: no-cache
Cache-Control: no-cache
Content-Length: 60
Date: Fri, 01 Apr 2016 13:34:32 GMT
Content-Type: application/json; charset=utf-8
X-Powered-By: ASP.NET
So I tried their console test page.
https://dev.projectoxford.ai/docs/services/5639d931ca73072154c1ce89/operations/563b31ea778daf121cc3a5fa/console
I can put in an image like "example.com/man.jpg" and it works great, but if I take the same image encoded as base64, all I get is "BadBody". I have tried it both with content type "application/octet-stream" and "application/json" and get the same error. A sample of the encoded data and the HTTP request looks like:
POST https://api.projectoxford.ai/emotion/v1.0/recognize HTTP/1.1
Content-Type: application/octet-stream
Host: api.projectoxford.ai
Ocp-Apim-Subscription-Key: ••••••••••••••••••••••••••••••••
Content-Length: 129420
data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAASABIAAD/...
i get back:
Pragma: no-cache
Cache-Control: no-cache
Date: Fri, 01 Apr 2016 16:23:09 GMT
X-Powered-By: ASP.NET
Content-Length: 60
Content-Type: application/json; charset=utf-8
Expires: -1
{
  "error": {
    "code": "BadBody",
    "message": "Invalid face image."
  }
}
I am now not sure whether you can send an image like this from JavaScript. Can anyone tell me if my JavaScript is correct, or whether an encoded base64 image can be sent to the site?
Thanks for your help,
tim
This API does not accept data URIs for images. What you'll need to do is convert it to a binary blob. Though this answer is for a different Project Oxford API, you can apply the same technique.
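A minimal sketch of that conversion (dataUriToBlob is a made-up helper name; it assumes the data:image/jpeg;base64, prefix shown in the question):
// Strip the data-URI prefix and decode the base64 payload into a
// binary Blob that can be POSTed as application/octet-stream.
function dataUriToBlob(dataUri) {
  var base64 = dataUri.split(',')[1]; // drop "data:image/jpeg;base64,"
  var binary = atob(base64);          // base64 -> binary string
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type: 'application/octet-stream' });
}
// Then send the blob instead of the raw base64 string:
// CallAPI(dataUriToBlob(file), apiUrl, apiKey);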

Show function provides json

My CouchDB show function will not run the provides('json', ...) function. It will run the html provides in certain cases though. Here is the show function:
function(doc, req) {
  provides('json', function() {
    return {'json': doc};
  });
  provides('html', function() {
    return "<html><body>html string here</body></html>";
  });
  return {'json': {
    'hello': "goodbye"
  }};
}
Here is a sample request sending Accept: text/x-json; hello:goodbye is also returned if I use Accept: application/json.
dave#ubuntu-laptop:~/py/liqc$ curl -i -H "Accept: text/x-json" http://127.0.0.1:8001/liqc/user-dave
HTTP/1.1 200 OK
Content-Length: 20
Vary: Accept
Server: CouchDB/1.0.2 (Erlang OTP/R14B)
ETag: "6V7EMSS64ZQ5SRLI0EYQVDWES"
Cache-Control: must-revalidate
Date: Mon, 27 Jan 2014 15:54:31 GMT
Content-Type: text/plain;charset=utf-8, text/x-json
{"hello":"goodbye"}
When I request text/html, I also get hello:goodbye. If I remove the final return of the show function however, application/json will continue to give me hello:goodbye, but text/html will give me the results I want!
dave#ubuntu-laptop:~/py/liqc$ curl -i -H "Accept: text/html" http://127.0.0.1:8001/liqc/user-dave
HTTP/1.1 200 OK
Content-Length: 42
Vary: Accept
Server: CouchDB/1.0.2 (Erlang OTP/R14B)
ETag: "9B8K3XGK28Y7RL2ART28WLL50"
Date: Mon, 27 Jan 2014 16:02:41 GMT
Content-Type: text/html; charset=utf-8
<html><body>html string here</body></html>
Am I doing something wrong, or is this something going on with CouchDB? I am running a localhost reverse proxy to Cloudant, BTW. Thanks for any help.
You're not supposed to use a final return if you use provides. return supersedes any provides.
Plus, what do you expect to get when requesting JSON while your show function provides JSON in two different places? Use only provides and you will be fine.
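For example, a sketch of the show function from the question with the final return dropped, so provides alone decides the response:
function (doc, req) {
  // One handler per content type; no trailing return to supersede them.
  provides('json', function () {
    return { 'json': doc };
  });
  provides('html', function () {
    return "<html><body>html string here</body></html>";
  });
}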
About this:
If I remove the final return of the show function however, application/json will continue to give me hello:goodbye
There is no way you can get "hello":"goodbye" if you removed the final return completely. Maybe you forgot to update the design document? Debugging the wrong source code can be very frustrating…

Node.js: chunked transfer encoding

Is this code valid HTTP/1.1?
var fs = require('fs')
var http = require('http')

var buf = function (res, fd, i, s, buffer) {
  if (i + buffer.length < s) {
    fs.read(fd, buffer, 0, buffer.length, i, function (e, l, b) {
      res.write(b.slice(0, l))
      //console.log(b.toString('utf8', 0, l))
      i = i + buffer.length
      buf(res, fd, i, s, buffer)
    })
  } else {
    fs.read(fd, buffer, 0, buffer.length, i, function (e, l, b) {
      res.end(b.slice(0, l))
      fs.close(fd)
    })
  }
}

var app = function (req, res) {
  var head = {'Content-Type': 'text/html; charset=UTF-8'}
  switch (req.url.slice(-3)) {
    case '.js': head = {'Content-Type': 'text/javascript'}; break;
    case 'css': head = {'Content-Type': 'text/css'}; break;
    case 'png': head = {'Content-Type': 'image/png'}; break;
    case 'ico': head = {'Content-Type': 'image/x-icon'}; break;
    case 'ogg': head = {'Content-Type': 'audio/ogg'}; break;
    case 'ebm': head = {'Content-Type': 'video/webm'}; break;
  }
  head['Transfer-Encoding'] = 'chunked'
  res.writeHead(200, head)
  fs.open('.' + req.url, 'r', function (err, fd) {
    fs.fstat(fd, function (err, stats) {
      console.log('.' + req.url + ' ' + stats.size + ' ' + head['Content-Type'] + ' ' + head['Transfer-Encoding'])
      var buffer = new Buffer(100)
      buf(res, fd, 0, stats.size, buffer)
    })
  })
}

http.createServer(app).listen(8000, "127.0.0.1")
console.log('GET http://127.0.0.1:8000/appwsgi/www/index.htm')
I think I am violating HTTP/1.1 here? Text files do seem to work fine, but that could be coincidental. Is my "200 OK" header right, or does it need to be "100"? Is one header sufficient?
If you're doing chunked transfer encoding, you actually need to set that header:
Transfer-Encoding: chunked
You can see this in the headers returned by Google, which does chunked transfers for its homepage and most likely other pages:
HTTP/1.1 200 OK
Date: Sat, 04 Jun 2011 00:04:08 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: PREF=ID=f9c65f4927515ce7:FF=0:TM=1307145848:LM=1307145848:S=fB58RFtpI5YeXdU9; expires=Mon, 03-Jun-2013 00:04:08 GMT; path=/; domain=.google.com
Set-Cookie: NID=47=UiPfl5ew2vCEte9JyBRkrFk4EhRQqy4dRuzG5Y-xeE---Q8AVvPDQq46GYbCy9VnOA8n7vxR8ETEAxKCh-b58r7elfURfiskmrOCgU706msiUx8L9qBpw-3OTPsY-6tl; expires=Sun, 04-Dec-2011 00:04:08 GMT; path=/; domain=.google.com; HttpOnly
Server: gws
X-XSS-Protection: 1; mode=block
Transfer-Encoding: chunked
EDIT Yikes, that read is way too complicated:
var app = function (req, res) {
  var head = {'Content-Type': 'text/html'}
  switch (req.url.slice(-3)) {
    case '.js': head = {'Content-Type': 'text/javascript'}; break;
    case 'css': head = {'Content-Type': 'text/css'}; break;
    case 'png': head = {'Content-Type': 'image/png'}; break;
    case 'ico': head = {'Content-Type': 'image/x-icon'}; break;
    case 'ogg': head = {'Content-Type': 'audio/ogg'}; break;
    case 'ebm': head = {'Content-Type': 'video/webm'}; break;
  }
  res.writeHead(200, head)
  var file_stream = fs.createReadStream('.' + req.url);
  file_stream.on("error", function (exception) {
    console.error("Error reading file: ", exception);
  });
  file_stream.on("data", function (data) {
    res.write(data);
  });
  file_stream.on("close", function () {
    res.end();
  });
}
There you go, a nice streamed buffer for you to write with. Here's a blog post I wrote on different ways to read in files. I recommend looking that over so you can see how to best work with files in node's asynchronous environment.
Since Node.js implicitly sets 'Transfer-Encoding: chunked', all I needed to send in headers was the content type with charset like:
'Content-Type': 'text/html; charset=UTF-8'
Initially it was:
'Content-Type': 'text/html'
... which didn't work. Specifying "charset=UTF-8" immediately forced Chrome to render chunked responses.
Why are you doing all the fs operations manually? You'd probably be better off using the fs.createReadStream() function.
On top of that, my guess is that Chrome is expecting you to return a 206 response code. Check req.headers.range, and see if Chrome is expecting a "range" of the media file to be returned. If it is, then you will have to only send back the portion of the file requested by the web browser.
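A rough sketch of that range handling (serveRange is a hypothetical helper; it assumes a single bytes=start-end range and skips validation):
var fs = require('fs');
function serveRange(req, res, path, stats) {
  var match = /bytes=(\d+)-(\d*)/.exec(req.headers.range);
  var start = parseInt(match[1], 10);
  var end = match[2] ? parseInt(match[2], 10) : stats.size - 1;
  res.writeHead(206, {
    'Accept-Ranges': 'bytes',
    'Content-Range': 'bytes ' + start + '-' + end + '/' + stats.size,
    'Content-Length': end - start + 1
  });
  // fs.createReadStream treats "end" as inclusive.
  fs.createReadStream(path, { start: start, end: end }).pipe(res);
}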
But why reinvent the wheel? There are tons of node modules that do this sort of thing for you. Try Connect/Express' static middleware. Good luck!
