Node: Express: How to handle application/octet-stream;charset=;UTF-8 response? - node.js

I have a node-express application.
There, I'm trying to call an API which responds with a raw xlsx file as
'Content-Type' : 'application/octet-stream;charset=;UTF-8'
Code how I'm calling the API:
var unirest = require("unirest");
var reqClient = unirest("POST", "https://api.application.com/getExcel");
reqClient.headers({
  "Authorization": "Bearer " + req.session.passport.user.token,
  "Content-Type": "application/json"
});
reqClient.type("json");
reqClient.send(JSON.stringify(requestbody));
reqClient.end(function (res) {
  if (res.error) throw new Error(res.error);
  console.log(res.body);
});
Now there are two things I'm trying to do with this data.
First, write it into an Excel file.
Below is how I'm trying it:
let data = res.body // res is the response coming from the API
let buf = Buffer.from(data);
excelfile = fs.createWriteStream("result.xlsx");
excelfile.write(buf);
excelfile.end();
Second, send it to the UI, where the Excel file will be created.
Below is my code for that:
let data = res.body // res is the response coming from the API
let buf = Buffer.from(data);
response.write(buf); //response is the response to the request to ui
response.end();
In both cases the file comes out corrupted.
But the API response itself must be fine, because when the UI consumes it directly, the xlsx file is generated properly.

When dealing with binary data, you have to set the encoding to null:
reqClient.encoding(null)
reqClient.end(function (res) {
  if (res.error) {
    return response.status(500).send('error');
  }
  // res.body is now a buffer
  response.write(res.body);
  response.end();
});
Otherwise, the data is converted to UTF-8 and you can't convert from UTF-8 back to binary and get the same data, which is what you were doing:
Buffer.from(res.body)
The recommended approach is to use streams directly. I don't see a simple way to do that with unirest, so I recommend using request or got, which let you .pipe the response directly to a file or to the Express res.

Related

Getting error 400 when trying to use Azure Speech Recognition and Flutter

I've been given the task of using the Azure Speech Recognition API in a Flutter application. The app is supposed to record the user's voice and send it to the Azure API. I tried the only pub.dev plugin I could find, but it did not work, and the documentation has no Flutter example. Since the request returned 200 in Postman and I was able to make it work in a JavaScript application, the problem must be in my Flutter application, probably something in the request, since it returns code 400 (bad request), saying that the request contains invalid data.
The code below is my request to the API. The file I'm reading the bytes from is a wav file containing the recorded voice.
Could you help me? Thanks for the attention.
var bytes = file.readAsBytesSync();
var response = await Dio().post(
  "https://brazilsouth.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=pt-BR",
  data: bytes,
  options: Options(
    headers: {
      "Ocp-Apim-Subscription-Key": "subscriptionKey",
      "Content-Type": "audio/wav"
    },
  ),
);
print(response.statusCode);
After trying to solve this problem for a couple of days, I finally got a successful response!
Future<dynamic> speechToText(File file) async {
  final bytes = file.readAsBytesSync();
  var headers = {
    'Ocp-Apim-Subscription-Key': key,
    'Content-Type': 'audio/wav'
  };
  var response;
  Map<String, dynamic> responseBody;
  var recognizedVoiceText;
  try {
    response = await http.post(
      "https://brazilsouth.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=pt-BR",
      body: bytes,
      headers: headers,
    );
    // The response body is a string that needs to be decoded as JSON in order to extract the text.
    responseBody = jsonDecode(response.body);
    recognizedVoiceText = responseBody["DisplayText"];
  } catch (e) {
    print('Error: ${e.toString()}');
    recognizedVoiceText = "Something went wrong";
  }
  return recognizedVoiceText;
}

How to construct and extract value from simple HTTPS request in Node.js?

I have a simple HTTPS request -
https://api.pro.coinbase.com/products/btc-eur/ticker
In the browser this returns one object. What's the simplest code that will allow me to retrieve and display this object (as is) in the terminal of Node?
const https = require('https')
const url = https.get('https://api.pro.coinbase.com/products/btc-eur/ticker')
const myObject = JSON.parse(url)
console.log(myObject)
A simple copy / paste of the above code in VSC returns the error SyntaxError: Unexpected token o in JSON at position 1.
@mamba76, welcome to the SO community. Please use the Node.js node-fetch package; it is much simpler to use. You can install it with npm install.
Following code might help:
"use strict";
const fetch = require('node-fetch')
async function getValue() {
// Invoke the API.
// Wait until data is fetched.
let response = await fetch('https://api.pro.coinbase.com/products/btc-eur/ticker');
let value = await response.json();
return value;
}
getValue().then(result => {console.log(result.price);});
As a good practice, always assume that API calls over HTTP (whether within your own network or outside it) might take time to return data, and hence use the async-await pattern for these requests.
Extending @Akshay.N's answer, and without using external dependencies:
const https = require('https')
https.get("https://api.pro.coinbase.com/products/btc-eur/ticker", res => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const myObject = JSON.parse(body);
    console.log(myObject);
  })
})
Now, what we're doing here is waiting on the data event for as long as data is coming in, appending each chunk to the variable body. Once the end event fires, we take that as a signal that all data has been received, and we can parse body into an object with JSON.parse (assuming the data was serialized as JSON; if it wasn't, JSON.parse will throw an error).
This tutorial is helpful: https://davidwalsh.name/nodejs-http-request
Try something like this:
https.get("https://api.pro.coinbase.com/products/btc-eur/ticker", res => {
  // Note: this assumes the whole body arrives in a single chunk,
  // which is not guaranteed for larger responses.
  res.on('data', (chunk) => { console.log(JSON.parse(chunk)) })
})
With node (you need the request module):
// display object
(require("request")).get({
  url: "myurl",
  json: true
}, function (e, r, b) {
  console.log(b);
});

// display as string
(require("request")).get({
  url: "myurl",
  json: false
}, function (e, r, b) {
  console.log(b);
});
With just curl in your terminal (without node)
curl myurl

nodejs handling arraybuffers

Suppose I make a multipart, application/octet-stream request with responseType set to 'arraybuffer', receive the response in Node.js, and try to write it to a file. How can I handle this so that I don't corrupt the contents?
My current approach is something like this:
var req = restler.post(url, opts)
  .on('data', function (data) {
    console.log('receiving data...');
    console.log(data);
  }).on('complete', function (data) {
    var buff = new Buffer(data) // this is prolly incorrect, but I can't figure this out at all
    fs.writeFile(file_name, buff.toString('binary'), function (err) {
      console.log('done!')
    });
  });
Here I write the contents into file_name.
When I fetch a Microsoft Word file this way, I only end up with a corrupt file. I'm using the restler package for this.
According to the restler documentation, you can set decoding: 'buffer' in your opts and it will keep the binary data intact as a Buffer instead of the default utf8-encoded string. From there it's just a matter of passing the buffer directly to fs.writeFile() without calling buffer.toString().

hitting a multipart url in nodejs

I have client code that uses the form-data module to hit a URL that returns a content-type of image/jpeg. Below is my code:
var FormData = require('form-data');
var fs = require('fs');
var form = new FormData();
//form.append('POLICE', "hello");
//form.append('PAYSLIP', fs.createReadStream("./Desert.jpg"));
console.log(form);
//https://fbcdn-profile-a.akamaihd.net/hprofile-ak-xfp1/v/t1.0- 1/c8.0.50.50/p50x50/10934065_1389946604648669_2362155902065290483_n.jpg?oh=13640f19512fc3686063a4703494c6c1&oe=55ADC7C8&__gda__=1436921313_bf58cbf91270adcd7b29241838f7d01a
form.submit({
  protocol: 'https:',
  host: 'fbcdn-profile-a.akamaihd.net',
  path: '/hprofile-ak-xfp1/v/t1.0-1/c8.0.50.50/p50x50/10934065_1389946604648669_2362155902065290483_n.jpg?oh=13640f19512fc3686063a3494c6c1&oe=55ADCC8&__gda__=1436921313_bf58cbf91270adcd7b2924183',
  method: 'get'
}, function (err, res) {
  var data = "";
  res.on("data", function (chunks) {
    data += chunks;
  });
  res.on("end", function () {
    console.log(data);
    console.log("Response Headers - " + JSON.stringify(res.headers));
  });
});
I'm getting some chunked data, and the response headers I received were:
{"last-modified":"Thu, 12 Feb 2015 09:49:26 GMT","content-type":"image/jpeg","timing-allow-origin":"*","access-control-allow-origin":"*","content-length":"1443","cache-control":"no-transform, max-age=1209600","expires":"Thu, 30 Apr 2015 07:05:31 GMT","date":"Thu, 16 Apr 2015 07:05:31 GMT","connection":"keep-alive"}
I am now stuck on how to turn the response I received into a proper image. I tried base64 decoding, but it seemed to be the wrong approach. Any help will be much appreciated.
I expect that data, once the file has been completely downloaded, contains a Buffer.
If that is the case, you should write the buffer as is, without any decoding, to a file:
fs.writeFile('path/to/file.jpg', data, function onFinished (err) {
// Handle possible error
})
See fs.writeFile() documentation - you will see that it accepts either a string or a buffer as data input.
Extra awesomeness by using streams
Since the res object is a readable stream, you can simply pipe the data directly to a file, without keeping it in memory. This has the added benefit that if you download a really large file, Node.js will not have to keep the whole file in memory (as it does now) but will write it to the filesystem continuously as it arrives.
form.submit({
  // ...
}, function (err, res) {
  // res is a readable stream, so let's pipe it to the filesystem
  var file = fs.createWriteStream('path/to/file.jpg')
  res.pipe(file) // Send the incoming file to the filesystem
  file.on('finish', function writeDone () {
    // File is saved once the write stream finishes
  })
})
The chunk you got is the raw image. Do whatever it is you want with the image, save it to disk, let the user download it, whatever.
So if I understand your question correctly, you want to download a file from an HTTP endpoint and save it to your computer, right? If so, you should look into using the request module instead of form-data.
Here's a contrived example for downloading things using request:
var fs = require('fs');
var request = require('request');

request('http://www.example.com/picture.jpg')
  .pipe(fs.createWriteStream('picture.jpg'));
Where 'picture.jpg' is the location to save to disk. You can open it up using a normal file browser.

Response not ending and browser keeps loading - nodejs and graphicsmagick

I am new to nodejs. I am using graphicsmagick to resize the image before sending it to the browser.
My code looks like this (res is the response to be sent from function(req,res){...}) -
imageURLPromise
  .then(function (response) {
    var body = response.body;
    if (body) {
      console.log("Found From: " + response.request.uri.href);
      // set response headers here
      setResponseData(res, response);
      var buf = new Buffer(body, "binary");
      console.log(buf);
      // gm module
      gm(buf).resize(400, 300).toBuffer(function (err, buffer) {
        console.log("buffer here");
        res.end(buffer, "binary");
      });
    }
  }, function (error) {
    console.log(error);
  });
I get the image in the browser and I see the log "buffer here", but the browser stays in the "loading" state.
I have tried using .stream() with gm and pipe the stdout to response but it has the same problem.
If I do away with gm and directly write body to the response like this
res.end(body, 'binary');
then it works correctly.
Can someone tell what I am doing wrong here?
I figured out the problem.
The problem was not with node or gm but with the HTTP response headers.
When gm returns a buffer and we write that to the HTTP response, Node sets the Transfer-Encoding header to "chunked". In that case the Content-Length header should never be set.
You can read more about it here
http://en.wikipedia.org/wiki/Chunked_transfer_encoding
Since I was setting both, the browser kept waiting for content even after the image had been sent.
The code is exactly the same as I posted, except that in the setResponseData() function (which basically sets the headers) I am no longer setting the Content-Length header.
