Not Getting Response From Another Node.js Server?

I am learning Node.js and have run into a small problem. I am uploading files to Google Cloud Storage. All file types upload successfully and I get back a signed URL for the file, which I send from one Node.js server to another. When I upload a jpg or png file, the upload succeeds and the response reaches the other server.
But when I upload an xlsx (Excel) file, the other server never receives a response. This happens only when I upload an Excel file.
Any solution for that?
First Node.js server, from which the file is uploaded to GCS:
if (is_valid_request) {
  interface.gcpFileUpload(req, (data) => {
    console.log("response = " + JSON.stringify(data));
    res.json(data);
  });
} else {
  res.json({ success: 0, err: "request is not valid" });
}
Second server, where I want to receive the response:
getResponse(source, filename, destination, bucket_name, json_url, callback) {
  var formData = {
    file: fs.createReadStream(source),
    filename: filename,
    json_url: json_url,
    destination: destination,
    bucket_name: bucket_name
  };
  var options = {
    method: 'POST',
    url: endpoint_of_another_server,
    headers: {
      api_key: key,
      'Content-Type': 'application/json'
    },
    formData: formData
  };
  console.log("I AM RUNNING TILL HERE");
  request(options, function (error, response) {
    console.log("WE ARE GETTING SOME RESPONSE " + response);
    return callback(response.body);
  });
}

It is hard to conclude anything without seeing more of the code. But going by your problem statement, if the issue occurs only with *.xlsx (Excel) files, it is likely one of the following:
You may need to allow the *.xlsx file extension (and its MIME type) in your code, as sketched below
If the file extension is already allowed, then the issue may be the size of the file (for example, if it holds GBs of data)
Nevertheless, we can help you out more if you provide more of the code.
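For illustration, if the upload route filters files through an allow-list, the xlsx extension and MIME type would each need an entry. This is a hypothetical sketch; the names ALLOWED_EXTENSIONS, ALLOWED_MIME_TYPES, and isAllowed are made up, and your actual filter may look different:
// Hypothetical allow-list check; adapt to however your upload route filters files.
const path = require('path');

const ALLOWED_EXTENSIONS = ['.jpg', '.png', '.xlsx']; // .xlsx must be listed
const ALLOWED_MIME_TYPES = [
  'image/jpeg',
  'image/png',
  // the MIME type browsers send for .xlsx files
  'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
];

function isAllowed(filename, mimetype) {
  return ALLOWED_EXTENSIONS.includes(path.extname(filename).toLowerCase())
    && ALLOWED_MIME_TYPES.includes(mimetype);
}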

Related

TypeScript CLI: How to save octet-stream response from node-fetch GET request?

I am making a GET request using node-fetch to a web API. It returns an octet-stream response that should be saved as a local file.
I tried using downloadjs (download()) and download.js (downloadBlob()), but neither of them worked.
downloadBlob() returned a "createObjectURL is not a function" error, and download() returned a "window is not defined at object." error.
The call is below
let res = await fetch(apiURL, {
  method: 'GET',
  headers: {
    'Content-Type': 'application/json',
  }
});
I think I am completely lost here. What should I do to download the file to the local drive? How should I construct the .then block here?
downloadjs and download.js won't help because they are front-end libraries that trigger the download process in a browser, for instance when you generate an image on the client (in the browser) and want the user to download it.
To save an octet-stream in Node (CLI) you can use the fs module:
const fs = require('fs');
const fetch = require('node-fetch'); // node-fetch v2

// Collect the whole response body into a Buffer, then write it to disk.
const data = await fetch(apiURL, {
  method: 'GET',
  headers: {
    'Content-Type': 'application/json',
  }
}).then(res => res.buffer());

fs.writeFile('filename.dat', data, (err) => {
  if (err) throw err;
  console.log('The file has been saved!');
});
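For large files, a streaming variant avoids holding the whole payload in memory. A minimal sketch, assuming node-fetch v2, where res.body is a Node.js readable stream:
const fs = require('fs');
const fetch = require('node-fetch'); // v2: res.body is a readable stream

const res = await fetch(apiURL);
const out = fs.createWriteStream('filename.dat');
res.body.pipe(out); // stream the response straight to disk
out.on('finish', () => console.log('The file has been saved!'));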

Sending PDF files to the user from Node.js to React

I have PDF documents stored in the file system on the server side.
I need to let the user download one of them when they click on download.
The problem is that I know how to send a file from Node.js to the browser, but here the request is made by a React axios call, so when I send a file, the response goes to React. How do I get that PDF file to the user? Do I access the file system directly from my front-end code?
I logged the response in the browser console after doing res.sendFile(file_path) in Node.js.
How do I process this response so that the user can download the PDF?
You can use file-saver to download the file. Below is the function I'm using for PDF download (response.data is the Buffer that Node.js sends back as the response).
import FileSaver from 'file-saver';
...

_onPdfFetched() {
  FileSaver.saveAs(
    new Blob([response.data], { type: 'application/pdf' }),
    `sample.pdf`
  );
}
Or you can just show the PDF to the user:
window.open(response.data, '_blank');
Edit
The axios call should be like this:
axios.get(url, {
  responseType: 'arraybuffer',
  headers: {
    Accept: 'application/pdf',
  },
});
Edit 2
The Node.js code should look like this:
router.post('/api/downloadfile', (req, res, next) => {
  const src = fs.createReadStream('path to sample.pdf');
  res.writeHead(200, {
    'Content-Type': 'application/pdf',
    'Content-Disposition': 'attachment; filename=sample.pdf',
    'Content-Transfer-Encoding': 'Binary'
  });
  src.pipe(res);
});
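Putting the two edits together, the client side might look like the sketch below. The downloadPdf helper and apiUrl parameter are made up for illustration; also note that Edit 2 registers the route with router.post, so the HTTP verb on the client must match whichever verb the server actually uses:
import axios from 'axios';
import FileSaver from 'file-saver';

// Hypothetical helper tying the axios call to FileSaver.
async function downloadPdf(apiUrl) {
  const response = await axios.get(apiUrl, {
    responseType: 'arraybuffer', // receive raw bytes, not a parsed string
    headers: { Accept: 'application/pdf' },
  });
  FileSaver.saveAs(
    new Blob([response.data], { type: 'application/pdf' }),
    'sample.pdf'
  );
}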

Node.js request: fetch PDF file and save to disk, but it opens as a blank white page

I am fetching a PDF file and want to save it to disk. Below is my code:
request.post({
  url: some_api_url,
  json: true,
  body: {
    by: user,
    password: 'mypassword'
  }
}, function (err, response, body) {
  if (err) next(err);
  else {
    if (typeof body === 'string') {
      //console.log(body);
      // note: fs.writeFileSync is synchronous and ignores this callback
      fs.writeFileSync(path.join(__dirname, 'abc.pdf'), body, 'binary', function (err) {
        console.log(err);
      });
    } else {
      console.log("invalid file");
    }
  }
});
This saves the PDF to disk with the right size (about 200kb), which means there is data in the body of the POST response. However, the PDF opens up blank in Document Viewer on Ubuntu.
I have also compared the "cat abc.pdf | less" output of a working PDF file (which opens fine) with that of the one downloaded through request, and the top and bottom of both are the same.
Below is the API code that serves the PDF file. If I make the request in Postman, the PDF file downloads, saves to disk, and opens up fine.
let fileStat = fs.statSync(filePath);
res.writeHead(200, {
  'Content-Disposition': 'attachment; filename="report.pdf"',
  'Content-Type': 'application/pdf',
  'Content-Length': fileStat.size
});
let readStream = fs.createReadStream(filePath);
res.on('finish', function () {
  console.log("file sent");
});
readStream.pipe(res);
Writing up my solution in case anyone needs it.
The problem was the encoding used while reading the file and sending it. I now read the file as base64, transmit it, and then decode it from base64 when saving, and it works fine.
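A minimal sketch of that base64 round trip, assuming the same request/fs setup as above; the pdf_base64 field name is made up:
// Server: send the PDF as a base64 string inside a JSON response.
const pdfBase64 = fs.readFileSync(filePath).toString('base64');
res.json({ pdf_base64: pdfBase64 }); // hypothetical field name

// Client: decode the base64 string back into raw bytes before writing.
const buf = Buffer.from(body.pdf_base64, 'base64');
fs.writeFileSync(path.join(__dirname, 'abc.pdf'), buf);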

Google Cloud Print API - white page when printing PDF

I want to send a PDF file to be printed using the Google Cloud Print API. The code below gives me a positive message telling me that one page was generated. But when I go and check what came out, I get an empty page.
The same thing happens if I save the print job to Google Drive.
The code
unirest.post('https://www.google.com/cloudprint/submit')
  .header('Authorization', 'Bearer ' + token)
  .header('Accept-Charset', 'utf-8')
  .field('xsrf', xsrf_token)
  .field('printerid', printerId)
  .field('ticket', '{"version": "1.0", "print": {}}')
  .field('title', 'Test from Simpe.li')
  .field('contentType', 'application/pdf')
  .attach('content', buffer)
  .end(function (res) {
    console.log(res);
  });
I know that what I'm sending is a PDF, because when I change
.field('contentType', 'application/pdf')
to
.field('contentType', 'text/plain')
I get 53 pages of text, which is the raw content of the PDF file.
Question
What am I doing wrong?
Tech spec
NodeJS v4.1.1
Unirest v0.4.2
It turns out that the Google documentation leaves some key information out. To send binary data, such as a PDF, you need to convert the file to base64. In addition, you need to tell Google that you are sending a base64 blob by adding the field contentTransferEncoding with the value base64.
Another important thing: there is a bug in Unirest (for Node.js at least) where sending a base64 file will not set the Content-Length header, and setting your own does not fix the problem. To circumvent this issue I had to switch to Request. The following POST to Google Cloud Print works:
let buffer64 = buffer.toString('base64');

let formData = {
  xsrf: xsrf_token,
  printerid: printerId,
  ticket: '{"version": "1.0"}',
  title: 'Test Print',
  contentTransferEncoding: 'base64',
  contentType: 'application/pdf',
  content: buffer64
};

let headersData = {
  'Authorization': 'Bearer ' + token
};

request.post({
  url: 'https://www.google.com/cloudprint/submit',
  headers: headersData,
  formData: formData
}, function (err, httpResponse, body) {
  if (err) {
    return console.error('upload failed:', err);
  }
  console.log('Upload successful! Server responded with:', body);
});
I hope this will help others :)

Downloading S3 file as attachment using knox/node.js passthrough

I am trying to get file downloads from S3 via node/knox working. My JavaScript call works and successfully downloads the file, but I want it to download as an attachment. I have tried setting the headers to 'Content-disposition': 'attachment; filename=myfile.zip', but that doesn't seem to work. Here is my sample code:
var mimetype = mime.lookup(product.filename);
var headers = {
  'Content-disposition': 'attachment; filename=' + product.filename,
  'Content-type': mimetype
};
var get = knox.getFile(product.filename, function (err, result) {
  if (err) { return next(err); }
  res.setHeader('Content-disposition', 'attachment; filename=' + product.filename);
  res.setHeader('Content-type', mimetype);
  result.pipe(res);
});
I have also tried setting those headers on the knox call, but it still won't download as an attachment.
It turned out the problem wasn't my server at all: I was unaware that you cannot use XHR (i.e. $resource with Angular) to download files as attachments. The simplest way I have found to get around this so far is to use XHR only to validate the download, returning a token to the user, which can then be used in a regular non-XHR request to get the actual file.
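A minimal sketch of that token handoff, assuming an Express app; the route paths and the issueToken/consumeToken helpers are made up for illustration:
// Server: the XHR hits this route first; it validates the request
// and returns a short-lived token.
app.post('/downloads/authorize', function (req, res) {
  var token = issueToken(req.user, req.body.filename); // hypothetical helper
  res.json({ token: token });
});

// Server: the browser then navigates here with a plain (non-XHR) request,
// which streams the file with attachment headers so a save dialog appears.
app.get('/downloads/:token', function (req, res, next) {
  var filename = consumeToken(req.params.token); // hypothetical helper
  if (!filename) { return res.sendStatus(403); }
  res.setHeader('Content-disposition', 'attachment; filename=' + filename);
  res.setHeader('Content-type', mime.lookup(filename));
  knox.getFile(filename, function (err, result) {
    if (err) { return next(err); }
    result.pipe(res);
  });
});

// Client: after the XHR succeeds, trigger a plain (non-XHR) navigation:
// window.location = '/downloads/' + token;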
