Node.js - Data Stream from S3 Bucket

I'm developing software in Node.js that provides direct downloads from my S3 bucket.
I have written the following code:
const stream = s3.getObject(getParams).createReadStream();

stream.on("error", (err) => {
  res.send(err);
});

res.setHeader("Content-disposition", `attachment; filename=${path.basename(req.params[0])}`);
stream.pipe(res);
and it works!
But if the file doesn't exist, the download still starts, which is wrong (the downloaded file contains the error message).
Where is the best place to set the header, so that it is set only if the file exists?
I have tried several events, like on("data") and on("end"), but they don't work as expected.
Also, do I need to close the stream after this operation?
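One possible approach (a sketch, not from the original thread): check that the object exists with headObject before setting the attachment header, and only start streaming once the check succeeds. This assumes the aws-sdk v2 client and an Express-style req/res, as in the code above.

const path = require("path");

// Sketch: verify the object exists before committing to a download response.
s3.headObject(getParams, (err) => {
  if (err) {
    // Missing or inaccessible object: report the error, no attachment header set.
    res.status(err.statusCode || 500).send(err.message);
    return;
  }
  res.setHeader("Content-disposition", `attachment; filename=${path.basename(req.params[0])}`);
  // Piping to res ends the response automatically, so nothing needs closing by hand.
  s3.getObject(getParams).createReadStream().pipe(res);
});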

Related

How to upload the Google Cloud Text-to-Speech API's response to Cloud Storage [Node.js]

I am making a simple audio-creating web app using a Node.js server. I would like to create audio using the Cloud Text-to-Speech API and then upload that audio to Cloud Storage.
(I use Windows 10, Windows Subsystem for Linux, Debian 10.3, and the Google Chrome browser.)
This is the code in the Node.js server.
const textToSpeech = require('@google-cloud/text-to-speech');

const client = new textToSpeech.TextToSpeechClient();

async function quickStart() {
  // The text to synthesize
  const text = 'hello, world!';

  // Construct the request
  const request = {
    input: {text: text},
    // Select the language and SSML voice gender (optional)
    voice: {languageCode: 'en-US', ssmlGender: 'NEUTRAL'},
    // Select the type of audio encoding
    audioConfig: {audioEncoding: 'MP3'},
  };

  // Performs the text-to-speech request
  const [response] = await client.synthesizeSpeech(request);
  // response.audioContent holds the binary audio
  console.log(response);
}

quickStart();
I would like to upload response to Cloud Storage.
Can I upload response to Cloud Storage directly? Or do I have to save response on the Node.js server and then upload it to Cloud Storage?
I searched the Internet but couldn't find a way to upload response to Cloud Storage directly. So if you have a hint, please tell me. Thank you in advance.
You should be able to do that, with all your code in the same file. The best way to achieve it is by using a Cloud Function, which will be the one sending the file to your Cloud Storage. But yes, you will need to save your file using Node.js and then upload it to Cloud Storage.
To achieve that, you will need to save your file locally and then upload it to Cloud Storage. As you can check in the complete tutorial in this other post, you need to construct the file, save it locally, and then upload it. The code below is the main part you will need to add to your code.
...
// Construct the file metadata to write
const options = {
  metadata: {
    contentType: 'audio/mpeg',
    metadata: {
      source: 'Google Text-to-Speech'
    }
  }
};

// copied from https://cloud.google.com/text-to-speech/docs/quickstart-client-libraries#client-libraries-usage-nodejs
const [response] = await client.synthesizeSpeech(request);
// Write the binary audio content (response.audioContent is the downloaded file)
return await file.save(response.audioContent, options)
  .then(() => {
    console.log("File written to Firebase Storage.")
    return;
  })
  .catch((error) => {
    console.error(error);
  });
...
Once you have this part implemented, the file that was saved locally will be ready to be uploaded. I would recommend you take a closer look at the other post I mentioned, in case you have more doubts about how to achieve it.
Let me know if the information helped you!
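For context, the `file` object used in the snippet above is a Cloud Storage File handle, created in the elided part of the code. A minimal sketch of one way to obtain it with the @google-cloud/storage client (the bucket and object names here are placeholders, not from the thread):

const {Storage} = require('@google-cloud/storage');

// Placeholder names: substitute your own bucket and destination object.
const storage = new Storage();
const file = storage.bucket('my-audio-bucket').file('output.mp3');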

Download image files from Google Cloud Storage bucket to iOS local storage using Meteor

I am using a Meteor project to upload images to Google Cloud from an iOS device and download the same images back to the iOS device.
I don't get any issues while uploading the images; they get stored in the Google Storage bucket. The issue I am facing is while downloading the images: the code below downloads the images to the server's path.
bucket.file(srcFilename).download(options);
I want to download and store the images on the iOS device.
When I tried to read the file using createReadStream, my app gets stuck without any progress (I am not getting any callback).
bucket.file(srcFilename).createReadStream()
  .on('error', function(err) {
    console.log("error");
  })
  .on('response', function(response) {
    // Server connected and responded with the specified status and headers.
    console.log("response");
  })
  .on('end', function() {
    // The file is fully downloaded.
    console.log("The file is fully downloaded.");
  });
I hope I am not missing anything while downloading the images to the iOS device. I have looked but have been unable to find any other option to do the same.
Any help in this regard is really appreciated, as I am stuck at this very point.
I used the code below to get the file from Google Cloud, downloading it in chunks that I concatenate into a single buffer, which I then convert to base64. I use this base64 string to display the image and store it in local storage on the client side.
// Buffer.alloc(0) replaces the deprecated `new Buffer('')`.
var chunkNew = Buffer.alloc(0);

bucket.file(srcFilename).createReadStream()
  .on('data', function (chunk) {
    chunkNew = Buffer.concat([chunkNew, chunk]);
  })
  .on('end', function() {
    // The file is fully downloaded; hand it back as a base64 string.
    callback(null, chunkNew.toString('base64'));
  });
More information can be found at http://codewinds.com/blog/2013-08-04-nodejs-readable-streams.html, which uses data chunks to show the image as an array buffer.
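On the client, the base64 string returned by the callback can be rendered directly. A small sketch (an assumption, not from the thread; it presumes the images are JPEGs):

// Client-side sketch: render the base64 payload as an inline image.
var img = document.createElement('img');
img.src = 'data:image/jpeg;base64,' + base64Data; // base64Data: string from the callback
document.body.appendChild(img);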

Download file from S3 without writing it to the file system in Node.js

I have a Node.js server running with Hapi.
One of the jobs of the server is to send files to a servicer API (the API only accepts streams; when I send a buffer it returns an error) at the user's request.
All the files are stored in S3.
When I download them using promise(),
I get a buffer in the body,
and I can get a PassThrough stream if I use createReadStream().
My problem is that when I try to convert the buffer to a stream and send it, the API rejects it, and the same happens when I use the createReadStream() result.
But when I use fs to save the file and then fs to read it, the API accepts the stream and it works.
So I need help: how can I get the same result without saving and reading the file?
edit:
Here is my code. I know it's the wrong way, but it works; I need a better way that will work.
static async downloadFile(Bucket, Key) {
  const result = await s3Client
    .getObject({
      Bucket,
      Key
    })
    .promise();

  fs.writeFileSync(`${Path.basename(Key)}`, result.Body);
  // createReadStream is synchronous, so no await is needed here.
  const file = fs.createReadStream(`${Path.basename(Key)}`);
  return file;
}
If I understand correctly, you want to get the object from the S3 bucket and stream it to your HTTP response as a stream.
Getting the data into buffers and then figuring out a way to convert them to a stream is complicated and has its limitations. If you really want to leverage the power of streams, don't convert the data to a buffer and load the entire object into memory; you can create a request that streams the returned data directly into a Node.js Stream object by calling the createReadStream method on the request.
Calling createReadStream returns the raw HTTP stream managed by the request. The raw data stream can then be piped into any Node.js Stream object.
This technique is useful for service calls that return raw data in their payload, such as calling getObject on an Amazon S3 service object to stream data directly into a file, as shown in this example.
// I imagine you have something similar.
server.get('/image', (req, res) => {
  let s3 = new AWS.S3({apiVersion: '2006-03-01'});
  let params = {Bucket: 'myBucket', Key: 'myImageFile.jpg'};
  let readStream = s3.getObject(params).createReadStream();

  // When the stream is done being read, end the response.
  readStream.on('close', () => {
    res.end();
  });

  readStream.pipe(res);
});
When you stream data from a request using createReadStream, only the raw HTTP data is returned; the SDK does not post-process it, so this raw HTTP data can be returned directly.
Note:
Because Node.js is unable to rewind most streams, if the request initially succeeds, retry logic is disabled for the rest of the response. In the event of a socket failure while streaming, the SDK won't attempt to retry or send more data to the stream. Your application logic needs to identify such streaming failures and handle them.
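As a sketch of what "handle them" could look like (an assumption, not part of the original answer), attach an error handler to the read stream and terminate the response yourself:

// Sketch: the SDK will not retry once streaming has begun, so fail the
// response explicitly on a mid-stream error.
readStream.on('error', (err) => {
  console.error('S3 stream failed:', err);
  if (!res.headersSent) {
    res.statusCode = 500;
  }
  res.end(); // the client sees a failed download instead of a hang
});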
Edits:
After the edits to the original question, I can see that S3 sends a PassThrough stream object, which is different from a FileStream in Node.js. So, to get around the problem, use memory (if your files are not very big and/or you have enough memory).
Use the package memfs; it will replace the native fs in your app:
https://www.npmjs.com/package/memfs
Install the package with npm install memfs and require it as follows:
const {fs} = require('memfs');
and your code will look like:
static async downloadFile(Bucket, Key) {
  const result = await s3
    .getObject({
      Bucket,
      Key
    })
    .promise();

  fs.writeFileSync(`/${Key}`, result.Body);
  const file = fs.createReadStream(`/${Key}`);
  return file;
}
Note that the only change I have made to your function is the path, from ${Path.basename(Key)} to /${Key}, because you no longer need to know the layout of your original filesystem; the files are stored in memory. I have tested it, and this solution works.
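A usage sketch (the class and file names are hypothetical, not from the thread): the returned in-memory read stream can be piped straight to the servicer API request or an HTTP response.

// Usage sketch: FileService is a hypothetical class hosting downloadFile.
async function sendToClient(res) {
  const file = await FileService.downloadFile('my-bucket', 'report.pdf');
  file.pipe(res); // works for an HTTP response or the servicer API's request stream
}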

Streaming files directly to Client from Amazon S3 (Node.js)

I am using Sails.js and am trying to stream files from the Amazon S3 server directly to the client.
To connect to S3, I use the s3 module: https://www.npmjs.org/package/s3
This module provides capabilities like client.downloadFile(params) and client.downloadBuffer(s3Params).
My current code looks like the following:
var view = client.downloadBuffer(params);

view.on('error', function(err) {
  cb({success: 0, message: 'Could not open file.'}, null);
});

view.on('end', function(buffer) {
  cb(null, buffer);
});
I catch this buffer in a controller using:
User.showImage(params, function (err, buffer) {
  // this is where I can get the buffer
});
Is it possible to stream this data to the client as an image file? (Using buffer.pipe(res) doesn't work, of course.) Is there something similar that completely avoids saving the file to the server disk first?
The other option, client.downloadFile(params), requires a local path (i.e., a server path in our case).
This GitHub issue contains the "official" answer to this question: https://github.com/andrewrk/node-s3-client/issues/53
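For comparison, the same disk-free streaming is straightforward with the official aws-sdk instead of the s3 module. A sketch (a swapped-in technique, not the linked issue's answer; the bucket, key, and Express-style res are placeholders):

const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const params = {Bucket: 'myBucket', Key: 'myImageFile.jpg'};

// Stream the object straight to the HTTP response, never touching server disk.
const stream = s3.getObject(params).createReadStream();
stream.on('error', function(err) {
  res.status(500).send('Could not open file.');
});
stream.pipe(res);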

What "streams and pipe-capable" means in pkgcloud in NodeJS

My issue is getting image uploads to Amazon working.
I was looking for a solution that doesn't save the file on the server and then upload it to Amazon.
Googling, I found pkgcloud, and its README.md says:
Special attention has been paid so that methods are streams and
pipe-capable.
Can someone explain what that means, and whether it is what I am looking for?
Yup, that means you've found the right kind of S3 library.
What it means is that this library exposes "streams". Here is the API that defines a stream: http://nodejs.org/api/stream.html
Using Node's stream interface, you can pipe any readable stream (in this case the POST's body) to any writable stream (in this case the S3 upload).
Here is an example of how to pipe a file upload directly to another kind of library that supports streams: How to handle POSTed files in Express.js without doing a disk write
EDIT: Here is an example
var pkgcloud = require('pkgcloud'),
    fs = require('fs');

var s3client = pkgcloud.storage.createClient({ /* ... */ });

app.post('/upload', function(req, res) {
  var s3upload = s3client.upload({
    container: 'a-container',
    remote: 'remote-file-name.txt'
  });

  // pipe the image data directly to S3
  req.pipe(s3upload);
});
EDIT: To finish answering the questions that came up in the chat:
req.end() will automatically call s3upload.end() thanks to stream magic. If the OP wants to do anything else when req ends, he can do so easily: req.on('end', () => res.send("done!"))
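To round out the example (a sketch based on the 'error' and 'success' events that pkgcloud's upload streams emit), the response could instead be sent once the upload has actually completed:

// Sketch: respond when the upload finishes rather than when the request ends.
s3upload.on('error', function(err) {
  res.status(500).send('Upload failed');
});
s3upload.on('success', function(file) {
  res.send('Uploaded ' + file.name); // `file` is the stored file's metadata
});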
