I am currently trying to get the duration/length of an audio file read from S3. I have tried a bunch of different approaches, but can't seem to find an efficient one that works. Currently, I am storing the audio files in the /tmp/ folder and then trying to read them, but that doesn't seem to work. I am also working with .m4a, not .mp3. I initially tested the code below with URL strings from S3, but got 0:00 returned when the audio was read. I also tried getAudioDurationInSeconds, but that tells me "error locating ffprobe". Any pointers would be greatly appreciated.
s3.getObject({ Bucket: bucketName, Key: `audio/file` }, function (err, data) {
  if (err) {
    console.error(err.code, "-", err.message);
  }
  fs.writeFile(`/tmp/file`, data.Body)
    .then(() => { // This ensures that your mp3Duration function gets called after the file has been written
      // getAudioDurationInSeconds(`/tmp/file`).then((duration) => {
      //   console.log(duration);
      // });
      mp3Duration(`/tmp/file`, function (err, duration) {
        if (err) return console.log(err.message);
        console.log('Your file is ' + duration + ' seconds long');
      }).catch(console.error);
    });
});
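For reference, one way around the "error locating ffprobe" message in environments like Lambda is to bundle an ffprobe binary and point the probing library at it explicitly. Below is a minimal sketch of that idea, assuming the ffprobe-static and fluent-ffmpeg packages (neither appears in the question) and aws-sdk v2; audioDuration is a hypothetical helper name:

const AWS = require("aws-sdk");
const fs = require("fs/promises");
const ffmpeg = require("fluent-ffmpeg");
const ffprobeStatic = require("ffprobe-static");

// Point fluent-ffmpeg at the bundled ffprobe binary instead of relying
// on one being installed on the host.
ffmpeg.setFfprobePath(ffprobeStatic.path);

const s3 = new AWS.S3();

// Hypothetical helper: download the object to /tmp, then probe it.
async function audioDuration(bucket, key) {
  const { Body } = await s3.getObject({ Bucket: bucket, Key: key }).promise();
  await fs.writeFile("/tmp/file", Body);
  return new Promise((resolve, reject) => {
    ffmpeg.ffprobe("/tmp/file", (err, metadata) => {
      if (err) return reject(err);
      resolve(metadata.format.duration); // duration in seconds
    });
  });
}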
So I built an e-learning platform with Node.js and Vue.js, and I am using GCP buckets to store my videos privately. Everything works perfectly aside from the fact that my videos cannot fast-forward or rewind: if you try moving the video to a specific position (maybe towards the end of the video), it returns to the same spot where you were initially. At first I thought it was a Vue problem, but I tried playing these videos from my GCP bucket dashboard directly and it does the same thing. It only works fine when I use the Firefox browser.
I am using the "Uniform: No object-level ACLs enabled" access control and the "Not public" permission settings. I am new to GCP and have no idea what the problem could be.
Here is the Node.js function I am using:
const upload = async (req, res) => {
  try {
    if (!req.file) {
      res.status(400).send('No file uploaded.');
      return;
    }
    const gcsFileName = `${Date.now()}-${req.file.originalname}`;
    var reader = fs.createReadStream('uploads/' + req.file.originalname);
    reader.pipe(
      bucket.file(gcsFileName).createWriteStream({ resumable: false, gzip: true })
        .on('finish', () => {
          // The public URL can be used to directly access the file via HTTP.
          const publicUrl = format(
            `https://storage.googleapis.com/bucketname/` + gcsFileName
          );
          // console.log('https://storage.googleapis.com/faslearn_files/' + gcsFileName)
          fs.unlink('uploads/' + req.file.originalname, (err) => {
            if (err) {
              console.log("failed to delete local image:" + err);
            } else {
              console.log('successfully deleted local image');
            }
          });
          res.status(200).send(publicUrl);
        })
        .on('error', err => {
          console.log(err);
          return;
        })
        //.end(req.file.buffer)
    );
    // Read and display the file data on console
    reader.on('data', function (chunk) {
      console.log('seen chunk');
    });
  } catch (err) {
    console.log(" some where");
    res.status(500).send({
      message: `Could not upload the file: ${req.file.originalname}. ${err}`,
    });
  }
};
The issue was coming from the way I encoded the video: I was supposed to use the blob, but I used the pipe.
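For readers hitting the same symptom, one other thing worth checking is the gzip: true option in the upload above: Cloud Storage serves gzip-transcoded objects without honoring byte-range requests, which can break seeking in some browsers while others still cope. A hedged sketch of uploading without transcoding, using bucket.upload from @google-cloud/storage (to be called inside the async handler above):

// Upload the local file as-is; video containers are already compressed,
// and skipping gzip keeps HTTP range requests (used for seeking) working.
await bucket.upload('uploads/' + req.file.originalname, {
  destination: gcsFileName,
  resumable: false,
  metadata: { contentType: req.file.mimetype }, // multer supplies mimetype
});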
I'm trying to get the contents of a file using the Google Drive API v3 in Node.js.
I read in this documentation that I get a stream back from drive.files.get({fileId, alt: 'media'}), but that isn't the case. I get a promise back.
https://developers.google.com/drive/api/v3/manage-downloads
Can someone tell me how I can get a stream from that method?
I believe your goal and situation are as follows:
You want to retrieve the stream type from the method of drive.files.get.
You want to achieve this using googleapis with Node.js.
You have already done the authorization process for using the Drive API.
For this, how about this answer? In this case, please use responseType.
Pattern 1:
In this pattern, the file is downloaded as the stream type and it is saved as a file.
Sample script:
var dest = fs.createWriteStream("###"); // Please set the filename of the saved file.
drive.files.get(
  { fileId: id, alt: "media" },
  { responseType: "stream" },
  (err, res) => {
    if (err) {
      console.log(err);
      return;
    }
    res.data
      .on("end", () => console.log("Done."))
      .on("error", (err) => {
        console.log(err);
        return process.exit();
      })
      .pipe(dest);
  }
);
Pattern 2:
In this pattern, the file is downloaded as the stream type and put into a buffer.
Sample script:
drive.files.get(
  { fileId: id, alt: "media" },
  { responseType: "stream" },
  (err, res) => {
    if (err) {
      console.log(err);
      return;
    }
    let buf = [];
    res.data.on("data", (e) => buf.push(e));
    res.data.on("end", () => {
      const buffer = Buffer.concat(buf);
      console.log(buffer);
    });
  }
);
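Note: the promise you are seeing is what drive.files.get returns when no callback is passed, and it resolves to a response whose data property is the stream when responseType: "stream" is set. A minimal sketch of the async/await form (to be run inside an async function):

// With { responseType: "stream" }, res.data is a readable stream.
const res = await drive.files.get(
  { fileId: id, alt: "media" },
  { responseType: "stream" }
);
res.data.pipe(fs.createWriteStream("###")); // Please set the filename of the saved file.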
Reference:
Google APIs Node.js Client
I've been stuck on this problem for a while now and can't seem to figure out why this file isn't being uploaded to Firebase correctly. I'm using the code below in Firebase Functions to generate a document, then I convert that document to a stream, and finally I create a write stream declaring the path that I want the file to be written to and pipe my document stream to the Firebase Storage write stream.
Example 1: PDF uploaded from file system through firebase console. (Link works and displays pdf)
Example 2: PDF generated in firebase functions and written to storage using code below
Considerations:
I know the PDF is valid because I can return it from the function and see it in my web browser and everything is as I would expect.
When I try to open the bad file, it doesn't display anything and redirects me back to the overview screen.
const pdfGen = require("html-pdf");

pdfGen.create(html, { format: "Letter" }).toStream(function (err, stream) {
  if (err) return res.status(500).send(err);
  var file = storage.bucket()
    .file(job.jobNum + "/quote-doc.pdf")
    .createWriteStream({
      resumable: false,
      validation: false,
      contentType: "auto",
      metadata: {
        'Cache-Control': 'public, max-age=31536000'
      }
    });
  stream.pipe(file)
    .on('error', function (err) {
      return res.status(500).send(err);
    })
    .on('finish', function (data) {
      stream.close();
      file.end();
      res.end();
      console.log('finished upload');
    });
});
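For comparison, the @google-cloud/storage write stream usually takes contentType and cacheControl nested inside metadata, and the piped streams do not need to be closed by hand once 'finish' fires. A hedged sketch of that shape, reusing the path and cache settings from the snippet above:

const file = storage.bucket()
  .file(job.jobNum + "/quote-doc.pdf")
  .createWriteStream({
    resumable: false,
    metadata: {
      contentType: "application/pdf",
      cacheControl: "public, max-age=31536000",
    },
  });

stream.pipe(file)
  .on("error", (err) => res.status(500).send(err))
  .on("finish", () => {
    // pipe() already ended the write stream; just finish the response.
    console.log("finished upload");
    res.end();
  });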
Hey everyone, so I am trying to make this type of request in Node.js. I assume you can do it with multer, but there is one major catch: I don't want to download the file or upload it from a form. I want to pull it directly from S3, get the object, and send it as a file along with the other data to my route. Is it possible to do that?
Yes, it's completely possible. Assuming you know your way around the aws-sdk, you can create a method for retrieving the file and use this method to get the data in your route and do whatever you please with it.
Example: (Helper Method)
getDataFromS3(filename, bucket, callback) {
  var params = {
    Bucket: bucket,
    Key: filename
  };
  s3.getObject(params, function (err, data) {
    if (err) {
      callback(true, err.stack); // an error occurred
    } else {
      callback(false, data); // success in retrieving data
    }
  });
}
Your Route:
app.post('/something', (req, res) => {
  var s3Object = getDataFromS3('filename', 'bucket', (err, file) => {
    if (err) {
      return res.json({ message: 'File retrieval failed' });
    }
    var routeProperties = {};
    routeProperties.file = file;
    routeProperties.someOtherdata = req.body.someOtherData;
    return res.json({ routeProperties });
  });
});
Of course, the code might not be totally correct. But this is an approach that you can use to get what you want. Hope this helps.
There are two ways that I see here. You can either:
pipe this request to the user; it means that you still download it and pass it through, but you don't save it anywhere, just stream it through your backend.
There is a very similar question asked here: Streaming file from S3 with Express including information on length and filetype
I'm just going to copy & paste the code snippet for reference on how it could be done:
function sendResponseStream(req, res) {
  const s3 = new AWS.S3();
  s3.getObject({ Bucket: myBucket, Key: myFile })
    .createReadStream()
    .pipe(res);
}
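The linked question also covers forwarding the length and file type; one hedged way to do that with aws-sdk v2 and Express is to issue a headObject call before streaming (a sketch, reusing myBucket and myFile from above):

async function sendResponseStream(req, res) {
  const s3 = new AWS.S3();
  const params = { Bucket: myBucket, Key: myFile };

  // Fetch the object's metadata first so the client sees length and type.
  const head = await s3.headObject(params).promise();
  res.set("Content-Type", head.ContentType);
  res.set("Content-Length", String(head.ContentLength));

  s3.getObject(params).createReadStream().pipe(res);
}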
if the file gets too big for you to easily handle, create a presigned URL in S3 and send it through. The user can then download the file themselves straight from S3 for a limited amount of time; more details here: https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
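A minimal sketch of the presigned-URL route with aws-sdk v2 (myBucket and myFile as above; the expiry value is just an example):

// getSignedUrl is synchronous in aws-sdk v2; Expires is in seconds.
const url = s3.getSignedUrl("getObject", {
  Bucket: myBucket,
  Key: myFile,
  Expires: 60 * 5, // the link stays valid for five minutes
});
res.json({ url });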
I've written a program that creates HTML files. I then attempt to upload the files to my S3 bucket at the end of the program. It seems that the problem is that my program terminates before allowing the function to complete or receiving a callback from the function.
Here is the gist of my code:
let aws = require('aws-sdk');

aws.config.update({
  // Censored keys for security
  accessKeyId: '*****',
  secretAccessKey: '*****',
  region: 'us-west-2'
});

let s3 = new aws.S3({
  apiVersion: "2006-03-01",
});

function upload(folder, platform, browser, title, data) {
  s3.upload({
    Bucket: 'html',
    Key: folder + platform + '/' + browser + '/' + title + '.html',
    Body: data
  }, function (err, data) {
    if (err) {
      console.log("Error: ", err);
    }
    if (data) {
      console.log("Success: ", data.Location);
    }
  });
}

/*
 *
 * Here is where the program generates HTML files
 *
 */

upload(folder, platform, browser, title, data);
If I call the upload() function (configured with test/dummy data) before the HTML generation section of my code, the upload succeeds. The test file successfully uploads to S3. However, when the function is called at the end of my code, I do not receive an error or success response. Rather, the program simply terminates and the file isn't uploaded to S3.
Is there a way to wait for the callback from my upload() function before continuing the program? How can I prevent the program from terminating before uploading my files to S3? Thank you!
Edit: After implementing Deiv's answer, I found that the program is still not uploading my files. I still am not receiving a success or error message of any kind. In fact, it seems like the program just skips over the upload() function. To test this, I added a console.log("test") after calling upload() to see if it would execute. Sure enough, the log prints successfully.
Here's some more information about the project: I'm utilizing WebdriverIO v4 to create HTML reports of various tests passing/failing. I gather the results of the tests via multiple event listeners (ex. this.on('test:start'), this.on('suite:end'), etc.). The final event is this.on('end'), which is called when all of the tests have completed execution. It is here where the test results are sorted based on which operating system they were run on, browser, etc.
I'm now noticing that my program won't do anything S3-related in the this.on('end') event handler, even if I put it at the very beginning of the handler. I'm still convinced it's because the handler isn't given enough time to execute, since it is able to process the results and create HTML files very quickly. I have this bit of code that lists all buckets in my S3:
s3.listBuckets(function (err, data) {
  if (err) {
    console.log("Error: ", err);
  } else {
    console.log("Success: ", data.Buckets);
  }
});
Even this doesn't return a result of any kind when run at the beginning of this.on('end'). Does anyone have any ideas? I'm really stumped here.
Edit: Here is my new code, which implements Naveen's suggestion:
this.on('end', async (end) => {
  /*
   * Program sorts results and creates variable 'data', the contents of the HTML file.
   */
  await s3.upload({
    Bucket: 'html',
    Key: key,
    Body: data
  }, function (err, data) {
    if (err) {
      console.log("Error: ", err);
    }
    if (data) {
      console.log("Success: ", data.Location);
    }
  }).on('httpUploadProgress', event => {
    console.log(`Uploaded ${event.loaded} out of ${event.total}`);
  });
});
The logic seems sound, but still I get no success or error message, and I do not see the upload progress. The HTML file does not get uploaded to S3.
You can use promises to wait for your upload function to finish. Here's what it will look like:
function upload(folder, platform, browser, title, data) {
  return new Promise((resolve, reject) => {
    s3.upload({
      Bucket: 'html',
      Key: folder + platform + '/' + browser + '/' + title + '.html',
      Body: data
    }, function (err, data) {
      if (err) {
        console.log("Error: ", err);
        return reject(err);
      }
      if (data) {
        console.log("Success: ", data.Location);
        return resolve(); // potentially return resolve(data) if you need the data
      }
    });
  });
}
/*
 *
 * Here is where the program generates HTML files
 *
 */

upload(folder, platform, browser, title, data)
  .then(data => { // if you don't care for the data returned, you can also do .then(() => {
    // handle success, do whatever else you want, such as calling callback to end the function
  })
  .catch(error => {
    // handle error
  });
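As a side note, aws-sdk v2 can produce that promise for you: the ManagedUpload returned by s3.upload has a .promise() method, so inside the async 'end' handler the upload can simply be awaited. A minimal sketch, reusing the bucket and key from the snippets above:

this.on('end', async () => {
  try {
    const result = await s3.upload({
      Bucket: 'html',
      Key: key,
      Body: data
    }).promise(); // .promise() turns the ManagedUpload into an awaitable promise
    console.log("Success: ", result.Location);
  } catch (err) {
    console.log("Error: ", err);
  }
});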