Send recorded audio to S3 - node.js

I am using RecorderJs to record audio. When done, I want to save it to Amazon S3 (I am using the knox library) via my server (because I don't want to share the key).
recorder.exportWAV(function(blob) {
    // sending it to the server
});
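For reference, one way to ship the blob to the server is a multipart POST; this is only a sketch, and the /upload endpoint name is made up:

recorder.exportWAV(function(blob) {
    // Wrap the blob in FormData so it travels as multipart/form-data
    // with its binary content intact.
    var form = new FormData();
    form.append('audio', blob, 'recording.wav');

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload'); // hypothetical endpoint
    xhr.onload = function() {
        console.log('upload finished with status ' + xhr.status);
    };
    xhr.send(form);
});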
On the server side, using knox ...
knox.putBuffer(blob, path, {
    "Content-Type": "audio/wav",
    "Content-Length": blob.length
}, function(e, r) {
    if (!e) {
        console.log("saved at " + path);
        future.return(path);
    } else {
        console.log(e);
    }
});
And this is saving just 2 bytes!
Also, is this the best way to save server memory, or are there better alternatives?
I also see this: Recorder.forceDownload(blob[, filename])
Should I force a download and then send the file to the server?
Or should I save to S3 directly from my domain? Is there an option in S3 that cannot be hacked by another user trying to store data in my bucket?

Or should I save to S3 directly from my domain? Is there an option in S3 that cannot be hacked by another user trying to store data in my bucket?
You can use S3 bucket policies or IAM policies on S3 buckets to restrict access to your buckets.
Bucket Policies: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucketPolicies.html
IAM Policies: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingIAMPolicies.html
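For illustration, a bucket policy that grants write access to a single IAM user looks like the following (the account ID, user name, and bucket name are hypothetical); anything not explicitly allowed stays denied by default:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploaderWriteOnly",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789012:user/app-uploader" },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}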
There are several related threads on SO about this too, for example:
Enabling AWS IAM Users access to shared bucket/objects
AWS s3 bucket policy invalid group principal

How to render images (files) from an S3 bucket with all public access blocked in the frontend (private write, private read)

I have uploaded a file to an S3 bucket using the aws-sdk as follows:
// Assumes the AWS SDK v2 client has been set up, e.g.:
// const AWS = require('aws-sdk');
// const s3 = new AWS.S3();
async function uploadFileToAws(file) {
    const fileName = `new_file_${new Date().getTime()}_${file.name}`;
    const mimetype = file.mimetype;
    const params = {
        Bucket: config.awsS3BucketName,
        Key: fileName,
        Body: file.data,
        ContentType: mimetype,
        // ACL: 'public-read'
    };
    const res = await new Promise((resolve, reject) => {
        s3.upload(params, (err, data) => err == null ? resolve(data) : reject(err));
    });
    return { secure_url: res.Location };
}
If we set the bucket permissions to public read, there is no problem. But we have the requirement of blocking public read (public access) and allowing the bucket objects or images to be visible only within our own products (mobile and web apps), with the help of an access ID and secret key or any other similar approach. Is this possible? Does AWS S3 provide such a service?
I have gone through the AWS S3 documentation, googled, and walked through multiple Stack Overflow threads and some blogs, but no luck. I would really appreciate any suggestions, tips, or help.
You could consider two options.
The first one would be through CloudFront and signed URLs or cookies, as explained in
Serving Private Content with Signed URLs and Signed Cookies
Basically, in this approach you would set up a CloudFront distribution that serves your private images. Since the users are authenticated, your back-end would verify whether they can access the given image and, if so, generate a signed URL for the file. The signed URL enables access to that file. Details of this procedure are described in How Signed URLs Work.
The second possibility would be pre-signed S3 URLs. It is somewhat similar to the first one, except that it does not involve any extra service such as CloudFront. Again, since users are authenticated, your back-end would verify their rights to view the given image and generate a pre-signed S3 URL granting temporary access to it.
In both cases, the bucket does not need to be public. Access to the images is controlled by your back-end.
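As a sketch of the second option with the Node aws-sdk (v2-style API; the bucket name is made up), the back-end could issue a short-lived GET URL after checking the user's rights:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Called only after verifying that the authenticated user
// is allowed to see this image.
function getImageUrl(key) {
    return s3.getSignedUrl('getObject', {
        Bucket: 'my-private-bucket', // hypothetical bucket name
        Key: key,
        Expires: 300 // URL is valid for 5 minutes
    });
}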

Upload a file to a cloud storage without exposing an API token

To upload a file from a client to cloud storage, we need an API token.
At the same time, the API token should be kept private.
As far as I understand, the easiest implementation would be:
1. Upload the file from the client to the application server.
2. Upload the file from the application server to the cloud storage using the API.
The biggest issue with this approach is the extra traffic and load on the application server, which I really want to avoid.
Is there any way to upload a file directly to the cloud without exposing an API token on the client side? Perhaps there is some redirect or forward mechanism that allows adding the API token to the initial request and then redirecting the request, together with the file, to the cloud, or something similar?
If the cloud storage offers an API that allows streaming of the file, for example in a PUT request, you can use busboy to forward a file that is sent by an HTML <form>. The following code converts the incoming stream of type multipart/form-data (which comes from the browser) into an outgoing stream of the file's MIME type (which goes to the cloud storage API):
// Requires: npm install express busboy
const express = require("express");
const busboy = require("busboy");
const https = require("https");

const app = express();

app.post("/uploadform", function(req, res) {
    // Parse the multipart/form-data request as it streams in.
    req.pipe(busboy({ headers: req.headers })
        .on("file", function(name, file, info) {
            // Pipe each uploaded file straight into a PUT request
            // against the cloud storage API, chunk by chunk.
            file.pipe(https.request("<cloud storage endpoint>", {
                headers: {
                    "Authorization": "<API token>",
                    "Content-Type": info.mimeType
                },
                method: "PUT"
            }).on("response", function(r) {
                res.send("Successfully uploaded");
            }));
        }));
});
The advantage of this approach is that the file is not stored on the application server. It only passes through the application server's memory chunk by chunk.

Serve private files directly from Azure Blob Storage

My web app allows users to upload files.
I want to use Azure Blob Storage for this.
Since downloading will be much more frequent than uploading, I would like to save server computing time and bandwidth and serve files directly from the Azure Blob server.
I believe this is possible on Google Cloud with Firebase (Firebase Storage), where you can upload and download directly from the client. (I know authentication and authorization are also managed by Firebase, so that makes things easier.)
Do any similar mechanisms/services exist on Azure?
For example: when a user clicks an Azure Storage download link, a trigger would check the JWT for authorization, and the data would be sent directly to the client from Azure Storage.
A similar option is available with Azure Blob Storage as well. You can use the Storage SDK to access containers and list/download blobs.
With a JavaScript backend you can use a SAS token; the Azure Storage JavaScript Client Library also supports creating a BlobService based on the storage account key for authentication, besides SAS tokens. However, for security reasons, use a limited-time SAS token generated by a backend web server using a stored access policy.
Example here
EDIT:
I have not completely answered the question above. However, if you want to access the blob storage or download any files from it, you can make a normal HTTP GET request with a SAS token generated by any JavaScript application.
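As a rough sketch, such a short-lived SAS token could be generated on the backend with the newer @azure/storage-blob package (the account, container, and blob names here are made up):

const {
    StorageSharedKeyCredential,
    generateBlobSASQueryParameters,
    BlobSASPermissions
} = require("@azure/storage-blob");

const account = "myaccount"; // hypothetical storage account
const credential = new StorageSharedKeyCredential(account, process.env.AZURE_STORAGE_KEY);

// Read-only SAS token for a single blob, valid for 15 minutes.
const sas = generateBlobSASQueryParameters({
    containerName: "uploads",
    blobName: "photo.png",
    permissions: BlobSASPermissions.parse("r"),
    expiresOn: new Date(Date.now() + 15 * 60 * 1000)
}, credential).toString();

// The client can GET this URL directly from Azure Storage.
const url = `https://${account}.blob.core.windows.net/uploads/photo.png?${sas}`;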
With Angular:
uploadToBLob(files) {
    let formData: FormData = new FormData();
    formData.append("asset", files[0], files[0].name);
    this.http.post(this.baseUrl + 'insertfile', formData)
        .subscribe(result => console.log(result));
}

downloadFile(fileName: string) {
    return this.http.get(this.baseUrl + 'DownloadBlob/' + fileName, { responseType: "blob" })
        .subscribe((result: any) => {
            if (result) {
                var blob = new Blob([result]);
                let saveAs = require('file-saver');
                let file = fileName;
                saveAs(blob, file);
                this.fileDownloadInitiated = false;
            }
        }, err => this.errorMessage = err);
}
However, the best practice (considering security) is to have a backend API or Azure Function handle the file upload.

Secure external links for Firebase Storage on NodeJS server-side

I'm having issues generating external links to files stored in my Firebase Storage bucket.
I've been using Google Cloud Storage for a while now and used this library (which is based on this answer) to generate external links for regular Storage buckets, but using it on the Firebase-assigned bucket doesn't seem to work.
I can't generate any secure HTTPS links and keep getting the certificate validation error NET::ERR_CERT_COMMON_NAME_INVALID, stating that my connection is not private. If I remove the 'S' from the HTTPS, the link works.
NOTE: Using the same credentials and private key to generate links for other buckets in my project works just fine. It's only the Firebase bucket that is refusing to accept my signing...
I recommend using the official GCloud client; you can then use getSignedUrl() to get a download URL for the file, like so:
// `request` is the npm "request" HTTP client (npm install request)
const request = require('request');

bucket.file(filename).getSignedUrl({
    action: 'read',
    expires: '03-17-2025'
}, function(err, url) {
    if (err) {
        console.error(err);
        return;
    }
    // The file is now available to read from this URL.
    request(url, function(err, resp) {
        // resp.statusCode = 200
    });
});
Per Generate Download URL After Successful Upload this seems to work with Firebase and GCS buckets.
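For context, the bucket handle used above can be obtained roughly like this (a sketch assuming the @google-cloud/storage package; the project ID, key path, and Firebase-assigned bucket name of the usual <project-id>.appspot.com form are placeholders):

const { Storage } = require('@google-cloud/storage');

// Service-account credentials; path and project ID are hypothetical.
const storage = new Storage({
    projectId: 'my-project',
    keyFilename: '/path/to/service-account.json'
});

const bucket = storage.bucket('my-project.appspot.com');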

Encrypt object in AWS S3 bucket

I am saving some images/objects in an AWS S3 bucket from my application. First I get a signed URL from a Node.js service API and upload images or files to that signed URL using jQuery AJAX. I can open the image or object using the link shown in the properties (https://s3.amazonaws.com/bucketname/objectname).
I want to provide security for each uploaded object: even if an anonymous user somehow gets the link (https://s3.amazonaws.com/bucketname/objectname), they should not be able to open it. Objects should be accessible and openable only in certain cases, e.g. when the request carries particular header keys and values. I tried server-side encryption by specifying header key values in the request, as shown below.
var file = document.getElementById('fileupload').files[0];
$.ajax({
    url: signedurl,
    type: "PUT",
    data: file,
    // jQuery's option name is "headers" (plural)
    headers: { 'x-amz-server-side-encryption': 'AES256' },
    contentType: file.type,
    processData: false,
    success: function (result) {
        var res = result;
    },
    error: function (error) {
        alert(error);
    }
});
Doesn't server-side encryption keep the object encrypted in S3 storage? Or does it only encrypt in transit and decrypt before saving to S3 storage?
If it stores the object encrypted in S3, then how can I open it using the link shown in the properties?
Server-Side Encryption (SSE) in Amazon S3 encrypts objects at rest (stored on disk) but decrypts objects when they are retrieved. Therefore, it is a transparent form of encryption.
If you wish to keep objects in Amazon S3 private, but make them available to specific authorized users, I would recommend using Pre-Signed URLs.
This works by having your application generate a URL that provides time-limited access to a specific object in Amazon S3. The objects are otherwise kept private so they are not accessible.
See documentation: Share an Object with Others
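Tying this back to the upload flow in the question: when the Node.js service generates the signed PUT URL with the aws-sdk, it can include the SSE setting, in which case the client must send the matching x-amz-server-side-encryption header, as in the jQuery snippet above (the bucket and key names here are hypothetical):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Signed PUT URL, valid for 5 minutes, that bakes the SSE setting
// into the signature; the uploading client must send the matching
// x-amz-server-side-encryption: AES256 header.
const signedurl = s3.getSignedUrl('putObject', {
    Bucket: 'bucketname',
    Key: 'objectname',
    Expires: 300,
    ServerSideEncryption: 'AES256'
});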
