I use Kuzzle as a backend for my progressive web app and I want to add file uploads.
As far as I can see, Kuzzle does not support server-side file storage.
How can I upload images from my application and then display them to other users?
EDIT (2019-05-20): Since this is a very common feature in mobile and web applications, we have developed a Kuzzle plugin that allows you to upload files to S3 using presigned URLs: https://github.com/kuzzleio/kuzzle-plugin-s3
Check out the README: https://github.com/kuzzleio/kuzzle-plugin-s3#example-using-the-javascript-sdk
ORIGINAL RESPONSE:
Kuzzle does not natively handle file upload.
The best way to handle file uploads is to use an external service like Amazon S3 or Cloudinary: upload the file from the client side, then store the file's URL and metadata in Kuzzle.
You can develop a Kuzzle plugin that generates S3 presigned upload URLs with a short TTL; the client then uses such a URL to upload the file directly to your S3 bucket.
This way you can use Kuzzle's authentication and rights management system with your file uploads.
Here is a resource about presigned URLs: https://medium.com/@aakashbanerjee/upload-files-to-amazon-s3-from-the-browser-using-pre-signed-urls-4602a9a90eb5
And you will need to develop a Kuzzle plugin with a custom controller to generate these URLs: https://docs.kuzzle.io/plugins/1/controllers
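For illustration, a minimal sketch of such a plugin controller might look like the following, assuming the Kuzzle v1 plugin API and the Node.js aws-sdk v2; the bucket name, route, and TTL are placeholders for this example, not part of any official plugin:

import { S3 } from 'aws-sdk';

class S3UploadPlugin {
  constructor() {
    // Expose a custom "upload" controller with a "getUrl" action
    this.controllers = {
      upload: { getUrl: 'getUrl' }
    };
    // Map it to an HTTP route; Kuzzle applies its authentication and
    // rights management before the action runs
    this.routes = [
      { verb: 'get', url: '/upload-url/:filename', controller: 'upload', action: 'getUrl' }
    ];
  }

  init(config, context) {
    this.context = context;
    this.s3 = new S3();
  }

  // Return a presigned PUT URL with a short TTL; the client then uses it
  // to upload the file directly to the S3 bucket
  async getUrl(request) {
    return this.s3.getSignedUrl('putObject', {
      Bucket: 'my-upload-bucket',           // placeholder bucket name
      Key: request.input.args.filename,
      Expires: 60                           // URL valid for 60 seconds
    });
  }
}

export default S3UploadPlugin;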
Full disclosure: I work at Kuzzle as a core developer.
Related
I'm building a MERN stack application where only authenticated users should be able to upload media files, and then perform basic read and delete operations on them.
My application was previously using Firebase Storage to upload the media to Google's servers directly from the client. However, now that I need the client to be authenticated to perform an upload, I am looking for a secure alternative solution.
From my limited research, it appears that the common approach is to first upload the file to the server and then make a separate request from the server to upload the file to cloud storage (e.g. Google Cloud, AWS, etc.) or to the database (GridFS in MongoDB)?
It seems inefficient to me to, in effect, upload the file twice. I imagine this would be particularly taxing for large files, e.g. a 150 MB video.
For this reason, what is the optimal way of achieving authenticated (large) file uploads? And secondly, how can I report the progress of the upload to cloud storage or the database back to the client?
You will nearly always have the client upload to your server first, because that is the only way you can control access to the cloud storage, the only way you can avoid exposing your cloud credentials to the client (which would be a giant security hole), and the only way you can control what your clients do and don't upload. Exposing those credentials to the client would allow any client to upload anything they want to your cloud service, which is certainly not what you want. You must be able to control uploads by routing them through your own server.
And you should have your server checking auth on the uploader, checking the type of data being uploaded, checking the size of the data being uploaded, and so on.
It is possible to pipe the incoming upload to the cloud storage as each packet arrives, so that you don't have to buffer the entire file on your server before sending it to the cloud service; Abdennour's answer below shows an example of that with S3. You will, of course, have to be very careful about denial-of-service attacks (like 100 TB uploads) in these scenarios, so you don't mistakenly allow massive uploads to your cloud storage beyond what you intend to allow; a sketch of one way to enforce such a limit appears at the end of this answer.
It seems inefficient to me to, in effect, upload the file twice.
Yes, it is slightly inefficient from a bandwidth point of view. But unless your cloud storage offers a one-time credential you can pass to the client (so that the credential can only be used for one upload), and unless it also lets you specify and enforce ALL the required restrictions on the upload (max size, file type, etc.), then the only other place to put that logic is in your server. That requires the upload to go through your server, which is the common way this is implemented.
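As a rough illustration of that kind of guard, here is a minimal Express sketch that caps the upload size while still streaming the body straight to S3; the 200 MB cap, route, bucket, and key are assumptions for this example:

import express from 'express';
import { Transform } from 'stream';
import { S3 } from 'aws-sdk';

const MAX_BYTES = 200 * 1024 * 1024; // assumed 200 MB cap

// A pass-through stream that errors out once too many bytes have flowed through
function byteLimit(maxBytes) {
  let received = 0;
  return new Transform({
    transform(chunk, _encoding, callback) {
      received += chunk.length;
      if (received > maxBytes) {
        callback(new Error('Upload too large')); // errors the pipeline, aborting the S3 upload
      } else {
        callback(null, chunk);
      }
    }
  });
}

const s3 = new S3();
const app = express();

app.post('/data', (req, res) => {
  // Reject early when the declared size is already over the cap
  if (Number(req.headers['content-length'] || 0) > MAX_BYTES) {
    return res.status(413).send('Upload too large');
  }

  // The Content-Length header can lie, so also enforce the cap while streaming
  const limited = req.pipe(byteLimit(MAX_BYTES));

  s3.upload({ Bucket: 'your-bucket-name-here', Key: 'exampleobject', Body: limited }, (err, data) => {
    if (err) return res.status(413).send('Upload failed or too large');
    res.json(data);
  });
});

app.listen(8081);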
You can directly pipe the request to Cloud Storage (AWS S3) without the need to cache it in the server.
This is how it should look:
import express from 'express';
import { S3 } from 'aws-sdk';

const s3 = new S3();
const app = express();

app.post('/data', (req, res) => {
  const params = {
    Body: req,                       // stream the request body straight to S3
    Bucket: 'your-bucket-name-here', // placeholder bucket name
    Key: 'exampleobject',
  };

  s3.upload(params, (err, data) => {
    if (err) {
      console.error(err);
      return res.status(500).send('Upload failed');
    }
    console.log('success');
    res.json(data); // respond so the client is not left hanging
  });
});

const server = app.listen(8081, () => console.log('App listening'));
The key thing here is that your request should have multipart enabled.
I am using Node.js to upload a file into my S3 bucket. In response I receive a link to the file.
For example, I receive https://my-bucket-name.s3.ap-south-1.amazonaws.com/testUpload_s.txt
The bucket does not allow public access as of now. How am I supposed to securely access the file from the bucket? I would like to know whether the following method would be safe:
Allow public access for bucket
Each file will be given a random unique name during upload
This file name or the response URL is stored in the database
When the file has to be fetched, I use the link received from the upload response to access the file from the bucket
Is this approach safe? If not, is there another method to achieve the same thing?
There are a number of options for giving clients access to an object in S3, including:
1. make the object public
2. require the client to authenticate with AWS credentials
3. give the client a time-limited, pre-signed URL
They each serve a different use case. Use #1 if it's safe for anyone to access the file (for example the file is an image being shown on a public web site). Use #2 if the client has AWS credentials. Use #3 if you don't want to make the file public but the client does not have AWS credentials. Note with #3 that the pre-signed URL is time-limited.
You don't need to store the URL. You can retrieve objects from the S3 bucket using the file name (the object key).
For access from outside, use a signed URL.
https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/s3-example-presigned-urls.html
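That link shows the Go SDK; a roughly equivalent sketch with the Node.js aws-sdk v2 would be the following (the region, bucket, and key are placeholders taken from the question):

import { S3 } from 'aws-sdk';

const s3 = new S3({ region: 'ap-south-1' });

// Generate a time-limited URL for a private object; the bucket itself stays private
const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-bucket-name',
  Key: 'testUpload_s.txt',
  Expires: 300 // the URL stops working after 5 minutes
});

console.log(url); // hand this URL to the client that needs the file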
I'm trying to use google-storage in my Strapi backend. I was able to choose the provider to upload files to, but whenever I try to upload a file I get an error:
Could not authenticate request
Unexpected error while acquiring application default credentials: Could not load the default credentials. Browse to https://developers.google.com/accounts/docs/application-default-credentials for more information.
It said something about default credentials but where do I have to place them and in which format?
You need credentials to access your project. The credentials come in a JSON file, which you can obtain from the Google Cloud Console; the instructions for creating one are here: Creating a service account
After obtaining your JSON file, check the documentation for the Strapi plugin to see how to connect to Google Cloud Storage using that file: upload gcloud storage plugin for strapi
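For reference, this is how the underlying @google-cloud/storage library accepts the JSON key; the Strapi plugin exposes an equivalent setting in its own configuration, so treat the project id and path below as placeholders:

import { Storage } from '@google-cloud/storage';

// Option 1: point the whole process at the key file before starting Strapi;
// the client libraries pick it up as application default credentials:
//   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json

// Option 2: pass the key file explicitly when constructing the client
const storage = new Storage({
  projectId: 'your-project-id',                 // placeholder
  keyFilename: '/path/to/service-account.json'  // placeholder path to the JSON key
});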
I have an application that consists of a Node.js backend hosted on AWS and an Angular 2+ frontend. I am using the Facebook Graph API on the backend; however, when it comes to uploading things to Facebook I'm getting into trouble.
If I want to upload a file, I first need to upload it to my backend, which puts it in an S3 bucket and then uploads it from there to Facebook. This seems a little heavy to me, and I suspect it is not the correct way to do it. Also, Facebook provides a JavaScript API that allows a file to be uploaded from the client directly to its platform, which seems lighter.
Right now, I see three solutions:
Continue doing everything on the backend
Only do upload operations on the client side using the JavaScript SDK, and everything else on the backend
Do everything from the frontend using the JavaScript SDK
For me, the best solution would be 2. What are your opinions? Are there other solutions?
If the file is created on the client, there is no need to send it to the server - you can just upload it directly to Facebook instead. Although, if you need to store it on your own server anyway, you can do that first and let the server handle the upload to Facebook - uploading the URL of an image to Facebook is the easiest way. If you don't need the image on your server, these may help you (a small client-side sketch follows the links):
https://www.devils-heaven.com/facebook-javascript-sdk-photo-upload-with-formdata/
https://www.devils-heaven.com/facebook-javascript-sdk-photo-upload-from-canvas/
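A minimal client-side sketch of that FormData approach could look like this; the file input element and the accessToken variable (obtained via FB.login()) are assumptions for the example:

// Browser-side; assumes the user already logged in with FB.login() and
// that `accessToken` holds the token returned by the JS SDK
const form = new FormData();
form.append('source', document.querySelector('#photo').files[0]); // assumed <input type="file" id="photo">
form.append('access_token', accessToken);

fetch('https://graph.facebook.com/me/photos', {
  method: 'POST',
  body: form
})
  .then(res => res.json())
  .then(data => console.log('uploaded photo id:', data.id));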
If the file is on the server already, there is no need to send it to the client before uploading it to Facebook. In that case, I would do the upload server-side. If it's about security: there is absolutely no problem in sending access tokens to the server. You can just use the JS SDK for login, send the token to the server, and do the upload on the server. Just use appsecret_proof: https://developers.facebook.com/docs/graph-api/securing-requests/
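Generating the appsecret_proof on the server is a one-liner, as described in the linked docs; the app secret variable and userAccessToken below are placeholders:

import crypto from 'crypto';

// appsecret_proof is an HMAC-SHA256 of the user access token, keyed with the app secret
const appsecretProof = crypto
  .createHmac('sha256', process.env.FACEBOOK_APP_SECRET) // assumed env var holding the app secret
  .update(userAccessToken)                               // the token sent up from the client
  .digest('hex');

// Append it to every server-side Graph API call, e.g.
// https://graph.facebook.com/me/photos?access_token=...&appsecret_proof=<appsecretProof>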
If you are using your end user's identity on Facebook, there is no benefit to using the backend here (except that you need less JavaScript on the page).
Your users' Facebook credentials must never be sent to the backend, so do the upload to Facebook on the client side using the Facebook SDK.
Doing it from the client side also saves you infrastructure cost on the backend.
We are storing files uploaded by users of our app to Amazon S3.
In order to keep these files private & secure, we are:
having the client generate a UUID for the filename (so that the URL of the file is difficult to guess). See: What is the probability of guessing (matching) a Guid?
going to protect the data by using client-side encryption.
Do these two measures provide sufficient security, or should we also use Amazon Cognito to ensure that the user getting the object is one of the users who has access to it?
Using obscure filenames is not a good security method.
If you wish to allow users to upload/download data to/from Amazon S3 in a secure manner, you should use Pre-Signed URLs.
The process is:
Users authenticate to your web/mobile application
Users interact with your application and indicate they wish to upload/download a file
Your application generates a pre-signed URL that includes an authorization to access Amazon S3, with restrictions such as bucket, path and file size
Users upload/download the file using the pre-signed URL
This way, your application controls the security and there is no potential for accidental workarounds, overwriting, or unauthorized access.
See: Uploading Objects Using Pre-Signed URLs
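As an illustration of the URL-generation step, here is a minimal sketch with the Node.js aws-sdk v2, using a presigned POST so that a maximum file size can actually be enforced; the bucket, key, and limits are assumptions for the example:

import { S3 } from 'aws-sdk';

const s3 = new S3();

const params = {
  Bucket: 'my-app-uploads',                        // placeholder bucket
  Fields: { key: 'uploads/user-123/photo.jpg' },   // placeholder per-user path
  Expires: 300,                                    // authorization valid for 5 minutes
  Conditions: [
    ['content-length-range', 0, 5 * 1024 * 1024]   // S3 rejects files larger than 5 MB
  ]
};

s3.createPresignedPost(params, (err, data) => {
  if (err) throw err;
  // Send data.url and data.fields to the authenticated user; the client
  // then POSTs the file straight to S3 using them
  console.log(data);
});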