Is there any way to do a multipart file upload to the server natively in Flutter Web? There are plenty of ways to do it for Flutter iOS and Android.
For Android/iOS we are using:
multiPFile = await MultipartFile.fromFile(file.path,
    filename: file.name, contentType: MediaType.parse(mimeType));
For Web you need to build a MultipartFile from the bytes of the file:
multiPFile = await MultipartFile.fromBytes(file.bytes,
    filename: file.name, contentType: MediaType.parse(mimeType));
I am currently trying to download a file from an S3 bucket using a button on the front-end. How can I do this? I have no idea where to start; everything I found while researching is about UPLOADING files to an S3 bucket, not DOWNLOADING them. Thanks in advance.
NOTE: I am using ReactJS (front-end) and NodeJS (back-end), and the file was uploaded using WebMerge.
UPDATE: I am trying to generate a download link with the code below (I tried Node even though I'm not a back-end dev).
See the images below.
[Image: what I have tried so far]
[Image: onClick function]
If the file you are trying to download is not public, then you have to create a signed URL to get it.
The solution is here: Javascript to download a file from amazon s3 bucket?
It covers getting non-public files, and revolves around creating a Lambda function that generates a signed URL for you; you then use that URL to download the file on button click.
BUT if the file you are trying to download is public, then you don't need a signed URL; you just need to know the path to the file. The URLs are structured like: https://s3.amazonaws.com/[file path]/[filename]
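For reference, here is a minimal sketch of the Lambda side, assuming the AWS SDK for JavaScript v2 and a hypothetical request body carrying the bucket and key:

// Lambda handler sketch: returns a time-limited signed URL for a private object.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // Hypothetical request shape; adjust to however your front-end calls this.
  const { bucket, key } = JSON.parse(event.body);
  const url = s3.getSignedUrl('getObject', {
    Bucket: bucket,
    Key: key,
    Expires: 60 // the URL stays valid for 60 seconds
  });
  return { statusCode: 200, body: JSON.stringify({ url }) };
};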
There is also AWS Amplify, which is created and maintained by the AWS team.
Just follow the Get Started guide, and downloading the file from your React app is as simple as:
Storage.get('hello.png', { expires: 60 })
  .then(result => console.log(result))
  .catch(err => console.log(err));
Here is my solution:
let downloadImage = url => {
  // Assumes a path-style URL: https://s3.amazonaws.com/[bucket]/[folder]/[filename]
  let urlArray = url.split("/")
  let bucket = urlArray[3]
  let key = `${urlArray[4]}/${urlArray[5]}`
  let s3 = new AWS.S3({ params: { Bucket: bucket } })
  let params = { Bucket: bucket, Key: key }
  s3.getObject(params, (err, data) => {
    if (err) return console.log(err)
    // Build a Blob from the response and trigger the download via a temporary link
    let blob = new Blob([data.Body], { type: data.ContentType })
    let link = document.createElement('a')
    link.href = window.URL.createObjectURL(blob)
    link.download = urlArray[5] // use the filename, not the full URL
    link.click()
  })
}
The url argument is the URL of the S3 file.
Just call this from the onClick handler of your button. You will also need the AWS SDK; a rough setup sketch follows.
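For completeness, here is one way the wiring could look, assuming the AWS SDK for JavaScript v2 and credentials from a Cognito identity pool (the region, pool ID, and file URL below are placeholders):

import AWS from 'aws-sdk';

// Placeholder region and identity pool ID; substitute your own.
AWS.config.update({
  region: 'us-east-1',
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000'
  })
});

// In the component's render, call downloadImage from the button's onClick:
<button onClick={() => downloadImage('https://s3.amazonaws.com/my-bucket/uploads/photo.jpg')}>
  Download
</button>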
How do I display an S3 image in the browser? Right now it gets downloaded every time I try to open it in a browser. I have set the content type, but I am still facing the same issue.
Here is my code:
var params = {
  Key: 'upload/' + req.file.originalname,
  Body: data,
  ContentType: 'image/jpeg',
  ACL: 'public-read'
};
s3bucket.upload(params, function(err, aws_images) {
  if (err) console.log(err); // surface upload errors instead of ignoring them
  console.log(aws_images);
});
It appears that you want images served from Amazon S3 to be cached in browsers.
To do this, you can set cache-control metadata on the objects.
See:
Amazon S3 images cache-control not being applied
How to add cache control in AWS S3?
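As a rough sketch applied to the code from the question, the cache-control header can be set via the CacheControl parameter at upload time (the max-age value below is only an example):

var params = {
  Key: 'upload/' + req.file.originalname,
  Body: data,
  ContentType: 'image/jpeg',
  ACL: 'public-read',
  // Example value: lets browsers and proxies cache the object for one day
  CacheControl: 'public, max-age=86400'
};
s3bucket.upload(params, function(err, aws_images) {
  if (err) console.log(err);
  console.log(aws_images);
});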
According to the docs, I have to pass a filename to the function in order to upload a file.
// Uploads a local file to the bucket
await storage.bucket(bucketName).upload(filename, {
  // Support for HTTP requests made with `Accept-Encoding: gzip`
  gzip: true,
  metadata: {
    // Enable long-lived HTTP caching headers
    // Use only if the contents of the file will never change
    // (If the contents will change, use cacheControl: 'no-cache')
    cacheControl: 'public, max-age=31536000',
  },
});
I am using the Firebase Admin SDK (Node.js) in my server-side code, and clients send files as form-data, which I receive as file objects. How do I upload these when the function accepts only a filename pointing to a file path?
I want to be able to do something like this
app.use((req: Request, res: Response) => {
  const file = req.file;
  // upload the file to Firebase Storage using the Admin SDK
});
Since the Firebase Admin SDK just wraps the Cloud SDK, you can use the Cloud Storage Node.js API documentation as a reference to see what it can do.
You don't have to provide a local file. You can also upload using Node streams. There is a method File.createWriteStream() which gets you a WritableStream to work with. There is also File.save(), which accepts multiple kinds of things, including a Buffer. There are examples of using each method here.
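A minimal sketch of that approach, assuming Express with multer's memoryStorage (so req.file.buffer holds the uploaded bytes in memory); the bucket name is a placeholder:

const admin = require('firebase-admin');
const express = require('express');
const multer = require('multer');

// Placeholder bucket name; assumes credentials are available in the environment.
admin.initializeApp({ storageBucket: 'myapp.appspot.com' });

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // keeps uploads in memory

app.post('/upload', upload.single('file'), async (req, res) => {
  // req.file.buffer holds the raw bytes; write them straight to Cloud Storage
  await admin.storage().bucket().file(req.file.originalname).save(req.file.buffer, {
    metadata: { contentType: req.file.mimetype }
  });
  res.sendStatus(201);
});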
What you should do is use the built-in function.
Let's say you receive the file as imageDoc on the client side:
const imageDoc = e.target.files[0]
In the browser, you can now get an object URL for it:
const imageDocUrl = URL.createObjectURL(imageDoc)
So your final code will be:
// Uploads a local file to the bucket
await storage.bucket(bucketName).upload(imageDocUrl, {
  // Support for HTTP requests made with "Accept-Encoding: gzip"
  gzip: true,
  metadata: {
    // Enable long-lived HTTP caching headers
    // Use only if the contents of the file will never change
    // (If the contents will change, use cacheControl: 'no-cache')
    cacheControl: 'public, max-age=31536000',
  },
});
I have a Google Cloud Function and I'm trying to save some data into Firebase Storage. I'm using the firebase-admin package to interact with Firebase.
I'm reading through the documentation (https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-nodejs) and it seems to have clear instructions on how to upload files if the file is on your local computer.
// Uploads a local file to the bucket
await storage.bucket(bucketName).upload(filename, {
  // Support for HTTP requests made with `Accept-Encoding: gzip`
  gzip: true,
  metadata: {
    // Enable long-lived HTTP caching headers
    // Use only if the contents of the file will never change
    // (If the contents will change, use cacheControl: 'no-cache')
    cacheControl: 'public, max-age=31536000',
  },
});
In my case, though, I have a Google Cloud Function which will be fed some data in the POST body, and I want to save that data to the Firebase bucket.
How do I do this? The upload method only seems to accept a file path and doesn't have a data parameter.
Use the save method:
storage
  .bucket('gs://myapp.appspot.com')
  .file('dest/path/in/bucket')
  .save('This will get stored in my storage bucket.', {
    gzip: true,
    contentType: 'text/plain'
  })
The API docs for Bucket.upload() state that it's just a wrapper around File.createWriteStream(). This method will create a WritableStream that you can use to upload data that's already in memory. You will deal with this stream just like you would any other stream in Node. There is sample code in the API docs.
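A minimal sketch of the stream approach inside a Cloud Function, assuming the POST body has already been parsed into req.body (the bucket and file names are placeholders):

const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

exports.saveData = (req, res) => {
  const file = storage.bucket('myapp.appspot.com').file('data/payload.json');
  const stream = file.createWriteStream({
    metadata: { contentType: 'application/json' }
  });
  stream.on('error', err => res.status(500).send(err.message));
  stream.on('finish', () => res.status(200).send('Saved'));
  // Write the in-memory data and close the stream
  stream.end(JSON.stringify(req.body));
};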
I'm working with NodeJS and using hapi to create an API for uploading and downloading files. When uploading, I read the file information (filename, MIME type, and file content) and store it in a database. The file content is stored as a base64-encoded string.
What I want to do is create an API so that when a client hits it, they are forced to download a file constructed from the stored information, using the code below:
server.route({
  method: 'GET',
  path: '/file',
  handler: function (request, reply) {
    var fileName = // get file name;
    var fileData = // get file content;
    var mime = // get file mime-type;
    var fileBuffer = Buffer.from(fileData, 'base64'); // new Buffer() is deprecated
    reply(fileBuffer)
      .header('Content-disposition', 'attachment; filename=' + fileName)
      .header('Content-type', mime)
      .header('Content-length', fileBuffer.length)
      .encoding('binary');
  }
})
But this code still doesn't seem to work; when I hit the API, it loads forever and no file is downloaded. Can anybody give me a suggestion on how to do this correctly?
UPDATE
The code is correct and works perfectly. The problem I had before was caused by something else: an incorrect encoding/decoding mechanism.
Check out the inert plugin for hapi, which handles files; the repo is here.
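For reference, a minimal sketch of forcing a download with inert, assuming a hapi 16-style server to match the reply() interface used above (the file path is a placeholder):

const Hapi = require('hapi');
const Inert = require('inert');

const server = new Hapi.Server();
server.connection({ port: 3000 });

server.register(Inert, (err) => {
  if (err) throw err;
  server.route({
    method: 'GET',
    path: '/download',
    handler: function (request, reply) {
      // reply.file() is added by inert; mode 'attachment' sets Content-Disposition
      reply.file('./files/report.pdf', { mode: 'attachment' });
    }
  });
});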