Uploading multiple large files via a browser and verifying the upload

I need to allow web users to:
upload multiple files via a browser
upload large files (> 2GB)
verify the upload was not corrupted
I've seen recommendations for Uploadify and Jumploader; however, it isn't clear to me whether these tools verify the uploaded file (for example, by comparing the MD5 of the client-side file against the uploaded copy). The application must support event hooks before and after the upload. Any suggestions?
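Whatever uploader is chosen, the comparison itself can live in the pre/post-upload hooks: the client computes a hash before the upload and the server recomputes it afterwards. Below is a minimal sketch of the server-side half in Node (the backend language is an assumption, not something the question specifies); the client is assumed to send along the MD5 it computed locally, and all names are placeholders.

```javascript
const crypto = require('crypto');
const fs = require('fs');

// Hash the stored file with a stream, so a >2 GB upload is never held in memory.
function md5OfFile(filePath) {
  return new Promise((resolve, reject) => {
    const hash = crypto.createHash('md5');
    fs.createReadStream(filePath)
      .on('data', chunk => hash.update(chunk))
      .on('end', () => resolve(hash.digest('hex')))
      .on('error', reject);
  });
}

// Post-upload hook: compare against the hash the client sent (e.g. a form field).
async function verifyUpload(storedPath, clientMd5) {
  const serverMd5 = await md5OfFile(storedPath);
  return serverMd5 === clientMd5; // a mismatch means the transfer was corrupted
}
```

If the two digests match, the stored file is byte-identical to the client's copy; a mismatch can trigger a re-upload from the post-upload hook.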

Related

Serving and processing large CSV and XML files from API

I am working on a Node web application and need a form that lets users provide a URL pointing to a large (potentially 100 MB) CSV or XML file. Submitting the form triggers the server (Express) to download the file using fetch, process it, and save it to my Postgres database.
The problem I am having is the size of the file. Responses from the API take minutes to return, and I'm worried this solution is not suitable for a production application. I've also seen that many servers (including cloud-based ones) have response size limits, which would obviously be exceeded here.
Is there a better way to do this than simply via a fetch request?
Thanks
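One way to keep the request fast and the memory footprint flat is to acknowledge the submission immediately and then stream the remote file into the database in the background, rather than buffering it and replying at the end. A minimal sketch, assuming Node 18+ (built-in fetch), Express, the `pg` driver, and a hypothetical two-column `imports` table; a real implementation would use a proper CSV/XML parser and batched inserts.

```javascript
const express = require('express');
const readline = require('node:readline');
const { Readable } = require('node:stream');
const { Pool } = require('pg');

const app = express();
const pool = new Pool(); // connection settings come from the PG* environment variables
app.use(express.json());

app.post('/imports', async (req, res) => {
  const { url } = req.body;
  res.status(202).json({ status: 'accepted' }); // answer before the long work starts

  try {
    const response = await fetch(url);
    // Process the download line by line instead of loading 100 MB into memory.
    const rl = readline.createInterface({ input: Readable.fromWeb(response.body) });

    let isHeader = true;
    for await (const line of rl) {
      if (isHeader) { isHeader = false; continue; }
      const [name, value] = line.split(','); // naive CSV split, for illustration only
      await pool.query('INSERT INTO imports (name, value) VALUES ($1, $2)', [name, value]);
    }
  } catch (err) {
    console.error('import failed', err);
  }
});

app.listen(3000);
```

Because the 202 response is sent before the download begins, the client never waits minutes on an open connection and no response size limit comes into play; progress can be exposed through a separate job-status endpoint if needed.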

How to upload client files directly to MinIO using Node.js?

Can anyone help me implement direct upload of client files to MinIO in Node.js? I've seen that there are two methods for this, presignedPostPolicy and presignedPutObject, but I haven't found an explanation of how the two methods compare.
I want the link given to the user for uploading to carry restrictions, such as a maximum upload size.
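For the size limit specifically, presignedPostPolicy is the relevant one: presignedPutObject only produces a signed PUT URL, while a POST policy can embed conditions such as a content-length range. A minimal sketch with the `minio` npm client; the endpoint, bucket name, and limits below are placeholders.

```javascript
const Minio = require('minio');

const client = new Minio.Client({
  endPoint: 'minio.example.com', // placeholder endpoint
  useSSL: true,
  accessKey: process.env.MINIO_ACCESS_KEY,
  secretKey: process.env.MINIO_SECRET_KEY,
});

async function getUploadLink(objectName) {
  const policy = client.newPostPolicy();
  policy.setBucket('uploads');
  policy.setKey(objectName);
  policy.setExpires(new Date(Date.now() + 10 * 60 * 1000)); // link valid for 10 minutes
  policy.setContentLengthRange(1, 10 * 1024 * 1024);        // reject anything over 10 MB

  // Recent versions of the client return a promise; older ones take a callback.
  const { postURL, formData } = await client.presignedPostPolicy(policy);
  return { postURL, formData };
}
```

The browser then POSTs the file to `postURL` as multipart/form-data, including every key from `formData`; MinIO rejects the upload if the file falls outside the declared length range or the policy has expired.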

IPFS File Upload Vulnerabilities

I'm building an app that allows users to upload images to IPFS; each image is then loaded with an <img> tag, and the file type checking is done only in the frontend.
However, I'm also aware of the file upload vulnerabilities that affect ordinary centralized servers. So here's my question: would attackers be able to exploit this?
The following is a JavaScript file I uploaded to IPFS to bypass the frontend check; however, when I access its URL the file is returned as text instead of being executed. Would a sophisticated attacker still be able to upload a malicious file somehow and cause damage to my site or my users?
https://cloudflare-ipfs.com/ipfs/bafybeigynjetni7b2z52qqv75u5c6k3fgrowqdp6a4qtcbfd4rq7nnj3pu/
All data on IPFS public nodes is public. If you want to secure files, you need to encrypt them. https://docs.ipfs.tech/concepts/privacy-and-encryption/
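Separately from the privacy point, a frontend-only type check is trivial to bypass (as the question itself demonstrates), so it is worth re-validating on the server before the file is pinned. A minimal sketch, not tied to any particular IPFS client, that checks the magic bytes of the uploaded buffer against the standard JPEG and PNG signatures:

```javascript
// Returns true only if the buffer starts with a JPEG (FF D8 FF) or PNG signature.
function looksLikeImage(buffer) {
  const isJpeg = buffer.length > 3 &&
    buffer[0] === 0xff && buffer[1] === 0xd8 && buffer[2] === 0xff;
  const isPng = buffer.length > 8 &&
    buffer.subarray(0, 8).equals(
      Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]));
  return isJpeg || isPng;
}

// Usage sketch: reject before the file ever reaches IPFS.
// if (!looksLikeImage(uploadedBuffer)) throw new Error('not a supported image type');
```

This keeps non-image content out of the app regardless of what the frontend claims.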

How do I efficiently combine remote files (AWS S3) + locally generated PDF files as .zip files in Node.js

I am building an application where media files and pdf certificates can be downloaded by users.
The media files are stored in AWS S3 buckets and are retrieved using CloudFront URLs, while the certificates are to be generated on the fly by the backend node.js GraphQL API server running on Heroku.
What would be the best way to fetch the required media files, generate the certificate, add the files and the certificate to an archive (zip), and then deliver the archive to the frontend?
My first thought is:
1. client request
2. download media
3. generate PDF
4. compress and save the archive (single use only, no longer necessary after download)
5. server response (send a link to the client)
Is there a way to make this more efficient? I don't want to tie up server resources processing files, and I would like to avoid saving the zip files on the server. Are there other tools or services that I can delegate that task to?
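One option is to build the zip as a stream and pipe it straight into the HTTP response, so nothing is ever written to the server's disk. A minimal sketch, assuming the `archiver` package and the AWS SDK v3 S3 client; `buildCertificatePdf()` is a hypothetical helper that returns a readable PDF stream, and the bucket, key, and file names are placeholders.

```javascript
const archiver = require('archiver');
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

async function sendDownload(req, res) {
  res.set({
    'Content-Type': 'application/zip',
    'Content-Disposition': 'attachment; filename="bundle.zip"',
  });

  const archive = archiver('zip');
  archive.pipe(res); // the zip is streamed to the client as it is built

  // Pull each media file from S3 as a stream and append it to the archive.
  const media = await s3.send(
    new GetObjectCommand({ Bucket: 'media-bucket', Key: 'video.mp4' }));
  archive.append(media.Body, { name: 'video.mp4' });

  // Generate the certificate on the fly and append it as well (hypothetical helper).
  archive.append(buildCertificatePdf(req.user), { name: 'certificate.pdf' });

  await archive.finalize();
}
```

Because every part is streamed, memory use stays roughly constant regardless of file sizes, and there is no temporary archive to clean up afterwards.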

Uploading and requesting images from Meteor app

I want to upload images from the client to the server. The client must be able to see a list of all the images they have and view each image itself (a thumbnail or something like that).
I have seen people use two methods (generally speaking):
1- Upload image and save the binaries to MongoDB
2- Upload an image and move it to a folder, save the path somewhere (the classic method, and the one I implemented so far)
What are the pros and cons of each method, and how can I retrieve the data and show it in a template in each case (getting the path and writing it to the src attribute of an img tag, versus sending the binaries)?
Problems found so far: when I request foo.jpg (localhost:3000/uploads/foo.jpg), which I uploaded and the server moved to a known folder, my router (Iron Router) doesn't know how to handle the request.
1- Upload image and save the binaries to MongoDB
Either you limit the file size to 16 MB and use plain MongoDB documents, or you use GridFS and can store files of any size. There are several pros and cons to this method, but IMHO it is much better than storing on the file system (a GridFS sketch follows the list below):
Files don't touch your file system, they are piped to you database
You get back all the benefits of mongo and you can scale up without worries
Files are chunked, and you can send only a specific byte range (useful for streaming or resuming downloads)
Files are accessed like any other mongo document, so you can use the allow/deny function, pub/sub, etc.
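A minimal sketch of the GridFS mechanics using the plain `mongodb` driver (the Meteor packages mentioned below wrap the same mechanism); the connection string, database, bucket, and file names are placeholders.

```javascript
const { MongoClient, GridFSBucket } = require('mongodb');
const fs = require('fs');

async function storeAndStreamImage() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const bucket = new GridFSBucket(client.db('meteor'), { bucketName: 'images' });

  // Upload: pipe the file into chunked storage inside MongoDB.
  fs.createReadStream('/tmp/foo.jpg')
    .pipe(bucket.openUploadStream('foo.jpg'))
    .on('finish', () => console.log('stored'));

  // Download: stream the file back out of the database by name.
  bucket.openDownloadStreamByName('foo.jpg')
    .pipe(fs.createWriteStream('/tmp/foo-copy.jpg'));
}
```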
2- Upload an image and move it to a folder, save the path somewhere (the classic method, and the one I implemented so far)
In this case, either you store everything in your public folder and make it publicly accessible via file names and paths, or you use a dedicated asset delivery system, such as an nginx server. Either way, you will be using something less secure and maintainable than the first option.
That being said, have a look at the file collection package. It is much simpler than collection-fs and offers everything you are looking for out of the box (including a file API, GridFS storage, resumable uploads, and many other things).
Problems found so far: when I request foo.jpg (localhost:3000/uploads/foo.jpg) that I uploaded and the server moved to a known folder, my router (iron router) fails to find how to deal with the request.
Do you know that this path maps to the public/uploads/foo.jpg directory under your project root? If you put the file there, you should be able to request it.
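If the file still cannot be requested (Meteor's public/ folder is bundled at build time, so files written there at runtime are not always picked up), a server-side handler can serve the uploads folder before Iron Router ever sees the URL. A minimal sketch with Meteor's WebApp package; the on-disk directory and the jpeg-only content type are placeholder assumptions.

```javascript
import { WebApp } from 'meteor/webapp';
import fs from 'fs';
import path from 'path';

// Hypothetical on-disk location where the upload code moved the files.
const UPLOAD_DIR = '/var/app/uploads';

WebApp.connectHandlers.use('/uploads', (req, res, next) => {
  // req.url is the part after /uploads; basename() blocks path traversal.
  const filePath = path.join(UPLOAD_DIR, path.basename(req.url));
  if (!fs.existsSync(filePath)) return next();

  res.writeHead(200, { 'Content-Type': 'image/jpeg' }); // assumes jpeg uploads for brevity
  fs.createReadStream(filePath).pipe(res);
});
```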

Resources