I'm building an app that allows users to upload their images to IPFS; the images are then loaded using an <img> tag, and file-type checking is done only in the frontend.
However, I'm also aware of the file upload vulnerabilities found on normal centralized servers. So here's my question: would hackers be able to exploit this?
The following is a JavaScript file I uploaded to IPFS to try to bypass the frontend checking; however, when I access its URL it returns the file as text instead of executing it. Would a sophisticated hacker be able to upload a malicious file somehow and cause damage to my site or my users?
https://cloudflare-ipfs.com/ipfs/bafybeigynjetni7b2z52qqv75u5c6k3fgrowqdp6a4qtcbfd4rq7nnj3pu/
All data on public IPFS nodes is public. If you want to keep files secure, you need to encrypt them. https://docs.ipfs.tech/concepts/privacy-and-encryption/
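Since a frontend-only check can be bypassed exactly as the question describes, one mitigation is to validate the file type on the server (by magic bytes, not extension) before adding it to IPFS. A minimal sketch in Node; the function name and the set of accepted formats are illustrative assumptions:

```javascript
// Server-side content sniffing: JPEG files start with FF D8 FF,
// PNG files with 89 50 4E 47. Extensions and MIME headers sent by
// the client can be forged, so check the actual bytes.
const JPEG_MAGIC = Buffer.from([0xff, 0xd8, 0xff]);
const PNG_MAGIC = Buffer.from([0x89, 0x50, 0x4e, 0x47]);

function looksLikeImage(buf) {
  return (
    buf.length >= 4 &&
    (buf.subarray(0, 3).equals(JPEG_MAGIC) ||
      buf.subarray(0, 4).equals(PNG_MAGIC))
  );
}
```

Rejecting uploads that fail this check (before pinning to IPFS) means the JavaScript file in the question would never reach the gateway in the first place, regardless of what the browser-side check allowed.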
I'm wondering what the modern paradigm is for securing uploaded images/files.
In other words, I have a website that requires authentication. Once a user logs in, they can upload files/images which they can later access.
I'm using multer for the file upload, so a couple of questions:
1) Should I be storing files in a database (MongoDB) or in folders? Pros and cons? Security considerations?
2) Assuming the answer to #1 is "folders", how do I practically control access (i.e. authorization) to those files? In other words, if a user uploads a file image.jpg and I save it to http://API_URL/images/image.jpg, I want that jpg to be accessible only after I receive a validation token from that user, but that isn't possible since the image will be accessed via an <img> tag, i.e. not a GET or POST request where I can attach such a token. Is that something I should be doing via the Express router? Maybe make the path of the saved file /images/:id and attach some long, unguessable id as part of the path to the image?
Thanks
To render in three.js, we need some images (jpg/png) and JSONs (UV data). All these files are stored in their respective folders and are visible for clients to look at.
I use Django/Python to run a local server; the Python code is compiled to .pyc and the JS code is obfuscated, but the folder structure is accessible to casual users. In three.js we use tex_loader and json_loader functions, to which the file paths are given as inputs. I was looking for ways of securing this behind-the-scenes work.
I happened to read about custom binary formats, but that felt like a lot of work.
Or is there a way of giving access to the files only to certain processes started through Django/the web browser?
Are there any easy-to-deploy solutions available to protect our IP?
An option would be to only serve the files to authenticated users. This could be achieved by having an endpoint on your backend like:
api/assets/data.json
and the controller in the backend would receive the file name (data.json); the code could check whether the user requesting the endpoint is authenticated and, if so, read the file from the file system (my-private-folder/assets/data.json) and return it with the correct MIME type to the browser.
I would like to know the best way to handle image uploading and saving the reference to the database. What I'm mostly interested in is the order you do the process in.
Should you upload the images first from the front-end (say, to Cloudinary), then call the API with the resulting links and save them to the database?
Or should you upload the images to the server first, then push them to storage from the back-end and save the reference afterwards?
Or should you do the image upload after you save the record in the database, then update the record once the images are uploaded?
It really depends on the resources, timeline, and number of images you need to upload daily.
So basically, if you have very few images to upload, you can upload the image to your server and then upload it to whatever cloud storage (S3, Cloudinary, ...) you are using. This is very easy to implement (you can find code snippets on the internet), and you can keep your cloud platform's secret keys/credentials securely on the server side.
But in my opinion, the best way of doing this is something like the following; I'll take user registration as an example:
Make a server call to get temporary credentials for uploading files to the cloud (generally all providers offer this, e.g. STS/signed URLs in AWS).
The user fills in the form and selects the image on the client side. When the user clicks the submit button, make one call to save the user in the database and start the upload with the temporary credentials. If possible, keep a predictable path for the upload, like /users/:userId or something similar; this highly depends on your use case.
When the upload finishes, make a server call for acknowledgment and store a flag in the database.
The advantages of this approach are:
You completely offload your server from handling file operations, which are heavy and I/O-blocking, and you distribute that load across all clients.
If you want to post-process the files after upload, you can easily integrate this with serverless platforms and offload that work there as well.
You can easily provide a retry mechanism for your users in case a file upload fails; they won't need to refill the form, just upload the image/file again.
You don't need to expose the storage URL directly to the client for file upload, as you are using temporary credentials.
If the significance of the images in your app is high, then ideally you should not complete the transaction until the image is saved. The approach should be: create the object in your code that you will eventually insert into MongoDB, start the upload of the image to the cloud, and then add the link to this object. Finally, insert the object into MongoDB in one go; do not make repeated calls. If anything fails before that, raise an error and catch the exception.
There are several possible answers.
If you are working with files bigger than 16 MB, go with GridFS and multer.
If your files are actually less than 16 MB, try using a converter that changes JPEG/PNG images into a format suitable for saving directly to MongoDB; you can see this as an easy alternative to GridFS.
Please check the GitHub repo for more details.
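The converter and repo are not linked above, but one common way to fit a sub-16 MB image inside a regular MongoDB document is to base64-encode it. A sketch of that approach; the document shape here is my assumption, not the repo's actual format:

```javascript
// Store a small (<16 MB) image inside an ordinary MongoDB document
// by converting the raw bytes to base64.
function imageToDocument(filename, mimeType, buf) {
  return {
    filename,
    contentType: mimeType,
    data: buf.toString('base64'),
  };
}

// Recover the original bytes when serving the image back.
function documentToBuffer(doc) {
  return Buffer.from(doc.data, 'base64');
}
```

The decoded buffer can be returned with the stored `contentType`, or the base64 string embedded directly in an `<img>` src as a `data:image/png;base64,...` URI. Note base64 inflates storage by about 33%, which is one reason GridFS remains preferable for larger files.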
Good day!
Here's a question... we are storing a large file in third-party storage. The link to it is located on our website, which is built with Express in Node. The goal is to make sure that the client doesn't see the path to the external site.
Examples:
File actual location: http://www.some.storage/bucket/SuperBigFile.exe
Path to file from our website: http://www.ourWeb.site/files/SuperBigFile.exe
I tried it with
res.redirect("http://www.some.storage/bucket/SuperBigFile.exe");
and
request('http://www.some.storage/bucket/SuperBigFile.exe').pipe(res);
request(...).pipe(res) would stream the file through our server, and the file itself is a PDF report with a size of 300 MB. Everything that goes through our primary server usually gets hung up after 30-40 MB; that was the primary reason for moving the file to the storage, but the client doesn't want the users to see the path to the storage...
Both versions let the file be downloaded, and it can in fact be accessed through the http://www.ourWeb.site/files/SuperBigFile.exe URL, but if I'm using the Fiddler debugger, I can see requests to the original file at the storage. Is there a way to hide it completely, so it won't be visible from the client side and the client would think that it is coming from our website?
Thanks.
I want to upload images from the client to the server. The client must see a list of all images he or she has and see the image itself (a thumbnail or something like that).
I have seen people use two methods (generically speaking):
1- Upload image and save the binaries to MongoDB
2- Upload an image and move it to a folder, save the path somewhere (the classic method, and the one I implemented so far)
What are the pros and cons of each method, and how can I retrieve the data and show it in a template in each case (getting the path and writing it to the src attribute of an <img> tag, or sending the binaries)?
Problems found so far: when I request foo.jpg (localhost:3000/uploads/foo.jpg) that I uploaded and the server moved to a known folder, my router (iron router) fails to find how to deal with the request.
1- Upload image and save the binaries to MongoDB
Either you limit the file size to 16 MB and use plain MongoDB documents, or you use GridFS and can store anything (no size limit). There are several pros and cons to this method, but IMHO it is much better than storing files on the file system:
Files don't touch your file system; they are piped into your database
You get back all the benefits of mongo and you can scale up without worries
Files are chunked and you can only send a specific byte range (useful for streaming or download resuming)
Files are accessed like any other mongo document, so you can use the allow/deny function, pub/sub, etc.
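To illustrate the byte-range point above: a download endpoint typically parses the HTTP Range header and passes the resulting offsets to the GridFS read stream, answering with 206 Partial Content. A sketch of just the parsing step (my own helper, not from any library):

```javascript
// Parse an HTTP Range header ("bytes=start-end", "bytes=start-",
// or "bytes=-suffixLength") so a byte range of a GridFS file can
// be streamed for resumable downloads.
function parseRange(header, fileLength) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header || '');
  if (!m || (m[1] === '' && m[2] === '')) return null;
  const start = m[1] === ''
    ? fileLength - Number(m[2]) // suffix form: last N bytes
    : Number(m[1]);
  const end = m[2] === '' || m[1] === ''
    ? fileLength - 1            // open-ended or suffix: run to the end
    : Number(m[2]);
  if (start < 0 || end >= fileLength || start > end) return null;
  return { start, end }; // hand these to the GridFS stream options
}
```

Because GridFS stores files in fixed-size chunks, the driver can seek to the chunk containing `start` without reading the whole file, which is what makes this cheap.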
2- Upload an image and move it to a folder, save the path somewhere
(the classic method, and the one I implemented so far)
In this case, either you store everything in your public folder and make it all publicly accessible via file names + paths, or you use a dedicated asset-delivery system, such as an nginx server. Either way, you will be using something less secure and maintainable than the first option.
That being said, have a look at the file collection package. It is much simpler than collection-fs and offers everything you are looking for out of the box (including a file API, GridFS storage, resumable uploads, and many other things).
Problems found so far: when I request foo.jpg (localhost:3000/uploads/foo.jpg) that I uploaded and the server moved to a known folder, my router (iron router) fails to find how to deal with the request.
Do you know that this path leads to the public/uploads/foo.jpg directory under your project root? If you put the file there, you should be able to request it.