Sites like www.ebayclassifieds.com let users upload images in order to see thumbnail previews and make image adjustments before posting content. Visitors are able to upload images to those sites anonymously without any authorization beforehand.
Can the same type of image previews be done for a smaller site that has bandwidth and disk space constraints? I'd guess that one would set up a cron job to periodically delete images that were anonymously uploaded. But what are other measures that can be taken so that bandwidth usage and disk space don't get out of hand, in case someone tries to spam your site with bogus image uploads?
Here are some ideas off the top of my head:
Use session state to keep track of uploaded files and delete them automatically when the session expires.
Limit uploads per session/visitor (e.g. one per anonymous visitor)
Limit the maximum size of a file that can be uploaded.
Limit image types to only those that are compressed (e.g. don't allow BMPs)
Scale the images down to a reasonable size as soon as they are uploaded; you probably don't need full size. (A rough sketch of a few of these ideas follows below.)
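A minimal sketch of a few of these ideas (size limit, type filter, immediate scale-down), assuming an Express app with the multer and sharp packages; the route name and the limits are just illustrative:

    const express = require('express');
    const multer = require('multer');
    const sharp = require('sharp');

    const app = express();
    // Keep the file in memory and reject anything over ~2 MB before it touches disk.
    const upload = multer({
      storage: multer.memoryStorage(),
      limits: { fileSize: 2 * 1024 * 1024, files: 1 },
      fileFilter: (req, file, cb) => {
        // Only accept compressed image types (no BMPs).
        const ok = ['image/jpeg', 'image/png', 'image/webp', 'image/gif'].includes(file.mimetype);
        cb(null, ok);
      },
    });

    app.post('/preview', upload.single('image'), async (req, res) => {
      if (!req.file) return res.status(400).send('No acceptable image uploaded');
      // Scale down to a reasonable preview size right away; never keep the full-size original.
      const preview = await sharp(req.file.buffer)
        .resize({ width: 800, withoutEnlargement: true })
        .jpeg({ quality: 80 })
        .toBuffer();
      res.type('image/jpeg').send(preview);
    });

    app.listen(3000);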
Besides madisonw's answer, I would just add: use a CAPTCHA for the upload as well, so users can't use automated tools to upload images in bulk...
The platform I'm working on involves a client (ReactJS) and a server (NodeJS, Express), of course.
The major feature of this platform involves users uploading images constantly.
Everything has been set up successfully using multer to receive images as form data on my API server, and now it's time to create an "image management system".
The main problem I'll be tackling is the unpredictable file size of uploads. The files are images, and their size depends on the user's OS and how they were produced, i.e. users taking pictures or taking screenshots.
The first solution is to determine a max file size and compress the image before transporting it to the API server. When the backend receives it successfully, the image is uploaded to a CDN (Cloudinary) and the link is stored in the database along with the other related records.
The second, which I'm strongly leaning towards, is shifting this "upload to CDN" step to the client side: the client connects to Cloudinary directly, then grabs the secure link and inserts it into the JSON that is sent to the server.
This eliminates the problem of grappling with file sizes, which is progress, but I'd like to know if it is good practice.
Restricting the file size is possible when using the Cloudinary Upload Widget for client-side uploads.
You can include the 'maxFileSize' parameter when calling the widget, setting its value to 500000 (value should be provided in bytes).
https://cloudinary.com/documentation/upload_widget
If the client tries to upload a larger file, they will get back an error stating that the max file size is exceeded, and the upload will fail.
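For reference, a rough sketch of opening the widget with that parameter; the cloud name, upload preset and button id below are placeholders:

    // Assumes https://upload-widget.cloudinary.com/global/all.js is loaded on the page.
    // 'demo-cloud' and 'unsigned_preset' are placeholders for your own account values.
    const widget = cloudinary.createUploadWidget(
      {
        cloudName: 'demo-cloud',
        uploadPreset: 'unsigned_preset',
        maxFileSize: 500000, // in bytes; larger files fail with a "max file size exceeded" error
      },
      (error, result) => {
        if (!error && result && result.event === 'success') {
          console.log('Uploaded:', result.info.secure_url);
        }
      }
    );

    document.getElementById('upload-button').addEventListener('click', () => widget.open());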
Alternatively, you can choose to limit the dimensions of the image: if they are exceeded, instead of failing the upload with an error, the image is automatically scaled down to the given dimensions while retaining its aspect ratio, and the upload request succeeds.
However, using this method doesn't guarantee the uploaded file will be below a certain desired size (e.g. 500kb) as each image is different and one image that is scaled-down to given dimensions can result in a file size that is smaller than your threshold, while another image may slightly exceed it.
This can be achieved using the limit cropping method as part of an incoming transformation.
https://cloudinary.com/documentation/image_transformations#limit
https://cloudinary.com/documentation/upload_images#incoming_transformations
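A small sketch of such an incoming transformation via the Node SDK; the credentials, file name and the 1600px limit are placeholders:

    const cloudinary = require('cloudinary').v2;
    cloudinary.config({ cloud_name: 'demo-cloud', api_key: '<key>', api_secret: '<secret>' });

    // Incoming transformation: scale the image down so neither side exceeds 1600px,
    // keeping the aspect ratio; smaller images are left untouched ("limit" crop mode).
    cloudinary.uploader
      .upload('local-photo.jpg', {
        transformation: [{ width: 1600, height: 1600, crop: 'limit' }],
      })
      .then((result) => console.log(result.secure_url))
      .catch(console.error);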
I want to create a simple feature on my site (React, Node, MongoDB). Users can upload their photos, and I want to show their faces blurred to unauthorized visitors. What is the best way of developing this functionality: saving blurred images separately in the DB, calling an API to blur images every time before responding from the backend, or blurring images on the frontend? How can I make it fast and safe? Any help is appreciated, thank you in advance.
Everything has a pro and con approach.
Upload one photo and, using a flag in the data such as the user object (or better yet inside an auth token), apply a blur filter to the image. The downside: if someone is clever enough they can get the real picture, e.g. by intercepting the download.
Upload one photo and, using a flag in the backend data models or the user session, reduce the quality of the image on download. The downside: pulling images down will be slower, as there has to be image manipulation before it's sent to the front end.
Upload two images, one normal and one low quality. Downside: a longer initial upload, and you are now taking up more space in your image bucket, which will cost you more money.
There will be more approaches, but each will have a trade-off between speed, security and cost/space. I personally would go with number three: if cost is not an issue, you use good compression, and you don't get snowballed with users, the cost difference should not be that much.
It depends on your use case. Blurring images on the frontend after calling an API to verify whether the user is authorised is the least secure option. Saving two images on upload seems like a good idea, but it's a bit wasteful as you're saving the same image twice. I would go with blurring images on the backend.
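As a rough illustration of backend blurring, here is a minimal sketch assuming the sharp package; whether you run it once at upload time (and store the result) or on every download is the speed/space trade-off discussed above:

    const sharp = require('sharp');

    // Produce a blurred variant of an uploaded photo. Storing this alongside the
    // original avoids re-blurring on every request; running it per-request saves
    // storage but adds latency.
    async function makeBlurredVariant(originalBuffer) {
      return sharp(originalBuffer)
        .resize({ width: 400, withoutEnlargement: true }) // smaller output, faster to serve
        .blur(15) // blur sigma; higher = stronger blur
        .jpeg({ quality: 70 })
        .toBuffer();
    }

    // Example: pick which variant to serve based on the request's auth status.
    async function photoForViewer(originalBuffer, isAuthorized) {
      return isAuthorized ? originalBuffer : makeBlurredVariant(originalBuffer);
    }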
I have a website created using Node Express. It lets a user upload an image, which is stored locally in a server folder, and the path is saved in the database.
The problem is that the images are taking up too much space on the server disk, so I need to use a CDN as storage for those images and to show them to users. The trouble is I don't know the proper end-to-end flow for storing these images on a CDN.
By end-to-end flow I mean: the customer uploads the picture, the server saves it, and it can be served again when the user needs to see it.
My thought is: when the user uploads the image, the server first saves it locally and records the image path; a cron job then stores the image on the CDN, and finally the local copy on the server is deleted once the image has been stored on the CDN successfully.
Is that the correct way, or is there another way to do this?
You can do something like this:
Store images on cheap long-term storage like S3; this serves as the source of truth (a rough sketch of this part follows below).
Configure the CDN to use the S3 URL (or your server) as its origin, so you don't need to upload anything to the CDN yourself.
Bonus: create an image resizer service to sit in front of the source and configure the CDN to use the resizer service as its origin. That way the CDN's cache reduces the load on your resizer service.
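A rough sketch of the "store on S3 as the source of truth" part, assuming multer and the AWS SDK v3; the bucket, region and CDN domain are placeholders:

    const express = require('express');
    const multer = require('multer');
    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

    const app = express();
    const upload = multer({ storage: multer.memoryStorage(), limits: { fileSize: 5 * 1024 * 1024 } });
    const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

    app.post('/upload', upload.single('image'), async (req, res) => {
      const key = `images/${Date.now()}-${req.file.originalname}`;
      // Push straight to S3 (the source of truth); the CDN is configured to pull from
      // this bucket, so there is no separate "upload to CDN" step or cron job needed.
      await s3.send(new PutObjectCommand({
        Bucket: 'my-image-bucket', // placeholder bucket name
        Key: key,
        Body: req.file.buffer,
        ContentType: req.file.mimetype,
      }));
      // Save the key (or the CDN URL built from it) in the database instead of a local path.
      res.json({ url: `https://cdn.example.com/${key}` }); // placeholder CDN domain
    });

    app.listen(3000);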
In addition to Tran's answer, you can perform some optimizations.
For example, converting to the WebP image format, which can reduce the size.
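For instance, a tiny sketch of that conversion with the sharp package (the quality value is just an example):

    const sharp = require('sharp');

    // Convert an uploaded image to WebP, which is usually noticeably smaller
    // than the equivalent JPEG/PNG at comparable visual quality.
    async function toWebP(inputBuffer) {
      return sharp(inputBuffer).webp({ quality: 80 }).toBuffer();
    }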
Also, you can look at the image CDNs available, which are optimized for images. The following article can be helpful:
https://imagekit.io/blog/what-is-image-cdn-guide/
I have an image upload view on my client (ember.js) that sends a resized image to a Node.js REST API.
It works well, but it is easy for an expert user to force the upload of a non-resized image.
I would like to keep the resize process on the client because this allows users to select heavyweight images that are resized locally and only uploaded afterwards, when they are lightweight.
If someone else uses something like this, I'm interested in how to make it as safe as possible.
As a rule of thumb when developing web applications: never, ever trust any data coming from the client side; always do a check on your server side!
Use authentication; this ensures that users are only allowed to upload data to their own account and can't fiddle with other people's files.
Add a special message-passing step between your server and client. A simple example would be:
i. Send a POST API request first (containing the image information and the targeted compressed size) to your server, indicating that your client is starting to compress the picture.
ii. When uploading, add metadata along with the complete compressed image, and have your server check whether the uploaded image is within the accepted threshold, else discard it (a rough sketch follows).
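A minimal sketch of the server-side check from step ii, assuming Express, multer and sharp; the thresholds, field name and auth middleware are made up for illustration:

    const express = require('express');
    const multer = require('multer');
    const sharp = require('sharp');

    const app = express();
    const upload = multer({ storage: multer.memoryStorage(), limits: { fileSize: 10 * 1024 * 1024 } });

    const MAX_BYTES = 500 * 1024; // accepted threshold (illustrative)
    const MAX_WIDTH = 1600;       // expected client-side resize target (illustrative)

    // Placeholder standing in for your real authentication middleware ("use authentication").
    function requireAuth(req, res, next) { next(); }

    app.post('/upload', requireAuth, upload.single('image'), async (req, res) => {
      const file = req.file;
      // Never trust the client: re-check size and dimensions on the server.
      if (!file || file.size > MAX_BYTES) {
        return res.status(413).send('Image exceeds the accepted size threshold');
      }
      const meta = await sharp(file.buffer).metadata();
      if (!meta.width || meta.width > MAX_WIDTH) {
        return res.status(422).send('Image was not resized as expected');
      }
      // ...store the image...
      res.sendStatus(201);
    });

    app.listen(3000);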
You could enhance the security of the message passing to be more complicated!
This would be my simple security approach; anyone else got a better solution? :)
Approaches here also work for file uploads. You can use a combination of checking:
the content-length header (i.e. req.headers['content-length'] > x), and/or
the stream size as it's being read by the server (i.e. req.on('data')).
If the stream data exceeds a certain size you can respond accordingly. Check out something like Multer for file uploads, specifically the limits section. The best approach would probably be the second option.
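A rough sketch of the second option, counting bytes as the request streams in (the limit is arbitrary):

    const http = require('http');

    const MAX_BYTES = 5 * 1024 * 1024; // arbitrary example limit

    const server = http.createServer((req, res) => {
      // Cheap first pass: reject immediately if the declared length is already too big.
      if (Number(req.headers['content-length']) > MAX_BYTES) {
        res.writeHead(413).end('Payload too large');
        return;
      }
      let received = 0;
      req.on('data', (chunk) => {
        received += chunk.length;
        // The header can lie (or be absent), so also count the bytes actually read.
        if (received > MAX_BYTES) {
          res.writeHead(413).end('Payload too large');
          req.destroy(); // stop reading the rest of the stream
        }
      });
      req.on('end', () => {
        if (!res.writableEnded) res.end('Upload accepted');
      });
    });

    server.listen(3000);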
Expressjs has bodyParser middleware which can handle file-uploads and can even store them in a directory given in options. But in my app I want to store the files in Amazon S3, so I basically want to stream the file straight to S3 without having to store it locally at all.
But the problem is validation of the file. How can I be sure that these files are all images? Checking the Content-Type isn't a good enough option because it can be faked. I want to know: is it OK if I do the validation after streaming the file to S3? I am asking from a security point of view.
After storing the image, I need to retrieve it to create thumbnails. How can I do that asynchronously, after sending the response to the file upload?
You have contradictory goals of not wanting to store it locally during upload but then also wanting to download it needlessly again to make thumbnails. If you want to go for technical slickness awards, you can simultaneously stream the file upload request body to a local temporary file as well as S3. Or you can do what the rest of the industry does and store it in a local temporary file and then thumbnail it, and then upload all sizes to S3. Either of these approaches alleviates any need to immediately download it from S3 to make thumbnails.
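A sketch of that conventional flow (local temp file, thumbnail from it, upload both to S3), assuming multer, sharp and the AWS SDK v3; the bucket and region are placeholders:

    const fs = require('fs');
    const express = require('express');
    const multer = require('multer');
    const sharp = require('sharp');
    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

    const app = express();
    const upload = multer({ dest: '/tmp/uploads' });  // local temporary file
    const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

    app.post('/upload', upload.single('image'), async (req, res) => {
      const tmpPath = req.file.path;
      // Thumbnail straight from the local temp file: no need to download from S3 again.
      const thumb = await sharp(tmpPath).resize(200, 200, { fit: 'inside' }).jpeg().toBuffer();

      const original = await fs.promises.readFile(tmpPath);
      const key = `images/${req.file.filename}`;
      await s3.send(new PutObjectCommand({ Bucket: 'my-bucket', Key: key, Body: original, ContentType: req.file.mimetype }));
      await s3.send(new PutObjectCommand({ Bucket: 'my-bucket', Key: `thumbs/${req.file.filename}.jpg`, Body: thumb, ContentType: 'image/jpeg' }));

      fs.unlink(tmpPath, () => {}); // clean up the temp file
      res.sendStatus(201);
    });

    app.listen(3000);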
How exactly do you intend to validate that it's really an image? You could look at the first chunk of file data and validate the file type's magic number if that gives you warm fuzzies, but ultimately it's untrusted user data. The second half of the supposed image file could be virus code, and that is just as easily faked as the Content-Type header. It sounds like your security concerns are mostly driven by FUD as opposed to specific threats you intend to defend against. As long as you don't take the user's uploaded data, mark it executable, and run it as root on your server, any non-image data is just going to be corrupt and fail to render correctly in a browser (and/or cause your thumbnailer program to exit with an error, or perhaps crash in an extreme case).
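If you do want the magic-number check anyway, a tiny sketch of inspecting the first bytes (a few common formats only; as noted, this says nothing about the rest of the file):

    // Check the first few bytes of the buffer against well-known image signatures.
    // This does not prove the rest of the file is a benign image; it only rules out
    // the most obvious mislabelled uploads.
    function looksLikeImage(buf) {
      if (buf.length < 12) return false;
      const isJpeg = buf[0] === 0xff && buf[1] === 0xd8 && buf[2] === 0xff;
      const isPng = buf.slice(0, 8).equals(Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]));
      const head6 = buf.slice(0, 6).toString('ascii');
      const isGif = head6 === 'GIF87a' || head6 === 'GIF89a';
      return isJpeg || isPng || isGif;
    }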
Regarding validation: can I just try to create a thumbnail, and if I can't, treat it as an invalid image and delete it? Is this approach fine?
Most of the time, yes. There will be edge cases where your thumbnailer cannot process an image but a browser can as thumbnailers are not perfect and some images are partially corrupt. For example, I have found some animated GIFs that render and animate fine in a web browser but graphicsmagick crashes trying to process them. Not sure there's anything that can be done about those 0.01% edge cases.
And for the uploads part, can I send a response to the user and then carry on with the thumbnail creation and storing it in S3?
Yes, that is generally the best approach, so the user knows their upload succeeded. Image processing is usually architected as a "work queue" model: you just record that there's work to do and then proceed, and a separate process (or processes) takes work off the queue and completes it.
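A sketch of that work-queue shape, using BullMQ here purely as one example library (an assumed dependency; the Redis connection and the thumbnail helper are placeholders):

    const { Queue, Worker } = require('bullmq');

    const connection = { host: 'localhost', port: 6379 }; // placeholder Redis connection
    const thumbnails = new Queue('thumbnails', { connection });

    // In the upload route: record that there's work to do, respond, and move on.
    async function handleUploadFinished(res, s3Key) {
      await thumbnails.add('create', { key: s3Key }); // just record the work
      res.status(201).json({ key: s3Key });           // the user learns the upload succeeded
    }

    // Placeholder for your own "download from S3, thumbnail, upload back" logic.
    async function createThumbnailFor(key) { /* ... */ }

    // A separate worker process drains the queue and does the actual thumbnailing.
    new Worker('thumbnails', async (job) => {
      await createThumbnailFor(job.data.key);
    }, { connection });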