What is the best flow to store an image in CDN - node.js

I have a website built with Node/Express. It lets a user upload an image, which is stored locally in a server folder, and the path is saved in the database.
The problem is that the images take up too much space on the server disk, so I want to use a CDN to store those images and to serve them back to the user. The trouble is I don't know the proper end-to-end flow for storing these images behind a CDN.
By end-to-end flow I mean: the customer uploads the picture, the server saves it, and the image can be served again whenever the user needs to see it.
My thought is: when the user uploads the image, the server first saves it locally along with the image path; a cron job then pushes the image to the CDN; and finally the local copy is deleted once the image has been stored successfully.
Is that the correct way, or is there a better way to do this?

You can do something like this:
Store images on cheap long-term storage like S3. This serves as the source of truth.
Configure the CDN to use the S3 URL (or your server) as its origin, so you don't need to upload anything to the CDN itself.
Bonus: create an image-resizer service that sits in front of the source and configure the CDN to use the resizer service as its origin. Because the CDN caches the resized results, this reduces the load on your resizer service.
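A minimal sketch of that flow with Express, multer and AWS SDK v3 might look like the following; the bucket name, CDN domain, region and the database call are assumptions for illustration. Note that nothing is ever uploaded to the CDN; it just pulls from the bucket as its origin.

    const express = require('express');
    const multer = require('multer');
    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

    const app = express();
    const upload = multer();                              // keep uploads in memory, no local files
    const s3 = new S3Client({ region: 'us-east-1' });     // assumed region

    app.post('/images', upload.single('image'), async (req, res) => {
      const key = `uploads/${Date.now()}-${req.file.originalname}`;
      // 1. Store the original in S3 -- this is the source of truth
      await s3.send(new PutObjectCommand({
        Bucket: 'my-image-bucket',                        // hypothetical bucket
        Key: key,
        Body: req.file.buffer,
        ContentType: req.file.mimetype,
      }));
      // 2. Save only the key in your database, e.g.:
      // await db.collection('images').insertOne({ key });
      // 3. Serve it through the CDN that uses the bucket as its origin
      res.json({ url: `https://cdn.example.com/${key}` });
    });

    app.listen(3000);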

In addition to Tran's answer, you can perform some optimizations.
For example, converting images to the WebP format can reduce their size (see the sketch below).
You can also look at dedicated image CDNs, which are optimized for images. The following article may be helpful:
https://imagekit.io/blog/what-is-image-cdn-guide/
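For the WebP conversion, here is a minimal sketch using the sharp library (an assumption; any image library with WebP support works similarly):

    const sharp = require('sharp');

    // Re-encode a JPEG/PNG as WebP at ~80% quality, which usually shrinks the
    // file noticeably before it is pushed to storage/CDN.
    async function toWebP(inputPath, outputPath) {
      await sharp(inputPath)
        .webp({ quality: 80 })
        .toFile(outputPath);
    }

    // toWebP('uploads/photo.jpg', 'uploads/photo.webp');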

Related

Hosting images when deployed

I'm a beginner using MERN. I've successfully deployed the frontend and backend separately on Render for testing purposes. When a user signs up they can choose a profile picture; on localhost this works fine, the picture is added to MongoDB and shows in the web application. However, when deployed I get an error that I can't GET the image from the specific path.
I'm trying to work out why this is the case, so could someone explain it in ELI5 terms? Also, would I need to host my images somewhere like Cloudinary? Thank you.
I tried uploading images on the deployed site, but they are not being fetched.
You should consider using Cloudinary or an S3 bucket to store images in production.
First, call your /upload API route.
Generate a UUID plus the file extension (.png or .jpg).
Upload your image file to Cloudinary (for example) using that key as the name.
Then save the key in your database so you can access the image later on (a sketch of these steps follows below).
If you decide to use an S3 bucket, always configure a CDN between your client and your bucket to avoid billing surprises.
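A minimal sketch of those steps, assuming Express, multer and the cloudinary npm package; the route name, the `images` collection and the credentials are placeholders:

    const express = require('express');
    const multer = require('multer');
    const path = require('path');
    const { randomUUID } = require('crypto');
    const cloudinary = require('cloudinary').v2;

    cloudinary.config({ cloud_name: '...', api_key: '...', api_secret: '...' });

    const app = express();
    const upload = multer({ dest: 'tmp/' });

    app.post('/upload', upload.single('image'), async (req, res) => {
      // Generate a uuid-based key; keep the original extension for your own records
      const key = randomUUID() + path.extname(req.file.originalname);
      // Upload to Cloudinary; it detects the format, so the bare uuid is enough as public_id
      const result = await cloudinary.uploader.upload(req.file.path, {
        public_id: path.parse(key).name,
      });
      // Save the key/URL in your database so you can serve the image later, e.g.:
      // await db.collection('images').insertOne({ key, url: result.secure_url });
      res.json({ url: result.secure_url });
    });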

Is image or file being always downloaded from Google Cloud Storage on click?

Please help me understand the following. I have a Node.js app which I want to run on Google Cloud App Engine; this app will contain some images which I plan to store on Google Cloud Storage. In my sample app, once I upload an image and open its URL (mediaLink or selfLink), the image gets downloaded.
Why is that? I understand that each download, each click, costs money, but is there any way to make the URL simply display the image instead of downloading it?
Serving a file from Google Cloud Storage for display is the same as serving it for saving. Both actions require transferring the content of the image to the device, whether for display or for saving.
Whether an image is displayed or a Save As dialog pops up is controlled by HTTP headers. For example, if the Content-Type header is set incorrectly (not to an image type), some browsers will save the file instead. If you want your image files to be displayed as images, set the header correctly for the type of picture; for PNG files, set Content-Type: image/png.
You can also force a download with the Content-Disposition: attachment header.
In summary, it does not matter whether you are displaying an image or saving it to local storage; it will cost you money either way. Both actions require downloading (transferring) the contents of the file across the Internet.
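A minimal sketch of setting those headers at upload time with the @google-cloud/storage client; the bucket and object names are placeholders:

    const { Storage } = require('@google-cloud/storage');

    const storage = new Storage();
    const bucket = storage.bucket('my-bucket');            // hypothetical bucket name

    async function uploadInlineImage(localPath) {
      await bucket.upload(localPath, {
        destination: 'images/photo.png',
        metadata: {
          contentType: 'image/png',            // tells browsers to display it inline
          // contentDisposition: 'attachment', // uncomment to force a "Save as" download
        },
      });
    }

Either way, every view still transfers the bytes out of the bucket, so it is billed the same.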

NodeJS, how to handle image uploading with MongoDB?

I would like to know the best way to handle image uploading and saving the reference to the database. What I'm mostly interested in is the order in which you do the process.
Should you upload the images first from the front-end (say, to Cloudinary), and then call the API with the resulting links and save them to the database?
Or should you upload the images to the server first, upload them to the cloud from the back-end, and save the reference afterwards?
Or should you do the image uploading after you save the record in the database, and then update the record once the images have been uploaded?
It really depends on the resources, the timeline, and the number of images you need to upload daily.
Basically, if you have very few images to upload, you can upload the image to your server and then push it to whatever cloud storage (S3, Cloudinary, ...) you are using. This is very easy to implement (you can find code snippets all over the internet), and you can securely keep your secret keys/credentials for the cloud platform on the server side.
But in my opinion the best way of doing this is something like the following. I am taking user registration as an example:
Make a server call to get temporary credentials for uploading files to the cloud (generally all providers offer this, e.g. STS/signed URLs in AWS); see the sketch at the end of this answer.
The user fills in the form and selects the image on the client side. When the user clicks the submit button, make one call to save the user in the database and start the upload with those credentials. If possible, keep a predictable upload path, like /users/:userId or something similar; this depends heavily on your use case.
When the upload finishes, make a server call for acknowledgment and store a flag in the database.
Now the advantages of this approach are:
You completely offload your server from handling file operations, which are quite heavy and I/O-blocking, and you distribute that load across all clients.
If you want to post-process the files after upload, you can easily integrate this with serverless platforms and offload that work as well.
You can easily provide a retry mechanism to your users in case a file upload fails; they won't need to refill the form, just upload the image/file again.
You don't need to expose the upload URL directly to the client, since you are using temporary credentials.
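A minimal sketch of step 1 (handing the client a short-lived upload credential) using an S3 presigned URL and AWS SDK v3; the route, bucket name and the /users/:userId key layout are illustrative assumptions:

    const express = require('express');
    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
    const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

    const app = express();
    const s3 = new S3Client({ region: 'us-east-1' });

    app.get('/upload-url/:userId', async (req, res) => {
      const command = new PutObjectCommand({
        Bucket: 'my-user-images',                          // hypothetical bucket
        Key: `users/${req.params.userId}/avatar.jpg`,      // predictable per-user path
        ContentType: 'image/jpeg',
      });
      // URL is valid for 5 minutes; the client PUTs the file straight to S3
      const url = await getSignedUrl(s3, command, { expiresIn: 300 });
      res.json({ url });
    });

The client then uploads directly to that URL, and your server only records the acknowledgment, which is what keeps the file traffic off your Node process.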
If images are highly significant in your app, then ideally you should not complete the transaction until the image is saved. The approach should be: create the object in your code that you will eventually insert into MongoDB, start the image upload to the cloud, then add the resulting link to that object, and finally insert the object into MongoDB in one go. Do not make repeated calls. If anything fails before that, raise an error and catch the exception.
There can be many answers here.
If you are working with big files, greater than 16 MB, go with GridFS and multer
(the file is split into chunks and those chunks are saved in MongoDB).
If your files are actually less than 16 MB, you can convert the JPEG/PNG image into a form that can be saved directly in a MongoDB document, which is an easy alternative to GridFS (see the sketch below).
Please check this GitHub repo for more details.
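For the under-16 MB case, here is a minimal sketch of storing the raw image bytes directly in a MongoDB document (the BSON document size limit is 16 MB); the connection string and `images` collection are assumptions:

    const { MongoClient, Binary } = require('mongodb');
    const fs = require('fs/promises');

    async function saveSmallImage(localPath) {
      const client = await MongoClient.connect('mongodb://localhost:27017');
      const images = client.db('app').collection('images');
      const bytes = await fs.readFile(localPath);          // must be < 16 MB
      await images.insertOne({
        name: localPath,
        contentType: 'image/png',
        data: new Binary(bytes),                           // raw bytes inside the document
      });
      await client.close();
    }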

Uploading user images to s3 and generating thumbnails from node

I'm currently considering developing a Meteor node.js app, but am struggling with how best to handle uploading of user images. In particular, I want to create a photography website that will allow the photographer to upload images in an 'admin' section, and these images will then be displayed on the website. I need to create a thumbnail of these images, and save the respective URLs to the database. I'm struggling with how to best accomplish this in meteor.
Is my best bet to use something like s3 combined with an AWS process for generating thumbnails?
Or should I save and host the images directly in the Meteor/node session?
Or should I scrap Meteor and use something like Express.js for this project?
Why don't you just use something like Filepicker.io to handle uploading and hosting the images, and simply store the image's unique URL (given to you by Filepicker in the callback)?
Thumbnails can also be generated dynamically by Filepicker (using simple URL modifications).
Cloudinary is a nicer alternative to Filepicker when it comes to images, but the integration process will be messier.
I would store the images on the filesystem, not in a database. If you have a unique id, you can use it as part of the URL, for example the id of the item the image belongs to. It might look like this:
./uploads/img-<id>-<size>.jpg
You can write to disk and resize if necessary with node-imagemagick, and your CDN should just pull these images from your origin as needed. I'm not exactly sure how that part would work in terms of including the image URL in the HTML.
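A minimal sketch of the resize step, using sharp instead of the node-imagemagick module mentioned above (a deliberate swap); the file layout follows the ./uploads/img-<id>-<size>.jpg pattern and the sizes are assumptions:

    const sharp = require('sharp');

    async function writeVariants(id, sourcePath) {
      // Full-size copy plus a 200px-wide thumbnail, written to the local
      // uploads folder where the CDN (or a static route) can pick them up.
      await sharp(sourcePath).jpeg().toFile(`./uploads/img-${id}-full.jpg`);
      await sharp(sourcePath).resize({ width: 200 }).jpeg().toFile(`./uploads/img-${id}-thumb.jpg`);
    }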

Amazon S3 Browser Based Upload - Prevent Overwrites

We are using Amazon S3 for images on our website, and users upload the images/files directly to S3 through our website. In our policy file we ensure the key "begins-with" "upload/". Anyone can see the full URLs of these images, since they are publicly readable once uploaded. Could a hacker take the policy data from the JavaScript plus the URL of an image and overwrite that image with their own data? I see no way to prevent overwrites after the initial upload. The only solution I've seen is to copy/rename the file into a folder that is not publicly writeable, but that requires downloading the image and then uploading it again to S3 (since Amazon can't really rename in place).
If I understood you correctly, the images are uploaded to Amazon S3 via your server application.
So only your application has S3 write permission, and clients can upload images only through your application (which then stores them in S3). A hacker can only force your application to upload an image with the same name and thereby overwrite the original one.
How do you handle the situation when a user uploads an image with a name that already exists in your S3 storage?
Consider the following sequence:
The first user uploads an image some-name.jpg.
Your app stores that image in S3 under the name upload-some-name.jpg.
A second user uploads another image called some-name.jpg.
Will your application overwrite the original one stored in S3?
I think the question implies the content goes directly through to S3 from the browser, using a policy file supplied by the server. If that policy file has set an expiration, for example, one day in the future, then the policy becomes invalid after that. Additionally, you can set a starts-with condition on the writeable path.
So the only way a hacker could use your policy files to maliciously overwrite files is to get a new policy file, and then overwrite files only in the path specified. But by that point, you will have had the chance to refuse to provide the policy file, since I assume that is something that happens after authenticating your users.
So in short, I don't see a danger here if you are handing out properly constructed policy files and authenticating users before doing so. No need for making copies of stuff.
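A minimal sketch of such a policy with AWS SDK v3's presigned-POST helper; the bucket name and size cap are assumptions, and the upload/ prefix mirrors the question. Hand the policy out only after authenticating the user:

    const { S3Client } = require('@aws-sdk/client-s3');
    const { createPresignedPost } = require('@aws-sdk/s3-presigned-post');
    const { randomUUID } = require('crypto');

    const s3 = new S3Client({ region: 'us-east-1' });

    async function getUploadPolicy(userId) {
      const key = `upload/${userId}/${randomUUID()}.jpg`;
      // Returns { url, fields } for the browser to use in a form POST to S3
      return createPresignedPost(s3, {
        Bucket: 'my-site-images',                          // hypothetical bucket
        Key: key,
        Conditions: [
          ['starts-with', '$key', `upload/${userId}/`],    // restrict the writeable path
          ['content-length-range', 0, 5 * 1024 * 1024],    // cap uploads at 5 MB
        ],
        Expires: 3600,                                     // policy expires after one hour
      });
    }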
Actually, S3 does have a copy feature that works great:
Copying Amazon S3 Objects
But, as amra stated above, doubling your storage by copying sounds inefficient.
Maybe it'll be better to give the object some kind of unique id, like a GUID, and set additional user metadata beginning with "x-amz-meta-" for extra information about the object, like the user that uploaded it, a display name, etc.
On the other hand, you could always check whether the key already exists and return an error if it does.
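A minimal sketch of those last two ideas (a GUID key with x-amz-meta-* user metadata, plus an existence check before writing), using AWS SDK v3; the bucket and metadata field names are assumptions:

    const { S3Client, PutObjectCommand, HeadObjectCommand } = require('@aws-sdk/client-s3');
    const { randomUUID } = require('crypto');

    const s3 = new S3Client({ region: 'us-east-1' });
    const Bucket = 'my-site-images';                        // hypothetical bucket

    async function keyExists(Key) {
      try {
        await s3.send(new HeadObjectCommand({ Bucket, Key }));
        return true;
      } catch (err) {
        if (err.name === 'NotFound') return false;          // 404: the key is free
        throw err;                                           // anything else is a real failure
      }
    }

    async function putOnce(body, uploadedBy) {
      const Key = `upload/${randomUUID()}.jpg`;              // GUID key, effectively collision-free
      if (await keyExists(Key)) throw new Error(`${Key} already exists`);
      await s3.send(new PutObjectCommand({
        Bucket,
        Key,
        Body: body,
        Metadata: { 'uploaded-by': uploadedBy },             // stored as x-amz-meta-uploaded-by
      }));
      return Key;
    }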