S3 storage without versioning returns cached images - node.js

I have a strange problem with my S3 storage.
I'm developing a web application that needs image storage.
In the application we need to make many updates to the images and display them live, so we disabled version management on the bucket.
We store the URLs in a Postgres DB and display them on the website.
But sometimes (I can't tell under which conditions, because it seems to be random) the app displays the old version of an image.
I tried setting some things in the metadata of my update request, but it doesn't seem to do anything.
We also tried adding a ?versionId=null parameter at the end of the URL, but we keep having issues.
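For reference, a version of the update call with explicit cache metadata might look like this; a minimal sketch with the AWS SDK v3, where the bucket, key, and the no-cache value are placeholder assumptions rather than our actual code:

const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "eu-west-1" }); // placeholder region

// Overwrite the image and tell browsers/CDNs not to reuse a stale copy.
async function updateImage(newImageBuffer) {
  await s3.send(new PutObjectCommand({
    Bucket: "my-image-bucket",    // placeholder
    Key: "images/profile-42.jpg", // placeholder
    Body: newImageBuffer,
    ContentType: "image/jpeg",
    CacheControl: "no-cache",     // forces revalidation on every fetch
  }));
  // Read-side alternative: cache-bust by storing a changing URL in Postgres,
  // e.g. url + "?v=" + Date.now(), after each update.
}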
Does anyone have an idea about this?

Related

Hosting images when deployed

I'm a beginner using MERN. I've successfully deployed the frontend and backend separately on Render for testing purposes. When a user signs up they can choose a profile picture; on localhost this works fine, and the picture is successfully added to MongoDB and shown in the web application. However, when deployed, I get an error that I can't 'GET' the image from the specific path.
I'm trying to work out why this is the case; could someone explain in ELI5 terms? Also, would I need to host my images somewhere such as Cloudinary? Thank you.
I tried to upload images when deployed, but they are not being fetched.
You should consider using Cloudinary or an S3 bucket to store images in production. The flow, sketched in code below:
First, call your /upload API route.
Generate a UUID + extension (.png or .jpg).
Upload your image file to Cloudinary (for example) under that key as its name.
Then save this key in your database so you can access the image later on.
If you go with an S3 bucket, you should always configure a CDN between your client and your bucket to avoid billing surprises.
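A minimal sketch of that route, assuming an Express server with multer for the multipart body and the AWS SDK v3 for storage (the bucket name and the saveKeyToDb helper are hypothetical):

const express = require("express");
const multer = require("multer");
const { randomUUID } = require("crypto");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const app = express();
const upload = multer({ storage: multer.memoryStorage() });
const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

app.post("/upload", upload.single("image"), async (req, res) => {
  // 1. Generate a UUID + extension as the storage key.
  const ext = req.file.mimetype === "image/png" ? ".png" : ".jpg";
  const key = randomUUID() + ext;

  // 2. Upload the file under that key (S3 here; Cloudinary works the same way).
  await s3.send(new PutObjectCommand({
    Bucket: "my-image-bucket", // placeholder
    Key: key,
    Body: req.file.buffer,
    ContentType: req.file.mimetype,
  }));

  // 3. Save the key in your database to look the image up later, e.g.:
  // await saveKeyToDb(req.body.userId, key); // hypothetical helper

  res.json({ key });
});

app.listen(3000);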

Accessing private S3 bucket files

I have gone through all the existing questions, but none of them seem to fulfill my requirements.
I have a private S3 bucket with 10,000 files, accessed privately via a Node.js server to display at least 25 per page in my Angular application.
I found multiple solutions, but they seem inefficient to me:
Generate pre-signed URLs for the files.
Pull the images from S3 via the Node.js API.
To display 10 or more images I need to generate signed URLs every time, which is a time-consuming process. And pulling an image via the API with the s3.getObject method gives me Buffer data; converting it to Base64 is hard to handle on the client side, and fetching each image consumes time as well.
Are there any solutions out there that I'm not aware of, and how can this be implemented without affecting the user experience?
PS: My bucket is private, not public.
Have you tried signed cookies?
I think this may help: put AWS CloudFront in front of the bucket and sign the cookie once to let the client access any file(s) directly after that.
There is some reference material on this in the CloudFront documentation.
Also, CloudFront will give you more benefits, such as faster access, SSL certificates attached to your S3 buckets, and more.
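A minimal sketch of issuing the cookies, assuming an Express server, a CloudFront distribution in front of the bucket, and the @aws-sdk/cloudfront-signer package (the distribution domain, key pair ID, and private key are placeholders):

const express = require("express");
const { getSignedCookies } = require("@aws-sdk/cloudfront-signer");

const app = express();

app.get("/grant-access", (req, res) => {
  // Custom policy with a wildcard resource: one set of cookies covers
  // every image behind the distribution for the next hour.
  const policy = JSON.stringify({
    Statement: [{
      Resource: "https://d1234abcd.cloudfront.net/images/*", // placeholder domain
      Condition: {
        DateLessThan: { "AWS:EpochTime": Math.floor(Date.now() / 1000) + 3600 },
      },
    }],
  });

  const cookies = getSignedCookies({
    policy,
    keyPairId: process.env.CF_KEY_PAIR_ID,  // placeholder env vars
    privateKey: process.env.CF_PRIVATE_KEY,
  });

  // The browser then sends these cookies with every <img> request to
  // CloudFront, so no per-file signed URLs are needed.
  for (const [name, value] of Object.entries(cookies)) {
    res.cookie(name, value, { secure: true, httpOnly: true });
  }
  res.sendStatus(204);
});

app.listen(3000);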
"Sorry for my English"

NodeJS, how to handle image uploading with MongoDB?

I would like to know the best way to handle image uploads and saving the reference to the database. What I'm mostly interested in is what order you do the process in.
Should you upload the images first from the front-end (say, to Cloudinary), then call the API with the resulting image links and save them to the database?
Or should you send the images to your server first, upload them to the cloud from the back-end, and save the reference afterwards?
OR, should you do the image upload after you save the record in the database, and then update the record once the images are uploaded?
It really depends on your resources, timeline, and the number of images you need to upload daily.
If you have very few images to upload, you can upload each image to your server and then push it to whatever cloud storage (S3, Cloudinary, ...) you are using. This is very easy to implement (you can find code snippets all over the internet), and you can securely keep your secret keys/credentials for the cloud platform on the server side.
But in my opinion the best way of doing this is something like the following; I'll take user registration as an example (a sketch follows the list).
Make a server call to get temporary credentials for uploading files to the cloud (generally all providers offer this, e.g. STS/signed URLs on AWS).
The user fills in the form and selects the image on the client side. When the user clicks the submit button, make one call to save the user in the database and start the upload with the credentials. If possible, keep a predictable upload path, like /users/:userId for user uploads; this depends heavily on your use case.
When the upload finishes, make a server call for acknowledgment and store a flag in the database.
The advantages of this approach are:
You completely offload your server from handling file operations, which are heavy and I/O-blocking, and distribute that load across your clients.
If you want to post-process the files after upload, you can easily integrate this with a serverless platform and offload that work too.
You can easily provide a retry mechanism to your users in case a file upload fails; they won't need to refill the data, just upload the image/file again.
You don't need to expose the URL directly to the client for file uploads, since you are using temporary credentials.
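A minimal sketch of the first step, assuming an Express server and the AWS SDK v3's pre-signed URL flow (the bucket name, region, and the way the user is identified are placeholders):

const express = require("express");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const app = express();
const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Step 1: hand the client a short-lived upload URL with a predictable path.
app.get("/upload-url", async (req, res) => {
  const userId = req.query.userId; // however you identify the user
  const key = `users/${userId}/avatar.jpg`; // predictable per-user path
  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: "my-bucket", Key: key, ContentType: "image/jpeg" }),
    { expiresIn: 300 } // temporary credential, valid for 5 minutes
  );
  res.json({ url, key });
});

// Steps 2-3 happen on the client: PUT the file to `url`, then call an
// acknowledgment endpoint so the server can flag the upload as finished.

app.listen(3000);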
If the significance of the images in your app is high, then ideally you should not complete the transaction until the image is saved. The approach should be: create the object in your code that you will eventually insert into MongoDB, start the upload of the image to the cloud, and then add the link to this object. Finally, insert this object into MongoDB in one go. Do not make repeated calls. If anything fails before that, raise an error and catch the exception.
There can be many answers.
If you are working with big files greater than 16 MB (the MongoDB document size limit), go with GridFS and multer
(changing the images to a different format and saving them to MongoDB).
If your files are actually less than 16 MB, try using a converter that changes a JPEG/PNG image into a format that can be saved in MongoDB; you can see this as an easy alternative to GridFS.
Please check this GitHub repo for more details.
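A minimal sketch of the GridFS route, assuming an Express app, the official mongodb driver, and multer's memory storage (connection URI and database name are placeholders):

const express = require("express");
const multer = require("multer");
const { MongoClient, GridFSBucket } = require("mongodb");

const app = express();
const upload = multer({ storage: multer.memoryStorage() });
const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI

app.post("/upload", upload.single("image"), (req, res) => {
  const db = client.db("myapp"); // placeholder database name
  const bucket = new GridFSBucket(db, { bucketName: "images" });

  // GridFS splits the buffer into 255 kB chunks, so the 16 MB document
  // size limit no longer applies.
  const stream = bucket.openUploadStream(req.file.originalname, {
    metadata: { contentType: req.file.mimetype },
  });
  stream.end(req.file.buffer);
  stream.on("finish", () => res.json({ fileId: stream.id }));
  stream.on("error", (err) => res.status(500).json({ error: err.message }));
});

client.connect().then(() => app.listen(3000));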

MEAN Stack - Where should I store images meant for particular users?

I am using the MEAN stack for my project. I read online that it is not advisable to store images in the database itself, so I am not doing that.
To solve this, I have set up a local server (using Express) and I am serving my static image files from there.
Now I am able to use an image through its URL, for example:
http://localhost:4200/images/a.jpg
I am planning to eventually host this Express app using a service like Heroku.
On my main website, I handle authentication (sign in and sign up) using MongoDB and Node.js.
I want the images to be shown according to the specific logged-in user.
Should I store my images in a folder named after that user's username, so that I can generate the URL string accordingly and access the image via:
http://localhost:4200/user1/a.jpg
Is the flow of my application correct? Is this how I should be accessing the images for particular users?
I read somewhere that there would be a security issue, because anyone who has the URL of an image can access it. I am not much concerned with security now, as this is a small project not meant for many users, but any suggestions for an approach that avoids this security issue would be helpful.
I am new to this and any advice would be helpful.
Thanks in advance.
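One way to avoid the open-URL issue mentioned above is to serve the images through a route that checks the logged-in user instead of exposing them as public static files. A minimal Express sketch, assuming session-based auth that populates req.session.username and a per-user folder under images/:

const express = require("express");
const session = require("express-session");
const path = require("path");

const app = express();
app.use(session({ secret: "change-me", resave: false, saveUninitialized: false }));

// Only the owner can fetch files under their own folder.
app.get("/images/:username/:file", (req, res) => {
  if (req.session.username !== req.params.username) {
    return res.sendStatus(403); // logged-in user doesn't own this folder
  }
  // path.basename strips any ../ tricks from the parameters.
  const file = path.join(__dirname, "images",
    path.basename(req.params.username), path.basename(req.params.file));
  res.sendFile(file);
});

app.listen(4200);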
You could use Firebase for this.
It's super easy.
There you can just create a folder with any name and save all the images in it.
In the database you then save the Firebase-generated link, which can easily be mapped to a user using a user_id or something like it.
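A minimal sketch with the firebase-admin SDK, assuming a service account is configured and the bucket name is a placeholder:

const admin = require("firebase-admin");

admin.initializeApp(); // assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account

async function saveUserImage(userId, localPath) {
  const bucket = admin.storage().bucket("my-project.appspot.com"); // placeholder bucket
  const dest = `user-images/${userId}/${Date.now()}.jpg`;
  await bucket.upload(localPath, { destination: dest });

  // A long-lived read link you can store next to the user_id in MongoDB.
  const [url] = await bucket.file(dest).getSignedUrl({
    action: "read",
    expires: "2030-01-01",
  });
  return url;
}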

Amazon S3 Browser Based Upload - Prevent Overwrites

We are using Amazon S3 for images on our website, and users upload the images/files directly to S3 through our website. In our policy file we ensure the key "begins-with" "upload/". Anyone is able to see the full URLs of these images, since they are publicly readable once uploaded. Could a hacker use the policy data in the JavaScript and the URL of an image to overwrite these images with their own data? I see no way to prevent overwrites after uploading once. The only solution I've seen is to copy/rename the file to a folder that is not publicly writable, but that requires downloading the image and then uploading it again to S3 (since Amazon can't really rename in place).
If I understood you correctly, the images are uploaded to Amazon S3 via your server application.
So only your application has S3 write permission. Clients can upload images only through your application (which stores them on S3). A hacker can only force your application to upload an image with the same name and overwrite the original one.
How do you handle the situation when a user uploads an image with a name that already exists in your S3 storage?
Consider the following sequence of actions:
The first user uploads an image some-name.jpg.
Your app stores that image in S3 under the name upload-some-name.jpg.
A second user uploads another image some-name.jpg.
Will your application overwrite the original one stored in S3?
I think the question implies the content goes directly to S3 from the browser, using a policy file supplied by the server. If that policy file has an expiration set, for example one day in the future, then the policy becomes invalid after that. Additionally, you can set a starts-with condition on the writable path.
So the only way a hacker could use your policy files to maliciously overwrite files is to get a new policy file and then overwrite files only in the specified path. But by that point, you will have had the chance to refuse to provide the policy file, since I assume you hand it out only after authenticating your users.
So in short, I don't see a danger here if you are handing out properly constructed policy files and authenticating users before doing so. No need for making copies of stuff.
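A minimal sketch of handing out such a policy, assuming the AWS SDK v3's @aws-sdk/s3-presigned-post package (the bucket name, size cap, and expiry are placeholder choices):

const { S3Client } = require("@aws-sdk/client-s3");
const { createPresignedPost } = require("@aws-sdk/s3-presigned-post");

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Only hand this out after authenticating the user.
async function uploadPolicyFor(userId) {
  return createPresignedPost(s3, {
    Bucket: "my-bucket", // placeholder
    Key: "upload/" + userId + "/${filename}", // S3 substitutes ${filename} on POST
    Conditions: [
      ["starts-with", "$key", "upload/" + userId + "/"], // writable path scoped per user
      ["content-length-range", 0, 5 * 1024 * 1024],      // cap uploads at 5 MB
    ],
    Expires: 3600, // policy becomes invalid after one hour
  });
}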
Actually, S3 does have a copy feature that works great:
Copying Amazon S3 Objects
But as amra stated above, doubling your space by copying sounds inefficient.
Maybe it would be better to give the object some kind of unique ID, like a GUID, and set additional user metadata beginning with "x-amz-meta-" for extra information about the object, such as the user that uploaded it, a display name, etc.
On the other hand, you could always check whether the key already exists and report an error if it does; a sketch of this follows.
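A sketch of that last approach with the AWS SDK v3 (the bucket name is a placeholder, and the HeadObject check is advisory only, since two simultaneous uploads could still race):

const { S3Client, HeadObjectCommand, PutObjectCommand } = require("@aws-sdk/client-s3");
const { randomUUID } = require("crypto");

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

async function keyExists(bucket, key) {
  try {
    await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: key }));
    return true;
  } catch (err) {
    if (err.name === "NotFound") return false; // the key is free
    throw err;
  }
}

async function uploadOnce(body, uploader) {
  // A GUID key makes accidental collisions practically impossible.
  const key = "upload/" + randomUUID() + ".jpg";
  if (await keyExists("my-bucket", key)) {
    throw new Error("key already exists: " + key);
  }
  await s3.send(new PutObjectCommand({
    Bucket: "my-bucket", // placeholder
    Key: key,
    Body: body,
    Metadata: { uploader }, // surfaces as an x-amz-meta-uploader header
  }));
  return key;
}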
