S3 Presigned URL: How to know about a high-traffic situation? (Node.js)

I have a scenario where my images are uploaded to an S3 bucket using pre-signed URLs.
In a high-load scenario,
I want to let my user know about the delay and ask them to wait.
After the expiry period we get HTTP status code 403, but I just want to inform them about the high-traffic situation.
My questions are:
How do I find out the time taken for an image to be uploaded via an S3 pre-signed URL?
What status codes can we get from a PUT request to an S3 pre-signed URL? (If you can point me to documentation for this: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getSignedUrlPromise-property has no info about the different status codes returned by AWS.)
How can I know about a high-traffic situation on an S3 bucket?

How do I find out the time taken for an image to be uploaded via an S3 pre-signed URL?
This shouldn't matter. When the S3 pre-signed URL expires, it will not stop uploads already in progress. The expiry date only matters for initiating the upload. If you still want to measure upload duration, you can time the PUT request on the client, as in the sketch below.
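A minimal client-side sketch of timing the upload; `presignedUrl` is assumed to have been fetched from your backend already, and `file` is a File/Blob from an input element. The names and the 10-second threshold are illustrative, not from the original post:

```js
// Time a PUT to an S3 pre-signed URL and inspect the status code.
async function uploadWithTiming(presignedUrl, file) {
  const start = Date.now();
  const res = await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
  const elapsedMs = Date.now() - start;

  // 200 = success; 403 typically means the signature expired or is invalid.
  console.log(`S3 responded with ${res.status} after ${elapsedMs} ms`);

  if (elapsedMs > 10000) {
    // Heuristic only: a slow upload usually reflects the client's own
    // connection speed, not S3-side "traffic" (see the answer above).
    console.warn('Upload was slow; consider showing a wait message.');
  }
  return res;
}
```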
How can I know about a high-traffic situation on an S3 bucket?
You can't check this: it is AWS's internal networking backend, you have no access to it, and AWS does not publish such information. The only real variable is the speed of your clients' internet connections.
What status codes can we get from a PUT request to an S3 pre-signed URL?
I'm not sure what you mean here.
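For completeness, here is a sketch of generating the pre-signed PUT URL on the Node.js side with the AWS SDK v2 method the question links to (`getSignedUrlPromise`); the bucket name, region, and content type are placeholders:

```js
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'us-east-1' }); // assumed region

async function createUploadUrl(key) {
  // Expires controls how long the URL can be used to *start* an upload;
  // an upload already in progress is not cut off at this time.
  return s3.getSignedUrlPromise('putObject', {
    Bucket: 'my-image-bucket', // placeholder bucket name
    Key: key,
    Expires: 300,              // seconds
    ContentType: 'image/jpeg',
  });
}
```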

Related

AWS S3 Bucket Presigned url issue

I have written an API that reads user posts after a token is passed in the header. The API returns many posts at a time, with nested attachment names for images, videos, etc. Currently, all attachments are stored in a server folder directory. Attachments are served by another API that accepts an attachment name as a parameter and returns the attachment URL.
Now I want to move to an AWS S3 bucket, keeping the same concept but with presigned URLs.
The API is used by a Flutter app.
I created an API that accepts a user auth token and returns an upload presigned URL for the S3 bucket.
For displaying attachments I am considering two options:
Create another API that accepts an attachment name (object key name) and returns a presigned URL for that attachment.
Have the post API return the JSON data after replacing every attachment name with a presigned URL. But this will take too long for nested JSON data when looping.
I am new to the AWS S3 bucket. Please guide me on the best way to handle this.
How do Facebook, Twitter, and Instagram handle private files?
The best option I see is returning the post data with the pre-signed URLs already generated.
Having a separate API for generating the presigned URL would mean:
the system needs to authorize the input anyway (whether the user is allowed to access the particular object)
the user has to wait until the signed links are generated anyway, just with an additional call
this will take too long for nested json data by looping
Not really: generating a presigned URL makes no backend/S3 calls, it's just a bunch of HMAC computations. And, as already mentioned, these need to be done regardless of which option is chosen. A sketch of doing it inline is below.
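A minimal sketch of signing every attachment while building the response, assuming AWS SDK v2 and a `posts` array whose items carry an `attachments` list of S3 object keys; the bucket name and data shape are illustrative:

```js
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Replace each attachment key with a short-lived presigned GET URL.
// No network calls happen here: signing is local HMAC computation.
async function signPostAttachments(posts) {
  return Promise.all(posts.map(async (post) => ({
    ...post,
    attachments: await Promise.all(post.attachments.map((key) =>
      s3.getSignedUrlPromise('getObject', {
        Bucket: 'my-attachments-bucket', // placeholder
        Key: key,
        Expires: 600, // seconds
      })
    )),
  })));
}
```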

How to give a file URL hosted in S3 as a document remoteUrl for the DocuSign remote signing API?

I'm trying to pass my S3 file's URL as a document remoteUrl in the remote signing API, as it is easier this way than downloading the file to my server and then passing the document to the DocuSign API. When I made the bucket public, I was able to pass the object URL as the document remoteUrl and DocuSign picked it up and sent it for signature, but my use case is not a public bucket.
The S3 bucket only allows whitelisted domains, so I added "https://account-d.docusign.com/" and "https://account.docusign.com/" as allowed domains, but even then I am facing this issue:
ERROR {errorCode: 'INVALID_REQUEST_PARAMETER', message: "The request contained at least one invalid parameter. An error occurred while downloading the data for document with id=1. Please ensure that the 'remoteUrl' parameter is correct."}
Are the DocuSign allowed domains correct, or am I missing something?
OK, let's clear up some confusion here.
First, DocuSign supports various public cloud providers for cloud storage, from which files can be pulled into an envelope sent for signature.
That list doesn't include Amazon S3; it is focused on end-user/consumer cloud storage and requires that you connect your DocuSign account to the cloud provider for authentication.
So the remoteUrl property is not relevant to your scenario.
Instead, you can build an integration that fetches the file from S3 using the AWS APIs and then sends it to DocuSign from wherever your app is hosted (AWS would make that easy). If you do that, there's nothing different about sending a file obtained from Amazon S3 versus a file stored on-prem. A rough sketch is below.
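A rough sketch of that approach, assuming Node 18+ (for global fetch), AWS SDK v2, and that OAuth token/account-ID acquisition already happens elsewhere; the subject line, file extension, and demo endpoint are assumptions for illustration:

```js
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Fetch the file from S3 and embed it in the envelope as base64,
// instead of using remoteUrl.
async function sendS3FileForSignature(bucket, key, accessToken, accountId) {
  const obj = await s3.getObject({ Bucket: bucket, Key: key }).promise();

  const envelopeDefinition = {
    emailSubject: 'Please sign this document', // illustrative
    status: 'sent',
    documents: [{
      documentBase64: obj.Body.toString('base64'),
      name: key,
      fileExtension: 'pdf', // assumed file type
      documentId: '1',
    }],
    // recipients omitted for brevity
  };

  // Developer (demo) environment endpoint; production differs.
  const res = await fetch(
    `https://demo.docusign.net/restapi/v2.1/accounts/${accountId}/envelopes`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(envelopeDefinition),
    }
  );
  return res.json();
}
```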

How to check expiration time of a cached Pre-Signed URL accessKeyId (the signer, not the signature)?

I'm working with cached pre-signed URLs for S3 downloads, pretty simple. We have many URLs per asset and many assets per request, so to avoid generating so many new pre-signed URLs on every request we cache the pre-signed URLs while their expiration is still far off. This works fine most of the time, but occasionally we receive a 400 Bad Request with a "Token expired" error message.
From what I've learned:
S3 pre-signed URLs also carry an accessKeyId, which identifies the signer.
The signer's key also has its own expiration time.
The pre-signed URL can be rejected if the signer's key has expired, even if the expiration of the URL itself has not been reached.
So my question is: how can I check the expiration time of an accessKeyId, given that my server may already have refreshed its own key (plus the uncertainty of different server instances having the same keys or not) and I no longer have access to AWS.config.credentials.expirationTime?
Unfortunately, I think you're out of luck when looking at the URL itself. I'm sure the expiration is embedded in the X-Amz-Security-Token, but the format of that token is not published (although if you Base64-decode it you'll see some interesting bits of readable text and a lot of binary data).
Instead, I recommend that you ensure the expiration date of the signed URL is the same as the expiration date of the session that signs it.
The way you do that is to assume a role on the server and use the credentials from that role assignment to create the signed URL, as sketched below.
The assumed role just needs the relevant S3 permission on the bucket in question (s3:GetObject for these downloads). The assumed-role session will have whatever duration you request, starting from the time that you assume the role (unlike your Lambda/EC2 instance/whatever, which only regenerates credentials when they expire).
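A sketch of that approach with AWS SDK v2; the role ARN is a placeholder, and the point is that DurationSeconds and Expires are the same value:

```js
const AWS = require('aws-sdk');

// Assume a role and sign with its credentials, so the URL's Expires
// matches the signing session's lifetime exactly.
async function signedUrlWithMatchingExpiry(bucket, key, seconds = 900) {
  const sts = new AWS.STS();
  const { Credentials } = await sts.assumeRole({
    RoleArn: 'arn:aws:iam::123456789012:role/s3-url-signer', // placeholder
    RoleSessionName: 'presign-session',
    DurationSeconds: seconds, // session lives exactly this long
  }).promise();

  const s3 = new AWS.S3({
    accessKeyId: Credentials.AccessKeyId,
    secretAccessKey: Credentials.SecretAccessKey,
    sessionToken: Credentials.SessionToken,
  });

  return s3.getSignedUrlPromise('getObject', {
    Bucket: bucket,
    Key: key,
    Expires: seconds, // same duration as the session
  });
}
```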

NodeJS - Protect image URL so it is visible only to authorized users

I'm currently building an app where users can upload images. The images they upload should only be visible to them, so I'm wondering how to achieve this given that each image is available through a URL.
What I was thinking is that, in order to get the picture, the user would make a REST API request, and the API would return the image data if the user has the correct permission.
The frontend of my app is in React and the backend (REST API) is in Node.js.
Edit: the images are stored on AWS S3 (this can change if needed).
Thanks!
The best option to allow only authorized users to fetch an image is an S3 presigned URL. There are plenty of articles and Node.js code examples describing how to implement it; if you code in another language, just search for "AWS S3 Presigned URL" and you will find equivalents. A minimal sketch is below.
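A minimal Express sketch, assuming SDK v2; `requireAuth` and `ownsImage` are hypothetical stand-ins for your own auth middleware and ownership check, and the bucket name is a placeholder:

```js
const express = require('express');
const AWS = require('aws-sdk');

const app = express();
const s3 = new AWS.S3();

// Return a short-lived presigned URL only after checking ownership.
app.get('/images/:key/url', requireAuth, async (req, res) => {
  if (!ownsImage(req.user, req.params.key)) {
    return res.status(403).json({ error: 'Forbidden' });
  }
  const url = await s3.getSignedUrlPromise('getObject', {
    Bucket: 'user-images-bucket', // placeholder
    Key: req.params.key,
    Expires: 60, // short lifetime limits link sharing
  });
  res.json({ url });
});
```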

Uploading a file to Amazon S3, updating a database, and returning a response

I see how to upload to Amazon S3 (from the client) and I see how to make requests to update a DynamoDB table (from the client), but how do I upload a file to S3 such that I get a response back with "business logic" information?
For instance, I want to upload a photo to an uploadPhoto endpoint that returns the photoID of the photo in my DynamoDB model.
Yes, I can upload a file to S3 and have it 'notify' Lambda, but then it's too late: S3 has already returned a response, and Lambda can't send another response back to the client.
It's clear I shouldn't upload a file to Lambda itself.
There is API Gateway, but it's not clear that uploading files through API Gateway is a 'good idea'...
We just went through a similar scenario and unfortunately I think it comes down to two choices:
Use multiple requests - the client calls Lambda to get a presigned URL, the client uploads the file directly to S3, then the client calls Lambda again to report that the file has been uploaded and gets a response with all the business logic.
One request - create a service (likely on EC2) that sits in front of S3, so your client uploads directly to your service, your service then uploads to S3, does the business logic, and sends the response back to the client. Definitely less work on the client, but you get charged for twice as much bandwidth because the file is uploaded twice.
We implemented #1 and it wasn't too hard (see the sketch below). In our case the client is an Angular app, so to the user it looks like one request, but behind the scenes the app is making several calls.
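A sketch of the server side of option #1, assuming SDK v2 and the uuid package; the bucket name, table name, and key layout are placeholders. Call 1 hands out the URL, and call 2 runs the business logic after the client's direct PUT to S3 succeeds:

```js
const AWS = require('aws-sdk');
const { v4: uuidv4 } = require('uuid');

const s3 = new AWS.S3();
const dynamo = new AWS.DynamoDB.DocumentClient();

// Call 1: hand out a presigned upload URL plus the future photoID.
async function getUploadUrl() {
  const photoID = uuidv4();
  const url = await s3.getSignedUrlPromise('putObject', {
    Bucket: 'photos-bucket', // placeholder
    Key: `photos/${photoID}`,
    Expires: 300,
  });
  return { photoID, url };
}

// Call 2: after the client's PUT succeeds, record the photo
// and return the business-logic payload.
async function confirmUpload(photoID, userId) {
  await dynamo.put({
    TableName: 'Photos', // placeholder table
    Item: { photoID, userId, uploadedAt: Date.now() },
  }).promise();
  return { photoID };
}
```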
