I have written an API that reads user posts once a token is passed in the header. The API returns many posts at a time, each with nested attachment names for images, videos, etc. Currently, all attachments are stored in a folder on the server. Attachments are served by another API, which accepts an attachment name as a parameter and returns the attachment URL.
Now I want to move to an AWS S3 bucket, keeping the same concept but using presigned URLs.
The API is consumed by a Flutter app.
I have created an API that accepts the user's auth token and returns an upload presigned URL for the S3 bucket.
For displaying attachments I am considering two options:
1. Create another API that accepts the attachment name (the S3 object key) and returns a presigned URL for that attachment.
2. Have the post API return its JSON data after replacing every attachment name with a presigned URL. But looping over the nested JSON data to do this seems like it would take too long.
I am new to AWS S3. Please advise on the best way to handle this.
How do Facebook, Twitter, and Instagram handle private files?
The best option I see is returning the post data with the pre-signed URLs already generated.
Having a separate API for generating the presigned URLs would mean:
- the system will need to authorize the input anyway (i.e., check whether the user is allowed to access the particular object)
- the user has to wait until the signed links are generated anyway, just with an additional round trip
As for "this will take too long for nested json data by looping": not really. Generating a presigned URL makes no backend or S3 calls; it's just a bunch of HMAC computations. And, as already mentioned, these need to be done regardless of which option is chosen.
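To illustrate, here is a minimal sketch (Node.js, AWS SDK v3) that signs every attachment while serializing the posts. The bucket name, the posts/attachments shape, and the 15-minute expiry are assumptions for illustration, not from the question.

```js
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const s3 = new S3Client({ region: "us-east-1" });

// Replace each attachment name with a presigned GET URL before returning the posts.
// Signing is pure local HMAC work, so this loop makes no network calls.
async function signAttachments(posts) {
  return Promise.all(
    posts.map(async (post) => ({
      ...post,
      attachments: await Promise.all(
        post.attachments.map(async (att) => ({
          ...att,
          url: await getSignedUrl(
            s3,
            new GetObjectCommand({ Bucket: "my-app-attachments", Key: att.name }), // assumed bucket
            { expiresIn: 900 } // 15 minutes
          ),
        }))
      ),
    }))
  );
}
```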
I'm trying to upload the signed document into S3 once it has been signed by all the signers, using Node.js. I think webhooks are the best way to go, but can we actually get the signed document and upload it to S3 while using webhooks? Please suggest methods to retrieve the signed document and upload it.
Yes, you can get the signed document bytes as part of the webhook call (DocuSign Connect), or you can make an eSignature API call to retrieve the document.
Here is an article with code examples on how to do the latter.
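For the webhook route, here is a minimal sketch (Express + AWS SDK v3). It assumes the JSON Connect event format with "Include Documents" enabled; the exact payload field names depend on your Connect configuration, so treat them as assumptions and verify against your own events.

```js
const express = require("express");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const app = express();
app.use(express.json({ limit: "50mb" })); // signed PDFs can be large
const s3 = new S3Client({ region: "us-east-1" });

app.post("/docusign-webhook", async (req, res) => {
  // Assumed JSON Connect shape; document bytes are only present
  // when "Include Documents" is enabled on the Connect configuration.
  const envelopeId = req.body?.data?.envelopeId;
  const documents = req.body?.data?.envelopeSummary?.envelopeDocuments ?? [];

  for (const doc of documents) {
    if (!doc.PDFBytes) continue; // skip events without embedded document bytes
    await s3.send(new PutObjectCommand({
      Bucket: "signed-documents", // assumed bucket name
      Key: `${envelopeId}/${doc.name}.pdf`,
      Body: Buffer.from(doc.PDFBytes, "base64"),
      ContentType: "application/pdf",
    }));
  }

  res.sendStatus(200); // acknowledge quickly so Connect does not retry
});

app.listen(3000);
```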
I'm trying to give my S3 file's URL as a document remoteUrl in the remote signing API, since that is easier than downloading the file to my server and then passing the doc to the DocuSign API. When I made the bucket public, I was able to pass the object URL as the document remoteUrl and DocuSign was able to pick it up and send it for signature, but my use case requires a non-public bucket.
The S3 bucket only allows access from whitelisted domains, so I have added "https://account-d.docusign.com/" and "https://account.docusign.com/" as allowed domains, but even then I am facing this issue:
ERROR {errorCode: 'INVALID_REQUEST_PARAMETER', message: "The request contained at least one invalid parameter. An error occurred while downloading the data for document with id=1. Please ensure that the 'remoteUrl' parameter is correct."}
Are the DocuSign allowed domains correct, or am I missing something?
OK, let's clear up some confusion here.
First, DocuSign supports various public cloud providers for cloud storage, where you can keep the files that will be sent to DocuSign to become part of an envelope sent for signature.
That list doesn't include Amazon S3; it is focused on end-user/consumer cloud storage and requires that you connect your DocuSign account to the cloud provider for authentication.
So the remoteUrl property is not relevant to your scenario.
Instead, you can build an integration that uses the AWS APIs to get the file from S3 and then sends it to DocuSign from wherever your app is hosted (AWS would make that easy). If you do that, there is nothing different about sending a file obtained from Amazon S3 versus a file that was stored on-prem.
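A minimal sketch of that flow (Node.js 18+ for the global fetch, AWS SDK v3): download the object with AWS credentials, then embed it in the envelope as base64 instead of using remoteUrl. The bucket, key, recipient, and token source are assumptions for illustration.

```js
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "us-east-1" });

async function sendS3FileForSignature(accessToken, accountId) {
  // 1. Fetch the file from the private bucket using AWS credentials,
  //    so the bucket never has to be public.
  const obj = await s3.send(new GetObjectCommand({
    Bucket: "my-private-bucket",    // assumed
    Key: "contracts/agreement.pdf", // assumed
  }));
  // transformToByteArray requires a recent v3 SDK.
  const pdfBase64 = Buffer.from(await obj.Body.transformToByteArray()).toString("base64");

  // 2. Create the envelope with the document embedded as base64.
  const res = await fetch(
    `https://demo.docusign.net/restapi/v2.1/accounts/${accountId}/envelopes`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        emailSubject: "Please sign this document",
        status: "sent",
        documents: [{
          documentBase64: pdfBase64,
          name: "agreement.pdf",
          fileExtension: "pdf",
          documentId: "1",
        }],
        recipients: {
          signers: [{ email: "signer@example.com", name: "Jane Doe", recipientId: "1" }],
        },
      }),
    }
  );
  return res.json();
}
```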
We're in the process of implementing the Video Indexer.
To upload videos, we'd like to use the videoUrl method instead of uploading the video file itself. For this we're using URLs of videos on our blob storage. These require a SAS token to be served, so the URL contains query parameters.
However, I'm unable to provide a videoUrl with query parameters to the Video Indexer endpoint.
Example of a test request:
https://api.videoindexer.ai/trial/Accounts/MY_ACCOUNT_ID/Videos?accessToken=MY_ACCESS_TOKEN&name=interview&description=interview&privacy=private&partition=some_partition&indexingPreset=AudioOnly&streamingPreset=NoStreaming&videoUrl=https://manualtovideos.blob.core.windows.net/asset-xxxxx/interview.mp4?sp=rl&st=2020-12-03T16:48:42Z&se=2020-12-04T16:48:42Z&sv=2019-12-12&sr=b&sig=l57dDjKYr...8%25253D
When I shorten the blob URL using a URL shortener service, it works.
The docs say I need to URL-encode the videoUrl, so I'm doing that using JavaScript's encodeURI.
But this doesn't change the URL much, since encodeURI leaves ?'s and &'s untouched.
Do I need to encode the URL in a different way somehow?
Or is there another way to authenticate, so I can use the blob URL without the SAS token, since it's also on Azure?
You need to encode the URL.
You can see how the request is built by using the upload method in the Azure Video Analyzer for Media Developer Portal.
So it turned out I needed to use encodeURIComponent() to encode the videoUrl parameter, instead of just encodeURI() or escape().
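A minimal sketch of why that matters: the SAS URL contains ? and &, which encodeURI leaves intact, so the Video Indexer API reads everything after the first & as its own query parameters. The account ID, token, and SAS values below are placeholders.

```js
const sasUrl =
  "https://manualtovideos.blob.core.windows.net/asset-xxxxx/interview.mp4" +
  "?sp=rl&sv=2019-12-12&sig=PLACEHOLDER";

// encodeURI leaves ? and & as-is, so the outer query string breaks.
console.log(encodeURI(sasUrl) === sasUrl); // true - nothing useful was escaped

// encodeURIComponent escapes ?, & and =, so the SAS URL survives as a single value.
const uploadUrl =
  "https://api.videoindexer.ai/trial/Accounts/MY_ACCOUNT_ID/Videos" +
  `?accessToken=MY_ACCESS_TOKEN&name=interview&videoUrl=${encodeURIComponent(sasUrl)}`;
console.log(uploadUrl);
```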
So I'm currently building an app where users can upload images. An image a user uploads should only be visible to them. I'm wondering how I can achieve this, considering the image is available through a URL.
What I was thinking is that, in order to get the picture, the user would make a REST API request, and the API would return the image data if the user has the correct permission.
The frontend of my app is in React and the backend (REST API) is in Node.js.
Edit: the images are stored on AWS S3 (this can change if needed).
Thanks!
The best option to allow only authorized users to fetch an image is an S3 presigned URL. You can refer to the article, which thoroughly describes how to implement S3 presigned URLs, and to another code example in Node.js. If you code in another language, just google "AWS S3 Presigned URL" and you will find one.
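For instance, here is a minimal sketch (Express + AWS SDK v3) of an endpoint that checks that the requesting user owns the image before handing out a short-lived presigned URL. The auth middleware, per-user key scheme, and bucket name are assumptions for illustration.

```js
const express = require("express");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const app = express();
const s3 = new S3Client({ region: "us-east-1" });

app.get("/images/:imageId/url", async (req, res) => {
  const userId = req.user.id; // assumes auth middleware has populated req.user

  // Keys are namespaced per user, so ownership is enforced by the prefix:
  // a user can only ever receive URLs for objects under their own prefix.
  const key = `users/${userId}/images/${req.params.imageId}`;

  const url = await getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: "my-app-user-images", Key: key }), // assumed bucket
    { expiresIn: 300 } // the React app should fetch the image within 5 minutes
  );
  res.json({ url });
});

app.listen(3000);
```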
I see how to upload to Amazon S3 (from the client), and I see how to make requests to update DynamoDB (from the client), but how do I upload a file to S3 such that I get a response back with "business logic" information?
For instance, I want to upload a photo to an uploadPhoto endpoint that will return the photoID of the photo in my DynamoDB model.
Yes, I can upload a file to S3 and have it 'notify' Lambda, but then it's too late: S3 has already returned its response, and Lambda can't send another response back to the client.
It's clear I shouldn't upload the file to Lambda itself.
Then there is API Gateway, but it's not clear that uploading files through API Gateway is a good idea...
We just went through a similar scenario, and unfortunately I think it comes down to two choices:
Use multiple requests - The client calls Lambda to get a presigned URL, uploads the file directly to S3, then calls back to Lambda to let it know the file has been uploaded, and gets a response with all the business logic.
One request - Create a service (likely on EC2) that sits in front of S3: your client uploads to your service, your service uploads to S3, does the business logic, and then sends the response back to the client. Definitely less work on the client, but you pay for twice as much bandwidth because the file is uploaded twice.
We implemented #1 and it wasn't too hard. In our case the client is an Angular app, so to the user it looks like one request, but behind the scenes the app is making several calls. A rough sketch of that flow is below.
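A minimal sketch of option #1 as two Lambda handlers (Node.js, AWS SDK v3). The bucket and table names, key scheme, and photoID format are assumptions for illustration.

```js
const crypto = require("crypto");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { DynamoDBClient, PutItemCommand } = require("@aws-sdk/client-dynamodb");

const s3 = new S3Client({});
const dynamo = new DynamoDBClient({});

// Call 1: the client asks for a presigned PUT URL plus a photoID to track.
exports.getUploadUrl = async () => {
  const photoId = crypto.randomUUID();
  const uploadUrl = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: "photo-uploads", Key: `photos/${photoId}` }), // assumed bucket
    { expiresIn: 300 }
  );
  return { statusCode: 200, body: JSON.stringify({ photoId, uploadUrl }) };
};

// Call 2 happens outside Lambda: the client PUTs the file bytes to uploadUrl.

// Call 3: the client confirms the upload; the business logic runs, the record
// is written to DynamoDB, and the photoID goes back to the client.
exports.confirmUpload = async (event) => {
  const { photoId } = JSON.parse(event.body);
  await dynamo.send(new PutItemCommand({
    TableName: "Photos", // assumed table
    Item: {
      photoId: { S: photoId },
      s3Key: { S: `photos/${photoId}` },
      status: { S: "UPLOADED" },
    },
  }));
  return { statusCode: 200, body: JSON.stringify({ photoId }) };
};
```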