We have an image resizing service that resizes images on the fly based on query parameters.
We store the resized images in S3, and the service sits behind CloudFront.
Is there any way to protect the service from unauthorized calls?
We could add a signature to the request URL on each call, but that is easy to defeat because the signature-generating logic would have to be implemented in the front end.
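One common pattern is to generate the signature on the server rather than in the front end: your backend issues signed resize URLs, and the resizing service verifies the HMAC before doing any work. A minimal sketch in Python (the secret key, parameter names, and helper functions are illustrative assumptions, not your actual API):

```python
import hmac
import hashlib

# Hypothetical server-side secret; it never ships to the browser.
SECRET_KEY = b"server-side-secret"

def _canonical(path: str, params: dict) -> bytes:
    """Canonicalize the path and query so both sides sign the same bytes."""
    query = "&".join(f"{k}={params[k]}" for k in sorted(params))
    return f"{path}?{query}".encode()

def sign_resize_url(path: str, params: dict) -> str:
    """Build a resize URL with an HMAC-SHA256 signature computed server-side."""
    sig = hmac.new(SECRET_KEY, _canonical(path, params), hashlib.sha256).hexdigest()
    query = "&".join(f"{k}={params[k]}" for k in sorted(params))
    return f"{path}?{query}&sig={sig}"

def verify_resize_url(path: str, params: dict, sig: str) -> bool:
    """Recompute the signature in the resizing service and compare safely."""
    expected = hmac.new(SECRET_KEY, _canonical(path, params), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the key never leaves the server, a client cannot forge new parameter combinations; at worst they can replay URLs your backend has already issued, which an expiry parameter included in the signed string would limit.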
I have a single-page app that is developed in one of the modern JS frameworks.
This single-page app only has a single entry point (index.html).
It handles routing and other page logic through JS, which means that when a user goes directly to (or refreshes) a non-root URL, we want them to receive the index.html file rather than a file stored at the URL path, as would happen on a statically hosted website.
If a resource behind the URL is not found, S3 returns 403. Therefore, in CloudFront, we have configured a Custom Error Response that rewrites 403 to /index.html with a 200 status code.
Now, I am working on the Authorization layer in the backend that legitimately is returning 403 if the subject doesn't have enough rights to access the API.
And, instead of returning 403, CF, for obvious reasons, is returning 200 with the index page.
Is there a way to fine-tune this behavior? Thoughts?
I was able to solve this by using the following approach:
Turn on Static Website Hosting on the S3 bucket.
Replace the S3 bucket name in the CloudFront origin with the S3 static website endpoint URL.
Remove the Error Pages (custom error responses) from CloudFront.
The error pages will be handled by the respective origins instead of CloudFront.
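If you manage the bucket from the CLI, the steps above roughly correspond to the following (the bucket name is a placeholder, and the website endpoint format varies slightly by region):

```shell
# 1. Turn on static website hosting; serving index.html as the error
#    document makes deep links into the SPA fall back to the app shell.
aws s3 website s3://my-spa-bucket/ \
    --index-document index.html \
    --error-document index.html

# 2. Point the CloudFront origin at the website endpoint, not the REST endpoint:
#    http://my-spa-bucket.s3-website-us-east-1.amazonaws.com
# 3. Remove the custom error responses (403 -> /index.html) from the
#    distribution so the backend's real 403s pass through to the client.
```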
I am completely new to Azure management. I have a POST API that I need to map and expose to the front end.
I have tried the Blank API option and tried to define the API's front-end and back-end sides.
The API I developed is a POST endpoint that expects a request body of type form-data and needs a multipart file to be uploaded.
I am not sure how this is configured in Azure. I am getting an error because the request never hits the backend.
I have a web app that connects to AWS Cognito for authentication. In turn, the app uses a lambda function to connect to an external API. The app collects information from that API and stores it in a dynamodb table, for use with Cognito authorized users. At no point in time does the end-user have access to the token that connects them to the external API. This is important because the end-user could use that token in a malicious way and get me in trouble with the API provider. Essentially, my app acts as a buffer and an aggregator between the end-user and the external API.
The external API provides file urls for direct download, that require authorization. I would like to make these available to the end-user. As I stated, I don't want to give them direct authorization, but somehow instead proxy that request through lambda and then redirect it to the end-user. This is where I'm stuck.
Clearly I could download the file using lambda and S3, and then in turn make it available to the end-user. This is not a good solution because it requires quite a bit of resources to download, store and then upload the file to the end-user. Also, there would be a gap between the time the user wanted the file and the time when it was downloaded to S3 and ready to upload to the user.
The question "NodeJS stream out of AWS Lambda function" indicates that Lambda functions don't support Node streams as responses, so the idea of somehow starting a download in Lambda and then passing the stream along doesn't seem feasible. I'm not sure that would make sense anyway.
I don't know if it is possible for lambda (or api gateway) to return a redirect somehow, with the authorization embedded in such a way that the end-user doesn't have access to the token. I'm not even sure if what I'm trying to do is possible, but it seems like a reasonable use case to me. Any thoughts? Thanks.
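A Lambda behind API Gateway (proxy integration) can return a 302 redirect, so one pattern is: the Lambda uses your server-held token to ask the external API for a short-lived signed download URL, then redirects the user to it. The user never sees your long-lived token, only a URL that expires. A sketch, where `get_short_lived_url` is a hypothetical helper standing in for the call to your external API:

```python
def get_short_lived_url(file_id: str) -> str:
    """Hypothetical: exchange the server-held API token for a signed,
    short-lived download URL minted by the external provider."""
    return f"https://files.example.com/{file_id}?expires=60&sig=abc123"

def handler(event, context):
    """Lambda proxy handler: authorize the Cognito user, then redirect."""
    file_id = event["pathParameters"]["fileId"]
    # ... check event["requestContext"]["authorizer"] claims here ...
    return {
        "statusCode": 302,
        "headers": {"Location": get_short_lived_url(file_id)},
        "body": "",
    }
```

This only works if the external API can mint expiring URLs. If it only accepts a bearer token on every request, you are back to proxying the bytes through Lambda or staging them in S3.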
I have a bunch of images in S3, served through CloudFront.
I have a web server, and I only want users to view the images through my website, so I can't just give out direct links to them. How do I set that up?
It appears that your requirement is:
Content stored in Amazon S3
Content served via Amazon CloudFront
Content should be private, but have a way to let specific people view it
This can be accomplished by using Pre-Signed URLs. These are special URLs that only work for a limited period of time.
When your application determines that the user is entitled to view an image, it can generate a Pre-Signed URL that grants access for a limited period of time. Simply use this URL in an <IMG> tag as you would normally.
See:
Amazon S3 pre-signed URLs
Amazon CloudFront pre-signed URLs
Since your content in Amazon S3 will be private (so users cannot bypass CloudFront and access it directly), you will also need to grant CloudFront permission to access the content from S3. See: Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content
Another option, instead of creating a pre-signed URL each time, is to use Signed Cookies. However, this doesn't give fine-grained control for access to individual objects -- it's more for granting access to multiple objects, such as a subscription area.
I have a system providing access to private blobs based on a user's login credentials. If they have permission, they are given a SAS blob URL to view a document or image stored in Azure.
I want to be able to resize the images while still maintaining the integrity of the short window of access granted by the SAS.
What is the best approach with ImageResizer? Should I use the AzureReader2 plugin, or should I just use the RemoteReader plugin with the SAS URL?
Thanks
ImageResizer disk-caches the resized result images indefinitely, regardless of restrictions on the source file.
You need to implement your authorization logic within the application using the Authorize_Request event or Config.Current.Pipeline.AuthorizeImage.
There's no way to pass storage authorization through unless you disable all caching.