The setup uses S3 for storage, API Gateway for the REST endpoint, and a Lambda function (Python) to fetch files from S3.
I'm using Boto3 in the Lambda function to check whether the file exists in S3, and I was able to download it, but it ends up stored on the Lambda machine ("/tmp"). API Gateway can already trigger the Lambda function. Is there a way to make the download happen in the browser once the Lambda function is triggered?
Thanks!
Here is how we did it:
Check and Redirect:
API Gateway --> Lambda (return 302)
Deliver Content:
CloudFront --> S3
The Lambda function checks whether the object exists in S3 and returns a 302 redirect to the CloudFront URL. You can also return a signed URL from Lambda with a limited validity period, so the content is accessed through CloudFront.
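A minimal sketch of that check-and-redirect Lambda, assuming an API Gateway proxy integration and a redirect to an S3 pre-signed URL (swap in your CloudFront domain or a CloudFront signed URL as described above; the bucket name and path parameter are placeholders):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET = "my-bucket"  # placeholder

    def lambda_handler(event, context):
        # Assumes a proxy integration passing the object key as a path parameter.
        key = event["pathParameters"]["key"]
        try:
            s3.head_object(Bucket=BUCKET, Key=key)  # check the object exists
        except ClientError:
            return {"statusCode": 404, "body": "File not found"}
        # Pre-signed URL valid for 5 minutes; the browser follows the redirect
        # and downloads directly from S3, never touching Lambda's /tmp.
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=300,
        )
        return {"statusCode": 302, "headers": {"Location": url}}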
Hope it helps.
I have come across this issue while trying to create a CloudFront distribution that uses a Lambda@Edge function for Cognito login.
I create the aws_cloudfront_distribution resource with the Lambda configured; as expected, the Lambda gets created first so the CloudFront module can use the Lambda ARN.
Now for the issue I'm facing.
Terraform throws an error saying the AWS CloudFront principal does not have the Lambda permission to "get lambda", which is correct.
I decided to copy a module I have from another project, but the aws_lambda_permission resource needs the ARN of the CloudFront distribution for its "source_arn".
So far I'm stuck in a loop: CloudFront needs the lambda_permission to assign the function, and the lambda_permission needs the CloudFront ARN to be created.
How can I get around this issue?
Is there another way of doing it?
If code is needed, I can upload it.
I tried hardcoding values that are not defined by AWS.
I am trying to attach a Lambda function from another AWS account to a CloudFront distribution on the Origin response trigger, but I get the error below.
The CloudFront distribution under account <lambda_account> cannot be associated with a Lambda function under a different account: <cloudfront_account>. Function: arn:aws:lambda:us-east-1:<lambda_account>:function:test_edge_lambda:1
Is there any workaround to achieve this?
I'm trying to copy an object from one bucket to another using pre-signed URLs for both buckets: one pre-signed URL generated with getObject permission, the other with putObject permission.
I want to avoid downloading the objects with HTTP GET and uploading them with HTTP PUT; I want it to work like the usual way of copying objects between buckets (processed and executed within Amazon's services).
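For reference, the usual server-side copy referred to above looks like this in Boto3. It relies on credentials that can read the source and write to the destination rather than on pre-signed URLs; the bucket and key names are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Server-side copy: S3 copies the object internally, so the data
    # never travels through the machine running this code.
    s3.copy_object(
        Bucket="destination-bucket",  # placeholder
        Key="copied/object.txt",      # placeholder
        CopySource={"Bucket": "source-bucket", "Key": "object.txt"},
    )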
Do Lambda functions have a repository? If yes, how can I check the repository details? If not, how is the code deployed without a repository?
There's no "hidden" or "shared" repository that can be used between different Lambda functions; every file you need should be included in the zip file you upload.
Read a more detailed explanation here.
Lambda functions do not have repositories. You can create a deployment package yourself or write your code directly in the Lambda console, in which case the console creates the deployment package for you and uploads it, creating your Lambda function.
If your custom code requires only the AWS SDK library, then you can use the inline editor in the AWS Lambda console. Using the console, you can edit and upload your code to AWS Lambda. The console will zip up your code with the relevant configuration information into a deployment package that the Lambda service can run.
You can also test your code in the console by manually invoking it using sample event data.
In the case of the deployment package, you may either upload it directly or upload the .zip file first to an Amazon S3 bucket in the same AWS region where you want to create the Lambda function, and then specify the bucket name and object key name when you create the Lambda function using the console or the AWS CLI.
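As a sketch of that second option, creating a function from a .zip already uploaded to S3 might look like this with Boto3 (the function name, role ARN, bucket, and key are placeholders):

    import boto3

    client = boto3.client("lambda")

    # Create a Lambda function from a deployment package stored in S3.
    # The .zip must be in the same region as the function being created.
    client.create_function(
        FunctionName="my-function",  # placeholder
        Runtime="python3.12",
        Role="arn:aws:iam::123456789012:role/my-lambda-role",  # placeholder
        Handler="lambda_function.lambda_handler",
        Code={"S3Bucket": "my-deployment-bucket", "S3Key": "package.zip"},
    )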
I'm using AWS Lambda functions in Node.js.
Is there any way to configure the Lambda function to use hosts records?
For example:
{domain:'www.example.com',ip:'190.10.20.30'}