I have come across this issue while trying to create a CloudFront distribution that uses a Lambda@Edge function for Cognito login.
I create the aws_cloudfront_distribution resource with the Lambda configured; as expected, the Lambda gets created first so the CloudFront module can use the Lambda ARN.
Now for the issue I'm facing.
Terraform throws an error saying the AWS CloudFront principal does not have permission to get the Lambda function, which is correct.
I decided to copy a module I have from another project, but the aws_lambda_permission resource needs the ARN of the CloudFront distribution for its source_arn.
So far I'm stuck in a loop: CloudFront needs the aws_lambda_permission to assign the function, and the aws_lambda_permission needs the CloudFront ARN to be created.
How can I get around this issue?
Is there another way of doing it?
If code is needed, I can upload it.
I tried hardcoding values that are not defined by AWS.
I am trying to attach a Lambda function that lives in another AWS account to a CloudFront distribution on the origin response event, but I get the error below.
The CloudFront distribution under account <lambda_account> cannot be associated with a Lambda function under a different account: <cloudfront_account>. Function: arn:aws:lambda:us-east-1:<lambda_account>:function:test_edge_lambda:1
Is there any workaround to achieve this?
As per the HashiCorp AWS provider, rotation_lambda_arn is a required field.
However, the AWS console shows an option that creates a rotation Lambda on your behalf and uses it. I don't see any such option in the Terraform provider. Am I missing anything? Is this a missing feature or a bug in the Terraform provider?
I am trying to avoid creating a Lambda myself or through Terraform here, and I am wondering why the provider doesn't have an option corresponding to "Create a rotation function".
Our clients are already registered in our development environment, and management is asking us to create the production environment without losing any of the already registered user data.
We are trying to deploy the production environment in ap-southeast-2, and our development environment is already in eu-west-1.
I have made the necessary changes for the deployment to happen in these two regions, but the problem is that we are creating Cognito and the S3 buckets using a CloudFormation template.
We want to use the same S3 buckets and Cognito between these two regions, but when I'm deploying to ap-southeast-2 (production) the stack creation fails because the S3 bucket already exists.
Is it possible to reuse the same S3 bucket and Cognito between regions and stages? I want the Serverless Framework to check whether these resources exist in the region I choose (in this case eu-west-1). We can't create new buckets because we are at the 100-bucket limit!
Here is the code showing how we are creating the S3 buckets. We are using the Serverless Framework with Node.js.
Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          -
            AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000

# Print out the name of the bucket that is created
Outputs:
  AttachmentsBucketName:
    Value:
      Ref: AttachmentsBucket
"I want the Serverless Framework to check whether these resources exist in the region I choose"
This is not how Infrastructure as Code (IaC) works. Neither CloudFormation nor Terraform, for that matter, has any built-in tools to "check" whether a resource exists. The IaC perspective is: if it's in a template, then only that template/stack can manage it. There is nothing in between, like "it may or may not exist".
Having said that, there are ways to re-architect and work around this. The most common ones are:
Since the bucket is a common resource, it should be deployed separately from the rest of your stacks, and its name should be passed as an input to the dependent stacks.
Develop a custom resource in the form of a Lambda function. The function would use the AWS SDK to check for the existence of your buckets and return that information to your stack for further use (see the sketch below).
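As a rough illustration of the second option, here is a minimal sketch of such a custom resource handler in TypeScript. It assumes AWS SDK v3 and a hypothetical BucketName property passed in from the template; the response it PUTs back to the pre-signed ResponseURL follows the standard CloudFormation custom resource contract.

// Hypothetical custom resource handler: reports whether an S3 bucket exists.
import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";
import * as https from "https";
import { URL } from "url";

const s3 = new S3Client({});

export const handler = async (event: any): Promise<void> => {
  let exists = false;
  try {
    // BucketName is a property we assume the template passes to the custom resource.
    await s3.send(new HeadBucketCommand({ Bucket: event.ResourceProperties.BucketName }));
    exists = true;
  } catch {
    exists = false;
  }

  // Standard CloudFormation custom resource response, sent to the pre-signed URL.
  const body = JSON.stringify({
    Status: "SUCCESS",
    PhysicalResourceId: event.ResourceProperties.BucketName,
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
    Data: { Exists: exists ? "true" : "false" },
  });

  const url = new URL(event.ResponseURL);
  await new Promise<void>((resolve, reject) => {
    const req = https.request(
      { hostname: url.hostname, path: url.pathname + url.search, method: "PUT",
        headers: { "content-type": "", "content-length": Buffer.byteLength(body) } },
      (res) => { res.resume(); res.on("end", () => resolve()); }
    );
    req.on("error", reject);
    req.write(body);
    req.end();
  });
};

The stack would then reference this function through a Custom:: resource and read the returned Exists attribute with Fn::GetAtt.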
I have written an AWS Lambda Node.js function for creating a stack in CloudFormation, using a CloudFormation template and input parameters given from the UI.
When I run my Lambda function with the respective inputs, a stack is created successfully, and resources like EC2, RDS, and VPC are also created and work perfectly.
Now I want to make this function public and use it with a user's AWS credentials.
So when a public user uses my function with his AWS credentials, the resources should be created in his account, and the user should not be able to see my template code.
How can I achieve this?
You may be better off leveraging the AWS Cloud Development Kit (CDK) than using CloudFormation directly for this purpose. Although the CDK may not be directly usable within Lambda, a workaround is mentioned here.
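For context, a minimal CDK app in TypeScript looks roughly like the sketch below; the stack and resource names are hypothetical, and the point is only that the CloudFormation template is synthesized from code rather than shipped as a file the user could read.

// Hypothetical minimal CDK app: the CloudFormation template is synthesized from code.
import * as cdk from "aws-cdk-lib";
import { aws_s3 as s3 } from "aws-cdk-lib";
import { Construct } from "constructs";

class DemoStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // Example resource; in practice this would mirror whatever the template provisions.
    new s3.Bucket(this, "AttachmentsBucket");
  }
}

const app = new cdk.App();
new DemoStack(app, "DemoStack");
app.synth(); // emits the CloudFormation template to cdk.out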
AWS CloudFormation will create resources in the AWS Account that is associated with the credentials used to create the stack.
The person who creates the stack will need to provide (upload) a template file or they can reference a template that is stored in Amazon S3, which is accessible to their credentials (meaning that it is either public, or their credentials have been given permission to access the template in S3).
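To make that concrete, here is a minimal sketch, assuming AWS SDK v3 for Node.js and hypothetical names: the stack is created with whatever credentials the client is constructed with, so the resulting resources land in that user's account, and the TemplateURL must be readable by those same credentials.

// Sketch: create a stack with the caller's credentials; resources land in the caller's account.
import { CloudFormationClient, CreateStackCommand } from "@aws-sdk/client-cloudformation";

async function createUserStack(accessKeyId: string, secretAccessKey: string): Promise<void> {
  const cfn = new CloudFormationClient({
    region: "us-east-1",
    credentials: { accessKeyId, secretAccessKey }, // supplied by the end user
  });

  await cfn.send(new CreateStackCommand({
    StackName: "user-stack", // hypothetical
    // Template stored in S3; it must be accessible to the caller's credentials
    // (public, or explicitly shared), which is why hiding it completely is hard.
    TemplateURL: "https://example-bucket.s3.amazonaws.com/template.yaml",
    Parameters: [{ ParameterKey: "InstanceType", ParameterValue: "t3.micro" }],
    Capabilities: ["CAPABILITY_NAMED_IAM"],
  }));
}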
I'm trying to figure out a way to run Terraform from inside an AWS WorkSpace. Is that even possible? Has anyone made this work?
AWS WorkSpaces doesn't support the same concept of an instance profile with an associated IAM role attached to it.
I'm pretty confident, however, that you can run Terraform in an AWS WorkSpace just as you can from your personal computer. With internet access (or VPC endpoints), it can reach the AWS APIs and just requires a set of credentials, which in this case (without instance profiles) would be an AWS access key and secret access key - just like on your local computer.