Terraform VPC In New Accounts

I want to work with multiple AWS Accounts/Organization Units (Acct1/Acct2/etc.) within an Organization. Creating a new Account (let's say Acct4) from the AWS Console results in the automatic creation of a default VPC (let's say VPC4). Is there a way to create a new Account from Terraform (and control the creation of the resulting VPC)? Or, if I wanted to track the VPC in the new Account, would I simply import the details? Thanks in advance.
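For reference, the Terraform AWS provider does offer an aws_organizations_account resource for creating member accounts. A minimal sketch, where the account name, email, and the aws.acct4 provider alias (a provider configured to assume a role in the new account) are placeholder assumptions; note that AWS itself creates the default VPC automatically, so aws_default_vpc only adopts it into state rather than creating it:

resource "aws_organizations_account" "acct4" {
  name      = "Acct4"
  email     = "acct4@example.com"             # placeholder; must be unique
  role_name = "OrganizationAccountAccessRole"
}

# Assumption: an aliased provider that assumes a role in the new account.
# aws_default_vpc adopts the automatically created default VPC into state;
# it does not create a new one.
resource "aws_default_vpc" "vpc4" {
  provider = aws.acct4
  tags = {
    Name = "VPC4"
  }
}

Alternatively, terraform import aws_vpc.vpc4 <vpc-id> would track the existing VPC as a plain aws_vpc resource.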

Related

How to add a new resource to an existing resource group in Terraform

This would appear to be a fairly simple and basic scenario, but I'm frankly at a loss as to how to accomplish this using Terraform and would appreciate any suggestions.
The issue is this. In Azure, I have a number of resource groups, each containing a number of resources, including virtual networks, subnets, storage accounts, etc. What I would now like to do is add new resources to one or two of the resource groups. Typical example, I would like to provision a new virtual machine in each of the resource groups.
Now, so far all of the documentation and blogs I come across only provide guidance on how to create resources where you also create a new resource group, vnet, and subnet from scratch. This is definitely not what I wish to do.
All I'm looking to do is get Terraform to add a single virtual machine to an existing resource group, going on to configure it to connect to existing networking resources such as a VNet, Subnet, etc. Any ideas?
I tested for ECS by destroying the launch configuration.
terraform destroy -target module.ecs.module.ec2_alb.aws_launch_configuration.launchcfg
I recreated the launch configuration and it worked:
terraform plan -target=module.ecs.module.ec2_alb.aws_launch_configuration.launchcfg
terraform apply -target=module.ecs.module.ec2_alb.aws_launch_configuration.launchcfg
Also, you can read more about Terraform resource targeting here: https://learn.hashicorp.com/tutorials/terraform/resource-targeting
If you just want to be able to reference your existing resources in your TF script, you normally would use data sources in TF to fetch their information.
So for resource group, you would use data source azurerm_resource_group, for vnet there is azurerm_virtual_network and so forth.
These data sources only allow you to reference and read the details of existing resources, not to manage them in your TF script. So if you would like to actually manage these resources using TF (modify, delete, etc.), you would have to import them into TF first.
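A minimal sketch of that pattern, where the resource group, VNet, and subnet names are placeholders; the new network interface (which the VM would then reference) is created inside the existing subnet:

data "azurerm_resource_group" "existing" {
  name = "my-resource-group" # placeholder: existing RG name
}

data "azurerm_virtual_network" "existing" {
  name                = "my-vnet" # placeholder
  resource_group_name = data.azurerm_resource_group.existing.name
}

data "azurerm_subnet" "existing" {
  name                 = "my-subnet" # placeholder
  virtual_network_name = data.azurerm_virtual_network.existing.name
  resource_group_name  = data.azurerm_resource_group.existing.name
}

# New NIC placed in the existing subnet; an azurerm_linux_virtual_machine
# would reference this NIC and the existing resource group the same way.
resource "azurerm_network_interface" "vm_nic" {
  name                = "vm-nic"
  location            = data.azurerm_resource_group.existing.location
  resource_group_name = data.azurerm_resource_group.existing.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = data.azurerm_subnet.existing.id
    private_ip_address_allocation = "Dynamic"
  }
}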

CDK how to add assume role while creating a role from an existing role using from_role_arn

I have a step function that has multiple lambdas and connections. Now this step function uses an existing role using the following method:
self.state_machine_role = _iam.Role.from_role_arn(
    self,
    "statemachinerole",
    role_arn="existing-role-arn",
    mutable=False,
)
Now I want an event to invoke this step function; as per the events documentation, I need to add ServicePrincipal('events.amazonaws.com') to this role. So my question is how I can modify state_machine_role to have this new service principal.
The existing role existing-role-arn already has states.amazonaws.com associated with it, along with the other policies needed to run my lambdas and step function.
I don't think you need to define a role for the event Rule or the state machine; CDK creates them for you, and does it better.
See the ref here: AWS GuardDuty Combine With Security Hub And Slack
Roles created by AWS CDK will be much easier for you to use, but if you have to use the existing role, you can't update its trust policy document directly from AWS CDK. You can, however, implement an AwsCustomResource with an AwsSdkCall to do that for you.
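A minimal sketch of that approach (CDK v2 Python; the role name, role ARN, and trust policy below are placeholder assumptions, and the code belongs inside the stack's constructor):

import json
from aws_cdk import custom_resources as cr

# Assumed trust policy: keeps states.amazonaws.com and adds events.amazonaws.com.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": ["states.amazonaws.com", "events.amazonaws.com"]},
        "Action": "sts:AssumeRole",
    }],
}

update_call = cr.AwsSdkCall(
    service="IAM",
    action="updateAssumeRolePolicy",
    parameters={
        "RoleName": "existing-role-name",  # placeholder: name behind existing-role-arn
        "PolicyDocument": json.dumps(trust_policy),
    },
    physical_resource_id=cr.PhysicalResourceId.of("UpdateTrustPolicy"),
)

cr.AwsCustomResource(
    self,
    "UpdateTrustPolicy",
    on_create=update_call,
    on_update=update_call,
    policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
        resources=["existing-role-arn"],  # placeholder: the role's ARN
    ),
)

Note that updateAssumeRolePolicy replaces the whole document, which is why the sketch restates the existing states.amazonaws.com principal instead of appending to it.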

Is it possible to create a stack in my AWS account and have resources like (ec2, vpc, rds) created in a client's AWS account?

I have written an AWS Lambda Node.js function for creating a stack in CloudFormation, using a CloudFormation template and input parameters given from a UI.
When I run my Lambda function with the respective inputs, a stack is successfully created and the instances (ec2, rds, vpc, etc.) are also created and working perfectly.
Now I want to make this function public and have it used with a user's AWS credentials.
So when a public user runs my function with his AWS credentials, those resources should be created in his account, and the user shouldn't be able to see my template code.
How can I achieve this?
You can leverage the AWS Cloud Development Kit (CDK) for this purpose rather than using CloudFormation directly. Although CDK may not be directly usable within Lambda, a workaround is mentioned here.
AWS CloudFormation will create resources in the AWS Account that is associated with the credentials used to create the stack.
The person who creates the stack will need to provide (upload) a template file, or they can reference a template stored in Amazon S3 that is accessible to their credentials (meaning it is either public, or their credentials have been granted permission to access the template in S3).
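As an illustration of that credentials model (a sketch in Python/boto3 rather than the question's Node.js; the role ARN, bucket, stack name, and parameters are all placeholder assumptions), one common pattern is for the client account to expose a cross-account role that your function assumes before calling CreateStack:

import boto3

# Assumption: the client created a role that trusts your account and grants
# cloudformation:CreateStack plus the permissions the template's resources need.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/StackDeployRole",  # placeholder
    RoleSessionName="client-stack-deploy",
)["Credentials"]

# CloudFormation called with these credentials creates the stack (and its
# ec2/vpc/rds resources) in the client's account, not yours.
cfn = boto3.client(
    "cloudformation",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
cfn.create_stack(
    StackName="client-stack",  # placeholder
    TemplateURL="https://s3.amazonaws.com/my-bucket/template.yaml",  # placeholder
    Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

Note that, per the answer above, the TemplateURL object in S3 still has to be readable by the credentials making the call.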

Can Terraform be run from inside an AWS WorkSpace?

I'm trying to figure out a way to run Terraform from inside an AWS WorkSpace. Is that even possible? Has anyone made this work?
AWS WorkSpaces doesn't support the EC2 concept of an instance profile with an associated IAM role attached to the machine.
I'm pretty confident, however, that you can run Terraform in an AWS WorkSpace just as you can do it from your personal computer. With Internet access (or VPC endpoints), it can reach AWS APIs and just requires a set of credentials, which in this case (without instance profiles) would be an AWS Access Key and Secret Access Key - just like on your local computer.
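For example, a minimal provider configuration that relies on environment variables or a shared credentials profile (the region and profile name are placeholders):

# Terraform reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
# environment, or the named profile from ~/.aws/credentials.
provider "aws" {
  region  = "us-east-1"      # placeholder
  profile = "workspace-user" # assumption: a profile configured in the WorkSpace
}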

AWS Elastic BeanStalk Security Group

I am trying to create a Worker Environment on Elastic Beanstalk with the Node.js sample application, which should use an existing security group in my VPC.
I create this environment inside a VPC (Virtual Private Cloud).
When I create this environment, I keep the following configuration for the VPC.
The security group selected here already exists.
In the next screen, I also select an instance profile and service role, which also already exist.
When I create the environment with these settings, it creates the environment fine, but it always creates a new security group instead of using the existing one.
Why does it always create a new security group and not use the existing one?
I want to reuse the security group and not create a separate one for each worker environment.
Appreciate it if someone can guide me in the right direction.
Thanks in advance.
Beanstalk uses the security group you asked for, but on creation it also creates a unique one for that configuration. If you launch your instance, it will be in the security group as expected.
Instead of stopping the extra group from being created, I was able to modify its rules so that it just allows port 22 access only from my private security group:
Namespace: aws:autoscaling:launchconfiguration
OptionName: SSHSourceRestriction
Value: tcp, 22, 22, my-private-security-group
Visit: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#SSHSourceRestriction
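The same restriction expressed as an .ebextensions config file (a sketch; the source security group name is a placeholder):

option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: SSHSourceRestriction
    value: tcp, 22, 22, my-private-security-group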
