Can Terraform be run from inside an AWS WorkSpace?

I'm trying to figure out a way to run Terraform from inside an AWS WorkSpace. Is that even possible? Has anyone made this work?

AWS WorkSpaces doesn't have the same concept of an instance profile with an associated IAM role that EC2 instances do.
I'm pretty confident, however, that you can run Terraform in an AWS WorkSpace just as you can from your personal computer. With internet access (or VPC endpoints) it can reach the AWS APIs, and it just needs a set of credentials, which in this case (without instance profiles) would be an AWS access key ID and secret access key - just like on your local computer.
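For illustration, a minimal sketch of what that might look like inside the WorkSpace, assuming you go the access-key route (the key values and region below are placeholders, not real credentials):
$ # placeholder credentials and region; substitute your own
$ export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
$ export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
$ export AWS_DEFAULT_REGION="us-east-1"
$ terraform init
$ terraform plan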

Related

GCP Service account key management and usage in Terraform

I am creating a CI/CD pipeline for Terraform so that my GCP resource creation is automated. Terraform needs a service account to do the job; I created the service account and the key was downloaded to my machine, but what is the correct way to store it so that, when the Cloud Build pipeline runs, Terraform can pick it up and execute the scripts?
provider "google" {
credentials = file(var.cred_file)
project = var.project_name
region = var.region
}
Is it okay to store this file in a Cloud Storage bucket, or are there better alternatives?
On GCP a bucket is an option for keeping sensitive information, and you can use access control lists (ACLs) to define who has access to your buckets and objects. GCP offers several storage options, and the best one depends on your needs; just ensure the option you pick provides the security tools to keep your files safe. Once you have granted the necessary permissions to your Cloud Build service account, you can pass the path to the service account key in code.
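If you do keep the key in a bucket, one possible (sketch-only) approach is to copy it into the build workspace before running Terraform. The bucket and object names below are made up, and the Cloud Build service account would need read access to them:
$ # hypothetical bucket/object names
$ gsutil cp gs://my-terraform-secrets/terraform-sa-key.json ./sa-key.json
$ terraform init
$ terraform apply -var="cred_file=./sa-key.json"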

GitHub Actions for Terraform - How to provide "terraform.tfvars" file with aws credentials

I am trying to set up GitHub Actions to execute a Terraform template.
My confusion is: how do I provide the *.tfvars file that holds the AWS credentials? (I can't check these files in.)
What's the best practice for supplying the variable values expected by Terraform commands like plan or apply when they need aws_access_key and aws_secret_key?
Here is my GitHub project - https://github.com/samtiku/terraform-ec2
Any guidance here...
You don't need to provide all variables through a *.tfvars file. Apart from the -var-file option, the terraform command also provides the -var parameter, which you can use for passing secrets.
In general, secrets are passed to scripts through environment variables. CI tools give you the option to define environment variables in the project configuration. It's a manual step because, as you have already noticed, secrets cannot be stored in the repository.
I haven't used GitHub Actions in particular, but after setting the environment variables, all you need to do is run terraform with the secrets read from them:
$ terraform apply -var-file=some.tfvars -var "aws-secret=${AWS_SECRET_ENVIRONMENT_VARIABLE}"
This way no secrets are ever stored in the repository. If you'd like to run terraform locally, you'll first need to export these variables in your shell:
$ export AWS_SECRET_ENVIRONMENT_VARIABLE="..."
Although Terraform allows providing credentials to some providers via their configuration arguments for flexibility in complex situations, the recommended way to pass credentials to providers is via some method that is standard for the vendor in question.
For AWS in particular, the main standard mechanisms are either a credentials file or environment variables. If you configure the action to follow what is described in one of those guides then Terraform's AWS provider will automatically find those credentials and use them in the same way that the AWS CLI does.
It sounds like environment variables will be the easier way to go within GitHub Actions, in which case you can just set the necessary environment variables directly and the AWS provider should use them automatically. If you are using the S3 state storage backend then it will also automatically use the standard AWS environment variables.
If your system includes multiple AWS accounts then you may wish to review the Terraform documentation guide Multi-account AWS Architecture for some ideas on how to model that. The summary of what that guide recommends is to have a special account set aside only for your AWS users and their associated credentials, and to configure your other accounts to allow cross-account access via roles. You can then use a single set of credentials to run Terraform, while configuring each instance of the AWS provider to assume the appropriate role for whatever account that provider instance should interact with.
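As a rough sketch of that pattern (the account ID, role name, and profile name here are invented), you keep one set of credentials in the shared users account and have Terraform assume a role in the target account via a named profile:
$ # invented account ID, role name, and profile name
$ cat >> ~/.aws/config <<'EOF'
[profile app-prod]
role_arn       = arn:aws:iam::222222222222:role/TerraformExecution
source_profile = default
EOF
$ AWS_PROFILE=app-prod terraform plan
The AWS provider's assume_role block achieves the same thing if you prefer to keep the role selection inside the Terraform configuration itself.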

Is it possible to create a stack in my AWS account with resources (EC2, VPC, RDS) created in a client's AWS account?

I have written an AWS Lambda Node.js function that creates a CloudFormation stack from a CloudFormation template and input parameters supplied from a UI.
When I run my Lambda function with the expected inputs, a stack is created successfully and resources like EC2, RDS, and VPC are also created and working perfectly.
Now I want to make this function public and have it run with the user's AWS credentials.
So when a public user runs my function with their AWS credentials, the resources should be created in their account, and the user shouldn't be able to see my template code.
How can I achieve this?
You can leverage the AWS Cloud Development Kit (CDK) for this purpose better than using CloudFormation directly. Although the CDK may not be directly usable within Lambda, a workaround is mentioned here.
AWS CloudFormation will create resources in the AWS Account that is associated with the credentials used to create the stack.
The person who creates the stack will need to provide (upload) a template file or they can reference a template that is stored in Amazon S3, which is accessible to their credentials (meaning that it is either public, or their credentials have been given permission to access the template in S3).
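For illustration only (the stack name, bucket, and parameter values are placeholders), the client would launch the stack with their own credentials against a template you host in S3, and the resources would land in their account:
$ aws cloudformation create-stack \
    --stack-name client-stack \
    --template-url https://my-template-bucket.s3.amazonaws.com/template.yaml \
    --parameters ParameterKey=InstanceType,ParameterValue=t3.micro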

How to setup awscli without setting up access key & secret access key?

I tried to set up the aws-cli locally using an IAM role, without using an access key/secret access key, but I am unable to get information from the metadata URL [http://169.256.169.256/latest/meta-data].
I am running an EC2 instance with Ubuntu Server 16.04 LTS (HVM), SSD Volume Type - ami-f3e5aa9c. I have tried to configure the aws-cli on that instance. I am not sure what type of role/policy/user is needed to get the aws-cli configured on my EC2 instance.
Please provide a step-by-step guide to achieve that. I just need direction, so useful links are also appreciated.
To read instance metadata, you don't need to configure the AWS CLI. The problem in your case is that you are using the wrong URL to read the instance metadata. The correct URL to use is http://169.254.169.254/. For example, if you want to read the AMI ID of the instance, you can use the following command:
curl http://169.254.169.254/latest/meta-data/ami-id
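The same endpoint also serves the temporary credentials for any IAM role attached to the instance, which is what the CLI and SDKs pick up automatically. For example (the role name in the second command is hypothetical):
$ # lists the attached role name, then fetches its temporary credentials
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/MyEC2Role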
However, if you would like to configure the AWS CLI without using access/secret keys, follow the steps below.
Create an IAM instance profile and attach it to the EC2 instance:
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles, Create role.
On the Select role type page, choose EC2 and the EC2 use case. Choose Next: Permissions.
On the Attach permissions policy page, select an AWS managed policy that grants your instances access to the resources that they need.
On the Review page, type a name for the role and choose Create role.
Install the AWS CLI (Ubuntu):
Install pip if it is not installed already.
`sudo apt-get install python-pip`
Install AWS CLI.
`pip install awscli --upgrade --user`
Configure the AWS CLI. Leave AWS Access Key ID and AWS Secret Access Key blank, as we want to use a role.
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-west-2
Default output format [None]: json
Modify the Region and Output Format values if required.
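Once the instance profile is attached and the CLI is configured this way, you can verify that the CLI picks up the role's temporary credentials (the output will reflect your own account and role):
$ aws sts get-caller-identity
$ aws s3 ls   # or any other call the role's policy permits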
I hope this helps!
AWS Documentation on how to setup an IAM role for EC2
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

Handling run time and build time secrets in AWS CodePipeline

We are dealing with the problem of providing build time and run time secrets to our applications built using AWS CodePipeline and being deployed to ECS.
Ultimately, our vision is to create a generic pipeline for each of our applications that achieves the following goals:
Complete separation of access
The services in the app-a-pipeline CANNOT access any of the credentials or use any of the keys used in the app-b-pipeline, and vice versa
Secret management by assigned developers
Only developers responsible for app-a may read and write secrets for app-a
Here are the issues at hand:
Some of our applications require access to private repositories for dependency resolution at build time
For example, our java applications require access to a private maven repository to successfully build
Some of our applications require database access credentials at runtime
For example, the servlet container running our app requires an .xml configuration file containing credentials to find and access databases
Along with some caveats:
Our codebase resides in a public repository. We do not want to expose secrets by putting either the plaintext or the ciphertext of a secret in our repository
We do not want to bake runtime secrets into our Docker images created in CodeBuild even if ECR access is restricted
The CloudFormation template for the ECS resources and its associated parameter file reside in the public repository in plaintext. This eliminates the possibility of passing runtime secrets to the ECS CloudFormation template through parameters (as far as I understand)
We have considered using tools like credstash to help with managing credentials. This solution requires that both CodeBuild and the ECS task instances be able to use the AWS CLI. To avoid shuffling around more credentials, we decided that it might be best to assign privileged roles to the instances that require the AWS CLI, so the CLI can infer credentials from the role in the instance metadata
We have tried to devise a way to manage our secrets given these restrictions. For each app, we create a pipeline. Using a CloudFormation template, we create:
4 resources:
DynamoDB credential table
KMS credential key
ECR repo
CodePipeline (Build, deploy, etc)
3 roles:
CodeBuildRole
Read access to DynamoDB credential table
Decrypt permission with KMS key
Write to ECR repo
ECSTaskRole
Read access to DynamoDB credential table
Decrypt permission with KMS key
Read from ECR repo
DeveloperRole
Read and write access to DynamoDB credential table
Encrypt and decrypt permission with KMS key
The CodeBuild step of the CodePipeline assumes the CodeBuildRole to allow it to read build-time secrets from the credential table. CodeBuild then builds the project and generates a Docker image, which it pushes to ECR. Eventually, the deploy step creates an ECS service using the CloudFormation template and the accompanying parameter file present in the project's public repository. The ECS task definition assumes the ECSTaskRole to allow the tasks to read runtime secrets from the credential table and to pull the required image from ECR.
Here is a simple diagram of the AWS resources and their relationships as stated above
Our current proposed solution has the following issues:
Role heavy
Creating roles is a privileged action in our organization. Not all developers who try to create the above pipeline will have permission to create the necessary roles
Manual assumption of DeveloperRole:
As it stands, developers would need to manually assume the DeveloperRole. We toyed with the idea of passing in a list of developer user ARNs as a parameter to the pipeline CloudFormation template. Does CloudFormation have a mechanism to assign a role or policy to a specified user?
Is there a more well established way to pass secrets around in CodePipeline that we might be overlooking, or is this the best we can get?
Three thoughts:
AWS Secrets Manager
AWS Parameter Store
IAM roles for Amazon ECS tasks
AWS Secrets Manager helps you protect the secrets needed to access applications, services, and IT resources. With it, you can rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
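For example, a CodeBuild project or ECS task whose role is allowed to read a given secret could fetch it at build or run time roughly like this (the secret name is hypothetical):
$ # hypothetical secret name; the role needs secretsmanager:GetSecretValue on it
$ aws secretsmanager get-secret-value \
    --secret-id app-a/database \
    --query SecretString \
    --output text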
AWS Systems Manager Parameter Store can protect access keys with granular access control, which can be based on service roles.
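A sketch of reading a SecureString parameter with the CLI (the parameter name is made up; the calling role needs ssm:GetParameter plus decrypt access to the KMS key):
$ aws ssm get-parameter \
    --name /app-a/db-password \
    --with-decryption \
    --query Parameter.Value \
    --output text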
ECS provides access to the ServiceRole via this pattern:
build:
  commands:
    - curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq 'to_entries | [ .[] | select(.key | (contains("Expiration") or contains("RoleArn")) | not) ] | map(if .key == "AccessKeyId" then . + {"key":"AWS_ACCESS_KEY_ID"} else . end) | map(if .key == "SecretAccessKey" then . + {"key":"AWS_SECRET_ACCESS_KEY"} else . end) | map(if .key == "Token" then . + {"key":"AWS_SESSION_TOKEN"} else . end) | map("export \(.key)=\(.value)") | .[]' -r > /tmp/aws_cred_export.txt
    - chmod +x /tmp/aws_cred_export.txt
    - . /tmp/aws_cred_export.txt && YOUR COMMAND HERE
If the service role provided to the CodeBuild task has access to use the Parameter Store key, you should be good to go.
Happy hunting, and I hope this helps.
At a high level, you can either isolate applications in a single AWS account with granular permissions (this sounds like what you're trying to do) or by using multiple AWS accounts. Neither is right or wrong per se, but I tend to favor separate AWS accounts over managing granular permissions because your starting place is complete isolation.
