I used a 'null_resource' and passed an AWS CLI command to 'local-exec' to update the Step Function:
resource "null_resource" "enable_step_function_xray" {
triggers = {
state_machine_arn = xxxxxxx
}
provisioner "local-exec" {
command = "aws stepfunctions update-state-machine --state-machine-arn ${self.triggers.state_machine_arn} --tracing-configuration enabled=true"
}
}
This works fine when I test it with local Terraform; my question is whether it will also work if I apply Terraform on Concourse.
It depends entirely on whether you have the Concourse job configured to use a container image that has the AWS CLI installed. If the AWS CLI is installed and on the PATH, then the local-exec should succeed; if not, it will obviously fail.
My assumption is that on your local machine you've already set up the required credentials, so if you simply try it on Concourse CI it will fail with an authentication error.
To set it up in Concourse:
AWS Console - Create a new IAM user cicd with programmatic access only and the relevant permissions. For testing purposes, you can use the AdministratorAccess policy, but make sure to make it least-privileged later on.
AWS Console - Create AWS security credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) for the cicd user (save them in a safe place)
Concourse CI - Create the secrets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
Concourse CI - Add ((AWS_ACCESS_KEY_ID)) and ((AWS_SECRET_ACCESS_KEY)) environment variables to your Concourse CI task
I'm sure there are many tutorials on this subject, but the above steps will probably appear in most of them. Concourse CI should now be able to apply changes to your AWS account.
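For illustration, a minimal Concourse task wiring in those credentials might look like the sketch below. The job, resource, and image names are assumptions rather than a tested pipeline; the key points are that the image must also contain the AWS CLI for the local-exec to work, and that the credentials arrive via the ((...)) vars:

jobs:
  - name: terraform-apply
    plan:
      - get: repo                                         # git resource holding the Terraform code
      - task: apply
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: my-registry/terraform-awscli    # placeholder: an image with terraform AND the aws cli
          inputs:
            - name: repo
          params:
            AWS_ACCESS_KEY_ID: ((AWS_ACCESS_KEY_ID))
            AWS_SECRET_ACCESS_KEY: ((AWS_SECRET_ACCESS_KEY))
            AWS_DEFAULT_REGION: eu-west-1                  # pick your region
          run:
            path: sh
            args: ["-ec", "cd repo && terraform init && terraform apply -auto-approve"]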
What I want to do
I want to create a Node.js server (built with Nest.js) in the following infrastructure:
[infrastructure diagram image]
GitHub repo is here.
Notice:
ECS is placed in a private subnet.
I want to use PrivateLink to connect to AWS services (ECR and S3 in my case) rather than a NAT gateway in a public subnet.
The infrastructure is built from a CloudFormation stack via the AWS CDK Toolkit.
The Node.js server is a simple app that responds with 'Hello World!'.
Current behavior
When I deploy the AWS CloudFormation stack with cdk deploy, it gets stuck during ECS service creation in the CREATE_IN_PROGRESS state. I can see ECS task execution error logs in the ECS management console as follows:
STOPPED (ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post https://api.ecr.ap-northeast-1.amazonaws.com/: dial tcp 99.77.62.61:443: i/o timeout)
If I don't delete the stack or set the minimum number of tasks to 0, the ECS service keeps trying to run tasks for hours and finally hits a timeout error.
I have already checked some points based on this official article.
Create VPC endpoints (com.amazonaws.region.ecr.dkr, com.amazonaws.region.ecr.api, S3; see the CDK sketch below)
Configure VPC endpoints (security group, subnets to place them in, IAM policy)
Add permissions to ECS task execution role so that ECS can pull image from ECR
Check if the image exists in ECR
And I have confirmed the 'Hello World' response with this Docker image on my local machine.
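Since the S3 endpoint in the checklist above is a gateway endpoint (ECR pulls image layers from S3), a minimal CDK sketch for it looks like the following; the construct ID and subnet selection are illustrative and not taken from the repo:

// Sketch: S3 gateway endpoint for the private subnets; ID and subnet selection are illustrative.
vpc.addGatewayEndpoint("S3GatewayEndpoint", {
  service: ec2.GatewayVpcEndpointAwsService.S3,
  subnets: [{ subnetType: ec2.SubnetType.PRIVATE_ISOLATED }],
});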
Reproduction Steps
A minimal GitHub repo is here.
$ git clone https://github.com/Fanta335/cdk-ecs-nest-app
$ cd cdk-ecs-nest-app
$ npm install
The AWS CDK Toolkit is used in this project, so you need to run npm install -g aws-cdk if you have not installed it on your local machine.
And if you have not set a default IAM user configuration in the AWS CLI, you need to run aws configure so that credentials and region are available for deploying the CloudFormation stack.
$ cdk deploy
Then the deployment should get stuck.
Versions
MacOS Monterey 12.6
AWS CDK cli 2.43.1 (build c1ebb85)
AWS cli aws-cli/2.7.28 Python/3.9.11 Darwin/21.6.0 exe/x86_64 prompt/off
Docker version 20.10.17, build 100c701
Nest cli 9.1.3
The problem was that DNS resolution had not been enabled on the ECR VPC endpoints. I should have set privateDnsEnabled: true manually on the InterfaceVpcEndpoint instances in the cdk-ecs-nest-app-stack.ts file, as follows:
const ECSPrivateLinkAPI = new ec2.InterfaceVpcEndpoint(this, "ECSPrivateLinkAPI", {
  vpc,
  service: new ec2.InterfaceVpcEndpointService(`com.amazonaws.${REGION}.ecr.api`),
  securityGroups: [securityGroupPrivateLink],
  privateDnsEnabled: true, // HERE
});

const ECSPrivateLinkDKR = new ec2.InterfaceVpcEndpoint(this, "ECSPrivateLinkDKR", {
  vpc,
  service: new ec2.InterfaceVpcEndpointService(`com.amazonaws.${REGION}.ecr.dkr`),
  securityGroups: [securityGroupPrivateLink],
  privateDnsEnabled: true, // HERE
});
According to the CDK docs, the default value of privateDnsEnabled is defined by the service which uses this VPC endpoint.
privateDnsEnabled?
Type: boolean (optional, default: set by the instance of IInterfaceVpcEndpointService, or true if not defined by the instance of IInterfaceVpcEndpointService)
I haven't checked the default privateDnsEnabled values of com.amazonaws.${REGION}.ecr.api and com.amazonaws.${REGION}.ecr.dkr, but we have to set it to true manually in the CDK code.
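If you want to verify the setting after deployment, the flag is visible via the AWS CLI; a small sketch (filter further by VPC or service name as needed):

# List VPC endpoints with their private-DNS flag; the ECR entries should now show true.
aws ec2 describe-vpc-endpoints \
  --query 'VpcEndpoints[].{Service:ServiceName,PrivateDns:PrivateDnsEnabled}' \
  --output table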
I want to pass variables from GitLab to my Terraform. I can see GitLab getting the vars, but it errors out at the terraform apply.
In my GitLab file I have:
- echo "$CI_COMMIT_REF_SLUG"
- echo "$GITLAB_USER_NAME"
- terraform apply --auto-approve -var branch_name=$CI_COMMIT_REF_SLUG -var branch_creator=$GITLAB_USER_NAME
In my Terraform I'm declaring branch_name etc. as variables.
I keep getting this error in my GitLab build:
Failed to load Terraform configuration or plan: open "last name": no such file or directory
You could try again with GitLab 13.5 (October 2020)
Get started quickly with GitLab and Terraform
A new GitLab CI/CD template enables you to set up Terraform pipelines without any manual work, lowering the barrier to entry for your teams to adopt Terraform.
See Documentation and Issue.
From that generated example, you can check if it includes a similar terraform call and compare it with your use-case.
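As a rough sketch, including that template is a one-liner (the template name is taken from the GitLab docs). When comparing it with your own job, quoting the -var values is also worth trying, since an unquoted $GITLAB_USER_NAME containing a space would be split into two arguments, which matches the "last name" fragment in your error:

# Sketch only; adapt stages/jobs to your pipeline.
include:
  - template: Terraform.gitlab-ci.yml

# If you keep your own apply job instead, quote the -var values:
#   terraform apply --auto-approve -var "branch_name=$CI_COMMIT_REF_SLUG" -var "branch_creator=$GITLAB_USER_NAME"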
Created a simple Azure DevOps release pipeline to provision a resource group. I tested the Terraform script with a remote state file locally and checked the code into Git. This is how the code is organized:
IAC (root folder)
/bin/terraform.exe
main.tf (this has terraform configuration with remote state)
Created a release pipeline pointing to this repository as the code artifact. The pipeline gives the alias _IAC to the artifact.
In the pipeline I have PowerShell tasks to log in to Azure using a service principal,
then the following line:
$(System.DefaultWorkingDirectory)/_IAC/bin/terraform init
This command executes but says there is no Terraform configuration file:
2020-03-05T02:23:04.4536130Z Terraform initialized in an empty directory!
2020-03-05T02:23:04.4536786Z
2020-03-05T02:23:04.4556953Z The directory has no Terraform configuration files. You may begin working
2020-03-05T02:23:04.4559693Z with Terraform immediately by creating Terraform configuration files.
The working directory where the Azure release pipeline agent was running terraform init did not contain the configuration file.
I had to use a copy operation to copy the main.tf file to the $(AgentWorkingDirectory).
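Either copy the .tf files next to where Terraform runs, or simply run init from the artifact folder. A minimal PowerShell sketch of the latter, using the same paths as above:

# Run init from the directory that actually contains main.tf,
# instead of from the agent's default working directory.
Set-Location "$(System.DefaultWorkingDirectory)/_IAC"
& "$(System.DefaultWorkingDirectory)/_IAC/bin/terraform" init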
The source definition given below works for Terraform modules, BUT it has a PAT token hard-coded. It works fine on a local VM as well as on Azure Pipelines. This question is about how to define the source for Terraform modules without hard-coding a PAT token.
Working copy of code:
source = "git::https://<PAT TOKEN>#<AZURE DEVOPS URL>/DefaultCollection/<Project Name>y/_git/terraform-modules//<sub directory>"
I tried the below:
git::https://<AZURE DEVOPS URL>/DefaultCollection/<Project Name>/_git/terraform-modules.git//<sub directory>
That gave me an error like the one below:
"git::https://<AZURE DEVOPS URL>/DefaultCollection/<Project Name>/_git/terraform-modules":
error downloading
'https://<AZURE DEVOPS URL>/DefaultCollection/<Project Name>/_git/terraform-modules':
/usr/bin/git exited with 128: Cloning into
'.terraform/modules/resource_group'...
fatal: could not read Username for 'https://<AZURE DEVOPS URL>':
terminal prompts disabled
Added my user name without the domain part like below:
source = "git::https://<USERNAMEM#<AZURE DEVOPS URL>/DefaultCollection/<PROJECT NAME>/_git/terraform-modules.git//compute"
Error below:
"git::https://<USERNAME>#<AZURE DEVOPS>/DefaultCollection/<PROJECT>/_git/terraform-modules.git":
error downloading
'https://<USERNAME>#<AZURE DEVOPS>/DefaultCollection/<PROJECT>/_git/terraform-modules.git':
/usr/bin/git exited with 128: Cloning into '.terraform/modules/sql_vms'...
fatal: could not read Password for
'https://<USERNAME>#<AZURE DEVOPS>': terminal prompts disabled
When the build pipeline can do a checkout even without specifying a username and password, why do we have to specify them in the Terraform code?
The Azure Pipelines agent has Git credentials. Not sure if this is going to work at all without a PAT token?
Have a look at this - Is it possible to authenticate to a remote Git repository using the default windows credentials non interactively?
So, in our case we discovered that just running git config --global http.emptyAuth true before Terraform resolves the problem. The :@ business is not needed unless your Terraform module repository is an LFS repo, but that is not our case, so we did not need it.
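In pipeline terms this boils down to something like the sketch below; the module source stays in the PAT-free form shown earlier, and the git config line is the only addition:

# Run once in the CI job (or on the agent) before init; the module source needs no PAT, e.g.
#   source = "git::https://<AZURE DEVOPS URL>/DefaultCollection/<Project Name>/_git/terraform-modules//<sub directory>"
git config --global http.emptyAuth true
terraform init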
I'm deploying a Django based project on AWS Elastic Beanstalk.
I have been following the Amazon example, where I add my credentials (ACCESS_KEY/SECRET) to my app.config under the .ebextensions directory.
The same config file has:
container_commands:
  01_syncdb:
    command: "django-admin.py migrate --noinput"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
    leader_only: true
The problem is that this forces me to keep my credentials under version control, and I would like to avoid that.
I tried to remove the credentials and then add them with eb setenv, but the problem is that the two Django commands require these settings to be set in the environment.
I'm using the v3 cli:
eb create -db -c foo bar --profile foobar
where foobar is the name of the profile under ~/.aws/credentials, and where I want to keep my secret credentials.
What are the best security practices for AWS credentials when using EB?
One solution is to keep the AWS credentials, but create a policy that ONLY allows them to POST objects on the one bucket used for /static.
I ended up removing the collectstatic step from the config file and simply taking care of uploading static files on the build side.
After that, all credentials can be removed and all other boto commands will grab the credentials from the security role on the EC2 instance.
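For reference, the trimmed config file then only keeps the migrate command (a sketch based on the snippet above; statics are uploaded at build time and credentials come from the instance role instead):

container_commands:
  01_syncdb:
    command: "django-admin.py migrate --noinput"
    leader_only: true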