I'm trying to add CloudWatch logging to my API Gateway and have followed existing posts to create the following Terraform:
resource "aws_iam_role" "iam_for_api_gateway" {
name = "${var.name}-api-gateway-role"
description = "custom IAM Limited Role created with \"APIGateway\" as the trusted entity"
path = "/"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
tags = var.resourceTags
}
resource "aws_cloudwatch_log_group" "api_gateway_log_group" {
name = "/aws/lambda/${var.name}-api-gateway"
retention_in_days = 14
}
resource "aws_iam_policy" "api_gateway_logging" {
name = "${var.name}-api-gateway-logging"
path = "/"
description = "IAM policy for logging from the api gateway"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:GetLogEvents",
"logs:FilterLogEvents"
],
"Resource": "arn:aws:logs:*:*:*",
"Effect": "Allow"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "gateway_logs" {
role = aws_iam_role.iam_for_api_gateway.id
policy_arn = aws_iam_policy.api_gateway_logging.arn
}
resource "aws_api_gateway_rest_api" "root_api" {
name = "${var.name}-rest-api-service"
tags = var.resourceTags
}
# at this point there are various resource "aws_api_gateway_resource" "api" blocks, etc
resource "aws_api_gateway_account" "demo" {
cloudwatch_role_arn = aws_iam_role.iam_for_api_gateway.arn
}
resource "aws_api_gateway_deployment" "deployment" {
rest_api_id = aws_api_gateway_rest_api.root_api.id
stage_name = var.envName
depends_on = [
aws_cloudwatch_log_group.api_gateway_log_group,
aws_api_gateway_integration.lang_integration,
aws_api_gateway_account.demo
]
lifecycle {
create_before_destroy = true
}
}
resource "aws_api_gateway_method_settings" "example" {
rest_api_id = aws_api_gateway_rest_api.root_api.id
stage_name = var.envName
method_path = "*/*"
settings {
metrics_enabled = true
logging_level = "ERROR"
}
}
But I am seeing no log entries generated for my API Gateway, though the log group is created.
I was previously getting this error:
Error: updating API Gateway Stage failed: BadRequestException: CloudWatch Logs role ARN must be set in account settings to enable logging
on ..\2-sub-modules\e-api-gateway\main.tf line 627, in resource "aws_api_gateway_method_settings" "example":
627: resource "aws_api_gateway_method_settings" "example" {
But then I updated the resource "aws_api_gateway_method_settings" "example" block (as shown above).
Now, I don't get the above error, but I also don't get any API Gateway logs.
What am I missing?
To fix the issue with "CloudWatch Logs role ARN must be set in account settings to enable logging" you should specify this role in the API Gateway account settings:
resource "aws_api_gateway_account" "demo" {
cloudwatch_role_arn = aws_iam_role.cloudwatch.arn
}
resource "aws_iam_role" "cloudwatch" {
name = "api_gateway_cloudwatch_global"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
Details: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_account
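The role referenced by aws_api_gateway_account also needs permission to push logs to CloudWatch. A minimal sketch (assuming the aws_iam_role.cloudwatch resource above) that attaches the AWS managed policy AmazonAPIGatewayPushToCloudWatchLogs:
resource "aws_iam_role_policy_attachment" "cloudwatch" {
  # AWS managed policy granting the CloudWatch Logs permissions API Gateway needs
  role       = aws_iam_role.cloudwatch.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"
}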
In addition to the information I provided in the comments, I would like to give a more precise answer about why the logs are not displayed and how to display them, in case someone runs into the same problem in the future.
With the logging_level property set to ERROR, only errors will be displayed in CloudWatch.
If we want to log all the requests going through the gateway, we have to use logging_level = "INFO". In order to display all the information related to the request, like the request URI, request headers, request body and so on, we also have to activate the data_trace_enabled property:
resource "aws_api_gateway_method_settings" "example" {
rest_api_id = aws_api_gateway_rest_api.root_api.id
stage_name = var.envName
method_path = "*/*"
settings {
data_trace_enabled = true
metrics_enabled = true
logging_level = "INFO"
}
}
The Terraform data_trace_enabled property corresponds to the "Log full requests/responses data" option in the AWS API Gateway console (while metrics_enabled corresponds to "Enable Detailed CloudWatch Metrics").
Currently there is a known limitation in API Gateway: all log events larger than 1024 bytes are truncated, so keep that in mind if you expect calls with many headers or large bodies.
API Gateway currently limits log events to 1024 bytes. Log events larger than 1024 bytes, such as request and response bodies, will be truncated by API Gateway before submission to CloudWatch Logs.
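Also note that once execution logging is enabled, API Gateway writes these logs to a log group it creates itself, named API-Gateway-Execution-Logs_{rest-api-id}/{stage-name}, not to the /aws/lambda/... group defined in the question. If you want Terraform to manage that group (for example its retention), a sketch assuming the rest API and stage variable from the question:
resource "aws_cloudwatch_log_group" "api_gateway_execution_logs" {
  # the name must match the group API Gateway writes execution logs to
  name              = "API-Gateway-Execution-Logs_${aws_api_gateway_rest_api.root_api.id}/${var.envName}"
  retention_in_days = 14
}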
Related
I am trying to create an Elastic Beanstalk application but I am coming across the error:
The instance profile iam_for_beanstalk associated with the environment does not exist.
The role does exist (I can see it in the IAM console), and it is being created via Terraform through the following code:
resource "aws_iam_role" "beanstalk" {
name = "iam_for_beanstalk"
assume_role_policy = file("${path.module}/assumerole.json")
}
assumerole.json looks like this:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "elasticbeanstalk.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "elasticbeanstalk"
}
}
}]
}
And here is how I try to associate it to the newly created application:
resource "aws_elastic_beanstalk_environment" "nodejs" {
application = aws_elastic_beanstalk_application.nodejs.name
name = "stackoverflow"
version_label = var.app_version
solution_stack_name = "64bit Amazon Linux 2 v5.4.6 running Node.js 14"
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "iam_for_beanstalk"
}...
I also tried to assign the name like below, but no success:
value = aws_iam_role.beanstalk.name
You also need to create an aws_iam_instance_profile resource:
resource "aws_iam_instance_profile" "beanstalk_instance_profile" {
name = "stack-overflow-example"
role = aws_iam_role.beanstalk.name
}
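Then point the IamInstanceProfile setting at the instance profile (not the role); a sketch based on the environment block from the question:
setting {
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "IamInstanceProfile"
  # reference the instance profile's name, not aws_iam_role.beanstalk.name
  value     = aws_iam_instance_profile.beanstalk_instance_profile.name
}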
I have seen a lot of topics opened for this kind of issue, but I have not been able to resolve it.
I'm trying to create an AWS IAM role with an attached policy, but I always get this error:
Error: Error creating IAM Role test-role: MalformedPolicyDocument: JSON strings must not have leading spaces
My configuration follows the documentation:
Role : https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role
Policy attachment: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment
Here is my configuration:
resource "aws_iam_instance_profile" "test-role-profile" {
name = "test-role-profile"
role = aws_iam_role.test-role.name
}
resource "aws_iam_role" "test-role" {
name = "test-role"
assume_role_policy = <<EOF
    {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecr.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_policy" "test-role-policy" {
name = "test-role-policy"
description = "Test role policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:CreateRepository",
"ecr:DescribeImages",
"ecr:DescribeRegistry",
"ecr:DescribeRepositories",
"ecr:GetAuthorizationToken",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:GetRegistryPolicy",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"ecr:ListTagsForResource",
"ecr:PutLifecyclePolicy",
"ecr:PutRegistryPolicy",
"ecr:SetRepositoryPolicy",
"ecr:StartLifecyclePolicyPreview",
"ecr:PutImage"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "test-role-attach" {
role = aws_iam_role.test-role.name
policy_arn = aws_iam_policy.test-role-policy.arn
}
Version: Terraform v0.12.31
Does anyone have an idea? Thanks.
You have some space before the first { character in the JSON string here:
resource "aws_iam_role" "test-role" {
name = "test-role"
assume_role_policy = <<EOF
    {
It should look like this instead:
resource "aws_iam_role" "test-role" {
name = "test-role"
assume_role_policy = <<EOF
{
I personally recommend either switching to the jsonencode() function for building JSON strings, which you can see examples of in your first link, or using the aws_iam_policy_document data source to construct your IAM policies.
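For example, a sketch of the same assume-role policy built with jsonencode(), which sidesteps heredoc whitespace problems entirely:
resource "aws_iam_role" "test-role" {
  name = "test-role"

  # jsonencode() always produces a JSON string with no leading whitespace
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action    = "sts:AssumeRole"
        Principal = { Service = "ecr.amazonaws.com" }
        Effect    = "Allow"
        Sid       = ""
      }
    ]
  })
}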
I want to create a list of S3 buckets and limit access to them to one user. That user should only have access to that bucket and no permissions to do other things in AWS.
I created my list as so (bucket names are not real in this example):
// List bucket names as a variable
variable "s3_bucket_name" {
type = "list"
default = [
"myfirstbucket",
"mysecondbucket",
...
]
}
Then I create a user.
// Create a user
resource "aws_iam_user" "aws_aim_users" {
count = "${length(var.s3_bucket_name)}"
name = "${var.s3_bucket_name[count.index]}"
path = "/"
}
I then create an access key.
// Create an access key
resource "aws_iam_access_key" "aws_iam_access_keys" {
count = "${length(var.s3_bucket_name)}"
user = "${var.s3_bucket_name[count.index]}"
// user = "${aws_iam_user.aws_aim_user.name}"
}
Now I create a user policy
// Add user policy
resource "aws_iam_user_policy" "aws_iam_user_policies" {
// user = "${aws_iam_user.aws_aim_user.name}"
count = "${length(var.s3_bucket_name)}"
name = "${var.s3_bucket_name[count.index]}"
user = "${var.s3_bucket_name[count.index]}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:GetLifecycleConfiguration",
...
],
"Resource": "${var.s3_bucket_name[count.index].arn}}"
}
]
}
EOF
}
Now I create my buckets with the user attached.
resource "aws_s3_bucket" "aws_s3_buckets" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
policy = <<POLICY
{
"Id": "Policy1574607242703",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1574607238413",
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": {
"${var.s3_bucket_name[count.index].arn}}"
"${var.s3_bucket_name[count.index].arn}/*}"
},
"Principal": {
"AWS": "${var.s3_bucket_name[count.index]}"
}
}
]
}
POLICY
tags = {
Name = "${var.s3_bucket_name[count.index]}"
Environment = "live"
}
}
The problem I have is that Terraform doesn't like where I have set the ARN in the policy using my variable.
I also believe I need to use the user's ARN, not the bucket's, in the Principal, although they should have the same name. What am I doing wrong here?
I think I see a few things that might be able to help you out.
The bucket policy's Resource entries can't be derived by calling .arn on your variable, because the variable only holds plain bucket names; you need to build the ARN string from the name, so it would look like "arn:aws:s3:::my-bucket" (see the sketch after this answer).
I also see a few extra }'s in your setup there which could also be causing problems.
Also, Terraform is now on version 0.12, which removes the need for the "${resource.thing}" interpolation syntax and replaces it with resource.thing instead. There is a helpful terraform 0.12upgrade command that upgrades your files, which is nice. Terraform 0.12 also adjusted how resource creation like yours is done: https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each/
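As a sketch of those points together (Terraform 0.12 syntax; only the single action from the question is shown, and the resource names come from your configuration):
resource "aws_iam_user_policy" "aws_iam_user_policies" {
  count = length(var.s3_bucket_name)
  name  = var.s3_bucket_name[count.index]
  user  = aws_iam_user.aws_aim_users[count.index].name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "VisualEditor0"
        Effect = "Allow"
        Action = ["s3:GetLifecycleConfiguration"]
        # build the ARN string from the bucket name; the variable has no .arn attribute
        Resource = [
          "arn:aws:s3:::${var.s3_bucket_name[count.index]}",
          "arn:aws:s3:::${var.s3_bucket_name[count.index]}/*",
        ]
      }
    ]
  })
}
In the bucket policy, the Principal would similarly be the user's ARN, e.g. aws_iam_user.aws_aim_users[count.index].arn, rather than the bucket name.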
I'd like to create and deploy a cluster using terraform ecs_service, but am unable to do so. My terraform apply runs always fail around IAM roles, which I don't clearly understand. Specifically, the error message is:
InvalidParametersException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
And I have found that:
When I have iam_role specified in ecs_service, ECS complains that I need to use a service-linked role.
When I have iam_role commented out in ecs_service, ECS complains that the assumed role cannot validate the targetGroupArn.
My Terraform spans a bunch of files; I pulled what feel like the relevant portions out below. Though I have seen a few similar problems posted, none have provided an actionable solution to this dilemma for me.
## ALB
resource "aws_alb" "frankly_internal_alb" {
name = "frankly-internal-alb"
internal = false
security_groups = ["${aws_security_group.frankly_internal_alb_sg.id}"]
subnets = ["${aws_subnet.frankly_public_subnet_a.id}", "${aws_subnet.frankly_public_subnet_b.id}"]
}
resource "aws_alb_listener" "frankly_alb_listener" {
load_balancer_arn = "${aws_alb.frankly_internal_alb.arn}"
port = "8080"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
type = "forward"
}
}
## Target Group
resource "aws_alb_target_group" "frankly_internal_target_group" {
name = "internal-target-group"
port = 8080
protocol = "HTTP"
vpc_id = "${aws_vpc.frankly_vpc.id}"
health_check {
healthy_threshold = 5
unhealthy_threshold = 2
timeout = 5
}
}
## IAM
resource "aws_iam_role" "frankly_ec2_role" {
name = "franklyec2role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role" "frankly_ecs_role" {
name = "frankly_ecs_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecs.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
# aggressively add permissions...
resource "aws_iam_policy" "frankly_ecs_policy" {
name = "frankly_ecs_policy"
description = "A test policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:*",
"ecs:*",
"ecr:*",
"autoscaling:*",
"elasticloadbalancing:*",
"application-autoscaling:*",
"logs:*",
"tag:*",
"resource-groups:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "frankly_ecs_attach" {
role = "${aws_iam_role.frankly_ecs_role.name}"
policy_arn = "${aws_iam_policy.frankly_ecs_policy.arn}"
}
## ECS
resource "aws_ecs_cluster" "frankly_ec2" {
name = "frankly_ec2_cluster"
}
resource "aws_ecs_task_definition" "frankly_ecs_task" {
family = "service"
container_definitions = "${file("terraform/task-definitions/search.json")}"
volume {
name = "service-storage"
docker_volume_configuration {
scope = "shared"
autoprovision = true
}
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-east-1]"
}
}
resource "aws_ecs_service" "frankly_ecs_service" {
name = "frankly_ecs_service"
cluster = "${aws_ecs_cluster.frankly_ec2.id}"
task_definition = "${aws_ecs_task_definition.frankly_ecs_task.arn}"
desired_count = 2
iam_role = "${aws_iam_role.frankly_ecs_role.arn}"
depends_on = ["aws_iam_role.frankly_ecs_role", "aws_alb.frankly_internal_alb", "aws_alb_target_group.frankly_internal_target_group"]
# network_configuration = {
# subnets = ["${aws_subnet.frankly_private_subnet_a.id}", "${aws_subnet.frankly_private_subnet_b}"]
# security_groups = ["${aws_security_group.frankly_internal_alb_sg}", "${aws_security_group.frankly_service_sg}"]
# # assign_public_ip = true
# }
ordered_placement_strategy {
type = "binpack"
field = "cpu"
}
load_balancer {
target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
container_name = "search-svc"
container_port = 8080
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-east-1]"
}
}
I was seeing an identical error message, and I was doing something else wrong:
I had specified the load balancer's ARN and not the load balancer's target group ARN.
For me, the problem was that I forgot to attach the right policy to the service role. Attaching this AWS-managed policy helped: arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole
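In Terraform, that attachment could be sketched like this (assuming the frankly_ecs_role from the question):
resource "aws_iam_role_policy_attachment" "frankly_ecs_service_role" {
  role       = aws_iam_role.frankly_ecs_role.name
  # AWS managed policy for the ECS service role (ELB/target group registration, etc.)
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}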
For me, I was using the output of a previous command, but that output was empty, so the target group ARN was empty in the create-service call.
I had the wrong role attached.
resource "aws_ecs_service" "ECSService" {
name = "stage-quotation"
cluster = aws_ecs_cluster.ECSCluster2.id
load_balancer {
target_group_arn = aws_lb_target_group.ElasticLoadBalancingV2TargetGroup2.arn
container_name = "stage-quotation"
container_port = 8000
}
desired_count = 1
task_definition = aws_ecs_task_definition.ECSTaskDefinition.arn
deployment_maximum_percent = 200
deployment_minimum_healthy_percent = 100
iam_role = aws_iam_service_linked_role.IAMServiceLinkedRole4.arn #
ordered_placement_strategy {
type = "spread"
field = "instanceId"
}
health_check_grace_period_seconds = 0
scheduling_strategy = "REPLICA"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole2" {
aws_service_name = "ecs.application-autoscaling.amazonaws.com"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole4" {
aws_service_name = "ecs.amazonaws.com"
description = "Role to enable Amazon ECS to manage your cluster."
}
I accidentally used my role for application-autoscaling due to poor naming convention. The correct role we need to use is defined above as IAMServiceLinkedRole4.
In order to prevent the error:
Error: creating ECS Service (*****): InvalidParameterException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
The following configuration is working on my side:
Role trust relationship: add this statement to the trust policy:
{
"Sid": "ECSpermission",
"Effect": "Allow",
"Principal": {
"Service": [
"ecs.amazonaws.com",
"ecs-tasks.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
Role permissions:
Attach these AWS managed policies:
AmazonEC2ContainerRegistryFullAccess
AmazonEC2ContainerServiceforEC2Role
And add this custom inline policy (I know these permissions are overly broad):
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"autoscaling:*",
"elasticloadbalancing:*",
"application-autoscaling:*",
"resource-groups:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
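If you manage that role with Terraform, it could be sketched roughly like this (role and resource names here are assumptions):
resource "aws_iam_role" "my_custom_role" {
  name = "my_custom_role"

  # trust policy from above, allowing ECS to assume the role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "ECSpermission"
        Effect = "Allow"
        Principal = {
          Service = ["ecs.amazonaws.com", "ecs-tasks.amazonaws.com"]
        }
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "registry_full_access" {
  role       = aws_iam_role.my_custom_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess"
}

resource "aws_iam_role_policy_attachment" "service_for_ec2_role" {
  role       = aws_iam_role.my_custom_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}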
Declare your custom role with the iam_role parameter in the "aws_ecs_service" resource:
resource "aws_ecs_service" "team_deployment" {
name = local.ecs_task
cluster = data.terraform_remote_state.common_resources.outputs.ecs_cluster.id
task_definition = aws_ecs_task_definition.team_deployment.arn
launch_type = "EC2"
iam_role = "arn:aws:iam::****:role/my_custom_role"
desired_count = 3
enable_ecs_managed_tags = true
force_new_deployment = true
scheduling_strategy = "REPLICA"
wait_for_steady_state = false
load_balancer {
target_group_arn = data.terraform_remote_state.common_resources.outputs.target_group_api.arn
container_name = var.ecr_image_tag
container_port = var.ecr_image_port
}
}
Of course, be careful with the target_group_arn parameter value: it must be the target group ARN. With that in place, it now works fine:
Releasing state lock. This may take a few moments...
Apply complete! Resources: 1 added, 2 changed, 0 destroyed.
Resolved by destroying my stack and re-deploying.
I'm trying to create a CodeBuild project using Terraform, but when I build I'm getting the following error on the DOWNLOAD_SOURCE step:
CLIENT_ERROR: repository not found for primary source and source version
This project uses a CodeCommit repository as the source. It's odd because all of the links to the repository from the CodeCommit console GUI work fine for this build: I can see the commits, click on the link and get to the CodeCommit repo, etc., so the source setup seems to be fine. The policy used for the build has "codecommit:GitPull" permissions on the repository.
Strangely, if I go to the build in the console and uncheck the "Allow AWS CodeBuild to modify this service role so it can be used with this build project" checkbox then Update Sources, the build will work! But I can't find any way to set this from Terraform, and it will default back on if you go back to the Update Sources screen.
Here is the Terraform code I'm using to create the build.
# IAM role for CodeBuild
resource "aws_iam_role" "codebuild_myapp_build_role" {
name = "mycompany-codebuild-myapp-build-service-role"
description = "Managed by Terraform"
path = "/service-role/"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "codebuild.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
# IAM policy for the CodeBuild role
resource "aws_iam_policy" "codebuild_myapp_build_policy" {
name = "mycompany-codebuild-policy-myapp-build-us-east-1"
description = "Managed by Terraform"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetAuthorizationToken",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"logs:CreateLogStream",
"codecommit:GitPull",
"logs:PutLogEvents",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build",
"arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build:*",
"arn:aws:s3:::codepipeline-us-east-1-*",
"arn:aws:codecommit:us-east-1:000000000000:mycompany-devops-us-east-1"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": [
"arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build",
"arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build:*"
]
}
]
}
POLICY
}
# attach the policy
resource "aws_iam_role_policy_attachment" "codebuild_myapp_build_policy_att" {
role = "${aws_iam_role.codebuild_myapp_build_role.name}"
policy_arn = "${aws_iam_policy.codebuild_myapp_build_policy.arn}"
}
# codebuild project
resource "aws_codebuild_project" "codebuild_myapp_build" {
name = "myapp-build"
build_timeout = "60"
service_role = "${aws_iam_role.codebuild_myapp_build_role.arn}"
artifacts {
type = "NO_ARTIFACTS"
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/docker:17.09.0"
type = "LINUX_CONTAINER"
privileged_mode = "true"
environment_variable {
"name" = "AWS_DEFAULT_REGION"
"value" = "us-east-1"
}
environment_variable {
"name" = "AWS_ACCOUNT_ID"
"value" = "000000000000"
}
environment_variable {
"name" = "IMAGE_REPO_NAME"
"value" = "myapp-build"
}
environment_variable {
"name" = "IMAGE_TAG"
"value" = "latest"
}
environment_variable {
"name" = "DOCKERFILE_PATH"
"value" = "docker/codebuild/myapp_build_agent"
}
}
source {
type = "CODECOMMIT"
location = "mycompany-devops-us-east-1"
git_clone_depth = "1"
buildspec = "docker/myapp/myapp_build/buildspec.yml"
}
tags {
Name = "myapp-build"
Environment = "${var.env_name}"
Region = "${var.aws_region}"
ResourceType = "CodeBuild Project"
ManagedBy = "Terraform"
}
}
Your problem is the specification of the source:
source {
type = "CODECOMMIT"
location = "mycompany-devops-us-east-1"
Here's the relevant part of the Amazon documentation for the source location:
For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, https://git-codecommit.region-ID.amazonaws.com/v1/repos/repo-name ).
In your case, that is probably something like this, using the clone URL found in the CodeCommit console:
https://git-codecommit.us-east-1.amazonaws.com/v1/repos/mycompany-devops-us-east-1
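So the source block would look roughly like this (assuming that clone URL):
source {
  type            = "CODECOMMIT"
  # HTTPS clone URL of the repository, not just its name
  location        = "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/mycompany-devops-us-east-1"
  git_clone_depth = "1"
  buildspec       = "docker/myapp/myapp_build/buildspec.yml"
}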
I ran into this while using a private GitHub repository source. In my case I gave the repository URL, not the clone link, so the problem was very similar:
bad: https://github.com/privaterepo/reponame
good: https://github.com/privaterepo/reponame.git