New to the AWS CLI, trying to get the caller identity: "Provided region_name '[us-east-2]' doesn't match a supported format." What do I do?

Default region name [[us-east-2]]:
Default output format [Json]:
C:\Users\jovik>aws sts get-caller-identity
Provided region_name '[us-east-2]' doesn't match a supported format.
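What most likely happened: the square brackets in the aws configure prompts only show the current/default value, so typing [us-east-2] literally stored the brackets in the config file, which the CLI then rejects. A minimal sketch of the fix, assuming the default profile and the standard config location:

aws configure set region us-east-2   # rewrites the region value without the brackets
aws sts get-caller-identity          # should now succeed

Alternatively, edit ~/.aws/config by hand and make sure the line reads region = us-east-2, with no brackets.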

Related

aws_role_arn not being used for Terraform Vault provider in auth_login_aws

I'm hoping to contribute some documentation on auth_login_aws because I'm trying to use the feature described in this feature request.
TL;DR: despite specifying aws_role_arn in the snippet below, the provider is still trying to use the credentials in my ~/.aws/credentials file.
provider "vault" {
address = "http://127.0.0.1:8200"
skip_child_token = true
auth_login_aws {
role = "dev-role-corey-iam"
header_value = "vault.example.com"
aws_role_arn = "arn:aws:iam::1234567890:role/SomeRole"
aws_role_session_name = "corey-test-session"
aws_region = "us-east-1"
}
}
I'm just trying to get Terraform to pull secrets from a local Vault instance using the SomeRole IAM role, which is bound to the dev-role-corey-iam auth role. I'm currently logged in with a different AWS IAM role (PrimaryRole) that can assume SomeRole. When I run terraform plan, the provider uses PrimaryRole, whereas I expect it to use SomeRole because aws_role_arn is specified. Am I misinterpreting what aws_role_arn should do?
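One way to narrow this down, independent of Terraform: check from the CLI whether the credentials in ~/.aws/credentials can actually assume SomeRole. A sketch using the standard AWS CLI; the role ARN and session name simply mirror the placeholder values in the snippet above:

aws sts get-caller-identity    # shows which principal the default credentials resolve to (PrimaryRole here)
aws sts assume-role \
    --role-arn arn:aws:iam::1234567890:role/SomeRole \
    --role-session-name corey-test-session

If the assume-role call succeeds, the credential chain itself is fine, and the question really is whether auth_login_aws performs that assume-role step for you.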

Boto3: Get details of all available S3 buckets

I am trying to list the metadata about all the S3 buckets available in my AWS account using the boto3 client.
I tried the API below:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.list_buckets but it only returns the bucket name and creation date.
I am looking for more details like:
Bucket region
Bucket status (e.g. Active)
Bucket ID (if there is one)
It would be even more helpful if there were a single API that returned all of these details, the way EC2's describe_instances returns richer metadata.
Any help is highly appreciated
Hope this is of some help!
S3 region? I don't think S3 is region-specific anymore, and the bucket name is already a unique value.
As you can see in the same document, there are the functions below for collecting different kinds of metadata.
get_bucket_accelerate_configuration()
get_bucket_acl()
get_bucket_analytics_configuration()
get_bucket_cors()
get_bucket_encryption()
get_bucket_inventory_configuration()
get_bucket_lifecycle()
get_bucket_lifecycle_configuration()
get_bucket_location()
get_bucket_logging()
get_bucket_metrics_configuration()
get_bucket_notification()
get_bucket_notification_configuration()
get_bucket_policy()
get_bucket_policy_status()
get_bucket_replication()
get_bucket_request_payment()
get_bucket_tagging()
get_bucket_versioning()
get_bucket_website()
These exist to segregate the information into the specific pieces a user needs.
In my opinion, you are looking for get_bucket_inventory_configuration, which returns an inventory configuration (identified by the inventory ID) from the bucket.
This will also return the ARN, which is a unique ID for every AWS resource.
A sample ARN looks like:
'Bucket': "arn:aws:s3:::10012346561237-rawdata-bucket"
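There is no single describe-style call for buckets, but the pieces can be combined. A minimal boto3 sketch that loops over list_buckets and enriches each entry with its region, tags, and ARN; the bucket_details function name is just illustrative:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_details():
    """Collect name, creation date, region, ARN and tags for every bucket in the account."""
    details = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        # get_bucket_location returns a None LocationConstraint for us-east-1 buckets
        region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
        try:
            tags = s3.get_bucket_tagging(Bucket=name)["TagSet"]
        except ClientError:
            tags = []  # bucket has no tag set
        details.append({
            "Name": name,
            "CreationDate": bucket["CreationDate"],
            "Region": region,
            "Arn": "arn:aws:s3:::" + name,
            "Tags": tags,
        })
    return details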

How can I fake cloud provider credentials in Terraform?

I am writing a cross-cloud Terraform module (for google and aws) which accepts a cloud input variable and applies it accordingly, for example:
variable "cloud" {}
resource "google_example" {
count = "${var.cloud == "google" ? 1 : 0}"
}
resource "aws_example" {
count = "${var.cloud == "aws" ? 1 : 0}"
}
The problem with this approach is that I only want to provide credentials for the selected cloud, not both. Setting cloud=aws, for example, I get:
Error: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
Is there any way to fake cloud provider credentials for the non-selected cloud, or do I need to implement some sort of Terraform templating?
Every resource accepts a provider field, which tells it which provider to use. I think you may need to set real credentials for both providers, or else Terraform will fail to initialise if you use a backend configuration. Otherwise, set fake credentials as environment variables and see what happens:
TF_VAR_AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" # et cetera
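For the AWS side, "fake credentials as environment variables" could look like the following; these are the standard AWS SDK variables, and the values are deliberately dummies (whether the plan succeeds without real credentials depends on what the provider tries to validate):

export AWS_ACCESS_KEY_ID="fake-access-key"
export AWS_SECRET_ACCESS_KEY="fake-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

The Google provider analogously reads GOOGLE_CREDENTIALS or GOOGLE_APPLICATION_CREDENTIALS, though it may still try to parse the JSON those point to.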

Terraform - Access SSM Parameter Store value

I would like some help / guidance on how to securely read the (decrypted) value of an existing SecureString in SSM Parameter Store for use in other Terraform resources.
E.g. we have a GitHub access token stored in SSM for CI; I need to pass this value to the GitHub provider to enable webhooks for CodePipeline.
The SSM parameter is not something managed from Terraform, but its decrypted value can be used.
Is this insecure given the value would end up in the state file? What is the best practice for this type of use case?
Many thanks!
You can use a data source to reference an already existing resource:
data "aws_ssm_parameter" "foo" {
name = "foo"
}
One of the attributes of the data source is value, which contains the actual value of the parameter. You can use it elsewhere in your Terraform code:
data.aws_ssm_parameter.foo.value
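For the GitHub webhook use case from the question, that value can be wired straight into the provider. A sketch, assuming the Terraform GitHub provider's token argument and a parameter named /ci/github_token (both names are illustrative, not from the original post):

data "aws_ssm_parameter" "github_token" {
  name = "/ci/github_token"
}

provider "github" {
  token = data.aws_ssm_parameter.github_token.value
}

On the security point: anything read through a data source does end up in the Terraform state, so an encrypted remote state backend is generally advisable with this pattern.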

Can't access Glacier using AWS CLI

I'm trying to access AWS Glacier (from the command line on Ubuntu 14.04) using something like:
aws glacier list-vaults -
rather than
aws glacier list-vaults --account-id 123456789
The documentation suggests that this should be possible:
You can specify either the AWS Account ID or optionally a '-', in
which case Amazon Glacier uses the AWS Account ID associated with the
credentials used to sign the request.
Unless "credentials used to sign the request" means that I have to explicitly include credentials in the command, rather than rely on my .aws/credentials file, I would expect this to work. Instead, I get:
aws: error: argument --account-id is required
Does anyone have any idea how to solve this?
The - is supposed to be passed as the value of --account-id, like so:
aws glacier list-vaults --account-id -
--account-id is in fact a required option.
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/glacier/list-vaults.html
Says that --account-id is a required parameter for the glacier section of the full AWS API. A little weird, but documented. So yay.
