Issue while recreating AWS authentication profile in Terraform

I followed the steps below to create an SSO profile in AWS from the command prompt and then used it to create an S3 bucket:
Steps to Setup Profile:
In PowerShell, I ran: aws configure sso
sso_start_url = url
sso_region = us-east-1
Note: this opens a browser for login and verification. Once verified, it retrieves the available roles, and I selected a role.
CLI default client Region [eu-west-1]: us-east-1
CLI default output format [json]: json
CLI profile name : <<Provide your choice of name>>
This creates a .aws folder and a config file under your home directory (under \Users\<<username>>)
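For reference, the generated config file typically looks like the following (the profile name, start URL, account ID, and role name here are placeholders, not values from the question):

```ini
[profile my-sso-profile]
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_region = us-east-1
sso_account_id = 111122223333
sso_role_name = MyRoleName
region = us-east-1
output = json
```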
Steps to use profile:
I ran the command below to set the profile in PowerShell:
setx AWS_PROFILE <<profile name>>
and configured the access key and secret key using the command aws configure.
After setting the profile in the current PowerShell session, I tried to create the S3 bucket, but was unable to.
I got an error like:
No valid credentials for AWS provider
Profile recreation steps:
To recreate the profile, I deleted the .aws folder that had been created in the home directory (under \Users\<<username>>\.aws).
Now, when trying to create a new profile using the aws configure sso command, it shows an error like:
The config profile (profile name) could not be found
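One thing worth checking here (an assumption on my part, not confirmed in the question): setx sets AWS_PROFILE persistently for the user, so after the .aws folder is deleted the CLI may still be told to load the now-missing profile. In PowerShell:

```powershell
# Show the profile name the CLI will try to use (if any)
$env:AWS_PROFILE

# Clear it for the current session only
Remove-Item Env:\AWS_PROFILE

# Remove the persistent user-level value that setx created
[Environment]::SetEnvironmentVariable('AWS_PROFILE', $null, 'User')
```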

Related

Can't create google_storage_bucket via Terraform

I'd like to create the following resource via Terraform:
resource "google_storage_bucket" "tf_state_bucket" {
  name          = var.bucket-name
  location      = "EUROPE-WEST3"
  storage_class = "STANDARD"
  versioning {
    enabled = true
  }
  force_destroy            = false
  public_access_prevention = "enforced"
}
Unfortunately, during the execution of terraform apply, I got the following error:
googleapi: Error 403: X@gmail.com does not have storage.buckets.create access to the Google Cloud project. Permission 'storage.buckets.create' denied on resource (or it may not exist)., forbidden
Here's the list of things I tried and checked:
Verified that Google Cloud Storage (JSON) API is enabled on my project.
Checked the IAM roles and permissions: X@gmail.com has the Owner and the Storage Admin roles.
I can create a bucket manually via the Google Console.
Terraform is generally authorised to create resources, for example, I can create a VM using it.
What else can be done to authenticate Terraform to create Google Storage Buckets?
I think you are running the Terraform code in a shell session on your local machine and using a user identity instead of a service account identity.
In that case, to solve your issue from your local machine:
Create a service account for Terraform in the GCP IAM console with the Storage Admin role.
Download a service account key file from IAM.
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable in your shell session to the path of the service account key file.
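A sketch of those three steps with the gcloud CLI (the service account name and project ID are made up for illustration):

```shell
# 1. Create a service account for Terraform
gcloud iam service-accounts create terraform-sa --project my-project-id

# 2. Grant it the Storage Admin role
gcloud projects add-iam-policy-binding my-project-id \
  --member "serviceAccount:terraform-sa@my-project-id.iam.gserviceaccount.com" \
  --role "roles/storage.admin"

# 3. Download a key file and point Terraform's Google provider at it
gcloud iam service-accounts keys create terraform-sa-key.json \
  --iam-account terraform-sa@my-project-id.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/terraform-sa-key.json"
```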
If you run your Terraform code somewhere else, you need to check that Terraform is correctly authenticated to GCP.
Using a key file is not recommended because it is not the most secure approach; that's why it is better to launch Terraform from a CI tool like Cloud Build instead of from your local machine.
From Cloud Build there is no need to download and set a key file.

How to change aws config profile in AWS sdk nodejs

I have configured cross-account S3 bucket access from an EC2 instance. When I log in to the EC2 server and run an AWS CLI command to get data from an S3 bucket created in another AWS account, it works properly. Please find the command below.
aws s3 ls s3://test-bucket-name --profile demo
But I need to do this using the Node.js SDK. I have an application that runs on EC2 and needs to access this bucket's data. Is there any way to access the bucket data from the application using Node.js?
Launch the EC2 instance with an IAM role in account 1 that has permission to assume an IAM role in account 2. That second role grants S3 access to the relevant bucket/objects in account 2.
Code using the AWS JavaScript SDK will automatically get credentials for IAM role 1. Your code can then assume IAM role 2, get temporary credentials, and access the cross-account S3 bucket.
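With the AWS SDK for JavaScript v2, that chain can be sketched like this (the role ARN is a placeholder; ChainableTemporaryCredentials performs the AssumeRole call and refreshes the temporary credentials for you):

```javascript
const AWS = require('aws-sdk');

// Base credentials come automatically from the instance profile (role 1).
// ChainableTemporaryCredentials then assumes role 2 in the other account.
const credentials = new AWS.ChainableTemporaryCredentials({
  params: {
    RoleArn: 'arn:aws:iam::222222222222:role/cross-account-s3-role', // placeholder
    RoleSessionName: 'app-s3-access',
  },
});

const s3 = new AWS.S3({ credentials });

s3.listObjectsV2({ Bucket: 'test-bucket-name' }, (err, data) => {
  if (err) return console.error(err);
  console.log(data.Contents.map((obj) => obj.Key));
});
```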

How to setup awscli without setting up access key & secret access key?

I tried to set up the aws-cli locally using an IAM role, without using an access key/secret access key, but I was unable to get information from the metadata URL [http://169.256.169.256/latest/meta-data].
I am running an EC2 instance with Ubuntu Server 16.04 LTS (HVM), SSD Volume Type - ami-f3e5aa9c, and have tried to configure the aws-cli on that instance. I am not sure what kind of role/policy/user is needed to get the aws-cli configured on my EC2 instance.
Please provide a step-by-step guide to achieve that. I just need direction, so useful links are also appreciated.
To read instance metadata, you don't need to configure the AWS CLI. The problem in your case is that you are using the wrong URL to read the instance metadata. The correct URL to use is http://169.254.169.254/. For example, if you want to read the AMI ID of the instance, you can use the following command.
curl http://169.254.169.254/latest/meta-data/ami-id
However, if you would like to configure the AWS CLI without using access/secret keys, follow the steps below.
Create an IAM instance profile and Attach it to the EC2 instance
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles, Create role.
On the Select role type page, choose EC2 and the EC2 use case. Choose Next: Permissions.
On the Attach permissions policy page, select an AWS managed policy that grants your instances access to the resources that they need.
On the Review page, type a name for the role and choose Create role.
Install the AWS CLI(Ubuntu).
Install pip if it is not installed already.
`sudo apt-get install python-pip`
Install AWS CLI.
`pip install awscli --upgrade --user`
Configure the AWS CLI. Leave AWS Access Key ID and AWS Secret Access Key blank, as we want to use a role.
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-west-2
Default output format [None]: json
Modify the Region and Output Format values if required.
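To confirm the role was picked up, you can query the instance metadata service and the STS API (no keys in ~/.aws are needed):

```shell
# Lists the name of the role attached via the instance profile
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Shows the identity the CLI is actually using
aws sts get-caller-identity
```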
I hope this helps you!
AWS Documentation on how to setup an IAM role for EC2
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

AWS: Beanstalk ERROR: Operation Denied. Are your permissions correct? with EB CLI

I am trying AWS hosting for the first time, using the Python 3.4 eb CLI. I always get the same error output for eb init. In the policy simulator, all actions are allowed for the same user. Where am I going wrong? Why do I always get ERROR: Operation Denied. Are your permissions correct?
I used pip to install the EB CLI. Any pointers will be helpful.
This looks like the credentials you are using have limited permissions.
When you first set up the EB CLI, or run aws configure, you are prompted for your AWS Access Key ID and AWS Secret Access Key. These are the credential keys for a specific root account or IAM user. It is best practice to use an IAM user for most access.
If you have already setup your credentials for the CLI you can check them either in the ~/.aws/config or ~/.aws/credentials file.
An example of a ~/.aws/credentials would be like so:
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[limited]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
By default, the credentials under the [default] section are used if no profile is specified on the command line. If you wish to use a specific credentials profile, you can specify it like this: eb init --profile limited.
You can look up the credentials being used in the IAM console; from there you can view which permissions have been granted to your user, and also add permissions for that user.

Configure AWS credentials to work with both the CLI and SDKs

In trying to automate some deploy tasks to S3, I noticed that the credentials I provided via aws configure were not picked up by the Node.js SDK. How can I get the shell and a gulp task to reference the same file?
After lots of searching, it was this excerpt from an article that caused the eureka moment:
If you've been using the AWS CLI, you might already have a credentials file, which is in the same location as the new credentials file, but is named config. If so, the CLI will continue to use that file. However, if you create a new credentials file, the CLI will use that one instead. (Be aware that the aws configure command that you can use to set credentials from the command line will put the credentials in the config file, not the credentials file.)
By moving ~/.aws/config to ~/.aws/credentials, both the CLI and the SDK now read from the same location. Sadly, I haven't yet found any interface for maintaining ~/.aws/credentials other than hand-editing.
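Instead of moving the file by hand, another option (my suggestion, not from the article) is to point both tools at one file via the AWS_SHARED_CREDENTIALS_FILE environment variable, which the CLI and the official SDKs support. A minimal sketch:

```shell
# Write a credentials file in the shared INI format (example keys from the
# AWS docs, not real) and export its path for the CLI and SDKs to pick up.
creds_file="$(mktemp -d)/credentials"
cat > "$creds_file" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
export AWS_SHARED_CREDENTIALS_FILE="$creds_file"
```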
