Will the AWS SDK for C++ refresh the assumed role via STS once the assumed-role credentials expire? - aws-sts

I have a use case where I have to assume a role using an ARN in order to access an S3 resource, but my concern is whether STS will automatically refresh the credentials once the assumed role's session expires. Automatic role assumption and refreshing is available in the AWS SDK for Java, but I want to know whether the same is true of the AWS SDK for C++.

Related

Can't create google_storage_bucket via Terraform

I'd like to create the following resource via Terraform:
resource "google_storage_bucket" "tf_state_bucket" {
name = var.bucket-name
location = "EUROPE-WEST3"
storage_class = "STANDARD"
versioning {
enabled = true
}
force_destroy = false
public_access_prevention = "enforced"
}
Unfortunately, during the execution of terraform apply, I got the following error:
googleapi: Error 403: X#gmail.com does not have storage.buckets.create access to the Google Cloud project. Permission 'storage.buckets.create' denied on resource (or it may not exist)., forbidden
Here's the list of things I tried and checked:
Verified that Google Cloud Storage (JSON) API is enabled on my project.
Checked the IAM roles and permissions: X#gmail.com has the Owner and the Storage Admin roles.
I can create a bucket manually via the Google Console.
Terraform is generally authorised to create resources, for example, I can create a VM using it.
What else can be done to authenticate Terraform to create Google Storage Buckets?
I think you are running the Terraform code in a shell session on your local machine and using a User identity instead of a Service Account identity.
In this case, to solve your issue from your local machine:
Create a Service Account for Terraform in the GCP IAM console with the Storage Admin role.
Download a key file for that Service Account from IAM.
Set the GOOGLE_APPLICATION_CREDENTIALS env var in your shell session to the path of the Service Account key file.
If you run your Terraform code somewhere else, you need to check that Terraform is correctly authenticated to GCP.
Using a downloaded key file is not recommended because it is not the most secure approach; that's why it is better to launch Terraform from a CI tool like Cloud Build instead of launching it from your local machine.
From Cloud Build there is no need to download and set a key file.

How to change aws config profile in AWS sdk nodejs

I have configured cross-account S3 bucket access from an EC2 instance. When I log in to the EC2 server and run an AWS CLI command to list the data in an S3 bucket that was created in the other AWS account, it works properly. Please find the command below.
aws s3 ls s3://test-bucket-name --profile demo
But I need to do this using the Node.js SDK. I have an application that runs on EC2, and it needs to access this bucket's data. Is there any way to access the bucket data from the application using Node.js?
Launch the EC2 instance with an IAM role in account 1 that has permission to assume an IAM role in account 2. That second role provides S3 access to the relevant bucket/objects in account 2.
Code using the AWS JavaScript SDK will automatically get credentials for IAM role 1. Your code can then assume IAM role 2, get credentials, and then access the cross-account S3 bucket.
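As an illustration of that flow, here is a minimal sketch using the AWS SDK for JavaScript v3 (the region, role ARN, and session name below are placeholders; the bucket name is the one from your CLI example):
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";
import { fromTemporaryCredentials } from "@aws-sdk/credential-providers";

// Credentials for IAM role 1 are resolved automatically from the EC2 instance
// profile; fromTemporaryCredentials layers an sts:AssumeRole call for IAM role 2
// in account 2 on top of them.
const s3 = new S3Client({
  region: "us-east-1", // placeholder
  credentials: fromTemporaryCredentials({
    params: {
      RoleArn: "arn:aws:iam::222222222222:role/CrossAccountS3Access", // placeholder role in account 2
      RoleSessionName: "cross-account-s3",
    },
  }),
});

async function listBucket(): Promise<void> {
  // Equivalent of `aws s3 ls s3://test-bucket-name --profile demo`
  const out = await s3.send(new ListObjectsV2Command({ Bucket: "test-bucket-name" }));
  for (const obj of out.Contents ?? []) {
    console.log(obj.Key);
  }
}

listBucket().catch(console.error);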

Uploading a file through boto3 upload_file api to AWS S3 bucket gives "Anonymous users cannot initiate multipart uploads. Please authenticate." error

I have a file of around 16 MB and am using the Python boto3 upload_file API to upload it to an S3 bucket. However, I believe the API is internally choosing a multipart upload, and it gives me an "Anonymous users cannot initiate multipart upload" error.
In some runs of the application, the generated file may be smaller (a few KB).
What's the best way to handle this scenario in general or fix the error I mentioned above?
I currently have a Django application that generates a file when run and uploads this file directly into an S3 bucket.
OK, so unless you've opened your S3 bucket up for the world to upload to (which is very much NOT recommended), it sounds like you need to set up the permissions for access to your S3 bucket correctly.
How to do that will vary a little depending on how you're running this application, so let's cover a few options. In all cases you will need to do two things:
Associate your script with an IAM Principal (an IAM User or an IAM Role depending on where / how this script is being run).
Add permissions for that principal to access the bucket (this can be accomplished either through an IAM Policy, or via the S3 Bucket Policy)
Lambda Function - You'll need to create an IAM Role for your application and associate it with your Lambda function. Boto3 should be able to assume this role transparently for you once configured.
EC2 Instance or ECS Task - You'll need to create an IAM Role for your application and associate it with your EC2 instance/ECS Task. Boto3 will be able to access the credentials for the role via instance metadata and should automatically assume the role.
Local Workstation Script - If you're running this script from your local workstation, then boto3 should be able to find and use the credentials you've set up for the AWS CLI. If those aren't the credentials you want to use, you'll need to generate an access key and secret access key (be careful how you secure these if you go this route, and definitely follow least privilege).
Now, once you've got your principal you can either attach an IAM policy that grants Allow permissions to upload to the bucket to the IAM User or Role, or you can add a clause to the Bucket Policy that grants that IAM User or Role access. You only need to do one of these.
Multipart uploads need the same s3:PutObject permission as single-part uploads (though if your files are small I'd be surprised it was using multipart for them). If you're using KMS, one small trick to be aware of is that you need permission to use the KMS key for both Encrypt and Decrypt when encrypting a multipart upload.

Spark - S3 - Access & Secret Key configured in code, is overridden with IAM Role

I am working on creating a Spark job in Java which explicitly specifies an IAM user with an access key and secret key at runtime. It can read from and write to S3 with no issues on my local machine using the keys. However, when I promote the job to Cloudera Oozie, it keeps picking up the IAM role attached to the EC2 instance (which can only read certain S3 slices). The goal is to have an IAM user per tenant who can only access their own slice in S3 under the same bucket (multi-tenancy). Can anyone advise how to prevent the IAM role from overriding the IAM user credentials in Spark? Thanks in advance.

How can I check if the AWS SDK was provided with credentials?

There are many ways to provide the AWS SDK with credentials to perform operations.
I want to make sure that at least one of those methods succeeded in setting up the interface before I try my operation on our continuous deployment system.
How can I check if AWS SDK was able to find credentials?
You can access them via the config.credentials property on the main client. All AWS service libraries included in the SDK have a config property.
Class: AWS.Config
The main configuration class used by all service objects to set the region, credentials, and other options for requests.
By default, credentials and region settings are left unconfigured. This should be configured by the application before using any AWS service APIs.
// Using S3
var s3 = new AWS.S3();
console.log(s3.config.credentials);
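If you need an explicit check rather than just logging the property, here is a minimal sketch for SDK for JavaScript v2 (the same flavour of the SDK as the snippet above) that forces the default provider chain to resolve and reports whether anything was found:
import AWS from "aws-sdk";

// getCredentials() walks the default provider chain (environment variables,
// shared credentials file, EC2/ECS instance metadata, ...) and calls back with
// an error if none of the sources produced usable credentials.
AWS.config.getCredentials((err) => {
  if (err) {
    console.error("No credentials found:", err.message);
    process.exit(1);
  }
  console.log("Credentials loaded, access key:", AWS.config.credentials?.accessKeyId);
});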
