Connect to S3 using an IAM role ARN with boto3 in Python - python-3.x

I want to connect to S3 using an ARN, but I'm not sure how to make the connection.
I am looking for code something like:
ARN = boto3.client('s3', 'arn:aws:iam::****:role/***')
Is there any way I can make a connection using an ARN?

It runs on ECS.
If so, then you do not have to explicitly assume the role in your application. Instead you should use an IAM Role for Tasks (it's good practice). If you can change arn:aws:iam::****:role/*** into a task role, boto3 will automatically assume it and you don't have to do anything in your Python code.
But if you still need to assume some role in your ECS task, then your IAM Role for Tasks needs the sts:AssumeRole permission to actually be able to assume arn:aws:iam::****:role/***. The first option is the better choice if you can use it, though.
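If you do end up assuming the role explicitly, a minimal boto3 sketch looks something like the following; the role ARN, session name and the list_buckets call are placeholders for illustration:

import boto3

# Assume the target role via STS; the credentials boto3 already has
# (e.g. the ECS task role) must be allowed to call sts:AssumeRole on it.
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/my-s3-access-role",  # placeholder ARN
    RoleSessionName="s3-access-session",
)
creds = assumed["Credentials"]

# Build an S3 client from the temporary credentials returned by STS.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])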

Related

CDK how to add assume role while creating a role from an existing role using from_role_arn

I have a step function that has multiple lambdas and connections. Now this step function uses an existing role using the following method:
self.state_machine_role = _iam.Role.from_role_arn(
    self,
    "statemachinerole",
    role_arn="existing-role-arn",
    mutable=False,
)
Now I want an event to invoke this step function. As per the events documentation, I need to add ServicePrincipal('events.amazonaws.com') to this role. So my question is: how am I going to modify state_machine_role to have this new service principal?
The existing role existing-role-arn already has states.amazonaws.com associated with it, along with other policies to run my lambdas and step function.
I don't think you need to define a role for the event Rule or for the state machine; CDK creates them for you, and does it better.
See the ref here: AWS GuardDuty Combine With Security Hub And Slack
Roles created by AWS CDK will be much easier for you to use, but if you have to use the existing role, you can't update its trust policy document directly from AWS CDK. However, you can still implement an AwsCustomResource with an AwsSdkCall to do that for you.
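A rough sketch of that custom-resource approach in CDK (Python) could look like the following, placed in the same stack as the state machine. The role name and the trust policy document are placeholders; in practice you would include the existing states.amazonaws.com statement as well as the new events.amazonaws.com one, since updateAssumeRolePolicy replaces the whole document:

import json
from aws_cdk import custom_resources as cr

# Placeholder trust policy combining the existing and the new service principals.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": ["states.amazonaws.com", "events.amazonaws.com"]},
            "Action": "sts:AssumeRole",
        }
    ],
}

cr.AwsCustomResource(
    self,
    "UpdateTrustPolicy",
    on_create=cr.AwsSdkCall(
        service="IAM",
        action="updateAssumeRolePolicy",
        parameters={
            "RoleName": "existing-role-name",  # placeholder: the role name, not its ARN
            "PolicyDocument": json.dumps(trust_policy),
        },
        physical_resource_id=cr.PhysicalResourceId.of("existing-role-trust-policy"),
    ),
    policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
        resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE
    ),
)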

Uploading a file through boto3 upload_file api to AWS S3 bucket gives "Anonymous users cannot initiate multipart uploads. Please authenticate." error

I have a file of around 16 MB in size and am using the Python boto3 upload_file API to upload this file into the S3 bucket. However, I believe the API is internally choosing a multipart upload, which gives me an "Anonymous users cannot initiate multipart upload" error.
In some runs of the application, the generated file may be smaller (a few KB) in size.
What's the best way to handle this scenario in general or fix the error I mentioned above?
I currently have a Django application that generates a file when run and uploads this file directly into an S3 bucket.
Ok, so unless you've opened your S3 bucket up for the world to upload to (which is very much NOT recommended), it sounds like you need to set up the permissions for access to your S3 bucket correctly.
How to do that will vary a little depending on how you're running this application, so let's cover a few options. In all cases you will need to do two things:
Associate your script with an IAM Principal (an IAM User or an IAM Role depending on where / how this script is being run).
Add permissions for that principal to access the bucket (this can be accomplished either through an IAM Policy, or via the S3 Bucket Policy)
Lambda Function - You'll need to create an IAM Role for your application and associate it with your Lambda function. Boto3 should be able to assume this role transparently for you once configured.
EC2 Instance or ECS Task - You'll need to create an IAM Role for your application and associate it with your EC2 instance/ECS Task. Boto3 will be able to access the credentials for the role via instance metadata and should automatically assume the role.
Local Workstation Script - If you're running this script from your local workstation, then boto3 should be able to find and use the credentials you've setup for the AWS CLI. If those aren't the credentials you want to use you'll need to generate an access key and secret access key (be careful how you secure these if you go this route, and definitely follow least privilege).
Now, once you've got your principal, you can either attach an IAM policy to the IAM User or Role that allows uploads to the bucket, or add a statement to the Bucket Policy that grants that IAM User or Role access. You only need to do one of these.
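As a rough illustration, a minimal identity policy granting upload access might look like the dictionary below; the bucket, role and policy names are placeholders, and the policy could just as well be attached through the console or the Bucket Policy instead of boto3:

import json
import boto3

# Minimal policy allowing uploads to one bucket ("my-example-bucket" is a placeholder).
upload_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="my-upload-role",      # placeholder role name
    PolicyName="allow-s3-upload",
    PolicyDocument=json.dumps(upload_policy),
)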
Multi-part uploads are authorized by the same s3:PutObject permission as single-part uploads (though if your files are small I'd be surprised it was using multi-part for them). If you're using KMS, one small trick to be aware of is that you need both Encrypt and Decrypt permissions on the KMS key when encrypting a multi-part upload.
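For reference, once the permissions are in place, a minimal authenticated upload with boto3 might look like this; the file, bucket and key names are placeholders. Raising the multipart threshold in TransferConfig also keeps a 16 MB file in a single PutObject request, though fixing the permissions is the real solution:

import boto3
from boto3.s3.transfer import TransferConfig

# Credentials are picked up from the task/instance role or the AWS CLI profile.
s3 = boto3.client("s3")

# Optional: push the multipart threshold above the file size (64 MB here).
config = TransferConfig(multipart_threshold=64 * 1024 * 1024)

s3.upload_file(
    Filename="generated_report.csv",     # placeholder local file
    Bucket="my-example-bucket",          # placeholder bucket
    Key="uploads/generated_report.csv",  # placeholder key
    Config=config,
)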

How to use AWS IAM in a Node.js application on a Fargate scheduled task

I am a student who is trying to make a scheduled task using Node.js (TypeScript).
The task is to access S3, fetch an object, and then do some stuff with it.
However, I am having a hard time figuring out how to load the credentials. I am trying to do it without writing out the ClientConfiguration, which has fields for the access key and secret access key. A hint or clue would be nice. Thank you for your time.
You can configure an IAM role for your Fargate task/service and assign permissions to the role. This way you do not have to hardcode the AWS access credentials inside the code.
There are two types of IAM roles associated with ECS (a sketch of where each one goes follows the link below):
task execution role
gives permission to pull/push container images from the registry and publish logs to CloudWatch.
task role
gives permission to access AWS services; you should assign the S3 permissions to this particular role.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html
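For illustration only, here is how the two roles appear in a task definition, shown with boto3 since this is registration-time configuration rather than application code; every ARN, name and image below is a placeholder:

import boto3

ecs = boto3.client("ecs")

# The execution role pulls the image and writes logs; the task role is what your
# application code (e.g. the S3 fetch) runs as.
ecs.register_task_definition(
    family="scheduled-s3-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    taskRoleArn="arn:aws:iam::123456789012:role/scheduled-task-s3-role",
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/scheduled-task:latest",
            "essential": True,
        }
    ],
)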
Hope this helps.

Is it possible to create a stack in my AWS account and have resources (EC2, VPC, RDS) created in a client's AWS account?

I have written an AWS Lambda Node.js function for creating a stack in CloudFormation, using a CloudFormation template and input parameters given from the UI.
When I run my Lambda function with the respective inputs, a stack is successfully created and resources (EC2, RDS, VPC, etc.) are also created and working perfectly.
Now I want to make this function public and have it used with a user's own AWS credentials.
So when a public user runs my function with their AWS credentials, those resources should be created in their account, and the user should not be able to see my template code.
How can I achieve this?
You may be better served by the AWS Cloud Development Kit (CDK) than by using CloudFormation directly for this purpose. Although CDK may not be directly usable within Lambda, a workaround is mentioned here.
AWS CloudFormation will create resources in the AWS Account that is associated with the credentials used to create the stack.
The person who creates the stack will need to provide (upload) a template file or they can reference a template that is stored in Amazon S3, which is accessible to their credentials (meaning that it is either public, or their credentials have been given permission to access the template in S3).
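To make that concrete, a minimal sketch with boto3 follows; the stack name, template URL, parameters and credential values are placeholders. Whichever credentials the session is built from determine the account in which the stack and its resources are created:

import boto3

# A session built from the user's credentials creates the stack in the user's account.
session = boto3.Session(
    aws_access_key_id="USER_ACCESS_KEY_ID",          # placeholder
    aws_secret_access_key="USER_SECRET_ACCESS_KEY",  # placeholder
)
cfn = session.client("cloudformation")

cfn.create_stack(
    StackName="client-environment",                                  # placeholder
    TemplateURL="https://s3.amazonaws.com/my-templates/stack.yaml",  # placeholder; must be readable by these credentials
    Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)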

Can Terraform be run from inside an AWS WorkSpace?

I'm trying to figure out a way to run Terraform from inside an AWS WorkSpace. Is that even possible? Has anyone made this work?
AWS WorkSpaces doesn't support the same concept of an instance profile with an associated IAM role attached to it.
I'm pretty confident, however, that you can run Terraform in an AWS WorkSpace just as you can from your personal computer. With internet access (or VPC endpoints), it can reach the AWS APIs and just needs a set of credentials, which in this case (without instance profiles) would be an AWS access key and secret access key - just like on your local computer.
