Upload a file from a form to an S3 bucket using boto3, with the handler created in Lambda - python-3.x

I want to upload small image and audio files from a form to S3, using Postman for testing. I successfully uploaded files to an AWS S3 bucket from my application running on my local machine. The following is the part of the code I used for file uploading.
import os
import uuid

import boto3

s3_client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)

async def save_file_static_folder(file, endpoint, user_id):
    _, ext = os.path.splitext(file.filename)
    raw_file_name = f'{uuid.uuid4().hex}{ext}'
    # Save image file in folder
    if ext.lower() in image_file_extension:
        relative_file_folder = user_id + '/' + endpoint
        contents = await file.read()
        try:
            response = s3_client.put_object(Bucket=S3_BUCKET_NAME, Key=relative_file_folder + '/' + raw_file_name, Body=contents)
        except Exception:
            return FileEnum.ERROR_ON_INSERT
I call this function from another endpoint, and the form data (e.g. name, date of birth and other details) are successfully saved in the MongoDB database while the files are uploaded to the S3 bucket.
The app uses FastAPI, and file uploads to the S3 bucket work when the app is running on my local machine.
The same app is deployed in AWS Lambda with the S3 bucket as storage. To handle the whole app, the following is added in the endpoint file:
handler = Mangum(app)
After deploying the app to AWS and creating the Lambda function from the AWS root user account, files do not get uploaded to the S3 bucket.
If I do not attach files to the form, the AWS API endpoint works: the form data gets stored in the MongoDB database (MongoDB Atlas) and the app works fine hosted on Lambda.
So the app deployed via the Lambda function works successfully except for file uploads from the form. On my local machine, file uploads to S3 succeed.
EDIT
While tracing in CloudWatch I got the following error:
exception An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
I have checked the AWS Access Key Id and secret key many times; they are correct, and they are the root user credentials.

It looks like you have configured your Lambda function with an execution IAM role, but you are overriding the AWS credentials supplied to the boto3 SDK here:
s3_client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
You don't need to provide credentials explicitly, because the boto3 SDK (and all language SDKs) will automatically retrieve credentials for you; in Lambda, they come from the function's execution role. So, ensure that your Lambda function is configured with the correct IAM role, and then change your code as follows:
s3_client = boto3.client('s3')
As an aside, you indicated that you may be using AWS root credentials. It's generally a best security practice in AWS to not use root credentials. Instead, create IAM roles and IAM users.
We strongly recommend that you do not use the root user for your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks.
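As a minimal sketch of what the fixed upload path could look like inside the Lambda-hosted app (this is my own illustration, not the poster's exact code; the bucket value is a placeholder, and the key is built the same way as in the question), the client is simply created without keys and the execution role supplies temporary credentials at runtime:
import boto3

S3_BUCKET_NAME = 'my-upload-bucket'  # placeholder; same constant name as in the question

# No explicit keys: inside Lambda, boto3 resolves credentials from the execution role
s3_client = boto3.client('s3')

def put_form_file(contents: bytes, key: str):
    # key is e.g. f'{user_id}/{endpoint}/{raw_file_name}', as built in the question
    return s3_client.put_object(Bucket=S3_BUCKET_NAME, Key=key, Body=contents)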

Related

NodeJS Aws sdk, can't unset credentials

I have a NodeJS application that runs on an EC2 instance and serves an API to my customers. The EC2 instance has an instance role that grants the minimum permissions the application needs (SQS, S3 read and write, and SES). One particular endpoint in my API creates a signed URL so that S3 files can be accessed, and to create the signed URL I use an IAM user with only S3 read access to that bucket.
My issue is that whenever that endpoint is called, the AWS credentials are set using:
const awsConfig = {
  region,
  accessKeyId: ${keyofreadonlyuser},
  secretAccessKey: ${secretofreadonlyuser},
};
AWS.config.update(awsConfig);
This way, all subsequent calls to the AWS SDK use those credentials, resulting in an Access Denied error.
I've tried setting accessKeyId: null, secretAccessKey: null and then calling AWS.config.update, but the credentials are not cleared.
What is the best way to handle situations like that?
I would recommend that, instead of updating the default config, you use two session objects (described here in boto3 terms):
the default, implicitly-created session, which is associated with the assumed IAM role
an explicitly-created session, which is associated with the IAM user credentials
Specifically for the second use case, pass the IAM user credentials to the session constructor.
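As a sketch of that idea in boto3 (key values are hypothetical, and the bucket and object names are placeholders standing in for the ones in the question), the IAM-user session is used only for generating the presigned URL while everything else keeps the role credentials from the default session:
import boto3

# Default clients: credentials come from the instance role (or execution role)
sqs = boto3.client('sqs')
s3_role = boto3.client('s3')

# Separate session holding the read-only IAM user's credentials, used only for signing
signer_session = boto3.Session(
    aws_access_key_id='AKIA...READONLY',      # hypothetical
    aws_secret_access_key='readonly-secret',  # hypothetical
)
s3_signer = signer_session.client('s3')

url = s3_signer.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'some/file.png'},  # placeholder names
    ExpiresIn=3600,
)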

Need help for AWS lambda

I am working on an issue where I need Lambda to write logs to an S3 bucket, but the tricky part is that Lambda will read the logs and write them to another S3 bucket which is in another AWS account. Can we achieve this?
I wrote some code but it isn't working.
from urllib.request import urlopen
import boto3
import os
import time

BUCKET_NAME = '***'
CSV_URL = f'***'

def lambda_handler(event, context):
    response = urlopen(CSV_URL)
    s3 = boto3.client('s3')
    s3.upload_fileobj(response, BUCKET_NAME, time.strftime('%Y/%m/%d'))
    response.close()
It sounds like you are asking how to allow the Lambda function to create an object in an Amazon S3 bucket that belongs to a different AWS Account.
Bucket Policy on target bucket
The simplest method is to ask the owner of the target bucket (that is, somebody with Admin permissions in that other AWS Account) to add a Bucket Policy that permits PutObject access to the IAM Role being used by the AWS Lambda function. You will need to supply them with the ARN of the IAM Role being used by the Lambda function.
Also, make sure that the IAM Role has been given permission to write to the target bucket. Please note that two sets of permissions are required: the IAM Role needs to be allowed to write to the bucket in the other account, AND the bucket needs to permit access by the IAM Role. This double set of permissions is required because both accounts need to permit the access.
It is possible that you might need to grant some additional permissions, such as PutObjectAcl.
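When writing cross-account it is also common to hand ownership of the object to the bucket owner. A minimal sketch of the Lambda-side call, assuming the bucket policy above is in place and using placeholder names (not the poster's actual bucket or key):
import time
import boto3

s3 = boto3.client('s3')  # uses the Lambda execution role's credentials

s3.put_object(
    Bucket='target-account-bucket',              # placeholder bucket in the other account
    Key=time.strftime('%Y/%m/%d') + '/log.csv',  # date-based key, as in the question
    Body=b'...log contents...',
    ACL='bucket-owner-full-control',             # lets the bucket owner fully control the object
)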
Assuming an IAM Role from the target account
An alternative method (instead of using the Bucket Policy) is:
Create an IAM Role in the target account and give it permission to access the bucket
Grant trust permissions so that the IAM Role used by the Lambda function is allowed to 'Assume' the IAM Role in the target account
Within the Lambda function, use the AssumeRole() API call to obtain credentials from the target account
Use those credentials when connecting to S3, which will allow you to access the bucket in the other account
Frankly, creating the Bucket Policy is a lot easier.
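If you do go the AssumeRole route, a minimal sketch in Python is below; the role ARN, session name, bucket, and key are all placeholders:
import boto3

sts = boto3.client('sts')

# Assume the role that the target account created for you (placeholder ARN)
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/cross-account-write-role',
    RoleSessionName='lambda-log-writer',
)
creds = assumed['Credentials']

# S3 client that uses the temporary credentials from the target account
s3_target = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

s3_target.put_object(Bucket='target-account-bucket', Key='logs/example.txt', Body=b'hello')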

How to authenticate with tokens in Nodejs to a private bucket in Cloud Storage

Usually in Python, I get the application default credentials, get the access token, and then refresh it to be able to authenticate to a private environment.
Code in Python:
import google.auth
import google.auth.transport.requests
import google.oauth2.credentials
from google.cloud import storage

# getting the credentials and project details for gcp project
credentials, your_project_id = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
# getting request object
auth_req = google.auth.transport.requests.Request()
print(f"Checking Authentication : {credentials.valid}")
print('Refreshing token ....')
credentials.refresh(auth_req)
# check for valid credentials
print(f"Checking Authentication : {credentials.valid}")
access_token = credentials.token
credentials = google.oauth2.credentials.Credentials(access_token)
storage_client = storage.Client(project='itg-ri-consumerloop-gbl-ww-dv', credentials=credentials)
I am entirely new to NodeJS, and I am trying to do the same thing.
My goal later is to create an App Engine application that exposes an image found in a private bucket, so credentials are a must.
How is it done?
For authentication, you could rely on the default application credentials that are present within the GCP platform (GAE, Cloud Functions, VM, etc.). Then you could just run the following piece of code from the documentation:
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();
const bucket = storage.bucket('albums');
const file = bucket.file('my-existing-file.png');
In most circumstances, there is no need to use authentication packages explicitly, since authentication is already handled underneath the @google-cloud/storage package in Node.js. The same holds for the google-cloud-storage package in Python. It can help to look at the source code of both packages on GitHub; for me, this really helped in understanding the authentication mechanism.
When I develop code on my own laptop that interacts with Google Cloud Storage, I first tell the gcloud SDK what my credentials are and which GCP project I am working on. I use the following commands for this:
gcloud config set project [PROJECT_ID]
gcloud auth application-default login
You could also set GOOGLE_APPLICATION_CREDENTIALS as an environment variable that points to a credentials file. Then, within your code, you could pass the project name when initializing the client. This can be helpful if you are running your code outside of GCP, on another server for example.
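The same simplification applies on the Python side: with application default credentials in place (either on GCP or via GOOGLE_APPLICATION_CREDENTIALS), the manual token refresh from the question is unnecessary. A minimal sketch, with the project ID as a placeholder and the same example bucket/file names as the Node.js snippet above:
from google.cloud import storage

# Credentials are resolved automatically from the environment (application default credentials)
storage_client = storage.Client(project='my-project-id')  # placeholder project ID

bucket = storage_client.bucket('albums')
blob = bucket.blob('my-existing-file.png')
data = blob.download_as_bytes()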

How to store and access microsoft office365 account token inside AWS Lambda in python3.6

I have zipped and uploaded the Python library O365 (for accessing the MS Outlook calendar) as an AWS Lambda layer. I'm able to import it, but the problem is the authorization. When I tested it locally, the bearer token was generated and stored in a local txt file using the FileSystemTokenBackend.
But when I load this into AWS Lambda using layers, it again asks for the copy-paste-the-URL process, because it cannot fetch the token file from the layer.
I have also tried FireSystemTokenBackend, but I failed to configure that successfully. I used these Token storage docs locally while testing the functionality.
My question is how to store and authenticate my account using the token file generated on my local machine, because in AWS Lambda the input() functionality throws an error at runtime. How can I keep that token file inside AWS Lambda and use it without doing the authentication every time?
I have faced the same issue. The Lambda filesystem is ephemeral, so you would need to go through the authentication process every time you run the function, and the O365 lib will ask for the URL.
So try saving your token (o365_token.txt) in S3 instead of keeping it on the Lambda filesystem, and then use this token for authentication.
I hope this code will help you:
import boto3
from O365 import Account, FileSystemTokenBackend

bucket_name = 'bucket_name'  # replace with your bucket name
filename_token = 'o365_token.txt'

# replace with your AWS credentials
s3 = boto3.resource('s3', aws_access_key_id='xxxx', aws_secret_access_key='xxxx')

# Read the token from S3 and save it to the /tmp directory in Lambda
s3.Bucket(bucket_name).download_file(filename_token, f'/tmp/{filename_token}')

# Read the token from the /tmp directory
token_backend = FileSystemTokenBackend(token_path='/tmp',
                                       token_filename=filename_token)

# Your Azure credentials
credentials = ('xxxx', 'xxxx')
account = Account(credentials, token_backend=token_backend)

# Then do the normal authentication process and include the refresh token command
if not account.is_authenticated:
    account.authenticate()
account.connection.refresh_token()
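One caveat worth noting (my own addition, not part of the original answer): after refresh_token() the updated token only exists in /tmp, so if you want the next cold start to reuse it you would have to copy it back to S3, for example:
# Persist the refreshed token back to S3 so future invocations can reuse it
s3.Bucket(bucket_name).upload_file(f'/tmp/{filename_token}', filename_token)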

Boto3 not assuming IAM role from credentials where aws-cli does without problem

I am setting up some file transfer scripts and am using boto3 to do this.
I need to send some files from my local machine to a third-party AWS account (cross-account). I have a role set up on the other account with permissions to write to the bucket, and have assigned this role to a user on my account.
I am able to do this with no problem via the CLI, but boto3 keeps kicking out an AccessDenied error for the bucket.
I have read through the boto3 docs on this area (such as they are) and have set up the credential and config files as they are supposed to be (I assume they are correct, as the CLI approach works), but I am unable to get this working.
Credential File:-
[myuser]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Config File:-
[profile crossaccount]
region = eu-west-2
source_profile=myuser
role_arn = arn:aws:iam::0123456789:role/crossaccountrole
and here is the code I am trying to get working with this:-
import boto3

# set-up variables
bucket_name = 'otheraccountbucket'
file_name = 'C:\\Users\\test\\testfile.csv'
object_name = 'testfile.csv'

# create a boto session with profile name so the assume-role call is made with the correct credentials
session = boto3.Session(profile_name='crossaccount')

# create s3_client from that profile-based session
s3_client = session.client('s3')

# try to upload the file
response = s3_client.upload_file(
    file_name, bucket_name, object_name,
    ExtraArgs={'ACL': 'bucket-owner-full-control'}
)
EDIT:
In response to John's multipart permission comment, I have tried to upload via the put_object method to bypass this, but am still getting AccessDenied, now on the PutObject permission - which I have confirmed is in place:-
# set-up variables
bucket_name = 'otheraccountbucket'
file_name = 'C:\\Users\\test\\testfile.csv'
object_name = 'testfile.csv'

# create a boto session with profile name so the assume-role call is made with the correct credentials
session = boto3.Session(profile_name='crossaccount')

# create s3_client from that profile-based session
s3_client = session.client('s3')

# try to upload the file
with open(file_name, 'rb') as fd:
    response = s3_client.put_object(
        ACL='bucket-owner-full-control',
        Body=fd,
        Bucket=bucket_name,
        ContentType='text/csv',
        Key=object_name
    )
Crossaccountrole has PutObject permissions - the error is:-
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
END EDIT
Here is the working aws-cli command:-
aws s3 cp "C:\Users\test\testfile.csv" s3://otheraccountbucket --profile crossaccount
I am expecting this to upload correctly, as the equivalent CLI command does, but instead I get an S3UploadFailedError exception: An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
Any help would be much appreciated.
I had this same problem; my issue ended up being that the AWS CLI was configured with different credentials than my Python app, where I was trying to use boto3 to upload files into an S3 bucket.
Here's what worked for me (this only applies if you have the AWS CLI installed):
Open your command line or terminal
Type aws configure
Enter the ID & secret key of the IAM user you are using for your Python boto3 app when prompted
Run your Python app and test boto3; you should no longer get the access denied message
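A quick way to check which identity boto3 actually resolves (my own suggestion, not from the original answer) is to ask STS who you are for each set of credentials; the profile name below is the one from the question:
import boto3

# Identity of the default credentials (what `aws configure` wrote)
print(boto3.client('sts').get_caller_identity()['Arn'])

# Identity resolved for the cross-account profile (should show the assumed role)
session = boto3.Session(profile_name='crossaccount')
print(session.client('sts').get_caller_identity()['Arn'])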
