AWS lambda nodejs function without using accessKeyId and secretAccessKey? - node.js

Working on a function, I've used the aws-sdk as suggested, which requires accessKeyId and secretAccessKey.
I'm wondering: since I assigned a role to the function, and that role has a set of permissions, is there a way to use the role's permissions to download/upload from/to a bucket, and thereby avoid putting the credentials in the code?

If you assign an appropriate role to the AWS Lambda function with the necessary access, then you don't need any accessKey or secretKey.

Taken from the AWS documentation page:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/nodejs-write-lambda-function-example.html
Configuring the SDK
Here is the portion of the Lambda function that configures the SDK. The credentials are not provided in the code because they are supplied to a Lambda function through the required IAM execution role.
var AWS = require('aws-sdk');
AWS.config.update({region: 'us-west-2'});
Basically, you shouldn't need to specify the access key and secret when providing an IAM execution role.
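For example, here is a minimal sketch of a handler that downloads from one bucket and uploads to another without any credentials in the code; the bucket and key names are placeholders, and the region is taken from the snippet above:
const AWS = require('aws-sdk');
AWS.config.update({region: 'us-west-2'});

// No accessKeyId/secretAccessKey anywhere: the SDK automatically picks up
// the temporary credentials of the Lambda execution role.
const s3 = new AWS.S3();

exports.handler = async (event) => {
    // Download an object (placeholder bucket/key names)
    const download = await s3.getObject({
        Bucket: 'my-input-bucket',
        Key: 'input/data.json'
    }).promise();

    // Upload it to another bucket
    await s3.putObject({
        Bucket: 'my-output-bucket',
        Key: 'output/data.json',
        Body: download.Body
    }).promise();

    return {statusCode: 200};
};
As long as the execution role allows s3:GetObject and s3:PutObject on those buckets, no keys ever appear in the code.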

Related

NodeJS Aws sdk, can't unset credentials

I have a NodeJS application that runs on an EC2 instance and serves an API to my customers. The EC2 instance has an Instance Role that grants the minimum permissions the application needs (SQS, S3 read and write, and SES). One particular endpoint in my API creates a signed URL so clients can access S3 files; to create the signed URL I use an IAM user with only S3 read access to that bucket.
My issue is that, whenever that endpoint is called the AWS credentials are set using
const awsConfig = {
    region,
    accessKeyId: ${keyofreadonlyuser},
    secretAccessKey: ${secretofreadonlyuser},
};
AWS.config.update(awsConfig);
This way, all subsequent calls to the AWS SDK will use those credentials, resulting in an Access Denied error.
I've tried setting accessKeyId: null and secretAccessKey: null and then calling AWS.config.update, but the credentials are not cleared.
What is the best way to handle situations like that?
I would recommend that, instead of updating the default config, you use two boto3 Session objects:
the default, implicitly-created session, that's associated with the assumed IAM role
an explicitly-created session, that's associated with the IAM user credentials
Specifically for the 2nd use case, pass the IAM user credentials to the session constructor.
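The question itself uses the AWS SDK for JavaScript rather than boto3, but the same idea applies there: construct a dedicated S3 client that carries the read-only IAM user's credentials and use it only for presigning, while leaving the global config (and therefore the instance role) untouched. A minimal sketch, with placeholder variable names for the read-only user's keys:
const AWS = require('aws-sdk');

// Clients built from the default config keep using the EC2 instance role.
const sqs = new AWS.SQS({region: 'us-east-1'});

// A dedicated client, used only for generating signed URLs, carries the
// read-only IAM user's credentials (placeholder variable names).
const presignS3 = new AWS.S3({
    region: 'us-east-1',
    accessKeyId: keyOfReadOnlyUser,
    secretAccessKey: secretOfReadOnlyUser
});

const signedUrl = presignS3.getSignedUrl('getObject', {
    Bucket: 'my-bucket',
    Key: 'some/object.png',
    Expires: 900 // seconds
});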

Upload a file from a form to an S3 bucket using boto3, with the handler created in Lambda

I want to upload small image and audio files from a form to S3, using Postman for testing. I successfully uploaded files to an AWS S3 bucket from my application running on my local machine. Following is the part of the code I used for file uploading.
import os
import uuid
import boto3

s3_client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)

async def save_file_static_folder(file, endpoint, user_id):
    _, ext = os.path.splitext(file.filename)
    raw_file_name = f'{uuid.uuid4().hex}{ext}'
    # Save image file in folder
    if ext.lower() in image_file_extension:
        relative_file_folder = user_id + '/' + endpoint
        contents = await file.read()
        try:
            response = s3_client.put_object(Bucket=S3_BUCKET_NAME, Key=relative_file_folder + '/' + raw_file_name, Body=contents)
        except:
            return FileEnum.ERROR_ON_INSERT
I called this function from another endpoint, and form data (e.g. name, date of birth and other details) is successfully saved in the MongoDB database while files are uploaded to the S3 bucket.
The app uses FastAPI, and files are uploaded to the S3 bucket when the app runs on my local machine.
The same app is deployed to AWS Lambda with an S3 bucket as storage. To handle the whole app, the following is added in the endpoint file.
handler = Mangum(app)
After deploying the app to AWS (the Lambda function was created from the AWS root user account), files did not get uploaded to the S3 bucket.
If I don't provide files in the form, the AWS API endpoint works successfully: form data gets stored in the MongoDB database (MongoDB Atlas) and the app works fine hosted on Lambda.
The app deployed via Lambda works successfully except for file uploads from the form. On my local machine, file uploads to S3 succeed.
EDIT
While tracing in CloudWatch I got the following error:
exception An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
I checked the AWS Access Key Id and secret key many times; they are correct, and they are the root user's credentials.
It looks like you have configured your Lambda function with an execution IAM role, but you are overriding the AWS credentials supplied to the boto3 SDK here:
s3_client = boto3.client('s3',aws_access_key_id =AWS_ACCESS_KEY_ID,aws_secret_access_key = AWS_SECRET_ACCESS_KEY,)
You don't need to provide credentials explicitly because the boto3 SDK (and all language SDKs) will automatically retrieve credentials dynamically for you. So, ensure that your Lambda function is configured with the correct IAM role, and then change your code as follows:
s3_client = boto3.client('s3')
As an aside, you indicated that you may be using AWS root credentials. It's generally a best security practice in AWS to not use root credentials. Instead, create IAM roles and IAM users.
We strongly recommend that you do not use the root user for your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks.

AWS Lambda credentials from the execution environment do not have the execution role's permissions

I am deploying an AWS lambda function (with nodejs 12.x) that executes an AWS command ("iot:attachPrincipalPolicy") when invoked. I'm taking the credentials to run this command from the lambda execution environment variables.
const AWS = require('aws-sdk/global');
const region = process.env['AWS_REGION'];
const accessKeyId = process.env['AWS_ACCESS_KEY_ID'];
const secretAccessKey = process.env['AWS_SECRET_ACCESS_KEY'];
AWS.config.region = region;
AWS.config.credentials = new AWS.Credentials(accessKeyId, secretAccessKey);
// attachPrincipalPolicy command from the AWS SDK here
When I test the function locally (with sam local start-api) it runs successfully, because in my AWS CLI I have set the ACCESS_KEY_ID and secret of my administrator account.
However, when I deploy the function and invoke it, the Lambda fails on that command with a client error (the credentials are not valid), even when I also give full admin access to the Lambda's execution role.
Here I gave full permissions in an inline policy and I also explicitly added the pre-defined admin access policy too.
I expected the AWS_ACCESS_KEY_ID that you get from the environment variables to grant me all the permissions that I have set in the Lambda function's execution role, but it looks like the privileges that I grant to the execution role are not reflected in these credentials.
Is my assumption wrong? Where do these credentials come from and how can I find out what they allow me to do?
The Lambda execution runtime will provide your function invocation with a temporary session token (not a persistent/permanent access key / secret access key).
Behind the scenes, the Lambda service uses AWS Security Token Service (AWS STS) to assume the Lambda execution role of your Lambda function. This is why you must also add the Lambda service principal as a trusted service principal in the trust policy of your execution role. The result is a temporary session.
The credentials for this temporary session are exposed through a combination of the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN. Your code builds AWS.Credentials from only the access key and secret key and drops AWS_SESSION_TOKEN, which is why the resulting credentials are rejected.
You should however not need to configure/specify any credentials manually, as the default credentials loader chain in the AWS SDK takes care of this automatically.
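In practice that means the manual credential wiring in the question can simply be dropped; a minimal sketch, with placeholder policy and principal values:
const AWS = require('aws-sdk');

// No AWS.config.credentials assignment: the default provider chain reads
// AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN that the
// Lambda runtime injects for the execution role.
const iot = new AWS.Iot({region: process.env.AWS_REGION});

exports.handler = async (event) => {
    await iot.attachPrincipalPolicy({
        policyName: 'my-iot-policy',      // placeholder
        principal: event.certificateArn   // placeholder
    }).promise();
    return {statusCode: 200};
};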

Need help with AWS Lambda

I am working on an issue where I need Lambda to write logs to an S3 bucket, but the tricky part is that Lambda must read the logs and write them to another S3 bucket that is in another AWS account. Can we achieve this?
I wrote some code but it isn't working.
from urllib.request import urlopen
import boto3
import os
import time
BUCKET_NAME = '***'
CSV_URL = f'***'
def lambda_handler(event, context):
    response = urlopen(CSV_URL)
    s3 = boto3.client('s3')
    s3.upload_fileobj(response, BUCKET_NAME, time.strftime('%Y/%m/%d'))
    response.close()
It sounds like you are asking how to allow the Lambda function to create an object in an Amazon S3 bucket that belongs to a different AWS Account.
Bucket Policy on target bucket
The simplest method is to ask the owner of the target bucket (that is, somebody with Admin permissions in that other AWS Account) to add a Bucket Policy that permits PutObject access to the IAM Role being used by the AWS Lambda function. You will need to supply them with the ARN of the IAM Role being used by the Lambda function.
Also, make sure that the IAM Role has been given permission to write to the target bucket. Please note that two sets of permissions are required: the IAM Role needs to be allowed to write to the bucket in the other account, AND the bucket needs to permit access by the IAM Role. This double set of permissions is required because both accounts need to permit the access.
It is possible that you might need to grant some additional permissions, such as PutObjectACL.
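As an illustration, a bucket policy along these lines (with placeholder ARNs) could be attached by an administrator of the target account; it is shown here being applied with the JavaScript SDK, though adding it through the console works just as well:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Placeholder ARNs: the role is the Lambda function's execution role,
// the bucket belongs to the other AWS account.
const policy = {
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Principal: {AWS: 'arn:aws:iam::111111111111:role/my-lambda-execution-role'},
        Action: ['s3:PutObject', 's3:PutObjectAcl'],
        Resource: 'arn:aws:s3:::target-bucket/*'
    }]
};

s3.putBucketPolicy({
    Bucket: 'target-bucket',
    Policy: JSON.stringify(policy)
}).promise();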
Assuming an IAM Role from the target account
An alternative method (instead of using the Bucket Policy) is:
Create an IAM Role in the target account and give it permission to access the bucket
Grant trust permissions so that the IAM Role used by the Lambda function is allowed to 'Assume' the IAM Role in the target account
Within the Lambda function, use the AssumeRole() API call to obtain credentials from the target account
Use those credentials when connecting to S3, which will allow you to access the bucket in the other account
Frankly, creating the Bucket Policy is a lot easier.
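For completeness, here is a minimal sketch of the AssumeRole approach using the AWS SDK for JavaScript (the role ARN, bucket and key are placeholders; the boto3 equivalent is sts.assume_role followed by a client built from the returned credentials):
const AWS = require('aws-sdk');
const sts = new AWS.STS();

exports.handler = async (event) => {
    // 1. Assume the role in the target account (placeholder ARN).
    const assumed = await sts.assumeRole({
        RoleArn: 'arn:aws:iam::222222222222:role/cross-account-s3-writer',
        RoleSessionName: 'lambda-cross-account-upload'
    }).promise();

    // 2. Build an S3 client from the temporary credentials returned by STS.
    const s3 = new AWS.S3({
        accessKeyId: assumed.Credentials.AccessKeyId,
        secretAccessKey: assumed.Credentials.SecretAccessKey,
        sessionToken: assumed.Credentials.SessionToken
    });

    // 3. Write to the bucket in the other account (placeholder names).
    await s3.putObject({
        Bucket: 'target-bucket',
        Key: 'logs/example.csv',
        Body: 'example contents'
    }).promise();
};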

Can't access Glacier using AWS CLI

I'm trying to access AWS Glacier (from the command line on Ubuntu 14.04) using something like:
aws glacier list-vaults -
rather than
aws glacier list-vaults --account-id 123456789
The documentation suggests that this should be possible:
You can specify either the AWS Account ID or optionally a '-', in which case Amazon Glacier uses the AWS Account ID associated with the credentials used to sign the request.
Unless "credentials used to sign the request" means that I have to explicitly include credentials in the command, rather than rely on my .aws/credentials file, I would expect this to work. Instead, I get:
aws: error: argument --account-id is required
Does anyone have any idea how to solve this?
The - is supposed to be passed as the value of --account-id, like so:
aws glacier list-vaults --account-id -
--account-id is in fact a required option.
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/glacier/list-vaults.html
It says that "--account-id" is a required parameter for the glacier section of the full AWS API. A little weird, but documented. So yay.
