NodeJS Aws sdk, can't unset credentials - node.js

I have a NodeJS application that runs on an EC2 instance and serves an API to my customers. The EC2 instance has an Instance Role that grants the minimum permissions the application needs to access services (I need SQS, S3 read and write, and SES). One particular endpoint in my API creates a signed URL so that clients can access S3 files, and to create the signed URL I use an IAM user with only S3 read access to that bucket.
My issue is that, whenever that endpoint is called, the AWS credentials are set using
const awsConfig = {
  region,
  accessKeyId: readOnlyUserKey,        // key of the read-only IAM user
  secretAccessKey: readOnlyUserSecret, // secret of the read-only IAM user
};
AWS.config.update(awsConfig);
This way, all subsequent calls to the AWS SDK use those credentials, resulting in an Access Denied error.
I've tried setting accessKeyId: null and secretAccessKey: null and then calling AWS.config.update, but the credentials are not cleared.
What is the best way to handle situations like this?

I would recommend that instead of updating the default config, you use two separate sets of credentials:
the default, implicitly resolved credentials, associated with the assumed IAM role
an explicitly configured service client, associated with the IAM user credentials
Specifically, for the 2nd use case, pass the IAM user credentials to the service client's constructor rather than to the global config.
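In Node terms, the same idea is to scope the IAM user's keys to one client and leave the global config untouched. A minimal dependency-free sketch of the pattern (all identifiers and key values here are illustrative placeholders, not real keys or SDK internals):

```javascript
// Global config stays role-based: never write user keys into it.
const globalConfig = { region: 'eu-west-1' }; // instance-role credentials are resolved implicitly

// Build a per-client config by copying, not mutating, the global one.
// With aws-sdk v2 the equivalent is: new AWS.S3({ accessKeyId, secretAccessKey })
function makeClientConfig(overrides = {}) {
  return { ...globalConfig, ...overrides };
}

const defaultClientConfig = makeClientConfig(); // uses the instance role
const readOnlyClientConfig = makeClientConfig({
  accessKeyId: 'READONLY-KEY-ID',     // placeholder for the read-only user's key
  secretAccessKey: 'READONLY-SECRET', // placeholder for the read-only user's secret
});
```

Because every call that needs the read-only user goes through its own client, nothing ever has to be "unset" afterwards.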

Related

Upload a file from form in S3 bucket using boto3 and handler is created in lambda

I want to upload image and audio files of small size from a form to S3, using Postman for testing. I successfully uploaded files to an AWS S3 bucket from my application running on my local machine. Following is the part of the code I used for file uploading.
import os
import uuid

import boto3

s3_client = boto3.client(
    's3',
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
)

async def save_file_static_folder(file, endpoint, user_id):
    _, ext = os.path.splitext(file.filename)
    raw_file_name = f'{uuid.uuid4().hex}{ext}'
    # Save image file in folder
    if ext.lower() in image_file_extension:
        relative_file_folder = user_id + '/' + endpoint
        contents = await file.read()
        try:
            response = s3_client.put_object(
                Bucket=S3_BUCKET_NAME,
                Key=relative_file_folder + '/' + raw_file_name,
                Body=contents,
            )
        except Exception:
            return FileEnum.ERROR_ON_INSERT
I called this function from another endpoint, and the form data (e.g. name, date of birth and other details) is successfully saved in the MongoDB database and files are uploaded to the S3 bucket.
The app uses FastAPI, and files are uploaded to the S3 bucket when the app runs on my local machine.
The same app is deployed to AWS Lambda with an S3 bucket as storage. For handling the whole app, the following is added in the endpoint file.
handler = Mangum(app)
After deploying the app to AWS, creating the Lambda function from the AWS root user account, files did not get uploaded to the S3 bucket.
If I do not provide files with the form, then the AWS API endpoint works successfully: form data gets stored in the MongoDB database (MongoDB Atlas) and the app works fine hosted on Lambda.
The app deployed via the Lambda function works successfully except for file uploads in the form. On my local machine, file uploads to S3 succeed.
EDIT
While tracing in CloudWatch I got the following error:
exception An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
I checked the AWS Access Key Id and secret key many times and they are correct; they are the root user's credentials.
It looks like you have configured your Lambda function with an execution IAM role, but you are overriding the AWS credentials supplied to the boto3 SDK here:
s3_client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
You don't need to provide credentials explicitly because the boto3 SDK (like all the language SDKs) retrieves credentials dynamically for you. So, ensure that your Lambda function is configured with the correct IAM role, and then change your code as follows:
s3_client = boto3.client('s3')
As an aside, you indicated that you may be using AWS root credentials. It is generally a security best practice in AWS not to use root credentials. Instead, create IAM roles and IAM users.
We strongly recommend that you do not use the root user for your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks.

AWS Lambda credentials from the execution environment do not have the execution role's permissions

I am deploying an AWS Lambda function (Node.js 12.x) that executes an AWS API call ("iot:attachPrincipalPolicy") when invoked. I'm taking the credentials to run this call from the Lambda execution environment variables.
const AWS = require('aws-sdk/global');
const region = process.env['AWS_REGION'];
const accessKeyId = process.env['AWS_ACCESS_KEY_ID'];
const secretAccessKey = process.env['AWS_SECRET_ACCESS_KEY'];
AWS.config.region = region;
AWS.config.credentials = new AWS.Credentials(accessKeyId, secretAccessKey);
// attachPrincipalPolicy command from the AWS SDK here
When I test the function locally (with sam local start-api) it runs successfully, because in my AWS CLI I have set the ACCESS_KEY_ID and secret of my administrator account.
However, when I deploy the function and invoke it, the Lambda fails on that command with a client error (the credentials are not valid), even when I give full admin access to the Lambda's execution role.
Here I gave full permissions in an inline policy, and I also explicitly added the pre-defined admin access policy.
I expected the AWS_ACCESS_KEY_ID that you get from the environment variables to grant me all the permissions that I have set in the Lambda function's execution role, but it looks like the privileges that I grant to the execution role are not reflected in these credentials.
Is my assumption wrong? Where do these credentials come from and how can I find out what they allow me to do?
The Lambda execution runtime will provide your function invocation with a temporary session token (not a persistent/permanent access key / secret access key).
Behind the scenes, the Lambda service uses the AWS Security Token Service (AWS STS) to assume the Lambda execution role of your Lambda function. This is why you must also add the Lambda service principal as a trusted service principal in the trust policy of your execution role. The result of this is a temporary session.
The credentials for this temporary session are stored in a combination of the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
You should, however, not need to configure or specify any credentials manually, as the default credential provider chain in the AWS SDK takes care of this automatically.
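As an illustration of why all three variables matter, here is a small dependency-free sketch (credentialsFromEnv is a hypothetical helper for illustration, not an SDK API) of assembling credentials from the environment:

```javascript
// Temporary session credentials are only valid as a trio: rebuilding them
// from the key id and secret alone (as in the question's code) omits the
// session token, so AWS rejects the credentials.
function credentialsFromEnv(env) {
  const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN } = env;
  if (!AWS_ACCESS_KEY_ID || !AWS_SECRET_ACCESS_KEY) return null;
  return {
    accessKeyId: AWS_ACCESS_KEY_ID,
    secretAccessKey: AWS_SECRET_ACCESS_KEY,
    sessionToken: AWS_SESSION_TOKEN, // undefined only for long-lived IAM user keys
  };
}
```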

AWS lambda nodejs function without using accessKeyId and secretAccessKey?

Working on a function, I've used the aws-sdk, as suggested, which requires an accessKeyId and secretAccessKey.
I'm wondering, since I assigned a role to the function and that role has a set of permissions, is there a way to use the role's permissions to download/upload from/to a bucket, thereby not putting the credentials in the code?
If you assign an appropriate role to the AWS Lambda function with the necessary access, then you don't need any accessKey and secretKey.
Taken from the aws documentation page
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/nodejs-write-lambda-function-example.html
Configuring the SDK
Here is the portion of the Lambda function that configures the SDK. The credentials are not provided in the code because they are supplied to a Lambda function through the required IAM execution role.
var AWS = require('aws-sdk');
AWS.config.update({region: 'us-west-2'});
Basically, you shouldn't need to specify the access key and secret when an IAM execution role is provided.

S3 Pre-signed URL with custom endpoint via API Gateway, MethodNotAllowed

I'm attempting to use a pre-signed url for an S3 bucket with a custom endpoint. I seem so close, but I keep getting a Method Not Allowed error. Here's where I'm at.
I have an API Gateway which connects an endpoint to a Lambda function. That function, among other things, generates a pre-signed url. Like so,
var s3 = new AWS.S3({
  endpoint: 'custom.domain.com/upload',
  s3BucketEndpoint: true,
  signatureVersion: 'v4'
});
//...
s3.getSignedUrl('putObject', {
  ACL: 'bucket-owner-full-control',
  Bucket: process.env.S3_BUCKET_NAME,
  ContentType: "image/png",
  Key: asset.id + ".png"
});
This code successfully returns a url with what appears to be all the correct query params, correct key name, and the url is pointing to my endpoint. When attempting to upload however, I receive the following error:
<Error>
  <Code>MethodNotAllowed</Code>
  <Message>The specified method is not allowed against this resource.</Message>
  <Method>PUT</Method>
  <ResourceType>SERVICE</ResourceType>
  <RequestId>[request id was here]</RequestId>
  <HostId>[host id was here]</HostId>
</Error>
If I remove my custom endpoint declaration from my S3 config, I receive a standard domain prefixed pre-signed url and the upload works fine.
Other notes on my setup.
I have configured the /upload resource on API Gateway to be an S3 passthrough for the PUT method.
I have enabled CORS where needed. On the bucket and on my API. I have confirmed CORS is good, as the browser passes checks.
I have setup my policies. The lambda function has access to the internet from my VPC, it has full S3 access, and it has a trust relationship with both S3 and API Gateway. This execution role is shared amongst the resources.
I am using the axios package to upload the file via PUT.
I have added a CloudTrail log, but it reports the exact same error as the browser...
Temporarily making my bucket public makes no difference.
I've attempted to add the query strings to the API Gateway Request/Response integrations without success.
I've added the necessary content type headers to the request and to the pre-signed url config.
I Googled the heck out of this thing.
Am I missing something? Is this possible? I plan to disable my custom endpoint and move forward with the default pre-signed url for the time being, but long term, I would like the custom endpoint. Worst case I may pay for some AWS support.
Thanks for the help!
I can't find documentation stating that a presigned URL supports a proxy (alternate/custom domain). IMO, the way to authenticate and grant requests access to AWS resources from an API Gateway (regardless of whether you are proxying S3) would be to use an API Gateway Lambda authorizer, allowing the request to assume an IAM role that has access to the AWS resources (in this case, PutObject on an S3 bucket).
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html

Unable to access S3 file with IAM role from EC2

I created an IAM role 'test' and assigned it to an EC2 instance. And I created an S3 bucket with this bucket policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1475837721706",
  "Statement": [
    {
      "Sid": "Stmt1475837720370",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::770370070203:role/test"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-role-123/*"
    }
  ]
}
From EC2, I got the AccessKey and SecretKey, following this AWS article, by sending a curl request to
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
Using the response from the above, I wrote a node script to make a request to the resource in the bucket
var AWS = require('aws-sdk');

var d = {
  "Code" : "Success",
  "LastUpdated" : "2016-10-07T12:28:09Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAIMJBHYLH6GWOWNMQ",
  "SecretAccessKey" : "7V/k5nvFdhXOcT+nhYjGqHM4QmUWjNBUM1ERJQJs",
  "Token" : "FQoDYXdzEO7//////////wEaDGG+SgxD4Es4Z1RBZCKzAz855JuKfm8s7LDcP5T9TGvDdJELsYTzPi47HJ9Q5oaK8OTb0Us0RjvpGW278Mb1gg1dNip1VD2N/GW5/1TFC6xhNpnnZ9+LNkJAwVVZg5raGM91k56X/VOA++/5WivSpO4jWg8fZDibivVyHuoMJJTkurFtEXrweDOCqpiabypTCc5jFtX8NfQuHubwl4C1jp2pMasVS1jwhjU72TA8Pn9EsIIvh8JXDC1dVfppwnslolAeJyOOAHdL1AQSs3nI6IvPCtKhBjtDaVuoiH/lHrnKrw6AeMHoTYQay4wOYRnE4ffngtksekZEULXvERWE4NCs3leXGMqrdzOr8xdZ9m0j3IkshqSS56fkq6E9JtLhSVGyy44ELrL7kYW/dpHE03V+dwQPXMhRafjsVsPD7sUnBfH/+4yyL0VDX1vlFRKbRi50i/Eqvxsb9bcSTsE0W5yWmOWR8reTTYWcWyQXGvxKVYVxLWZKVRfmNfx6IX2sqan7e7pjCtUrqXB1TBMpXdy8KSH9qoJtNAQTYBXws7oFLYY+F2esnNCma0bdNcCeAQ6t/6aPfUdpdLgv8BcGciZxayiqqd6/BQ==",
  "Expiration" : "2016-10-07T18:51:57Z"
};

AWS.config.accessKeyId = d.AccessKeyId;
AWS.config.secretAccessKey = d.SecretAccessKey;
// Note: d.Token is never set as AWS.config.sessionToken, so the temporary
// credentials are incomplete.
AWS.config.region = 'ap-south-1';

var s3params = { Key: "test.json", Bucket: "test-role-123" };
var s3 = new AWS.S3();
s3.getSignedUrl('getObject', s3params, function(err, url) {
  console.log(url);
});
On running this code I get the signed URL, but it gives an InvalidAccessKeyId error. I suspected the S3 bucket policy was wrong, so I tried a similar policy with IAM user credentials instead, and that works completely.
Any hints or suggestions are welcome.
There are three things to note:
How credentials are provided and accessed from an Amazon EC2 instance
How to assign permissions for access to Amazon S3
How Pre-Signed URLs function
1. How credentials are provided and accessed from an Amazon EC2 instance
When an Amazon EC2 instance is launched with an IAM Role, the Instance Metadata automatically provides temporary access credentials consisting of an Access Key, Secret Key and Token. These credentials are rotated approximately every six hours.
Any code that uses an AWS SDK (eg Python, Java, PHP) knows how to automatically retrieve these credentials. Therefore, code running on an Amazon EC2 instance that was launched with an IAM role does not require you to retrieve nor provide access credentials -- it just works automagically!
So, in your above code sample, you could remove any lines that specifically refer to credentials. Your job is simply to ensure that the IAM Role has sufficient permissions for the operations you wish to perform.
This also applies to the AWS Command-Line Interface (CLI), which is actually just a Python program that provides command-line access to AWS API calls. Since it uses the AWS SDK for Python, it automatically retrieves the credentials from Instance Metadata and does not require credentials when used from an Amazon EC2 instance that was launched with an IAM Role.
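For illustration, the metadata document shown in the question maps onto SDK-style credentials roughly like this (fromInstanceMetadata is a hypothetical helper for the sketch, not an SDK function); note that the Token field must travel along as sessionToken:

```javascript
// Sketch: converting an instance-metadata credential document into the
// { accessKeyId, secretAccessKey, sessionToken } shape SDK clients expect.
function fromInstanceMetadata(doc) {
  if (doc.Code !== 'Success') {
    throw new Error('credential fetch failed: ' + doc.Code);
  }
  return {
    accessKeyId: doc.AccessKeyId,
    secretAccessKey: doc.SecretAccessKey,
    sessionToken: doc.Token, // temporary credentials are invalid without this
    expireTime: new Date(doc.Expiration),
  };
}
```

The SDK does exactly this mapping internally, which is why letting it fetch the credentials is both simpler and less error-prone than hand-copying them.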
2. How to assign permissions for access to Amazon S3
Objects in Amazon S3 are private by default. There are three ways to assign permission to access objects:
Object ACLs (Access Control Lists): These are permissions on the objects themselves
Bucket Policies: This is a set of rules applied to the bucket as a whole, but it can also specify permissions related to a subset of a bucket (eg a particular path within the bucket)
IAM Policies that are applied to IAM Users, Groups or Roles: These permissions apply specifically to those entities
Since you are wanting to grant access to Amazon S3 objects to a specific IAM User, it is better to assign permissions via an IAM Policy attached to that user, rather than being part of the Bucket Policy.
Therefore, you should:
Remove the Bucket Policy
Create an Inline Policy in IAM and attach it to the desired IAM User. The policy then applies to that User and does not require a Principal
Here is a sample policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::MY-BUCKET/*"
      ]
    }
  ]
}
I have recommended an Inline Policy because this policy applies to just one user. If you are assigning permissions to many users, it is recommended to attach the policy to an IAM Group; the Users assigned to that group will then inherit the permissions. Alternatively, create an IAM Policy and then attach that policy to all relevant Users.
3. How Pre-Signed URLs function
Amazon S3 Pre-Signed URLs are a means of granting temporary access to Amazon S3 objects. The generated URL includes:
The Access Key of an IAM User that has permission to access the object
An expiration time
A signature created via a hash operation that authorises the URL
The key point to realise is related to the permissions used when generating the pre-signed URL. As mentioned in the Amazon S3 documentation Share an Object with Others:
Anyone with valid security credentials can create a pre-signed URL. However, in order to successfully access an object, the pre-signed URL must be created by someone who has permission to perform the operation that the pre-signed URL is based upon.
This means that the credentials used when generating the pre-signed URL are also the credentials used as part of the pre-signed URL. The entity associated with those credentials, of course, needs permission to access the object -- the pre-signed URL is merely a means of on-granting access to an object for a temporary period.
What this also means is that, in the case of your example, you do not need to create a specific role for granting access to the object(s) in Amazon S3. Instead, you can use a more permissive IAM Role with your Amazon EC2 instance (for example, one that can also upload objects to S3) but when it generates a pre-signed URL it is only granting temporary access to the object (and not other permissions, such as the upload permission).
If the software running on your Amazon EC2 instance only interacts with AWS to created signed URLs, then your Role that has only GetObject permissions is fine. However, if your instance wants to do more, then create a Role that grants the instance the appropriate permissions (including GetObject access to S3) and generate Signed URLs using that Role.
If you wish to practice generating signed URLs, recent versions of the AWS Command-Line Interface (CLI) include an aws s3 presign s3://path command that can generate pre-signed URLs. Try it with various --profile settings to see how it works with different IAM Users.
