AWS Lambda credentials from the execution environment do not have the execution role's permissions - node.js

I am deploying an AWS lambda function (with nodejs 12.x) that executes an AWS command ("iot:attachPrincipalPolicy") when invoked. I'm taking the credentials to run this command from the lambda execution environment variables.
const AWS = require('aws-sdk/global');
const region = process.env['AWS_REGION'];
const accessKeyId = process.env['AWS_ACCESS_KEY_ID'];
const secretAccessKey = process.env['AWS_SECRET_ACCESS_KEY'];
AWS.config.region = region;
AWS.config.credentials = new AWS.Credentials(accessKeyId, secretAccessKey);
// attachPrincipalPolicy command from the AWS SDK here
When I test the function locally (with sam local start-api) it runs successfully, because in my AWS CLI I have set the ACCESS_KEY_ID and secret of my administrator account.
However, when I deploy the function and invoke it, the Lambda fails on that command with a client error (the credentials are not valid), even when I also give full admin access to the Lambda's execution role.
Here I gave full permissions in an inline policy, and I also explicitly attached the pre-defined admin access policy.
I expected the AWS_ACCESS_KEY_ID that you get from the environment variables to grant me all the permissions that I have set in the Lambda function's execution role, but it looks like the privileges that I grant to the execution role are not reflected in these credentials.
Is my assumption wrong? Where do these credentials come from and how can I find out what they allow me to do?

The Lambda runtime provides your function invocation with temporary session credentials, not a persistent/permanent access key and secret access key.
Behind the scenes, the Lambda service uses the AWS Security Token Service (AWS STS) to assume your function's execution role. This is why you must also add the Lambda service principal as a trusted principal in the trust policy of your execution role. The result of this is a temporary session.
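Such a trust policy typically takes the standard form for Lambda:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "lambda.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}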
The credentials for this temporary session are stored in the combination of the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN. Your code fails because it reads only the first two: without the session token, the temporary key pair is not valid on its own.
However, you should not need to configure or specify any credentials manually, as the default credential provider chain in the AWS SDK takes care of this automatically.
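A minimal sketch of the fix, assuming the handler only needs the attachPrincipalPolicy call (the policy name and the principal are placeholders):
// Let the SDK's default provider chain pick up the temporary session
// credentials (including AWS_SESSION_TOKEN) automatically.
const AWS = require('aws-sdk');

const iot = new AWS.Iot({ region: process.env.AWS_REGION }); // no explicit credentials

exports.handler = async (event) => {
  await iot.attachPrincipalPolicy({
    policyName: 'my-iot-policy', // placeholder
    principal: event.principal   // placeholder
  }).promise();
};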


NodeJS Aws sdk, can't unset credentials

I have a NodeJS application that runs on an EC2 instance and serves an API to my customers. The EC2 instance has an instance role that grants the minimum permissions the application needs (I need SQS, S3 read and write, and SES). One particular endpoint in my API creates a signed URL so that clients can access S3 files, and to create the signed URL I use an IAM user with only S3 read access to that bucket.
My issue is that whenever that endpoint is called, the AWS credentials are set using
const awsConfig = {
  region,
  accessKeyId: ${keyofreadonlyuser},
  secretAccessKey: ${secretofreadonlyuser},
};
AWS.config.update(awsConfig);
This way, all subsequent calls to the AWS SDK will use those credentials, resulting in an Access Denied error.
I've tried to set accessKeyId: null, secretAccessKey: null and then call AWS.config.update, but the credentials are not cleared.
What is the best way to handle situations like this?
I would recommend that instead of updating the default config, you use two separate sets of credentials:
the default, implicitly-resolved credentials, which come from the instance's IAM role
an explicitly-created credentials object for the read-only IAM user
Specifically for the 2nd case, pass the IAM user credentials directly to the constructor of a dedicated service client, as shown in the sketch below, rather than to the global AWS.config.
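A sketch of this with the AWS SDK for JavaScript v2 (the key and secret variables, the bucket, and the key are illustrative placeholders):
const AWS = require('aws-sdk');

// Default client: credentials are resolved from the instance role.
const sqs = new AWS.SQS({ region: 'us-east-1' });

// Dedicated client for the signed-URL endpoint only; the global
// AWS.config is never touched, so all other SDK calls keep using
// the instance role.
const s3ReadOnly = new AWS.S3({
  region: 'us-east-1',
  credentials: new AWS.Credentials(readOnlyKeyId, readOnlySecret) // placeholders
});

const url = s3ReadOnly.getSignedUrl('getObject', {
  Bucket: 'my-bucket', // placeholder
  Key: 'file.txt',     // placeholder
  Expires: 900
});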

Azure SDK use CLI Creds or Managed Identity

When working with AWS, if you use aws configure to log in, you can use the AWS SDK from your local machine in any programming language without exposing credentials. If anything runs inside AWS later (Lambda, EC2, whatever), the exact same code uses the resource's assigned IAM role without any configuration.
I am trying to get the same to work with Azure; I thought that Azure.Identity.DefaultAzureCredential does this. But I can't even run my code locally:
var blobServiceClient = new BlobServiceClient(storageUri, new DefaultAzureCredential());
var containerClient = await blobServiceClient.CreateBlobContainerAsync("test-container");
How can I get a BlobServiceClient that authenticates using the CLI credentials on my local machine, and a managed identity when running inside an App Service?
In your scenario, DefaultAzureCredential is the best choice along with BlobServiceClient, but in the version used here it does not use CLI credentials to authenticate (note that newer releases of Azure.Identity have since added Azure CLI credentials to the DefaultAzureCredential chain).
To make it work, just set the environment variables AZURE_CLIENT_ID, AZURE_TENANT_ID and AZURE_CLIENT_SECRET with the values of your service principal. In Azure, it uses the MSI (managed identity) to authenticate.
If you want to use CLI credentials to authenticate, there is AzureServiceTokenProvider. It can also access Azure Storage, but you cannot use it along with BlobServiceClient; you need to get an access token for the resource https://storage.azure.com:
var azureServiceTokenProvider2 = new AzureServiceTokenProvider();
string accessToken = await azureServiceTokenProvider2.GetAccessTokenAsync("https://storage.azure.com").ConfigureAwait(false);
then use the access token to call the Storage REST API. I think the first option is more convenient; which one to use is up to you.

How to keep google-cloud-auth.json securely in app.yaml as an environmental variable?

I'm new to deployment/securing keys, and I'm not sure how to securely store the google-cloud-auth.json (the auth file required for creating the API client) outside of source code to prevent leaking credentials.
I've currently secured my API keys and tokens in my app.yaml file by specifying them as environment variables, which works as expected, as shown below.
runtime: nodejs10
env_variables:
  SECRET_TOKEN: "example"
  SECRET_TOKEN2: "example2"
However, my google-cloud-auth.json is kept as its own file, since the parameter used for creating the client requires a path string.
const {BigQuery} = require('@google-cloud/bigquery');
...
const file = "./google-cloud-auth.json";
// Creates a BigQuery client
const bigquery = new BigQuery({
  projectId: projectId,
  datasetId: datasetId,
  tableId: tableId,
  keyFilename: file
});
According to the Setting Up Authentication for Server to Server Production Applications:
GCP client libraries will make use of the ADC (Application Default Credentials) to find the credentials meant to be used by the app.
What ADC does is basically to check if the GOOGLE_APPLICATION_CREDENTIALS env variable is set with the path to a service account file.
In case the env variable is not set, ADC will use the default service account provided by App Engine.
With this information I can suggest a couple of solutions to provide these credentials safely:
If you need to use a specific service account, set GOOGLE_APPLICATION_CREDENTIALS to the path of its key file. This section explains how to do that.
If you are not a fan of moving credential files around, I would suggest trying the default service account provided by App Engine.
I just created a new project and deployed a basic app by mixing these 2 guides:
BigQuery Client Libraries
Quickstart for Node.js in the App Engine Standard Environment
My app.yaml had nothing more than the runtime: nodejs10 line, and I was still able to query through the BigQuery client library, using the default service account.
This account comes with the Project/Editor role and you can add any additional roles you need.
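For illustration, a minimal sketch of a client that relies purely on ADC, with no key file passed (the project ID is a placeholder):
const {BigQuery} = require('@google-cloud/bigquery');

// No keyFilename here: the client falls back to Application Default
// Credentials, i.e. the GOOGLE_APPLICATION_CREDENTIALS file if set,
// otherwise the App Engine default service account.
const bigquery = new BigQuery({ projectId: 'my-project-id' }); // placeholder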

What and how to pass credentials using the Python Client Library for the GCP compute API

I want to get a list of all instances in a project using the Python Google client API (google-api-python-client==1.7.11).
I am trying to connect using the method googleapiclient.discovery.build; this method requires credentials as an argument.
I read the documentation but did not understand the credential format and which credentials it requires.
Can anyone explain which credentials are needed and how to pass them to make the GCP connection?
The credentials that you need are called "Service Account JSON Key File". These are created in the Google Cloud Console under IAM & Admin / Service Accounts. Create a service account and download the key file. In the example below this is service-account.json.
Example code that uses a service account:
from googleapiclient import discovery
from google.oauth2 import service_account

scopes = ['https://www.googleapis.com/auth/cloud-platform']
sa_file = 'service-account.json'
zone = 'us-central1-a'
project_id = 'my_project_id'  # Project ID, not Project Name

credentials = service_account.Credentials.from_service_account_file(sa_file, scopes=scopes)

# Create the Cloud Compute Engine service object
service = discovery.build('compute', 'v1', credentials=credentials)

request = service.instances().list(project=project_id, zone=zone)
while request is not None:
    response = request.execute()
    for instance in response['items']:
        # TODO: Change code below to process each `instance` resource:
        print(instance)
    request = service.instances().list_next(previous_request=request, previous_response=response)
Application Default Credentials are supported by the Google API client libraries automatically. There you can find an example using Python; also check this documentation: Setting Up Authentication for Server to Server Production Applications.
According to the most recent GCP documentation:
we recommend you use Google Cloud Client Libraries for your application. Google Cloud Client Libraries use a library called Application Default Credentials (ADC) to automatically find your service account credentials
In case you still want to set it manually, you can first create a service account and give it all the necessary permissions:
# A name for the service account you are about to create:
export SERVICE_ACCOUNT_NAME=your-service-account-name
# Create the service account:
gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME} --display-name="Service Account for ai-platform-samples repo"
# Grant the required roles:
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/ml.developer
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.objectAdmin
# Download the service account key and store it in the file specified by GOOGLE_APPLICATION_CREDENTIALS:
gcloud iam service-accounts keys create ${GOOGLE_APPLICATION_CREDENTIALS} --iam-account ${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
Once that's done, check whether the ADC path has been set properly:
echo $GOOGLE_APPLICATION_CREDENTIALS
Having set the ADC path, you don't need to load the service account key from code (which would be undesirable anyway), so the code looks as follows:
service = googleapiclient.discovery.build(<API>, <version>, cache_discovery=False)

AWS lambda nodejs function without using accessKeyId and secretAccessKey?

Working on a function, I've used the aws-sdk, as suggested, which requires accessKeyId and secretAccessKey.
I'm wondering, since I assigned a role to the function and that role has a set of permissions, is there a way to use the permissions of the role to download/upload from/to a bucket, thereby not putting the credentials in the code?
If you assign an appropriate role with the necessary access to the AWS Lambda function, then you don't need any accessKey and secretKey.
Taken from the AWS documentation page:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/nodejs-write-lambda-function-example.html
Configuring the SDK
Here is the portion of the Lambda function that configures the SDK. The credentials are not provided in the code because they are supplied to a Lambda function through the required IAM execution role.
var AWS = require('aws-sdk');
AWS.config.update({region: 'us-west-2'});
Basically, you shouldn't need to specify the access key and secret when providing an IAM execution role.
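For instance, a minimal sketch of an S3 download that relies purely on the execution role (the bucket and key are placeholders):
var AWS = require('aws-sdk');
AWS.config.update({region: 'us-west-2'});

var s3 = new AWS.S3(); // credentials come from the execution role

exports.handler = async (event) => {
  var data = await s3.getObject({
    Bucket: 'my-bucket',  // placeholder
    Key: 'my-file.txt'    // placeholder
  }).promise();
  return data.Body.toString();
};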
