Stop EC2 Instance - node.js

I have a Node.js application running on EC2. After a certain operation, I want to stop the EC2 instance.
I am using this function to stop it:
const AWS = require('aws-sdk');

const stopInstance = () => {
  // set the credentials and region
  AWS.config.update({
    accessKeyId: "MY ACCESS KEY",
    secretAccessKey: "SECRET KEY",
    region: "us-east-1"
  });
  // create an EC2 client
  const ec2 = new AWS.EC2();
  // set up instance params
  const params = {
    InstanceIds: [
      'i-XXXXXXXX'
    ]
  };
  ec2.stopInstances(params, function(err, data) {
    if (err) {
      console.log(err, err.stack); // an error occurred
    } else {
      console.log(data); // successful response
    }
  });
};
When I run it from the EC2 instance, it gives this error:
UnauthorizedOperation: You are not authorized to perform this operation.
But when I run the same code, using the same key and secret, from my local machine, it works perfectly.
Permissions I have

This will come down to the permissions of the IAM user whose credentials are being passed into the script.
Firstly, this error message indicates that an IAM user/role was successfully used in the request but did not have permission for the action, so invalid or missing credentials can be ruled out.
Assuming the key and secret are being passed in successfully (they look hard-coded here), you would be looking at further restrictions within the policy (such as a principal or condition restriction).
If the key and secret are not hard-coded but instead passed in as environment variables, do some debugging to output the string values and validate that they are what you expect. If they are not being passed into the SDK, it may be falling back to an instance role that is attached.
As a point of improvement: when interacting with the AWS SDK/CLI from within AWS (i.e. on an EC2 instance), you should generally use an IAM role rather than an IAM user, as this leaves fewer long-lived API credentials to manage and rotate. An IAM role rotates temporary credentials for you every few hours.
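For illustration, a minimal sketch of the same stop call relying on an attached instance role instead of hard-coded keys (the instance ID is a placeholder, and the role would need ec2:StopInstances):

const AWS = require('aws-sdk');

// No explicit credentials: the SDK's default provider chain resolves the
// instance profile credentials from the EC2 metadata service.
AWS.config.update({ region: 'us-east-1' });

const ec2 = new AWS.EC2();

ec2.stopInstances({ InstanceIds: ['i-XXXXXXXX'] }, (err, data) => {
  if (err) console.log(err, err.stack);
  else console.log(data.StoppingInstances);
});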

If the same credentials are working on your local machine then it's probably not a permission issue, but to further isolate the problem you can run sts get-caller-identity to check which credentials are actually being used.
https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html
If that does not help, create a new user, give it full admin access, and use its credentials to see whether the error goes away. This will confirm whether it is a permission issue or not.
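For illustration, the same check can be made from inside the Node application with the SDK's STS client (a minimal sketch using the v2 SDK already used in the question):

const AWS = require('aws-sdk');

const sts = new AWS.STS();

// Prints the account and ARN of whatever credentials the SDK resolved,
// making it obvious whether the hard-coded keys or an instance role are in use.
sts.getCallerIdentity({}, (err, data) => {
  if (err) console.log(err, err.stack);
  else console.log(data.Account, data.Arn);
});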

Related

MONGODB-AWS authentication with EKS node instance role

I'm using MongoDB Atlas to host my MongoDB database and I want to use the MONGODB-AWS authentication mechanism. When I try it locally with my personal IAM user it works as it should; however, when it runs in production I get the error MongoError: bad auth : aws sts call has response 403. I run my Node.js application inside an AWS EKS cluster and I have added the NodeInstanceRole used in EKS to MongoDB Atlas. I use fromNodeProviderChain() from AWS SDK v3 to get my secret access key and access key id, and have verified that I do get credentials.
Code to get the MongoDB URI:
import { fromNodeProviderChain } from '@aws-sdk/credential-providers'

async function getMongoUri(config) {
  const provider = fromNodeProviderChain()
  const awsCredentials = await provider()
  const accessKeyId = encodeURIComponent(awsCredentials.accessKeyId)
  const secretAccessKey = encodeURIComponent(awsCredentials.secretAccessKey)
  const clusterUrl = config.MONGODB_CLUSTER_URL
  return `mongodb+srv://${accessKeyId}:${secretAccessKey}@${clusterUrl}/?authSource=%24external&authMechanism=MONGODB-AWS`
}
Do I have to add some STS permissions for the node instance role or are the credentials I get from fromNodeProviderChain() not the same as the node instance role?
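One detail worth noting (an assumption beyond the original question, but easy to verify): credentials resolved from a node/instance role are temporary and come with a session token, and MONGODB-AWS needs that token passed along as well. A sketch of the same URI builder forwarding it could look like:

import { fromNodeProviderChain } from '@aws-sdk/credential-providers'

async function getMongoUriWithToken(config) {
  const provider = fromNodeProviderChain()
  const { accessKeyId, secretAccessKey, sessionToken } = await provider()
  const user = encodeURIComponent(accessKeyId)
  const pass = encodeURIComponent(secretAccessKey)
  const clusterUrl = config.MONGODB_CLUSTER_URL

  let uri = `mongodb+srv://${user}:${pass}@${clusterUrl}/?authSource=%24external&authMechanism=MONGODB-AWS`
  if (sessionToken) {
    // Temporary (role-based) credentials are only valid together with their session token.
    uri += `&authMechanismProperties=AWS_SESSION_TOKEN:${encodeURIComponent(sessionToken)}`
  }
  return uri
}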

NodeJS Aws sdk, can't unset credentials

I have a Node.js application that runs on an EC2 instance and serves an API to my customers. The EC2 instance has an instance role that grants the minimum permissions the application needs (SQS, S3 read and write, and SES). One particular endpoint in my API creates a signed URL so that S3 files can be accessed, and to create the signed URL I use an IAM user with only S3 read access to that bucket.
My issue is that whenever that endpoint is called, the AWS credentials are set using:
const awsConfig = {
  region,
  accessKeyId: keyOfReadOnlyUser,
  secretAccessKey: secretOfReadOnlyUser,
};
AWS.config.update(awsConfig);
This way, all subsequent calls to the AWS SDK use those credentials, resulting in an Access Denied error.
I've tried setting accessKeyId: null and secretAccessKey: null and then calling AWS.config.update, but the credentials are not cleared.
What is the best way to handle situations like this?
I would recommend that instead of updating the default config, you use two separate credential configurations:
the default, implicitly resolved credentials, associated with the attached IAM role
an explicitly created service client, associated with the IAM user credentials
Specifically for the 2nd case, pass the IAM user credentials to the service client's constructor.
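As a sketch with the v2 SDK (bucket, key and environment variable names are placeholders): the pre-signing client gets its own credentials, while every other client keeps using the instance role, so nothing ever has to be unset.

const AWS = require('aws-sdk');

// Placeholder names for however the read-only user's keys are supplied in your setup.
const keyOfReadOnlyUser = process.env.READONLY_ACCESS_KEY_ID;
const secretOfReadOnlyUser = process.env.READONLY_SECRET_ACCESS_KEY;

// Clients created without explicit credentials keep the instance-role
// credentials resolved by the SDK's default chain.
const sqs = new AWS.SQS({ region: 'us-east-1' });

// Only the pre-signing client uses the read-only IAM user; the global
// AWS.config is never touched.
const s3ReadOnly = new AWS.S3({
  region: 'us-east-1',
  credentials: new AWS.Credentials(keyOfReadOnlyUser, secretOfReadOnlyUser),
});

const url = s3ReadOnly.getSignedUrl('getObject', {
  Bucket: 'my-bucket',  // placeholder bucket
  Key: 'some/file.txt', // placeholder key
  Expires: 300,         // seconds
});
console.log(url);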

AWS Lambda credentials from the execution environment do not have the execution role's permissions

I am deploying an AWS Lambda function (Node.js 12.x) that executes an AWS API call ("iot:attachPrincipalPolicy") when invoked. I'm taking the credentials to run this call from the Lambda execution environment variables.
const AWS = require('aws-sdk/global');
const region = process.env['AWS_REGION'];
const accessKeyId = process.env['AWS_ACCESS_KEY_ID'];
const secretAccessKey = process.env['AWS_SECRET_ACCESS_KEY'];
AWS.config.region = region;
AWS.config.credentials = new AWS.Credentials(accessKeyId, secretAccessKey);
// attachPrincipalPolicy command from the AWS SDK here
When I test the function locally (with sam local start-api) it runs successfully, because in my AWS CLI I have set the ACCESS_KEY_ID and secret of my administrator account.
However, when I deploy the function and invoke it, the Lambda fails on that command with a client error (the credentials are not valid), even when I give full admin access to the Lambda's execution role.
Here I gave full permissions in an inline policy and I also explicitly added the pre-defined admin access policy too.
I expected the AWS_ACCESS_KEY_ID that you get from the environment variables to grant me all the permissions that I have set in the Lambda function's execution role, but it looks like the privileges granted to the execution role are not reflected in these credentials.
Is my assumption wrong? Where do these credentials come from and how can I find out what they allow me to do?
The Lambda execution runtime provides your function invocation with temporary session credentials (not a permanent access key / secret access key pair).
Behind the scenes, the Lambda service uses the AWS Security Token Service (AWS STS) to assume the execution role of your Lambda function; this is why you must also add the Lambda service principal as a trusted principal in the trust policy of your execution role. The result is a temporary session.
The credentials for this temporary session are stored in the combination of the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
You should, however, not need to configure or specify any credentials manually, as the default credential provider chain in the AWS SDK takes care of this automatically.
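To illustrate the session-token point (the snippet in the question builds AWS.Credentials from only the key and secret, leaving out AWS_SESSION_TOKEN, so the key/secret pair alone is not valid), a minimal sketch of both options:

const AWS = require('aws-sdk/global');
const Iot = require('aws-sdk/clients/iot');

// Option 1 (preferred): no explicit credentials at all; the default provider
// chain reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN
// that the Lambda runtime injects for the execution role.
const iot = new Iot({ region: process.env.AWS_REGION });

// Option 2: if credentials are set explicitly, the session token must be included too.
AWS.config.credentials = new AWS.Credentials(
  process.env.AWS_ACCESS_KEY_ID,
  process.env.AWS_SECRET_ACCESS_KEY,
  process.env.AWS_SESSION_TOKEN // without this, a temporary key/secret pair is rejected
);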

Getting "Permission denied" container.clusters.create when deployed but not on localhost

I'm trying to create a Kubernetes cluster via the Node.js client on Google App Engine. The Kubernetes cluster is in a separate project from the one hosting the App Engine app, say "my-node-project" and "my-k8-project".
"my-node-project" has the relevant service account (Owner-level access) for the Kubernetes project.
I make the cluster create call as follows:
var client = new container.v1.ClusterManagerClient({
  projectId: projectId,
  key: serviceAccount
});

var zone = 'us-central1-b';
var password = "<some password>";

var clusterConfig = {
  "name": clusterName,
  "description": "api created cluster",
  "initialNodeCount": 3,
  "nodeConfig": {
    "oauthScopes": [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only"
    ]
  },
  "masterAuth": {
    "username": "admin",
    "password": password
  },
  "zone": zone
};

var request = {
  projectId: projectId,
  zone: zone,
  cluster: clusterConfig,
};

return client.createCluster(request)
  .then(responses => {
    var response = responses[0];
    console.log("response: ", response);
    return response;
  })
  .catch(err => {
    console.error(err);
    return err;
  });
In the above code, the serviceAccount variable is a JSON object containing the service account key, with the private key, project id, etc.
The strange thing is that when I run the code locally, i.e. call the endpoint that runs the above function, the request goes through just fine: the clusters are created and I can even add workloads via the API.
However, after I deploy the Node.js project to App Engine standard and call the same endpoint running on App Engine, I get the error:
Error: 7 PERMISSION_DENIED: Required "container.clusters.create" permission(s) for "projects/my-k8-project". See https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted for more info.
at Object.exports.createStatusError (/srv/node_modules/grpc/src/common.js:91:15)
at Object.onReceiveStatus (/srv/node_modules/grpc/src/client_interceptors.js:1204:28)
at InterceptingListener._callNext (/srv/node_modules/grpc/src/client_interceptors.js:568:42)
at InterceptingListener.onReceiveStatus (/srv/node_modules/grpc/src/client_interceptors.js:618:8)
at callback (/srv/node_modules/grpc/src/client_interceptors.js:845:24)
code: 7,
metadata: Metadata { _internal_repr: {} },
details:
'Required "container.clusters.create" permission(s) for "projects/my-k8-project". See https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted for more info.' }
Since I got that troubleshooting link, I tried creating a new service account and using that. In addition I tried disabling and re-enabling both the Kubernetes and Compute APIs. I also tried placing the service account file in the root directory of the project and referring to it that way.
Unfortunately everything I tried resulted in exactly the same error, yet it still worked when running from localhost.
Is there a whitelist somewhere I'm missing? Perhaps localhost is whitelisted by default and the "my-node-project" App Engine project isn't on the list?
Any tips, hints or pointing in the right direction would be very much appreciated.
You need to add the service account as a member of "my-k8-project" as well and give it the relevant role.
In the Cloud Console, switch to project "my-k8-project" and go to "IAM & admin" > "IAM". Click the "Add" button, paste the service account's email address into the "New members" field, and give it the appropriate role.

AWS EC2 IAM Role Credentials

Using the Node sdk for AWS, I'm trying to use the credentials and permissions given by the IAM role that is attached to the EC2 instance that my Node application is running on.
According to the SDK documentation, that can be done using the EC2MetadataCredentials class to set the SDK's credential configuration.
In the file where I use the SDK to access a DynamoDB instance, I have this configuration code:
import AWS from 'aws-sdk'

AWS.config.region = 'us-east-1'
AWS.config.credentials = new AWS.EC2MetadataCredentials({
  httpOptions: { timeout: 5000 },
  maxRetries: 10,
  retryDelayOptions: { base: 200 }
})

const dynamodb = new AWS.DynamoDB({
  endpoint: 'https://dynamodb.us-east-1.amazonaws.com',
  apiVersion: '2012-08-10'
})
However, when I try to visit the web application I always get an error saying:
Uncaught TypeError: d.default.EC2MetadataCredentials is not a constructor
Uncaught TypeError: _awsSdk2.default.EC2MetadataCredentials is not a constructor
Even though that is the exact usage from the documentation! Is there something small that I'm missing?
Update:
Removing the credentials and region definitions from the file results in another error that says:
Error: Missing region|credentials in config
I don't know if this is still relevant for you, but you do need to configure the EC2MetadataCredentials explicitly, as it is not in the default provider chain (search for new AWS.CredentialProviderChain([ in node_loader.js in the SDK).
It also seems you might have an old version of aws-sdk, as this code works for me:
import AWS from 'aws-sdk';
...
AWS.config.credentials = new AWS.EC2MetadataCredentials();
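As a quick sanity check (names as in the snippets above), you can verify that the installed SDK actually exposes the class before configuring it:

const AWS = require('aws-sdk');

// Very old aws-sdk releases (and browser bundles) do not expose this constructor.
console.log(AWS.VERSION);                        // installed SDK version
console.log(typeof AWS.EC2MetadataCredentials);  // expect 'function'

AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.EC2MetadataCredentials({
  httpOptions: { timeout: 5000 },
});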
I was facing a similar issue where the AWS SDK was not fetching credentials. According to the documentation, the SDK should be able to fetch the credentials automatically:
If you configure your instance to use IAM roles, the SDK automatically selects the IAM credentials for your application, eliminating the need to manually provide credentials.
I was able to solve the issue by manually fetching the credentials and providing them directly wherever required (for MongoDB Atlas in my case):
var AWS = require("aws-sdk");

AWS.config.getCredentials(function(err) {
  if (err) {
    console.log(err.stack); // credentials not loaded
  } else {
    console.log("Access key:", AWS.config.credentials.accessKeyId);
  }
});
Source: https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/global-config-object.html
Why the SDK is not doing this automatically is still a mystery to me, though. I will update the answer once I figure it out.
