Node AWS SDK: Updating Credentials After Initialization

According to the Node AWS SDK documentation, new service objects copy the AWS object's configuration when they are initialized, and updating the AWS object's configuration afterwards will not change an already-instantiated object's config, so it must be updated manually. The docs specifically say you can do this, but updating the instantiated object manually doesn't seem to work:
var AWS = require('aws-sdk'),
    awsInstance;

AWS.config.update({region: 'us-west'});
awsInstance = new AWS();
awsInstance.config.update({region: 'us-east'});
awsInstance's region is still set to us-west. How do you update it after instantiating the object?

You can't change an instance's configuration this way; a service object copies its configuration when it is constructed. Instead, pass the configuration to the constructor when you create the instance:
awsInstance = new AWS({region: 'us-east'});
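To see why the later `config.update` call has no effect, the copy-at-construction behavior the docs describe can be mimicked in plain JavaScript (the `Client` type below is illustrative, not part of aws-sdk):

```javascript
// Illustrative sketch: config is *copied* at construction time, so
// later changes to the global config never reach existing instances.
var globalConfig = { region: 'us-west-2' };

function Client(options) {
  // snapshot the global config, then apply per-instance overrides
  this.config = Object.assign({}, globalConfig, options);
}

var a = new Client();                         // captures us-west-2
globalConfig.region = 'us-east-1';            // too late for `a`
var b = new Client({ region: 'eu-west-1' });  // explicit override wins

console.log(a.config.region); // 'us-west-2'
console.log(b.config.region); // 'eu-west-1'
```

This is the same reason passing the region to the constructor works while updating the instance afterwards does not.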

Related

SageMaker NodeJS's SDK is not locking the API Version

I am running some code in AWS Lambda that dynamically creates SageMaker models.
I am locking SageMaker's API version like so:
const sagemaker = new AWS.SageMaker({apiVersion: '2017-07-24'});
And here's the code to create the model:
await sagemaker.createModel({
  ExecutionRoleArn: 'xxxxxx',
  ModelName: sageMakerConfigId,
  Containers: [{
    Image: ecrUrl
  }]
}).promise()
This code runs just fine locally with aws-sdk on 2.418.0.
However, when this code is deployed to Lambda, it doesn't work due to some validation errors upon creating the model:
MissingRequiredParameter: Missing required key 'PrimaryContainer' in params
UnexpectedParameter: Unexpected key 'Containers' found in params
Is anyone aware of existing bugs in the aws-sdk for NodeJS using the SDK provided by AWS in the Lambda context? I believe the SDK available inside AWS Lambda is more up-to-date than 2.418.0 but apparently there are compatibility issues.
As you've noticed, the 'embedded' Lambda version of the aws-sdk lags behind. It's actually on 2.290.0 (you can see the full details of the environment here: https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html).
You can see here: https://github.com/aws/aws-sdk-js/blame/master/clients/sagemaker.d.ts that it was not until 2.366.0 that the params for this method included Containers and no longer required PrimaryContainer.
As you've noted, the workaround is to deploy your Lambda with the aws-sdk version that you're using. This is sometimes cited as a best practice, as it pins the aws-sdk to the functionality you've built and tested against.
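If you can't bundle a newer SDK right away, another option is to pick the parameter shape based on the SDK version at runtime. The helper below is hypothetical (not part of aws-sdk); the 2.366.0 cutoff comes from the sagemaker.d.ts history linked above:

```javascript
// Hypothetical helper: build createModel params in whichever shape the
// running aws-sdk supports. `Containers` arrived in 2.366.0; older
// clients (like Lambda's bundled 2.290.0) require `PrimaryContainer`.
function buildCreateModelParams(sdkVersion, modelName, roleArn, image) {
  const [major, minor] = sdkVersion.split('.').map(Number);
  const supportsContainers = major > 2 || (major === 2 && minor >= 366);
  const base = { ModelName: modelName, ExecutionRoleArn: roleArn };
  return supportsContainers
    ? Object.assign(base, { Containers: [{ Image: image }] })
    : Object.assign(base, { PrimaryContainer: { Image: image } });
}

console.log(buildCreateModelParams('2.290.0', 'm', 'arn:x', 'ecr/img'));
```

You could feed it the real version with `require('aws-sdk/package.json').version` and pass the result to `sagemaker.createModel(...)`. Bundling a pinned SDK is still the cleaner fix.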

How to avoid setting aws config as env variable externally while using dynamoose.js

My AWS config doesn't work unless I set it externally through environment variables.
The DB connection works only if I set the credentials externally, like:
export AWS_ACCESS_KEY_ID=abcde
export AWS_SECRET_ACCESS_KEY=abcde
export AWS_REGION=ap-south-1
export AWS_DYNAMODB_ENDPOINT="http://localhost:8000"
It doesn't work if I don't set these externally. For example, if I set them in code like the following, it does not work:
dynamoose.AWS.config.update({
  accessKeyId: 'abcde',
  secretAccessKey: 'abcde',
  region: 'ap-south-1',
  endpoint: 'http://localhost:8000'
});
I don't want to set config in any variable externally. Is there a way to just manage this in nodejs code?
These are the alternatives I have tried or considered:
Setting the env variable in code, which doesn't work either:
process.env.AWS_REGION='ap-south-1';
I also read about the dotenv package, but it is recommended for dev only and not for production (and I am not sure it would work anyway).
Please help me resolve this. How do I manage the config in code only?
The problem is probably that you are creating or requiring your Dynamoose models before you call the dynamoose.AWS.config.update method.
Make sure that dynamoose.AWS.config.update is the very first method you call, and that you haven't created or initialized anything Dynamoose-related before it.
For example:
const dynamoose = require('dynamoose');
dynamoose.AWS.config.update({
  accessKeyId: 'abcde',
  secretAccessKey: 'abcde',
  region: 'ap-south-1',
  endpoint: 'http://localhost:8000'
});
const Model = require('./models/MyModel'); // should happen after `dynamoose.AWS.config.update`
Another trick is to enable debug logging and go through the logs to see what is happening. You can enable Dynamoose logging by running export DEBUG=dynamoose* and then rerunning the script.
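The ordering pitfall can be illustrated in plain JavaScript (the `defineModel` function below is illustrative, not Dynamoose API): a model snapshots the configuration it sees at definition time, so anything defined before `config.update` keeps the old values.

```javascript
// Illustrative sketch of why call order matters: a "model" captures
// the config as it exists at the moment it is defined.
const config = { endpoint: undefined };

function defineModel(name) {
  // snapshot the endpoint as it is *right now*
  return { name, endpoint: config.endpoint };
}

const early = defineModel('Early');         // defined before the update
config.endpoint = 'http://localhost:8000';  // too late for `early`
const late = defineModel('Late');

console.log(early.endpoint); // undefined
console.log(late.endpoint);  // 'http://localhost:8000'
```

This is why the `require('./models/MyModel')` line must come after the `dynamoose.AWS.config.update` call.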
If you are working with a newer version of Dynamoose, the syntax has changed and can be found here:
https://dynamoosejs.com/guide/Dynamoose/#dynamooseawssdk
const sdk = dynamoose.aws.sdk; // require("aws-sdk");
sdk.config.update({
  "accessKeyId": "AKID",
  "secretAccessKey": "SECRET",
  "region": "us-east-1"
});

Is it possible to create a launch configuration from an EC2 running instance with node.js sdk?

From here I learned that it is possible to create a launch configuration by passing the InstanceId of a running instance.
Sadly, it only shows how to do that from the AWS Console and the AWS CLI. I found documentation on how to do it with the AWS SDK for Java, but nothing for Node.js.
Has anybody found any information about that?
Thanks
The JS documentation says you can.
I would use the createLaunchConfiguration function with the InstanceId param.
The documentation describes InstanceId as:
"The ID of the instance to use to create the launch configuration. The new launch configuration derives attributes from the instance, with the exception of the block device mapping.
If you do not specify InstanceId, you must specify both ImageId and InstanceType.
To create a launch configuration with a block device mapping or override any other instance attributes, specify them as part of the same request."
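A minimal sketch of that call, untested against a live account; both the configuration name and the instance ID below are placeholders, and the actual request is wrapped in a function so the params can be inspected without credentials:

```javascript
// Params for AutoScaling.createLaunchConfiguration: only the name is
// required; InstanceId tells AWS to derive the rest from the instance.
const params = {
  LaunchConfigurationName: 'copied-from-instance', // any unique name
  InstanceId: 'i-0123456789abcdef0'                // your instance's ID
};

// The actual SDK call, assuming credentials/region are configured via
// the usual environment or shared-config mechanisms.
function createFromInstance(AWS, done) {
  const autoscaling = new AWS.AutoScaling();
  autoscaling.createLaunchConfiguration(params, done);
}

console.log(params.InstanceId);
```

You would invoke it as `createFromInstance(require('aws-sdk'), (err, data) => { ... })`; attributes like block device mappings can be overridden by adding them to `params` in the same request, as the quoted docs note.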

Do I need to specify the region when instantiating a AWS Helper Class in AWS Lambda?

If I want to call AWS SES from AWS Lambda, I normally write the following when instantiating the AWS Helper Class:
var ses = new aws.SES({apiVersion: '2010-12-01', region: 'eu-west-1'});
I'm wondering, do I actually need to specify the AWS region? Or will the AWS SES helper class just run in the region where the AWS Lambda function is running?
What is the best practice here? Might I encounter problems later if I omit this?
I have always specified the region for the sake of being explicit. I went and changed one of my Node.js Lambda functions using SNS to use an empty constructor instead of providing a region, and deployed it... it appears to still work. It looks like the service will run in the region of the Lambda function it is being called from. I imagine the IAM role for the Lambda function would play a part as well.
As far as best practice goes, I think it is best to be explicit when possible, assuming it isn't creating a ton of overhead or hassle. The problem you risk running into later is using a resource that isn't available in certain regions.
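One way to stay explicit without hard-coding anything: the Lambda runtime sets the AWS_REGION environment variable, so you can resolve the region from it. A small sketch (the 'eu-west-1' fallback is only an assumption for local runs, and aws-sdk is required lazily so the snippet also runs where it isn't installed):

```javascript
// Resolve the region explicitly from Lambda's runtime environment
// rather than hard-coding it in each client constructor.
const region = process.env.AWS_REGION || 'eu-west-1'; // fallback for local runs

let aws = null;
try { aws = require('aws-sdk'); } catch (e) {} // optional outside Lambda
const ses = aws
  ? new aws.SES({ apiVersion: '2010-12-01', region })
  : null;

console.log('resolved region:', region);
```

This keeps the "be explicit" best practice while still following the function's own region by default.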

How to restore object from amazon glacier to s3 using nodejs code?

I have configured a lifecycle policy in S3, so some of my objects have been transitioned to the Glacier storage class while others are still in S3. Now I am trying to restore the Glacier objects. I can restore them using Initiate Restore in the console and with the s3cmd command line. How can I write code to restore the Glacier objects using the Node.js AWS SDK?
You would use the S3.restoreObject() function in the AWS SDK for NodeJS to restore an object from Glacier, as documented here.
Thanks Mark for the update. I have tried using s3.restoreObject() and the code runs, but I am facing the following issue: MalformedXML: The XML you provided was not well-formed or did not validate against our published schema.
This is code i tried:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({accessKeyId: 'XXXXXXXX', secretAccessKey: 'XXXXXXXXXX'});
var params = {
  Bucket: 'BUCKET',
  Key: 'file.json',
  RestoreRequest: {
    Days: 1,
    GlacierJobParameters: { Tier: 'Standard' }
  }
};
s3.restoreObject(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data);
});