I recently discovered DynamoDB Local and started building it into my project for local development. I decided to go the Docker image route (as opposed to the downloadable .jar file).
That being said, I've gotten the image up and running, have created a table, and can successfully interact with the Docker container via the AWS CLI. aws dynamodb list-tables --endpoint-url http://localhost:8042 successfully returns the table I created previously.
However, when I run my lambda function with my AWS config set like so:
const axios = require('axios')
const cheerio = require('cheerio')
const randstring = require('randomstring')
const aws = require('aws-sdk')

const dynamodb = new aws.DynamoDB.DocumentClient()

exports.lambdaHandler = async (event, context) => {
    let isLocal = process.env.AWS_SAM_LOCAL

    if (isLocal) {
        aws.config.update({
            endpoint: new aws.Endpoint("http://localhost:8042")
        })
    }
(which I have confirmed is getting set), it actually writes to the table (with the same name as the one in the local DynamoDB instance) in the live AWS service, as opposed to the local container and table.
It's also worth mentioning I'm unable to connect to the local instance of DynamoDB with the AWS NoSQL Workbench tool even though it's configured to point to http://localhost:8042 as well...
Am I missing something? Any help would be greatly appreciated. I can provide more information if needed. :D
Thanks.
SDK configuration changes, such as region or endpoint, do not retroactively apply to existing clients (regular DynamoDB client or a document client).
So, change the configuration first and then create your client object. Or simply pass the configuration options into the client constructor.
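For example, a minimal sketch of the second option, reusing the local endpoint and the AWS_SAM_LOCAL check from the question:

const aws = require('aws-sdk')

// Build the DocumentClient with the local endpoint from the start, instead of
// updating the global config after the client already exists.
const isLocal = process.env.AWS_SAM_LOCAL
const dynamodb = new aws.DynamoDB.DocumentClient(
    isLocal ? { endpoint: 'http://localhost:8042' } : {}
)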
Related
I am trying to download a file from an EC2 instance and store it temporarily in the tmp folder inside AWS Lambda. This is what I have tried:
const fs = require('fs');
let Client = require('ssh2-sftp-client');
let sftp = new Client();

sftp.connect({
    host: host,                          // connection details are defined elsewhere
    username: user,
    privateKey: fs.readFileSync(pemfile)
}).then(() => {
    return sftp.get('/config/test.txt', fs.createWriteStream('/tmp/test.txt'));
}).then(() => {
    sftp.end();
}).catch(err => {
    console.error(err.message);
});
The function runs without generating an error, but nothing is written to the destination file. What am I doing wrong here, and how could I debug this? Also, is there a better way of doing this altogether?
This is not the cloud-native way to do it, IMO. Create an S3 bucket, and create a proper Lambda execution role so the Lambda function can read from the bucket. Also, create a role for the EC2 instance so it can write to the same S3 bucket. Using the S3 API from both sides, the Lambda function and the EC2 instance, should be enough to share the file.
Think about this approach: you decouple your solution from a VPC and region perspective. Also, since the Lambda only needs to access S3, you save ENI (elastic network interface) resources and don't consume your VPC's private IPs. These advantages may not matter in your case, but it is good to be aware of them.
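As a rough sketch of that approach (bucket name and key are placeholders, and both sides are assumed to use the v2 aws-sdk):

const AWS = require('aws-sdk');
const fs = require('fs');
const s3 = new AWS.S3();

// On the EC2 instance: push the file to the shared bucket.
// s3.putObject({ Bucket: 'my-shared-bucket', Key: 'config/test.txt',
//                Body: fs.createReadStream('/config/test.txt') }).promise();

// In the Lambda handler: pull the same object down into /tmp.
exports.handler = async () => {
    const obj = await s3.getObject({ Bucket: 'my-shared-bucket', Key: 'config/test.txt' }).promise();
    fs.writeFileSync('/tmp/test.txt', obj.Body);
};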
I am trying to manage direct file upload to S3 according to Heroku's recommendations:
first, generate a presigned URL on the server,
then use this URL in the client to upload the image directly from the browser to the S3 bucket.
I managed to get this working locally.
But when I deploy the server to Heroku, it starts to fail for no apparent reason and with no readable error, just a generic error and a strange message when I try to print it.
What looks strange to me is that the presigned URLs are completely different depending on whether I make the call from localhost or from Heroku.
The response for localhost looks like this:
https://mybucket.s3.eu-west-1.amazonaws.com/5e3ec346d0b5af34ef9dfadf_avatar.png?AWSAccessKeyId=<AWSKeyIdHere>&Content-Encoding=base64&Content-Type=image%2Fpng&Expires=1581172437&Signature=xDJcRBiA%2FmQF1qKhBZrnhFXWdaM%3D
and the response from the Heroku deployment looks like this:
https://mybucket.s3.u-west-1.amazonaws.com/5e3ee2bd1513b60017d85c6c_avatar.png?Content-Type=image%2Fpng&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<credentials-key-here>%2F20200208%2Fu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20200208T163315Z&X-Amz-Expires=900&X-Amz-Signature=<someSignature>&X-Amz-SignedHeaders=content-encoding%3Bhost
The server code is almost exactly like the examples:
const Bucket = process.env.BUCKET_NAME
const region = process.env.BUCKET_REGION

AWS.config = new AWS.Config({
    accessKeyId: process.env.S3_KEY,
    secretAccessKey: process.env.S3_SECRET,
    region,
    logger: console
})

const s3 = new AWS.S3()

async function generatePutUrl(inputParams = {}) {
    const params = { Bucket, ...inputParams }
    const { Key } = inputParams
    const putUrl = await s3.getSignedUrl('putObject', params)
    const getUrl = generateGetUrlLocaly(Key)
    return { putUrl, getUrl }
}
The only difference I can think of is SSL: I run the local server via HTTP, while Heroku serves over HTTPS by default, but I don't understand how that would matter here.
I would appreciate any meaningful advice on how to debug and fix this.
Thank you.
It looks like your bucket region is incorrect. Shouldn't it be eu-west-1 instead of u-west-1?
Please update your BUCKET_REGION in the environment variables in your Heroku app settings from
u-west-1
to
eu-west-1
and restart the dynos. It may solve your problem.
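For example, with the Heroku CLI (run from the app's directory, or add -a <app-name>):

heroku config:set BUCKET_REGION=eu-west-1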
I am using Serverless + AWS + Node.js.
I have a lambda calling another lambda, and I can't get the whole thing to run locally.
I can invoke both lambdas locally with 'serverless invoke local -f ...' BUT
the caller one comes back with:
{"message":"Function not found: arn:aws:lambda:eu-west-1:5701xxxxxxxxxx:function:the-right-function-name"}
as if the caller function invoked the callee on AWS and not locally.
Is there any way to stay local, and if so, what might I be missing?
You can achieve that with this plugin. The AWS SDK's Lambda client allows you to override the API endpoint of the Lambda service, so you can point it at localhost.
const AWS = require('aws-sdk');

const endpoint = process.env.SERVERLESS_SIMULATE
    ? process.env.SERVERLESS_SIMULATE_LAMBDA_ENDPOINT
    : undefined;

const lambda = new AWS.Lambda({ endpoint });
For more details, refer to the plugin's readme. There is also a nice article about this.
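With that client in place, the caller can then invoke the callee by name; a hypothetical call (inside an async handler, with a made-up payload) would look like:

const result = await lambda.invoke({
    FunctionName: 'the-right-function-name',
    Payload: JSON.stringify({ some: 'input' })
}).promise();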
In an effort to improve cold start latency in AWS Lambda, I am attempting to include only the necessary classes for each Lambda function. Rather than include the entire SDK, how can I include only the DynamoDB portion of the SDK?
// Current method:
var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB();
// Desired method:
var AWSdynamodb = require('aws-dynamodb-sdk');
The short answer is: you do not need to do this.
The AWS SDK for JavaScript uses dynamic requires to load services. In other words, the classes are defined, but the API data is only loaded when you instantiate a service object, so there is no CPU overhead in having the entire package around.
The only possible cost would be from disk space usage (and download time), but note that Lambda already bundles the aws-sdk package on its end, so there is no download time, and you're actually using less disk space by using the SDK package available from Lambda than using something customized.
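To illustrate what the dynamic loading means in practice:

const AWS = require('aws-sdk');      // service classes are defined, but no API data is parsed yet
const dynamodb = new AWS.DynamoDB(); // the DynamoDB API definition is loaded only at this point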
I don't think this is possible.
The npm registry only has aws-sdk. https://www.npmjs.com/package/aws-sdk
There may be other npm packages available for DynamoDB, but I would advise only using the SDK provided by the AWS team.
Be sure to instantiate the SDK outside of the handler.
This example is good, and will result in a cold start time of about 1s
const AWS = require("aws-sdk");
const SNS = new AWS.SNS({apiVersion: '2010-03-31'});

exports.handler = function(event, context) {
    // do stuff
};
This example is bad and will result in a cold start time of about 5s
exports.handler = function(event, context) {
    const AWS = require("aws-sdk");
    const SNS = new AWS.SNS({apiVersion: '2010-03-31'});
    // do stuff
};
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
Take advantage of execution context reuse to improve the performance
of your function. Initialize SDK clients and database connections
outside of the function handler, and cache static assets locally in
the /tmp directory. Subsequent invocations processed by the same
instance of your function can reuse these resources. This saves
execution time and cost.
I'm working on a cloud project using NodeJS.
I have to run EC2 instances, so I have done an npm install aws-sdk.
I believe we have to add our credentials before we run the application?
I could not find the .aws folder, so I created one and added the credentials in a credentials.txt file.
C:\Users\jessig\aws
I keep getting this error:
{ [TimeoutError: Missing credentials in config]
message: 'Missing credentials in config',
code: 'CredentialsError',
I tried setting the access key and secret key in environment variables but still get the same error.
Not sure why I can't find the \.aws\credentials (Windows) file.
Can anyone please help?
As Frederick mentioned, hardcoding is not an AWS-recommended practice, and it is not something you would want to do in a production environment. However, for testing and learning purposes, it can be the simplest way.
Since your request was specific to AWS EC2, here is a small example that should get you started.
To get a list of all the methods available to you for Node.js reference this AWS documentation.
var AWS = require('aws-sdk');
AWS.config = new AWS.Config();
AWS.config.accessKeyId = "accessKey";
AWS.config.secretAccessKey = "secretKey";
AWS.config.region = "us-east-1";
var ec2 = new AWS.EC2();
var params = {
    InstanceIds: [ /* required */
        'i-4387dgkms3',
        /* more items */
    ],
    Force: true
};

ec2.stopInstances(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else console.log(data);               // successful response
});
I used the following programmatic way, combined with the popular npm config module (which allows different config files for development vs production, etc.):
const config = require('config');
const AWS = require('aws-sdk');
const accessKeyId = config.get('AWS.accessKeyId');
const secretAccessKey = config.get('AWS.secretAccessKey');
const region = config.get('AWS.region');
AWS.config.update({
    accessKeyId,
    secretAccessKey,
    region
});
And the json config file, e.g. development.json, would look like:
{
    "AWS": {
        "accessKeyId": "TODO",
        "secretAccessKey": "TODO",
        "region": "TODO"
    }
}
There are multiple ways to configure the SDK to work with Node.js.
There are a few ways to load credentials. Here they are, in order of
recommendation:
Loaded from IAM roles for Amazon EC2 (if running on EC2),
Loaded from the shared credentials file (~/.aws/credentials),
Loaded from environment variables,
Loaded from a JSON file on disk,
Hardcoded in your application
Note that the hardcoded option is not recommended.
If you want to use a shared credentials file, on Windows it would be
C:\Users\jessig\.aws\credentials
(note the . before aws). Your file should be something like
[default]
aws_access_key_id = your_access_key
aws_secret_access_key = your_secret_key
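With that file in place, no keys need to appear in code at all; a minimal sketch (the region value here is just an example) would be:

var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' }); // set the region explicitly; the JS SDK does not read it from the credentials file by default
var ec2 = new AWS.EC2();                    // credentials come from the [default] profile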
Adding accessKeyId and secretAccessKey in the config for AWS is deprecated as of today. As the AWS docs for the SDK for Node.js state:
The SDK automatically detects AWS credentials set as variables in your environment and uses them for SDK requests. This eliminates the need to manage credentials in your application. The environment variables that you set to provide your credentials are:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN (Optional)
https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-node-credentials-environment.html
You may want to use the dotenv package to load those environment variables.
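For example, assuming a .env file (kept out of source control) that defines AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, a minimal sketch with the v2 aws-sdk would be:

require('dotenv').config();  // loads the .env entries into process.env
const AWS = require('aws-sdk');
const s3 = new AWS.S3();     // credentials are picked up from the environment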
The AWS credentials can be set as environment variables in the running container.
You would either add the following two environment variables directly:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
or set these environment variables programmatically within Node as:
var AWS = require('aws-sdk')
AWS.config = new AWS.Config();
process.env.AWS_ACCESS_KEY_ID = "AKIA************L55A"
process.env.AWS_SECRET_ACCESS_KEY = "Ef*******+C5LrtOroSj**********yNE"
AWS.config.region = "us-east-2"
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-environment.html