KeyConditionExpression error and empty error response - node.js

I'm using Lambda as my backend and I'm performing all the DynamoDB operations from it.
I have a user table Users and I want to query it via its hash key Username,
using the KeyConditionExpression statement on my params variable, but I get the following error:
There were 2 validation errors:
* MissingRequiredParameter: Missing required key 'KeyConditions' in params
* UnexpectedParameter: Unexpected key 'KeyConditionExpression' found in params
So I tried the following legacy statement instead:
var userQuery = {
    TableName: "Users",
    KeyConditions: {
        Username: {
            ComparisonOperator: 'EQ',
            AttributeValueList: [{S: "some_username"}]
        }
    }
};
For some reason, I get empty errors in the query callback, which looks like this:
dynamo.query(userQuery, function(err, data) {
    if (err) console.log("error " + JSON.stringify(err, null, 2));
    else console.log("pass " + JSON.stringify(data, null, 2));
});
I've tried literally everything and have gotten to the point of desperation.
I can't seem to query any table, but I can scan and use putItem with no problem. My policy includes the dynamodb:Query action as well:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "my_Stmt_num",
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:UpdateItem"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Sid": "",
            "Resource": "*",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Effect": "Allow"
        }
    ]
}
In case it's relevant, at the top of my handler JS file I'm getting a reference to dynamo like this:
var doc = require('dynamodb-doc');
var dynamo = new doc.DynamoDB();
My whole application is 'new', meaning nothing predates February 2015, so I don't see any point in using the legacy APIs, as the docs say.

It sounds like the AWS SDK associated with your document client may be out of date and may not support the new KeyConditionExpression feature. Could you please try re-installing your AWS SDK and the document SDK? Please also attach the versions you are installing if you continue to have issues after re-installing.

The previous DynamoDB Document SDK has been deprecated; the new DocumentClient from the standard JavaScript SDK should be used from now on:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/DocumentClient.html
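With the current SDK, the non-legacy query from the question works as intended. A minimal sketch, assuming the Users table and Username hash key from the question:

var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

var userQuery = {
    TableName: "Users",
    KeyConditionExpression: "Username = :u",
    ExpressionAttributeValues: { ":u": "some_username" }
};

docClient.query(userQuery, function(err, data) {
    if (err) console.log("error " + JSON.stringify(err, null, 2));
    else console.log("pass " + JSON.stringify(data, null, 2));
});

Note that the DocumentClient accepts plain JavaScript values, so the {S: "some_username"} attribute-value wrappers from the legacy call are no longer needed.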

Related

AWS CLI not honoring MultiFactorAuthAge

I'm trying to set up my AWS CLI to assume a role using MFA, expiring the creds after 15 minutes (the minimum duration_seconds allowed, apparently).
My IAM role policy is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::XXXXXXXXXXXX:user/myuser"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "Bool": {
                    "aws:MultiFactorAuthPresent": "true"
                },
                "NumericLessThan": {
                    "aws:MultiFactorAuthAge": "900"
                }
            }
        }
    ]
}
My CLI config is set up as follows:
[profile xxx]
role_arn = arn:aws:iam::XXXXXXXXXXXX:role/mfa
mfa_serial = arn:aws:iam::XXXXXXXXXXXX:mfa/foobar
source_profile = mfa
When I run a command using the xxx profile above, MFA is requested the first time and remains valid for all subsequent requests. However, after 15 minutes, the token is still valid and MFA isn't requested again.
$ aws s3 ls --profile xxx
I tried setting the duration_seconds parameter on my CLI as below:
[profile xxx]
role_arn = arn:aws:iam::XXXXXXXXXXXX:role/mfa
mfa_serial = arn:aws:iam::XXXXXXXXXXXX:mfa/foobar
source_profile = mfa
duration_seconds = 900
But now I'm asked for the MFA token on every command issued, even if the time difference is on the order of seconds.
Am I missing something here?
AWS CLI version: aws-cli/2.0.49 Python/3.7.4 Darwin/19.6.0 exe/x86_64
Appreciate any help.
Thanks in advance!
So, I discovered the reason for this behavior:
As described in this GitHub issue, the AWS CLI treats any session with less than 15 minutes remaining as expired, refreshing the creds automatically (or asking for a new one-time passcode, in the case of MFA).
So, setting the session duration to 15 minutes (900s) is basically the same as getting a one-time credential.
I just tested setting duration_seconds to 930 (15min + 30s), and the session is indeed valid for 30 seconds.
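So the workaround is to request slightly more than 15 minutes. A sketch of the adjusted profile (same profile as above; only duration_seconds changes, and the extra 30 seconds is the window during which the cached credentials are actually reused):

[profile xxx]
role_arn = arn:aws:iam::XXXXXXXXXXXX:role/mfa
mfa_serial = arn:aws:iam::XXXXXXXXXXXX:mfa/foobar
source_profile = mfa
duration_seconds = 930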
It seems that the issue is with your conditions.
"Bool": {
"aws:MultiFactorAuthPresent": "true"
},
This will always require MFA.
Instead, try something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowActionsForEC2",
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "ec2:DescribeInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowActionsForEC2WhenMFAIsPresent",
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "NumericLessThan": {"aws:MultiFactorAuthAge": "300"}
            }
        }
    ]
}
Users with short-term credentials older than 300 seconds must reauthenticate to gain access to this API.
So for your policy, it will look like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::XXXXXXXXXXXX:user/myuser"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "NumericLessThan": {
                    "aws:MultiFactorAuthAge": "900"
                }
            }
        }
    ]
}
The issue is that every condition needs to be met in order for you to be able to assume the role. "aws:MultiFactorAuthPresent": "true" will always check whether you used MFA to log in.

How to go from an AWS-console-derived policy to a working terraform-scripted policy?

I have a Terraform script that provisions a Lambda function on AWS to send emails. I pieced this Terraform script together from tutorials and templates on the web to use the AWS SES, API Gateway, Lambda, and CloudWatch services.
To get permissions to work, though, I had to run the script and then, separately, build a policy in the AWS console and apply it to the Lambda function so that it could fully access the SES and CloudWatch services. But it's not at all clear to me how to take that working policy and adapt it to my Terraform script. Could anyone please provide or point to guidance on this matter?
The limited/inadequate but otherwise working role in my Terraform script looks like this:
resource "aws_iam_role" "iam_for_lambda" {
name = "${var.role_name}"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Effect": "Allow",
"Sid": ""
}
]
} EOF
}
...and the working policy generated in the console (by combining two roles for all-CloudWatch and all-SES access):
{
    "permissionsBoundary": {},
    "roleName": "las_role_new",
    "policies": [
        {
            "document": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Action": [
                            "autoscaling:Describe*",
                            "cloudwatch:*",
                            "logs:*",
                            "sns:*",
                            "iam:GetPolicy",
                            "iam:GetPolicyVersion",
                            "iam:GetRole"
                        ],
                        "Effect": "Allow",
                        "Resource": "*"
                    },
                    {
                        "Effect": "Allow",
                        "Action": "iam:CreateServiceLinkedRole",
                        "Resource": "arn:aws:iam::*:role/aws-service-role/events.amazonaws.com/AWSServiceRoleForCloudWatchEvents*",
                        "Condition": {
                            "StringLike": {
                                "iam:AWSServiceName": "events.amazonaws.com"
                            }
                        }
                    }
                ]
            },
            "name": "CloudWatchFullAccess",
            "id": "ANPAIKEABORKUXN6DEAZU",
            "type": "managed",
            "arn": "arn:aws:iam::aws:policy/CloudWatchFullAccess"
        },
        {
            "document": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Effect": "Allow",
                        "Action": [
                            "ses:*"
                        ],
                        "Resource": "*"
                    }
                ]
            },
            "name": "AmazonSESFullAccess",
            "id": "ANPAJ2P4NXCHAT7NDPNR4",
            "type": "managed",
            "arn": "arn:aws:iam::aws:policy/AmazonSESFullAccess"
        }
    ],
    "trustedEntities": [
        "lambda.amazonaws.com"
    ]
}
So my question in summary, and put most generally, is this:
given a "policy" built in the AWS console (by selecting a bunch of roles, etc.), how do you convert that to a "role" as required by the Terraform script?
To anyone else who might struggle with Terraform-AWS policy matters, here's my understanding after some grappling. The game here is to carefully distinguish the various similar-sounding Terraform structures (aws_iam_role, aws_iam_role_policy, aws_iam_role_policy_attachment, assume_role_policy, etc.) and to work out how these black-box structures fit together.
First, the point of an AWS role is to collect together policies (i.e. permissions to do stuff). By assigning such a role to a service (e.g. Lambda), you give that service the permissions described by those policies. A role must have at least one policy built into it: the 'assume-role' policy that specifies which service(s) can use ('assume') that role. This assume-role policy is relatively simple and so 'might as well' be included explicitly in the Terraform script (using the <<EOF ... EOF syntax above).
Secondly, if you now want to let that service with the (basic) role do anything to other services, you need to somehow associate additional policies with that role. I've learned that there are several ways to do this but, in order to answer my question most succinctly, I'll now describe the most elegant way I have found to incorporate multiple template policies offered in the AWS console into one's Terraform script.
The code is:
# Define a variable for the name of the Lambda role
variable "role_name" {
  description = "Name for the Lambda role."
  default     = "las-role"
}

# Create the role with a basic policy enabling the Lambda service to assume it
resource "aws_iam_role" "iam_for_lambda" {
  name = "${var.role_name}"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [ "lambda.amazonaws.com" ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# Define a list of policy ARNs given in the AWS console
variable "iam_policy_arn_list" {
  type        = list(string)
  description = "IAM policies to be attached to the role"
  default     = ["arn:aws:iam::aws:policy/CloudWatchFullAccess", "arn:aws:iam::aws:policy/AmazonSESFullAccess"]
}

# Attach the policies for the above ARNs to our named role.
# The count syntax has the effect of looping over the items in the list.
resource "aws_iam_role_policy_attachment" "role-policy-attachment" {
  role       = var.role_name
  count      = length(var.iam_policy_arn_list)
  policy_arn = var.iam_policy_arn_list[count.index]
  depends_on = [aws_iam_role.iam_for_lambda]
}
As you can see, the template policies are included here using the ARNs, which can be found in the AWS console (for example, on the policy page for full access to Amazon SES in the AWS Management Console).
When you successfully deploy your Lambda to AWS using Terraform, it will pull down these policies from the ARNs and generate a permission JSON for your Lambda function (which you can view in the Lambda service section of the AWS console) that looks a lot like the JSON I posted in the question.

Micronaut AWS API Gateway Authorizer JSON output issue

I've written a simple Lambda function in Micronaut/Groovy to return Allow/Deny policies as an AWS API Gateway authorizer. When used as the API Gateway authorizer, the JSON cannot be parsed:
Execution failed due to configuration error: Could not parse policy
When testing locally, the response has the correct property case in the JSON, e.g.:
{
    "principalId": "user",
    "PolicyDocument": {
        "Context": {
            "stringKey": "1551172564541"
        },
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/"
            }
        ]
    }
}
When this is run in AWS, the JSON response has the properties all in lowercase:
{
    "principalId": "user",
    "policyDocument": {
        "context": {
            "stringKey": "1551172664327"
        },
        "version": "2012-10-17",
        "statement": [
            {
                "resource": "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/",
                "action": "execute-api:Invoke",
                "effect": "Allow"
            }
        ]
    }
}
I'm not sure whether the case is the issue, but I cannot see what else it might be (I've tried many variations of the output).
I've tried various Jackson annotations (@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class), etc.) but they do not seem to have an effect on the output in AWS.
Any idea how to sort this? Thanks.
Example code (trying to get the output to look like the first example above). I'm running the example locally using:
runtime "io.micronaut:micronaut-function-web"
runtime "io.micronaut:micronaut-http-server-netty"
Lambda function handler:
AuthResponse sessionAuth(APIGatewayProxyRequestEvent event) {
    AuthResponse authResponse = new AuthResponse()
    authResponse.principalId = 'user'
    authResponse.policyDocument = new PolicyDocument()
    authResponse.policyDocument.version = "2012-10-17"
    authResponse.policyDocument.setStatement([new session.auth.Statement(
        Effect: Statement.Effect.Allow,
        Action: "execute-api:Invoke",
        Resource: "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/"
    )])
    return authResponse
}
AuthResponse looks like:
@CompileStatic
class AuthResponse {
    String principalId
    PolicyDocument policyDocument
}

@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
@CompileStatic
class PolicyDocument {
    String Version
    List<Statement> Statement = []
}

@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
@CompileStatic
class Statement {
    String Action
    String Effect
    String Resource
}
It looks like you cannot rely on the AWS Lambda Java serializer not to change your JSON response if you are relying on some kind of annotation or mapper. If you want the response to be untouched, you'll need to use the raw output stream type of handler.
See the end of this AWS doc: Handler Input/Output Types (Java).
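A minimal sketch of that approach, assuming the aws-lambda-java-core and Jackson dependencies are on the classpath (the class name and wiring here are illustrative): serializing to the raw OutputStream yourself bypasses Lambda's built-in POJO serializer, so the casing produced by your @JsonNaming strategy is preserved.

import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestStreamHandler
import com.fasterxml.jackson.databind.ObjectMapper
import groovy.transform.CompileStatic

@CompileStatic
class RawAuthHandler implements RequestStreamHandler {
    private static final ObjectMapper MAPPER = new ObjectMapper()

    void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        AuthResponse authResponse = new AuthResponse()
        // ... populate authResponse exactly as in sessionAuth() above ...
        // Writing the JSON ourselves means Lambda never re-serializes the object.
        MAPPER.writeValue(output, authResponse)
    }
}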

How to pass API Gateway authorizer context to an HTTP integration

I have successfully implemented a Lambda authorizer for my AWS API Gateway, but I want to pass a few custom properties from it to my Node.js endpoint.
My output from my authorizer follows the format specified by AWS, as seen below.
{
    "principalId": "yyyyyyyy",
    "policyDocument": {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "execute-api:Invoke",
                "Effect": "Allow|Deny",
                "Resource": "arn:aws:execute-api:<regionId>:<accountId>:<appId>/<stage>/<httpVerb>/[<resource>/<httpVerb>/[...]]"
            }
        ]
    },
    "context": {
        "company_id": "123",
        ...
    }
}
In my case, context contains a few parameters, like company_id, that I would like to pass along to my Node endpoint.
If I were to use a Lambda endpoint, I understand that this is done with a Mapping Template and something like this:
{
    "company_id": "$context.authorizer.company_id"
}
However, Body Mapping Template is only available under Integration Request if Lambda is selected as the Integration type, not if HTTP is selected.
In short, how do I pass company_id from my Lambda authorizer to my Node API?
Most of the credit goes to @Michael-sqlbot in the comments to my question, but I'll put the complete answer here in case someone else finds this question.
Authorizer Lambda
It has to return an object in this format, where context contains the parameters you want to forward to your endpoint, as specified in the question.
{
    "principalId": "yyyyyyyy",
    "policyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
            "Action": "execute-api:Invoke",
            "Effect": "Allow|Deny",
            "Resource": "arn:aws:execute-api:<regionId>:<accountId>:<appId>/<stage>/<httpVerb>/[<resource>/<httpVerb>/[...]]"
        }]
    },
    "context": {
        "company_id": "123", <-- The part you want to forward
        ...
    }
}
Method Request
Under Method Request / HTTP Request Headers, add the context property you want to forward:
Name: company_id
Required: optional
Caching: optional
Integration Request
And under Integration Request / HTTP Headers, add:
Name: company_id
Mapped from: context.authorizer.company_id
Caching: optional
If you're using lambda-proxy integration, you can access the context from your event.requestContext.authorizer.
So your company_id can be accessed using event.requestContext.authorizer.company_id.
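For example, a minimal sketch of a Node.js proxy-integration handler reading that value (the handler shape is the standard proxy format; the property name matches the authorizer context above):

exports.handler = async (event) => {
    // The authorizer's context map is exposed under requestContext.authorizer
    const companyId = event.requestContext.authorizer.company_id;
    return {
        statusCode: 200,
        body: JSON.stringify({ company_id: companyId })
    };
};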
If you're using lambda-proxy (at least with a Golang backend), you can access the values stored in the authorizer context without using a mapping template!
Remember to re-deploy the API and wait a minute!
It's working for me.

Use DynamoDB with Alexa Lambda function

Well, I am having an issue that I've been dealing with for the last 2 days and still seem to have made no progress on.
Basically, I am trying to develop a skill for Amazon's Echo Dot, but my particular skill requires the use of persistent data. I took to the docs and found information on account linking and DynamoDB; account linking seemed too complex for a simple research project, so I went with DynamoDB.
I used a Lambda function, and it ran fine until I added the DynamoDB table line:
alexa.dynamoDBTableName = 'rememberThisDB';
That line completely stops my skill from working and returns the following message:
The remote endpoint could not be called, or the response it returned was invalid.
I honestly have no idea how to deal with it; I am completely new to the whole AWS concept, so I don't even know how to get the actual error message that the Lambda function is returning.
I changed the role and gave it the following configuration:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Scan",
                "dynamodb:UpdateItem"
            ],
            "Resource": "*Yes, I did put the correct ARN*"
        }
    ]
}
But that didn't really change anything; it still returned the same error.
The thing is, I'm not doing anything at all with DynamoDB; I am simply defining the dynamoDBTableName property of the alexa object, that's it.
Yes, the DynamoDB table exists.
I feel like my head is about to blow up, so any help would be greatly appreciated.
UPDATE: I found out how to see the logs. Here is the latest entry: Error fetching user state: ValidationException: The provided key element does not match the schema. I'm not sure why it would give that error, since I never queried anything; the only thing I did was declare the table name.
Just to document the resolution of the question in the comments and so that this question doesn't remain "unanswered" on SO:
Assuming you're using the alexa-skills-kit-sdk-for-nodejs, your table should have a single userId string HASH key.
var newTableParams = {
    AttributeDefinitions: [
        {
            AttributeName: 'userId',
            AttributeType: 'S'
        }
    ],
    KeySchema: [
        {
            AttributeName: 'userId',
            KeyType: 'HASH'
        }
    ],
    ProvisionedThroughput: {
        ReadCapacityUnits: 5,
        WriteCapacityUnits: 5
    }
}
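For reference, a table with that schema could be created from Node like so (a sketch using the aws-sdk v2 low-level DynamoDB client; the table name comes from the question):

var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB();

// Create the table with the userId HASH key the Alexa SDK expects
newTableParams.TableName = 'rememberThisDB';
dynamodb.createTable(newTableParams, function(err, data) {
    if (err) console.log("error " + JSON.stringify(err, null, 2));
    else console.log("created " + JSON.stringify(data, null, 2));
});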
It turned out the user did not have the appropriate schema set up for their DynamoDB table.
