NodeJS Lambda Region WAF IPSetID - node.js

I am new to NodeJS and am trying to alter this sample provided by AWS for reputation-list updates; however, it is specific to the CloudFront global region only.
https://github.com/awslabs/aws-waf-sample/tree/master/waf-reputation-lists
I have changed the CloudFormation template to create a regional IPSetID, but the function exits because the IPSetID does not exist. I assume this is because the SDK is looking at the global service rather than the regional one (i.e. eu-west-1), so I've set the region in the config, but it is still unable to locate the IPSet.
var aws = require('aws-sdk');

// configure API retries
aws.config.update({
  region: 'eu-west-1',
  maxRetries: 3,
  retryDelayOptions: {
    base: 1000
  }
});

var waf = new aws.WAF();
I've seen a recent question (AWS WAF update ip sets and rules specific to a region from lambda) which shows the URL differences, but I don't know where to start with updating the URL.
Error getting IP sets { [WAFNonexistentItemException: The referenced item does not exist.]
message: 'The referenced item does not exist.',
code: 'WAFNonexistentItemException',
statusCode: 400,
retryable: false,
retryDelay: 162.11187234148383 }
Error getting ranges and/or IP sets { [WAFNonexistentItemException: The referenced item does not exist.]
message: 'The referenced item does not exist.',
code: 'WAFNonexistentItemException',
statusCode: 400,
retryable: false,
retryDelay: 162.11187234148383 }
{
"errorMessage": "The referenced item does not exist.",
"errorType": "WAFNonexistentItemException",
"stackTrace": [
"Request.extractError (/var/task/node_modules/aws-sdk/lib/protocol/json.js:48:27)",
"Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:105:20)",
"Request.emit (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:77:10)",
"Request.emit (/var/task/node_modules/aws-sdk/lib/request.js:682:14)",
"Request.transition (/var/task/node_modules/aws-sdk/lib/request.js:22:10)",
"AcceptorStateMachine.runTo (/var/task/node_modules/aws-sdk/lib/state_machine.js:14:12)",
"/var/task/node_modules/aws-sdk/lib/state_machine.js:26:10",
"Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:38:9)",
"Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:684:12)",
"Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:115:18)"
]
}

Make sure you have an up-to-date version of the aws-sdk that supports the regional WAF, then replace the line var waf = new aws.WAF(); with code similar to the following.
var readline = require('readline');
var aws = require('aws-sdk');
var https = require('https');
var async = require('async');

// configure API retries
aws.config.update({
  region: 'eu-west-1',
  maxRetries: 3,
  retryDelayOptions: {
    base: 1000
  }
});

var waf = new aws.WAFRegional();
var cloudwatch = new aws.CloudWatch();
var cloudformation = new aws.CloudFormation();
I got this working with the following versions (from the package.json Node configuration file):
{
  "name": "reputation-lists-parser",
  "version": "1.0.0",
  "description": "",
  "main": "reputation-lists-parser.js",
  "dependencies": {
    "aws-sdk": "^2.76.0",
    "async": "^2.4.1",
    "xml2js": "^0.4.17"
  }
}
You may need to upload the entire zip file containing your code to AWS Lambda.
I used the code contained in https://github.com/itopiacloud/aws-waf-regional-security-automations to help me get this working.
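Once the WAFRegional client can find the IPSet, the parser pushes descriptors to it via updateIPSet. A minimal sketch of the payload it needs (the helper name buildIpSetUpdates and the CIDR values are illustrative, not from the AWS sample):

```javascript
// Hypothetical helper: turns a list of CIDR ranges into the Updates
// array expected by waf.updateIPSet(). The same shape works for both
// aws.WAF (CloudFront/global) and aws.WAFRegional clients.
function buildIpSetUpdates(cidrs) {
  return cidrs.map(function (cidr) {
    return {
      Action: 'INSERT',
      IPSetDescriptor: { Type: 'IPV4', Value: cidr }
    };
  });
}

console.log(JSON.stringify(buildIpSetUpdates(['192.0.2.0/24']), null, 2));
```

In real code the result would go into waf.updateIPSet({ ChangeToken: ..., IPSetId: ..., Updates: ... }) alongside a change token obtained from getChangeToken.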

Related

Facing TLS write failed: -2 issue while writing to aerospike

I have Aerospike 6.0.0 installed on the server, being used as a cache, and I am using Node.js client 5.2.0 to connect to the database. The Aerospike nodes are configured to use TLS. The config of the namespace I am using is:
namespace application_cache {
  memory-size 5G
  replication-factor 2
  default-ttl 1h
  nsup-period 1h
  high-water-memory-pct 80
  stop-writes-pct 90
  storage-engine memory
}
The config I am passing while connecting to the Db is:
"config": {
  "captureStackTraces": true,
  "connTimeoutMs": 10000,
  "maxConnsPerNode": 1000,
  "maxCommandsInQueue": 300,
  "maxCommandsInProcess": 100,
  "user": "test",
  "password": "test",
  "policies": {
    "read": {
      "maxRetries": 3,
      "totalTimeout": 0,
      "socketTimeout": 0
    },
    "write": {
      "maxRetries": 3,
      "totalTimeout": 0,
      "socketTimeout": 0
    },
    "info": {
      "timeout": 10000
    }
  },
  "tls": {
    "enable": true,
    "cafile": "./certs/ca.crt",
    "keyfile": "./certs/server.key",
    "certfile": "./certs/server.crt"
  },
  "hosts": [
    {
      "addr": "-----.com",
      "port": 4333,
      "tlsname": "----"
    },
    {
      "addr": "---.com",
      "port": 4333,
      "tlsname": "-----"
    },
    {
      "addr": "----.com",
      "port": 4333,
      "tlsname": "-----"
    }
  ]
}
This is the function I am using to write to a set in this namespace:
async function updateApp(appKey, bins) {
  let result = {};
  try {
    const writePolicy = new Aerospike.WritePolicy({
      maxRetries: 3,
      socketTimeout: 0,
      totalTimeout: 0,
      exists: Aerospike.policy.exists.CREATE_OR_UPDATE,
      key: Aerospike.policy.key.SEND
    });
    const key = new Aerospike.Key(namespace, set, appKey);
    let meta = {
      ttl: 3600
    };
    result = await asc.put(key, bins, meta, writePolicy);
  } catch (err) {
    console.error("Aerospike Error:", JSON.stringify({
      params: { namespace, set, appKey },
      err: { code: err.code, message: err.message }
    }));
    return err;
  }
  return result;
}
It works most of the time but once in a while I get this error:
Aerospike Error: {"params":{"namespace":"application_cache","set":"apps","packageName":"com.test.application"},"err":{"code":-6,"message":"TLS write failed: -2 34D53CA5C4E0734F 10.57.49.180:4333"}}
Every record in this set has about 12-15 bins, and the record size varies between 300KB and 1.5MB. The bigger the record, the higher the chance of this error showing up. Has anyone faced this issue before, and what could be causing it?
I answered this on the community forum but figured I'd add it here as well.
Looking at the server error codes, a code of 6 is AS_ERR_BIN_EXISTS. Your write policy is set up with exists: Aerospike.policy.exists.CREATE_OR_UPDATE, but the options for exists are:
Name                Description
IGNORE              Write the record, regardless of existence. (I.e. create or update.)
CREATE              Create a record, ONLY if it doesn't exist.
UPDATE              Update a record, ONLY if it exists.
REPLACE             Completely replace a record, ONLY if it exists.
CREATE_OR_REPLACE   Completely replace a record if it exists, otherwise create it.
My guess is that it's somehow defaulting to CREATE only, causing it to throw that error. Try updating your write policy to use exists: Aerospike.policy.exists.IGNORE and see if you still get that error.
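As a sketch, the suggested change could look like the following. It is written as a factory that takes the Aerospike module's policy namespace as an argument, purely so the resulting shape can be inspected without the native client; in real code you would pass Aerospike.policy:

```javascript
// Builds the question's write policy with exists switched to IGNORE
// ("write the record, regardless of existence"). Note that
// CREATE_OR_UPDATE is not in the documented list of exists options,
// which may be why the client falls back to unexpected behaviour.
function makeWritePolicy(policy) {
  return {
    maxRetries: 3,
    socketTimeout: 0,
    totalTimeout: 0,
    exists: policy.exists.IGNORE,
    key: policy.key.SEND
  };
}

// In real code: asc.put(key, bins, meta, makeWritePolicy(Aerospike.policy));
```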

Shopify NodeJS: Error creating Metafields: error 422 "type can't be blank"

I'm trying to create metafields using the Shopify Admin REST API (with NodeJS).
This is a simplified version of the call I'm trying to make (see documentation):
const data = await client.post({
  path: 'metafields',
  data: {
    "metafield": {
      "namespace": "namespace123",
      "key": "key123",
      "value": 42,
      "type": "number_integer"
    }
  },
  type: DataType.JSON,
});
But I get this error:
HttpResponseError: Received an error response (422 Unprocessable Entity) from Shopify:
{
"type": [
"can't be blank"
]
}
I've checked that the type attributes are set properly:
The root-level type is set correctly to application/json, and data.metafield.type is set following the rules here.
I've also tried other types, but I get the same error.
Problem: I was using an old API version to initialize my Shopify Context:
Shopify.Context.initialize({
  // ... other stuff ... //
  API_VERSION: ApiVersion.October20,
});
None of my other API calls relied on old behaviour that has since changed, so updating my API version fixed the issue without too much hassle:
Shopify.Context.initialize({
  // ... other stuff ... //
  API_VERSION: ApiVersion.January22,
});
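Since newer Admin API versions reject metafields without a type (as the 422 above shows), a small client-side check can fail fast before posting. This is a hypothetical helper, not part of the Shopify library:

```javascript
// Returns true when the metafield payload carries a non-empty string
// type, which newer Admin API versions require.
function hasMetafieldType(payload) {
  return Boolean(
    payload &&
    payload.metafield &&
    typeof payload.metafield.type === 'string' &&
    payload.metafield.type.length > 0
  );
}
```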

Amazon S3 waitFor() inside AWS Lambda

I'm having an issue when calling the S3.waitFor() function from inside a Lambda function (Serverless, Node.js). I'm trying to asynchronously write a file into Amazon S3 using S3.putObject() from one REST API, and poll for the result file from another REST API using S3.waitFor() to see whether the write has finished.
Please see the following snippet:
...
S3.waitFor('objectExists', {
  Bucket: bucketName,
  Key: fileName,
  $waiter: {
    maxAttempts: 5,
    delay: 3
  }
}, (error, data) => {
  if (error) {
    console.log("error:" + JSON.stringify(error))
  } else {
    console.log("Success")
  }
});
...
Given a valid bucketName and an invalid fileName, when the code runs in my local test script it returns an error after 15 seconds (3 seconds x 5 retries) with the following result:
error: {
  "message": "Resource is not in the state objectExists",
  "code": "ResourceNotReady",
  "region": null,
  "time": "2018-08-03T06:08:12.276Z",
  "requestId": "AD621033DCEA7670",
  "extendedRequestId": "JNkxddWX3IZfauJJ63SgVwyv5nShQ+Mworb8pgCmb1f/cQbTu3+52aFuEi8XGro72mJ4ik6ZMGA=",
  "retryable": true,
  "statusCode": 404,
  "retryDelay": 3000
}
Meanwhile, when it runs inside the AWS Lambda function, it returns the error immediately, as follows:
error: {
  "message": "Resource is not in the state objectExists",
  "code": "ResourceNotReady",
  "region": null,
  "time": "2018-08-03T05:49:43.178Z",
  "requestId": "E34D731777472663",
  "extendedRequestId": "ONMGnQkd14gvCfE/FWk54uYRG6Uas/hvV6OYeiax5BTOCVwbxGGvmHxMlOHuHPzxL5gZOahPUGM=",
  "retryable": false,
  "statusCode": 403,
  "retryDelay": 3000
}
As you can see, the retryable and statusCode values differ between the two.
On Lambda it seems it always gets statusCode 403 when the file doesn't exist, while locally everything works as expected (retried 5 times every 3 seconds and received statusCode 404).
I wonder if it has anything to do with the IAM role. Here are my IAM role statement settings inside my serverless.yml:
iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "logs:CreateLogGroup"
      - "logs:CreateLogStream"
      - "logs:PutLogEvents"
      - "ec2:CreateNetworkInterface"
      - "ec2:DescribeNetworkInterfaces"
      - "ec2:DeleteNetworkInterface"
      - "sns:Publish"
      - "sns:Subscribe"
      - "s3:*"
    Resource: "*"
How to make it work from lambda function?
Thank you in advance!
It turned out that the key is how you set the IAM role for the bucket and all the objects under it.
Based on the AWS docs here, S3.waitFor() relies on the underlying S3.headObject():
Waits for the objectExists state by periodically calling the underlying S3.headObject() operation every 5 seconds (at most 20 times).
Meanwhile, S3.headObject() itself relies on the HEAD Object API, which has the following rule, as stated in the AWS docs here:
You need the s3:GetObject permission for this operation. For more information, go to Specifying Permissions in a Policy in the Amazon Simple Storage Service Developer Guide. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error.
If you don't have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
This means I need to add the s3:ListBucket action on the bucket resource containing the objects, in order to get a 404 response when an object doesn't exist.
Therefore, I've configured the CloudFormation AWS::IAM::Policy resource as below, adding the s3:Get* and s3:List* actions specifically on the bucket itself (i.e. S3FileStorageBucket).
"IamPolicyLambdaExecution": {
  "Type": "AWS::IAM::Policy",
  "DependsOn": [
    "IamRoleLambdaExecution",
    "S3FileStorageBucket"
  ],
  "Properties": {
    "PolicyName": { "Fn::Join": ["-", ["Live-RolePolicy", { "Ref": "environment" }]] },
    "PolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Resource": {
            "Fn::Join": [
              "",
              [
                "arn:aws:s3:::",
                { "Ref": "S3FileStorageBucket" }
              ]
            ]
          }
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:DeleteObject"
          ],
          "Resource": {
            "Fn::Join": [
              "",
              [
                "arn:aws:s3:::",
                { "Ref": "S3FileStorageBucket" },
                "/*"
              ]
            ]
          }
        },
        ...
Now I've been able to use S3.waitFor() to poll for a file/object under the bucket with only a single API call, and get the result only when it's ready, or an error when the resource is not ready after the specified timeout.
That way the client implementation is much simpler, as it doesn't have to implement polling itself.
Hope someone finds this useful. Thank you.
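With the permissions sorted out, the waitFor callback can also distinguish the two failure modes explicitly. A sketch based on the 403-vs-404 rule quoted above (the function name and messages are illustrative):

```javascript
// Maps a waitFor('objectExists') error to a human-readable cause:
// 403 means the caller lacks s3:ListBucket on the bucket, while 404
// means the object genuinely wasn't there after all retries.
function classifyWaitError(err) {
  if (!err) return 'object exists';
  if (err.statusCode === 403) return 'access denied: missing s3:ListBucket on the bucket';
  if (err.statusCode === 404) return 'object still absent after all retries';
  return 'unexpected error: ' + err.code;
}
```

This could be called from the error branch of the waitFor callback shown in the question.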

DynamoDB: ResourceNotFoundException When Creating Table (local)

I am following the AWS Node.js tutorial, but I cannot get the provided code to work.
Specifically, I am trying to create a "Movies" table on a local instance of DynamoDB and load it with some provided sample data. However, I receive "Cannot do operations on a non-existent table" when I try to create the table, which seems a bit strange.
For my set up, I am running DynamoDB in one console window. The command I am using and output is as follows:
COMPUTERNAME:node-dyna-demo myUserName$ java -Djava.library.path=./dynamodb_local_latest/DynamoDBLocal_lib -jar ./dynamodb_local_latest/DynamoDBLocal.jar -sharedDb
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *
In a separate console, I am executing the following code:
AWS.config.update({
  credentials: {
    accessKeyId: 'testAccessId',
    secretAccessKey: 'testAccessKey'
  },
  region: 'us-west-2',
  endpoint: 'http://localhost:8000'
})

const dynamodb = new AWS.DynamoDB()
const docClient = new AWS.DynamoDB.DocumentClient()

const dbParams = {
  TableName: "Movies",
  KeySchema: [ … ],
  AttributeDefinitions: [ … ],
  ProvisionedThroughput: { … }
}
dynamodb.createTable(dbParams, function(err, data) {
  if (err) {
    console.error(
      'Unable to create table. Error JSON:',
      JSON.stringify(err, null, 2)
    )
  } else {
    console.log(
      'Created table. Table description JSON:',
      JSON.stringify(data, null, 2)
    )
  }
})
The error I get from execution is:
Unable to create table. Error JSON: {
"message": "Cannot do operations on a non-existent table",
"code": "ResourceNotFoundException",
"time": "2018-01-24T15:56:13.229Z",
"requestId": "c8744331-dd19-4232-bab1-87d03027e7fc",
"statusCode": 400,
"retryable": false,
"retryDelay": 9.419252980728942
}
Does anyone know a possible cause for this exception?
When DynamoDB Local is started without the -sharedDb flag, this kind of issue can occur.
In your case local DynamoDB is started as expected:
java -Djava.library.path=./dynamodb_local_latest/DynamoDBLocal_lib -jar ./dynamodb_local_latest/DynamoDBLocal.jar -sharedDb
Try the following solutions.
Solution 1: Remove the credential info in the AWS config.
AWS.config.update({
  region: 'us-west-2',
  endpoint: 'http://localhost:8000'
})
Reason: if DynamoDB Local is running without the -sharedDb flag, it will create a new database per credential.
Solution 2: Create the table using the DynamoDB shell (http://localhost:8000/shell/#) and verify whether it was created using the following command:
aws dynamodb list-tables --endpoint-url http://localhost:8000
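Solution 1 amounts to stripping the credentials block before handing the config to AWS.config.update(). A hypothetical helper illustrating that transformation:

```javascript
// Returns a copy of the config with the credentials block removed, so
// that a DynamoDB Local instance does not resolve the database per
// access key (mirrors Solution 1 above).
function withoutCredentials(config) {
  var copy = Object.assign({}, config);
  delete copy.credentials;
  return copy;
}
```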
The first answer gave me a hint for my solution. I am using Java in combination with Maven; hopefully my answer can help others. The AWS regions might be conflicting: the Java code points to eu-west-1.
amazonDynamoDB = AmazonDynamoDBClientBuilder
    .standard()
    .withEndpointConfiguration(
        new AwsClientBuilder.EndpointConfiguration(
            "http://localhost:" + LocalDbCreationRule.port, "eu-west-1"))
    .build();
But my AWS config in my home folder on my local machine, at
~/.aws/config
showed the following:
[default]
region = eu-central-1
I changed the region to eu-west-1, matching the Java code, in my local config and the issue was resolved.

AWS S3 Static Website via Nodejs SDK

I am trying to create a static website on AWS S3 via the AWS Node SDK. I am at the step where I am putting the bucket website configuration, calling putBucketWebsite(params = {}, callback) with the following parameters:
{
  "Bucket": "xxx.example.com",
  "WebsiteConfiguration": {
    "IndexDocument": {
      "Suffix": "index.html"
    },
    "RoutingRules": []
  }
}
but I am getting the following error:
MalformedXML: The XML you provided was not well-formed or did not validate against our published schema
What am I doing wrong?
When I getBucketWebsite from a site that works, I get:
{
  "IndexDocument": {
    "Suffix": "index.html"
  },
  "RoutingRules": []
}
Try removing RoutingRules from your request. As per the documentation, when present it requires certain properties to be set.
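For reference, the request from the question with RoutingRules dropped would look like this (bucket name kept from the question; the actual putBucketWebsite call is shown as a comment):

```javascript
// WebsiteConfiguration with RoutingRules omitted entirely; per the
// answer above, an empty RoutingRules array does not validate, but
// leaving the key out is fine.
var params = {
  Bucket: 'xxx.example.com',
  WebsiteConfiguration: {
    IndexDocument: { Suffix: 'index.html' }
  }
};

console.log(JSON.stringify(params, null, 2));
// s3.putBucketWebsite(params, callback);
```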
