Getting InvalidSignatureException: Forbidden error when deploying my dockerized Pinpoint application on EC2 - aws-pinpoint

I have built an app that creates Pinpoint endpoints using aws-sdk. It works perfectly on my local machine and in a local Docker container, but when I deploy the same application on EC2 using Docker it gives me this "Forbidden" error. Why?
I mean, if it works fine locally it should work live as well.
const AWS = require('aws-sdk');
AWS.config.update({
  secretAccessKey: process.env.AWS_SECRET_ACCESS,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  region: 'ap-southeast-2',
});
const pinpoint = new AWS.Pinpoint();
pinpoint.updateEndpoint(params, function (err, data) {
  if (err) {
    logger.info('An error occurred.\n');
    logger.info(err, err.stack);
  } else {
    logger.info(
      '>> Endpoint added/pushed Successfully with endpoint ID ' + obj_id
    );
  }
});
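No answer was posted here, but since InvalidSignatureException usually points at bad credentials or clock skew, a first step worth trying is to confirm the container actually receives the env variables and that the instance clock is in sync. A minimal sanity-check sketch, reusing the env variable names from the question; correctClockSkew tells the v2 SDK to retry with an adjusted clock if the host time has drifted:
// Verify the credentials were actually passed into the container,
// e.g. via `docker run -e AWS_ACCESS_KEY -e AWS_SECRET_ACCESS ...`
if (!process.env.AWS_ACCESS_KEY || !process.env.AWS_SECRET_ACCESS) {
  throw new Error('AWS credentials are missing from the environment');
}

const AWS = require('aws-sdk');
AWS.config.update({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_ACCESS,
  region: 'ap-southeast-2',
  correctClockSkew: true, // compensates for a drifted EC2/container clock
});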

Related

s3.getObject never responds when using Google Cloud Function

I am trying to call AWS s3.getObject (v2) or .send (v3) within a Google Cloud Function and the request always times out. During development I used a simple node/express server locally to create and test the code before putting it in the Cloud Function. Here are my two working versions using the express server:
const params = {
  Bucket: process.env.AWS_BUCKET_NAME,
  Key: key,
};
V3
const createParams = new GetObjectCommand(params);
const response = await client.send(createParams);
V2
const s3Asset = await s3.getObject(params).promise();
Both of these return the correct file, which I then either send as a Buffer or bundle into a Zip when there are multiple files.
Once I put this in a Google Cloud Function, it never returns and times out. I ran a quick HTTP request to google.com and it returned a status code of 200, so I know the function has internet access. And it has a VPC connector set to allow all traffic. The env variables are set and accurate. What else am I missing?
Here is my Google Cloud Function
const s3 = new S3Client({
  credentials: {
    secretAccessKey: process.env.AWS_ACCESS_KEY,
    accessKeyId: process.env.AWS_ACCESS_ID,
  },
  region: process.env.AWS_REGION,
  correctClockSkew: true,
});
exports.downloadAssets = async (req, res) => {
  res.set('Access-Control-Allow-Origin', '*');
  if (req.method === 'OPTIONS') {
    console.log('Hit OPTIONS');
    res.set('Access-Control-Allow-Methods', 'POST');
    res.set('Access-Control-Allow-Headers', 'Content-Type');
    res.set('Access-Control-Max-Age', '3600');
    res.status(204).send('');
  } else {
    console.log('***** Assets to download\n', req.body?.assets, '*****');
    try {
      if (req.body?.assets) {
        var params = {
          Bucket: process.env.AWS_BUCKET_NAME,
          Key: key,
        };
        const s3Asset = await s3.getObject(params).promise(); // <<--- V2
        console.log('*** I have asset *** \n', s3Asset); // <<<-- this line never hits
        res.send({ buffer: s3Asset.Body, name: 'nameOfFile', mimeType: s3Asset.ContentType });
      } else {
        return res.status(400).json({ error: 'No Assets Sent' });
      }
    } catch (e) {
      console.log('Error downloading assets: \n', e);
      res.status(400).json({ error: e.message });
    }
  }
};
Change your Cloud Function's egress setting to route only traffic destined to private IP ranges through your VPC connector.
When you create a Serverless VPC Connector, resources like App Engine, Cloud Run and, in your case, Cloud Functions can connect to resources inside a VPC, such as a VM.
Say you have two VMs with internal IPs 10.128.0.2 and 10.128.0.3. By making the Cloud Function use the Serverless VPC Connector for egress traffic to internal resources, you can call those VMs from your function's code using their internal IPs. When creating the function, remember to select "Route only requests to private IPs through the VPC connector" so that only traffic meant for internal resources uses the VPC.
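For example, with the gcloud CLI the egress setting can be applied at deploy time; the connector name below is a placeholder:
gcloud functions deploy downloadAssets \
  --vpc-connector my-connector \
  --egress-settings private-ranges-only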

creating s3 bucket and folders inside it using node js in aws lambda function

I am new to both Node.js and AWS. I am trying to create a bucket in S3 using Node.js in a Lambda function, and then to create folders inside this S3 bucket.
I followed all the questions answered before and tried different iterations of the code, but none of them seem to work. The following code executes without any issues, yet the bucket and the folders are not created.
const AWS = require('aws-sdk');
let s3Client = new AWS.S3({
  accessKeyId: '<access_key_id>',
  secretAccessKey: '<secret_access_key>'
});
var params = {
  Bucket: 'pshycology06'
};
exports.handler = async (event, context, callback) => {
  // call spaces to create the bucket
  s3Client.createBucket(params, function(err, data) {
    if (err) {
      console.log("\r\n[ERROR] : ", err);
    } else {
      console.log("\r\n[SUCCESS] : data = ", data);
    }
  });
};
The code for creating folders inside the Lambda function is as follows:
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';
var s3Client = new AWS.S3({apiVersion: '2006-03-01'});
exports.handler = async (event, context) => {
  let params1 = { Bucket: 'travasko', Key: '2/dir1/dir2', Body: 'body does not matter' };
  s3Client.putObject(params1, function (err, data) {
    if (err) {
      console.log("Error creating the folder: ", err);
    } else {
      console.log("Successfully created a folder on S3");
    }
  });
};
Neither of them works. I have read a lot of documentation on this issue and many previously asked questions, but none of the answers work for me.
The Lambda function has a timeout of 1 minute. It has the following policies attached to its IAM role:
1. AmazonRDSFullAccess
2. AmazonS3FullAccess
3. AWSLambdaVPCExecutionRole
The VPC security group is the default one.
Also, when I try to create the same bucket using the following AWS CLI command, it creates the bucket:
aws s3api create-bucket --bucket psychology06 --region us-east-1
I am not sure where I am making a mistake.
Make sure a bucket with the same name does not already exist. Please share logs if possible.
You need to chain the .promise() method to your aws-sdk calls and await on them because you are creating async functions.
await s3Client.createBucket(params).promise();
await s3Client.putObject(params1).promise();
Furthermore, S3 doesn't really have directories, although the way the S3 console displays keys containing / as folders may throw you off. You can read more about it here.
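Putting both fixes together, a minimal sketch of the handler, using the bucket and key names from the question and the region from the CLI command (error handling kept deliberately simple):
const AWS = require('aws-sdk');
const s3Client = new AWS.S3({ apiVersion: '2006-03-01', region: 'us-east-1' });

exports.handler = async (event) => {
  // Awaiting the .promise() calls keeps the async handler alive
  // until both requests have actually completed.
  await s3Client.createBucket({ Bucket: 'pshycology06' }).promise();
  await s3Client.putObject({
    Bucket: 'pshycology06',
    Key: '2/dir1/dir2',
    Body: 'body does not matter',
  }).promise();
  return 'bucket and object created';
};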
As you are new, it can help to try the AWS CLI first (though relying on it is not recommended) and then look for the equivalent SDK function when implementing. Because your code is async, it won't wait until the callback function executes, so you can try something like the following. (This is not the actual solution; it just shows how to wait until the callback has done its work.)
'use strict'
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';
var s3Client = new AWS.S3({ apiVersion: '2006-03-01' });

exports.handler = async (event, context, callback) => {
  let params1 = { Bucket: 'travasko', Key: '2/dir1/dir2', Body: 'body does not matter' };
  try {
    let obj = await something(params1);
    callback(null, obj);
  } catch (err) {
    callback('error', err);
  }
}

async function something(params1) {
  return new Promise((resolve, reject) => {
    // wrap the callback-style SDK call in a Promise so the handler can await it
    s3Client.putObject(params1, function (err, data) {
      if (err) {
        console.log('Error creating the folder:', err);
        reject('error during putObject');
      } else {
        console.log('success' + JSON.stringify(data));
        resolve('success');
      }
    });
  });
}
To your question in the comments:
Hi Vinit, let me give you a little background; the question you asked is very generic. Firstly, a VPC is something you create to hold your organization's private and public subnets, which are used to run your EC2 instances or any self-hosted services (not managed by AWS). Lambda, however, is a managed service: it runs in AWS's own VPC, where AWS takes your code and Lambda configuration and executes it.
Now, coming to your question: attach a VPC in your Lambda configuration only if your Lambda needs to use services hosted in your VPC; otherwise don't. As discussed, Lambda runs in AWS's VPC, so during a cold start it creates an ENI (an Elastic Network Interface) to communicate with your VPC. Before re:Invent, an ENI was created for each Lambda, which is why the first invocation took so long and Lambdas used to time out even when the execution itself took less time. Since then, ENIs are created per subnet per security group.
So, when you have attached a VPC and the Lambda execution takes more time or doesn't work as expected, you have to look at how your VPC (configs, routes, subnets) is set up, which is very hard to answer without debugging since so many parameters are involved. Short answer: do not attach a VPC if your function's code does not need to talk to any of your own instances in the VPC (usually in a private subnet).
Since you are using async functionality, you have to await the call to s3Client.createBucket and then resolve the received promise.
For creating folders, use a trailing "/", for example "pshycology06/travasko/" (see the sketch below).
Do post error logs if these don't work.
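For instance, a zero-byte object whose key ends in "/" is what the console renders as a folder; the bucket and folder names here are taken from the answer above:
await s3Client.putObject({
  Bucket: 'pshycology06',
  Key: 'travasko/', // trailing slash: shown as a folder in the S3 console
  Body: '',
}).promise();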

Accessing AWS SSM Parameters in NodeJS

I'm trying to get an SSM parameter inside my Node.js project. The IAM credentials are fine: I wrote a test on my Elastic Beanstalk instance and it works there. The problem is inside the project. Any ideas why?
// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');
AWS.config.update({region: 'us-east-1'});

var ssm = new AWS.SSM();
var options = {
  Name: '/test/test', /* required */
  WithDecryption: false
};
var parameterPromise = ssm.getParameter(options).promise();
parameterPromise.then(function(data) {
  console.log(data); // successful response
}).catch(function(err) {
  console.log(err, err.stack); // an error occurred
});
I discovered it is the same as this issue: https://github.com/localstack/localstack/issues/1107. You need to pass the region in the SSM constructor:
var ssm = new AWS.SSM({region: 'us-east-1'});
It seems to be a bug. Thanks!

How to configure the region in the AWS js SDK?

My problem
I am writing a simple js function that reads some information from AWS CloudWatch Logs.
Following the answer at Configuring region in Node.js AWS SDK, and the AWS nodejs SDK documentation, I came up with the following:
Code
var AWS = require('aws-sdk');
var cloudwatchlogs = new AWS.CloudWatchLogs();
console.log(AWS.config.region); // Undefined
AWS.config.region = 'eu-central-1'; // Define the region with dot notation
console.log(AWS.config.region); // eu-central-1
AWS.config.update({region: 'eu-central-1'}); // Another way to update
console.log(AWS.config.region); // eu-central-1

var params = {
  limit: 0,
  // logGroupNamePrefix: 'STRING_VALUE',
  // nextToken: 'STRING_VALUE'
};

// This call is failing
cloudwatchlogs.describeLogGroups(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Output and error
undefined
eu-central-1
eu-central-1
{ ConfigError: Missing region in config
at Request.VALIDATE_REGION (/Users/adam/binaris/adam-test-sls/node_modules/aws-sdk/lib/event_listeners.js:91:45)
at Request.callListeners (/Users/adam/binaris/adam-test-sls/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at callNextListener (/Users/adam/binaris/adam-test-sls/node_modules/aws-sdk/lib/sequential_executor.js:95:12)
at /Users/adam/binaris/adam-test-sls/node_modules/aws-sdk/lib/event_listeners.js:85:9
at finish (/Users/adam/binaris/adam-test-sls/node_modules/aws-sdk/lib/config.js:315:7)
at /Users/adam/binaris/adam-test-sls/node_modules/aws-sdk/lib/config.js:333:9
at SharedIniFileCredentials.get (/Users/adam/binaris/adam-test-sls/node_modules/aws-sdk/lib/credentials.js:126:7)
at getAsyncCredentials (/Users/adam/binaris/adam-test-sls/node_modules/aws-sdk/lib/config.js:327:24)
at Config.getCredentials (/Users/adam/binaris/adam-test-sls/node_modules/aws-sdk/lib/config.js:347:9)
at Request.VALIDATE_CREDENTIALS (/Users/adam/binaris/adam-test-sls/node_modules/aws-sdk/lib/event_listeners.js:80:26)
message: 'Missing region in config',
code: 'ConfigError',
time: 2017-07-11T09:57:55.638Z } ...
Environment
The code is running locally under node v8.1.2.
My question
How can I correctly configure the region in the AWS js SDK?
Addendum
I opened an issue on GitHub and got some responses.
Alternatively, you can specify the region when creating your CloudWatchLogs object:
var AWS = require('aws-sdk');
var cloudwatchlogs = new AWS.CloudWatchLogs({region: 'eu-central-1'});
Write the code in the following way and it will work:
var AWS = require('aws-sdk');
// assign AWS credentials here in the following way:
AWS.config.update({
  accessKeyId: 'asdjsadkskdskskdk',
  secretAccessKey: 'sdsadsissdiidicdsi',
  region: 'eu-central-1'
});
var cloudwatchlogs = new AWS.CloudWatchLogs({apiVersion: '2014-03-28'});
Use the following:
AWS.config.update({region: 'eu-central-1'});
You can find more information at the following link:
http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-region.html
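The ordering is the key point in all of these answers: a v2 SDK service object copies the global configuration when it is constructed, so a client created before AWS.config is updated (as in the question) never sees the region. A minimal sketch:
var AWS = require('aws-sdk');
AWS.config.update({region: 'eu-central-1'}); // configure first
var cloudwatchlogs = new AWS.CloudWatchLogs(); // then construct the client
// or pass the region directly:
// var cloudwatchlogs = new AWS.CloudWatchLogs({region: 'eu-central-1'});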

AWS S3 listBuckets returns null, using node js

Using Node.js I'm trying to list the buckets I have in AWS S3 by following this basic example:
http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-examples.html
My code looks like this, and it is run from localhost.
var AWS = require("aws-sdk"),
con = require('./../lib/config.js');
var s3 = new AWS.S3({
accessKeyId: con.fig.AWSAccessKeyId,
secretAccessKey: con.fig.AWSSecretKey,
});
s3.listBuckets(function(err, data) {
console.log(data);
});
But data is null.
What have I missed?
Is there some permission to set? I have set the permission AmazonS3FullAccess on the user.
I want to be able to upload files from a website to an S3 bucket.
Try this. The documentation says if err is null then the request was successful.
s3.listBuckets(function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listBuckets-property
OK, load your config using the following:
var AWS = require("aws-sdk"),
con = require('./../lib/config.js');
AWS.config.update({
accessKeyId: con.fig.AWSAccessKeyId,
secretAccessKey: con.fig.AWSSecretKey
})
var s3 = new AWS.S3();
s3.listBuckets(function(err,data){
if(err)console.log(err);
else console.log (data)
});
