How to get AWS SQS queue ARN in nodeJS?

I'm trying to build an application with a basic client-server infrastructure. The server infrastructure is hosted on AWS, and when a client logs on, it sends a message to the server to set up various pieces of infrastructure. One of those pieces is an SQS queue that the client can poll to get updates from the server (eventually I'd like to build a push service, but I don't know how to do that yet).
I'm building this application in Node.js using the Node AWS SDK. The problem I'm having is that I need the queue ARN to do various things, like subscribe the SQS queue to an SNS topic that the application uses, but the createQueue API returns the queue URL, not the ARN. I should be able to get the ARN from the URL using the getQueueAttributes API, but it doesn't seem to be working: whenever I call it, I get undefined as the response. Here's my code, please tell me what I'm doing wrong:
exports.handler = (event, context, callback) => {
    new aws.SQS({apiVersion: '2012-11-05'}).createQueue({
        QueueName: event.userId
    }).promise()
    .then(data => { /* This has the Queue URL */
        new aws.SQS({apiVersion: '2012-11-05'}).getQueueAttributes({
            QueueUrl: data.QueueUrl,
            AttributeNames: ['QueueArn']
        }).promise()
    })
    .then(data => {
        console.log(JSON.stringify(data)); /* prints "undefined" */
    })
    /* Some more code down here that's irrelevant */
}
Thanks!

I tried the getQueueAttributes call on its own in an async handler like this:
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

exports.handler = async (event, context, callback) => {
    var params = {
        QueueUrl: 'my-queue-url',
        AttributeNames: ['QueueArn']
    };
    let fo = await sqs.getQueueAttributes(params).promise();
    console.log(fo);
};
and it printed
{
    ResponseMetadata: { RequestId: '123456-1234-1234-1234-12345' },
    Attributes: {
        QueueArn: 'arn:aws:sqs:eu-west-1:12345:my-queue-name'
    }
}

With the help of Ersoy, I realized that I was using block bodies (with {}) to write my Promise callbacks, but I was never returning anything from those blocks. I had thought that the last value in the block was the return value by default, but it turns out that is not the case. When I added return before the SQS API command, it worked (without using async/await).
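For reference, a minimal sketch of the corrected chain (same handler shape as above); the only change is returning the second SDK call from the first .then block so the next .then receives its result:
const aws = require('aws-sdk');

exports.handler = (event, context, callback) => {
    const sqs = new aws.SQS({apiVersion: '2012-11-05'});
    return sqs.createQueue({ QueueName: event.userId }).promise()
        .then(data => {
            // Returning the promise is what makes its result flow to the next .then
            return sqs.getQueueAttributes({
                QueueUrl: data.QueueUrl,
                AttributeNames: ['QueueArn']
            }).promise();
        })
        .then(data => {
            console.log(data.Attributes.QueueArn); // the queue ARN
        });
};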

Related

creating s3 bucket and folders inside it using node js in aws lambda function

I am new to both Node.js and AWS. I am trying to create a bucket in S3 using Node.js in a Lambda function. Subsequently, I am trying to create folders inside this S3 bucket.
I followed the questions answered before and tried different iterations of the code, but none of them seem to work. The following is my code, which executes without any errors, yet the bucket and the folders are not created.
const AWS = require('aws-sdk');
let s3Client = new AWS.S3({
    accessKeyId: '<access_key_id>',
    secretAccessKey: '<secret_access_key>'
});
var params = {
    Bucket: 'pshycology06'
};
exports.handler = async (event, context, callback) => {
    // call spaces to create the bucket
    s3Client.createBucket(params, function(err, data) {
        if (err) {
            console.log("\r\n[ERROR] : ", err);
        } else {
            console.log("\r\n[SUCCESS] : data = ", data);
        }
    });
};
The code for creating folders inside the Lambda function is as follows:
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';
var s3Client = new AWS.S3({apiVersion: '2006-03-01'});
exports.handler = async (event, context) => {
    let params1 = { Bucket: 'travasko', Key: '2/dir1/dir2', Body: 'body does not matter' };
    s3Client.putObject(params1, function (err, data) {
        if (err) {
            console.log("Error creating the folder: ", err);
        } else {
            console.log("Successfully created a folder on S3");
        }
    });
};
Neither of them works. I read a lot of documentation and previous answers on this issue, but none of them work for me.
The Lambda function has a timeout of 1 minute. It has the following policies attached to its IAM role:
1. AmazonRDSFullAccess
2. AmazonS3FullAccess
3. AWSLambdaVPCExecutionRole
The VPC security group is the default one.
Also, when I try to create the same bucket using the following AWS CLI command, the bucket is created.
aws s3api create-bucket --bucket psychology06 --region us-east-1
I am not sure where I am making a mistake.
Make sure a bucket with the same name is not already present. Please share logs if possible.
You need to chain the .promise() method onto your aws-sdk calls and await them, because you are writing async functions.
await s3Client.createBucket(params).promise();
await s3Client.putObject(params1).promise();
Furthermore, S3 doesn't really have directories, although you may be misled by the way the S3 console displays keys that contain / in their names. You can read more about it here.
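Putting both points together, a minimal sketch of the handler with the calls awaited (using the bucket name and key from the question; the empty Body for the folder marker is an assumption on my part):
const AWS = require('aws-sdk');
const s3Client = new AWS.S3({ apiVersion: '2006-03-01' });

exports.handler = async (event) => {
    // Await the promise form of each call so the async handler doesn't return early
    await s3Client.createBucket({ Bucket: 'pshycology06' }).promise();

    // There are no real directories in S3: a zero-byte object whose key ends
    // in "/" is simply rendered as a folder by the console
    await s3Client.putObject({
        Bucket: 'pshycology06',
        Key: '2/dir1/dir2/',
        Body: ''
    }).promise();
};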
As you are new, try things with the AWS CLI first and then look for the equivalent SDK function when implementing. Because your code is async, it won't wait for the callback to execute, so you can try something like the code below. (This is not the actual solution; it just shows how to wait until the callback finishes its work.)
'use strict'
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';
var s3Client = new AWS.S3({ apiVersion: '2006-03-01' });

exports.handler = async (event, context, callback) => {
    let params1 = { Bucket: 'travasko', Key: '2/dir1/dir2', Body: 'body does not matter' };
    try {
        let obj = await something(params1);
        callback(null, obj);
    } catch (err) {
        callback('error', err);
    }
}

// Wrap the callback-style putObject call in a Promise so the handler can await it
function something(params1) {
    return new Promise((resolve, reject) => {
        s3Client.putObject(params1, function (err, data) {
            if (err) {
                console.log('Error creating the folder:', err);
                reject('error during putObject');
            } else {
                console.log('success' + JSON.stringify(data));
                resolve('success');
            }
        });
    });
}
To your question in the comments:
Hi Vinit, let me give you a little background, since the question you asked is quite generic. A VPC is something you create to hold your organization's private and public subnets, which run your EC2 instances or other self-hosted (non-managed) services. Lambda, on the other hand, is a managed service and runs inside an AWS-owned VPC: AWS takes your code and Lambda configuration and executes it there. If you attach a VPC in your Lambda configuration (only do this if your Lambda needs to reach services hosted in your VPC), then during a cold start Lambda creates an ENI to communicate with your VPC. Before re:Invent, an ENI was created for each Lambda, which is why the first invocation took so long and the Lambda could time out even though the actual execution was much shorter. After re:Invent, ENIs are created per subnet per security group. So, if you have attached a VPC and your Lambda is taking too long or not working as expected, you have to look at how your VPC (configuration, routes, subnets) is set up, and that is hard to answer here because so many parameters are involved. Short answer: do not attach a VPC unless your function needs to talk to your own instances inside the VPC (usually in a private subnet).
Since you are using an async handler, you have to await the call to s3Client.createBucket and then resolve the returned promise.
For creating folders, use a trailing "/". For example, "pshycology06/travasko/".
Do post error logs if these don't work.

Nodejs Lamba, how to asynchronously execute certain part of the block and return api call back early on?

I'm working on a Node.js Lambda function. It fetches a remote video file and uploads it to S3. It works for small files, but for large files it fails because of the API Gateway time limit (29 seconds).
Is there a way to return the API response to the caller early, while the code keeps running on Lambda?
I wrapped the function with async but it takes the same time. Probably I set up the asynchronous job in Node.js incorrectly.
Below is the code.
'use strict';
const fetch = require('node-fetch');
const AWS = require('aws-sdk'); // eslint-disable-line import/no-extraneous-dependencies
const s3 = new AWS.S3();

module.exports.save = (event, context, callback) => {
    const url = "some_url";
    const Bucket = "recorded-video";
    const key = "some_key.mp4"

    fetch(url)
        .then((response) => {
            if (response.ok) {
                return response;
            }
            return Promise.reject(new Error(
                `Failed to fetch ${response.url}: ${response.status} ${response.statusText}`));
        })
        .then(response => response.buffer())
        .then(buffer => (
            s3.putObject({
                Bucket: process.env.BUCKET,
                Key: key,
                Body: buffer,
            }).promise()
            // then give permission.
        ));
};
I don't think so, but I can think of a few ways to do this:
You can make another Lambda to do the copying and invoke that Lambda asynchronously from the current one, as sketched below. That will give you 15 minutes (Lambdas can run for up to 15 minutes).
You can set up a Docker task with AWS Batch and submit a batch job from your Lambda. There is no time limit with this method.
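For the first option, here is a rough sketch of handing the work off to a second Lambda with an asynchronous invocation; the worker function name 'copy-video-worker' is just a placeholder:
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

module.exports.save = async (event) => {
    // InvocationType 'Event' is fire-and-forget: the call returns as soon as
    // the invocation is queued, so this handler can respond within the
    // API Gateway limit while the worker keeps running for up to 15 minutes.
    await lambda.invoke({
        FunctionName: 'copy-video-worker',   // placeholder worker Lambda name
        InvocationType: 'Event',
        Payload: JSON.stringify({ url: 'some_url', key: 'some_key.mp4' })
    }).promise();

    return { statusCode: 202, body: 'Upload started' };
};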

AWS Lambda finish before sending message to SQS

I'm running a Node.js Lambda on AWS that sends a message to SQS.
For some reason the SQS callback function gets executed only once every couple of calls. It looks like the thread running the Lambda finishes (because the SQS call is not synchronous and can't return a future), so the Lambda doesn't "stay alive" long enough for the callback to be executed.
How can I solve this issue and have the Lambda wait for the SQS callback to execute?
Here is my lambda code:
exports.handler = async (event, context) => {
    // Set the region
    AWS.config.update({region: 'us-east-1'});
    // Create an SQS service object
    var sqs = new AWS.SQS({apiVersion: '2012-11-05'});
    const SQS_QUEUE_URL = process.env.SQS_QUEUE_URL;

    var params = {
        MessageGroupId: "cv",
        MessageDeduplicationId: key,
        MessageBody: "My Message",
        QueueUrl: SQS_QUEUE_URL
    };
    console.log(`Sending notification via SQS: ${SQS_QUEUE_URL}.`);
    sqs.sendMessage(params, function(err, data) { //<-- This function gets called about one time every 4 lambda calls
        if (err) {
            console.log("Error", err);
            context.done('error', "ERROR Put SQS");
        } else {
            console.log("Success", data.MessageId);
            context.done(null, '');
        }
    });
};
You should either stick to the callback-based approach or to the promise-based one. I recommend you use the latter:
exports.handler = async (event, context) => {
    // Set the region
    AWS.config.update({region: 'us-east-1'});
    // Create an SQS service object
    var sqs = new AWS.SQS({apiVersion: '2012-11-05'});
    const SQS_QUEUE_URL = process.env.SQS_QUEUE_URL;

    var params = {
        MessageGroupId: "cv",
        MessageDeduplicationId: key,
        MessageBody: "My Message",
        QueueUrl: SQS_QUEUE_URL
    };
    console.log(`Sending notification via SQS: ${SQS_QUEUE_URL}.`);
    try {
        await sqs.sendMessage(params).promise(); // since the handler returns a promise, Lambda will only resolve after SQS responds with either failure or success
    } catch (err) {
        // do something here
    }
};
P.S. Instantiating AWS classes inside the handler is not a good idea in the Lambda environment, since it increases cold start time. It's better to move the new AWS.SQS(...) call out of the handler, and AWS.config.update() too, since otherwise these actions run on every call of the handler when you really only need them to run once.
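For example, a sketch of the same handler with the configuration and client hoisted out of the handler body (I'm assuming key arrives on the event, since it isn't defined in the question's code):
const AWS = require('aws-sdk');
AWS.config.update({region: 'us-east-1'});

// Created once per container and reused across warm invocations
const sqs = new AWS.SQS({apiVersion: '2012-11-05'});
const SQS_QUEUE_URL = process.env.SQS_QUEUE_URL;

exports.handler = async (event) => {
    const params = {
        MessageGroupId: "cv",
        MessageDeduplicationId: event.key, // assumption: the deduplication id comes in on the event
        MessageBody: "My Message",
        QueueUrl: SQS_QUEUE_URL
    };
    await sqs.sendMessage(params).promise();
};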

Invoke AWS Lambda from a node.js app running on local system not S3

I would like to invoke AWS Lambda from a Node.js file residing on my system. I followed Invoking a Lambda Function in a Browser Script, created a Cognito identity pool for unauthenticated users, and embedded the IdentityPoolId in the Node.js file like below:
let AWS = require('aws-sdk');
AWS.config.region = '<my-region>';
let lambda = new AWS.Lambda();
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: '<my-identity-pool-id>'
});

let params = {
    FunctionName: '<my-lambda-function>',
    InvocationType: 'RequestResponse',
    LogType: 'Tail',
    Payload: '{ "name" : "my-name" }'
};

lambda.invoke(params, (err, data) => {
    if (err) {
        console.log(err);
    } else {
        if (data.Payload) {
            console.log('my-lambda-function said: ' + data.Payload);
        }
    }
});
My Lambda Function:
exports.handler = function(event, context) {
    context.succeed('Hello ' + event.name);
};
I have created an IAM role with the AWSLambdaExecute, AWSLambdaBasicExecutionRole, and AmazonCognitoReadOnly policies attached. I am using the same role while creating the Lambda, and I have set the same role on the Cognito identity pool I created for unauthenticated access.
When I run node app.js all I get is the error:
UnrecognizedClientException: The security token included in the request is invalid.
Can somebody point me in the right direction to invoke an AWS Lambda from a simple Node.js file on my local system, without uploading any HTML/CSS/JS files to an S3 bucket and without using an AccessKeyId and SecretAccessKey, just using the roles associated with the Lambda?
Thanks in advance.
There are many ways to invoke a Lambda Function.
AWS services events (example: SNS triggered)
API created through AWS API Gateway.
Amazon CloudWatch cron jobs
API calls leveraging AWS Lambda APIs.
If your aim is to use your function as an API that can send and receive requests and responses, you should probably go for the API Gateway integration.
It's super easy to get started with API Gateway.
Get your Lambda function ready.
Set Up an IAM Role and Policy for an API to Invoke Lambda Functions
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "*"
        }
    ]
}
Create API Resources for the Lambda Function
In the API Gateway console, create an API.
Create the /ResourceName resource off the API's root.
Create a GET or POST method based on your requirements.
Choose AWS Service for the integration type and select the Lambda function you created in the respective region.
Now you can customize the integration request with body mapping templates based on your requirements.
You may look into the detailed documentation for API Gateway integration with Lambda:
In this section, we walk you through the steps to integrate an API
with a Lambda function using the AWS Service and Lambda Function
integration types.
Once your test invocation succeeds, you can use the API invocation URL from API_NAME/Dashboard, which will look something like:
https:// ###****.execute-api.us-west-2.amazonaws.com/{APIStageName}/
which can be used as a REST endpoint and called from your Node.js code locally.
Don't forget to enable authentication for your API with API keys.
Also, go through the production checklist if you are going to use it in such an environment.
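Once deployed, a minimal sketch of calling the API from a local Node.js script; the host name, stage, resource path and API key below are placeholders:
const https = require('https');

const body = JSON.stringify({ name: 'my-name' });

const req = https.request({
    hostname: 'xxxxxx.execute-api.us-west-2.amazonaws.com', // placeholder API id/region
    path: '/prod/my-resource',                              // placeholder stage and resource
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'x-api-key': '<my-api-key>'                         // only needed if API keys are enabled
    }
}, (res) => {
    let data = '';
    res.on('data', chunk => { data += chunk; });
    res.on('end', () => console.log('Lambda said:', data));
});

req.on('error', console.error);
req.write(body);
req.end();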
I had the same issue, and I was able to resolve it using a Kinesis stream, with the Lambda function acting as the consumer.
You have to create a trigger for the Lambda function as follows:
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

function createTrigger(kinesisArn, lambdaName) {
    // Create params const for trigger
    const params = {
        EventSourceArn: kinesisArn,
        FunctionName: lambdaName,
        StartingPosition: 'LATEST',
        BatchSize: 100
    };
    return new Promise((resolve, reject) => {
        lambda.createEventSourceMapping(params, (err, data) => {
            if (err) reject(err);
            else resolve(data);
        });
    });
}
Each time a new piece of data is pushed onto the Kinesis stream, your Lambda function will be called.
Here is an example of how to send data to the AWS Kinesis stream:
const kinesis = new AWS.Kinesis();

function send(streamName, partition, msg) {
    const params = {
        Data: JSON.stringify(msg), // data you want to send to your Lambda function
        PartitionKey: partition,   // an id for each shard
        StreamName: streamName
    };
    return new Promise((resolve, reject) => {
        kinesis.putRecord(params, (err, data) => {
            if (err) reject(err);
            else resolve(data);
        });
    });
}

AWS node.js automatic retry on failed batchWrite()

According to this AWS doc http://docs.aws.amazon.com/general/latest/gr/api-retries.html an automatic retry feature is built into the AWS SDK, in my case the Node.js AWS SDK. I configured the DocumentClient object like this:
var dynamodb = new AWS.DynamoDB.DocumentClient({
    region: 'us-west-2',
    retryDelayOptions: {base: 50},
    maxRetries: 20
});
but I still cannot get it to auto-retry for me. I want it to auto-retry all UnprocessedItems as well.
Can you point me to where is my mistake?
Thanks
The retryDelayOptions and maxRetries are options present on AWS.DynamoDB. The DocumentClient has to be configured by passing that DynamoDB service object:
var dynamodb = new AWS.DynamoDB({maxRetries: 5, retryDelayOptions: {base: 300} });
var docClient = new AWS.DynamoDB.DocumentClient({service : dynamodb});
The AWS client SDKs do all have built-in retry mechanisms; however, those retries happen at the request level. That means that any request that gets rejected by the server with a 500-level error, or in some cases a 400-level throttling error, will get automatically retried based on the configured settings.
What you are asking for is business-layer retry behavior, which is NOT built into the SDK. The UnprocessedItems collection contains items that were rejected by the service for various reasons, and you have to write your own logic to handle those.
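As an illustration, here is a rough sketch of such business-layer retry logic, assuming you already have a RequestItems map to write; the attempt count and backoff are arbitrary:
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function batchWriteWithRetry(requestItems, maxAttempts = 5) {
    let unprocessed = requestItems;
    for (let attempt = 0; attempt < maxAttempts && Object.keys(unprocessed).length; attempt++) {
        const result = await docClient.batchWrite({ RequestItems: unprocessed }).promise();
        unprocessed = result.UnprocessedItems || {};
        if (Object.keys(unprocessed).length) {
            // simple linear backoff before retrying the leftovers
            await new Promise(resolve => setTimeout(resolve, 100 * (attempt + 1)));
        }
    }
    return unprocessed; // anything still here failed after all attempts
}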
After sending the response, we can handle the unprocessed items as a background process until all of them are complete. The code below may be useful for you:
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

// Assumes an Express router and a `params` object with RequestItems already defined
router.post('/someBatchWrites', (req, res) => {
    docClient.batchWrite(params, function (error, data) {
        res.send(error, data);
        handler(error, data); // handle unprocessed items in the background
    });
});

// handler method
function handler(err, data) {
    if (err) {
        console.log("Error", err);
    } else {
        console.log("Success", data);
        if (Object.keys(data.UnprocessedItems).length) {
            setTimeout(() => {
                docClient.batchWrite({ RequestItems: data.UnprocessedItems }, handler);
            }, 100000);
        }
    }
}
