The Issue:
I have a Node.js (8.10) AWS Lambda function that takes a JSON object and publishes it to an AWS IoT topic. The function successfully publishes to the topic; however, once fired, it is called continuously until I throttle the concurrency to zero to halt any further invocations.
I'm trying to figure out what I've implemented incorrectly that causes more than one invocation of the function.
The Function:
Here is my function:
var AWS = require('aws-sdk');

exports.handler = function (event, context) {
    var iotdata = new AWS.IotData({ endpoint: 'xxxxxxxxxx.iot.us-east-1.amazonaws.com' });
    var params = {
        topic: '/PiDevTest/SyncDevice',
        payload: JSON.stringify(event),
        qos: 0
    };
    iotdata.publish(params, function (err, data) {
        if (err) {
            console.log(err, err.stack);
        } else {
            console.log("Message sent.");
            context.succeed();
        }
    });
};
My test JSON is:
{
    "success": 1,
    "TccvID": "TestID01"
}
The test console has a response of "null", but the IoT topic shows the data from the test JSON, published to the topic about once per second.
What I've Tried
- I've attempted to define the handler in its own, non-anonymous function called handler, and then setting exports.handler = handler;. This didn't produce any errors, but didn't successfully post to the IoT topic either.
- I thought maybe the issue was with the Node.js callback. I've tried implementing it and leaving it out (current iteration above), but neither way seemed to make a difference. I had read somewhere that the function will retry if it errors, but I believe that only happens three times, so it wouldn't explain the indefinite calling of the function.
- I've also tried calling the function from another Lambda to make sure that the issue wasn't the AWS test tool. This produced the same behavior, though.
Summary:
What am I doing incorrectly that causes this function to publish the JSON data indefinitely to the IoT topic?
Thanks in advance for your time and expertise.
Use aws-iot-device-sdk to create an MQTT client and use its message handler and publish method to publish your messages to the IoT topic. Sample MQTT client code is below:
import * as DeviceSdk from 'aws-iot-device-sdk';
import * as AWS from 'aws-sdk';

let instance: any = null;

export default class IoTClient {
    client: any;

    /**
     * Constructor
     *
     * @param {boolean} createNewClient - Whether or not to use the existing client instance
     */
    constructor(createNewClient = false, options = {}) {
        // Delegate to init() so the singleton logic runs on construction.
        this.init(createNewClient, options);
    }

    async init(createNewClient, options) {
        if (createNewClient && instance) {
            instance.disconnect();
            instance = null;
        }
        if (instance) {
            return instance;
        }
        instance = this;

        this.initClient(options);
        this.attachDebugHandlers();
    }

    /**
     * Instantiate the AWS IoT device object.
     * Note that the credentials must be initialized with empty strings;
     * when we successfully authenticate to the Cognito Identity Pool,
     * the credentials will be dynamically updated.
     *
     * @param {Object} options - Options to pass to DeviceSdk
     */
    initClient(options) {
        const clientId = getUniqueId();
        this.client = DeviceSdk.device({
            region: options.region || getConfig('iotRegion'),

            // AWS IoT host endpoint
            host: options.host || getConfig('iotHost'),

            // clientId created earlier
            clientId: options.clientId || clientId,

            // Connect via secure WebSocket
            protocol: options.protocol || getConfig('iotProtocol'),

            // Set the base reconnect time to 500ms; this is a browser application
            // so we don't want to leave the user waiting too long for reconnection after
            // re-connecting to the network/re-opening their laptop/etc...
            baseReconnectTimeMs: options.baseReconnectTimeMs || 500,
            maximumReconnectTimeMs: options.maximumReconnectTimeMs || 1000,

            // Enable console debugging information
            debug: (typeof options.debug === 'undefined') ? true : options.debug,

            // AWS access key ID, secret key and session token must be
            // initialized with empty strings
            accessKeyId: options.accessKeyId,
            secretKey: options.secretKey,
            sessionToken: options.sessionToken,

            // Let redux handle subscriptions
            autoResubscribe: (typeof options.autoResubscribe === 'undefined') ? false : options.autoResubscribe,
        });
    }

    disconnect() {
        this.client.end();
    }

    attachDebugHandlers() {
        this.client.on('reconnect', () => {
            logger.info('reconnect');
        });

        this.client.on('offline', () => {
            logger.info('offline');
        });

        this.client.on('error', (err) => {
            logger.info('iot client error', err);
        });

        this.client.on('message', (topic, message) => {
            logger.info('new message', topic, JSON.parse(message.toString()));
        });
    }

    updateWebSocketCredentials(accessKeyId, secretAccessKey, sessionToken) {
        this.client.updateWebSocketCredentials(accessKeyId, secretAccessKey, sessionToken);
    }

    attachMessageHandler(onNewMessageHandler) {
        this.client.on('message', onNewMessageHandler);
    }

    attachConnectHandler(onConnectHandler) {
        this.client.on('connect', (connack) => {
            logger.info('connected', connack);
            onConnectHandler(connack);
        });
    }

    attachCloseHandler(onCloseHandler) {
        this.client.on('close', (err) => {
            logger.info('close', err);
            onCloseHandler(err);
        });
    }

    publish(topic, message) {
        this.client.publish(topic, message);
    }

    subscribe(topic) {
        this.client.subscribe(topic);
    }

    unsubscribe(topic) {
        this.client.unsubscribe(topic);
        logger.info('unsubscribed from topic', topic);
    }
}
*** getConfig() reads environment variables from a YAML file; alternatively, you can specify the values directly here.
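For illustration, here is a minimal getConfig() sketch that reads plain environment variables, plus a usage example of the client; both are assumptions, not part of the original answer:

// Minimal getConfig() sketch; the key names (iotRegion, iotHost,
// iotProtocol) are illustrative.
function getConfig(key) {
    const value = process.env[key];
    if (typeof value === 'undefined') {
        throw new Error('Missing config value: ' + key);
    }
    return value;
}

// Usage sketch: publish once connected (credentials come from Cognito).
const iotClient = new IoTClient(false, { debug: true });
iotClient.attachConnectHandler(() => {
    iotClient.publish('/PiDevTest/SyncDevice', JSON.stringify({ success: 1 }));
});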
While he only posted it as a comment, MarkB pointed me in the right direction.
The problem was related to another Lambda that was listening to the same topic and invoking the Lambda I was working on. This resulted in circular logic, as the exit condition was never met. Fixing that code solved the issue.
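For anyone who hits the same loop: one way to break such a cycle is to tag the messages your function publishes and skip any message it published itself. This is only a sketch; the source marker field is an assumption, not part of my original code:

var AWS = require('aws-sdk');

exports.handler = function (event, context, callback) {
    // Skip messages this function published itself, so an IoT rule
    // that re-invokes the Lambda cannot create an infinite loop.
    if (event.source === 'sync-lambda') {
        return callback(null, 'skipped');
    }
    var iotdata = new AWS.IotData({ endpoint: 'xxxxxxxxxx.iot.us-east-1.amazonaws.com' });
    var params = {
        topic: '/PiDevTest/SyncDevice',
        payload: JSON.stringify(Object.assign({}, event, { source: 'sync-lambda' })),
        qos: 0
    };
    iotdata.publish(params, function (err) {
        if (err) return callback(err);
        callback(null, 'Message sent.');
    });
};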
Related
Azure Notification Hub exception: not able to register device entries; the request could not be completed. We are using Node.js to register.
You can use the code below to register device entries.
Related posts and documents:
1. Issue with notification hub
2. createRegistrationId(callback)
/**
 * Creates a registration identifier.
 *
 * @param {Function(error, response)} callback  `error` will contain information
 *                                              if an error occurs; otherwise, `response`
 *                                              will contain information related to this operation.
 */
NotificationHubService.prototype.createRegistrationId = function (callback) {
    validateCallback(callback);

    var webResource = WebResource.post(this.hubName + '/registrationids');
    webResource.headers = {
        'content-length': null,
        'content-type': null
    };

    this._executeRequest(webResource, null, null, null, function (err, rsp) {
        var registrationId = null;
        if (!err) {
            var parsedLocationParts = url.parse(rsp.headers.location).pathname.split('/');
            registrationId = parsedLocationParts[parsedLocationParts.length - 1];
        }

        callback(err, registrationId, rsp);
    });
};
Using the azure-sb Node.js module, you should be able to register a device and send to it directly. This requires a connection string and hub name, as well as a device token, depending on the platform (such as APNS).
import { NotificationHubService } from 'azure-sb';

const CONNECTION_STRING = '';    // Full Access Connection String
const HUB_NAME = '';             // Notification Hub Name
const APNS_TOKEN = '';           // APNS device token
const SAMPLE_TAG = 'sample-tag'; // Tag used to target the registration
const APNS_MESSAGE = {
    aps: {
        alert: "Notification Hub test notification"
    }
};

const service = new NotificationHubService(HUB_NAME, CONNECTION_STRING);

service.apns.createNativeRegistration(APNS_TOKEN, SAMPLE_TAG, (err, response) => {
    if (err) {
        console.log(err);
        return;
    }
    console.log('Registration success');
    console.log(JSON.stringify(response));

    service.apns.send(SAMPLE_TAG, APNS_MESSAGE, (error, res) => {
        if (error) {
            console.log(error);
            return;
        }
        console.log('Message sent');
        console.log(JSON.stringify(res));
    });
});
When this runs, it should register the device and send to it. I have run this very recently and it works without a hitch. The older azure module might be tied to TLS 1.0/1.1, which has since been deprecated for use on Azure.
I'm trying to do a get() from my AWS Lambda (Node.js) on ElastiCache Redis using the node_redis client. I believe that I'm able to connect to Redis, but I'm getting a timeout (the Lambda's 60-second timeout) when I try to perform a get() operation.
I have also granted my AWS Lambda administrator access just to be certain that it's not a permissions issue. I'm invoking the Lambda by going to the AWS console and clicking the Test button.
Here is my redisClient.js:
const util = require('util');
const redis = require('redis');

console.info('Start to connect to Redis Server');
const client = redis.createClient({
    host: process.env.ElastiCacheEndpoint,
    port: process.env.ElastiCachePort
});

client.get = util.promisify(client.get);
client.set = util.promisify(client.set);

client.on('ready', function () {
    console.log(" subs Redis is ready"); // Can see this output in logs
});

client.on('connect', function () {
    console.log('subs connected to redis'); // Can see this output in logs
});

exports.set = async function (key, value) {
    console.log("called set!");
    return await client.set(key, value);
};

exports.get = async function (key) {
    console.log("called get!"); // Can see this output in logs
    return await client.get(key);
};
Here's my index.js which calls the redisClient.js:
const redisclient = require("./redisClient");

exports.handler = async (event) => {
    const params = event.params;
    const operation = event.operation;
    try {
        console.log("Checking RedisCache by calling client get"); // Can see this output in logs
        const cachedVal = await redisclient.get('mykey');
        console.log("Checked RedisCache by calling client get"); // This doesn't show up in logs.
        console.log(cachedVal);
        if (cachedVal) {
            return {
                statusCode: 200,
                body: JSON.stringify(cachedVal)
            };
        } else {
            const setCache = await redisclient.set('myKey', 'myVal');
            console.log(setCache);
            console.log("*******");
            let response = await makeCERequest(operation, params, event.account);
            console.log("CE Request returned");
            return response;
        }
    } catch (err) {
        return {
            statusCode: 500,
            body: err,
        };
    }
};
This is the output (timeout error message) that I get:
{
    "errorMessage": "2020-07-05T19:04:28.695Z 9951942c-f54a-4b18-9cc2-119eed65e9f1 Task timed out after 60.06 seconds"
}
I have tried using Bluebird (changing get to getAsync()) per this: https://github.com/UtkarshYeolekar/promisify-redis-client/blob/master/redis.js but still got the same behavior.
I also changed the port to a random value (like 8088) when creating the client, to see the behavior of the connect event for a failed connection. In that case I still see a timed-out error response, but I don't see "subs Redis is ready" and "subs connected to redis" in my logs.
Can anyone please point me in the right direction? I don't seem to understand why I'm able to connect to Redis but the get() request times out.
I figured out the issue and I'm posting it here in case it helps anyone in the future, as the behavior wasn't very intuitive for me.
I had enabled the AuthToken parameter when setting up my Redis cluster. I was passing the parameter to the Lambda through the environment variables, but wasn't using it when sending the get()/set() requests. When I disabled the AuthToken requirement in the Redis configuration, the Lambda was able to hit Redis with get/set requests. More details on AuthToken can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html#cfn-elasticache-replicationgroup-authtoken
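Alternatively, to keep AuthToken enabled, node_redis can send the token when it connects. A minimal sketch, assuming the token is exposed to the Lambda as an AuthToken environment variable and that in-transit encryption is enabled on the cluster (required whenever an auth token is set):

const redis = require('redis');

// Sketch: connect to an auth-enabled ElastiCache Redis (node_redis v3).
const client = redis.createClient({
    host: process.env.ElastiCacheEndpoint,
    port: process.env.ElastiCachePort,
    auth_pass: process.env.AuthToken, // sent as an AUTH command on connect
    tls: {} // in-transit encryption must be enabled when AUTH is used
});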
I'm creating an API in Azure Functions using TypeScript, with multiple endpoints connecting to the same Azure SQL Server. Each endpoint was set up using the Azure Functions extension for VS Code, with the HttpTrigger TypeScript template. Each endpoint will eventually make different calls to the database, collecting, processing, and storing data across different tables.
There don't seem to be any default bindings for Azure SQL (only Storage and Cosmos), and while tedious is used in some Microsoft documentation, it tends not to cover Azure Functions, which run asynchronously. What's more, other similar Stack Overflow questions tend to be for standard JavaScript, and use the module.exports = async function (context) syntax, rather than the const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> syntax used by the TypeScript HttpTrigger templates.
Here's what I've got so far in one of these endpoints, with sample code from the tedious documentation in the default Azure Functions HttpTrigger:
var Connection = require('tedious').Connection;

var config = {
    server: process.env.AZURE_DB_SERVER,
    options: {},
    authentication: {
        type: "default",
        options: {
            userName: process.env.AZURE_DB_USER,
            password: process.env.AZURE_DB_PASSWORD,
        }
    }
};

import { AzureFunction, Context, HttpRequest } from "@azure/functions"

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log('HTTP trigger function processed a request.');
    const name = (req.query.name || (req.body && req.body.name));

    if (name) {
        var connection = new Connection(config);
        connection.on('connect', function (err) {
            if (err) {
                console.log('Error: ', err);
            }
            context.log('Connected to database');
        });

        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    } else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};

export default httpTrigger;
This ends up with the following message:
Warning: Unexpected call to 'log' on the context object after function execution has completed. Please check for asynchronous calls that are not awaited or calls to 'done' made before function execution completes. Function name: HttpTrigger1. Invocation Id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. Learn more: https://go.microsoft.com/fwlink/?linkid=2097909
Again, the async documentation linked to covers only the standard JavaScript module.exports = async function (context) syntax, rather than the syntax used by these TypeScript HttpTriggers.
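My working theory is that the warning appears because the connect callback fires after the async handler has already returned, so presumably the connection needs to be wrapped in a Promise and awaited. A sketch of what I mean (my own assumption, not verified; depending on the tedious version, an explicit connection.connect() call may also be required):

// Wrap tedious's 'connect' event in a Promise so the async handler
// can await the connection before returning.
function connectAsync(config) {
    return new Promise((resolve, reject) => {
        const connection = new Connection(config);
        connection.on('connect', (err) => {
            if (err) {
                reject(err);
            } else {
                resolve(connection);
            }
        });
    });
}

// Inside the handler:
// const connection = await connectAsync(config);
// context.log('Connected to database');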
I've also been reading that best practice might be to have a single connection, rather than connecting anew each time these endpoints are called - but again unsure if this should be done in a separate function that all of the endpoints call. Any help would be much appreciated!
I'm glad that using the 'mssql' node package works for you:
const sql = require('mssql');
require('dotenv').config();

module.exports = async function (context, req) {
    try {
        await sql.connect(process.env.AZURE_SQL_CONNECTIONSTRING);
        const result = await sql.query`select Id, Username from Users`;
        context.res.status(200).send(result);
    } catch (error) {
        context.log('error occurred ', error);
        context.res.status(500).send(error);
    }
};
Ref: https://stackoverflow.com/questions/62233383/understanding-js-callbacks-w-azure-functions-tedious-sql
Hope this helps.
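On the single-connection point raised in the question: a common pattern with mssql is to create the connection pool once at module scope so that warm invocations reuse it. A minimal sketch, with the query and response shape purely illustrative:

const sql = require('mssql');

// Create the pool once at module load; warm invocations reuse it.
const poolPromise = sql.connect(process.env.AZURE_SQL_CONNECTIONSTRING);

module.exports = async function (context, req) {
    const pool = await poolPromise;
    const result = await pool.request().query('select Id, Username from Users');
    context.res = { status: 200, body: result.recordset };
};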
I am new to Azure Functions. I want to call an Azure Function that will trigger another Azure Function. I have written the code below, but it's giving me a timeout. The code is written in Node.js.
Please suggest what to do.
const http = require('http');

module.exports = async function (context, req, callback) {
    UtilityAccountNumber = req.body.UtilityAccountNumber.split(',');
    if (UtilityAccountNumber == '' || typeof UtilityAccountNumber === 'undefined' || !(Array.isArray(UtilityAccountNumber) && UtilityAccountNumber.length)) {
        response = {
            status: 0,
            message: "Please provide utility account number."
        };
    } else {
        if (UtilityAccountNumber.length) {
            for (let accountNo of UtilityAccountNumber) {
                try {
                    var options = {
                        host: process.env.API_HOST,
                        port: process.env.PORT,
                        path: '/api/' + process.env.WEBSCRAPERMASTER,
                        method: 'POST'
                    };
                    var myreq = http.request(options, function (res) {
                    });
                    myreq.end();
                } catch (ex) { // if failed
                    await logHTTPErrorResponse(ex, huId);
                    console.error(ex);
                }
            }
        }
    }

    context.res = {
        status: 200,
        body: response
    };
};
Your code logic may be stuck at some point, which is why you get the timeout (Azure Functions have a default timeout limit). I think the function-chaining pattern of Azure Durable Functions fully meets your requirements, and since you are using Node.js, Durable Functions are supported.
Please have a look at this:
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-sequence?tabs=javascript
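A minimal sketch of that chaining pattern, where the ScrapeAccount activity name is hypothetical and would wrap the HTTP call to your scraper API:

const df = require('durable-functions');

// Orchestrator: calls the (hypothetical) ScrapeAccount activity once per
// account number, in sequence, and collects the results.
module.exports = df.orchestrator(function* (context) {
    const accountNumbers = context.df.getInput();
    const results = [];
    for (const accountNo of accountNumbers) {
        results.push(yield context.df.callActivity('ScrapeAccount', accountNo));
    }
    return results;
});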
I have a function which is accessing multiple AWS resources, and now I need to test this function, but I don't know how to mock these resources.
I have tried following the GitHub page of aws-sdk-mock, but didn't get much there.
var AWS = require('aws-sdk');

function someData(event, configuration, callback) {
    // STS set-up
    var sts = new AWS.STS(configuration.STS_CONFIG);
    sts.assumeRole({
        DurationSeconds: 3600,
        RoleArn: process.env.CROSS_ACCOUNT_ROLE,
        RoleSessionName: configuration.ROLE_NAME
    }, function (err, data) {
        if (err) {
            // an error occurred
            console.log(err, err.stack);
        } else {
            // successful response
            // resolving static credentials
            var creds = new AWS.Credentials({
                accessKeyId: data.Credentials.AccessKeyId,
                secretAccessKey: data.Credentials.SecretAccessKey,
                sessionToken: data.Credentials.SessionToken
            });

            // Query function
            var dynamodb = new AWS.DynamoDB({ apiVersion: configuration.API_VERSION, credentials: creds, region: configuration.REGION });
            var docClient = new AWS.DynamoDB.DocumentClient({ apiVersion: configuration.API_VERSION, region: configuration.REGION, endpoint: configuration.DDB_ENDPOINT, service: dynamodb });

            // extract params
            var ID = event.queryStringParameters.Id;
            console.log('metrics of id ' + ID);
            var params = {
                TableName: configuration.TABLE_NAME,
                ProjectionExpression: configuration.PROJECTION_ATTR,
                KeyConditionExpression: '#ID = :ID',
                ExpressionAttributeNames: {
                    '#ID': configuration.ID
                },
                ExpressionAttributeValues: {
                    ':ID': ID
                }
            };

            queryDynamoDB(params, docClient).then((response) => {
                console.log('Params: ' + JSON.stringify(params));
                // if the query is successful
                if (typeof (response[0]) !== 'undefined') {
                    response[0]['Steps'] = process.env.STEPS;
                    response[0]['PageName'] = process.env.STEPS_NAME;
                }
                console.log('The response you get', response);
                var success = {
                    statusCode: HTTP_RESPONSE_CONSTANTS.SUCCESS.statusCode,
                    body: JSON.stringify(response),
                    headers: {
                        'Content-Type': 'application/json'
                    },
                    isBase64Encoded: false
                };
                return callback(null, success);
            }, (err) => {
                // return internal server error
                return callback(null, HTTP_RESPONSE_CONSTANTS.BAD_REQUEST);
            });
        }
    });
}
This is the Lambda function which I need to test; there are also some environment variables being used here.
I tried writing a unit test for the above function using aws-sdk-mock, but I'm still not able to figure out how to actually do it. Any help will be appreciated. Below is my test code:
describe('test getMetrics', function () {
    var expectedOnInvalid = HTTP_RESPONSE_CONSTANTS.BAD_REQUEST;

    it('should assume role ', function (done) {
        var event = {
            queryStringParameters: {
                Id: '123456'
            }
        };
        AWS.mock('STS', 'assumeRole', 'roleAssumed');
        AWS.restore('STS');
        AWS.mock('Credentials', 'credentials');
        AWS.restore('Credentials');
        AWS.mock('DynamoDB.DocumentClient', 'get', 'message');
        AWS.mock('DynamoDB', 'describeTable', 'message');
        AWS.restore('DynamoDB');
        AWS.restore('DynamoDB.DocumentClient');

        someData(event, configuration, (err, response) => {
            expect(response).to.deep.equal(expectedOnInvalid);
            done();
        });
    });
});
I am getting the following error:
{ MultipleValidationErrors: There were 2 validation errors:
* MissingRequiredParameter: Missing required key 'RoleArn' in params
* MissingRequiredParameter: Missing required key 'RoleSessionName' in params
Try setting the aws-sdk module explicitly.
Project structures that don't include the aws-sdk at the top level of the project's node_modules folder will not be properly mocked. An example of this would be installing the aws-sdk in a nested project directory. You can get around this by explicitly setting the path to a nested aws-sdk module using setSDK().
const AWSMock = require('aws-sdk-mock');
const AWS = require('aws-sdk');
AWSMock.setSDKInstance(AWS);
For more details, read the aws-sdk-mock documentation; they have explained it even better.
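For reference, here is a sketch of mocking the services someData() uses. The return values are illustrative, and I'm assuming queryDynamoDB() calls DocumentClient.query under the hood; mocks must be registered before the SDK clients are instantiated, and restored only after the assertions:

const AWSMock = require('aws-sdk-mock');
const AWS = require('aws-sdk');

AWSMock.setSDKInstance(AWS);

// Mock assumeRole with the full (params, callback) signature so the
// call site's RoleArn and RoleSessionName are passed through.
AWSMock.mock('STS', 'assumeRole', (params, callback) => {
    callback(null, {
        Credentials: {
            AccessKeyId: 'mock-key-id',
            SecretAccessKey: 'mock-secret',
            SessionToken: 'mock-token'
        }
    });
});

// someData() queries through the DocumentClient, so mock 'query', not 'get'.
AWSMock.mock('DynamoDB.DocumentClient', 'query', (params, callback) => {
    callback(null, { Items: [{ Id: '123456' }] });
});

// ...invoke someData(event, configuration, callback) here, then clean up:
AWSMock.restore();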
I strongly disagree with @ttulka's answer, so I have decided to add my own as well.
Given you received an event in your Lambda function, it's very likely you'll process the event and then invoke some other service. It could be a call to S3, DynamoDB, SQS, SNS, Kinesis...you name it. What is there to be asserted at this point?
Correct arguments!
Consider the following event:
{
    "data": "some-data",
    "user": "some-user",
    "additionalInfo": "additionalInfo"
}
Now imagine you want to invoke documentClient.put and you want to make sure that the arguments you're passing are correct. Let's also say that you DON'T want the additionalInfo attribute to be persisted, so, somewhere in your code, you'd have this line to get rid of it:
delete event.additionalInfo
right?
You can now create a unit test to assert that the correct arguments were passed into documentClient.put, meaning the final object should look like this:
{
    "data": "some-data",
    "user": "some-user"
}
Your test must assert that documentClient.put was invoked with a JSON which deep equals the JSON above.
If you or any other developer now, for some reason, removes the delete event.additionalInfo line, tests will start failing.
And this is very powerful! If you make sure that your code works the way you expect, you basically don't have to worry about creating integration tests at all.
Now, if a SQS consumer Lambda expects the body of the message to contain some field, the producer Lambda should always take care of it to make sure the right arguments are being persisted in the Queue. I think by now you get the idea, right?
I always tell my colleagues that if we can create proper unit tests, we should be good to go in 95% of the cases, leaving integration tests out. Of course it's better to have both, but given the amount of time spent on creating integration tests (setting up environments, credentials, sometimes even different accounts), it's often not worth it. But that's just MY opinion. Both you and @ttulka are more than welcome to disagree.
Now, back to your question:
You can use Sinon to mock and assert arguments in your Lambda functions. If you need to mock a third-party service (like DynamoDB, SQS, etc.), you can create a mock object and replace it in your file under test using Rewire. This is usually the road I ride, and it has been great so far.
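A sketch of that approach, where the module path './create-user', the module-level documentClient variable, and the table name are all hypothetical:

const sinon = require('sinon');
const rewire = require('rewire');

// Load the Lambda under test so its internals can be replaced.
const lambda = rewire('./create-user');

it('persists the event without additionalInfo', async () => {
    // Stub DocumentClient.put so no real AWS call is made.
    const putStub = sinon.stub().returns({ promise: () => Promise.resolve({}) });
    lambda.__set__('documentClient', { put: putStub });

    await lambda.handler({
        data: 'some-data',
        user: 'some-user',
        additionalInfo: 'additionalInfo'
    });

    // calledWith uses deep equality, so this fails if additionalInfo
    // sneaks back into the persisted item.
    sinon.assert.calledWith(putStub, {
        TableName: 'users',
        Item: { data: 'some-data', user: 'some-user' }
    });
});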
I see unit testing as a way to check whether your domain (business) rules are met.
As long as your Lambda contains exclusively the integration of AWS services, it doesn't make much sense to write a unit test for it.
To mock all the resources means your test will only be testing the communication among those mocks; such a test has no value.
External resources mean input/output, which is what integration testing focuses on.
Write integration tests and run them as part of your integration pipeline against real deployed resources.
This is how we can mock STS in Node.js.
import { STS } from 'aws-sdk';

export default class GetCredential {
    constructor(public sts: STS) { }

    public async getCredentials(role: string) {
        console.info('Retrieving credential...', { role });
        const apiRole = await this.sts
            .assumeRole({
                RoleArn: role,
                RoleSessionName: 'test-api',
            })
            .promise();
        if (!apiRole?.Credentials) {
            throw new Error(`Credentials for ${role} could not be retrieved`);
        }
        return apiRole.Credentials;
    }
}
And here is the mock for the above function:
import { STS } from 'aws-sdk';
import GetCredential from './GetCredential';

const sts = new STS();
let testService: GetCredential;

beforeEach(() => {
    testService = new GetCredential(sts);
});

describe('Given getCredentials has been called', () => {
    it('The method returns a credential', async () => {
        const credential = {
            AccessKeyId: 'AccessKeyId',
            SecretAccessKey: 'SecretAccessKey',
            SessionToken: 'SessionToken'
        };
        const mockGetCredentials = jest.fn().mockReturnValue({
            promise: () => Promise.resolve({ Credentials: credential }),
        });
        testService.sts.assumeRole = mockGetCredentials;

        const result = await testService.getCredentials('fakeRole');

        expect(result).toEqual(credential);
    });
});