I am testing a call to a SOAP service using the node-soap library.
This works fine as a standalone Node.js app and the SOAP service responds. However, when I package the same code up as a serverless AWS Lambda function (using the Serverless Framework, but also executed directly in AWS Lambda), it doesn't appear to create the SOAP client.
Any thoughts on why this might be happening?
export async function main(event, context) {
  var soap = require('soap');

  var url = 'https://service.blah.co.uk/Service/Function.asmx?wsdl';
  var soapOptions = {
    forceSoap12Headers: true
  };
  var soapHeader = {
    'SOAPAction': 'http://www.blah.co.uk/Services/GetToken'
  };
  // `message` is defined elsewhere in the original code (not shown here)
  var params = {
    xmlMessage: message
  };

  console.log(JSON.stringify(params));

  soap.createClient(url, soapOptions, function (err, client) {
    // the serverless AWS Lambda function never reaches this point (deployed and invoked locally)
    console.log("In create client");
    if (err) console.log(err);

    client.addSoapHeader(soapHeader);
    client.GetToken(params, function (err, data) {
      console.log(data);
    });
  });
}
I have created a very minimal working example using async/await. Rather than using the old-style callback approach, just invoke soap.createClientAsync(url).
Here's the code:
'use strict';

const soap = require('soap');

module.exports.hello = async (event, context) => {
  const url = 'http://www.thomas-bayer.com/axis2/services/BLZService?wsdl';
  const client = await soap.createClientAsync(url);
  console.log(client);
  return {
    statusCode: 200,
    message: 'It works!'
  };
};
(Partial logs and the function's output from the AWS Lambda console omitted.)
EDIT 2 - The REAL problem
Check the link above about async/await. An async function returns a Promise, and the Lambda runtime waits only for that Promise to settle, not for any pending callbacks. Since soap.createClient does its work in a callback, your handler returns before the callback ever runs, so the Lambda is terminated before the SOAP client is created.
Using async/await will make your code simpler and will save you from beating your brains out over something this small.
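For completeness, here is a rough sketch of how the original handler could look with the promise-based API. node-soap's createClientAsync also generates an Async variant of each WSDL operation (e.g. GetTokenAsync), so the token call can be awaited as well. The WSDL URL and SOAPAction are taken from the question; reading xmlMessage from the event payload is my own assumption, since the original `message` variable is not shown.

'use strict';

const soap = require('soap');

module.exports.main = async (event, context) => {
  const url = 'https://service.blah.co.uk/Service/Function.asmx?wsdl';
  const soapOptions = { forceSoap12Headers: true };
  const soapHeader = { 'SOAPAction': 'http://www.blah.co.uk/Services/GetToken' };

  // Assumption: the XML message arrives in the event payload.
  const params = { xmlMessage: event.xmlMessage };

  const client = await soap.createClientAsync(url, soapOptions);
  client.addSoapHeader(soapHeader);

  // node-soap exposes a promise-returning <Operation>Async method per WSDL operation;
  // it resolves to [result, rawResponse, soapHeader, rawRequest].
  const [result] = await client.GetTokenAsync(params);
  console.log(result);

  return { statusCode: 200, body: JSON.stringify(result) };
};

Because everything is awaited, the handler's returned Promise does not resolve until the SOAP call has finished, so Lambda no longer terminates the function early.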
Related
I'm creating an API in Azure Functions using TypeScript, with multiple endpoints connecting to the same Azure SQL Server. Each endpoint was set up using the Azure Functions extension for VS Code, with the HttpTrigger TypeScript template. Each endpoint will eventually make different calls to the database, collecting from, processing and storing data to different tables.
There don't seem to be any default bindings for Azure SQL (only Storage or Cosmos), and while tedious is used in some Microsoft documentation, that documentation tends not to cover Azure Functions, which run handlers asynchronously. What's more, other similar Stack Overflow questions tend to be for standard JavaScript and use the module.exports = async function (context) syntax, rather than the const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> syntax used by the TypeScript HttpTrigger templates.
Here's what I've got so far in one of these endpoints, with sample code from the tedious documentation in the default Azure Functions HttpTrigger:
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

var Connection = require('tedious').Connection;

var config = {
  server: process.env.AZURE_DB_SERVER,
  options: {},
  authentication: {
    type: "default",
    options: {
      userName: process.env.AZURE_DB_USER,
      password: process.env.AZURE_DB_PASSWORD,
    }
  }
};

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  context.log('HTTP trigger function processed a request.');
  const name = (req.query.name || (req.body && req.body.name));

  if (name) {
    var connection = new Connection(config);
    connection.on('connect', function (err) {
      // This callback fires after the async function has already returned,
      // which is what triggers the warning quoted below.
      if (err) {
        console.log('Error: ', err);
      }
      context.log('Connected to database');
    });
    context.res = {
      // status: 200, /* Defaults to 200 */
      body: "Hello " + (req.query.name || req.body.name)
    };
  }
  else {
    context.res = {
      status: 400,
      body: "Please pass a name on the query string or in the request body"
    };
  }
};

export default httpTrigger;
This ends up with the following message:
Warning: Unexpected call to 'log' on the context object after function
execution has completed. Please check for asynchronous calls that are
not awaited or calls to 'done' made before function execution
completes. Function name: HttpTrigger1. Invocation Id:
xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. Learn more:
https://go.microsoft.com/fwlink/?linkid=2097909
Again, the async documentation linked to covers just the standard JavaScript module.exports = async function (context) syntax, rather than the syntax used by these TypeScript httpTriggers.
I've also been reading that best practice might be to have a single connection, rather than connecting anew each time these endpoints are called - but again I'm unsure whether this should be done in a separate function that all of the endpoints call. Any help would be much appreciated!
I'm glad that using the 'mssql' node package works for you:
const sql = require('mssql');
require('dotenv').config();

module.exports = async function (context, req) {
  try {
    await sql.connect(process.env.AZURE_SQL_CONNECTIONSTRING);
    const result = await sql.query`select Id, Username from Users`;
    context.res.status(200).send(result);
  } catch (error) {
    context.log('error occurred ', error);
    context.res.status(500).send(error);
  }
};
Ref: https://stackoverflow.com/questions/62233383/understanding-js-callbacks-w-azure-functions-tedious-sql
Hope this helps.
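On the single-connection question from the post: one common pattern is to cache an mssql connection pool in a module-level variable so warm invocations reuse it instead of connecting anew per request. This is only a minimal sketch under the assumption that the same AZURE_SQL_CONNECTIONSTRING environment variable and Users table from the snippet above are in place; it is not the only way to share a connection across endpoints.

const sql = require('mssql');

// Cache the pool promise at module scope so warm invocations reuse it
// instead of opening a new connection on every request.
let poolPromise;

function getPool() {
  if (!poolPromise) {
    poolPromise = new sql.ConnectionPool(process.env.AZURE_SQL_CONNECTIONSTRING).connect();
  }
  return poolPromise;
}

module.exports = async function (context, req) {
  try {
    const pool = await getPool();
    const result = await pool.request().query('select Id, Username from Users');
    context.res = { status: 200, body: result.recordset };
  } catch (error) {
    context.log('error occurred ', error);
    context.res = { status: 500, body: error.message };
  }
};

The getPool helper could live in a shared module that every endpoint imports, which addresses the "separate function that all of the endpoints call" idea from the question.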
I am trying to integrate AWS X-Ray with my Node.js API hosted on AWS Lambda (serverless).
X-Ray works as intended for the API that uses Express middleware, and I am able to see traces in the AWS console.
For async functions without the Express framework, I am facing issues with the integration.
I tried enabling manual mode, but I get an error saying Lambda does not support manual mode.
I also referred to the "Developing custom solutions for automatic mode" section, but had no luck.
Can someone help me out with this?
'use strict';

const AWSXRay = require('aws-xray-sdk-core');
const Aws = AWSXRay.captureAWS(require('aws-sdk'));
const capturePostgres = require('aws-xray-sdk-postgres');
const { Client } = capturePostgres(require('pg'));

module.exports.test = async (event, context) => {
  var ns = AWSXRay.getNamespace();
  const segment = new AWSXRay.Segment('Notifications_push');
  ns.enter(ns.createContext());
  AWSXRay.setSegment(segment);
  // ...
};
So, when running in Lambda, the SDK creates a placeholder (facade) segment automatically. There is a more in-depth explanation here: https://github.com/aws/aws-xray-sdk-node/issues/148
All you need is:
const AWSXRay = require('aws-xray-sdk-core');
// let's patch the AWS SDK
const AWS = AWSXRay.captureAWS(require('aws-sdk'));

module.exports.test = async (event, context) => {
  // All capturing will work out of the box
  var sqs = new AWS.SQS({apiVersion: '2012-11-05'});
  var params = {...};

  // no need to add code, just a regular SQS call
  sqs.sendMessage(params, function(err, data) {
    if (err) {
      console.log("Error", err);
    } else {
      console.log("Success", data.MessageId);
    }
  });

  // if you want to create subsegments manually, simply do
  const seg = AWSXRay.getSegment();
  const subseg = seg.addNewSubsegment('mynewsubsegment');
  subseg.close();

  // no need to close the Lambda segment
};
Additional documentation here: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-tracing.html
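Since the question also pulls in aws-xray-sdk-postgres, here is a rough sketch of how the captured pg client might be used in the same kind of handler. The connection settings (PG_* environment variables) are placeholders of my own, and this assumes active tracing is enabled on the function so the query is recorded as a subsegment under Lambda's facade segment.

'use strict';

const AWSXRay = require('aws-xray-sdk-core');
const capturePostgres = require('aws-xray-sdk-postgres');
const { Client } = capturePostgres(require('pg'));

module.exports.test = async (event, context) => {
  // Placeholder connection settings - substitute your own host and credentials.
  const client = new Client({
    host: process.env.PG_HOST,
    user: process.env.PG_USER,
    password: process.env.PG_PASSWORD,
    database: process.env.PG_DATABASE,
  });

  await client.connect();
  try {
    // The captured pg client records this query against the facade segment;
    // no manual segment or namespace management is needed.
    const res = await client.query('SELECT NOW()');
    return { statusCode: 200, body: JSON.stringify(res.rows[0]) };
  } finally {
    await client.end();
  }
};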
In the following handler, the .once event is working fine:
'use strict';

let firebase = require('firebase');

exports.handler = function(event, context) {
  context.callbackWaitsForEmptyEventLoop = false;

  firebase.initializeApp({
    serviceAccount: {},
    databaseURL: "https://harmanconnectedcar-180411.firebaseio.com/"
  });

  firebase.database().ref('events').once('value').then(function(snapshot) {
    console.log("*************event**********************");
    console.log(snapshot.val());
    context.succeed();
  });

  var starCountRef = firebase.database().ref('events');
  starCountRef.on('value', function(snapshot) {
    console.log("*************snapshot*****snapshot*****************");
    console.log(snapshot.val());
    context.succeed();
  });
}
When I try starCountRef.on, I am not able to see the logs printed.
Once I put the Lambda function in AWS and write to Firebase from the Firebase console, I am not able to see the events. Where do I need to look for the logs, and how do I check the starCountRef.on event (i.e. the real-time logs)?
You're starting two asynchronous listeners. Then when the first of them finishes you call context.succeed(). At that point Lambda terminates your function, so the second listener never completes.
To make this code work, you need to ensure you only call context.succeed() when all of the data has loaded. An easy way to do this is by using Promise.all():
exports.handler = function(event, context) {
  context.callbackWaitsForEmptyEventLoop = false;

  firebase.initializeApp({
    serviceAccount: {},
    databaseURL: "https://harmanconnectedcar-180411.firebaseio.com/"
  });

  var starCountRef = firebase.database().ref('events');

  var promises = [];
  promises.push(firebase.database().ref('events').once('value'));
  promises.push(starCountRef.once('value'));

  Promise.all(promises).then(function(snapshots) {
    snapshots.forEach(function(snapshot) {
      console.log(snapshot.val());
    });
    context.succeed();
  });
}
You seem to be mixing up two technologies here.
The onWrite method is a construct of Cloud Functions for Firebase, which (as far as I know) cannot be deployed on Amazon Lambda.
If you want to access your Firebase Realtime Database from Amazon Lambda, you can use the Firebase Admin SDK. But that doesn't include a way to trigger your code whenever the database is written.
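As an illustration of that last point, here is a rough sketch of reading the same path with the Firebase Admin SDK from a Lambda handler. The service-account file path and database URL are placeholders, and note this only reads data on demand - it does not get triggered when the database is written.

'use strict';

const admin = require('firebase-admin');

// Placeholder credentials - supply your own service account file and database URL.
const serviceAccount = require('./serviceAccountKey.json');

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: 'https://your-project-id.firebaseio.com'
});

exports.handler = async (event, context) => {
  // One-off read; Lambda cannot be triggered by Realtime Database writes.
  const snapshot = await admin.database().ref('events').once('value');
  console.log(snapshot.val());
  return { statusCode: 200 };
};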
I have a Firebase Cloud Function in which I am calling an external API endpoint like this:
const functions = require('firebase-functions');
var admin = require("firebase-admin");
admin.initializeApp(functions.config().firebase);
var request = require('request');
var moment = require('moment');
var rp = require('request-promise');
var db = admin.database();

exports.onCheckIn = functions.database.ref('/news/{newsId}/{entryId}')
  .onCreate(event => {
    console.log("Event Triggered");
    var eventSnapshot = event.data.val();
    request.post({
      url: 'http://MyCustomURL/endpoint/',
      form: {
        data: eventSnapshot.data
      }
    }, function(error, response, body) {
      console.log(response);
    });
  })
I am using the Blaze plan and this is working completely fine. But the problem is that when I create bulk data (around 50 to 100 entries), the HTTP requests to my custom URL do not all arrive; one or two HTTP requests are being skipped.
I have debugged my custom server and found that it is not receiving the missing requests from Firebase. But I have also checked the Cloud Functions logs and I can see that every event is correctly being triggered.
What could be the problem? Am I doing anything wrong?
You're not returning any value from your function. This means that Cloud Functions assumes that the function is done with its work after the last statement has run. But since you're making an asynchronous HTTP call in the function, that call may not have completed yet. Unfortunately you're not telling Cloud Functions about the fact that you have an outstanding call, so it may kill your function at any time after you return.
The solution is to return a promise that resolves after the HTTP request has completed. Since you're already including request-promise this is simple:
exports.onCheckIn = functions.database.ref('/news/{newsId}/{entryId}')
  .onCreate(event => {
    console.log("Event Triggered");
    var eventSnapshot = event.data.val();
    return rp.post({
      url: 'http://MyCustomURL/endpoint/',
      form: {
        data: eventSnapshot.data
      }
    });
  })
This is a common problem for developers new to JavaScript, or with JavaScript and Firebase, and is covered in:
the Firebase documentation
this blog post
this video
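If requests still seem to go missing after returning the promise, a small extension of the answer's snippet (using the same rp instance from the question) is to log success and failure explicitly, so a rejected request shows up in the Cloud Functions logs instead of disappearing silently. This is just a sketch, not a change to the approach above.

exports.onCheckIn = functions.database.ref('/news/{newsId}/{entryId}')
  .onCreate(event => {
    var eventSnapshot = event.data.val();
    return rp.post({
      url: 'http://MyCustomURL/endpoint/',
      form: {
        data: eventSnapshot.data
      }
    })
    .then(function(body) {
      console.log('Posted entry to endpoint', body);
    })
    .catch(function(err) {
      // A rejection here would otherwise be invisible; log it so skipped
      // requests can be correlated with the function's logs.
      console.error('Post failed', err);
      // Re-throw if the invocation should be marked as failed.
      throw err;
    });
  })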
How do I get the data back to the calling function from a Lambda that was invoked as an event?
Essentially, the Lambda function I have is:
exports.handler = function(event, context, callback) {
  var data = {};
  data.foo = 'hello';
  callback(null, data);
}
and the invoking function looks like this:
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();

var params = {
  FunctionName: 'SomeFunction',
  InvocationType: 'Event'
};

lambda.invoke(params, function (err, data) {
  if (err) {
    console.log(err, err.stack); // an error occurred
  } else {
    console.log(JSON.stringify(data, null, 2));
  }
});
However, the only thing I get back from the function is:
{
  "StatusCode": 202,
  "Payload": ""
}
I thought the point of the callback parameter was to allow the invoking function to get the data when the function has finished. Am I using it wrong or is what I am asking not possible with Lambdas?
When you invoke the Lambda function you need to set the InvocationType to 'RequestResponse' instead of 'Event'.
When using the Event type, your callback is invoked as soon as the payload has been received by AWS's servers. When using the RequestResponse type, your callback is invoked only after the Lambda function has completed, and you will receive the data it passed to its callback. It is not possible to do what you want with the Event type.
As @MatthewPope commented on this answer, if you do need the results from an asynchronous Lambda execution, you should consider having the Lambda function write its results to S3 (or something to that effect) at a known location where clients of the Lambda function can periodically check for the existence of the result.
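As an illustration of the RequestResponse path, here is a sketch of the caller from the question with only the invocation type changed; the input payload is just an example of my own, and the Payload in the response comes back as a JSON string that has to be parsed.

var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();

var params = {
  FunctionName: 'SomeFunction',
  InvocationType: 'RequestResponse',          // wait for the function to finish
  Payload: JSON.stringify({ example: true })  // example input, adjust as needed
};

lambda.invoke(params, function (err, data) {
  if (err) {
    console.log(err, err.stack);
  } else {
    // With RequestResponse, Payload contains whatever the Lambda passed to
    // its callback, serialized as a JSON string.
    var result = JSON.parse(data.Payload);    // e.g. { "foo": "hello" }
    console.log(result.foo);
  }
});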