AWS Lambda fails to call Facebook SDK - Node.js

I am setting up an AWS Lambda to call the Facebook SDK internally, but unfortunately I am not getting any response from the Facebook SDK.
Please find the code below:
const listCampaign = async (event, context) => {
  context.callbackWaitsForEmptyEventLoop = false;
  await validateAuthToken(event.headers.Authorization, event.headers.accountId);
  console.log("account is", account);
  return await account.getCampaigns([
    Campaign.Fields.account_id,
    Campaign.Fields.adlabels,
    Campaign.Fields.bid_strategy,
    Campaign.Fields.boosted_object_id,
    Campaign.Fields.brand_lift_studies,
    Campaign.Fields.budget_rebalance_flag,
    Campaign.Fields.budget_remaining,
    Campaign.Fields.buying_type,
    Campaign.Fields.can_create_brand_lift_study,
  ])
    .then((campaign) => {
      console.log("first check 3", campaign); // No response from the Facebook SDK; after 30 seconds the endpoint times out
    });
};

The most probable cause is that the security group is not configured to allow outbound connections. If that is not the case and the Lambda function is deployed in a VPC, check that the VPC subnets route through a NAT gateway and Internet gateway, since a Lambda inside a VPC has no Internet access otherwise.
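To confirm whether outbound connectivity is the problem before blaming the SDK, you can make a bare HTTPS request from inside the handler. This is a minimal sketch using only Node's built-in https module; graph.facebook.com is used merely as a reachable host, and the 5-second timeout is an arbitrary choice, not something from the question:

const https = require("https");

// Resolves with the HTTP status if the Lambda can reach the public Internet,
// rejects on timeout (the symptom of a missing NAT/Internet gateway route).
const checkOutbound = (host) =>
  new Promise((resolve, reject) => {
    const req = https.get({ host, path: "/", timeout: 5000 }, (res) => {
      res.resume(); // drain the response so the socket is freed
      resolve(res.statusCode);
    });
    req.on("timeout", () => {
      req.destroy();
      reject(new Error(`No route to ${host} - check the subnet's NAT/Internet gateway`));
    });
    req.on("error", reject);
  });

exports.handler = async () => {
  const status = await checkOutbound("graph.facebook.com");
  console.log("Outbound connectivity OK, HTTP status:", status);
};

If this times out as well, the problem is the VPC networking, not the Facebook SDK.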

Related

Is it safe to store public keys/policies in a Node.js constant in Lambda

I am writing an AWS Lambda authorizer in Node.js. We are required to call the Azure AD API to fetch the public keys/security policies to validate the incoming access token.
However, to optimize performance, I decided to store the public keys/security policies in Node.js as a constant (these stay in memory for as long as the Lambda container is alive, or until the TTL of the keys expires).
Question: Is this safe from a security perspective? I want to avoid "caching" it in DynamoDB, as calls to DynamoDB would also incur additional milliseconds. Ours is a very high-traffic application, and we would like to save every millisecond possible for optimal performance. Also, any best practice is highly appreciated.
Typically, you should not hard-code things like that in your code. Even though it is not a security problem, it makes maintenance harder.
For example: when the key is "rotated" or the policy changes and you have it hard-coded in your Lambda, you need to update your code and do another deployment. This often goes wrong because the developer forgets about it, and suddenly your authorizer no longer works. If the Lambda loads the information from an external service like S3, SSM or Azure AD directly, you don't need another deployment; in theory, it should sort itself out, depending on which service you use and how you manage your keys.
I think the best way is to load the key from an external service during the initialisation phase of the Lambda, that is, when it is "booted" for the first time, and then cache that value for the duration of the Lambda's lifetime (a few minutes to a few hours).
You could, for example, load the public keys and policies directly from Azure, from S3, or from SSM Parameter Store.
The following code uses the AWS SDK for JavaScript v3, which is not bundled with the Lambda runtime. You can use v2 of the SDK as well.
const { SSMClient, GetParameterCommand } = require("@aws-sdk/client-ssm");

// This only happens once, when the Lambda is started for the first time:
const init = async () => {
  const config = {};
  // use whatever 'paramName' you defined when you created the SSM parameter
  // (declared outside the try block so the catch block can reference it)
  const paramName = "/azure/publickey";
  try {
    const command = new GetParameterCommand({ Name: paramName });
    const ssm = new SSMClient();
    const data = await ssm.send(command);
    config["publickey"] = data.Parameter.Value;
  } catch (error) {
    throw new Error("unable to read SSM parameter '" + paramName + "'.");
  }
  return config;
};

const initPromise = init();

exports.handler = async (event) => {
  const config = await initPromise;
  console.log("My public key '%s'", config.publickey);
  return "Hello World";
};
The most important part of this code is the init function, which runs only once, creating a "config" that should contain your AWS SDK clients and all the configuration you need in your code. This way, you don't have to fetch the policy for every request that the Lambda processes.
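If you also want to honour the TTL of the keys that the question mentions, one option is to cache the value together with a timestamp and re-fetch once it grows stale. A rough sketch of that idea; getPublicKey stands in for the SSM lookup above, and the one-hour TTL is an assumption, not something from the original answer:

const TTL_MS = 60 * 60 * 1000; // hypothetical one-hour TTL for the cached key

let cached = null; // { value, fetchedAt }

// getPublicKey is any async function that fetches the key (e.g. the SSM call above).
const getCachedPublicKey = async (getPublicKey) => {
  const now = Date.now();
  if (!cached || now - cached.fetchedAt > TTL_MS) {
    cached = { value: await getPublicKey(), fetchedAt: now };
  }
  return cached.value;
};

module.exports = { getCachedPublicKey };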

Invoking a Lambda to message connected websockets

Is it possible to set up a stand-alone WebSocket service with Lambdas that can be invoked from Lambdas in separate services?
I've got an existing system that does things and then attempts to broadcast an update to connected clients by invoking a Lambda in the WebSocket service, like so:
const lambda = new Lambda({
  region: 'us-east-1',
  endpoint: 'https://lambda.us-east-1.amazonaws.com'
});

lambda.invoke({
  FunctionName: `dev-functionName`,
  Payload: JSON.stringify({payload, clientGroup}),
  InvocationType: 'Event'
});
This triggers the correct Lambda, which then:
- gets the connection IDs from a DynamoDB table
- sets domainName to {api-id}.execute-api.us-east-1.amazonaws.com
- attempts to message the connections like so:
const ws = create(`https://${domainName}/dev/@connections/${ConnectionId}`);
// Also tried
//const ws = create(`https://${domainName}/dev`);
//const ws = create(`${domainName}/dev`);
//const ws = create(`${domainName}`);

const params: any = {
  Data: JSON.stringify(payload),
  ConnectionId
};

try {
  return ws.postToConnection(params).promise();
} catch (err) {
  if (err.statusCode == 410) {
    await removeConn(ConnectionId); // Delete connection from Dynamo
  } else {
    throw err;
  }
}
The create function just returns:
return new AWS.ApiGatewayManagementApi({
  apiVersion: '2018-11-29',
  endpoint: domainName
});
CloudWatch logs suggest that all functions are triggering and completing successfully with no errors. It also shows that connections are being retrieved from Dynamo. However the clients are not receiving any messages.
When running the projects locally and using localhost URLs, everything works as expected. What am I doing wrong here?
First, the correct domain for the endpoint is:
const endpoint = `${domainName}/dev`;
Second, to see errors in CloudWatch, you need to await the postToConnection promise.
Third, the external services were calling a REST endpoint in the WebSocket service, which means two entries are added to API Gateway. The REST APIs need the following permissions for the WebSocket API:
Action:
  - "execute-api:Invoke"
  - "execute-api:ManageConnections"

Calling CosmosDB server from Azure Cloud Function

I am working on an Azure Cloud Function (running on Node.js) that should return a collection of documents from my Azure Cosmos DB for MongoDB API account. It all works fine when I build and start the function locally, but it fails when I deploy it to Azure. This is the error: MongoNetworkError: failed to connect to server [++++.mongo.cosmos.azure.com:++++] on first connect ...
I am new to CosmosDB and Azure Cloud Functions, so I am struggling to find the problem. I looked at the Firewall and virtual networks settings in the portal and tried out different variations of the connection string.
As it seems to work locally, I assume it could be a configuration setting in the portal. Can someone help me out?
1. Set up the connection
I used the primary connection string provided by the portal.
import * as mongoClient from 'mongodb';
import { cosmosConnectionStrings } from './credentials';
import { Context } from '@azure/functions';

// The MongoDB Node.js 3.0 driver requires encoding special characters in the Cosmos DB password.
const config = {
  url: cosmosConnectionStrings.primary_connection_string_v1,
  dbName: "****"
};

export async function createConnection(context: Context): Promise<any> {
  let db: mongoClient.Db;
  let connection: any;
  try {
    connection = await mongoClient.connect(config.url, {
      useNewUrlParser: true,
      ssl: true
    });
    context.log('Do we have a connection? ', connection.isConnected());
    if (connection.isConnected()) {
      db = connection.db(config.dbName);
      context.log('Connected to: ', db.databaseName);
    }
  } catch (error) {
    context.log(error);
    context.log('Something went wrong');
  }
  return {
    connection,
    db
  };
}
2. The main function
The main function that executes the query and returns the collection.
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  context.log('Get all projects function processed a request.');
  try {
    const { db, connection } = await createConnection(context);
    if (db) {
      const projects = db.collection('projects');
      const res = await projects.find({});
      const body = await res.toArray();
      context.log('Response projects: ', body);
      connection.close();
      context.res = {
        status: 200,
        body
      };
    } else {
      context.res = {
        status: 400,
        body: 'Could not connect to database'
      };
    }
  } catch (error) {
    context.log(error);
    context.res = {
      status: 400,
      body: 'Internal server error'
    };
  }
};
I had another look at the firewall and private network settings and read the official documentation on configuring an IP firewall. By default, the current IP address of your local machine is added to the IP whitelist. That's why the function worked locally.
Based on the documentation, I tried all the options described below. They all worked for me. However, it still remains unclear why I had to manually perform an action to make it work. I am also not sure which option is best.
- Set Allow access from to All networks: all networks (including the internet) can access the database (obviously not advised).
- Add the inbound and outbound IP addresses of the cloud function project to the whitelist: this could be challenging if the IP addresses change over time, which will probably happen if you are on the consumption plan.
- Check the Accept connections from within public Azure datacenters option in the Exceptions section.
If you access your Azure Cosmos DB account from services that don’t provide a static IP (for example, Azure Stream Analytics and Azure Functions), you can still use the IP firewall to limit access. You can enable access from other sources within Azure by selecting the Accept connections from within Azure datacenters option.
This option configures the firewall to allow all requests from Azure, including requests from the subscriptions of other customers deployed in Azure. The list of IPs allowed by this option is wide, so it limits the effectiveness of a firewall policy. Use this option only if your requests don’t originate from static IPs or subnets in virtual networks. Choosing this option automatically allows access from the Azure portal because the Azure portal is deployed in Azure.

Cognito post authentication custom response

Is it possible to return custom data in an AWS Cognito post-authentication Lambda trigger?
I have tried setting properties in event.response, but these are not propagated back to the client.
For example:
module.exports.post_auth_trigger = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  event.response.some_custom_property = 'this is custom';
  callback(null, event);
};
Using this code does not return some_custom_property to the client after authentication. How can this be achieved?
I don't think it's possible. Check the response shape of RespondToAuthChallenge: there is no element that could be used for passing custom data to the client.
You will have to fire a separate request manually after successful authentication.
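For illustration, a client-side sketch of that workaround: authenticate first, then call your own endpoint for the extra data. Only InitiateAuth is a real Cognito API here; the client ID, the https://api.example.com/me/extras endpoint and its response shape are hypothetical:

const AWS = require('aws-sdk');

const cognito = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });

const signInAndFetchExtras = async (username, password) => {
  // Step 1: normal Cognito authentication. The post-authentication trigger
  // runs here, but nothing it sets on event.response reaches this result.
  const auth = await cognito.initiateAuth({
    AuthFlow: 'USER_PASSWORD_AUTH',
    ClientId: 'your-app-client-id', // placeholder
    AuthParameters: { USERNAME: username, PASSWORD: password }
  }).promise();

  const idToken = auth.AuthenticationResult.IdToken;

  // Step 2: fetch the custom data from your own API, authorized with the ID token.
  const res = await fetch('https://api.example.com/me/extras', {
    headers: { Authorization: idToken }
  });
  return res.json();
};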

Using AWS SDK from Lambda running in VPC

I have a simple Lambda function, as follows:
var AWS = require("aws-sdk");

exports.handler = (event, context, callback) => {
  var ec2 = new AWS.EC2({region: 'us-east-1'});
  return ec2.describeRegions({}).promise()
    .then(function(regionResponse) {
      console.log(regionResponse.Regions);
      callback(null, regionResponse.Regions);
    })
    .catch(function(err) {
      console.log({"error": err});
      callback(err, null);
    });
};
I can run this function outside of a VPC successfully.
I create a VPC using the VPC wizard, with a single public subnet and an Internet gateway. I place the function in the VPC and give it an execution role with Lambda VPC execution rights.
It now fails with a timeout, which I have set to 10 seconds (normal execution is about 1 second).
What am I missing from my config that prevents the function from accessing the AWS SDK inside the VPC?
As an aside, mixing return with the callback style is confusing: for a non-async handler, the returned promise is ignored, and completion is signalled only through the callback inside .then()/.catch().
The actual cause of the timeout is networking: a Lambda function attached to a VPC never gets a public IP, so a public subnet with an Internet gateway is not enough. The function's subnet needs a route to a NAT gateway (or you can use VPC endpoints for the AWS APIs); otherwise it cannot connect to the Internet, and thus to the AWS APIs.
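With that in mind, an async version of the handler avoids the callback entirely. A minimal sketch, same region as the question:

const AWS = require("aws-sdk");

exports.handler = async () => {
  const ec2 = new AWS.EC2({ region: "us-east-1" });
  // With an async handler, the resolved value becomes the Lambda's response
  // and a rejected promise becomes the error; no callback is needed.
  const { Regions } = await ec2.describeRegions({}).promise();
  console.log(Regions);
  return Regions;
};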
