Is it possible to set up a stand-alone WebSocket service whose lambdas can be invoked from lambdas in separate services?
I've got an existing system that does some work and then attempts to broadcast an update to connected clients by invoking a lambda in the WebSocket service, like so:
const lambda = new Lambda({
  region: 'us-east-1',
  endpoint: 'https://lambda.us-east-1.amazonaws.com'
});

lambda.invoke({
  FunctionName: 'dev-functionName',
  Payload: JSON.stringify({ payload, clientGroup }),
  InvocationType: 'Event'
});
This triggers the correct lambda, which then:
- gets the connection IDs from a DynamoDB table
- sets domainName to {api-id}.execute-api.us-east-1.amazonaws.com
- attempts to message the connections like so:
const ws = create(`https://${domainName}/dev/@connections/${ConnectionId}`);
// Also tried:
// const ws = create(`https://${domainName}/dev`);
// const ws = create(`${domainName}/dev`);
// const ws = create(`${domainName}`);
const params: any = {
  Data: JSON.stringify(payload),
  ConnectionId
};

try {
  return ws.postToConnection(params).promise();
} catch (err) {
  if (err.statusCode == 410) {
    await removeConn(ConnectionId); // Delete connection from Dynamo
  } else {
    throw err;
  }
}
The create function just returns:
return new AWS.ApiGatewayManagementApi({
  apiVersion: '2018-11-29',
  endpoint: domainName
});
CloudWatch logs suggest that all functions are triggering and completing successfully with no errors, and that connections are being retrieved from Dynamo. However, the clients are not receiving any messages.
When running the projects locally with localhost URLs, everything works as expected. What am I doing wrong here?
First, the endpoint must include the stage:
const endpoint = `${domainName}/dev`;
Second, to see errors in CloudWatch, you need to await the postToConnection promise.
Third, the external services were calling a REST endpoint in the WebSocket service, which means there are two entries in API Gateway. The REST APIs need the following permissions for the WebSocket API:
Action:
- "execute-api:Invoke"
- "execute-api:ManageConnections"
How to use googleapis google.auth.GoogleAuth() for a Google API service account in a Twilio serverless function, since there is no FS path to provide as a keyFile value?
Based on the examples here ( https://www.section.io/engineering-education/google-sheets-api-in-nodejs/ ), in the Google API Node.js client documentation, and in the Twilio example ( Receive an inbound SMS ), my code looks like...
const {google} = require('googleapis')
const fs = require('fs')

exports.handler = async function(context, event, callback) {
  const twiml = new Twilio.twiml.MessagingResponse()

  // console.log(Runtime.getAssets()["/gservicecreds.private.json"].path)
  console.log('Opening google API creds for examination...')
  const creds = JSON.parse(
    fs.readFileSync(Runtime.getAssets()["/gservicecreds.private.json"].path, "utf8")
  )
  console.log(creds)

  // connect to google sheet
  console.log("Getting googleapis connection...")
  const auth = new google.auth.GoogleAuth({
    keyFile: Runtime.getAssets()["/gservicecreds.private.json"].path,
    scopes: "https://www.googleapis.com/auth/spreadsheets",
  })
  const authClientObj = await auth.getClient()
  const sheets = google.sheets({version: 'v4', auth: authClientObj})
  const spreadsheetId = "myspreadsheetID"

  console.log("Processing message...")
  if (String(event.Body).trim().toLowerCase() == 'KEYWORD') {
    console.log('DO SOMETHING...')
    try {
      // see https://developers.google.com/sheets/api/guides/values#reading_a_single_range
      let response = await sheets.spreadsheets.values.get({
        spreadsheetId: spreadsheetId,
        range: "'My Sheet'!B2:B1000"
      })
      console.log("Got data...")
      console.log(response)
      console.log(response.result)
      console.log(response.result.values)
    } catch (error) {
      console.log('An error occurred...')
      console.log(error)
      console.log(error.response)
      console.log(error.errors)
    }
  }

  // Return the TwiML as the second argument to `callback`
  // This will render the response as XML in reply to the webhook request
  return callback(null, twiml)
}
...where the Asset referenced in the code is the JSON generated when creating a key pair for a Google APIs service account, manually copy/pasted as an Asset in the serverless function editor web UI.
I see error messages like...
An error occurred...
{ response: [Object], config: [Object], code: 403, errors: [Object] }
{ config: [Object], data: [Object], headers: [Object], status: 403, statusText: 'Forbidden', request: [Object] }
[ { message: 'The caller does not have permission', domain: 'global', reason: 'forbidden' } ]
I am assuming this is because the keyFile is not being read in correctly at the auth declaration. I don't know how to do it, since all the examples I've seen assume a local file path as the value, and I don't know how a serverless function can access that file (my attempt in the code block is really just a shot in the dark).
FYI, I can see that the service account has an Editor role in the Google APIs console, though I notice the "Resources this service account can access" panel shows the error
"Could not find an ancestor of the selected project where you have access to view a policy report on at least one ancestor"
(I really have no idea what that means or implies; I'm very new to this).
Can anyone help with what could be going wrong here?
(BTW, if there is something really dumb/obvious that I am missing (e.g. a typo), just let me know in a comment so I can delete this post, as it would then not serve any future value to others.)
{ message: 'The caller does not have permission', domain: 'global', reason: 'forbidden' }
This means that the currently authenticated user (the service account) does not have permission to do what you are asking it to do.
You are trying to access a spreadsheet.
Is this sheet on the service account's own Google Drive? If not, did you share the sheet with the service account?
The service account is just like any other user: if it doesn't have access to something, it can't access it. Go to the Google Drive web application and share the sheet with the service account as you would share it with any other user, using the service account's email address (the client_email value in the key file, the one with an @ in it).
Delegate to a user on your domain
If you set up domain-wide delegation properly, you can have the service account act as a user on your domain who does have access to the file. In the Python client, for example:
delegated_credentials = credentials.with_subject('userWithAccess@YourDomain.org')
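In the Node client from the question, the analogous idea is to pass a `subject` when building a JWT auth client. A hedged sketch of the options (the helper name and the user address are illustrative; domain-wide delegation must be enabled for the service account):

```javascript
// Build options for google.auth.JWT that impersonate a domain user who
// has access to the sheet. Usage (assuming googleapis as in the question):
//   const auth = new google.auth.JWT(delegatedJwtOptions(keyPath, 'user@YourDomain.org'));
//   const sheets = google.sheets({ version: 'v4', auth });
function delegatedJwtOptions(keyFilePath, userEmail) {
  return {
    keyFile: keyFilePath,
    scopes: ['https://www.googleapis.com/auth/spreadsheets'],
    subject: userEmail // the delegated domain user
  };
}
```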
I am working on an Azure Cloud Function (runs on node js) that should return a collection of documents from my Azure Cosmos DB for MongoDB API account. It all works fine when I build and start the function locally, but fails when I deploy it to Azure. This is the error: MongoNetworkError: failed to connect to server [++++.mongo.cosmos.azure.com:++++] on first connect ...
I am new to CosmosDB and Azure Cloud Functions, so I am struggling to find the problem. I looked at the Firewall and virtual networks settings in the portal and tried out different variations of the connection string.
As it seems to work locally, I assume it could be a configuration setting in the portal. Can someone help me out?
1. Set up the connection
I used the primary connection string provided by the portal.
import * as mongoClient from 'mongodb';
import { cosmosConnectionStrings } from './credentials';
import { Context } from '@azure/functions';

// The MongoDB Node.js 3.0 driver requires encoding special characters in the Cosmos DB password.
const config = {
  url: cosmosConnectionStrings.primary_connection_string_v1,
  dbName: "****"
};

export async function createConnection(context: Context): Promise<any> {
  let db: mongoClient.Db;
  let connection: any;
  try {
    connection = await mongoClient.connect(config.url, {
      useNewUrlParser: true,
      ssl: true
    });
    context.log('Do we have a connection? ', connection.isConnected());
    if (connection.isConnected()) {
      db = connection.db(config.dbName);
      context.log('Connected to: ', db.databaseName);
    }
  } catch (error) {
    context.log(error);
    context.log('Something went wrong');
  }
  return {
    connection,
    db
  };
}
2. The main function
The main function that executes the query and returns the collection.
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  context.log('Get all projects function processed a request.');
  try {
    const { db, connection } = await createConnection(context);
    if (db) {
      const projects = db.collection('projects');
      const res = await projects.find({});
      const body = await res.toArray();
      context.log('Response projects: ', body);
      connection.close();
      context.res = {
        status: 200,
        body
      };
    } else {
      context.res = {
        status: 400,
        body: 'Could not connect to database'
      };
    }
  } catch (error) {
    context.log(error);
    context.res = {
      status: 400,
      body: 'Internal server error'
    };
  }
};
I had another look at the firewall and virtual network settings and read the official documentation on configuring an IP firewall. By default, the current IP address of your local machine is added to the IP whitelist; that's why the function worked locally.
Based on the documentation I tried all the options described below, and they all worked for me. However, it remains unclear why I had to perform a manual action to make it work, and I am not sure which option is best.
- Set "Allow access from" to All networks: all networks (including the internet) can access the database (obviously not advised).
- Add the inbound and outbound IP addresses of the cloud function project to the whitelist. This could be challenging if the IP addresses change over time, which will probably happen if you are on the consumption plan.
- Check the "Accept connections from within public Azure datacenters" option in the Exceptions section:

If you access your Azure Cosmos DB account from services that don't provide a static IP (for example, Azure Stream Analytics and Azure Functions), you can still use the IP firewall to limit access. You can enable access from other sources within Azure by selecting the "Accept connections from within Azure datacenters" option.
This option configures the firewall to allow all requests from Azure, including requests from the subscriptions of other customers deployed in Azure. The list of IPs allowed by this option is wide, so it limits the effectiveness of a firewall policy. Use this option only if your requests don't originate from static IPs or subnets in virtual networks. Choosing this option automatically allows access from the Azure portal, because the Azure portal is deployed in Azure.
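For reference, the last option can also be set from the command line; a hedged sketch with the Azure CLI is below (account and resource-group names are placeholders, and adding the special address 0.0.0.0 to the allow list is what is documented to enable the "connections from within Azure datacenters" behaviour):

```shell
# Allow requests originating inside Azure datacenters (placeholder names).
az cosmosdb update \
  --name my-cosmos-account \
  --resource-group my-resource-group \
  --ip-range-filter "0.0.0.0"
```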
I am setting up an AWS Lambda to call the Facebook SDK internally, but unfortunately I am not able to get any response from the Facebook SDK.
Please find the code below:
const listCampaign = async (event, context) => {
  context.callbackWaitsForEmptyEventLoop = false;
  await validateAuthToken(event.headers.Authorization, event.headers.accountId);
  console.log("account are", account);
  return await account.getCampaigns([
    Campaign.Fields.account_id,
    Campaign.Fields.adlabels,
    Campaign.Fields.bid_strategy,
    Campaign.Fields.boosted_object_id,
    Campaign.Fields.brand_lift_studies,
    Campaign.Fields.budget_rebalance_flag,
    Campaign.Fields.budget_remaining,
    Campaign.Fields.buying_type,
    Campaign.Fields.can_create_brand_lift_study,
  ])
  .then((campaign) => {
    console.log("first check 3", campaign); // No response from the Facebook SDK; after 30 sec the endpoint times out
  });
};
The most probable cause is that the security group is not configured to allow outbound connections. If that is not the case and the Lambda function is deployed in a VPC, check that the VPC subnets have routes through a NAT gateway and an Internet gateway, so the function can reach the Facebook API.
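Separately, note that the question's `.then` callback only logs and returns `undefined`, so even a successful call would produce an empty response. A sketch of returning the result instead (the `account` object and field list follow the question; passing them in as parameters here is just to keep the sketch self-contained):

```javascript
// Return the campaigns instead of swallowing them in a logging-only .then().
const listCampaign = async (event, context, account, fields) => {
  context.callbackWaitsForEmptyEventLoop = false;
  const campaigns = await account.getCampaigns(fields);
  console.log('campaigns:', campaigns);
  return campaigns; // previously the .then callback returned undefined
};
```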
I am using aws-sdk to publish a message to a topic; below is the code:
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';
AWS.config.credentials = {
  accessKeyId: 'myaccesskeyid',
  secretAccessKey: 'mysecretaccesskey'
};

function LEDOnIntent() {
  this.iotdata = new AWS.IotData({
    endpoint: 'XXXXXXXXX.iot.us-east-1.amazonaws.com'
  });
}

LEDOnIntent.prototype.publishMessage = function() {
  console.log('>publishMessage');
  var params = {
    topic: 'test_topic', /* required */
    payload: new Buffer('{action : "LED on"}') || 'STRING_VALUE',
    qos: 1
  };
  this.iotdata.publish(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else {
      console.log("Message published : " + data); // successful response
    }
  });
};
It works fine in local unit testing, but when I deploy this code to AWS Lambda I get very uneven behaviour. For the first few requests it does not publish the message; then it works fine when I test it continuously. When I test again after a break, it stops working for the first few requests.
Behind the scenes, Lambda uses a container model: it creates a container when one is needed and destroys it when it is no longer required.
The reason you see a delay in the initial request is that it takes time to set up a container and do the necessary bootstrapping, which adds latency to the invocation. You typically see this latency the first time a Lambda function is invoked, or after it has been updated, because AWS Lambda tries to reuse the container for subsequent invocations of the function.
AWS Lambda maintains the container for some time in anticipation of another invocation. In effect, the service freezes the container after a Lambda function completes, and thaws it for reuse if AWS Lambda chooses to reuse the container when the function is invoked again.
Please read the official documentation here
I have used the following piece of code to handle AWS SNS subscription and notification messages. The configured HTTP endpoint receives the confirmation message, but I am unable to confirm it through code. Manual confirmation does work, by visiting the "subscription URL" from the logged console message.
I have configured the aws and sns parts as follows:
var aws = require('aws-sdk');
aws.config.loadFromPath(__dirname + '/awsConfig.json');
var sns = new aws.SNS();
This is the function I am using for handling HTTP endpoint messages:
function handleIncomingMessage(msgType, msgData) {
  if (msgType === 'SubscriptionConfirmation') {
    // confirm the subscription.
    console.log("Subscription Confirmation Message--->" + msgData);
    sns.confirmSubscription({
      TopicArn: msgData.TopicArn
    }, onAwsResponse);
  } else if (msgType === 'Notification') {
    console.log("Notification has arrived");
  } else {
    console.log('Unexpected message type ' + msgType);
  }
}
Here sns.confirmSubscription isn't working. Is there any solution or workaround for this?
You also need to pass a Token field in the confirmSubscription parameters, as described here.
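A hedged sketch of passing the token through (`sns` is the configured AWS.SNS client from the question; `confirmFromRawBody` is a hypothetical helper, and `Type`, `TopicArn`, and `Token` are the standard fields of the SNS confirmation message):

```javascript
// Parse the raw POST body from SNS and confirm the subscription using the
// Token delivered in the SubscriptionConfirmation message.
function confirmFromRawBody(sns, rawBody) {
  const msg = JSON.parse(rawBody);
  if (msg.Type === 'SubscriptionConfirmation') {
    return sns.confirmSubscription({
      TopicArn: msg.TopicArn,
      Token: msg.Token // required alongside TopicArn
    }).promise();
  }
  return Promise.resolve(null);
}
```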