I have developed a Microsoft bot that works with the Bot Framework Emulator. I want to host it on AWS Lambda using the following code, but when I execute it on Lambda it results in errors.
const builder = require('botbuilder');
const lambda = require('botbuilder-aws-lambda');
var connector = new builder.ChatConnector({
  appId: 'My App ID',
  appPassword: 'PassWord ID'
});
exports.handler = lambda(connector);
var bot = new builder.UniversalBot(connector, function (session) {
  session.send("You said: %s", session.message.text);
});
The following are the errors:
"errorMessage": "RequestId: 2d91dffa-84d3-11e7-870e-0151204c40e6 Process exited before completing request"
The detailed log file shows the following:
2017-08-19T11:40:20.889Z 2d91dffa-84d3-11e7-870e-0151204c40e6 SyntaxError: Unexpected token u in JSON at position 0
at Object.parse (native)
at handler (/var/task/node_modules/botbuilder-aws-lambda/lib/index.js:5:24)
"errorMessage": "RequestId: 2d91dffa-84d3-11e7-870e-0151204c40e6 Process exited before completing request"
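For what it's worth, that particular SyntaxError is exactly what JSON.parse produces when it is handed undefined: the argument is coerced to the string "undefined", which fails at the very first character ('u'). That suggests the handler received an event with no body to parse, for example a bare console test invocation rather than a request routed through API Gateway (the cause is my inference, not confirmed by the logs). A quick demonstration:

```javascript
// JSON.parse(undefined) coerces its argument to the string "undefined",
// which fails to parse at position 0 (the 'u').
let caught = null;
try {
  JSON.parse(undefined);
} catch (e) {
  caught = e;
}
console.log(caught instanceof SyntaxError); // true
```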
I'm not sure what your issue is, but I thought I would share some of my learnings from working with Twilio and AWS Lambda. This may be what you already know, but:
API Gateway - in order to expose an HTTP endpoint via a Lambda, you need an API Gateway configured to point to the AWS Lambda.
Lambda Request - the only payload a Lambda will receive is JSON. Twilio only produces a form-URL-encoded payload, so you have to configure API Gateway to transform the form-URL-encoded body to JSON.
Lambda Reply - JSON in, JSON out. To return a reply Twilio understands, you have to transform the message from JSON to TwiML.
AWS Lambda, IMO, doesn't hold a candle to Azure Functions. Do yourself a favour and try Azure Functions - it does not have any of the restrictions of AWS Lambda, and it does not have to go through a translation layer like API Gateway. In addition, and likely the best feature, you can run it locally without having to create your own framework. Why AWS would not prioritize a local development environment is beyond me - other than perhaps being first to market.
HTH
I'm trying to send emails through Mailchimp Transactional/Mandrill using Node and Serverless Framework.
I'm able to send emails fine locally (using serverless-offline), however when I deploy the function to our staging environment, it is giving a timeout error when trying to connect to the API.
My code is:
const mailchimp = require('@mailchimp/mailchimp_transactional')(MAILCHIMP_TRANSACTIONAL_KEY);
async function sendEmail(addressee, subject, body) {
  const message = {
    from_email: 'ouremail@example.com',
    subject,
    text: body,
    to: [
      {
        email: addressee,
        type: 'to',
      },
    ],
  };
  const response = await mailchimp.messages.send({ message });
  return response;
}
My Lambda is set at a 60 second timeout, and the error I'm getting back from Mailchimp is:
Response from Mailchimp: Error: timeout of 30000ms exceeded
It seems to me that either Mailchimp is somehow blocking traffic from the Lambda IP, or AWS is not letting traffic out to connect to the mail API.
I've tried switching to fetch calls against the API directly instead of using the npm module, and I still get back a similar error (although, weirdly, in HTML format):
Mailchimp send email failed: "<html><body><h1>408 Request Time-out</h1>\nYour browser didn't send a complete request in time.\n</body></html>\n\n"
Are there any AWS permissions I've missed, or Mailchimp Transactional/Mandrill configs I've overlooked?
I was having the identical issue with Mailchimp's Marketing API and solved it by routing traffic through a NAT Gateway. Doing this allows Lambda functions that are inside a VPC to reach external services.
The short version of how I was able to do this:
Create a new subnet within your VPC
Create a new route table for the new subnet you just created and make sure that the new subnet is utilizing this new route table
Create a new NAT Gateway
Have the new route table point all outbound traffic (0.0.0.0/0) to that NAT Gateway
Have the subnet associated with the NAT Gateway point all outbound traffic to an Internet Gateway (this is generally already created when AWS populates the default VPC)
You can find out more at this link: https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
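Since the question uses the Serverless Framework, the VPC attachment itself is declared per function or provider-wide. A hypothetical serverless.yml fragment (all IDs are placeholders; the listed subnets would be the private ones whose route table points at the NAT Gateway):

```yaml
functions:
  sendEmail:
    handler: handler.sendEmail
    timeout: 60
    vpc:
      securityGroupIds:
        - sg-0123456789abcdef0        # placeholder security group
      subnetIds:
        - subnet-0123456789abcdef0    # private subnet routed via the NAT Gateway
```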
When I create a cloud function to process a charge on a user’s card (by writing a stripe token to firebase and using a cloud function to charge), how do I pass errors (like a declined card due to insufficient funds) to the client. If it’s important, I’m using the firebase web sdk to send the tokens.
Write the errors to a Firebase database so that you can read them from the database and show them where you need to.
I decided to use a Firebase HTTP cloud function and just send the token to the link firebase sets for the function. Like so,
exports.addSourceToCustomer = functions.https.onRequest((req, res) => {
  const token = req.body.token; // use the Stripe token however you like here
  // when an error occurs, use res.status(errorCode).send(errorMessage);
  // which sends the error back to the client that made the request
});
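To show how a declined card would flow back through that pattern, here is a sketch with the Stripe call mocked out (the error shape, messages, and token values are illustrative, not Stripe's exact API):

```javascript
// Stand-in for stripe.charges.create: throws the way a declined card would.
async function createCharge(token) {
  if (token === 'tok_declined') {
    const err = new Error('Your card has insufficient funds.');
    err.type = 'StripeCardError'; // illustrative; real Stripe errors carry a type
    throw err;
  }
  return { id: 'ch_123', status: 'succeeded' };
}

// Shape of the onRequest handler: map success to 200, card errors to 402.
async function handleCharge(body) {
  try {
    const charge = await createCharge(body.token);
    return { status: 200, body: { id: charge.id } };      // res.status(200).send(...)
  } catch (err) {
    return { status: 402, body: { error: err.message } }; // res.status(402).send(...)
  }
}
```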
I am trying to set up an Azure Function with an HTTP trigger and make a subsequent call to the Azure Bot Framework [Bot Service on Azure].
The following is the error I encountered while setting it up:
{
  "id": "ed81eca8-d536-4534-a97d-66e6a7ca7ad2",
  "requestId": "ba346904-702b-465c-b7e3-b48afe29ab33",
  "statusCode": 500,
  "errorCode": 0,
  "messsage": "Exception while executing function: Functions.adapter -> Unable to resolve value for property 'BotAttribute.SecretSetting'."
}
Environment:
Node.js development
Using the directline-js git repo at BotFramework-DirectLineJS
Related questions:
Azure function doesn't notify my bot (Bot Framework)
Azure Function for Bot Framework C#
The secret was defined, but I was returning the value the wrong way in the Azure Function, i.e. context.done(done, data): I was passing the incoming data straight through to the output, which did not hold the secret, rather than a modified version of the data.
I have created an API endpoint for a Lambda function - https://XXXXXXXXX.execute-api.us-east-1.amazonaws.com/XXXX/XXXXXXXXXXXX/ - which is a GET method.
While calling that endpoint from Postman, it gives me:
{
  "message": "'XXXXXXXXX3LPDGPBF33Q:XXXXXXXXXXBLh219REWwTsNMyyyfbucW8MuM7' not a valid key=value pair (missing equal-sign) in Authorization header: 'AWS XXXXXXXXX3LPDGPBF33Q:XXXXXXXXXXBLh219REWwTsNMyyyfbucW8MuM7'."
}
This is a screenshot of the Amazon Lambda Upload Site: http://i.stack.imgur.com/mwJ3w.png
I have the Access Key ID & Secret Access Key for an IAM user. I used them both, but no luck. Can anyone suggest a tweak for this?
If you're using the latest version of Postman, you can generate the SigV4 signature automatically. The region should correspond to your API region (i.e. "us-east-1") and the service name should be "execute-api".
This is not a solution but it has helped me more than once:
Double-check that you are actually hitting an existing endpoint! Especially if you're working with AWS. AWS will return this error if you don't have the correct handler set up in your Lambda or if your API Gateway is not configured to serve this resource/verb/etc.
I have a Pull Task Queue running on App Engine. I am trying to access the queue externally from the NodeJS REST client: https://github.com/google/google-api-nodejs-client
I'm passing my Server API key in with the request:
var googleapis = require('googleapis'),
    API_KEY = '...';

googleapis
  .discover('taskqueue', 'v1beta2')
  .execute(function (err, client) {
    var req = client.taskqueue.tasks.insert({
      project: 'my-project',
      taskqueue: 'pull-queue',
      key: API_KEY
    });
    req.execute(function (err, response) {
      ...
    });
  });
But I am getting back a 401 "Login Required" message. What am I missing?
If I need to use OAuth, how can I get an access token if my client is a NodeJS server instead of user/browser that can process the OAuth redirect URL?
The best way to do this is to take advantage of Service Accounts in GCE. This is a synthetic user account that is usable by anyone in the GCE project. Getting all of the auth lined up can be a little tricky; here is an example of how to do this in Python.
The general outline of what you need to do:
Start the GCE instance with the task queue OAuth scope.
Add the GCE service account to the task queue ACL in queue.yaml.
Acquire an access token. It looks like you can use the computeclient.js credential object to automate the HTTP call to http://metadata/computeMetadata/v1beta1/instance/service-accounts/default/token
Use this token in any API calls to the task queue API.
I'm not a Node expert, but searching around I found an example of how to connect to the Datastore API from Node using service accounts on GCE. It should be straightforward to adapt this to the Task Queue API.