I am trying to save a document to a collection in CosmosDB using output binding.
The DB was created with MongoDB API access.
I noticed a few issues:
Azure function in JavaScript with output binding doesn't seem to work. Here is the code:
module.exports = function (context, req) {
    if (req.body) {
        context.bindings.outputObject = JSON.stringify({
            name: "Mike P"
        }); // tried outputObjectOut as well, no difference
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
    context.done();
};
The function.json for the JavaScript code is the same as given further below.
I wrote equivalent code in C# and noticed that the collection was destroyed, apparently by the binding when the document was saved.
public static HttpResponseMessage Run(HttpRequestMessage req,
    out object outputObject, TraceWriter log)
{
    outputObject = new {
        name = "Mike P"
    };
    log.Info("test");
    return req.CreateResponse(HttpStatusCode.OK);
}
Here is the function.json:
{
    "type": "documentDB",
    "name": "outputObject",
    "databaseName": "newexp",
    "collectionName": "Test",
    "createIfNotExists": true,
    "connection": "newexp_DOCUMENTDB",
    "direction": "out"
}
The collection can no longer be queried in the Azure portal, and querying through a MongoDB client such as Studio 3T fails as well.
I would expect this code to work and a document to show up in the collection. Instead, the collection becomes unusable. Is this a bug, or am I doing something wrong? Any help or pointers are appreciated.
The Azure Functions Cosmos DB binding is written against the DocumentDB (SQL) API for Cosmos DB, so it doesn't work out of the box with the Mongo API. I believe it may work if you add a required _id property to the object; without an _id field, the Mongo API call will fail.
I have the following addEventToCalendar cloud function:
const { google } = require("googleapis");
const calendar = google.calendar("v3");
const functions = require("firebase-functions"); // needed: `functions` is used below
const googleCredentials = require("./credentials.json");
const OAuth2 = google.auth.OAuth2;
const ERROR_RESPONSE = {
    status: "500",
    message: "There was an error adding an event to your Google calendar",
};
const TIME_ZONE = "EST";
const addEvent = (event, auth) => {
    return new Promise(function (resolve, reject) {
        calendar.events.insert(
            {
                auth: auth,
                calendarId: "primary",
                resource: {
                    summary: event.eventName,
                    description: event.description,
                    start: {
                        dateTime: event.startTime,
                        timeZone: TIME_ZONE,
                    },
                    end: {
                        dateTime: event.endTime,
                        timeZone: TIME_ZONE,
                    },
                },
            },
            (err, res) => {
                if (err) {
                    console.log("Rejecting because of error");
                    reject(err);
                    return; // without this, resolve() would also run on error
                }
                console.log("Request successful");
                resolve(res.data);
            }
        );
    });
};
exports.addEventToCalendar = functions.https.onRequest((request, response) => {
    const eventData = {
        eventName: request.body.eventName,
        description: request.body.description,
        startTime: request.body.startTime,
        endTime: request.body.endTime,
    };
    console.log("Event Data: ", eventData);
    const oAuth2Client = new OAuth2(
        googleCredentials.web.client_id,
        googleCredentials.web.client_secret,
        googleCredentials.web.redirect_uris[0]
    );
    oAuth2Client.setCredentials({
        refresh_token: googleCredentials.refresh_token,
    });
    addEvent(eventData, oAuth2Client)
        .then((data) => {
            response.status(200).send(data);
            return;
        })
        .catch((err) => {
            console.error("Error adding event: " + err.message);
            response.status(500).send(ERROR_RESPONSE);
            return;
        });
});
The URL looks like this: http://localhost:5001/prototype-dev-fadd5/us-central1/addEventToCalendar
How do I call it while passing the following data?
{
    "eventName": "Firebase Event",
    "description": "This is a sample description",
    "startTime": "2018-12-01T10:00:00",
    "endTime": "2018-12-01T13:00:00"
}
By "call", I mean two things:
Call from a browser just to test it.
Call it from another firebase cloud function which runs on a trigger.
To call it from a browser, you can use the Fetch API:
const data = {
    eventName: 'Firebase Event',
    description: 'This is a sample description',
    startTime: '2018-12-01T10:00:00',
    endTime: '2018-12-01T13:00:00'
};
fetch('http://localhost:5001/prototype-dev-fadd5/us-central1/addEventToCalendar', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json'
    },
    body: JSON.stringify(data)
}).then(response => {
    // ...
}).catch(error => {
    // ...
});
To call it from another Cloud Function, I'd suggest exporting the addEvent function so you can import and use it elsewhere.
I'd also move the creation of the oAuth2Client object over to the addEvent() function so that you don't have to create an oAuth client before calling the addEvent() function.
If you can't re-use the addEvent() function because the two Cloud Functions live in separate projects and you can't share code between them, I'd call the function through XHR using a tool like Axios or node-fetch.
I know the tutorial you are using.
There are two good ways to test it:
Google Cloud Function Console, Testing tab
Postman
And I can think of three good ways to actually use it:
fetch from the frontend
callable Firebase Functions from the frontend
export/import to use it from the backend
Testing - Google Cloud Functions Console.
https://console.cloud.google.com/functions/
Testing - Postman
To test you can also use Postman.
Create a POST request with the URL of your function.
Then set the body to raw, with the type set to JSON. Once that is all set up, hit Send and the response will show up if you did everything correctly.
Using - fetch
The Fetch API provides a global fetch() method that provides an easy, logical way to fetch resources asynchronously across the network.
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
And luckily, Postman will write the fetch code for you.
Click on the Code tab, and select JavaScript - Fetch from the dropdown.
Using - Callable Functions
Callable Functions are functions you can call directly from your Javascript on the front end. They are explained here. https://firebase.google.com/docs/functions/callable
The biggest difference is that instead of using
functions.https.onRequest
in your function declaration, you use:
functions.https.onCall
Using - export to use in Node.js on your server
If you want to call a function directly on your Node.js server (i.e. in your Firebase Functions code) and the function is in the same file, you call it like any other JavaScript function.
function normalFunction() {
    // do work
    return data;
}
exports.addEventToCalendar = functions.https.onRequest((request, response) => {
    let data = normalFunction();
    response.send(data);
});
You can put whatever you want in normalFunction and call it wherever you want. Just make sure to use a promise, or await, if it is doing something asynchronous such as talking to a server.
If you want to use normalFunction from a different file, you would need to export it: https://developer.mozilla.org/en-US/docs/web/javascript/reference/statements/export
Firebase has a good doc on how you can organize multiple files here https://firebase.google.com/docs/functions/organize-functions
I'm creating an API in Azure Functions using TypeScript, with multiple endpoints connecting to the same Azure SQL Server. Each endpoint was set up using the Azure Functions extension for VS Code, with the HttpTrigger TypeScript template. Each endpoint will eventually make different calls to the database, collecting from, processing and storing data to different tables.
There don't seem to be any default bindings for Azure SQL (only Storage and Cosmos), and while tedious is used in some Microsoft documentation, it tends not to cover Azure Functions, which run asynchronously. What's more, other similar StackOverflow questions tend to be for standard JavaScript, using module.exports = async function (context) syntax rather than the const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> syntax used by the TypeScript HttpTrigger templates.
Here's what I've got so far in one of these endpoints, with sample code from the tedious documentation in the default Azure Functions HttpTrigger:
var Connection = require('tedious').Connection;
var config = {
    server: process.env.AZURE_DB_SERVER,
    options: {},
    authentication: {
        type: "default",
        options: {
            userName: process.env.AZURE_DB_USER,
            password: process.env.AZURE_DB_PASSWORD,
        }
    }
};
import { AzureFunction, Context, HttpRequest } from "@azure/functions"
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log('HTTP trigger function processed a request.');
    const name = (req.query.name || (req.body && req.body.name));
    if (name) {
        var connection = new Connection(config);
        connection.on('connect', function (err) {
            if (err) {
                console.log('Error: ', err);
            }
            context.log('Connected to database');
        });
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};
export default httpTrigger;
This ends up with the following message:
Warning: Unexpected call to 'log' on the context object after function
execution has completed. Please check for asynchronous calls that are
not awaited or calls to 'done' made before function execution
completes. Function name: HttpTrigger1. Invocation Id:
xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. Learn more:
https://go.microsoft.com/fwlink/?linkid=2097909
Again, the async documentation linked to covers just the standard JavaScript module.exports = async function (context) syntax, rather than the syntax used by these TypeScript httpTriggers.
I've also been reading that best practice might be to have a single connection, rather than connecting anew each time these endpoints are called - but again unsure if this should be done in a separate function that all of the endpoints call. Any help would be much appreciated!
I'm glad that using the 'mssql' node package works for you:
const sql = require('mssql');
require('dotenv').config();
module.exports = async function (context, req) {
    try {
        await sql.connect(process.env.AZURE_SQL_CONNECTIONSTRING);
        const result = await sql.query`select Id, Username from Users`;
        context.res.status(200).send(result);
    } catch (error) {
        context.log('error occurred ', error);
        context.res.status(500).send(error);
    }
};
Ref: https://stackoverflow.com/questions/62233383/understanding-js-callbacks-w-azure-functions-tedious-sql
Hope this helps.
I have created an API that, when called, triggers a Lambda function written in Node.js. The function takes the JSON payload (an array of objects) and inserts the data into DynamoDB: for each object in the array it creates a PutRequest object, and when finished it calls batchWriteItem. When I test in the AWS console everything works fine, but when I try it in Postman I get a 500 error. I know the event is different when coming from Postman vs. the console, and that you are supposed to reference event.body if you want to access the JSON. However, when I do that I get "Cannot read property 'forEach' of undefined" in the console and a 500 error in Postman. Below is the code that works in the console:
var AWS = require('aws-sdk'); // needed for AWS.DynamoDB below
var dynamo = new AWS.DynamoDB({region: 'us-east-1'});
exports.handler = (event, context, callback) => {
    const done = (err, res) => callback(null, {
        statusCode: err ? '400' : '200',
        body: err ? err.message : res,
    });
    var params = {
        RequestItems: {
            "Lead": []
        }
    };
    event.forEach(x => {
        params.RequestItems.Lead.push({
            PutRequest: {
                Item: {
                    "Address": {S: x.Address},
                    "City": {S: x.City},
                    "State": {S: x.State},
                    "Zipcode": {S: x.Zipcode},
                    "Owner_First_Name": {S: x.Owner_First_Name},
                    "Owner_Last_Name": {S: x.Owner_Last_Name}
                }
            }
        });
    });
    dynamo.batchWriteItem(params, done);
};
When the Lambda receives the JSON body from API Gateway, it is passed as a JSON string.
To convert that string to an object, you need to parse event.body:
const body = JSON.parse(event.body)
Then you can do body.forEach.
Hope this helps.
I am trying to return data from an external API in my inline Lambda function, but when I test this in the developer console for Alexa, I get "There was a problem with the requested skill's response" and I can't work out why.
Also, as I am doing this from the AWS console, I can't console.log to see what is actually being returned.
(I have removed the default intents for the sake of the post)
const request = require('request');
const http = require('http');       // used by httpGet below
const Alexa = require('alexa-sdk'); // used in the handler export
const handlers = {
    'LaunchRequest': function () {
        this.emit(':ask', 'Welcome');
    },
    'GiveUpdateIntent': function () {
        var slot = this.event.request.intent.slots.line.value;
        httpGet(slot, (theResult) => {
            this.response.speak(theResult);
            this.emit(':responseReady');
        });
    }
};
function httpGet(query, callback) {
    var options = {
        host: 'api.tfl.gov.uk',
        path: '/line/' + encodeURIComponent(query) + '/status',
        method: 'GET',
    };
    var req = http.request(options, res => {
        res.setEncoding('utf8');
        var responseString = "";
        // accept incoming data asynchronously
        res.on('data', chunk => {
            responseString += chunk;
        });
        // return the data when streaming is complete
        res.on('end', () => {
            console.log(responseString[0]);
            callback(responseString[0]);
        });
    });
    req.end();
}
exports.handler = function (event, context, callback) {
    const alexa = Alexa.handler(event, context, callback);
    alexa.APP_ID = APP_ID;
    alexa.registerHandlers(handlers);
    alexa.execute();
};
"There was a problem with the requested skills response" generally means that the response from your skill was not in the expected format.
Your API request
Ex: vicotria
https://api.tfl.gov.uk/Line/victoria/Status
returns a JSON, and you can't directly pass it Alexa as response. Before you send it back to Alexa, take out status that you actually want Alexa to speak. Then put that into a meaningful sentence that any skill user will understand and send it back.
For example you can return something like:
var speech = "Status severity description for " +
this.event.request.intent.slots.line.value +
" is "
+ responseBody[0].lineStatuses.statusSeverityDescription;
this.emit(':ask',speech, "your re-prompt here");
This is a sample JSON that I got
[
    {
        "$type": "Tfl.Api.Presentation.Entities.Line, Tfl.Api.Presentation.Entities",
        "id": "victoria",
        "name": "Victoria",
        "modeName": "tube",
        "disruptions": [],
        "created": "2018-07-31T12:11:08.477Z",
        "modified": "2018-07-31T12:11:08.477Z",
        "lineStatuses": [
            {
                "$type": "Tfl.Api.Presentation.Entities.LineStatus, Tfl.Api.Presentation.Entities",
                "id": 0,
                "statusSeverity": 10,
                "statusSeverityDescription": "Good Service",
                "created": "0001-01-01T00:00:00",
                "validityPeriods": []
            }
        ],
        "routeSections": [],
        "serviceTypes": [
            {
                "$type": "Tfl.Api.Presentation.Entities.LineServiceTypeInfo, Tfl.Api.Presentation.Entities",
                "name": "Regular",
                "uri": "/Line/Route?ids=Victoria&serviceTypes=Regular"
            },
            {
                "$type": "Tfl.Api.Presentation.Entities.LineServiceTypeInfo, Tfl.Api.Presentation.Entities",
                "name": "Night",
                "uri": "/Line/Route?ids=Victoria&serviceTypes=Night"
            }
        ],
        "crowding": {
            "$type": "Tfl.Api.Presentation.Entities.Crowding, Tfl.Api.Presentation.Entities"
        }
    }
]
CloudWatch:
Always make use of CloudWatch to see the logs of your Lambda function; you will find a link under the Monitoring tab of your Lambda function.
Configuring Lambda Test Events: You can test your Lambda code right from the inline editor by configuring Lambda test events under the Test menu of the inline editor. A function can have up to 10 test events.
This is because your handler returns before the callback is called. I strongly suggest moving away from callback-based development in Node.js and using Promises instead.
I just answered a similar question and provided sample code with promises. Check it here: How to make an asynchronous api call for Alexa Skill application with a Lambda function?
The issue turned out to be with using http itself instead of https.
The only response I was getting back was a status code of 302, which is a redirection, because the API I was calling changes all http requests to https.
Therefore, I changed my import to https and used the https.get method (instead of http.get) to call the API, and the correct response was returned.
I'm creating an API that creates authorized API calls to Google's APIs, specifically Drive for this question. My API is working fine and uses Google's Node API to make the requests. When I fire off a request to this resource, I get back the following response:
{
    "kind": "drive#file",
    "id": "...",
    "name": "bookmobile.jpg",
    "mimeType": "image/jpeg"
}
I use the above response to determine the MIME type of the file I'm to display later. I then make a subsequent call to the same endpoint, but specifying alt=media as an option to download the file as specified in Google's Guide. If I console.log or res.send() the response, the output is the raw image bytes from the API call. How do I render these bytes to the response body properly? My code is as follows:
// DriveController.show
exports.show = async ({ query, params }, res) => {
    if (query.alt && query.alt.toLowerCase().trim() === 'media') {
        // Set to JSON as we need to get the content type of the resource
        query.alt = 'json'
        // Get the Files Resource object
        const options = createOptions(query, params.fileId)
        const filesResource = await Promise.fromCallback(cb => files.get(options, cb))
        // Grab the raw image bytes
        query.alt = 'media'
        await createAPIRequest(createOptions(query, params.fileId), 'get', res, filesResource)
    } else {
        await createAPIRequest(createOptions(query, params.fileId), 'get', res)
    }
}

async function createAPIRequest (options, method, res, filesResource = {}) {
    try {
        const response = await Promise.fromCallback(cb => files[method](options, cb))
        if (filesResource.hasOwnProperty('mimeType')) {
            // Render file resource to body here
        } else {
            res.json(response)
        }
    } catch (error) {
        res.json(error)
    }
}
Various answers here all seem to point to the following:
res.type(filesResource.mimeType)
const image = Buffer.from(response, 'binary')
fs.createReadStream(image).pipe(res)
But this kills my Express app with the following error:
Error: Path must be a string without null bytes
How would I go about rendering those raw image bytes to the response body properly?
The Google API client returns binary data as a string by default, which corrupts image data. (The issue is discussed in this thread: https://github.com/google/google-api-nodejs-client/issues/618.) To fix it, pass the encoding: null option when requesting the file contents:
files[method](options, { encoding: null }, cb))