context.log vs console.log in Azure function apps - node.js

In the Node.js examples for Azure Function Apps, a context object is passed to the function, and you can call context.log to output messages in the same way as console.log.
What is the difference between these two methods, and does it matter which one you use? Thanks.

This documentation should answer your question :)
In Functions, you use the context.log methods to write trace output to the console. In Functions v2.x, trace outputs using console.log are captured at the Function App level. This means that outputs from console.log are not tied to a specific function invocation, and hence aren't displayed in a specific function's logs. They do, however, propagate to Application Insights. In Functions v1.x, you cannot use console.log to write to the console.
Long story short - context.log is best!
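For illustration, here is a minimal sketch of an HTTP-triggered function in the v2.x/v3.x Node.js programming model (the names are arbitrary); the first log line shows up in that invocation's logs, while the second is captured at the Function App level:

// index.js - minimal sketch of an HTTP-triggered function using context.log
module.exports = async function (context, req) {
    context.log('Processed a request.');              // tied to this invocation's logs
    console.log('Captured at the Function App level'); // not tied to the invocation
    context.res = {
        status: 200,
        body: 'Check the invocation logs for the context.log output.'
    };
};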

You can redirect console.log to context.log with my npm package, so you don't have to pass the context down everywhere.
https://www.npmjs.com/package/azure-function-log-intercept
The source is here if you just want to create your own module:
https://github.com/BrianRosamilia/azure-function-log-intercept/blob/master/index.js
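If you do roll your own, the general idea is simply to patch console.log so it forwards to the current invocation's context. A rough sketch of that idea (not the linked package's actual code):

// logIntercept.js - rough sketch only; the real package linked above differs in details
const originalLog = console.log;

module.exports = function intercept(context) {
    // Route console.log through the per-invocation context logger.
    console.log = (...args) => context.log(...args);
};

module.exports.restore = function restore() {
    // Put the original console.log back if needed.
    console.log = originalLog;
};

// Usage inside a function (hypothetical):
// const intercept = require('./logIntercept');
// module.exports = async function (context, req) {
//     intercept(context);
//     console.log('this now shows up in the invocation logs');
// };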

Related

Can't inject IWebHostEnvironment in Azure Durable Function

I have an Azure Durable Function and want to use IWebHostEnvironment. But when I try to inject it into the HttpStart function or my orchestrator I get the error:
Cannot bind parameter 'env' to type IWebHostEnvironment. Make sure the parameter Type is supported by the binding. I have not been able to find examples with Durable Functions and IWebHostEnvironment.
What I am trying to do is the equivalent of Server.MapPath(), which is not available in .NET Core.
The trouble was that I was not getting errors but could not see the files in Kudu. Then I found a post saying to set WEBSITE_DISABLE_SCM_SEPARATION = true, allowing Kudu to see the current process. Makes perfect sense after I thought about it for a moment.
Reference: https://serverfault.com/questions/788799/where-are-my-azure-temp-files

Call azure function from another azure function in javascript

I am new to Azure. I have created two functions, Master (masterFunction) and Child (childFunction), in JavaScript.
I want to invoke the Child function from the Master function.
I have tried childFunction.Run(req, log); but it's not working.
Please suggest how to fix this issue.
Thanks
This does not work; an Azure function only executes when its trigger condition is met. For example, if your childFunction is triggered by a blob, then the way your masterFunction "calls" it is to satisfy the child's trigger condition, that is, to perform the operation on the target blob from within your masterFunction. If your childFunction is HTTP-triggered, then you need to send a request to the childFunction URL from the code of the masterFunction, as in the sketch below.
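For the HTTP case, here is a minimal sketch; the function names, the CHILD_FUNCTION_URL app setting, and the use of the global fetch (Node 18+, otherwise use a library such as axios) are assumptions for illustration:

// masterFunction/index.js - hypothetical names; assumes childFunction is HTTP-triggered
module.exports = async function (context, req) {
    // The child function's URL (including its function key, if required) is read
    // from an app setting here purely for illustration.
    const childUrl = process.env.CHILD_FUNCTION_URL; // e.g. https://<app>.azurewebsites.net/api/childFunction?code=<key>

    const response = await fetch(childUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ from: 'masterFunction', payload: req.body })
    });

    const body = await response.text();
    context.log(`childFunction responded with status ${response.status}`);

    context.res = { status: response.status, body: body };
};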

Getting ExecutionContext in other libraries/projects in Azure Function App

Is it possible to get the execution context that is injected into a function (https://github.com/Azure/azure-functions-host/wiki/Retrieving-information-about-the-currently-running-function) in other helper libraries?
I want to get the InvocationId of the current function in some other libraries. For example, let's say I have written a logger and I need to add the invocation ID to every log entry. One trivial way to achieve this would be to pass the invocation ID from the function to all the helpers, but that may not be possible, especially when working with legacy code.
In App services we could solve this problem by getting access to the HttpContext via the IHttpContextAccessor.
Is there any alternative to this in Azure function?

Cloud Functions for Firebase error: "400, Change of function trigger type or event provider is not allowed"

When I run firebase deploy I get this error message:
functions: HTTP Error: 400, Change of function trigger type or event provider is not allowed
TL;DR
firebase functions:delete yourFunction // this can be done via the Firebase Console as well
firebase deploy
Explanation
Basically, Cloud Functions expects each function to keep the same trigger for its whole lifetime: once a function is created, its name is tied to that specific trigger. The trigger can therefore only be changed by deleting the function first and then creating it again with the new trigger.
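As an illustration (hypothetical function name), this is the kind of change that produces the 400 error: the same exported name switches from an HTTPS trigger to a Firestore trigger between deploys.

// functions/index.js - sketch of a trigger change that the deploy rejects
const functions = require('firebase-functions');

// First deploy: an HTTPS trigger
// exports.yourFunction = functions.https.onRequest((req, res) => res.send('ok'));

// Later deploy: the same name now uses a Firestore trigger -> "Change of function
// trigger type or event provider is not allowed" unless the function is deleted first.
exports.yourFunction = functions.firestore
    .document('items/{itemId}')
    .onCreate((snapshot, context) => {
        console.log('created', snapshot.id);
        return null;
    });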
This can now be done easily by using the functions:delete command:
firebase functions:delete yourFunction
The documentation features more advanced use cases as well.
Old solution
The solution is basically to comment out or cut your function, save the functions file, and deploy. The function will be deleted in Firebase; after that you can re-insert/uncomment your function and it will deploy just fine again. This error occurs when you change the type of trigger a function uses, e.g. HTTP, database, or authentication.
First, cut it out:
/* exports.yourFunction = someTrigger... */
And then, after deploying ("firebase deploy"), put it back with the new trigger:
exports.yourFunction = anotherTrigger...
For those who stumble upon this in the future, the Cloud Functions console now offers a delete button.
You can also go to the Cloud Functions panel in the Google Cloud Platform console and delete your function from there. After that you can upload the function normally from the Firebase CLI. Not sure why they don't have a delete option in the Firebase console.

How to test advanced interactions when developing an Alexa Skill

I am developing an Alexa Skill, and I am struggling a bit to understand whether I have set everything up in the best possible way to debug while developing.
Right now I am developing locally using Node.js, uploading to the cloud when ready, and testing all the responses to intents using the Service Simulator in the Test section of the developer console.
I find the process a bit slow, but it works... Still, I have two questions:
1) Is there a way of avoiding the process of uploading to the cloud?
And most importantly, 2) How do I test advanced interactions, for example multi-step ones, in the console? How, for example, do I test triggering the response to an intent and then asking the user for confirmation (Yes/No)? Right now the only way of doing it is by using the actual device.
Any improvement is highly appreciated.
Like #Tom suggested - take a look at bespoken.tools for testing skills locally.
Also, the Alexa Command Line Interface was recently released and it has some command line options you might look into.
For example, the 'api invoke-skill' command lets you invoke the skill locally via the command line (or script) so you don't have to use the service simulator. Like this...
$ask api invoke-skill -s $SKILL_ID -f $JSON --endpoint-region $REGION --debug
Here is a quick video I did that introduces the ASK CLI. It doesn't specifically cover testing but it will provide a quick intro.
https://youtu.be/p-zlSdixCZ4
Hope that helps.
EDIT: Had another thought for testing locally. If you're using node and Lambda functions, you can call the index.js file from another local .js file (example: test.js) and pass in the event data and context. Here is an example:
// Path to the Lambda index.js file
var lambdaFunction = require('../lambda/custom/index.js');

// JSON representing the event - just copy it from the service simulator
var event = require('./events/GetUpdateByName.json');

var context = {
    succeed: function (data) {
        console.log(JSON.stringify(data, null, '\t'));
    },
    fail: function (err) {
        console.log('context.fail occurred');
        console.log(JSON.stringify(err, null, '\t'));
    }
};

function callback(error, data) {
    if (error) {
        console.log('error: ' + error);
    } else {
        console.log(data);
    }
}

// Call the Lambda function's handler with the test event
lambdaFunction.handler(event, context, callback);
Here's how I'm testing multi-step interactions locally.
I'm using a free, third-party tool called BSTAlexa:
http://docs.bespoken.tools/en/latest/api/classes/bstalexa.html
It emulates Amazon's role in accepting requests, feeding them to your skill, and maintaining the state of the interactions.
So I start my test script by configuring BSTAlexa - pointing it to my skill config (e.g. intents) and to a local instance of my skill (in my case I'm giving it a local URL).
Then I feed BSTAlexa a sequence of textual requests and verify that I'm getting back the expected responses. And I put all this in a Mocha script.
It works quite well.
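A rough sketch of what such a Mocha test can look like; the constructor arguments and the start()/spoken() method names come from the BSTAlexa docs linked above and may differ between bespoken-tools versions, and the file paths and assertions are placeholders:

// test/interaction-test.js - rough sketch only, not verified against a specific bespoken-tools version
var BSTAlexa = require('bespoken-tools').BSTAlexa;
var assert = require('assert');

describe('multi-step interaction', function () {
    var alexa;

    beforeEach(function (done) {
        // Points at a locally running instance of the skill plus its interaction model files.
        alexa = new BSTAlexa('http://localhost:10000',
            './speechAssets/IntentSchema.json',
            './speechAssets/SampleUtterances.txt');
        alexa.start(done);
    });

    it('asks for confirmation and then handles yes', function (done) {
        alexa.spoken('get my update', function (error, firstResponse) {
            // Placeholder assertion - inspect whichever part of the response JSON matters to you.
            assert.ok(firstResponse.response.outputSpeech);
            alexa.spoken('yes', function (error, secondResponse) {
                assert.ok(secondResponse.response.outputSpeech);
                done();
            });
        });
    });
});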
Please find the answers below (answering in reverse order).
You can test multi-step interactions using a simulator (echosim.io), but each time you have to press and hold the mic button (or hold down the space bar). Say, for example, you first ask Alexa something with echosim and Alexa responds asking you to confirm "yes/no"; then you have to press and hold the mic button again to give the confirmation.
You can automate the lambda deployment process. Please see the link,
http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
It would be good to write complete unit tests so that you can test your logic before uploading to Lambda. This will also help reduce the number of Lambda deployments.
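For example, a minimal Mocha unit test in the same spirit as the test.js approach shown earlier; the file paths, intent name, and asserted fields are placeholders:

// test/handler-test.js - minimal sketch; paths and the asserted fields are placeholders
var assert = require('assert');
var lambdaFunction = require('../lambda/custom/index.js');
var event = require('./events/GetUpdateByName.json');

describe('GetUpdateByName intent', function () {
    it('returns a speech response', function (done) {
        // An empty context is passed here; stub succeed/fail as in test.js above if your handler uses them.
        lambdaFunction.handler(event, {}, function (error, data) {
            assert.ifError(error);
            assert.ok(data.response.outputSpeech); // placeholder check
            done();
        });
    });
});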
