How to monitor execution time per endpoint when using Express with Google Cloud Functions? - node.js

I have a Cloud Function (actually a Firebase Function) running on the Node.js runtime that serves an API built on the Express framework. Looking at the function logs, I see outputs like this (real data omitted):
{
  "insertId": "000000-00000000-0000-0000-0000-000000000000",
  "labels": {
    "execution_id": "000000000000"
  },
  "logName": "projects/project-00000/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
  "receiveTimestamp": "2000-01-01T00:00:00.000000000Z",
  "resource": {
    "labels": {
      "function_name": "api",
      "project_id": "project-00000",
      "region": "us-central1"
    },
    "type": "cloud_function"
  },
  "severity": "DEBUG",
  "textPayload": "Function execution took 5000 ms, finished with status code: 200",
  "timestamp": "2000-01-01T00:00:00.000000000Z",
  "trace": "projects/project-00000/traces/00000000000000000000000000000000"
}
The relevant data I want to extract are the execution time and response code in the textPayload attribute. However, I want to create a metric that breaks the data down by API endpoint, to identify which endpoints are slow. This is an HTTP function, but I don't have any request details in the logs.
I could probably achieve what I want by adding logging code to the function. However, I was wondering whether I can extract this information directly from Google Cloud without touching the function code.
Is there a way to create a log-based metric that shows execution times split by endpoint?
References:
https://firebase.google.com/docs/functions/monitored-metrics
https://cloud.google.com/functions/docs/monitoring/metrics

I don't believe this will be possible without writing code. When folks want to collect information about running code, they typically turn to Stackdriver and use its APIs to collect specific information for analysis.
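If you do end up touching the function code, a small Express middleware is enough to get per-endpoint timing into the logs. Here is a minimal sketch (the field names and the idea of keying a log-based metric off jsonPayload are my own assumptions, not an official recipe):

const express = require("express");
const app = express();

// Log one structured JSON line per request. Cloud Logging parses JSON
// written to stdout into jsonPayload, so a log-based metric could then
// extract "endpoint" as a label and "durationMs" as the value.
app.use((req, res, next) => {
  const start = process.hrtime.bigint();
  res.on("finish", () => {
    const durationMs = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(JSON.stringify({
      message: "request_completed",
      endpoint: req.path,   // the concrete path; use a route pattern if you prefer
      method: req.method,
      status: res.statusCode,
      durationMs,
    }));
  });
  next();
});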

Related

presetOverride when creating Azure Media Services v3 Job

When creating an Azure Media Services Job via the REST API, I cannot set a presetOverride property on the JobOutputAsset as defined in the documentation: https://learn.microsoft.com/en-us/rest/api/media/jobs/create#joboutputasset
My request body is:
{
  "properties": {
    "input": {
      "@odata.type": "#Microsoft.Media.JobInputAsset",
      "assetName": "inputAsset"
    },
    "outputs": [
      {
        "@odata.type": "#Microsoft.Media.JobOutputAsset",
        "assetName": "outputAsset",
        "label": "en-US",
        "presetOverride": {
          "@odata.type": "#Microsoft.Media.AudioAnalyzerPreset",
          "audioLanguage": "en-US",
          "mode": "Basic"
        }
      }
    ],
    "priority": "Normal"
  }
}
The error message thrown is:
{
  "error": {
    "code": "InvalidResource",
    "message": "The property 'presetOverride' does not exist on type 'Microsoft.Media.JobOutputAsset'. Make sure to only use property names that are defined by the type."
  }
}
When I remove the presetOverride data, everything works as expected. The official documentation clearly states that Microsoft.Media.JobOutputAsset does have a presetOverride property, though. What am I doing wrong?
It is important to select the correct API version when communicating with the Azure Media Services REST API.
In this case, API version 2020-05-01 from the Azure Media Services Postman examples was used, but the presetOverride option is only available starting with version 2021-06-01.
Setting api-version=2021-06-01 as a query parameter enables preset overrides.
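For example, the job creation call would be made against a URL of this shape (all braced segments are placeholders for your own values):

PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Media/mediaServices/{accountName}/transforms/{transformName}/jobs/{jobName}?api-version=2021-06-01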
A couple of concerns here, Rene. I would not recommend using the raw REST API directly for any Azure service, because a lot of built-in retry scenarios and retry policies are already rolled into the client SDKs. We've had many customers try to roll their own REST API library and run into massive issues in production because they failed to read up on how to handle and write their own custom retry policy code.
Unless you are really familiar with rolling your own retry policies and with how the Azure Resource Management gateway works, try to avoid it and just use the official client SDKs - see here - https://learn.microsoft.com/en-us/azure/architecture/best-practices/retry-service-specific#general-rest-and-retry-guidelines
Now, to answer your specific question - try using my sample here in .NET and see if it answers your question.
https://github.com/Azure-Samples/media-services-v3-dotnet/blob/3ab85647cbadd2b868aadf175afdede67b40b2fd/AudioAnalytics/AudioAnalyzer/Program.cs#L129
I can also provide a working sample of this in Node.js/TypeScript in this repo if you like. It uses the latest 10.0.0 release of our JavaScript SDK.
I'm working on samples in this repo today - https://github.com/Azure-Samples/media-services-v3-node-tutorials
UPDATE: Added a basic audio analysis sample in TypeScript.
https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/AudioAnalytics/index.ts
It shows how to use the preset override per job.
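For reference, here is a rough sketch of the same job submission with the JavaScript SDK; the resource names are placeholders, and the exact client surface may vary slightly between releases:

const { DefaultAzureCredential } = require("@azure/identity");
const { AzureMediaServices } = require("@azure/arm-mediaservices");

async function submitJob() {
  // Placeholder names -- substitute your own subscription and resources.
  const client = new AzureMediaServices(new DefaultAzureCredential(), "<subscriptionId>");
  await client.jobs.create("<resourceGroup>", "<accountName>", "<transformName>", "<jobName>", {
    input: { odataType: "#Microsoft.Media.JobInputAsset", assetName: "inputAsset" },
    outputs: [
      {
        odataType: "#Microsoft.Media.JobOutputAsset",
        assetName: "outputAsset",
        // The SDK surfaces "@odata.type" as the odataType property,
        // so the preset override is just a typed object here.
        presetOverride: {
          odataType: "#Microsoft.Media.AudioAnalyzerPreset",
          audioLanguage: "en-US",
          mode: "Basic",
        },
      },
    ],
  });
}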

Cosmosdb Trigger for Azure Functions Application

We are developing applications using Azure Functions (Python). I have two questions regarding the Azure Functions application:
Can we monitor two collections using a single Cosmos DB trigger?
--- I have looked through the documentation and it seems this isn't supported. Did I miss anything?
If there are two functions monitoring the same collection, will only one of the functions be triggered?
-- I observed this behaviour today. I was running two instances of the functions app, and the data from the Cosmos DB trigger was sent to only one of them. I am trying to find out the reason for it.
EDIT:
1 - I had never used multiple input bindings, but according to the official wiki it's possible. Note that a function still has exactly one trigger; additional inputs and outputs are bindings declared in the same function.json file:
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "image-resize"
    },
    {
      "type": "blob",
      "name": "original",
      "direction": "in",
      "path": "images-original/{name}"
    },
    {
      "type": "blob",
      "name": "resized",
      "direction": "out",
      "path": "images-resized/{name}"
    }
  ]
}
PS: I know you're using Cosmos DB; the sample above is just for illustration.
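For Cosmos DB specifically, a single trigger binding watches exactly one collection, so monitoring two collections means two functions. A sketch of what one such function.json might look like (database, collection and connection names are placeholders):

{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "name": "documents",
      "direction": "in",
      "connectionStringSetting": "CosmosDBConnection",
      "databaseName": "mydb",
      "collectionName": "collection-a",
      "leaseCollectionName": "leases",
      "createLeaseCollectionIfNotExists": true
    }
  ]
}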
2 - I assume it's due to the way it's implemented (e.g. topic vs. queue): the first function locks the event/message, so the second one is not aware of the event. At the moment, Durable Functions for Python is still under development and should be released next month (03/2020). It will allow you to chain the execution of functions, just as is already available for C# / Node:
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview?tabs=csharp#chaining
What you can do is output to a queue, which will trigger your second function after the first function completes (pretty much what Durable Functions offers).
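As a sketch of that queue hand-off (queue and connection names are placeholders), the first function adds a queue output binding to its bindings array:

{
  "type": "queue",
  "name": "outputQueueItem",
  "direction": "out",
  "queueName": "to-second-function",
  "connection": "AzureWebJobsStorage"
}

and the second function triggers on the same queue:

{
  "bindings": [
    {
      "type": "queueTrigger",
      "name": "queueItem",
      "direction": "in",
      "queueName": "to-second-function",
      "connection": "AzureWebJobsStorage"
    }
  ]
}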

Azure Search, listAdminKeys, ARM output error (does not support http method 'POST')

I am using this bit of code as an output object in my ARM template,
"[listAdminKeys(variables('searchServiceId'), '2015-08-19').PrimaryKey]"
Full text sample of the output section:
"outputs": {
"SearchServiceAdminKey": {
"type": "string",
"value": "[listAdminKeys(variables('searchServiceId'), '2015-08-19').PrimaryKey]"
},
"SearchServiceQueryKey": {
"type": "string",
"value": "[listQueryKeys(variables('searchServiceId'), '2015-08-19')[0]]"
}
I receive the following error during deployment (unfortunately, any error means the template deployment skips the output section):
"The requested resource does not support http method 'POST'."
Checking the browser behavior seems to confirm that the error is related to the function (and to it using POST).
listAdminKeys using POST
How might I avoid this error and retrieve the AzureSearch admin key in the output?
Update: the goal of doing this is to gather all the relevant bits of information to plug into other scripts (.ps1) as parameters, since those resources are provisioned by this template. Would save someone from digging through the portal to copy/paste.
Thank you
Your error comes from listQueryKeys, not the admin keys.
https://learn.microsoft.com/en-us/rest/api/searchmanagement/adminkeys/get
https://learn.microsoft.com/en-us/rest/api/searchmanagement/querykeys/listbysearchservice
You won't be able to retrieve those in the ARM template; it can only "emulate" POST calls, not GET.
With the latest API version, it's possible to get the query key using this:
"SearchServiceQueryKey": {
"type": "string",
"value": "[listQueryKeys(variables('searchServiceId'), '2020-06-30').value[0].key]"
}
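Putting the two answers together, a sketch of a working outputs section on the newer API version could look like this (property casing follows the REST response; verify it against the API version you deploy with):

"outputs": {
  "SearchServiceAdminKey": {
    "type": "string",
    "value": "[listAdminKeys(variables('searchServiceId'), '2020-06-30').primaryKey]"
  },
  "SearchServiceQueryKey": {
    "type": "string",
    "value": "[listQueryKeys(variables('searchServiceId'), '2020-06-30').value[0].key]"
  }
}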

Will the Logic App retry to insert the failed record or not?

I have a Logic App that triggers whenever a record is created in Salesforce CRM. After that, I have a SQL Server insert action that inserts the Salesforce CRM record into an Azure SQL database.
My question is: if my Azure SQL database is down or the connection fails, what will happen to the failed record? Will the Logic App retry inserting it or not?
Not by default.
But you have Do-Until loops, where you define a condition for repeating an action. In your condition you can simply evaluate the result of the SQL insert.
For example, I use the following definition to make a reliable call to a REST API:
"GetBerlinDataReliable": {
"actions": {
"GetBerlinData": {
"inputs": {
"method": "GET",
"uri": "http://my.rest.api/path?query"
},
"runAfter": {},
"type": "Http"
}
},
"expression": "#and(equals(outputs('GetBerlinData').statusCode, 200),greaterOrEquals(body('GetBerlinData').query?.count, 1))",
"limit": {
"count": 100,
"timeout": "PT30M"
},
"runAfter": {},
"type": "Until"
},
It depends on whether the HTTP code from such an API is retryable or not. If it is, we will by default retry 4 times with 30 seconds in between (you can change that in the Settings of a given action as well). If it is not, then no retry will happen.
There are multiple ways to handle errors, depending on what error you expect and how it occurs: a do-until loop like the one mentioned above is one way, or you can consider a try (insert) / catch (save to blob) pattern and have another Logic App check the blob and retry the insert.
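If you would rather tune the built-in retry than build a loop, an action's inputs accept a retryPolicy object. A sketch with the defaults written out explicitly (the action name is illustrative, and the connection details are omitted):

"Insert_row": {
  "type": "ApiConnection",
  "inputs": {
    "retryPolicy": {
      "type": "fixed",
      "count": 4,
      "interval": "PT30S"
    }
  },
  "runAfter": {}
}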

Requesting User Location from Google Actions with Api.ai

Google Actions can provide you with the user's location, name, and a few other details. How can this be done on Api.ai without the Node.js SDK? All examples from Google use the Node.js SDK.
According to the Conversation API, it is just a matter of putting the correct JSON in the response; however, it is unclear how to get Api.ai to fill in this JSON.
I've read the docs here, but am still unclear.
Sample code, or more detailed documentation, would be great for non-Node.js developers. I'm working in Java, but a good explanation of how Api.ai builds the response JSON for Google Actions would be helpful for developers of all languages.
You have to study the API.AI HTTP API here. As a reference, try to set up the Node examples - this way you can see the JSON files in action.
For the permissions try the Name Psychic example.
Your outgoing JSON will be something like this:
{
  "contextOut": [
    {
      "lifespan": 100,
      "name": "_actions_on_google_",
      "parameters": {}
    },
    {
      "lifespan": 1,
      "name": "requesting_permission",
      "parameters": {}
    }
  ],
  "data": {
    "google": {
      "expect_user_response": true,
      "is_ssml": false,
      "no_input_prompts": [],
      "permissions_request": {
        "opt_context": "To send you something",
        "permissions": [
          "DEVICE_PRECISE_LOCATION"
        ]
      }
    }
  },
  "speech": "PLACEHOLDER_FOR_PERMISSION"
}
There is now another option for Java programmers working with Actions on Google: an open-source port of the official SDK to Java/Kotlin. The API is very similar, so for location it would be something like:
app.askForLocation()
https://github.com/TicketmasterMobileStudio/actions-on-google-kotlin
