How to do logging in botframework composer 2.0.0? - bot-framework-composer

I have read the issue here: https://github.com/microsoft/BotFramework-Composer/issues/3286#issuecomment-638601360. Does anyone have detailed steps on how to set up the logging? When I add the blobStorage settings to the advanced settings JSON, nothing is logged to the blob container. Do I need to add additional code to Startup.cs, or am I missing a step?

Here is an example of the settings:
blobStorage is used to enable TranscriptLoggerMiddleware.
cosmosDb is used to store user and conversation data.
applicationInsights is used to enable the telemetry logger.
The only change needed to activate transcript logging to blob storage should be adding these settings to the appsettings.json file:
"applicationInsights": {
"InstrumentationKey": "*******************************"
},
"cosmosDb": {
"cosmosDBEndpoint": "****",
"authKey": "****",
"databaseId": "botstate-db",
"collectionId": "botstate-collection",
"containerId": "botstate-container"
},
"blobStorage": {
"connectionString": "DefaultEndpointsProtocol=https;AccountName=****;AccountKey=********;EndpointSuffix=core.windows.net",
"container": "transcripts"
}
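For a Composer 2.x bot built on the adaptive runtime, it may also be worth checking whether these feature settings belong under the runtimeSettings section of appsettings.json rather than at the root. A minimal sketch, assuming that schema (the connection string is a placeholder; verify the exact key names against the appsettings.json generated for your project):

"runtimeSettings": {
  "features": {
    "blobTranscript": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=****;AccountKey=****;EndpointSuffix=core.windows.net",
      "containerName": "transcripts"
    }
  }
}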

Related

Azure Functions Blob Trigger not working locally with Python V2 model

I'm new to Azure Functions, so bear with me.
I'm working on a microservice that will use a blob storage trigger and input/output bindings to process data and write to a database, but I am having trouble with the basic blob storage trigger function. For reference, I am developing in Visual Studio Code using Python with the V2 programming model, as described in the Azure documentation. I have installed the Azure Functions extension and the Azurite extension. In my local.settings.json I have set the connection string in the AzureWebJobsStorage value and added the feature flag and secret storage type.
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=REDACTED;AccountName=REDACTED;AccountKey=REDACTED;EndpointSuffix=core.windows.net",
    "AzureWebJobsFeatureFlags": "EnableWorkerIndexing",
    "AzureWebJobsSecretStorageType": "files",
    "FUNCTIONS_EXTENSION_VERSION": "~4",
    "APPINSIGHTS_INSTRUMENTATIONKEY": "REDACTED",
    "APPLICATIONINSIGHTS_CONNECTION_STRING": "REDACTED",
    "user": "REDACTED",
    "host": "REDACTED",
    "password": "REDACTED",
    "dbname": "REDACTED",
    "driver": "REDACTED"
  }
}
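As an aside, when running against Azurite locally, AzureWebJobsStorage is usually pointed at the emulator with the development-storage shorthand rather than a real account's connection string. A minimal sketch of just that entry (the rest of local.settings.json unchanged):

{
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}

If the trigger container lives in the real storage account instead, the locked-down networking described below also has to allow the local machine to reach it.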
I had our architect create the necessary resources for me (a gen2 storage account and a function app). Our company has security protocols in place, meaning network access is disabled for the storage account and a private endpoint is configured, but not for the function app, because the deployment process wouldn't work otherwise.
In my function_app.py I have this for the blob trigger.
import logging
import azure.functions as func

app = func.FunctionApp()

@app.function_name(name="BlobTrigger1")
@app.blob_trigger(arg_name="myblob", path="samples-workitems/{name}",
                  connection="")
def test_function(myblob: func.InputStream):
    # Log the name and size of each blob that lands in samples-workitems
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")
host.json:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensions": {
    "blobs": {
      "maxDegreeOfParallelism": 4
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.15.0, 4.0.0)"
  },
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}
When I run the app locally using
Azurite: Start
func host start
it spits out the output shown in the screenshot (not reproduced here).
It is also worth noting that I have a util folder with simple scripts. They DO NOT FOLLOW THE BLUEPRINT SCHEMA. I don't know if this is the problem or if I can keep doing this, but the functions in them aren't meant to be Azure Functions, more like helper functions.
(Screenshots: storage account networking, container view, and the Azure Function in the cloud. I haven't deployed to this function because it didn't work.)
It is very frustrating because I don't know whether the problem lies with the way the resource was configured, with the code I wrote, with the way I set this up, or with a simple issue in the settings in one of the JSON files.

How can I create a Data Explorer Stream Analytics output using ARM templates?

I successfully manually configured a Stream Analytics Job that outputs data into Data Explorer. However, I am unable to set up my infrastructure pipeline using ARM templates. All the documentation only describes the other types of output (e.g. ServiceBus).
I also tried exporting the template in the resource group, but this does not work for my Stream Analytics Job.
How can I configure this output using ARM templates?
For many such issues, you can use the following process:
Set up the desired configuration manually.
Check https://resources.azure.com and find your resource,
OR export the resource using the export template feature.
In this case, resources.azure.com is sufficient.
If you look at the Data Explorer output resource, you will see the required ARM representation that you can use in your ARM template:
"name": "xxxxxxx",
"type": "Microsoft.StreamAnalytics/streamingjobs/outputs",
"properties": {
"datasource": {
"type": "Microsoft.Kusto/clusters/databases",
"properties": {
"cluster": "https://yyyyy.westeurope.kusto.windows.net",
"database": "mydatabase",
"table": "mytable",
"authenticationMode": "Msi"
}
},
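To turn that into a deployable template resource, it can be declared as a child resource of the streaming job. A minimal sketch, assuming a jobName parameter and a placeholder output name; the apiVersion is an assumption, so check the one shown by resources.azure.com for your job:

{
  "type": "Microsoft.StreamAnalytics/streamingjobs/outputs",
  "apiVersion": "2021-10-01-preview",
  "name": "[concat(parameters('jobName'), '/adx-output')]",
  "properties": {
    "datasource": {
      "type": "Microsoft.Kusto/clusters/databases",
      "properties": {
        "cluster": "https://yyyyy.westeurope.kusto.windows.net",
        "database": "mydatabase",
        "table": "mytable",
        "authenticationMode": "Msi"
      }
    }
  }
}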

Github actions Azure deployment python app

I would like to deploy a function app containing a Python function (the function code is in the repo), but I am getting a storage account error.
My repo: https://github.com/Palme240/GitHub-Ci-CD
When request Azure resource at ValidateAzureResource, Get Function App
Settings : AzureWebJobsStorage cannot be empty
According to the error message, you need to check whether the AzureWebJobsStorage property is empty in the app setting configuration.
The property AzureWebJobsStorage must be specified as an app setting in the site configuration.
The Azure Functions runtime uses the AzureWebJobsStorage connection string to create internal queues. When Application Insights is not enabled, the runtime uses the AzureWebJobsDashboard connection string to log to Azure Table storage and power the Monitor tab in the portal.
These properties are specified in the appSettings collection in the siteConfig object:
"appSettings": [
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
},
{
"name": "AzureWebJobsDashboard",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
}
]
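For context, a sketch of where that collection sits inside the Microsoft.Web/sites resource; the apiVersion, names, and placeholder values here are assumptions:

{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2021-02-01",
  "name": "[variables('functionAppName')]",
  "location": "[resourceGroup().location]",
  "kind": "functionapp",
  "properties": {
    "siteConfig": {
      "appSettings": [
        {
          "name": "AzureWebJobsStorage",
          "value": "<the listKeys expression shown above>"
        },
        {
          "name": "FUNCTIONS_EXTENSION_VERSION",
          "value": "~4"
        }
      ]
    }
  }
}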
You can refer to the official document for details.

How to use nested values from secrets.json in Azure's App Service Configuration

In my app's secrets.json file, I have the following section.
"Serilog": {
"WriteTo": [
{
"Name": "AzureTableStorage",
"Args": {
"storageTableName": "Logging",
"connectionString": "DefaultEndpointsProtocol=xxxxxxxxxxx"
}
}
]
}
I am attempting to deploy to Azure and have added the keys to my app service's configuration like this.
Serilog__WriteTo__Name
Serilog__WriteTo__Args__storageTableName
Serilog__WriteTo__Args__connectionString
However, the application will not start (it just shows an error: "If you are the application administrator, you can access the diagnostic resources.") if I use either of the two longer keys. I have another setting named CosmosConnectionSettings__ContainerName which works fine, so it seems to be a problem with the nesting rather than the key lengths.
The app service is configured to use Linux.
Is there a better way to approach this, and is this limitation documented anywhere?
I don't think the nesting is at fault.
I have tested it on my side; here is my secrets.json file:
{
  "Serilog": {
    "WriteTo": {
      "Name": "AzureTableStorage",
      "Args": {
        "storageBlobName": "1.jpg",
        "connectionString": "DefaultEndpointsProtocol=https;AccountName=XXX;AccountKey=XXX;"
      }
    }
  }
}
I then wrote the value on the endpoint page and set the corresponding app settings in my configuration in the portal (screenshots omitted).
The app settings I set work well on an Azure web app.
My suggestions are:
Check how you use the keys AzureTableStorage and connectionString in your scripts.
Test your project on IIS. If it works well on IIS, it should work well on Azure.
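One more detail worth checking when comparing the two files: in the original secrets.json, WriteTo is a JSON array, and ASP.NET Core configuration flattens arrays using numeric index segments. If the array shape is kept, the app service keys would need that index, along these lines:
Serilog__WriteTo__0__Name
Serilog__WriteTo__0__Args__storageTableName
Serilog__WriteTo__0__Args__connectionString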

Azure Durable Functions - Unable to change Task Hub Name to support Side-by-Side versioning

I'm looking into implementing the Side-By-Side Azure Durable Functions versioning pattern described here: https://learn.microsoft.com/en-us/azure/azure-functions/durable-functions-versioning
I have a FunctionApp deployed that currently uses the default HubName, DurableFunctionsHub. I've read through the documentation above, and it seems that all I am required to do is provide the following JSON in the host.json file:
{
  "version": "2.0",
  "durableTask": {
    "HubName": "TaskHubV1"
  }
}
When I deploy the new host.json file, I can see in the portal that host.json contains the changes above, but the storage container does not contain new blobs, queues, or tables prefixed with the HubName TaskHubV1. The screenshot shows the storage container contents:
(Screenshot: Durable Function storage)
I was expecting additional blobs, queues, and tables to have been created using the HubName above as a prefix, e.g.
Table storage: TaskHubV1History, TaskHubV1Instances
Could it be that HubName changes are not currently supported by V2 Functions?
The formatting for V2 has the "durableTask" property under "extensions". Could you try:
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "HubName": "TaskHubV1"
    }
  }
}
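If you want to switch hubs per deployment without editing host.json, the hub name can also be resolved from an app setting using the % syntax. A minimal sketch, assuming a hypothetical app setting named MyTaskHub whose value is TaskHubV1:

{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "%MyTaskHub%"
    }
  }
}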
