Azure Container Apps - View trace logs in Application Insights - Azure

How do I get the application trace logs of an app deployed in Azure Container Apps to show up in Application Insights?
_logger.LogInformation("get ratings");
I have tried setting both the instrumentation key and the connection string on the daprAI properties, with no luck. I can view the logs in Log Analytics (ContainerAppConsoleLogs_CL), so logging itself works fine.
resource containerAppEnvironment 'Microsoft.App/managedEnvironments@2022-03-01' = {
  name: containerAppEnvironmentName
  location: location
  tags: tags
  properties: {
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: logAnalyticsWs.properties.customerId
        sharedKey: logAnalyticsWs.listKeys().primarySharedKey
      }
    }
    daprAIConnectionString: appInsights.properties.ConnectionString
    daprAIInstrumentationKey: appInsights.properties.InstrumentationKey
    vnetConfiguration: null
    zoneRedundant: false
  }
}
I have added the env values in the container app's container:
{
  name: 'ApplicationInsights__ConnectionString'
  value: appInsights.properties.ConnectionString
}
{
  name: 'ApplicationInsights__InstrumentationKey'
  value: appInsights.properties.InstrumentationKey
}
and bootstrapped Application Insights in Program.cs:
builder.Services.AddApplicationInsightsTelemetry();
The logs are still not visible in Application Insights.
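A likely culprit: the daprAIConnectionString/daprAIInstrumentationKey properties on the managed environment only forward Dapr sidecar telemetry, not the application's own ILogger output. Two things worth checking (a hedged sketch, not a confirmed fix): AddApplicationInsightsTelemetry() also reads the standard APPLICATIONINSIGHTS_CONNECTION_STRING environment variable, and the Application Insights logger provider captures only Warning and above from ILogger by default, so LogInformation entries are filtered out unless the level is lowered. In the container app's env block that could look like:

```bicep
env: [
  {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: appInsights.properties.ConnectionString
  }
  {
    // Assumption: lower the default ILogger filter so Information-level
    // entries are exported to Application Insights as trace telemetry.
    name: 'Logging__ApplicationInsights__LogLevel__Default'
    value: 'Information'
  }
]
```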

Related

Bicep ADF LinkedService

I'm having a heck of a time trying to deploy a simple Azure BlobFS linked service into an ADF using Bicep (which I have only really started to learn).
The bicep I have thus far is:
//--- Data Factory
resource datafactory 'Microsoft.DataFactory/factories@2018-06-01' = {
  name: adf_name
  location: loc_name
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    globalParameters: {}
    publicNetworkAccess: 'Enabled'
  }
}
//--- Data Factory Linked Service
resource adls_linked_service 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
  name: 'ls_adf_to_adls'
  parent: datafactory
  properties: {
    annotations: []
    connectVia: {
      parameters: {}
      referenceName: 'AutoResolveIntegrationRuntime'
      type: 'IntegrationRuntimeReference'
    }
    description: 'linked_service_for_adls'
    parameters: {}
    type: 'AzureBlobFS'
    typeProperties: {
      accountKey: datafactory.identity.principalId
      azureCloudType: 'AzurePublic'
      credential: {
        referenceName: 'string'
        type: 'CredentialReference'
      }
      servicePrincipalCredentialType: 'SecureString'
      servicePrincipalId: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
      servicePrincipalKey: {
        type: 'SecureString'
        value: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
      }
      tenant: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
      url: bicepstorage.properties.primaryEndpoints.blob
    }
  }
}
The ADF resource deploys fine by itself, as does the ADLS (symbolic name: bicepstorage). The issue arises when I add the linkedservice resource block. I get:
New-AzResourceGroupDeployment: /home/vsts/work/1/s/psh/deploy_main.ps1:12
Line |
12 | New-AzResourceGroupDeployment `
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| 22:46:27 - The deployment 'main' failed with error(s). Showing 1 out of
| 1 error(s). Status Message: Input is malformed. Reason: Could not get
| integration runtime details for AutoResolveIntegrationRuntime
| (Code:InputIsMalformedDetailed) CorrelationId:
| f77ef878-5314-46ea-9de6-65807845a104
The only integration runtime in the ADF is the 'AutoResolveIntegrationRuntime'. When I inspect it in the portal it's green, running and healthy.
I'm using the AzurePowerShell@5 task on ubuntu-latest in Azure DevOps, but I get the same error when I try to deploy the template directly from VS Code.
I'm out of ideas and would really appreciate some assistance. I found the documentation for the 'connectVia' block (actually all the documentation on bicep linked services!) to be really confusing; if anyone could tell me exactly what is supposed to go in there, I'd really appreciate it.
Thanks.
As mentioned in this documentation, if you want to create a linked service to ADLS (AzureBlobFS) with the default Azure IR (AutoResolveIntegrationRuntime), you can remove the connectVia property from the linked service block in your Bicep template.
To test this, I created a Bicep template which deploys an ADLS Gen2 storage account, a data factory, and a linked service using service principal based authentication.
Here is the sample template for your reference:
param location string = 'westus'

//--- Storage Account
resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: '<storageAccountName>'
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_GRS'
  }
  properties: {
    accessTier: 'Hot'
    supportsHttpsTrafficOnly: true
    isHnsEnabled: true
  }
}

//--- Data Factory
resource datafactory 'Microsoft.DataFactory/factories@2018-06-01' = {
  name: '<AdfName>'
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    globalParameters: {}
    publicNetworkAccess: 'Enabled'
  }
}

//--- Data Factory Linked Service
resource adls_linked_service 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
  name: '<linkedserviceName>'
  parent: datafactory
  properties: {
    annotations: []
    description: 'linked_service_for_adls'
    parameters: {}
    type: 'AzureBlobFS'
    typeProperties: {
      url: storage.properties.primaryEndpoints.dfs
      //encryptedCredential: storage.listKeys(storage.id).keys[0].value
      servicePrincipalCredential: {
        type: 'SecureString'
        value: '<serviceprincipalKey>'
      }
      servicePrincipalId: '<serviceprincipalappId>'
      servicePrincipalCredentialType: 'ServicePrincipalKey'
      azureCloudType: 'AzurePublic'
      servicePrincipalKey: {
        type: 'SecureString'
        value: '<serviceprincipalKey>'
      }
      tenant: '<tenantId>'
    }
  }
}

Add Identity Provider to an Azure Container App by CLI or Bicep

With a Bicep script I created a Container App which now has one Authentication Identity Provider.
I also created a second app registration which I want to add to this Container App.
This can be done in the Azure Portal, but I want to do it in a script (PowerShell/Bicep).
It looks like this is working:
resource containerAppConfig 'Microsoft.App/containerApps/authConfigs@2022-06-01-preview' = {
  name: 'current'
  parent: containerApp
  properties: {
    identityProviders: {
      azureActiveDirectory: {
        enabled: true
        isAutoProvisioned: true
        registration: {
          clientId: adApiAppClientId
          clientSecretSettingName: 'api-registry-password'
        }
        validation: {
          allowedAudiences: [
            'api://${adApiAppClientId}'
          ]
        }
      }
    }
  }
}
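Note that identityProviders accepts only a single azureActiveDirectory block, so a second app registration is usually admitted by widening the validation rules of the existing provider rather than adding a second provider. A hedged sketch, assuming a parameter adSecondAppClientId holds the second registration's client ID (verify defaultAuthorizationPolicy against the authConfigs schema):

```bicep
validation: {
  allowedAudiences: [
    'api://${adApiAppClientId}'
    'api://${adSecondAppClientId}'
  ]
  defaultAuthorizationPolicy: {
    allowedApplications: [
      adApiAppClientId
      adSecondAppClientId
    ]
  }
}
```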

How do I configure my bicep scripts to allow a container app to pull an image from an ACR using managed identity across subscriptions

I am trialling the use of Bicep and container apps in my organisation and we have separated out concerns within the SAME tenant but in different subscriptions like so:
Development
Production
Management
I want to be able to deploy each of these subscriptions using Bicep scripts (individual ones per subscription) and ideally only use managed identity for security.
Within the management subscription we have an ACR which has the admin account intentionally disabled as I don't want to pull via username/password. Question one, should this be possible? As it seems that we should be able to configure an AcrPull role against the container app(s) without too much trouble.
The idea being that the moment the container app is deployed it pulls from the Acr and is actively useable. I don't want an intermediary such as Azure DevOps handling the orchestration for example.
In bicep I've successfully configured the workspace, container environment but upon deploying my actual app I'm a bit stuck - it fails for some incomprehensible error message which I'm still digging into. I've found plenty of examples using the admin/password approach but documentation for alternatives appears lacking which makes me worry if I'm after something that isn't feasible. Perhaps user identity is my solution?
My bicep script (whilst testing against admin/password) looks like this:
resource containerApp 'Microsoft.App/containerApps@2022-03-01' = {
  name: containerAppName
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    managedEnvironmentId: containerAppEnvId
    configuration: {
      secrets: [
        {
          name: 'container-registry-password'
          value: containerRegistry.listCredentials().passwords[0].value
        }
      ]
      ingress: {
        external: true
        targetPort: targetPort
        allowInsecure: false
        traffic: [
          {
            latestRevision: true
            weight: 100
          }
        ]
      }
      registries: [
        {
          server: '${registryName}.azurecr.io'
          username: containerRegistry.listCredentials().username
          passwordSecretRef: 'container-registry-password'
        }
      ]
    }
    template: {
      revisionSuffix: 'firstrevision'
      containers: [
        {
          name: containerAppName
          image: containerImage
          resources: {
            cpu: json(cpuCore)
            memory: '${memorySize}Gi'
          }
        }
      ]
      scale: {
        minReplicas: minReplicas
        maxReplicas: maxReplicas
      }
    }
  }
}
However this is following an admin/password approach. For using managed identity, firstly do I need to put a registry entry in there?
registries: [
  {
    server: '${registryName}.azurecr.io'
    username: containerRegistry.listCredentials().username
    passwordSecretRef: 'container-registry-password'
  }
]
If so, the listCredentials().username obviously won't work with admin/password disabled. Secondly, what would I then need in the containers section
containers: [
  {
    name: containerAppName
    image: containerImage ??
    resources: {
      cpu: json(cpuCore)
      memory: '${memorySize}Gi'
    }
  }
]
As there appears to be no mention of the need for pointing at a repository, or indeed specifying anything other than a password/admin account. Is it that my requirement is impossible as the container app needs to be provisioned before managed identity can be applied to it? Is this a chicken vs egg problem?
You could use a user-assigned identity:
Create a user assigned identity
Grant permission to the user-assigned identity
Assign the identity to the container app
// container-registry-role-assignment.bicep
param registryName string
param roleId string
param principalId string

// Get a reference to the existing registry
resource registry 'Microsoft.ContainerRegistry/registries@2021-06-01-preview' existing = {
  name: registryName
}

// Create the role assignment
resource roleAssignment 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(registry.id, roleId, principalId)
  scope: registry
  properties: {
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleId)
    principalId: principalId
    principalType: 'ServicePrincipal'
  }
}
Then from your main:
param name string
param identityName string
param environmentName string
param containerImage string
param location string = resourceGroup().location
param containerRegistrySubscriptionId string = subscription().subscriptionId
param containerRegistryResourceGroupName string = resourceGroup().name
param containerRegistryName string

// Create the identity
resource identity 'Microsoft.ManagedIdentity/userAssignedIdentities@2022-01-31-preview' = {
  name: identityName
  location: location
}

// Assign AcrPull permission
module roleAssignment 'container-registry-role-assignment.bicep' = {
  name: 'container-registry-role-assignment'
  scope: resourceGroup(containerRegistrySubscriptionId, containerRegistryResourceGroupName)
  params: {
    roleId: '7f951dda-4ed3-4680-a7ca-43fe172d538d' // AcrPull
    principalId: identity.properties.principalId
    registryName: containerRegistryName
  }
}

// Get a reference to the container app environment
resource managedEnvironment 'Microsoft.App/managedEnvironments@2022-03-01' existing = {
  name: environmentName
}

// Create the container app
resource containerapp 'Microsoft.App/containerApps@2022-03-01' = {
  dependsOn: [
    roleAssignment
  ]
  name: name
  ...
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${identity.id}': {}
    }
  }
  properties: {
    managedEnvironmentId: managedEnvironment.id
    configuration: {
      ...
      registries: [
        {
          server: '${containerRegistryName}.azurecr.io'
          identity: identity.id
        }
      ]
    }
    template: {
      ...
      containers: [
        {
          name: name
          image: '${containerRegistryName}.azurecr.io/${containerImage}'
          ...
        }
      ]
    }
  }
}

Deployment of queue, blob and ADLS2 private endpoints via Bicep goes wrong

I am trying to deploy three Azure storage resources across two storage accounts, and I want to implement three private endpoints so that these resources can only be reached from VMs in the same VNET. The resources that need to be reachable, per storage account, are:
storageAccountTemp
Azure blob queue
Azure blob storage
storageAccountDatalake
ADLS 2 containers (datalake)
I have the following Azure Bicep code for deploying the permanent and temporary stores:
param location string
param environmentType string
param storageAccountSku string
param privateEndpointsSubnetId string

var privateEndpointNameTmpstBlob = 'pe-tmpst-blob-${environmentType}-001'
var privateEndpointNameTmpstQueue = 'pe-tmpst-queue-${environmentType}-001'
var privateEndpointNamePst = 'pe-pst-${environmentType}-001'

/// Temp storage ///
resource storageAccountTemp 'Microsoft.Storage/storageAccounts@2021-08-01' = {
  name: 'tmpst${environmentType}'
  location: location
  sku: {
    name: storageAccountSku
  }
  kind: 'StorageV2'
  properties: {
    allowBlobPublicAccess: false
    accessTier: 'Hot'
    minimumTlsVersion: 'TLS1_0'
    publicNetworkAccess: 'Disabled'
  }
}

resource blobContainerForQueue 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-08-01' = {
  name: '${storageAccountTemp.name}/default/claimcheck-storage-${environmentType}'
  properties: {
    publicAccess: 'None'
  }
}

resource storageQueueMain 'Microsoft.Storage/storageAccounts/queueServices/queues@2019-06-01' = {
  name: '${storageAccountTemp.name}/default/queue-main-${environmentType}'
}

/// Persistent storage datalake ///
resource storageAccountDatalake 'Microsoft.Storage/storageAccounts@2021-08-01' = {
  name: 'pstdatalake${environmentType}'
  location: location
  sku: {
    name: storageAccountSku
  }
  kind: 'StorageV2'
  properties: {
    allowBlobPublicAccess: false
    accessTier: 'Hot'
    minimumTlsVersion: 'TLS1_0'
    isHnsEnabled: true
    publicNetworkAccess: 'Disabled'
  }
}

/// Data ///
resource ContainerForData 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-08-01' = {
  name: '${storageAccountDatalake.name}/default/data-${environmentType}'
  properties: {
    publicAccess: 'None'
  }
}

/// Private endpoint configuration for temp blob, queue and datalake ///
resource privateEndpointTmpstBlob 'Microsoft.Network/privateEndpoints@2021-05-01' = if (environmentType == 'dev' || environmentType == 'prd') {
  name: privateEndpointNameTmpstBlob
  location: location
  properties: {
    subnet: {
      id: privateEndpointsSubnetId
    }
    privateLinkServiceConnections: [
      {
        name: privateEndpointNameTmpstBlob
        properties: {
          privateLinkServiceId: storageAccountTemp.id
          groupIds: ['blob']
        }
      }
    ]
  }
}

resource privateEndpointTmpstQueue 'Microsoft.Network/privateEndpoints@2021-05-01' = if (environmentType == 'dev' || environmentType == 'prd') {
  name: privateEndpointNameTmpstQueue
  location: location
  properties: {
    subnet: {
      id: privateEndpointsSubnetId
    }
    privateLinkServiceConnections: [
      {
        name: privateEndpointNameTmpstQueue
        properties: {
          privateLinkServiceId: storageAccountTemp.id
          groupIds: ['queue']
        }
      }
    ]
  }
}

resource privateEndpointPst 'Microsoft.Network/privateEndpoints@2021-05-01' = if (environmentType == 'dev' || environmentType == 'prd') {
  name: privateEndpointNamePst
  location: location
  properties: {
    subnet: {
      id: privateEndpointsSubnetId
    }
    privateLinkServiceConnections: [
      {
        name: privateEndpointNamePst
        properties: {
          privateLinkServiceId: storageAccountDatalake.id
          groupIds: ['blob']
        }
      }
    ]
  }
}
As you can see, for the datalake storage account, isHnsEnabled is set to true to enable hierarchical namespaces and thus ADLS Gen2 functionality. The problem: if I include the privateEndpointPst resource in the deployment and then try to view a datalake container in the portal from a VM in the same VNET as the private endpoints (which sit in the subnet referenced by privateEndpointsSubnetId), I get an error when trying to look at the files in one of the datalake containers.
I don't believe the error message itself points at the real problem: when I deploy all three endpoints together, blob, queue and datalake in both storageAccountTemp and storageAccountDatalake all show this same failure.
However, when I deploy only the two endpoints for the storageAccountTemp resources and not the one for the datalake, I can see the data in the portal from the VM in the VNET, and code running on that VM can also reach the queue and blob. So not only does deploying privateEndpointPst break datalake reachability, it somehow also breaks reachability of the queue and blob in storageAccountTemp when all three are deployed together. My mind is boggled as to why this is happening. Even more puzzling, deploying the endpoints together will sometimes make the datalake endpoint work and break the other two. Running the built-in checks for common connectivity issues does not make me much wiser about the cause; I'm fairly sure it's not firewalls, since access sometimes works and sometimes doesn't.
Does anyone see what could be wrong with my Bicep code for deploying the endpoints that might be causing this issue? I'm at quite a loss here. I also tried replacing groupIds: ['blob'] with groupIds: ['dfs'], but that does not solve the problem.
I seem to have found the issue. To connect to a datalake resource, one needs both a private endpoint with groupIds: ['blob'] and one with groupIds: ['dfs'], since the blob API is still used to fetch some metadata about the containers (as far as I can understand).
So adding:
resource privateEndpointPstDfs 'Microsoft.Network/privateEndpoints@2021-05-01' = if (environmentType == 'dev' || environmentType == 'prd') {
  name: privateEndpointNamePstDfs
  location: location
  properties: {
    subnet: {
      id: privateEndpointsSubnetId
    }
    privateLinkServiceConnections: [
      {
        name: privateEndpointNamePstDfs
        properties: {
          privateLinkServiceId: storageAccountDatalake.id
          groupIds: ['dfs']
        }
      }
    ]
  }
}
made the deployment work successfully.

Is there a workaround to keep app settings which are not defined in a Bicep template?

main.bicep
resource appService 'Microsoft.Web/sites@2020-06-01' = {
  name: webSiteName
  location: location
  properties: {
    serverFarmId: appServicePlan.id
    siteConfig: {
      linuxFxVersion: linuxFxVersion
      appSettings: [
        {
          name: 'ContainerName'
          value: 'FancyContainer'
        }
        {
          name: 'FancyUrl'
          value: 'fancy.api.com'
        }
      ]
    }
  }
}
The infrastructure release runs successfully and the app settings are set correctly. After that I run the node application's build and release, where the Azure DevOps release pipeline adds some application-related config to the app settings (API keys and API URLs, for example), and everything works great.
But if I have to re-release the infrastructure, for example to expand my environment with a storage account, the app settings set by the application release are lost.
Is there a workaround to keep app settings which are not defined in the Bicep template?
From this article: Merge App Settings With Bicep.
Don't include appSettings inside siteConfig while deploying.
Create a module that creates/updates the app settings, merging the existing settings with the new ones.
appsettings.bicep file:
param webAppName string
param appSettings object
param currentAppSettings object
resource siteconfig 'Microsoft.Web/sites/config#2022-03-01' = {
name: '${webAppName}/appsettings'
properties: union(currentAppSettings, appSettings)
}
main.bicep:
param webAppName string
...

// Create the webapp without appsettings
resource webApp 'Microsoft.Web/sites@2022-03-01' = {
  name: webAppName
  ...
  properties: {
    ...
    siteConfig: {
      // Don't include the appSettings
    }
  }
}

// Create/update the webapp app settings
module appSettings 'appsettings.bicep' = {
  name: '${webAppName}-appsettings'
  params: {
    webAppName: webApp.name
    // Get the current appsettings
    currentAppSettings: list(resourceId('Microsoft.Web/sites/config', webApp.name, 'appsettings'), '2022-03-01').properties
    appSettings: {
      Foo: 'Bar'
    }
  }
}
