How to set linuxFxVersion for Azure CloudCustodian - azure

Azure Functions is deprecating Python 3.6 support in a month, on Sep 30, 2022 (Azure update doc).
My azure-periodic CloudCustodian rules run as Azure Functions, and they either have linuxFxVersion unset or set to python|3.6 (az command based on this Azure doc):
    % az functionapp config show --ids <Azure Function ID>
    {
      ...
      "limits": null,
      "linuxFxVersion": "python|3.6",
      "loadBalancing": "LeastRequests",
      ...
    }
I notice that for AWS there is a way to configure the runtime, per the CloudCustodian docs and this Medium example:
    policies:
      - name: sec-n-elb-internet-facing
        resource: aws.elb
        description: |
          This policy identifies all Load Balancers that are facing the
          Internet.
        filters:
          - Scheme: internet-facing
        mode:
          type: periodic
          schedule: "rate(3 days)"
          execution-options:
            output_dir: s3://example-bucket/cclogs/policy/{account_id}
          runtime: python3.8
I looked through the azure-periodic CloudCustodian Azure Reference and couldn't find anything similar. Is there a configuration to set the linuxFxVersion value for Azure CloudCustodian?
Also asked here on CloudCustodian's GitHub discussions

This has been answered in the CloudCustodian GitHub discussion thread here: https://github.com/cloud-custodian/cloud-custodian/discussions/7716#discussioncomment-3538443
This is set in tools/c7n_azure/c7n_azure/constants.py.
The current value is Python 3.8, so I would expect that if you let Custodian recreate the function app, it would use 3.8.
If the app already exists, it won't be modified.
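For function apps that already exist on the old runtime, one workaround is to bump linuxFxVersion in place with the az CLI; a minimal sketch, assuming the python|3.8 value matches what your Custodian version deploys (same placeholder ID as above):

    # Update the runtime stack of an existing Linux function app in place
    az functionapp config set \
      --ids <Azure Function ID> \
      --linux-fx-version "python|3.8"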

Related

Specify local tf state file to azurerm provider in pipeline

I have been working on deploying a Terraform package using an Azure DevOps pipeline.
We have our tf state file locally and have no plans to move to an Azure storage account. Could you please help with how we can define the attribute values in the terraform init step in the pipeline?
    - task: TerraformTaskV2@2
      displayName: Terraform init
      inputs:
        provider: 'azurerm'
        command: 'init'
        workingDirectory: 'some directory'
        backendServiceArm: 'some service conn'
        backendAzureRmContainerName: ??
        backendAzureRmResourceGroupName: ??
        backendAzureRmStorageAccountName: ??
        backendAzureRmKey: ??
What should the values be for resource group, storage account name, and container name? If I don't specify these values, the pipeline fails with the error below:

    ##[error]Error: Input required: backendAzureRmStorageAccountName

Any help on this is much appreciated. Thanks in advance.
I'm unsure if you can use TerraformTaskV2 without utilizing a cloud provider's backend. The README for that task doesn't show options for using a local backend, only the following for terraform init:
... AzureRM backend configuration
... Amazon Web Services (AWS) backend configuration
... Google Cloud Platform (GCP) backend configuration
I haven't had experience with this yet, but you could look at the extension Azure Pipelines Terraform Tasks, which does explicitly support a local backend (see the sketch after the quote):
The Terraform CLI task supports the following terraform backends
local
...
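A rough sketch of an init step using that extension's Terraform CLI task with a local backend; the task name, version, and inputs are recalled from the extension's docs, so treat them as assumptions and verify before use:

    # Assumes the Azure Pipelines Terraform Tasks extension is installed
    - task: TerraformCLI@0
      displayName: Terraform init (local backend)
      inputs:
        command: 'init'
        workingDirectory: 'some directory'
        # 'local' keeps the tf state on the agent/workspace instead of a cloud backend
        backendType: 'local'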
Just a note on working in teams:
If you're working in a team deploying infrastructure, using a local backend can lead to undefined state and/or undesirable outcomes. A good remote backend can "...support locking the state while operations are being performed, which helps prevent conflicts and inconsistencies." - docs

Azure Bicep - Connect Azure API Management (API) to Azure Function App

I can see within the Azure Management Console, specifically within the Azure API Management service, that via the GUI you are able to use Azure Functions to form an API.
I am trying to implement the same via Azure Bicep, but I do not see any options in the Bicep documentation for API Management - API Service.
In the GUI, there is an option that lets me specify my Function App as the source of an API (the original post illustrated this with screenshots).
However, within the Bicep documentation, I don't see anything where I would expect to: Microsoft.ApiManagement service/apis
I have instead tried using Microsoft.ApiManagement service/backends, but that doesn't give the same experience, and I haven't managed to get it to work.
So my question is: how do I connect my Azure API Management service to an Azure site (app) which is set up as a suite of Azure Functions?
You need to create the backend and all API definitions manually. The portal gives you a nice wizard and makes all those REST calls for you. With Bicep (and ARM), which operate directly on the REST endpoints of each resource provider, you need to build your own solution.
Perhaps there are existing templates somewhere that can do this, but personally I haven't seen any yet.
I added OpenAPI specifications to my function apps to produce the Swagger/OpenAPI link (or file), then leveraged the OpenAPI file to build the APIs.
    // Create APIM Service
    resource apimServiceRes 'Microsoft.ApiManagement/service@2021-08-01' = {
      name: 'apim service name'
      location: resourceGroup().location
      sku: {
        capacity: 0
        name: 'select a sku'
      }
      identity: {
        type: 'SystemAssigned'
      }
      properties: {
        publisherName: 'your info'
        publisherEmail: 'your info'
      }
    }

    // Create the API Operations with:
    resource apimApisRes 'Microsoft.ApiManagement/service/apis@2021-08-01' = {
      name: '${apimServiceRes.name}/name-to-represent-your-api-set'
      properties: {
        format: 'openapi-link'
        value: 'https://link to your swagger file'
        path: ''
      }
    }
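For the service/backends piece the question mentions, here is a rough sketch of registering a Function App as an APIM backend; the existing-resource reference, the names, and the management-endpoint prefix in resourceId are assumptions to adapt, not a verified template (function keys/credentials are omitted):

    // Hypothetical reference to an existing Function App in the same resource group
    resource funcApp 'Microsoft.Web/sites@2021-02-01' existing = {
      name: 'your-function-app-name'
    }

    // Sketch: register the Function App as an APIM backend
    resource apimBackendRes 'Microsoft.ApiManagement/service/backends@2021-08-01' = {
      name: '${apimServiceRes.name}/your-function-app-backend'
      properties: {
        protocol: 'http'
        url: 'https://${funcApp.properties.defaultHostName}/api'
        // ARM resource URI of the backend app (assumed prefix)
        resourceId: 'https://management.azure.com${funcApp.id}'
      }
    }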

Set Retention Period for App Service logs in Azure WebApp Deployment

I am deploying an Azure Web Service (Linux container) with the az CLI and Bicep files. Below is an excerpt from my logging configuration.
    resource appConfigLogs 'Microsoft.Web/sites/config@2021-02-01' = {
      name: 'logs'
      parent: app
      properties: {
        detailedErrorMessages: {
          enabled: true
        }
        failedRequestsTracing: {
          enabled: true
        }
        httpLogs: {
          fileSystem: {
            enabled: true
            retentionInDays: 7
            retentionInMb: 50
          }
        }
      }
    }
To my understanding, the setting "retentionInDays" corresponds to "Retention Period (Days)", which can be found in the Azure Portal in the WebApp resource > "Monitoring" > "App Service logs".
When set via the Portal, the App Service's Configuration gets updated with an application setting called "WEBSITE_HTTPLOGGING_RETENTION_DAYS" set to the respective value.
When set via ARM deployment (see the Bicep above), no Configuration value is set. Is this a bug, or do these two settings "retentionInDays" / "Retention Period (Days)" simply not correlate with each other?
This is not a bug. "retentionInDays" and "Retention Period (Days)" are not two separate settings. In an ARM template, the retentionInDays parameter controls how long logs are retained, and the same parameter is displayed in the portal as Retention Period (Days).
We have written an ARM template and tested it in our environment, and it works fine. The template creates the web app and storage account, enables App Service logs, and sets the application setting WEBSITE_HTTPLOGGING_RETENTION_DAYS as well.
You can refer to this blogpost for more information about configuring App Service logs to a storage account using an ARM template.
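If you also want the portal-visible application setting to be present after a template deployment, a small sketch that sets it with the az CLI (app and resource group names are placeholders):

    az webapp config appsettings set \
      --name <app-name> \
      --resource-group <resource-group> \
      --settings WEBSITE_HTTPLOGGING_RETENTION_DAYS=7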

Cannot invoke Google Cloud Function from GCP Scheduler

I've been trying to invoke a GCP function (--runtime nodejs8 --trigger-http) from GCP Scheduler, both located within the same project. I can only make it work if I grant unauthenticated access by adding the allUsers member to the function's permissions, with the Cloud Functions Invoker role applied to it. However, when I only use the service account of the scheduler as the Cloud Functions Invoker, I get a PERMISSION_DENIED error.
I created a hello world example to show in detail what my setup looks like.
I set up a service account:
gcloud iam service-accounts create scheduler --display-name="Task Schedule Runner"
Setting the role:
svc_policy.json:

    {
      "bindings": [
        {
          "members": [
            "serviceAccount:scheduler@mwsdata-1544225920485.iam.gserviceaccount.com"
          ],
          "role": "roles/cloudscheduler.serviceAgent"
        }
      ]
    }

gcloud iam service-accounts set-iam-policy scheduler@mwsdata-1544225920485.iam.gserviceaccount.com svc_policy.json -q
Deploying the Cloud Function:
gcloud functions deploy helloworld --runtime nodejs8 --trigger-http --entry-point=helloWorld
Adding the service account as a member to the function:
gcloud functions add-iam-policy-binding helloworld --member serviceAccount:scheduler@mwsdata-1544225920485.iam.gserviceaccount.com --role roles/cloudfunctions.invoker
Creating the scheduler job:
gcloud beta scheduler jobs create http test-job --schedule "5 * * * *" --http-method=GET --uri=https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld --oidc-service-account-email=scheduler@mwsdata-1544225920485.iam.gserviceaccount.com --oidc-token-audience=https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld
Log: PERMISSION_DENIED

    {
      httpRequest: {
      }
      insertId: "1ny5xuxf69w0ck"
      jsonPayload: {
        @type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
        jobName: "projects/mwsdata-1544225920485/locations/europe-west1/jobs/test-job"
        status: "PERMISSION_DENIED"
        targetType: "HTTP"
        url: "https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld"
      }
      logName: "projects/mwsdata-1544225920485/logs/cloudscheduler.googleapis.com%2Fexecutions"
      receiveTimestamp: "2020-02-04T22:05:05.248707989Z"
      resource: {
        labels: {
          job_id: "test-job"
          location: "europe-west1"
          project_id: "mwsdata-1544225920485"
        }
        type: "cloud_scheduler_job"
      }
      severity: "ERROR"
      timestamp: "2020-02-04T22:05:05.248707989Z"
    }
Update
Here are the corresponding settings.
Scheduler Service Account
gcloud iam service-accounts get-iam-policy scheduler@mwsdata-1544225920485.iam.gserviceaccount.com

    bindings:
    - members:
      - serviceAccount:scheduler@mwsdata-1544225920485.iam.gserviceaccount.com
      role: roles/cloudscheduler.serviceAgent
    etag: BwWdxuiGNv4=
    version: 1
IAM Policy of the function:
gcloud functions get-iam-policy helloworld

    bindings:
    - members:
      - serviceAccount:scheduler@mwsdata-1544225920485.iam.gserviceaccount.com
      role: roles/cloudfunctions.invoker
    etag: BwWdxyDGOAY=
    version: 1
Function Description
gcloud functions describe helloworld

    availableMemoryMb: 256
    entryPoint: helloWorld
    httpsTrigger:
      url: https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld
    ingressSettings: ALLOW_ALL
    labels:
      deployment-tool: cli-gcloud
    name: projects/mwsdata-1544225920485/locations/us-central1/functions/helloworld
    runtime: nodejs8
    serviceAccountEmail: mwsdata-1544225920485@appspot.gserviceaccount.com
    sourceUploadUrl: https://storage.googleapis.com/gcf-upload-us-central1-671641e6-3f1b-41a1-9ac1-558224a1638a/b4a0e407-69b9-4f3d-a00d-7543ac33e013.zip?GoogleAccessId=service-617967399269@gcf-admin-robot.iam.gserviceaccount.com&Expires=1580854835&Signature=S605ODVtOpnU4LIoRT2MnU4OQN3PqhpR0u2CjgcpRcZZUXstQ5kC%2F1rT6Lv2SusvUpBrCcU34Og2hK1QZ3dOPluzhq9cXEvg5MX1MMDyC5Y%2F7KGTibnV4ztFwrVMlZNTj5N%2FzTQn8a65T%2FwPBNUJWK0KrIUue3GemOQZ4l4fCf9v4a9h6MMjetLPCTLQ1BkyFUHrVnO312YDjSC3Ck7Le8OiXb7a%2BwXjTDtbawR20NZWfgCCVvL6iM9mDZSaVAYDzZ6l07eXHXPZfrEGgkn7vXN2ovMF%2BNGvwHvTx7pmur1yQaLM4vRRprjsnErU%2F3p4JO3tlbbFEf%2B69Wd9dyIKVA%3D%3D
    status: ACTIVE
    timeout: 60s
    updateTime: '2020-02-04T21:51:15Z'
    versionId: '1'
Scheduler Job Description
gcloud scheduler jobs describe test-job

    attemptDeadline: 180s
    httpTarget:
      headers:
        User-Agent: Google-Cloud-Scheduler
      httpMethod: GET
      oidcToken:
        audience: https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld
        serviceAccountEmail: scheduler@mwsdata-1544225920485.iam.gserviceaccount.com
      uri: https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld
    lastAttemptTime: '2020-02-05T09:05:00.054111Z'
    name: projects/mwsdata-1544225920485/locations/europe-west1/jobs/test-job
    retryConfig:
      maxBackoffDuration: 3600s
      maxDoublings: 16
      maxRetryDuration: 0s
      minBackoffDuration: 5s
    schedule: 5 * * * *
    scheduleTime: '2020-02-05T10:05:00.085854Z'
    state: ENABLED
    status:
      code: 7
    timeZone: Etc/UTC
    userUpdateTime: '2020-02-04T22:02:31Z'
Here are the steps I followed to make Cloud Scheduler trigger an HTTP-triggered Cloud Function that doesn't allow unauthenticated invocations:
1. Create a service account, which will have the form [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com.
2. Add the service account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com as a project member and grant it the following roles: Cloud Functions Invoker and Cloud Scheduler Admin.
3. Deploy an HTTP-triggered Cloud Function that doesn't allow public (unauthenticated) access (if you are using the UI, simply uncheck the Allow unauthenticated invocations checkbox), use the recently created service account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com in the Service account field (click More and look for the Service account field; by default it is set to the App Engine default service account), and take note of the Cloud Function's URL.
4. Create a Cloud Scheduler job with authentication by issuing the following command from Cloud Shell: gcloud scheduler jobs create http [JOB-NAME] --schedule="* * * * *" --uri=[CLOUD-FUNCTIONS-URL] --oidc-service-account-email=[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com
In your specific case, you are leaving the default App Engine service account on your Cloud Function. Change it to the service account you created, as described in the previous steps; a sketch follows.
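A sketch of redeploying the question's function with the dedicated service account as its identity; --service-account and --no-allow-unauthenticated are standard gcloud functions deploy flags, but verify them against your gcloud version:

    gcloud functions deploy helloworld \
      --runtime nodejs8 \
      --trigger-http \
      --entry-point=helloWorld \
      --no-allow-unauthenticated \
      --service-account=scheduler@mwsdata-1544225920485.iam.gserviceaccount.com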
@Marko I went through the same issue; it seems re-enabling (disable/enable) the Scheduler API did the fix. This is why creating a new project makes sense: you probably got a scheduler service account by doing so. So if your project doesn't have a scheduler service account created by Google, doing this trick will give you one. And although you don't need to assign this specific service account to any of your tasks, it must be available. You can see my work here: How to invoke Cloud Function from Cloud Scheduler with Authentication
I had a similar issue.
In our case, we had enabled Cloud Scheduler quite a long time ago.
According to the docs, if you enabled the Cloud Scheduler API before March 19, 2019, you need to manually add the Cloud Scheduler Service Agent role to your Cloud Scheduler service account.
So we had to set up the service account that looks like this: service-[project-number]@gcp-sa-cloudscheduler.iam.gserviceaccount.com
Hope this helps anybody else.
This tutorial helped me invoke a function from Cloud Scheduler, but there was a problem when creating the scheduler job right after creating the service account; I finally deleted the scheduler job and created it again:
Google Cloud Scheduler - Calling Cloud Function
As per a recent update on GCP, a new function needs a manual permission update to allow unauthenticated invocation.
We need to grant the Cloud Functions Invoker role to the allUsers member.
Please refer to
https://cloud.google.com/functions/docs/securing/managing-access-iam#allowing_unauthenticated_function_invocation
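A sketch of that grant with gcloud, using the function name from the question; note this makes the function publicly invocable, so prefer the dedicated-service-account approach above where possible:

    # Allow unauthenticated invocation by granting allUsers the invoker role
    gcloud functions add-iam-policy-binding helloworld \
      --member="allUsers" \
      --role="roles/cloudfunctions.invoker"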

Azure Logic App creation of Redis Cache requires x-ms-api-version

I'm building an Azure Logic App and trying to automate the creation of an Azure Redis Cache. There is a specific action for this (Create or update resource), which I was able to bring up.
I entered 2016-02-01 as the API version; I was trying different values here, just guessing from other API versions I know from Microsoft, and I can't find any resource on this on the internet. The result of this step is:
    {
      "error":
      {
        "code": "InvalidResourceType",
        "message": "The resource type could not be found in the namespace 'Microsoft.Cache' for api version '2016-02-01'."
      }
    }
What is the correct value for x-ms-api-version and where can I find the history for this value based on the resource provider?
Try:

    Resource Provider: Microsoft.Cache
    Name: Redis/<yourrediscachename>
    x-ms-api-version: 2017-02-01
One easy way to find the supported API versions for each resource type is to use the CLI in your Azure Portal, e.g.
az provider show --namespace Microsoft.Cache --query "resourceTypes[?resourceType=='Redis'].apiVersions | [0]"
would return:
    [
      "2017-02-01",
      "2016-04-01",
      "2015-08-01",
      "2015-03-01",
      "2014-04-01-preview",
      "2014-04-01"
    ]
I made it work with these values. HTH
