Which IPs to allow in Azure for GitHub Actions?

I have an Azure storage account. When I allow all networks, my GitHub Actions workflow can run and update my Azure static website.
When I disallow all but named networks (147.243.0.0/16 and my machine's IP), I get a 403 (request denied) error in GitHub Actions.
I assume I need to add GitHub's IPs to the allow-list, but when I run:
curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/meta
there are tons of IPs! Do I need to add them all?

I assume you want to allow the GitHub Actions runner access to your storage account? Then yes: since that is potentially a large fleet of VMs, there are a ton of IPs you would need to whitelist.
The alternative is to use a few tasks inside your pipeline:
1. Look up the public IP of the runner, e.g. using https://api.ipify.org
2. Add this IP to the allow-list using the Azure CLI
3. Do your actual work on the storage account
4. Remove the allow entry again through the CLI
Example code:
name: Deploy to Azure
on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  publish:
    environment: Production
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Whitelist GitHub Runner IP
        uses: azure/CLI@v1
        with:
          inlineScript: |
            set -eu
            agentIP=$(curl -s https://api.ipify.org/)
            az storage account network-rule add \
              --resource-group "${{ secrets.RESOURCE_GROUP }}" \
              --account-name "${{ secrets.STORAGE_ACCOUNT_NAME }}" \
              --ip-address "$agentIP"
            # give the new firewall rule time to propagate
            sleep 300
      - name: Upload to blob storage
        uses: azure/CLI@v1
        with:
          inlineScript: |
            set -eu
            az storage blob upload-batch \
              --account-name "${{ secrets.STORAGE_ACCOUNT_NAME }}" \
              --source ./src/ \
              --destination '$web' \
              --overwrite true
      - name: Purge CDN endpoint
        uses: azure/CLI@v1
        with:
          inlineScript: |
            set -eu
            az cdn endpoint purge \
              --content-paths "/*" \
              --profile-name "${{ secrets.CDN_PROFILE_NAME }}" \
              --name "${{ secrets.CDN_ENDPOINT }}" \
              --resource-group "${{ secrets.RESOURCE_GROUP }}"
      - name: Remove GitHub Runner IP from Whitelist
        if: always()
        uses: azure/CLI@v1
        with:
          inlineScript: |
            set -eu
            agentIP=$(curl -s https://api.ipify.org/)
            az storage account network-rule remove \
              --resource-group "${{ secrets.RESOURCE_GROUP }}" \
              --account-name "${{ secrets.STORAGE_ACCOUNT_NAME }}" \
              --ip-address "$agentIP"
      - name: Logout
        if: always()
        run: |
          az logout
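The `sleep 300` above is a blunt fixed wait for the firewall rule to propagate. A generic retry helper lets the pipeline poll instead of always waiting the full five minutes — this is a sketch, and the idea of polling (rather than the exact commands) is the point:

```shell
# retry N cmd...: run cmd until it succeeds, at most N attempts, 1 second apart.
retry() {
  attempts=$1; shift
  i=1
  until "$@"; do
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep 1
  done
}

# In the workflow you might wrap the first storage operation, e.g. (illustrative):
#   retry 60 az storage blob list --account-name ... --container-name '$web' -o none
retry 3 true && echo "rule is live"
```

The helper returns non-zero once the attempt budget is exhausted, so with `set -eu` the step still fails loudly if the rule never becomes effective.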

Now you can also assign a fixed IP range to your runners. This is in beta, but doable today:
Fixed IP ranges
Set up a fixed IP range for your machines by simply ticking a check box. This provides an IP range that can be allow-listed in internal systems and in GitHub's allow list, so you can keep using Actions while making your GitHub environment more secure.

Related

Failed to deploy Azure Function using Azure Container Registry

Our team has set up an Ansible playbook to deploy an Azure Function using this reference. We use a user-assigned identity to pull the function app image from an ACR, like the following:
- name: Force login using identity
  command: az login --identity

- name: Get subscriptionId
  command: az account show --query id --output tsv
  register: subscriptionId

- name: Get functionapp identity's clientId
  command: az identity show --resource-group "{{ functionapp_resource_group }}" --name "{{ functionapp_user_identity }}" --query clientId --output tsv
  register: identityId

- name: Use functionapp identity for acr pull
  command: >
    az resource update
    --ids /subscriptions/{{ subscriptionId.stdout }}/resourceGroups/{{ functionapp_resource_group }}/providers/Microsoft.Web/sites/{{ functionapp_name }}/config/web
    --set properties.acrUseManagedIdentityCreds=True
    --set properties.AcrUserManagedIdentityID={{ identityId.stdout }}

- name: Config the functionapp
  command: >
    az functionapp config appsettings set --name "{{ functionapp_name }}"
    --resource-group "{{ functionapp_resource_group }}"
    --settings
    "Key1={{ Value1 }}"
    "Key2={{ Value2 }}"

- name: Deploy the functionapp
  command: >
    az functionapp config container set --name "{{ functionapp_name }}"
    --resource-group "{{ functionapp_resource_group }}"
    --docker-custom-image-name "{{ docker_registry_host }}/{{ image_name }}:{{ image_tag }}"
    --docker-registry-server-url "{{ docker_registry_host }}"
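When a deployment like this half-applies, it helps to check whether the managed-identity settings actually landed on the site config. A sketch of assembling the same site-config resource ID the playbook targets (the subscription, group, and app names below are stand-ins); with a logged-in `az`, you could then feed it to `az resource show --ids "$resourceId" --query properties.acrUseManagedIdentityCreds`:

```shell
# Stand-in values; in the playbook these come from registered variables.
subscriptionId="00000000-0000-0000-0000-000000000000"
resourceGroup="my-func-rg"
functionappName="my-func-app"

# Same resource-ID shape the "acr pull" task updates.
resourceId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Web/sites/$functionappName/config/web"
echo "$resourceId"
```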
Usually, the script works fine, and the deployment log shows the full Docker pull-and-start sequence. However, we encountered a strange failure today: when deploying the image, the Azure portal showed an incomplete log.
Does anyone know why this failure occurred?
I eventually figured out that the incomplete log may simply mean there was no further Docker log output that day.

az tag update ERROR: (MissingSubscription) The request did not have a subscription or a valid tenant level resource provider

I am trying to set a tag named "GitBranch" on an Azure resource group.
When I call the command in a PowerShell window -
az tag update --resource-id "/subscriptions/79ca5b...7f/resourceGroups/ccg-afarber2" --subscription "79ca5b...7f" --operation merge --tags GitBranch=Test
then it works.
But when I try the same command in a Git Bash window, it fails.
I have also tried calling the following commands first, and tried both double and single quotes:
az login
az account set --subscription "79ca5b....7f"
but the error is still the same:
ERROR: (MissingSubscription) The request did not have a subscription or a valid tenant level resource provider.
And the reason I am trying to get the command working in bash is that I get the same error in my Azure pipeline task:
- task: AzureCLI@2
  displayName: 'Set Resource Group tag'
  inputs:
    azureSubscription: '${{ parameters.ArmConnection }}'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az tag update \
        --resource-id '/subscriptions/${{ parameters.SubscriptionId }}/resourceGroups/${{ parameters.ResourceGroupName }}' \
        --subscription '${{ parameters.SubscriptionId }}' \
        --operation Merge --tags \
        GitBranch=$(git branch --show-current)
What is happening here please?
On my PC I have azure-cli 2.28.0 installed.
I have found a solution myself!
In an AzureCLI pipeline task, when you run an az CLI command with parameters that start with a slash, the MinGW bash treats them as POSIX paths and auto-prepends the current path (MSYS path conversion).
To prevent this, prepend the following variable to the az command:
MSYS_NO_PATHCONV=1 az ....
A double slash at the start of the resource ID works too:
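Both workarounds side by side, as a sketch — the resource IDs are placeholders, and the `//` behaviour described in the comment is how MSYS-based shells are commonly reported to handle a doubled leading slash:

```shell
# 1) Disable MSYS path conversion for this one invocation:
MSYS_NO_PATHCONV=1 az tag update \
  --resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>" \
  --operation merge --tags GitBranch=Test

# 2) Double the leading slash; MSYS leaves the argument unconverted and
#    collapses "//" back to "/" before az sees it:
az tag update \
  --resource-id "//subscriptions/<sub-id>/resourceGroups/<rg>" \
  --operation merge --tags GitBranch=Test
```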

Using outputs of PowerShell in GitHub Actions

I am trying to get a connection string using PowerShell and pass this value to another step in the workflow, but I am getting this error:
Input required and not supplied: connection-string
I am following a pattern similar to one I have used before, but I am not sure why it is not working. Here is part of my script:
- name: Secrets to Key Vault
  uses: azure/powershell@v1
  env:
    POWERSHELL_TELEMETRY_OPTOUT: 1
  with:
    inlineScript: |
      $sqlConnectionString = (az keyvault secret show --vault-name <keyVaultName> --name <secret-name> --query [value] --output tsv)
      echo ::set-output name=sqlConnectionString::$( $sqlConnectionString)
    azPSVersion: '3.1.0'
- name: Deploy Core Module
  uses: azure/sql-action@v1
  id: sqlConnection
  with:
    server-name: <sqlServerName>
    connection-string: ${{ steps.sqlConnection.outputs.sqlConnectionString }}
    dacpac-package: './Database.dacpac'
I think the problem is related to the output of the variable, but I used similar syntax previously in a simple run step and it worked. Could it be related to the behaviour of PowerShell?
Please add an id to your first action:
- name: Secrets to Key Vault
  uses: azure/powershell@v1
  id: setSqlConnection
  env:
    POWERSHELL_TELEMETRY_OPTOUT: 1
  with:
    inlineScript: |
      $sqlConnectionString = (az keyvault secret show --vault-name <keyVaultName> --name <secret-name> --query [value] --output tsv)
      echo ::set-output name=sqlConnectionString::$( $sqlConnectionString)
    azPSVersion: '3.1.0'
- name: Deploy Core Module
  uses: azure/sql-action@v1
  id: sqlConnection
  with:
    server-name: <sqlServerName>
    connection-string: ${{ steps.setSqlConnection.outputs.sqlConnectionString }}
    dacpac-package: './Database.dacpac'
and then access the output with ${{ steps.setSqlConnection.outputs.sqlConnectionString }}
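A side note: GitHub has since deprecated the `::set-output` workflow command in favour of appending to the file named by `$GITHUB_OUTPUT`. A minimal sketch of the newer form, simulated locally here with a temp file standing in for the runner-provided path and a placeholder connection string:

```shell
# On a real runner GITHUB_OUTPUT is set by the runner; we fake it for the demo.
GITHUB_OUTPUT=$(mktemp)

sqlConnectionString="Server=example;Database=db"   # placeholder value
# The step writes "name=value" lines; later steps read them via steps.<id>.outputs.
echo "sqlConnectionString=$sqlConnectionString" >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"   # → sqlConnectionString=Server=example;Database=db
```

The `steps.<id>.outputs.sqlConnectionString` reference in the consuming step stays exactly the same; only the emitting syntax changes.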

How to access SSH certificates from Azure DevOps Secure Files inside deployment pipeline?

I am currently struggling with Azure DevOps. I am trying to deploy a VM through the DevOps pipeline and pass it the SSH key that I have saved in the Library as a Secure File.
trigger:
- none

pool:
  vmImage: ubuntu-latest

variables:
- group: 'deployment-test'

steps:
- task: AzureCLI@2
  displayName: "Create VM"
  inputs:
    azureSubscription: sp-test1234
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az vm create \
        --resource-group testrg23142 \
        --name testvm \
        --image Canonical:UbuntuServer:18.04-LTS:latest \
        --custom-data "${BASH_SOURCE%/*}"/cloud-init.yaml \
        --ssh-key-values ............ \
        --vnet-name vm-test-vnet \
        --subnet vm-test-subnet \
        --assign-identity '[system]'
I can't find an example of how to get my public key "test43.pub" from Secure Files passed to --ssh-key-values. Maybe there is something fundamental I am missing. I found the following link https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/download-secure-file?view=azure-devops which is about making the secure file accessible to the agent machine. But the agent machine is the temporary VM defined in pool, which has nothing to do with the VM I am creating. I am stuck; any help is much appreciated.
You could set a secret variable in your pipeline for the contents of your mykey.pub file. Then, call the variable in your pipeline definition as $(myPubKey). For the secret part of your key, use the Secure File library in Azure Pipelines.
Here's a more specific example for using an SSH key:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/install-ssh-key?view=azure-devops
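Tying the pieces together: once a DownloadSecureFile@1 step (say, with `name: sshKey`) has run, the agent exposes the downloaded file's path as `$(sshKey.secureFilePath)`, which Azure Pipelines macro-expands into the inline script before bash executes it. A sketch of reading the key's contents into a shell variable for `--ssh-key-values` — the step name is illustrative, and the path is simulated here with a temp file:

```shell
# Stand-in for $(sshKey.secureFilePath); on a real agent this is the path
# DownloadSecureFile@1 wrote the secure file to.
securePath=$(mktemp)
echo "ssh-rsa AAAAB3NzaC1... user@example" > "$securePath"   # fake public key

pubkey=$(cat "$securePath")
echo "$pubkey"
# In the pipeline script:  az vm create ... --ssh-key-values "$pubkey"
```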

How to pass the output from a kubectl task to the next task in Azure DevOps

I am using AKS. I am trying to fetch the IP of the service after my deployment through DevOps, so that I can pass the IP on to API Management for further configuration. Right now my task looks like this:
- task: Kubernetes@1
  inputs:
    connectionType: 'Kubernetes Service Connection'
    kubernetesServiceEndpoint: 'string-Conn'
    namespace: '<appservices>'
    command: 'get'
    arguments: 'get services --namespace appservices authsvc --output jsonpath=''{.status.loadBalancer.ingress[0].ip}'''
    secretType: 'dockerRegistry'
    containerRegistryType: 'Azure Container Registry'
  name: 'GetSvc'
When I run the command locally, I get the IP of the load balancer. But how can I pass the output from this task to the next task? Previously, when using Azure CLI scripts, I could set a pipeline variable as part of the script itself, like the one below, but I am not sure how to put the output of this task into a variable.
inlineScript: |
  $something = (az storage container generate-sas --account-name <container> --name armtemplate --permissions r --expiry $(date -u -d "30 minutes" +%Y-%m-%dT%H:%MZ))
  Write-Host($something)
  Write-Output("##vso[task.setvariable variable=SasToken;]$something")
I followed the approach suggested by Amit Baranes, since I was not clear on assigning the script output without a variable name. I used the Azure CLI task and ran it; it was successful:
- task: AzureCLI@2
  inputs:
    azureSubscription: '<Service-Conn>'
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az aks get-credentials -n $(clusterName) -g $(clusterRG)
      $externalIp = (kubectl get -n $(ns) services $(svc) --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
      Write-Host($externalIp)
      Write-Output("##vso[task.setvariable variable=AKSURL;]$externalIp")
We could use the logging command ##vso[task.setvariable variable=SasToken;]$something to set variables in scripts.
But according to your description, we recommend using an output variable to pass the IP. For example, assume we have a task called MyTask, which sets an output variable called MyVar. We could then use the output in the same job:
steps:
- task: MyTask@1   # this step generates the output variable
  name: ProduceVar # because we're going to depend on it, we need to name the step
- script: echo $(ProduceVar.MyVar) # this step uses the output variable
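If the IP is needed in a different job rather than just a later step, the same logging command additionally takes `isOutput=true`, and the consuming job references it through the producing job's dependencies. A local simulation of the emitting side (the IP value is a stand-in for the kubectl jsonpath result):

```shell
externalIp="10.0.0.5"   # stand-in for the kubectl jsonpath output

# With isOutput=true, another job can read this as
# dependencies.<jobName>.outputs['<stepName>.AKSURL'] (names illustrative).
echo "##vso[task.setvariable variable=AKSURL;isOutput=true]$externalIp"
```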
