How to delete Azure Static Web App branch preview environments when deleting source branch in Azure DevOps? - azure

Background
I am using Azure DevOps for hosting the source of my web application and building/deploying the application to an Azure Static Web App.
I am using the "branch preview environments" of Static Web App like this (source):
steps:
  ...
  - task: AzureStaticWebApp@0
    inputs:
      ...
      production_branch: 'main'
This works fine so far. For example, if I push a branch "dev", a corresponding branch preview environment is created.
Question
How can I automatically delete the Azure Static Web App branch preview environment once the branch it was created for is deleted?
Use Azure CLI?
The only approach I found so far is the Azure CLI, but how do I automate it?
az staticwebapp environment delete --name my-static-app \
--environment-name an-env-name --subscription my-sub
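For reference, the matching list command returns each environment together with its sourceBranch, which is what makes the cleanup below scriptable (same placeholder app and subscription names as above):
az staticwebapp environment list --name my-static-app --subscription my-sub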

I solved it by creating a separate pipeline triggered by the main branch. The pipeline removes all deployments that don't have an open pull request.
Here is the pipeline, basically just calling a node script that takes care of the cleanup:
name: Cleanup static web apps

trigger:
  - main

# Add the following variables into devops:
# - DEVOPS_PAT: your personal access token for DevOps
# - AZURE_SUBSCRIPTION: the subscription in azure under which your swa lives
variables:
  NPM_CONFIG_CACHE: $(Pipeline.Workspace)/.npm
  DEVOPS_ORG_URL: "https://dev.azure.com/feedm3"
  DEVOPS_PROJECT: "azure-playground"
  AZURE_STATIC_WEBAPP_NAME: "react-app"

jobs:
  - job: cleanup_preview_environments_job
    displayName: Cleanup
    pool:
      vmImage: ubuntu-latest
    steps:
      - task: Cache@2
        inputs:
          key: 'npm | "$(Agent.OS)" | package-lock.json'
          restoreKeys: |
            npm | "$(Agent.OS)"
          path: $(NPM_CONFIG_CACHE)
        displayName: "Cache npm"
      - script: |
          npm ci
        displayName: "Install dependencies"
      - task: AzureCLI@2
        inputs:
          azureSubscription: "test-service-connection-name"
          scriptType: bash
          scriptLocation: inlineScript
          inlineScript: |
            npm run ci:cleanup-deployments
        displayName: "Cleanup outdated deployments"
This is the actual script that removes the deployments:
import { getPersonalAccessTokenHandler, WebApi } from "azure-devops-node-api";
import { exec as callbackExec } from 'child_process';
import { promisify } from 'util';

const exec = promisify(callbackExec);

const DEVOPS_ORG_URL = process.env["DEVOPS_ORG_URL"] as string;
const DEVOPS_PROJECT = process.env["DEVOPS_PROJECT"] as string;
const DEVOPS_PAT = process.env["DEVOPS_PAT"] as string;
const AZURE_SUBSCRIPTION = process.env["AZURE_SUBSCRIPTION"] as string;
const AZURE_STATIC_WEBAPP_NAME = process.env["AZURE_STATIC_WEBAPP_NAME"] as string;
const ALWAYS_DEPLOYED_BRANCHES = ['main'];
// set automatically by Azure Pipelines
const REPO_ID = process.env['BUILD_REPOSITORY_ID'] as string;

const getAllStaticWebAppDeployments = async (): Promise<{ name: string; sourceBranch: string; hostname: string }[]> => {
    const { stdout, stderr } = await exec(`az staticwebapp environment list --name ${AZURE_STATIC_WEBAPP_NAME} --subscription ${AZURE_SUBSCRIPTION}`);
    if (stderr) {
        console.error('Command failed!', stderr);
        throw new Error(stderr);
    }
    return JSON.parse(stdout);
}

const run = async () => {
    console.log(`Cleanup outdated deployments ${JSON.stringify({ REPO_ID, DEVOPS_PROJECT, AZURE_STATIC_WEBAPP_NAME })}...`);
    const webAppDeployments = await getAllStaticWebAppDeployments();

    const authHandler = getPersonalAccessTokenHandler(DEVOPS_PAT);
    const connection = new WebApi(DEVOPS_ORG_URL, authHandler);
    await connection.connect();
    const gitApi = await connection.getGitApi(`${DEVOPS_ORG_URL}/${DEVOPS_PROJECT}`);

    // status 1 is "active" (see the PullRequestStatus type)
    const activePullRequests = await gitApi.getPullRequests(REPO_ID, { status: 1 });
    // sourceRefName looks like "refs/heads/my-branch"; strip the prefix instead of
    // splitting on "/" so branch names that contain slashes stay intact
    const activePullRequestBranches = activePullRequests
        .map(pr => pr.sourceRefName)
        .filter(Boolean)
        .map(fullBranchName => fullBranchName!.replace('refs/heads/', ''));

    // main deployment should always be alive
    activePullRequestBranches.push(...ALWAYS_DEPLOYED_BRANCHES);

    const outdatedDeployments = webAppDeployments.filter(deployment => {
        return !activePullRequestBranches.includes(deployment.sourceBranch);
    });
    console.log('Deployments to delete:', outdatedDeployments);

    for (const deployment of outdatedDeployments) {
        const deploymentName = deployment.name;
        console.log(`Deleting deployment ${deploymentName}...`);
        /**
         * Deletion works, but ends with an irrelevant error.
         */
        try {
            const { stderr } = await exec(`az staticwebapp environment delete --name ${AZURE_STATIC_WEBAPP_NAME} --subscription ${AZURE_SUBSCRIPTION} --environment-name ${deploymentName} --yes`);
            if (stderr) {
                console.error('Could not delete deployment ', deploymentName);
            } else {
                console.log('Deleted deployment ', deploymentName);
            }
        } catch (e) {
            console.log('Deleted deployment ', deploymentName);
        }
    }
    console.log('Outdated deployments cleared!');
}

await run();
The full repo can be found here: https://github.com/feedm3/learning-azure-swa-devops

Related

Azure Function App infra redeployment makes existing functions in app fail because of missing requirements and also delete previous invocations data

I'm facing quite a big problem. I have a function app that I deploy with Azure Bicep in the following fashion:
param environmentType string
param location string
param storageAccountSku string
param vnetIntegrationSubnetId string
param kvName string

/*
  This module contains the IaC for deploying the Premium function app
*/

/// Just a single minimum instance to start with and max scaling of 3 for dev, 5 for prd ///
var minimumElasticSize = 1
var maximumElasticSize = ((environmentType == 'prd') ? 5 : 3)
var name = 'nlp'
var functionAppName = 'function-app-${name}-${environmentType}'

/// Storage account for service ///
resource functionAppStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'st4functionapp${name}${environmentType}'
  location: location
  kind: 'StorageV2'
  sku: {
    name: storageAccountSku
  }
  properties: {
    allowBlobPublicAccess: false
    accessTier: 'Hot'
    supportsHttpsTrafficOnly: true
    minimumTlsVersion: 'TLS1_2'
  }
}

/// Premium app plan for the service ///
resource servicePlanfunctionApp 'Microsoft.Web/serverfarms@2021-03-01' = {
  name: 'plan-${name}-function-app-${environmentType}'
  location: location
  kind: 'linux'
  sku: {
    name: 'EP1'
    tier: 'ElasticPremium'
    family: 'EP'
  }
  properties: {
    reserved: true
    targetWorkerCount: minimumElasticSize
    maximumElasticWorkerCount: maximumElasticSize
    elasticScaleEnabled: true
    isSpot: false
    zoneRedundant: ((environmentType == 'prd') ? true : false)
  }
}

// Create log analytics workspace
resource logAnalyticsWorkspacefunctionApp 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: '${name}-functionapp-loganalytics-workspace-${environmentType}'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018' // Standard
    }
  }
}

/// Log analytics workspace insights ///
resource applicationInsightsfunctionApp 'Microsoft.Insights/components@2020-02-02' = {
  name: 'application-insights-${name}-function-${environmentType}'
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    Flow_Type: 'Bluefield'
    publicNetworkAccessForIngestion: 'Enabled'
    publicNetworkAccessForQuery: 'Enabled'
    Request_Source: 'rest'
    RetentionInDays: 30
    WorkspaceResourceId: logAnalyticsWorkspacefunctionApp.id
  }
}

/// App service containing the workflow runtime ///
resource sitefunctionApp 'Microsoft.Web/sites@2021-03-01' = {
  name: functionAppName
  location: location
  kind: 'functionapp,linux'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    clientAffinityEnabled: false
    httpsOnly: true
    serverFarmId: servicePlanfunctionApp.id
    siteConfig: {
      linuxFxVersion: 'python|3.9'
      minTlsVersion: '1.2'
      pythonVersion: '3.9'
      use32BitWorkerProcess: true
      appSettings: [
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~4'
        }
        {
          name: 'FUNCTIONS_WORKER_RUNTIME'
          value: 'python'
        }
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${functionAppStorage.name};AccountKey=${listKeys(functionAppStorage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
        }
        {
          name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
          value: 'DefaultEndpointsProtocol=https;AccountName=${functionAppStorage.name};AccountKey=${listKeys(functionAppStorage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
        }
        {
          name: 'WEBSITE_CONTENTSHARE'
          value: 'app-${toLower(name)}-functionservice-${toLower(environmentType)}a6e9'
        }
        {
          name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
          value: applicationInsightsfunctionApp.properties.InstrumentationKey
        }
        {
          name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
          value: '~2'
        }
        {
          name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
          value: applicationInsightsfunctionApp.properties.ConnectionString
        }
        {
          name: 'ENV'
          value: toUpper(environmentType)
        }
      ]
    }
  }

  /// VNET integration so flows can access storage and queue accounts ///
  resource vnetIntegration 'networkConfig@2022-03-01' = {
    name: 'virtualNetwork'
    properties: {
      subnetResourceId: vnetIntegrationSubnetId
      swiftSupported: true
    }
  }
}

/// Outputs for creating access policies ///
output functionAppName string = sitefunctionApp.name
output functionAppManagedIdentityId string = sitefunctionApp.identity.principalId
The output is used for giving permissions to blob/queue and some keyvault stuff. This code is a single module called in a main.bicep module and deployed via an Azure DevOps pipeline.
I have a second repository in which I have some functions, which I also deploy via Azure Pipelines. This one contains three .yaml files for deploying: 2 templates (CI and CD) and 1 main pipeline called azure-pipelines.yml pulling it all together:
functions-ci.yml:

parameters:
  - name: environment
    type: string

jobs:
  - job:
    displayName: 'Publish the function as .zip'
    steps:
      - task: UsePythonVersion@0
        inputs:
          versionSpec: '$(pythonVersion)'
        displayName: 'Use Python $(pythonVersion)'
      - task: CopyFiles@2
        displayName: 'Create project folder'
        inputs:
          SourceFolder: '$(System.DefaultWorkingDirectory)'
          Contents: |
            **
          TargetFolder: '$(Build.ArtifactStagingDirectory)'
      - task: Bash@3
        displayName: 'Install requirements for running function'
        inputs:
          targetType: 'inline'
          script: |
            python3 -m pip install --upgrade pip
            pip install setup
            pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
          workingDirectory: '$(Build.ArtifactStagingDirectory)'
      - task: ArchiveFiles@2
        displayName: 'Create project zip'
        inputs:
          rootFolderOrFile: '$(Build.ArtifactStagingDirectory)'
          includeRootFolder: false
          archiveType: 'zip'
          archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
          replaceExistingArchive: true
      - task: PublishPipelineArtifact@1
        displayName: 'Publish project zip artifact'
        inputs:
          targetPath: '$(Build.ArtifactStagingDirectory)'
          artifactName: 'functions$(environment)'
          publishLocation: 'pipeline'
functions-cd.yml:

parameters:
  - name: environment
    type: string
  - name: azureServiceConnection
    type: string

jobs:
  - job: worfklowsDeploy
    displayName: 'Deploy the functions'
    steps:
      # Download created artifacts, containing the zipped function code
      - task: DownloadPipelineArtifact@2
        inputs:
          buildType: 'current'
          artifactName: 'functions$(environment)'
          targetPath: '$(Build.ArtifactStagingDirectory)'
      # Zip deploy the functions code
      - task: AzureFunctionApp@1
        inputs:
          azureSubscription: $(azureServiceConnection)
          appType: functionAppLinux
          appName: function-app-nlp-$(environment)
          package: $(Build.ArtifactStagingDirectory)/**/*.zip
          deploymentMethod: 'zipDeploy'
They are pulled together in azure-pipelines.yml:

trigger:
  branches:
    include:
      - develop
      - main

pool:
  name: "Hosted Ubuntu 1804"

variables:
  ${{ if notIn(variables['Build.SourceBranchName'], 'main') }}:
    environment: dev
    azureServiceConnection: SC-NLPDT
  ${{ if eq(variables['Build.SourceBranchName'], 'main') }}:
    environment: prd
    azureServiceConnection: SC-NLPPRD
  pythonVersion: '3.9'

stages:
  # Builds the functions as .zip
  - stage: functions_ci
    displayName: 'Functions CI'
    jobs:
      - template: ./templates/functions-ci.yml
        parameters:
          environment: $(environment)
  # Deploys .zip workflows
  - stage: functions_cd
    displayName: 'Functions CD'
    jobs:
      - template: ./templates/functions-cd.yml
        parameters:
          environment: $(environment)
          azureServiceConnection: $(azureServiceConnection)
So this successfully deploys my function app the first time around when I have also deployed the infra code. The imports are done well, the right function app is deployed, and the code runs when I trigger it.
But when I go and redeploy the infra (Bicep) code, all of a sudden the newest version of the functions is gone and is replaced by a previous version.
Also, running this previous version doesn't work anymore, since all the requirements that were installed in the pipeline (CI part) via pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt suddenly cannot be found anymore, giving import errors (e.g. Result: Failure Exception: ModuleNotFoundError: No module named 'azure.identity'). Mind you, this version did work previously just fine.
This is a big problem for me since I need to be able to update some infra stuff (like adding an APP_SETTING) without breaking the current deployment of functions.
I had thought about just redeploying the functions automatically after an infra update, but then I would still lose the previous invocations, which I need to be able to see.
Am I missing something in the above code? I cannot figure out what would be going wrong here that causes my functions to change on infra deployment...
Looking at the documentation:
To enable your function app to run from a package, add a WEBSITE_RUN_FROM_PACKAGE setting to your function app settings.
A value of 1 indicates that the function app runs from a local package file deployed in the d:\home\data\SitePackages (Windows) or /home/data/SitePackages (Linux) folder of your function app.
In your case, when you deploy your function app code using AzureFunctionApp@1 and zipDeploy, this automatically adds this app setting to your function app. When you redeploy your infrastructure, this setting is removed and the function app host no longer knows where to find the code.
If you add this app setting in your bicep file this should work:
{
  name: 'WEBSITE_RUN_FROM_PACKAGE'
  value: '1'
}
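Note that siteConfig.appSettings in the Bicep template is a complete list: every infrastructure deployment replaces the whole set of app settings, so anything added outside the template (like the setting zipDeploy adds) gets wiped. Keeping WEBSITE_RUN_FROM_PACKAGE in the template's appSettings array prevents exactly the behavior described in the question.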

Azure Logic Apps (Standard) workflows getting deleted when I redeploy Logic App IaC part, how to avoid this?

I set up a Logic App in IaC in the following way:
param environmentType string
param location string
param storageAccountSku string
param vnetIntegrationSubnetId string
param storageAccountTempEndpoint string
param ResourceGroupName string

/// Just a single minimum instance to start with and max scaling of 3 ///
var minimumElasticSize = 1
var maximumElasticSize = 3
var name = 'somename'
var logicAppName = 'logic-app-${name}-${environmentType}'

/// Storage account for service ///
resource logicAppStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'st4logicapp${name}${environmentType}'
  location: location
  kind: 'StorageV2'
  sku: {
    name: storageAccountSku
  }
  properties: {
    allowBlobPublicAccess: false
    accessTier: 'Hot'
    supportsHttpsTrafficOnly: true
    minimumTlsVersion: 'TLS1_2'
  }
}

/// Existing temp storage for extracting variables ///
resource storageAccountTemp 'Microsoft.Storage/storageAccounts@2021-08-01' existing = {
  scope: resourceGroup(ResourceGroupName)
  name: 'tmpst${environmentType}'
}

/// Dedicated app plan for the service ///
resource servicePlanLogicApp 'Microsoft.Web/serverfarms@2021-02-01' = {
  name: 'plan-${name}-logic-app-${environmentType}'
  location: location
  sku: {
    tier: 'WorkflowStandard'
    name: 'WS1'
  }
  properties: {
    targetWorkerCount: minimumElasticSize
    maximumElasticWorkerCount: maximumElasticSize
    elasticScaleEnabled: true
    isSpot: false
    zoneRedundant: ((environmentType == 'prd') ? true : false)
  }
}

// Create log analytics workspace
resource logAnalyticsWorkspacelogicApp 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: '${name}-logicapp-loganalytics-workspace-${environmentType}'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018' // Standard
    }
  }
}

/// Log analytics workspace insights ///
resource applicationInsightsLogicApp 'Microsoft.Insights/components@2020-02-02' = {
  name: 'application-insights-${name}-logic-${environmentType}'
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    Flow_Type: 'Bluefield'
    publicNetworkAccessForIngestion: 'Enabled'
    publicNetworkAccessForQuery: 'Enabled'
    Request_Source: 'rest'
    RetentionInDays: 30
    WorkspaceResourceId: logAnalyticsWorkspacelogicApp.id
  }
}

/// App service containing the workflow runtime ///
resource siteLogicApp 'Microsoft.Web/sites@2021-02-01' = {
  name: logicAppName
  location: location
  kind: 'functionapp,workflowapp'
  properties: {
    httpsOnly: true
    siteConfig: {
      appSettings: [
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~3'
        }
        {
          name: 'FUNCTIONS_WORKER_RUNTIME'
          value: 'node'
        }
        {
          name: 'WEBSITE_NODE_DEFAULT_VERSION'
          value: '~12'
        }
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${logicAppStorage.name};AccountKey=${listKeys(logicAppStorage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
        }
        {
          name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
          value: 'DefaultEndpointsProtocol=https;AccountName=${logicAppStorage.name};AccountKey=${listKeys(logicAppStorage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
        }
        {
          name: 'WEBSITE_CONTENTSHARE'
          value: 'app-${toLower(name)}-logicservice-${toLower(environmentType)}a6e9'
        }
        {
          name: 'AzureFunctionsJobHost__extensionBundle__id'
          value: 'Microsoft.Azure.Functions.ExtensionBundle.Workflows'
        }
        {
          name: 'AzureFunctionsJobHost__extensionBundle__version'
          value: '[1.*, 2.0.0)'
        }
        {
          name: 'APP_KIND'
          value: 'workflowApp'
        }
        {
          name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
          value: applicationInsightsLogicApp.properties.InstrumentationKey
        }
        {
          name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
          value: '~2'
        }
        {
          name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
          value: applicationInsightsLogicApp.properties.ConnectionString
        }
        {
          name: 'AzureBlob_connectionString'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountTemp.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${listKeys(storageAccountTemp.id, storageAccountTemp.apiVersion).keys[0].value}'
        }
        {
          name: 'azurequeues_connectionString'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountTemp.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${listKeys(storageAccountTemp.id, storageAccountTemp.apiVersion).keys[0].value}'
        }
      ]
      use32BitWorkerProcess: true
    }
    serverFarmId: servicePlanLogicApp.id
    clientAffinityEnabled: false
  }

  /// VNET integration so flows can access storage and queue accounts ///
  resource vnetIntegration 'networkConfig' = {
    name: 'virtualNetwork'
    properties: {
      subnetResourceId: vnetIntegrationSubnetId
      swiftSupported: true
    }
  }
}
This all goes well and the Standard Logic App gets deployed.
Next, I deploy some workflows via Azure Pipelines (zip deploy) with this code:
trigger:
  branches:
    include:
      - '*'

pool:
  name: "Ubuntu hosted"

stages:
  - stage: logicAppBuild
    displayName: 'Logic App Build'
    jobs:
      - job: logic_app_build
        displayName: 'Build and publish logic app'
        steps:
          - task: CopyFiles@2
            displayName: 'Create project folder'
            inputs:
              SourceFolder: '$(System.DefaultWorkingDirectory)/logicapp'
              Contents: |
                **
              TargetFolder: 'project_output'
          - task: ArchiveFiles@2
            displayName: 'Create project zip'
            inputs:
              rootFolderOrFile: '$(System.DefaultWorkingDirectory)/project_output'
              includeRootFolder: false
              archiveType: 'zip'
              archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
              replaceExistingArchive: true
          - task: PublishPipelineArtifact@1
            displayName: 'Publish project zip artifact'
            inputs:
              targetPath: '$(Build.ArtifactStagingDirectory)'
              artifactName: 'artifectdev'
              publishLocation: 'pipeline'
  - stage: logicAppDeploy
    displayName: 'Logic app deployment'
    jobs:
      - job: logicAppDeploy
        displayName: 'Deploy the Logic apps'
        steps:
          - task: DownloadPipelineArtifact@2
            inputs:
              buildType: 'current'
              artifactName: 'artifectdev'
              targetPath: '$(Build.ArtifactStagingDirectory)'
          - task: AzureFunctionApp@1 # Add this at the end of your file
            inputs:
              azureSubscription: SC-DEV
              appType: functionApp # default is functionApp
              appName: logic-app-name-dev
              package: $(Build.ArtifactStagingDirectory)/**/*.zip
Running the IaC code first in a pipeline (called in a main.bicep with some other infra code) results in a successful deployment of the Logic App. After then running the pipeline with the zip deploy, the workflows defined in the logicapp directory are deployed fine, connections and all.
However, when the IaC pipeline is run again, all the workflows that were deployed with the zip deploy in the second pipeline are now gone, even if I don't change anything in the IaC code.
Is there any way to circumvent this? It is totally unworkable for me to have this happen every time I deploy IaC code (for instance when adding some app setting).
Sharing the resolution as discussed here, in case someone is looking into a similar issue.
For the zip deploy you need to use the AzureFunctionApp task with the workflowapp app type:
- task: AzureFunctionApp@1
  displayName: Deploy Logic App Workflows
  inputs:
    azureSubscription: ${{ variables.azureSubscription }}
    appName: $(pv_logicAppName)
    appType: 'workflowapp'
    package: '$(Pipeline.Workspace)/LogicApps/$(Build.BuildNumber).zip'
    deploymentMethod: 'zipDeploy'
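Note that the names used here ($(pv_logicAppName), the LogicApps artifact folder, $(Build.BuildNumber)) come from the answerer's own pipeline; with the build stage from the question they would map to logic-app-name-dev, artifectdev and $(Build.BuildId) respectively.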

Github action write to a repo in Node with #actions/core or #actions/github?

Learning GitHub Actions, I'm finally able to call an action from a secondary repo, for example:
org/action-playground
.github/workflows/test.yml

name: Test Write Action
on:
  push:
    branches: [main]
jobs:
  test_node_works:
    runs-on: ubuntu-latest
    name: Test if Node works
    strategy:
      matrix:
        node-version: [12.x]
    steps:
      - uses: actions/checkout@v2
        with:
          repository: org/write-file-action
          ref: main
          token: ${{ secrets.ACTION_TOKEN }} # stored in GitHub secrets created from profile settings
          args: 'TESTING'
      - name: action step
        uses: ./ # Uses an action in the root directory
        id: foo
        with:
          who-to-greet: 'Darth Vader'
      - name: output time
        run: |
          echo "The details are ${{ steps.foo.outputs.repo }}"
          echo "The time was ${{ steps.foo.outputs.time }}"
          echo "time: ${{ steps.foo.outputs.time }}" >> ./foo.md
        shell: bash
and the action is a success.
org/write-file-action
action.yml:
## https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions
name: 'Write File Action'
description: 'workflow testing'
inputs:
  who-to-greet: # id of input
    description: 'Who to greet'
    required: true
    default: './'
outputs:
  time: # id of output
    description: 'The time we greeted you'
  repo:
    description: 'user and repo'
runs:
  using: 'node12'
  main: 'dist/index.js'
branding:
  color: 'green'
  icon: 'truck' ## https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#brandingicon
index.js, which is built to dist/index.js:

const fs = require('fs')
const core = require('@actions/core')
const github = require('@actions/github')

try {
  // `who-to-greet` input defined in action metadata file
  const nameToGreet = core.getInput('who-to-greet')
  console.log(`Hello ${nameToGreet}!`)
  const time = new Date().toTimeString()
  core.setOutput('time', time)

  const repo = github.context.payload.repository.full_name
  console.log(`full name: ${repo}!`)
  core.setOutput('repo', repo)

  // Get the JSON webhook payload for the event that triggered the workflow
  const payload = JSON.stringify(github.context.payload, undefined, 2)
  console.log(`The event payload: ${payload}`)
  fs.writeFileSync('payload.json', payload) // Writes to the runner's workspace only; doesn't write to the repo
} catch (error) {
  core.setFailed(error.message)
}
package.json:
{
  "name": "wite-file-action",
  "version": "1.0.0",
  "description": "workflow testing",
  "main": "dist/index.js",
  "scripts": {
    "build": "ncc build ./index.js"
  },
  "dependencies": {
    "@actions/core": "^1.4.0",
    "@actions/github": "^5.0.0"
  },
  "devDependencies": {
    "@vercel/ncc": "^0.28.6",
    "prettier": "^2.3.2"
  }
}
but with the current workflow nothing is created in action-playground. The only way I'm able to write to the repo is from a module using the API via github-api, with something like:
const GitHub = require('github-api')

const gh = new GitHub({
  token: config.app.git_token,
}, githubUrl)

const repo = gh.getRepo(config.app.repoOwner, config.app.repoName)
const branch = config.app.repoBranch
const path = 'README.md'
const content = '#Foo Bar\nthis is foo bar'
const message = 'add foo bar to the readme'
const options = {}

repo.writeFile(
  branch,
  path,
  content,
  message,
  options
).then((r) => {
  console.log(r)
})
and passing in the repo, org or user from github.context.payload. My end goal is to eventually check whether README.md exists and, if so, overwrite it by dynamically writing a badge:
`![${github.context.payload.workflow}](https://github.com/${github.context.payload.user}/${github.context.payload.repo}/actions/workflows/${github.context.payload.workflow}.yml/badge.svg?branch=main)`
A second goal is to create a markdown file (like foo.md or payload.json), but I can't run an echo command from the action to write to the repo, which I get is Bash and not Node.
Is there a way, without using the API, to write to the repo that is calling the action with Node? Or is this only available with Bash when using run:
- name: output
  shell: bash
  run: |
    echo "time: ${{ steps.foo.outputs.time }}" >> ./time.md

If so, how do I do it?
Research:
Passing variable argument to .ps1 script not working from Github Actions
How to pass variable between two successive GitHub Actions jobs?
GitHub Action: Pass Environment Variable to into Action using PowerShell
How to create outputs on GitHub actions from bash scripts?
Self-updating GitHub Profile README with JavaScript
Workflow syntax for GitHub Actions

Docker CI not working with mongodb-memory-server

I used mongodb-memory-server to test some repository functions against Mongo, and my unit tests run successfully on my local machine. However, when the code is pushed to GitHub, the run fails. I am not sure whether the issue is the Docker config or the mongodb-memory-server version.
Here is the log from GitHub:
9W45p5LM91Vj","tmpDir":{"name":"/tmp/mongo-mem--188-9W45p5LM91Vj"},"uri":"mongodb://127.0.0.1:42823/d791a878-09ac-4ccc-896d-ea603e2676ad?"}
2021-06-05T09:45:33.351Z MongoMS:MongoBinary MongoBinary options: {
"downloadDir": "/__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries",
"platform": "linux",
"arch": "x64",
"version": "4.2.8",
"checkMD5": false
}
2021-06-05T09:45:33.356Z MongoMS:getos Trying LSB-Release
2021-06-05T09:45:33.372Z MongoMS:getos Trying OS-Release
2021-06-05T09:45:33.375Z MongoMS:MongoBinaryDownloadUrl Using "mongodb-linux-x86_64-debian92-4.2.8.tgz" as the Archive String
2021-06-05T09:45:33.375Z MongoMS:MongoBinaryDownloadUrl Using "https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1804-4.2.8.tgz" as the Download-URL
2021-06-05T09:45:33.377Z MongoMS:MongoBinaryDownload Downloading: "https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1804-4.2.8.tgz"
2021-06-05T09:45:33.377Z MongoMS:MongoBinaryDownload trying to download https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1804-4.2.8.tgz
2021-06-05T09:45:34.756Z MongoMS:MongoBinaryDownload moved /__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries/mongodb-linux-x86_64-ubuntu1804-4.2.8.tgz.downloading to /__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries/mongodb-linux-x86_64-ubuntu1804-4.2.8.tgz
2021-06-05T09:45:34.757Z MongoMS:MongoBinaryDownload extract(): /__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.2.8
2021-06-05T09:45:37.293Z MongoMS:MongoBinary MongoBinary: Download lock removed
2021-06-05T09:45:37.294Z MongoMS:MongoBinary MongoBinary: Mongod binary path: "/__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.2.8/mongod"
2021-06-05T09:45:37.309Z MongoMS:MongoInstance Mongo[42823]: Called MongoInstance._launchKiller(parent: 188, child: 203):
2021-06-05T09:45:37.323Z MongoMS:MongoInstance Mongo[42823]: STDERR: /__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.2.8/mongod: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
2021-06-05T09:45:37.324Z MongoMS:MongoInstance Mongo[42823]: Mongod instance closed with an non-0 code!
2021-06-05T09:45:37.324Z MongoMS:MongoInstance Mongo[42823]: CLOSE: 127
2021-06-05T09:45:37.325Z MongoMS:MongoInstance Mongo[42823]: MongodbInstance: Instance has failed: Mongod instance closed with code "127"
2021-06-05T09:45:37.331Z MongoMS:MongoMemoryServer Called MongoMemoryServer.stop() method
2021-06-05T09:45:37.331Z MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method
2021-06-05T09:45:37.349Z MongoMS:MongoInstance Mongo[42823]: [MongoKiller]: exit - [null,"SIGTERM"]
FAIL src/squid/squid.controller.spec.ts (9.945 s)
● Console
console.log
before each
at Object.<anonymous> (squid/squid.controller.spec.ts:19:13)
console.log
Downloading MongoDB 4.2.8: 0 % (0mb / 126.5mb)
at MongoBinaryDownload.Object.<anonymous>.MongoBinaryDownload.printDownloadProgress (../node_modules/mongodb-memory-server-core/src/util/MongoBinaryDownload.ts:424:15)
● SquidController › should be defined
Failed: "Mongod instance closed with code \"127\""
16 | let controller: SquidController;
17 |
> 18 | beforeEach(async () => {
| ^
19 | console.log('before each');
20 | const module: TestingModule = await Test.createTestingModule({
21 | imports: [
at Env.beforeEach (../node_modules/jest-jasmine2/build/jasmineAsyncInstall.js:46:24)
at Suite.<anonymous> (squid/squid.controller.spec.ts:18:3)
at Object.<anonymous> (squid/squid.controller.spec.ts:15:1)
and here is the GitHub workflow config:
name: Code quality
on:
  pull_request:
    branches:
      - develop
  push:
    branches:
      - develop
defaults:
  run:
    shell: bash
jobs:
  Code-Quality:
    name: Code quality
    runs-on: ubuntu-latest
    container: node:lts-slim
    steps:
      - uses: actions/checkout@v2
      - name: Install dependency
        run: yarn install --frozen-lockfile
      - name: Check lint and format
        run: |
          yarn format:check
          yarn lint:check
      - name: checking unit test
        run: yarn test
and here is the unit test code:
import { Test, TestingModule } from '@nestjs/testing';
import { MongooseModule } from '@nestjs/mongoose';
import { SquidController } from './squid.controller';
import { SquidService } from './squid.service';
import {
  closeInMongodConnection,
  rootMongooseTestModule,
} from '../test-utils/mongo/MongooseTestModule';
import { SquidSchema } from './model/squid.schema';

// May require additional time for downloading MongoDB binaries
jasmine.DEFAULT_TIMEOUT_INTERVAL = 600000;

describe('SquidController', () => {
  let controller: SquidController;

  beforeEach(async () => {
    console.log('before each');
    const module: TestingModule = await Test.createTestingModule({
      imports: [
        rootMongooseTestModule(),
        MongooseModule.forFeature([{ name: 'Squid', schema: SquidSchema }]),
      ],
      controllers: [SquidController],
      providers: [SquidService],
    }).compile();
    controller = module.get<SquidController>(SquidController);
  });

  it('should be defined', () => {
    expect(controller).toBeDefined();
  });

  afterAll(async () => {
    await closeInMongodConnection();
  });
});
After searching I found where the problem is. The issue is related to the Node image: MongoDB does not provide binaries that run on the slim/alpine Node images (as the log above shows, mongod fails to load libcurl.so.4 there).
We can fix it by updating the node image (container: node:14.17.0):
name: Code quality
on:
  pull_request:
    branches:
      - develop
  push:
    branches:
      - develop
defaults:
  run:
    shell: bash
jobs:
  Code-Quality:
    name: Code quality
    runs-on: ubuntu-latest
    container: node:14.17.0
    steps:
      - uses: actions/checkout@v2
      - name: Install dependency
        run: yarn install --frozen-lockfile
      - name: Check lint and format
        run: |
          yarn format:check
          yarn lint:check
      - name: checking unit test
        run: yarn test
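Alternatively, judging from the libcurl.so.4 error in the log, it should also be possible to keep the slim image and install the missing shared library before the tests run (untested sketch; libcurl4 is the Debian package that provides libcurl.so.4):

      # Untested alternative: keep node:lts-slim and install MongoDB's
      # missing runtime dependency before running the tests
      - name: Install MongoDB runtime dependencies
        run: apt-get update && apt-get install -y libcurl4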

Azure pipeline - terratest - ERROR: Please run 'az login' to setup account

I'm facing a (it seems) recurrent problem in Azure Pipelines when running terratest.
While resources are correctly created and then destroyed, when I call an azure.ResourceGroupExists function (or any other azure.xxx function) I get the following error:
--- FAIL: TestTerraform_RM_resource_group (102.30s)
resourcegroup.go:15:
Error Trace: resourcegroup.go:15
RM_resource_group_test.go:108
Error: Received unexpected error:
Invoking Azure CLI failed with the following error: ERROR: Please run 'az login' to setup account.
Test: TestTerraform_RM_resource_group
FAIL
According to some forums, it seems to be a configuration issue, and I followed all of the recommended configuration:
set environment variables for terraform:
-- ARM_CLIENT_ID
-- ARM_CLIENT_SECRET
-- ARM_SUBSCRIPTION_ID
-- ARM_TENANT_ID
set the az login in an AzureCLI task outside the Go task for terratest (using the service principal client id for this az login), as it seems that terratest needs 2 different authentications:
-- for Assert tests, it needs the ARM_CLIENT authentication
-- for Exists tests, it needs the service connection authentication
Here are the links I followed:
https://github.com/gruntwork-io/terratest/issues/454
https://github.com/gruntwork-io/terratest/tree/master/examples/azure#review-environment-variables
https://github.com/gruntwork-io/terratest/blob/master/modules/environment/envvar.go
https://blog.jcorioland.io/archives/2019/09/25/terraform-microsoft-azure-ci-docker-azure-pipeline.html
Below is my pipeline code, where TF_VAR_ARM_CLIENT_SECRET is a secret variable of the pipeline:
runOnce:
  deploy:
    steps:
      - checkout: self
      - task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
        displayName: 'Install Terraform $(TERRAFORM_VERSION)'
        inputs:
          terraformVersion: $(TERRAFORM_VERSION)
      - task: GoTool@0
        displayName: 'Use Go $(GOVERSION)'
        inputs:
          version: $(GOVERSION)
          goPath: $(GOPATH)
          goBin: $(GOBIN)
      - task: Go@0
        displayName: 'Install Go Terratest module'
        inputs:
          command: get
          arguments: '$(TF_LOG) github.com/gruntwork-io/terratest/modules/terraform'
      - task: Go@0
        displayName: 'Install Go Assert module'
        inputs:
          command: get
          arguments: '$(TF_LOG) github.com/stretchr/testify/assert'
      - task: Go@0
        displayName: 'Install Go Terratest Azure module'
        inputs:
          command: get
          arguments: '$(TF_LOG) github.com/gruntwork-io/terratest/modules/azure'
      - task: Go@0
        displayName: 'Install Go hashicorp/terraform-json module'
        inputs:
          command: get
          arguments: '$(TF_LOG) github.com/hashicorp/terraform-json'
      - task: Go@0
        displayName: 'Install Go azure-sdk-for-go module'
        inputs:
          command: get
          arguments: '$(TF_LOG) github.com/Azure/azure-sdk-for-go'
      - task: AzureCLI@2
        displayName: Azure CLI
        inputs:
          azureSubscription: $(serviceConnection)
          scriptType: ps
          scriptLocation: inlineScript
          inlineScript: |
            az login --service-principal --username $(TF_VAR_ARM_CLIENT_ID) --password $(TF_VAR_ARM_CLIENT_SECRET) --tenant 'f5ff14e7-93c8-49f7-9706-7beea059bd32'
      # Go test command
      - task: Go@0
        displayName: 'Run Go terratest for resource_Modules'
        inputs:
          command: test
          arguments: '$(TF_LOG) $(pathToTerraformRootModule)\resource_group\'
        env:
          ARM_CLIENT_SECRET: $(TF_VAR_ARM_CLIENT_SECRET) # pipeline secret variable
          ARM_CLIENT_ID: $(TF_VAR_ARM_CLIENT_ID)
          ARM_SUBSCRIPTION_ID: $(TF_VAR_ARM_SUBSCRIPTION_ID)
          ARM_TENANT_ID: $(TF_VAR_ARM_TENANT_ID)
          TF_VAR_SERVICE_PRINCIPAL_ID: $(TF_VAR_ARM_CLIENT_ID)
          TF_VAR_SERVICE_PRINCIPAL_SECRET: $(TF_VAR_ARM_CLIENT_ID)
          resource_group_name: $(storageAccountResourceGroup)
          storage_account_name: $(storageAccount)
          container_name: $(stateBlobContainer)
          key: '$(MODULE)-$(TF_VAR_APPLICATION)-${{ parameters.Environment }}.tfstate'
Below is my Go terratest code:
package RM_resource_group_Test

import (
	"os"
	"testing"

	"github.com/gruntwork-io/terratest/modules/azure"
	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

var (
	globalBackendConf = make(map[string]interface{})
	globalEnvVars     = make(map[string]string)
)

func TestTerraform_RM_resource_group(t *testing.T) {
	t.Parallel()

	// terraform directory
	fixtureFolder := "./"

	// input values
	inputStage := "demo_we"
	inputEnvironment := "DEMO"
	inputApplication := "DEMO"

	// expected value
	expectedName := "z-adf-ftnd-shrd-dm-ew1-rgp42"

	// getting env vars from environment variables
	ARM_CLIENT_ID := os.Getenv("ARM_CLIENT_ID")
	ARM_CLIENT_SECRET := os.Getenv("ARM_CLIENT_SECRET")
	ARM_SUBSCRIPTION_ID := os.Getenv("ARM_SUBSCRIPTION_ID")
	ARM_TENANT_ID := os.Getenv("ARM_TENANT_ID")

	if ARM_CLIENT_ID != "" {
		globalEnvVars["ARM_USE_MSI"] = "false"
		globalEnvVars["ARM_CLIENT_ID"] = ARM_CLIENT_ID
		globalEnvVars["ARM_CLIENT_SECRET"] = ARM_CLIENT_SECRET
		globalEnvVars["ARM_SUBSCRIPTION_ID"] = ARM_SUBSCRIPTION_ID
		globalEnvVars["ARM_TENANT_ID"] = ARM_TENANT_ID
	}

	// getting backend vars from environment variables
	resource_group_name := os.Getenv("resource_group_name")
	storage_account_name := os.Getenv("storage_account_name")
	container_name := os.Getenv("container_name")
	key := os.Getenv("key")

	if resource_group_name != "" {
		globalBackendConf["use_msi"] = false
		globalBackendConf["resource_group_name"] = resource_group_name
		globalBackendConf["storage_account_name"] = storage_account_name
		globalBackendConf["container_name"] = container_name
		globalBackendConf["key"] = key
	}

	// Use Terratest to deploy the infrastructure
	terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
		// The path to where our Terraform code is located
		TerraformDir: fixtureFolder,
		// Variables to pass to our Terraform code using -var options
		Vars: map[string]interface{}{
			"STAGE":       inputStage,
			"ENVIRONMENT": inputEnvironment,
			"APPLICATION": inputApplication,
		},
		EnvVars: globalEnvVars,
		// backend values to set when initializing Terraform
		BackendConfig: globalBackendConf,
		// Disable colors in Terraform commands so it's easier to parse stdout/stderr
		NoColor: true,
	})

	// Clean up resources with "terraform destroy". Using "defer" runs the command
	// at the end of the test, whether the test succeeds or fails.
	defer terraform.Destroy(t, terraformOptions)

	// This will run `terraform init` and `terraform apply` and fail the test if there are any errors
	terraform.InitAndApply(t, terraformOptions)

	actualName := terraform.Output(t, terraformOptions, "tested_name")
	actualReaderName := terraform.Output(t, terraformOptions, "tested_readerName")
	assert.Equal(t, expectedName, actualName)
	assert.Equal(t, expectedName, actualReaderName)

	subscriptionID := terraform.Output(t, terraformOptions, "current_subscription_id")
	exists := azure.ResourceGroupExists(t, expectedName, subscriptionID)
	assert.True(t, exists, "Resource group does not exist")
}
I'm sure I'm missing something in passing my parameters, as I always get the following error after creating and destroying resources in Azure:
--- FAIL: TestTerraform_RM_resource_group (90.75s)
resourcegroup.go:15:
Error Trace: resourcegroup.go:15
RM_resource_group_test.go:108
Error: Received unexpected error:
Invoking Azure CLI failed with the following error: ERROR: Please run 'az login' to setup account.
Test: TestTerraform_RM_resource_group
Please help, and thank you for answering.
As I figured out later, it was a configuration mistake. After some deep excavation into the Go Terratest Azure module, I found these lines, which give the full explanation:
https://github.com/gruntwork-io/terratest/blob/master/modules/azure/authorizer.go#L11
leading to
https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization#use-environment-based-authentication
So I changed my pipeline to this:
# Go test command
- task: Go@0
  displayName: 'Run Go terratest for resource_Modules'
  inputs:
    command: test
    arguments: '$(TF_LOG) $(pathToTerraformRootModule)\...'
  env:
    ARM_SUBSCRIPTION_ID: $(TF_VAR_ARM_SUBSCRIPTION_ID)
    AZURE_CLIENT_ID: $(TF_VAR_ARM_CLIENT_ID)
    AZURE_TENANT_ID: $(TF_VAR_ARM_TENANT_ID)
    AZURE_CLIENT_SECRET: $(TF_VAR_ARM_CLIENT_SECRET)
    resource_group_name: $(storageAccountResourceGroup)
    storage_account_name: $(storageAccount)
    container_name: $(stateBlobContainer)
    key: '$(MODULE)-$(TF_VAR_APPLICATION)-${{ parameters.Environment }}.tfstate'
And my Go code to this (regarding the use of the environment variables):
// getting env vars from environment variables
ARM_CLIENT_ID := os.Getenv("AZURE_CLIENT_ID")
ARM_CLIENT_SECRET := os.Getenv("AZURE_CLIENT_SECRET")
ARM_TENANT_ID := os.Getenv("AZURE_TENANT_ID")
ARM_SUBSCRIPTION_ID := os.Getenv("ARM_SUBSCRIPTION_ID")

// creating globalEnvVars for the terraform call through Terratest
if ARM_CLIENT_ID != "" {
	//globalEnvVars["ARM_USE_MSI"] = "true"
	globalEnvVars["ARM_CLIENT_ID"] = ARM_CLIENT_ID
	globalEnvVars["ARM_CLIENT_SECRET"] = ARM_CLIENT_SECRET
	globalEnvVars["ARM_SUBSCRIPTION_ID"] = ARM_SUBSCRIPTION_ID
	globalEnvVars["ARM_TENANT_ID"] = ARM_TENANT_ID
}

// getting backend vars from environment variables
resource_group_name := os.Getenv("resource_group_name")
storage_account_name := os.Getenv("storage_account_name")
container_name := os.Getenv("container_name")
key := os.Getenv("key")

// creating globalBackendConf for the terraform call through Terratest
if resource_group_name != "" {
	//globalBackendConf["use_msi"] = true
	globalBackendConf["resource_group_name"] = resource_group_name
	globalBackendConf["storage_account_name"] = storage_account_name
	globalBackendConf["container_name"] = container_name
	globalBackendConf["key"] = key
}

// Use Terratest to deploy the infrastructure
terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
	// The path to the Terraform code that will be tested
	TerraformDir: fixtureFolder,
	// Variables to pass to our Terraform code using -var options
	Vars: map[string]interface{}{
		"STAGE":       inputStage,
		"ENVIRONMENT": inputEnvironment,
		"APPLICATION": inputApplication,
		//"configuration": inputConfiguration,
	},
	// global variables for the user account
	EnvVars: globalEnvVars,
	// backend values to set when initializing Terraform
	BackendConfig: globalBackendConf,
	// Disable colors in Terraform commands so it's easier to parse stdout/stderr
	NoColor: true,
})
And everything works!
Hope this helps others.
Thanks again.
[EDIT] To be more explicit:
Go and Terraform use two different methods for Azure authentication.
** Terraform authentication is explained here:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#configuring-the-service-principal-in-terraform
** Go authentication is explained here:
https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization#use-environment-based-authentication
** Terratest uses both authentication methods depending on the work that has to be done:
azure existence tests use the Go Azure authentication:
https://github.com/gruntwork-io/terratest/blob/master/modules/azure/authorizer.go#L11
terraform commands use the Terraform authentication:
https://github.com/gruntwork-io/terratest/blob/0d654bd2ab781a52e495f61230cf892dfba9731b/modules/terraform/cmd.go#L12
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#configuring-the-service-principal-in-terraform
so both authentication methods have to be implemented.
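Put together, the env block of the test task has to carry both variable sets, derived from the same service principal (consolidated from the fixed pipeline above):

env:
  # read by the Go SDK / azure.* Terratest helpers (environment-based authentication)
  AZURE_CLIENT_ID: $(TF_VAR_ARM_CLIENT_ID)
  AZURE_CLIENT_SECRET: $(TF_VAR_ARM_CLIENT_SECRET)
  AZURE_TENANT_ID: $(TF_VAR_ARM_TENANT_ID)
  ARM_SUBSCRIPTION_ID: $(TF_VAR_ARM_SUBSCRIPTION_ID)
  # the Go test code then copies these values into ARM_CLIENT_ID, ARM_CLIENT_SECRET
  # and ARM_TENANT_ID and hands them to Terraform through EnvVars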
