GitHub Action write to a repo in Node with @actions/core or @actions/github?

Learning GitHub Actions, I'm finally able to call an action from a secondary repo. Example:
org/action-playground
.github/workflows/test.yml
name: Test Write Action
on:
  push:
    branches: [main]
jobs:
  test_node_works:
    runs-on: ubuntu-latest
    name: Test if Node works
    strategy:
      matrix:
        node-version: [12.x]
    steps:
      - uses: actions/checkout@v2
        with:
          repository: org/write-file-action
          ref: main
          token: ${{ secrets.ACTION_TOKEN }} # stored in GitHub secrets created from profile settings
          args: 'TESTING'
      - name: action step
        uses: ./ # Uses an action in the root directory
        id: foo
        with:
          who-to-greet: 'Darth Vader'
      - name: output time
        run: |
          echo "The details are ${{ steps.foo.outputs.repo }}"
          echo "The time was ${{ steps.foo.outputs.time }}"
          echo "time: ${{ steps.foo.outputs.time }}" >> ./foo.md
        shell: bash
and the action is a success.
org/write-file-action
action.yml:
## https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions
name: 'Write File Action'
description: 'workflow testing'
inputs:
  who-to-greet: # id of input
    description: 'Who to greet'
    required: true
    default: './'
outputs:
  time: # id of output
    description: 'The time we greeted you'
  repo:
    description: 'user and repo'
runs:
  using: 'node12'
  main: 'dist/index.js'
branding:
  color: 'green'
  icon: 'truck' ## https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#brandingicon
index.js, which is built to dist/index.js:
const fs = require('fs')
const core = require('@actions/core')
const github = require('@actions/github')
try {
  // `who-to-greet` input defined in action metadata file
  const nameToGreet = core.getInput('who-to-greet')
  console.log(`Hello ${nameToGreet}!`)
  const time = new Date().toTimeString()
  core.setOutput('time', time)
  const repo = github.context.payload.repository.full_name
  console.log(`full name: ${repo}!`)
  core.setOutput('repo', repo)
  // Get the JSON webhook payload for the event that triggered the workflow
  const payload = JSON.stringify(github.context.payload, undefined, 2)
  console.log(`The event payload: ${payload}`)
  fs.writeFileSync('payload.json', payload) // writes to the runner's workspace only; nothing is committed back to the repo
} catch (error) {
  core.setFailed(error.message)
}
package.json:
{
  "name": "write-file-action",
  "version": "1.0.0",
  "description": "workflow testing",
  "main": "dist/index.js",
  "scripts": {
    "build": "ncc build ./index.js"
  },
  "dependencies": {
    "@actions/core": "^1.4.0",
    "@actions/github": "^5.0.0"
  },
  "devDependencies": {
    "@vercel/ncc": "^0.28.6",
    "prettier": "^2.3.2"
  }
}
but with the current workflow nothing is created in action-playground. The only way I'm able to write to the repo is from a module using the API with github-api, with something like:
const GitHub = require('github-api')
const gh = new GitHub({
  token: config.app.git_token,
}, githubUrl)
const repo = gh.getRepo(config.app.repoOwner, config.app.repoName)
const branch = config.app.repoBranch
const path = 'README.md'
const content = '#Foo Bar\nthis is foo bar'
const message = 'add foo bar to the readme'
const options = {}
repo.writeFile(
  branch,
  path,
  content,
  message,
  options
).then((r) => {
  console.log(r)
})
and passing in the repo, org or user from github.context.payload. My end goal is to eventually read README.md to see if a badge exists, overwrite it if so, and write the badge dynamically:
`![${github.context.payload.workflow}](https://github.com/${github.context.payload.user}/${github.context.payload.repo}/actions/workflows/${github.context.payload.workflow}.yml/badge.svg?branch=main)`
A second goal from this is to create a markdown file (like foo.md or payload.json), but I can't run an echo command from the action to write to the repo, which I get is Bash and not Node.
Is there a way, without using the API, to write to the repo that is calling the action with Node? Or is this only available from Bash when using run:
- name: output
  shell: bash
  run: |
    echo "time: ${{ steps.foo.outputs.time }}" >> ./time.md
If so, how do I do it?
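From my research so far, one thing that might work (a minimal, untested sketch, not confirmed): since actions/checkout persists credentials in the workspace clone, the Node action could shell out to git via @actions/exec (an extra dependency, not in the package.json above) and commit the files it wrote, assuming the workflow checked out the repo it wants to write to with a token that can push:

const exec = require('@actions/exec') // assumed extra dependency: npm i @actions/exec

// Commit a file from inside the action. Assumes the checkout step left
// pushable credentials behind (the default for actions/checkout).
async function commitFile(filePath, message) {
  // any committer identity works on the runner
  await exec.exec('git', ['config', 'user.name', 'github-actions'])
  await exec.exec('git', ['config', 'user.email', 'github-actions@github.com'])
  await exec.exec('git', ['add', filePath])
  await exec.exec('git', ['commit', '-m', message])
  await exec.exec('git', ['push']) // uses the credentials persisted by actions/checkout
}

// e.g. write the badge markdown, then commit it:
// fs.writeFileSync('README.md', badgeMarkdown)
// await commitFile('README.md', 'docs: update workflow badge')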
Research:
Passing variable argument to .ps1 script not working from Github Actions
How to pass variable between two successive GitHub Actions jobs?
GitHub Action: Pass Environment Variable into Action using PowerShell
How to create outputs on GitHub actions from bash scripts?
Self-updating GitHub Profile README with JavaScript
Workflow syntax for GitHub Actions

Related

Azure Function App infra redeployment makes existing functions in app fail because of missing requirements and also delete previous invocations data

I'm facing quite a big problem. I have a function app that I deploy with Azure Bicep in the following fashion:
param environmentType string
param location string
param storageAccountSku string
param vnetIntegrationSubnetId string
param kvName string

/*
This module contains the IaC for deploying the Premium function app
*/

/// Just a single minimum instance to start with and max scaling of 3 for dev, 5 for prd ///
var minimumElasticSize = 1
var maximumElasticSize = ((environmentType == 'prd') ? 5 : 3)
var name = 'nlp'
var functionAppName = 'function-app-${name}-${environmentType}'

/// Storage account for service ///
resource functionAppStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'st4functionapp${name}${environmentType}'
  location: location
  kind: 'StorageV2'
  sku: {
    name: storageAccountSku
  }
  properties: {
    allowBlobPublicAccess: false
    accessTier: 'Hot'
    supportsHttpsTrafficOnly: true
    minimumTlsVersion: 'TLS1_2'
  }
}

/// Premium app plan for the service ///
resource servicePlanfunctionApp 'Microsoft.Web/serverfarms@2021-03-01' = {
  name: 'plan-${name}-function-app-${environmentType}'
  location: location
  kind: 'linux'
  sku: {
    name: 'EP1'
    tier: 'ElasticPremium'
    family: 'EP'
  }
  properties: {
    reserved: true
    targetWorkerCount: minimumElasticSize
    maximumElasticWorkerCount: maximumElasticSize
    elasticScaleEnabled: true
    isSpot: false
    zoneRedundant: ((environmentType == 'prd') ? true : false)
  }
}

// Create log analytics workspace
resource logAnalyticsWorkspacefunctionApp 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: '${name}-functionapp-loganalytics-workspace-${environmentType}'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018' // Standard
    }
  }
}

/// Log analytics workspace insights ///
resource applicationInsightsfunctionApp 'Microsoft.Insights/components@2020-02-02' = {
  name: 'application-insights-${name}-function-${environmentType}'
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    Flow_Type: 'Bluefield'
    publicNetworkAccessForIngestion: 'Enabled'
    publicNetworkAccessForQuery: 'Enabled'
    Request_Source: 'rest'
    RetentionInDays: 30
    WorkspaceResourceId: logAnalyticsWorkspacefunctionApp.id
  }
}

/// App service containing the workflow runtime ///
resource sitefunctionApp 'Microsoft.Web/sites@2021-03-01' = {
  name: functionAppName
  location: location
  kind: 'functionapp,linux'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    clientAffinityEnabled: false
    httpsOnly: true
    serverFarmId: servicePlanfunctionApp.id
    siteConfig: {
      linuxFxVersion: 'python|3.9'
      minTlsVersion: '1.2'
      pythonVersion: '3.9'
      use32BitWorkerProcess: true
      appSettings: [
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~4'
        }
        {
          name: 'FUNCTIONS_WORKER_RUNTIME'
          value: 'python'
        }
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${functionAppStorage.name};AccountKey=${listKeys(functionAppStorage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
        }
        {
          name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
          value: 'DefaultEndpointsProtocol=https;AccountName=${functionAppStorage.name};AccountKey=${listKeys(functionAppStorage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
        }
        {
          name: 'WEBSITE_CONTENTSHARE'
          value: 'app-${toLower(name)}-functionservice-${toLower(environmentType)}a6e9'
        }
        {
          name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
          value: applicationInsightsfunctionApp.properties.InstrumentationKey
        }
        {
          name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
          value: '~2'
        }
        {
          name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
          value: applicationInsightsfunctionApp.properties.ConnectionString
        }
        {
          name: 'ENV'
          value: toUpper(environmentType)
        }
      ]
    }
  }

  /// VNET integration so flows can access storage and queue accounts ///
  resource vnetIntegration 'networkConfig@2022-03-01' = {
    name: 'virtualNetwork'
    properties: {
      subnetResourceId: vnetIntegrationSubnetId
      swiftSupported: true
    }
  }
}

/// Outputs for creating access policies ///
output functionAppName string = sitefunctionApp.name
output functionAppManagedIdentityId string = sitefunctionApp.identity.principalId
Output is used for giving permissions to blob/queue and some keyvault stuff. This code is a single module called in a main.bicep module and deployed via an Azure Devops pipeline.
I have a second repository in which I have some functions and which I also deploy via Azure Pipelines. This one contains three .yaml files for deploying: 2 templates (CI and CD) and 1 main pipeline, called azure-pipelines.yml, that pulls it all together:
functions-ci.yml:
parameters:
  - name: environment
    type: string
jobs:
  - job:
    displayName: 'Publish the function as .zip'
    steps:
      - task: UsePythonVersion@0
        inputs:
          versionSpec: '$(pythonVersion)'
        displayName: 'Use Python $(pythonVersion)'
      - task: CopyFiles@2
        displayName: 'Create project folder'
        inputs:
          SourceFolder: '$(System.DefaultWorkingDirectory)'
          Contents: |
            **
          TargetFolder: '$(Build.ArtifactStagingDirectory)'
      - task: Bash@3
        displayName: 'Install requirements for running function'
        inputs:
          targetType: 'inline'
          script: |
            python3 -m pip install --upgrade pip
            pip install setup
            pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
          workingDirectory: '$(Build.ArtifactStagingDirectory)'
      - task: ArchiveFiles@2
        displayName: 'Create project zip'
        inputs:
          rootFolderOrFile: '$(Build.ArtifactStagingDirectory)'
          includeRootFolder: false
          archiveType: 'zip'
          archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
          replaceExistingArchive: true
      - task: PublishPipelineArtifact@1
        displayName: 'Publish project zip artifact'
        inputs:
          targetPath: '$(Build.ArtifactStagingDirectory)'
          artifactName: 'functions$(environment)'
          publishLocation: 'pipeline'
functions-cd.yml:
parameters:
  - name: environment
    type: string
  - name: azureServiceConnection
    type: string
jobs:
  - job: worfklowsDeploy
    displayName: 'Deploy the functions'
    steps:
      # Download created artifacts, containing the zipped function codes
      - task: DownloadPipelineArtifact@2
        inputs:
          buildType: 'current'
          artifactName: 'functions$(environment)'
          targetPath: '$(Build.ArtifactStagingDirectory)'
      # Zip deploy the functions code
      - task: AzureFunctionApp@1
        inputs:
          azureSubscription: $(azureServiceConnection)
          appType: functionAppLinux
          appName: function-app-nlp-$(environment)
          package: $(Build.ArtifactStagingDirectory)/**/*.zip
          deploymentMethod: 'zipDeploy'
They are pulled together in azure-pipelines.yml:
trigger:
  branches:
    include:
      - develop
      - main
pool:
  name: "Hosted Ubuntu 1804"
variables:
  ${{ if notIn(variables['Build.SourceBranchName'], 'main') }}:
    environment: dev
    azureServiceConnection: SC-NLPDT
  ${{ if eq(variables['Build.SourceBranchName'], 'main') }}:
    environment: prd
    azureServiceConnection: SC-NLPPRD
  pythonVersion: '3.9'
stages:
  # Builds the functions as .zip
  - stage: functions_ci
    displayName: 'Functions CI'
    jobs:
      - template: ./templates/functions-ci.yml
        parameters:
          environment: $(environment)
  # Deploys .zip workflows
  - stage: functions_cd
    displayName: 'Functions CD'
    jobs:
      - template: ./templates/functions-cd.yml
        parameters:
          environment: $(environment)
          azureServiceConnection: $(azureServiceConnection)
So this successfully deploys my function app the first time around, when I have also deployed the infra code. The imports are done well, the right function app is deployed, and the code runs when I trigger it.
But when I redeploy the infra (Bicep) code, all of a sudden the newest version of the functions is gone and replaced by a previous version.
Also, running this previous version doesn't work anymore since all my requirements that were installed in the pipeline (CI part) via pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt suddenly cannot be found anymore, giving import errors (i.e. Result: Failure Exception: ModuleNotFoundError: No module named 'azure.identity'). Mind you, this version did work previously just fine.
This is a big problem for me since I need to be able to update some infra stuff (like adding an APP_SETTING) without this breaking the current deployment of functions.
I had thought about just redeploying the function automatically after an infra update, but then I still miss the previous invocations which I need to be able to see.
Am I missing something in the above code? I cannot figure out what is going wrong here that causes my functions to change on infra deployment...
Looking at the documentation:
To enable your function app to run from a package, add a WEBSITE_RUN_FROM_PACKAGE setting to your function app settings.
1 Indicates that the function app runs from a local package file deployed in the d:\home\data\SitePackages (Windows) or /home/data/SitePackages (Linux) folder of your function app.
In your case, when you deploy your function app code using AzureFunctionApp@1 and zipDeploy, this automatically adds this app setting to your function app. When you redeploy your infrastructure, the setting is removed and the function app host no longer knows where to find the code.
If you add this app setting in your bicep file this should work:
{
  name: 'WEBSITE_RUN_FROM_PACKAGE'
  value: '1'
}

How to delete Azure Static Web App branch preview environments when deleting source branch in Azure DevOps?

Background
I am using Azure DevOps for hosting the source of my web application and building/deploying the application to an Azure Static Web App.
I am using the "branch preview environments" of Static Web App like this (source):
steps:
  ...
  - task: AzureStaticWebApp@0
    inputs:
      ...
      production_branch: 'main'
This works fine so far. For example, if I use a branch "dev", a corresponding branch environment is created.
Question
How can I automatically delete the Azure Static Web App branch preview environment once the branch it was created for is deleted?
Use the Azure CLI?
The only approach I found so far is using the Azure CLI - but how do I automate it?
az staticwebapp environment delete --name my-static-app \
  --environment-name an-env-name --subscription my-sub
I solved it by creating a separate pipeline triggered by the main branch. The pipeline removes all deployments that don't have an open pull request.
Here is the pipeline, basically just calling a node script that takes care of the cleanup:
name: Cleanup static web apps
trigger:
  - main
# Add the following variables into devops:
# - DEVOPS_PAT: your personal access token for DevOps
# - AZURE_SUBSCRIPTION: the subscription in azure under which your swa lives
variables:
  NPM_CONFIG_CACHE: $(Pipeline.Workspace)/.npm
  DEVOPS_ORG_URL: "https://dev.azure.com/feedm3"
  DEVOPS_PROJECT: "azure-playground"
  AZURE_STATIC_WEBAPP_NAME: "react-app"
jobs:
  - job: cleanup_preview_environments_job
    displayName: Cleanup
    pool:
      vmImage: ubuntu-latest
    steps:
      - task: Cache@2
        inputs:
          key: 'npm | "$(Agent.OS)" | package-lock.json'
          restoreKeys: |
            npm | "$(Agent.OS)"
          path: $(NPM_CONFIG_CACHE)
        displayName: "Cache npm"
      - script: |
          npm ci
        displayName: "Install dependencies"
      - task: AzureCLI@2
        inputs:
          azureSubscription: "test-service-connection-name"
          scriptType: bash
          scriptLocation: inlineScript
          inlineScript: |
            npm run ci:cleanup-deployments
        displayName: "Cleanup outdated deployments"
This is the actual script that removes the deployments:
import { getPersonalAccessTokenHandler, WebApi } from "azure-devops-node-api";
import { exec as callbackExec } from 'child_process';
import { promisify } from 'util';

const exec = promisify(callbackExec);

const DEVOPS_ORG_URL = process.env["DEVOPS_ORG_URL"] as string;
const DEVOPS_PROJECT = process.env["DEVOPS_PROJECT"] as string;
const DEVOPS_PAT = process.env["DEVOPS_PAT"] as string;
const AZURE_SUBSCRIPTION = process.env["AZURE_SUBSCRIPTION"] as string;
const AZURE_STATIC_WEBAPP_NAME = process.env["AZURE_STATIC_WEBAPP_NAME"] as string;
const ALWAYS_DEPLOYED_BRANCHES = ['main'];
const REPO_ID = process.env['BUILD_REPOSITORY_ID'] as string;

const getAllStaticWebAppDeployments = async (): Promise<{ name: string; sourceBranch: string, hostname: string }[]> => {
  const { stdout, stderr } = await exec(`az staticwebapp environment list --name ${AZURE_STATIC_WEBAPP_NAME} --subscription ${AZURE_SUBSCRIPTION}`);
  if (stderr) {
    console.error('Command failed!', stderr);
    throw new Error(stderr);
  }
  return JSON.parse(stdout);
}

const run = async () => {
  console.log(`Cleanup outdated deployments ${JSON.stringify({ REPO_ID, DEVOPS_PROJECT, AZURE_STATIC_WEBAPP_NAME })}...`)
  const webAppDeployments = await getAllStaticWebAppDeployments();
  // connect to Azure DevOps with a personal access token
  const authHandler = getPersonalAccessTokenHandler(DEVOPS_PAT);
  const connection = new WebApi(DEVOPS_ORG_URL, authHandler);
  await connection.connect();
  const gitApi = await connection.getGitApi(`${DEVOPS_ORG_URL}/${DEVOPS_PROJECT}`);
  // status 1 is active (PullRequestStatus type)
  const activePullRequests = await gitApi.getPullRequests(REPO_ID, { status: 1 });
  const activePullRequestBranches = activePullRequests.map(pr => pr.sourceRefName).filter(Boolean).map(fullBranchName => fullBranchName!.split('/')[2]);
  // main deployment should always be alive
  activePullRequestBranches.push(...ALWAYS_DEPLOYED_BRANCHES);
  const outdatedDeployments = webAppDeployments.filter(deployment => {
    return !activePullRequestBranches.includes(deployment.sourceBranch);
  })
  console.log('Deployments to delete:', outdatedDeployments);
  for await (const deployment of outdatedDeployments) {
    const deploymentName = deployment.name;
    console.log(`Deleting deployment ${deploymentName}...`);
    /**
     * Deletion works, but ends with an irrelevant error.
     */
    try {
      const { stderr } = await exec(`az staticwebapp environment delete --name ${AZURE_STATIC_WEBAPP_NAME} --subscription ${AZURE_SUBSCRIPTION} --environment-name ${deploymentName} --yes`);
      if (stderr) {
        console.error('Could not delete deployment ', deploymentName);
      } else {
        console.log('Deleted deployment ', deploymentName);
      }
    } catch (e) {
      console.log('Deleted deployment ', deploymentName);
    }
  }
  console.log('Outdated deployments cleared!')
}

await run();
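For npm run ci:cleanup-deployments in the pipeline above to resolve, package.json needs a matching script entry. A minimal sketch (the file path and the tsx runner are assumptions here, chosen because the script uses ESM top-level await):

{
  "scripts": {
    "ci:cleanup-deployments": "tsx ./scripts/cleanup-deployments.ts"
  }
}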
The full repo can be found here: https://github.com/feedm3/learning-azure-swa-devops

Docker CI not working with mongodb-memory-server

I used mongodb-memory-server to test some repository functions in Mongo, and my unit tests run successfully on my local machine; however, when this code was pushed to GitHub, it failed. I am not sure whether the issue is about the Docker config or the mongodb-memory-server version.
Here is the log from GitHub:
9W45p5LM91Vj","tmpDir":{"name":"/tmp/mongo-mem--188-9W45p5LM91Vj"},"uri":"mongodb://127.0.0.1:42823/d791a878-09ac-4ccc-896d-ea603e2676ad?"}
2021-06-05T09:45:33.351Z MongoMS:MongoBinary MongoBinary options: {
"downloadDir": "/__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries",
"platform": "linux",
"arch": "x64",
"version": "4.2.8",
"checkMD5": false
}
2021-06-05T09:45:33.356Z MongoMS:getos Trying LSB-Release
2021-06-05T09:45:33.372Z MongoMS:getos Trying OS-Release
2021-06-05T09:45:33.375Z MongoMS:MongoBinaryDownloadUrl Using "mongodb-linux-x86_64-debian92-4.2.8.tgz" as the Archive String
2021-06-05T09:45:33.375Z MongoMS:MongoBinaryDownloadUrl Using "https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1804-4.2.8.tgz" as the Download-URL
2021-06-05T09:45:33.377Z MongoMS:MongoBinaryDownload Downloading: "https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1804-4.2.8.tgz"
2021-06-05T09:45:33.377Z MongoMS:MongoBinaryDownload trying to download https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1804-4.2.8.tgz
2021-06-05T09:45:34.756Z MongoMS:MongoBinaryDownload moved /__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries/mongodb-linux-x86_64-ubuntu1804-4.2.8.tgz.downloading to /__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries/mongodb-linux-x86_64-ubuntu1804-4.2.8.tgz
2021-06-05T09:45:34.757Z MongoMS:MongoBinaryDownload extract(): /__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.2.8
2021-06-05T09:45:37.293Z MongoMS:MongoBinary MongoBinary: Download lock removed
2021-06-05T09:45:37.294Z MongoMS:MongoBinary MongoBinary: Mongod binary path: "/__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.2.8/mongod"
2021-06-05T09:45:37.309Z MongoMS:MongoInstance Mongo[42823]: Called MongoInstance._launchKiller(parent: 188, child: 203):
2021-06-05T09:45:37.323Z MongoMS:MongoInstance Mongo[42823]: STDERR: /__w/son-git-test/son-git-test/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.2.8/mongod: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
2021-06-05T09:45:37.324Z MongoMS:MongoInstance Mongo[42823]: Mongod instance closed with an non-0 code!
2021-06-05T09:45:37.324Z MongoMS:MongoInstance Mongo[42823]: CLOSE: 127
2021-06-05T09:45:37.325Z MongoMS:MongoInstance Mongo[42823]: MongodbInstance: Instance has failed: Mongod instance closed with code "127"
2021-06-05T09:45:37.331Z MongoMS:MongoMemoryServer Called MongoMemoryServer.stop() method
2021-06-05T09:45:37.331Z MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method
2021-06-05T09:45:37.349Z MongoMS:MongoInstance Mongo[42823]: [MongoKiller]: exit - [null,"SIGTERM"]
FAIL src/squid/squid.controller.spec.ts (9.945 s)
● Console
console.log
before each
at Object.<anonymous> (squid/squid.controller.spec.ts:19:13)
console.log
Downloading MongoDB 4.2.8: 0 % (0mb / 126.5mb)
at MongoBinaryDownload.Object.<anonymous>.MongoBinaryDownload.printDownloadProgress (../node_modules/mongodb-memory-server-core/src/util/MongoBinaryDownload.ts:424:15)
● SquidController › should be defined
Failed: "Mongod instance closed with code \"127\""
16 | let controller: SquidController;
17 |
> 18 | beforeEach(async () => {
| ^
19 | console.log('before each');
20 | const module: TestingModule = await Test.createTestingModule({
21 | imports: [
at Env.beforeEach (../node_modules/jest-jasmine2/build/jasmineAsyncInstall.js:46:24)
at Suite.<anonymous> (squid/squid.controller.spec.ts:18:3)
at Object.<anonymous> (squid/squid.controller.spec.ts:15:1)
and here is the GitHub workflow config:
name: Code quality
on:
  pull_request:
    branches:
      - develop
  push:
    branches:
      - develop
defaults:
  run:
    shell: bash
jobs:
  Code-Quality:
    name: Code quality
    runs-on: ubuntu-latest
    container: node:lts-slim
    steps:
      - uses: actions/checkout@v2
      - name: Install dependency
        run: yarn install --frozen-lockfile
      - name: Check lint and format
        run: |
          yarn format:check
          yarn lint:check
      - name: checking unit test
        run: yarn test
and here is the unit test code:
import { Test, TestingModule } from '@nestjs/testing';
import { MongooseModule } from '@nestjs/mongoose';
import { SquidController } from './squid.controller';
import { SquidService } from './squid.service';
import {
  closeInMongodConnection,
  rootMongooseTestModule,
} from '../test-utils/mongo/MongooseTestModule';
import { SquidSchema } from './model/squid.schema';

// May require additional time for downloading MongoDB binaries
jasmine.DEFAULT_TIMEOUT_INTERVAL = 600000;

describe('SquidController', () => {
  let controller: SquidController;

  beforeEach(async () => {
    console.log('before each');
    const module: TestingModule = await Test.createTestingModule({
      imports: [
        rootMongooseTestModule(),
        MongooseModule.forFeature([{ name: 'Squid', schema: SquidSchema }]),
      ],
      controllers: [SquidController],
      providers: [SquidService],
    }).compile();
    controller = module.get<SquidController>(SquidController);
  });

  it('should be defined', () => {
    expect(controller).toBeDefined();
  });

  afterAll(async () => {
    await closeInMongodConnection();
  });
});
After searching I found where the problem is. The issue is related to the Node image: MongoDB does not provide binaries that run on the slim/alpine Node images (the log above shows mongod failing to load libcurl.so.4).
We can fix it by updating the container to a full Node image (container: node:14.17.0):
name: Code quality
on:
  pull_request:
    branches:
      - develop
  push:
    branches:
      - develop
defaults:
  run:
    shell: bash
jobs:
  Code-Quality:
    name: Code quality
    runs-on: ubuntu-latest
    container: node:14.17.0
    steps:
      - uses: actions/checkout@v2
      - name: Install dependency
        run: yarn install --frozen-lockfile
      - name: Check lint and format
        run: |
          yarn format:check
          yarn lint:check
      - name: checking unit test
        run: yarn test
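An alternative sketch (untested) that keeps the slim image: since the log above fails specifically on libcurl.so.4, which the Debian-based node:lts-slim image does not ship, installing it in the container before the tests may also satisfy mongod's dependency:

      - name: Install mongod shared-library dependency (assumed fix)
        run: apt-get update && apt-get install -y libcurl4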

GitHub Action using npx fails with message /usr/bin/env: 'node': No such file or directory

I'm creating a nodejs GitHub Action that relies on npx to run semantic-release:
src/index.ts (extract, bundled to dist/index.js)
import * as core from '@actions/core'
import * as exec from '@actions/exec'
import * as github from '@actions/github'

;(async () => {
  const githubRegistry: string = `https://npm.pkg.github.com/${github.context.repo.owner}`
  const githubToken: string = core.getInput('github_token', { required: true })
  await exec.exec('npx', ['semantic-release'], {
    env: {
      NPM_CONFIG_REGISTRY: githubRegistry,
      NPM_TOKEN: githubToken,
      GITHUB_TOKEN: githubToken
    }
  })
})()
action.yml (extract)
inputs:
  github_token:
    required: true
runs:
  using: node12
  main: dist/index.js
.github/workflows/release.yml
on:
  push:
    branches:
      - main
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: 12
      - run: yarn
      - uses: ./
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
When the release workflow is run on GitHub, it fails with the following output:
/opt/hostedtoolcache/node/12.19.0/x64/bin/npx semantic-release
/usr/bin/env: 'node': No such file or directory
Error: The process '/opt/hostedtoolcache/node/12.19.0/x64/bin/npx' failed with exit code 127
It looks like npx is trying to run node but cannot find it.
I would appreciate any help, thanks :)
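(A note on the env block above: ExecOptions.env in @actions/exec replaces the child process environment entirely rather than merging it with process.env, so passing only those three variables strips PATH, and npx can no longer resolve node via /usr/bin/env. A likely fix, sketched under that assumption, is to spread the existing environment back in:

await exec.exec('npx', ['semantic-release'], {
  env: {
    // keep PATH and the rest of the runner env so `#!/usr/bin/env node` resolves
    ...process.env,
    NPM_CONFIG_REGISTRY: githubRegistry,
    NPM_TOKEN: githubToken,
    GITHUB_TOKEN: githubToken
  } as { [key: string]: string } // process.env values may be undefined in TS
})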

How can I use the Jenkins Copy Artifacts Plugin from within the pipelines (jenkinsfile)?

I am trying to find an example of using the Jenkins Copy Artifacts Plugin from within Jenkins pipelines (workflows).
Can anyone point to sample Groovy code that uses it?
With a declarative Jenkinsfile, you can use the following pipeline:
pipeline {
    agent any
    stages {
        stage('push artifact') {
            steps {
                sh 'mkdir archive'
                sh 'echo test > archive/test.txt'
                zip zipFile: 'test.zip', archive: false, dir: 'archive'
                archiveArtifacts artifacts: 'test.zip', fingerprint: true
            }
        }
        stage('pull artifact') {
            steps {
                copyArtifacts filter: 'test.zip', fingerprintArtifacts: true, projectName: env.JOB_NAME, selector: specific(env.BUILD_NUMBER)
                unzip zipFile: 'test.zip', dir: './archive_new'
                sh 'cat archive_new/test.txt'
            }
        }
    }
}
Before version 1.39 of CopyArtifact, you must replace the second stage with the following (thanks @Yeroc):
stage('pull artifact') {
    steps {
        step([$class: 'CopyArtifact',
            filter: 'test.zip',
            fingerprintArtifacts: true,
            projectName: '${JOB_NAME}',
            selector: [$class: 'SpecificBuildSelector', buildNumber: '${BUILD_NUMBER}']
        ])
        unzip zipFile: 'test.zip', dir: './archive_new'
        sh 'cat archive_new/test.txt'
    }
}
With CopyArtifact, I use '${JOB_NAME}' as the project name, which is the currently running project.
The default selector used by CopyArtifact takes the last successful build number, never the current one (because it's not yet successful, or may never be). With SpecificBuildSelector you can choose '${BUILD_NUMBER}', which contains the current build number.
This pipeline works with parallel stages and can manage huge files (I'm using a 300MB file; it does not work with stash/unstash).
This pipeline works perfectly with my Jenkins 2.74, provided you have all the needed plugins.
If you are using agents in your controller and want to copy artifacts from one to another, you can use stash/unstash, for example:

stage 'build'
node {
    git 'https://github.com/cloudbees/todo-api.git'
    stash includes: 'pom.xml', name: 'pom'
}

stage name: 'test', concurrency: 3
node {
    unstash 'pom'
    sh 'cat pom.xml'
}
You can see this example in this link:
https://dzone.com/refcardz/continuous-delivery-with-jenkins-workflow
If builds are not running in the same pipeline, you can use the CopyArtifact plugin directly; here is an example: https://www.cloudbees.com/blog/copying-artifacts-between-builds-jenkins-workflow and example code:

node {
    // setup env..
    // copy the deployment unit from another Job...
    step([$class: 'CopyArtifact',
        projectName: 'webapp_build',
        filter: 'target/orders.war']);
    // deploy 'target/orders.war' to an app host
}
name = "/" + "${env.JOB_NAME}"
def archiveName = 'relNum'
try {
step($class: 'hudson.plugins.copyartifact.CopyArtifact', projectName: name, filter: archiveName)
} catch (none) {
echo 'No artifact to copy from ' + name + ' with name relNum'
writeFile file: archiveName, text: '3'
}
def content = readFile(archiveName).trim()
echo 'value archived: ' + content
try that using copy artifact plugin
