Gitlab CI/CD stage test fails when fetching external resource with error: dh key too small - node.js

I'm trying to work with Gitlab CI/CD but the test stage fails with the following error:
write EPROTO 140044051654592:error:141A318A:SSL routines:tls_process_ske_dhe:dh key too small:../deps/openssl/openssl/ssl/statem/statem_clnt.c:2171:
gitlab-ci.yml:

image: node:16.15.1

stages:
  - test

test-job:
  stage: test
  script:
    - npm run test
Note that this is an integration test that calls an external resource with axios, and I have already tried setting rejectUnauthorized: false and minVersion: "TLSv1" as suggested here and here:
const axiosOptions = {
  httpsAgent: new https.Agent({
    rejectUnauthorized: false,
    minVersion: "TLSv1",
  }),
};
const axiosInstance = axios.create(axiosOptions);
const response = await axiosInstance.get('https://www.some-domain.com/some-article.html');
This is not a problem with the test itself, as it runs fine on my PC; I suspect the issue lies with the TLS configuration of the GitLab runner.
Thanks

Related

NodeJS Artillery with self-signed client certificates

I installed the latest versions of Node.js and Artillery. I want to run load tests using Artillery with this YAML:
config:
  target: "https://my.ip.address:443"
  phases:
    - duration: 60
      arrivalCount: 100
  tls:
    rejectUnauthorized: false
  client:
    key: "./key.pem"
    cert: "./certificate.pem"
    ca: "./ca.cert"
    passphrase: "mypass"
  onError:
    - log: "Error: invalid tls configuration"
  extendedMetrics: true
  https:
    extendedMetrics: true
  logging:
    level: "debug"
scenarios:
  - flow:
      - log: "Current environment is set to: {{ $environment }}"
      - get:
          url: "/myapp/"
          #sslAuth: false
          capture:
            json: "$.data"
            as: "data"
          failOnError: true
          log: "Error: capturing ${json.error}"
          check:
            - json: "$.status"
              as: "status"
              comparison: "eq"
              value: 200
              not: true
          capture:
            json: "$.error"
            as: "error"
          log: "Error: check ${error}"
plugins:
  http-ssl-auth: {}
I run Artillery with:
artillery -e production config_tests.yml
I checked the certificates; they work when used in other applications. They were generated with OpenSSL.
But all the virtual users fail with the error errors.EPROTO.
Could you please help me find a solution? Thanks in advance!

GitHub Actions: write to a repo in Node with @actions/core or @actions/github?

Learning GitHub Actions, I'm finally able to call an action from a secondary repo. Example:
org/action-playground
.github/workflows/test.yml:

name: Test Write Action
on:
  push:
    branches: [main]
jobs:
  test_node_works:
    runs-on: ubuntu-latest
    name: Test if Node works
    strategy:
      matrix:
        node-version: [12.x]
    steps:
      - uses: actions/checkout@v2
        with:
          repository: org/write-file-action
          ref: main
          token: ${{ secrets.ACTION_TOKEN }} # stored in GitHub secrets created from profile settings
          args: 'TESTING'
      - name: action step
        uses: ./ # Uses an action in the root directory
        id: foo
        with:
          who-to-greet: 'Darth Vader'
      - name: output time
        run: |
          echo "The details are ${{ steps.foo.outputs.repo }}"
          echo "The time was ${{ steps.foo.outputs.time }}"
          echo "time: ${{ steps.foo.outputs.time }}" >> ./foo.md
        shell: bash
and the action is a success.
org/write-file-action
action.yml:

## https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions
name: 'Write File Action'
description: 'workflow testing'
inputs:
  who-to-greet: # id of input
    description: 'Who to greet'
    required: true
    default: './'
outputs:
  time: # id of output
    description: 'The time we greeted you'
  repo:
    description: 'user and repo'
runs:
  using: 'node12'
  main: 'dist/index.js'
branding:
  color: 'green'
  icon: 'truck' ## https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#brandingicon
index.js, which is built to dist/index.js:
const fs = require('fs')
const core = require('@actions/core')
const github = require('@actions/github')

try {
  // `who-to-greet` input defined in action metadata file
  const nameToGreet = core.getInput('who-to-greet')
  console.log(`Hello ${nameToGreet}!`)
  const time = new Date().toTimeString()
  core.setOutput('time', time)

  const repo = github.context.payload.repository.full_name
  console.log(`full name: ${repo}!`)
  core.setOutput('repo', repo)

  // Get the JSON webhook payload for the event that triggered the workflow
  const payload = JSON.stringify(github.context.payload, undefined, 2)
  console.log(`The event payload: ${payload}`)
  fs.writeFileSync('payload.json', payload) // Doesn't write to repo
} catch (error) {
  core.setFailed(error.message)
}
package.json:

{
  "name": "write-file-action",
  "version": "1.0.0",
  "description": "workflow testing",
  "main": "dist/index.js",
  "scripts": {
    "build": "ncc build ./index.js"
  },
  "dependencies": {
    "@actions/core": "^1.4.0",
    "@actions/github": "^5.0.0"
  },
  "devDependencies": {
    "@vercel/ncc": "^0.28.6",
    "prettier": "^2.3.2"
  }
}
But with the current workflow, nothing is created in action-playground. The only way I'm able to write to the repo is from a module using the API with github-api, with something like:
const GitHub = require('github-api')
const gh = new GitHub({
  token: config.app.git_token,
}, githubUrl)
const repo = gh.getRepo(config.app.repoOwner, config.app.repoName)

const branch = config.app.repoBranch
const path = 'README.md'
const content = '#Foo Bar\nthis is foo bar'
const message = 'add foo bar to the readme'
const options = {}

repo.writeFile(
  branch,
  path,
  content,
  message,
  options
).then((r) => {
  console.log(r)
})
and passing in the repo, org, or user from github.context.payload. My end goal is to eventually check whether README.md exists, overwrite it if so, and write a badge to it dynamically:
`![${github.context.payload.workflow}](https://github.com/${github.context.payload.user}/${github.context.payload.repo}/actions/workflows/${github.context.payload.workflow}.yml/badge.svg?branch=main)`
A second goal is to create a markdown file (like foo.md or payload.json), but I can't run an echo command from the action to write to the repo, which I understand is Bash and not Node.
Is there a way, without using the API, to write to the repo that calls the action using Node? Or is this only available from Bash when using run:?
- name: output
  shell: bash
  run: |
    echo "time: ${{ steps.foo.outputs.time }}" >> ./time.md
If so, how is it done?
Research:
Passing variable argument to .ps1 script not working from Github Actions
How to pass variable between two successive GitHub Actions jobs?
GitHub Action: Pass Environment Variable to into Action using PowerShell
How to create outputs on GitHub actions from bash scripts?
Self-updating GitHub Profile README with JavaScript
Workflow syntax for GitHub Actions

Parameterisation of a Groovy pipeline for use with Jenkins

I have a Groovy pipeline that I inherited from a forked project.
I want to pass in Jenkins choice parameters as a parameterised build. At present I only want to expose the environment to run in (I will parameterise further at a later stage), so that a user can choose it from the Jenkins dropdown and run on demand.
I used the snippet generator to help.
Can someone please help with the syntax? I am using Node with a package.json to run a script, and I want to pass in either dev or uat:
properties([
  [$class: 'BuildConfigProjectProperty', name: '', namespace: '', resourceVersion: '', uid: ''],
  parameters([choice(choices: 'e2e\nuat', description: 'environment ', name: 'env')])
])

node('de-no1') {
  try {
    stage('DEV: Initialise') {
      git url: 'https://myrepo.org/mycompany/create-new-user.git', branch: 'master', credentialsId: CREDENTIALS_ID
    }
    stage('DEV: Install Dependencies') {
      sh 'npm install'
    }
    stage('${params.env}: Generate new users') {
      sh 'npm run generate:${params.env}'
      archiveArtifacts artifacts: '{params.env}-userids.txt', fingerprint: true
    }
This currently fails with:
npm ERR! missing script: generate:{params.env}
I assume you want ${params.env} replaced with a value when you call npm?
If so, you need to use double quotes (") so Groovy knows to perform string templating, i.e.:

sh "npm run generate:${params.env}"

The archiveArtifacts line needs the same treatment (it is also missing the $):

archiveArtifacts artifacts: "${params.env}-userids.txt", fingerprint: true

Serverless variable from external file nested property

I have a serverless.yml and a config file.

config file:
app: {
  port: 3000,
  db: {
    connectionString: 'xxxxx'
  },
  lambdaDeploy: {
    stage: "DEV",
    region: "es-west-1"
  }
}
I'm trying to use these variables in the yml like below:
yml
provider:
  name: aws
  runtime: nodejs6.10
  stage: ${file(./appconfiguration.json).app.stage}
  region: ${file(./appconfiguration.json).app.region}
But it doesn't read the file and falls back to the defaults instead.
Please advise. Thanks
The syntax used here is not correct.
stage: ${file(./appconfiguration.json).app.stage}
Use a colon instead:
stage: ${file(./appconfiguration.json):app.stage}
More details here: https://www.serverless.com/framework/docs/providers/aws/guide/variables/#reference-variables-in-other-files
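Putting the colon syntax together with the nesting in the file shown (where stage and region live under app.lambdaDeploy, not directly under app), the provider block would presumably look like this, assuming the file is valid JSON:

```yaml
provider:
  name: aws
  runtime: nodejs6.10
  stage: ${file(./appconfiguration.json):app.lambdaDeploy.stage}
  region: ${file(./appconfiguration.json):app.lambdaDeploy.region}
```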

Bitbucket Pipelines: use env vars in a Node.js script to deploy to S3

Right now I have a Bitbucket pipeline that works well with a single step (with options: docker: true):

- docker build --rm -f Dockerfile-deploy ./ -t deploy --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
This sets the keys in the Docker container, which then deploys to an ELB using a bash script and the AWS CLI to commit, so I don't actually try to expose the env vars myself; eb deploy certainly does, and it works.
When trying to run a pipeline with the image node:latest and the steps:

- npm i
- npm run build ( Babel transpile )
- npm run deploy ( Node script to send to S3 )
In that final step I need the Node script to have access to the env vars I've added to the Bitbucket repo's Pipelines configuration; instead I get the literal variable name:
// NodeJS config file
module.exports = {
  AWS_S3_BUCKET: process.env.AWS_S3_BUCKET || undefined,
  AWS_ACCESS_KEY: process.env.AWS_ACCESS_KEY || undefined,
  AWS_ACCESS_SECRET: process.env.AWS_ACCESS_SECRET || undefined,
}
-
// NodeJS deploy file... parts
const aws = {
  params: {
    Bucket: config.AWS_S3_BUCKET
  },
  accessKeyId: config.AWS_ACCESS_KEY,
  secretAccessKey: config.AWS_ACCESS_SECRET,
  distributionId: config.CLOUDFRONT_DISTRIBUTION_ID,
  region: "us-east-1"
}

console.log('-----START AWS-----')
console.log(aws)
console.log('------END AWS------')
Then Bitbucket Pipelines echoes this for the console.logs:
-----START AWS-----
{ params: { Bucket: '$AWS_S3_BUCKET' },
  accessKeyId: '$AWS_ACCESS_KEY',
  secretAccessKey: '$AWS_ACCESS_SECRET',
  distributionId: '$CLOUDFRONT_DISTRIBUTION_ID',
  region: 'us-east-1' }
------END AWS------
Any thoughts?
Well, my problem was that I copy-pasted the variables from AWS with a trailing space... which is a character, but certainly not part of the expected secret or key string. Oops.
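A small guard in the deploy script can catch both failure modes seen here (a literal, unexpanded '$VAR' string and copy-pasted whitespace) before anything is sent to AWS. readEnv is a hypothetical helper, not part of any library:

```javascript
// Hypothetical helper: read a required env var defensively.
function readEnv(name) {
  const raw = process.env[name];
  if (raw === undefined) {
    throw new Error(`${name} is not set in the pipeline environment`);
  }
  const value = raw.trim(); // guards against copy-pasted trailing spaces
  if (value.startsWith('$')) {
    // A literal '$AWS_S3_BUCKET' means the variable was never expanded.
    throw new Error(`${name} looks unexpanded: ${value}`);
  }
  return value;
}

// Example: simulate the copy-paste bug from the question.
process.env.AWS_S3_BUCKET = 'my-bucket ';
console.log(readEnv('AWS_S3_BUCKET')); // prints "my-bucket"
```

Failing fast with a clear message beats silently shipping a bad key to the AWS SDK and decoding its auth errors later.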
