How to execute multiple k6 scripts via single command in loadimpact/k6? - performance-testing

As indicated in the official loadimpact/k6 documentation, we are able to execute a single k6 script as follows:
k6 run ../tests/http_get.js
How would I go about executing multiple script files in a single run? Specifically all scripts that reside in a given local directory. Something like:
k6 run ../tests/
Is this supported out of the box by k6?

Depending on your setup, there are a couple of different ways you can solve this. A straightforward way is to fork the k6 run commands from a shell script:
#!/bin/sh
k6 run test1_spec.js &
k6 run test2_spec.js &
k6 run test3_spec.js
You could easily write some more elaborate shell scripting to read everything from the /tests/ directory and run it the same way; I chose to do it like this because I had custom input parameters to pass to each specific test.
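If you prefer to keep everything in JavaScript, roughly the same idea can be expressed as a small Node.js wrapper. This is only a sketch, not a k6 script; the directory path and the .js filter are assumptions:
// run-all.js — hypothetical helper: starts `k6 run` for every .js file in
// ../tests/, in parallel, like the backgrounded shell commands above.
const { spawn } = require('child_process');
const fs = require('fs');
const path = require('path');

const dir = path.resolve(__dirname, '../tests');
for (const file of fs.readdirSync(dir).filter((f) => f.endsWith('.js'))) {
  // inherit stdio so the k6 output of each run is visible in the terminal
  spawn('k6', ['run', path.join(dir, file)], { stdio: 'inherit' });
}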
Another way would be to write a Docker Compose file to do pretty much the same thing. This starts a Docker container for each test and runs the script inside it. The k6 Docker image is nothing more than a tiny Linux image with the k6 binary added to it.
version: '3'
services:
  k6_test:
    image: loadimpact/k6
    container_name: test_k6
    volumes:
      - ./:/tests
    command: run /tests/test_spec.js
    ports:
      - "6565:6565"
  k6_test2:
    image: loadimpact/k6
    container_name: test2_k6
    volumes:
      - ./:/tests
    command: run /tests/test2_spec.js
    ports:
      - "6566:6566"
Both of these methods should allow you to run multiple tests at the same time in a CI environment as well as on your local machine.

At the moment, k6 only accepts one script file, and it runs the exported default function.
import { sleep } from "k6";
import http from "k6/http";

export default function () {
  http.get("http://test.loadimpact.com/");
  sleep(2);
}
Perhaps you can accomplish your goal by using modules.
Splitting your logic into modules helps you organize your code and lets you reuse common user flows in different tests.
Check out the k6 Modules documentation.
import { sleep } from "k6";
import mainPageUserFlow from "../cases/main-page";
import billingUserFlow from "../cases/billing";

export default function () {
  mainPageUserFlow();
  billingUserFlow();
  sleep(2);
}
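For illustration, a module such as ../cases/main-page could be as small as a single exported function; the URL and the check below are placeholders rather than anything from the original answer:
// ../cases/main-page.js — hypothetical module exporting one user flow
import http from "k6/http";
import { check } from "k6";

export default function mainPageUserFlow() {
  const res = http.get("http://test.loadimpact.com/");
  check(res, { "main page responded with 200": (r) => r.status === 200 });
}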
Additionally, you could change how the Virtual Users are distributed across the different flows in your script, as described in https://community.k6.io/t/how-to-distribute-vus-across-different-scenarios-with-k6/49
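As a rough sketch of that idea (the VU counts and durations below are arbitrary assumptions), scenarios let each flow get its own share of Virtual Users:
import mainPageUserFlow from "../cases/main-page";
import billingUserFlow from "../cases/billing";

export const options = {
  scenarios: {
    // 8 VUs exercise the main page while 2 VUs exercise billing, for one minute
    main_page: { executor: "constant-vus", vus: 8, duration: "1m", exec: "mainPage" },
    billing: { executor: "constant-vus", vus: 2, duration: "1m", exec: "billing" },
  },
};

export function mainPage() {
  mainPageUserFlow();
}

export function billing() {
  billingUserFlow();
}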

Using & will run the tests in parallel; if you want to run them sequentially and retrieve a combined exit code, I suggest:
exit_c=0
(
  k6 run script_1.js || exit_c=$?
  k6 run script_2.js || exit_c=$?
  ...
  k6 run script_n.js || exit_c=$?
  exit $exit_c
)

You can set up different scenarios and point each one to a different file containing the script to run, as per the docs here: https://k6.io/docs/using-k6/k6-options/reference/#scenarios
Here is an example:
import { Options } from 'k6/options';
// Report helpers used in handleSummary (from the k6-reporter and k6 jslib summary libraries):
import { htmlReport } from 'https://raw.githubusercontent.com/benc-uk/k6-reporter/main/dist/bundle.js';
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.1/index.js';
import { default as firstScenario } from './firstScenario';
import { default as secondScenario } from './secondScenario';

// httpReqDuration, vus and duration are assumed to be defined elsewhere
// (for example, read from environment variables or constants).
export const options: Options = {
  thresholds: {
    http_req_duration: [`p(99)<${httpReqDuration}`],
    checks: ['rate>0.80'],
  },
  scenarios: {
    scriptAuthenticatedScenario: {
      exec: 'myFirstScenario',
      executor: 'constant-vus',
      vus,
      duration,
    },
    scriptUnauthenticatedScenario: {
      exec: 'mySecondScenario',
      executor: 'constant-vus',
      vus,
      duration,
    },
  },
};

export function myFirstScenario() {
  firstScenario();
}

export function mySecondScenario() {
  secondScenario();
}

export function handleSummary(data) {
  return {
    'results/scenarios.html': htmlReport(data),
    stdout: textSummary(data, { indent: ' ', enableColors: true }),
  };
}

Related

Is it possible to install and run docker inside node container in Jenkins?

This is a somewhat complicated situation: I have Jenkins installed inside a Docker container. I'm trying to run some tests for a Node.js app, but this test environment requires docker and docker-compose to be available. At the moment, the Jenkins configuration is done through pipeline code.
So far, I've tried pulling docker inside a stage, as follows:
pipeline {
  agent {
    docker {
      image 'node'
    }
  }
  stages {
    stage("Checkout") {
      steps {
        git url: ....
      }
    }
    stage("Docker") {
      steps {
        script {
          def image = docker.image('docker')
          image.pull()
          image.inside() {
            sh 'docker --version'
            sh 'docker-compose --version'
          }
        }
      }
    }
  }
}
The error returned is 'docker: not found'. I was expecting the script to succeed because I've tried exactly the same thing with 'agent any', which had no problem, but inside the node image it doesn't seem to work.
I'm also not sure this is the right way to do it, because as I understand it, running Docker inside Docker like this is not recommended. One method I have found is that, when running docker, it is recommended to pass -v /var/run/docker.sock:/var/run/docker.sock ..., but I am currently running through docker-compose, with installation steps from https://www.jenkins.io/doc/book/installing/docker/ (instead of individual docker commands, I've combined both jenkins and jenkins-blueocean into a docker-compose file), and that did not work.
At this moment I'm out of ideas, and any solutions or suggestions for how to run both Node.js and Docker in the same environment would be greatly appreciated.
You can try to use the docker-in-docker image: https://hub.docker.com/_/docker

Jenkins parameter cannot be recognized in command line

I want to create a simple job using Node.js, GitHub and Jenkins.
There is an exchange that runs on two server addresses:
for example, us.exchange.com and eu.exchange.com.
I created an environment variable named SERVERS_LOCATION,
browser.get(`http://${process.env.SERVERS_LOCATION}.exchange.com`);
and a Jenkins parameter named SERVERS_LOCATION_JEN which can take two options: US and EU.
I also created a pipeline in Jenkins where I want to run a parameterized build by choosing one option or the other. For that I use a pipeline script in a Jenkinsfile that looks like this:
pipeline {
  agent any
  options {
    disableConcurrentBuilds()
  }
  stages {
    stage("install npm") {
      steps {
        bat "npm install"
        bat "npx webdriver-manager update --versions.chrome 76.0.3809.68"
      }
    }
    stage("executing job") {
      steps {
        bat "SERVERS_LOCATION=%SERVERS_LOCATION_JEN% npx protractor config/conf.js"
      }
    }
  }
}
The main idea is to take the chosen value from the Jenkins parameter SERVERS_LOCATION_JEN and put it into the environment variable SERVERS_LOCATION, which can then be read in the code via process.env.SERVERS_LOCATION.
But when I run this job I get an error:
'SERVERS_LOCATION' is not recognized as an internal or external command,operable program or batch file.
P.S. running that job from git-bash works fine. (Win10 Chrome browser)
Could you point me please what I am doing wrong?
You have to use the set command to assign a value to a variable in batch, so use something like the code below (quoting the assignment and chaining with && so both commands run in the same cmd session):
bat 'set "SERVERS_LOCATION=%SERVERS_LOCATION_JEN%" && npx protractor config/conf.js'

pass filePath to dockerfile as variable _ nodeJS dockerode Docker

In my case, I am creating a config.json that I need to copy from the host to my container.
I figured out that there is an option to pass args to my Dockerfile.
So the first step is to create a Dockerfile:
FROM golang
WORKDIR /go/src/app
COPY . .
# here we have the /foo directory
COPY $CONFIG_PATH ./foo/
EXPOSE $PORT
CMD ["./foo/run", "-config", "./foo/config.json"]
As you can see, I have two variables: $CONFIG_PATH and $PORT.
These two variables are dynamic and come from my docker run command.
Here I need to copy my config file from my host into my container, and I need to run my project with that config.json file.
After building the image, the second step is to get the config file path from the user and run the Docker image with these variables:
let configFilePath = '/home/baazz/baaaf/config.json'
let port = "8080"

docker.run('my_image', null, process.stdout, {
  Env: [`$CONFIG_PATH=${configFilePath}`, `$PORT=${port}`]
}).then(data => {
}).catch(err => { console.log(err) })
I am getting this error message when I am trying to execute my code.
Error opening JSON configuration (./foo/config.json): open
./foo/config.json: no such file or directory . Terminating.
You generally don’t want to COPY configuration files like this in your Docker image. You should be able to docker run the same image in multiple environments without modification.
Instead, you can use the docker run -v option to inject the correct config file when you run the image:
docker run -v $PWD/config-dev.json:/go/src/app/foo/config.json my_image
(The Dockerode home page shows an equivalent Binds option. In Docker Compose, this goes into the per-container volumes:. There’s no requirement that the two paths or file names match.)
Since file paths like this become part of the external interface to how people run your container, you generally don’t want to make them configurable at build time. Pick a fixed path and document that that’s the place to mount your custom config file.
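For completeness, here is a rough sketch of what that bind mount could look like with dockerode; the image name and paths come from the question, but the exact call shape is an assumption, so check it against the dockerode documentation:
const Docker = require('dockerode');
const docker = new Docker();

// Bind-mount the host config file over the fixed path the app expects,
// instead of baking it into the image at build time.
const configFilePath = '/home/baazz/baaaf/config.json';

docker.run('my_image', null, process.stdout, {
  HostConfig: {
    Binds: [`${configFilePath}:/go/src/app/foo/config.json:ro`],
  },
}).then(data => {
  console.log('exit status:', data[0].StatusCode);
}).catch(err => console.error(err));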

Use node.js and ANSIcolor plugin in Jenkins

I want to display colored output in Jenkins which is produced by Node.js.
Both work separately, but not combined:
Node Script
My test script test.js:
console.log(require("chalk").red("Node Red"))
Calling the test script in the shell works:
node test.js => OK
Calling a colored shell script in Jenkins works:
echo -e "\033[31mShell Red\033[0m" => OK
But calling the node script in Jenkins does not display any colors:
node test.js => no color when executed in Jenkins
For me it worked when putting
export FORCE_COLOR=1
at the top of my script.
See https://github.com/chalk/supports-color#info
Raphael's answer pointed me in the right direction. Here is my complete solution for a Jenkins Pipeline script (Scripted Pipeline):
node {
  ansiColor('xterm') {
    withEnv(['FORCE_COLOR=3']) {
      ...
      sh "some-node-script-using-chalk.js"
      ...
    }
  }
}
If you are using a Declarative Pipeline, see https://jenkins.io/doc/pipeline/tour/environment/ for how to set environment variables in a Declarative Pipeline script.
I just found the problem in my case: in the job configuration, look at the Bindings section and check the checkbox named "Color ANSI Console Output".
And it works (for me...)

Golang Mac OSX build for Docker machine

I need to run a Golang application on a Docker machine.
I'm working on Mac OS X and Docker runs on top of a Linux virtual machine, so binaries built on the Mac are not runnable in Docker.
I see two ways here:
cross-compile binaries on Mac for linux OS
copy project sources to docker, run 'go get' and 'go build' on it
The first one is hard because of CGO (it is used in some imported libraries).
The second is very slow because of the 'go get' operation.
Can you please tell me which way is the most common in this situation? Or maybe I'm doing something wrong?
Here is a solution that makes cross-compiling super easy, even with CGO.
I stumbled upon it recently after wasting a lot of time getting a new Windows build server to build my Go app.
Now I just compile it on my Mac and will create a Linux build server with it:
https://github.com/karalabe/xgo
Many thanks to Péter Szilágyi alias karalabe for this really great package!
How to use:
have Docker running
go get github.com/karalabe/xgo
xgo --targets=windows/amd64 ./
There are lots more options!
-- edit --
Almost three years later I'm not using this any more, but the Docker image I use to build my application in a Linux-based CD pipeline is still based on the Docker images used in xgo.
I use the first approach. Here is a gulp task that builds the Go code. If the production flag is set, it runs GOOS=linux CGO_ENABLED=0 go build instead of plain go build, so the binary will work inside a Docker container.
gulp.task('server:build', function () {
  var build;
  let options = {
    env: {
      'PATH': process.env.PATH,
      'GOPATH': process.env.GOPATH
    }
  }
  if (argv.prod) {
    options.env['GOOS'] = 'linux'
    options.env['CGO_ENABLED'] = '0'
    console.log("Compiling go binary to run inside Docker container")
  }
  var output = argv.prod ? conf.paths.build + '/prod/bin' : conf.paths.build + '/dev/bin';
  build = child.spawnSync('go', ['build', '-o', output, "src/backend/main.go"], options);
  if (build.stderr.length) {
    var lines = build.stderr.toString()
      .split('\n').filter(function (line) {
        return line.length
      });
    for (var l in lines)
      util.log(util.colors.red(
        'Error (go install): ' + lines[l]
      ));
    notifier.notify({
      title: 'Error (go install)',
      message: lines
    });
  }
  return build;
});
You could create a Docker container based on the OS you need for your executable and map a volume to your src directory. Run the container and build the executable from inside it. You end up with a binary that you can run on that OS.
