How to write a PromQL unit test to check that an alarm doesn't fire?

I'm starting to write unit tests for Prometheus PromQL alerts using promtool test rules. Basic tests work, but I'd also like to write tests that check whether an alarm didn't fire for a certain series of values.
Is this possible and how would I express such a test?

I think I have figured it out.
Just provide an empty exp_alerts in the test:
evaluation_interval: 1m
tests:
  - interval: 10s
    input_series:
      - series: '...'
        values: '...'
    alert_rule_test:
      - eval_time: 10m
        alertname: my_alert
        exp_alerts:
This will pass if there are no alerts, and will fail if alerts fired.
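For reference, a minimal sketch of what a complete test file could look like; the rule file name, metric, and values below are hypothetical stand-ins:

# test.yml -- run with: promtool test rules test.yml
rule_files:
  - alerts.yml   # hypothetical rule file defining my_alert (say, on up == 0)
evaluation_interval: 1m
tests:
  - interval: 10s
    input_series:
      # the target stays up for the whole window, so my_alert must not fire
      - series: 'up{job="api"}'
        values: '1+0x60'
    alert_rule_test:
      - eval_time: 10m
        alertname: my_alert
        exp_alerts: []   # empty: the test fails if any alert is firing at 10m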

Related

How to delay a job in a CI/CD gitlab pipeline until a certain time?

With start_in: 30 minutes it's possible to delay a job (e.g. a live deployment after a commit).
https://docs.gitlab.com/ee/ci/jobs/job_control.html#run-a-job-after-a-delay
Question: is it also possible to create a delay until a specific time?
E.g. so that the deployment is delayed until 5 a.m.?
Something like start_at: 05:00?
You could approximate this with a dynamic child pipeline. The job that generates the configuration can calculate how much time lies between the current time and the desired deploy time, and embed that in the generated config as the start_in parameter.
As an example:
create_deploy_pipeline:
  stage: build
  script:
    # modify this line or write a script that suits your needs
    # here, we calculate the number of seconds between now and "5AM tomorrow"
    - seconds_until_deploy=$(( $(date +%s -d "tomorrow 05:00") - $(date +%s) ))
    - |
      cat > dynamic.yml << EOF
      deploy_job:
        script:
          - ./deploy.sh
        when: delayed
        start_in: ${seconds_until_deploy} seconds
      EOF
  artifacts:
    paths:
      - dynamic.yml

deploy-pipeline:
  stage: test
  trigger:
    include:
      - artifact: dynamic.yml
        job: create_deploy_pipeline
Keep in mind that you will need to consider the timezone used by your runner (or calculate using UTC time) to get an accurate calculation.
What you are probably looking for is a scheduled pipeline. Scheduled pipelines are triggered by a cron expression, and you can limit which jobs of your pipeline definition run in a scheduled pipeline on a job-by-job basis, as sketched below.
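For instance, a deployment job can be restricted to scheduled runs with a rules clause (the job name and script are placeholders):

deploy_job:
  script:
    - ./deploy.sh
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'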
EDIT: Avoiding duplicate pipeline runs is covered in the job control documentation linked above. If you tag your deployments, you could build a command like this:
git log --oneline $(git describe --tags --abbrev=0 HEAD^)..HEAD
This will tell you whether there are any commits since the last tag, which you could use in a job control rule to decide whether to run the deployment again or not.

Gitlab JUnit test report statistic over many builds

We set up JUnit test reporting in GitLab and can see the results in the pipeline.
Is it possible in any way (maybe via API?) to extract a statistic of how often a test fails? For example:
TestX 0/20 times successful
TestY 17/20 times successful
TestZ 19/20 times successful
...
Background: we have a large number of integration tests, and some of them have timing issues that cause them to fail intermittently. I would like to identify the tests that fail most often.
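One possible approach, sketched below, is to pull the parsed JUnit results per pipeline from GitLab's test report API (GET /projects/:id/pipelines/:pipeline_id/test_report) and tally pass rates per test name. The instance URL, project ID, and token are placeholders:

# tally_flaky_tests.py -- a sketch, assuming GitLab's pipeline test report API
import collections
import requests

GITLAB = "https://gitlab.example.com/api/v4"  # placeholder instance URL
PROJECT = 123                                 # placeholder project ID
HEADERS = {"PRIVATE-TOKEN": "..."}            # a personal access token

# Fetch the 20 most recent pipelines for the project.
pipelines = requests.get(
    f"{GITLAB}/projects/{PROJECT}/pipelines",
    headers=HEADERS,
    params={"per_page": 20},
).json()

passed = collections.Counter()
total = collections.Counter()
for p in pipelines:
    # test_report returns the parsed JUnit results for one pipeline.
    report = requests.get(
        f"{GITLAB}/projects/{PROJECT}/pipelines/{p['id']}/test_report",
        headers=HEADERS,
    ).json()
    for suite in report.get("test_suites", []):
        for case in suite.get("test_cases", []):
            total[case["name"]] += 1
            if case["status"] == "success":
                passed[case["name"]] += 1

# Least successful tests first, in the question's format.
for name, n in sorted(total.items(), key=lambda kv: passed[kv[0]] / kv[1]):
    print(f"{name} {passed[name]}/{n} times successful")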

How to extract Requests/Seconds (Throughput) of a performance test using Locust?

I am running a Locust performance test against an API and I need to plot a requests/second vs. response time graph. I can see req/s as a parameter in the test results. Is there a library/class from which I can directly access this parameter?
Have you looked at using the master report / slave report event hook (depending on where you want to log it from)?
https://docs.locust.io/en/stable/api.html#locust.events.EventHook
You haven't said how you want to plot it, but we use something similar to shunt the metrics into a database for reporting.
I think you can use the _requests.csv and _distribution.csv files that are generated if you pass the --csv flag. These contain the requests/s column as well as the response times for different percentiles, plus min, max, median and average.
https://docs.locust.io/en/stable/retrieving-stats.html
https://docs.locust.io/en/stable/retrieving-stats.html
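If you would rather compute it in-process, here is a minimal sketch using the pre-1.0 Locust event API that these answers refer to (newer Locust versions expose events.request instead); it counts successful requests per one-second window:

# locustfile sketch: derive requests/second from the request_success hook.
import time
from locust import events

window_start = time.time()
window_count = 0

def on_request_success(request_type, name, response_time, response_length):
    # Called once per successful request; response_time is in milliseconds.
    global window_start, window_count
    window_count += 1
    elapsed = time.time() - window_start
    if elapsed >= 1.0:
        print(f"{window_count / elapsed:.1f} req/s, last response {response_time:.0f} ms")
        window_start, window_count = time.time(), 0

events.request_success += on_request_success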

JMeter reports are different in Jenkins

I have a JMeter test that has two thread groups. The first thread group goes out and gets auth and audit tokens. The second requires the tokens to test the APIs I'm interested in gathering performance data on. I have Listeners set up as children of the samplers in the second thread group only. Running JMeter directly, I get the results I want. But when I execute the same test from Jenkins, I get results from both thread groups. I don't want the results from the first thread group: they clutter up my graphs, and since there is only one execution of each, their performance fluctuates enough to routinely trigger my unstable/failed percentages. Is there a way to get Jenkins to report on only the listeners/samplers I want? Do I have to run one test to get the tokens and another to test? If so, how do I pass the tokens from one test to the other?
You can execute two Jenkins jobs:
The first job writes the tokens to a file using a BeanShell/JSR223 PostProcessor.
The second job reads the tokens from that file using a CSV Data Set Config.
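A minimal sketch of the writing side, as a JSR223 PostProcessor in Groovy; the JMeter variable name and file path are placeholders:

// JSR223 PostProcessor under the token sampler: append the extracted token
// (stored in a JMeter variable, here "token") to a file for the second test.
new File("/tmp/tokens.csv").append(vars.get("token") + System.lineSeparator())

The second job's CSV Data Set Config would then point at /tmp/tokens.csv to feed the tokens into its samplers.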

How to run Load Test without run duration

Can we run the load test without a fixed run duration? For example, if I am running a test with 25 users, the test should stop automatically once all the users have finished their scripts.
Set the Use Test Iterations property to True and the Test Iterations property to 25.
The first property overrides the run duration property, and the second forces the load test to execute 25 iterations in total. Since your test has 25 virtual users, the load test distributes the iterations across them, so each user executes one iteration.
Check here for more details:
Load Test Run Setting Properties - Test Iterations Properties
Test iteration setting in loadtest using vs 2010
