Artillery.io: How to generate a test report for each scenario? - node.js

Artillery: How to run the scenarios sequentially and also display the results of each scenario in the same file?
I'm currently writing Node.js tests with Artillery.io to compare the performance of two endpoints I implemented. I defined two scenarios and would like to get the results of both in the same report file.
The tests do not execute sequentially, so at the end of the run I get a single combined result, making it impossible to know the performance of each endpoint individually.
config:
  target: "http://localhost:8080/api/v1"
  plugins:
    expect: {}
    metrics-by-endpoint: {}
  phases:
    - duration: 60
      arrivalRate: 2
  environments:
    dev:
      target: "https://backend.com/api/v1"
      phases:
        - duration: 60
          arrivalRate: 2
scenarios:
  - name: "Nashhorn"
    flow:
      - post:
          url: "/casting/nashhorn"
          auth:
            user: user1
            pass: user1
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
              options:
                filename: "casting_dataset"
                contentType: "application/json"
          expect:
            statusCode: 200
          capture:
            regexp: '[^]*'
            as: 'result'
      - log: 'result= {{result}}'
  - name: "Nodejs"
    flow:
      - post:
          url: "/casting/nodejs"
          auth:
            user: user1
            pass: user1
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
              options:
                filename: "casting_dataset"
                contentType: "application/json"
          expect:
            statusCode: 200
          capture:
            regexp: '[^]*'
            as: 'result'
      - log: 'result= {{result}}'
How to run the scenarios sequentially and also display the results of each scenario in the same file?
Thank you in advance for your answers

I think you're missing the weight parameter, which defines the probability of executing a scenario. If you give the first scenario a weight of 1 and the second the same value, both will have the same probability of being executed (50%).
If you give the first scenario a weight of 3 and the second a weight of 1, the second scenario will have a 25% probability of execution while the first will have a 75% probability of being executed.
Combined with the arrivalRate parameter and a rampTo value of 2, this will cause two scenarios to be executed every second; if you give both scenarios a weight of 1, they will be executed at the same time.
Look for scenario weights in the documentation:
scenarios:
  - flow:
      - log: Scenario for GET requests
      - get:
          url: /v1/url_test_1
    name: Scenario for GET requests
    weight: 1
  - flow:
      - log: Scenario for POST requests
      - post:
          json: {}
          url: /v1/url_test_2
    name: Scenario for POST
    weight: 1
I hope this helps you.

To my knowledge, there isn't a good way to do this with the existing Artillery logic.
Using this test script:
scenarios:
  - name: "test 1"
    flow:
      - post:
          url: "/postman-echo.com/get?test=123"
    weight: 1
  - name: "test 2"
    flow:
      - post:
          url: "/postman-echo.com/get?test=123"
    weight: 1
... etc...
Started phase 0 (equal weight), duration: 1s # 13:21:54(-0500) 2021-01-06
Report # 13:21:55(-0500) 2021-01-06
Elapsed time: 1 second
Scenarios launched: 20
Scenarios completed: 20
Requests completed: 20
Mean response/sec: 14.18
Response time (msec):
min: 117.2
max: 146.1
median: 128.6
p95: 144.5
p99: 146.1
Codes:
404: 20
All virtual users finished
Summary report # 13:21:55(-0500) 2021-01-06
Scenarios launched: 20
Scenarios completed: 20
Requests completed: 20
Mean response/sec: 14.18
Response time (msec):
min: 117.2
max: 146.1
median: 128.6
p95: 144.5
p99: 146.1
Scenario counts:
test 7: 4 (20%)
test 5: 2 (10%)
test 3: 1 (5%)
test 1: 4 (20%)
test 9: 2 (10%)
test 8: 3 (15%)
test 10: 2 (10%)
test 4: 1 (5%)
test 6: 1 (5%)
Codes:
404: 20
So you can see that the scenarios are weighted equally but are not running equally. I think support for this would need to be added to Artillery itself. Happy to be wrong here.
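One workaround is to split the two scenarios into separate config files and run them back to back, writing each run to its own report file. A sketch using Artillery's --output flag and report command (the file names here are hypothetical):

# Run each scenario in its own config file, one after the other
artillery run --output nashhorn-report.json nashhorn.yml
artillery run --output nodejs-report.json nodejs.yml

# Optionally render each JSON report as HTML
artillery report nashhorn-report.json
artillery report nodejs-report.json

This keeps the metrics for the two endpoints fully separate, at the cost of two runs.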

You can use the per-endpoint metrics plugin to get results per endpoint instead of aggregated results.
https://artillery.io/docs/guides/plugins/plugin-metrics-by-endpoint.html
I see you already have this in your config, but it can't be working if it isn't giving you what you need. Did you install it as well as add it to the config?
npm install artillery-plugin-metrics-by-endpoint
In terms of running sequentially, I'm not sure why you would want to, but assuming you do, you just need to define each POST as part of the same scenario instead of two different scenarios. That way the second step will only execute after the first step has responded. I believe the plugin reports per endpoint, not per scenario, so it will still give you the report you want.
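A minimal sketch of such a combined scenario, reusing the endpoints and payload structure from the question:

scenarios:
  - name: "Nashhorn then Nodejs"
    flow:
      - post:
          url: "/casting/nashhorn"
          auth:
            user: user1
            pass: user1
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
      - post:
          url: "/casting/nodejs"
          auth:
            user: user1
            pass: user1
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"

With metrics-by-endpoint enabled, the report should break out latency for /casting/nashhorn and /casting/nodejs separately even though both requests run in one scenario.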

Related

Change trialId in Google AI Platform hyperparameter tuning

I'm trying to follow this tutorial on hyperparameter tuning on AI Platform: https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-on-google-cloud-platform-is-now-faster-and-smarter.
My configuration YAML file looks like this:
trainingInput:
  hyperparameters:
    goal: MINIMIZE
    hyperparameterMetricTag: loss
    maxTrials: 4
    maxParallelTrials: 2
    params:
      - parameterName: learning_rate
        type: DISCRETE
        discreteValues:
          - 0.0005
          - 0.001
          - 0.0015
          - 0.002
The expected output:
"completedTrialCount": "4",
"trials": [
{
"trialId": "3",
"hyperparameters": {
"learning_rate": "2e-03"
},
"finalMetric": {
"trainingStep": "123456",
"objectiveValue": 0.123456
},
},
Is there any way to customize the trialId instead of the default numeric values (e.g. 1, 2, 3, 4, ...)?
It is not possible to customize the trialId, as it depends on the maxTrials parameter in your hyperparameter tuning config.
maxTrials only accepts integers, so trialId will be assigned a value in the range from 1 to your defined maxTrials.
Also, as mentioned in the example in your post, where maxTrials: 40 is set, the resulting JSON shows trialId: 35, which is within the range of maxTrials:
This indicates that 40 trials have been completed, and the best so far
is trial 35, which achieved an objective of 1.079 with the
hyperparameter values of nembeds=18 and nnsize=32.

Is there any way to reduce this Prometheus alert expression code? I have multiple similar expressions where only the source instance is different

Suppose I am getting metrics from a service in the event_processing_bucket metric, where the sources are like source=ONE, source=TWO, source=THREE, ... up to TEN.
Currently I am alerting the following way, but I have written a separate expression for each source just to get data for every single one.
Is there any way to reduce this duplicated code, so that I can write only one alert rule that alerts for each source separately based on its respective value?
Here are the Prometheus alert expressions:
- alert: ONE_SLA_GREATER_THAN_5DAYS
  expr: sum(rate(event_processing_bucket{source="ONE"}[1m])) > 5
  for: 1m
  labels:
    severity: warning
    team: mySlackChannel
  annotations:
    description: ONE_SLA is GREATER_THAN_5DAYS
    summary: ONE_SLA is GREATER_THAN_5DAYS
- alert: TWO_SLA_GREATER_THAN_5DAYS
  expr: sum(rate(event_processing_bucket{source="TWO"}[1m])) > 5
  for: 1m
  labels:
    severity: warning
    team: mySlackChannel
  annotations:
    description: TWO_SLA is GREATER_THAN_5DAYS
    summary: TWO_SLA is GREATER_THAN_5DAYS
...
- alert: TEN_SLA_GREATER_THAN_5DAYS
  expr: sum(rate(event_processing_bucket{source="TEN"}[1m])) > 5
  for: 1m
  labels:
    severity: warning
    team: mySlackChannel
  annotations:
    description: TEN_SLA is GREATER_THAN_5DAYS
    summary: TEN_SLA is GREATER_THAN_5DAYS
Please guide me on how to write this as a single expression if possible; if not, please say so.
Thanks in advance!
One way is to group by the source label:
histogram_quantile(0.95, sum(increase(event_bucket[5m])) by (le, source)) > 5
The resulting per-source values can then each trigger their own alert.
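Concretely, a single rule along these lines could replace the ten per-source rules (a sketch, assuming the metric and threshold from the question; {{ $labels.source }} is standard Prometheus alert templating):

- alert: SLA_GREATER_THAN_5DAYS
  expr: sum(rate(event_processing_bucket[1m])) by (source) > 5
  for: 1m
  labels:
    severity: warning
    team: mySlackChannel
  annotations:
    description: '{{ $labels.source }}_SLA is GREATER_THAN_5DAYS'
    summary: '{{ $labels.source }}_SLA is GREATER_THAN_5DAYS'

Because the expression aggregates by (source), one alert fires per source value, each carrying its own source label into the annotations.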

A complicated logstash pattern in Grok

I have the following 3 lines in a log that need to be grok'd for Elasticsearch through Logstash.
2020-01-27 13:30:43,536 INFO com.test.bestmatch.streamer.function.BestMatchProcessor - Best match for ID: COi0620200110450BAD5CB723457A9B4747F1727 Total Batch Processing time: 3942
2020-01-27 13:30:43,581 INFO HTTPConnection - COi0620200110450BAD5CB723457A9B4747F1727 | People: 51 | Addresses: 5935 | HTTP Query Time: 24
2020-01-27 13:30:43,698 INFO bestRoute - COi0620200110450BAD5CB723457A9B4747F1727 | Touch Points: 117 | Best Match Time 3943
I tried various grok patterns but couldn't arrive at a working one.
Edited as per request
I need the following in ES in the context of the specific log entry
1st line
ID: COi0620200110450BAD5CB723457A9B4747F1727
Total Batch Processing time: 3942
2nd Line
ID: COi0620200110450BAD5CB723457A9B4747F1727
People: 51
Addresses: 5935
HTTP Query Time: 24
3rd Line
Touch Points: 117
Best Match Time: 3943.
The output is from a Flink log. If there are flink patterns out there then please let me know.
1st line:
^%{TIMESTAMP_ISO8601:time}\s*%{LOGLEVEL:loglevel}.*ID: (?<ID>[\w\d]*).*time: (?<total_time>[\d]*)$
2nd line:
^%{TIMESTAMP_ISO8601:time}\s*%{LOGLEVEL:loglevel}.* - (?<ID>[\w]*).*People: (?<people>[\w]*).*Addresses: (?<addresses>[\d]*).*HTTP Query Time: (?<query_time>[\d]*)$
3rd line:
^%{TIMESTAMP_ISO8601:time}\s*%{LOGLEVEL:loglevel}.* - (?<ID>[\w]*).*Touch Points: (?<touch_points>[\d]*).*Best Match Time (?<best_match_time>[\d]*)$
There are many ways to parse this; this is only one approach. I would recommend adjusting the field names I used to the new Elastic Common Schema (ECS): https://www.elastic.co/guide/en/ecs/current/index.html
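To apply all three patterns in one pipeline, the grok filter accepts an array of patterns for a field and uses the first one that matches. A sketch of the corresponding Logstash filter block:

filter {
  grok {
    # Patterns are tried in order; the first match wins
    match => {
      "message" => [
        "^%{TIMESTAMP_ISO8601:time}\s*%{LOGLEVEL:loglevel}.*ID: (?<ID>[\w\d]*).*time: (?<total_time>[\d]*)$",
        "^%{TIMESTAMP_ISO8601:time}\s*%{LOGLEVEL:loglevel}.* - (?<ID>[\w]*).*People: (?<people>[\w]*).*Addresses: (?<addresses>[\d]*).*HTTP Query Time: (?<query_time>[\d]*)$",
        "^%{TIMESTAMP_ISO8601:time}\s*%{LOGLEVEL:loglevel}.* - (?<ID>[\w]*).*Touch Points: (?<touch_points>[\d]*).*Best Match Time (?<best_match_time>[\d]*)$"
      ]
    }
  }
}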

rampUser method is getting stuck in gatling 3.3

I am having issues using the rampUsers() method in my Gatling script. The run gets stuck after the following entry, which shows it made it halfway through.
Version : 3.3
================================================================================
2019-12-18 09:51:44 45s elapsed
---- Requests ------------------------------------------------------------------
> Global (OK=2 KO=0 )
> graphql / request_0 (OK=1 KO=0 )
> rest / request_0 (OK=1 KO=0 )
---- xxxSimulation ---------------------------------------------------
[##################################### ] 50%
waiting: 1 / active: 0 / done: 1
================================================================================
I am seeing the following in the log, which repeats forever while the log size keeps growing:
09:35:46.495 [GatlingSystem-akka.actor.default-dispatcher-2] DEBUG io.gatling.core.controller.inject.open.OpenWorkload - Injecting 0 users in scenario xxSimulation, continue=true
09:35:47.494 [GatlingSystem-akka.actor.default-dispatcher-6] DEBUG io.gatling.core.controller.inject.open.OpenWorkload - Injecting 0 users in scenario xxSimulation, continue=true
The above issue happens only with rampUsers and not with:
atOnceUsers()
rampUsersPerSec()
rampConcurrentUsers()
constantConcurrentUsers()
constantUsersPerSec()
incrementUsersPerSec()
Is there a way to mimic rampUsers() some other way, or is there a solution for this?
My code is very minimal:
setUp(
  scenarioBuilder.inject(
    rampUsers(2).during(1 minutes)
  )
).protocols(protocolBuilder)
I have been stuck on this for some time; my earlier post with more information can be found here.
Can any of the Gatling experts help me with this?
Thanks for looking into it.
It seems you have slightly incorrect syntax for rampUsers. You should try removing the . before during.
I have in my own script this code and it works fine:
setUp(userScenario.inject(
  // atOnceUsers(4),
  rampUsers(24) during (1 seconds))
).protocols(httpProtocol)
Also, the example in the Gatling documentation ("Open model") is without the dot:
setUp(
  scn.inject(
    nothingFor(4 seconds), // 1
    atOnceUsers(10), // 2
    rampUsers(10) during (5 seconds), // HERE
    constantUsersPerSec(20) during (15 seconds), // 4
    constantUsersPerSec(20) during (15 seconds) randomized, // 5
    rampUsersPerSec(10) to 20 during (10 minutes), // 6
    rampUsersPerSec(10) to 20 during (10 minutes) randomized, // 7
    heavisideUsers(1000) during (20 seconds) // 8
  ).protocols(httpProtocol)
)
My guess is that the syntax can't be parsed, so 0 is substituted instead. (Here is an example of rounding; not directly applicable, but for reference: gatling-user-injection-constantuserspersec)
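Applying the same fix to the snippet from the question would look like this (a sketch, reusing the builder names from the question):

setUp(
  scenarioBuilder.inject(
    rampUsers(2) during (1 minutes)
  )
).protocols(protocolBuilder)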
Also, you mentioned that the other methods work; could you paste that working code as well?

Node js server and Apache ab tool: unexpected behavior

While testing a simple node server (written with Hapi.js):
'use strict';
var Hapi = require("hapi");
var count = 0;

const server = Hapi.server({
  port: 3000,
  host: 'localhost'
});

server.route({
  method: 'GET',
  path: '/test',
  handler: (request, h) => {
    count++;
    console.log(count);
    return count;
  }
});

const init = async () => {
  await server.start();
};

process.on('unhandledRejection', (err) => {
  process.exit(1);
});

init();
Start the server:
node ./server.js
Run the Apache ab tool:
/usr/bin/ab -n 200 -c 30 localhost:3000/test
Env details:
OS: CentOS release 6.9
Node: v10.14.1
Hapi.js: 17.8.1
I found unexpected results in the case of multiple concurrent requests (-c 30): the request handler function is called more times than the number of requests to be performed (-n 200).
Ab output example:
Benchmarking localhost (be patient)
Server Software:
Server Hostname: localhost
Server Port: 3000
Document Path: /test
Document Length: 29 bytes
Concurrency Level: 30
Time taken for tests: 0.137 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Total transferred: 36081 bytes
HTML transferred: 6119 bytes
Requests per second: 1459.44 [#/sec] (mean)
Time per request: 20.556 [ms] (mean)
Time per request: 0.685 [ms] (mean, across all concurrent requests)
Transfer rate: 257.12 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 15 17 1.5 16 20
Waiting: 2 9 3.9 9 18
Total: 15 17 1.5 16 21
Percentage of the requests served within a certain time (ms)
50% 16
66% 16
75% 17
80% 18
90% 20
95% 20
98% 21
99% 21
100% 21 (longest request)
And the node server printed 211 log lines. Across various tests the mismatch varies but is always present:
-n 1000 -c 1 -> 1000 logs
-n 1000 -c 2 -> ~1000 logs
-n 1000 -c 10 -> ~1001 logs
-n 1000 -c 70 -> ~1008 logs
-n 1000 -c 1000 -> ~1020 logs
It seems that as concurrency increases, the mismatch increases.
I couldn't figure out whether the ab tool performs more HTTP requests than requested or the node server responds more times than necessary.
Could you please help?
It's very strange, and I don't get the same results as you on my machine. I would be very surprised if it were ab issuing a different number of actual requests.
Things I would try:
Write a simple server using Express rather than Hapi (see the sketch after this list). If the issue still occurs, you at least know it's not a problem with Hapi.
Intercept the network calls using Fiddler:
ab -X localhost:8888 -n 100 -c 30 http://127.0.0.1:3000/test will use the Fiddler proxy, which will then let you see the actual calls across the network interface (more details).
Wireshark, if you need more power and you're feeling brave (I'd only use it if Fiddler has let you down).
If after all of these you are still seeing the issue, then it has been narrowed down to Node itself; I'm not sure what else it could be. Try using Node 8 rather than 10.
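For reference, a minimal Express counterpart to the Hapi server above could look like this (a sketch; assumes Express is installed with npm install express):

'use strict';
// Minimal Express equivalent of the Hapi server from the question,
// counting how many times the handler runs.
const express = require('express');

const app = express();
let count = 0;

app.get('/test', (req, res) => {
  count++;
  console.log(count);
  res.send(String(count));
});

app.listen(3000, 'localhost');

If ab against this server shows the same surplus of handler calls, Hapi is ruled out as the cause.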
Using the Fiddler proxy, I found that the ab tool issues more requests than the number it is asked to perform (example: -n 200).
By running a series of consecutive tests:
# 11 consecutive times
/usr/bin/ab -n 200 -c 30 -X localhost:8888 http://localhost:3000/test
Both the proxy and the node server report a total of 2209 requests. It looks like ab is less imprecise with the proxy in the middle, but it is still imprecise.
In general, and more importantly, I never found mismatches between the requests passed through the proxy and the requests received by the node server.
Thanks!
