Analysis of nodeload result - node.js

Can anyone explain the nodeload result below?
./nl.js -c 1 -n 10000 -i 1 "http://localhost:3000/"
Server: localhost:3000
HTTP Method: GET
Document Path: /
Concurrency Level: 1
Number of requests: 10000
Body bytes transferred: 3516274
Elapsed time (s): 1172.70
Requests per second: 9.23
Mean time per request (ms): 107.95
Time per request standard deviation: 187.76
Percentages of requests served within a certain time (ms)
Min: 38
Avg: 107.9
50%: 84
95%: 141
99%: 1076
Max: 5820
How are the percentages of requests served within a certain time calculated?
Thanks
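(Not from nodeload's source, just a sketch of the usual approach, assuming nodeload does the common thing: the 50%/95%/99% rows are percentiles, computed by sorting every recorded request time and picking the value at the corresponding rank.)
function percentile(latenciesMs, p) {
    // Sort the recorded request times and return the value at the p-th percentile rank
    const sorted = [...latenciesMs].sort((a, b) => a - b);
    const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
    return sorted[index];
}

// Made-up sample; "50%: 84" in the report means half of the 10000 requests took 84 ms or less
const sample = [38, 84, 90, 141, 1076, 5820];
console.log(percentile(sample, 50));   // 90 (median of this tiny sample)
console.log(percentile(sample, 99));   // 5820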

Related

Does the httperf tool send all requests within a second even if the reported request rate/connection rate is much lower?

I'm running the following httperf command, wanting to hit the API 500 times at a rate of 500 requests per second.
./httperf --server <server IP> --port <server port> --uri <api uri> --num-conns 500 --rate 500 --ssl --add-header "<cookie values>"
The run takes far longer than a second to complete, more like 60 seconds, with a request rate/connection rate of around 8.2 req/s.
Output below:
Total: connections 500 requests 500 replies 500 test-duration 61.102 s
Connection rate: 8.2 conn/s (122.2 ms/conn, <=500 concurrent connections)
Connection time [ms]: min 60011.9 avg 60523.1 max 61029.1 median 60526.5 stddev 290.7
Connection time [ms]: connect 8.1
Connection length [replies/conn]: 1.000
Request rate: 8.2 req/s (122.2 ms/req)
Request size [B]: 3106.0
Reply rate [replies/s]: min 0.0 avg 0.0 max 0.0 stddev 0.0 (12 samples)
Reply time [ms]: response 4.3 transfer 60510.6
Reply size [B]: header 178.0 content 12910.0 footer 2.0 (total 13090.0)
Reply status: 1xx=0 2xx=500 3xx=0 4xx=0 5xx=0
CPU time [s]: user 28.27 system 32.59 (user 46.3% system 53.3% total 99.6%)
Net I/O: 129.4 KB/s (1.1*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
So does this mean that the requests are really only being sent at around 8 per second, or are they technically sent to the server, but the server is queuing them up?
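For what it's worth, the rates httperf reports are simply totals divided by the whole test duration, so by themselves they don't say when the requests were actually put on the wire. The summary numbers above reproduce as:
// Reproducing the summary numbers from the httperf output above
const connections = 500;
const testDuration = 61.102;                   // "test-duration 61.102 s"
const connRate = connections / testDuration;
console.log(connRate.toFixed(1));              // 8.2   -> "Connection rate: 8.2 conn/s"
console.log((1000 / connRate).toFixed(1));     // 122.2 -> "(122.2 ms/conn)"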

Azure FaceAPI limits iteration to 20 items

I have a list of image URLs from which I use the MS Azure Face API to extract some features from the photos. The problem is that whenever I iterate over more than 20 URLs, it seems not to work on any URL after the 20th one. There is no error shown. However, when I manually changed the range to iterate over the next 20 URLs, it worked.
Side note: on the free version, MS Azure Face allows only 20 requests/minute; however, even when I sleep for 60 s after every 10 requests, the problem still persists.
FYI, I have 360,000 URLs in total, and so far I have made only about 1,000 requests.
Can anyone help tell me why this happens and how to solve this? Thank you so much!
# My code
import time
import requests

# KEY, face_api_url and list_post are defined earlier
i = 0
for post in list_post[800:1000]:
    i += 1
    try:
        image_url = post['photo_url']
        headers = {'Ocp-Apim-Subscription-Key': KEY}
        params = {
            'returnFaceId': 'true',
            'returnFaceLandmarks': 'false',
            'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise',
        }
        response = requests.post(face_api_url, params=params, headers=headers, json={"url": image_url})
        post['face_feature'] = response.json()[0]
    except (KeyError, IndexError):
        continue
    if i % 10 == 0:
        time.sleep(60)
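One thing worth checking (a guess, not a confirmed cause): the except (KeyError, IndexError): continue block silently skips any response whose body is not a list of faces, which is exactly what a throttled or otherwise failed call would look like, so no error ever surfaces. A small helper along these lines (a sketch; the exact error codes are an assumption) would at least log what the service returns after the 20th URL:
import requests

def get_first_face(face_api_url, key, image_url, params):
    """Return the first detected face's attributes, or None, logging any API error."""
    headers = {'Ocp-Apim-Subscription-Key': key}
    response = requests.post(face_api_url, params=params, headers=headers,
                             json={"url": image_url})
    if response.status_code != 200:
        # e.g. HTTP 429 if the requests/minute quota is hit (assumption);
        # the body is then an error object rather than a list of faces
        print('Face API error:', response.status_code, response.text)
        return None
    faces = response.json()  # a list; empty if no face was detected
    return faces[0] if faces else None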
The free version has a max of 30,000 requests per month, so your 360,000 faces will take a year to run.
The standard version costs USD 1 per 1,000 requests, giving a total cost of USD 360. This option supports 10 transactions per second.
https://azure.microsoft.com/en-au/pricing/details/cognitive-services/face-api/

Artillery.io: How to generate test report for each Scenario?

Artillery: How to run the scenarios sequentially and also display the results of each scenario in the same file?
I'm currently writing a Node.js test with artillery.io to compare performance between two endpoints that I implemented. I defined two scenarios and I would like to get the result of each in the same report file.
The execution of the tests is not sequential, which means that at the end of the test I have an already combined result, and it's impossible to know the performance of each scenario individually.
config:
  target: "http://localhost:8080/api/v1"
  plugins:
    expect: {}
    metrics-by-endpoint: {}
  phases:
    - duration: 60
      arrivalRate: 2
  environments:
    dev:
      target: "https://backend.com/api/v1"
      phases:
        - duration: 60
          arrivalRate: 2
scenarios:
  - name: "Nashhorn"
    flow:
      - post:
          url: "/casting/nashhorn"
          auth:
            user: user1
            pass: user1
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
              options:
                filename: "casting_dataset"
                contentType: "application/json"
          expect:
            statusCode: 200
          capture:
            regexp: '[^]*'
            as: 'result'
      - log: 'result= {{result}}'
  - name: "Nodejs"
    flow:
      - post:
          url: "/casting/nodejs"
          auth:
            user: user1
            pass: user1
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
              options:
                filename: "casting_dataset"
                contentType: "application/json"
          expect:
            statusCode: 200
          capture:
            regexp: '[^]*'
            as: 'result'
      - log: 'result= {{result}}'
How to run the scenarios sequentially and also display the results of each scenario in the same file?
Thank you in advance for your answers
I think you are missing the weight param; this param defines the probability of executing the scenario. If you give the first scenario a weight of 1 and give the second the same value, both will have the same probability of being executed (50%).
If you give the first scenario a weight of 3 and the second one a weight of 1, the second scenario will have a 25% probability of execution while the first one will have a 75% probability of being executed.
This, combined with the arrivalRate parameter and setting the value of rampTo to 2, will cause 2 scenarios to be executed every second; if you set a weight of 1 on both scenarios, they will be executed at the same time.
Look for scenario weights in the documentation.
scenarios:
  - flow:
      - log: Scenario for GET requests
      - get:
          url: /v1/url_test_1
    name: Scenario for GET requests
    weight: 1
  - flow:
      - log: Scenario for POST requests
      - post:
          json: {}
          url: /v1/url_test_2
    name: Scenario for POST
    weight: 1
I hope this helps you.
To my knowledge, there isn't a good way to do this with the existing Artillery logic.
Using this test script:
scenarios:
  - name: "test 1"
    flow:
      - post:
          url: "/postman-echo.com/get?test=123"
    weight: 1
  - name: "test 2"
    flow:
      - post:
          url: "/postman-echo.com/get?test=123"
    weight: 1
... etc...
Started phase 0 (equal weight), duration: 1s # 13:21:54(-0500) 2021-01-06
Report # 13:21:55(-0500) 2021-01-06
Elapsed time: 1 second
Scenarios launched: 20
Scenarios completed: 20
Requests completed: 20
Mean response/sec: 14.18
Response time (msec):
min: 117.2
max: 146.1
median: 128.6
p95: 144.5
p99: 146.1
Codes:
404: 20
All virtual users finished
Summary report # 13:21:55(-0500) 2021-01-06
Scenarios launched: 20
Scenarios completed: 20
Requests completed: 20
Mean response/sec: 14.18
Response time (msec):
min: 117.2
max: 146.1
median: 128.6
p95: 144.5
p99: 146.1
Scenario counts:
test 7: 4 (20%)
test 5: 2 (10%)
test 3: 1 (5%)
test 1: 4 (20%)
test 9: 2 (10%)
test 8: 3 (15%)
test 10: 2 (10%)
test 4: 1 (5%)
test 6: 1 (5%)
Codes:
404: 20
So basically you can see that although the scenarios are weighted equally, they are not run an equal number of times. I think something would need to be added to Artillery's code itself to support this. Happy to be wrong here.
You can use the per endpoint metrics plugin to give you the results per endpoint instead of aggregated.
https://artillery.io/docs/guides/plugins/plugin-metrics-by-endpoint.html
I see you already have this in your config, but it can't be working if it isn't giving you what you need. Did you install it as well as add it to the config?
npm install artillery-plugin-metrics-by-endpoint
In terms of running sequentially, I'm not sure why you would want to, but assuming you do, you just need to define each POST as a step of the same scenario instead of two different scenarios. That way the second step will only execute after the first step has responded. I believe the plugin reports per endpoint, not per scenario, so it will still give you the report you want.
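As a rough, untested sketch of that single-scenario approach (reusing the two endpoints from the question, with auth and capture trimmed for brevity):
scenarios:
  - name: "Nashhorn then Nodejs"
    flow:
      - post:
          url: "/casting/nashhorn"
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
          expect:
            statusCode: 200
      - post:
          url: "/casting/nodejs"
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
          expect:
            statusCode: 200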

Node js server and Apache ab tool: unexpected behavior

While testing a simple node server (written with Hapi.js):
'use strict';
var Hapi = require("hapi");
var count = 0;

const server = Hapi.server({
    port: 3000,
    host: 'localhost'
});

server.route({
    method: 'GET',
    path: '/test',
    handler: (request, h) => {
        count++;
        console.log(count);
        return count;
    }
});

const init = async () => {
    await server.start();
};

process.on('unhandledRejection', (err) => {
    process.exit(1);
});

init();
start the server:
node ./server.js
run the Apache ab tool:
/usr/bin/ab -n 200 -c 30 localhost:3000/test
Env details:
OS: CentOS release 6.9
Node: v10.14.1
Hapi.js: 17.8.1
I found unexpected results in the case of multiple concurrent requests (-c 30): the request handler function is called more times than the number of requests to be performed (-n 200).
Ab output example:
Benchmarking localhost (be patient)
Server Software:
Server Hostname: localhost
Server Port: 3000
Document Path: /test
Document Length: 29 bytes
Concurrency Level: 30
Time taken for tests: 0.137 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Total transferred: 36081 bytes
HTML transferred: 6119 bytes
Requests per second: 1459.44 [#/sec] (mean)
Time per request: 20.556 [ms] (mean)
Time per request: 0.685 [ms] (mean, across all concurrent requests)
Transfer rate: 257.12 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 15 17 1.5 16 20
Waiting: 2 9 3.9 9 18
Total: 15 17 1.5 16 21
Percentage of the requests served within a certain time (ms)
50% 16
66% 16
75% 17
80% 18
90% 20
95% 20
98% 21
99% 21
100% 21 (longest request)
And the node server printed out 211 log lines. Across various tests the mismatch varies but is always present:
-n 1000 -c 1 -> 1000 logs
-n 1000 -c 2 -> ~1000 logs
-n 1000 -c 10 -> ~1001 logs
-n 1000 -c 70 -> ~1008 logs
-n 1000 -c 1000 -> ~1020 logs
It seems that as concurrency increases, the mismatch increases.
I couldn't figure out whether the ab tool performs more http requests or the node server responds more times than necessary.
Could you please help?
It's very strange, and I don't get the same results as you on my machine. I would be very surprised if it was ab that was issuing different numbers of actual requests.
Things I would try:
Write a simple server using Express rather than Hapi (see the sketch after this answer). If the issue still occurs, you at least know it's not a problem with Hapi.
Intercept the network calls using Fiddler:
ab -X localhost:8888 -n 100 -c 30 http://127.0.0.1:3000/test will use the Fiddler proxy, which will then let you see the actual calls across the network interface. more details
Wireshark if you need more power and you're feeling brave (I'd only use it if Fiddler has let you down).
If after all these you are still finding an issue, then it has been narrowed down to an issue with Node; I'm not sure what else it could be. Try using Node 8 rather than 10.
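For reference, a minimal Express counterpart to the Hapi server above could look like this (a sketch, assuming Express 4 installed via npm install express):
'use strict';
const express = require('express');

const app = express();
let count = 0;

// Same counting handler as the Hapi version, so the log count can be compared
// with ab's "Complete requests"
app.get('/test', (req, res) => {
    count++;
    console.log(count);
    res.send(String(count));
});

app.listen(3000, 'localhost');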
Using the Fiddler proxy, I found that the ab tool actually sends more requests than the number of requests to be performed (example: -n 200).
By running a series of consecutive tests:
# 11 consecutive times
/usr/bin/ab -n 200 -c 30 -X localhost:8888 http://localhost:3000/test
Both the proxy and the node server report a total of 2209 requests. It looks like ab is less imprecise with the proxy in the middle, but still imprecise.
In general, and more importantly, I never found mismatches between the requests passed through the proxy and the requests received by the node server.
Thanks!

What does this error message "Received discontinuity error" mean? From Apple's mediastreamvalidator

Using Apple's mediastreamvalidator to validate the m3u8 file, I got an error message: "Received discontinuity error", but I didn't find any explanation for this error message in https://developer.apple.com/library/ios/technotes/tn2235/_index.html
Does anybody know what this error means, and whether it will cause any issues?
My mediastreamvalidator version is Beta Version 1.1 (150608).
Below is the mediastreamvalidator's result:
--------------------------------------------------------------------------------
test_1444446455_hls_64944_116-10.m3u8
--------------------------------------------------------------------------------
Playlist Syntax: OK
Processed 15 out of 15 segments:
test_1444446455_hls_64944_116-10_00003.ts:
ERROR: (-12976) Received discontinuity error
--> Track ID: 258
test_1444446455_hls_64944_116-10_00007.ts:
ERROR: (-12976) Received discontinuity error
--> Track ID: 258
test_1444446455_hls_64944_116-10_00010.ts:
ERROR: (-12976) Received discontinuity error
--> Track ID: 258
test_1444446455_hls_64944_116-10_00014.ts:
ERROR: (-12976) Received discontinuity error
--> Track ID: 258
Average segment duration: 2.00 seconds
Playlist target bitrate: Average: 3.24 Mbits/sec, Max: 3.72 Mbits/sec
Segment bitrate: Average: 3.04 Mbits/sec, Max: 3.65 Mbits/sec
Average segment structural overhead: 76.21 kbits/sec (2.50 %)
Thank you.
A discontinuity can be caused by:
changes in encoding, such as a different file format; a different number, type, or identifiers of tracks; or different encoding parameters
non-consistent timestamp sequences such as gaps or roll-overs
Such cases must/should be indicated with an EXT-X-DISCONTINUITY tag.
Check if your encoder is producing valid output.
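For illustration, a deliberate break between segments is announced in the playlist roughly like this (a made-up minimal playlist, not taken from the stream above):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:2.0,
segment_00001.ts
#EXTINF:2.0,
segment_00002.ts
#EXT-X-DISCONTINUITY
#EXTINF:2.0,
segment_00003.ts
#EXT-X-ENDLIST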
