Taurus fails to execute ApacheBenchmark on Windows - multithreading

I'm using Windows 7 with the latest Taurus installation, and I downloaded an updated ApacheBenchmark from the Apache Server installation.
I'm trying to execute ApacheBenchmark using Taurus's simplest working example.
Calling bzt config.yaml fails with "Invalid number of requests":
execution:
- executor: ab
  scenario: simple

scenarios:
  simple:
    requests:
    - http://blazedemo.com/
But it failed:
16:16:04 INFO: Preparing...
16:16:05 INFO: Starting...
16:16:05 INFO: Waiting for results...
16:16:06 INFO: Did not mute console logging
16:16:06 INFO: Waiting for finish...
16:16:06 WARNING: ab tool exited with non-zero code: 1
16:16:06 WARNING: Please wait for graceful shutdown...
16:16:06 INFO: Shutting down...
16:16:06 INFO: Post-processing...
16:16:06 INFO: Test duration: 0:00:01
16:16:07 ERROR: Child Process Error: Empty results, most likely simple (ApacheBenchmarkExecutor) failed. Actual reason for this can be found in logs under C:\Users\User\2018-12-06_16-16-04.160200
16:16:07 ERROR: ab STDERR:
Invalid number of requests
When I tried the second example, "Example of hold-for usage", Taurus opened and crashed.
Is there a Taurus Windows issue or known limitation, or am I missing a configuration/execution parameter?
Note: I uninstalled the old version before installing the new one, and config.yaml is a valid YAML file.
ApacheBenchmark works standalone, for example when executing:
ab http://blazedemo.com/
Error with verbose logging (-v):
[2018-12-13 08:31:46,400 DEBUG bzt.utils] Executing shell: ['ab', '-n', '0', '-c', '0', '-d', '-r', '-l', '-g', 'Z:\\2018-12-13_08-31-45.916555\\ab.tsv', '-k', 'http://blazedemo.com/'] at Z:\
[2018-12-13 08:31:46,406 DEBUG Engine] Checking <bzt.modules.aggregator.ConsolidatingAggregator object at 0x0000000003D50F98>
[2018-12-13 08:31:46,407 DEBUG Engine.ab.TSVDataReader.FileReader] File not appeared yet: Z:\2018-12-13_08-31-45.916555\ab.tsv
[2018-12-13 08:31:46,408 DEBUG Engine.ab.TSVDataReader] Buffer len: 0; Known errors count: 0
[2018-12-13 08:31:46,409 DEBUG Engine.consolidator] Consolidator buffer[0]: dict_keys([])
[2018-12-13 08:31:46,410 DEBUG Engine] Checking <bzt.modules.monitoring.Monitoring object at 0x0000000003D65CC0>
[2018-12-13 08:31:46,736 DEBUG Engine] Checking <bzt.modules.reporting.FinalStatus object at 0x00000000045E5D68>
[2018-12-13 08:31:46,737 DEBUG Engine] Checking <bzt.modules.console.ConsoleStatusReporter object at 0x00000000045D9CC0>
[2018-12-13 08:31:46,739 INFO Engine.console] Did not mute console logging
[2018-12-13 08:31:46,832 INFO Engine.console] Waiting for finish...
[2018-12-13 08:31:46,886 DEBUG Engine] Iteration took 0.488 sec, sleeping for 0.512 sec...
[2018-12-13 08:31:47,400 DEBUG Engine] Checking <bzt.modules.provisioning.Local object at 0x0000000003D1B400>
[2018-12-13 08:31:47,403 WARNING Engine.ab] ab tool exited with non-zero code: 1
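The first debug line shows ab being started with '-n', '0', '-c', '0', and ab refuses a request count of zero, which lines up with the "Invalid number of requests" error above. A quick hypothetical check outside Taurus (not taken from the original logs) would be:

ab -n 0 -c 0 http://blazedemo.com/

This should fail with the same "Invalid number of requests" message and exit code 1, while ab -n 1 -c 1 http://blazedemo.com/ should run the request successfully.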

Works just fine on my Taurus version 1.13.1:
C:\temp>bzt -o modules.ab.path=c:/temp/ab.exe ab.yaml
17:19:41 INFO: Taurus CLI Tool v1.13.1
17:19:41 INFO: Starting with configs: ['ab.yaml']
17:19:41 INFO: Configuring...
17:19:41 INFO: Artifacts dir: C:\temp\2018-12-14_17-19-41.894000
17:19:41 INFO: Preparing...
17:19:42 WARNING: There is newer version of Taurus 1.13.2 available, consider upgrading. What's new: http://gettaurus.org/docs/Changelog/
17:19:42 INFO: Starting...
17:19:42 INFO: Waiting for results...
17:19:42 INFO: Did not mute console logging
17:19:42 INFO: Waiting for finish...
17:19:43 WARNING: Please wait for graceful shutdown...
17:19:43 INFO: Shutting down...
17:19:43 INFO: Post-processing...
17:19:43 INFO: Test duration: 0:00:01
17:19:43 INFO: Samples count: 1, 0.00% failures
17:19:43 INFO: Average times: total 0.000, latency 0.000, connect 0.000
17:19:43 INFO: Percentiles:
+---------------+---------------+
| Percentile, % | Resp. Time, s |
+---------------+---------------+
|      0.0      |     0.253     |
|      50.0     |     0.253     |
|      90.0     |     0.253     |
|      95.0     |     0.253     |
|      99.0     |     0.253     |
|      99.9     |     0.253     |
|     100.0     |     0.253     |
+---------------+---------------+
17:19:43 INFO: Request label stats:
+-----------------------+--------+---------+--------+-------+
|         label         | status |   succ  | avg_rt | error |
+-----------------------+--------+---------+--------+-------+
| http://blazedemo.com/ |   OK   | 100.00% | 0.000  |       |
+-----------------------+--------+---------+--------+-------+
17:19:43 INFO: Artifacts dir: C:\temp\2018-12-14_17-19-41.894000
17:19:43 INFO: Done performing with code: 0
It might be that you're sitting on a buggy version; you can obtain Taurus v1.13.1 as simply as:
pip install bzt==1.13.1
Just in case, there is the Taurus Support Forum where you can reach Taurus developers and maintainers; the chance of getting a more professional answer is much higher there.

The Taurus forum responded that it's a bug that needs to be fixed and suggested a workaround:
For now, you can use explicit concurrency and iterations as a workaround:
execution:
- executor: ab
  iterations: 1
  concurrency: 1
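Putting the workaround together with the original scenario, a complete config.yaml would look roughly like this (a sketch assembled from the snippets above, not a config posted in the thread):

execution:
- executor: ab
  concurrency: 1
  iterations: 1
  scenario: simple

scenarios:
  simple:
    requests:
    - http://blazedemo.com/

With explicit values set, Taurus should no longer launch ab with -n 0 -c 0 (as seen in the verbose log), which is what ab rejects as "Invalid number of requests".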

Related

Error when installing apk on emulator in WebStorm but not in Android Studio

I want to develop a React Native app in WebStorm, so I created my emulator in Android Studio and use it in WebStorm as an external tool. If I start it in WebStorm I get:
INFO | Android emulator version 31.3.10.0 (build_id 8807927) (CL:N/A)
emulator: INFO: Found systemPath C:\Users\aerith\AppData\Local\Android\Sdk\system-images\android-33\google_apis\x86_64\
INFO | Duplicate loglines will be removed, if you wish to see each indiviudal line launch with the -log-nofilter flag.
INFO | IPv4 server found: 172.17.0.11
INFO | added library vulkan-1.dll
INFO | configAndStartRenderer: setting vsync to 60 hz
INFO | injectedQemuChannel!
INFO | Informing listeners of injection.
INFO | Rootcanal has been activated.
WHPX on Windows 10.0.19044 detected.
Windows Hypervisor Platform accelerator is operational
WARNING | *** No gRPC protection active, consider launching with the -grpc-use-jwt flag.***
INFO | Started GRPC server at 127.0.0.1:8554, security: Local, auth: none
INFO | Advertising in: C:\Users\aerith\AppData\Local\Temp\avd\running\pid_18624.ini
INFO | setDisplayConfigs w 1080 h 2400 dpiX 420 dpiY 420
It also starts successfully. However, if I try to install an APK (I can install it on the same emulator in Android Studio), I get this error:
Unable to read the given APK.
Ensure that the file is readable.
Any ideas where the problem could be?
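One way to narrow this down (not suggested in the original thread) is to bypass WebStorm and install the same APK directly with adb against the running emulator; if that works, the problem is in how WebStorm invokes the install rather than in the APK or the emulator itself:

REM confirm the emulator is visible to adb (the serial is usually emulator-5554)
adb devices
REM install the APK directly; the serial and the path below are placeholders
adb -s emulator-5554 install -r C:\path\to\app-debug.apk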

TypeError: add_worker() got an unexpected keyword argument 'versions'

When I try to add a worker with the scheduler address by running the command below:
dask-worker tcp://10.142.0.3:8786
My scheduler gives "add_worker() got an unexpected keyword argument 'versions'", as shown below:
distributed.core - ERROR - add_worker() got an unexpected keyword argument 'versions'
Traceback (most recent call last):
File "/home/rsa-key-20180725/.local/lib/python3.6/site-packages/distributed/core.py", line 412, in handle_comm
result = handler(comm, **msg)
TypeError: add_worker() got an unexpected keyword argument 'versions'
But the worker didn't throw any error; it gave something like this:
distributed.nanny - INFO - Start Nanny at: 'tcp://10.142.0.6:45083'
distributed.worker - INFO - Start worker at: tcp://10.142.0.6:35275
distributed.worker - INFO - Listening to: tcp://10.142.0.6:35275
distributed.worker - INFO - Waiting to connect to: tcp://10.142.0.3:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 16
distributed.worker - INFO - Memory: 63.32 GB
distributed.worker - INFO - Local Directory: /home/rsa-key-20180725/dask-worker-space/worker-t9crpot3
distributed.worker - INFO - -------------------------------------------------
What is the problem and what could be done? Thanks in advance!
The problem could be that the dask versions on the workers (packaged into .tar.gz) differ from the scheduler, or from your local environment where you run the Python code.
For me this error message went away after installing the same version everywhere (see the commands after the table below).
Check the worker log for a warning about package versions, like:
distributed.worker - WARNING - Mismatched versions found
+-------------+-------------+-----------+----------------------------+
| Package     | This Worker | scheduler | workers                    |
+-------------+-------------+-----------+----------------------------+
| dask        | 2021.11.2   | 2021.11.2 | {'2021.11.2', '2022.01.0'} |
| distributed | 2021.11.2   | 2021.11.2 | {'2021.11.2', '2022.01.0'} |
+-------------+-------------+-----------+----------------------------+
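If the versions differ, a quick way to check and align them is something like the following (the release numbers are just the ones from the table above; use whatever version you actually target):

# run on the scheduler host, on every worker host, and in the client environment
python -c "import dask, distributed; print(dask.__version__, distributed.__version__)"

# if the versions differ, pin the same release everywhere, for example:
pip install "dask==2021.11.2" "distributed==2021.11.2"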

Azure ML Workbench Kubernetes Deployment Failed

I am trying to deploy a prediction web service to Azure using the ML Workbench process in cluster mode, following this tutorial (https://learn.microsoft.com/en-us/azure/machine-learning/preview/tutorial-classifying-iris-part-3#prepare-to-operationalize-locally).
The model gets sent to the manifest, along with the scoring script and schema, but creating the service fails:
Creating service..........................................................
Error occurred: {'Error': {'Code': 'KubernetesDeploymentFailed', 'Details': [{'Message': 'Back-off 40s restarting failed container=...pod=...', 'Code': 'CrashLoopBackOff'}], 'StatusCode': 400, 'Message': 'Kubernetes Deployment failed'}, 'OperationType': 'Service', 'State': 'Failed', 'Id': '...', 'ResourceLocation': '/api/subscriptions/...', 'CreatedTime': '2017-10-26T20:30:49.77362Z', 'EndTime': '2017-10-26T20:36:40.186369Z'}
Here is the result of checking the ML service realtime logs:
C:\Users\userguy\Documents\azure_ml_workbench\projecto>az ml service logs realtime -i projecto
2017-10-26 20:47:16,118 CRIT Supervisor running as root (no user in config file)
2017-10-26 20:47:16,120 INFO supervisord started with pid 1
2017-10-26 20:47:17,123 INFO spawned: 'rsyslog' with pid 9
2017-10-26 20:47:17,124 INFO spawned: 'program_exit' with pid 10
2017-10-26 20:47:17,124 INFO spawned: 'nginx' with pid 11
2017-10-26 20:47:17,125 INFO spawned: 'gunicorn' with pid 12
2017-10-26 20:47:18,160 INFO success: rsyslog entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-10-26 20:47:18,160 INFO success: program_exit entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-10-26 20:47:22,164 INFO success: nginx entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2017-10-26T20:47:22.519159Z, INFO, 00000000-0000-0000-0000-000000000000, , Starting gunicorn 19.6.0
2017-10-26T20:47:22.520097Z, INFO, 00000000-0000-0000-0000-000000000000, , Listening at: http://127.0.0.1:9090 (12)
2017-10-26T20:47:22.520375Z, INFO, 00000000-0000-0000-0000-000000000000, , Using worker: sync
2017-10-26T20:47:22.521757Z, INFO, 00000000-0000-0000-0000-000000000000, , worker timeout is set to 300
2017-10-26T20:47:22.522646Z, INFO, 00000000-0000-0000-0000-000000000000, , Booting worker with pid: 22
2017-10-26 20:47:27,669 WARN received SIGTERM indicating exit request
2017-10-26 20:47:27,669 INFO waiting for nginx, gunicorn, rsyslog, program_exit to die
2017-10-26T20:47:27.669556Z, INFO, 00000000-0000-0000-0000-000000000000, , Handling signal: term
2017-10-26 20:47:30,673 INFO waiting for nginx, gunicorn, rsyslog, program_exit to die
2017-10-26 20:47:33,675 INFO waiting for nginx, gunicorn, rsyslog, program_exit to die
Initializing logger
2017-10-26T20:47:36.564469Z, INFO, 00000000-0000-0000-0000-000000000000, , Starting up app insights client
2017-10-26T20:47:36.564991Z, INFO, 00000000-0000-0000-0000-000000000000, , Starting up request id generator
2017-10-26T20:47:36.565316Z, INFO, 00000000-0000-0000-0000-000000000000, , Starting up app insight hooks
2017-10-26T20:47:36.565642Z, INFO, 00000000-0000-0000-0000-000000000000, , Invoking user's init function
2017-10-26 20:47:36.715933: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-26 20:47:36,716 INFO waiting for nginx, gunicorn, rsyslog, program_exit to die
2017-10-26 20:47:36.716376: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-26 20:47:36.716542: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-10-26 20:47:36.716703: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-26 20:47:36.716860: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
this is the init
2017-10-26T20:47:37.551940Z, INFO, 00000000-0000-0000-0000-000000000000, , Users's init has completed successfully
Using TensorFlow backend.
2017-10-26T20:47:37.553751Z, INFO, 00000000-0000-0000-0000-000000000000, , Worker exiting (pid: 22)
2017-10-26T20:47:37.885303Z, INFO, 00000000-0000-0000-0000-000000000000, , Shutting down: Master
2017-10-26 20:47:37,885 WARN killing 'gunicorn' (12) with SIGKILL
2017-10-26 20:47:37,886 INFO stopped: gunicorn (terminated by SIGKILL)
2017-10-26 20:47:37,889 INFO stopped: nginx (exit status 0)
2017-10-26 20:47:37,890 INFO stopped: program_exit (terminated by SIGTERM)
2017-10-26 20:47:37,891 INFO stopped: rsyslog (exit status 0)
Received 41 lines of log
My best guess is there's something silent happening to cause "WARN received SIGTERM indicating exit request". The rest of the scoring.py script seems to kick off: TensorFlow gets initiated and the "this is the init" print statement appears.
http://127.0.0.1:63437 is accessible from my local machine, but the UI endpoint is blank.
Any ideas on how to get this up and running in an Azure cluster? I'm not very familiar with how Kubernetes works, so any basic debugging guidance would be appreciated.
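For generic CrashLoopBackOff debugging (not from the original exchange, and assuming you have kubectl access to the cluster; the pod and namespace names below are placeholders), the usual first steps are:

# list pods and find the one that is crash-looping
kubectl get pods --all-namespaces

# show events for that pod (image pulls, probe failures, restart reasons)
kubectl describe pod <pod-name> -n <namespace>

# show the logs of the previous (crashed) container instance
kubectl logs <pod-name> -n <namespace> --previous

In this case the previous container's log should show whether the gunicorn worker exit seen above is what is killing the pod.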
We discovered a bug in our system that could have caused this. The fix was deployed last night. Can you please try again and let us know if you still encounter this issue?

Emulator is not running after AOSP build on Ubuntu 14.04

After building successfully with ~$ make -j, when I run ~$ emulator it shows the problem below.
I am using Ubuntu 14.04.
sh: 1: glxinfo: not found
emulator: WARNING: system partition size adjusted to match image file (1792 MB > 200 MB)
emulator: WARNING: data partition size adjusted to match image file (550 MB > 200 MB)
sh: 1: glxinfo: not found
emulator: WARNING: encryption is off
X Error of failed request: BadAlloc (insufficient resources for operation)
  Major opcode of failed request: 149 ()
  Minor opcode of failed request: 2
  Serial number of failed request: 35
  Current serial number in output stream: 36
QObject::~QObject: Timers cannot be stopped from another thread
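The BadAlloc is an X11/host-GL error, and the "glxinfo: not found" lines mean the emulator cannot probe host OpenGL at all. Two things commonly worth trying here (a sketch, not from the original post):

# glxinfo comes from the mesa-utils package on Ubuntu
sudo apt-get install mesa-utils

# fall back to software rendering instead of host GPU acceleration
emulator -gpu off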

Unable to debug JMeter through CLI

I'm trying to run JMeter through the command line on a CentOS VM like so:
./jmeter -n -t temp_cli/sampler.jmx -l temp_cli/results.xml -j temp_cli/j.log
I get :
INFO - jmeter.threads.JMeterThread: Thread is done: sampler flow 1-1
INFO - jmeter.threads.JMeterThread: Thread finished: sampler flow 1-1
DEBUG - jmeter.threads.ThreadGroup: Ending thread sampler 1-1
summary = 1 in 1s = 2.0/s Avg: 434 Min: 434 Max: 434 Err: 1 (100.00%)
Tidying up ... # Wed Apr 13 07:57:42 UTC 2016 (1460534262577)
... end of run
It's supposed to take more than 1s, so I'm pretty sure something went wrong. The thing is, I don't get enough data about what went wrong.
I tried tail -f jmeter.log but I got no errors.
Does anyone know how I can get more information?
Your results.xml file will give you more details.
You can see there that you got a 100% error rate, so your single sample failed.
If you are running the test in non-GUI mode on a different machine from where you built the plan in GUI mode, then you most probably did not install the plugin jars.
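Two more things that may help get detail (a sketch, not from the original answer): the -j temp_cli/j.log option sends the run log to temp_cli/j.log, so that is the file to tail rather than jmeter.log, and the -L option raises the log level for a non-GUI run:

# re-run with DEBUG logging on the root logger (verbose, but shows why the sampler failed)
./jmeter -n -t temp_cli/sampler.jmx -l temp_cli/results.xml -j temp_cli/j.log -LDEBUG

# then read the log this run actually wrote
tail -n 100 temp_cli/j.log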
