JMeter script does not achieve required TPS/RPS on a Linux VM, but achieves it on a Mac system running in GUI mode

I have a script where I am using the Throughput Shaping Timer to achieve 100 TPS/RPS.
When the script is executed on a Mac system in GUI mode, it achieves ~99 TPS/RPS. But when I execute it on a Linux system, it hardly goes beyond 60 TPS/RPS.
The following log message is received on the Linux OS (same script, so the Thread Group settings remain as is):
No free threads available in current Thread Group Device Service
Some of the details are given below:
JMeter version is 5.4.3 on both systems (I copied the same JMeter to the Linux VM as well).
Mac OS version: 11.6
Linux OS version: Red Hat Enterprise Linux 8.6 (Ootpa)
The heap setting on both systems is given below (I even increased it to 13g on the Linux VM):
: "${HEAP:="-Xms1g -Xmx1g -XX:MaxMetaspaceSize=256m"}"
Please let me know which settings I should change to achieve a similar TPS/RPS as in GUI mode on the Mac.
The Thread Group settings are shown in the attached image.

First of all, GUI mode is for test development and debugging; when it comes to test execution you should run tests in command-line non-GUI mode in order to get accurate results (an example command is sketched after this list).
Make sure to use the same Java version.
Make sure to use the same (or at least similar) hardware.
Make sure to check resource consumption (CPU, RAM, disk, network, swap usage, etc.). If you're hitting hardware or OS limits you might get false-negative results because JMeter cannot fire requests as fast as it needs to; if this is the case you might need another Linux box and to run JMeter in distributed mode.
"No free threads available in current Thread Group" means that there are not enough threads to reach/maintain the desired throughput. You can try increasing the number of threads in the Thread Group, or switch to the Concurrency Thread Group and connect it to the Throughput Shaping Timer via the feedback function.
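
A minimal sketch of both suggestions, assuming JMeter's standard startup script; test.jmx and the output file names are placeholders, and the heap override works because the script uses the ${HEAP:=...} shell default-value idiom quoted in the question:

HEAP="-Xms4g -Xmx4g -XX:MaxMetaspaceSize=256m" \
  ./bin/jmeter -n -t test.jmx -l results.jtl -e -o report/

If you switch to the Concurrency Thread Group, its Target Concurrency field can be driven by the Throughput Shaping Timer's feedback function, e.g. ${__tstFeedback(shaper-name,1,100,10)}; the timer name and thread limits here are only examples.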

Related

Yocto runs only one task at a time inside a VM

I have set up my development environment inside a virtual machine running Ubuntu 14.04. My company doesn't allow me to run a Linux OS directly, maybe for security reasons.
One thing I have observed is that in the VM it only runs one task at a time, whereas if I run on my personal laptop it runs multiple tasks at a time.
Is there any way to configure Poky, in the local.conf file for example or in any other file, to run multiple tasks at the same time? I have given more than 6 GB of RAM to the VM.
As it is running one task at a time, the build is taking a lot of time.
Thanks for your time.
The BitBake task executor queries the number of CPUs dynamically, so it seems that you might have allocated only 1 CPU to your VM. You can see the CPUs with the command below in the VM:
lscpu
You might want to allocate more CPUs. VirtualBox lets you do that:
Stop the virtual machine.
Click Settings -> System -> Processor -> change the number of processors.
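
Once the VM sees more CPUs, BitBake's parallelism can also be set explicitly in conf/local.conf. A hedged sketch, assuming a 4-CPU VM (adjust the numbers to whatever lscpu reports):

nproc                          # confirm how many CPUs the VM sees
cat >> conf/local.conf <<'EOF'
BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"
EOF

BB_NUMBER_THREADS controls how many BitBake tasks run in parallel, and PARALLEL_MAKE is the -j flag passed to make inside each task; in recent Poky releases both default to the detected CPU count.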

Does the system execution time of a program change if it's running on a virtual machine?

A friend asked for a command that can be used to find the real system execution time for a program in Linux. I replied that the time command is a great one.
He then asked: is the execution time (via time) reported for a program inside a virtual machine the same as the real system execution time of the program?
My instinct was to say it depends:
If there are enough resources on the machine, then the VM time returned would be the same as the real/wall-clock time of the program execution.
If the system does NOT have sufficient resources, then the VM time will differ greatly from the real (system) execution time. Moreover, the VM is an application on top of the OS, which has its own scheduler. This means that the VM needs to invoke system calls, which are then processed by the OS, which in turn communicates with the hardware and then provides a real (system) execution time. Hence, the time returned can differ from the real time in this situation.
If the executed program is simple, then the VM time could be equal to the real (system) time.
If the executed program is NOT simple, then the VM time could differ greatly.
Are my assumptions correct?
I now wonder: how could you find/calculate the real execution time of a program run on a virtual machine? Any ideas?
The complexity of the program doesn't matter.
The guest OS doesn't have visibility into the host. If you run 'time' on the guest, the 'sys' value returned describes guest system resources used, only.
Or to rephrase: in a typical virtual machine setup, you allocate only a portion of the host's CPU resources to the guest OS. But that's all the guest can see, and thus that's all the 'time' command can report on. Since you allocated this when you launched the VM, the guest cannot exhaust all of the host's resources.
This is true of pretty much any VM: it would be a major security issue if the guest had knowledge of the hypervisor.
So yes, the sys time could absolutely differ for a VM versus real hardware, because the guest won't have the full resources. You could also see variability depending on whether you're dealing with hardware or software virtualization.
Some good reading here (sections 10.3 through 10.5):
https://www.virtualbox.org/manual/ch10.html#hwvirt
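
Beyond the reading, the gap is easy to see from inside a guest; a sketch, where the loop is just a stand-in CPU-bound workload:

# real = wall-clock time; user+sys = CPU time the guest actually received.
# On an oversubscribed host, real grows while user+sys stays roughly flat.
time sh -c 'i=0; while [ "$i" -lt 1000000 ]; do i=$((i+1)); done'

# On paravirtualized guests, time taken away by the hypervisor also shows
# up as the st (steal) column reported by vmstat and top.
vmstat 1 3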

virtual machine or dual boot when measuring code performance

I am trying to measure code performance (basically the speed-up when using threads). So far I have been using Cygwin on Windows, or Linux on a separate machine. Now I have the ability to set up a new system, and I am not sure whether I should set up a dual boot (Windows and Ubuntu) or a virtual machine.
My concern is whether I can measure a reliable speed-up, and possibly other things (performance monitors), via a Linux virtual machine, or if I have to go with booting Linux natively.
Does anybody have an opinion?
If your "threading" relies heavily on scheduling, I wouldn't recommend using a VM. A VM is just a normal process from the host OS's point of view, so the guest kernel and its scheduler will be affected by the host kernel's scheduling.
If your "threading" is more like parallel computation, I think it's OK to use a VM.
For me, it is much safer to boot directly into the system and avoid using a VM in your case. Even without a VM, it is already hard to get the same results twice in multi-threaded measurements because the system is also busy with OS tasks; having two OSes running at the same time, as with a VM, increases the uncertainty of the results even further. For instance, running your tests 1000 times on a VM might give, say, 100 over-estimated timings, while it might be only 60 on a dedicated OS. It is your call whether this uncertainty is acceptable or not.
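
One concrete difference worth knowing, sketched below: hardware performance counters are exposed to perf on a native boot, but many hypervisors do not virtualize the PMU, so the same command inside a VM reports the hardware events as <not supported>. ./mybench stands in for your threaded benchmark:

perf stat -e cycles,instructions,context-switches ./mybench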

Linux/Windows: user mode vs privileged mode time spent

What percentage of time does the CPU spend in user mode vs privileged mode for different programs/operations?
Different operations could be:
- running an application without I/O interaction.
- an application with I/O interaction, like copying a file to USB.
I know for a fact that a network operating system spends most of its time in interrupt context. Does this hold true for a general-purpose OS like Ubuntu/Windows?
I'm not much of an OS expert, but I imagine it will depend a great deal on what background processes are running on the system. On any OS you might or might not be running some system (i.e. non-user) processes that are heavy resource users. Or you might have put some effort into stripping the system down so that very little CPU time is used by the system for background maintenance.
If your question is how things compare for "clean" installations of these operating systems, then all I can tell you is that on my laptop running Ubuntu right now (running top from the command line to look at resource usage), only about 5-10% of CPU time is being used by non-user processes; in my case Xorg and compiz are the main ones. I don't really know how that compares to Windows, but I think most Linux users have a knee-jerk reaction that Windows is greedier for system resources than most Linux distros.
So I guess the short answer is that I doubt there is a short answer to your question.
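
Two hedged starting points for measuring the split yourself; the file name and mount point are placeholders:

# System-wide: the cpu line in /proc/stat counts time spent in
# user, nice, system, idle, iowait, irq, softirq, ...
grep '^cpu ' /proc/stat

# Per command: time reports the user/sys split directly; an I/O-bound
# copy typically shows a larger share of sys time.
time cp bigfile /mnt/usb/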

virtual clock speed throttling on linux

I want to throttle at will the execution and display speed of a particular process: for example a game, a Flash game, or an OpenGL game. I want to be able to slow it down to 20% or 0.5% of normal speed. This is simply not possible in host space on Linux.
But Linux supports two kernel-level virtualization environments: KVM and LXC.
Question: is it possible to provide a fake system clock to a virtual LXC or KVM machine, so that a Flash game running in the guest will not run faster than the speed it is set to run at?
Some choices:
The QEMU brake patch (will require work to apply, no doubt).
Bochs has ips=NNNN to define the CPU's "Instructions Per Second".
cpulimit, a tool for limiting the CPU usage of a process (does not require virtualization; a usage sketch follows below).
Update: you want this: https://superuser.com/questions/454534/how-can-i-slow-down-the-framerate-of-a-flash-game
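
A hedged cpulimit example; firefox stands in for whatever process hosts the game:

cpulimit -l 20 -e firefox      # cap the named process at 20% of one CPU

Note that this throttles CPU time rather than the clock, so the game may stutter instead of slowing down smoothly.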
I found a prototype version of the CheatEngine speed hack that works for Linux:
http://forum.cheatengine.org/viewtopic.php?t=533437&sid=1a83d81ee08f8479eb8b190939b2e1aa
http://code.google.com/p/xeat-engine/source/checkout
http://pastebin.com/ZLryd20D
Basically it replaces gettimeofday with a hacked version using LD_PRELOAD magic. It works perfectly!
Thanks lilezek, wherever you are!
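
The same trick can be sketched in a few lines. This is an illustrative shim, not the linked project's code; the slow-down factor and program name are assumptions, and it presumes glibc on Linux:

cat > slowtime.c <<'EOF'
/* Interpose gettimeofday() and report time passing FACTOR times slower. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <sys/time.h>

#define FACTOR 5.0                      /* 5x slower */

int gettimeofday(struct timeval *tv, struct timezone *tz)
{
    static int (*real)(struct timeval *, struct timezone *) = NULL;
    static struct timeval base;         /* first real timestamp seen */
    static int have_base = 0;

    if (!real)
        real = (int (*)(struct timeval *, struct timezone *))
                   dlsym(RTLD_NEXT, "gettimeofday");
    int ret = real(tv, tz);
    if (ret == 0 && tv) {
        if (!have_base) { base = *tv; have_base = 1; }
        /* Scale the elapsed time since the first call. */
        double elapsed = (tv->tv_sec - base.tv_sec)
                       + (tv->tv_usec - base.tv_usec) / 1e6;
        double t = base.tv_sec + base.tv_usec / 1e6 + elapsed / FACTOR;
        tv->tv_sec  = (time_t)t;
        tv->tv_usec = (suseconds_t)((t - (time_t)t) * 1e6);
    }
    return ret;
}
EOF
gcc -shared -fPIC -o slowtime.so slowtime.c -ldl
LD_PRELOAD=./slowtime.so ./the_game     # ./the_game is a placeholder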
