How can we run multiple tests in k6? - performance-testing

Please help me run a number of test scripts in a distributed k6 environment.
I have multiple test cases and need to run those scripts on different EC2 machines, but I looked online for articles or tutorials on this and couldn't find anything.
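Not an authoritative answer, but a minimal sketch of two common approaches, assuming k6 is already installed on each EC2 instance and the scripts have been copied over. Running different scripts on different machines is simply a matter of starting an ordinary k6 process on each one (for example over ssh); if the goal is instead to split one test's load across several machines, k6's execution-segment flags can partition the work (the script name below is a placeholder):

    # On EC2 machine 1: run the first third of the load
    k6 run --execution-segment "0:1/3"   --execution-segment-sequence "0,1/3,2/3,1" script-a.js
    # On EC2 machine 2: run the middle third
    k6 run --execution-segment "1/3:2/3" --execution-segment-sequence "0,1/3,2/3,1" script-a.js
    # On EC2 machine 3: run the final third
    k6 run --execution-segment "2/3:1"   --execution-segment-sequence "0,1/3,2/3,1" script-a.js

Each instance reports only its own metrics, so aggregating results across machines needs an external output (for example k6's JSON or InfluxDB outputs).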

Related

Run mocha tests in parallel on Azure Docker pipeline

Why do my mocha tests always get executed one by one when I run them in parallel via an Azure pipeline?
I have tried both with and without Docker in the pipeline, but no luck.
I've also tried with a locally built and pushed image and then running that image from the Azure pipeline... still no luck. I get the same unexpected results via GitHub Actions as well.
However, a local run with the same configuration works (it even works using Docker Desktop).
P.S. I do not want to use a multiple-agents solution.
Apparently most cloud providers (I have checked AWS, Azure, and GitHub) offer default VMs/agents with 2 CPU cores unless you pay for more, of course.
This affects Mocha's parallelization because
"By default, Mocha's maximum job count is n - 1, where n is the number of CPU cores on the machine."
Reference:
https://mochajs.org/#-parallel-p
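On a default 2-core agent, that rule yields a single worker process, so the suite effectively runs serially. A minimal sketch of overriding the cap (the job count of 4 is an arbitrary example, and on a 2-core agent it may not actually run any faster):

    npx mocha --parallel --jobs 4

The same can be set in a .mocharc.yml via the parallel and jobs options.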

GitLab, how to configure build and test to be on different machines?

I'm new to GitLab and am asking for advice/best practice here.
I have a program that I build on my build machine. The program can't run on the build machine; it needs to be installed on a test machine that has the special hardware/environment the program requires. I want to run some system tests (memory leak tests etc.) on the test machine. How is this best done?
I think this can be accomplished with the "multi project pipeline" feature. Is this the simplest/best way?
Here is my plan:
I could have one (ssh/shell) runner that builds my program on my build machine, and a different runner that runs tests on my test machine. The two would be connected using the "multi project pipeline" feature. The artifacts from the build pipeline would be installed on the test machine, and then the system tests would run there.
Is this the best way to solve this? Or is there a simpler/better way?
Answering my own question here: "multi project pipeline" is not necessary. You simply have a single project and mark jobs with different "tags". You can then register runners for these different tags on different machines.
(Artifacts are transferred from one job to the next in the same way, regardless of whether the runners run on the same machine or on different ones.)
https://docs.gitlab.com/ee/ci/yaml/README.html#tags
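A minimal sketch of what such a .gitlab-ci.yml could look like; the tag names, build command, and artifact path are made-up placeholders:

    stages:
      - build
      - test

    build_job:
      stage: build
      tags:
        - build-machine          # runner registered on the build machine
      script:
        - make build             # placeholder build command
      artifacts:
        paths:
          - build/my-program     # placeholder artifact path

    system_test_job:
      stage: test
      tags:
        - test-machine           # runner registered on the test machine
      script:
        - ./build/my-program --run-system-tests   # placeholder test invocation

Because the jobs sit in consecutive stages, GitLab downloads the build job's artifacts into the test job automatically.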

Heroku workers in dev

I'm looking into using a worker as well as a web dyno for the first time, as I have to scrape a website. Before I commit to this, I'm just wondering about working in a dev environment: how do jobs in a queue get handled when I'm testing my app before it's pushed to Heroku?
I will probably be using RabbitMQ if that's relevant here.
I guess it depends on what you mean by testing. You can unit test the code that does the scraping in isolation from any queue, and you can provide a mock implementation of the queue operations to handle a goodly portion of your integration tests.
I suppose you might want a real instance of the queue for certain tests, but depending on the nature of your project, you might be satisfied with the sorts of tests described in the first paragraph.
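As a purely illustrative sketch of the mock approach (the class and method names are made up; real code would mirror whatever thin wrapper sits around the queue):

    from collections import deque

    class FakeQueue:
        """In-memory stand-in for the queue wrapper, for tests without a broker."""

        def __init__(self):
            self._messages = deque()

        def publish(self, message):
            self._messages.append(message)

        def consume(self):
            # Return the oldest message, or None when the queue is empty
            return self._messages.popleft() if self._messages else None

A scraper test can then assert on what was published without a broker running.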
If you simply must test the queue operation and/or you want to run a complete copy of production locally, then you'll have to stand up an instance of RabbitMQ. You can stand one up locally or use one of the SaaS providers.
If you have multiple developers working on the project, you might want to make it easy for them by creating something like a Vagrant script that sets up a complete environment in a VM, or better still something like Docker. Doing so also gives you a lot more deployment options (making you less dependent on the Heroku tooling).
Lastly, numerous CI solutions like Travis CI provide instances of popular services for running tests (RabbitMQ included).
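For the local option, a minimal sketch assuming Docker is installed (the container name is arbitrary):

    # Run a throwaway RabbitMQ broker on the default AMQP port
    docker run -d --name dev-rabbit -p 5672:5672 rabbitmq:3

Point the app's queue configuration at localhost:5672 while developing, and remove the container with docker rm -f dev-rabbit when done.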

How can I automatically start the JMeter HTTP(S) Test Script Recorder?

I am trying to automate the creation of JMeter scripts based on existing Cucumber tests to avoid maintaining two separate sets of tests (one for acceptance and one for load testing).
The Cucumber recording works great locally when I add the HTTP Recorder to the Workbench and start the recording; however, I cannot figure out how to start it automatically from the command line. Is this possible at all?
Why not run Cucumber from JMeter?
Because I'd like to avoid running multiple instances of Cucumber at the same time, and I'd like to be able to distribute the load generation (using jmeter-server)
This is not possible yet.
You should discuss this on the user mailing list to give more details on your request.
If it looks useful, you can then create an enhancement request in JMeter's Bugzilla and the feature may be developed.
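As for distributing the load generation mentioned in the question, a minimal sketch of the usual jmeter-server approach (the plan file and host addresses are placeholders):

    # Non-GUI run from a controller; load is generated on the machines
    # that are already running jmeter-server
    jmeter -n -t recorded-plan.jmx -R 10.0.0.1,10.0.0.2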

Testing automation tool suited for an operations team

I would like to start using a testing framework that does the following:
contains a process (the process can be a test) management engine; it is able to start processes (tests) with the help of a scheduler
is distributed; processes can run locally or on other machines
tests can be anything:
a simple telnet to a given port (infrastructure testing)
a disk I/O or MySQL benchmark
a jar exported from Selenium that does acceptance testing
needs to know whether the test passed or not
has the capability to get real-time data from the test (something like Graphite) -- this is optional
allows processes to be built in many programming languages: Perl, Ruby, C, Bash
has a graphical interface
open-source
written in any language as long as it is light on resources; I would prefer C, Perl, or Ruby
runs on Linux
What not to be:
an add-on to a browser: Selenium, BITE, etc.
I do not want something focused on web development
I would like to use such a tool, or maybe collaborate on building one. I hope I was explicit enough. Thank you.
You might want to look at Robot Framework combined with Jenkins. Robot Framework is a tool written in Python for doing keyword-based acceptance testing. Jenkins is a continuous integration tool which allows you to schedule jobs and distribute them amongst a grid of nodes.
Robot Framework tests can do anything Python can do, plus a whole lot more. It has a remote interface, which means you can write test keywords in just about any language; for example, in one job we had keywords written in Java, and in another we used Robot Framework with .NET-based keywords. You can also run Robot Framework under Jython to run on the JVM, or under IronPython to run in a .NET environment.
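A minimal sketch of a remote keyword library in Python, assuming the robotremoteserver package (the class and keyword names are made up for illustration):

    # pip install robotremoteserver
    import socket

    from robotremoteserver import RobotRemoteServer

    class InfraKeywords:
        """Keywords served to Robot Framework over its XML-RPC remote interface."""

        def port_should_be_open(self, host, port):
            """Fail unless a TCP connection to host:port succeeds within 5 seconds."""
            with socket.create_connection((host, int(port)), timeout=5):
                pass  # connection succeeded; the socket closes on exit

    if __name__ == "__main__":
        # Serves on port 8270 by default; a test suite imports it with:
        #   Library    Remote    http://<machine-running-this>:8270
        RobotRemoteServer(InfraKeywords(), host="0.0.0.0")

This maps onto the question's requirements: the keyword runs on whichever machine hosts the server, and a failed keyword tells Robot Framework the test failed.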

Resources