Errors when running a load test with SoapUI (SOAP over JMS services) - multithreading

I am running into a problem with the SoapUI tool while load testing a SOAP project (SOAP over JMS via Hermes JMS).
Case 1: there are no errors when I increase the test delay.
Settings: number of threads: 10, strategy: Simple, test delay: 4000, random: 0.5, limit: 300 total runs.
Case 2: there are some errors when I change the test delay to 1000 (1 second).
Settings: number of threads: 10, strategy: Simple, test delay: 1000, random: 0.5, limit: 300 total runs.
Result: both of these errors show that the response message is identical to the request message and does not contain my expected keywords.
I also tried running case 2 from Java code, and all messages succeeded (no errors).
Here is my configuration in SoapUI:
Here is my configuration in Hermes JMS:
Can anyone help me?

Related

How to configure jest to fail when it takes more than x seconds

Is there a way to fail a jest test when it exceeds x number of seconds?
There is the slowTestThreshold property (https://jestjs.io/docs/configuration#slowtestthreshold-number), but that only affects reporting, right?
Global-level configuration:
Create a jest.config.js file and set the testTimeout option there; Jest reads the config file before executing tests.
Alternatively, pass the --testTimeout option when using the Jest CLI, e.g. jest --testTimeout 2000
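For example, a minimal jest.config.js might look like the sketch below (the 2000 ms limit is only an illustrative value, not something from the question):
// jest.config.js - a minimal sketch
module.exports = {
  testTimeout: 2000, // fail any test that runs longer than 2 seconds
};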
File-level:
Use jest.setTimeout(timeout)
Set the default timeout interval for tests and before/after hooks in milliseconds. This only affects the test file from which this function is called.
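A minimal sketch of that file-level call (the 10-second value is just an example):
// at the top of a single test file - applies only to tests and hooks in this file
jest.setTimeout(10000);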
Test case level:
Use test(name, fn, timeout)
The third argument (optional) is timeout (in milliseconds) for specifying how long to wait before aborting. Note: The default timeout is 5 seconds.
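For example (fetchSomething() below is a hypothetical stand-in for the code under test):
// the optional third argument overrides the timeout for this single test
test('responds within two seconds', async () => {
  const result = await fetchSomething(); // hypothetical helper
  expect(result).toBeDefined();
}, 2000);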

How to avoid timeouts in mocha testcases?

Here is my code. I am passing the done callback and using supertest for the request. Since I am using assert/expect inside the request.end block in my test case, why do I need to worry about a timeout? What mistake am I making here?
it('should get battle results', function(done) {
  request(url)
    .post('/compare?vf_id=' + vf_id)
    .set('access_token', access_token)
    .send(battleInstance)
    .end(function(err, res) { // why is a timeout needed here?
      if (err) return done(err);
      console.log(JSON.stringify(res.body));
      expect(res.body.status).to.deep.equal('SUCCESS');
      done();
    });
});
The test case fails with the following error:
Error: timeout of 2000ms exceeded. Ensure the done() callback is being called in this test.
If I run my test cases with the plain mocha command, I get this error, whereas if I run mocha --timeout 15000 the test case passes. But I want to avoid the timeout altogether. How can I do that?
You can't avoid timeouts, since it looks like you're testing a remote service. If, for whatever reason, the request to that service takes a long time, you will run into timeouts.
You can tell Mocha to disable timeout checking by setting the timeout to 0, but that's probably not ideal either, because it may let a stuck test case take an excessive amount of time.
As an alternative, you can mock request (which I assume is superagent) so you can control the entire HTTP request/response flow, but since it looks like you're testing a remote service (one which you have no control over) that would make this particular test case moot.
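If you did decide to mock the HTTP call anyway, a rough sketch using a library such as nock (my choice here, not something from the question) could look like this:
const nock = require('nock');
// intercept the POST so no real network call is made
nock(url)
  .post('/compare')
  .query(true) // match any query string, e.g. vf_id
  .reply(200, { status: 'SUCCESS' });
With that interceptor in place the test no longer depends on the remote service's response time, though, as noted above, it also no longer exercises the real service.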
Mocha applies a default timeout of 2 seconds (2000 ms).
You can extend the default (global) timeout from the command line using the --timeout xxxx flag.
If you instead want to change the timeout for a specific test case, you can call this.timeout(xxxx), where xxxx is a number of milliseconds such as 20000. Note that this does not work with arrow functions.
it('My test', function(){
  this.timeout(5000);
  //... rest of your code
});
You can also set a timeout of a set of test cases (wrapped by a describe):
describe("My suite", function(){
// this will apply for both "it" tests
this.timeout(5000);
it( "Test 1", function(){
...
});
it( "Test 2", function(){
...
});
});
It also works for before, beforeEach, after, afterEach blocks.
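For example, a sketch that raises the timeout only for a slow one-off setup hook:
describe('suite with a slow setup', function () {
  before(function () {
    this.timeout(10000); // only this hook gets the longer limit
    // ...expensive one-time setup
  });
  it('runs with the normal timeout', function () {
    // ...
  });
});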
More documentation is available here: https://mochajs.org/#timeouts
Consider that 2 seconds is usually enough time to run a test, so extending the default timeout should be the exception, not the rule.
Also, if your test is not async and you still have to extend the timeout, I would strongly suggest reviewing the function that is taking so long before reaching for a longer timeout.
Mocha: Timeouts
Test-specific timeouts may also be applied, or you can use this.timeout(0) to disable timeouts altogether.
To disable the timeout, simply set it to 0. I use mocha <file> --timeout 0 when I'm debugging so the timeout error does not get thrown.
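A minimal sketch of disabling the timeout for a single test:
it('may run for as long as it needs', function () {
  this.timeout(0); // 0 disables Mocha's timeout for this test only
  // ...long-running work
});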
Here is what you need
mocha timeouts
describe('a suite of tests', function() {
  this.timeout(500);
  it('should take less than 500ms', function(done){
    setTimeout(done, 300);
  });
  it('should take less than 500ms as well', function(done){
    setTimeout(done, 250);
  });
});

SoapUI Groovy script's JSON response is empty when using testrunner

Using Windows 7 with SoapUI 5.2.0 freeware.
I also asked about this in the SmartBear community and was only given recommended posts to read; the posts didn't relate to this problem.
I have a REST project that has one test suite with one test case containing two test steps. The first step is a Groovy step whose script calls the second test step. The second test step is a REST GET request that sends a string to our API server and receives a response back in JSON format. The second test step has a script assertion that does "log.info Test Is Run", so I can see when the second test is run.
When the Groovy script calls the second test step, it reads that step's JSON response like this:
def response = context.expand('${PingTest#Response}').toString() // read results
I can also use this to get the JSON response:
def response = testRunner.testCase.getTestStepByName(testStepForPing).getPropertyValue("response")
The project runs as expected when run through the SoapUI UI, but when I run it with testrunner, the response returned by the Groovy script is empty, using either of the methods shown above. When run from testrunner, I know the second test step is being called because I see the log.info result in the script log.
This is the part of the DOS log that shows the second test step running; there appear to be no errors for that step.
SoapUI 5.2.0 TestCase Runner
12:09:01,612 INFO [WsdlProject] Loaded project from [file:/C:/LichPublic/_Soap/_EdPeterWorks/DemoPing.xml]
12:09:01,617 INFO [SoapUITestCaseRunner] Running SoapUI tests in project [demo-procurement-api]
12:09:01,619 INFO [SoapUITestCaseRunner] Running Project [demo-procurement-api], runType = SEQUENTIAL
12:09:01,628 INFO [SoapUITestCaseRunner] Running SoapUI testcase [PingTestCase]
12:09:01,633 INFO [SoapUITestCaseRunner] running step [GroovyScriptForPingtest]
12:09:01,932 INFO [WsdlProject] Loaded project from [file:/C:/LichPublic/_Soap/_EdPeterWorks/DemoPing.xml]
12:09:02,110 DEBUG [HttpClientSupport$SoapUIHttpClient] Attempt 1 to execute request
12:09:02,111 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Sending request: GET /SomeLocation/ABC/ping?echoText=PingOne HTTP/1.1
12:09:02,977 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Receiving response: HTTP/1.1 200
12:09:02,982 DEBUG [HttpClientSupport$SoapUIHttpClient] Connection can be kept alive indefinitely
12:09:03,061 INFO [log] **Test Is Run**
This is the testrunner call I use in DOS command line:
"C:\Program Files\SmartBear\SoapUI-5.2.0\bin\testrunner.bat" DemoPing.xml
When the Groovy script is run through testrunner, I get the project using ProjectFactoryRegistry and WsdlProjectFactory. Any advice on why I can't read the JSON response when using testrunner would be appreciated.
I can provide more info/code if needed.
Thanks.
Please try the below script:
import groovy.json.JsonSlurper
//provide the correct rest test step name
def stepName='testStepForPing'
def step = context.testCase.getTestStepByName(stepName)
def response = new String(step.testRequest.messageExchange.response.responseContent)
log.info response
def json = new JsonSlurper().parseText(response)
Thank you Rao! Your suggestion worked when I used it as shown below. The DOS window showed the response text:
// Set up stepName as a variable for the name of the test step to run.
def stepName = "PingTest"
// Use stepName to get the test step (context.testCase is the current test case).
def step = context.testCase.getTestStepByName(stepName)
// Call the test step.
step.run(testRunner, context)
// Show the results.
def response = new String(step.testRequest.messageExchange.response.responseContent)
log.info response // this response shows correctly in the DOS window
The JsonSlurper also works. At your convenience, if you have any suggested links or books describing the technique(s) you used here, please let me know.
Thanks.

How do I log the request and response in SoapUI?

I am calling a REST-based service from SoapUI. I have created a load test for the service and the test works. I wrote the following code in my setup script for the load test.
log.info("This is from the setup script")
def request = context.expand('${#Request}')
log.info(request)
def response = context.expand('${#Response}')
log.info(response);
The only item I am getting in my log is "This is from the setup script".
I also added the following lines of code in my Teardown script.
log.info("Teardown script")
def response = context.expand('${#Response}')
log.info(response);
I am not seeing the "Teardown script" text in the log. At this point I am a bit puzzled as to the behavior.
Screenshots: load test, test suite, and test case options.
In the test case options, I have unchecked the Discard OK Results check box.
What changes do I need to do to my scripts in order to log the requests and the responses?
When you create a setup and/or teardown script, remember those are run only once per run, not per test! What you intended is not going to work.
In your setup, since no tests have run yet, the context is going to be empty ... as you can see from your log message.
In your teardown, I suspect there is a bug in SoapUI and the log is not getting sent to the log tab. If you intentionally create an error (I used logg.info "Hello world!" - note the intentional double g), I still got an error in the error log tab.

NUnitLite on MonoTouch: tests succeed but don't get marked as executed in the UI

I'm on MonoTouch 6.0.4 and have implemented a unit test using MonoTouch's NUnitLite.
If I execute a test and end it with an Assert(), I can see from the logs that the test was executed:
Tests run: 1 Passed: 0 Inconclusive: 0 Failed: 1 Ignored: 0
But in the UI, the test result is not reflected:
The test method:
[Test]
public void TestPing()
{
    APIPingResult oRes = oManager.PingConnector.Ping(5);
    Assert.True(oRes.Success);
}
Just a bug or am I missing something?
It's a known (I've noticed it before) bug.
The status is updated correctly when running all the tests, e.g. Run Everything, or all the tests in a specific suite, e.g. Run all.
However, when running a specific test the update is not done (actually I think it's just not refreshed). Note that the test result is still sent to the writer's output (e.g. Application Output or the device console).
Update: Fixed in GIT (both 0.7 and master branches).
