I have this code which runs all test cases inside the test suite concurrently:
import com.eviware.soapui.model.testsuite.TestSuite.TestSuiteRunType
log.info testRunner.testCase.testSuite.getRunType()
testRunner.testCase.testSuite.setRunType(TestSuiteRunType.PARALLEL)
assert testRunner.testCase.testSuite.getRunType() == TestSuiteRunType.PARALLEL
but it seems that the application can't handle that many test cases. My question is: is it possible to run test cases in batches? For example, if I have 300 test cases, it would pick only 10 test cases to run concurrently, and after those complete the next batch of 10 would be executed.
Thanks in Advance!
I've done something similar. I use the Lists.partition feature of Google Guava (which ships as a soapUI library) to partition the test cases into batches, then I loop through the test cases in each batch and run them. You can add your own logic about what action to take at the end of each batch:
import com.google.common.collect.*;
// Get a list of the test cases in the suite and partition them into batches of 10 using Google Guava
def testCases = context.getTestSuite().getTestCaseList();
def List<List<com.eviware.soapui.model.testsuite.TestCase>> testcasePartitions = Lists.partition(testCases, 10);
// Loop through the partitions
for (partition in testcasePartitions) {
    // Loop through the test cases in each partition
    for (testcase in partition) {
        if (!testcase.isDisabled()) {
            log.info("Running test case " + testcase.getName());
            def properties = new com.eviware.soapui.support.types.StringToObjectMap()
            testcase.run(properties, false)
            // Check the outcomes here if required.
        }
    }
}
Because I do this in just one test suite, I've created this as a setup script at the test suite level, which I kick off manually.
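For reference, Lists.partition simply slices the list into consecutive sublists; a plain-Java sketch of the same behaviour (no Guava needed) looks like this:

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionSketch {
    // Split a list into consecutive batches of at most `size` elements,
    // mirroring what Guava's Lists.partition does.
    public static <T> List<List<T>> partition(List<T> list, int size) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < list.size(); i += size) {
            batches.add(list.subList(i, Math.min(i + size, list.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        // 300 "test cases" become 30 batches of 10
        List<Integer> cases = new ArrayList<>();
        for (int i = 0; i < 300; i++) cases.add(i);
        List<List<Integer>> batches = partition(cases, 10);
        System.out.println(batches.size());        // 30
        System.out.println(batches.get(0).size()); // 10
    }
}
```

Note that, like Guava's version, the last batch may be smaller than the requested size when the list length is not a multiple of it.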
I'm using Jest for unit testing in a NestJS project. I've been struggling because one of the tests passes when I run it individually, but when I run the full test suite (which has about 6 tests in total) this specific test fails while the other 5 are fine. I double-checked the values being used in the tested function and verified that the conditions are being fulfilled, so everything should be working perfectly, but the function isn't returning what it's supposed to.
The data I'm using is mock data and just to make sure I'm not messing it up I copy it into another variable, here is the test I'm running:
it("doesn't apply, 2 consecutive returns are the two first", () => {
    const loan = {...mockloan}
    loan.inputRecords[1].paymentAmount = '0'
    loan.schedule[0].status = ScheduleStatus.RETURNED
    loan.inputRecords[2].paymentAmount = '0'
    loan.schedule[1].status = ScheduleStatus.RETURNED
    loan.inputRecords[3].paymentAmount = '218.97'
    loan.schedule[2].status = ScheduleStatus.PENDING
    loan.inputRecords[4].paymentAmount = null
    loan.inputRecords.forEach((i, index) => {
        console.log(i.paymentID + " -- " + i.paymentAmount + " -- " + ((index != 0) ? (loan.schedule[index - 1] ? loan.schedule[index - 1].status : "") : ""))
    })
    const {action, reason, returnedInputs} = loanService.appliesForHalfPayment(loan);
    console.log(action)
    expect(action).toBe("collection")
    expect(reason).toBe(LoanTypeEvent.SECOND_RETURN_AFTER_FPD)
    expect(returnedInputs).toBeNull()
})
I reviewed the test suite again and I'm seeing the expected value in console.log(action), but expect(action).toBe("collection") is receiving null.
I want to know how many steps failed on specific test cases and save it to a database. I already have a database.
My question now is: how do I obtain the number of test cases that failed in a test suite?
This is the answer to the second question: how to obtain the number of test cases that failed/passed on a test suite run.
Create test cases and suite
I have 3 test cases named Test Case 1, 2, and 3 respectively. They are simple: each has just an assert false or assert true step, as seen in the screencap.
Test Suite 1 contains all of the above cases.
Create a test listener
Create a listener (right-click on Test Listeners [#1 in image] > New > New Test Listener) with the following two checkboxes selected:
and add this code:
import com.kms.katalon.core.annotation.AfterTestCase
import com.kms.katalon.core.annotation.AfterTestSuite
import com.kms.katalon.core.context.TestCaseContext
import com.kms.katalon.core.context.TestSuiteContext
import internal.GlobalVariable

class Listener {
    @AfterTestCase
    def sampleAfterTestCase(TestCaseContext testCaseContext) {
        if (testCaseContext.getTestCaseStatus() == 'PASSED') {
            GlobalVariable.numOfPasses++
        }
        if (testCaseContext.getTestCaseStatus() == 'FAILED') {
            GlobalVariable.numOfFails++
        }
    }

    @AfterTestSuite
    def sampleAfterTestSuite(TestSuiteContext testSuiteContext) {
        println 'Passes:' + GlobalVariable.numOfPasses
        println 'Failures:' + GlobalVariable.numOfFails
    }
}
Create global variables
Under Profiles > default [#2 in the first image] add two numeric variables numOfPasses and numOfFails and set them both to 0.
Run the test suite
Running the above setup will give you the total number of failed/passed tests:
Passes:1
Failures:2
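The counting the listener performs can be sketched outside Katalon in plain Java (class and method names here are illustrative, not Katalon APIs):

```java
import java.util.List;

public class ResultCounter {
    int numOfPasses = 0;
    int numOfFails = 0;

    // Equivalent of the @AfterTestCase hook: bump the matching counter
    // based on the status string the framework reports.
    void afterTestCase(String status) {
        if ("PASSED".equals(status)) numOfPasses++;
        if ("FAILED".equals(status)) numOfFails++;
    }

    public static void main(String[] args) {
        ResultCounter c = new ResultCounter();
        // Same outcome as the three test cases above: one pass, two failures.
        for (String s : List.of("PASSED", "FAILED", "FAILED")) c.afterTestCase(s);
        System.out.println("Passes:" + c.numOfPasses);   // Passes:1
        System.out.println("Failures:" + c.numOfFails);  // Failures:2
    }
}
```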
I have a SoapUI project with multiple test cases. After running each test case I need to run one of two HTTP requests, depending on the results of its steps: if one or more steps in the test case failed I need to run httprequest1, and if all steps passed I need to run httprequest2. How can I do this? I have tried many scripts; for now my best solution is the following Groovy script, added at the end of the test case. The problem is that it checks only the last step. I have tried many other solutions, but nothing worked for me. Can somebody help me with this? Thank you
def lastResult = testRunner.getResults().last()
def lastResultStatus = lastResult.getStatus().toString()
log.info 'Test ' + lastResultStatus
if (lastResultStatus == 'FAILED') {
    testRunner.gotoStepByName('httprequest1')
    testRunner.testCase.testSteps["httprequest2"].setDisabled(true)
} else {
    testRunner.gotoStepByName('httprequest2')
}
another solution that I have tried:
for (r in testRunner.results)
    result = r.status.toString()

log.info result
if (result == 'FAILED') {
    testRunner.gotoStepByName('httprequest1')
    testRunner.testCase.testSteps["httprequest2"].setDisabled(true)
} else {
    testRunner.gotoStepByName('httprequest2')
}
As mentioned in the comments, a Conditional Goto test step could be used; however, it may require several of them. A Groovy Script step is the better approach in this scenario.
Here are the details, assuming the test case contains the following steps:

Test Case:
1. request step1
2. request step2
3. groovy script step (the proposed script to handle the scenario)
4. request1 step, if steps #1 & #2 above were successful
5. request2 step, otherwise
6. following step x
7. following step y
Here is the pseudo code for the proposed Groovy Script mentioned in #3:
1. Evaluate the previous test steps' execution results, as you are currently doing.
2. Based on the condition, run test step #4 if true, or step #5 otherwise. Note that you should not use the gotoStepByName method here; instead, run the step by its name. See sample #15 here.
3. Once the above if..else is done, use gotoStepByName to continue with steps #6 and #7 (if any).
NOTE: If gotoStepByName is used to run a step from a Groovy script step, control will not come back to the script.
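The three steps above could be sketched roughly like this, assuming SoapUI's testRunner API (step names are hypothetical):

```groovy
// 1. Evaluate all previous step results, not just the last one.
def failed = testRunner.results.any { it.status.toString() == 'FAILED' }

// 2. runTestStepByName executes the step and returns control to this script.
testRunner.runTestStepByName(failed ? 'request1' : 'request2')

// 3. gotoStepByName hands control over for good; use it to continue with the
//    remaining steps. Code after this line will not run.
testRunner.gotoStepByName('following step x')
```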
Use a testcase teardown to call the step, since you have to do it for all test cases. The teardown script will look something like this:
if (testRunner.status.toString() == "FAILED") {
    testRunner.runTestStepByName("httprequest1")
    println "in if"
} else {
    testRunner.runTestStepByName("httprequest2")
    println "in else"
}
Note that you have to use the SoapUI Runner to trigger the test case / suite, and note the difference in the method being called.
I am running Grinder to test a POST URI with 10 different JSON bodies. The response times reported by Grinder are not right: tests with an individual JSON body give a reasonable response time, but the script with all 10 JSON bodies gives a very high response time and a very low TPS. I am using 1 agent with 5 worker processes and 15 threads. Can someone help me figure out where the problem might be?
The script I am using is:
from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest
from HTTPClient import NVPair
from java.io import FileInputStream

test1 = Test(1, "Request resource")
request1 = HTTPRequest()
#response1 = HTTPResponse()
test1.record(request1)

log = grinder.logger.info

class TestRunner:
    def __call__(self):
        #request1.setDataFromFile("ReqBody.txt")
        payload1 = FileInputStream("/home/ubuntu/grinder-3.11/scripts/Req.txt")
        payload2 = FileInputStream("/home/ubuntu/grinder-3.11/scripts/Req2.txt")
        payload3 = FileInputStream("/home/ubuntu/grinder-3.11/scripts/Req3.txt")
        payload4 = FileInputStream("/home/ubuntu/grinder-3.11/scripts/Req4.txt")
        payload5 = FileInputStream("/home/ubuntu/grinder-3.11/scripts/Req5.txt")
        payload6 = FileInputStream("/home/ubuntu/grinder-3.11/scripts/Req6.txt")
        payload7 = FileInputStream("/home/ubuntu/grinder-3.11/scripts/Req7.txt")
        payload8 = FileInputStream("/home/ubuntu/grinder-3.11/scripts/Req8.txt")
        payload9 = FileInputStream("/home/ubuntu/grinder-3.11/scripts/Req9.txt")
        payload10 = FileInputStream("/home/ubuntu/grinder-3.11/scripts/Req10.txt")

        headersPost = [NVPair('Content-Type', 'application/json')]
        #request1.setHeaders(headersPost)

        myload = [payload1, payload2, payload3, payload4, payload5,
                  payload6, payload7, payload8, payload9, payload10]
        for f in myload:
            result1 = request1.POST("http://XX.XX.XX.XX:8080/api/USblocks/ORG101/1/N/", f, headersPost)
            log(result1.toString())
The first step for you is to run it with 1 thread, 1 process, and 1 agent; it should run properly then.
Because a for loop is used, every thread runs all 10 payloads on each iteration.
I think what you want, and what should be done, is for each thread to send one request.
You can move the request out to a global method and use something like the thread number (e.g. grinder.getThreadNumber()) to select which payload to execute. That also means removing the for loop, since you can control the number of times the script is executed, or leave it running for a duration.
Running 10 threads in parallel is a good number to start with; then you can slowly see how many requests are actually getting accepted.
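The per-thread payload selection could be sketched like this (payload names are from the script above; the modulo wraps around when there are more threads than payloads):

```java
public class PayloadPicker {
    // Map a worker thread number onto one payload from the list, so each
    // thread sends a single request body instead of looping over all ten.
    static String pickPayload(String[] payloads, int threadNo) {
        return payloads[threadNo % payloads.length];
    }

    public static void main(String[] args) {
        String[] payloads = {"Req.txt", "Req2.txt", "Req3.txt"};
        System.out.println(pickPayload(payloads, 0)); // Req.txt
        System.out.println(pickPayload(payloads, 4)); // Req2.txt
    }
}
```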
I want to fetch the total test case PASS and FAIL counts for a build using a Groovy script. I am using JUnit test results. I am using a multi-configuration project, so is there any way to find this information on a per-configuration basis?
If you use the Groovy Postbuild plugin (https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin), you can access the Jenkins TestResultAction directly from the build, i.e.:
import hudson.model.*
def build = manager.build
int total = build.getTestResultAction().getTotalCount()
int failed = build.getTestResultAction().getFailCount()
int skipped = build.getTestResultAction().getSkipCount()
// can also be accessed like build.testResultAction.failCount
manager.listener.logger.println('Total: ' + total)
manager.listener.logger.println('Failed: ' + failed)
manager.listener.logger.println('Skipped: ' + skipped)
manager.listener.logger.println('Passed: ' + (total - failed - skipped))
API for additional TestResultAction methods/properties http://javadoc.jenkins-ci.org/hudson/tasks/test/AbstractTestResultAction.html
If you want to access a matrix build from another job, you can do something like:
def job = Jenkins.instance.getItemByFullName('MyJobName/MyAxisName=MyAxisValue');
def build = job.getLastBuild()
...
For Pipeline (Workflow) Job Type, the logic is slightly different from AlexS' answer that works for most other job types:
build.getActions(hudson.tasks.junit.TestResultAction).each { action ->
    action.getTotalCount()
    action.getFailCount()
    action.getSkipCount()
}
(See http://hudson-ci.org/javadoc/hudson/tasks/junit/TestResultAction.html)
Pipeline jobs don't have a getTestResultAction() method.
I use this logic to disambiguate:
if (build.respondsTo('getTestResultAction')) {
    // normal logic, see answer by AlexS
} else {
    // pipeline logic above
}
I think you might be able to do it with something like this:
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
File fXmlFile = new File("junit-tests.xml");
DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
Document doc = dBuilder.parse(fXmlFile);
doc.getDocumentElement().normalize();
println("Total : " + doc.getDocumentElement().getAttribute("tests"))
println("Failed : " +doc.getDocumentElement().getAttribute("failures"))
println("Errors : " +doc.getDocumentElement().getAttribute("errors"))
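For context, those counts come straight off the root testsuite element's attributes; a minimal junit-tests.xml the script above would parse (values illustrative) looks like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="MySuite" tests="10" failures="2" errors="1">
  <testcase classname="com.example.MyTest" name="testA" time="0.01"/>
  <testcase classname="com.example.MyTest" name="testB" time="0.02">
    <failure message="expected true">assertion failed</failure>
  </testcase>
</testsuite>
```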
I also harvest junit xml test results and use Groovy post-build plugin https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin to check pass/fail/skip counts. Make sure the order of your post-build actions has harvest junit xml before the groovy post-build script.
This example script shows how to get test result, set build description with result counts and also version info from a VERSION.txt file AND how to change overall build result to UNSTABLE in case all tests were skipped.
def currentBuild = Thread.currentThread().executable
// must be run groovy post-build action AFTER harvest junit xml
testResult1 = currentBuild.testResultAction
currentBuild.setDescription(currentBuild.getDescription() + "\n pass:"+testResult1.result.passCount.toString()+", fail:"+testResult1.result.failCount.toString()+", skip:"+testResult1.result.skipCount.toString())
// if no pass, no fail all skip then set result to unstable
if (testResult1.result.passCount == 0 && testResult1.result.failCount == 0 && testResult1.result.skipCount > 0) {
    currentBuild.result = hudson.model.Result.UNSTABLE
}
currentBuild.setDescription(currentBuild.getDescription() + "\n" + currentBuild.result.toString())
def ws = manager.build.workspace.getRemote()
myFile = new File(ws + "/VERSION.txt")
desc = myFile.readLines()
currentBuild.setDescription(currentBuild.getDescription() + "\n" + desc)