I am working on a SoapUI project where I need to run my test suite using testrunner. I am using external Groovy scripting for environment variables. The problem I am facing is that whenever I run a test case from testrunner, the workspace comes back as null, and that workspace is used in the external Groovy script. So in the external Groovy script I get a null workspace, causing the error [getProjectByName() cannot be invoked on null]. Below is the constructor of the global script where the workspace is used:
AvengerAPITestManager(String TestProject, String TestSuite, String TestCase, String TestStep)
{
    TestName = "AvengerAPITests";
    testProject = SoapUI.getWorkspace().getProjectByName(TestProject);
    tSuite = testProject.getTestSuiteByName(TestSuite);
    tCase = testProject.getTestSuiteByName(TestSuite).getTestCaseByName(TestCase);
    tStepName = TestStep.toString();
    tStep = testProject.getTestSuiteByName(TestSuite).getTestCaseByName(TestCase).getTestStepByName(TestStep);
}
Above we have used SoapUI.getWorkspace(), which works fine when running from SoapUI, but whenever I try to run from testrunner, SoapUI.getWorkspace() comes out null. I even tried passing the workspace the same way I pass the test project name, but it still didn't work.
I also tried something like this:
AvengerAPITestManager(Object workspace, String TestProject, String TestSuite, String TestCase, String TestStep)
{
    TestName = "AvengerAPITests";
    testProject = workspace.getProjectByName(TestProject);
    tSuite = testProject.getTestSuiteByName(TestSuite);
    tCase = testProject.getTestSuiteByName(TestSuite).getTestCaseByName(TestCase);
    tStepName = TestStep.toString();
    tStep = testProject.getTestSuiteByName(TestSuite).getTestCaseByName(TestCase).getTestStepByName(TestStep);
}
In the above code I tried passing the workspace object from the test case, just as I pass the test case name and the rest, but I am still getting null for the workspace. Please tell me how to deal with this problem.
Here is a useful working example: https://github.com/stokito/soapui-junit
You should place your sample-soapui-project.xml in the /src/test/resources folder, which will expose it on the classpath.
If you want to use SoapUI from external code, try creating a new test runner directly with a specific project file:
SoapUITestCaseRunner runner = new SoapUITestCaseRunner();
runner.setProjectFile( "src/dist/sample-soapui-project.xml" );
runner.run();
Or if you want to define test execution more precisely, you can use something like this:
WsdlProject project = new WsdlProject( "src/dist/sample-soapui-project.xml" );
TestSuite testSuite = project.getTestSuiteByName( "Test Suite" );
TestCase testCase = testSuite.getTestCaseByName( "Test Conversions" );
// create empty properties and run synchronously
TestRunner runner = testCase.run( new PropertiesMap(), false );
PS: don't forget to import the SoapUI classes that you use in your code and put them on the classpath.
PPS: If you just need to run test cases outside SoapUI and/or automate the process, why not simply use testrunner.sh/.bat for the same thing? (This approach is described here: http://www.soapui.org/Test-Automation/functional-tests.html)
I am not sure if this is going to help anyone out there, but here is what I did to fix the problem I was having with the workspace being null (causing the error [getProjectByName() cannot be invoked on null]) when running from the command line.
Try this:
import com.eviware.soapui.model.project.ProjectFactoryRegistry
import com.eviware.soapui.impl.wsdl.WsdlProjectFactory
import com.eviware.soapui.impl.wsdl.WsdlProject

// get the Util project
def project = null
def workspace = testRunner.testCase.testSuite.project.getWorkspace();

// if running in SoapUI
if (workspace != null) {
    project = workspace.getProjectByName("Your Project")
} else {
    // if running in Jenkins/Hudson
    project = new WsdlProject("C:\\...\\....\\....\\-soapui-project.xml");
}

if (project.open && project.name == "Your Project") {
    def properties = new com.eviware.soapui.support.types.StringToObjectMap()
    def testCase = project.getTestSuiteByName("TestSuite 1").getTestCaseByName("TestCase");
    if (testCase == null) {
        throw new RuntimeException("Could not locate testcase 'TestCase'!");
    } else {
        // This will run everything in the selected project
        runner = testCase.run(new com.eviware.soapui.support.types.StringToObjectMap(), false)
    }
} else {
    throw new RuntimeException("Could not find project 'Order Id....'!")
}
The above code will run everything in the selected project.
While in development, we occasionally use skip or only to debug a particular test or test suite. Accidentally, we might forget to revert these before pushing the code for a PR. I am looking for a way to detect this, or to automatically run all tests even those marked skip or only, in our CI pipeline (using GitHub Actions). Either of the following would work:
Fail the build when there are skip or only tests.
Run all tests, even those marked skip or only.
Any help is very much appreciated.
I came up with a solution for the second part of the question, about running all tests even for skip and only. I don't think it's an elegant solution, but it works and it's easy to implement.
First of all, you need to change the test runner to jest-circus if you are on a Jest version below 27.x. We need it so that our custom test environment can use the handleTestEvent function to watch for setup events. To do so, install jest-circus with npm i jest-circus and then set the testRunner property in your jest.config.js:
//jest.config.js
module.exports = {
testRunner: 'jest-circus/runner',
...
}
From Jest 27.0 the default test runner is jest-circus, so you can skip this step if you are on that version or higher.
Then you have to write a custom test environment. I suggest basing it on jsdom so that, for example, you also have access to the window object in your tests. To do so, run npm i jest-environment-jsdom in the terminal and then create the custom environment like so:
//custom-jsdom-environment.js
const JsDomEnvironment = require('jest-environment-jsdom')

class CustomJsDomEnvironment extends JsDomEnvironment {
  async handleTestEvent(event, state) {
    if (process.env.IS_CI === 'true' && event.name === 'setup') {
      this.global.describe.only = this.global.describe
      this.global.describe.skip = this.global.describe
      this.global.fdescribe = this.global.describe
      this.global.xdescribe = this.global.describe
      this.global.it.only = this.global.it
      this.global.it.skip = this.global.it
      this.global.fit = this.global.it
      this.global.xit = this.global.it
      this.global.test.only = this.global.test
      this.global.test.skip = this.global.test
      this.global.ftest = this.global.test
      this.global.xtest = this.global.test
    }
  }
}

module.exports = CustomJsDomEnvironment
And tell Jest to use it:
//jest.config.js
module.exports = {
testRunner: 'jest-circus/runner',
testEnvironment: 'path/to/custom/jsdom/environment.js',
...
}
Then you just have to set the IS_CI environment variable in your CI pipeline, and from now on all your skipped tests will run.
Also, in a custom test environment you could watch for skipped tests and throw an error when the runner finds skip/only. Unfortunately, throwing an error in this place won't fail the test; you would need to find a way to fail a test from outside the test.
//custom-jsdom-environment.js
const JsDomEnvironment = require('jest-environment-jsdom')
const path = require('path')

class CustomJsDomEnvironment extends JsDomEnvironment {
  constructor(config, context) {
    super(config, context)
    const testPath = context.testPath
    this.testFile = path.basename(testPath)
  }

  async handleTestEvent(event, state) {
    if (process.env.IS_CI === 'true' && event.name === 'add_test') {
      if (event.mode === 'skip' || event.mode === 'only') {
        const msg = `Run ${event.mode} test: '${event.testName}' in ${this.testFile}`
        throw new Error(msg)
      }
    }
  }
}

module.exports = CustomJsDomEnvironment
In my project, I want to keep all Groovy utility test steps under one test case and call them again and again wherever needed, for example for reading the test data file. I would be able to achieve that if the problem below were resolved. I have tried a lot of ways but couldn't make it work. Any help is welcome!
For Example
script 1: test1Script
def sayHellow() {
    log.info "Hello!!"
}
Script 2 : test2Script
import groovy.lang.Binding
import groovy.util.GroovyScriptEngine

def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def projectPath = groovyUtils.projectPath
def scriptPath = projectPath + "\\GroovyScripts\\"

//GroovyShell shell = new GroovyShell()
//Util = shell.parse(new File(directoryName + "groovyUtilities.groovy"))
//groovyUtilities gu = new groovyUtilities(Util)

// Create a Groovy Script Engine to run the script.
GroovyScriptEngine gse = new GroovyScriptEngine(scriptPath)
// Load the Groovy script file
externalScript = gse.loadScriptByName("test1Script.groovy")
// Create a runtime instance of the script
instance = externalScript.newInstance()
// Sanity check
assert instance != null
// Run the sayHellow method in the external script
instance.sayHellow()
When I call that method from the other script, I get the exception below:
groovy.lang.MissingPropertyException: No such log for class test1Script
The error shows that the Groovy runtime calls your method but can't find the log property. I assume this log variable is declared in the test1Script body, e.g. def log = ... In that case the variable becomes local to its declaration scope and is not visible to the script's functions. To make it visible, it can either be annotated with @Field, or it should be left undeclared (no type declaration, just log = ...). The latter, however, requires you to pass the variable's value via so-called bindings when you run the script. So, given my assumptions above, you can annotate your log variable as a field and it should work:
// just for the sake of example it prints to stdout whatever it receives
@groovy.transform.Field
def log = [info: { println it }]

def sayHellow() {
    log.info "Hello!!"
}
Now calling sayHellow from another script prints "Hello" to stdout:
...
instance.sayHellow()
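For completeness, here is a minimal sketch of the binding-based alternative mentioned above, assuming scriptPath points at your GroovyScripts folder as in the question and that log is left undeclared inside the called script (the file name is just illustrative):
import groovy.lang.Binding
import groovy.lang.GroovyShell

// supply the caller's `log` to the external script through a Binding
def binding = new Binding([log: log])
def shell = new GroovyShell(binding)
def script = shell.parse(new File(scriptPath + "test1Script.groovy"))
// `log` inside sayHellow() is undeclared, so it resolves against the binding
script.sayHellow()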
It is very important to declare the context, testRunner, and log variables in the called script.
script 1: sayHello.groovy
public class groovyUtilities {
    def context
    def testRunner
    def log

    def sayhellowTest() {
        log.info "Hi i'm arpan"
    }
}
script 2: RunAnotherGroovyScript.groovy
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def projectPath = groovyUtils.projectPath
def scriptPath = projectPath + "\\GroovyScripts\\"

// Create a Groovy Script Engine, passing it the directory that holds your scripts.
GroovyScriptEngine gse = new GroovyScriptEngine(scriptPath)
// Load the Groovy script file
externalScript = gse.loadScriptByName("sayHello.groovy")
// Create a runtime instance of the script, handing over context, log and testRunner
instance = externalScript.newInstance(context: context, log: log, testRunner: testRunner)
// Sanity check that the instance is not null
assert instance != null
// Run the sayhellowTest method in the external script
instance.sayhellowTest()
Standard output:
Hi i'm arpan
"I want to keep all groovy utilities test step under one test case and
to call them again and again where ever is needed. Like reading the
test data file etc."
OK, so to me this simply sounds like you have a library of reusable functions and want to be able to call them from any test you might be running.
I suppose you could store them with another test and then call them from the test you're currently running, but SoapUI has a neat feature: you can store your common functions/libraries outside of the SoapUI project.
I have lots of such Groovy libraries, and I store mine under SoapUI's bin/scripts folder. I typically call common functions from a Script Assertion on the test step I'm running. For example, I have a getUserDetails type test step. I can do all the usual assertions against the step, like valid response code, SLA, etc. I can then add a Script Assertion, which lets me run a chunk of Groovy script. That is fine for specific cases, but you wouldn't want to paste in something common, as you would need to update every Script Assertion whenever it changes. Instead, you can call an 'external' Groovy script. Also, the Script Assertion is effectively a method that has log, context and messageExchange passed into it, so there is no need to instantiate your own; just pass them into your external Groovy script...
So, as an illustration...
ValidateUser.groovy (stored in bin/scripts/groovyScripts/yourOrg/common)
package groovyScripts.yourOrg.common; // Package aligns with the folder it's stored in.

class ValidateUser {
    def log = null;
    def context = null;
    def messageExchange = null;

    // Constructor for the class
    ValidateUser(logFromTestStep, contextFromTestStep, messageExchangeFromTestStep) {
        // Assign log, context and messageExchange to the Groovy class.
        this.log = logFromTestStep;
        this.context = contextFromTestStep;
        this.messageExchange = messageExchangeFromTestStep;
    }

    def runNameCheck() {
        // Performs some validation. You have access to the API response via
        // this.messageExchange
        log.info("Running the Name Check");
    }
}
In the test step of interest, go to the assertions and create a 'Script Assertion'. From there you can instantiate your external class and call a method, e.g.:
def validateUserObject = new groovyScripts.yourOrg.common.ValidateUser(log, context, messageExchange);
validateUserObject.runNameCheck();
What I like about these external scripts is that I can use any text editor I like. Also, when I make a change and press Save, SoapUI notices the change in the scripts folder and reloads the script, so there is no need to restart SoapUI.
I want to upload automatically the APK file to the server when building the release version.
To do so, I'm going to use FTP protocol.
I'm new to Gradle scripting. I used these two questions (this and this) as a base, but something is not working.
Could anyone point out what it is?
This is the code (in build.gradle):
gradle.buildFinished {
    println("---- Build finished. This message appears ----")
    task ftp << {
        project.logger.lifecycle('-- This message does not appear --')
        ant {
            taskdef(name: 'ftp',
                    classname: 'org.apache.tools.ant.taskdefs.optional.net.FTP',
                    classpath: configurations.ftpAntTask.asPath)
            def destination = "ftp://xxxxxxxxxx#xxx.surftown.com/xxxxx/"
            def source = null
            android.applicationVariants.all { variant ->
                if ((variant.name).equals("release")) {
                    variant.outputs.each { output ->
                        source = output.outputFile
                    }
                }
            }
            def user = 'xxxxxxxxx'
            def pass = 'xxxxxxxxx'
            ftp(server: source, userid: user, password: pass, remoteDir: destination)
        }
    }
}
gradle.buildFinished registers a hook executed at the end of the build. In your case it just creates the ftp task; nothing ever executes it.
Use this if the build task is involved:
build.finalizedBy(ftp)
Otherwise, to make sure it works whichever task is invoked:
tasks.all*.finalizedBy(ftp)
By the way, this was explained in the comment section of the first answer of your first link.
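Putting it together, here is a rough sketch of what that could look like, assuming an ftpAntTask configuration that provides Ant's optional FTP task; the server, credentials and paths are placeholders, not a definitive implementation:
// build.gradle (sketch)
task ftp {
    doLast {
        ant.taskdef(name: 'ftp',
                    classname: 'org.apache.tools.ant.taskdefs.optional.net.FTP',
                    classpath: configurations.ftpAntTask.asPath)
        // locate the release APK at execution time
        def apk = null
        android.applicationVariants.all { variant ->
            if (variant.name == "release") {
                variant.outputs.each { output -> apk = output.outputFile }
            }
        }
        // upload the APK; Ant's FTP task takes the file as a nested fileset
        ant.ftp(server: 'xxx.surftown.com', userid: 'xxxxxxxxx', password: 'xxxxxxxxx', remoteDir: '/xxxxx/') {
            fileset(file: apk.absolutePath)
        }
    }
}

build.finalizedBy(ftp)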
I am trying to search for a file recursively inside a directory, hence I cannot use findFiles.
I have seen the directories by logging in to the slave manually, but they are not recognized in the code below. When I use isDirectory() it says false, and hence dir.listFiles() later returns null.
Below is the code:
def recursiveFileSearch(File dir, filename, filesPath) {
    File[] files = dir.listFiles() // It returns null here as it cannot recognize it as a directory
    echo "$files"
    for (int i = 0; i < files.size(); i++) {
        if (files[i].isDirectory()) {
            recursiveFileSearch(files[i], filename, filesPath)
        } else {
            if (files[i].getAbsolutePath().contains(filename)) {
                filesPath.add(files[i].getAbsolutePath())
                return filesPath
            }
        }
    }
    return filesPath
}

node('maven') {
    git 'https://github.com/rupalibehera/t3d.git'
    sh 'mvn clean install'
    File currentDir = new File(pwd())
    def isdir = currentDir.isDirectory()
    println "isdir:${isdir}" // The output here is False
    def isexist = currentDir.exists()
    println "isexist:${isexist}" // The output here is False
    def canread = currentDir.canRead()
    println "canread:${canread}" // The output here is False
    def filesPath = []
    def openshiftYaml = recursiveFileSearch(currentDir, "openshift.yml", filesPath)
}
I am not sure what is going wrong here.
But below are some observations:
When I do File currentDir = new File("."), it returns / and starts reading the complete root directory, which I don't want, and even then it does not recognize WORKSPACE as a directory.
It works fine if I run it on the master node, but in my use case it will always be a slave.
I have also checked the permissions of the directory; the user has read/write/execute permissions.
Any pointers/help is appreciated
Generally, run a sh step to do whatever work you need. You may not use java.io.File or the like from a Pipeline script: it does not run on the agent, and it is also insecure, which is why any such attempt will be rejected when sandbox mode is left on (the default).
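For example, a minimal sketch of that approach (the glob and variable names are just illustrative):
// collect matching paths with a shell step that actually executes on the agent
def out = sh(script: "find . -name 'openshift.yml'", returnStdout: true).trim()
def filesPath = out ? out.split('\n') : []
echo "Found: ${filesPath}"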
You are running into the "using java.io.File in a Pipeline" problem. I know it all too well. File objects and NIO work fine for breaking up paths, but their isDirectory, exists and other methods run on the master as part of the Jenkinsfile, not on the node. So everything looks great on the master, because the files are in its workspace, while the same code fails on a node.
In short, don't do that. Use fileExists(), pwd(), findFiles, etc.
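A minimal sketch of the same search using those steps (the glob is just an example; findFiles comes from the Pipeline Utility Steps plugin):
node('maven') {
    git 'https://github.com/rupalibehera/t3d.git'
    sh 'mvn clean install'
    // findFiles runs against the agent's workspace, unlike java.io.File
    def files = findFiles(glob: '**/openshift.yml')
    for (f in files) {
        echo "Found: ${f.path}"
    }
}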
If you created a shared library and want to run unit tests on the code outside of Jenkins, you can create a facade which relies on the script object ('this' from a pipeline).
Class for shared lib
import java.nio.file.Path
import java.nio.file.Paths

class PipelineUtils implements Serializable {
    static def pipelineScript = null;

    /**
     * Set up this facade with access to pipeline script methods
     * @param jenkinsPipelineScript
     * @return
     */
    static initialize(def jenkinsPipelineScript) {
        pipelineScript = jenkinsPipelineScript
    }

    /**
     * Use the pipelineScript object ('this' from the pipeline) to access fileExists.
     * We cannot use Java File objects for detection, as the pipeline script runs on master and uses delegation/serialization to
     * get to the node. So File.exists() will be false if the file was generated on the node and that node isn't master.
     * https://support.cloudbees.com/hc/en-us/articles/230922128-Pipeline-Using-java-io-File-in-a-Pipeline-description
     * @param target
     * @return true if the path exists
     */
    static boolean exists(Path target) {
        if (!pipelineScript) {
            throw new Exception("PipelineUtils.initialize with pipeline script not called - access to pipeline 'this' required for access to file detection routines")
        }
        if (!target.parent) {
            throw new Exception('Please use absolute paths with ${env.WORKSPACE}/path-to-file')
        }
        return pipelineScript.fileExists(target.toAbsolutePath().toString())
    }

    /**
     * Convert a workspace-relative path to an absolute path
     * @param path relative path
     * @return node-specific absolute path
     */
    static def relativeWorkspaceToAbsolutePath(String path) {
        Path pwd = Paths.get(pipelineScript.pwd())
        return pwd.resolve(path).toAbsolutePath().toString()
    }

    static void echo(def message) {
        pipelineScript.echo(message)
    }
}
class for tests
class JenkinsStep {
    static boolean fileExists(def path) {
        return new File(path).exists()
    }

    static def pwd() {
        return System.getProperty("user.dir")
    }

    static def echo(def message) {
        println "${message}"
    }
}
usage in jenkins
PipelineUtils.initialize(this)
println PipelineUtils.exists(".")
// calls jenkins fileExists()
usage in unit tests
PipelineUtils.initialize(new JenkinsStep())
println PipelineUtils.exists(".")
// calls File.exists
I found the answer.
For searching for any file in your workspace from a Jenkinsfile you can use the findFiles step.
I did try this, but I was passing an incorrect glob. Now I just do:
def files = findFiles(glob: '**/openshift.yml') // it returns the paths of the matching files
I am running my SoapUI automation solution on a CI machine, invoked by testrunner.sh.
It is invoked as follows:
/Projects/SoapUI-5.2.1/bin/testrunner.sh ~/sautomation_work/Automation_Project.xml
I would like to stop the whole process in case a certain API call's HTTP status code is not 200.
Any ideas?
Currently, the only way I can do this is by invoking the last test suite, "FinalReport", and disabling the rest of the test cases currently available in the running test suite.
The code is as follows:
public testSuiteStop() {
    def properties = new com.eviware.soapui.support.types.StringToObjectMap();
    def reportTestCase = testRunner.testCase.testSuite.project.getTestSuiteByName("Report").getTestCaseByName("FinalReport");
    reportTestCase.run(properties, true);
    def testSuite = context.testCase.testSuite;
    def totalTestCases = testSuite.getTestCases().size();
    for (testCaseItem in (0..totalTestCases - 1)) {
        testSuite.getTestCaseAt(testCaseItem).setDisabled(true)
    }
}
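For context, this is roughly how it could be triggered from a Groovy script step once a critical check fails; the step name and the "#status#" header lookup are assumptions, so adjust them to however you read the HTTP status in your project:
// sketch only: "GetToken" and the "#status#" response header are placeholders
def response = testRunner.testCase.getTestStepByName("GetToken").testRequest.response
def status = response.responseHeaders["#status#"].toString()
if (!status.contains("200")) {
    testSuiteStop()
}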
I have found a way to exit the current SoapUI execution in case one of the critical tests has already failed.
The idea is to disable the remaining test suites and test cases, and then execute a "final" test suite that reports the failed test execution.
My code looks like this:
import com.eviware.soapui.impl.wsdl.WsdlProject;
import com.eviware.soapui.impl.wsdl.WsdlTestSuite;
import com.eviware.soapui.impl.wsdl.testcase.WsdlTestCase;

WsdlTestSuite testSuite = context.testCase.testSuite;
WsdlProject project = context.testCase.getProject();

// Disable the remaining test cases in the current test suite.
for (WsdlTestCase testCase in testSuite.testCaseList) {
    testCase.setDisabled(true);
}

// Disable the rest of the test suites.
for (WsdlTestSuite testSuiteName in project.testSuiteList) {
    testSuiteName.setDisabled(testSuiteName.name != "LastTestSuite");
}
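If you also want the reporting suite to kick off immediately, something along the lines of the reportTestCase.run call from the earlier snippet could follow; the suite name and case index here are placeholders:
// run the reporting test case synchronously (names are placeholders)
def reportCase = project.getTestSuiteByName("LastTestSuite").getTestCaseAt(0)
reportCase.run(new com.eviware.soapui.support.types.StringToObjectMap(), false)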