How to set browser arguments conditionally (Selenium-Jupiter specific)

I have tried so many things to set the proxy in a chrome-in-docker browser.
I finally found something that works, but it isn't the best solution.
@BeforeEach
public void beforeEach(@Arguments("--proxy-server=server:portNum") WebDriver driver) {
    this.registrationPage = new RegistrationPage(driver);
    this.registrationPage.navigateTo();
}
This works when I run the tests in Jenkins (which needs the proxy) but fails the tests when running locally.
Is there a better way to set the proxy server, or to set it conditionally?
My code runs in Java with Maven. I would be fine with adding a system property in Jenkins (-Dis.CI=true or whatever), but I can only figure out how to set these arguments as a method parameter. That definitely won't work for setting them conditionally.
Any other way to set the --proxy-server is greatly appreciated. I would also prefer a way to set this globally. Having to set it in every test class would be a nightmare.
I have tried using WebDriverManager.globalConfig().setProxy("...") and it had no effect. I'm under the assumption that the proxy in the config is different than the proxy-server.

I ended up setting this explicitly in ChromeOptions.
This isn't ideal, but it is the best solution I could find. I would still like a more generic solution that works across all browsers.
I also made an is.CI system property that I set when running in Jenkins. This is necessary because the proxy does not work locally.
@ExtendWith(SeleniumExtension.class)
public class BaseTest {

    @Options
    static ChromeOptions options = new ChromeOptions();

    @BeforeAll
    public static void beforeAll() {
        boolean isCI = Boolean.getBoolean("is.CI");
        if (isCI) {
            options.addArguments("--proxy-server=server:portNum");
        }
    }
}
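If you need the same conditional proxy for browsers other than Chrome, a minimal sketch is below, assuming Selenium's generic Proxy capability instead of the Chrome-only --proxy-server argument; the ProxySetup class and applyProxyIfCI helper are invented names, and server:portNum is the placeholder carried over from the question.

import org.openqa.selenium.Proxy;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.CapabilityType;

public class ProxySetup {
    // Apply the CI proxy only when -Dis.CI=true was passed (e.g. from Jenkins).
    static void applyProxyIfCI(ChromeOptions options) {
        if (Boolean.getBoolean("is.CI")) {
            Proxy proxy = new Proxy();
            proxy.setHttpProxy("server:portNum");
            proxy.setSslProxy("server:portNum");
            // PROXY is a plain capability, so the same call works on
            // FirefoxOptions, EdgeOptions, or any MutableCapabilities subclass.
            options.setCapability(CapabilityType.PROXY, proxy);
        }
    }
}

Since the capability lives on MutableCapabilities, the helper could take that supertype instead of ChromeOptions to serve every browser's options class.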

Related

ZeroCode ParallelLoadExtension (JUnit Jupiter): choosing method order

Is it possible to use some kind of @Before annotation?
I want to 'pre-load' data (POST) before launching my tests (GET).
But I only want parallel execution on the GET.
I was thinking of defining a method with @LoadWith("preload_generation.properties") containing:
number.of.threads=1
ramp.up.period.in.seconds=1
loop.count=1
Just to be sure that we execute it only once.
But it looks like I cannot choose the order of execution, and I need this POST method to be the first one executed.
I also tried putting a @TestMappings with my 'loading method' at the top of the class, but that doesn't work either.
I am not aware of any way ZeroCode could do this, as it is specific to re-leveraging tests already written in JUnit. My suggestion would be to follow a slightly more traditional approach and use standard JUnit setup methods
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}

@Before
public void setup() {
    // setup before each individual test
}
rather than attempting to use a tool outside of its intended purposes.
Since your scenario requires data to be loaded before your tests are executed, especially when they are run under load by ZeroCode, I suggest you create your data in the class-level setup:
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}
While this may take a bit more thought about how you create your data, creating it before all the tests ensures that your load test excludes data setup time.
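As an illustration, here is a minimal sketch of such a setup method, assuming a plain JSON endpoint to seed; the URL, payload, and class name are invented for the example, not taken from the question.

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import org.junit.BeforeClass;

public class PreloadedLoadTest {
    @BeforeClass
    public static void setupClass() throws Exception {
        // POST the seed data once, before any ZeroCode-driven load starts.
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:8080/api/items").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        conn.getOutputStream().write("{\"name\":\"seed\"}".getBytes(StandardCharsets.UTF_8));
        if (conn.getResponseCode() >= 300) {
            throw new IllegalStateException("Data setup failed: HTTP " + conn.getResponseCode());
        }
    }
}

Because @BeforeClass runs exactly once per class, the POST stays out of the timed load regardless of how many threads ZeroCode spins up for the GET tests.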

Setting debug strings dynamically in Loopback 2.0

I use debug strings for debugging Loopback 2.0 application. Loopback documentation says:
The LoopBack framework has a number of built-in debug strings to help
with debugging. Specify a string on the command-line via an
environment variable as follows:
MacOS and Linux
$ DEBUG=<pattern>[,<pattern>...] node .
Is it possible to change patterns dynamically in runtime? Or is it possible to use environment-specific configuration?
Before I go deeper, note that this debug logging facility uses visionmedia's debug module, which handles almost all of the logic.
Is it possible to change patterns dynamically in runtime?
Before any module is loaded, the safest and best way, I believe, is to just manipulate the environment variable:
if (process.env.NODE_ENV === 'development') {
  process.env.DEBUG = process.env.DEBUG + ',loopback:*';
}
Another way would be to load debug and use its .enable method:
require('debug').enable('loopback:*');
But note that this only works if you do it before Loopback is required, since debug allows changes only before its instances are created, which in this case means before loopback is loaded. Another thing: there might be multiple copies of the debug module installed, depending on your dependencies and your package manager (npm@3, npm@2 and yarn behave differently). debug might be in your top-level node_modules directory, or in each module's own node_modules directory. So if you want to do it this way, make sure you require and enable every instance of it.
Now, if you don't want to do it at startup: the API doesn't allow changes at runtime. You can view the discussion regarding this here. There are some dirty ways around it, but they might break in the future, so be careful.
Firstly, there's a module called hot-debug which supposedly makes require('debug').enable work on previously created instances as well. When I tried it, it didn't work perfectly and was buggy, but it's possible it might work fine for you.
If that doesn't work for you, another way is to override the require('debug').log method. If this is defined, debug will call it instead of console.log with the formatted arguments. You can set DEBUG=* and then filter yourself:
require('debug').log = function (string) {
  // String.prototype.includes (not .contains) is the real method
  if (string.includes('loopback:security')) {
    console.log(string);
  }
};
This way will be slow in production, though, as all the debug output gets formatted before being filtered, even when nothing ends up on the console.
Another option is to override the require('debug').init method, which is called every time a new debug instance is created. Since every debug instance uses an enabled property to check whether it's enabled, we can toggle that:
const debug = require('debug');
const { init } = debug;
const instances = [];

debug.init = function (debugInstance) {
  init(debugInstance);
  instances.push(debugInstance);
};

// Call this later to enable a given namespace, e.g. loopback:security:acl
function enableNamespace(namespace) {
  instances.forEach(instance => {
    instance.enabled = instance.namespace === namespace;
  });
}
There's a lot of room for improvement here, but you get the idea.
I can change debug namespaces dynamically with the hot-debug module.
In my app, I've just created a function for this :
require('hot-debug')
const debug = require('debug')

function forceDebugNamespaces (namespaces) {
  debug.enable(namespaces)
}

// Usage
forceDebugNamespaces('*,-express:*,-nodemon:*,-nodemon')
In my case, I have a config file which allows me to set process.env.DEBUG, but I needed a way to update debug namespaces without restarting my app (the app watches the config file for changes).

How do I keep the browser open after a coded ui test finishes?

I'm using Visual Studio 2012 Coded UI tests for a web application. I have a test for logging into the app which starts the browser, locates the login dialogue, enters credentials, and then clicks ok. I have an assertion which checks for the correct url after the login. This test appears to function correctly. My problem is that it closes the browser after the test runs. I need to keep the browser open, so I can run the next test in my sequence. How do I do this?
At the moment, I don't have anything in my [TestCleanup()] section. I'm assuming that what I'm looking for goes here, but so far I haven't had a lot of luck figuring out what that is supposed to be.
I don't have the original source where I found this solution :(
You can have a method like the one shown below. This method needs to be called in your test setup. Also declare a class-level variable _browserWindow of type BrowserWindow.
private void SetBrowser()
{
    if (_browserWindow == null)
    {
        BrowserWindow.CurrentBrowser = "ie";
        _browserWindow = BrowserWindow.Launch("http://www.google.com");
        _browserWindow.CloseOnPlaybackCleanup = false;
        _browserWindow.Maximized = !_browserWindow.Maximized;
    }
    else
    {
        BrowserWindow.CurrentBrowser = "ie";
        _browserWindow = BrowserWindow.Locate("Google");
        _browserWindow.Maximized = !_browserWindow.Maximized;
    }
}
OK, so what I needed to happen was the launch and login before each test. I thought I wanted to run the browser-and-login test first, followed by each additional test. After reading more, I decided what I actually wanted was to run this logic as initialization code for each test. I've done that by adding the code to the default [TestInitialize()] generated when I started the Coded UI project in Visual Studio 2012.
I have found the following method to work for my data-driven Coded UI test in Visual Studio 2015.
You will want to use [ClassInitialize] to get your browser open and direct it to where your [TestMethod] begins.
Use [ClassCleanup] to release the resources after all the methods in the test class have been executed.
You can redirect test methods differently after the class has been initialized by using [TestInitialize], and clean up tests using [TestCleanup]. Be careful with those, though, because they run for each test method, and if one closes your browser instance your following tests will fail.
private static BrowserWindow browserWindow = null;

[ClassInitialize]
public static void ClassInitialize(TestContext context)
{
    Playback.Initialize();
    browserWindow = BrowserWindow.Launch(new Uri("http://198.238.204.79/"));
}

[ClassCleanup]
public static void ClassCleanup()
{
    browserWindow.Close();
    Playback.Cleanup();
}

Using selenium web driver to run test on multiple browsers

I'm trying to run the same test across multiple browsers through a for loop, but it always runs only on Firefox.
bros = ['FIREFOX', 'CHROME', 'INTERNET EXPLORER']
for bro in bros:
    print "Running " + bro + "\n"
    browser = webdriver.Remote(
        command_executor='http://10.236.194.218:4444/wd/hub',
        desired_capabilities={'browserName': bro,
                              'javascriptEnabled': True})
    browser.implicitly_wait(60000)
    browser.get("http://10.236.194.156")
One interesting observation: when I include the parameter platform: WINDOWS, it runs only on Internet Explorer.
Does Selenium WebDriver work this way, or is my understanding wrong?
I've actually done this in Java; the following works well for me:
...
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
...
DesiredCapabilities[] browsers = {DesiredCapabilities.firefox(), DesiredCapabilities.chrome(), DesiredCapabilities.internetExplorer()};
for (DesiredCapabilities browser : browsers) {
    try {
        System.out.println("Testing in Browser: " + browser.getBrowserName());
        driver = new RemoteWebDriver(new URL("http://127.0.0.1:4444/wd/hub"), browser);
        ...
You will of course need to adapt this if you're writing your tests in a different language; I know it's possible in Java, but I'm not sure otherwise.
Also, I agree with what you're trying to do: it is much better to have a class that runs the same tests with different browsers than to duplicate code many times over. If you are doing this in Java or similar, I also highly suggest using a Page Object.
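A minimal sketch of that Page Object idea follows; the page, URL, and locators are invented for illustration.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Each page action becomes a method, so tests read as intent rather
    // than element lookups, and work unchanged for every browser.
    public void logIn(String user, String password) {
        driver.get("http://127.0.0.1/login");
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
    }
}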
Good luck!
So if I understood you right, you have one test case and want it tested against different browsers.
I don't think a loop is a good idea, even if it's possible (I don't know at the moment).
The idea is to be able to run every test case standalone with a specific browser (that's the JUnit philosophy), not to run them all in order to get to that specific browser.
So you need to create a WebDriver with the specific browser for the specific test case.
I suggest you separate test cases by creating a test-case class file for each browser,
like FirefoxTestOne.java, IeTestOne.java, ChromeTestOne.java.
Note that you can add multiple Firefox tests to FirefoxTestOne without problems. There's no guarantee that they will be executed in a particular order, though (JUnit philosophy).
For links and tutorials, ask Google; there are already lots of examples written.
You will have to generate multiple test classes (or WebDriver instances) with the chosen browsers.
A WebDriver is defined with one browser.
As Coretek said, you need multiple WebDriver instances. You will need to run the selenium-server .jar file and provide each instance with an argument specifying the browser you want that instance of the server to run.
The argument for Internet Explorer is *iexplore, for Firefox *firefox, and for Chrome *chrome. These are -forcedBrowserMode arguments; without one, Selenium won't know what it should run against. You may need to use *iexploreProxy for your tests; it sometimes works better than *iexplore mode.
Check out this link for more arguments that may be useful:
http://seleniumforum.forumotion.net/t89-selenium-server-command-options-while-starting-server
This way (the attached URL) worked for me.
http://blog.varunin.com/2011/07/running-selenium-tests-on-different.html
The following point is different from the example:
@Parameters
public static List<Object[]> data() {
    return Arrays.asList(new Object[][]{{"firefox"}, {"ie"}});
}

@Before
public void setUp() throws Exception {
    System.out.println("browser: " + browser);
    if (browser.equalsIgnoreCase("ie")) {
        System.setProperty("webdriver.ie.driver", "IEDriverServer64.exe");
        driver = new InternetExplorerDriver();
    } else if (browser.equalsIgnoreCase("firefox")) {
        driver = new FirefoxDriver();
    }
}
You can use TestNG for this.
The combination of Selenium + TestNG gives you a better result here:
just by adding the parameters attribute you can do this.
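To make that concrete, here is a minimal sketch assuming TestNG's @Parameters; the parameter name, class name, and testng.xml entry are invented for the example.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Parameters;

public class CrossBrowserTest {
    private WebDriver driver;

    // testng.xml supplies the value, e.g. <parameter name="browser" value="chrome"/>;
    // defining one <test> block per browser runs this class once per browser.
    @Parameters("browser")
    @BeforeClass
    public void setUp(String browser) {
        if (browser.equalsIgnoreCase("chrome")) {
            driver = new ChromeDriver();
        } else {
            driver = new FirefoxDriver();
        }
    }
}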
Have you considered using the composite design pattern to create a CompositeWebDriver that actually drives multiple component WebDrivers (such as Chrome, Gecko, ...)? To this end, you would wrap the drivers in a new class (e.g. CompositeWebDriver) that just delegates its calls to all the actual WebDrivers.
This could also be done with various instances of RemoteWebDriver as components.
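A minimal sketch of that composite idea, delegating just two calls for brevity; a full version would implement the entire WebDriver interface the same way.

import java.util.List;
import org.openqa.selenium.WebDriver;

public class CompositeWebDriver {
    private final List<WebDriver> drivers;

    public CompositeWebDriver(List<WebDriver> drivers) {
        this.drivers = drivers;
    }

    // Every call fans out to all component drivers, so a single test run
    // exercises each browser in turn.
    public void get(String url) {
        for (WebDriver d : drivers) {
            d.get(url);
        }
    }

    public void quit() {
        for (WebDriver d : drivers) {
            d.quit();
        }
    }
}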

JMeter BSF using Groovy: import one script's functions into another

I use Groovy in my JMeter BSF, and sometimes I have functions that are used frequently enough to be moved into a script which I can then use as a library.
My approach was to create a file, say "library.groovy", and add there some function
public void function()
{
    println("hello!");
}
and then use the following code in my BSF script:
import library.groovy;
function();
Both files lie in the same directory, but the script refuses to locate the library. I also tried explicitly wrapping the function in a class, but that had no effect either.
Can anyone suggest a solution for this?
Update:
I've tried almost all the solutions described on the internet, and everything that works in the Groovy console or Eclipse does not work in JMeter. Probably that is because of BSF. Does anyone know a workaround?
I just had this problem, and solved it in a way that seems, to me, nicer-looking. It is basically winstaan74's answer, but with the extra bits needed to make it work.
You have your functions' Groovy file, named, say, MyJmeterFunctions.groovy:
package My.JmeterFunctions

public class MyHelloClass {
    public void hello() {
        println("Hello!");
    }
}
Then you compile this from the terminal:
$ groovyc -d myJmeterFunctions MyJmeterFunctions.groovy
and turn it into a .jar inside the /lib folder of your JMeter install, alongside all the other .jar files that came with JMeter:
$ jar cvf /<>/apache-jmeter-2.8/lib/myJmeterFunctions.jar -C myJmeterFunctions .
Now restart JMeter; it won't know about your new .jar until you do.
Lastly, you have the script that you want to run the hello() function from, which your JMeter BSF assertion/listener/whatever points to:
import My.JmeterFunctions.*
def my_hello_class_instance = new MyHelloClass();
my_hello_class_instance.hello();
And this is what worked for me. If you'd rather organize your .jar into a different folder than JMeter's /lib, I believe you can run JMeter using (from here):
$jmeter -Jsearch_paths=/path/to/yourfunction.jar
But I haven't tried that myself.
I ended up having 2 files like below:
"MyHelloClass.groovy"
public class MyHelloClass {
    public void hello() {
        println("Hello!");
    }
}
And "HelloScript.groovy"
try {
    ClassLoader parent = getClass().getClassLoader();
    GroovyClassLoader loader = new GroovyClassLoader(parent);
    Class groovyClass = loader.parseClass(new File("../GroovyScripts/MyHelloClass.groovy"));
    GroovyObject helloClass = (GroovyObject) groovyClass.newInstance();
    helloClass.hello();
}
catch (Throwable e) {
    println(e.toString());
}
Then I can run "HelloScript.groovy" in BSF from Jmeter.
I think you'll need to wrap your helper methods in a class and then import that class. So your helper methods file should contain:
package library

class UsefulFunctions {
    static void function() {
        println 'hello'
    }
}
And then in your test script, say
import static library.UsefulFunctions.*
function()
Now, this is just scratching the surface, but I hope it's enough to get you started.
