I can't work out why my web-based Cucumber tests never terminate. All step definitions pass, but the browser never exits after the last step, so my script hangs.
I'm currently running cuke4duke (0.4.3), Geb (0.5.1), Maven (2.2.1), and selenium-firefox-driver/selenium-chrome-driver (2.0a6 and a7).
I've tested my scripts in Chrome and Firefox (3.6) on Windows XP and Ubuntu, without any success.
Here is the output from my Maven build:
[INFO] Scenario: Navigate from homepage # features/helppage.feature:7
[INFO] Given I am on the homepage # Helppage$_run_closure1#f93ee4
[INFO] When I click on the about page # Helppage$_run_closure2#1c87031
[INFO] Then the title should display "About Google" # Helppage$_run_closure3#1f784d7
[INFO]
[INFO] 1 scenario (1 passed)
[INFO] 3 steps (3 passed)
[INFO] 0m5.421s
HANGING HERE
Env.groovy
import geb.Browser
import org.openqa.selenium.chrome.ChromeDriver;
this.metaClass.mixin(cuke4duke.GroovyDsl)
Before() {
    new Browser(new ChromeDriver(), 'http://www.google.com')
}

After() {
    clearCookies()
}
helppage.groovy
this.metaClass.mixin(cuke4duke.GroovyDsl)
Given (~/I am on the homepage/) {
    go('/')
}

When (~/I click on the about page/) {
    go('/intl/en/about.html')
}

Then (~/the title should display "(.*)"/) { pageTitle ->
    assert title == pageTitle
}
I can't work out whether the problem is in WebDriver, in the cuke4duke distribution, or somewhere else. I'm also not sure how to add more debugging to the Maven build in order to find out what is going wrong.
I think you need to call quit() on the Browser (you'll probably need to save a handle to it in your Before() hook).
I don't have time to test it to be sure (we've moved from cuke4duke to cucumber-groovy), but I think it's actually a Geb Browser issue you're seeing here.
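Untested, but roughly what I mean for Env.groovy: keep a handle to the Browser and quit it in the After hook (this assumes Geb's Browser exposes quit(), which shuts down the underlying WebDriver).
import geb.Browser
import org.openqa.selenium.chrome.ChromeDriver

this.metaClass.mixin(cuke4duke.GroovyDsl)

def browser // keep a handle so the After hook can shut things down

Before() {
    browser = new Browser(new ChromeDriver(), 'http://www.google.com')
}

After() {
    browser.clearCookies()
    browser.quit() // closes the browser/driver so the run can actually finish
}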
Have you tried running it without cuke4duke?
I found cuke4duke quite interesting but decided to drop JRuby entirely (not sure if you are using that) and go with a straight Ruby installation, which is faster and more reliable.
Try updating to the latest JRuby (1.6.0). It might be related to this issue: Cucumber 0.4.3 (cuke4duke) with java + maven gem issues
Can you advise how to configure the plugin verifier so that it returns the same errors as the JetBrains Marketplace does?
Error from the marketplace:
[plugin] depends on plugin com.jetbrains.php that couldn't be resolved with respect to IntelliJ IDEA Ultimate IU-202.8194.7 (2020.2.4)
Note that the [plugin] cannot be installed into IntelliJ IDEA Ultimate IU-202.8194.7 (2020.2.4) without mandatory com.jetbrains.php
Found 1 incompatibility with IntelliJ IDEA Ultimate IU-202.8194.7 (2020.2.4), some of which may be caused by the missing dependencies.
When running runPluginVerifier locally everything is fine:
2020-12-11T13:01:29 [main] INFO verification - Finished 1 of 2 verifications (in 3.1 s): IU-202.8194.7 against com.lokalise.jetbrainsideplugin:1.0.0-alpha: Compatible
2020-12-11T13:01:30 [main] INFO verification - Finished 2 of 2 verifications (in 3.6 s): PS-202.6948.87 against com.lokalise.jetbrainsideplugin:1.0.0-alpha: Compatible
I would like to catch such an error during CI rather than throwing the plugin at the Marketplace team.
Here is the Gradle task configuration (Kotlin DSL):
import org.jetbrains.intellij.tasks.RunPluginVerifierTask
...
tasks.runPluginVerifier {
    ideVersions("PS-202.6948.87,IU-202.8194.7")
    setFailureLevel(RunPluginVerifierTask.FailureLevel.ALL)
}
I struggled with this quite a bit, but finally found it:
tasks {
    runPluginVerifier {
        ideVersions.set(listOf("PS-202.6948.87", "IU-202.8194.7"))
    }
    ...
}
I'm porting a C# library to Kotlin to take advantage of multiplatform. When I run the build task, it fails in the subtask linkDebugTestLinux.
For context, I'm using IDEA Ultimate on Manjaro. I'm certain there's nothing wrong with my code as compileKotlinLinux finishes without error.
There are zero DDG results for "linkDebugTestLinux" and nothing helpful for "konan could not find home" or "kotlin native ...". After hours of stitching together incomplete and outdated examples from the official docs, I've given up.
My build.gradle.kts:
plugins {
    kotlin("multiplatform") version "1.3.40"
}

repositories {
    mavenCentral()
}

dependencies {
    commonMainImplementation("org.jetbrains.kotlin:kotlin-stdlib")
    commonTestImplementation("org.jetbrains.kotlin:kotlin-test-common")
    commonTestImplementation("org.jetbrains.kotlin:kotlin-test-annotations-common")
}

kotlin {
    // js() // wasn't the issue
    linuxX64("linux")
}
Output of task build without args:
> Configure project :
Kotlin Multiplatform Projects are an experimental feature.
> Task :compileKotlinLinux
[...unused param warnings...]
> Task :compileKotlinMetadata
[...unused param warnings...]
> Task :metadataMainClasses
> Task :metadataJar
> Task :assemble
> Task :linuxProcessResources NO-SOURCE
> Task :linuxMainKlibrary
> Task :linkDebugTestLinux FAILED
e: Could not find "/home/username/" in [/home/username/path/to/the/repo, /home/username/.konan/klib, /home/username/.konan/kotlin-native-linux-1.3/klib/common, /home/username/.konan/kotlin-native-linux-1.3/klib/platform/linux_x64].
[...snip...]
BUILD FAILED in 16s
4 actionable tasks: 4 executed
Process 'command '/usr/lib/jvm/java-8-openjdk/bin/java'' finished with non-zero exit value 1
The boilerplate I omitted suggests running with --debug, so I've uploaded that log here.
After some investigation, the problem appears to be in the path. In the debug log there is a /home/yoshi/,/ fragment. Because this directory name was unexpected, the compiler interpreted the comma as a delimiter between library names, so it tried to find a library called /home/yoshi/, which is obviously unavailable.
For now, I would recommend changing the directory name to something plainer (without a comma in it).
I have the following setup to test a directive:
beforeEach(inject(function($compile, $rootScope, $injector) {
    $httpBackend = $injector.get('$httpBackend');
    var html = '<password-strength-bar password-to-check="password"></password-strength-bar>';
    scope = $rootScope.$new();
    elm = angular.element(html);
    $compile(elm)(scope);
    $httpBackend.expectGET('l10n/en.js').respond({});
    $httpBackend.expectGET('tpl/page_signin.html').respond({});
}));
This works fine on a Mac. However, when I run the same code on Linux, it fails with the following error. It is a headless Linux box, but I'm using PhantomJS as my "browsers" in karma.conf.js.
Error: Unsatisfied requests: GET tpl/page_signin.html
I verified that both operating systems are using the same version of Node.
On a similar note, I've installed Chrome and Xvfb (via Jenkins) to run my e2e tests driven by Protractor. The following works fine when running locally on my Mac, but fails on Linux.
it('should render signup when user clicks on "Create one" link', function () {
    var signupLink = element(by.linkText('Create one'));
    expect(signupLink.isDisplayed()).toBe(true);
    signupLink.click();
    expect(element.all(by.css('.wrapper')).first().getText()).
        toMatch(/Hi there, we're so glad you're here./);
});
In Jenkins (on Linux), the error is:
Failures:
1) account signup should render signup when user clicks on "Create one" link
Message:
Expected '' to match /Hi there, we're so glad you're here./.
Stack:
Error: Failed expectation
at Object.<anonymous> (/var/lib/jenkins/jobs/myapp/workspace/tests/e2e/account.js:27:17)
at runMicrotasksCallback (node.js:337:7)
Any idea why tests would run fine on Mac, but not on Linux?
This turned out to be caused by using by.linkText('Create one') for my Protractor test. Once I added an id to the link and used by.id('create-account'), it worked.
I also found that using $('.alert') or by.css('alert') doesn't work nearly as well as by.id, particularly when you click on a button and wait for something to appear on the next screen. For example:
var alert = element(by.id('success'));
browser.driver.wait(protractor.until.elementIsVisible(alert));
UPDATE 8/6:
The beefed-up logging has shown me that there is an issue deleting the old jar from the cache, which leads to the fatal "not found" error. There are other threads similar to this, but only when someone is locking the file with their IDE. We are running a single Groovy script from Jenkins, and no one is logged into this box.
We ran Process Explorer right after the failure and there were no locks. Then I logged in as the user that Jenkins uses to run the script, and I got no error deleting the files.
Also, it seems there was a fix in Ivy 2.1 to not fail when the jar cannot be deleted, and I'm on Ivy 2.2 (Groovy 1.8.4). What gives?
Couldn't delete outdated artifact from cache: C:\Users\myUser\.groovy\grapes\com.a.b.c\x-y-z\jars\x-y-z-1.496.jar
then the false(?) error:
Caught: java.lang.ExceptionInInitializerError
java.lang.ExceptionInInitializerError
Caused by: java.lang.RuntimeException: Error grabbing Grapes -- [unresolved dependency: com.a.b.c#x-y-z;1.+: not found]
at smokeTestSuccess.<clinit>(smokeTestSuccess.groovy)
Interestingly enough, this happens every day the first time the script is run after 5am. I guess the cache gets invalidated by some default config at 5am? Is this some kind of clue?
Original post:
I am intermittently getting an error when running a number of different Groovy scripts that all share an identical @Grab declaration (file names changed to protect the innocent). First, the full Grab declaration:
@GrabResolver(name = 'libs.release', root = 'http://myserver:8081/artifactory/libs-release', m2compatible = 'true')
@Grapes([
    @Grab(group = 'com.a.b.c', module = 'x-y-z', version = '1.+', changing = true),
    @Grab('commons-lang:commons-lang:2.3'),
    @Grab('log4j:log4j:1.2.16'),
    @Grab('gpars:gpars:0.12'),
    @Grab('jsr166y:jsr166y:1.7.0'),
    @Grab('org.codehaus.groovy.modules.http-builder:http-builder:0.6'),
    @Grab('org.apache.commons:commons-collections:3.2.1'),
    @Grab('org.apache.httpcomponents:httpclient:4.2.2'),
    @Grab('org.apache.httpcomponents:httpcore:4.2.3'),
    @Grab('org.cyberneko.html:nekohtml:1.9.17'),
    @Grab('xerces:xercesImpl:2.11.0'),
])
@GrabConfig(systemClassLoader = true)
Then the error:
Caught: java.lang.ExceptionInInitializerError
java.lang.ExceptionInInitializerError
Caused by: java.lang.RuntimeException: Error grabbing Grapes -- [unresolved dependency: com.a.b.c#x-y-z;1.+: not found]
Numerous internet searches suggest the cause is always one of two very simple problems:
1. Repository unreachable
2. Jar file doesn’t exist
However, the Artifactory logs prove that the file actually is being downloaded:
Artifactory did accept the request for download:
2014-07-17 07:58:19,938 [ACCEPTED DOWNLOAD] libs-release-local:com/a/b/c/x-y-z/1.477/x-y-z-1.477.jar for anonymous/165.226.40.155.
Artifactory did deliver the jar:
20140717075820|156|REQUEST|165.226.40.155|non_authenticated_user|GET|/libs-release/com/a/b/c/x-y-z/1.477/x-y-z-1.477.jar|HTTP/1.1|200|1276695
The scripts work nearly 100% of the time if they are simply restarted. This all leads me to believe that the issue is the @Grab timing out. Theoretically, the second time I run the script the file is in the cache, so things happen faster and it doesn't fail.
For the real request above, I can see about 20 seconds of elapsed time in the HTTP log from request to download.
Questions:
Does my theory seem correct?
Is there a way to increase the amount of time that the script will wait for the @Grab to resolve?
Does putting a try/catch block around the @Grab statements seem like a good idea (something like the rough sketch below)? Or will that just hide the real problem?
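For question 3, what I have in mind is roughly the following: an untested sketch that swaps the annotation for the programmatic Grape API (the annotation itself can't be wrapped in try/catch) and retries a few times. The coordinates are the same placeholders as above.
import groovy.grape.Grape

// Rough sketch only: retry the grab a few times before giving up.
def grabWithRetry(int attempts = 3) {
    for (i in 1..attempts) {
        try {
            // same placeholder coordinates as in the @Grab annotation above
            Grape.grab(classLoader: this.class.classLoader,
                       group: 'com.a.b.c', module: 'x-y-z', version: '1.+')
            return
        } catch (Exception e) {
            println "Grab attempt ${i} failed: ${e.message}"
            if (i == attempts) {
                throw e // give up after the last attempt
            }
            sleep(5000) // back off briefly before retrying
        }
    }
}

grabWithRetry()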
thanks in advance!!!!
I think I finally figured out the answer to my own question.
I believe there is some sort of bug in Groovy 1.8.4 (or Ivy 2.2), especially since this mirrors an ancient documented Ivy bug with this exact error message and behavior.
Upgrading to Groovy 2.3.6 (which includes Ivy 2.3) appears to solve the issue.
I also still have no idea why the jars cannot be deleted; nothing is locking them. I experimented with moving the grape cache to a less restricted folder to rule out a permission issue, but this didn't help:
-Dgrape.root=D:\Temp\grapeCache
UPDATE 8/19:
Once we upgraded to Groovy 2.3.6 the error went away, but I then found that the jar was no longer being downloaded at all when using the "1.+" dynamic version. Something in defaultGrapeConfig.xml was causing an issue. Everything finally works properly now that (in addition to the Groovy upgrade) we have overridden defaultGrapeConfig.xml with our own stripped-down file, using this command-line JAVA_OPT:
-Dgrape.config=D:\Temp\myGrapeConfig.xml
which had these contents:
<ivysettings>
    <settings defaultResolver="downloadGrapes"/>
    <resolvers>
        <chain name="downloadGrapes">
        </chain>
    </resolvers>
</ivysettings>
ALSO:
For completeness (further steps):
In the Jenkins GUI, update the job(s):
a. Update the drop-down for each script: Execute Groovy Script > Groovy Version > Groovy-2.3.6
b. Update the JAVA_OPTS for each script (you have to click the 'Advanced' button under the script to see JAVA_OPTS):
-Dgrape.config=D:\Software\SfGrapeConfig.xml
Optional logging switches: -Dgroovy.grape.report.downloads=true -Divy.message.logger.level=4
In the actual Groovy script itself, delete this option from the @GrabResolver annotation: , m2compatible = 'true'
If you get this or a similar error:
"could not find client or server jvm under [Whatever JAVE_HOME is], please check that it is a valid jdk / jre containing the desired type of jvm"
Delete groovy.exe & groovyw.exe from D:\Software\Groovy-2.3.6\bin (if the exe’s do not exist, the Jenkins groovy plugin will use the bat file versions of these, and they handle the 32-bit / 64-bit problem better than the exe’s)
I am trying to test a Mozilla plugin (developed using FireBreath) in the form of an .so shared object file. The plugin was developed on Ubuntu, where it works fine.
I am now trying it under openSUSE, so I first symlinked the .so file into ~/.mozilla/plugins:
> ln -s /path/to/npXXX.so ~/.mozilla/plugins/
... and then ran Firefox (7) from the command line:
> /path/to/firefox -P myprofile
...
LoadPlugin: failed to initialize shared library libXext.so [libXext.so: cannot open shared object file: No such file or directory]
LoadPlugin: failed to initialize shared library /path/to/npXXX.so [/path/to/npXXX.so: undefined symbol: gtk_widget_get_mapped]
# and the LoadPlugin messages do NOT show a second time, probably because the plugin is disabled (via about:addons).
So I thought I'd try a few things to look into this, but first I restarted Firefox and realized that on the second run I no longer get the "LoadPlugin: failed to initialize" messages! Then I tried removing the plugin symlink and restarting Firefox, then adding it again and restarting Firefox: still no error messages!
This tells me that Firefox has probably somehow disabled/blacklisted the plugin (but which one: libXext, npXXX, or both?), yet searching (grepping) for (np)XXX in '/path/to/myprofile/blocklist.xml' returns nothing (the plugin should use an email-like ID, not those numeric GUIDs, so I'd expect that string to show up in blocklist.xml if it were there).
Does anyone know: is it Firefox's default behavior to disable/blocklist plugins that fail to load at first? If so, is there a way to force Firefox to load them again (and spit out the error messages)? Links to where this behavior is documented would also be much appreciated :)
Many thanks in advance for any answers,
Cheers!
Note: after I stopped seeing the error messages, I did the following:
I am trying "about:plugins": "No enabled plugins found";
then trying "about:addons", and clicking under Plugins: "You don't have any add-ons of this type installed";
This plugin is not embedded in an extension, so nothing new should be added in "about:addons" under "Extensions" - and as expected, nothing new shows there. Under Ubuntu (where all works), just by symlinking the plugin to ~/.mozilla/plugins, the above two locations/screens start showing the plugin info.
This one of the things that puzzle me - if it just showed the plugin as "disabled", maybe I would have had a chance to re-enable it again (to get a new batch of error messages) - however, "about:plugins" and "about:addons" simply show nothing - so there's nothing I can use to enable from there. Which tells me Firefox has used a different method to disable the plugin(s) - but I cannot tell what it is...
Firefox has a cache for XPCOM modules (the "fastload cache"); if a module fails to load, Firefox won't try again. The cache is reset automatically if an extension is installed or if the application is updated. Starting with Firefox 4 you can also use the -purgecaches command-line flag to discard the cache.
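For example, adapting the command line from the question, a run that discards the cache would look something like:
> /path/to/firefox -purgecaches -P myprofile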