Spock test green although app code threw Exception - Groovy

I'm using Groovy and Gradle for my testing.
I have the following lines in my app code:
Platform.runLater( new Runnable() {
    void run() {
        // ...
        FileChooser fc = getFileChooser()
        File file = fc.showOpenDialog( null )
        // if( file != null && file.exists() ) { <--- this is what I have to put
        if( file.exists() ) {
            println( "file $file exists" )
            analyseFile( file )
        }
If I mock the FileChooser (using GroovyMock, because javafx.stage.FileChooser is final) so that fc.showOpenDialog returns null, I would expect a NullPointerException to be thrown on file.exists()... and one is.
But this doesn't show up in the test results, which are all green. The only way to find out that it happened is to look at the test results for the class: there you see that the grey "StdErr" button is present.
This appears to be because the enveloping Runnable is "swallowing" it...
Is there any way in Spock to make an Exception inside a Runnable lead to test failure?
PS test code:
def "getting a null result from the showOpenDialog should be handled OK"(){
given:
FileChooser fc = GroovyMock( FileChooser )
ConsoleHandler ch = Spy( ConsoleHandler ){ getFileChooser() >> fc }
ch.setMaxLoopCount 10
systemInMock.provideLines( "o" )
fc.showOpenDialog( _ ) >> null
when:
com.sun.javafx.application.PlatformImpl.startup( {} )
ch.loop()
Thread.sleep( 1000L ) // NB I know this is a rubbish way to handle things... but I'm a newb with Spock!
then:
0 * ch.analyseFile( _ )
}
Here's the SSCCE for the app code:
class ConsoleHandler {
    int loopCount = 0
    def maxLoopCount = Integer.MAX_VALUE - 1
    def response
    FileChooser fileChooser = new FileChooser()

    void analyseFile( File file ) {
        // ...
    }

    void loop() {
        while( ! endConditionMet() ) {
            print '> '
            response = System.in.newReader().readLine()
            if( response ) response = response.toLowerCase()
            if( response == 'h' || response == 'help' ) {
                println "Help: enter h for this help, q to quit"
            }
            else if( response == 'o' ) {
                Platform.runLater( new Runnable() {
                    void run() {
                        FileChooser fc = getFileChooser()
                        File file = fc.showOpenDialog( null )
                        // if( file != null && file.exists() ) {
                        if( file.exists() ) {
                            analyseFile( file )
                        }
                    }
                })
            }
        }
    }

    boolean endConditionMet() {
        loopCount++
        response == 'q' || loopCount > maxLoopCount
    }

    static void main( args ) {
        com.sun.javafx.application.PlatformImpl.startup( {} )
        new ConsoleHandler().loop()
        com.sun.javafx.application.PlatformImpl.exit()
    }
}
The only other thing that might require an explanation is systemInMock in the Spock test code, which lets the test supply text to StdIn in the app code:
import org.junit.contrib.java.lang.system.TextFromStandardInputStream
import static org.junit.contrib.java.lang.system.TextFromStandardInputStream.emptyStandardInputStream

....

// field of the Test class, of course:
@Rule
public TextFromStandardInputStream systemInMock = emptyStandardInputStream()
The corresponding line in my Gradle dependencies clause is:
testCompile 'com.github.stefanbirkner:system-rules:1.16.0'
... but I think this could be obtained by using a @Grab in the Groovy test file, right?
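For what it's worth, a minimal sketch of the @Grab equivalent (same coordinates as the Gradle dependency above):

// Grape fetches the system-rules jar at compile time, replacing the Gradle dependency
@Grab('com.github.stefanbirkner:system-rules:1.16.0')
import org.junit.contrib.java.lang.system.TextFromStandardInputStream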
I think I now realise this question is really about how best to do JavaFX-app-thread testing in Spock... EDT testing with Swing was always a problem, and no doubt similar considerations apply here. And it's all coming back to me: once Platform has been handed the Runnable, the calling code has moved on, so it would make no sense for runLater to rethrow an Exception. I believe the only realistic option is for run() to delegate to some other method which the test code can call directly...
I have searched for "Spock JavaFX testing" but not much has come up...
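To illustrate the swallowing: here is a minimal sketch of my own (not from the app code) that captures an exception thrown on the JavaFX application thread, assuming the toolkit has already been started as above:

import javafx.application.Platform
import java.util.concurrent.CountDownLatch

def latch = new CountDownLatch( 1 )
def thrown = null
Platform.runLater {
    try {
        throw new NullPointerException( 'boom' )  // stands in for file.exists() on null
    } catch( Throwable t ) {
        thrown = t  // capture it ourselves; runLater never propagates it to the caller
    } finally {
        latch.countDown()
    }
}
latch.await()
assert thrown instanceof NullPointerException  // the test thread can now fail meaningfully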

This is what I imagine to be the only practical way forward: change this bit as follows:
else if( response == 'o' ) {
    Platform.runLater( new Runnable() {
        void run() {
            openFileChooser()
        }
    })
}
New method for ConsoleHandler:
def openFileChooser() {
    FileChooser fc = getFileChooser()
    File file = fc.showOpenDialog( null )
    // if( file != null && file.exists() ) {
    if( file.exists() ) {
        analyseFile( file )
    }
}
New Spock test:
def "getting a null result from the showOpenDialog should be handled OK2"(){
given:
FileChooser fc = GroovyMock( FileChooser )
ConsoleHandler ch = Spy( ConsoleHandler ){ getFileChooser() >> fc }
fc.showOpenDialog( _ ) >> null
when:
com.sun.javafx.application.PlatformImpl.startup( {} )
ch.openFileChooser()
Thread.sleep( 1000L )
then:
0 * ch.analyseFile( _ )
}
Test fails (hurrah!):
java.lang.NullPointerException: Cannot invoke method exists() on null object
But I'd welcome input about whether this can be improved, from more experienced Groovy/Spock hands.
Indeed, I was slightly puzzled why I didn't get an "incorrect state" error when I called openFileChooser in the when: clause. I presume this is because calling showOpenDialog on a mock FileChooser doesn't actually involve any check that we're currently on the JavaFX application thread. It turns out the ...PlatformImpl.startup( {} ) and Thread.sleep lines can be deleted from that test. Is this ideal practice though?! Hmmm...
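If the real FX thread does need to be exercised, one way to avoid the arbitrary Thread.sleep is to block on a latch until the Runnable has actually run. A sketch of my own (reusing the ch spy and openFileChooser from the test above; the 5-second timeout is an arbitrary choice):

import javafx.application.Platform
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

def latch = new CountDownLatch( 1 )
Platform.runLater {
    try { ch.openFileChooser() } finally { latch.countDown() }
}
// fail fast instead of sleeping a fixed 1000 ms
assert latch.await( 5, TimeUnit.SECONDS )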
Later
Closing this as the only answer... in reality the same sorts of considerations apply to this question as to testing JavaFX-app-thread code in plain old Java using plain old JUnit. Spock + Groovy may pose some problems, or they may make things easier... I am not yet experienced enough with them to know.

Related

Null and empty check in one go in groovy

Can someone please clarify the issue below?
The following validation throws a NullPointerException when null is passed in myVar, because of !myVar.isEmpty():

if( myVar != null || !myVar.isEmpty() ) {
    // some code
}
The following works as expected, though:

if( myVar != null ) {
    if( !myVar.isEmpty() ) {
        // some code
    }
}

Is there any other way of doing both checks in one go?
If .isEmpty() is used on a String, then you can also just use Groovy "truth" directly, as both null and empty strings evaluate to false:

[null, "", "ok"].each {
    if( it ) {
        println it
    }
}
// -> ok
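Applied to the question's myVar (assuming it is a String), the two checks collapse to just:

if( myVar ) {
    // some code
}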
if( myVar != null && !myVar.isEmpty() ) {
    // some code
}

is the same as

if( !( myVar == null || myVar.isEmpty() ) ) {
    // some code
}
And to make it shorter, it's better to add a method like hasValues(); then the check could look like this:

if( myVar?.hasValues() ) {
    // code
}
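(hasValues() is not a built-in Groovy method; here is a minimal sketch of what such a hypothetical helper might look like, with MyVar and value as invented names:)

class MyVar {
    String value
    // hypothetical helper: true when there is a non-empty value to work with
    boolean hasValues() {
        value != null && !value.isEmpty()
    }
}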
And finally, to make it groovier, implement a boolean asBoolean() method:

class MyClass {
    String s = ""
    boolean isEmpty() {
        return s == null || s.length() == 0
    }
    boolean asBoolean() {
        return !isEmpty()
    }
}

def myVar = new MyClass( s: "abc" )

// in this case your check can be veeery short:
// the following means myVar != null && myVar.asBoolean() == true
if( myVar ) {
    // code
}

to def or not to def #groovy

There is a Groovy script that has a function defined and used in multiple threads. I found that from time to time it mixes variable values with those of other threads. The problem appears when the developer forgets to declare a variable, like this:
def f( x ) {
    y = "" + x
    println y
}
The problem disappears when the developer declares the variable:
def f( x ) {
    def y = "" + x
    println y
}
In classes there is no way to use undeclared variables. The reason is that in a script, an undeclared variable acts as an instance variable of the generated script class: it is actually an entry in the script's binding, which exists so that external variables can be passed into the script.
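A minimal sketch of the equivalence (my own illustration, assuming a plain script run via GroovyShell):

// undeclared assignment goes into the script's binding (shared state)
y = 'hello'
assert binding.getVariable( 'y' ) == 'hello'

// 'def' creates a genuine local variable instead, invisible to the binding
def z = 'world'
assert !binding.hasVariable( 'z' )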
Here is part of a script that demonstrates the problem of using undeclared variables across several threads:
void f( String x ) {
    y = "" + x  // if you put def at this line it'll work fine
    Thread.sleep( 333 )
    // usually developers expect `y` to be a local variable,
    // but without a declaration it belongs to the script class
    if( !y.equals( x ) ) println( "failure: x=$x y=$y" )
}

// thread 1 start
Thread.start {
    for( int i = 0; i < 20; i++ ) {
        f( i.toString() )
        Thread.sleep( 100 )
    }
}

// thread 2 start
Thread.start {
    for( int i = 0; i < 20; i++ ) {
        f( i.toString() )
        Thread.sleep( 150 )
    }
}

// main thread sleep
Thread.sleep( 2000 )
println( "done" )
This code will print out failures whenever x does not (literally) equal y.
You can write a compiler configuration that uses a scriptBaseClass to disallow undeclared variables, i.e. any use of the script's own binding.
This is the base script (my DefBase.groovy file):
abstract class NoUndefShallPass extends Script {
    void setProperty( String name, val ) {
        // seems like Groovy itself sets 'args' in the binding, probably from the command line
        assert name == 'args',
                "Error in '$name'; variables should be declared using 'def'"
    }
}

def configuration = new org.codehaus.groovy.control.CompilerConfiguration()
configuration.setScriptBaseClass( NoUndefShallPass.class.name )

def shell = new GroovyShell( this.class.classLoader, new Binding(), configuration )
shell.evaluate new File( '/tmp/Defing.groovy' )
And the script. It will throw an AssertionError if setProperty tries to use the binding:
void f( String x ) {
    y = "" + x  // if you put def at this line it'll work fine
    Thread.sleep( 333 )
    if( !y.equals( x ) ) println( "failure: x=$x y=$y" )
}

def t1 = Thread.start {
    20.times { i ->
        f( i.toString() )
        Thread.sleep( 100 )
    }
}

def t2 = Thread.start {
    20.times { i ->
        f( i.toString() )
        Thread.sleep( 150 )
    }
}

Thread.sleep( 2000 )
t1.join()
t2.join()
println( "done" )
It's (poorly) described as "the binding" here:
http://groovy.codehaus.org/Scoping+and+the+Semantics+of+%22def%22

How to build a reinitializable lazy property in Groovy?

This is what I'd like to do:
class MyObject {
    @Lazy volatile String test = {
        // initialize with network access
    }()
}

def my = new MyObject()
println my.test

// Should clear the property but throws groovy.lang.ReadOnlyPropertyException
my.test = null

// Should invoke a new initialization
println my.test
Unfortunately lazy fields are read-only in Groovy, and clearing the property leads to an exception.
Any idea how to make a lazy field reinitializable without reimplementing the double-checked locking logic provided by the @Lazy annotation?
UPDATE:
Considering soft=true (from the first answer) made me run a few tests:
class MyObject {
    @Lazy volatile String test = {
        // initialize with network access
        println 'init'
        Thread.sleep( 1000 )
        'test'
    }()
}

def my = new MyObject()
//my.test = null
10.times { zahl ->
    Thread.start { println "$zahl: $my.test" }
}
This gives the following output in my Groovy console after approximately one second:
init
0: test
7: test
6: test
1: test
8: test
4: test
9: test
3: test
5: test
2: test
This is as expected (and wanted). Now I add soft=true and the result changes dramatically; it takes 10 seconds:
init
init
0: test
init
9: test
init
8: test
init
7: test
init
6: test
init
5: test
init
4: test
init
3: test
init
2: test
1: test
Maybe I'm doing the test wrong, or soft=true destroys the caching effect completely. Any ideas?
Can't you use the soft attribute of Lazy, i.e.:
class MyObject {
    @Lazy( soft = true ) volatile String test = {
        // initialize with network access
    }()
}
Edit
With soft=true, the annotation generates a setter and a getter like so:
private volatile java.lang.ref.SoftReference $test

public java.lang.String getTest() {
    java.lang.String res = $test?.get()
    if( res != null ) {
        return res
    } else {
        synchronized( this ) {
            if( res != null ) {
                return res
            } else {
                res = {
                }.call()
                $test = new java.lang.ref.SoftReference( res )
                return res
            }
        }
    }
}

public void setTest( java.lang.String value ) {
    if( value != null ) {
        $test = new java.lang.ref.SoftReference( value )
    } else {
        $test = null
    }
}
Without soft=true, you don't get a setter:

private volatile java.lang.String $test

public java.lang.String getTest() {
    java.lang.Object $test_local = $test
    if( $test_local != null ) {
        return $test_local
    } else {
        synchronized( this ) {
            if( $test != null ) {
                return $test
            } else {
                return $test = {
                }.call()
            }
        }
    }
}
So the variable is read-only. Not currently sure if this is intentional, or a side-effect of using soft=true though...
Edit #2
This looks like it might be a bug in the implementation of @Lazy with soft=true. If we change the getter to:
public java.lang.String getTest() {
    java.lang.String res = $test?.get()
    if( res != null ) {
        return res
    } else {
        synchronized( this ) {
            // Get the reference again rather than just check the existing res
            res = $test?.get()
            if( res != null ) {
                return res
            } else {
                res = {
                    println 'init'
                    Thread.sleep( 1000 )
                    'test'
                }.call()
                $test = new java.lang.ref.SoftReference<String>( res )
                return res
            }
        }
    }
}
I think it's working... I'll work on a bugfix
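In the meantime, a reinitializable lazy property can be hand-rolled without @Lazy. A minimal sketch using AtomicReference (testRef and expensiveInit are my own invented names):

import java.util.concurrent.atomic.AtomicReference

class MyObject {
    private final AtomicReference<String> testRef = new AtomicReference<String>()

    String getTest() {
        String value = testRef.get()
        if( value == null ) {
            synchronized( this ) {
                value = testRef.get()
                if( value == null ) {
                    value = expensiveInit()  // e.g. the network access
                    testRef.set( value )
                }
            }
        }
        return value
    }

    // assigning null clears the cached value, so the next read re-initializes
    void setTest( String value ) { testRef.set( value ) }

    private String expensiveInit() { 'test' }
}

def my = new MyObject()
println my.test   // initializes
my.test = null    // clears
println my.test   // initializes again

This does reimplement the double-checked locking the question hoped to avoid, but at least it is small and self-contained.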

Xtext: Scope method not executed?

I've got this IScope method:
IScope scope_Assignment(AssignmentOrFBCall a, EReference ref) {
    System.out.println(a.toString());
    return IScope.NULLSCOPE;
}
but it doesn't produce any results: nothing appears in the output from the println, and the content assist does not change. So I thought it wasn't being executed, but if I add a breakpoint, it is hit.
So, where is the problem?
The grammar rule is this:
AssignmentOrFBCall:
    (((variable=[VariableDefinition]) |
    ((variableArray=[ArrayDefinition] '[' index=ExpressionIndex ']') ('.' internalVariable=InternalRecursive)?) |
    (variableStructOrFB=[VariableDefinition] '.') (internalVariable=InternalRecursive))
    ((':=' expression=Expression) | ('(' (parameter=FBParameter | ')'))))
;
Solved with this scope provider:
@Override
protected IScope createLocalVarScope(IScope parentScope,
        LocalVariableScopeContext scopeContext) {
    if (scopeContext != null && scopeContext.getContext() != null) {
        EObject context = scopeContext.getContext();
        if (context instanceof Program) {
            Program program = (Program) context;
            return Scopes.scopeFor(program.getDeclarations());
        }
    }
    return super.createLocalVarScope(parentScope, scopeContext);
}

TcpListener.AcceptSocket( ) behavior: gets stuck in one app upon termination, but not in another?

I have two TCP-server apps that are based on the same code but for some reason exhibit different behavior, and I'm ready to pull my hair out trying to figure out why. The code pattern is as follows:
public class TcpServer
{
    public static void Start( bool bService )
    {
        ..
        oTcpListnr= new TcpListener( ip, iOutPort );
        aTcpClient= new ArrayList( );
        bListen= true;
        oTcpListnr.Start( );
        thOutComm= new Thread( new ThreadStart( AcceptTcpConn ) );
        thOutComm.Name= "App-i.AcceptTcpConn";
        thOutComm.Start( );
        ..
    }

    public static void Stop( )
    {
        bListen= false;
        if( thOutComm != null )
        {
            thOutComm.Join( iTimeout );
            thOutComm= null;
        }
        if( oTimer != null )
        {
            oTimer.Change( Timeout.Infinite, Timeout.Infinite );
            oTimer.Dispose( );
        }
    }

    public static void AcceptTcpConn( )
    {
        TcpState oState;
        Socket oSocket= null;
        while( bListen )
        {
            try
            {
                // if( oTcpListnr.Pending( ) )
                {
                    oSocket= oTcpListnr.AcceptSocket( );
                    oState= new TcpState( oSocket );
                    if( oSocket.Connected )
                    {
                        Utils.PrnLine( "adding tcp: {0}", oSocket.RemoteEndPoint.ToString( ) );
                        Monitor.Enter( aTcpClient );
                        aTcpClient.Add( oState );
                        Monitor.Exit( aTcpClient );
                        oSocket.SetSocketOption( SocketOptionLevel.IP, SocketOptionName.DontFragment, true );
                        oSocket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.DontLinger, true );
                        // / oSocket.BeginReceive( oState.bData, 0, oState.bData.Length, SocketFlags.None, // no need to read
                        // /     new AsyncCallback( AsyncTcpComm ), oState ); // for output only
                    }
                    else
                    {
                        Utils.PrnLine( "removing tcp: {0}", oSocket.RemoteEndPoint.ToString( ) );
                        Monitor.Enter( aTcpClient );
                        aTcpClient.Remove( oState );
                        Monitor.Exit( aTcpClient );
                    }
                }
                // Thread.Sleep( iTcpWake );
            }
            #region catch
            catch( Exception x )
            {
                bool b= true;
                SocketException se= x as SocketException;
                if( se != null )
                {
                    if( se.SocketErrorCode == SocketError.Interrupted )
                    {
                        b= false;
                        if( oSocket != null )
                            Utils.PrnLine( "TcpConn:\tclosing tcp: {0} ({1})", oSocket.RemoteEndPoint.ToString( ), se.SocketErrorCode );
                    }
                }
                if( b )
                {
                    Utils.HandleEx( x );
                }
            }
            #endregion
        }
    }
}
I omitted exception handling in the Start/Stop methods for brevity. The variation in behavior occurs during program termination: one app shuts down almost immediately, while the other gets stuck in the oTcpListnr.AcceptSocket( ) call. I know that this is a blocking call, but in that case why does it not present an issue for the first app?
Usage of this class cannot be any simpler, e.g. for a command-line tool:
class Program
{
    public static void Main( string[] args )
    {
        TcpServer.Start( false );
        Console.Read( );
        Console.WriteLine( "\r\nStopping.." );
        TcpServer.Stop( );
        Console.WriteLine( "\r\nStopped. Press any key to exit.." );
        Console.Read( );
    }
}
Whether any clients have connected or not makes no difference; the second app always gets stuck.
I found a potential solution (the commented lines) by checking TcpListener.Pending( ) prior to the .AcceptSocket( ) call, but this immediately affects CPU utilization, so including something like Thread.Sleep(.) becomes a must. Altogether, though, I'd rather avoid this approach if possible, because of the extra connection wait times and CPU utilization (small as it is).
Still, the main question is: what may cause the exact same code to execute differently? Both apps are compiled on .NET 4 Client Profile, x86 (32-bit), no specific optimizations. Thank you in advance for good ideas!
Finally found the root cause: I had missed a couple of important lines [hidden in a #region] in the Stop( ) method, and that is what starts the ball rolling. Here's how it should look:
public static void Stop( )
{
    bListen= false;
    if( thOutComm != null )
    {
        try
        {
            oTcpListnr.Stop( );
        }
        catch( Exception x )
        {
            Utils.HandleEx( x );
        }
        thOutComm.Join( iTimeout );
        thOutComm= null;
    }
    if( oTimer != null )
    {
        oTimer.Change( Timeout.Infinite, Timeout.Infinite );
        oTimer.Dispose( );
    }
}
The call to TcpListener.Stop( ) kicks the wait cycle inside .AcceptSocket( ) out with an "A blocking operation was interrupted by a call to WSACancelBlockingCall" exception, which is then "normally ignored" (see the check for SocketError.Interrupted) by the code that I originally had.
