Offline JaCoCo using mockStatic causes re-instrumentation exception - mockito

I'm running PowerMock 1.6.4 and the latest versions of everything else (JUnit 4.11, though).
I use the JaCoCo Ant task to instrument only the classes, not the test classes. I also use the JaCoCo Ant task to run the JUnit tests and then generate the reports.
Now I'm hitting a problem that I can't figure out...
I have a test class that tests one member function of class Foo.
One of the members of Foo is static, so I wrapped it in a static function so I can control execution via a mock; the side effect is that I now need mockStatic.
What I've noticed is that as soon as PowerMockito.mockStatic(Foo.class) is introduced, all tests fail with instrumentation problems.
I have another test class that tests another member function of Foo. This test class works fine, but as soon as I introduce a mockStatic the test class fails with instrumentation failures.
Has anyone seen this failure, and does anyone know of any workarounds? I can't change the static member variable.

I finally figured out what I believe is the issue. JaCoCo instrumentation injects data into your byte code, and so does PowerMock when it attempts to mock statics. This wreaks havoc: they step on each other, and you get really odd behavior from the two interfering. I was getting an assorted bunch of NPEs in code that shouldn't throw NPEs.
The easy solution was to refactor out the unnecessary statics. The lesson: if you plan to use statics to control data flow, you should probably rethink the architecture for testability, especially if you also plan to use JaCoCo for coverage.
You can still run JaCoCo instrumentation on statics, but you can't mock statics at the same time, at least not the way PowerMock with Mockito does it. I'm not sure whether EasyMock would behave differently, so YMMV.
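For illustration, here is a minimal sketch (assuming Java 8; every name in it is hypothetical) of the kind of refactor meant above: hide the static behind an instance-level seam that plain Mockito can stub, so mockStatic, and its collision with JaCoCo's instrumentation, is never needed.

// Hypothetical legacy holder of the static member that cannot be changed.
class LegacyConfig {
    static String STATIC_VALUE = "prod";
}

// Foo reads the static through an injectable seam instead of a static
// wrapper, so tests never need PowerMockito.mockStatic at all.
public class Foo {
    interface ConfigSource {
        String currentValue();
    }

    private final ConfigSource config;

    public Foo() {
        this(() -> LegacyConfig.STATIC_VALUE); // production path
    }

    Foo(ConfigSource config) { // test constructor
        this.config = config;
    }

    public String describe() {
        return "value=" + config.currentValue();
    }
}

In a test, new Foo(() -> "stubbed") or a plain Mockito mock of ConfigSource takes the place of the static lookup, and JaCoCo's instrumentation is left alone.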

I had similar issues, but instead of having to refactor the statics out, I believe there is another solution. I did this in a Maven pom file, but I'll explain what is going on. JaCoCo does inject data into your byte code. And yes, PowerMock uses a custom class loader, and JaCoCo hates that. So here's a solution to work around it.
In your JaCoCo executions, you need JaCoCo to use offline instrumentation (the default-instrument execution) for your tests (you can specify just the PowerMock tests or include all tests; it works either way). Here's an explanation of what's going on with offline instrumentation: Offline-instrumentation with Jacoco.
You must then have the restore step for those tests. Now here's the interesting part: you have to run the normal JaCoCo prepare-agent step while EXCLUDING all the tests that were run under offline instrumentation. (If you don't, you will get a bunch of warnings along the lines of "JaCoCo execution data already exists for xTest".)
This will solve your problem, and you don't need to refactor out your static methods. Though if they were unnecessary you should probably still take them out ;)
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>${jacoco.version}</version>
  <configuration>
    <append>true</append>
  </configuration>
  <executions>
    <execution>
      <id>default-instrument</id>
      <goals>
        <goal>instrument</goal>
      </goals>
      <configuration>
        <includes>
          <include>**/*test*</include>
        </includes>
      </configuration>
    </execution>
    <execution>
      <id>default-restore-instrumented-classes</id>
      <goals>
        <goal>restore-instrumented-classes</goal>
      </goals>
      <configuration>
        <includes>
          <include>**/*test*</include>
        </includes>
      </configuration>
    </execution>
    <execution>
      <id>Prepare-Jacoco</id>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
      <configuration>
        <excludes>
          <exclude>**/*test*</exclude>
        </excludes>
      </configuration>
    </execution>
  </executions>
</plugin>

Related

Problems configuring clustered Vertx Eventbus in Quarkus

I'm using:
Quarkus 1.6.1.Final
Vertx 3.9.1 (provided by quarkus-vertx dependency, see pom.xml below)
And I can't get the clustered Eventbus working. I've followed the instructions listed here:
https://vertx.io/docs/vertx-hazelcast/java/
I've also enabled clustering in Quarkus:
quarkus.vertx.cluster.clustered=true
quarkus.vertx.cluster.port=8081
quarkus.vertx.prefer-native-transport=true
quarkus.http.port=8080
And here is my pom.xml:
<dependencies>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-mutiny</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-vertx</artifactId>
  </dependency>
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-hazelcast</artifactId>
    <version>3.9.2</version>
    <exclusions>
      <exclusion>
        <groupId>io.vertx</groupId>
        <artifactId>vertx-core</artifactId>
      </exclusion>
      <!-- <exclusion>-->
      <!--   <groupId>com.hazelcast</groupId>-->
      <!--   <artifactId>hazelcast</artifactId>-->
      <!-- </exclusion>-->
    </exclusions>
  </dependency>
  <!-- <dependency>-->
  <!--   <groupId>com.hazelcast</groupId>-->
  <!--   <artifactId>hazelcast-all</artifactId>-->
  <!--   <version>3.9</version>-->
  <!-- </dependency>-->
  <dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-transport-native-epoll</artifactId>
    <classifier>linux-x86_64</classifier>
  </dependency>
</dependencies>
And the error I get is the following:
Caused by: java.lang.ClassNotFoundException: com.hazelcast.core.MembershipListener
As you can see in my pom.xml, I've also tried adding the dependency hazelcast-all:3.9 and excluding the hazelcast dependency from vertx-hazelcast:3.9.2; then this error disappears, but another one comes up:
Caused by: com.hazelcast.config.InvalidConfigurationException: cvc-complex-type.2.4.a: Invalid content was found starting with element '{"http://www.hazelcast.com/schema/config":memcache-protocol}'. One of '{"http://www.hazelcast.com/schema/config":public-address, "http://www.hazelcast.com/schema/config":reuse-address, "http://www.hazelcast.com/schema/config":outbound-ports, "http://www.hazelcast.com/schema/config":join, "http://www.hazelcast.com/schema/config":interfaces, "http://www.hazelcast.com/schema/config":ssl, "http://www.hazelcast.com/schema/config":socket-interceptor, "http://www.hazelcast.com/schema/config":symmetric-encryption, "http://www.hazelcast.com/schema/config":member-address-provider}' is expected.
Am I doing something wrong or forgetting something, or is this simply a bug in Quarkus or in Vertx?
Thanks in advance for any help.
I think the most probable cause of your issue is that you are using the quarkus-universe-bom, which enforces a version of Hazelcast (we have a Hazelcast extension there) that is not compatible with vertx-hazelcast.
Check your dependency tree with mvn dependency:tree and make sure the Hazelcast artifacts are of the version required by vertx-hazelcast.
Another option would be to simply use the quarkus-bom, which does not enforce a Hazelcast version, and let vertx-hazelcast drag in the dependency by itself.
It seems like a bug in Quarkus and this issue is related to:
https://github.com/quarkusio/quarkus/issues/10889
Bringing this from its winter sleep...
I am looking to use Quarkus 2 + Vert.x 4 and to use either the Vert.x shared data API or the Vert.x cluster manager in order to achieve an in-process, distributed cache (as opposed to an external distributed-cache cluster).
What's unclear to me, also from looking at the GitHub issue described above (which is still open), is whether I can count on these APIs working for me at this time with the versions I mentioned.
Any comments will be great!
Thanks in advance...
[UPDATE]: It looks like the clustered cache works with no issues using the shared data API along with Quarkus, Vert.x, Hazelcast and the Mutiny bindings for Vert.x (all at their latest versions).
All I needed to do was set quarkus.vertx.cluster.clustered=true in the Quarkus properties file, use the vertx.sharedData().getClusterWideMap implementation for the distributed cache, and add the Gradle/Maven dependency 'io.vertx:vertx-hazelcast:4.3.1'.
In general, that's all it took for a small PoC.
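For anyone who finds this later, here is a minimal sketch of that PoC wiring with the Mutiny bindings (the bean and map names are made up; it assumes quarkus-vertx plus io.vertx:vertx-hazelcast on the classpath and quarkus.vertx.cluster.clustered=true):

import io.smallrye.mutiny.Uni;
import io.vertx.mutiny.core.Vertx;
import io.vertx.mutiny.core.shareddata.AsyncMap;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class ClusterCache {

    @Inject
    Vertx vertx; // Mutiny-flavored Vert.x injected by quarkus-vertx

    // Resolve the Hazelcast-backed, cluster-wide map by name.
    private Uni<AsyncMap<String, String>> map() {
        return vertx.sharedData().getClusterWideMap("poc-cache");
    }

    public Uni<Void> put(String key, String value) {
        return map().flatMap(m -> m.put(key, value));
    }

    public Uni<String> get(String key) {
        return map().flatMap(m -> m.get(key));
    }
}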
thanks

Flink StreamingFileSink on Azure Blob Storage

I am trying to connect the StreamingFileSink to Azure Blob Storage. There is currently no mention of Azure in the documentation, but I hoped it would work with the file system abstraction.
After analysing the error, I assume this feature is out of scope right now for Azure Blob Storage.
Now I'd like to be sure that I did not make any mistake and would appreciate any pointers if there is a way to make it work.
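For context, the sink itself is wired in the standard way; here is a minimal sketch (the path and encoder are illustrative, not my exact job):

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

// stream is an existing DataStream<String>; building the sink is what
// leads Flink to ask the file system for a RecoverableWriter.
StreamingFileSink<String> sink = StreamingFileSink
    .forRowFormat(
        new Path("wasbs://container@account.blob.core.windows.net/output"),
        new SimpleStringEncoder<String>("UTF-8"))
    .build();

stream.addSink(sink);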
Here is what I have found out so far.
This is the exception I see:
java.lang.NoClassDefFoundError: org/apache/flink/fs/azure/common/hadoop/HadoopRecoverableWriter
at org.apache.flink.fs.azure.common.hadoop.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:202)
at org.apache.flink.core.fs.PluginFileSystemFactory$ClassLoaderFixingFileSystem.createRecoverableWriter(PluginFileSystemFactory.java:129)
at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.<init>(Buckets.java:117)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:288)
at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:402)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:284)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1006)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
at java.lang.Thread.run(Thread.java:748)
So the HadoopRecoverableWriter class seems to be missing at runtime.
But in the flink-azure-fs-hadoop module I can find this in the pom.xml:
<!-- shade Flink's Hadoop FS adapter classes -->
<relocation>
  <pattern>org.apache.flink.runtime.fs.hdfs</pattern>
  <shadedPattern>org.apache.flink.fs.azure.common.hadoop</shadedPattern>
</relocation>
And the shaded package includes this class here.
Now I had a look at the contents of flink-azure-fs-hadoop-1.10.0.jar, and in the shaded package the RecoverableWriter is missing:
HadoopBlockLocation.class
HadoopDataInputStream.class
HadoopDataOutputStream.class
HadoopFileStatus.class
HadoopFileSystem.class
HadoopFsFactory.class
HadoopFsRecoverable.class
More digging shows there is actually a filter in the shade section of the pom.xml which excludes the RecoverableWriter:
<filter>
  <artifact>org.apache.flink:flink-hadoop-fs</artifact>
  <excludes>
    <exclude>org/apache/flink/runtime/util/HadoopUtils</exclude>
    <exclude>org/apache/flink/runtime/fs/hdfs/HadoopRecoverable*</exclude>
  </excludes>
</filter>

Converting Java class to JSON schema - FasterXML / jackson-module-jsonSchema

I am playing with some Spring Boot code to convert a Java class to a JSON schema, and I am getting strange behavior just by adding the dependency to the POM file, as in:
<dependency>
  <groupId>com.fasterxml.jackson.module</groupId>
  <artifactId>jackson-module-jsonSchema</artifactId>
  <version>2.9.4</version>
</dependency>
The error I am getting is:
The Bean Validation API is on the classpath but no implementation could be found
Action:
Add an implementation, such as Hibernate Validator, to the classpath
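For reference, the conversion code itself is just the standard jackson-module-jsonSchema usage; here is a minimal sketch of what I am running (MyDto stands in for my real class):

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.jsonSchema.JsonSchema;
import com.fasterxml.jackson.module.jsonSchema.JsonSchemaGenerator;

public class SchemaDemo {
    // Placeholder POJO; substitute the class you want a schema for.
    public static class MyDto {
        public String name;
        public int count;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonSchemaGenerator generator = new JsonSchemaGenerator(mapper);
        JsonSchema schema = generator.generateSchema(MyDto.class);
        System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(schema));
    }
}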
Any suggestions on reading about this or resolving it?
Thanks.
Yes, this comes about because the artifact you mentioned depends on:
<dependency>
  <groupId>javax.validation</groupId>
  <artifactId>validation-api</artifactId>
  <version>1.1.0.Final</version>
</dependency>
But this is only the validation API; a real validator that can do the actual work has to be added.
It would be interesting to see your pom.xml, because many Spring Boot starters already come with a validation implementation, e.g.:
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
…comes with…
<dependency>
  <groupId>org.hibernate.validator</groupId>
  <artifactId>hibernate-validator</artifactId>
</dependency>
By the way, the behavior you describe is also documented in this Spring Boot issue.
It will be fixed by this pull request, which mandates a validation implementation only if a validation action is actually performed (@Validated).

Datastax Cassandra Driver throwing CodecNotFoundException

The exact Exception is as follows
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varchar <-> java.math.BigDecimal]
These are the versions of the software I am using:
Spark 1.5
Datastax-cassandra 3.2.1
CDH 5.5.1
The code I am trying to execute is a Spark program using the Java API; it basically reads data (CSVs) from HDFS and loads it into Cassandra tables. I am using the spark-cassandra-connector. I initially had a lot of issues with the Google Guava library conflict, which I was able to resolve by shading the Guava library and building a snapshot jar with all the dependencies.
However, I was able to load data for some files, but for others I get the codec exception. When I researched this issue, I found the following thread on the same problem:
https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/yZyaOQ-wazk
After going through that discussion, what I understand is that either it is the wrong version of the cassandra-driver I am using, or there is still a classpath issue related to the Guava library, as Cassandra 3.0 and later versions use Guava 16.0.1, and the discussion says there might be a lower version of Guava present in the classpath.
Here is the pom.xml file:
<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.5.0</version>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.5.0-M3</version>
  </dependency>
  <dependency>
    <groupId>org.apache.cassandra</groupId>
    <artifactId>cassandra-clientutil</artifactId>
    <version>3.2.1</version>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.3</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <filters>
              <filter>
                <artifact>*:*</artifact>
                <excludes>
                  <exclude>META-INF/*.SF</exclude>
                  <exclude>META-INF/*.DSA</exclude>
                  <exclude>META-INF/*.RSA</exclude>
                </excludes>
              </filter>
            </filters>
            <relocations>
              <relocation>
                <pattern>com.google</pattern>
                <shadedPattern>com.pointcross.shaded.google</shadedPattern>
              </relocation>
            </relocations>
            <minimizeJar>false</minimizeJar>
            <shadedArtifactAttached>true</shadedArtifactAttached>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
</project>
and these are the dependencies that were downloaded using the above pom
spark-core_2.10-1.5.0.jar
spark-cassandra-connector-java_2.10-1.5.0-M3.jar
spark-cassandra-connector_2.10-1.5.0-M3.jar
spark-repl_2.10-1.5.1.jar
spark-bagel_2.10-1.5.1.jar
spark-mllib_2.10-1.5.1.jar
spark-streaming_2.10-1.5.1.jar
spark-graphx_2.10-1.5.1.jar
guava-16.0.1.jar
cassandra-clientutil-3.2.1.jar
cassandra-driver-core-3.0.0-alpha4.jar
Above are some of the main dependencies in my snapshot jar.
Why the CodecNotFoundException? Is it because of the classpath (Guava), because of the cassandra-driver (cassandra-driver-core-3.0.0-alpha4.jar for Datastax Cassandra 3.2.1), or because of the code?
Another point: all the dates I am inserting go to columns whose data type is timestamp.
Also, when I do a spark-submit I see the classpath in the logs; there are other Guava versions under the Hadoop libs. Are these causing the problem?
How do we specify a user-specific classpath when doing a spark-submit? Will that help?
Would be glad to get some pointers on these.
Thanks
Following is the stack trace:
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [timestamp <-> java.lang.String]
at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:689)
at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:550)
at com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:530)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:485)
at com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:85)
at com.datastax.driver.core.BoundStatement.bind(BoundStatement.java:198)
at com.datastax.driver.core.DefaultPreparedStatement.bind(DefaultPreparedStatement.java:126)
at com.cassandra.test.LoadDataToCassandra$1.call(LoadDataToCassandra.java:223)
at com.cassandra.test.LoadDataToCassandra$1.call(LoadDataToCassandra.java:1)
at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1027)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1555)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I also got
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [Math.BigDecimal <-> java.lang.String]
When you call bind(params...) on a PreparedStatement, the driver expects you to provide values with Java types that map to the CQL types.
This error ([timestamp <-> java.lang.String]) is telling you that there is no codec registered that maps a Java String to a CQL timestamp. In the Java driver, the timestamp type maps to java.util.Date. So you have two options here:
Where the column being bound is a timestamp, provide a Date-typed value instead of a String (see the sketch after the codec example below).
Create a codec that maps timestamp <-> String. To do so, you could create a subclass of MappingCodec, as described on the documentation site, that maps String to timestamp:
import java.util.Date;
import com.datastax.driver.core.TypeCodec;
import com.datastax.driver.extras.codecs.MappingCodec; // from the driver-extras module

public class TimestampAsStringCodec extends MappingCodec<String, Date> {
    public TimestampAsStringCodec() { super(TypeCodec.timestamp(), String.class); }

    @Override
    protected Date serialize(String value) { ... }

    @Override
    protected String deserialize(Date value) { ... }
}
You then would need to register the Codec:
cluster.getConfiguration().getCodecRegistry()
.register(new TimestampAsStringCodec());
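For the first option, a minimal sketch of what the bind site could look like (the helper, statement, and date pattern are illustrative, not from your code):

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public final class TimestampBinding {
    // Parse the CSV field once, then bind a java.util.Date so the driver's
    // built-in timestamp codec applies. The date pattern is an assumption
    // about the input format.
    static void insertRow(Session session, PreparedStatement stmt, String csvTimestamp)
            throws ParseException {
        Date created = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(csvTimestamp);
        BoundStatement bound = stmt.bind(created);
        session.execute(bound);
    }
}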
A better solution is provided here:
The correct mappings that the driver offers out of the box for temporal types are:
DATE <-> com.datastax.driver.core.LocalDate : use getDate()

ElasticSearch: aggregation on _score field w/ Groovy disabled

Every example I've seen (e.g., ElasticSearch: aggregation on _score field?) for doing aggregations on or related to the _score field seems to require the usage of scripting. With ElasticSearch disabling dynamic scripting by default for security reasons, is there any way to accomplish this without resorting to loading a script file onto every ES node or re-enabling dynamic scripting?
My original aggregation looked like the following:
"aggs": {
"terms_agg": {
"terms": {
"field": "field1",
"order": {"max_score": "desc"}
},
"aggs": {
"max_score": {
"max": {"script": "_score"}
},
"top_terms": {
"top_hits": {"size": 1}
}
}
}
Trying to specify expression as the lang doesn't seem to work, as ES throws an error stating the score can only be accessed when being used to sort. I can't figure out any other method of ordering my buckets by the score field. Anyone have any ideas?
Edit: To clarify, my restriction is that I cannot modify the server side, i.e., I cannot add or edit anything as part of the ES installation or configuration.
One possible approach is to use the other scripting options available. mvel does not seem usable unless dynamic scripting is enabled, and unless more fine-grained control over enabling/disabling scripting reaches the 1.6 version, I don't think it is possible to enable dynamic scripting for mvel and not for groovy.
We are left with native and mustache (used for templates), which are enabled by default. I don't think custom scripting can be done with mustache (if it's possible, I didn't find a way), so we are left with native (Java) scripting.
Here's my take on this:
create an implementation of NativeScriptFactory:
package com.foo.script;

import java.util.Map;

import org.elasticsearch.script.ExecutableScript;
import org.elasticsearch.script.NativeScriptFactory;

public class MyScriptNativeScriptFactory implements NativeScriptFactory {
    @Override
    public ExecutableScript newScript(Map<String, Object> params) {
        return new MyScript();
    }
}
an implementation of AbstractFloatSearchScript for example:
package com.foo.script;

import java.io.IOException;

import org.elasticsearch.script.AbstractFloatSearchScript;

public class MyScript extends AbstractFloatSearchScript {
    @Override
    public float runAsFloat() {
        try {
            return score();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return 0;
    }
}
alternatively, build a simple Maven project to tie it all together. pom.xml:
<properties>
  <elasticsearch.version>1.5.2</elasticsearch.version>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
  <dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>${elasticsearch.version}</version>
    <scope>compile</scope>
  </dependency>
</dependencies>
<build>
  <sourceDirectory>src</sourceDirectory>
  <plugins>
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.1</version>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>
  </plugins>
</build>
build it and get the resulting jar file.
place the jar inside [ES_folder]/lib
edit elasticsearch.yml and add
script.native.my_script.type: com.foo.script.MyScriptNativeScriptFactory
restart ES nodes.
use it in aggregations:
{
"aggs": {
"max_score": {
"max": {
"script": "my_script",
"lang": "native"
}
}
}
}
My sample above just returns the _score as a script but, of course, it can be used in more advanced scenarios.
EDIT: if you are not allowed to touch the instances, then I don't think you have any options.
ElasticSearch, at least as of version 1.7.1 and possibly earlier, also offers the use of Lucene's Expression scripting language, and since Expression is sandboxed by default, it can be used for dynamic inline scripts in much the same way that Groovy was. In our case, where our production ES cluster has just been upgraded from 1.4.1 to 1.7.1, we decided not to use Groovy anymore because of its non-sandboxed nature, although we really still want to make use of dynamic scripts because of the ease of deployment and the flexibility they offer as we continue to fine-tune our application and its search layer.
While writing a native Java script as a replacement for our dynamic Groovy function scores might also have been a possibility in our case, we wanted to look at the feasibility of using Expression as our dynamic inline scripting language instead. After reading through the documentation, I found that we were simply able to change the "lang" attribute from "groovy" to "expression" in our inline function_score scripts, and with the script.inline: sandbox property set in the .../config/elasticsearch.yml file, the function score script worked without any other modification. As such, we can now continue to use dynamic inline scripting within ElasticSearch, and do so with sandboxing enabled (as Expression is sandboxed by default). Obviously, other security measures, such as running your ES cluster behind an application proxy and a firewall, should also be implemented to ensure that outside users have no direct access to your ES nodes or the ES API. However, this was a very simple change that, for now, has solved the problem of Groovy's lack of sandboxing and the concerns over enabling it to run without sandboxing.
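As an illustration, here is a sketch of what such a function_score script can look like after the switch; the popularity field is a made-up example (Expression only works against numeric fields), so adapt it to your own mapping:

{
  "query": {
    "function_score": {
      "query": { "match": { "title": "foo" } },
      "script_score": {
        "lang": "expression",
        "script": "_score * doc['popularity'].value"
      }
    }
  }
}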
While switching your dynamic scripts to Expression may only work or be applicable in some cases (depending on the complexity of your inline dynamic scripts), it seemed worth sharing this information in the hope it could help other developers.
As a note, one of the other supported ES scripting languages, Mustache, only appears to be usable for creating templates within your search queries. It does not appear to be usable for any of the more complex scripting needs such as function_score, etc., although I am not sure this was entirely apparent during a first read-through of the updated ES documentation.
Lastly, one further issue to be mindful of is that the use of Lucene Expression scripts is marked as an experimental feature in the latest ES release, and the documentation notes that, as this scripting extension is undergoing significant development work at this time, its usage or functionality may change in later versions of ES. Thus, if you do switch over to using Expression for any of your scripts (dynamic or otherwise), it should be noted in your documentation/developer notes to revisit these changes before upgrading your ES installation next time, to ensure your scripts remain compatible and work as expected.
For our situation at least, unless we were willing to allow non-sandboxed dynamic scripting to be enabled again in the latest version of ES (via the script.inline: on option) so that inline Groovy scripts could continue to run, switching over to Lucene Expression scripting seemed like the best option for now.
It will be interesting to see what changes occur to the scripting choices for ES in future releases, especially given that the (apparently ineffective) sandboxing option for Groovy will be completely removed by version 2.0. Hopefully other protections can be put in place to enable dynamic Groovy usage, or perhaps Lucene Expression scripting will take Groovy's place and will enable all the types of dynamic scripting that developers are already making use of.
For more notes on Lucene Expression see the ES documentation here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-scripting.html#_lucene_expressions_scripts – this page is also the source of the note regarding the planned removal of Groovy's sandboxing option from ES v2.0+. Further Lucene Expression documentation can be found here: http://lucene.apache.org/core/4_9_0/expressions/index.html?org/apache/lucene/expressions/js/package-summary.html
