I'm using:
Quarkus 1.6.1.Final
Vertx 3.9.1 (provided by quarkus-vertx dependency, see pom.xml below)
And I can't get the clustered EventBus working. I've followed the instructions listed here:
https://vertx.io/docs/vertx-hazelcast/java/
I've also enabled clustering in Quarkus:
quarkus.vertx.cluster.clustered=true
quarkus.vertx.cluster.port=8081
quarkus.vertx.prefer-native-transport=true
quarkus.http.port=8080
And here is my pom.xml:
<dependencies>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-resteasy</artifactId>
</dependency>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-resteasy-mutiny</artifactId>
</dependency>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-vertx</artifactId>
</dependency>
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-hazelcast</artifactId>
<version>3.9.2</version>
<exclusions>
<exclusion>
<groupId>io.vertx</groupId>
<artifactId>vertx-core</artifactId>
</exclusion>
<!-- <exclusion>-->
<!-- <groupId>com.hazelcast</groupId>-->
<!-- <artifactId>hazelcast</artifactId>-->
<!-- </exclusion>-->
</exclusions>
</dependency>
<!-- <dependency>-->
<!-- <groupId>com.hazelcast</groupId>-->
<!-- <artifactId>hazelcast-all</artifactId>-->
<!-- <version>3.9</version>-->
<!-- </dependency>-->
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-epoll</artifactId>
<classifier>linux-x86_64</classifier>
</dependency>
</dependencies>
And the error I get is the following:
Caused by: java.lang.ClassNotFoundException: com.hazelcast.core.MembershipListener
As you can see in my pom.xml (the commented-out parts), I also tried adding the hazelcast-all:3.9 dependency and excluding the hazelcast dependency from vertx-hazelcast:3.9.2; with that, this error disappears, but another one comes up:
Caused by: com.hazelcast.config.InvalidConfigurationException: cvc-complex-type.2.4.a: Invalid content was found starting with element '{"http://www.hazelcast.com/schema/config":memcache-protocol}'. One of '{"http://www.hazelcast.com/schema/config":public-address, "http://www.hazelcast.com/schema/config":reuse-address, "http://www.hazelcast.com/schema/config":outbound-ports, "http://www.hazelcast.com/schema/config":join, "http://www.hazelcast.com/schema/config":interfaces, "http://www.hazelcast.com/schema/config":ssl, "http://www.hazelcast.com/schema/config":socket-interceptor, "http://www.hazelcast.com/schema/config":symmetric-encryption, "http://www.hazelcast.com/schema/config":member-address-provider}' is expected.
Am I doing something wrong or forgetting something, or is this simply a bug in Quarkus or in Vert.x?
Thanks in advance for any help.
I think the most probable reason for your issue is that you are using the quarkus-universe-bom, which enforces a version of Hazelcast (we have a Hazelcast extension there) that is not compatible with vertx-hazelcast.
Check your dependency tree with mvn dependency:tree and make sure the Hazelcast artifacts are of the version required by vertx-hazelcast.
Another option would be to simply use the quarkus-bom, which does not enforce a Hazelcast version, and let vertx-hazelcast drag in the dependency by itself.
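If you go with that option, the BOM import would look roughly like this (a sketch using the Quarkus version from the question, replacing a quarkus-universe-bom import in dependencyManagement):
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-bom</artifactId>
            <version>1.6.1.Final</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>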
It seems like a bug in Quarkus and this issue is related to:
https://github.com/quarkusio/quarkus/issues/10889
Bringing this from its winter sleep...
I am looking to use Quarkus 2 + Vert.x 4 and use either the Vert.x shared data API or the Vert.x cluster manager in order to achieve an in-process, distributed cache (as opposed to an external distributed cache cluster).
What's unclear to me, also after looking at the GitHub issue described above (which is still open), is whether I can count on these APIs working for me at this time with the versions I mentioned.
Any comments will be great!
Thanks in advance...
[UPDATE]: looks like the clustered cache works with no issues using the shared data API along with Quarkus, Vert.x, Hazelcast and the Mutiny bindings for Vert.x (all at the latest versions).
All I needed to do was set quarkus.vertx.cluster.clustered=true in the Quarkus properties file, use the vertx.sharedData().getClusterWideMap implementation for the distributed cache, and add the 'io.vertx:vertx-hazelcast:4.3.1' dependency via Gradle/Maven.
In general, that's all it took for a small PoC.
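For illustration, a minimal sketch of what that usage can look like with the Mutiny bindings (the bean and the map name are made up for the example, not taken from the actual PoC):
import io.smallrye.mutiny.Uni;
import io.vertx.mutiny.core.Vertx;
import io.vertx.mutiny.core.shareddata.AsyncMap;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class ClusterCache {

    @Inject
    Vertx vertx; // the Mutiny-flavoured Vertx instance managed by Quarkus

    // Resolves the cluster-wide map; with clustering enabled and vertx-hazelcast
    // on the classpath this map is shared across all cluster members.
    private Uni<AsyncMap<String, String>> cache() {
        return vertx.sharedData().getClusterWideMap("poc-cache"); // map name is arbitrary
    }

    public Uni<Void> put(String key, String value) {
        return cache().onItem().transformToUni(map -> map.put(key, value));
    }

    public Uni<String> get(String key) {
        return cache().onItem().transformToUni(map -> map.get(key));
    }
}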
thanks
Related
I'm creating an Instances instance using Weka. When I define attributes, I get the following compile error: "The constructor Attribute(String, boolean) is undefined". The following is the code I have tried:
...
Attribute dtzg = new Attribute("att1Name", 0);
Attribute pDea = new weka.core.Attribute("att2Name", true);
...
My Weka dependency in the pom is the following:
<!-- https://mvnrepository.com/artifact/nz.ac.waikato.cms.weka/weka-stable -->
<dependency>
<groupId>nz.ac.waikato.cms.weka</groupId>
<artifactId>weka-stable</artifactId>
<version>3.8.0</version>
</dependency>
I would expect to be able to use the constructor "Attribute(java.lang.String attributeName, boolean createStringAttribute)" because it is listed as a constructor in the Javadoc here.
I discovered that the documentation I was referring to is for the developer ("dev") version of Weka, while in my pom I imported the "stable" version. So, if I replace the dependency above with the following, the compiler does not complain:
<!-- https://mvnrepository.com/artifact/nz.ac.waikato.cms.weka/weka-dev -->
<dependency>
<groupId>nz.ac.waikato.cms.weka</groupId>
<artifactId>weka-dev</artifactId>
<version>3.9.3</version>
</dependency>
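For what it's worth, with the stable 3.8 line a string attribute can also be declared without that boolean constructor, via the list-based one (a rough sketch, assuming weka-stable 3.8; the attribute names are just examples):
import java.util.Arrays;
import java.util.List;
import weka.core.Attribute;

public class AttributeExample {
    public static void main(String[] args) {
        // Passing a null value list declares a string attribute,
        // while passing a concrete list declares a nominal attribute.
        Attribute stringAtt = new Attribute("att2Name", (List<String>) null);
        Attribute nominalAtt = new Attribute("att1Name", Arrays.asList("yes", "no"));
        System.out.println(stringAtt.isString() + " " + nominalAtt.isNominal());
    }
}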
However, I'm curious about the difference between the two versions. I'll ask a question about it if I have time.
I am playing with some Spring Boot code to convert a Java class to a JSON schema and am getting strange behavior just by adding the dependency to the POM file, as in:
<dependency>
<groupId>com.fasterxml.jackson.module</groupId>
<artifactId>jackson-module-jsonSchema</artifactId>
<version>2.9.4</version>
</dependency>
The error I am getting is:
The Bean Validation API is on the classpath but no implementation could be found
Action:
Add an implementation, such as Hibernate Validator, to the classpath
Any suggestions on reading about this or resolving it?
Thanks.
Yes, this happens because the artifact you mentioned depends on:
<dependency>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
<version>1.1.0.Final</version>
</dependency>
But this is only the validation API; a real validator that can do the actual work has to be added.
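For example, one way to add an implementation is the validation starter (a sketch; with a Spring Boot parent the version is managed for you):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-validation</artifactId>
</dependency>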
It would be interesting to see your pom.xml because many Spring Boot Starters already come with a validation implementation, e.g.:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
…comes with…
<dependency>
<groupId>org.hibernate.validator</groupId>
<artifactId>hibernate-validator</artifactId>
</dependency>
By the way, the behavior you describe is also documented in this Spring Boot issue.
It will be fixed with this pull request, which mandates a validation implementation only if validation is actually performed (@Validated).
I have a partial mock that is no longer working after an update from PowerMock 1.4.8 to 1.7.0RC4, done to solve a Sonar/JaCoCo incompatibility that was lowering the code coverage. After that, all the mocks that use matchers as parameters are not being called. The code looks like:
Calculator calc = mock(Calculator.class);
when(calc.getA()).thenReturn(new BigDecimal("1"));
when(calc.getB()).thenReturn(new BigDecimal("2"));
when(calc.calculate(any(Date.class), anyInt(), any(MyObject.class))).thenCallRealMethod();
The last mocked method is never called when passing real parameters. I noticed that this only happens due to the use of matchers for non-primitive types such as MyObject. Can anyone help me with this issue?
My pom.xml:
<dependency>
<groupId>org.powermock</groupId>
<artifactId>powermock-module-junit4</artifactId>
<version>1.7.0RC4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.powermock</groupId>
<artifactId>powermock-core</artifactId>
<version>1.7.0RC4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.powermock</groupId>
<artifactId>powermock-api-mockito2</artifactId>
<version>1.7.0RC4</version>
<scope>test</scope>
</dependency>
PS: I am already using the ArgumentMatchers API instead of Matchers.
Thanks in advance
The exact exception is as follows:
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varchar <-> java.math.BigDecimal]
These are the versions of software I am using:
Spark 1.5
Datastax-cassandra 3.2.1
CDH 5.5.1
The code I am trying to execute is a Spark program using the Java API; it basically reads data (CSVs) from HDFS and loads it into Cassandra tables. I am using the spark-cassandra-connector. Initially I had a lot of issues regarding a Google Guava library conflict, which I was able to resolve by shading the Guava library and building a snapshot jar with all the dependencies.
However, I was able to load data for some files, but for others I get the codec exception. When I researched this issue I found the following thread on the same problem:
https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/yZyaOQ-wazk
After going through that discussion, what I understand is that either I am using the wrong version of the cassandra-driver, or there is still a classpath issue related to the Guava library, as Cassandra 3.0 and later versions use Guava 16.0.1 and the discussion above says that there might be a lower version of Guava present in the classpath.
Here is my pom.xml file:
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.5.0</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.5.0-M3</version>
</dependency>
<dependency>
<groupId>org.apache.cassandra</groupId>
<artifactId>cassandra-clientutil</artifactId>
<version>3.2.1</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.3</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
<relocations>
<relocation>
<pattern>com.google</pattern>
<shadedPattern>com.pointcross.shaded.google</shadedPattern>
</relocation>
</relocations>
<minimizeJar>false</minimizeJar>
<shadedArtifactAttached>true</shadedArtifactAttached>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
And these are the dependencies that were downloaded using the above pom:
spark-core_2.10-1.5.0.jar
spark-cassandra-connector-java_2.10-1.5.0-M3.jar
spark-cassandra-connector_2.10-1.5.0-M3.jar
spark-repl_2.10-1.5.1.jar
spark-bagel_2.10-1.5.1.jar
spark-mllib_2.10-1.5.1.jar
spark-streaming_2.10-1.5.1.jar
spark-graphx_2.10-1.5.1.jar
guava-16.0.1.jar
cassandra-clientutil-3.2.1.jar
cassandra-driver-core-3.0.0-alpha4.jar
Above are some of the main dependencies in my snapshot jar.
Why is the CodecNotFoundException thrown? Is it because of the classpath (Guava), because of the cassandra-driver (cassandra-driver-core-3.0.0-alpha4.jar for DataStax Cassandra 3.2.1), or because of the code?
Another point: all the dates I am inserting go into columns whose data type is timestamp.
Also, when I do a spark-submit I see the classpath in the logs; there are other Guava versions under the Hadoop libs. Are these causing the problem?
How do we specify a user-specific classpath while doing a spark-submit? Will that help?
Would be glad to get some pointers on these.
Thanks
Following is the stack trace:
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [timestamp <-> java.lang.String]
at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:689)
at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:550)
at com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:530)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:485)
at com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:85)
at com.datastax.driver.core.BoundStatement.bind(BoundStatement.java:198)
at com.datastax.driver.core.DefaultPreparedStatement.bind(DefaultPreparedStatement.java:126)
at com.cassandra.test.LoadDataToCassandra$1.call(LoadDataToCassandra.java:223)
at com.cassandra.test.LoadDataToCassandra$1.call(LoadDataToCassandra.java:1)
at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1027)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1555)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I also got
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [Math.BigDecimal <-> java.lang.String]
When you call bind(params...) on a PreparedStatement, the driver expects you to provide values with Java types that map to the CQL types.
This error ([timestamp <-> java.lang.String]) is telling you that there is no codec registered that maps a Java String to a CQL timestamp. In the Java driver, the timestamp type maps to java.util.Date. So you have two options here:
Where the column being bound is for a timestamp, provide a Date-typed value instead of a String (see the sketch after the registration snippet below).
Create a codec that maps timestamp <-> String. To do so you could create a subclass of MappingCodec, as described on the documentation site, that maps String to timestamp:
public class TimestampAsStringCodec extends MappingCodec<String, Date> {
    public TimestampAsStringCodec() {
        super(TypeCodec.timestamp(), String.class);
    }

    @Override
    protected Date serialize(String value) { ... }

    @Override
    protected String deserialize(Date value) { ... }
}
You then would need to register the Codec:
cluster.getConfiguration().getCodecRegistry()
.register(new TimestampAsStringCodec());
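For the first option, a rough sketch (the date pattern and the helper class are assumptions about the CSV format, not taken from the question):
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public final class CsvTimestamps {
    // Assumed CSV date pattern; adjust to whatever your files actually contain.
    private static final String PATTERN = "yyyy-MM-dd HH:mm:ss";

    public static Date parse(String csvValue) throws ParseException {
        // SimpleDateFormat is not thread-safe, so create a fresh one per call.
        return new SimpleDateFormat(PATTERN).parse(csvValue);
    }
}
With that in place you bind a java.util.Date, e.g. preparedStatement.bind(CsvTimestamps.parse(rawValue), otherValues), and the driver's built-in timestamp codec applies.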
A better solution is provided here:
The correct mappings that the driver offers out of the box for temporal types are:
DATE <-> com.datastax.driver.core.LocalDate : use getDate()
I work on a project that uses Log4J. One of the requirements is to create a separate log file for each thread; this itself was an odd issue, somewhat sorted out by creating a new FileAppender on the fly and attaching it to the Logger instance:
Logger logger = Logger.getLogger(<thread dependent string>);
FileAppender appender = new FileAppender();
appender.setFile(fileName);
appender.setLayout(new PatternLayout(lp.getPattern()));
appender.setName(<thread dependent string>);
appender.setThreshold(Level.DEBUG);
appender.activateOptions();
logger.addAppender(appender);
Everything went fine until we realised that another library we use, Spring Framework v3.0.0 (which uses Commons Logging), does not play ball with the technique above: the Spring logging data is "seen" only by appenders initialised from the log4j configuration file, but not by the runtime-created appenders.
So, back to square one.
After some investigation, I found out that the new and improved LogBack has an appender, SiftingAppender, which does exactly what we need, i.e. thread-level logging to independent files.
At the moment, moving to LogBack is not an option, so, being stuck with Log4J, how can I achieve SiftingAppender-like functionality and keep Spring happy as well?
Note: Spring is only used for JdbcTemplate functionality, no IOC; in order to “hook” Spring’s Commons Logging to Log4J I added this line in the log4j.properties file:
log4j.logger.org.springframework=DEBUG
as instructed here.
In Log4j2, we can now use RoutingAppender:
The RoutingAppender evaluates LogEvents and then routes them to a subordinate Appender. The target Appender may be an appender previously configured and may be referenced by its name or the Appender can be dynamically created as needed.
From their FAQ:
How do I dynamically write to separate log files?
Look at the RoutingAppender. You can define multiple routes in the configuration, and put values in the ThreadContext map that determine which log file subsequent events in this thread get logged to.
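As an illustration, a minimal routing configuration might look like this (a sketch, not a drop-in config; the context key threadId is arbitrary and has to be set per thread with ThreadContext.put("threadId", ...)):
<Configuration>
    <Appenders>
        <Routing name="Routing">
            <!-- routes on the "threadId" value placed in the ThreadContext map -->
            <Routes pattern="$${ctx:threadId}">
                <Route>
                    <File name="File-${ctx:threadId}" fileName="logs/${ctx:threadId}.log">
                        <PatternLayout pattern="%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n"/>
                    </File>
                </Route>
            </Routes>
        </Routing>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="Routing"/>
        </Root>
    </Loggers>
</Configuration>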
LogBack is accessed via the slf4j API. There is an adapter library called jcl-over-slf4j which exposes the commons-logging interface but routes all the logging to the slf4j API, which goes directly to the implementation, LogBack. If you are using Maven, here are the dependencies:
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.5.8</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>jcl-over-slf4j</artifactId>
<version>1.5.8</version>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-core</artifactId>
<version>0.9.18</version>
</dependency>
(and add commons-logging to the exclusion list, see here)
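If you do go that route, the SiftingAppender configuration mentioned in the question looks roughly like this (a sketch; the MDC key processId is an assumption and has to be set per thread with MDC.put("processId", ...)):
<configuration>
    <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
        <!-- discriminates on the MDC key; one nested appender is created per distinct value -->
        <discriminator>
            <key>processId</key>
            <defaultValue>main</defaultValue>
        </discriminator>
        <sift>
            <appender name="FILE-${processId}" class="ch.qos.logback.core.FileAppender">
                <file>log/${processId}.log</file>
                <encoder>
                    <pattern>%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n</pattern>
                </encoder>
            </appender>
        </sift>
    </appender>
    <root level="DEBUG">
        <appender-ref ref="SIFT"/>
    </root>
</configuration>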
I struggled for a while to find SiftingAppender-like functionality in log4j (we couldn't switch to logback because of some dependencies), and ended up with a programmatic solution that works pretty well, using the MDC and adding appenders at runtime:
// this can be any thread-specific string
String processID = request.getProcessID();
Logger logger = Logger.getRootLogger();
// append a new file logger if no logger exists for this tag
if(logger.getAppender(processID) == null){
try{
String pattern = "%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n";
String logfile = "log/"+processID+".log";
FileAppender fileAppender = new FileAppender(
new PatternLayout(pattern), logfile, true);
fileAppender.setName(processID);
// add a filter so we can ignore any logs from other threads
fileAppender.addFilter(new ProcessIDFilter(processID));
logger.addAppender(fileAppender);
}catch(Exception e){
throw new RuntimeException(e);
}
}
// tag all child threads with this process-id so we can separate out log output
MDC.put("process-id", processID);
//whatever you want to do in the thread
LOG.info("This message will only end up in "+processID+".log!");
MDC.remove("process-id");
The filter added above just checks the MDC for a specific process id:
public class ProcessIDFilter extends Filter {
    private final String processId;

    public ProcessIDFilter(String processId) {
        this.processId = processId;
    }

    @Override
    public int decide(LoggingEvent event) {
        Object mdc = event.getMDC("process-id");
        if (processId.equals(mdc)) {
            return Filter.ACCEPT;
        }
        return Filter.DENY;
    }
}
Hope this helps a bit.
I like to include all of the slf4j facades/re-routers/whatever you call them. Also note the "provided" hack, which keeps dependencies from pulling in commons-logging; previously I was using a fake empty commons-logging library called version-99.0-does-not-exist.
Also see http://blog.springsource.com/2009/12/04/logging-dependencies-in-spring/
<dependencies>
<dependency>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
<!-- use provided scope on real JCL instead -->
<!-- <version>99.0-does-not-exist</version> -->
<version>1.1.1</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>commons-logging</groupId>
<artifactId>commons-logging-api</artifactId>
<!-- use provided scope on real JCL instead -->
<!-- <version>99.0-does-not-exist</version> -->
<version>1.1</version>
<scope>provided</scope>
</dependency>
<!-- the slf4j commons-logging replacement -->
<!-- if any package is using jakarta commons logging this will -->
<!-- re-route it through slf4j. -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>jcl-over-slf4j</artifactId>
<version>${version.slf4j}</version>
</dependency>
<!-- the slf4j log4j replacement. -->
<!-- if any package is using log4j this will re-route -->
<!-- it through slf4j. -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
<version>${version.slf4j}</version>
</dependency>
<!-- the slf4j java.util.logging replacement. -->
<!-- if any package is using java.util.logging this will re-route -->
<!-- it through slf4j. -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>jul-to-slf4j</artifactId>
<version>${version.slf4j}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${version.slf4j}</version>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>${version.logback}</version>
</dependency>
</dependencies>
<properties>
<version.logback>0.9.15</version.logback>
<version.slf4j>1.5.8</version.slf4j>
</properties>
Have you looked at log4j's NDC and MDC? These at least allow you to tag thread-specific data onto your logging. Not exactly what you're asking for, but it might be useful. There's a discussion here.
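For example, with the log4j 1.x MDC a per-thread tag can be stamped onto every line via the %X conversion pattern (a sketch; the key name requestId and the class are made up for illustration):
import org.apache.log4j.Logger;
import org.apache.log4j.MDC;

public class MdcTaggingExample {
    private static final Logger LOG = Logger.getLogger(MdcTaggingExample.class);

    public void handle(String requestId) {
        MDC.put("requestId", requestId); // visible to every log call made on this thread
        try {
            // with a layout pattern such as "%d [%X{requestId}] %p %c - %m%n"
            // this line carries the current thread's requestId
            LOG.info("processing request");
        } finally {
            MDC.remove("requestId");
        }
    }
}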