Any intuitive Hazelcast client tool suggestions to check maps in the professional edition? I am trying to identify some problems happening in the Hazelcast layer, where we use the professional edition. I couldn't find any good Hazelcast client tool for the professional edition.
You can use Hazelcast Management Center, which is free to use for clusters of up to 3 members; the download and its documentation are available on the Hazelcast website. You can check stats of your maps, view/edit their configuration, and browse their entries.
There are several official possibilities:
Hazelcast Management Center - web application
Hazelcast Clients - for example the Java one:
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-enterprise</artifactId>
    <version>4.0</version>
</dependency>
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

// connect to the cluster, read one entry, and shut the client down
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().addAddress("10.0.0.1");
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
System.out.println(client.getMap("test").get("key"));
client.shutdown();
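If the goal is to inspect which maps exist and how big they are, a minimal sketch along these lines should work with the same client; filtering getDistributedObjects() by IMap is just one way to pick out the maps:
import com.hazelcast.core.DistributedObject;
import com.hazelcast.map.IMap;

// list every map the cluster knows about, with its entry count
for (DistributedObject obj : client.getDistributedObjects()) {
    if (obj instanceof IMap) {
        IMap<?, ?> map = (IMap<?, ?>) obj;
        System.out.println(map.getName() + " -> " + map.size() + " entries");
    }
}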
You can also use, for instance, the Groovy shell, which is handy for its tab completion:
# start the shell
groovysh -cp hazelcast-enterprise-4.0.jar \
-e "System.setSecurityManager(null); import com.hazelcast.core.*; import com.hazelcast.client.*"
# and then interactively do whatever you want
hz = HazelcastClient.newHazelcastClient();
hz.getMap("test").get("key");
BTW, the Maven artifacts for the Enterprise edition are not located in the Maven Central repository, but in the Hazelcast one: https://repository.hazelcast.com/release/
<repository>
    <id>Hazelcast Private Release Repository</id>
    <url>https://repository.hazelcast.com/release/</url>
</repository>
I am trying to set up a Kafka consumer on a legacy web application using XML configuration, running Spring 5.2 (it is not a Spring Boot project).
After looking around, I found a project that sets up the Kafka consumer using XML configuration, as in here:
https://docs.spring.io/spring-kafka/docs/1.3.11.RELEASE/reference/html/_spring_integration.html
However, this does not give details on how to connect it with Java in a Spring web application; all the examples are for Spring Boot projects. I did find out what configuration I need to add to the XML:
https://github.com/spring-projects/spring-integration-kafka
I also found an example of a Spring web application with Kafka, but it is for the 4.x release and is from 2015:
https://techannotation.wordpress.com/2015/10/26/introduction-to-apache-kafka-using-spring/
Any help with newer documentation, or on how to set up a consumer in a Java Spring web project with XML configuration in 2020, would be appreciated.
It is not clear why you mention the Spring Integration framework all the time without focusing on it.
Let's see if this doc helps you, though: https://docs.spring.io/spring-integration/docs/current/reference/html/kafka.html#kafka
Note that Spring Integration is at 5.4 already.
This sample is Spring Boot based, but it should still give you a general idea of how to configure a Kafka channel adapter with Java: https://github.com/spring-projects/spring-integration-samples/tree/master/basic/kafka
With help from Gary Russell and Artem Bilan, I was able to figure this out. Since I am using a legacy Spring application, I needed this project rather than the Spring Integration project:
https://spring.io/projects/spring-kafka
I also followed the documentation here to set up Java config for the Kafka listener (a rough sketch of that configuration is shown below):
https://docs.spring.io/spring-kafka/docs/2.3.12.RELEASE/reference/html/#with-java-configuration
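For reference, here is a minimal sketch of that style of configuration, assuming spring-kafka 2.3.x on the classpath; the broker address, group id, topic, and class name are placeholders, not from the original answer:
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        // basic consumer properties; adjust the bootstrap servers and group id
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        // the container factory that @KafkaListener methods are wired to
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

    // the listener method can live in any Spring-managed bean
    @KafkaListener(topics = "my-topic")
    public void listen(String message) {
        System.out.println("Received: " + message);
    }
}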
I used this to figure out how to have both Java config and XML config work together (a sketch of the XML side follows the link):
https://memorynotfound.com/mixing-xml-java-config-spring/
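In essence, the legacy XML context only needs to register the annotated class as a bean; a minimal sketch, assuming the hypothetical KafkaConsumerConfig class above and the context namespace already declared in the XML header:
<!-- enables processing of @Configuration, @Bean, @KafkaListener, etc. -->
<context:annotation-config/>
<!-- registers the Java config class so its bean definitions are picked up -->
<bean class="com.example.config.KafkaConsumerConfig"/>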
Here is a gist of my implementation that worked:
https://gist.github.com/praveen2710/7dcf1671379ee6db4581436e1225c673
How can you use Liquibase with an Azure SQL database and Azure Active Directory Authentication? Specifically, I want to connect using ActiveDirectoryPassword authentication mode as documented here:
https://learn.microsoft.com/en-us/sql/connect/jdbc/connecting-using-azure-active-directory-authentication?view=sql-server-ver15#connecting-using-activedirectorypassword-authentication-mode
I cannot figure out how to call the Liquibase CLI to make this happen.
Is this possible?
I was able to get this to work. I am not very familiar with Java (we use Liquibase with a C# project), so I think some of the Java pieces tripped me up.
There were a few things I had to do to make this work:
I needed to add some properties to the URL I sent to Liquibase:
--url="jdbc:sqlserver://REDACTED.database.windows.net;databaseName=REDACTED;authentication=ActiveDirectoryPassword;encrypt=true;trustServerCertificate=true"
ActiveDirectoryPassword is what tells the driver to use the authentication mechanism I wanted. I also had to add encrypt=true and trustServerCertificate=true to avoid some SSL errors I was getting (from: https://learn.microsoft.com/en-us/sql/connect/jdbc/connecting-with-ssl-encryption?view=sql-server-ver15).
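Putting it together, a full CLI invocation might look like the sketch below; the username, password, and changelog file are placeholders, and the URL is the one from above:
liquibase \
  --driver=com.microsoft.sqlserver.jdbc.SQLServerDriver \
  --url="jdbc:sqlserver://REDACTED.database.windows.net;databaseName=REDACTED;authentication=ActiveDirectoryPassword;encrypt=true;trustServerCertificate=true" \
  --username=someuser@REDACTED.onmicrosoft.com \
  --password=REDACTED \
  --changeLogFile=changelog.xml \
  update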
I needed the MSAL4J (Azure Active Directory) libraries in my classpath. I added them to the liquibase/lib directory so that the default Liquibase launcher scripts would pick them up for me. I got caught on this too, because it required Maven, which we do not use. After downloading Maven, I used the copy-dependencies plugin to download the dependencies I needed:
mvn dependency:copy-dependencies
Here was the simple pom.xml I used:
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.mycompany.app</groupId>
    <artifactId>my-app</artifactId>
    <version>1</version>
    <dependencies>
        <dependency>
            <groupId>com.microsoft.azure</groupId>
            <artifactId>adal4j</artifactId>
            <version>1.6.3</version>
        </dependency>
    </dependencies>
</project>
I also put these dependencies in the liquibase/lib directory so they were automatically included in the classpath. The instructions from Microsoft were helpful in leading me to the correct places:
https://learn.microsoft.com/en-us/sql/connect/jdbc/connecting-using-azure-active-directory-authentication?view=sql-server-ver15#connecting-using-activedirectorypassword-authentication-mode
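For what it's worth, copy-dependencies drops the JARs under target/dependency by default, so getting them into place is just a copy; the Liquibase install path here is a placeholder:
# resolve the pom's dependencies into target/dependency
mvn dependency:copy-dependencies
# then put them where the Liquibase launcher scripts will find them
cp target/dependency/*.jar /path/to/liquibase/lib/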
Also, I am not sure it was required to meet my goal, but I upgraded to the latest Liquibase (3.8.7) and the latest SQL Server drivers (8.2):
https://learn.microsoft.com/en-us/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server?view=sql-server-ver15
By default, after installation the Openfire Hazelcast plugin has
<ssl enabled="false"/>
in its config file. My attempt to enable it broke clustering, and the Openfire log said:
java.lang.IllegalStateException: SSL/TLS requires Hazelcast Enterprise Edition
Is it correct that there is no way to make the Hazelcast plugin use SSL for communication between Openfire nodes?
Assuming that the Hazelcast Enterprise API is an extension of the Hazelcast API, it might be as simple as recompiling the Openfire Hazelcast plugin with a different Hazelcast dependency.
I did a quick test. The plugin compiles just fine after you swap the dependency on Hazelcast for a dependency on the 'enterprise' variant, like this (your version number might vary):
<dependencies>
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-enterprise</artifactId>
        <version>3.10.6</version>
    </dependency>
</dependencies>

<repositories>
    <repository>
        <id>hazelcast</id>
        <url>https://dl.bintray.com/hazelcast/release/</url>
    </repository>
</repositories>
Most of the Hazelcast configuration can be done in the XML file that's already accessible as a stand-alone file in Openfire. There's a good chance that you don't need any code modifications to get things configured in the way you want.
I am not sure if this has been attempted before, but you can try the following:
Download the Enterprise version of Hazelcast and place the Hazelcast JAR in the plugins folder of Openfire. I am not certain about the internals of Openfire, but if there is a Maven configuration that imports Hazelcast, modify it to load Hazelcast Enterprise. If nothing else works, try renaming the JAR from hazelcast-enterprise.jar to hazelcast.jar.
Modify conf/hazelcast-local-config.xml to configure the license key and the SSL security details (a sketch of that section follows below).
Fire away.
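For reference, a minimal sketch of what the SSL section inside the <network> element of conf/hazelcast-local-config.xml might look like, assuming Hazelcast Enterprise 3.x and its BasicSSLContextFactory; the keystore path and password are placeholders:
<ssl enabled="true">
    <factory-class-name>com.hazelcast.nio.ssl.BasicSSLContextFactory</factory-class-name>
    <properties>
        <property name="keyStore">/path/to/keystore.jks</property>
        <property name="keyStorePassword">REDACTED</property>
    </properties>
</ssl>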
Here is a link to Openfire doc for Hazelcast plugin: https://www.igniterealtime.org/projects/openfire/plugins/2.4.0/hazelcast/readme.html#config
Please do update here if this works.
Support for TLS/SSL is not included in the open source version of Hazelcast; as the error message indicates, it is part of the Enterprise Edition feature set:
https://hazelcast.com/product-features/security-suite/
Correct, purchasing the enterprise edition wouldn't help as far as I can tell. The Hazelcast (open source) plugin for Openfire is maintained by the folks at Ignite Realtime. They only support specific versions of Hazelcast as well.
Is there any reference for which versions are compatible between the AWS Java SDK, Hadoop, the hadoop-aws bundle, Hive, and Spark?
For example, I know Spark is not compatible with Hive versions above 2.1.1.
You cannot drop in a later version of the AWS SDK than the one hadoop-aws was built with and expect the S3A connector to work. Ever. That is now written down quite clearly in the S3A troubleshooting docs:
Whatever problem you have, changing the AWS SDK version will not fix things, only change the stack traces you see.
This may seem frustrating, given the rate at which the AWS team pushes out new SDKs, but you have to understand that (a) the API often changes incompatibly between versions (as you have seen), and (b) every release introduces or moves bugs, which end up causing problems.
Here is the 3.x timeline of things which broke on updates of the AWS SDK:
Move to 1.11.86, and some tests hang under load.
Fix: move to 1.11.134, leading to logs full of AWS telling us off for deliberately calling abort() on a read.
Fix: move to 1.11.199, leading to logs full of stack traces.
Fix: move to 1.11.271, and the shaded JAR pulls in Netty unshaded.
Every upgrade of the AWS SDK JAR causes a problem somewhere. Sometimes it is an edit to the code and a recompile; most commonly it is logs filling up with false-alarm messages, dependency problems, threading quirks, etc. Things which can take time to surface.
What you get with a Hadoop release is not just the aws-sdk JAR it was compiled against; you get a hadoop-aws JAR which contains the workarounds and fixes for whatever problems that SDK release introduced, identified in the minimum of four weeks of testing before the Hadoop release ships.
Which is why, no, you shouldn't be changing JARs unless you plan to do a complete end-to-end retest of the S3A client code, including load tests. You are encouraged to do that; the Hadoop project always welcomes more testing of our pre-release code, and the Hadoop 3.1 binaries are ready to play with. But trying to do it yourself by changing JARs? Sadly, an isolated exercise in pain.
The Hadoop documentation states that adding the hadoop-aws JAR to your build dependencies will pull in a compatible aws-sdk JAR.
So I created a dummy Maven project with these dependencies to download the compatible versions:
<properties>
    <!-- Your exact Hadoop version here -->
    <hadoop.version>3.3.1</hadoop.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-aws</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>
Then I checked the resolved dependency versions, used them in my project, and it worked.
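If it helps, one way to see which aws-sdk version Maven resolved is the dependency tree; a minimal check, assuming Maven 3 is installed:
# show which com.amazonaws artifacts (and versions) hadoop-aws pulled in
mvn dependency:tree -Dincludes=com.amazonaws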
I want to use the Semantic Role Labeler tool from Illinois in my project.
I've seen the online demo of the SRL tool. To use that tool, the website suggests downloading Curator, and I have downloaded the Curator file. But how do I use Curator from Java or VB.NET code? Can anybody help me?
I also want to mention a few tips about Curator. The reason you need Curator is that the SRL package is a complicated piece of software that needs a few external dependencies, for example the Illinois part-of-speech tagger, the shallow parser, and currently the Charniak parser. Curator is the tool we use to manage those dependencies.
However, Curator will also install a few other dependencies that you may not need for SRL, for example the named entity tagger and the Wikifier. Those components tend to consume plenty of RAM (the two listed here will need 10GB), so you may want to turn them off by commenting out the corresponding lines in $CURATOR_HOME/dist/startServer.sh, which should be self-explanatory.
Once you have Curator up and running, you can call it from your program using the curator-client package. The easiest way to do this in Java is with Maven.
First add the CCG maven repo to your project:
<repositories>
    <repository>
        <id>CogcompSoftware</id>
        <name>CogcompSoftware</name>
        <url>http://cogcomp.cs.illinois.edu/m2repo/</url>
    </repository>
</repositories>
And then add the following dependencies:
<dependency>
    <groupId>edu.illinois.cs.cogcomp</groupId>
    <artifactId>curator-interfaces</artifactId>
    <version>0.7</version>
</dependency>
<dependency>
    <groupId>org.apache.thrift</groupId>
    <artifactId>libthrift</artifactId>
    <version>0.8.0</version>
</dependency>
Since the API and data structures are defined in Thrift, you can use them from other languages by generating the Curator package via Thrift (however, VB.NET is not supported by Thrift, as Daniel pointed out :) ). Watch the CCG website if you are interested; we are writing a tutorial about how to do this, which should be publicly available very soon.
Once you have the above dependencies, you should be able to follow our walkthrough at
http://cogcomp.cs.illinois.edu/curator/CuratorDemo.html
Let me know if you have any problem.
You are asking several questions.
The standalone SRL is under development and will be released soon.
The best way to access SRL currently is to install Curator, which is explained here:
http://cogcomp.cs.illinois.edu/trac/curator.php
I don't think you can use Curator from VB.NET, since it is designed for the languages supported by Apache Thrift:
http://thrift.apache.org/about
After installing it, you can easily access it from Java. Here is a walkthrough:
http://cogcomp.cs.illinois.edu/curator/CuratorDemo.html
You can use SENNA, which is a free NLP tool developed in ANSI C and can be run with Visual Studio .NET:
http://ronan.collobert.com/senna/
It outputs part-of-speech (POS) tags, chunking (CHK), named entity recognition (NER), semantic role labeling (SRL), and syntactic parsing (PSG).
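As an illustration only (the binary name and invocation here are assumptions based on the SENNA distribution, so check its README): SENNA is typically run from its own directory so it can find its data files, reading sentences on stdin and writing tagged tokens to stdout:
# run from the unpacked SENNA directory; the binary name varies by platform
echo "John gave the book to Mary ." | ./senna-linux64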