Cassandra 3.10 Create Trigger, class doesn't exist - cassandra

I have DSE 5.1 with Cassandra 3.10 and cql 5.0.1 installed on CentOS 7.3.1611 (Core).
I'm following the tutorial from DSE Triggers:
http://docs.datastax.com/en/dse/5.1/cql/cql/cql_reference/cql_commands/cqlCreateTrigger.html?hl=trigger
which sends me to GitHub:
https://github.com/apache/cassandra/tree/trunk/examples/triggers
According to the tutorial, I am compiling the jar with Ant 1.10.1. The compiled jar contains this code:
package org.apache.cassandra.triggers;
import java.io.InputStream;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.cassandra.schema.TableMetadata;
import org.apache.cassandra.schema.Schema;
import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.partitions.Partition;
import org.apache.cassandra.db.partitions.PartitionUpdate;
import org.apache.cassandra.io.util.FileUtils;
import org.apache.cassandra.utils.FBUtilities;
import org.apache.cassandra.utils.UUIDGen;
public class AuditTrigger implements ITrigger
{
    private Properties properties = loadProperties();

    public Collection<Mutation> augment(Partition update)
    {
        String auditKeyspace = properties.getProperty("keyspace");
        String auditTable = properties.getProperty("table");

        TableMetadata metadata = Schema.instance.getTableMetadata(auditKeyspace, auditTable);
        PartitionUpdate.SimpleBuilder audit = PartitionUpdate.simpleBuilder(metadata, UUIDGen.getTimeUUID());
        audit.row()
             .add("keyspace_name", update.metadata().keyspace)
             .add("table_name", update.metadata().table)
             .add("primary_key", update.metadata().partitionKeyType.getString(update.partitionKey().getKey()));

        return Collections.singletonList(audit.buildAsMutation());
    }

    private static Properties loadProperties()
    {
        Properties properties = new Properties();
        InputStream stream = AuditTrigger.class.getClassLoader().getResourceAsStream("AuditTrigger.properties");
        try
        {
            properties.load(stream);
        }
        catch (Exception e)
        {
            throw new RuntimeException(e);
        }
        finally
        {
            FileUtils.closeQuietly(stream);
        }
        return properties;
    }
}
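The loadProperties() pattern above (load a Properties file from a stream, close it in a finally block) can be exercised in isolation with the JDK alone. This sketch feeds a hard-coded string instead of the classpath resource, so it runs without the trigger jar or Cassandra:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class PropertiesDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for the AuditTrigger.properties classpath resource
        String config = "keyspace=test\ntable=audit\n";
        InputStream stream = new ByteArrayInputStream(config.getBytes("UTF-8"));
        Properties properties = new Properties();
        try {
            properties.load(stream);
        } finally {
            stream.close();
        }
        // The trigger reads these two keys to decide where audit rows go
        System.out.println(properties.getProperty("keyspace"));
        System.out.println(properties.getProperty("table"));
    }
}
```

If the real trigger throws here at CREATE TRIGGER time, it usually means AuditTrigger.properties is not inside the jar, so getResourceAsStream returns null.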
I copy the compiled jar to the Cassandra triggers location (/etc/cassandra/triggers). Then I run this command from cqlsh:
CREATE TRIGGER test1 ON test.test USING 'org.apache.cassandra.triggers.AuditTrigger';
It returns the error:
ConfigurationException: Trigger class 'org.apache.cassandra.triggers.AuditTrigger' doesn't exist
I copied the jar to all the nodes of my cluster, restarted the cluster, and ran the command:
nodetool reloadtriggers
With no success. I also put this flag in jvm.options:
-Dcassandra.triggers_dir=/etc/dse/cassandra/triggers
And in cassandra-env.sh I added the line:
JVM_OPTS="$JVM_OPTS -Dcassandra.triggers_dir=/etc/dse/cassandra/triggers"
And I am getting the same error. I also tried adding the jar to the Cassandra lib folder (/usr/share/dse/cassandra/lib/), but the compiled jar does not seem to be loaded.
When I run the command in cql:
CREATE TRIGGER test1 ON test.test USING 'org.apache.cassandra.triggers.AuditTrigger';
I still get the same error:
ConfigurationException: Trigger class 'org.apache.cassandra.triggers.AuditTrigger' doesn't exist
So my question is: how can I make Cassandra load my jar correctly so that I can use it to create a trigger in CQL?

Related

How to read cassandra FQL logs in java?

I have a bunch of Cassandra FQL logs with the ".cq4" extension. I would like to read them in Java; is there a Java class that those log entries can be mapped into?
These are the logs I see.
I want to read this with this code:
import net.openhft.chronicle.Chronicle;
import net.openhft.chronicle.ChronicleQueueBuilder;
import net.openhft.chronicle.ExcerptTailer;
import java.io.IOException;
public class Main {
    public static void main(String[] args) throws IOException {
        Chronicle chronicle = ChronicleQueueBuilder.indexed("/Users/pavelorekhov/Desktop/fql_logs").build();
        ExcerptTailer tailer = chronicle.createTailer();
        while (tailer.nextIndex()) {
            tailer.readInstance(/*class goes here*/);
        }
    }
}
I think from the code and screenshot you can understand what kind of class I need in order to read log entries into objects. Does that class exist in some Cassandra Maven dependency?
You are using Chronicle 3.x, which is very old.
I suggest using Chronicle 5.20.123, which is the version Cassandra uses.
I would assume Cassandra has its own tool for reading the contents of these files; however, you can dump the raw messages with net.openhft.chronicle.queue.main.DumpMain.
I ended up cloning cassandra's github repo from here: https://github.com/apache/cassandra
In their code they have the FQLQueryIterator class which you can use to read logs, like so:
SingleChronicleQueue scq = SingleChronicleQueueBuilder.builder().path("/Users/pavelorekhov/Desktop/fql_logs").build();
ExcerptTailer excerptTailer = scq.createTailer();
FQLQueryIterator iterator = new FQLQueryIterator(excerptTailer, 1);
while (iterator.hasNext()) {
    FQLQuery fqlQuery = iterator.next(); // object that holds the log entry
    // do whatever you need to do with that log entry...
}

Liferay 7.2 with JAX-RS/JAXB and JDK 11 Issue

I have created a Liferay Blade "rest" module with the sample code that is provided with that template. I am trying to access the sample code's /greetings endpoint at: http://localhost:8080/o/greetings
The error I'm receiving appears to be common and well-documented because in Java 11, JAXB was completely removed from the JDK altogether:
JAXBException occurred : Implementation of JAXB-API has not been found on module path or classpath.. com.sun.xml.internal.bind.v2.ContextFactory cannot be found by org.apache.aries.jax.rs.whiteboard_1.0.4
The module builds and deploys into Liferay 7.2 with Gradle with no errors, but the commonly suggested solutions are not resolving the error (obviously).
Full disclosure, I am not a Java developer and therefore clearly lack a very basic understanding of Java, which could be contributing to the problem.
I have tried including the suggested dependencies in my build.gradle file:
compile('javax.xml.bind:jaxb-api:2.3.0')
compile('javax.activation:activation:1.1')
compile('org.glassfish.jaxb:jaxb-runtime:2.3.0')
I have also tried downloading and deploying the jar files for the above dependencies into Liferay 7.2. No luck.
My build.gradle file:
dependencies {
compileOnly group: "javax.ws.rs", name: "javax.ws.rs-api", version: "2.1"
compileOnly group: "org.osgi", name: "org.osgi.service.component.annotations", version: "1.3.0"
compileOnly group: "org.osgi", name: "org.osgi.service.jaxrs", version: "1.0.0"
compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "4.4.0"
}
The sample class file:
package some.random.super.long.folder.path;
import java.util.Collections;
import java.util.Set;
import com.liferay.portal.kernel.exception.PortalException;
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.model.User;
import com.liferay.portal.kernel.service.UserLocalServiceUtil;
import com.liferay.portal.kernel.util.PropsUtil;
import javax.net.ssl.HttpsURLConnection;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Application;
import javax.ws.rs.ServerErrorException;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.SecurityContext;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.jaxrs.whiteboard.JaxrsWhiteboardConstants;
/**
 * @author andrew
 */
@Component(
    property = {
        JaxrsWhiteboardConstants.JAX_RS_APPLICATION_BASE + "=/greetings",
        JaxrsWhiteboardConstants.JAX_RS_NAME + "=Greetings.Rest"
    },
    service = Application.class
)
public class DiningRestServiceApplication extends Application {
    public Set<Object> getSingletons() {
        return Collections.<Object>singleton(this);
    }

    @GET
    @Produces("text/plain")
    public String working() {
        return "It works!";
    }

    @GET
    @Path("/morning")
    @Produces("text/plain")
    public String hello() {
        return "Good morning!";
    }

    @GET
    @Path("/morning/{name}")
    @Produces("text/plain")
    public String morning(
            @PathParam("name") String name,
            @QueryParam("drink") String drink) {
        String greeting = "Good Morning " + name;
        if (drink != null) {
            greeting += ". Would you like some " + drink + "?";
        }
        return greeting;
    }
}
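The string-building in the morning() method can be exercised without a JAX-RS runtime at all, which helps separate the greeting logic from the JAXB/deployment problem. A standalone sketch of that same logic (class name is mine, not part of the sample):

```java
public class GreetingDemo {
    // Same string-building logic as the sample resource's morning() method,
    // extracted so it can run without a JAX-RS whiteboard.
    static String morning(String name, String drink) {
        String greeting = "Good Morning " + name;
        if (drink != null) {
            greeting += ". Would you like some " + drink + "?";
        }
        return greeting;
    }

    public static void main(String[] args) {
        // Simulates GET /o/greetings/morning/andrew and ...?drink=coffee
        System.out.println(morning("andrew", null));
        System.out.println(morning("andrew", "coffee"));
    }
}
```

If this logic works standalone, the endpoint error is purely an environment issue (JAXB on Java 9+), not a problem with the resource code.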
The expected result I am after is to receive the sample messages at the specified sample's endpoints.
Any help from this wonderful community would be greatly appreciated!
UPDATE
I switched to Java 9 JDK on my machine. I'm still experiencing the aforementioned error message.
Jorge Diaz of the Liferay community just responded in the Liferay forum stating that this looks like an existing bug that is currently being worked on.
https://issues.liferay.com/browse/LPS-92576
https://issues.liferay.com/browse/LPS-97968
Regardless, I confirmed that the combination of Liferay with Java 9 or 11 is not playing nicely with JAXB by downgrading to Java 8, which worked.
Liferay ships an implementation for JAXB in the product. We just need to configure the JVM to look for it instead of trying to look for the default old one.
Just setting the property javax.xml.bind.JAXBContextFactory=com.sun.xml.bind.v2.ContextFactory should do it. You should be able to set it on your environment or executable for the JVM that starts Liferay.
It should be set by default in later Liferay versions.
The solution provided by Carlos Sierra works. Add
-Djavax.xml.bind.JAXBContextFactory=com.sun.xml.bind.v2.ContextFactory
to the list of VM arguments to point to ContextFactory implementation provided by Liferay.
Environment: openjdk version "11.0.5" 2019-10-15 with Liferay DXP 7.2.10 GA1 SP 2 (dxp-2-7210)
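For completeness, the same factory property can be set programmatically instead of via a -D flag, as long as it happens before any JAXBContext is created. A minimal sketch (property name and value are exactly the ones from the answer above; this only sets and reads the property, it does not exercise JAXB itself):

```java
public class JaxbFactoryProp {
    public static void main(String[] args) {
        // Programmatic equivalent of
        // -Djavax.xml.bind.JAXBContextFactory=com.sun.xml.bind.v2.ContextFactory
        System.setProperty("javax.xml.bind.JAXBContextFactory",
                "com.sun.xml.bind.v2.ContextFactory");

        // JAXBContext.newInstance(...) consults this property when
        // choosing an implementation.
        System.out.println(System.getProperty("javax.xml.bind.JAXBContextFactory"));
    }
}
```

For Liferay itself the VM argument is the practical route, since the property must be set in the JVM that starts the portal, not in your module.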

cloudbees, groovy, jobs, folders: How to determine the job result, if the job is within a cloudbees folder?

Problem: I'm using a script to determine if a certain amount of jobs are in SUCCESS state.
It worked fine as long as I was not using the CloudBees Folders plugin. I could easily get the list of projects and each project's result. But after I moved the jobs into a CloudBees folder, the jobs, and therefore the job results, are no longer available!
Q: Does anybody know how to get the job results with Groovy from jobs which are located in a CloudBees folder?
def job = Jenkins.instance.getItemByFullName('foldername/jobname');
Folder plugin provides the getItems() method which can be used to get all immediate items (jobs/folders) under a folder.
folder.getItems()
Check this link to traverse across all the folders in Jenkins.
Displaying the code snippet below,
import jenkins.*
import jenkins.model.*
import hudson.*
import hudson.model.*
import hudson.scm.*
import hudson.tasks.*
import com.cloudbees.hudson.plugins.folder.*
jen = Jenkins.instance
jen.getItems().each {
    if (it instanceof Folder) {
        processFolder(it)
    } else {
        processJob(it)
    }
}

void processJob(Item job) {
}

void processFolder(Item folder) {
    folder.getItems().each {
        if (it instanceof Folder) {
            processFolder(it)
        } else {
            processJob(it)
        }
    }
}
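The snippet above is a plain recursive visitor: recurse into folders, handle leaf jobs. A self-contained Java analogue shows the shape; the Item/Folder/Job classes here are hypothetical stand-ins, not the real Jenkins/CloudBees API:

```java
import java.util.Arrays;
import java.util.List;

public class FolderWalkDemo {
    // Hypothetical stand-ins for Jenkins items, just to model the tree.
    static abstract class Item {
        final String name;
        Item(String name) { this.name = name; }
    }
    static class Job extends Item {
        Job(String name) { super(name); }
    }
    static class Folder extends Item {
        final List<Item> items;
        Folder(String name, List<Item> items) { super(name); this.items = items; }
    }

    // Mirrors processFolder/processJob: recurse into folders, visit jobs.
    static void walk(Item item, StringBuilder out) {
        if (item instanceof Folder) {
            for (Item child : ((Folder) item).items) {
                walk(child, out);
            }
        } else {
            out.append(item.name).append('\n');
        }
    }

    public static void main(String[] args) {
        Folder root = new Folder("root", Arrays.<Item>asList(
                new Job("build"),
                new Folder("sub", Arrays.<Item>asList(new Job("deploy")))));
        StringBuilder out = new StringBuilder();
        walk(root, out);
        System.out.print(out);
    }
}
```

In a real Jenkins script console the leaf visit would call something like job.getLastBuild()?.getResult() to read the job's result, and getItemByFullName('foldername/jobname') skips the traversal entirely when you already know the path.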

Neo4j Spatial: can't run spatial

I have been trying to work with Neo4j Spatial for my project, but I can't make it work.
With limited documentation and examples I figured out how to load OSM map to the database. But to check if it is loaded, I am trying to execute a spatial query.
While trying to run my code I get this error:
import.java:69: error: cannot access GremlinGroovyPipeline
.startIntersectSearch(layer, bbox)
^
class file for com.tinkerpop.gremlin.groovy.GremlinGroovyPipeline not found
I understand what's wrong (it can't find the required library), but I don't know how to fix it. When I run the Neo4j Spatial tests, LayerTest.java and TestSpatial.java do include the GeoPipeline library and it works perfectly fine. However, when I created my own simple Java file to test Neo4j and tried to execute commands that depend on the GeoPipeline library, I got the error above.
I read the instructions on github for Neo4j and saw this note:
Note: neo4j-spatial has a mandatory dependency on
GremlinGroovyPipeline from the com.tinkerpop.gremlin.groovy package.
The dependency in neo4j is type 'provided', so when using
neo4j-spatial in your own Java project, make sure to add the following
dependency to your pom.xml, too.
However, I am not using Maven to build my app. It is a simple Java file that I want to run to test whether I understand how everything works.
Here is the code from my Java file:
package org.neo4j.gis.spatial;
import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.geotools.data.DataStore;
import org.geotools.data.neo4j.Neo4jSpatialDataStore;
import org.geotools.data.simple.SimpleFeatureCollection;
import org.neo4j.gis.spatial.osm.OSMDataset;
import org.neo4j.gis.spatial.osm.OSMDataset.Way;
import org.neo4j.gis.spatial.osm.OSMGeometryEncoder;
import org.neo4j.gis.spatial.osm.OSMImporter;
import org.neo4j.gis.spatial.osm.OSMLayer;
import org.neo4j.gis.spatial.osm.OSMRelation;
import org.neo4j.gis.spatial.pipes.osm.OSMGeoPipeline;
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import com.vividsolutions.jts.geom.Envelope;
import com.vividsolutions.jts.geom.Geometry;
import org.neo4j.kernel.impl.batchinsert.BatchInserter;
import org.neo4j.kernel.impl.batchinsert.BatchInserterImpl;
import org.neo4j.kernel.EmbeddedGraphDatabase;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.gis.spatial.pipes.GeoPipeline;
class SpatialOsmImport {
    public static void main(String[] args)
    {
        OSMImporter importer = new OSMImporter("ott.osm");

        Map<String, String> config = new HashMap<String, String>();
        config.put("neostore.nodestore.db.mapped_memory", "90M");
        config.put("dump_configuration", "true");
        config.put("use_memory_mapped_buffers", "true");

        BatchInserter batchInserter = new BatchInserterImpl("target/dependency", config);
        importer.setCharset(Charset.forName("UTF-8"));

        try {
            importer.importFile(batchInserter, "ott.osm", false);
            batchInserter.shutdown();

            GraphDatabaseService db = new EmbeddedGraphDatabase("target/dependency");
            importer.reIndex(db, 10000);
            db.shutdown();
        }
        catch (Exception e)
        {
            System.out.println(e.getMessage());
        }

        GraphDatabaseService database = new EmbeddedGraphDatabase("target/dependency");
        try {
            SpatialDatabaseService spatialService = new SpatialDatabaseService(database);
            Layer layer = spatialService.getLayer("layer_roads");
            LayerIndexReader spatialIndex = layer.getIndex();
            System.out.println("Have " + spatialIndex.count() + " geometries in " + spatialIndex.getBoundingBox());

            Envelope bbox = new Envelope(-75.80, 45.19, -75.7, 45.23);
            // Search searchQuery = new SearchIntersectWindow(bbox);
            // spatialIndex.executeSearch(searchQuery);
            // List<SpatialDatabaseRecord> results = searchQuery.getResults();
            List<SpatialDatabaseRecord> results = GeoPipeline
                .startIntersectSearch(layer, bbox)
                .toSpatialDatabaseRecordList();
            doGeometryTestsOnResults(bbox, results);
        } finally {
            database.shutdown();
        }
    }

    private static void doGeometryTestsOnResults(Envelope bbox, List<SpatialDatabaseRecord> results) {
        System.out.println("Found " + results.size() + " geometries in " + bbox);
        Geometry geometry = results.get(0).getGeometry();
        System.out.println("First geometry is " + geometry);
        geometry.buffer(2);
    }
}
It is very simple right now, but I can't make it work. How do I include com.tinkerpop.gremlin.groovy.GremlinGroovyPipeline in my app, so it works?
I run everything on Ubuntu 12.04 and java version "1.7.0_25", Java(TM) SE Runtime Environment (build 1.7.0_25-b15).
Any help is greatly appreciated.
The best way to get all the required dependencies into a place where you can include them in your classpath is to run
mvn dependency:copy-dependencies
in neo4j-spatial and find the libs to include in target/deps; see http://maven.apache.org/plugins/maven-dependency-plugin/usage.html

How to import org.codehaus.groovy.scriptom.* on Groovy?

I'm trying to run a Groovy app to manipulate Excel files on STS (by SpringSource) 2.3.0.
My Groovy version is 1.7.
Class:
package com.mytool

import org.codehaus.groovy.scriptom.ActiveXObject

/**
 * @author Mulone
 *
 */
class SurveyTool {
    static main(args) {
        print 'test'
        def wshell = new ActiveXObject('Wscript.Shell')
        wshell.popup("Scriptom is Groovy")
    }
}
Sadly, this is what I get:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
C:\workspace\SurveyTool\src\com\geoadapta\surveytool\SurveyTool.groovy: 6: unable to resolve class org.codehaus.groovy.scriptom.ActiveXObject
@ line 6, column 1.
import org.codehaus.groovy.scriptom.ActiveXObject
^
1 error
I also tried renaming ActiveXObject to ActiveXProxy, with the same result.
I also tried to import Scriptom manually from the package scriptom-all-assembly-1.6.0, but it didn't work.
Any idea?
Cheers
Oops, I fixed it by importing all the jar files manually and putting jacob-1.14.3-x86.dll in the project folder.
