ConfigurationException: Trigger class 'org.apache.cassandra.triggers.AuditTrigger' doesn't exist - cassandra

I am creating a trigger using the example here, but it is not working at all, and I am getting ConfigurationException: Trigger class 'org.apache.cassandra.triggers.AuditTrigger' doesn't exist.
Steps I followed to create trigger:
1: I have compiled my Java file using:
javac -cp /CassandraTriggerExample/lib/cassandra-all-3.6.jar AuditTrigger.java
2: Jar creation:
jar -cvf trigger-example.jar AuditTrigger.class
3: I checked the content of my jar file:
unzip -l trigger-example.jar
4: Copied this jar file into:
cassandra_home/conf/triggers
5: Copied AuditTrigger.properties into:
cassandra_home/conf
6: Restarted the Cassandra server
7: ./nodetool -h localhost reloadtriggers
8: In system.log I can see the entry:
INFO [RMI TCP Connection(2)-127.0.0.1] 2018-07-22 22:15:25,827
CustomClassLoader.java:89 - Loading new jar
/Users/uname/cassandra/conf/triggers/trigger-example.jar
9: Now when I am creating my trigger using:
CREATE TRIGGER test1 ON test.test
USING 'org.apache.cassandra.triggers.AuditTrigger';
I am getting "ConfigurationException: Trigger class 'org.apache.cassandra.triggers.AuditTrigger' doesn't exist".

I think the problem is that your jar isn't correctly packaged: if your class has the name org.apache.cassandra.triggers.AuditTrigger, then it should be located under org/apache/cassandra/triggers/AuditTrigger.class inside the jar file.
See this documentation for a more detailed explanation of how classes are found.
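For example, assuming the source file declares package org.apache.cassandra.triggers, compiling with -d and jarring the whole package tree produces the expected layout (paths as in the question):
javac -cp /CassandraTriggerExample/lib/cassandra-all-3.6.jar -d . AuditTrigger.java
jar -cvf trigger-example.jar org/
unzip -l trigger-example.jar   # should now list org/apache/cassandra/triggers/AuditTrigger.class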

I had a similar issue. It could be because you copied the jar but did not reload or create the trigger. I got it resolved by following the checks below and executing the commands to reload and create the trigger.
Check
Ensure that the class is named org.apache.cassandra.triggers.AuditTrigger and that it is located under org/apache/cassandra/triggers/AuditTrigger.class inside the jar file.
CMD Command
Go to the bin folder of the Cassandra installation and run the nodetool reloadtriggers command as below.
C:\Cassandra\apache-cassandra-3.11.6\bin>nodetool reloadtriggers
Execute the below statement at the cqlsh prompt:
CREATE TRIGGER test1 ON test.test USING 'org.apache.cassandra.triggers.AuditTrigger';
Your trigger should now be available!
If the problem still persists, you can try restarting the server once to see if it becomes available.
Below, as an example, is the code I used to publish a message to a Kafka consumer upon every insert into the Cassandra DB. You can modify it for updates. I used JDK 1.8.0_251, apache-cassandra-3.11.7, kafka_2.13-2.6.0 and Zookeeper-3.6.1.
/**
*
*/
package com.cass.kafka.insert.trigger;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.Properties;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.apache.cassandra.config.ColumnDefinition;
import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.partitions.Partition;
import org.apache.cassandra.db.rows.Cell;
import org.apache.cassandra.db.rows.Row;
import org.apache.cassandra.db.rows.Unfiltered;
import org.apache.cassandra.db.rows.UnfilteredRowIterator;
import org.apache.cassandra.triggers.ITrigger;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
/**
* @author Dinesh.Lomte
*
*/
public class InsertCassTriggerForKafkaPublish implements ITrigger {
private String topic;
private Producer<String, String> producer;
private ThreadPoolExecutor threadPoolExecutor;
/**
*
*/
public InsertCassTriggerForKafkaPublish() {
Thread.currentThread().setContextClassLoader(null);
topic = "test";
producer = new KafkaProducer<String, String>(getProps());
threadPoolExecutor = new ThreadPoolExecutor(4, 20, 30,
TimeUnit.SECONDS, new LinkedBlockingDeque<Runnable>());
}
/**
*
*/
@Override
public Collection<Mutation> augment(Partition partition) {
threadPoolExecutor.execute(() -> handleUpdate(partition));
return Collections.emptyList();
}
/**
*
* @param partition
*/
private void handleUpdate(Partition partition) {
if (!partition.partitionLevelDeletion().isLive()) {
return;
}
UnfilteredRowIterator it = partition.unfilteredIterator();
while (it.hasNext()) {
Unfiltered un = it.next();
Row row = (Row) un;
if (row.primaryKeyLivenessInfo().timestamp() != Long.MIN_VALUE) {
Iterator<Cell> cells = row.cells().iterator();
Iterator<ColumnDefinition> columns = row.columns().iterator();
while (cells.hasNext() && columns.hasNext()) {
ColumnDefinition columnDef = columns.next();
Cell cell = cells.next();
if ("payload_json".equals(columnDef.name.toString())) {
producer.send(new ProducerRecord<>(
topic, columnDef.type.getString(cell.value())));
break;
}
}
}
}
}
/**
*
* @return
*/
private Properties getProps() {
Properties properties = new Properties();
properties.put("bootstrap.servers", "localhost:9092");
properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
return properties;
}
}
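For reference, the trigger above only forwards the value of a column named payload_json, so the target table needs such a column. A minimal sketch of a matching table and the trigger registration (the table layout and trigger name are assumptions for illustration):
CREATE TABLE test.test (
    id uuid PRIMARY KEY,
    payload_json text
);

CREATE TRIGGER kafka_publish ON test.test
    USING 'com.cass.kafka.insert.trigger.InsertCassTriggerForKafkaPublish';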

Related

Use Gmail API in a keyword in Katalon Studio

I use this tutorial to connect to Gmail API: https://developers.google.com/gmail/api/quickstart/java
I would like to make a keyword in Katalon Studio, which depends on Gmail API.
I modified from sample code that line:
InputStream in = GmailQuickstart.class.getResourceAsStream(CREDENTIALS_FILE_PATH);
to this:
InputStream ins = new FileInputStream(CREDENTIALS_FILE_PATH);
JAR files are added, the project runs, and a browser window opens to get the token. After successful authorization I got this error message:
Caused by: java.lang.NoSuchMethodError:
com.google.api.client.http.HttpRequest.setResponseReturnRawInputStream(Z)Lcom/google/api/client/http/HttpRequest;
UPDATE: List of imported dependencies:
commons-codec-1.15.jar
commons-logging-1.2.jar
google-api-client-1.31.3.jar
google-api-client-extensions-1.6.0-beta.jar
google-api-client-jackson2-1.31.3.jar
google-api-client-java6-1.31.3.jar
google-api-services-gmail-v1-rev110-1.25.0.jar
google-http-client-1.39.1.jar
google-http-client-jackson2-1.39.1.jar
google-oauth-client-java6-1.31.4.jar
google-oauth-client-jetty-1.31.4.jar
guava-30.1.1-jre.jar
httpclient-4.5.13.jar
httpcore-4.4.14.jar
j2objc-annotations-1.3.jar
jackson-core-2.12.2.jar
jsr305-3.0.2.jar
https://docs.katalon.com/katalon-studio/docs/external-libraries.html#exclude-built-in-libraries
With the ability to remove built-in libraries stored in the .classpath file of a project folder, you can replace a built-in library with an external one for flexible libraries usage in a test project.
Requirements
An active Katalon Studio Enterprise license.
Katalon Studio version 7.8.
UPDATE:
I got Katalon 7.9.1, and here is how I was able to do it.
Add the following class into the KS project:
include/scripts/groovy/(default package)/GroovyBox.java
import groovy.lang.*;
import java.util.regex.Pattern;
import java.util.Map;
import java.util.List;
/** run groovy script in isolated classloader*/
public class GroovyBox {
GroovyShell gs;
public GroovyBox(ClassLoader parentCL, Pattern excludeClassPattern ) {
FilteredCL fcl = new FilteredCL(parentCL, excludeClassPattern);
gs = new GroovyShell(fcl);
}
public GroovyBox withClassPath(List<String> classPathList) {
GroovyClassLoader cl = gs.getClassLoader();
for(String cp: classPathList) cl.addClasspath(cp);
return this;
}
public Script parse(String scriptText) {
return gs.parse(scriptText);
}
public static class FilteredCL extends GroovyClassLoader{
Pattern filterOut;
public FilteredCL(ClassLoader parent,Pattern excludeClassPattern){
super(parent);
filterOut = excludeClassPattern;
}
@Override protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException{
if(filterOut.matcher(name).matches())throw new ClassNotFoundException("class not found "+ name);
return super.loadClass(name, resolve);
}
}
}
Now add a test case (you can actually move the code from the test case into a class):
import ... /* all katalon imports here*/
assert method1() == 'HELLO WORLD'
def method1() {
def gb = new GroovyBox(this.getClass().getClassLoader().getParent(), ~/^com\.google\..*/)
def script = gb.parse('''
@Grab(group='com.google.api-client', module='google-api-client', version='1.31.3')
import com.google.api.client.http.HttpRequest
def c = HttpRequest.class
println( "methods execute:: "+c.methods.findAll{it.name=='execute'} )
println( "methods setResponseReturnRawInputStream:: "+c.methods.findAll{it.name=='setResponseReturnRawInputStream'} )
println greeting
return greeting.toUpperCase()
''')
script.setBinding([greeting:'hello world'] as Binding)
return script.run()
}
Options to define external dependencies:
@Grab(...) as the first line of the parsed script loads all required dependencies from Maven Central (by default). For example, @Grab(group='com.google.api-client', module='google-api-client', version='1.31.3') corresponds to this artifact.
Sometimes you need to specify a particular Maven repository; in that case add @GrabResolver(name='central', root='https://repo1.maven.org/maven2/').
If you want to specify local file dependencies, then in the code above:
def gb = new GroovyBox(...).withClassPath([
'/path/to/lib1.jar',
'/path/to/lib2.jar'
])

Spring Batch thread-safe Map job repository

The Spring Batch docs say of the Map-backed job repository:
Note that the in-memory repository is volatile and so does not allow restart between JVM instances. It also cannot guarantee that two job instances with the same parameters are launched simultaneously, and is not suitable for use in a multi-threaded Job, or a locally partitioned Step. So use the database version of the repository wherever you need those features.
I would like to use a Map job repository, and I do not care about restarting, prevention of concurrent job executions, etc. but I do care about being able to use multi-threading and local partitioning.
My batch application has some partitioned steps, and at first glance it seems to run just fine with a Map-backed job repository.
Why is it said to be not possible with MapJobRepositoryFactoryBean? Looking at the implementation of the Map DAOs, they use ConcurrentHashMap. Is this not thread-safe?
I would advise you to follow the documentation rather than relying on implementation details. Even if the maps are individually thread-safe, there might be race conditions in changes that involve more than one of these maps.
You can use an in-memory database very easily. Example:
@Grapes([
@Grab('org.springframework:spring-jdbc:4.0.5.RELEASE'),
@Grab('com.h2database:h2:1.3.175'),
@Grab('org.springframework.batch:spring-batch-core:3.0.6.RELEASE'),
// must be passed with -cp, for whatever reason the GroovyClassLoader
// is not used for com.thoughtworks.xstream.io.json.JettisonMappedXmlDriver
//@Grab('org.codehaus.jettison:jettison:1.2'),
])
import org.h2.jdbcx.JdbcDataSource
import org.springframework.batch.core.Job
import org.springframework.batch.core.JobParameters
import org.springframework.batch.core.Step
import org.springframework.batch.core.StepContribution
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory
import org.springframework.batch.core.launch.JobLauncher
import org.springframework.batch.core.scope.context.ChunkContext
import org.springframework.batch.core.step.tasklet.Tasklet
import org.springframework.batch.repeat.RepeatStatus
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.context.annotation.AnnotationConfigApplicationContext
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.core.io.ResourceLoader
import org.springframework.jdbc.datasource.init.DatabasePopulatorUtils
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator
import javax.annotation.PostConstruct
import javax.sql.DataSource
@Configuration
@EnableBatchProcessing
class AppConfig {
@Autowired
private JobBuilderFactory jobs
@Autowired
private StepBuilderFactory steps
@Bean
public Job job() {
return jobs.get("myJob").start(step1()).build()
}
@Bean
Step step1() {
this.steps.get('step1')
.tasklet(new MyTasklet())
.build()
}
@Bean
DataSource dataSource() {
new JdbcDataSource().with {
url = 'jdbc:h2:mem:temp_db;DB_CLOSE_DELAY=-1'
user = 'sa'
password = 'sa'
it
}
}
@Bean
BatchSchemaPopulator batchSchemaPopulator() {
new BatchSchemaPopulator()
}
}
class BatchSchemaPopulator {
@Autowired
ResourceLoader resourceLoader
@Autowired
DataSource dataSource
@PostConstruct
void init() {
def populator = new ResourceDatabasePopulator()
populator.addScript(
resourceLoader.getResource(
'classpath:/org/springframework/batch/core/schema-h2.sql'))
DatabasePopulatorUtils.execute populator, dataSource
}
}
class MyTasklet implements Tasklet {
@Override
RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
println 'TEST!'
return RepeatStatus.FINISHED
}
}
def ctx = new AnnotationConfigApplicationContext(AppConfig)
def launcher = ctx.getBean(JobLauncher)
def jobExecution = launcher.run(ctx.getBean(Job), new JobParameters([:]))
println "Status is: ${jobExecution.status}"

Does Spring-data-Cassandra 1.3.2.RELEASE support UDT annotations?

Is @UDT (http://docs.datastax.com/en/developer/java-driver/2.1/java-driver/reference/mappingUdts.html) supported by Spring-data-Cassandra 1.3.2.RELEASE? If not, how can I add a workaround for this?
Thanks
See the details here:
https://jira.spring.io/browse/DATACASS-172
I faced the same issue, and it seems that it does not.
Debugging shows that Spring Data Cassandra checks for the @Table, @Persistent or @PrimaryKeyClass annotation only, and raises an exception otherwise:
Invocation of init method failed; nested exception is org.springframework.data.cassandra.mapping.VerifierMappingExceptions:
Cassandra entities must have the @Table, @Persistent or @PrimaryKeyClass Annotation
But I found the solution.
I figured out an approach that allows me to manage entities that include UDTs and ones that don't. In my application I use the Spring Data Cassandra project together with the DataStax core driver directly. The repositories that don't contain objects with UDTs use the Spring Data Cassandra approach, and the objects that include UDTs use custom repositories.
The custom repositories use the DataStax mapper and work correctly with UDTs
(they are located in a separate package; see the notes below on why that is needed):
package com.fyb.cassandra.custom.repositories.impl;
import java.util.List;
import java.util.UUID;
import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.cassandra.config.CassandraSessionFactoryBean;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.mapping.Mapper;
import com.datastax.driver.mapping.MappingManager;
import com.datastax.driver.mapping.Result;
import com.google.common.collect.Lists;
import com.fyb.cassandra.custom.repositories.AccountDeviceRepository;
import com.fyb.cassandra.dto.AccountDevice;
public class AccountDeviceRepositoryImpl implements AccountDeviceRepository {
@Autowired
public CassandraSessionFactoryBean session;
private Mapper<AccountDevice> mapper;
@PostConstruct
void initialize() {
mapper = new MappingManager(session.getObject()).mapper(AccountDevice.class);
}
@Override
public List<AccountDevice> findAll() {
return fetchByQuery("SELECT * FROM account_devices");
}
@Override
public void save(AccountDevice accountDevice) {
mapper.save(accountDevice);
}
@Override
public void deleteByConditions(UUID accountId, UUID systemId, UUID deviceId) {
final String query = "DELETE FROM account_devices where account_id =" + accountId + " AND system_id=" + systemId
+ " AND device_id=" + deviceId;
session.getObject().execute(query);
}
@Override
public List<AccountDevice> findByAccountId(UUID accountId) {
final String query = "SELECT * FROM account_devices where account_id=" + accountId;
return fetchByQuery(query);
}
/*
* Take any valid CQL query and try to map result set to the given list of appropriates <T> types.
*/
private List<AccountDevice> fetchByQuery(String query) {
ResultSet results = session.getObject().execute(query);
Result<AccountDevice> accountsDevices = mapper.map(results);
List<AccountDevice> result = Lists.newArrayList();
for (AccountDevice accountsDevice : accountsDevices) {
result.add(accountsDevice);
}
return result;
}
}
And the Spring Data related repos that are responsible for managing entities without UDT objects look as follows:
package com.fyb.cassandra.repositories;
import org.springframework.data.cassandra.repository.CassandraRepository;
import com.fyb.cassandra.dto.AccountUser;
import org.springframework.data.cassandra.repository.Query;
import org.springframework.stereotype.Repository;
import java.util.List;
import java.util.UUID;
@Repository
public interface AccountUserRepository extends CassandraRepository<AccountUser> {
@Query("SELECT * FROM account_users WHERE account_id=?0")
List<AccountUser> findByAccountId(UUID accountId);
}
I've tested this solution and it works 100%.
In addition I've attached my POJO objects.
POJO that uses only DataStax annotations:
package com.fyb.cassandra.dto;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import com.datastax.driver.mapping.annotations.ClusteringColumn;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.Frozen;
import com.datastax.driver.mapping.annotations.FrozenValue;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
@Table(name = "account_systems")
public class AccountSystem {
@PartitionKey
@Column(name = "account_id")
private java.util.UUID accountId;
@ClusteringColumn
@Column(name = "system_id")
private java.util.UUID systemId;
@Frozen
private Location location;
@FrozenValue
@Column(name = "user_token")
private List<UserToken> userToken;
@Column(name = "product_type_id")
private int productTypeId;
@Column(name = "serial_number")
private String serialNumber;
}
POJO without UDT, using only the Spring Data Cassandra framework:
package com.fyb.cassandra.dto;
import java.util.Date;
import java.util.UUID;
import org.springframework.cassandra.core.PrimaryKeyType;
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.mapping.Table;
@Table(value = "accounts")
public class Account {
@PrimaryKeyColumn(name = "account_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
private java.util.UUID accountId;
@Column(value = "account_name")
private String accountName;
@Column(value = "currency")
private String currency;
}
Note that the entities above use different annotations:
@PrimaryKeyColumn(name = "account_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED) and @PartitionKey
@ClusteringColumn and @PrimaryKeyColumn(name = "area_parent_id", ordinal = 2, type = PrimaryKeyType.CLUSTERED)
At first glance this is inconvenient, but it allows you to work with objects that include UDTs and ones that don't.
One important note: the two kinds of repos (those that use UDTs and those that don't) should reside in different packages, because the Spring config looks for base packages with repos:
@Configuration
@EnableCassandraRepositories(basePackages = {
"com.fyb.cassandra.repositories" })
public class CassandraConfig {
..........
}
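Note that the custom repository implementations in the other package are not created by @EnableCassandraRepositories; they still have to be registered as ordinary Spring beans. A minimal sketch (hypothetical bean method, placed in the same config class):
@Bean
public AccountDeviceRepository accountDeviceRepository() {
    return new AccountDeviceRepositoryImpl();
}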
User-defined types are now supported by Spring Data Cassandra. The latest release, 1.5.0.RELEASE, uses the Cassandra DataStax driver 3.1.3, and hence it works now. Follow the steps below to make it work.
How to use the UserDefinedType (UDT) feature with Spring Data Cassandra:
We need to use the latest jar of Spring Data Cassandra (1.5.0.RELEASE):
group: 'org.springframework.data', name: 'spring-data-cassandra', version: '1.5.0.RELEASE'
Make sure it uses the below versions of the jars:
datastax.cassandra.driver.version=3.1.3
spring.data.cassandra.version=1.5.0.RELEASE
spring.data.commons.version=1.13.0.RELEASE
spring.cql.version=1.5.0.RELEASE
Create the user-defined type in Cassandra. The type name should be the same as defined in the POJO class.
Address data type
CREATE TYPE address_type (
id text,
address_type text,
first_name text,
phone text
);
Create a column family with one of the columns as a UDT in Cassandra:
Employee table:
CREATE TABLE employee(
employee_id uuid,
employee_name text,
address frozen<address_type>,
primary key (employee_id, employee_name)
);
In the domain class, define the field with the annotation @CassandraType, and the DataType should be UDT:
#Table("employee") public class Employee {
-- othere fields--
#CassandraType(type = DataType.Name.UDT, userTypeName = "address_type")
private Address address;
}
Create a domain class for the user-defined type. We need to make sure that the column name in the user-defined type schema is the same as the field name in the domain class.
@UserDefinedType("address_type")
public class Address {
@CassandraType(type = DataType.Name.TEXT)
private String id;
@CassandraType(type = DataType.Name.TEXT)
private String address_type;
}
In the Cassandra config, change this:
@Bean
public CassandraMappingContext mappingContext() throws Exception {
BasicCassandraMappingContext mappingContext = new BasicCassandraMappingContext();
mappingContext.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(), cassandraKeyspace));
return mappingContext;
}
The user-defined type should have the same name everywhere, e.g.:
@UserDefinedType("address_type")
@CassandraType(type = DataType.Name.UDT, userTypeName = "address_type")
CREATE TYPE address_type
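With the mapping in place, the entity can be used through a regular Spring Data repository. A minimal sketch (repository name and query are hypothetical), following the same style as the AccountUserRepository shown earlier:
import java.util.List;
import java.util.UUID;
import org.springframework.data.cassandra.repository.CassandraRepository;
import org.springframework.data.cassandra.repository.Query;
import org.springframework.stereotype.Repository;

@Repository
public interface EmployeeRepository extends CassandraRepository<Employee> {

    // Positional parameter binding, as in the AccountUserRepository example
    @Query("SELECT * FROM employee WHERE employee_id = ?0")
    List<Employee> findByEmployeeId(UUID employeeId);
}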

Unable to connect to cassandra using Hector

I am unable to access Cassandra using Hector. Following is the code:
import java.util.Arrays;
import java.util.List;
import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.cassandra.service.ThriftCluster;
import me.prettyprint.cassandra.service.ThriftKsDef;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;
public class Hector {
public static void main (String[] args){
boolean cfExists = false;
Cluster cluster = HFactory.getOrCreateCluster("mycluster", new CassandraHostConfigurator("host:9160"));
Keyspace keyspace = HFactory.createKeyspace("Keyspace1", cluster);
// first check if the key space exists
KeyspaceDefinition keyspaceDetail = cluster.describeKeyspace("Keyspace1");
// if not, create one
if (keyspaceDetail == null) {
CassandraHostConfigurator cassandraHostConfigurator = new CassandraHostConfigurator("host:9160");
ThriftCluster cassandraCluster = new ThriftCluster("mycluster", cassandraHostConfigurator);
ColumnFamilyDefinition cfDef = HFactory.createColumnFamilyDefinition("Keyspace1", "base");
cassandraCluster.addKeyspace(new ThriftKsDef("Keyspace1", "org.apache.cassandra.locator.SimpleStrategy", 1,
Arrays.asList(cfDef)));
} else {
// even if the key space exists, we need to check if the column family exists
List<ColumnFamilyDefinition> columnFamilyDefinitions = keyspaceDetail.getCfDefs();
for (ColumnFamilyDefinition def : columnFamilyDefinitions) {
String columnFamilyName = def.getName();
if (columnFamilyName.equals("tcs_im"))
cfExists = true;
}
}
}
}
Encountering following error
log4j:WARN No appenders could be found for logger (me.prettyprint.cassandra.connection.CassandraHostRetryService).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.IllegalAccessError: tried to access class me.prettyprint.cassandra.service.JmxMonitor from class me.prettyprint.cassandra.connection.HConnectionManager
at me.prettyprint.cassandra.connection.HConnectionManager.<init>(HConnectionManager.java:78)
at me.prettyprint.cassandra.service.AbstractCluster.<init>(AbstractCluster.java:69)
at me.prettyprint.cassandra.service.AbstractCluster.<init>(AbstractCluster.java:65)
at me.prettyprint.cassandra.service.ThriftCluster.<init>(ThriftCluster.java:17)
at me.prettyprint.hector.api.factory.HFactory.createCluster(HFactory.java:176)
at me.prettyprint.hector.api.factory.HFactory.getOrCreateCluster(HFactory.java:155)
at com.im.tcs.Hector.main(Hector.java:20)
Please help as to why this is happening.
We use a CassandraConnection class as a convenience class:
import me.prettyprint.cassandra.connection.DynamicLoadBalancingPolicy;
import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.cassandra.service.ExhaustedPolicy;
import me.prettyprint.cassandra.service.OperationType;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import java.util.HashMap;
import java.util.Map;
/**
* lazy connect
*/
final class CassandraConnection {
// Constants -----------------------------------------------------
private static final String HOSTS = "localhost";
private static final int PORT = 9160;
private static final String CLUSTER_NAME = "myCluster";
private static final int TIMEOUT = 500;
private static final String KEYSPACE = "Keyspace1";
private static final ConsistencyLevelPolicy CL_POLICY = new ConsistencyLevelPolicy();
// Attributes ----------------------------------------------------
private Cluster cluster;
private volatile Keyspace keyspace;
// Constructors --------------------------------------------------
CassandraConnection() {}
// Methods --------------------------------------------------------
Cluster getCluster() {
if (null == cluster) {
CassandraHostConfigurator config = new CassandraHostConfigurator();
config.setHosts(HOSTS);
config.setPort(PORT);
config.setUseThriftFramedTransport(true);
config.setUseSocketKeepalive(true);
config.setAutoDiscoverHosts(false);
// maxWorkerThreads provides the throttling for us. So hector can be let to grow freely...
config.setExhaustedPolicy(ExhaustedPolicy.WHEN_EXHAUSTED_GROW);
config.setMaxActive(1000); // hack since ExhaustedPolicy doesn't work
// suspend hosts if response is unacceptable for web response
config.setCassandraThriftSocketTimeout(TIMEOUT);
config.setUseHostTimeoutTracker(true);
config.setHostTimeoutCounter(3);
config.setLoadBalancingPolicy(new DynamicLoadBalancingPolicy());
cluster = HFactory.createCluster(CLUSTER_NAME, config);
}
return cluster;
}
Keyspace getKeyspace() {
if (null == keyspace) {
keyspace = HFactory.createKeyspace(KEYSPACE, getCluster(), CL_POLICY);
}
return keyspace;
}
private static class ConsistencyLevelPolicy implements me.prettyprint.hector.api.ConsistencyLevelPolicy {
@Override
public HConsistencyLevel get(final OperationType op) {
return HConsistencyLevel.ONE;
}
@Override
public HConsistencyLevel get(final OperationType op, final String cfName) {
return get(op);
}
}
}
Example of use:
private final CassandraConnection conn = new CassandraConnection();
SliceQuery<String, String, String> sliceQuery = HFactory.createSliceQuery(
conn.getKeyspace(), StringSerializer.get(), StringSerializer.get(), StringSerializer.get());
sliceQuery.setColumnFamily("myColumnFamily");
sliceQuery.setRange("", "", false, Integer.MAX_VALUE);
sliceQuery.setKey("myRowKey");
ColumnSlice<String, String> columnSlice = sliceQuery.execute().get();

Spring LDAP Template Usage

Please take a look at the test class below. I am trying to do an LDAP search with the Spring LDAP Template. I am able to search and produce a list of entries corresponding to the search criteria without the Spring LDAP template by using the DirContext, as shown in the method searchWithoutTemplate(). But when I use an LdapTemplate, I end up with an NPE as shown further below. I am sure I must be missing something. Can someone help please?
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.LdapName;
import org.springframework.ldap.core.AttributesMapper;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.DefaultDirObjectFactory;
import org.springframework.ldap.core.support.LdapContextSource;
public class LDAPSearchTest {
//bind params
static String url="ldap://<IP>:<PORT>";
static String userName="cn=Directory Manager";
static String password="password123";
static String bindDN="dc=XXX,dc=com";
//search params
static String base = "ou=StandardUser,ou=XXXCustomers,ou=People,dc=XXX,dc=com";
static String filter = "(objectClass=*)";
static String[] attributeFilter = { "cn", "uid" };
static SearchControls sc = new SearchControls();
public static void main(String[] args) throws Exception {
// sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
sc.setReturningAttributes(attributeFilter);
searchWithTemplate(); //NPE
//searchWithoutTemplate(); //works fine
}
public static void searchWithTemplate() throws Exception {
DefaultDirObjectFactory factory = new DefaultDirObjectFactory();
LdapContextSource cs = new LdapContextSource();
cs.setUrl(url);
cs.setUserDn(userName);
cs.setPassword(password);
cs.setBase(bindDN);
cs.setDirObjectFactory(factory.getClass ());
LdapTemplate template = new LdapTemplate(cs);
template.afterPropertiesSet();
System.out.println((template.search(new LdapName(base), filter, sc,
new AttributesMapper() {
public Object mapFromAttributes(Attributes attrs)
throws NamingException {
System.out.println(attrs);
return attrs.get("uid").get();
}
})));
}
public static void searchWithoutTemplate() throws NamingException{
Hashtable env = new Hashtable(11);
env.put(Context.INITIAL_CONTEXT_FACTORY,"com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, url);
//env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, userName);
env.put(Context.SECURITY_CREDENTIALS, password);
DirContext dctx = new InitialDirContext(env);
NamingEnumeration results = dctx.search(base, filter, sc);
while (results.hasMore()) {
SearchResult sr = (SearchResult) results.next();
Attributes attrs = sr.getAttributes();
System.out.println(attrs);
Attribute attr = attrs.get("uid");
}
dctx.close();
}
}
Exception is:
Exception in thread "main" java.lang.NullPointerException
at org.springframework.ldap.core.support.AbstractContextSource.getReadOnlyContext(AbstractContextSource.java:125)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:287)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:237)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:588)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:546)
at LDAPSearchTest.searchWithTemplate(LDAPSearchTest.java:47)
at LDAPSearchTest.main(LDAPSearchTest.java:33)
I am using Spring 2.5.6 and Spring LDAP 1.3.0
A quick scan showed that it's the authenticationSource field of AbstractContextSource that is the culprit. That file includes the following comment on the afterPropertiesSet() method:
/**
* Checks that all necessary data is set and that there is no compatibility
* issues, after which the instance is initialized. Note that you need to
* call this method explicitly after setting all desired properties if using
* the class outside of a Spring Context.
*/
public void afterPropertiesSet() throws Exception {
...
}
That method then goes on to create an appropriate authenticationSource if you haven't provided one.
As your test code above is most definitely not running within a Spring context, and you haven't explicitly set an authenticationSource, I think you need to edit your code as follows:
...
cs.setDirObjectFactory(factory.getClass ());
// Allow Spring to configure the Context Source:
cs.afterPropertiesSet();
LdapTemplate template = new LdapTemplate(cs);
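Putting it together, a minimal sketch of the corrected setup with the same parameters as in the question:
LdapContextSource cs = new LdapContextSource();
cs.setUrl(url);
cs.setUserDn(userName);
cs.setPassword(password);
cs.setBase(bindDN);
cs.setDirObjectFactory(DefaultDirObjectFactory.class);
cs.afterPropertiesSet();               // creates the default authenticationSource when none was set
LdapTemplate template = new LdapTemplate(cs);
template.afterPropertiesSet();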
