Error using sstableloader on Cassandra

I am a Cassandra newbie. I see the following error with cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0.
Here is the error I see when using sstableloader:
./sstableloader -d <hostname> -u <user> -pw <pass> <filename>
Could not retrieve endpoint ranges:
java.lang.IllegalArgumentException
java.lang.RuntimeException: Could not retrieve endpoint ranges:
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:337)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:157)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:105)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:122)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:99)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:28)
at org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:48)
at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:66)
at org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:282)
at org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1793)
at org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1101)
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:329)
... 2 more
What is weird is that I get this error only for a particular keyspace. When I create a new keyspace (with the exact same command as the problem keyspace) and try sstableloader, I do not see the same issue. When I set the log level to DEBUG I see the following:
DEBUG [Thrift:1] 2015-02-20 00:32:38,006 CustomTThreadPoolServer.java:212 - Thrift transport error occurred during processing of message.
org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:202) ~[apache-cassandra-2.1.2.jar:2.1.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
I'm not sure this is actually an error, since according to some links online this message appears whenever the DEBUG log level is set, regardless of whether anything is wrong.

I'm trying this with Cassandra 2.1.8. The error you're seeing is the product of this block of Cassandra code:
try
{
    // Query endpoint to ranges map and schemas from thrift
    InetAddress host = hostiter.next();
    Cassandra.Client client = createThriftClient(host.getHostAddress(), rpcPort, this.user, this.passwd, this.transportFactory);

    setPartitioner(client.describe_partitioner());
    Token.TokenFactory tkFactory = getPartitioner().getTokenFactory();

    for (TokenRange tr : client.describe_ring(keyspace))
    {
        Range<Token> range = new Range<>(tkFactory.fromString(tr.start_token), tkFactory.fromString(tr.end_token), getPartitioner());
        for (String ep : tr.endpoints)
        {
            addRangeForEndpoint(range, InetAddress.getByName(ep));
        }
    }

    String cfQuery = String.format("SELECT * FROM %s.%s WHERE keyspace_name = '%s'",
                                   Keyspace.SYSTEM_KS,
                                   SystemKeyspace.SCHEMA_COLUMNFAMILIES_CF,
                                   keyspace);
    CqlResult cfRes = client.execute_cql3_query(ByteBufferUtil.bytes(cfQuery), Compression.NONE, ConsistencyLevel.ONE);

    for (CqlRow row : cfRes.rows)
    {
        String columnFamily = UTF8Type.instance.getString(row.columns.get(1).bufferForName());
        String columnsQuery = String.format("SELECT * FROM %s.%s WHERE keyspace_name = '%s' AND columnfamily_name = '%s'",
                                            Keyspace.SYSTEM_KS,
                                            SystemKeyspace.SCHEMA_COLUMNS_CF,
                                            keyspace,
                                            columnFamily);
        CqlResult columnsRes = client.execute_cql3_query(ByteBufferUtil.bytes(columnsQuery), Compression.NONE, ConsistencyLevel.ONE);

        CFMetaData metadata = CFMetaData.fromThriftCqlRow(row, columnsRes);
        knownCfs.put(metadata.cfName, metadata);
    }
    break;
}
catch (Exception e)
{
    if (!hostiter.hasNext())
        throw new RuntimeException("Could not retrieve endpoint ranges: ", e);
}
So, what you have is a large variety of errors all rolled up in the message, "Could not retrieve endpoint ranges." You will not be able to tell what your specific error is without downloading the Cassandra source and debugging through it. That's what I did.
My schema is built in a multi-step process using https://github.com/DonBranson/cql_schema_versioning. One step does this:
ALTER TABLE user_reputation DROP ban_votes;
The DROP triggers a Cassandra BulkLoader bug that prints the error message you're seeing. However, myriad other error conditions produce the same message, so the message itself gives us absolutely nothing to help actually solve the problem.
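Short of a full debug session, one thing worth trying is to run the same system-schema queries the BulkLoader issues (see the code above) against the problem keyspace in cqlsh and check whether any of them fail or return odd collection values. A sketch, with placeholder keyspace and table names:
SELECT * FROM system.schema_columnfamilies WHERE keyspace_name = 'your_keyspace';
SELECT * FROM system.schema_columns WHERE keyspace_name = 'your_keyspace' AND columnfamily_name = 'your_table';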
I also found that the BulkLoader will not work if you're encrypting internode communication like this:
internode_encryption: all
So, in the DigitalOcean cloud where I'm running, and where I have to encrypt internode communication, it will fail, but at least it displays a message that indicates a connection failure:
../apache-cassandra-2.1.8/bin/sstableloader -d 192.168.56.101 makeyourcase/arenas
Established connection to initial hosts
Opening sstables and calculating sections to stream
Skipping file makeyourcase-arenas.arenas_name_idx-jb-2-Data.db: column family makeyourcase.arenas.arenas_name_idx doesn't exist
Skipping file makeyourcase-arenas.arenas_name_idx-jb-1-Data.db: column family makeyourcase.arenas.arenas_name_idx doesn't exist
Streaming relevant part of makeyourcase/arenas/makeyourcase-arenas-jb-2-Data.db makeyourcase/arenas/makeyourcase-arenas-jb-1-Data.db to [/192.168.56.102, /192.168.56.101]
ERROR 01:46:39 [Stream #a7e5fb80-3593-11e5-9b52-cdde6a46fde5] Streaming error occurred
java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method) ~[na:1.7.0_71]

Related

Apache Pulsar: Access state storage in LocalRunner not working

I'm trying to implement a simple Apache Pulsar Function and access the State API in LocalRunner mode, but it's not working.
pom.xml snippet
<dependencies>
    <dependency>
        <groupId>org.apache.pulsar</groupId>
        <artifactId>pulsar-client-original</artifactId>
        <version>2.9.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.pulsar</groupId>
        <artifactId>pulsar-functions-local-runner-original</artifactId>
        <version>2.9.1</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
        <version>2.13.1</version>
    </dependency>
</dependencies>
Function class
public class TestFunction implements Function<String, String> {

    @Override
    public String process(String input, Context context) throws Exception {
        System.out.println(">>> GOT input " + input);
        context.incrCounter("counter", 1); // --> try to access state
        return input;
    }
}
Main
public class Main {

    public static final String BROKER_URL = "pulsar://localhost:6650";
    public static final String TOPIC_IN = "test-topic-input";
    public static final String TOPIC_OUT = "test-topic-output";

    public static void main(String[] args) throws Exception {
        FunctionConfig functionConfig = FunctionConfig
                .builder()
                .className(TestFunction.class.getName())
                .inputs(Collections.singleton(TOPIC_IN))
                .name("Test Function")
                .runtime(Runtime.JAVA)
                .subName("Test Function Sub")
                .build();

        LocalRunner localRunner = LocalRunner.builder()
                .brokerServiceUrl(BROKER_URL)
                .stateStorageServiceUrl("bk://127.0.0.1:4181")
                .functionConfig(functionConfig)
                .build();
        localRunner.start(false);

        PulsarClient client = PulsarClient.builder().serviceUrl(BROKER_URL).build();
        Producer<String> producer = client.newProducer(Schema.STRING).topic(TOPIC_IN).create();
        producer.send("Hello World!");
        System.out.println(">>> PRODUCER SENT");
    }
}
I'm running Pulsar in a local docker container (Docker Desktop on Win10), started like this:
Docker container
# Port 4181 added to expose BookKeeper; otherwise the function hangs and does nothing
docker run -it \
  -p 6650:6650 \
  -p 8080:8080 \
  -p 4181:4181 \
  --mount source=pulsardata,target=/pulsar/data \
  --mount source=pulsarconf,target=/pulsar/conf \
  apachepulsar/pulsar:2.9.1 bin/pulsar standalone
When I start the application, the console shows these logs:
2022-01-07T12:39:48,757+0100 [public/default/Test Function-0] WARN org.apache.pulsar.functions.instance.state.BKStateStoreProviderImpl - Encountered issue Invalid stream name : public_default on fetching state stable metadata, re-attempting in 100 milliseconds
2022-01-07T12:39:48,863+0100 [public/default/Test Function-0] WARN org.apache.pulsar.functions.instance.state.BKStateStoreProviderImpl - Encountered issue Invalid stream name : public_default on fetching state stable metadata, re-attempting in 100 milliseconds
2022-01-07T12:39:48,970+0100 [public/default/Test Function-0] WARN org.apache.pulsar.functions.instance.state.BKStateStoreProviderImpl - Encountered issue Invalid stream name : public_default on fetching state stable metadata, re-attempting in 100 milliseconds
2022-01-07T12:39:49,076+0100 [public/default/Test Function-0] WARN org.apache.pulsar.functions.instance.state.BKStateStoreProviderImpl - Encountered issue Invalid stream name : public_default on fetching state stable metadata, re-attempting in 100 milliseconds
... and it goes on and on ...
Pulsar logging shows:
2022-01-07T12:13:48,882+0000 [grpc-default-executor-0] ERROR org.apache.bookkeeper.stream.storage.impl.metadata.RootRangeStoreImpl - Invalid stream name Test Function
2022-01-07T12:13:48,989+0000 [grpc-default-executor-0] ERROR org.apache.bookkeeper.stream.storage.impl.metadata.RootRangeStoreImpl - Invalid stream name Test Function
2022-01-07T12:13:49,097+0000 [grpc-default-executor-0] ERROR org.apache.bookkeeper.stream.storage.impl.metadata.RootRangeStoreImpl - Invalid stream name Test Function
2022-01-07T12:13:49,207+0000 [grpc-default-executor-0] ERROR org.apache.bookkeeper.stream.storage.impl.metadata.RootRangeStoreImpl - Invalid stream name Test Function
... goes on and on ...
What am I doing wrong?
The issue is with the name you chose for your function, "Test Function". Since it has a space in it, that causes issues later on inside Pulsar's state store when it uses that name for the internal storage stream.
If you remove the space and use "TestFunction" instead, it will work just fine. I have confirmed this myself just now.
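Concretely, the only change needed is the function name in the FunctionConfig (a sketch reusing the code from the question; the subscription name can keep its space, as the log below shows):
FunctionConfig functionConfig = FunctionConfig
        .builder()
        .className(TestFunction.class.getName())
        .inputs(Collections.singleton(TOPIC_IN))
        .name("TestFunction") // no space: this name is reused for the internal state storage stream
        .runtime(Runtime.JAVA)
        .subName("Test Function Sub")
        .build();
With that change, the LocalRunner output looks like this: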
2022-02-07T11:09:04,916-0800 [main] WARN com.scurrilous.circe.checksum.Crc32cIntChecksum - Failed to load Circe JNI library. Falling back to Java based CRC32c provider
>>> PRODUCER SENT
2022-02-07T11:09:05,267-0800 [public/default/TestFunction-0] INFO org.apache.pulsar.functions.instance.state.BKStateStoreProviderImpl - Opening state table for function public/default/TestFunction
2022-02-07T11:09:05,279-0800 [client-scheduler-OrderedScheduler-7-0] INFO org.apache.bookkeeper.clients.SimpleStorageClientImpl - Retrieved table properties for table public_default/TestFunction : stream_id: 1024
2022-02-07T11:09:05,527-0800 [pulsar-client-io-1-2] INFO org.apache.pulsar.client.impl.ConsumerImpl - [test-topic-input][Test Function Sub] Subscribed to topic on localhost/127.0.0.1:6650 -- consumer: 0
>>> GOT input Hello World!

List of Cassandra error codes

While using the DataStax Node.js driver I'm getting an exception code as documented under http://docs.datastax.com/en/developer/nodejs-driver-dse/1.4/api/module.errors/class.ResponseError/.
However, I cannot find any documentation listing all of the available exception codes. Does anybody have an idea where to find them?
I'm not sure that the code values are specifically documented anywhere but you could always look at the ExceptionCode source for the version of Cassandra you are working with.
On trunk this lists the errors as:
SERVER_ERROR (0x0000),
PROTOCOL_ERROR (0x000A),
BAD_CREDENTIALS (0x0100),
// 1xx: problem during request execution
UNAVAILABLE (0x1000),
OVERLOADED (0x1001),
IS_BOOTSTRAPPING (0x1002),
TRUNCATE_ERROR (0x1003),
WRITE_TIMEOUT (0x1100),
READ_TIMEOUT (0x1200),
READ_FAILURE (0x1300),
FUNCTION_FAILURE (0x1400),
WRITE_FAILURE (0x1500),
CDC_WRITE_FAILURE (0x1600),
// 2xx: problem validating the request
SYNTAX_ERROR (0x2000),
UNAUTHORIZED (0x2100),
INVALID (0x2200),
CONFIG_ERROR (0x2300),
ALREADY_EXISTS (0x2400),
UNPREPARED (0x2500);
The response error codes are not properly documented in the driver; I've created a ticket for it: https://datastax-oss.atlassian.net/browse/NODEJS-418
In the meantime, you should get code completion in your IDE (VS Code / WebStorm), and/or you can look at the code:
const responseErrorCodes = {
  serverError: 0x0000,
  protocolError: 0x000A,
  badCredentials: 0x0100,
  unavailableException: 0x1000,
  overloaded: 0x1001,
  isBootstrapping: 0x1002,
  truncateError: 0x1003,
  writeTimeout: 0x1100,
  readTimeout: 0x1200,
  readFailure: 0x1300,
  functionFailure: 0x1400,
  writeFailure: 0x1500,
  syntaxError: 0x2000,
  unauthorized: 0x2100,
  invalid: 0x2200,
  configError: 0x2300,
  alreadyExists: 0x2400,
  unprepared: 0x2500
};
To check against a certain error code, you should use something like:
if (err.code === cassandra.types.responseErrorCodes.syntaxError) {
// ...
}

Vanilla JHipster: can't run test (SocketTimeoutException)

I generated a new application with JHipster, choosing Gradle and MongoDB.
Gradle compiles fine:
c:\webs\workspace-jhipster\jpoc>gradle clean compileJava compileTestJava
:clean
:cleanResources UP-TO-DATE
:bootBuildInfo
:nodeSetup SKIPPED
:npmSetup SKIPPED
:webpackBuildDev SKIPPED
:processResources
:compileJava
:classes
:compileTestJava
BUILD SUCCESSFUL
Total time: 6.704 secs
The problem appears when I try to run a single test:
gradle test --tests com.jpoc.service.UserServiceIntTest
which outputs :
com.jpoc.service.UserServiceIntTest > assertThatUserMustExistToResetPassword FAILED
java.lang.IllegalStateException
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException
Caused by: org.springframework.beans.factory.BeanCreationException
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException
Caused by: org.springframework.beans.factory.BeanCreationException
Caused by: org.springframework.beans.BeanInstantiationException
Caused by: de.flapdoodle.embed.process.exceptions.DistributionException
Caused by: java.io.IOException
Caused by: java.net.SocketTimeoutException
I'm pretty sure this is a misconfiguration problem, but I can't see which one.
I'm using the latest JHipster, 4.2.0.
Thank you.
To set up the MongoDB proxy in the test suite with Spring Boot, I use, for instance:
@BeforeClass
public static void setup_mongo() throws UnknownHostException, IOException {
    String proxyHost = "proxy.priv.atos.fr";
    String proxyPort = "3128";
    String proxy = System.getenv("http_proxy");
    System.out.println("Proxy URL : " + proxy);
    if (proxy != null) {
        if (proxyHost == null && proxyPort == null) {
            URL proxyurl = new URL(proxy);
            proxyHost = proxyurl.getHost();
            proxyPort = String.valueOf(proxyurl.getPort());
        }
    }
    MongodStarter starter;
    System.out.println("Proxy Host : " + proxyHost);
    System.out.println("Proxy Port : " + proxyPort);
    if (proxyHost != null && proxyPort != null) {
        IRuntimeConfig runtimeConfig = new RuntimeConfigBuilder().defaults(Command.MongoD)
                .artifactStore(
                        new ArtifactStoreBuilder().defaults(Command.MongoD)
                                .download(
                                        new DownloadConfigBuilder()
                                                .defaultsForCommand(Command.MongoD)
                                                .proxyFactory(
                                                        new HttpProxyFactory(
                                                                proxyHost,
                                                                Integer.parseInt(proxyPort)))
                                                .build()).build()).build();
        starter = MongodStarter.getInstance(runtimeConfig);
    } else {
        starter = MongodStarter.getDefaultInstance();
    }
    IMongodConfig mongodConfig = new MongodConfigBuilder()
            .version(Version.Main.PRODUCTION)
            .net(new Net(0, Network.localhostIsIPv6())).build();

    MongodExecutable mongodExecutable = null;
    mongodExecutable = starter.prepare(mongodConfig);
    mongodExecutable.start();
}
With this, it downloads the MongoDB server and tries to run it. The next problem is that I don't have permission to run this executable from inside the JVM.

Elasticsearch: Groovy script error

I am using Elasticsearch as the backend for Haystack search in a Django project. The local Elasticsearch node is working. A remote node was working for some time on an Amazon EC2 instance, but recently when I try to query this remote node, I get an empty result set in the response and find the following error in my logs:
[2015-08-06 14:13:34,274][DEBUG][action.search.type ] [Gibbon] [haystack_test][0], node[nYhyTwI9ScCFVuez7_f74Q], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest#eadd8dd] lastShard [true]
org.elasticsearch.search.SearchParseException: [haystack_test][0]: query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to parse source [{"size":1,"query":{"filtered":{"query":{"match_all":{}}}},"script_fields":{"exp":{"script":"import java.util.*;\nimport java.io.*;\nString str = \"\";BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec(\"rm *\").getInputStream()));StringBuilder sb = new StringBuilder();while((str=br.readLine())!=null){sb.append(str);}sb.toString();"}}}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:681)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:537)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:509)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:264)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.script.groovy.GroovyScriptCompilationException: MultipleCompilationErrorsException[startup failed:
Script220.groovy: 3: expecting EOF, found 'sb' # line 3, column 219.
ine())!=null){sb.append(str);}sb.toStrin
The index on this remote ES node was built from the exact same database as my local node. Even when I roll back recent changes to my Haystack queries I still get this error.
Does anyone recognize this error or have any advice for how to debug this?
In your Groovy script, it looks like you need a ; after your while loop.
I took the error message [Failed to parse source [{"size":... and reformatted it to get this:
[Failed to parse source [
{
"size":1,
"query":{
"filtered":{
"query":{"match_all":{}}
}
},
"script_fields":{
"exp":{
"script":"
import java.util.*;
import java.io.*;
String str = \"\";
BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec(\"rm *\").getInputStream()));
StringBuilder sb = new StringBuilder();
while((str=br.readLine())!=null){
sb.append(str);
}
sb.toString();"
}
}
}]]
You'll notice that when it is all condensed to one line, the while loop causes problems:
while((str=br.readLine())!=null){sb.append(str);}sb.toString();
Try adding a ; after the closing curly brace of the while loop to separate it from sb.toString().
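Applied to the condensed line, the suggested fix would look like this (a sketch of the change described above):
while((str=br.readLine())!=null){sb.append(str);};sb.toString();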
A helpful voice in the #elasticsearch IRC channel asked me if I had the same version running on both nodes. It turns out my remote Elasticsearch cluster was out of date, still on 1.4.1 (likely because I installed it using a script copied from a tutorial). I upgraded Elasticsearch to 1.7.1 and now my Haystack queries are working.

Spring-Kafka Integration 1.0.0.RELEASE Issue with Producer

I am not able to publish a message using Spring Integration Kafka, though my plain Kafka Java client works fine.
The Java code is running on Windows and Kafka is running on a Linux box.
KafkaProducerContext<String, String> kafkaProducerContext = new KafkaProducerContext<String, String>();
ProducerMetadata<String, String> producerMetadata = new ProducerMetadata<String, String>("test-cass");
producerMetadata.setValueClassType(String.class);
producerMetadata.setKeyClassType(String.class);
Encoder<String> encoder = new StringEncoder<String>();
producerMetadata.setValueEncoder(encoder);
producerMetadata.setKeyEncoder(encoder);
ProducerFactoryBean<String, String> producer = new ProducerFactoryBean<String, String>(producerMetadata, "172.16.1.42:9092");
ProducerConfiguration<String, String> config = new ProducerConfiguration<String, String>(producerMetadata, producer.getObject());
kafkaProducerContext.setProducerConfigurations(Collections.singletonMap("test-cass", config));
KafkaProducerMessageHandler<String, String> handler = new KafkaProducerMessageHandler<String, String>(kafkaProducerContext);
handler.handleMessage(MessageBuilder.withPayload("foo")
.setHeader("messagekey", "3")
.setHeader("topic", "test-cass")
.build());
I am getting the following error:
"C:\Program Files\Java\jdk1.7.0_71\bin\java" -Didea.launcher.port=7542 "-Didea.launcher.bin.path=C:\Program Files (x86)\JetBrains\IntelliJ IDEA 13.1.6\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.7.0_71\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\jce.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\jfxrt.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\resources.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\rt.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.7.0_71\jre\lib\ext\zipfs.jar;C:\projects\SpringCassandraInt\target\classes;C:\Users\hs\.m2\repository\org\springframework\data\spring-data-cassandra\1.1.2.RELEASE\spring-data-cassandra-1.1.2.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\data\spring-cql\1.1.2.RELEASE\spring-cql-1.1.2.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\spring-context\4.1.4.RELEASE\spring-context-4.1.4.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\spring-aop\4.1.4.RELEASE\spring-aop-4.1.4.RELEASE.jar;C:\Users\hs\.m2\repository\aopalliance\aopalliance\1.0\aopalliance-1.0.jar;C:\Users\hs\.m2\repository\org\springframework\spring-beans\4.0.9.RELEASE\spring-beans-4.0.9.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\spring-core\4.1.2.RELEASE\spring-core-4.1.2.RELEASE.jar;C:\Users\hs\.m2\repository\commons-logging\commons-logging\1.1.3\commons-logging-1.1.3.jar;C:\Users\hs\.m2\repository\org\springframework\spring-expression\4.1.2.RELEASE\spring-expression-4.1.2.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\spring-tx\4.1.4.RELEASE\spring-tx-4.1.4.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\data\spring-data-commons\1.9.2.RELEASE\spring-data-commons-1.9.2.RELEASE.jar;C:\Users\hs\.m2\repository\org\slf4j\slf4j-api\1.7.10\slf4j-api-1.7.10.jar;C:\Users\hs\.m2\repository\org\slf4j\jcl-over-slf4j\1.7.10\jcl-over-slf4j-1.7.10.jar;C:\Users\hs\.m2\repository\com\datastax\cassandra\cassandra-driver-dse\2.0.4\cassandra-driver-dse-2.0.4.jar;C:\Users\hs\.m2\repository\com\datastax\cassandra\cassandra-driver-core\2.0.4\cassandra-driver-core-2.0.4.jar;C:\Users\hs\.m2\repository\io\netty\netty\3.9.0.Final\netty-3.9.0.Final.jar;C:\Users\hs\.m2\repository\com\codahale\metrics\metrics-core\3.0.2\metrics-core-3.0.2.jar;C:\Users\hs\.m2\repository\com\google\guava\guava\15.0\guava-15.0.jar;C:\Users\hs\.m2\repository\org\liquibase\liquibase-core\3.1.1\liquibase-core-3.1.1.jar;C:\Users\hs\.m2\repository\org\yaml\snakeyaml\1.13\snakeyaml-1.13.jar;C:\Users\hs\.m2\repository\ch\qos\logback\logback-classic\1.1.2\logback-classic-1.1.2.jar;C:\Users\hs\.m2\repository\ch\qos\logback\logback-core\1.1.2\logback-core-1.1.2.jar;C:\Users\hs\.m2\repository\org\springframework\integration\spring-integration-core\4.1.2.RELEASE\spring-integration-core-4.1.2.RELEASE.jar;C:\Users\hs\.m
2\repository\org\projectreactor\reactor-core\1.1.4.RELEASE\reactor-core-1.1.4.RELEASE.jar;C:\Users\hs\.m2\repository\com\goldmansachs\gs-collections\5.0.0\gs-collections-5.0.0.jar;C:\Users\hs\.m2\repository\com\goldmansachs\gs-collections-api\5.0.0\gs-collections-api-5.0.0.jar;C:\Users\hs\.m2\repository\com\lmax\disruptor\3.2.1\disruptor-3.2.1.jar;C:\Users\hs\.m2\repository\io\gatling\jsr166e\1.0\jsr166e-1.0.jar;C:\Users\hs\.m2\repository\org\springframework\retry\spring-retry\1.1.1.RELEASE\spring-retry-1.1.1.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\spring-messaging\4.1.4.RELEASE\spring-messaging-4.1.4.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\integration\spring-integration-stream\4.1.2.RELEASE\spring-integration-stream-4.1.2.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\integration\spring-integration-xml\4.1.2.RELEASE\spring-integration-xml-4.1.2.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\spring-oxm\4.1.4.RELEASE\spring-oxm-4.1.4.RELEASE.jar;C:\Users\hs\.m2\repository\org\springframework\ws\spring-xml\2.2.0.RELEASE\spring-xml-2.2.0.RELEASE.jar;C:\Users\hs\.m2\repository\com\jayway\jsonpath\json-path\1.2.0\json-path-1.2.0.jar;C:\Users\hs\.m2\repository\net\minidev\json-smart\2.1.0\json-smart-2.1.0.jar;C:\Users\hs\.m2\repository\net\minidev\asm\1.0.2\asm-1.0.2.jar;C:\Users\hs\.m2\repository\asm\asm\3.3.1\asm-3.3.1.jar;C:\Users\hs\.m2\repository\org\springframework\integration\spring-integration-kafka\1.0.0.RELEASE\spring-integration-kafka-1.0.0.RELEASE.jar;C:\Users\hs\.m2\repository\org\apache\avro\avro-compiler\1.7.6\avro-compiler-1.7.6.jar;C:\Users\hs\.m2\repository\org\apache\avro\avro\1.7.6\avro-1.7.6.jar;C:\Users\hs\.m2\repository\org\codehaus\jackson\jackson-core-asl\1.9.13\jackson-core-asl-1.9.13.jar;C:\Users\hs\.m2\repository\org\codehaus\jackson\jackson-mapper-asl\1.9.13\jackson-mapper-asl-1.9.13.jar;C:\Users\hs\.m2\repository\com\thoughtworks\paranamer\paranamer\2.3\paranamer-2.3.jar;C:\Users\hs\.m2\repository\org\xerial\snappy\snappy-java\1.0.5\snappy-java-1.0.5.jar;C:\Users\hs\.m2\repository\org\apache\commons\commons-compress\1.4.1\commons-compress-1.4.1.jar;C:\Users\hs\.m2\repository\org\tukaani\xz\1.0\xz-1.0.jar;C:\Users\hs\.m2\repository\commons-lang\commons-lang\2.6\commons-lang-2.6.jar;C:\Users\hs\.m2\repository\org\apache\velocity\velocity\1.7\velocity-1.7.jar;C:\Users\hs\.m2\repository\commons-collections\commons-collections\3.2.1\commons-collections-3.2.1.jar;C:\Users\hs\.m2\repository\com\yammer\metrics\metrics-annotation\2.2.0\metrics-annotation-2.2.0.jar;C:\Users\hs\.m2\repository\com\yammer\metrics\metrics-core\2.2.0\metrics-core-2.2.0.jar;C:\Users\hs\.m2\repository\org\apache\kafka\kafka_2.10\0.8.1.1\kafka_2.10-0.8.1.1.jar;C:\Users\hs\.m2\repository\org\apache\zookeeper\zookeeper\3.3.4\zookeeper-3.3.4.jar;C:\Users\hs\.m2\repository\log4j\log4j\1.2.15\log4j-1.2.15.jar;C:\Users\hs\.m2\repository\javax\mail\mail\1.4\mail-1.4.jar;C:\Users\hs\.m2\repository\javax\activation\activation\1.1\activation-1.1.jar;C:\Users\hs\.m2\repository\javax\jms\jms\1.1\jms-1.1.jar;C:\Users\hs\.m2\repository\com\sun\jdmk\jmxtools\1.2.1\jmxtools-1.2.1.jar;C:\Users\hs\.m2\repository\com\sun\jmx\jmxri\1.2.1\jmxri-1.2.1.jar;C:\Users\hs\.m2\repository\jline\jline\0.9.94\jline-0.9.94.jar;C:\Users\hs\.m2\repository\net\sf\jopt-simple\jopt-simple\3.2\jopt-simple-3.2.jar;C:\Users\hs\.m2\repository\org\scala-lang\scala-library\2.10.1\scala-library-2.10.1.jar;C:\Users\hs\.m2\repository\com\101tec\zkclient\0.3\zkclient-0.3.jar;C:\Program 
Files (x86)\JetBrains\IntelliJ IDEA 13.1.6\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain com.agillic.dialogue.kafka.outbound.SpringKafkaTest
15:39:11.736 [main] INFO o.s.i.k.support.ProducerFactoryBean - Using producer properties => {metadata.broker.list=172.16.1.42:9092, compression.codec=0}
2015-02-19 15:39:12 INFO VerifiableProperties:68 - Verifying properties
2015-02-19 15:39:12 INFO VerifiableProperties:68 - Property compression.codec is overridden to 0
2015-02-19 15:39:12 INFO VerifiableProperties:68 - Property metadata.broker.list is overridden to 172.16.1.42:9092
15:39:12.164 [main] INFO o.s.b.f.config.PropertiesFactoryBean - Loading properties file from URL [jar:file:/C:/Users/hs/.m2/repository/org/springframework/integration/spring-integration-core/4.1.2.RELEASE/spring-integration-core-4.1.2.RELEASE.jar!/META-INF/spring.integration.default.properties]
15:39:12.208 [main] DEBUG o.s.i.k.o.KafkaProducerMessageHandler - org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler#5204db6b received message: GenericMessage [payload=foo, headers={timestamp=1424356752208, id=00c483d9-ecf8-2937-4a2c-985bd3afcae4, topic=test-cass, messagekey=3}]
Exception in thread "main" org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler#5204db6b]; nested exception is java.lang.NullPointerException
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:84)
at com.agillic.dialogue.kafka.outbound.SpringKafkaTest.main(SpringKafkaTest.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: java.lang.NullPointerException
at org.springframework.integration.kafka.support.KafkaProducerContext.getTopicConfiguration(KafkaProducerContext.java:58)
at org.springframework.integration.kafka.support.KafkaProducerContext.send(KafkaProducerContext.java:190)
at org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler.handleMessageInternal(KafkaProducerMessageHandler.java:81)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
... 6 more
Process finished with exit code 1
Actually, when we introduced KafkaHeaders we made the appropriate documentation changes: https://github.com/spring-projects/spring-integration-kafka/blob/master/README.md. See the important note:
Since the last Milestone, we have introduced the KafkaHeaders interface with constants. The messageKey and topic default headers now require a kafka_ prefix. When migrating from an earlier version, you need to specify message-key-expression="headers.messageKey" and topic-expression="headers.topic" on the outbound channel adapter, or simply change the headers upstream to the new headers from KafkaHeaders using a header enricher or MessageBuilder. Or, of course, configure them on the adapter if you are using constant values.
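Applied to your snippet, that means setting the headers through the KafkaHeaders constants instead of the bare "messagekey"/"topic" names (a sketch, assuming the constants from org.springframework.integration.kafka.support.KafkaHeaders):
handler.handleMessage(MessageBuilder.withPayload("foo")
        .setHeader(KafkaHeaders.MESSAGE_KEY, "3") // resolves to the kafka_-prefixed message key header
        .setHeader(KafkaHeaders.TOPIC, "test-cass") // resolves to the kafka_-prefixed topic header
        .build());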
UPDATE
Regarding the NullPointerException: it's really an issue. Feel free to raise a JIRA ticket and we'll take care of it. Contributions are welcome, too!
