Elasticsearch: Groovy script error - groovy

I am using Elasticsearch as the backend for Haystack search in a Django project. The local Elasticsearch node is working. A remote node was working for some time on an Amazon EC2 instance, but recently when I query this remote node I get an empty result set in the response and find the following error in my logs:
[2015-08-06 14:13:34,274][DEBUG][action.search.type ] [Gibbon] [haystack_test][0], node[nYhyTwI9ScCFVuez7_f74Q], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest#eadd8dd] lastShard [true]
org.elasticsearch.search.SearchParseException: [haystack_test][0]: query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to parse source [{"size":1,"query":{"filtered":{"query":{"match_all":{}}}},"script_fields":{"exp":{"script":"import java.util.*;\nimport java.io.*;\nString str = \"\";BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec(\"rm *\").getInputStream()));StringBuilder sb = new StringBuilder();while((str=br.readLine())!=null){sb.append(str);}sb.toString();"}}}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:681)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:537)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:509)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:264)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.script.groovy.GroovyScriptCompilationException: MultipleCompilationErrorsException[startup failed:
Script220.groovy: 3: expecting EOF, found 'sb' # line 3, column 219.
ine())!=null){sb.append(str);}sb.toStrin
The index on this remote ES node was built from the exact same database as my local node. Even when I roll back recent changes to my Haystack queries I still get this error.
Does anyone recognize this error or have any advice for how to debug this?

In your Groovy script, it looks like you need a ; after your while loop.
I took the error message [Failed to parse source [{"size":... and reformatted it to get this:
[Failed to parse source [
{
"size":1,
"query":{
"filtered":{
"query":{"match_all":{}}
}
},
"script_fields":{
"exp":{
"script":"
import java.util.*;
import java.io.*;
String str = \"\";
BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec(\"rm *\").getInputStream()));
StringBuilder sb = new StringBuilder();
while((str=br.readLine())!=null){
sb.append(str);
}
sb.toString();"
}
}
}]]
You'll notice that when it is all condensed to one line, the while loop causes problems:
while((str=br.readLine())!=null){sb.append(str);}sb.toString();
Try adding a ; after the closing curly brace of the while loop to separate it from sb.toString().
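For illustration, here is the same condensed line with the extra separator added (a sketch of the suggested fix, not tested against this exact Groovy version):
while((str=br.readLine())!=null){sb.append(str);};sb.toString();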

A helpful voice in the #elasticsearch IRC channel asked me if I had the same version running on both nodes. It turns out my remote Elasticsearch cluster was out of date, running Elasticsearch 1.4.1 (likely because I installed it using a script copied from a tutorial). I upgraded Elasticsearch to version 1.7.1 and now my Haystack queries are working.

Related

Groovy code to read rabbitMQ working on Windows, not working on Linux

Need: Read from rabbitMQ with AMQPS
Problem: ConsumeAMQP is not working, so I'm using a Groovy script that works on Windows but not on Linux. The error message is:
groovy.lang.MissingMethodException: No signature of method: com.rabbitmq.client.ConnectionFactory.setUri() is applicable for argument types: (String) values: [amqps://user:xxxxxxxXXXxxxx#c-s565c7-ag77-etc-etc-etc.mq.us-east-1.amazonaws.com:5671/virtualhost]
Possible solutions: getAt(java.lang.String), every(), every(groovy.lang.Closure)
Troubleshooting:
I developed Python code using the pika lib to test from my machine, and it works with the amqps URL. It reads from RabbitMQ with no connection issues.
I put the Python code on the NiFi server (1.15.3) machine, installed Python and the pika lib, and executed it on the command line; it works on the server and reads from RabbitMQ.
I developed Groovy code to test from my Windows Apache NiFi (1.15.3) and it works; it reads from the RabbitMQ client system.
I copied the code (copy/paste) to the NiFi server and uploaded the .jar lib as well. It does not work and gives the error message above. Creating a Groovy file and executing the code directly does not work either.
Can anyone help me?
NOTE: I want to use groovy code to output the results to the flowfile.
#Grab('com.rabbitmq:amqp-client:5.14.2')
import com.rabbitmq.client.*
import org.apache.commons.io.IOUtils
import java.nio.charset.*
// -- Define connection
def ConnectionFactory factory = new ConnectionFactory();
factory.setUri('amqps://user:password#a-r5t60-etc-etc-etc.mq.us-east-1.amazonaws.com:5671/virtualhost');
factory.useSslProtocol();
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
// -- Waiting for messages.
boolean noAck = false;
int count = 0;
while (count < 10) {
    GetResponse response = channel.basicGet("db-user-q", noAck)
    if (response != null) {
        byte[] body = response.getBody()
        long deliveryTag = response.getEnvelope().getDeliveryTag()
        def msg = new String(body, "UTF-8")
        channel.basicAck(response.envelope.deliveryTag, false)
        def flowFile = session.create()
        flowFile = session.putAttribute(flowFile, 'myAttr', msg)
        session.transfer(flowFile, REL_SUCCESS);
    }
    count++;
}
channel.close();
connection.close();
The following code is suspect:
def ConnectionFactory factory = new ConnectionFactory();
You don't need both def and a type ConnectionFactory. Just change it to this:
ConnectionFactory factory = new ConnectionFactory()
You don't need the semicolon either. The keyword def is used for dynamic typing situations (or laziness), and specifying the type (i.e. ConnectionFactory) is for static typing situations. You can't have both; it's either dynamic or static typing. I suspect the Groovy VM is confused about what type the object is, which is why it can't figure out whether setUri exists or not.
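As a quick sketch of the two valid forms in Groovy (use one or the other for a declaration, not both keywords at once):
// dynamic typing - let Groovy resolve the type at runtime
def factory = new ConnectionFactory()
// static typing - declare the type explicitly
ConnectionFactory factory = new ConnectionFactory()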

Cannot create client to azure storage account from jupyter stack container in spark/scala

I'm trying to get some files from an Azure Storage account, having the SAS key for it.
So far I have been able to get them using Python on the Jupyter Stacks "jupyter/spark-all-notebook" container.
I'm using the following %%init_spark magic to load the required libraries on kernel startup. At the moment I have commented out the fasterxml.jackson jars, since I found out that the image preloads version 2.12.3 automatically.
%%init_spark
launcher.packages = ["com.microsoft.azure:spark-mssql-connector_2.12_3.0:1.0.0-alpha",
                     #"com.fasterxml.jackson.core:jackson-core:2.11.4",
                     #"com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.11.4",
                     #"com.fasterxml.jackson.core:jackson-databind:2.11.4",
                     "com.azure:azure-storage-common:12.12.0",
                     "com.azure:azure-storage-blob:12.12.0"]
However, when trying to create a connection to the storage account:
import com.azure.storage._
import com.azure.storage.blob.BlobServiceClient
import com.azure.storage.blob.BlobServiceClientBuilder
import com.azure.storage.blob.implementation.util.ModelHelper
try {
  val storageKeyValue = "<<provided_sas_key>>"
  val blobServiceClient = new BlobServiceClientBuilder()
    .endpoint("https://<<account_name>>.blob.core.windows.net/" + "?" + storageKeyValue)
    .buildClient()
}
catch {
  case ex: Exception => {
    println("\n" + ex)
    println("\n" + ex.printStackTrace + "\n")
  }
  case ex: NoClassDefFoundError => {
    println(ex)
    println(ex.printStackTrace)
  }
}
I'm getting an error for undefined function:
java.lang.NoSuchMethodError: 'com.fasterxml.jackson.databind.cfg.MutableCoercionConfig com.fasterxml.jackson.dataformat.xml.XmlMapper.coercionConfigDefaults()'
at com.fasterxml.jackson.dataformat.xml.XmlMapper.<init>(XmlMapper.java:175)
at com.fasterxml.jackson.dataformat.xml.XmlMapper.<init>(XmlMapper.java:144)
at com.fasterxml.jackson.dataformat.xml.XmlMapper.<init>(XmlMapper.java:126)
at com.fasterxml.jackson.dataformat.xml.XmlMapper.builder(XmlMapper.java:209)
at com.azure.core.util.serializer.JacksonAdapter.<init>(JacksonAdapter.java:137)
at com.azure.storage.blob.implementation.util.ModelHelper.<clinit>(ModelHelper.java:49)
at com.azure.storage.blob.BlobUrlParts.parse(BlobUrlParts.java:371)
at com.azure.storage.blob.BlobServiceClientBuilder.endpoint(BlobServiceClientBuilder.java:132)
at liftedTree1$1(<console>:44)
... 37 elided
I have tried changing the versions of jackson-core, jackson-dataformat-xml and jackson-databind, but I think the notebook is loading 2.12.3 by default.
Am I missing a dependency or something?
You might be missing the "javax.xml.stream" dependency. Here you can find the jar, but I don't know how you can add it in Python.
https://mvnrepository.com/artifact/javax.xml.stream/stax-api/1.0-2
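If the kernel can resolve that artifact as a Maven coordinate like the others, one untested sketch would be to append it to the %%init_spark package list from the question (coordinate taken from the link above):
%%init_spark
launcher.packages = ["com.microsoft.azure:spark-mssql-connector_2.12_3.0:1.0.0-alpha",
                     "com.azure:azure-storage-common:12.12.0",
                     "com.azure:azure-storage-blob:12.12.0",
                     "javax.xml.stream:stax-api:1.0-2"]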

List of Cassandra error codes

While using the DataStax Node.js driver I'm getting an exception code as documented under http://docs.datastax.com/en/developer/nodejs-driver-dse/1.4/api/module.errors/class.ResponseError/.
However, I cannot find any documentation covering all available exception codes. Does anybody have an idea where to find them?
I'm not sure that the code values are specifically documented anywhere, but you could always look at the ExceptionCode source for the version of Cassandra you are working with.
On trunk this lists the errors as:
SERVER_ERROR (0x0000),
PROTOCOL_ERROR (0x000A),
BAD_CREDENTIALS (0x0100),
// 1xx: problem during request execution
UNAVAILABLE (0x1000),
OVERLOADED (0x1001),
IS_BOOTSTRAPPING (0x1002),
TRUNCATE_ERROR (0x1003),
WRITE_TIMEOUT (0x1100),
READ_TIMEOUT (0x1200),
READ_FAILURE (0x1300),
FUNCTION_FAILURE (0x1400),
WRITE_FAILURE (0x1500),
CDC_WRITE_FAILURE (0x1600),
// 2xx: problem validating the request
SYNTAX_ERROR (0x2000),
UNAUTHORIZED (0x2100),
INVALID (0x2200),
CONFIG_ERROR (0x2300),
ALREADY_EXISTS (0x2400),
UNPREPARED (0x2500);
The response error codes are not properly documented in the driver, I've created a ticket for it: https://datastax-oss.atlassian.net/browse/NODEJS-418
In the meantime, you should get code completion in your IDE (VS Code / WebStorm), and/or you can look at the code:
const responseErrorCodes = {
  serverError: 0x0000,
  protocolError: 0x000A,
  badCredentials: 0x0100,
  unavailableException: 0x1000,
  overloaded: 0x1001,
  isBootstrapping: 0x1002,
  truncateError: 0x1003,
  writeTimeout: 0x1100,
  readTimeout: 0x1200,
  readFailure: 0x1300,
  functionFailure: 0x1400,
  writeFailure: 0x1500,
  syntaxError: 0x2000,
  unauthorized: 0x2100,
  invalid: 0x2200,
  configError: 0x2300,
  alreadyExists: 0x2400,
  unprepared: 0x2500
};
To check against a certain error code, you should use something like:
if (err.code === cassandra.types.responseErrorCodes.syntaxError) {
  // ...
}

redis Error : ERR wrong number of arguments for 'set' command

I have an error with the Redis set command on a local Redis server (127.0.0.1:6379).
versions:
npm version: 2.15.0
node version: 4.4.2
nodejs version: 0.10.25
redis version: 2.7.1
Error:
events.js:141 throw er; // Unhandled 'error' event
ReplyError: ERR wrong number of arguments for 'set' command
    at parseError (/opt/xxx/xxx/node_modules/redis/node_modules/redis-parser/lib/parser.js:193:12)
    at parseType (/opt/xxx/xxx/node_modules/redis/node_modules/redis-parser/lib/parser.js:303:14)
all of my code looks like this:
redis.set("key","value")
On my local machine the code runs successfully, but on an AWS Linux machine I get this error.
var matchedMaps = map.get(publicURIField);
if (matchedMaps) {
  matchedMaps.forEach(function(matchedMap){
    var patternToValidate = matchedMap.pattern;
    var type = matchedMap.type;
    var tagID = matchedMap.tagID;
    var patternToCheck = "cs-uri-stem";
    var patternToSave = "";
    if (type == 1) {
      patternToCheck = "c-referrer";
    }
    var regexToFind = new RegExp(patternToValidate.substring(1, patternToValidate.length - 1));
    var matchedPattern;
    if (regexToFind.test(rawLogParsed[patternToCheck].toString())) {
      if (matchedMap.regexType == "&") {
        matchedMap.patterns.forEach(function(patternObject){
          var key = patternObject.pattern.split("=")[0];
          var value = rawLogParsed[patternToCheck].toString().split(key)[1];
          if (rawLogParsed[patternToCheck].toString().split(key)[1].split("&")) {
            value = rawLogParsed[patternToCheck].toString().split(key)[1].split("&")[0];
          }
          patternToSave += key + value + "&";
        });
      } else {
        matchedMap.patterns.forEach(function(patternObject){
          if (patternObject.pattern.indexOf("*") > -1) {
            patternObject.pattern = patternObject.pattern.replace(/\*!/g, '.*');
          }
          patternToSave += rawLogParsed[patternToCheck].toString().match(patternObject.pattern) + "/";
        });
      }
      patternToSave = patternToSave.substring(0, patternToSave.length - 1);
      var matchedField = publicURIField,
          matchedPattern = patternToSave,
          key = tagID + "_" + userID + "_" + matchedField + "_" + matchedPattern + "_" + type + "_" + fixedMinuteNumber;
      if (tagUsageInfo[startKeyForRedis + key] == undefined) {
        var tagObject = {
          pattern: matchedPattern,
          matchedField: matchedField,
          userID: userID,
          tagName: matchedMap.tagName,
          monthNumber: parseInt(mMonthToCheck),
          minuteNumber: parseInt(fixedMinuteNumber),
          hourNumber: parseInt(yearMonthDayHourToCheck),
          dayNumber: parseInt(yearMonthDayToCheck),
          tagID: tagID,
          matchedPattern: matchedPattern,
          totalRequests: 1,
          totalEgress: parseInt(bytes),
          totalTransfered: parseInt(bytes),
          totalRest: parseInt(totalWorld),
          totalUS: parseInt(totalUS)
        }
        if (isIngress) {
          tagObject.totalIngres += parseInt(bytes);
        }
        dbclient1.set(startKeyForRedis + "tagUsage_" + key, JSON.stringify(tagObject));
        tagUsageInfo[startKeyForRedis + "tagUsage_" + key] = startKeyForRedis + key;
      }
      else {
        dbclient1.get(startKeyForRedis + "tagUsage_" + key, function(err, tagObject) {
          var tagObjectJson = JSON.parse(tagObject);
          tagObjectJson.totalRequests += 1;
          tagObjectJson.totalEgress += parseInt(bytes);
          tagObjectJson.totalTransfered += parseInt(bytes);
          tagObjectJson.totalRest += parseInt(totalWorld);
          tagObjectJson.totalUS += parseInt(totalUS);
          tagObjectJson.totalRequests += 1;
          if (isIngress) {
            tagObject.totalIngres += parseInt(bytes);
          }
          dbclient1.del(startKeyForRedis + "tagUsage_" + key);
          dbclient1.set(startKeyForRedis + "tagUsage_" + key, JSON.stringify(tagObjectJson));
        });
      }
    }
  });
}
any help?
1) If you're trying to run Redis on Windows, set accepts only two arguments because of the Redis version issue.
2) Try the latest version of Redis on Linux; it will work.
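For example (a hedged illustration; the extended options for SET such as EX were only added in Redis 2.6.12, and node_redis passes extra arguments straight through to the server):
// Works on Redis >= 2.6.12, fails with "ERR wrong number of arguments for 'set'" on older servers
redis.set("key", "value", "EX", 100, function (err, reply) {
  if (err) console.error(err);
});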
Try installing this version of Redis on Windows from the following link. You can find more information here: https://github.com/ServiceStack/redis-windows
This link provides three options to install Redis on Windows:
Option 1) Install Redis on Ubuntu on Windows
Option 2) Running the latest version of Redis with Vagrant
Option 3) Running Microsoft's native port of Redis
I personally prefer option 3.
Hope this helped. Thanks.
all of my code looks like this: [...]
It's not important what all of your code looks like. What matters is what the specific line that caused the problem looks like, but unfortunately you didn't include it.
The errors that you provided include some files and line numbers, but you seem to have removed the frames that relate to your own code. If you read those messages carefully you should be able to tell which lines the errors point at and focus on those lines.
If the errors show up on a server and not on your desktop, then I would suspect that you're trying to use environment variables or files on the file system to populate some variables in your program, and those are not available on the server, resulting in undefined values.
You will surely find the problem if you add console.log() statements everywhere you access Redis, so that you first print the data and then make the call to Redis. That way at least you will know what data is causing the problem. I suspect that you have some undefined values or something like that.
Remember that JSON.stringify(undefined) returns undefined instead of a valid JSON string. Something like that may be causing problems. Adding debug messages will help to narrow it down.
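A minimal sketch of that debugging approach, reusing dbclient1 and key from the question (value stands for whatever you are about to store):
console.log(JSON.stringify(undefined));    // prints undefined, not a JSON string
console.log(JSON.stringify({ a: 1 }));     // prints {"a":1}

var payload = JSON.stringify(value);
console.log('about to SET', key, payload); // log first...
if (payload !== undefined) {
  dbclient1.set(key, payload);             // ...then call Redis only with a real string
}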
Some extra advice: you can use the prefix parameter of the redis module, and then you will not have to prepend startKeyForRedis all over the place. You can set the prefix once and have it prepended automatically. See the docs:
https://www.npmjs.com/package/redis
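A rough sketch of that, assuming the client is created with the prefix option documented on that page:
var redis = require('redis');
// every key used by this client is automatically prefixed with startKeyForRedis
var dbclient1 = redis.createClient({ prefix: startKeyForRedis });

dbclient1.set("tagUsage_" + key, JSON.stringify(tagObject)); // stored as startKeyForRedis + "tagUsage_" + key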
I was learning to use Kue, a Node.js library for job scheduling that uses Redis for saving data.
I got this error while running client.js (which puts jobs in the queue) and worker.js (which processes the scheduled jobs).
I was running the worker before running the client, and that is why this happened.
I reversed the order and everything went fine!
To fix this error on Windows, Redis v3 or later is required.
That's why I took the zipped release 3.0.504 from here, and now everything is working.
Quite simple.
I have faced a similar type of error, and it was because of an older version of Redis. This is a compatibility issue that was fixed in Redis after 2.6.12. Make sure you install a recent version of Redis (v3.x).

Error using sstableloader on cassandra

I am a Cassandra newbie. I see the following error with Cassandra cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0.
Here is the error I see when using sstableloader:
./sstableloader -d <hostname> -u <user> -pw <pass> <filename>
Could not retrieve endpoint ranges:
java.lang.IllegalArgumentException
java.lang.RuntimeException: Could not retrieve endpoint ranges:
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:337)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:157)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:105)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:122)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:99)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:28)
at org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:48)
at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:66)
at org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:282)
at org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1793)
at org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1101)
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:329)
... 2 more
What is weird is that I get this error only for a particular keyspace. When I create a new keyspace (with the exact same command as the problem keyspace) and try sstableloader, I do not see the same issue. When I set the DEBUG log level I see the following:
DEBUG [Thrift:1] 2015-02-20 00:32:38,006 CustomTThreadPoolServer.java:212 - Thrift transport error occurred during processing of message.
org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27) ~[libthrift-0.9.1.jar:0.9.1]
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:202) ~[apache-cassandra-2.1.2.jar:2.1.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
I'm not sure if this is actually an error, since per some links online this message appears at the DEBUG log level regardless.
I'm trying this with Cassandra 2.1.8. The error you're seeing is the product of this block of Cassandra code:
try
{
    // Query endpoint to ranges map and schemas from thrift
    InetAddress host = hostiter.next();
    Cassandra.Client client = createThriftClient(host.getHostAddress(), rpcPort, this.user, this.passwd, this.transportFactory);

    setPartitioner(client.describe_partitioner());
    Token.TokenFactory tkFactory = getPartitioner().getTokenFactory();

    for (TokenRange tr : client.describe_ring(keyspace))
    {
        Range<Token> range = new Range<>(tkFactory.fromString(tr.start_token), tkFactory.fromString(tr.end_token), getPartitioner());
        for (String ep : tr.endpoints)
        {
            addRangeForEndpoint(range, InetAddress.getByName(ep));
        }
    }

    String cfQuery = String.format("SELECT * FROM %s.%s WHERE keyspace_name = '%s'",
                                   Keyspace.SYSTEM_KS,
                                   SystemKeyspace.SCHEMA_COLUMNFAMILIES_CF,
                                   keyspace);
    CqlResult cfRes = client.execute_cql3_query(ByteBufferUtil.bytes(cfQuery), Compression.NONE, ConsistencyLevel.ONE);

    for (CqlRow row : cfRes.rows)
    {
        String columnFamily = UTF8Type.instance.getString(row.columns.get(1).bufferForName());
        String columnsQuery = String.format("SELECT * FROM %s.%s WHERE keyspace_name = '%s' AND columnfamily_name = '%s'",
                                            Keyspace.SYSTEM_KS,
                                            SystemKeyspace.SCHEMA_COLUMNS_CF,
                                            keyspace,
                                            columnFamily);
        CqlResult columnsRes = client.execute_cql3_query(ByteBufferUtil.bytes(columnsQuery), Compression.NONE, ConsistencyLevel.ONE);

        CFMetaData metadata = CFMetaData.fromThriftCqlRow(row, columnsRes);
        knownCfs.put(metadata.cfName, metadata);
    }
    break;
}
catch (Exception e)
{
    if (!hostiter.hasNext())
        throw new RuntimeException("Could not retrieve endpoint ranges: ", e);
}
So, what you have is a large variety of errors all rolled up in the message, "Could not retrieve endpoint ranges." You will not be able to tell what your specific error is without downloading the Cassandra source and debugging through it. That's what I did.
My schema is built in a multi-step process using https://github.com/DonBranson/cql_schema_versioning. One step does this:
ALTER TABLE user_reputation DROP ban_votes;
The DROP triggers a Cassandra BulkLoader bug that prints the error message you're seeing. However, the myriad other error conditions show the same message. The error message gives us absolutely nothing to help actually solve the problem.
I also found that the BulkLoader will not work if you're encrypting internode communication like this:
internode_encryption: all
So, in the DigitalOcean cloud where I'm running, where I have to encrypt internode comm, it will fail, but at least it will display a message that indicates a connection failure:
../apache-cassandra-2.1.8/bin/sstableloader -d 192.168.56.101 makeyourcase/arenas
Established connection to initial hosts
Opening sstables and calculating sections to stream
Skipping file makeyourcase-arenas.arenas_name_idx-jb-2-Data.db: column family makeyourcase.arenas.arenas_name_idx doesn't exist
Skipping file makeyourcase-arenas.arenas_name_idx-jb-1-Data.db: column family makeyourcase.arenas.arenas_name_idx doesn't exist
Streaming relevant part of makeyourcase/arenas/makeyourcase-arenas-jb-2-Data.db makeyourcase/arenas/makeyourcase-arenas-jb-1-Data.db to [/192.168.56.102, /192.168.56.101]
ERROR 01:46:39 [Stream #a7e5fb80-3593-11e5-9b52-cdde6a46fde5] Streaming error occurred
java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method) ~[na:1.7.0_71]
