I am using NiFi 1.1.1 and am trying to put data using a PutCassandraQL processor, but am getting

com.datastax.driver.core.exceptions.TransportException: [null] Cannot connect

while trying this option: node-0:servername.com:PORT, node-2:servername.com:PORT, node-3:servername.com:PORT
Edit:
node-0.servername.com:9042,node-2.servername.com:9042,node-3.servername.com:9042
as given in the documentation. Can someone tell me the cause of this error, with an example of the proper way to specify the Cassandra Contact Points in NiFi?
The Cassandra Contact Points property expects a hostname or IP followed by a colon and then the port number where the Cassandra node is listening. So if you have 3 nodes at:
node-0.servername.com:9042
node-2.servername.com:9042
node-3.servername.com:9042
Your Contact Points setting would be:
node-0.servername.com:9042,node-2.servername.com:9042,node-3.servername.com:9042
How to use bind variables in a select statement.
When I use the value directly, it retrieves the results as below.
select event_hour
from stage_insight.insight_hourly_ts
where tag_id='UP247490.UPSYSCPWLV001A'
LIMIT 1;
How do I use it dynamically?
select event_hour
from stage_insight.insight_hourly_ts
where tag_id = ? ;
For the second query, an error like "wrong amount of bind variables" is displayed.
I am working with DataStax DevCenter, so here I am trying to fetch the values directly from the Cassandra DB.
ResponseError: Invalid amount of bind variables
    at FrameReader.readError (D:\EACApp\eac-app-management\node_modules\cassandra-driver\lib\readers.js:326:15)
    at Parser.parseBody (D:\EACApp\eac-app-management\node_modules\cassandra-driver\lib\streams.js:194:66)
    at Parser._transform (D:\EACApp\eac-app-management\node_modules\cassandra-driver\lib\streams.js:137:10)
    at Parser.Transform._read (_stream_transform.js:205:10)
    at Parser.Transform._write (_stream_transform.js:193:12)
    at writeOrBuffer (_stream_writable.js:352:12)
    at Parser.Writable.write (_stream_writable.js:303:10)
    at Protocol.ondata (_stream_readable.js:719:22)
    at Protocol.emit (events.js:315:20)
cqlsh> show version
[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
It isn't possible to use bind variables in DevCenter, since they are only available when using prepared statements programmatically.
If you are using bind variables in your Node.js app, my best guess is that you are passing the query parameters incorrectly, although it's hard to say since you haven't provided enough information about your issue. In fact, the information you provided in your original question does not match what you've stated in the comments section.
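As an illustration of what the driver checks, here is a minimal sketch. The query and tag value are taken from the question above; the commented-out client setup (contact point, keyspace) is an assumption, since the original connection code was not posted:

```javascript
// The "Invalid amount of bind variables" error means the number of values in
// the params array does not match the number of ? placeholders in the query.
const query = 'SELECT event_hour FROM stage_insight.insight_hourly_ts WHERE tag_id = ?';
const params = ['UP247490.UPSYSCPWLV001A']; // exactly one value per ? placeholder

const placeholderCount = (query.match(/\?/g) || []).length;
console.log(placeholderCount === params.length); // true: counts match

// With a connected client you would then run the parameterized query:
// const cassandra = require('cassandra-driver');
// const client = new cassandra.Client({ contactPoints: ['127.0.0.1'], keyspace: 'stage_insight' });
// client.execute(query, params, { prepare: true })
//   .then(result => console.log(result.rows));
```

Passing a params array whose length differs from the number of ? markers is exactly what triggers the ResponseError shown in the stack trace above.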
Since you're new to Stack Overflow, a friendly suggestion that you learn how to ask good questions. The general guidance is that you (a) provide a good summary of the problem that includes software/component versions, the full error message + full stack trace; (b) describe what you've tried to fix the problem, details of investigation you've done; and (c) minimal sample code that replicates the problem.
In your case, you need to provide:
the CQL table schema
the driver you're using + version
minimal sample code
If you don't provide sufficient information in your questions, you are less likely to get help from forums, or may not get the right answers. Cheers!
I am using Java 8 and Cassandra in my application.
The data type of current_date in the Cassandra table is date.
I am using entities to map to the table values, and the data type in the entity for the same field is com.datastax.driver.core.LocalDate.
When I try to retrieve a record with
select * from table where current_date='2017-06-06';
I get the following error:
Caused by: com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: ['org.apache.cassandra.db.marshal.SimpleDateType' <-> com.datastax.driver.core.LocalDate]
I faced a similar error message while querying Cassandra from Presto.
In my case, I needed to set cassandra.protocol-version=V4 in Presto's cassandra.properties to resolve the problem.
If you get this problem in a Java SDK application, check whether changing the protocol version resolves it. In some cases, you have to write your own codec implementation.
By default, the Java driver maps the date type to the com.datastax.driver.core.LocalDate Java type.
If you need to convert date to java.time.LocalDate, then you need to add the driver's extras module to your project:
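The extras module referred to here is, as far as I know, published as cassandra-driver-extras; a typical Maven dependency would look like the following (the version shown is an assumption, so match it to your core driver version):

```xml
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-extras</artifactId>
  <!-- assumption: pick the version matching your cassandra-driver-core -->
  <version>3.3.0</version>
</dependency>
```

With that on the classpath, the JDK 8 codec can be registered globally, e.g. cluster.getConfiguration().getCodecRegistry().register(LocalDateCodec.instance); verify the class name against your driver version.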
You can also specify the codec for a given column only:
@Column(codec = LocalDateCodec.class)
java.time.LocalDate current_date;
If these two did not work, have a look at how you are creating the Session and Cluster used to connect to the database. Since date is a recent addition to Cassandra's data types, the protocol version can also have an impact, so update the version accordingly.
I have searched high and low for an answer to this, but I have been stuck for two days. I am attempting to read data into Bro IDS from a file using:
Input::add_table([$source=sinkhole_list_location,
$name="sinkhole", $idx=Idx, $val=Val, $destination=sinkhole_list2, $mode=Input::REREAD]);
The file is formatted as stated in the Bro documentation:
fields ip ipname
10.10.20.20 hi
8.8.8.8 hey
192.168.1.1 yo
Yet whenever I run this, or any of the other scripts out there, on my Bro IDS, I always get "headers are incorrect". What format should the file be in?
error: sinkhole_ip.dat/Input::READER_ASCII: Did not find requested field ip in input data file sinkhole_ip.dat.
1481713377.164791 error: sinkhole_ip.dat/Input::READER_ASCII: Init: cannot open sinkhole_ip.dat; headers are incorrect
I can answer my own question here: it's the use of tab-separated files, which Bro uses by default. Every single field must be separated by a tab.
Then you can output the table contents as a test within the Input::end_of_data event, since once this event has been received, all data from the input file is available in the table.
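For reference, a version of the file that should load cleanly looks like this, where each \t below stands for a single literal tab character (spaces will not work), and the leading #fields line is the header the ASCII reader expects per the Bro input framework documentation:

```
#fields\tip\tipname
10.10.20.20\thi
8.8.8.8\they
192.168.1.1\tyo
```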
I'm new to Graylog2. I'm using it to analyze logs stored in Elasticsearch.
I have done the setup successfully using this link: http://www.richardyau.com/?p=377
However, I indexed the logs into Elasticsearch under the index name "xg-*". I am not sure why the same has not been replicated in Graylog2.
When I check the indices status in the Graylog2 web interface, it shows only the "graylog2_0" index, not my index.
Can someone please help me understand the reason behind this?
Elasticsearch indices details:
[root@xg bin]# curl http://localhost:9200/_cat/indices?pretty
green open graylog2_0 4 0 0 0 576b 576b
yellow open xg-2015.12.12 5 1 56 0 335.4kb 335.4kb
[root@xg bin]#
Graylog2 Web indices details:
Graylog doesn't support indexing schemes other than its own. If you want to use Graylog to analyze your data, you also have to ingest it through Graylog.
This is the first time I have used Cassandra, so please excuse me if my question is naive :)
I downloaded and extracted Cassandra 1.2.4
and ran it using /usr/local/apache-cassandra-1.2.4/bin/cassandra -f
Now I connect to it:
root@Alaa:/usr/local/apache-cassandra-1.2.4# ./bin/cassandra-cli
Connected to: "Test Cluster" on 127.0.0.1/9160
Welcome to Cassandra CLI version 1.2.4
Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.
[default@unknown] show cluster name
...
and those three dots remain there forever! Any idea what is wrong?
You need to terminate the command with a ;, otherwise the shell has no way of telling that you're "done" entering a query/command:
show cluster name;
^---
That's why the help;, quit;, and exit; examples printed as part of the startup message all include a ;.