Cassandra Pig example failing with wide row input enabled

Using Cassandra 1.1.6, Pig 0.10.0 and Hadoop 1.1.0, I can successfully run the pig_cassandra example script provided with Cassandra in examples/pig.
But when I change
rows = LOAD 'cassandra://PigTest/SomeApp' USING CassandraStorage();
to:
rows = LOAD 'cassandra://PigTest/SomeApp?widerows=true' USING CassandraStorage();
I get the following error:
java.lang.IndexOutOfBoundsException: Index: 8, Size: 2
at java.util.ArrayList.rangeCheck(ArrayList.java:604)
at java.util.ArrayList.get(ArrayList.java:382)
at org.apache.pig.data.DefaultTuple.get(DefaultTuple.java:156)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POProject.processInputBag(POProject.java:579)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POProject.getNext(POProject.java:248)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:316)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:332)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNext(POForEach.java:284)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:290)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNext(POForEach.java:233)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:290)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POPreCombinerLocalRearrange.getNext(POPreCombinerLocalRearrange.java:126)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:290)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNext(POForEach.java:233)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:290)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNext(POLocalRearrange.java:256)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:271)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:266)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
This happens when running in both local and mapreduce mode, and also if I set PIG_WIDEROW_INPUT=true.
The following Pig Latin script will fail with the "widerows=true" parameter present.
rows = LOAD 'cassandra://PigTest/SomeApp?widerows=true' USING CassandraStorage();
cols = FOREACH rows GENERATE flatten(columns.name);
DUMP cols;
I can't seem to get beyond this, nor read the static columns in the SomeApp column family when using wide row input. The same issue is present with other column families.

I had a similar issue. It may be because of bugs in get_paged_slice which were fixed in later 1.1.x releases. The solution would be to upgrade Cassandra to 1.1.8 or 1.1.9.
See:
CASSANDRA-4919: StorageProxy.getRangeSlice sometimes returns incorrect number of columns
CASSANDRA-4816: Broken get_paged_slice
CASSANDRA-5098: CassandraStorage doesn't decode name in widerow mode

Related

HBase | HBase column qualifier hidden using HBase shell commands but visible via hbaseRdd Spark code

I am stuck in a very odd situation related to HBase design, I would say.
HBase version >> Version 2.1.0-cdh6.2.1
So, the problem statement is: in HBase, we have a row in our table.
We perform a new insert and then subsequent updates of the same HBase row, as we receive the data from downstream.
Say we received data like below:
INSERT of {a=1,b=1,c=1,d=1,rowkey='row1'}
UPDATE of {b=1,c=1,d=1,rowkey='row1'}
and
say the final row is like this in our HBase table:
hbase(main):008:0> get 'test', 'row1'
COLUMN CELL
cf:b timestamp=1288380727188, value=value1
cf:c timestamp=1288380727188, value=value1
cf:d timestamp=1288380727188, value=value1
1 row(s) in 0.0400 seconds
So the cf:a column qualifier is missing in the data above when fetched via scan or get commands, but as per our ingestion flow/process, it should have been there. So we are triaging as to where it went or what happened. The analysis is still in progress and we are kind of clueless as to where it is.
Now, to cut a long story short, we have a Spark util to read the HBase table into an RDD via the
hbasecontext.hbaseRdd API function, convert it into a dataframe and display the tabular data. So, we ran this Spark util on the same table to help locate this row and, very surprisingly, it returned 2 rows for this same rowkey 'row1', where the 1st row was the same as the above get/scan row (above data) and the 2nd row had our missing column cf:a (surprisingly it had the expected value). Say the output dataframe appeared something like below:
rowkey |cf:a |cf:b|cf:c|cf:d
row1 |null | 1 | 1 | 1 >> cf:a col qualifier missing (same as in Hbase shell)
row1 | 1 | 1 | 1 | 1 >> This cf:a was expected
We checked our HBase table schema as well: we don't have multiple versions of cf:a in the describe output, and we don't do versioning on the table. The describe output of the HBase table has
VERSIONS => '1'
Anyway, I am clueless as to how hbaseRdd is able to read that row and the missing column qualifier, while the HBase shell get and scan commands do not show the missing column qualifier or row.
Any HBase experts or suggestions, please.
FYI, I tried HBase shell commands as well, via get with versions on the row, but it only returns the above get data and not the missing cf:a.
Is the column qualifier cf:a marked for deletion or something like that, which the HBase shell commands don't show?
Any help would be appreciated.
Thanks !!
This is a strange problem, which I suspect has to do with puts for the same rowkey having different column qualifiers at different times. However, I just tried to recreate this behaviour and I don't seem to be getting this problem, but I have a regular HBase 2.x build, as opposed to yours.
One option I would recommend for exploring the problem more closely is to inspect the HFiles physically, outside of the HBase shell. You can use the HBase HFile utility to print the physical key-value content at the HFile level. Obviously, try to do this on a small HFile! Don't forget to flush and major-compact your table before you do it, though, because HBase keeps recent updates in memory until they are flushed.
You can launch the utility as below, and it will print all key-values sequentially:
hbase hfile -f hdfs://HDFS-NAMENODE:9000/hbase/data/default/test/29cfaecf083bff2f8aa2289c6a078678/f/09f569670678405a9262c8dfa7af8924 -p --printkv
In the above command, HDFS-NAMENODE is your HDFS server, default is your namespace (assuming you have none), test is your table name, and f is the column family name. You can find out the exact path to your HFiles by using the HDFS browse command recursively:
hdfs dfs -ls /hbase/data
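If you would rather trigger the flush and major compaction programmatically instead of from the HBase shell, here is a minimal sketch using the HBase 2.x Java Admin API (the table name test is taken from the question; the connection details are assumed to come from an hbase-site.xml on the classpath):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushAndCompact {
    public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath for the cluster connection details
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            TableName table = TableName.valueOf("test"); // table name from the question
            admin.flush(table);        // write the memstore contents out to HFiles
            admin.majorCompact(table); // request a major compaction (runs asynchronously)
        }
    }
}
Note that the major compaction is asynchronous, so give it a moment to finish before inspecting the resulting HFile.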
[Updated] We worked with Cloudera and found the issue was due to the HBase regions overlapping. Cloudera fixed it for us; I don't have the full details of how they did it.

Codec not found for requested operation: ['org.apache.cassandra.db.marshal.SimpleDateType' <-> com.datastax.driver.core.LocalDate]

I am using Java 8 and Cassandra in my application.
The data type of current_date in the Cassandra table is 'date'.
I am using entities to map to the table values, and the data type in the entity for the same field is com.datastax.driver.core.LocalDate.
When I am trying to retrieve a record with
Select * from table where current_date='2017-06-06';
I am getting the following error:
Caused by: com.datastax.driver.core.exceptions.CodecNotFoundException: Codec
not found for requested operation:
['org.apache.cassandra.db.marshal.SimpleDateType' <->
com.datastax.driver.core.LocalDate]
I faced a similar error message while querying Cassandra from Presto.
I needed to set cassandra.protocol-version=V4 in cassandra.properties in Presto to resolve the problem in my case.
If you get this problem while using a Java SDK application, check whether changing the protocol version resolves the problem. In some cases, you have to write your own codec implementation.
By default, the Java driver maps the date type to the com.datastax.driver.core.LocalDate Java type.
If you need to convert date to java.time.LocalDate, then you need to add the driver's extras module to your project.
You can specify the codec for a given column only:
@Column(codec = LocalDateCodec.class)
java.time.LocalDate current_date;
If these two did not work, please have a look at how you are creating the session, cluster, etc. to connect to the database. Since date is a newer addition to the Cassandra data types, the protocol version can also have an impact.
Update the version accordingly.
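As a rough sketch of the driver-side setup (assuming the Java driver 3.x with its separate extras module, which provides LocalDateCodec for java.time.LocalDate, and a local contact point), registering the codec and pinning the protocol version could look like this:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.CodecRegistry;
import com.datastax.driver.core.ProtocolVersion;
import com.datastax.driver.core.Session;
import com.datastax.driver.extras.codecs.jdk8.LocalDateCodec;

public class ClusterSetup {
    public static void main(String[] args) {
        // Codec from the extras module that maps the CQL 'date' type to java.time.LocalDate
        CodecRegistry registry = new CodecRegistry().register(LocalDateCodec.instance);

        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")            // assumption: a local node
                .withProtocolVersion(ProtocolVersion.V4) // the 'date' type needs protocol v4 or later
                .withCodecRegistry(registry)
                .build();
             Session session = cluster.connect()) {
            // Entities can now use java.time.LocalDate for 'date' columns,
            // e.g. fields annotated with @Column(codec = LocalDateCodec.class)
        }
    }
}
Whether the protocol version alone fixes it depends on the driver and server versions, as noted above.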

ERROR when loading files into HBase on Azure with ImportTsv

I am trying to load a TSV file into HBase running in HDInsight in the Microsoft Azure cloud, using the recommended approach of connecting through Remote Desktop and running on the command line. I am trying to load the t1.tsv file (with two tab-separated columns) from HDFS into the HBase table t1:
C:\apps\dist\hbase-0.98.0.2.1.5.0-2057-hadoop2\bin>hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,num t1 t1.tsv
and get:
ERROR: One or more columns in addition to the row key and timestamp(optional) are required
Usage: importtsv -Dimporttsv.columns=a,b,c
Reversing the order of the specified columns to num,HBASE_ROW_KEY:
C:\apps\dist\hbase-0.98.0.2.1.5.0-2057-hadoop2\bin>hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=num,HBASE_ROW_KEY t1 t1.tsv
I get:
ERROR: Must specify exactly one column as HBASE_ROW_KEY
Usage: importtsv -Dimporttsv.columns=a,b,c
This tells me that the comma separator in the column list is not recognized or the column name is incorrect. I also tried to specify the column with a qualifier, as num:v, and as 'num' - nothing helps.
Any ideas what could be wrong here? Thanks.
>hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns="HBASE_ROW_KEY,d:c1,d:c2" testtable /example/inputfile.txt
This works for me. I think there are some differences between the terminals on Linux and Windows, so on Windows you need to add quotation marks to clarify that this is a value string; otherwise it might not be recognized.

Cannot modify CompositeType comparator on Cassandra column family

Using cassandra-cli, an attempt to modify a CompositeType comparator (e.g., to add or remove a field) fails with an error:
[default@KS] describe CF;
ColumnFamily: CF
Cells sorted by:
org.apache.cassandra.db.marshal.CompositeType(
org.apache.cassandra.db.marshal.LongType,
org.apache.cassandra.db.marshal.LongType,
org.apache.cassandra.db.marshal.UTF8Type)
[default@KS] truncate CF;
CF truncated.
[default@KS] update column family CF with comparator =
'CompositeType(
org.apache.cassandra.db.marshal.LongType,
org.apache.cassandra.db.marshal.UTF8Type)';
comparators do not match or are not compatible.
An attempt to work around this by dropping and recreating the column family worked fine until a restart, which then failed, apparently because of this issue:
cassandra Exception encountered during startup: index (1) must be less than size (1)
How should this case be handled properly? I suppose doing a nodetool flush after the drop would prevent issues with the commit log having incompatible data? Is there a way to modify the CompositeType comparator without doing a drop/create?
There is no way to modify the CompositeType comparator without doing a drop/create. After the drop, you need to do a nodetool clearsnapshot. Check this link:
http://wiki.apache.org/cassandra/CassandraCli08

Cassandra display of column value

I have upgraded Cassandra from v0.6 to v0.7.2 following the instructions in NEWS.txt. It seemed to be successful, except that the column value has changed.
For example, in 0.6, there was a column that looked like this:
(column=Price, value='2.5')
Now, in 0.7.2, the same column has changed to this:
(column=Price, value=32392e3939)
How can I fix this problem?
The CLI no longer makes assumptions about the type of data you're viewing, so all outputs are in hex unless the data type is known or you tell the CLI to assume a data type.
See this section of the documentation on human readable data in the CLI for more details.
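As a quick sanity check, you can decode one of those hex values back into text yourself; a small sketch, assuming the stored bytes are a UTF-8 string, could be:
import java.nio.charset.StandardCharsets;
import javax.xml.bind.DatatypeConverter;

public class DecodeHex {
    public static void main(String[] args) {
        // Hex string as printed by the 0.7 CLI
        String hex = "32392e3939";
        byte[] raw = DatatypeConverter.parseHexBinary(hex);
        // Interpret the raw bytes as UTF-8 text
        System.out.println(new String(raw, StandardCharsets.UTF_8));
    }
}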
