I want to view the "rowkey" with its stored data in cassandra 3.0. I know, the depreciated cassandra-cli had the 'list'-command. However, in cassandra 3.0, I cannot find the replacement for the 'list'-command. Anyone knows the new cli-command for 'list'?
You can use the sstabledump utility, as Chris Lohfink suggested. How to use it? Create a keyspace and a table in it, then populate some data:
cqlsh> CREATE KEYSPACE IF NOT EXISTS minetest WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
cqlsh> CREATE TABLE object_coordinates (
... object_id int PRIMARY KEY,
... coordinate text
... );
cqlsh> use minetest;
cqlsh:minetest> insert into object_coordinates (object_id, coordinate) values (564682,'59.8505,34.0035');
cqlsh:minetest> insert into object_coordinates (object_id, coordinate) values (1235,'61.7814,40.3316');
cqlsh:minetest> select object_id, coordinate, writetime(coordinate) from object_coordinates;
 object_id | coordinate      | writetime(coordinate)
-----------+-----------------+-----------------------
      1235 | 61.7814,40.3316 |      1480436931275615
    564682 | 59.8505,34.0035 |      1480436927707627
(2 rows)
object_id is the primary (partition) key; coordinate is a regular column.
Flush changes to disk:
# nodetool flush
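nodetool flush without arguments flushes all keyspaces; naming the keyspace and table flushes just this one table (same effect here):
# nodetool flush minetest object_coordinates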
Find the SSTable on disk and analyze it:
# cd /var/lib/cassandra/data/minetest/object_coordinates-e19d4c40b65011e68563f1a7ec2d3d77
# ls
backups mc-1-big-CompressionInfo.db mc-1-big-Data.db mc-1-big-Digest.crc32 mc-1-big-Filter.db mc-1-big-Index.db mc-1-big-Statistics.db mc-1-big-Summary.db mc-1-big-TOC.txt
# sstabledump mc-1-big-Data.db
[
{
"partition" : {
"key" : [ "1235" ],
"position" : 0
},
"rows" : [
{
"type" : "row",
"position" : 18,
"liveness_info" : { "tstamp" : "2016-11-29T16:28:51.275615Z" },
"cells" : [
{ "name" : "coordinate", "value" : "61.7814,40.3316" }
]
}
]
},
{
"partition" : {
"key" : [ "564682" ],
"position" : 43
},
"rows" : [
{
"type" : "row",
"position" : 61,
"liveness_info" : { "tstamp" : "2016-11-29T16:28:47.707627Z" },
"cells" : [
{ "name" : "coordinate", "value" : "59.8505,34.0035" }
]
}
]
}
]
Or with the -d flag:
# sstabledump mc-1-big-Data.db -d
[1235]#0 Row[info=[ts=1480436931275615] ]: | [coordinate=61.7814,40.3316 ts=1480436931275615]
[564682]#43 Row[info=[ts=1480436927707627] ]: | [coordinate=59.8505,34.0035 ts=1480436927707627]
The output shows the two partitions, 1235 and 564682, and the coordinates saved in them.
Link to the docs: http://www.datastax.com/dev/blog/debugging-sstables-in-3-0-with-sstabledump
PS: sstabledump is provided by the cassandra-tools package on Ubuntu.
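On Ubuntu that means (assuming the Apache Cassandra apt repository is already configured):
# apt-get install cassandra-tools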
Related
I tried creating a Hive table on Avro using an AVSC spec file, and I need to rename some of the columns. I used aliases, but it seems they are not working: the columns are returned as null when I query the table.
SPARK DATAFRAME TO SAVE DATA
val data=Seq(("john","adams"),("john","smith"))
val columns = Seq("fname","lname")
import spark.sqlContext.implicits._
val df=data.toDF(columns:_*)
df.write.format("avro").save("/test")
AVSC Spec file
{
"type" : "record",
"name" : "test",
"doc" : " import of test",
"fields" : [ {
"name" : "first_name",
"type" : [ "null", "string" ],
"default" : null,
"aliases" : [ "fname" ],
"columnName" : "fname",
"sqlType" : "12"
}, {
"name" : "last_name",
"type" : [ "null", "string" ],
"default" : null,
"aliases" : [ "lname" ],
"columnName" : "lname",
"sqlType" : "12"
} ],
"tableName" : "test"
}
EXTERNAL HIVE TABLE
create external table test
STORED AS AVRO
LOCATION '/test'
TBLPROPERTIES ('avro.schema.url'='/test.avsc');
HIVE QUERY
SELECT last_name from test;
returns null, even though the Avro data contains values under the original name, i.e. lname.
My question is somewhat similar to this one (Athena/Presto - UNNEST MAP to columns), but in my case I know which columns I need beforehand.
My use case is this:
I have a JSON blob which contains the following structure
{
"reqId" : "1234",
"clientId" : "client",
"response" : [
{
"name" : "Susan",
"projects" : [
{
"name" : "project1",
"completed" : true
},
{
"name" : "project2",
"completed" : false
}
]
},
{
"name" : "Adams",
"projects" : [
{
"name" : "project1",
"completed" : true
},
{
"name" : "project2",
"completed" : false
}
]
}
]
}
I need to create a view which will return output something like this
name | project | Completed |
----------+-------------+------------+
Susan | project1 | true |
Susan | project2 | false |
Adams | project1 | true |
Adams | project2 | false |
I tried the following and other approaches; this one was the closest I could get:
WITH dataset AS (
SELECT 'Susan' as name, transform(filter(CAST(json_extract('{
"projects": [{"name":"project1", "completed":false}, {"name":"project3", "completed":false},
{"name":"project2", "completed":true}]}', '$.projects') AS ARRAY<MAP<VARCHAR, VARCHAR>>), p -> (p['name'] != 'project1')), p -> ROW(map_values(p))) AS projects
)
SELECT * from dataset
CROSS JOIN UNNEST(projects)
This is the output I am getting
name projects _col2
1 Susan [{field0=[project3, false]}, {field0=[project2, true]}] {field0=[project3, false]}
2 Susan [{field0=[project3, false]}, {field0=[project2, true]}] {field0=[project2, true]}
I basically want to unnest the key-value pairs of my map as separate columns. How do I do this in Presto/Athena?
You can achieve your expected output with the query below; you need to UNNEST two times, and it also requires some casting:
with dataset as
(
select json_parse('{"reqId" : "1234","clientId" : "client","response" : [{"name" : "Susan","projects" : [{"name" : "project1","completed" : true},{"name" : "project2","completed" : false}]},{"name" : "Adams","projects" : [{"name" : "project1","completed" : true},{"name" : "project2","completed" : false}]}]}') as json_col
)
,unnest_response as
(
select *
from dataset
cross join UNNEST(cast(json_extract(json_col, '$.response') as array<JSON>)) as t (response)
)
select
json_extract_scalar(response, '$.name') name,
json_extract_scalar(project, '$.name') project_name,
json_extract_scalar(project, '$.completed') project_completed
from unnest_response
cross join UNNEST(cast(json_extract(response, '$.projects') as array<JSON>)) as t (project);
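Since the stated goal is a view, the same query can be wrapped in CREATE OR REPLACE VIEW. A sketch, assuming the raw blobs live in a table responses with a varchar column json_col (both names are hypothetical):
create or replace view project_status as
with unnest_response as
(
select response
from responses -- hypothetical table holding the raw JSON blobs
cross join UNNEST(cast(json_extract(json_parse(json_col), '$.response') as array<JSON>)) as t (response)
)
select
json_extract_scalar(response, '$.name') name,
json_extract_scalar(project, '$.name') project_name,
json_extract_scalar(project, '$.completed') project_completed
from unnest_response
cross join UNNEST(cast(json_extract(response, '$.projects') as array<JSON>)) as t (project);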
I have a dataset like this:
{
"_id" : ObjectId("5ede1b6c317aca326c2f18d7"),
"createdate" : ISODate("2020-06-11T18:30:00.000Z"),
"userHolder" : [
{
"time" : "12:00",
"user" : [
"5ede1ff42b3e633edc0ba10e"
]
},
{
"time" : "16:30",
"user" : []
}
],
},
{
"_id" : ObjectId("5ede1b6c317aca326c2f18d8"),
"createdate" : ISODate("2020-06-121T18:30:00.000Z"),
"userHolder" : [
{
"time" : "12:30",
"user" : [
"5ede1ff42b3e633edc0ba10f"
]
},
{
"time" : "13:00",
"user" : [
"5ede1ff42b3e633edc0ba10e"
]
},
{
"time" : "12:00",
"user" : [
"5ede1ff42b3e633edc0ba10f"
]
},
{
"time" : "16:30",
"user" : []
}
],
}
I split the day into half-hour slots, i.e. a full day gives 48 possible entries in the userHolder array, like 12:30, 13:00, 13:30, and so on. If no user has an entry for a slot, that slot is not created.
So if I want to search for the id 5ede1ff42b3e633edc0ba10e across the complete collection, how do I write the query?
I tried to use the $all operator, but it does not work on this nested structure.
There is $elemMatch, but that query would be too large, as I would have to write 48 conditions, one per time slot. The expected result is that the query returns the _id of each matching entry, so it is clear in how many entries the id exists. I want the data, not a count.
Any help is really appreciated.
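For reference, MongoDB dot notation reaches through both nested arrays, so a single condition can stand in for the 48 per-slot conditions; a minimal sketch (the collection name entries is an assumption):
// "userHolder.user" matches any userHolder element whose user array
// contains the id, regardless of the time slot; project only the _id.
db.entries.find(
    { "userHolder.user": "5ede1ff42b3e633edc0ba10e" },
    { _id: 1 }
)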
I am unable to understand how Cassandra counters are stored on disk.
Create test table
create table testcounter (
id text,
count counter,
PRIMARY KEY(id))
WITH compaction = {'class':
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class':
'org.apache.cassandra.io.compress.LZ4Compressor'};
Add data
update testcounter set count = count + 10 where id = 'testrow';
Check sstable
nodetool flush test testcounter
sstabledump /usr/local/var/lib/cassandra/data/test/testcounter-87d6ae20908e11e9a5779f988085883a/mc-1-big-Data.db
Response from sstabledump
[
{
"partition" : {
"key" : [ "testrow" ],
"position" : 0
},
"rows" : [
{
"type" : "row",
"position" : 63,
"cells" : [
{ "name" : "count", "value" : 422215477737628, "tstamp" : "2019-06-16T23:30:34.423470Z" }
]
}
]
}
]
Update existing data
update testcounter set count = count + 10 where id = 'testrow';
update testcounter set count = count + 10 where id = 'testrow';
Flush
nodetool flush test testcounter
At this point, there are two sets of db files.
ls /usr/local/var/lib/cassandra/data/test/testcounter-87d6ae20908e11e9a5779f988085883a/
backups mc-1-big-Digest.crc32 mc-1-big-Statistics.db mc-2-big-CompressionInfo.db mc-2-big-Filter.db mc-2-big-Summary.db
mc-1-big-CompressionInfo.db mc-1-big-Filter.db mc-1-big-Summary.db mc-2-big-Data.db mc-2-big-Index.db mc-2-big-TOC.txt
mc-1-big-Data.db mc-1-big-Index.db mc-1-big-TOC.txt mc-2-big-Digest.crc32 mc-2-big-Statistics.db
sstabledump for mc-1
[
{
"partition" : {
"key" : [ "testrow" ],
"position" : 0
},
"rows" : [
{
"type" : "row",
"position" : 63,
"cells" : [
{ "name" : "count", "value" : 422215477737628, "tstamp" : "2019-06-16T23:30:34.423470Z" }
]
}
]
}
]
sstabledump for mc-2
[
{
"partition" : {
"key" : [ "testrow" ],
"position" : 0
},
"rows" : [
{
"type" : "row",
"position" : 65,
"cells" : [
{ "name" : "count", "value" : 422215477737628, "tstamp" : "2019-06-16T23:34:37.245893Z" }
]
}
]
}
]
It looks like there are no tombstones, and even the actual counter values are not visible. What is happening behind the scenes?
Since 2.1 it's actually a read-before-write, which then stores essentially a packed tuple that isn't very obvious or easy to deserialize. It might be worth opening a JIRA to have sstabledump deserialize the counter context and make it more readable.
For more details see: https://www.datastax.com/dev/blog/whats-new-in-cassandra-2-1-a-better-implementation-of-counters
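For what it's worth, reading the counter back through cqlsh merges the counter shards into the expected total (10 added three times), even though the two dumps above look identical and opaque:
cqlsh:test> select count from testcounter where id = 'testrow';

 count
-------
    30

(1 rows)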
According to this issue, Cassandra's storage format was updated in 3.0.
If previously I could use cassandra-cli to see how the SSTable is built, to get something like this:
[default#test] list phonelists;
-------------------
RowKey: scott
=> (column=, value=, timestamp=1374684062860000)
=> (column=phonenumbers:bill, value='555-7382', timestamp=1374684062860000)
=> (column=phonenumbers:jane, value='555-8743', timestamp=1374684062860000)
=> (column=phonenumbers:patricia, value='555-4326', timestamp=1374684062860000)
-------------------
RowKey: john
=> (column=, value=, timestamp=1374683971220000)
=> (column=phonenumbers:doug, value='555-1579', timestamp=1374683971220000)
=> (column=phonenumbers:patricia, value='555-4326', timestamp=1374683971220000)
What would the internal format look like in the latest version of Cassandra? Could you provide an example?
What utility can I use to see the internal representation of a table in Cassandra in the way listed above, but with the new SSTable format?
All that I have found on the internet is that the partition header now stores the column names, rows store the clustering values, and there are no duplicated values.
How can I look into it?
Prior to 3.0, sstable2json was a useful utility for getting an understanding of how data is organized in SSTables. This feature is not currently present in Cassandra 3.0, but there will be an alternative eventually. Until then, Chris Lohfink and I have developed an alternative to sstable2json (sstable-tools) for Cassandra 3.0, which you can use to understand how data is organized. There is some talk about bringing this into Cassandra proper in CASSANDRA-7464.
A key difference between the storage format of older versions of Cassandra and that of Cassandra 3.0 is that an SSTable was previously a representation of partitions and their cells (identified by their clustering and column name), whereas in Cassandra 3.0 an SSTable represents partitions and their rows.
You can read about these changes in more detail in this blog post by the primary developer of the changes, who does a great job explaining them.
The largest benefit you will see is that in the general case your data size will shrink (in some cases by a large factor), as a lot of the overhead introduced by CQL has been eliminated by some key enhancements.
Here's an example showing the difference between C* 2 and 3.
Schema:
create keyspace demo with replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
use demo;
create table phonelists (user text, person text, phonenumbers text, primary key (user, person));
insert into phonelists (user, person, phonenumbers) values ('scott', 'bill', '555-7382');
insert into phonelists (user, person, phonenumbers) values ('scott', 'jane', '555-8743');
insert into phonelists (user, person, phonenumbers) values ('scott', 'patricia', '555-4326');
insert into phonelists (user, person, phonenumbers) values ('john', 'doug', '555-1579');
insert into phonelists (user, person, phonenumbers) values ('john', 'patricia', '555-4326');
sstable2json C* 2.2 output:
[
{"key": "scott",
"cells": [["bill:","",1451767903101827],
["bill:phonenumbers","555-7382",1451767903101827],
["jane:","",1451767911293116],
["jane:phonenumbers","555-8743",1451767911293116],
["patricia:","",1451767920541450],
["patricia:phonenumbers","555-4326",1451767920541450]]},
{"key": "john",
"cells": [["doug:","",1451767936220932],
["doug:phonenumbers","555-1579",1451767936220932],
["patricia:","",1451767945748889],
["patricia:phonenumbers","555-4326",1451767945748889]]}
]
sstable-tools toJson C* 3.0 output:
[
{
"partition" : {
"key" : [ "scott" ]
},
"rows" : [
{
"type" : "row",
"clustering" : [ "bill" ],
"liveness_info" : { "tstamp" : 1451768259775428 },
"cells" : [
{ "name" : "phonenumbers", "value" : "555-7382" }
]
},
{
"type" : "row",
"clustering" : [ "jane" ],
"liveness_info" : { "tstamp" : 1451768259793653 },
"cells" : [
{ "name" : "phonenumbers", "value" : "555-8743" }
]
},
{
"type" : "row",
"clustering" : [ "patricia" ],
"liveness_info" : { "tstamp" : 1451768259796202 },
"cells" : [
{ "name" : "phonenumbers", "value" : "555-4326" }
]
}
]
},
{
"partition" : {
"key" : [ "john" ]
},
"rows" : [
{
"type" : "row",
"clustering" : [ "doug" ],
"liveness_info" : { "tstamp" : 1451768259798802 },
"cells" : [
{ "name" : "phonenumbers", "value" : "555-1579" }
]
},
{
"type" : "row",
"clustering" : [ "patricia" ],
"liveness_info" : { "tstamp" : 1451768259908016 },
"cells" : [
{ "name" : "phonenumbers", "value" : "555-4326" }
]
}
]
}
]
While the output is larger (that is more a consequence of the tool), the key differences you can see are:
Data is now a collection of Partitions and their Rows (which include cells) instead of a collection of Partitions and their Cells.
Timestamps are now at the row level (liveness_info) instead of at the cell level. If some of a row's cells differ in their timestamps, the new storage engine does delta encoding to save space and associates the difference at the cell level. This also applies to TTLs. As you can imagine, this saves a lot of space if you have a lot of non-key columns, as the timestamp does not need to be repeated.
The clustering information (in this case we cluster on 'person') is now present at the row level instead of the cell level, which saves a bunch of overhead, as the clustering column values don't have to be repeated for every cell.
I should note that in this particular example the benefits of the new storage engine aren't completely realized, since there is only one non-clustering column.
There are a number of other improvements not shown here (like the ability to store row-level range tombstones).
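To make that last point concrete, here is a hedged sketch against the phonelists schema above: a delete constrained to a slice of clustering values is stored as a single row-level range tombstone rather than as per-cell tombstones.
-- covers the slice from 'jane' to the end of scott's partition
-- with one range tombstone
delete from phonelists where user = 'scott' and person >= 'jane';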