Checking if data is present in a node - Cassandra

I have a 6-node cluster spanning two data centers, with a replication factor of 3 in each (6 replicas in total). I inserted a single test row, and later more rows. Since every node should hold a copy, I want to check whether the data was written to all nodes. How can I check individually whether the data is present on a given node? The worst option I can think of is shutting down 5 of the nodes, running a SELECT against the remaining one, and repeating this for each node. Is there a better way to check this? Thanks for your time.

You can use nodetool getendpoints for this. Here is a sample table to keep track of Blade Runners.
aploetz@cqlsh:stackoverflow> SELECT * FROM bladerunners;
id | type | datetime | data
--------+--------------+--------------------------+---------------------------------------------
B25881 | Blade Runner | 2015-02-16 18:00:03+0000 | Holden- Fine as long as nobody unplugs him.
B26354 | Blade Runner | 2015-02-16 18:00:03+0000 | Deckard- Filed and monitored.
(2 rows)
Now if I exit back out to my command prompt, I can use nodetool getendpoints, followed by my keyspace, table, and a partition key value. The list of nodes containing data for that key should be displayed underneath:
aploetz@dockingBay94:~$ nodetool getendpoints stackoverflow bladerunners 'B26354'
127.0.0.1
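If you want to spot-check a single replica without shutting down the other nodes, one option is to connect cqlsh directly to that node, set the consistency level to ONE, and turn tracing on; the trace output shows which replica actually served the read. A minimal sketch (the node address is hypothetical):
$ cqlsh 192.168.1.11
cqlsh> CONSISTENCY ONE;
cqlsh> TRACING ON;
cqlsh> SELECT * FROM stackoverflow.bladerunners WHERE id='B26354';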

Related

Best way to filter to a specific row in pyspark dataframe

I have what seems like a simple question, but I cannot figure it out. I am trying to filter to a specific row, based on an id (primary key) column, because I want to spot-check it against the same id in another table where a transform has been applied.
More detail... I have a dataframe like this:
| id | name | age |
| 1112 | Bob | 54 |
| 1123 | Sue | 23 |
| 1234 | Jim | 37 |
| 1251 | Mel | 58 |
...
except it has ~3000MM rows and ~2k columns. The obvious answer is something like df.filter('id = 1234').show(). The problem is that I have ~300MM rows and this query takes forever (as in 10-20 minutes on a ~20 node AWS EMR cluster).
I understand that it has to do a table scan, but fundamentally I don't understand why something like df.filter('age > 50').show() finishes in ~30 seconds while the id query takes so long. Don't they both have to do the same scan?
Any insight is very welcome. I am using pyspark 2.4.0 on Linux.
Don't they both have to do the same scan?
That depends on the data distribution.
First of all, show takes as little data as possible, so as long as there is enough data to collect 20 rows (the default value) it can process as little as a single partition, using LIMIT logic (see Spark count vs take and length for a detailed description of the LIMIT behavior).
If 1234 were in the first partition and you explicitly set the limit to 1,
df.filter('id = 1234').show(1)
then the time would be comparable to the other example.
But if the limit is smaller than the number of values that satisfy the predicate, or if the values of interest reside in later partitions, Spark will have to scan all the data.
If you want to make it faster you'll need the data bucketed (on disk) or partitioned (in memory) by the field of interest, or you can use one of the proprietary extensions (like Databricks indexing) or specialized storage (like the unfortunately inactive Succinct).
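A minimal pyspark sketch of the bucketing option, assuming a Hive-enabled session and a hypothetical source path and table name (note that bucket pruning on equality filters only kicks in on newer Spark versions):
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

df = spark.read.parquet('s3://bucket/events')  # hypothetical source

# Pay the shuffle cost once at write time; bucketBy requires saveAsTable.
(df.write
    .bucketBy(64, 'id')
    .sortBy('id')
    .mode('overwrite')
    .saveAsTable('events_bucketed'))

spark.table('events_bucketed').filter('id = 1234').show()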
But really, if you need fast lookups, use a proper database; that's what they are designed for.

How to select max timestamp in a partition using Cassandra

I have a problem modeling my data using Cassandra. I would like to use it as an event store. My events have a creation timestamp, and each event belongs to a partition identified by an id.
Now I'd like to see the most recent event for each id, and then filter those ids according to the timestamp.
So I have something like this:
ID | CREATION_TIMESTAMP | CONTENT
---+---------------------------------+----------------
1 | 2018-11-09 12:15:45.841000+0000 | {SOME_CONTENT}
1 | 2018-11-09 12:15:55.654656+0000 | {SOME_CONTENT}
2 | 2018-11-09 12:15:35.982354+0000 | {SOME_CONTENT}
2 | 2018-11-09 12:35:25.321655+0000 | {SOME_CONTENT}
2 | 2018-11-09 13:15:15.068498+0000 | {SOME_CONTENT}
I tried grouping by partition id and querying for the max of creation_timestamp, but that is not allowed; I was told to specify the partition id using EQ or IN. Additional reading led me to believe this is an entirely wrong way of approaching the problem, but I don't know whether NoSQL is simply not a suitable tool for the job or I am approaching the problem from the wrong angle.
You can easily achieve this by making CREATION_TIMESTAMP a clustering column ordered DESC. Then you would query by your id with LIMIT 1, which returns the most recent event, since the data is ordered DESC within that partition.
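A minimal sketch of such a table, using the column names from the question (the table name is hypothetical):
CREATE TABLE event_store (
    id int,
    creation_timestamp timestamp,
    content text,
    PRIMARY KEY (id, creation_timestamp)
) WITH CLUSTERING ORDER BY (creation_timestamp DESC);

-- most recent event for a given id
SELECT * FROM event_store WHERE id = 2 LIMIT 1;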
Can you please share your table definition?
Looking at your data, you can use ID as the partition key and CREATION_TIMESTAMP as a clustering column.
Then you can use: SELECT MAX(CREATION_TIMESTAMP) FROM keyspace.table WHERE ID = 'value';

View Cassandra Partitions using CQLSH

Using Cassandra, how do I see how many partitions were created, based on how I created the primary key? I have been following a tutorial that says to go to bin/cassandra-cli and use the LIST command. However, the latest Cassandra install does not come with this, and I have read other articles online indicating that the CLI is now deprecated.
Is there any way for me to see the partitions that were created using cqlsh?
Thanks in advance!
First of all you have to look at your cassandra.yaml file to see the number of tokens currently configured. This tells you how many token ranges (and hence partition ranges) each node will own:
$ grep num_tokens conf/cassandra.yaml
...
num_tokens: 128
...
$ grep initial_token conf/cassandra.yaml
...
# initial_token: 1
...
If initial_token is commented out, the node will figure out its own partition ranges during start-up.
Next you can check the partition ranges using the nodetool ring command:
$ bin/nodetool ring
Datacenter: DC1
==========
Address Rack Status State Load Owns Token
9167006318991683417
127.0.0.2 r1 Down Normal ? ? -9178420363247798328
127.0.0.2 r1 Down Normal ? ? -9127364991967065057
127.0.0.3 r1 Down Normal ? ? -9063041387589326037
This shows you which partition range belongs to which node in the cluster.
In the example above each node owns 128 partition ranges. The range between -9178420363247798327 and -9127364991967065057 belongs to the node 127.0.0.2.
You can use this simple SELECT to see the token of each row's partition key:
cqlsh:mykeyspace> select token(key), key, added_date, title from mytable;
system.token(key) | key | added_date | title
----------------------+-----------+--------------------------+----------------------
-1651127669401031945 | first | 2013-10-16 00:00:00+0000 | Hello World
-1651127669401031945 | first | 2013-04-16 00:00:00+0000 | Bye World
356242581507269238 | second | 2014-01-29 00:00:00+0000 | Lorem Ipsum
356242581507269238 | second | 2013-03-17 00:00:00+0000 | Today tomorrow
356242581507269238 | second | 2012-04-03 00:00:00+0000 | It's good to meet you
(5 rows)
Finding which partition range a token falls into tells you where the record is stored.
Also you can use nodetool to do the same in one simple step:
$ bin/nodetool getendpoints mykeyspace mytable 'first'
127.0.0.1
127.0.0.2
This tells you where the records with the partition key 'first' are located.
NOTE: If some of the nodes are down, the getendpoints command won't list them, even though they should store the record according to the replication settings.
cassandra-cli is not the same thing as cqlsh. Read this for more information: https://wiki.apache.org/cassandra/CassandraCli
The simplest way to get the number of partitions (keys) is with nodetool.
nodetool tablestats <keyspace>.<table>
Keyspace and table are optional. The number of partitions is listed under the value for Number of keys (estimate).
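For example (output trimmed; the numbers are illustrative):
$ nodetool tablestats mykeyspace.mytable
...
        Table: mytable
        ...
        Number of keys (estimate): 2
...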
If you want the number of rows, then Chris's answer is correct:
SELECT * FROM <keyspace>.<table>;
This will show you all rows in the table. Just remember that this is a very costly operation, as Cassandra has to fetch the data from every node in the cluster that holds any data for that table.
You can also just do a SELECT * FROM the table; if you look at the headers, the partition and row keys are colored differently, so you can figure it out that way.

Cassandra "truncate" does not empty tables

I ran into this problem recently. When I had populated my tables (called event and index) to more than 1 million rows and tried to truncate them for new tests, the tables were not empty after the truncation. CQL shows something like:
cqlsh> select count(*) from event limit 100000000;
count
---------
2033492
cqlsh> truncate event;
cqlsh> select count(*) from event limit 100000000;
count
-------
25
(1 rows)
cqlsh> select count(*) from event limit 100000000;
count
-------
27
(1 rows)
cqlsh> select count(*) from event limit 100000000;
count
-------
34
(1 rows)
cqlsh> select event_id, dateOf(time_token), unixTimestampOf(time_token), writetime(time_token) from event limit 100000000;
event_id | dateOf(time_token) | unixTimestampOf(time_token) | writetime(time_token)
--------------------------------------+--------------------------+-----------------------------+-----------------------
567c4f2b-c86a-4663-a8ec-50f70d183b62 | 2014-07-22 22:29:04-0400 | 1406082544416 | 1406082544416000
20a2f9e7-cdcb-4c2d-93e7-a646d0910e6b | 2014-07-22 15:12:29-0400 | 1406056349772 | 1406056349774000
... ...
0d983cec-4ba5-4df8-ada8-eb347add57bf | 2014-07-22 22:20:53-0400 | 1406082053926 | 1406082053930000
(34 rows)
cqlsh>
After the "truncate" command, the "select count(*)" returned numbers quickly changing, and stabilized at 34. To be sure there is no other program inserting records at the time, I ran a CQL statement showing all records were created on July 22 or 23, which is 4 - 5 days ago.
I tried "truncate" command several times, and the results were the same.
This happened in 2 environments. The first environment is on my laptop where I created 3 Cassandra instances cluster using localhost IPs (127.0.0.2, 127.0.0.3, and 127.0.0.4), while the second environment is 3 node Cassandra cluster, with each node on a separate Linux CentOS 6.5 machine. I am using Cassandra 2.0.6.
Could someone help me to figure out what is going on? Thanks in advance.
It is a bug in Cassandra 2.0.6, fixed by 2.0.10 at the latest.
Apparently it is not a well-known (or well-published) bug, as many DataStax experts did not know of it either when I reproduced it for them at the Cassandra Summit 2014. They were also puzzled, until the CQL architect dropped by and said he had fixed a mysterious bug in a recent release. He asked me to upgrade to 2.0.10, and the problem was gone: there are no more lingering records after "truncate" in 2.0.10.
Truncate doesn't truncate hints, so hints awaiting delivery will still get delivered. This could be causing your issue, especially if you inserted lots of rows quickly, which could have caused a few dropped mutations. However, hints are normally delivered within minutes, not days, so something else must be wrong if hints are the cause. You can see when hints are delivered from the logs.
The safest way to delete all data is to drop the table and recreate under a different name (or in a different keyspace).
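For example, a minimal sketch (the columns are abbreviated from the question's event table, and the new table name is hypothetical; adapt to your real schema):
DROP TABLE mykeyspace.event;

CREATE TABLE mykeyspace.event_v2 (
    event_id uuid,
    time_token timeuuid,
    PRIMARY KEY (event_id)
);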
One thing you absolutely have to make sure of before truncating is that all the nodes are up.
If you are using Astyanax
/* keyspace variable is Keyspace Type */
keyspace.truncateColumnFamily(ColumnFamilyName);
Note: even after truncating, you will have to manually delete all the table metadata.

Cassandra WHERE clause could not find proper rows

I have a column family in Cassandra 1.2 as below:
time | class_name | level_log | message | thread_name
-----------------+-----------------------------+-----------+---------------+-------------
121118135945759 | ir.apk.tm.test.LoggerSimple | DEBUG | This is DEBUG | main
121118135947310 | ir.apk.tm.test.LoggerSimple | ERROR | This is ERROR | main
121118135947855 | ir.apk.tm.test.LoggerSimple | WARN | This is WARN | main
121118135946221 | ir.apk.tm.test.LoggerSimple | DEBUG | This is DEBUG | main
121118135951461 | ir.apk.tm.test.LoggerSimple | WARN | This is WARN | main
When I use this query:
SELECT * FROM LogTM WHERE token(time) > token(0);
I get nothing! But as you can see, all of the time values are greater than zero!
This is the CF schema:
CREATE TABLE logtm(
time bigint PRIMARY KEY ,
level_log text ,
thread_name text ,
class_name text ,
msg text
);
Can anybody help?
Thanks :)
If you're not using an ordered partitioner (and if you don't know what that means, you aren't), that query doesn't do what you think. Just because two timestamps sort one way doesn't mean that their tokens do. The token is the (Murmur3) hash of the partition key value (unless you've changed the partitioner).
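You can see this for yourself by comparing the values with their tokens; the tokens will not be in value order:
SELECT time, token(time) FROM logtm;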
If you need to do range queries you can't do it on the partition key, only on clustering keys. One way you can do it is to use a schema like this:
CREATE TABLE LogTM (
shard INT,
time BIGINT,
class_name ASCII,
level_log ASCII,
thread_name ASCII,
message TEXT,
PRIMARY KEY (shard, time, class_name, level_log, thread_name)
)
If you set shard to zero, the schema will be roughly equivalent to what you have now, but the query SELECT * FROM LogTM WHERE shard = 0 AND time > 0 will give you the results you expect.
However, the performance will be awful. With a single value of shard only a single partition/row will be created, and you will only use a single node of your cluster (and that node will be very busy trying to compact that single row).
So you need to figure out a way to spread the load across more nodes. One way is to pick a random shard between something like 0 and 359 (or 0 and 255 if you like powers of two; the exact range isn't important, it just needs to be an order of magnitude or so larger than the number of nodes) for each insert, and read from all shards when you read back: SELECT * FROM LogTM WHERE shard IN (0,1,2,...) (you need to include all the shards in the list, in place of the ...).
You can also pick the shard by hashing the message; that way you don't have to worry about duplicates. A sketch of both strategies follows.
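A minimal client-side sketch of both shard-picking strategies, in Python (the 360 modulus is just the example range from above):
import random
import zlib

NUM_SHARDS = 360  # an order of magnitude more shards than nodes

def random_shard():
    # strategy 1: spread inserts by picking a random shard
    return random.randrange(NUM_SHARDS)

def shard_for(message):
    # strategy 2: derive the shard from the message, so the same
    # message always maps to the same shard (no duplicate worries)
    return zlib.crc32(message.encode('utf-8')) % NUM_SHARDS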
You need to tell us more about what exactly it is that you're trying to do, especially how you intend to query the data. Don't go do the thing I described above, it is probably completely wrong for your use case, I just wanted to give you an example so that I could explain what is going on inside Cassandra.
