Cassandra multitenancy: trying to create 4000 keyspaces, fails at 700

Environment: JRuby, Rails 2.3.8, cassandra-gem, Cassandra 1.1
We are creating a new keyspace per tenant in our Cassandra-backed multi-tenant application. While testing, we found Cassandra fails
at 400 keyspaces on an 8 GB RAM machine with a 7200 RPM disk
at 700 keyspaces on a 24 GB RAM machine with a 7200 RPM disk
We also found it runs painfully slowly after 100 or so keyspaces: it took ~7 hours to create 700 keyspaces and consumed ~35 GB of disk space with no data entered into it.
We then switched our testing to validate the maximum number of column families in a keyspace, hoping to use CFs named <tenant id>_<columnfamily name>. This test also failed, at ~3000 CFs.
Now we are looking at <tenant_id>_<key> row keys, using the same CFs for all tenants. This is based on the notes at https://github.com/rantav/hector/wiki/Virtual-Keyspaces.
The question is: would prepending tenant_id to the key work for the following CF, given that the key validation class is LongType?
ColumnFamily: monthly_unique_user_counts
"count of unique users per month"
Key Validation Class: org.apache.cassandra.db.marshal.LongType
Default column value validator: org.apache.cassandra.db.marshal.LongType
Columns sorted by: org.apache.cassandra.db.marshal.LongType
Row cache size / save period in seconds / keys to save : 0.0/0/all
Row Cache Provider: org.apache.cassandra.cache.ConcurrentLinkedHashCacheProvider
Key cache size / save period in seconds: 200000.0/14400
For a few other CFs, the Key Validation Class is
Key Validation Class: org.apache.cassandra.db.marshal.TimeUUIDType
Would the <tenant_id>_<key> concept work for that CF?
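A note on feasibility, since the validation classes constrain the answer: a LongType-validated key must deserialize as an 8-byte long, so a string-prefixed key such as <tenant_id>_<key> would be rejected, and the same applies to TimeUUIDType, which accepts only valid time UUIDs. String prefixing works as-is with key validation classes like BytesType or UTF8Type. One hedged workaround that keeps LongType is sketched below in Python for illustration; the helper names and bit widths are assumptions, not something from the app:

import struct

# Illustrative only (helper names are mine): pack the tenant id into the
# high bits of the 64-bit row key so it still validates as LongType.
# Assumes tenant ids fit in 15 bits and per-tenant keys in 48 bits.
def tenant_key(tenant_id, key):
    assert 0 <= tenant_id < 2 ** 15 and 0 <= key < 2 ** 48
    return (tenant_id << 48) | key

def split_tenant_key(packed):
    return packed >> 48, packed & (2 ** 48 - 1)

# The packed key still serializes to the 8 bytes LongType expects:
assert len(struct.pack('>q', tenant_key(42, 201207))) == 8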

Related

Issues with Cassandra when performing a large number of writes

We are trying to write a large number of records (upwards of 5 million at a time) into Cassandra. These are read from tab-delimited files and imported into Cassandra using executeAsync.
We have been using much smaller datasets (~330k records), which will be more common. Until recently, our script was silently stopping its import at around 65k records. Since upgrading the RAM from 2 GB to 4 GB, the number of records imported has doubled, but we are still not successfully importing all the records.
This is an example of the process we are running at present:
$cluster = \Cassandra::cluster()->withContactPoints('127.0.0.1')->build();
$session = $cluster->connect('example_data');
$statement = $session->prepare("INSERT INTO example_table (example_id, column_1, column_2, column_3, column_4, column_5, column_6) VALUES (uuid(), ?, ?, ?, ?, ?, ?)");
$futures = array();
foreach ($results as $row) {
    $data = array($row['column_1'], $row['column_2'], $row['column_3'], $row['column_4'], $row['column_5'], $row['column_6']);
    // Fire-and-forget: nothing ever waits on these futures
    $futures[] = $session->executeAsync($statement, new \Cassandra\ExecutionOptions(array(
        'arguments' => $data
    )));
}
We suspect that this might be down to the heap running out of space:
DEBUG [SlabPoolCleaner] 2017-02-27 17:01:17,105 ColumnFamilyStore.java:1153 - Flushing largest CFS(Keyspace='dev', ColumnFamily='example_data') to free up room. Used total: 0.67/0.00, live: 0.33/0.00, flushing: 0.33/0.00, this: 0.20/0.00
DEBUG [SlabPoolCleaner] 2017-02-27 17:01:17,133 ColumnFamilyStore.java:854 - Enqueuing flush of example_data: 89516255 (33%) on-heap, 0 (0%) off-heap
The table we are inserting this data into is as follows:
CREATE TABLE example_data (
    example_id uuid PRIMARY KEY,
    column_1 int,
    column_2 varchar,
    column_3 int,
    column_4 varchar,
    column_5 int,
    column_6 int
);
CREATE INDEX column_5 ON example_data (column_5);
CREATE INDEX column_6 ON example_data (column_6);
We have attempted to use the batch method, but believe it is not appropriate here, as it causes the Cassandra process to run at high CPU usage (~85%).
We are using the latest version of DSE/Cassandra available from the repository.
Cassandra 3.0.11.1564 | DSE 5.0.6
2 GB (and really 4 GB) is not even the minimum recommended for Cassandra in development or production. Running on it is possible, but it requires more tweaking since it's below what the defaults are tuned for. Even tweaked, you shouldn't expect much performance before it starts having trouble keeping up (the errors you're getting) and you need to add more nodes.
https://docs.datastax.com/en/landing_page/doc/landing_page/planning/planningHardware.html
Production: 32 GB to 512 GB; the minimum is 8 GB for Cassandra only and 32 GB for DataStax Enterprise analytics and search nodes.
Development in non-loading testing environments: no less than 4 GB.
DSE Graph: 2 to 4 GB in addition to your particular combination of DSE Search or DSE Analytics. If you want a large dedicated graph cache, add more RAM.
Also, you're spamming writes with executeAsync and not applying any backpressure; eventually you will overrun any system like that. You either need to add some kind of throttling or feedback, or just use synchronous requests.
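For illustration, here is a minimal sketch of a bounded in-flight pattern using the DataStax Python driver (the same idea applies to the PHP driver above; the cap of 128 and the simplified statement are illustrative values, not tuned ones):

from threading import Semaphore
from cassandra.cluster import Cluster

MAX_IN_FLIGHT = 128  # illustrative cap on concurrent writes

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('example_data')
insert = session.prepare(
    "INSERT INTO example_table (example_id, column_1) VALUES (uuid(), ?)")

rows = [(1,), (2,), (3,)]  # stands in for the tab-delimited input
slots = Semaphore(MAX_IN_FLIGHT)

def on_done(_):
    # Called on success or error; either way, free a slot.
    slots.release()

for row in rows:
    slots.acquire()  # blocks while MAX_IN_FLIGHT writes are pending
    future = session.execute_async(insert, [row[0]])
    future.add_callbacks(on_done, on_done)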

YCSB low read throughput with Cassandra

The YCSB Endpoint benchmark would have you believe that Cassandra is the golden child of NoSQL databases. However, recreating the results on our own boxes (8 cores with hyperthreading, 60 GB memory, two 500 GB SSDs), we are getting dismal read throughput for workload B (read mostly, i.e. 95% read, 5% update).
The cassandra.yaml settings are exactly the same as the Endpoint settings, barring the different IP addresses and our disk configuration (one SSD for data, one for the commit log). While their throughput is ~38,000 operations per second, ours is ~16,000 regardless (relatively) of the thread count or number of client nodes, i.e. one worker node with 256 threads will report ~16,000 ops/sec, while 4 nodes will each report ~4,000 ops/sec.
I've set the readahead value to 8 KB for the SSD data drive. The custom workload file is below.
When analyzing disk I/O and CPU usage with iostat, read throughput is consistently ~200,000 KB/s, which suggests the YCSB cluster throughput should be higher (records are 100 bytes). ~25-30% of CPU time is in %iowait and 10-25% in user time.
top and nload stats show no obvious bottleneck (<50% memory usage, and 10-50 Mbit/s on a 10 Gb/s link).
# The name of the workload class to use
workload=com.yahoo.ycsb.workloads.CoreWorkload
# There is no default setting for recordcount but it is
# required to be set.
# The number of records in the table to be inserted in
# the load phase or the number of records already in the
# table before the run phase.
recordcount=2000000000
# There is no default setting for operationcount but it is
# required to be set.
# The number of operations to use during the run phase.
operationcount=9000000
# The offset of the first insertion
insertstart=0
insertcount=500000000
core_workload_insertion_retry_limit=10
core_workload_insertion_retry_interval=1
# The number of fields in a record
fieldcount=10
# The size of each field (in bytes)
fieldlength=10
# Should read all fields
readallfields=true
# Should write all fields on update
writeallfields=false
fieldlengthdistribution=constant
readproportion=0.95
updateproportion=0.05
insertproportion=0
readmodifywriteproportion=0
scanproportion=0
maxscanlength=1000
scanlengthdistribution=uniform
insertorder=hashed
requestdistribution=zipfian
hotspotdatafraction=0.2
hotspotopnfraction=0.8
table=usertable
measurementtype=histogram
histogram.buckets=1000
timeseries.granularity=1000
The key was increasing native_transport_max_threads in the cassandra.yaml file.
Along with the settings increased per the comments (more connections in the YCSB client as well as higher concurrent reads/writes in Cassandra), throughput jumped to ~80,000 ops/sec.
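For reference, these knobs live in cassandra.yaml; the values below are placeholders showing the shape of the change (stock defaults are 128/32/32), not the tuned numbers from this cluster:

# cassandra.yaml excerpt (illustrative values, not the tuned ones)
native_transport_max_threads: 256
concurrent_reads: 64
concurrent_writes: 64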

Cassandra SSTables accumulating

I've been testing out Cassandra to store observations.
All "things" belong to one or more reporting groups:
CREATE TABLE observations (
    group_id int,
    actual_time timestamp, /* 1 second granularity */
    is_something int,      /* 0/1 bool */
    thing_id int,
    data1 text,            /* JSON-encoded dict/hash */
    data2 text,            /* JSON-encoded dict/hash */
    PRIMARY KEY (group_id, actual_time, thing_id)
)
WITH compaction = {'class': 'DateTieredCompactionStrategy',
                   'tombstone_threshold': '.01'}
AND gc_grace_seconds = 3600;
CREATE INDEX something_index ON observations (is_something);
All inserts are done with a TTL and should expire 36 hours after "actual_time". Something that is beyond our control is that duplicate observations are sent to us; some observations are sent in near real time, others delayed by hours.
The "something_index" is an experiment to see if we can slice queries on a boolean property without having to create separate tables, and it seems to work.
"data2" is not currently being written; it is meant to be written by a different process than the one that writes "data1", but will be given the same TTL (based on "actual_time").
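Because the TTL is anchored to "actual_time" rather than insert time, late-arriving rows need a correspondingly shorter TTL. A minimal sketch of that computation (the function name is illustrative, not from the actual program):

import time

EXPIRY_SECONDS = 36 * 3600  # rows expire 36 hours after actual_time

def ttl_for(actual_time_epoch):
    # Remaining lifetime for a row whose observation time may be hours old.
    remaining = int(actual_time_epoch + EXPIRY_SECONDS - time.time())
    return remaining if remaining > 0 else None  # None: already expired, skip the insert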
Particulars:
Three nodes (EC2 m3.xlarge)
DataStax ami-ada2b6c4 (us-east-1), installed 8/26/2015
Cassandra 2.2.0
Inserts from a Python program using the "cql" module (had to enable "thrift" RPC)
Running "nodetool repair -pr" on each node every three hours (staggered).
Inserting between 1 and 4 million rows per hour.
I'm seeing large numbers of data files:
$ ls *Data* | wc -l
42150
$ ls | wc -l
337201
Queries don't return expired entries, but files older than 36 hours are not going away!
The large number of SSTables is probably caused by the frequent repairs you are running. Repair would normally be run only once a day or once a week, so I'm not sure why you are running it every three hours. If you are worried about short-term downtime missing writes, you could set the hint window to three hours instead of running repair so frequently.
You might have a look at CASSANDRA-9644, which sounds like it describes your situation. CASSANDRA-10253 might also be of interest.
I'm not sure why your TTL isn't working to drop old SSTables. Are you setting the TTL on a whole row insert, or individual column updates? If you run sstable2json on a data file, I think you can see the TTL values.
Full disclosure: I have a love/hate relationship with DTCS. I manage a cluster with hundreds of terabytes of data in DTCS, and one of the things it does absolutely horribly is streaming of any kind. For that reason, I've recommended replacing it ( https://issues.apache.org/jira/browse/CASSANDRA-9666 ).
That said, it should mostly just work. However, there are parameters that come into play, such as timestamp_resolution, that can throw things off if set improperly.
Have you checked the sstable timestamps to ensure they match timestamp_resolution (default: microseconds)?
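A hedged way to check (the paths are placeholders; sstablemetadata ships with the Cassandra tools and prints minimum/maximum timestamps, where a 16-digit value means microseconds since the epoch and 13 digits would mean milliseconds):

$ sstablemetadata /var/lib/cassandra/data/<keyspace>/observations-*/<sstable>-Data.db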

Cassandra Stress Test results evaluation

I have been using the cassandra-stress tool to evaluate my Cassandra cluster for quite some time now.
My problem is that I am not able to comprehend the results generated for my specific use case.
My schema looks something like this:
CREATE TABLE Table_test (
    ID uuid,
    Time timestamp,
    Value double,
    Date timestamp,
    PRIMARY KEY ((ID, Date), Time)
) WITH COMPACT STORAGE;
I have described this schema in a custom YAML file and used the parameters n=10000 and threads=100, with the rest left at their defaults (cl=one, mode=native cql3, etc.). The Cassandra cluster is a 3-node CentOS VM setup.
A few specifics of the custom yaml file are as follows:
insert:
  partitions: fixed(100)
  select: fixed(1)/2
  batchtype: UNLOGGED
columnspecs:
  - name: Time
    size: fixed(1000)
  - name: ID
    size: uniform(1..100)
  - name: Date
    size: uniform(1..10)
  - name: Value
    size: uniform(-100..100)
My observations so far are as follows:
With n=10000 and Time: fixed(1000), the number of rows getting inserted is 10 million (10000 × 1000 = 10000000).
The number of row keys/partitions is 10000 (i.e. n), of which 100 partitions are taken at a time (which means 100 × 1000 = 100000 key-value pairs), out of which 50000 key-value pairs are processed at a time (because of select: fixed(1)/2, i.e. ~50%).
The output message also confirms the same:
Generating batches with [100..100] partitions and [50000..50000] rows (of[100000..100000] total rows in the partitions)
The results that I get are the following for consecutive runs with the same configuration as above:
Run  Total_ops  Op_rate  Partition_rate  Row_rate  Time
1    56         19       1885            943246    3.0
2    46         46       4648            2325498   1.0
3    27         30       2982            1489870   0.9
4    59         19       1932            966034    3.1
5    100        17       1730            865182    5.8
Now, what I need to understand is the following:
Which of these metrics is the throughput, i.e. the number of records inserted per second? Is it the Row_rate, Op_rate, or Partition_rate? If it's the Row_rate, can I safely conclude that I am able to insert close to 1 million records per second? Any thoughts on what the Op_rate and Partition_rate mean in this case?
Why does Total_ops vary so drastically between runs? Does the number of threads have anything to do with this variation? What can I conclude here about the stability of my Cassandra setup?
How do I determine the batch size per thread here? In my example, is the batch size 50000?
Thanks in advance.
Row Rate is the number of CQL rows that you have inserted into your database. For your table, a CQL row is a tuple like (ID uuid, Time timestamp, Value double, Date timestamp).
The Partition Rate is the number of partitions C* had to construct. A partition is the data structure which holds and orders data in Cassandra; data with the same partition key ends up on the same node. The partition rate is equal to the number of unique values of the partition key that were inserted in the time window. For your table, this would be the unique values of (ID, Date).
Op Rate is the number of actual CQL operations that had to be performed. From your settings, it is running unlogged batches to insert the data. Each batch contains approximately 100 partitions (unique combinations of ID and Date), which is why Op Rate × 100 ≈ Partition Rate.
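As a sanity check against run 1 above: 19 ops/sec × 100 partitions per batch ≈ 1900, close to the reported Partition_rate of 1885; and since each batch selects 500 of each partition's 1000 rows (select: fixed(1)/2), 1885 × 500 ≈ 942500, close to the reported Row_rate of 943246.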
Total_ops should include all operations, read and write, so if you had any read operations, those would also be included.
I would suggest changing your batch size to match your workload, or keeping it at 1, depending on your actual database usage; this should provide a more realistic scenario. It's also important to run much longer than just ~100 total operations to really get a sense of your system's capabilities. Some of the biggest difficulties come when the size of the dataset increases beyond the amount of RAM in the machine.

In Cassandra 1.2 / CQL 3, is it possible to abort a secondary index build?

I had been using a 6 GB dataset, with each source record being ~1 KB in length, when I accidentally added an index on a column that I am pretty sure has 100% cardinality.
I tried dropping the index from cqlsh, but by that point the two-node cluster had gone into a runaway death spiral, with loadavg surpassing 20 on each node and cqlsh hung on the DROP command for 30 minutes. Since this was just a test setup, I shut down and destroyed the cluster and restarted.
This is a fairly disconcerting problem, as it makes me fear a scenario where a junior developer on a production cluster sets an index on a similar high-cardinality column. I scanned through the documentation and looked at the options in nodetool, but there didn't seem to be anything along the lines of "abort job" or "abort building index".
Test environment:
2x m1.xlarge EC2 instances, each with 2 RAID 0 ephemeral disks
Dataset was 6 GB, 1 KB per record.
My question in summary: is it possible to abort the process of building a secondary index, and/or is it possible to stop or postpone running builds (indexing, compaction) until a later date?
nodetool -h node_address stop index_build
See: http://www.datastax.com/docs/1.2/references/nodetool#nodetool-stop
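Per the same nodetool reference, the stop command also accepts other operation types, which covers the compaction half of the question:

nodetool -h node_address stop COMPACTION
nodetool -h node_address stop VALIDATION   # likewise CLEANUP and SCRUB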
