I have a problem with the scalability of a Cassandra database. Despite increasing the number of nodes from 2 to 8, the database's performance does not grow.
Cassandra Version: 3.7
Cassandra hardware (x8): 1 vCPU @ 2.5 GHz, 900 MB RAM, 20 GB SSD, 10 Gbps LAN
Benchmark hardware (x1): 16 vCPU @ 2.5 GHz, 8 GB RAM, 5 GB SSD, 10 Gbps LAN
Default settings were changed in cassandra.yaml:
cluster_name: 'tst'
seeds: "192.168.0.101,192.168.0.102,...108"
listen_address: 192.168.0.xxx
endpoint_snitch: GossipingPropertyFileSnitch
rpc_address: 192.168.0.xxx
concurrent_reads: 8
concurrent_writes: 8
concurrent_counter_writes: 8
Keyspace:
create keyspace tst WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : '2' };
Example table:
CREATE TABLE shares (
c1 int PRIMARY KEY,
c2 varchar,
c3 int,
c4 int,
c5 int,
c6 varchar,
c7 int
);
Example query used in the tests:
INSERT INTO shares (c1, c2, c3, c4, c5, c6, c7) VALUES (%s, '%s', %s, %s, %s, '%s', %s)
To connect to the database I use https://github.com/datastax/java-driver. Across all threads I share a single Cluster object and a single Session object, as the driver documentation recommends. Connection setup:
PoolingOptions poolingOptions = new PoolingOptions();
poolingOptions.setConnectionsPerHost(HostDistance.LOCAL, 5, 300);
poolingOptions.setCoreConnectionsPerHost(HostDistance.LOCAL, 10);
poolingOptions.setPoolTimeoutMillis(5000);
QueryOptions queryOptions = new QueryOptions();
queryOptions.setConsistencyLevel(ConsistencyLevel.QUORUM);
Builder builder = Cluster.builder();
builder.withPoolingOptions(poolingOptions);
builder.withQueryOptions(queryOptions);
builder.withLoadBalancingPolicy(new RoundRobinPolicy());
this.setPoints(builder); // here all of the nodes are added
Cluster cluster = builder.build();
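A minimal sketch of the assumed Session setup (one Session created once from this Cluster and shared by every thread; the keyspace name is the one created above):
// Assumption: a single Session, created once from the Cluster above and
// shared by all worker threads (Session is thread-safe).
Session session = cluster.connect("tst");
this.session = session; // the same field used by execute() below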
Code of query:
public ResultSet execute(String query) {
ResultSet result = this.session.execute(query);
return result;
}
During the test, memory usage on all of the nodes is around 80% and CPU is at 100%. I am surprised by the connection usage shown in the monitor (the load looks very low):
[2016-09-10 09:39:51.537] /192.168.0.102:9042 connections=10, current load=62, max load=10240
[2016-09-10 09:39:51.556] /192.168.0.103:9042 connections=10, current load=106, max load=10240
[2016-09-10 09:39:51.556] /192.168.0.104:9042 connections=10, current load=104, max load=10240
[2016-09-10 09:39:51.556] /192.168.0.101:9042 connections=10, current load=196, max load=10240
[2016-09-10 09:39:56.467] /192.168.0.102:9042 connections=10, current load=109, max load=10240
[2016-09-10 09:39:56.467] /192.168.0.103:9042 connections=10, current load=107, max load=10240
[2016-09-10 09:39:56.467] /192.168.0.104:9042 connections=10, current load=115, max load=10240
[2016-09-10 09:39:56.468] /192.168.0.101:9042 connections=10, current load=169, max load=10240
[2016-09-10 09:40:01.468] /192.168.0.102:9042 connections=10, current load=113, max load=10240
[2016-09-10 09:40:01.468] /192.168.0.103:9042 connections=10, current load=84, max load=10240
[2016-09-10 09:40:01.468] /192.168.0.104:9042 connections=10, current load=92, max load=10240
[2016-09-10 09:40:01.469] /192.168.0.101:9042 connections=10, current load=205, max load=10240
Code of the monitor: https://github.com/datastax/java-driver/tree/3.0/manual/pooling#monitoring-and-tuning-the-pool
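That monitor essentially polls Session.getState() on a schedule; a minimal sketch (the scheduling wrapper is mine) that produces output in the format shown above:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.datastax.driver.core.Host;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.Session;

// Sketch: print open connections ("connections"), in-flight requests
// ("current load") and the theoretical ceiling ("max load") per host.
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
scheduler.scheduleAtFixedRate(() -> {
    Session.State state = session.getState();
    for (Host host : state.getConnectedHosts()) {
        int connections = state.getOpenConnections(host);
        int inFlight = state.getInFlightQueries(host);
        int maxLoad = connections
                * poolingOptions.getMaxRequestsPerConnection(HostDistance.LOCAL);
        System.out.printf("%s connections=%d, current load=%d, max load=%d%n",
                host.getSocketAddress(), connections, inFlight, maxLoad);
    }
}, 5, 5, TimeUnit.SECONDS);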
I am trying to test the scalability of a few NoSQL databases. With Redis the scaling was linear; here there is no scaling at all and I don't know why. Thanks for your help!
~1 GB of RAM on each machine is very little for Cassandra. This could be causing heavy GC pressure. Check your logs for GC activity and try to determine whether the 100% CPU is due to the JVM GC'ing all the time.
Another quirk: how many threads are you running on each machine? If you are trying to scale with this code (your code):
Code of query:
public ResultSet execute(String query) {
ResultSet result = this.session.execute(query);
return result;
}
then you won't go very far. Synchronous queries are hopelessly slow: each thread spends most of its time waiting for the round trip. And even if you use more threads, 1 GB of RAM could be (I already know it is...) too little. You should probably write asynchronous queries, for both resource consumption and scalability.
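For example, a rough sketch of an asynchronous version (Guava's Futures ships with the driver; the helper method is illustrative, not your code):
import java.util.ArrayList;
import java.util.List;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;

// Sketch: fire a batch of statements with executeAsync() and block only once
// for the whole batch, instead of waiting for every single round trip.
public List<ResultSet> executeAll(List<String> queries) {
    List<ResultSetFuture> futures = new ArrayList<>(queries.size());
    for (String query : queries) {
        futures.add(this.session.executeAsync(query)); // returns immediately
    }
    // Wait once for everything; failed queries show up as null entries.
    ListenableFuture<List<ResultSet>> all = Futures.successfulAsList(futures);
    return Futures.getUnchecked(all);
}
Keep the number of statements in flight bounded (a few hundred per host at most), otherwise you just shift the queueing into the driver's connection pool.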
Related
I am attempting to calculate some moving averages in Spark, but am running into issues with skewed partitions. Here is the simple calculation I'm trying to perform:
Getting the base data
# Imports used by the snippets below
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Variables
one_min = 60
one_hour = 60*one_min
one_day = 24*one_hour
seven_days = 7*one_day
thirty_days = 30*one_day
# Column variables
target_col = "target"
partition_col = "partition_col"
df_base = (
spark
.sql("SELECT * FROM {base}".format(base=base_table))
)
df_product1 = (
df_base
.where(F.col("product_id") == F.lit("1"))
.select(
F.col(target_col).astype("double").alias(target_col),
F.unix_timestamp("txn_timestamp").alias("window_time"),
"transaction_id",
partition_col
)
)
df_product1.persist()
Calculating running averages
window_lengths = {
"1day": one_day,
"7day": seven_days,
"30day": thirty_days
}
# Create window specs for each type
part_windows = {
time: Window.partitionBy(F.col(partition_col))
.orderBy(F.col("window_time").asc())
.rangeBetween(-secs, -one_min)
for (time, secs) in window_lengths.items()
}
cols = [
# Note: not using `avg` as I will be smoothing this at some point
(F.sum(target_col).over(win)/F.count("*").over(win)).alias(
"{time}_avg_target".format(time=time)
)
for time, win in part_windows.items()
]
sample_df = (
df_product1
.repartition(2000, partition_col)
.sortWithinPartitions(F.col("window_time").asc())
.select(
"*",
*cols
)
)
Now, I can collect a limited subset of these data (say just 100 rows), but if I try to run the full query and, for example, aggregate the running averages, Spark gets stuck on some particularly large partitions. The vast majority of the partitions have fewer than 1 million records in them. Only about 50 of them have more than 1M records, and only about 150 have more than 500K.
However, a small handful have more than 2.5M (~10), and 3 of them have more than 5M records. These partitions have run for more than 12 hours and failed to complete. The skew in these partitions is a natural part of the data, representing higher activity for those distinct values of the partitioning column. I have no control over the definition of the values of this partitioning column.
I am using a SparkSession with dynamic allocation enabled, 32 GB of RAM and 4 cores per executor, and a minimum of 4 executors. I have attempted to increase this to 96 GB and 8 cores per executor with a minimum of 10 executors, but the job still does not complete.
This seems like a calculation which shouldn't take 13 hours to complete. The df_product1 DataFrame contains just shy of 300M records.
If there is other information that would be helpful in resolving this problem, please comment below.
I was testing node repair on my Cassandra cluster (v3.11.5) while simultaneously stress-testing it with cassandra-stress (v3.11.4). The disk space ran out and the repair failed. As a result, gossip got disabled on the nodes. SSTables that were being anticompacted got cleaned up (effectively deleted), which dropped disk usage by roughly half (to ~1.5TB per node) within a minute. This part I understand.
What I do not understand is what happened next. The sstables kept being compacted into smaller ones and eventually deleted. As a result, disk usage continued to drop (this time slowly); after a day or so it went from ~1.5TB per node to ~50GB per node. The data in the cluster was randomly generated by cassandra-stress, so I have no way to confirm whether it is intact, but I find it highly unlikely given how far the disk usage dropped. Also, I have no TTL set up (at least none that I know of; I might be missing something), so I would not expect data to be deleted. But I believe that is what is happening.
Anyway, can anyone point me to what is happening?
Table schema:
> desc test-table1;
CREATE TABLE test-keyspace1.test-table1 (
event_uuid uuid,
create_date timestamp,
action text,
business_profile_id int,
client_uuid uuid,
label text,
params text,
unique_id int,
PRIMARY KEY (event_uuid, create_date)
) WITH CLUSTERING ORDER BY (create_date DESC)
AND bloom_filter_fp_chance = 0.1
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.DeflateCompressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
Logs:
DEBUG [CompactionExecutor:7] 2019-11-23 20:17:19,828 CompactionTask.java:255 - Compacted (59ddec80-0e20-11ea-9612-67e94033cb24) 4 sstables to [/data/cassandra/data/test-keyspace1/test-table1-f592e9600b9511eab562b36ee84fdea9/md-3259-big,] to level=0. 93.264GiB to 25.190GiB (~27% of original) in 5,970,059ms. Read Throughput = 15.997MiB/s, Write Throughput = 4.321MiB/s, Row Throughput = ~909/s. 1,256,595 total partitions merged to 339,390. Partition merge counts were {2:27340, 3:46285, 4:265765, }
(...)
DEBUG [CompactionExecutor:7] 2019-11-24 03:50:14,820 CompactionTask.java:255 - Compacted (e1bd7f50-0e4b-11ea-9612-67e94033cb24) 32 sstables to [/data/cassandra/data/test-keyspace1/test-table1-f592e9600b9511eab562b36ee84fdea9/md-3301-big,] to level=0. 114.787GiB to 25.150GiB (~21% of original) in 14,448,734ms. Read Throughput = 8.135MiB/s, Write Throughput = 1.782MiB/s, Row Throughput = ~375/s. 1,546,722 total partitions merged to 338,859. Partition merge counts were {1:12732, 2:42441, 3:78598, 4:50454, 5:36032, 6:52989, 7:21216, 8:34681, 9:9716, }
DEBUG [CompactionExecutor:15] 2019-11-24 03:50:14,852 LeveledManifest.java:423 - L0 is too far behind, performing size-tiering there first
DEBUG [CompactionExecutor:15] 2019-11-24 03:50:14,852 CompactionTask.java:155 - Compacting (85e06040-0e6d-11ea-9612-67e94033cb24) [/data/cassandra/data/test-keyspace1/test-table1-f592e9600b9511eab562b36ee84fdea9/md-3259-big-Data.db:level=0, /data/cassandra/data/test-keyspace1/test-table1-f592e9600b9511eab562b36ee84fdea9/md-3299-big-Data.db:level=0, /data/cassandra/data/test-keyspace1/test-table1-f592e9600b9511eab562b36ee84fdea9/md-3298-big-Data.db:level=0, /data/cassandra/data/test-keyspace1/test-table1-f592e9600b9511eab562b36ee84fdea9/md-3300-big-Data.db:level=0, /data/cassandra/data/test-keyspace1/test-table1-f592e9600b9511eab562b36ee84fdea9/md-3301-big-Data.db:level=0,]
(...)
DEBUG [NonPeriodicTasks:1] 2019-11-24 06:02:50,117 SSTable.java:105 - Deleting sstable: /data/cassandra/data/test-keyspace1/test-table1-f592e9600b9511eab562b36ee84fdea9/md-3259-big
edit:
I performed some additional testing. To the best of my knowledge there is no TTL set up; see the query result taken right after cassandra-stress started inserting data:
> SELECT event_uuid, create_date, ttl(action), ttl(business_profile_id), ttl(client_uuid), ttl(label), ttl(params), ttl(unique_id) FROM test-table1 LIMIT 1;
event_uuid | create_date | ttl(action) | ttl(business_profile_id) | ttl(client_uuid) | ttl(label) | ttl(params) | ttl(unique_id)
--------------------------------------+---------------------------------+-------------+--------------------------+------------------+------------+-------------+----------------
00000000-001b-adf7-0000-0000001badf7 | 2018-01-10 10:08:45.476000+0000 | null | null | null | null | null | null
So neither TTL expiry nor tombstone deletion should be related to the issue. It is likely that there are no duplicates, as the data is highly randomized. No replication factor changes were made, either.
What I found out is that the data volume starts decreasing every time cassandra-stress gets stopped. Sadly, I still don't know the exact reason.
I guess, when you think of it from a Cassandra perspective, there really are only a few reasons why your data could shrink:
1) TTL expired past GC Grace
2) Deletes past GC grace
3) The same records exist in multiple sstables (i.e. "updates")
4) Change in RF to a lower number (essentially a "cleanup" - token reassignment)
In any of the above cases, when compaction runs it will either remove or reconcile records, which can shrink the space consumption. Without the sstables still around, it's hard to determine which of these, or which combination of them, occurred.
-Jim
I am streaming data from Kafka and trying to limit the number of events per batch to 10. After processing 10-15 batches, there is a sudden spike in the batch size. Below are my settings:
spark.streaming.kafka.maxRatePerPartition=1
spark.streaming.backpressure.enabled=true
spark.streaming.backpressure.pid.minRate=1
spark.streaming.receiver.maxRate=2
Please check this image for the streaming behavior
This is a bug in Spark; please refer to https://issues.apache.org/jira/browse/SPARK-18371
The pull request isn't merged yet, but you may pick it up and build Spark on your own.
To summarize the issue:
If spark.streaming.backpressure.pid.minRate is set to a number less than or equal to the partition count, then an effective rate of 0 is calculated:
val totalLag = lagPerPartition.values.sum
...
val backpressureRate = Math.round(lag / totalLag.toFloat * rate)
...
(the second line calculates the rate per partition, where rate is the rate coming from the PID controller and defaults to minRate whenever the PID calculates something smaller)
As here: DirectKafkaInputDStream code
When this rounds to 0 for every partition, the sum is 0 and the code falls back to the (unreasonable) head of the partitions:
...
if (effectiveRateLimitPerPartition.values.sum > 0) {
val secsPerBatch = context.graph.batchDuration.milliseconds.toDouble / 1000
Some(effectiveRateLimitPerPartition.map {
case (tp, limit) => tp -> (secsPerBatch * limit).toLong
})
} else {
None
}
...
maxMessagesPerPartition(offsets).map { mmp =>
mmp.map { case (tp, messages) =>
val lo = leaderOffsets(tp)
tp -> lo.copy(offset = Math.min(currentOffsets(tp) + messages, lo.offset))
}
}.getOrElse(leaderOffsets)
As in DirectKafkaInputDStream#clamp
This makes the backpressure essentially not work when your actual and minimum receive rate (in messages per second across partitions) is smaller than or roughly equal to the partition count and you experience significant lag (e.g. messages come in spikes while your processing capacity stays constant).
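A tiny, self-contained illustration of that rounding, with hypothetical numbers (4 equally lagged partitions and a total rate clamped to minRate = 1):
// Hypothetical numbers: a total rate of 1 msg/s (the minRate) spread over
// 4 partitions with equal lag. Math.round(0.25) == 0 for every partition,
// so the summed effective rate is 0 and clamp() ignores the limit entirely.
public class BackpressureRoundingDemo {
    public static void main(String[] args) {
        long[] lagPerPartition = {1000, 1000, 1000, 1000};
        long totalLag = 4000;
        long rate = 1; // spark.streaming.backpressure.pid.minRate

        long sum = 0;
        for (long lag : lagPerPartition) {
            long perPartition = Math.round(lag / (float) totalLag * rate);
            sum += perPartition; // each term is 0
        }
        System.out.println("effective rate = " + sum); // prints 0
    }
}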
I have a table:
CREATE TABLE my_table (
user_id text,
ad_id text,
date timestamp,
PRIMARY KEY (user_id, ad_id)
);
The user_id and ad_id values that I use are no longer than 15 characters.
I query the table like this:
Set<String> users = ... filled somewhere
Session session = ... built somewhere
BoundStatement boundQuery = ... built somewhere
(using query: "SELECT * FROM my_table WHERE user_id=?")
List<Row> rowAds =
users.stream()
.map(user -> session.executeAsync(boundQuery.bind(user)))
.map(ResultSetFuture::getUninterruptibly)
.map(ResultSet::all)
.flatMap(List::stream)
.collect(toList());
The set of users has approximately 3,000 elements, and each user has approximately 300 ads.
This code is executed in 50 threads on the same machine (with different users, all using the same Session object).
The algorithm takes between 2 and 3 seconds to complete.
The Cassandra cluster has 3 nodes, with a replication factor of 2. Each node has 6 cores and 12 GB of ram.
The Cassandra nodes are at around 60% of their CPU capacity and 33% of their RAM (66% including page cache).
The querying machine is at around 50% of its CPU capacity and 50% of its RAM.
How do I improve the read time to less than 1 second?
Thanks!
UPDATE:
After some answers (thank you very much), I realized that I wasn't doing the queries in parallel, so I changed the code to:
List<Row> rowAds =
users.stream()
.map(user -> session.executeAsync(boundQuery.bind(user)))
.collect(toList())
.stream()
.map(ResultSetFuture::getUninterruptibly)
.map(ResultSet::all)
.flatMap(List::stream)
.collect(toList());
So now the queries are being done in parallel. This gave me times of approximately 300 milliseconds, so a great improvement there!
But my question continues, can it be faster?
Again, thanks!
users.stream()
.map(user -> session.executeAsync(boundQuery.bind(user)))
.map(ResultSetFuture::getUninterruptibly)
.map(ResultSet::all)
.flatMap(List::stream)
.collect(toList());
A remark: in the 2nd map() you're calling ResultSetFuture::getUninterruptibly. It's a blocking call, so you don't benefit much from asynchronous execution.
Instead, try to transform the list of futures returned by the driver (hint: ResultSetFuture implements Guava's ListenableFuture interface) into a future of a list.
See: http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/util/concurrent/Futures.html#successfulAsList(java.lang.Iterable)
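Roughly, that transformation could look like the following sketch (it reuses the variables from the question and the Guava version bundled with the driver):
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Row;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;

// Sketch: collect the futures first, convert List<Future> -> Future<List>,
// then block exactly once for the whole batch.
List<ResultSetFuture> futures = users.stream()
        .map(user -> session.executeAsync(boundQuery.bind(user)))
        .collect(Collectors.toList());

ListenableFuture<List<ResultSet>> combined = Futures.successfulAsList(futures);

List<Row> rowAds = Futures.getUnchecked(combined).stream()
        .filter(Objects::nonNull)          // failed queries come back as null
        .map(ResultSet::all)
        .flatMap(List::stream)
        .collect(Collectors.toList());
Your updated code already captures most of this benefit by collecting the futures before blocking; the future-of-list form additionally lets you attach callbacks instead of blocking at all.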
I'm running a Cassandra cluster:
Software version: 2.0.9
Nodes: 3
Replication factor: 2
I'm having a very simple table where I insert and update data.
CREATE TABLE link_list (
url text,
visited boolean,
PRIMARY KEY ((url))
);
There is no TTL/expiry on the rows and I'm not doing any DELETEs. As soon as I run my application it quickly slows down due to an increasing number of tombstoned cells:
Read 3 live and 535 tombstoned cells
It gets up to thousands within a few minutes.
My question is: what is responsible for generating those cells if I'm not doing any deletions?
// Update
This is the implementation I'm using to talk to Cassandra with com.datastax.driver.
public class LinkListDAOCassandra implements DAO {
public void save(Link link) {
save(new VisitedLink(link.getUrl(), false));
}
@Override
public void save(Model model) {
save((Link) model);
}
public void update(VisitedLink link) {
String cql = "UPDATE link_list SET visited = ? WHERE url = ?";
Cassandra.DB.execute(cql, ConsistencyLevel.QUORUM, link.getVisited(), link.getUrl());
}
public void save(VisitedLink link) {
String cql = "SELECT url FROM link_list_inserted WHERE url = ?";
if(Cassandra.DB.execute(cql, ConsistencyLevel.QUORUM, link.getUrl()).all().size() == 0) {
cql = "INSERT INTO link_list_inserted (url) VALUES (?)";
Cassandra.DB.execute(cql, ConsistencyLevel.QUORUM, link.getUrl());
cql = "INSERT INTO link_list (url, visited) VALUES (?,?)";
Cassandra.DB.execute(cql, ConsistencyLevel.QUORUM, link.getUrl(), link.getVisited());
}
}
public VisitedLink getByUrl(String url) {
String cql = "SELECT * FROM link_list WHERE url = ?";
for(Row row : Cassandra.DB.execute(cql, url)) {
return new VisitedLink(row.getString("url"), row.getBool("visited"));
}
return null;
}
public List<Link> getLinks(int limit) {
List<Link> links = new ArrayList();
ResultSet results;
String cql = "SELECT * FROM link_list WHERE visited = False LIMIT ?";
for(Row row : Cassandra.DB.execute(cql, ConsistencyLevel.QUORUM, limit)) {
try {
links.add(new Link(new URL(row.getString("url"))));
}
catch(MalformedURLException e) { }
}
return links;
}
}
This is the execute implementation
public ResultSet execute(String cql, ConsistencyLevel cl, Object... values) {
PreparedStatement statement = getSession().prepare( cql ).setConsistencyLevel(cl);
BoundStatement boundStatement = new BoundStatement( statement );
boundStatement.bind(values);
return session.execute(boundStatement);
}
// Update 2
An interesting finding from cfstats is that only one table has tombstones: link_list_visited. Does that mean that updating a column with a secondary index will create tombstones?
Table (index): link_list.link_list_visited
SSTable count: 2
Space used (live), bytes: 5055920
Space used (total), bytes: 5055991
SSTable Compression Ratio: 0.3491883995187955
Number of keys (estimate): 256
Memtable cell count: 15799
Memtable data size, bytes: 1771427
Memtable switch count: 1
Local read count: 85703
Local read latency: 2.805 ms
Local write count: 484690
Local write latency: 0.028 ms
Pending tasks: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.00000
Bloom filter space used, bytes: 32
Compacted partition minimum bytes: 8240
Compacted partition maximum bytes: 7007506
Compacted partition mean bytes: 3703162
Average live cells per slice (last five minutes): 3.0
Average tombstones per slice (last five minutes): 674.0
The only major differences between a secondary index and an extra column family maintained by hand to hold the index are that the secondary index only contains information about the current node (i.e. it does not contain information about other nodes' data), and that the operations on the secondary index triggered by an update on the primary table are atomic. Other than that, you can see it as a regular column family with the same weak spots: a high number of updates on the primary column family leads to a high number of deletes on the index table, because an update on the primary table is translated into a delete/insert pair on the index table. Those deletions in the index table are the source of the tombstones. Cassandra deletes are logical deletes (tombstones) until compaction purges them, once gc_grace_seconds has passed.
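To make that delete/insert translation concrete, here is a purely hypothetical sketch of what maintaining such an index by hand would force the client to do on every update of visited (the table link_list_by_visited and the method are made up; the built-in secondary index performs the equivalent work internally, and the DELETE half is exactly what shows up as tombstones on link_list.link_list_visited):
import com.datastax.driver.core.Session;

// Hypothetical manual-index version of "mark this URL as visited".
// The built-in secondary index does the same delete/insert dance internally.
public void markVisited(Session session, String url, boolean previousValue) {
    // 1. Remove the old index entry -> this write is a tombstone.
    session.execute(
            "DELETE FROM link_list_by_visited WHERE visited = ? AND url = ?",
            previousValue, url);
    // 2. Write the new index entry.
    session.execute(
            "INSERT INTO link_list_by_visited (visited, url) VALUES (?, ?)",
            true, url);
    // 3. Update the base table.
    session.execute(
            "UPDATE link_list SET visited = ? WHERE url = ?",
            true, url);
}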
Hope it helps!