After an unsuccessful nodetool repair operation I got two big sstable files (the last two in the listing below) instead of one, each having the same size as the single file before. And now these files cannot be merged back by common tools (nodetool cleanup, nodetool compact, nodetool repair). Tables are replicated to another Cassandra node (replication_factor: 2), and there are two big sstable files there as well now.
-rw-r--r-- 1 cassandra cassandra 16M Mar 5 12:36 mc-116413-big-Data.db
-rw-r--r-- 1 cassandra cassandra 34M Mar 5 01:21 mc-116320-big-Index.db
-rw-r--r-- 1 cassandra cassandra 39M Mar 3 22:46 mc-116125-big-Index.db
-rw-r--r-- 1 cassandra cassandra 66M Mar 5 12:25 mc-116412-big-Data.db
-rw-r--r-- 1 cassandra cassandra 262M Mar 5 05:51 mc-116365-big-Data.db
-rw-r--r-- 1 cassandra cassandra 263M Mar 5 08:46 mc-116386-big-Data.db
-rw-r--r-- 1 cassandra cassandra 263M Mar 5 11:42 mc-116407-big-Data.db
-rw-r--r-- 1 cassandra cassandra 7.2G Mar 5 03:18 mc-116345-big-Data.db
-rw-r--r-- 1 cassandra cassandra 43G Mar 3 22:46 mc-116125-big-Data.db
-rw-r--r-- 1 cassandra cassandra 48G Mar 5 01:21 mc-116320-big-Data.db
I suppose that one of these files contains duplicated data. How can I compact the files back into a single file?
Maybe I'm not looking properly, but I don't see any duplicate SSTable files in the file listing you posted.
If you're referring to these two:
-rw-r--r-- 1 cassandra cassandra 43G Mar 3 22:46 mc-116125-big-Data.db
-rw-r--r-- 1 cassandra cassandra 48G Mar 5 01:21 mc-116320-big-Data.db
They're not duplicates because they have 2 different generation IDs -- 116125 and 116320. This means they also have different ancestors.
If you're referring to these:
-rw-r--r-- 1 cassandra cassandra 39M Mar 3 22:46 mc-116125-big-Index.db
-rw-r--r-- 1 cassandra cassandra 43G Mar 3 22:46 mc-116125-big-Data.db
-rw-r--r-- 1 cassandra cassandra 34M Mar 5 01:21 mc-116320-big-Index.db
-rw-r--r-- 1 cassandra cassandra 48G Mar 5 01:21 mc-116320-big-Data.db
Again, they're not duplicates of each other. The *-Data.db files contain the actual data. The *-Index.db files are component files which contain the partition index, i.e. the index of the partitions within the data files, which is used for fast retrieval.
If you're interested, I've explained it in a bit more detail in this post -- https://community.datastax.com/questions/5219/. Cheers!
[UPDATE] To respond to this follow-up question:
Could you suggest why these two files don't get compacted into a single file, as usually happens?
Assuming the table is configured with SizeTieredCompactionStrategy, it will require similar-sized sstables as candidates before they get compacted together.
The default minimum number of sstable candidates is min_threshold: 4, so you need 4 similarly-sized sstables for a compaction to be triggered.
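If you want to merge the two large sstables without waiting for candidates to accumulate, you can lower the threshold or trigger a major compaction manually. A sketch (keyspace and table names are placeholders):
ALTER TABLE my_keyspace.my_table
  WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'min_threshold': '2'};
nodetool compact my_keyspace my_table
Keep in mind that a major compaction leaves one very large sstable that may then rarely find similarly-sized candidates again, so it is generally a last resort.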
I can see Kafka logs are growing rapidly and flooding the filesystem.
How can I change settings for Kafka to write fewer logs and rotate these logs frequently?
The location of the files is /opt/kafka/kafka_2.12-2.2.2/logs and their sizes are:
5.9G server.log.2020-11-24-14
5.9G server.log.2020-11-24-15
5.9G server.log.2020-11-24-16
5.7G server.log.2020-11-24-17
Sample logs from the above files:
[2020-11-24 14:59:59,999] WARN Exception when following the leader (org.apache.zookeeper.server.quorum.Learner)
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at org.apache.zookeeper.common.AtomicFileOutputStream.write(AtomicFileOutputStream.java:74)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at java.io.BufferedWriter.flush(BufferedWriter.java:254)
at org.apache.zookeeper.server.quorum.QuorumPeer.writeLongToFile(QuorumPeer.java:1391)
at org.apache.zookeeper.server.quorum.QuorumPeer.setCurrentEpoch(QuorumPeer.java:1426)
at org.apache.zookeeper.server.quorum.Learner.syncWithLeader(Learner.java:454)
at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:83)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:981)
[2020-11-24 14:59:59,999] INFO shutdown called (org.apache.zookeeper.server.quorum.Learner)
java.lang.Exception: shutdown Follower
at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:169)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:985)
[2020-11-24 14:59:59,999] INFO Shutting down (org.apache.zookeeper.server.quorum.FollowerZooKeeperServer)
[2020-11-24 14:59:59,999] INFO LOOKING (org.apache.zookeeper.server.quorum.QuorumPeer)
[2020-11-24 14:59:59,999] INFO New election. My id = 1, proposed zxid=0x1000001d2 (org.apache.zookeeper.server.quorum.FastLeaderElection)
[2020-11-24 14:59:59,999] INFO Notification: 1 (message format version), 1 (n.leader), 0x1000001d2 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state) (org.apache.zookeeper.server.quorum.FastLeaderElection)
It also writes to /opt/kafka/kafka_2.12-2.2.2/kafka.log:
[2020-12-05 16:51:10,109] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 16:51:10,109] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 16:51:10,109] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 16:51:10,110] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 17:01:09,528] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 17:11:09,528] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
Kafka is used for the Elastic Stack.
Below is the entry from the server.properties file:
# A comma separated list of directories under which to store log files
log.dirs=/var/log/kafka
It contains the following:
/var/log/kafka
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 heartbeat-1
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 __consumer_offsets-12
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 auditbeat-0
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 apm-2
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 __consumer_offsets-28
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 filebeat-2
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 __consumer_offsets-38
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 __consumer_offsets-44
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 __consumer_offsets-6
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 __consumer_offsets-16
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 metricbeat-0
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 __consumer_offsets-22
drwxr-xr-x 2 kafka users 4.0K Dec 5 16:51 __consumer_offsets-32
-rw-r--r-- 1 kafka users 747 Dec 5 18:02 recovery-point-offset-checkpoint
-rw-r--r-- 1 kafka users 4 Dec 5 18:02 log-start-offset-checkpoint
-rw-r--r-- 1 kafka users 749 Dec 5 18:03 replication-offset-checkpoint
No DEBUG-level logging is enabled in the files under the /opt/kafka/kafka_2.12-2.2.2/config path.
How can I make sure it doesn't create such huge files in /opt/kafka/kafka_2.12-2.2.2/logs, and how can I rotate them regularly with compression?
Thanks,
log.dirs is the actual broker storage, not the process logs, and therefore should not be in /var/log with other process logs.
Log volume like this is not unreasonable, but you can modify the log4j.properties file to keep only around 1 or 2 days of files via the rolling file appender.
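For example, a sketch of config/log4j.properties (Kafka 2.2 ships log4j 1.x; the size and count values below are illustrative) that swaps the default daily appender for a size-bounded one:
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.MaxFileSize=500MB
log4j.appender.kafkaAppender.MaxBackupIndex=4
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
Note that log4j 1.x does not compress rolled files itself; if you need compression, an external logrotate rule over /opt/kafka/kafka_2.12-2.2.2/logs is one option.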
Generally, as with any Linux administration task, you'd have separate disk volumes for /var/log, your OS storage, and any dedicated disks for server data, say a mount at /kafka.
Version: DSE 6.7.5, CQL spec 3.4.5.
I have 8GB commitlog_total_space_in_mb.
Folder is currently at 13GB.
Looking at the date stamps in the folder, it appears that it forgets about commitlogs, or it may be failing to delete the commitlogs when it flushes memtables.
Happens on multiple nodes.
-rw-r--r--. 1 cassandra cassandra 33554338 Sep 20 02:00 CommitLog-600-1568892978830.log
-rw-r--r--. 1 cassandra cassandra 33554227 Sep 20 02:02 CommitLog-600-1568892978853.log
-rw-r--r--. 1 cassandra cassandra 33554217 Sep 20 02:02 CommitLog-600-1568892978862.log
-rw-r--r--. 1 cassandra cassandra 33554337 Sep 20 02:03 CommitLog-600-1568892978863.log
-rw-r--r--. 1 cassandra cassandra 33554169 Sep 20 02:04 CommitLog-600-1568892978864.log
-rw-r--r--. 1 cassandra cassandra 33554412 Sep 20 08:19 CommitLog-600-1568892954896.log
-rw-r--r--. 1 cassandra cassandra 33554326 Sep 20 08:19 CommitLog-600-1568892954901.log
-rw-r--r--. 1 cassandra cassandra 33554133 Sep 20 08:20 CommitLog-600-1568892954904.log
-rw-r--r--. 1 cassandra cassandra 33554281 Sep 20 08:20 CommitLog-600-1568892954905.log
-rw-r--r--. 1 cassandra cassandra 33553885 Sep 20 08:20 CommitLog-600-1568892954906.log
When I perform a nodetool flush/drain, it will not remove any of the old files.
-rw-r--r--. 1 cassandra cassandra 33554338 Sep 20 02:00 CommitLog-600-1568892978830.log
-rw-r--r--. 1 cassandra cassandra 33554227 Sep 20 02:02 CommitLog-600-1568892978853.log
-rw-r--r--. 1 cassandra cassandra 33554217 Sep 20 02:02 CommitLog-600-1568892978862.log
-rw-r--r--. 1 cassandra cassandra 33554337 Sep 20 02:03 CommitLog-600-1568892978863.log
-rw-r--r--. 1 cassandra cassandra 33554169 Sep 20 02:04 CommitLog-600-1568892978864.log
-rw-r--r--. 1 cassandra cassandra 28 Sep 20 08:46 CommitLog-600-1568892981041.log
When I start the node back up, it goes through them and crashes around the final commitlog: https://pastebin.com/Kw9Kee5C
CassandraDaemon.java:129 - Exception in thread Thread[PerDiskMemtableFlushWriter_0:11,5,main] java.lang.AssertionError: null
It won't start back up unless I move some or all of the last commitlogs out.
What can I do to fix this problem?
I have resolved this issue for the time being by changing compaction to:
compaction = {'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
For some reason, having a cell type of map with the following compaction was causing the errors:
{'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 'compaction_window_size': '30', 'compaction_window_unit': 'DAYS', 'max_threshold': '32', 'min_threshold': '4', 'split_during_flush': 'true'}
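For reference, a sketch of how such a change is applied (keyspace and table names are placeholders):
ALTER TABLE my_keyspace.my_table
  WITH compaction = {'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'};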
I am writing a Spark dataframe to a CSV file using the code below:
println("Total number of reports: " + reportDf.count())
reportDf
.coalesce(1)
.write.format("com.databricks.spark.csv")
.csv("output/cluster.csv")
And the output is:
Total number of reports: 48720
spark#monikatest:~/output/cluster.csv$ ll
total 12
drwxrwxr-x 2 spark spark 4096 Mar 27 20:56 ./
drwxrwxr-x 3 spark spark 4096 Mar 27 20:56 ../
-rw-r--r-- 1 spark spark 0 Mar 27 20:56 _SUCCESS
-rw-r--r-- 1 spark spark 8 Mar 27 20:56 ._SUCCESS.crc
No data is written to the file; only the success file is present.
Can anyone please suggest how to overcome this error?
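As an aside, .csv(path) already implies the CSV format, so the .format("com.databricks.spark.csv") call is redundant, and the path names a directory that Spark fills with part files rather than a single CSV. A minimal sketch of the same write (assuming Spark 2.x; the header and mode options are illustrative):
reportDf
  .coalesce(1)                  // one partition -> one part file
  .write
  .option("header", "true")     // assumption: a header row is wanted
  .mode("overwrite")            // assumption: replace any previous output
  .csv("output/cluster.csv")    // a directory; data lands in part-*.csv inside it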
I am testing Kafka integration with Spark as a consumer. For debugging, I have set log.retention.minutes=2 in server.properties, which cleans up the .log file every 2 minutes. But the .index file is not cleaned up:
[cloudera#quickstart airline1-1]$ ls -l
total 0
-rw-r--r-- 1 root root 10485760 Apr 29 15:08 00000000000000000101.index
-rw-r--r-- 1 root root 0 Apr 29 15:08 00000000000000000101.log
-rw-r--r-- 1 root root 10485756 Apr 29 15:08 00000000000000000101.timeindex
Wondering why the .index files are not cleaned up. Any insight would be helpful to understand what's happening in the background.
Also, please share the recommended approach to clean up the log and index files during testing. I found many Google links suggesting: stop the Kafka server -> remove the topic partition files -> restart Kafka. But I'm not inclined towards this approach, as it could impact the offset state maintained in ZooKeeper.
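For what it's worth, a sketch of the broker-side settings involved (log.retention.check.interval.ms and log.index.size.max.bytes are standard broker properties; the values shown are the defaults, apart from the 2-minute retention from above):
log.retention.minutes=2
log.retention.check.interval.ms=300000
log.index.size.max.bytes=10485760
Kafka preallocates .index files to log.index.size.max.bytes and only trims them when the segment rolls, which is consistent with the active segment's index showing exactly 10485760 bytes.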
Thanks very much!
I was reading about HDFS and was wondering if there is any specific format in which the data in a block is arranged.
Suppose there is a file of 265 MB that is copied to a Hadoop cluster and the HDFS block size is 64 MB. So the file is broken into 5 parts: 64 MB + 64 MB + 64 MB + 64 MB + 9 MB, and distributed among data nodes. Correct?
Is there any format within the 64 MB block in which the data is stored?
If there is any format/structure in which the data is stored within the block, then the stored data should be less than 64 MB, since the data structure/header etc. itself would take some space.
Since the HDFS data node is a logical filesystem (it runs on top of Linux and there is no separate partition for HDFS), all the blocks should be stored as files in the Linux partition. Correct?
How can I find the name of the file on Linux that actually stores the 64 MB HDFS block?
If anyone can answer these doubts/questions, that would be great. Thanks in advance.
Regards,
(*Vipul)() ;
No, the data is just split on the 64 MB boundary. Metadata is stored in a small separate file and on the Namenode.
No, each block is exactly the size you specified, and the data is split on exact 64 MB boundaries. If you have 5 parts (64 MB + 64 MB + 64 MB + 64 MB + 9 MB), the last file would be 9 MB and all the others 64 MB.
Yes, the blocks are stored as files; each block is represented as a separate file, with a small amount of metadata stored in a separate .meta file.
To find the name of the Linux file that stores a given HDFS block, list the block IDs with:
hdfs fsck / -files -blocks -locations
Here's an example of how the block files are stored with 128MB block size:
-rw-r--r--. 1 hdfs hadoop 134217728 Jan 12 09:17 blk_1073741825
-rw-r--r--. 1 hdfs hadoop 1048583 Jan 12 09:17 blk_1073741825_1001.meta
-rw-r--r--. 1 hdfs hadoop 134217728 Jan 12 09:18 blk_1073741826
-rw-r--r--. 1 hdfs hadoop 1048583 Jan 12 09:18 blk_1073741826_1002.meta
-rw-r--r--. 1 hdfs hadoop 134217728 Jan 12 09:18 blk_1073741827
-rw-r--r--. 1 hdfs hadoop 1048583 Jan 12 09:18 blk_1073741827_1003.meta
-rw-r--r--. 1 hdfs hadoop 134217728 Jan 12 09:18 blk_1073741828
-rw-r--r--. 1 hdfs hadoop 1048583 Jan 12 09:18 blk_1073741828_1004.meta
-rw-r--r--. 1 hdfs hadoop 134217728 Jan 12 09:19 blk_1073741829
-rw-r--r--. 1 hdfs hadoop 1048583 Jan 12 09:19 blk_1073741829_1005.meta
-rw-r--r--. 1 hdfs hadoop 134217728 Jan 12 09:19 blk_1073741830
-rw-r--r--. 1 hdfs hadoop 1048583 Jan 12 09:19 blk_1073741830_1006.meta
-rw-r--r--. 1 hdfs hadoop 87776064 Jan 12 09:19 blk_1073741831
-rw-r--r--. 1 hdfs hadoop 685759 Jan 12 09:19 blk_1073741831_1007.meta
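To locate the underlying Linux file for one of these blocks on a datanode, the block ID reported by fsck can be matched against the datanode's data directory (a sketch; the actual path is whatever dfs.datanode.data.dir points to):
find /hadoop/hdfs/data -name 'blk_1073741825*'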