When I run nodetool clearsnapshot I get the normal "Requested clearing snapshot(s)" message, but the snapshot is never removed. What can I do to troubleshoot why this is occurring? Is it acceptable for me to just manually remove the snapshot directories from the table data directories as a workaround?
nodetool clearsnapshot 1472489985541
Requested clearing snapshot(s) for [1472489985541]
nodetool listsnapshots | awk '{print $1}' | grep ^1 | sort -u
1472489985541
1473165734236
1473690660090
1474296554367
> Is it acceptable for me to just manually remove the snapshot directories from the table data directories as a workaround for this?
Yes, you can always safely remove the snapshot directories manually; they are just hard links to the actual SSTables.
In order to delete a snapshot from all keyspaces using the snapshot name, you must specify the -t flag in your clearsnapshot command.
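For the snapshot in question that would be something like the following (the find alternative assumes the default data directory /var/lib/cassandra/data):
# clear the snapshot by name across all keyspaces; a bare argument (as in the
# question) is treated as a keyspace name, which is why nothing was removed
nodetool clearsnapshot -t 1472489985541
# manual alternative: delete the snapshot directories by hand
find /var/lib/cassandra/data -type d -path "*/snapshots/1472489985541" -prune -exec rm -rf {} +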
Related
I am not able to identify what is causing my EC2 instance's disk space to reach 100% of capacity.
I have a script which deletes files in the tmp folder, but my disk still randomly reaches 100% capacity sometimes.
I have attached the output of df -i to show disk utilization.
Error
PM2 | Error: ENOSPC: no space left on device, write
PM2 | at Object.writeSync (fs.js:679:3)
PM2 | at Object.writeFileSync (fs.js:1393:26)
PM2 | at ProcessContainer (/usr/lib/node_modules/pm2/lib/ProcessContainer.js:70:10)
PM2 | at Object.<anonymous> (/usr/lib/node_modules/pm2/lib/ProcessContainer.js:103:3)
PM2 | at Module._compile (internal/modules/cjs/loader.js:999:30)
I am using the command df -i to check inode usage:
(screenshots of the df -i and du output were attached)
du -h -d 1
Check the user's .pm2/logs directory; if your Node app produces errors or a lot of regular log output, this can increase the disk space used.
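For example, something along these lines shows how much space the PM2 logs take and empties them:
du -sh ~/.pm2/logs    # size of PM2's log directory for the current user
pm2 flush             # truncate all logs managed by PM2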
I think that 8 GB is too small. You should upgrade your server to allocate more space; this should solve your problem.
If you can't, or if you don't want to add disk space, you can take a look at the /var/log directory and delete some old logs. In the long term, you can use logrotate to compress log files and upload the compressed ones to another place in order to keep /var/log as small as possible.
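As a sketch (the /var/log/myapp path and the myapp rule name are placeholders, not from your setup), a logrotate rule like this keeps a few compressed archives:
# write a minimal logrotate rule; adjust the path to your application's logs
sudo tee /etc/logrotate.d/myapp >/dev/null <<'EOF'
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
EOF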
UPDATE
Also, I am not a specialist in Ubuntu and snap, but your /snap directory is 2.1 GB in size. You can check whether snap retains old versions of snap packages or whether there is some cache that can be cleared.
Here is a bash script to remove old snap revisions that I found here: https://www.debugpoint.com/clean-up-snap/
#!/bin/bash
#Removes old revisions of snaps
#CLOSE ALL SNAPS BEFORE RUNNING THIS
set -eu
LANG=en_US.UTF-8 snap list --all | awk '/disabled/{print $1, $3}' |
while read snapname revision; do
    snap remove "$snapname" --revision="$revision"
done
You can also delete the files in /var/lib/snapd/cache; it's a snap cache that can be cleared.
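Something along these lines should do it (the refresh.retain option needs a reasonably recent snapd):
sudo rm -f /var/lib/snapd/cache/*          # safe: snapd re-downloads packages on demand
sudo snap set system refresh.retain=2      # keep only 2 revisions per snap going forward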
But as I said, I'm not an Ubuntu specialist, so this is untested.
You can use the du utility:
cd /
du -h -d 1
It will show the disk usage for every folder in /; you can then cd into the biggest ones and repeat the same.
You can also run
du | sort -n
and you'll get (after a while) the size of every folder in the filesystem, ordered by ascending size. In my experience, I'd take a first look at /home, /tmp and /var.
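A variation that finds the biggest directories in one pass (needs GNU sort for the -h flag):
# -x keeps du on a single filesystem; sort -rh orders human-readable sizes, largest first
du -xh / 2>/dev/null | sort -rh | head -n 20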
I want to take a Cassandra backup every hour and move it to a shared location.
Cassandra takes the snapshot in the default location; how can I take the snapshot into the /opt/backup location instead?
You can't (with snapshots).
nodetool snapshot -t <tag> <keyspace> is quite a simple tool: it just creates hard links for every file in your keyspace data directories under snapshots/<tag>.
Since these are hard links, they have to be on the same filesystem. The benefit of those hard links is that taking a snapshot is quite fast and doesn't initially consume additional disk space (when SSTables get compacted or deleted, the files remain in the snapshot).
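You can see the hard-link behaviour yourself. With the demo keyspace used below, something like this lists each snapshot data file with its link count (second column of ls -li); a count above 1 means the snapshot entry and a live SSTable are the same file on disk:
find /var/lib/cassandra -path "*demokeyspace*/snapshots/*" -name "*-Data.db" -exec ls -li {} +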
If you want those backups in a different location, use -t <tag> while creating your snapshot and copy the files out afterwards. I made up a demo with demosnapshot and a simple script (not fully elaborated, but it shows the idea):
$ cat cassandrabackup.sh
#!/bin/bash
# Tag the snapshot with the current date/time so every run is unique
TAG=$(date +%Y%m%d%H%M%S)
BACKUP_LOC=/tmp/backup/$(hostname)
KEYSPACE=demokeyspace

mkdir -p "$BACKUP_LOC"

echo "creating snapshot $TAG"
nodetool snapshot -t "$TAG" "$KEYSPACE"

echo "sync to backup location $BACKUP_LOC"
# List the snapshot files relative to the data dir (NUL-separated) and rsync them out
find /var/lib/cassandra -type f -path "*snapshots/$TAG*" -printf '%P\0' | \
    rsync -avP --files-from=- --from0 /var/lib/cassandra/ "$BACKUP_LOC"

echo "removing snapshot $TAG"
nodetool clearsnapshot -t "$TAG"
The script creates a snapshot with a specific tag (the datetime), rsyncs the contents to a backup location, and then removes the snapshot. If KEYSPACE is not defined, all keyspaces are backed up.
Result is like this:
$ ./cassandrabackup.sh
creating snapshot 20170823132936
Requested creating snapshot(s) for [demokeyspace] with snapshot name [20170823132936] and options {skipFlush=false}
Snapshot directory: 20170823132936
sync to backup location /tmp/backup/host1.domain.tld
building file list ...
6 files to consider
data1/
data1/demokeyspace/
data1/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/
data1/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots/
data1/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots/20170823132936/
data1/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots/20170823132936/manifest.json
13 100% 0.00kB/s 0:00:00 (xfr#1, to-chk=0/6)
sent 305 bytes received 50 bytes 710.00 bytes/sec
total size is 13 speedup is 0.04
removing snapshot 20170823132936
Requested clearing snapshot(s) for [all keyspaces] with snapshot name [20170823132936]
$ find /tmp/backup/
/tmp/backup/
/tmp/backup/host1.domain.tld
/tmp/backup/host1.domain.tld/data2
/tmp/backup/host1.domain.tld/data2/demokeyspace
/tmp/backup/host1.domain.tld/data2/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823
/tmp/backup/host1.domain.tld/data2/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots
/tmp/backup/host1.domain.tld/data2/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots/20170823125951
/tmp/backup/host1.domain.tld/data2/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots/20170823125951/manifest.json
/tmp/backup/host1.domain.tld/data2/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots/20170823130014
/tmp/backup/host1.domain.tld/data2/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots/20170823130014/manifest.json
/tmp/backup/host1.domain.tld/data1
/tmp/backup/host1.domain.tld/data1/demokeyspace
/tmp/backup/host1.domain.tld/data1/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823
/tmp/backup/host1.domain.tld/data1/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots
/tmp/backup/host1.domain.tld/data1/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots/20170823132936
/tmp/backup/host1.domain.tld/data1/demokeyspace/demotable-0bbb579087ef11e7aa786377cd3ba823/snapshots/20170823132936/manifest.json
$
As I made that mistake myself in the past: include the hostname in the backups ;)
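To get the hourly cadence from the question, the script can be driven by cron. A minimal sketch; the script path and log file are assumptions:
# add via crontab -e: run the backup at the top of every hour
0 * * * * /opt/scripts/cassandrabackup.sh >> /var/log/cassandra-backup.log 2>&1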
Apart from that, there is also an incremental backup feature in Cassandra:
http://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsBackupIncremental.html
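If you go that route, incremental backups have to be switched on first; a minimal sketch, assuming a reasonably recent Cassandra version:
# permanently: set incremental_backups: true in cassandra.yaml and restart,
# or toggle it at runtime:
nodetool enablebackup
nodetool statusbackup    # check whether incremental backups are running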
I have a cluster which I am considering enabling incremental repair on. If anything goes wrong I'd like to disable incremental repair on every node. How do I do that?
Turn the node off and use sstablerepairedset to remove the repair time for each SSTable so that they will all be candidates for future compactions:
find '/path/cassandra/data/keyspace/table/' -iname "*Data.db*" > sstables.txt
sudo -u cassandra sstablerepairedset --is-unrepaired -f sstables.txt
Then just go back to using repair without -inc, or in later versions use the -full flag.
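For example, on newer versions (the keyspace name is a placeholder):
nodetool repair -full my_keyspace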
I am using JConsole for monitoring Cassandra. I can get values such as how much load each keyspace has.
I want to find out the disk space usage for each node in a cluster remotely.
Is there any way to do so?
A shell script can do the trick:
for i in node1_ip node2_ip ... nodeN_ip
do
    ssh user@$i "du -sh /var/lib/cassandra/data" >> /tmp/disk_usage.txt
done
Replace /var/lib/cassandra/data if your data folder is located somewhere else.
On running this query:
{ "start_absolute":1359695700000, "end_absolute":1422853200000,
"metrics":[{"tags":{"Building_id":["100"]},"name":"meterreadings","group_by":[{"name":"time","group_count":"12","range_size":{"value":"1","unit":"MONTHS"}}],"aggregators":[{"name":"sum","align_sampling":true,"sampling":{"value":"1","unit":"Months"}}]}]}
I am getting the following response:
500 {"errors":["Too many open files"]}
Here in this link it is written that I should increase the size of file-max.
My file-max output is:
cat /proc/sys/fs/file-max
382994
It is already very large; do I need to increase its limit?
What version are you using? Are you using a lot of group-by in your queries?
You may need to restart KairosDB as a workaround.
Can you check if you have deleted (ghost) file handles? (Replace <PID> with the KairosDB process ID in the command line below.)
ls -l /proc/<PID>/fd | grep kairos_cache | grep -v '(delete)' | wc -l
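It can also help to compare that count with the per-process limit, which is usually what a "Too many open files" error hits before the system-wide fs.file-max does:
grep 'Max open files' /proc/<PID>/limits    # limit for the KairosDB process
ls /proc/<PID>/fd | wc -l                   # file descriptors it currently holds open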
There was a fix in 0.9.5 for unclosed file handles.
There's a fix pending for the next release (1.0.1).
cf. https://github.com/kairosdb/kairosdb/pull/180, https://github.com/kairosdb/kairosdb/issues/132, and https://github.com/kairosdb/kairosdb/issues/175.