Why do fdisk and lsblk show different partition sizes? - linux

The size of sdb2 is 39.5G in the fdisk output, whereas lsblk shows only 1K. What can be the reason for this?
Disk /dev/sdb: 300 GiB, 322122547200 bytes, 629145600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x674589c1
Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdb1  *       2048   999423   997376  487M 83 Linux
/dev/sdb2       1001470 83884031 82882562 39.5G  5 Extended
/dev/sdb5       1001472 83884031 82882560 39.5G 8e Linux LVM
root@ubuntu1604:~# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0                         2:0    1    4K  0 disk
sda                         8:0    0  300G  0 disk
sdb                         8:16   0  300G  0 disk
├─sdb1                      8:17   0  487M  0 part /boot
├─sdb2                      8:18   0    1K  0 part
└─sdb5                      8:21   0 39.5G  0 part
  ├─ubuntu1604--vg-root   252:0    0 35.5G  0 lvm  /
  └─ubuntu1604--vg-swap_1 252:1    0    4G  0 lvm  [SWAP]
sr0                        11:0    1 1024M  0 rom
root@ubuntu1604:~#

sdb2 is an extended partition (Id 5). An extended partition is only a container for logical partitions (here sdb5): the kernel exposes just the first 1 KiB of it, enough to hold the chain of extended boot records, which is why lsblk reports 1K. fdisk, on the other hand, reports the full 39.5G range that the extended partition spans in the partition table; that space is actually occupied by the logical partition sdb5 inside it. (Both tools print IEC-based sizes here, so 39.5G means 39.5 GiB in either output.)
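A quick way to confirm this is sysfs, which reports each block device's size in 512-byte sectors; on the machine above you should see something like:

root@ubuntu1604:~# cat /sys/class/block/sdb2/size
2          # 2 sectors x 512 bytes = 1 KiB, just the extended boot record area
root@ubuntu1604:~# cat /sys/class/block/sdb5/size
82882560   # matches the Sectors column that fdisk prints for sdb5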

Related

Advice on stopping compaction to reduce slowness

I am seeing high CPU and memory usage from Cassandra on the seed node. Is it advisable to stop compaction (nodetool stop) and re-enable it during off-peak hours? Should I run manual compaction or rely on autocompaction? I also see a lot of Native-Transport-Requests. I have three seed nodes; this is the first seed node.
Pool Name                      Active  Pending   Completed  Blocked  All time blocked
ReadStage                           0        0       54255        0                 0
MiscStage                           0        0           0        0                 0
CompactionExecutor                  2     2566      352765        0                 0
MutationStage                       0        0  2659921760        0                 0
MemtableReclaimMemory               0        0      180958        0                 0
PendingRangeCalculator              0        0          21        0                 0
GossipStage                         0        0      338375        0                 0
SecondaryIndexManagement            0        0           0        0                 0
HintsDispatcher                     0        0          63        0                 0
RequestResponseStage                0        1  1684328696        0                 0
Native-Transport-Requests           4        0  1538523706        0          47006391
ReadRepairStage                     0        0        2197        0                 0
CounterMutationStage                0        0           0        0                 0
MigrationStage                      0        0           0        0                 0
MemtablePostFlush                   1        1      216220        0                 0
PerDiskMemtableFlushWriter_0        1        1      180958        0                 0
ValidationExecutor                  0        0       33250        0                 0
Sampler                             0        0           0        0                 0
MemtableFlushWriter                 1        1      180958        0                 0
InternalResponseStage               0        0      141677        0                 0
ViewMutationStage                   0        0           0        0                 0
AntiEntropyStage                    0        0      166254        0                 0
CacheCleanupExecutor                0        0           0        0                 0
Repair#9                            0        0        5719        0                 0
I do see a high number of compactions. Is it advisable to disable compactions using nodetool stop?
$ nodetool info
ID : ebeda774-cea8-40bb-9322-69c6fcded5a9
Gossip active : true
Thrift active : true
Native Transport active: true
Load : 535.37 GiB
Generation No : 1636316595
Uptime (seconds) : 73152
Heap Memory (MB) : 19542.18 / 32168.00
Off Heap Memory (MB) : 1337.98
Data Center : us-west2
Rack : a
Exceptions : 15
Key Cache : entries 152283, size 23.07 MiB, capacity 100 MiB, 23835 hits, 280738 requests, 0.085 recent hit rate, 14400 save period in seconds
Row Cache : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 requests, NaN recent hit rate, 0 save period in seconds
Counter Cache : entries 0, size 0 bytes, capacity 50 MiB, 0 hits, 0 requests, NaN recent hit rate, 7200 save period in seconds
Chunk Cache : entries 6782, size 423.88 MiB, capacity 480 MiB, 23947952 misses, 24381819 requests, 0.018 recent hit rate, 250.977 microseconds miss latency
Percent Repaired : 0.49796724500672584%
Token : (invoke with -T/--tokens to see all 256 tokens)
$ free -h
       total   used   free  shared  buff/cache  available
Mem:     62G    53G   658M    1.0M        8.5G       8.5G
Swap:     0B     0B     0B
~$ nodetool compactionstats
pending tasks: 197
....
id                                    compaction type  keyspace    table   completed  total     unit   progress
5e555610-40b2-11ec-9b5a-27bc920e6e55  Compaction       mykeyspace  table1  27299674   89930474  bytes  30.36%
5e55f251-40b2-11ec-9b5a-27bc920e6e55  Compaction       mykeyspace  table2  13922048   74426264  bytes  18.71%
Active compaction remaining time : 0h00m02s
I would definitely not run compaction manually. Many of the compaction thresholds are file-size based, which means that forcing it creates files sized outside of the normal progression. The result is that the chances of compaction running on that table again are extremely slim. Basically, once you start down that path, you'll be running manual compactions forever.
I would also say that compaction is a good thing. You want it to happen, as compacted files are necessary to keep reads performing well. Of course, that's not much of a consolation when the compaction process is affecting operational activity.
tl;dr;
One thing I have done in the past is to lower compaction throughput during the day. I'm not sure what throughput you're running with currently, but you can find out by running nodetool getcompactionthroughput:
% bin/nodetool getcompactionthroughput
Current compaction throughput: 64 MB/s
So at the times when customer/operational traffic is high, you can reduce that significantly:
% bin/nodetool setcompactionthroughput 1
% bin/nodetool getcompactionthroughput
Current compaction throughput: 1 MB/s
1 MB / second is the lowest that compaction throughput can be set. If you set it to zero, it's "un-throttled," which means it'll consume all the resources that it can get at. Setting it to 1 brings its resource use (and speed) down to a trickle.
Once the busy daily traffic subsides, that setting can be turned back up:
% bin/nodetool setcompactionthroughput 256
% bin/nodetool getcompactionthroughput
Current compaction throughput: 256 MB/s
This can be accomplished with a scheduled job for each command.
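As a rough sketch, assuming cron is available and that your peak window runs 08:00 to 20:00 (both the window and the nodetool path are placeholders to adjust), the crontab entries could look like this:

# throttle compaction to a trickle when customer traffic ramps up (08:00)
0 8 * * * /path/to/cassandra/bin/nodetool setcompactionthroughput 1
# open it back up once the busy daily traffic subsides (20:00)
0 20 * * * /path/to/cassandra/bin/nodetool setcompactionthroughput 256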

kmalloc-256 seems to be taking most of the memory. How can I free it?

I have a Linux instance (Amazon Linux, Linux ip-xxx 4.9.20-11.31.amzn1.x86_64 #1) which runs Jenkins. It occasionally stops working because of a lack of the memory needed for a job.
Based on my investigation with the free command and /proc/meminfo, it seems that Slab is taking up most of the memory available on the instance.
[root@ip-xxx ~]# free -tm
             total       used       free     shared    buffers     cached
Mem:          7985       7205        779          0         19        310
-/+ buffers/cache:       6876       1108
Swap:            0          0          0
Total:        7985       7205        779
[root@ip-xxx ~]# cat /proc/meminfo | grep "Slab\|claim"
Slab:            6719244 kB
SReclaimable:      34288 kB
SUnreclaim:      6684956 kB
I found that I can purge the dentry cache by running echo 3 > /proc/sys/vm/drop_caches, but how can I purge kmalloc-256? Or, is there a way to find which process is using the kmalloc-256 memory space?
[root@ip-xxx ~]# slabtop -o | head -n 15
 Active / Total Objects (% used)    : 26805556 / 26816810 (100.0%)
 Active / Total Slabs (% used)      : 837451 / 837451 (100.0%)
 Active / Total Caches (% used)     : 85 / 111 (76.6%)
 Active / Total Size (% used)       : 6696903.08K / 6701323.05K (99.9%)
 Minimum / Average / Maximum Object : 0.01K / 0.25K / 8.00K

    OBJS   ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
26658528 26658288  99%    0.25K 833079       32   6664632K kmalloc-256
   21624    21009  97%    0.12K    636       34      2544K kernfs_node_cache
   20055    20055 100%    0.19K    955       21      3820K dentry
   10854    10646  98%    0.58K    402       27      6432K inode_cache
   10624     9745  91%    0.03K     83      128       332K kmalloc-32
    7395     7395 100%    0.05K     87       85       348K ftrace_event_field
    6912     6384  92%    0.02K     27      256       108K kmalloc-16
    6321     5581  88%    0.19K    301       21      1204K cred_jar
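A note on the drop_caches approach, plus a minimal way to watch the cache (standard procfs interfaces, not specific to this setup): echo 3 > /proc/sys/vm/drop_caches can only release reclaimable slab, and here SReclaimable is just 34288 kB while kmalloc-256 sits inside the ~6.7 GB of SUnreclaim, so dropping caches will not free it. To see whether the cache is still growing, you can sample it as root:

# columns are: name, active_objs, num_objs, objsize, ... (see man slabinfo)
watch -n 60 "grep '^kmalloc-256 ' /proc/slabinfo"

Steadily growing unreclaimable slab like this usually points at a kernel-side leak (often a driver or module) rather than at a user process, which is typically tracked down with kernel facilities such as slub_debug.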

How to find the highest disk space usage mount?

I'm looking for a command that shows only the mount with the highest disk space usage, i.e. the mount with the maximum Use%.
Running df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vx/dsk/appdg/boom-vol
1.0G 19M 943M 2% /opt/blah99
500G 349G 152G 70% /opt/blah/data
/dev/vx/dsk/isdg/boom-shared-vol
50G 1.6G 46G 4% /opt/blah99/product/shared
/dev/vx/dsk/isdg/boom-bc-vol
150G 64G 81G 45% /opt/blah99/product/a_01
/dev/vx/dsk/isdg/boom-bt-vol
150G 47G 98G 33% /opt/blah99/product/a_02
Output should be -
500G 349G 152G 70% /opt/blah/data
What you are looking for is:
df -h | grep -vw "^\/dev" | sort -k 5 -n | tail -n 2 | head -n 1
Output of df -h | grep -vw "^\/dev":
Filesystem Size Used Avail Use% Mounted on
1.0G 19M 943M 2% /opt/blah99
500G 349G 152G 70% /opt/blah/data
50G 1.6G 46G 4% /opt/blah99/product/shared
150G 64G 81G 45% /opt/blah99/product/a_01
150G 47G 98G 33% /opt/blah99/product/a_02
Sorting by column 5 in numeric order: df -h | grep -vw "^\/dev" | sort -k 5 -n:
50G 1.6G 46G 4% /opt/blah99/product/shared
1.0G 19M 943M 2% /opt/blah99
150G 47G 98G 33% /opt/blah99/product/a_02
150G 64G 81G 45% /opt/blah99/product/a_01
500G 349G 152G 70% /opt/blah/data
Filesystem Size Used Avail Use% Mounted on
Getting the second row from the end: df -h | grep -vw "^\/dev" | sort -k 5 -n | tail -n 2 | head -n 1:
500G 349G 152G 70% /opt/blah/data
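One caveat: because the long device names wrap onto their own lines, the Use% value is not always in the same field, and sort -k 5 -n only happens to produce a usable order here. If your df is GNU coreutils, a sketch that sidesteps the wrapping entirely (the --output column list is the only assumption) is:

df -h --output=size,used,avail,pcent,target | sort -k 4 -n | tail -n 1

With --output the columns are fixed, so the percentage is always field 4, and the header sorts first because it has no leading number, leaving the highest-usage mount on the last line.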

How to omit the heading in the df -k command on SunOS

Input: df -k
Output:
Filesystem            kbytes     used    avail capacity  Mounted on
/dev/dsk/c0t0d0s0   10332220   443748  9785150     5%    /
/devices                   0        0        0     0%    /devices
ctfs                       0        0        0     0%    /system/contract
proc                       0        0        0     0%    /proc
mnttab                     0        0        0     0%    /etc/mnttab
swap                45475864     1688 45474176     1%    /etc/svc/volatile
objfs                      0        0        0     0%    /system/object
sharefs                    0        0        0     0%    /etc/dfs/sharetab
/dev/dsk/c0t0d0s3   10332220  3513927  6714971    35%    /usr
I want to omit the first line (Filesystem kbytes used avail capacity Mounted on) from the output.
I used df -k | tail -n+2 on Linux to get exactly what I wanted, but on SunOS I get:
zenvo% df -k | tail -n+2
usage: tail [+/-[n][lbc][f]] [file]
tail [+/-[n][l][r|f]] [file]
How can I achieve the required output:
/dev/dsk/c0t0d0s0   10332220   443748  9785150     5%    /
/devices                   0        0        0     0%    /devices
ctfs                       0        0        0     0%    /system/contract
proc                       0        0        0     0%    /proc
mnttab                     0        0        0     0%    /etc/mnttab
swap                45475864     1688 45474176     1%    /etc/svc/volatile
objfs                      0        0        0     0%    /system/object
sharefs                    0        0        0     0%    /etc/dfs/sharetab
/dev/dsk/c0t0d0s3   10332220  3513927  6714971    35%    /usr
Note: the number of rows might change.
I know it's an old thread, but the shortest and the clearest of all:
df -k | sed 1d
I haven't used SunOS but using sed you should be able to delete the first line like this:
df -k | sed -e /Filesystem/d
edit: But you would have to be careful that the word Filesystem doesn't show up elsewhere in the output. A better solution would be:
df -k | sed -e /^Filesystem/d
If you want to omit the first line of any result, you can use tail:
<command> | tail -n +2
So in your case:
df -k | tail -n +2
Note that -n +2 is the GNU/POSIX syntax (https://man7.org/linux/man-pages/man1/tail.1.html); as the error message in the question shows, the old SunOS tail does not accept it, but it does accept the legacy form df -k | tail +2.
What about:
df -k | tail -$((`df -k | wc -l`-1))
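That works, but it runs df twice. A simpler portable sketch, assuming only that awk is present (it is, even on older Solaris), prints every line after the first:

# NR is the current line number; the pattern NR>1 skips line 1
df -k | awk 'NR>1'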

Finding Files With a Fixed File Size (>0) in Unix/Linux

I have a list of files that looks like this:
4 -rw-r--r-- 1 neversaint hgc0746 53 May 1 10:37 SRX016372-SRR037477.est_count
4 -rw-r--r-- 1 neversaint hgc0746 53 May 1 10:34 SRX016372-SRR037478.est_count
4 -rw-r--r-- 1 neversaint hgc0746 53 May 1 10:41 SRX016372-SRR037479.est_count
0 -rw-r--r-- 1 neversaint hgc0746 0 Apr 27 11:16 SRX003838-SRR015096.est_count
0 -rw-r--r-- 1 neversaint hgc0746 0 Apr 27 11:32 SRX004765-SRR016565.est_count
What I want to do is find files whose size is exactly 53 bytes. But why does this command fail?
$ find . -name "*.est_count" -size 53 -print
It works well, though, if I just want to find files of size 0 with this command:
$ find . -name "*.est_count" -size 0 -print
You need to suffix the size 53 with 'c'. As per find's manpage:
-size n[cwbkMG]
       File uses n units of space. The following suffixes can be used:

       `b'    for 512-byte blocks (this is the default if no suffix is used)
       `c'    for bytes
       `w'    for two-byte words
       `k'    for Kilobytes (units of 1024 bytes)
       `M'    for Megabytes (units of 1048576 bytes)
       `G'    for Gigabytes (units of 1073741824 bytes)

       The size does not count indirect blocks, but it does count blocks in
       sparse files that are not actually allocated. Bear in mind that the
       `%k' and `%b' format specifiers of -printf handle sparse files
       differently. The `b' suffix always denotes 512-byte blocks and never
       1 Kilobyte blocks, which is different to the behaviour of -ls.
-size n[ckMGTP]
       True if the file's size, rounded up, in 512-byte blocks is n. If n is
       followed by a c, then the primary is true if the file's size is n
       bytes (characters). Similarly, if n is followed by a scale indicator,
       then the file's size is compared to n scaled as:

       k      kilobytes (1024 bytes)
       M      megabytes (1024 kilobytes)
       G      gigabytes (1024 megabytes)
       T      terabytes (1024 gigabytes)
       P      petabytes (1024 terabytes)
You need to use -size 53c.
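So the command from the question, with the byte suffix added (same filenames as above), becomes:

$ find . -name "*.est_count" -size 53c -print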
This is what I get on Mac OS 10.5:
> man find
...
-size n[c]
       True if the file's size, rounded up, in 512-byte blocks is n. If n is
       followed by a c, then the primary is true if the file's size is n
       bytes (characters).
