FreeNAS HDD slot info in the disk bay - freebsd

diskinfo -v /dev/da14 gives this output:
id1,enc@n500304801727e03d/type@0/slot@5/elmdesc@Slot04 # Physical path
Which one is actually the slot number of disk da14 in the disk bay?
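One way to cross-check the mapping on FreeBSD 11 or later (which recent FreeNAS builds are based on) is sesutil, which lists each enclosure element's descriptor next to the attached device and can blink a bay's locate LED:
sesutil map              # list enclosure elements, their descriptors and attached devices
sesutil locate da14 on   # blink the locate LED of the bay holding da14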

Related

PostgreSQL out of memory: Linux OOM killer

I am having issues with a large query, which I suspect comes down to wrong settings in my postgresql.conf. My setup is PostgreSQL 9.6 on Ubuntu 17.10 with 32GB RAM and a 3TB HDD. The query runs pgr_dijkstraCost to create an OD matrix of ~10,000 points in a network of 25,000 links. The resulting table is thus expected to be very big (~100,000,000 rows with columns from, to, costs). However, a simple test such as select x, 1 as c2, 2 as c3
from generate_series(1,90000000) succeeds.
The query plan:
QUERY PLAN
--------------------------------------------------------------------------------------
 Function Scan on pgr_dijkstracost  (cost=393.90..403.90 rows=1000 width=24)
   InitPlan 1 (returns $0)
     ->  Aggregate  (cost=196.82..196.83 rows=1 width=32)
           ->  Seq Scan on building_nodes b  (cost=0.00..166.85 rows=11985 width=4)
   InitPlan 2 (returns $1)
     ->  Aggregate  (cost=196.82..196.83 rows=1 width=32)
           ->  Seq Scan on building_nodes b_1  (cost=0.00..166.85 rows=11985 width=4)
This leads to a crash of PostgreSQL:
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
normally and possibly corrupted shared memory.
Running dmesg, I could trace it down to an out-of-memory issue:
Out of memory: Kill process 5630 (postgres) score 949 or sacrifice child
[ 5322.821084] Killed process 5630 (postgres) total-vm:36365660kB,anon-rss:32344260kB, file-rss:0kB, shmem-rss:0kB
[ 5323.615761] oom_reaper: reaped process 5630 (postgres), now anon-rss:0kB,file-rss:0kB, shmem-rss:0kB
[11741.155949] postgres invoked oom-killer: gfp_mask=0x14201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD), nodemask=(null), order=0, oom_score_adj=0
[11741.155953] postgres cpuset=/ mems_allowed=0
When running the query I can also observe with top that my RAM goes down to 0 before the crash. The amount of committed memory just before the crash:
$ grep Commit /proc/meminfo
CommitLimit: 18574304 kB
Committed_AS: 42114856 kB
I would expect the HDD to be used to write/buffer temporary data when RAM is not enough, but the available space on my HDD does not change during processing. So I began to dig for missing configs (expecting issues due to my relocated data directory), following different sites:
https://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
https://www.credativ.com/credativ-blog/2010/03/postgresql-and-linux-memory-management
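For reference, the overcommit settings those pages discuss are plain kernel sysctls; the values below are illustrative, see the linked docs before applying anything:
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=90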
My original postgresql.conf settings are the defaults, except for the changed data directory:
data_directory = '/hdd_data/postgresql/9.6/main'
shared_buffers = 128MB # min 128kB
#huge_pages = try # on, off, or try
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
#work_mem = 4MB # min 64kB
#maintenance_work_mem = 64MB # min 1MB
#replacement_sort_tuples = 150000 # limits use of replacement selection sort
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB # min 100kB
dynamic_shared_memory_type = posix # the default is the first option
I changed the config:
shared_buffers = 128MB
work_mem = 40MB # min 64kB
maintenance_work_mem = 64MB
I relaunched with sudo service postgresql reload and tested the same query, but found no change in behavior. Does this simply mean that such a large query cannot be done? Any help appreciated.
I'm having similar trouble, but not with PostgreSQL (which is running happily): what is happening is simply that the kernel cannot allocate more RAM to the process, whichever process it is.
It would certainly help to add some swap to your configuration.
To check how much RAM and swap you have, run: free -h
On my machine, here is what it returns:
total used free shared buff/cache available
Mem: 7.7Gi 5.3Gi 928Mi 865Mi 1.5Gi 1.3Gi
Swap: 9.4Gi 7.1Gi 2.2Gi
You can clearly see that my machine is quite overloaded: about 8GB of RAM and 9GB of swap, of which 7GB are used.
When the RAM-hungry process got killed after Out of memory, I saw both RAM and swap being used at 100%.
So, allocating more swap may alleviate the problem.
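For example, a swap file can be created and enabled like this (size and path are illustrative):
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
To make it permanent, add a matching entry to /etc/fstab.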

Reserved, used, available, total disk space in Linux

Is there any single Linux command or single system call through which I can get all 4 values (total, used, free, reserved)?
I have checked following:
df: does not give reserved disk space
stat(): does not give reserved disk space
statfs(): gives total and free only
I tried using "tune2fs -l /dev/vda1" for reserved ,but there is some discrepancy between outputs of tune2fs and df command.The total is not coming as sum of used,free and reserved.
Do a calculation with the df output:
reserved = total - used - free
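For example, with GNU df (the device argument is illustrative; -P keeps each filesystem on one line):
df -P --block-size=1K /dev/vda1 | awk 'NR==2 { print "reserved (1K blocks):", $2 - $3 - $4 }'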

The beginning and end address of the Flash memory

I am trying to run Linux on an Arduino Yun board. The Arduino board contains an Atheros AR9331 chipset.
In U-Boot, these are the steps I am doing:
1- Download the kernel:
ar7240> tftp 0x80060000 openwrt-ar71xx-generic-uImage-lzma.bin;
Load address: 0x80060000
Loading: #################################################################
#################################################################
#################################################################
#################################################################
######################
done
Bytes transferred = 1441863 (160047 hex)
2- Erase Flash in order to copy the kernel:
ar7240> erase 0x9fEa0000 +0x160047
Error: end address (0xa0000046) not in flash!
Bad address format
This is the problem: it seems that 0x9fEa0000 + 0x160047 exceeds the total size of the flash.
So my questions are:
1- How can I figure out the total amount of memory reserved for the flash in U-Boot (from which address it starts and where it ends)? I am thinking about changing 0x9fEa0000 to a lower address, but I am afraid I could harm other things.
This is the output of the help:
ar7240> help
? - alias for 'help'
boot - boot default, i.e., run 'bootcmd'
bootd - boot default, i.e., run 'bootcmd'
bootm - boot application image from memory
cp - memory copy
erase - erase FLASH memory
help - print online help
md - memory display
mm - memory modify (auto-incrementing)
mtest - simple RAM test
mw - memory write (fill)
nm - memory modify (constant address)
ping - send ICMP ECHO_REQUEST to network host
printenv- print environment variables
progmac - Set ethernet MAC addresses
reset - Perform RESET of the CPU
run - run commands in an environment variable
setenv - set environment variables
tftpboot- boot image via network using TFTP protocol
version - print monitor version
2- Is there someone experienced with the Atheros AR9331 chipset who can help me find the flash mapping (where it starts and ends) from the datasheet?
You can determine the flash layout from the kernel boot command line. Either run the printenv command in u-boot or boot into the existing kernel and look through the boot log. You need to find something like the following:
(There are plenty of guides on the internet, I took this one from https://finninday.net/wiki/index.php/Arduino_yun, your board may or may not be the same).
linino> printenv
bootargs=console=ttyATH0,115200 board=linino-yun mem=64M rootfstype=squashfs,jffs2 noinitrd mtdparts=spi0.0:256k(u-boot)ro,64k(u-boot-env)ro,14656k(rootfs),1280k(kernel),64k(nvram),64k(art),15936k@0x50000(firmware)
bootcmd=bootm 0x9fea0000
This means there are the following partitions:
u-boot 0 to 256K (0x0 - 0x40000)
u-boot-env 256k to 320k (0x40000 - 0x50000)
rootfs (squashfs) 320k to 14976k (0x50000 - 0xea0000)
kernel 14976k to 16256k (0xea0000 - 0xfe0000)
nvram 16256k to 16320k (0xfe0000 - 0xff0000)
art 16320k to 16384k (0xff0000 - 0x1000000)
The rootfs partition is 14M, which is much larger than the rootfs image file (less than 8MB), so in theory you can move the kernel image to a lower address. For this you will need to modify the kernel boot line in the u-boot environment block (the rootfs and kernel partition sizes) and the bootcmd parameter, so that U-Boot knows where the new kernel is located.
Flash is mapped at 0x9f000000, so the value in bootcmd should be 0x9f000000 plus the kernel's offset in bytes; with the layout above, 0x9f000000 + 0xea0000 = 0x9fea0000, which matches the existing bootcmd.
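For illustration, if rootfs were shrunk to 8192k, the kernel would start at 320k + 8192k = 8512k = offset 0x850000, and the change would look roughly like this (the sizes are hypothetical and must fit your actual images; nvram and art stay pinned at their original offsets via the @offset syntax):
ar7240> setenv bootargs 'console=ttyATH0,115200 board=linino-yun mem=64M rootfstype=squashfs,jffs2 noinitrd mtdparts=spi0.0:256k(u-boot)ro,64k(u-boot-env)ro,8192k(rootfs),1280k(kernel),64k@0xfe0000(nvram),64k@0xff0000(art)'
ar7240> setenv bootcmd 'bootm 0x9f850000'
Note that the help output above does not list a saveenv command, so these changes may not survive a reset on this particular U-Boot build.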
What I am not sure about is whether there is an overlay filesystem for any persistent changes to the flash. Can you boot into the existing system and post the output of df -h and cat /proc/mounts?

Intel NVMe drive performance degradation with XFS filesystem with sector size other than 4096

I am working with an NVMe card on Linux (Ubuntu 14.04).
I am seeing some performance degradation on an Intel NVMe card when it is formatted with the XFS filesystem at its default sector size (512), or any other sector size less than 4096.
In the experiment I formatted the card with the XFS filesystem with default options and ran fio with a 64k block size on an arm64 platform with a 64k page size.
This is the command used:
fio --rw=randread --bs=64k --ioengine=libaio --iodepth=8 --direct=1 --group_reporting --name=Write_64k_1 --numjobs=1 --runtime=120 --filename=new --size=20G
I could get only the values below:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=281670KB/s, minb=281670KB/s, maxb=281670KB/s, mint=74454msec, maxt=74454msec
Disk stats (read/write):
nvme0n1: ios=326821/8, merge=0/0, ticks=582640/0, in_queue=582370, util=99.93%
I tried formatting as follows:
mkfs.xfs -f -s size=4096 /dev/nvme0n1
then the values were:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=781149KB/s, minb=781149KB/s, maxb=781149KB/s, mint=26847msec, maxt=26847msec
Disk stats (read/write):
nvme0n1: ios=326748/7, merge=0/0, ticks=200270/0, in_queue=200350, util=99.51%
I find no performance degradation with:
a 4k page size
any fio block size smaller than 64k
ext4 with default configs
What could be the issue? Is this an alignment issue? What am I missing here? Any help appreciated.
The issue is your SSD's native sector size is 4K. So your file system's block size should be set to match so that reads and writes are aligned on sector boundaries. Otherwise you'll have blocks that span 2 sectors, and therefore require 2 sector reads to return 1 block (instead of 1 read).
If you have an Intel SSD, the newer ones have a variable sector size you can set using their Intel Solid State Drive DataCenter Tool. But honestly 4096 is still probably the drive's true sector size anyway and you'll get the most consistent performance using it and setting your file system to match.
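To check what sector sizes the drive currently reports (the device name nvme0n1 is assumed):
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/nvme0n1
cat /sys/block/nvme0n1/queue/physical_block_size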
On ZFS on Linux the setting is ashift=12 for 4K blocks.
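For example (pool and device names are placeholders):
zpool create -o ashift=12 tank /dev/nvme0n1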

How to get rid of "Some devices missing" in BTRFS after reuse of devices?

I have been playing around with BTRFS on a few drives I had lying around. At first I created BTRFS using the entire drive, but eventually I decided I wanted to use GPT partitions on the drives and recreated the filesystem I needed on the partitions that resulted. (This was so I could use a portion of each drive as Linux swap space, FYI.)
When I got this all done, BTRFS worked a treat. But I have annoying messages saying that I still have some old filesystems from my previous experimentation, which I have actually nuked. I worry this means that BTRFS is confused about what space on the drives is available, or that some sort of corruption might occur.
The messages look like this:
$ sudo btrfs file show
Label: 'x' uuid: 06fa59c9-f7f6-4b73-81a4-943329516aee
Total devices 3 FS bytes used 159.20GB
devid 3 size 931.00GB used 134.01GB path /dev/sde
*** Some devices missing
Label: 'root' uuid: 5f63d01d-3fde-455c-bc1c-1b9946e9aad0
Total devices 4 FS bytes used 1.13GB
devid 4 size 931.51GB used 1.03GB path /dev/sdd
devid 3 size 931.51GB used 2.00GB path /dev/sdc
devid 2 size 931.51GB used 1.03GB path /dev/sdb
*** Some devices missing
Label: 'root' uuid: e86ff074-d4ac-4508-b287-4099400d0fcf
Total devices 5 FS bytes used 740.93GB
devid 4 size 911.00GB used 293.03GB path /dev/sdd1
devid 5 size 931.51GB used 314.00GB path /dev/sde1
devid 3 size 911.00GB used 293.00GB path /dev/sdc1
devid 2 size 911.00GB used 293.03GB path /dev/sdb1
devid 1 size 911.00GB used 293.00GB path /dev/sda1
As you can see, I have an old filesystem labeled 'x' and an old one labeled 'root', and both of these have "Some devices missing". The real filesystem, the last one shown, is the one that I am now using.
So how do I clean up the old "Some devices missing" filesystems? I'm a little worried, but mostly just OCD and wanting to tidy up this messy output.
Thanks.
To wipe disks that are NOT part of your wanted BTRFS FS, I found:
How to clean up old superblock?
...
To actually remove the filesystem use:
wipefs -o 0x10040 /dev/sda
8 bytes [5f 42 48 52 66 53 5f 4d] erased at offset 0x10040 (btrfs)
from: https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#I_can.27t_mount_my_filesystem.2C_and_I_get_a_kernel_oops.21
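On newer util-linux you can also list the detected signatures first and then erase only one type (the device name is an example; double-check you are targeting a disk that is not part of your live filesystem):
wipefs /dev/sdX              # list all detected signatures and their offsets
wipefs -a -t btrfs /dev/sdX  # erase only the btrfs signatures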
I actually figured this out for myself. Maybe it will help someone else.
I poked around in the code to see what was going on. When the btrfs filesystem show command is used to show all filesystems on all devices, it scans every device and partition listed in /proc/partitions. Each device and each partition is examined to see whether a BTRFS "magic number" and an associated valid root data structure can be found at offset 0x10040 from the beginning of the device or partition.
I then used hexedit on a disk that was showing up wrong in my own situation, and sure enough there was a BTRFS magic number (which is the ASCII string _BHRfS_M) left over from my previous experiments.
I simply nailed that magic number by overwriting a couple of the characters of the string with "**", also using hexedit, and the erroneous entries magically disappeared!
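If you just want to check for the magic string without a full hex editor, something like this works (the device name is an example):
hexdump -C -s 0x10040 -n 8 /dev/sdX
If a stale superblock is present, the output shows the bytes 5f 42 48 52 66 53 5f 4d (_BHRfS_M).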
