btrfs raid1 with multiple devices [closed] - linux

I have 6 devices: 4TB, 3TB, 2TB, 2TB, 1.5TB, 1TB (/dev/sda to /dev/sdf).
First question:
With RAID-1 I'd have:
2TB mirrored on the other 2TB disk,
1TB mirrored on 0.5TB of the 4TB disk + 0.5TB of the 3TB disk,
1.5TB mirrored on 1.25TB of the 4TB disk + 0.25TB of the 3TB disk,
and the remaining 2.25TB of the 3TB disk mirrored on the remaining 2.25TB of the 4TB disk.
My total usable size in that case would be (4 + 3 + 2 + 2 + 1.5 + 1) / 2 = 13.5 / 2 = 6.75TB.
Will $ mkfs.btrfs --data raid1 --metadata raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf provide me with approximately 6.75TB? If yes, how many disks (and which ones?) can I afford to lose?
Second question:
With the RAID-1 layout above I can afford, for example, to lose three disks:
one 2TB disk,
the 1TB disk and
the 1.5TB disk,
without losing data.
How can I have the same freedom to lose those disks with btrfs?
Thanks!

Btrfs distributes the data (and its RAID 1 copies) block-wise, so it deals very well with hard disks of different sizes. You will get roughly the sum of all disk capacities divided by two, and you do not need to think about how to combine the disks into similarly sized pairs.
If more than one disk fails, you are always in danger of losing data: RAID 1 cannot survive losing two disks at the same time. In your example above, if the wrong two disks die, you lose data.
Btrfs can increase the chance of losing data when more than one disk fails: because it distributes blocks somewhat randomly, it is more likely that some blocks are stored only on the two failed devices. On the other hand, if you do lose data, you will probably lose less, for the same reason. On average it adds up to the same expected amount of lost data, but if you care about the chance of losing even a single bit, you are worse off with btrfs.
Then again, you should also consider its advantage of checksumming, which helps against data that gets corrupted on disk.
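As a rough sketch of what that looks like in practice (the mount point /mnt/pool, the replacement device /dev/sdg and the devid 6 are placeholders picked for illustration, not part of the original question):
# Create the filesystem with both data and metadata mirrored (RAID 1).
mkfs.btrfs --data raid1 --metadata raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# Mount any member device and check how much space is really usable;
# "Free (estimated)" already accounts for the RAID 1 overhead.
mount /dev/sda /mnt/pool
btrfs filesystem usage /mnt/pool
# If one device dies, mount degraded and replace the dead member
# (referenced here by its devid) with a new disk.
mount -o degraded /dev/sda /mnt/pool
btrfs replace start 6 /dev/sdg /mnt/pool
After replacing a device it is worth running btrfs scrub start /mnt/pool, so the checksums mentioned above can verify the rebuilt copies.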

Related

Swap data of /dev/sda2 and /dev/sda3 [closed]

To begin with, let me lay out the exact context. The link (because I am low on reputation) is a screenshot of the partitions on my laptop's hard disk: Hard disk filesystem partitions /dev/sda
As should be evident from the screenshot, /dev/sda2 is a pre-existing partition which has now been formatted as a clean btrfs filesystem, and /dev/sda3 has ParrotOS on it.
Now I wish to give the whole of the disk space from /dev/sda2 and /dev/sda3 to ParrotOS without losing a single iota of the existing data on /dev/sda3. As per the software used here (GParted), partitions can only be extended if they have empty unallocated space after them, so there is no apparent option to directly unallocate /dev/sda2 and move /dev/sda3 in front of it. Or is there?
Could someone kindly help me, at least, to move everything from /dev/sda3 so that I can unallocate it and merge the two into one single large partition?
If sda2 and sda3 are the same size (the low-level partition size, that is, not the filesystem size; you can check that with, say, fdisk), then you can copy the binary content of sda3 into sda2 with something as simple as:
sudo dd if=/dev/sda3 of=/dev/sda2
After that is done, sda2 will be an exact image of sda3. Just make sure that neither sda3 nor sda2 is mounted, so that no operations are going on against them while the copy runs. When the copy has finished, you should be able to mount sda2 and see what you had in sda3. Once that is confirmed, you can remove sda3 so that sda2 can be extended. This is not risk free, by the way, and some adjustments will need to be made, for example to /etc/fstab (among other things).
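A minimal sketch of that workflow, keeping the partition names from the question (best run from a live environment so neither partition is in use; the bs=4M block size is just a sensible choice, not a requirement):
# Confirm both partitions have the same low-level size in bytes.
sudo blockdev --getsize64 /dev/sda2
sudo blockdev --getsize64 /dev/sda3
# Make sure neither partition is mounted, then copy block for block.
sudo umount /dev/sda2 /dev/sda3 2>/dev/null
sudo dd if=/dev/sda3 of=/dev/sda2 bs=4M status=progress conv=fsync
# Check that the copy mounts and looks right before touching sda3.
sudo mount /dev/sda2 /mnt
ls /mnt
sudo umount /mnt
Note that a dd clone also carries over the original filesystem UUID, which is one of the adjustments (fstab, bootloader) alluded to above.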

How to disable the oom killer in linux? [closed]

My current configs are:
> cat /proc/sys/vm/panic_on_oom
0
> cat /proc/sys/vm/oom_kill_allocating_task
0
> cat /proc/sys/vm/overcommit_memory
1
but when I run a task, it's killed anyway.
> ./test/mem.sh
Killed
> dmesg | tail -2
[24281.788131] Memory cgroup out of memory: Kill process 10565 (bash) score 1001 or sacrifice child
[24281.788133] Killed process 10565 (bash) total-vm:12601088kB, anon-rss:5242544kB, file-rss:64kB
Update
My tasks are used for scientific computing, which consumes a lot of memory; it seems that overcommit_memory=1 may be the best choice.
Update 2
Actually, I'm working on a data analysis project which needs more than 16G of memory, but I was asked to limit it to about 5G. It is probably impossible to meet this requirement by optimizing the program itself, because the project uses many sub-commands, and most of them do not have options like Java's -Xms or -Xmx.
Update 3
It seems my project needs an overcommitting system. Exactly as a3f said, my apps prefer to crash in xmalloc when a memory allocation fails.
> cat /proc/sys/vm/overcommit_memory
2
> ./test/mem.sh
./test/mem.sh: xmalloc: .././subst.c:3542: cannot allocate 1073741825 bytes (4295237632 bytes allocated)
I don't want to give up, although so many awful tests have left me exhausted.
So please show me a way to the light ; )
The OOM killer won't go away. If there is no memory, someone's got to pay. What you can do is set a limit after which memory allocations fail.
That's exactly what setting vm.overcommit_memory to 2 achieves.
From the docs:
The Linux kernel supports the following overcommit handling modes
2 - Don't overcommit. The total address space commit for the system
is not permitted to exceed swap + a configurable amount (default is
50%) of physical RAM. Depending on the amount you use, in most
situations this means a process will not be killed while accessing
pages but will receive errors on memory allocation as appropriate.
Normally, the kernel will happily hand out virtual memory (overcommit). Only when you actually reference a page does the kernel have to map it to a real physical frame. If it can't service that request, a process needs to be killed by the OOM killer to make space.
Disabling overcommit means that e.g. malloc(3) will return NULL if the kernel couldn't commit the amount of memory requested. This makes things a bit more predictable, albeit limiting (many applications allocate more than they will ever need).
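A minimal sketch of switching to strict accounting; the sysctl names are the ones from the answer, while the 80% ratio is only an illustrative value:
# Disable overcommit: total commits are capped at swap + overcommit_ratio% of RAM.
sudo sysctl vm.overcommit_memory=2
sudo sysctl vm.overcommit_ratio=80
# Make the setting survive reboots.
echo 'vm.overcommit_memory = 2' | sudo tee -a /etc/sysctl.conf
echo 'vm.overcommit_ratio = 80' | sudo tee -a /etc/sysctl.conf
# Watch the resulting commit limit versus what is currently committed.
grep -E 'CommitLimit|Committed_AS' /proc/meminfo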
The possible values of oom_adj range from -17 to +15. The higher the score, the more likely the associated process is to be killed by the OOM killer. If oom_adj is set to -17, the process is not considered for OOM killing at all.
But increasing RAM is the better choice; if increasing RAM is not possible, then add swap memory.
To increase swap memory, try this link.
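A minimal sketch of both suggestions; the 4G size, the /swapfile path and the PID 10565 (taken from the dmesg output above) are only placeholders:
# Add a 4G swap file.
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Shield a process from the OOM killer via the legacy oom_adj interface (-17 disables it).
echo -17 | sudo tee /proc/10565/oom_adj
# On current kernels the preferred knob is oom_score_adj (-1000 disables it).
echo -1000 | sudo tee /proc/10565/oom_score_adj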

Does Linux have a page file? [closed]

I have read in several places that Linux uses pages and a paging mechanism, but I could not find anywhere where this file is or how to configure it.
All the information I found is about the Linux swap file / partition. There is a difference between paging and swapping:
Paging moves individual pages (a small frame which contains a piece of data, usually 4 KB, though it can vary between OSes) from main memory to backing storage, and happens continuously as a normal function of the operating system.
Swapping moves an entire process to storage and happens when the system is memory-stressed, or on Windows 8 when a new application is hibernating.
Does Linux use its swap file / partition for both cases?
If so, how can I see how many pages are currently paged out? This information is not in the vmstat, free or swapon output (or I am failing to see it).
Or is there another file used for paging?
If so, how can I configure it (and watch its usage)?
Or perhaps Linux does not use paging at all and I was misled?
I would appreciate answers specific to Red Hat Enterprise Linux versions 6 and 7, but a general answer covering all Linux distributions would also be good.
Thanks in advance.
On Linux, the swap partition(s) are used for paging.
Linux does not respond to memory pressure by swapping out whole processes. The virtual memory system does demand paging, page by page. Under extreme memory pressure, one or more processes will be killed by the OOM killer. (There are some useful links to documentation in the first NOTE in man malloc)
There is a line in the top header which shows swap partition usage, but if that is all the information you want, use
swapon -s
man swapon for more information.
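A short sketch of the usual ways to watch paging activity (sar needs the sysstat package, which may not be installed by default):
# Per-device swap usage.
swapon -s
# Cumulative count of pages swapped in and out since boot.
grep -E 'pswpin|pswpout' /proc/vmstat
# Live view: the si/so columns show memory swapped in/out per second.
vmstat 1
# Paging statistics (page-ins, page-outs, faults) every second, five samples.
sar -B 1 5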
The swap partition usage is not the same as the number of pages that have been paged out. A page might be memory-mapped to a file using the mmap call; since that page has backing store in the file, there is no need to also write it to a swap partition, and the system won't use swap space for it. But swap partition usage is a pretty good indicator.
Also note that Linux (unlike Windows) does not allocate swap space for pages when they are allocated. Instead, it adds the new page to the virtual memory map without any backing store, and allocates the swap space only when the page needs to be swapped out. The consequence (as described in the malloc manpage referenced earlier) is that a malloc call may succeed in allocating virtual memory, but a subsequent attempt to use that virtual memory may fail.
Although Linux retains the term 'swap partition' as a historical relic, it actually performs paging. So your expectation is borne out; you were just thrown by the archaic terminology.

mhddfs does not support file splitting if the file size exceeds the capacity of a single storage device [closed]

I'm using mhddfs to combine multiple drives that are mounted over network using NFS.
e.g.
There are three machines
Server Name Dir Space
Server 1 /home 10 GB Space
Server 2 /home 10 GB Space
Server 3 /home 10 GB Space
Using NFS I mounted the following:
Server 1 /home to Server 3 /home/mount1
Server 2 /home to Server 3 /home/mount2
Then, using mhddfs, I merged (unified) mount1 and mount2, e.g.:
mhddfs /home/server/mount1,/home/server/mount2 /home/server/mount
Now I have 30 GB of space altogether, but when I try to write a file larger than 10 GB into the mount directory, it fails.
It seems mhddfs can't split a large file (e.g. a 20 GB file) so that it can be stored across the drives.
Please give me an idea of how I can achieve this.
This is an inherent limitation of mhddfs. It works by simply combining the contents of the underlying devices into a single directory and storing each new file on whichever drive has sufficient free space. Since no single drive in your system can actually hold a 20 GB file, the resulting merged filesystem cannot store one either.
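If the data does not have to live in one single file, one possible workaround (just a sketch with standard coreutils; the file name and the 9G chunk size are made up for the example) is to split the file before storing it and stream it back together on demand:
# Split into 9 GB pieces so each piece fits on a single 10 GB member drive.
split -b 9G bigfile.dat /home/server/mount/bigfile.dat.part.
# mhddfs places each piece on whichever underlying drive has room.
ls -lh /home/server/mount/bigfile.dat.part.*
# Stream the pieces back together when the whole file is needed,
# e.g. piping into whatever consumes it (shown here with md5sum).
cat /home/server/mount/bigfile.dat.part.* | md5sum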

Cheap way to improve I/O [closed]

I saw http://www.youtube.com/watch?v=96dWOEa4Djs from http://www.joelonsoftware.com/items/2009/03/27.html and was amazed by the improvement.
I have a good workstation (Sun Ultra M4, 2 AMD Opteron, 8GB RAM, NVidia FX 1500) and it feels as fast as... any other computer in the city (except when rendering).
I blame Windows for it (I can't use Linux because I run 3D Max), but now I wonder if it is possible to improve the I/O.
I run VMs (1-3 at a time), 3D Max, Photoshop and Python, plus some video encoding and stuff like that.
I don't have enough money to buy an SSD and I have 2 SATA drives. What can I do? Is it possible to mount a RAM drive on Windows? How do I use it?
Have you thought about using a RAID array? You can get some decent I/O improvements from a RAID-0 configuration.
Although I must ask: are you sure your bottleneck is disk I/O and not memory or CPU? In my experience, disk I/O has traditionally been the last bottleneck on a machine (especially on large-scale machines); more often than not, memory, poor use of page files and CPU throughput have been the tension points.
Sounds like you're probably CPU bound. All the programs you listed depend heavily on memory and CPU rather than disk speed. Since it looks like you have plenty of memory, I'm guessing it's mostly the CPU slowing you down.
If you really do wish to improve your disk performance without spending much money, you can try putting your disks in a RAID-0 setup. This will make your computer treat them as one large storage volume and speed things up by reading from both disks simultaneously. Keep in mind that this also increases the likelihood that you will lose data, since the RAID volume could become corrupt or one of the disks could fail (causing the data on both disks to be lost).
Alternately, you can try buying a faster disk drive. Newegg sells Western Digital Raptor drives (Currently the fastest SATA non-SSD disks available) from between $100-$150 (after rebate) http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=40000014&Description=raptor&name=Internal%20Hard%20Drives. These could give you a 20-30%+ boost in disk IO depending on how good your current drives are.
Go for UltraSCSI if you want more disk I/O bandwidth. But do not measure your disk speed by looking at how fast programs load. Better disk subsystems and/or configurations (such as RAID) are only useful for transferring large data blocks, e.g. video/audio editing, not for loading operating system files or application executables.
Did you scan your computer for spyware? ;)
