Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 3 years ago.
By default, an ext2/3/4 filesystem reserves 5% of its capacity so the system can keep running when disk space gets low.
I also believe it has something to do with reducing file fragmentation (I haven't been able to find concrete information about this, and I'm fairly new to this domain).
My question is: when do we need to keep these 5%, when can we reduce them to something like 1-2%, and when can we remove them entirely?
The elements I'm considering at the moment are the following:
The 5% rule was decided some 20 years ago, when the reserved size was rarely more than ~100 MB, which is totally different now; if we're only talking about space needed to execute commands and such, do we really need 20 GB?
Could it ever be a good idea to remove this reserved space entirely? If some of it is needed to limit fragmentation somehow, I believe we should at least keep 1-2% available
Is this space really only useful for partitions related to the root filesystem? I mean, if we have a separate partition for something under /home (personal data, database files, or anything else unrelated to the OS), this space may not be needed
I've seen more and more articles on the web about how to reduce the reserved blocks, so I believe it may not always be a bad idea, but I haven't found articles that explain in depth when it can or cannot be done, and what it exactly does and implies.
So if some of you could provide comprehensive information (as well as a simple answer to the questions above), I would be very thankful.
Those 5% are really kept so the root user can still log in and perform operations when the filesystem is full. And yes, you can decrease the amount (I have done this in the past) to 1-2%, depending on the disk size. The percentage is normally set when you create the filesystem, but for ext filesystems it can also be changed afterwards with tune2fs -m.
As for zeroing it: yes, that's also possible. But it would be wise to keep some space reserved for root on /, /root (or wherever the root user's home is), /tmp and possibly /var/tmp.
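The reserved percentage can be inspected and adjusted with tune2fs. A minimal sketch, run here against a scratch image file instead of a real device (substitute your actual device, e.g. /dev/sdb1, if you know what you're doing):

```shell
# Create a small scratch ext4 image so we don't touch a real device.
truncate -s 64M /tmp/scratch.img
mkfs.ext4 -q -F /tmp/scratch.img             # default: 5% reserved for root

# Lower the reserved-block percentage to 1% (works on live filesystems too).
tune2fs -m 1 /tmp/scratch.img

# Verify: compare reserved vs. total block counts.
tune2fs -l /tmp/scratch.img | grep -E 'Block count|Reserved block count'
```

Setting `-m 0` removes the reservation entirely; on a data-only partition that is usually safe, while / should keep at least a small percentage so root and system daemons can still write when users fill the disk.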
Closed 3 years ago.
According to kernel.org, it is possible to store dentries in trees instead of lists, but you need to enable this flag (EXT4_INDEX_FL) in the inode structure. Is this enabled by default, or do I have to format my partition with some flags?
I need to store lots of small files (the same old problem) of about 130 KB each, and I understood that this will help speed up lookups, and also that it is recommended to store those files in a two-level directory hierarchy. Is there anything else I need to consider so that this doesn't blow up if I want to store something close to 60,000,000 files of this kind? (Maybe other values for block size, or for the number of blocks per group?)
This option is referred to by the e2fsprogs suite as dir_index. It's enabled by default, and you can verify that it's enabled on a file system by running tune2fs -l DEVICE as root.
It is indeed recommended that you shard your files manually so that you don't have a huge number of files in the same directory. While using B-trees makes the operation O(log n) instead of O(n), for large numbers of files, the operation can still be expensive.
If you know you're going to be creating a large number of files, you can set the inode ratio to 4096 with the -i option; this will create a larger number of inodes so that you can hold more files. You can also see common settings for a large number of situations in /etc/mke2fs.conf.
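For the recommended two-level hierarchy, a common approach is to derive the directory from a hash of the file name so files spread evenly. A minimal sketch (the `store` root and the 2+2 hex split are arbitrary choices of mine, not anything mandated by ext4):

```shell
# Place a file under store/<aa>/<bb>/ where aabb are the first 4 hex digits
# of the md5 of its name: 256*256 = 65536 buckets, so 60M files average out
# to roughly 900 files per directory.
shard_path() {
    name=$1
    h=$(printf '%s' "$name" | md5sum | cut -c1-4)
    printf 'store/%s/%s/%s\n' "${h:0:2}" "${h:2:2}" "$name"
}

dest=$(shard_path "example-file-000001.bin")
mkdir -p "$(dirname "$dest")"       # create the bucket directories
echo "$dest"
```

Combined with `mkfs.ext4 -i 4096` at creation time, as the answer describes, this keeps both the per-directory entry counts and the inode supply comfortable.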
Closed 6 years ago.
I've seen that df -H --total gives me the total space available but only of mounted devices, and lsblk gives me the sizes of all storage devices, yet not how much space is available within them.
Is there a way I could see the sum total, available storage space of all devices, e.g. hard disks, thumb drives, etc., in one number?
The operation of mounting a medium makes the operating system analyze the file system.
Before a medium is mounted, it exists as a block device, and the only thing the OS might know about it is its capacity.
Other than that, it is just a stream of bytes not interpreted in any way. That stream of bytes very probably contains information about used and unused blocks, but, depending on the filesystem type, in very different places, so the OS cannot know it without mounting and analyzing the filesystem.
You could write a specific application that extracts that information, but I would consider that a form of temporarily mounting the file system. Standard Unix/Linux doesn't ship with such an application.
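For ext2/3/4 specifically, such an application effectively already exists in e2fsprogs: dumpe2fs reads the superblock straight off an unmounted block device. A sketch against a scratch image standing in for an unmounted device (substitute e.g. /dev/sdc1):

```shell
# Build a scratch ext4 image standing in for an unmounted device.
truncate -s 64M /tmp/unmounted.img
mkfs.ext4 -q -F /tmp/unmounted.img

# -h prints only the superblock, which includes free block and inode
# counts, without mounting anything.
dumpe2fs -h /tmp/unmounted.img | grep -E 'Free blocks|Free inodes|Block size'
```

Free blocks times block size gives the available bytes; other filesystem families need their own tools, which is exactly the per-filesystem knowledge the answer describes.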
From the df man page, I'd say "No", but the wording indicates that it may be possible on some systems/distributions with some version(s) of df.
The other problem is how things can be accessed. For example, the system I'm using right now has 3 160 GB disks in it... but df will show one of them at / and the other two as a software RAID-1 setup on /home.
Closed 8 years ago.
I am just wondering how a fork bomb works. I know there are similar questions, but the answers aren't quite what I am looking for (or maybe I just haven't come across the right one).
How does it work in terms of processes?
Do children keep being produced, which then replicate themselves? Is the only way to get out of it to reboot the system?
Are there any long lasting consequences on the system because of a fork bomb?
Thanks!
How does it work in terms of processes?
It creates so many processes that the system is not able to create any more.
Do children keep being produced and then replicating themselves?
Yes, fork in the name means replication.
Is the only way to get out of it to reboot the system?
No, it can be stopped by some automated security measures, e.g. limiting the number of processes per user.
Are there any long lasting consequences on the system because of a fork bomb?
A fork bomb itself does not change any data, but while it runs it can cause timeouts, unreachable services, or out-of-memory kills. A well-designed system should handle that but, well, reality may differ.
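The doubling mechanism can be demonstrated safely with a depth cap; unlike a real fork bomb, this sketch stops after a few generations (the function name and depth are arbitrary):

```shell
# Each call keeps running AND spawns a background copy of itself, so the
# population doubles per generation -- a fork bomb is this without the cap.
spawn() {
    depth=$1
    if [ "$depth" -gt 0 ]; then
        spawn $((depth - 1)) &      # the "replication" half
        spawn $((depth - 1))        # the original continues too
        wait                        # reap the background child
    else
        echo leaf
    fi
}

spawn 3 | grep -c leaf              # 2^3 = 8 processes reach the bottom
```

The safeguard mentioned above is the per-user process limit (ulimit -u, i.e. RLIMIT_NPROC): once it is hit, fork() fails with EAGAIN and the bomb stalls instead of exhausting the machine.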
Closed 5 years ago.
Last night I did quite an idiotic thing. In an attempt to erase a USB drive for my friend, I accidentally started
dd if=/dev/zero of=/dev/MyBootDrive
and killed the first couple of gigabytes of data on my disk. That data itself is absolutely not important; I killed a system that needed to be replaced anyway. But on that disk there is a significant amount of data that should be saved if at all possible.
So, is there any tool that could make me feel like less of an idiot than I obviously am, and save my data from a filesystem corrupted like that? I'm aware of some tools, but they usually recover deleted data, or handle cases where the partition table was changed.
Thanks
Well, ext4 replicates the superblock, so you can use tools like gpart to find the partition again.
Also, ext4 stores all the information necessary to read a block group at the beginning of that group, so theoretically it should be possible to restore all the preserved block groups.
It might work to run fsck and point it to a superblock that you might have found (or calculated where it might be).
However, when we last tried this, it didn't work for us (but we had written a new file system over the old one; /dev/zero might be better). We then tried to find files in the raw data, ignoring the file system, but could not recover much meaningful data. It is easier for multimedia files than for text files, though.
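The backup superblock locations can be listed, and fsck pointed at one, without writing anything first. A sketch using a scratch image in place of the damaged device (substitute e.g. /dev/sdb2):

```shell
# Scratch image standing in for the damaged partition.
truncate -s 64M /tmp/damaged.img
mkfs.ext4 -q -F /tmp/damaged.img

# mke2fs -n is a dry run: it prints where the superblock backups would
# live for this geometry, without touching the device.
mke2fs -n -F /tmp/damaged.img | grep -A1 'Superblock backups'
```

On the real device you would then run fsck.ext4 -b against one of the printed locations (e.g. fsck.ext4 -b 32768 /dev/sdb2), so e2fsck uses a backup superblock instead of the zeroed primary one.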
OK, I managed to rescue everything.
It was not magic, I was just plain lucky. I realized what I was doing and stopped the command after a bit more than one second, so I had only zeroed the first 1.4 GB of data. That was my boot disk, and naturally my / partition was the only one damaged, so every other partition was left intact. But since my partition table was destroyed, all I was able to see was an empty HDD. The first thing I tried was to recover the partitions with gpart, but to no avail.
After that I found this article. Using TestDisk I managed to save my /home partition and all data on it.
Now that everything is finished, I have to agree with the end of that article:
Well, that would be all. Forget recovery. It's so 70s. Go for backups!
Closed 8 years ago.
I'm facing a very ironic situation: the swap that is supposed to help me slows everything down, because the system starts to swap when 3 GB of RAM are used, well before reaching the 4 GB my computer has.
Is it possible to tell Debian not to swap until I reach x amount of RAM?
Edit: Problem solved.
That stupid swap still kicks in even with swappiness = 0.
So I completely disabled it with swapoff -a, and I enable it again when I need it with swapon -a.
You have a very common misconception that swapping makes your system slow. If that were true, nobody would swap. Your operating system is smart enough to swap only where it provides a benefit.
The tiny amount of swapping you're seeing is memory that likely has never been accessed and, believe it or not, the data swapped is probably still in RAM as well. Opportunistic swapping just permits the OS to discard the data without having to write it out, which may provide a benefit if there's a memory shortage later.
Your performance problems are likely unrelated to swap. This is a common misdiagnosis.
Needing to swap may make a system slow, but having the option to swap if it benefits you doesn't.
As David said, what you want is not possible.
But there is a way to configure how strongly Linux tends to swap: the kernel parameter vm.swappiness.
The default value is 60. At the extreme low value of zero, the kernel will only swap to avoid an out-of-memory condition.
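The parameter lives under /proc and can be changed at runtime with sysctl. A sketch (the value 10 is an arbitrary example; the commands that change it need root, so they are shown commented out):

```shell
# Current value (default is 60 on most distributions).
cat /proc/sys/vm/swappiness

# Change it for the running system (requires root):
#   sysctl vm.swappiness=10
# Make it persistent across reboots:
#   echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
```

Note that, as the edit above shows, a low swappiness only discourages swapping; swapoff -a is the only way to rule it out entirely.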