Tell Linux not to swap until a certain amount of RAM is used [closed]

I'm facing a very ironic situation: the swap that is supposed to help me slows everything down, because swapping starts when only 3 GB of the 4 GB of RAM my computer has are in use.
Is it possible to tell Debian not to swap until x amount of RAM is used?
Edit: problem solved.
That stupid swap still kicks in even with swappiness = 0.
So I disabled it completely with swapoff -a and re-enable it with swapon -a when I need it.
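For reference, a minimal sketch of toggling swap the way the edit describes (standard util-linux commands, run as root or via sudo):

    # Disable all swap areas; their pages are read back into RAM first
    sudo swapoff -a
    # Re-enable every swap area configured in /etc/fstab
    sudo swapon -a
    # Check what is currently active and how much is in use
    swapon --show
    free -h

Note that swapoff -a only succeeds if there is enough free RAM to absorb everything currently swapped out.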

You have a very common misconception: that swapping makes your system slow. If that were true, nobody would swap. Your operating system is smart enough to swap only where it provides a benefit.
The tiny amount of swapping you're seeing is memory that likely has never been accessed and, believe it or not, the data swapped is probably still in RAM as well. Opportunistic swapping just permits the OS to discard the data without having to write it out, which may provide a benefit if there's a memory shortage later.
Your performance problems are likely unrelated to swap. This is a common misdiagnosis.
Needing to swap may make a system slow, but having the option to swap if it benefits you doesn't.

As David said, exactly what you want is not possible.
But you can configure how strongly Linux tends to swap, via the kernel parameter vm.swappiness.
The default value is 60. At the extreme low value of 0, the kernel will swap only to avoid an out-of-memory condition.
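A minimal sketch of inspecting and changing it (the value 10 is only an example, not a recommendation; the sysctl.d file name is illustrative and may vary by distribution):

    # Read the current value
    cat /proc/sys/vm/swappiness
    # Change it until the next reboot
    sudo sysctl vm.swappiness=10
    # Make it persistent across reboots
    echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf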

Related

Space needed for reserved blocks on a partition [closed]

By default, the ext2/3/4 filesystems reserve 5% of their capacity so the system can keep running when disk space is getting low.
I also believe it has something to do with avoiding fragmentation between files, or something like that (I haven't been able to find concrete information about this, and I'm kind of a newbie in this domain).
My question is: when do we need to keep these 5%, when can we reduce it to something like 1-2%, and when can we remove it totally?
The elements I'm considering at the moment are the following:
The 5% rule was decided something like 20 years ago, when the reserved size was rarely more than ~100 MB, which is totally different now; if we're only talking about space needed to execute commands and such, do we really need 20 GB?
Could it ever be a good idea to remove this reserved space entirely? If some of it is needed to limit fragmentation somehow, I believe we should at least keep 1-2% available.
Is this space really only useful for partitions related in some way to root? I mean, if we have a partition for some folder in /home (something personal, data from a database, or anything else unrelated to the OS), this space may not be needed.
I've seen more and more articles on the web about how to reduce the reserved blocks, so I believe it's not a bad idea 100% of the time, but I haven't found articles that explain in depth when it can and cannot be done, and what exactly it does and implies.
So if some of you could provide comprehensive information (as well as a simple answer to the questions above), I would be very thankful.
Those 5% are really kept so the root user can still log in and perform some operations when the filesystem is full. And yes, you can decrease the amount (I have done this in the past) to 1-2%, depending on the disk size. Be aware that for many filesystems this must be defined at creation time and is hard (if possible at all) to change afterwards; for ext2/3/4, however, the percentage can also be changed later with tune2fs.
About setting it to zero: yes, that's also possible. But it would be wise to keep some space reserved for root on /, /root (or whatever the root user's home is), /tmp, and possibly /var/tmp.
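A minimal sketch for ext4 (the device name /dev/sdb1 is illustrative; adjust to your system):

    # Create a filesystem with 1% reserved blocks instead of the default 5%
    sudo mkfs.ext4 -m 1 /dev/sdb1
    # Change the reservation on an existing ext2/3/4 filesystem, even while mounted
    sudo tune2fs -m 1 /dev/sdb1
    # Inspect the current setting
    sudo tune2fs -l /dev/sdb1 | grep -i 'reserved block'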

See available space on all storage devices, mounted or unmounted, through a Linux command? [closed]

I've seen that df -H --total gives me the total space available, but only for mounted devices, and lsblk gives me the sizes of all storage devices, yet not how much space is available within them.
Is there a way I could see the sum total, available storage space of all devices, e.g. hard disks, thumb drives, etc., in one number?
Mounting a medium makes the operating system analyze its file system.
Before a medium is mounted, it exists only as a block device, and the only fact the OS knows about it may be its capacity.
Other than that, it is just a stream of bytes, not interpreted in any way. That stream of bytes very probably contains the information about used and unused blocks, but, depending on the file system type, in very different places; the OS therefore cannot know it without mounting and analyzing the device.
You could write a specific application that would extract that information, but I would consider that temporarily mounting the file system. Standard Unix/Linux doesn't come with such an application.
From the df man page, I'd say "No", but the wording indicates that it may be possible on some systems/distributions with some version(s) of df.
The other problem is how things can be accessed. For example, the system I'm using right now has three 160 GB disks in it... but df will show one of them at / and the other two as a software-based RAID-1 setup on /home.
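A minimal sketch of the workaround the first answer hints at: temporarily mount each unmounted filesystem read-only so df can inspect it (the device name and mount point are illustrative):

    # List all block devices with filesystem type and mount point
    lsblk -f
    # Mount an unmounted filesystem read-only at a scratch directory
    sudo mkdir -p /mnt/probe
    sudo mount -o ro /dev/sdc1 /mnt/probe
    # Now df can report its free space
    df -H /mnt/probe
    sudo umount /mnt/probe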

Why is SRAM commonly used for cache memory? [closed]

I am studying RAM, and I don't understand why static RAM is commonly used for cache memory.
A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM. †
The same reasoning is in this wiki.
Because it can be faster than dynamic RAM. It is also more expensive; otherwise it would be used everywhere.
The main reason is that dynamic RAM is slower. Cache exists for fast access, so static RAM is usually the better choice for cache memory.

linux memory swappiness: why not always 0? [closed]

Linux allows the user to change the system swappiness from 0 to 100. If set to 0, the kernel will avoid swapping and processes will be kept in memory as long as spare memory is available. Conversely, if set to 100, the kernel will swap aggressively. My question is: why not always set the swappiness to 0? As system users, we generally expect our programs to be kept in memory rather than swapped to disk. So I think setting the swappiness to 100 is meaningless, correct?
As said on another Stack Exchange site, having some swap is good. It frees up memory from processes that rarely use it so that more active processes have access to RAM. A swappiness level of about 60 is a good balance, as it frees up unused memory without dramatically hindering the performance of more active processes.
It all depends on how much RAM you have and will use.
Setting swappiness to 100 is an extreme situation, for sure. But one may always think of a scenario where this could be necessary.
The other extreme, setting swappiness to 0, seems like the best thing to do, but it isn't. Just for a start, what would happen if you had done this and a process needed more memory? A crash, probably.
Then, as @limecore said above, having some swap memory is always good.
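Before tuning either way, it is worth checking whether swap is actually being touched; a quick look (free and swapon --show are standard, and the VmSwap field appears in /proc/PID/status on kernels since 2.6.34):

    # How much swap is in use right now?
    free -h
    swapon --show
    # Per-process swap usage (nonzero entries only)
    awk '/^VmSwap/ && $2 > 0 {print FILENAME ": " $2 " " $3}' /proc/[0-9]*/status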

Why is a compressed kernel image used in Linux? [closed]

I have googled this question but couldn't find anything useful about why a compressed kernel image like bzImage or vmlinuz is used as the initial kernel image.
The possible explanation I could think of is:
Due to memory constraints?
But the compressed kernel image initially resides on the hard disk or some other storage medium, and at boot time, after the second-stage bootloader, the kernel is first decompressed into main memory and then executed.
So if the kernel is decompressed into main memory at a later stage anyway, what is the need to compress it first? I mean, if main memory can hold the decompressed kernel image, what is the need for kernel compression?
Generally the processor can decompress faster than the I/O system can read. By having less for the I/O system to read, you reduce the time needed for boot.
This assumption doesn't hold for all hardware combinations, of course. But it frequently does.
An additional benefit for embedded systems is that the kernel image takes up less space on the non-volatile storage, which may allow using a smaller (and cheaper) flash chip. Many of these systems have ~32 MB of system RAM and only ~4 MB of flash.
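Returning to the read-versus-decompress tradeoff: as a rough back-of-envelope illustration (all numbers hypothetical), reading a 24 MB bzImage at 100 MB/s takes ~0.24 s, reading the same kernel uncompressed at roughly 80 MB would take ~0.8 s, and a CPU that decompresses at several hundred MB/s adds only a few tenths of a second, so the compressed path still wins. You can get a feel for the two speeds on your own machine (paths are illustrative; vmlinuz itself is a boot image wrapping the compressed payload, not a plain gzip file):

    # How fast can the disk deliver the compressed image? dd reports MB/s
    sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
    sudo dd if=/boot/vmlinuz-"$(uname -r)" of=/dev/null bs=1M
    # How fast can the CPU decompress a gzip stream?
    time gzip -dc /tmp/sample.gz > /dev/null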
