linux memory swappiness: why not always 0? [closed]

Linux allows the user to change the system swappiness from 0 to 100. If set to 0, the kernel will avoid swapping as much as possible, keeping processes in memory as long as spare memory is available. Conversely, if set to 100, the kernel will swap aggressively. My question is: why not always set the swappiness to 0? As system users, we would always expect our programs to be kept in memory rather than swapped to disk. So I think setting the swappiness to 100 is meaningless, correct?

As said on another Stack Exchange site, having some swap is good. It frees up memory from processes that rarely use it, so that more active processes have access to RAM. A swappiness level of about 60 is a good balance, as it frees up unused memory without dramatically hindering the performance of more active processes.
It all depends on how much RAM you have and will use.

Setting swappiness to 100 is an extreme choice, for sure, but one can always think of a scenario where it could be necessary.
The other extreme, setting swappiness to 0, seems to be the best thing to do, but it isn't. For a start, what would happen if you had done this and a process needed more memory than is physically available? Probably an out-of-memory kill.
Then, as @limecore said above, having some swap is always good.
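To see where your system stands before tuning anything, you can read the current value from procfs. A minimal sketch in C, equivalent to cat /proc/sys/vm/swappiness:

```c
#include <stdio.h>

int main(void) {
    /* The kernel exposes the current swappiness as a plain text file. */
    FILE *f = fopen("/proc/sys/vm/swappiness", "r");
    if (!f) {
        perror("fopen /proc/sys/vm/swappiness");
        return 1;
    }
    int value = 0;
    if (fscanf(f, "%d", &value) == 1)
        printf("vm.swappiness = %d\n", value);
    fclose(f);
    return 0;
}
```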

Related

Why is SRAM commonly used in cache memory? [closed]

I am studying RAM, and I don't understand why static RAM is commonly used for cache memory.
A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM. †
The same reasoning is in this wiki.
Because it is faster than dynamic RAM. It is also more expensive; otherwise it would be used everywhere.
The main reason is that dynamic RAM is slower. Cache exists for fast access, so static RAM is usually the better choice for cache memory.
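The quoted claim that caching pays off because programs reuse the same data can be demonstrated directly. A hypothetical micro-benchmark in C (the buffer sizes and miss behaviour are illustrative assumptions, not measurements from any particular machine):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SMALL (16UL * 1024)          /* ~16 KB: fits in a typical L1 cache */
#define LARGE (64UL * 1024 * 1024)   /* 64 MB: far exceeds any cache level */

/* Touch `total` bytes of `buf`, jumping by a prime stride so the large
   run defeats the hardware prefetcher and is served mostly from DRAM. */
static unsigned long touch(const volatile char *buf, size_t len, size_t total) {
    unsigned long sum = 0;
    for (size_t i = 0; i < total; i++)
        sum += buf[(i * 4099) % len];
    return sum;
}

int main(void) {
    char *small = calloc(SMALL, 1);
    char *large = calloc(LARGE, 1);
    if (!small || !large)
        return 1;
    size_t total = 256UL * 1024 * 1024;   /* identical work for both runs */

    clock_t t0 = clock();
    touch(small, SMALL, total);   /* stays hot in SRAM cache */
    clock_t t1 = clock();
    touch(large, LARGE, total);   /* mostly misses, hits DRAM */
    clock_t t2 = clock();

    printf("cache-resident: %.2fs  DRAM-bound: %.2fs\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(small);
    free(large);
    return 0;
}
```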

fork() bomb explanation in terms of processes? [closed]

I am just wondering how a fork bomb works. I know there are similar questions, but the answers aren't quite what I am looking for (or maybe I just haven't come across one).
How does it work in terms of processes?
Do children keep being produced and then replicating themselves? Is the only way to get out of it to reboot the system?
Are there any long-lasting consequences on the system because of a fork bomb?
Thanks!
How does it work in terms of processes?
It creates so many processes that the system is not able to create any more.
Do children keep being produced and then replicating themselves?
Yes; the fork in the name refers to the fork() system call, which replicates the calling process.
Is the only way to get out of it to reboot the system?
No, it can be stopped by automated security measures, e.g. limiting the number of processes per user.
Are there any long-lasting consequences on the system because of a fork bomb?
A fork bomb itself does not change any data, but while it runs it can cause timeouts, unreachable services, or out-of-memory kills. A well-designed system should handle that, but reality may differ.
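To make the mechanics concrete, here is the classic fork bomb as a minimal C sketch. Do not run this outside a disposable VM: on each pass through the loop, both the parent and the newly created child keep forking, so the process count grows exponentially until fork() starts failing.

```c
#include <unistd.h>

int main(void) {
    for (;;)
        fork();   /* parent and child both continue the loop */
}
```

The automated measure mentioned above is typically a per-user process limit (RLIMIT_NPROC), set with ulimit -u in the shell or via /etc/security/limits.conf. Once the limit is reached, fork() fails with EAGAIN and the bomb stops growing, at which point the processes can be killed.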

What does 128 threads per processor mean for a supercomputer? [closed]

This article about YARC mentions that the supercomputer has 128 threads per processor.
Is this the same concept as hyper-threading, where the CPU essentially has additional register sets that allow it to act as multiple processors?
Yes, the physical processor will have many register sets representing virtual CPUs ("threads").
What 128 threads let the physical processor do is put a virtual processor to sleep on an external delay (e.g., a memory access) and switch to another virtual processor that is not waiting for anything. This means the physical CPU almost always has work to do, so it is highly efficient. The latency of memory access is hidden, so it can be highly non-uniform.
To harness such a system, you need an application program which is highly parallel. If your application has only a few parallel elements, such a processor will not have enough threads that are not waiting on memory accesses, and so it will not be efficient. Then all those virtual CPUs are simply wasted resources (worse, they produce extra heat).
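A toy model of this scheduling style (my own illustrative sketch in C, not the actual YARC design): each cycle the core round-robins over its hardware thread contexts, skipping any context stalled on a pretend memory access. With 128 contexts the core almost never idles; shrink NCTX to 4 and idle cycles dominate.

```c
#include <stdbool.h>
#include <stdio.h>

#define NCTX 128   /* hardware thread contexts (try 4 to see the core starve) */

struct hwctx {
    long pc;           /* per-context "register set": here just a counter */
    int stall_cycles;  /* >0 while this context waits on memory */
};

int main(void) {
    struct hwctx ctx[NCTX] = {0};
    int issued = 0, idle = 0;

    for (int cycle = 0; cycle < 100000; cycle++) {
        bool found = false;
        for (int i = 0; i < NCTX && !found; i++) {
            struct hwctx *c = &ctx[(cycle + i) % NCTX];
            if (c->stall_cycles > 0)
                continue;             /* still waiting on memory */
            c->pc++;                  /* issue one instruction */
            if (c->pc % 4 == 0)       /* pretend every 4th op misses cache */
                c->stall_cycles = 100;
            issued++;
            found = true;
        }
        if (!found)
            idle++;                   /* no runnable context this cycle */
        for (int i = 0; i < NCTX; i++)
            if (ctx[i].stall_cycles > 0)
                ctx[i].stall_cycles--;
    }
    printf("issued %d instructions, %d idle cycles\n", issued, idle);
    return 0;
}
```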

Tell Linux to not swap until a certain amount of RAM [closed]

I'm facing a very ironic situation: the swap that is supposed to help me slows everything down, because swapping starts when only 3GB of the 4GB of RAM my computer has are used.
Is it possible to tell Debian not to swap until I reach x amount of RAM?
Edit: Problem solved.
That stupid swap still kicks in even with swappiness = 0.
So I completely disabled it with swapoff -a and re-enable it when I need it with swapon -a.
You have a very common misconception that swapping makes your system slow. If that were true, nobody would swap. Your operating system is smart enough to swap only where that provides a benefit.
The tiny amount of swapping you're seeing is memory that likely has never been accessed and, believe it or not, the data swapped is probably still in RAM as well. Opportunistic swapping just permits the OS to discard the data without having to write it out, which may provide a benefit if there's a memory shortage later.
Your performance problems are likely unrelated to swap. This is a common misdiagnosis.
Needing to swap may make a system slow, but having the option to swap if it benefits you doesn't.
As David said, what you want is not possible.
But you can configure how strongly Linux tends to swap via the kernel parameter vm.swappiness.
The default value is 60. At the extreme low value of 0, the kernel will swap only to avoid an out-of-memory condition.
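For illustration, a minimal sketch of changing the parameter programmatically by writing the procfs file (requires root; the usual route is sysctl vm.swappiness=10, or a line in /etc/sysctl.conf to persist across reboots):

```c
#include <stdio.h>

int main(void) {
    /* Writing the proc file takes effect immediately but does not persist. */
    FILE *f = fopen("/proc/sys/vm/swappiness", "w");
    if (!f) {
        perror("fopen /proc/sys/vm/swappiness (are you root?)");
        return 1;
    }
    fprintf(f, "10\n");   /* prefer reclaiming caches over swapping programs */
    fclose(f);
    return 0;
}
```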

why is a compressed kernel image used in linux? [closed]

I have googled this question but couldn't find anything useful about why a compressed kernel image like bzImage or vmlinuz is used as the initial kernel image.
A possible explanation I could think of: due to memory constraints?
But the compressed kernel image initially resides on the hard disk or some other storage medium, and during boot, after the second-stage bootloader, the kernel is first decompressed into main memory and then executed.
So if the kernel ends up decompressed in main memory anyway, what is the need to compress it first? I mean, if main memory can hold the decompressed kernel image, why compress it at all?
Generally the processor can decompress faster than the I/O system can read. By having less for the I/O system to read, you reduce the time needed for boot.
This assumption doesn't hold for all hardware combinations, of course. But it frequently does.
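A rough back-of-envelope illustration (all numbers are hypothetical): suppose the uncompressed kernel is 8 MB, it compresses to 3.5 MB, the boot-time storage reads at 20 MB/s, and the CPU decompresses at an effective 100 MB/s. Loading the uncompressed image takes 8/20 = 400 ms. Loading the compressed one takes 3.5/20 = 175 ms of I/O plus 8/100 = 80 ms of decompression, about 255 ms total. The slower the storage is relative to the CPU, the bigger the win.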
An additional benefit for embedded systems is that the kernel image takes up less space on the non-volatile storage, which may allow using a smaller (and cheaper) flash chip. Many of these systems have ~ 32MB of system RAM and only ~ 4MB of flash.
