Linux free command meaning [closed]

I have some research to do about Linux. One of the questions is the following:
'Is it possible that on a running system the output of the command free in the free column is the same for the first 2 rows? How can you force this?'
I googled around, and I believe I've found the meaning of these values.
If I add buffers and cached together and add that to free, I get the free value of the buffers/cache row, i.e. the memory that applications could use if they required it.
So I suppose the values could be the same for those two rows; it depends on the usage. But I have no idea how I could force this.

The difference between the first and the second row is that the first row does not count caches and buffers under the free column, while the second one does. Here is an example:
                             total       used       free     shared    buffers     cached
    [1] Mem:               4028712    2972388    1056324          0     315056     835360
    [2] -/+ buffers/cache:            1821972    2206740
used[2] = used[1] - buffers - cached
free[2] = free[1] + buffers + cached
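Plugging the numbers from the example into these formulas confirms the relationship:

    used[2] = 2972388 - 315056 - 835360 = 1821972
    free[2] = 1056324 + 315056 + 835360 = 2206740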
So the answer to your question is: it is possible for these two rows to be identical (or at least very close to each other), but it is not likely on a real system, as it requires you to free or exhaust all of the cache. If you are willing to experiment, try some of the suggestions for dropping the cache (a sketch follows below), or write a program that eats up all available RAM.
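Here is a minimal sketch of the drop-cache approach, assuming a kernel that exposes /proc/sys/vm/drop_caches (Linux 2.6.16 and later) and root privileges:

    # Flush dirty pages to disk so that clean caches can be dropped
    sync
    # Drop the page cache, dentries and inodes (3 = all of them)
    echo 3 | sudo tee /proc/sys/vm/drop_caches
    # buffers and cached should now be near zero, so the free column
    # of the first two rows should be almost identical
    free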

Related

ext4 enable hashes for directory entries [closed]

According to kernel.org, there is the possibility to store dentries in trees instead of lists, but you need to enable this flag (EXT4_INDEX_FL) in the inode structure. Is this enabled by default, or do I have to format my partition with some flags?
I need to store lots of small files (the same old problem) of about 130K each, and I understood that this will help speed up lookups, and also that it is recommended to store those files in a two-level directory hierarchy. Is there something else I need to consider so that this doesn't blow up if I want to store something close to 60,000,000 files of this kind? (Maybe some other values for block size or the number of blocks in a group.)
This option is referred to by the e2fsprogs suite as dir_index. It's enabled by default, and you can verify that it's enabled on a file system by running tune2fs -l DEVICE as root.
It is indeed recommended that you shard your files manually so that you don't have a huge number of files in the same directory. While using B-trees makes the operation O(log n) instead of O(n), for large numbers of files, the operation can still be expensive.
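For illustration, here is one hypothetical sharding scheme (the file name and hash-prefix layout are made up for the example, and it assumes bash for the substring expansion):

    # Hypothetical two-level layout: place each file under directories
    # named after the first four hex digits of its content hash
    f=photo.jpg
    h=$(sha256sum "$f" | cut -c1-4)
    mkdir -p "${h:0:2}/${h:2:2}"
    mv "$f" "${h:0:2}/${h:2:2}/$f"

With two hex digits per level this gives 65,536 buckets, so 60,000,000 files average under a thousand entries per directory.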
If you know you're going to be creating a large number of files, you can set the inode ratio to 4096 with the -i option; this will create a larger number of inodes so that you can hold more files. You can also see common settings for a large number of situations in /etc/mke2fs.conf.
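To make that concrete (the device name /dev/sdb1 is a placeholder; substitute your own):

    # Check whether dir_index is enabled on an existing file system
    sudo tune2fs -l /dev/sdb1 | grep dir_index

    # Create a new ext4 file system with one inode per 4096 bytes,
    # so tens of millions of small files will not exhaust the inode table
    sudo mkfs.ext4 -i 4096 /dev/sdb1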

linux memory swappiness: why not always 0? [closed]

Linux allows the user to change the system swappiness from 0 to 100. If set to 0, the kernel will avoid swapping as much as possible, and processes will be kept in memory as long as spare memory is available. Conversely, if set to 100, the kernel will swap aggressively. My question is: why not always set the swappiness to 0? As system users, we may always expect our programs to be kept in memory rather than swapped to disk, so I think setting the swappiness to 100 is meaningless, correct?
As said on another Stack Exchange site, having some swap is good. It frees up memory from processes that rarely use it so that more active processes have access to RAM. A swappiness level of about 60 is a good balance, as it frees up unused memory without dramatically hindering the performance of more active processes.
It all depends on how much RAM you have and will use.
Setting swappiness to 100 is an extreme situation, for sure, but one can always think of a scenario where it could be necessary.
The other extreme, setting swappiness to 0, seems to be the best thing to do, but it isn't. Just for a start, what would happen if you had done this and a process needed more memory than is physically available? A crash, probably.
Then, as @limecore said above, having some swap memory is always good.
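A quick sketch of how to inspect and tune this knob (the value 10 is just an example, not a recommendation):

    # Read the current swappiness
    cat /proc/sys/vm/swappiness
    # Change it for the running system (reverts on reboot)
    sudo sysctl -w vm.swappiness=10
    # Persist it across reboots
    echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf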

Image conversion and Inode usage [closed]

This question has been flagged as irrelevant, so I guess it has no real worth to anyone. I tried removing the question, but the system won't let me, so I am now truncating the content of this post ;)
I think you need to run the actual numbers for both scenarios:
On the fly:
- How long does one image take to generate, and do you want the client to wait that long?
- Do you have to pay by CPU utilization, number of CPUs, etc., and what will this cost for X images thumbnailed Y times over one year?
Stored:
- How much space will this use, and what will it cost?
- How many files are there? Is the number bigger than the number of inodes in the destination file system, or is the total estimated size bigger than the file system's capacity?
It's mostly an economics question; there is no general yes/no answer. When in doubt, I'd probably go with storing them, since thumbnailing is a computation-intensive task and it is not very efficient to do it over and over again. You could also do a hybrid solution: generate a thumbnail on the fly when it is first requested, then cache it until it hasn't been used for a certain number of days.
TL;DR: number of inodes is probably your least concern.
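To put a number on that last point, df can report inode usage directly (the path is a placeholder):

    # -i switches df from block usage to inode usage; IUsed and IFree
    # show how many more files the file system can hold
    df -i /var/www/thumbnails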

Why is swap memory faster than auxiliary memory though swap is taken from auxiliary memory [closed]

I was asked this in an interview regarding operating systems: if we make virtual memory out of the hard disk, then why is accessing swap faster than accessing the hard disk?
Please help me understand the concept, or else redirect me to the proper forum.
First, as @Celada said, there is a chance that your data will still be in memory (has not been swapped out) when you map your file into memory or put your data in memory. This may be faster than accessing your file or your data directly.
Second, OSes have very efficient swap algorithms, probably better than yours. So, for example, if you need to read a very large file (maybe 2 GB or more), you would need to do a kind of swapping yourself, and it would probably be much slower than using the OS swap.
Third, in practice, system administrators usually put swap on a separate partition, or even a separate disk, or an even faster device, so that you can take advantage of it.
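To see where swap actually lives on a given machine (output varies per system; swapon --show needs a reasonably recent util-linux):

    # List active swap areas with their type, size, and priority
    swapon --show
    # The same information straight from the kernel
    cat /proc/swaps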
The hypothesis is nonsense. Accessing swap is not faster than accessing the hard disk if the swap is on the hard disk.

hardlinks in Linux [closed]

What is the size of a hard link in Linux? Will it be the size of the inode? What if I have two of them?
Thanks in advance for any explanation. I tried to google it but didn't find anything.
A hard link reuses the inode, but requires a separate directory entry, which takes up 8 bytes plus the length of the file name in ext2. There may be other associated costs, such as when directory indexing is used; also, directories grow by entire blocks.
Think of a hard link as just another name for a file. If a file has 1000 hard links, that just means that it has 1000 different directory entries associated with it, all with potentially different names. For example, if you had 1000 different names, you would still only be one person. You'd take up the same amount of space no matter how many names you had. You'd just have a bit more paperwork for each additional name.
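A small sketch to see this in action (the file names are made up for the example):

    # Create a file and a hard link to it
    echo 'hello' > original.txt
    ln original.txt alias.txt
    # -i prints the inode number: both names share the same inode,
    # and the link count (second column) is now 2
    ls -li original.txt alias.txt
    # Removing one name does not free the data; the inode survives
    # until the last link is gone
    rm original.txt
    cat alias.txt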
