I read an article about the Linux HugePages feature and I don't understand one important detail.
Here is the phrase:
For example, if you use HugePages with 64-bit hardware, and you want
to map 256 MB of memory, you may need one page table entry (PTE). If
you do not use HugePages, and you want to map 256 MB of memory, then
you must have 256 MB * 1024 KB/4 KB = 65536 PTEs.
I don't understand what the 1024 KB in this formula is. I think it should be just 256 MB / 4 KB to calculate the number of page table entries. Is there a typo in the formula, or am I wrong?
I agree that it is confusing. After reading it several times, I believe it is simply a matter of unit conversion. At school, the mathematics/physics/chemistry teachers always told us to use the same units when doing operations in order to obtain coherent results.
The value 256 is expressed in megabytes (MB). To divide it by 4 expressed in kilobytes (KB), you need to convert it to kilobytes first. Hence the multiplication by 1024 KB (= 1 MB). So, literally, the operation is (256 x 1024) / 4 = 65536, which is the simplification of (256 x 1024 x 1024) / (4 x 1024).
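To see the same arithmetic in code, here is a minimal sketch; the 2 MB huge-page size is an assumption (the common x86-64 value), not something stated in the quoted article:
#include <stdio.h>

int main(void)
{
    unsigned long region    = 256UL * 1024 * 1024; /* 256 MB in bytes */
    unsigned long page      = 4UL * 1024;          /* 4 KB base page */
    unsigned long huge_page = 2UL * 1024 * 1024;   /* assumed 2 MB huge page */

    printf("4 KB pages: %lu PTEs\n", region / page);      /* 65536 */
    printf("2 MB pages: %lu PTEs\n", region / huge_page); /* 128 */
    return 0;
}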
I am trying to use CreateHeap and PlacedResources in DirectX 12. CreateHeap requires a D3D12_HEAP_DESC, whose documentation says "applications should pass SizeInBytes (a field of the D3D12_HEAP_DESC) values which are multiples of the effective Alignment". It then shows the alignment constant D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT, #defined as 64KB.
Does DirectX 12 define 64KB as 65536 bytes or as 64000 bytes (basically, is a KB 1024 or 1000 bytes for Microsoft)? I don't want to waste any bytes, and I don't know where to find Microsoft's definition of these units. Wikipedia lists 1024 bytes as the legacy definition of the kilobyte, so the question is whether Microsoft's standards are up to date.
The "Kibi" .vs "Kilo" difference for the SI units is an important one, particularly for fixed-storage sizes. That said, in programming specifications "KB" almost always means "1024 bytes".
If you look in the d3d12.h header, you will see that the value of D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT is base-2:
#define D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT ( 65536 )
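So 64KB here means 65536 bytes. As an illustration, here is a minimal sketch of rounding a heap size up to a multiple of that alignment; align_up and the 100000-byte request are hypothetical names and values of mine, not part of the D3D12 API:
#include <stdint.h>
#include <stdio.h>

#define D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT ( 65536 )

/* Round size up to the next multiple of alignment
 * (works because the alignment is a power of two). */
static uint64_t align_up(uint64_t size, uint64_t alignment)
{
    return (size + alignment - 1) & ~(alignment - 1);
}

int main(void)
{
    uint64_t requested = 100000; /* hypothetical SizeInBytes request */
    printf("%llu -> %llu\n",
           (unsigned long long)requested,
           (unsigned long long)align_up(requested,
               D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT));
    return 0; /* prints: 100000 -> 131072, i.e. two 64 KiB units */
}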
According to this documentation, CPU shares are calculated as
process_cpu.shares = min( 1024*(application_memory / 8 GB), 1024)
According to this formula, if an application is assigned 1 GB of memory, it should get 1024 * (1/8) = 128 CPU shares. However, if we SSH into the application and check cpu.shares, we get 122:
cat /sys/fs/cgroup/cpu/cpu.shares
122
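For reference, here is a minimal sketch of the documented formula; cpu_shares is a hypothetical helper name of mine, and memory is assumed to be given in MB:
#include <stdio.h>

/* min(1024 * (application_memory / 8 GB), 1024), with memory in MB */
static unsigned long cpu_shares(unsigned long memory_mb)
{
    unsigned long shares = 1024UL * memory_mb / 8192UL; /* 8 GB = 8192 MB */
    return shares > 1024UL ? 1024UL : shares;           /* cap at 1024 */
}

int main(void)
{
    printf("%lu\n", cpu_shares(1024)); /* 1 GB -> 128 (observed: 122) */
    printf("%lu\n", cpu_shares(8192)); /* 8 GB -> 1024 (the cap) */
    return 0;
}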
Here are the observations:
Memory           Calculated cpu.shares   Observed cpu.shares   Difference
1GB              128                     122                   ~5%
1.5GB (1536MB)   192                     184                   ~5%
2GB              256                     256                   none
3GB              384                     384                   none
4GB              512                     512                   none
5GB              640                     634                   ~1%
5.5GB (5632MB)   704                     696                   ~2%
8GB              1024                    1024                  none
Why is there a discrepancy for some values (such as 1 GB, 1.5 GB, and 5 GB) while others (2, 3, 4, 6, 7, 8 GB) are consistent with the calculation? I believe I am missing something about how cgroups handle this calculation. Is this specific to CF, or does it have to do with the way Linux allocates resources in cgroups in general? Is there always some headroom reserved?
Why are file sizes all different?
In Windows 10 I can see all of these sizes:
11,116 KB
10.8 MB
11,382,240 Bytes
11,382,784 Bytes
If I use the Console Window:
D:\My Programs\2017\MeetSchedAssist\Inno\Output>dir *.exe
Volume in drive D is DATA
Volume Serial Number is A8B0-A5C6
Directory of D:\My Programs\2017\MeetSchedAssist\Inno\Output
03/04/2018 08:50 11,382,240 MeetSchedAssistSetup.exe
1 File(s) 11,382,240 bytes
0 Dir(s) 719,837,487,104 bytes free
D:\My Programs\2017\MeetSchedAssist\Inno\Output>
I understand that on the physical media the file perhaps has to be rounded up to occupy a certain amount of space, but look at that line above:
Size: 10.8 MB (11,382,240 bytes)
Huh? Why does it not say 11.38 MB?
Once upon a time it was defined that
1 kB = 1024 B
1 MB = 1024 kB
If you divide your bytes figure all the way down to MB, you'll get all those figures.
Since it was noticed that many people tend to walk into that trap, the unit multiples have been redefined and new ones introduced:
1 kiB = 1024 B
1 MiB = 1024 kiB
1 kB = 1000 B
1 MB = 1000 kB
but this scheme is not yet widespread (it seems to be most common in the total-size specifications of storage media).
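A minimal sketch of those divisions for the 11,382,240-byte file above; the comments note how Windows Explorer presents the same results:
#include <stdio.h>

int main(void)
{
    unsigned long bytes = 11382240UL;
    printf("%.2f KB\n", bytes / 1024.0);          /* 11115.47; Explorer rounds up to 11,116 KB */
    printf("%.2f MB\n", bytes / 1024.0 / 1024.0); /* 10.85; Explorer displays 10.8 MB */
    return 0;
}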
Funny sidenote: I guess I am not the only one who learned it the old way and now mixes it up with the current definition all the time. I'd say problems like this are the root cause of humanity being mostly conservatively oriented.
A file is mapped with the system call:
mmap(65536, 32768, READ, FLAGS, fd, 0)
Pages are 8 KB, so 4 pages' worth of the file were mapped (32768 / 8192 = 4 pages). Then the following call is carried out:
munmap(65536, 8192)
This removes the specified part of the memory map. Which bytes of the file remain mapped? The answer key says that pages 2 and 3 remain, so only bytes 16384 through 32767 remain; however, I'm not sure this is right. Since the len argument (the second argument) of the munmap call is 8192 bytes, shouldn't only page 0 be removed, leaving bytes 8192 through 32767?
Both you and the answer key are wrong, but in different ways.
Memory pages on most systems are 4 KB (4096 bytes), not 8 KB. I have never heard of a system with 8 KB memory pages.
This makes the entire mapping of 32768 bytes come out to 8 pages. Unmapping the first 8192 bytes (2 pages) would leave the remaining 6 pages (pages 2 through 7, counting from 0) in place.
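Here is a minimal runnable sketch of the same sequence with 4 KB pages; "data.bin" stands in for a hypothetical file at least 32768 bytes long:
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY); /* hypothetical input file */
    if (fd < 0)
        return 1;

    /* Map 32768 bytes = 8 pages of 4096 bytes, read-only. */
    void *addr = mmap(NULL, 32768, PROT_READ, MAP_PRIVATE, fd, 0);
    if (addr == MAP_FAILED)
        return 1;

    /* Unmap the first 8192 bytes (pages 0 and 1); pages 2..7,
     * i.e. file bytes 8192 through 32767, remain mapped. */
    munmap(addr, 8192);

    close(fd);
    return 0;
}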
I'm looking at the file /proc/net/dev and wondering about unit conversion of the receive bytes value.
Here's the part of the file I'm contemplating:
Inter-| Receive
face |bytes
eth0: 7060880392
ifconfig uses /proc/net/dev to produce the following:
eth0 Link encap:Ethernet
...
RX bytes:7060880392 (7.0 GB)
That's what I don't understand. Given that the value is in bytes (rather than bits), I would have expected the conversion to GB to use divisions by 1024: 7060880392 / 1024 / 1024 / 1024 = 6.6 GB. But clearly ifconfig has used divisions by 1000 to convert from bytes to GB.
Can someone explain why they did this? I know bandwidth is generally expressed in bits; perhaps the labeling in /proc/net/dev is incorrect in referring to the unit of the value as bytes? I checked the manpage for proc, but there's not a lot of detail on this file.
The term GB is a base-10 unit (1 GB = 1000^3 bytes), while GiB is base-2 (1 GiB = 1024^3 bytes). Read more on Wikipedia: Binary prefix.
My educated guess is that the implementer chose GB over GiB because the relevant information is how many bytes were sent and received, rather than how those bytes divide into power-of-two units.
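A minimal sketch of both conversions for the RX counter above:
#include <stdio.h>

int main(void)
{
    unsigned long long rx = 7060880392ULL;
    printf("%.2f GB\n",  rx / 1e9);          /* 7.06 GB (decimal; ifconfig shows 7.0 GB) */
    printf("%.2f GiB\n", rx / 1073741824.0); /* 6.58 GiB (binary; ~6.6) */
    return 0;
}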