I'm looking at the file /proc/net/dev and wondering about unit conversion of the receive bytes value.
Here's the part of the file I'm contemplating:
Inter-| Receive
face |bytes
eth0: 7060880392
ifconfig uses /proc/net/dev to produce the following:
eth0 Link encap:Ethernet
...
RX bytes:7060880392 (7.0 GB)
That's what I don't understand. Given that the unit of the value is in bytes (rather than bits), I would have expected the conversion to GB to use divisions by 1024: 7060880392/1024/1024/1024 = 6.6 GB. But clearly ifconfig has used divisions by 1000 to convert from B to GB.
Can someone explain why they did this? I know bandwidth is generally expressed in bits; perhaps the labeling in /proc/net/dev is incorrect in referring to the unit of the value as bytes? I checked the manpage for proc, but there's not a lot of detail on this file.
The term GB is base 10 (1 GB = 1000³ bytes), while GiB is base 2 (1 GiB = 1024³ bytes). Read more on Wikipedia: Binary prefix.
I'd make an educated guess that the implementer chose GB over GiB because the relevant information is how many bytes were sent/received, not how they divide into power-of-two units.
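As a sanity check, here is a minimal C sketch of both conversions, using the RX bytes value from the question:

#include <stdio.h>

int main(void)
{
    unsigned long long rx_bytes = 7060880392ULL; /* RX bytes from /proc/net/dev */

    /* Decimal (SI) gigabytes: divide by 1000^3 -- what ifconfig prints */
    double gb = rx_bytes / 1e9;
    /* Binary gibibytes: divide by 1024^3 */
    double gib = rx_bytes / (1024.0 * 1024.0 * 1024.0);

    printf("%.2f GB (decimal), %.2f GiB (binary)\n", gb, gib);
    /* Prints: 7.06 GB (decimal), 6.58 GiB (binary) */
    return 0;
}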
I am trying to use CreateHeap and PlacedResources in DirectX12. However, CreateHeap requires a D3D12_HEAP_DESC, where it says "applications should pass SizeInBytes (a field of the D3D12_HEAP_DESC) values which are multiples of the effective Alignment". It then shows an alignment, D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT, #defined as 64KB, and says 64KB is the alignment.
Does Microsoft DirectX12 use 65536 bytes as its definition of 64KB, or 64000 bytes? (Basically, is 1024 or 1000 bytes the definition of a KB for Microsoft?) I don't want to waste any bytes, and I don't know where to find Microsoft's definition of these units. Wikipedia lists 1 KB = 1024 bytes as the legacy meaning of kilobyte, so the question is whether Microsoft's usage is up to date.
The "Kibi" .vs "Kilo" difference for the SI units is an important one, particularly for fixed-storage sizes. That said, in programming specifications "KB" almost always means "1024 bytes".
If you look in the d3d12.h header, you will see that the value of D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT is base-2:
#define D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT ( 65536 )
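So the 64KB in the documentation means 65536 bytes. For illustration, a minimal C sketch of rounding a requested heap size up to a multiple of that alignment (the round_up helper is mine, not a D3D12 API):

#include <stdint.h>
#include <stdio.h>

#define D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT ( 65536 )

/* Round size up to the next multiple of a power-of-two alignment. */
static uint64_t round_up(uint64_t size, uint64_t alignment)
{
    return (size + alignment - 1) & ~(alignment - 1);
}

int main(void)
{
    uint64_t size = 100000; /* arbitrary requested size in bytes */
    printf("%llu -> %llu\n",
           (unsigned long long)size,
           (unsigned long long)round_up(size, D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT));
    /* Prints: 100000 -> 131072, i.e. two 64 KiB pages */
    return 0;
}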
What I understand so far is address width is the number of bits in an address.
For example, a 4-bit-wide address can name 2^4 = 16 cases. What I'm really uncertain about is addressability. From what I learned, it is "the size of the most basic unit that can be named by an address". So, if we have a 4-bit address width and 2-bit addressability, what happens?
I've been really curious about this for a couple of weeks, but I'm still stuck.
Could you guys explain those things by drawing or something?
I think you do get it. There is the number of address bits (the width, if you will), and there is the size of the unit those addresses name. 8 bits of address means you have 256 things; 16 bits of address, 65536 things. The size of each thing is completely independent of the number of address bits. From a programmer's perspective we almost always deal in units of bytes, so 8 bits of address would cover 256 bytes and 32 bits of address 4 gigabytes. As you dig into the logic, though, a byte-based address is often wasteful: if you have a peripheral with 32-bit-wide registers that can only be accessed as whole 32-bit registers, do you need to connect address lines 0 and 1? Often not. So at that peripheral the address bus (often a subset of the address bus higher up, closer to the processor/software) is in units of 32-bit words.
To make things even more confusing, memory parts are often specified in bits, even if they have an 8- or 16-bit data bus. So you might have a "4M" part, but that is megabits, not megabytes...
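To make the original 4-bit/2-bit example concrete, here is a small C sketch of the arithmetic (variable names are mine): total capacity is the number of addresses times the size of the unit each address names.

#include <stdio.h>

int main(void)
{
    unsigned width          = 4; /* address width: 4 bits -> 2^4 = 16 addresses */
    unsigned addressability = 2; /* each address names a 2-bit unit */

    unsigned long addresses  = 1UL << width;
    unsigned long total_bits = addresses * addressability;

    printf("%lu addresses x %u bits each = %lu bits total\n",
           addresses, addressability, total_bits);
    /* Prints: 16 addresses x 2 bits each = 32 bits total */
    return 0;
}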
I am running iperf measurements between two servers, connected through 10Gbit link. I am trying to correlate the maximum window size that I observe with the system configuration parameters.
In particular, I have observed that the maximum window size is 3 MiB. However, I cannot find the corresponding values in the system files.
By running sysctl -a I get the following values:
net.ipv4.tcp_rmem = 4096 87380 6291456
net.core.rmem_max = 212992
The first line tells us that the maximum receive buffer size is 6 MiB (the third field of tcp_rmem). However, TCP tends to allocate twice the requested size, so the maximum receiver window size should be 3 MiB, exactly as I have measured it. From man tcp:
Note that TCP actually allocates twice the size of the buffer requested in the setsockopt(2) call, and so a succeeding getsockopt(2) call will not return the same size of buffer as requested in the setsockopt(2) call. TCP uses the extra space for administrative purposes and internal kernel structures, and the /proc file values reflect the larger sizes compared to the actual TCP windows.
However, the second value, net.core.rmem_max, states that the maximum receiver window size cannot be more than 208 KiB. And this is supposed to be the hard limit, according to man tcp:
tcp_rmem
max: the maximum size of the receive buffer used by each TCP socket. This value does not override the global net.core.rmem_max. This is not used to limit the size of the receive buffer declared using SO_RCVBUF on a socket.
So, how come I observe a maximum window size larger than the one specified in net.core.rmem_max?
NB: I have also calculated the bandwidth-delay product: window_size = bandwidth x RTT. At 10 Gbit/s with a 2 msec RTT that gives 10 x 10^9 x 0.002 / 8 = 2.5 MB (~2.4 MiB), which is in the same ballpark as the 3 MiB I measured, thus verifying my traffic capture.
A quick search turned up:
https://github.com/torvalds/linux/blob/4e5448a31d73d0e944b7adb9049438a09bc332cb/net/ipv4/tcp_output.c
in void tcp_select_initial_window()
if (wscale_ok) {
        /* Set window scaling on max possible window
         * See RFC1323 for an explanation of the limit to 14
         */
        space = max_t(u32, sysctl_tcp_rmem[2], sysctl_rmem_max);
        space = min_t(u32, space, *window_clamp);
        while (space > 65535 && (*rcv_wscale) < 14) {
                space >>= 1;
                (*rcv_wscale)++;
        }
}
max_t takes the higher value of the arguments. So the bigger value takes precedence here.
One other reference to sysctl_rmem_max is made where it is used to limit the argument to SO_RCVBUF (in net/core/sock.c).
All other tcp code refers to sysctl_tcp_rmem only.
So without looking deeper into the code you can conclude that a bigger net.ipv4.tcp_rmem will override net.core.rmem_max in all cases except when setting SO_RCVBUF (whose check can be bypassed with SO_RCVBUFFORCE).
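For illustration, a minimal C sketch of that SO_RCVBUF path (error handling mostly omitted). With the questioner's net.core.rmem_max of 212992, the readback should be 425984: the request is clamped to rmem_max and then doubled by the kernel. The SO_RCVBUFFORCE call will fail unless the process has CAP_NET_ADMIN.

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int request = 8 * 1024 * 1024; /* ask for 8 MiB */
    int actual;
    socklen_t len = sizeof(actual);

    /* Clamped to net.core.rmem_max, then doubled by the kernel. */
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &request, sizeof(request));
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len);
    printf("SO_RCVBUF: asked for %d, got %d\n", request, actual);

    /* SO_RCVBUFFORCE ignores rmem_max but requires CAP_NET_ADMIN. */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUFFORCE, &request, sizeof(request)) < 0)
        perror("SO_RCVBUFFORCE");

    close(fd);
    return 0;
}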
net.ipv4.tcp_rmem takes precedence over net.core.rmem_max, according to https://serverfault.com/questions/734920/difference-between-net-core-rmem-max-and-net-ipv4-tcp-rmem:
It seems that the tcp setting will take precedence over the common max setting
But I agree with what you say, this seems to conflict with what's written in man tcp, and I can reproduce your findings. Maybe the documentation is wrong? Please find out and comment!
I'm writing a Perl script that tracks the output of the xentop tool, and I'm not sure what VBD_RD & VBD_WR mean.
Following http://support.citrix.com/article/CTX127896
VBD_RD number displays read requests
VBD_WR number displays write requests
Does anyone know whether read & write requests are measured in bytes, kilobytes, or megabytes?
Any ideas?
Thank you
As far as I understand, xentop shows you two different measures each for read and write.
VBD_RD and VBD_WR are plain counts: the number of requests made to the block device. They say nothing about the number of bytes read or written.
The second pair, VBD_RSECT (read) and VBD_WSECT (write), is measured in "sectors". You can find the sector size by using xenstore-ls (https://serverfault.com/questions/153196/xen-find-vbd-id-for-physical-disks); in my case it was 512.
The size of a sector is expressed in bytes (http://xen.1045712.n5.nabble.com/xen-3-3-testing-blkif-Clarify-units-for-sector-sized-blkif-request-params-td2620172.html).
So if the VBD_WR value is 2, the VBD_WSECT value is 10, and the sector size is 512, then 10 * 512 bytes were written in two different requests (the block device was accessed twice, but we only know the total, not how many bytes each request wrote). To measure disk I/O you can sample these values periodically and take the difference between successive samples.
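For example, a minimal C sketch of that calculation (the sample values and variable names are mine; the 512-byte sector size is the one found via xenstore-ls above):

#include <stdio.h>

int main(void)
{
    /* Two samples of VBD_WSECT taken a fixed interval apart. */
    unsigned long long wsect_prev = 1000, wsect_now = 5000; /* example values */
    unsigned long long sector_size = 512; /* from xenstore-ls, per device */
    double interval = 2.0;                /* seconds between samples */

    double bytes_per_sec =
        (double)(wsect_now - wsect_prev) * sector_size / interval;
    printf("write throughput: %.0f bytes/s\n", bytes_per_sec);
    /* (5000 - 1000) * 512 / 2 = 1024000 bytes/s */
    return 0;
}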
I suppose the sector size might differ between block devices, so it might be worth checking the xenstore-ls output for each domain, but I'm not sure. You can probably define it in the cfg file too.
This is what I found out and understood so far. I hope this helps.
I'm running x86_64 RedHat 5.3 (kernel 2.6.18) and looking specifically at net.core.rmem_max from sysctl -a in the context of trying to set UDP buffers. The receiver application misses packets sometimes, but I think the buffer is already plenty large, depending upon what it means:
What are the units of this setting -- bits, bytes, packets, or pages? If bits or bytes, is it the datagram payload (such as 100 bytes) or the network MTU size (~1500 bytes)? If pages, what's the page size in bytes?
And is this the max per system, per physical device (NIC), per virtual device (VLAN), per process, per thread, per socket, or per multicast group?
For example, suppose my data is 100 bytes per message, and each network packet holds 2 messages, and I want to be able to buffer 50,000 messages per socket, and I open 3 sockets per thread on each of 4 threads. How big should net.core.rmem_max be? Likewise, when I set socket options inside the application, are the units payload bytes, so 5000000 on each socket in this case?
Finally, in general how would I find details of the units for the parameters I see via sysctl -a? I have similar units and per X questions about other parameters such as net.core.netdev_max_backlog and net.ipv4.igmp_max_memberships.
Thank you.
You'd look at these docs. That said, many of these parameters really are quite poorly documented, so do expect to do some googling to dig out the gory details from blogs and mailing lists.
rmem_max is the per-socket maximum buffer, in bytes. Digging around, this appears to be the memory into which whole packets are received, so the size has to include whatever IP/UDP headers there are as well - though this area is quite fuzzy to me.
Keep in mind, though, that UDP is unreliable. There are many sources of loss, not least the switches and routers in between - these have buffers as well.
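As a rough back-of-the-envelope for the example in the question (the per-packet overhead here is an assumed figure, since the kernel charges each datagram's full skb "truesize" against the buffer, which adds header and bookkeeping bytes on top of the payload):

#include <stdio.h>

int main(void)
{
    /* 50,000 messages of 100 bytes, 2 messages per packet -> 25,000 packets. */
    unsigned long messages   = 50000;
    unsigned long msg_size   = 100;
    unsigned long per_packet = 2;
    unsigned long packets    = messages / per_packet;

    /* Assumed per-packet overhead (headers + kernel bookkeeping): a rough
     * guess, not an exact figure -- measure on your own system. */
    unsigned long overhead   = 512;

    unsigned long buf = packets * (per_packet * msg_size + overhead);
    printf("suggested receive buffer per socket: ~%lu bytes\n", buf);
    /* 25000 * (200 + 512) = 17800000 bytes -- well above the raw 5 MB payload */
    return 0;
}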
It is fully documented in the socket(7) man page (it is in bytes).
Moreover, the limit may be set on a per-socket basis with SO_RCVBUF (as documented in the same page).
Read the socket(7), ip(7) and udp(7) man pages for information on how these things actually work. The sysctls are documented there.