I understand that D3D12_DESCRIPTOR_RANGE describes the descriptors contained within a descriptor heap. What I don't understand is how D3D12_DESCRIPTOR_RANGE determines the offset of the next range when all it is given is the number of descriptors in each range of the heap.
Related
Is there any way to query the alignment, in bytes, that the offset within a memory allocation must have for a buffer with usage VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT on a given VkDevice?
If I have already created such a VkBuffer, this value can be retrieved from the alignment field of the VkMemoryRequirements structure returned by a call to vkGetBufferMemoryRequirements.
But if I want to obtain this value before creating such a buffer, do I need to create a "dummy" buffer of size 1 (specifying size 0 yields a validation error when the validation layer is enabled)?
The alignment requirement for a UBO is a device limitation: VkPhysicalDeviceLimits::minUniformBufferOffsetAlignment. The reason for this is that it applies not just to the requirement for the offset used when binding a buffer to a memory allocation, but also to any offsets used within a buffer to the start of UBO data when using that buffer as a UBO descriptor.
If I understand your question right, you're looking for an alignment for the memoryOffset parameter to vkBindBufferMemory that will be valid for any VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT VkBuffer you create later. Essentially you want the worst-case / most restrictive alignment you'll get in VkMemoryRequirements::alignment for any such buffer. Correct?
I don't think you can directly query such a worst-case alignment. VkPhysicalDeviceLimits::minUniformBufferOffsetAlignment is close, but is a lower bound on the buffer-to-memory alignment requirements, not an upper bound ([1]):
The alignment member satisfies the buffer descriptor offset alignment requirements associated with the VkBuffer’s usage:
If usage included VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT, alignment must be an integer multiple of VkPhysicalDeviceLimits::minUniformBufferOffsetAlignment.
(This means that any minUniformBufferOffsetAlignment-aligned chunk within the VkBuffer can be used for a uniform buffer descriptor. But the base offset of the VkBuffer might need to be more strongly aligned than the offsets of descriptors within it).
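(For completeness, here is a minimal sketch of reading that limit, assuming you already have a VkPhysicalDevice from vkEnumeratePhysicalDevices:)

#include <vulkan/vulkan.h>
#include <stdio.h>

/* Sketch: query the device-wide lower bound on UBO offset alignment. */
void print_min_ubo_alignment(VkPhysicalDevice physical_device)
{
    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(physical_device, &props);
    printf("minUniformBufferOffsetAlignment = %llu bytes\n",
           (unsigned long long)props.limits.minUniformBufferOffsetAlignment);
}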
However, if you do create a proxy VkBuffer and query its alignment, you are guaranteed that any other VkBuffer created with the same usage and flags will have the same alignment requirement:
The alignment member is identical for all VkBuffer objects created with the same combination of values for the usage and flags members in the VkBufferCreateInfo structure passed to vkCreateBuffer.
Since the buffer size can't affect alignment, it's okay to use a tiny proxy buffer like you proposed.
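A minimal sketch of that proxy-buffer approach (error handling trimmed; device is assumed to be a valid VkDevice):

#include <vulkan/vulkan.h>

/* Create a tiny throwaway buffer with the usage you care about and read
 * back the alignment that any buffer with the same usage/flags requires. */
VkDeviceSize query_ubo_bind_alignment(VkDevice device)
{
    VkBufferCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
        .size = 1,                                  /* size doesn't affect alignment */
        .usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT,
        .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
    };
    VkBuffer proxy;
    VkMemoryRequirements reqs = {0};

    if (vkCreateBuffer(device, &info, NULL, &proxy) != VK_SUCCESS)
        return 0;                                   /* real code should report the error */

    vkGetBufferMemoryRequirements(device, proxy, &reqs);
    vkDestroyBuffer(device, proxy, NULL);
    return reqs.alignment;
}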
Is the file offset automatically changed to 0, or kept unchanged?
If the file offset is kept unchanged, what happens when read() or write() is called after truncate()?
How does the file offset used by read() or write() change when a file is truncated?
The file offset of open file descriptions remains unchanged [1].
what happens when read() or write() is called after truncate()?
read():
Will read valid data if the offset is within the current length of the file.
Will read bytes with value 0 if the offset is past the old end-of-file but within the new length set by truncate() [1].
Will return 0 (i.e. no bytes read) if the offset is past the end of the file [3].
write():
Will write data to the file at the specified offset [4].
If the write is past the end-of-file, the file will be extended and the gap padded with zeros [2].
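A small sketch demonstrating the behaviour above (the path is just an example; return values of write()/lseek() are ignored for brevity):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char byte;
    int fd = open("/tmp/trunc_demo", O_RDWR | O_CREAT | O_TRUNC, 0600);

    write(fd, "hello world", 11);        /* offset is now 11            */
    ftruncate(fd, 5);                    /* shrink; offset stays at 11  */
    printf("offset after truncate: %ld\n", (long)lseek(fd, 0, SEEK_CUR));

    printf("read past EOF returns: %zd\n", read(fd, &byte, 1));  /* 0 bytes */

    ftruncate(fd, 20);                   /* grow; bytes 5..19 now read as 0 */
    lseek(fd, 6, SEEK_SET);
    read(fd, &byte, 1);
    printf("byte in extended area: %d\n", byte);                 /* 0 */

    close(fd);
    return 0;
}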
[1] From POSIX truncate():
If the file previously was larger than length, the extra data is discarded. If the file was previously shorter than length, its size is increased, and the extended area appears as if it were zero-filled.
The truncate() function shall not modify the file offset for any open file descriptions associated with the file.
[2] From POSIX lseek():
The lseek() function shall allow the file offset to be set beyond the end of the existing data in the file. If data is later written at this point, subsequent reads of data in the gap shall return bytes with the value 0 until data is actually written into the gap.
[3] From POSIX read():
No data transfer shall occur past the current end-of-file. If the starting position is at or after the end-of-file, 0 shall be returned.
[4] And from POSIX write():
After a write() to a regular file has successfully returned:
Any successful read() from each byte position in the file that was modified by that write shall return the data specified by the write() for that position until such byte positions are again modified.
The same thing happens when you seek past the end of the file: write() extends it, and read() returns no data.
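A quick sketch of that seek-past-the-end case (the path is just an example):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char c;
    int fd = open("/tmp/hole_demo", O_RDWR | O_CREAT | O_TRUNC, 0600);

    lseek(fd, 100, SEEK_SET);                            /* seek past the end of the empty file */
    printf("read in the gap: %zd\n", read(fd, &c, 1));   /* 0: nothing there yet */

    write(fd, "x", 1);                                   /* file grows to 101 bytes */
    lseek(fd, 50, SEEK_SET);
    read(fd, &c, 1);
    printf("byte in the gap: %d\n", c);                  /* 0: the gap reads as zeros */

    close(fd);
    return 0;
}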
Since operating systems and file systems are the most inconsistent software in the world, no answer will spare you from just trying it out.
I've been trying to read the implementation of a kernel module, and I'm stumbling on this piece of code.
unsigned long addr = (unsigned long) buf;
if (!IS_ALIGNED(addr, 1 << 9)) {
        DMCRIT("#%s in %s is not sector-aligned. I/O buffer must be sector-aligned.", name, caller);
        BUG();
}
The IS_ALIGNED macro is defined in the kernel source as follows:
#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0)
I understand that data has to be aligned to the size of its datatype to work, but I still don't understand what this code does.
It left-shifts 1 by 9, then subtracts 1, which gives 111111111 (nine ones). Then that value is bitwise-ANDed with x.
Why does this code work? How is this checking for byte alignment?
In systems programming it is common to need a memory address to be aligned to a certain number of bytes -- that is, several lowest-order bits are zero.
Basically, IS_ALIGNED(addr, 1 << 9) checks whether addr is on a 512-byte (2^9) boundary, i.e. whether its lowest 9 bits are zero; the ! in front makes the check fire when it is not. This is a common requirement when erasing flash locations, because flash memory is split into large blocks which must be erased or written as a single unit.
Another application of this I ran into: I was working with a DMA controller that has a modulo feature. Basically, that means you can allow it to change only the last several bits of an address (the destination address in this case). This is useful for protecting memory from mistakes in the way you use the DMA controller. Problem is, I initially forgot to tell the compiler to align the DMA destination buffer to the modulo value. This caused some incredibly interesting bugs (random variables that have nothing to do with the thing using the DMA controller being overwritten... sometimes).
As far as "how does the macro code work?", if you subtract 1 from a number that ends with all zeroes, you will get a number that ends with all ones. For example, 0b00010000 - 0b1 = 0b00001111. This is a way of creating a binary mask from the integer number of required-alignment bytes. This mask has ones only in the bits we are interested in checking for zero-value. After we AND the address with the mask containing ones in the lowest-order bits we get a 0 if any only if the lowest 9 (in this case) bits are zero.
"Why does it need to be aligned?": This comes down to the internal makeup of flash memory. Erasing and writing flash is a much less straightforward process then reading it, and typically it requires higher-than-logic-level voltages to be supplied to the memory cells. The circuitry required to make write and erase operations possible with a one-byte granularity would waste a great deal of silicon real estate only to be used rarely. Basically, designing a flash chip is a statistics and tradeoff game (like anything else in engineering) and the statistics work out such that writing and erasing in groups gives the best bang for the buck.
At no extra charge, I will tell you that you will be seeing a lot of this type of thing if you are reading driver and kernel code. It may be helpful to familiarize yourself with the contents of this article (or at least keep it around as a reference): https://graphics.stanford.edu/~seander/bithacks.html
From the documentation:
store() should return the number of bytes used from the buffer. If the entire buffer has been used, just return the count argument.
What does the kernel do with this value? What difference does it make if, from a buffer of size FOO, I consume 4 bytes rather than 6?
You must realize that by implementing a sysfs file, you are trying to behave like a file.
Let's see this from the other side first. From the man page of fwrite(3):
RETURN VALUE
fread() and fwrite() return the number of items successfully read or written (i.e., not the number of characters). If an error occurs, or the end-of-file is reached, the return value is a short item count (or zero).
And even better, from the man page of write(2):
The number of bytes written may be less than count if, for example, there is insufficient space on the underlying physical medium, or the RLIMIT_FSIZE resource limit is encountered (see setrlimit(2)), or the call was interrupted by a signal handler after having written less than count bytes. (See also pipe(7).)
What this means is that store(), which implements the other end of the write(2) call for your particular file, should return the number of bytes written (i.e. read by you), at the very least so that write(2) can return that value to the user.
In most cases, if there is no error in the input, you would just want to return count to acknowledge that you have read everything and all is ok.
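As an illustration, here is a minimal sketch of a store() callback for a hypothetical integer attribute called value; the usual pattern is: parse the whole buffer, return a negative errno on bad input, otherwise return count:

#include <linux/kernel.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>

static int value;   /* hypothetical backing variable */

static ssize_t value_store(struct kobject *kobj, struct kobj_attribute *attr,
                           const char *buf, size_t count)
{
        int ret = kstrtoint(buf, 10, &value);
        if (ret < 0)
                return ret;   /* userspace write(2) fails with this errno */
        return count;         /* whole buffer consumed; write(2) returns count */
}

/* Register from your init code with sysfs_create_file(kobj, &value_attr.attr). */
static struct kobj_attribute value_attr = __ATTR_WO(value);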
I have gone through the inode code in the Linux kernel, but I am unable to figure out where the data pointers are in the inode. I know that there are 15 pointers [0-14], of which 12 are direct, 1 is single indirect, 1 is double indirect and 1 is triple indirect.
Can someone please point out these data members? Also, please specify how you located them, as I have searched Google many times with different keywords, all in vain.
It is up to each specific file system to access its data, so there are no "data pointers" in the generic inode (some file systems may be virtual, meaning they generate their data on the fly or retrieve it over the network).
If you're interested in ext4, you can look up the ext4-specific inode structure (struct ext4_inode) in fs/ext4/ext4.h, where an inode's data is indeed referenced by the numbers of 12 direct blocks, 1 singly-indirect block, 1 doubly-indirect block and 1 triply-indirect block.
This means that blocks [0..11] of an inode's data have the numbers e4inode->i_block[0], ..., i_block[11], whereas e4inode->i_block[12] is the number of a block that is itself filled with data-block numbers (so it covers the inode's data blocks in the range [12, 12 + fs->block_size / sizeof(__le32))). The same trick is applied to i_block[13], only it holds doubly-indirected indices (blocks filled with the indices of blocks that hold the list of blocks holding the actual data), starting from index 12 + fs->block_size / sizeof(__le32); and i_block[14] holds triply-indirected indices.
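To make the arithmetic concrete, here is a small userspace sketch (not kernel code; the block size and names are just examples) that classifies a file's logical block number into direct / single / double / triple indirect for this classic ext2/ext3-style layout:

#include <stdio.h>

#define DIRECT_BLOCKS 12

/* Report which part of i_block[] is used to reach a given logical block. */
static void classify(unsigned long lblock, unsigned long block_size)
{
    unsigned long ptrs = block_size / 4;   /* __le32 block numbers per block */

    if (lblock < DIRECT_BLOCKS)
        printf("block %lu: direct, i_block[%lu]\n", lblock, lblock);
    else if (lblock < DIRECT_BLOCKS + ptrs)
        printf("block %lu: single indirect via i_block[12]\n", lblock);
    else if (lblock < DIRECT_BLOCKS + ptrs + ptrs * ptrs)
        printf("block %lu: double indirect via i_block[13]\n", lblock);
    else
        printf("block %lu: triple indirect via i_block[14]\n", lblock);
}

int main(void)
{
    classify(5, 4096);      /* direct          */
    classify(100, 4096);    /* single indirect */
    classify(2000, 4096);   /* double indirect */
    return 0;
}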
As explained here:
http://computer-forensics.sans.org/blog/2010/12/20/digital-forensics-understanding-ext4-part-1-extents
Ext4 uses extents instead of block pointers to track the file content.
If you are interested in the ext2/ext3 data structures, where block pointers are used:
http://www.slashroot.in/how-does-file-deletion-work-linux
has many good diagrams that illustrate it. And here:
http://mcgrewsecurity.com/training/extx.pdf
page 16 has examples of the details of "block pointers" (which are basically block numbers, i.e. offset values relative to the start of the disk image; 1 block is usually 512 bytes).
If you want to walk the filesystem physically, say for an ext3-formatted hard drive, see this:
http://wiki.sleuthkit.org/index.php?title=FS_Analysis
but you can always just use the dd command to do everything; you only need to know where to start and stop reading, and the input to the dd command is usually a replica of the hard disk image itself, for many reasons.
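If you'd rather do the same thing programmatically than with dd, a sketch like this (path, block size and block number are just examples) reads one block from an image file:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *image = "disk.img";      /* example image path      */
    unsigned long block_size = 4096;     /* example block size      */
    unsigned long block_no = 1234;       /* block number to extract */
    unsigned char *buf = malloc(block_size);

    int fd = open(image, O_RDONLY);
    if (fd < 0 || !buf) {
        perror("open/malloc");
        return 1;
    }

    /* Equivalent of: dd if=disk.img bs=4096 skip=1234 count=1 */
    ssize_t n = pread(fd, buf, block_size, (off_t)block_no * block_size);
    printf("read %zd bytes of block %lu\n", n, block_no);

    close(fd);
    free(buf);
    return 0;
}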