"A memory has 1024 storage units with a width of 64. Suppose the memory is byte addressable. What is the address of the highest addressable memory position?"
Please correct me if I'm wrong.
Byte addressable means individual bytes in a word have their own addresses.
There are 8 bytes in a 64-bit word.
Therefore 8 x 1024 = 8192 addresses overall.
The highest address is therefore 8191.
I believe this to be true but am not 100% sure. Please indicate where my logic falters, if indeed it does.
I would say 1023.
There are 1024 storage locations, numbered 0 to 1023, each holding 64 bits.
So you have a computer where a byte contains 64 bits. A byte is not necessarily 8 bits; it is the minimum size of an addressable memory location. All modern computers use 8-bit bytes, but some older computers used 7, 9 or 14 bits per byte.
But it's a really badly written question, because it does not define what a storage location is. So your interpretation might be right, assuming a standard CPU with 8-bit bytes.
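For what it's worth, here is a minimal sketch in plain Java of the two interpretations discussed above (the numbers come straight from the question):

    // Two readings of "highest addressable position" for 1024 units of 64 bits.
    public class HighestAddress {
        public static void main(String[] args) {
            int storageUnits = 1024;   // number of 64-bit storage units
            int wordBits = 64;

            // Reading 1: byte addressable with conventional 8-bit bytes
            int bytesPerUnit = wordBits / 8;                            // 8
            int highestByteAddress = storageUnits * bytesPerUnit - 1;   // 8191

            // Reading 2: each 64-bit storage unit gets one address
            int highestUnitAddress = storageUnits - 1;                  // 1023

            System.out.println("Byte addressable (8-bit bytes): " + highestByteAddress);
            System.out.println("Unit addressable (64-bit units): " + highestUnitAddress);
        }
    }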
I tried running Apache a few times to see the effect of ASLR.
I know that because of alignment the last byte and a half is 0, and because of "canonicalization" the first two bytes are irrelevant, so that leaves four and a half bytes to randomize, which is quite a lot.
But I noticed that the first two bytes are always 7fff, so does that mean only two and a half bytes are random?
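For reference, a minimal sketch in Java of the bit counting implied by the question (the split into canonical, alignment and observed-fixed bits is an assumption based on the text above, not a measurement):

    // Reproduces the bit arithmetic from the question; the breakdown is assumed.
    public class AslrBits {
        public static void main(String[] args) {
            int totalBits = 64;
            int canonicalBits = 16;   // top two bytes are fixed by canonical-address rules
            int alignmentBits = 12;   // "a byte and a half" of trailing zeros from alignment

            int couldBeRandom = totalBits - canonicalBits - alignmentBits;   // 36 bits, "4.5 bytes"
            System.out.println("Bits available to randomize: " + couldBeRandom);

            int observedFixed = 16;   // the constant 7fff prefix seen in the experiment
            int actuallyRandom = couldBeRandom - observedFixed;              // 20 bits, "2.5 bytes"
            System.out.println("Bits that actually vary: " + actuallyRandom);
        }
    }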
If I have, say, a 64-bit instruction which has 2 bytes (16 bits) for the opcode and the rest for the operand address, I can determine that I have 48 bits for the address (64 - 16). The number I go to is the maximum value that can be represented with 48 bits, plus 1 to account for address 0. This would be 2^48. However, I have a problem understanding this in terms of the iB units.
2^48 is 2^8 x 2^40 (TiB) = 256 TiB. But since a TiB is 2^40 BYTES, when did the 2^48 become a count of BYTES? I always believed that to get the number of bytes I'd have to divide by 8, but this doesn't seem to be the case.
Could someone explain why this works?
A byte is by definition the smallest chunk of memory which has an address. Whatever the number of address bits, the resulting address is the address of a byte, by definition. In all (or at least most) computer architectures existing today, a byte is the same as an octet, that is, eight bits; but historically there were popular computer architectures with 6-bit bytes, or 12-bit bytes, or even other, more exotic, numbers of bits per byte.
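A minimal worked computation of the figure in the question, assuming (as above) that every address names one byte:

    // 2^48 addresses, one byte each, expressed in TiB.
    public class AddressSpace {
        public static void main(String[] args) {
            long addresses = 1L << 48;   // 2^48 distinct addresses
            long bytes = addresses;      // each address names exactly one byte, so no division by 8
            long tib = 1L << 40;         // 1 TiB = 2^40 bytes
            System.out.println(bytes / tib + " TiB");   // prints "256 TiB"
        }
    }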
I am working through the SHA-2 cryptographic functions as stated at https://en.wikipedia.org/wiki/SHA-2.
I am examining the lines that say:
begin with the original message of length L bits; append a single '1' bit;
append K '0' bits, where K is the minimum number >= 0 such that L + 1 + K + 64 is a multiple of 512
append L as a 64-bit big-endian integer, making the total post-processed length a multiple of 512 bits.
I do not understand the last two lines. If my string is short, can its length after adding the K '0' bits be 512? How should I implement this in Java code?
First of all, it should be made clear that the "string" that is talked about is not a Java String but a bit string. These algorithms are binary/bit based. The implementation will generally not handle bits but bytes. So there is a translation phase where you should see bytes instead of bits.
SHA-2 operates on blocks of 512 bits (SHA-224/256) or 1024 bits (SHA-384/512). So basically you have a 64- or 128-byte buffer that you are filling before operating on it. You could also directly cache the data in 32-bit int fields (SHA-224/256) or 64-bit long fields (SHA-384/512), as that is the word size that is operated on.
Now the padding is a relatively simple procedure. It is called bit padding. As it is applied in big-endian mode (SHA-2 fortunately uses this instead of the braindead little-endian mode in SHA-3), the padding consists of a single bit set at the highest-order position of a byte, with the rest filled with zeros. That makes for a value of (byte) 0x80, which must be put in the buffer.
If you cannot add this padding byte because the buffer is full, then you will have to process the previous block first and then set the first byte of the now-available buffer to (byte) 0x80. In newer Java versions you can also write (byte) 0b1_0000000, by the way, which is more explicit.
Now you simply add zeros until you have 8 or 16 bytes left, again depending on the hash output size used. If there isn't enough room left in the block, then fill to the end, process the block, and restart filling with zero bytes until you have 8 or 16 bytes left again.
Finally you have to encode the number of bits in those 8 or 16 bytes you have left. So multiply your input length in bytes by eight, and make sure you encode those bytes the way you'd expect in Java, with the least significant bits as far to the right as possible (big-endian). You might want to use https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html#putLong-long- for this if you don't want to program it yourself. You can probably forget about anything over 2^56 bytes anyway, so if you use SHA-384/SHA-512 you can simply set the first eight of the sixteen length bytes to zero.
And that's it, except that you still need to process that last block and then use as many bytes from the left as required for your particular output size.
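To make that concrete, here is a minimal sketch of SHA-224/256-style padding for a message that is already fully in memory (block size 64 bytes, 8-byte length field), not the streaming/buffered case described above; the class and method names are just for illustration:

    import java.nio.ByteBuffer;
    import java.util.Arrays;

    // Pads a complete message the SHA-224/256 way: 0x80, K zero bytes, then the
    // bit length as a 64-bit big-endian integer, so the total is a multiple of 64 bytes.
    public class Sha2Padding {

        static byte[] pad(byte[] message) {
            int blockSize = 64;                          // 128 for SHA-384/512
            long bitLength = (long) message.length * 8;  // L, in bits

            // message + 0x80 + K zero bytes + 8-byte length must fill whole blocks
            int rem = (message.length + 1 + 8) % blockSize;
            int zeroBytes = rem == 0 ? 0 : blockSize - rem;

            ByteBuffer buf = ByteBuffer.allocate(message.length + 1 + zeroBytes + 8);
            buf.put(message);
            buf.put((byte) 0x80);          // the single '1' bit, followed by seven '0' bits
            buf.put(new byte[zeroBytes]);  // the K '0' bits, as whole zero bytes
            buf.putLong(bitLength);        // ByteBuffer writes big-endian by default
            return buf.array();
        }

        public static void main(String[] args) {
            byte[] padded = pad("abc".getBytes());
            System.out.println(padded.length);   // 64: a 3-byte message pads to one full block
            System.out.println(Arrays.toString(
                Arrays.copyOfRange(padded, padded.length - 8, padded.length)));
        }
    }

So yes, even a short (or empty) message always ends up at a multiple of 512 bits; the shortest possible padded message is exactly one 512-bit block.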
What I understand so far is that address width is the number of bits in an address.
For example, a 4-bit address can take 2^4 = 16 possible values. What I'm really uncertain about is addressability. Based on what I learned, it is "the size of the most basic unit that can be named by an address". So, if we have a 4-bit address width and 2-bit addressability, what happens?
I've been really curious about this for a couple of weeks, but I'm still stuck.
Could you explain these things with a drawing or something?
I think you do get it. There is the number of address bits (the width, if you will), and there is the size of the unit those addresses refer to. So 8 bits of address means you have 256 things, and 16 bits of address, 65536 things. The size of the thing is completely independent of the number of address bits. From a programmer's perspective we almost always deal in units of bytes, so 8 bits of address would be 256 bytes and 32 bits of address would be 4 gigabytes. As you dig into the logic, though, it is often wasteful to use a byte-based address: if you have a peripheral with 32-bit-wide registers that can only be accessed as whole 32-bit registers, do you need to connect address lines 0 and 1? Often not. So at that peripheral the address bus, however wide it is, is often a subset of the address bus higher up, closer to the processor/software, and those address bits are in units of 32-bit words.
To make things even more confusing, memory parts are often specified in bits, even if they have an 8- or 16-bit data bus. So you might have a 4M part, but that is megabits, not megabytes...
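A toy sketch in Java with the numbers from the question (4-bit address width, 2-bit addressability); the variable names are just for illustration:

    // Address width tells you how many units exist; addressability tells you how big each unit is.
    public class Addressability {
        public static void main(String[] args) {
            int addressBits = 4;   // address width: 2^4 = 16 distinct addresses
            int unitBits = 2;      // addressability: each address names a 2-bit unit

            long units = 1L << addressBits;     // 16 addressable units
            long totalBits = units * unitBits;  // 32 bits of memory reachable in total

            System.out.println(units + " addressable units, " + totalBits + " bits of memory");
        }
    }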
I am aware of possible duplicate questions already on Stack Overflow, but those questions do not address my question directly. My understanding is that a 32-bit machine uses 32 bits to store memory addresses; therefore, the maximum memory it can have is 2^32 bits. However, 2^32 bits = 2^29 bytes = 2^29/10^9 ≈ 0.5 gigabytes.
I know that the answer should be 4 gigabytes, but I simply cannot figure out where my mistake is. HELP!
I believe that the 2^32 refers to the number of addressable bytes, not the total number of bits in memory. You can address 4 billion bytes (32 billion bits), or 4 gigs of memory. For instance:
    Address 0 | Address 1 | ... | Address 2^32 - 1
    ----------------------------------------------
      8 bits  |  8 bits   | ... |      8 bits
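Or, the same arithmetic as a minimal Java sketch:

    // 2^32 addresses, one byte each, expressed in GiB.
    public class ThirtyTwoBit {
        public static void main(String[] args) {
            long addresses = 1L << 32;   // 2^32 distinct addresses
            long bytes = addresses;      // each address names a byte, so no division by 8
            long gib = 1L << 30;         // 1 GiB = 2^30 bytes
            System.out.println(bytes / gib + " GiB");   // prints "4 GiB"
        }
    }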
EDIT:
"32-bit machine" usually refers to the number of bits you can stuff into the CPU's registers (not RAM). Thus one register holds 32 bits, which can address 2^32 bytes of RAM.
EDIT:
Here is a good explanation on superuser:
https://superuser.com/questions/56540/32-bit-vs-64-bit-systems