I'm aware that 32k = 32 * 2^10, which equals **32768 memory addresses**. However, it is the "byte" part that is throwing me off here. Does that indicate the width of the addressable memory?
How does the byte come into play here? Does it indicate byte-addressable memory? If it were word-addressable, would that cut the number of available memory locations in half? Thanks for any replies in advance
There are basically two things in a computer architecture:
1: Address bus (n-bit processor)
2: Memory
The address bus defines the number of addresses in your system: if you have a 16-bit processor, you have a 2^16 address space.
If you have 32K bytes of RAM, that represents the memory of your system.
Now, how this 32K of memory is laid out within the 2^16 address space is a separate question, and how the memory is accessed depends on the type of addressing: byte-addressable or word-addressable.
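As a quick sanity check on the byte-vs-word question, here is the arithmetic for the same 32K byte RAM under both addressing schemes (a minimal sketch, assuming 16-bit words for the word-addressable case):

```python
RAM_BYTES = 32 * 1024  # "32K byte" RAM = 32768 bytes

# Byte-addressable: every byte gets its own address.
byte_locations = RAM_BYTES

# Word-addressable with 16-bit (2-byte) words: one address per word,
# so there are half as many addressable locations.
word_locations = RAM_BYTES // 2

print(byte_locations)  # 32768
print(word_locations)  # 16384
```

So yes, moving from byte-addressable to 2-byte word-addressable halves the number of distinct memory locations, while the total amount of storage stays the same.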
Related
I have a requirement for a large video frame buffer that needs to be physically contiguous. So my question is: when a kernel driver requests physically contiguous memory, will the virtual addresses returned by the kernel be contiguous or non-contiguous?
Update:
My apologies, let me add more details. For a video buffer of resolution 640x480 with 1 byte per pixel, the total memory expected is 307200 bytes (640x480). For a system that uses 4KiB pages, the buffer above will need 75 pages in total.
Now let's assume that somehow this 307200-byte memory block is physically contiguous. When the kernel returns the virtual address of each page, will those pages be contiguous or non-contiguous?
Contiguous - the kernel virtual address mapping (for the direct-mapped lowmem region) is generally 1:1 with physical memory (i.e. V = P + offset)
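The page arithmetic from the question, combined with the direct-mapping rule from this answer, can be sketched as follows. The `PAGE_OFFSET` value below is the classic 32-bit x86 default, used here purely as an illustrative assumption:

```python
import math

# Frame buffer from the question: 640x480, 1 byte per pixel.
buf_bytes = 640 * 480 * 1          # 307200 bytes
PAGE_SIZE = 4096                   # 4 KiB pages
pages = math.ceil(buf_bytes / PAGE_SIZE)
print(pages)                       # 75 pages

# Direct mapping: V = P + offset, so physically contiguous pages
# get virtually contiguous kernel addresses as well.
PAGE_OFFSET = 0xC0000000           # assumed classic 32-bit x86 value
phys_start = 0x01000000            # hypothetical physical start address
virt = [PAGE_OFFSET + phys_start + i * PAGE_SIZE for i in range(pages)]
assert all(virt[i + 1] - virt[i] == PAGE_SIZE for i in range(pages - 1))
```

Because the offset is a constant, contiguity in physical memory carries over directly to the kernel's virtual view of those pages.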
While I am trying to understand the high-memory problem for 32-bit CPUs and Linux, why is there no high-memory problem for 64-bit CPUs?
In particular, how is the division of virtual memory into kernel space and user space changed, so that the requirement for high memory doesn't exist on 64-bit CPUs?
Thanks.
A 32-bit system can only address 4GB of memory. In Linux this is divided into 3GB of user space and 1GB of kernel space. This 1GB is sometimes not enough to permanently map all physical memory, so the kernel might need to map and unmap areas of memory on demand, which incurs a fairly significant performance penalty. The physical memory that cannot fit into that permanent 1GB mapping is called "high" memory, hence the name "high memory problem".
A 64-bit system can address a huge amount of memory (16 EB), so this issue does not occur there.
With 32-bit addresses, you can only address 2^32 bytes of memory (4GB). So if you have more than that, you need to address it in some special way. With 64-bit addresses, you can address 2^64 bytes of memory without special effort, and that number is far bigger than all the memory that exists on the planet.
That number of bits refers to the word size of the processor. Among other things, the word size typically matches the size of a memory address on your machine. The size of the memory address determines how many bytes can be referenced uniquely. So, doing some simple math, we find that on a 32-bit system at most 2^32 = 4294967296 memory addresses exist, meaning you have a mathematical limit of about 4GB of RAM.
However, on a 64-bit system you have 2^64 ≈ 1.8446744e+19 memory addresses available. This means that your computer can theoretically reference almost 20 exabytes of RAM, which is more RAM than anyone has ever needed in the history of computing.
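The raw numbers in these two answers are easy to verify:

```python
# 32-bit addresses: 2^32 unique byte addresses.
print(2 ** 32)   # 4294967296 bytes = 4 GiB

# 64-bit addresses: 2^64 unique byte addresses.
print(2 ** 64)   # 18446744073709551616 bytes = 16 EiB (~1.84e19)
```

The jump from 32 to 64 address bits does not double the reachable memory; it squares it, which is why the high-memory workarounds simply become unnecessary.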
Until now I thought that a 32-bit processor can use 4 GiB of memory because 2^32 bytes is 4 GiB, but this reasoning assumes the processor has a word size of 1 byte. So a process with a 32-bit program counter can address 2^32 different memory words, and hence we have 4 GiB.
But most processors nowadays have a word size larger than 1 byte (my understanding is that the word size equals the width of the data bus, so a processor with a 64-bit data bus must have a word size of 8 bytes).
The same processor with a 32-bit program counter could then address 2^32 different memory words, but with 8-byte words it could address more memory, which contradicts the 4 GiB limit. So what is wrong in my argument?
Your premise is incorrect. 32-bit architectures can address more than 4GB of memory, just like most (if not all) 8-bit microcontrollers can use more than 256 bytes of memory. Indeed, a 32-bit program counter can address 2^32 different memory locations, but word-addressable memory is only used in architectures for very special purposes like DSPs, or in antique architectures of the past. Modern architectures for general computing all use byte-addressable memory.
See Why byte-addressable memory and not 4-byte-addressable memory?
Even in 32-bit byte-addressable architectures there are many ways to access more than 4GB of memory. For example, a 64-bit JVM can address 32GB of memory with 32-bit pointers using compressed Oops. See the Trick behind JVM's compressed Oops
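A sketch of the arithmetic behind compressed Oops: the shift value of 3 comes from the JVM's default 8-byte object alignment, which guarantees the low 3 bits of every object address are zero and thus free to reclaim.

```python
OOP_BITS = 32       # a compressed reference is 32 bits
ALIGN_SHIFT = 3     # objects aligned to 8 bytes -> low 3 bits are always 0

# Decoding: address = heap_base + (compressed_ref << ALIGN_SHIFT),
# so 32-bit references reach 2^(32+3) = 2^35 bytes of heap.
reachable = 2 ** (OOP_BITS + ALIGN_SHIFT)
print(reachable)             # 34359738368
print(reachable // 2 ** 30)  # 32 (GiB)
```

In other words, the trick trades address granularity (8-byte steps instead of 1-byte steps) for reach, stretching a 32-bit reference over a 32 GiB heap.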
32-bit x86 CPUs can also address 64GB (or more in later versions) of memory with PAE. It basically adds another level to the page-table hierarchy along with a few more physical address bits. That allows the whole system to access more than 4GB of memory. However, the pointers in applications are still 32 bits long, so each process is still limited to 4GB at most. The analog on ARM is LPAE.
The 4GB address space of each process is often split into user and kernel space (before Meltdown), limiting the usable memory even further. There are several ways to work around this:
Spawning multiple processes, which is used in Adobe Premiere CS4
Mapping the needed part of memory into the current address space, like Address Windowing Extensions on Windows
...
The CPU (at least the 32-bit x86 family) must be able to access any byte/word/dword in the 4GB space. So an instruction is encoded in such a way that the target operand size and the memory address (usually) belong to different bit-fields. So it doesn't matter whether the CPU accesses a byte or a dword; the encoded memory address is the same.
Note that a 32-bit OS on an x86 CPU is technically able to access more than a 4GB address space using PAE mode. But it is not supported by, say, the current Windows OS family (except Server editions). Some versions of WinXP, as well as Linux and other 32-bit OSes, can address 64GB of memory on x86 CPUs.
Also, the OS usually reserves some part of the virtual address space (for the OS kernel, video memory, etc.), so user programs may use, say, no more than 3 GB of the 4 GB the OS can address within each process.
Here is the problem I am working on
The Problem: A high speed workstation has 64 bit words and 64 bit addresses with address resolution at the byte level. How many words can be in the address space of the workstation?
I defined the different terms in the problem
Word Size - The processor's natural unit of data. The word size determines the amount of information that can be processed in one go.
Byte Level Addressing - Hardware architectures that support accessing individual bytes within a word
64 Bit Addressing - You have 64 bits to specify an address in runtime memory that holds an instruction or data
Address Space - Running program's view of memory in the system
How would you go about using all these definitions to solve this problem?
From 64-bit addresses, I know that technically there are 2^64 locations in memory, and from 64-bit words, that the processor processes 8 bytes at a time. But I don't know how to use that information to conclude how many words are in the address space of the computer.
Thanks to aruisdante's comment, I was able to figure this out.
Basically, 64-bit addresses mean there are 2^64 total addresses. Because byte-addressable memory is used here, each address stores one byte.
This means that, in total, the address space holds 2^64 bytes. The problem tells you that the machine has 64-bit words, i.e. each word is 8 bytes long. Therefore you have 2^64/8 = 2^64/2^3 = 2^61 words in the address space.
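The computation above, spelled out:

```python
ADDR_BITS = 64
WORD_BYTES = 8                       # 64-bit words

total_bytes = 2 ** ADDR_BITS         # byte-addressable: 2^64 addresses, 1 byte each
words = total_bytes // WORD_BYTES    # 2^64 / 2^3
assert words == 2 ** 61
print(words)  # 2305843009213693952
```

The key step is noticing that byte-level resolution means the 2^64 addresses count bytes, not words, so you divide by the word size to get the word count.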
The address space for a 32-bit system is 0x00000000 to 0xffffffff. From what I understand, this address space is split among system memory (RAM), ROM and memory-mapped peripherals. If the entire address space were used to address only the 4GB of RAM, all RAM bytes would be accessible. But since the address space is shared with other memory-mapped peripherals, does this mean that some RAM will be unaddressable/unutilized?
Here is the memory map of a typical x86 system. As you can see, the lower ranges of memory are riddled with BIOS and ROM data with small gaps in between. There's a substantial portion reserved for memory-mapped devices in the upper ranges. All of these details may vary between platforms. It's nothing short of a nightmare to detect which memory areas can be safely used.
The kernel also typically reserves a large portion of the available memory for its internals, buffers and cache.
With the advent of virtual addressing, the kernel can present the address space as one consistent and gapless memory range, even though that is not necessarily true behind the scenes.
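A toy model of the point made in this answer: subtract some reserved/memory-mapped regions from a 32-bit address space to see how much RAM remains directly addressable below 4 GB. The region boundaries here are illustrative assumptions, not a real platform's map:

```python
ADDR_SPACE = 2 ** 32  # 4 GiB of 32-bit address space

# Hypothetical reserved ranges as (start, end), end exclusive.
reserved = [
    (0x000A0000, 0x00100000),   # legacy VGA/BIOS hole (illustrative)
    (0xC0000000, 0x100000000),  # memory-mapped devices (illustrative)
]

reserved_bytes = sum(end - start for start, end in reserved)
usable = ADDR_SPACE - reserved_bytes
print(usable)  # bytes of RAM addressable below 4 GiB in this toy layout
```

So yes: in a plain 32-bit layout, RAM that sits "behind" a memory-mapped region has no address left for it and goes unused unless the platform can remap it (e.g. above 4GB with PAE).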