What is the difference between a 32-bit and a 64-bit processor?

What I understood in a very naive way about 32-bit vs 64-bit processors is:
On a 32-bit processor, an 'int' is represented as 32 bits, but on a 64-bit processor, an int is represented as 64 bits. Is that right? Is that the only difference? From this oversimplification, it is not clear why 64-bit is 'better' than 32-bit.
My other question is: if I have a simple client-server application (Java) where the client is 32-bit and the server is 64-bit, will that cause a problem when data is transferred between client and server?

In virtually every case, what makes a CPU 64-bit is the number of bits in its general-purpose registers, and as a consequence, the number of bits in a memory address.
Sadly, the C "int" does not vary with CPU size in practice: mainstream compilers keep it at 32 bits on both, and the sizes of the other types are fixed by the compiler's data model rather than by the CPU (pointers, though, do follow the address width).
64-bit and 32-bit (and 16-bit and 8-bit) CPUs can process 8, 16, 32 and 64 bit integers [and floating point]. What they cannot all do is address memory with 64-bit addresses [or 32-bit addresses in the case of 16-bit and 8-bit CPUs].
Therefore, it does not matter what size CPU a client or server program is running on. What CAN matter, if the author of the program is not aware of CPU architecture issues, is that without careful specification, different CPUs may send the bytes of multi-byte values (like the 8 bytes of a 64-bit integer or floating-point number) in different orders, and/or assume the incoming bytes are in a different order than they actually are.
My advice is to always order data to be transmitted in "little endian" order, because there IS only one little endian order. Sadly, there are several versions of big endian order, depending on CPU register/address size and CPU conventions for data-types larger than their register sizes. But not everyone agrees with me about this point.
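One way to make the byte order explicit, rather than relying on whatever the sending CPU happens to do, is to pack and unpack multi-byte values one byte at a time. Below is a minimal C sketch (the helper names are made up for illustration) that serializes a 64-bit integer in little-endian order as suggested above; in the question's Java setting, DataOutputStream/DataInputStream happen to write big-endian, and the key point is simply that both ends agree on one order.

#include <stdint.h>
#include <stddef.h>

/* Write a 64-bit value into a buffer least-significant byte first,
   regardless of the host CPU's native byte order. */
static void put_u64_le(uint8_t *buf, uint64_t value) {
    for (size_t i = 0; i < 8; i++)
        buf[i] = (uint8_t)(value >> (8 * i));
}

/* Read it back the same way on the receiving end. */
static uint64_t get_u64_le(const uint8_t *buf) {
    uint64_t value = 0;
    for (size_t i = 0; i < 8; i++)
        value |= (uint64_t)buf[i] << (8 * i);
    return value;
}

As long as both sender and receiver go through helpers like these, the register width and native endianness of either machine stop mattering.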

Related

What "64 bit" means?

I know this question seems obvious, but I don't manage to find a precise answer.
If on my laptop it is written "Windows 8 64 bit", what "64 bit" exactly refers to? (I know that "Windows 8" is just the name and version of the OS).
I have a few interpretations, but none of them make me entirely happy:
The virtual address space of a process is of size 2^64 units (with a unit being some small size). This definition does not make me happy, because even counting disk storage, the total storage of my computer is far less than that. So I would never be able to initialize an array of size 2^64 in a program.
The registers in the machine have a capacity of 64 bits. This also does not make me entirely happy, because my machine could have both 64-bit and 32-bit registers, and perhaps registers of smaller sizes.
The maximum capacity of the registers is 64 bits. This definition could be sensible, but looks "iffy".
So could anyone give me a clear definition, or at least tell that one of the above is correct?
"Windows 64 bit" means that the operating system supports 64-bit addressing.
This, in turn, implies that the CPU also supports 64-bit addressing.
The OS and the CPU are two entirely different things.
Runtime binaries (.exes and .dlls for Windows) are yet another "different thing". 32-bit and 64-bit .exe's have different binary formats, are loaded differently by the OS, and use different runtime resources.
You can't run a 64-bit OS on a 32-bit CPU. But you can run a 32-bit OS on a 64-bit CPU. Similarly, you can't use a 64-bit shared library or executable program on a 32-bit OS.
The key aspect of "64-bit" is 64-bit addressing: that both the CPU and the running program can address up to 2^64 bytes of virtual memory:
In practice, a running program will likely be able to address only a portion of that address space.
You can read more here:
https://en.wikipedia.org/wiki/64-bit_computing
PS:
Yes: CPU registers come in all different sizes. For example, ah is 8 bits, ax is 16 bits, eax is 32 bits, and rax is 64 bits. Furthermore, different registers do "different things". For "64-bit computing", we're primarily interested in the registers that load from and store to virtual memory.
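As a rough illustration only (this is an analogy, not how the hardware is implemented), the way those names alias parts of one register can be pictured with a C union. The sketch assumes a little-endian host such as x86, and it does not model details like writes to eax zeroing the upper half of rax.

#include <stdint.h>
#include <stdio.h>

/* Rough model: rax/eax/ax/ah/al are overlapping views of the same 64 bits. */
union reg64 {
    uint64_t rax;        /* all 64 bits  */
    uint32_t eax;        /* low 32 bits  */
    uint16_t ax;         /* low 16 bits  */
    struct {
        uint8_t al;      /* bits 0..7    */
        uint8_t ah;      /* bits 8..15   */
    } b;
};

int main(void) {
    union reg64 r;
    r.rax = 0x1122334455667788ULL;
    /* On a little-endian host this prints: eax=55667788 ax=7788 ah=77 al=88 */
    printf("eax=%08X ax=%04X ah=%02X al=%02X\n",
           (unsigned)r.eax, (unsigned)r.ax, (unsigned)r.b.ah, (unsigned)r.b.al);
    return 0;
}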

When writing a CPU emulator, how do you choose between simulating a 16, 32, or 64 bit processor?

If I were to write a really simple CPU emulator, how would you determine how many bits it is, i.e. 16 bits or 32 bits?
Now it all depends on what you want to do with your CPU.
Is it for self-learning purposes?
Take a CPU with a simple architecture.
Take a CPU which is commonly used (e.g. lots of documentation, third-party binaries you can use to test your emulator, etc.).
Do you want to use the CPU for a specific purpose?
I started with an 8-bit processor to write a Game Boy emulator.
(Here is the spec if you want to try: http://problemkaputt.de/pandocs.htm)
First define your need/goal, then choose whichever one fits it best.
Generally speaking a processor is characterised by its register size (machine word size). A 16 bit processor would typically have 16 bit registers, a 16 bit data bus and a 16 bit address bus (capable of addressing 64 kB of memory). It would make one memory access to read/write a register.
However, there are examples of other setups. For example the Intel 8088 CPU was a 16 bit processor with an 8 bit data bus and a 20 bit address bus (capable of addressing 1024 kB of memory). It did two memory accesses to read/write a register, and combined 16 bit segment registers and 16 bit offset registers into a full 20 bit address.
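To make that segment:offset trick concrete, here is a small hedged C sketch (the helper name is invented) of how a 20-bit real-mode address is formed from two 16-bit registers:

#include <stdint.h>
#include <stdio.h>

/* 8086/8088 real-mode address formation: physical = segment * 16 + offset,
   giving a 20-bit address from two 16-bit values. */
static uint32_t real_mode_address(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}

int main(void) {
    /* 0xF000:0xFFF0 is the classic x86 reset vector, physical address 0xFFFF0. */
    printf("0x%05X\n", (unsigned)real_mode_address(0xF000, 0xFFF0));
    return 0;
}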

How does register size affect processor performance?

I've been flying around the internet today trying to wrap my head around this topic. Here's what I've understood so far: the bigger the register size, the bigger the instructions a processor can handle?
Quote:
The size of the registers, which is sometimes called the word size, indicates the amount of data with which the computer can work at any given time.
Question 1:
How would this be explained in terms of dealing with RAM? Why would a 32-bit processor be less adept or slower at processing information in this case?
Also, the term addressing. So while a 64-bit processor can "address" 2^64 different locations in RAM, a 32-bit processor can only deal with 2^32.
Question 2:
What does addressing mean? And why would the ability to address more locations be more helpful?
Question 3:
How are these two points, 1) number of addressable locations and 2) instruction size, related?
I hope my questions aren't confusing. It would be nice if references and examples to RAM as well as comparisons between 32 and 64-bits would be given in the explanations.
As chux already stated, there can be a lot of different bus widths in a computer system. That said, I assume you're talking about usual PC architectures here. Now, to your questions:
Performance difference between 32 and 64 bit systems
The hardware is usually able to operate on bigger numbers than a 32-bit system, so it can, for example, sum two 64-bit numbers in one operation, while a 32-bit system would need at least two (plus some operations to combine the results; see the sketch after these points). This means software that does lots of operations on big numbers will probably be faster on a 64-bit system, but software that doesn't need big numbers will not be faster.
A 64-bit processor usually fetches bigger blocks of data from memory than a 32-bit one. If the data bus is 64 bits wide instead of 32, it will fetch twice as many bytes per access as the 32-bit system.
This one is actually a negative point of 64-bit systems: since you have more addressable memory, you also need more memory for each pointer, so a 64-bit application will use a little more memory than the same application compiled for a 32-bit system.
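As a concrete picture of the first point, this hedged C sketch does a 64-bit addition using only 32-bit arithmetic, propagating the carry by hand; it is roughly the extra work a compiler has to emit for a 32-bit target.

#include <stdint.h>
#include <stdio.h>

/* Add two 64-bit numbers represented as (high, low) 32-bit halves:
   add the low halves, detect the carry, then add the high halves plus carry. */
static void add64_with_32bit_ops(uint32_t a_lo, uint32_t a_hi,
                                 uint32_t b_lo, uint32_t b_hi,
                                 uint32_t *r_lo, uint32_t *r_hi) {
    uint32_t lo = a_lo + b_lo;
    uint32_t carry = (lo < a_lo) ? 1u : 0u;  /* unsigned wrap-around means carry out */
    *r_lo = lo;
    *r_hi = a_hi + b_hi + carry;
}

int main(void) {
    uint32_t lo, hi;
    /* 0x1FFFFFFFF + 0x000000001 = 0x200000000 */
    add64_with_32bit_ops(0xFFFFFFFFu, 0x00000001u, 0x00000001u, 0x00000000u, &lo, &hi);
    printf("0x%08X%08X\n", (unsigned)hi, (unsigned)lo);
    return 0;
}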
Memory addressing
The memory address is a number that uniquely identifies a position in memory where data is stored. With a 32-bit number, you can address 2^32 positions, which is roughly 4 GB. This is why 32-bit PCs cannot use more than 4 GB of memory (they actually can, with some restrictions; see PAE). Using 64-bit numbers means the computer can now address 2^64 positions, which means it could, in principle, use up to 16 exbibytes of memory. In practice, other limits prevent a PC from having all that memory.
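The numbers in that paragraph come straight from powers of two; here is a tiny C sketch of the arithmetic (using the binary units GiB/EiB, which differ slightly from the rounded GB figures above):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Bytes reachable with one byte per address, for a few address widths. */
    uint64_t bytes32 = 1ULL << 32;  /* 4 GiB: the classic 32-bit limit          */
    uint64_t bytes36 = 1ULL << 36;  /* 64 GiB: 36-bit physical addresses (PAE)  */
    printf("32-bit addresses: %llu bytes\n", (unsigned long long)bytes32);
    printf("36-bit addresses: %llu bytes\n", (unsigned long long)bytes36);
    /* 2^64 bytes (16 EiB) does not fit in a uint64_t count, so just state it. */
    printf("64-bit addresses: 2^64 bytes = 16 EiB\n");
    return 0;
}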
Addressable locations vs Memory Size
Since lots of instructions need to reference a memory position, some of them will have to be bigger so that they have room for memory addresses.
Bigger instructions usually mean bigger program code, but this is not a problem in most cases, because the difference isn't that big, and because most of a program's size is usually data rather than code.
Disclaimer: not everything I said is valid for every software/architecture. There are a lot of details that may have more impact on performance and memory usage than the points I wrote here.
The bit width of a processor's registers, its addressing range, and the internal/external bus width between the processor and RAM are independent.
A 32-bit processor can commonly handle 32-bit addresses, but it may handle only 24, or maybe even 64. Many combinations have occurred.
Addressing range is the maximum range, from 0 to N-1, of unique addresses that can be generated. Whether there are truly N locations of memory is another matter.
The width of the bus between CPU and RAM dramatically affects performance. This width, independent of CPU register size and RAM size, throttles throughput.
Addressing range and register size tend to correlate: units with wider registers usually have a wider address range. But there is no rule that forces the two to be the same.
I suggest reviewing CPU architectures, microcontrollers, and the theoretical Turing machine.

What is the difference between a 32-bit and 64-bit processor?

I have been trying to read up on 32-bit and 64-bit processors (http://en.wikipedia.org/wiki/32-bit_processing). My understanding is that a 32-bit processor (like x86) has registers 32-bits wide. I'm not sure what that means. So it has special "memory spaces" that can store integer values up to 2^32?
I don't want to sound stupid, but I have no idea about processors. I'm assuming 64-bits is, in general, better than 32-bits. Although my computer now (one year old, Win 7, Intel Atom) has a 32-bit processor.
All calculations take place in the registers. When you're adding (or subtracting, or whatever) variables together in your code, they get loaded from memory into the registers (if they're not already there, but while you can declare an infinite number of variables, the number of registers is limited). So, having larger registers allows you to perform "larger" calculations in the same time. Not that this size-difference matters so much in practice when it comes to regular programs (since at least I rarely manipulate values larger than 2^32), but that is how it works.
Also, certain registers are used as pointers into your memory space and hence limit the maximum amount of memory that you can reference. A 32-bit processor can only reference 2^32 bytes (which is about 4 GB of data). A 64-bit processor can obviously manage a whole lot more.
There are other consequences as well, but these are the two that come to mind.
First, 32-bit and 64-bit are the names of architectures.
These architectures indicate how much data a microprocessor will process within one instruction cycle, i.e. fetch-decode-execute.
In one second there might be thousands to billions of instruction cycles, depending on the processor design.
32-bit means that a microprocessor can process 4 bytes of data in one instruction cycle, while 64-bit means that a microprocessor can process 8 bytes of data in one instruction cycle.
Since the microprocessor needs to talk to other parts of the computer to get and send data (memory, the data bus, the video controller, etc.), those parts should theoretically also support 64-bit data transfer. However, for practical reasons such as compatibility and cost, the other parts might still talk to the microprocessor in 32 bits. This happened in the original IBM PC, where its 8088 microprocessor was capable of 16-bit execution while it talked to the other parts of the computer in 8 bits, for reasons of cost and compatibility with existing parts.
Imagine that on a 32-bit computer you need to write 'a' as 'A', i.e. capitalize it: the operation only requires 2 bytes, while the computer reads a full 4 bytes of data, resulting in overhead. That overhead grows to 6 wasted bytes on a 64-bit computer. So 64-bit computers are not necessarily faster all the time.
Remember that 64-bit Windows can run on a microprocessor only if it supports 64-bit execution.
The processor fetches data from memory (i.e. RAM) by giving its address to the MAR (Memory Address Register). The selector electronics then find that address in the memory bank, retrieve the data, and put it in the MDR (Memory Data Register). This data is recorded in one of the registers in the processor for further processing. That's why the size of the data bus determines the size of the registers in the processor. Now, if my processor has 32-bit registers, it can fetch only 4 bytes of data at a time. If the data size exceeds 32 bits, it requires two fetch cycles to bring the data in. This slows a 32-bit machine down compared to a 64-bit one, which completes the operation in ONE fetch cycle. So obviously, for smaller data, it makes no difference if the processors are clocked at the same speed.
Again, with a 64-bit processor and a 64-bit OS, my instructions will always be 64 bits in size... which unnecessarily uses up more memory space.
32-bit processors address a memory bank with a 32-bit address. So you can have 2^32 memory cells and therefore a limited amount of addressable memory (~4 GB). Even if you add another memory bank to your machine, it cannot be addressed. 64-bit machines, by contrast, can address up to 2^64 memory cells.
This answer is probably 9 years too late, but I feel that the above answers don't adequately answer the question.
The terms 32-bit and 64-bit are not precisely defined or regulated by any standards body. They are merely intuitive concepts. A 32-bit or 64-bit CPU generally refers to the native word size of the CPU's instruction set architecture (ISA). So what is an ISA, and what is a word size?
ISA and word size
The ISA is the set of machine instructions / assembly mnemonics used by the CPU. They are the lowest level of software and directly tell the hardware what to do. Example:
ADD r2,r1,r3 # add instruction in ARM architecture to do r2 = r1 + r3
# r1, r2, r3 refer to values stored in register r1, r2, r3
# using ARM since Intel isn't the best when learning about ISA
The old definition of word size was the number of bits the CPU can compute in one instruction cycle. In a modern context the word size is the default size of the registers, or the size of the registers the basic instructions act upon (I know I kept a lot of ambiguity in this definition, but it's an intuitive concept across multiple architectures which don't completely match with each other). Example:
ADD16 r2,r1,r3 # perform addition half-word wise (assuming 32 bit word size)
ADD r2,r1,r3 # default add instruction works in terms of the word size
Example bit-ness of a Pentium Pro CPU with PAE
First, the various word sizes in the general-purpose instructions:
Arithmetic, logical instructions: 32 bit (note that this violates the old concept of word size, since multiply and divide take more than one cycle)
Branch, jump instructions: 32 bit for indirect addressing, 16 bit for immediate (again, Intel isn't a great example because of its CISC ISA, and there is enough complexity here)
Move, load, store: 32 bit for indirect, 16 bit for immediate (these instructions may take several cycles, so the old definition of word size does not hold)
Second, bus and memory access sizes in hardware architecture:
Logical address size before virtual address translation: 32 bit
Virtual address size: 64-bit
Physical address size post translation: 36 bit (system bus address bus)
System bus data bus size: 256 bit
So from all the above sizes, most people intuitively called this a 32-bit CPU (despite no clear consensus on ALU word size and address bit size).
An interesting point to note here is that in the olden days (the 70s and 80s) there were CPU architectures whose ALU word size was very different from their memory access size. Also note that we haven't even dealt with the quirks in non-general-purpose instructions.
Note on Intel x86_64
Contrary to popular belief, x86_64 is not a 64-bit architecture in the truest sense of the word. It is a 32-bit architecture that supports extension instructions which can do 64-bit operations. It also supports a 64-bit logical address size. Intel themselves call this ISA IA-32e (IA-32 extended, with IA-32 being their 32-bit ISA).
References
ARM instruction examples
Intel addressing modes
From here:
The main difference between 32-bit processors and 64-bit processors is the speed at which they operate. 64-bit processors can come in dual-core, quad-core, and six-core versions for home computing (with eight-core versions coming soon). Multiple cores allow for increased processing power and faster computer operation. Software programs that require many calculations to function operate faster on the multi-core 64-bit processors, for the most part. It is important to note that 64-bit computers can still use 32-bit based software programs, even when the Windows operating system is a 64-bit version.
Another big difference between 32-bit processors and 64-bit processors is the maximum amount of memory (RAM) that is supported. 32-bit computers support a maximum of 3-4 GB of memory, whereas a 64-bit computer can support memory amounts over 4 GB. This is important for software programs that are used for graphical design, engineering design, or video editing, where many calculations are performed to render images, drawings, and video footage.
One thing to note is that 3D graphics programs and games do not benefit much, if at all, from switching to a 64-bit computer, unless the program is a 64-bit program. A 32-bit processor is adequate for any program written for a 32-bit processor. In the case of computer games, you'll get a lot more performance by upgrading the video card instead of getting a 64-bit processor.
In the end, 64-bit processors are becoming more and more commonplace in home computers. Most manufacturers build computers with 64-bit processors due to cheaper prices and because more users are now using 64-bit operating systems and programs. Computer parts retailers are offering fewer and fewer 32-bit processors and soon may not offer any at all.
32-bit and 64-bit basically refer to the register size; a register is the fastest type of memory and is closest to the CPU. A 64-bit register can hold more data for addressing and transmission than a 32-bit register, but there are other factors on which a processor's speed depends as well, such as the number of cores, cache memory, architecture, etc.
Reference: Difference between 32-bit processor and 64-bit processor
From what is the meaning of 32 bit or 64 bit process?? by kenshin123:
The virtual addresses of a process are the mappings of an address table that correspond to real physical memory on the system. For reasons of efficiency and security, the kernel creates an abstraction for a process that gives it the illusion of having its own address space. This abstraction is called a virtual address space. It's just a table of pointers to physical memory.
So a 32-bit process is given about 2^32 or 4 GB of address space. What this means under the hood is that the process is given a 32-bit page table. In addition, this page table has a 32-bit VAS that maps to 4 GB of memory on the system.
So yes, a 64-bit process has a 64-bit VAS. Does that make sense?
There are 8 bits in a byte, so if it's 32-bit you are processing 4 bytes of data per cycle at whatever GHz or MHz your CPU is clocked at. So if a 64-bit CPU and a 32-bit CPU are clocked at the same speed, the 64-bit CPU would be faster.
32-bit processors process 32 bits of data per cycle, at whatever GHz the processor runs, and 64-bit processors process 64 bits of data per cycle at whatever speed your PC has. Also, 32-bit processors work with at most 4 GB of RAM.

What are 16, 32 and 64-bit architectures?

What do 16-bit, 32-bit and 64-bit architectures mean in case of Microprocessors and/or Operating Systems?
In the case of microprocessors, does it mean the maximum size of the general-purpose registers, or the size of an integer, or the number of address lines, or the number of data-bus lines, or what?
What do we mean by saying "DOS is a 16-bit OS", "Windows is a 32-bit OS", etc.?
My original answer is below, if you want to understand the comments.
New Answer
As you say, there are a variety of measures. Luckily for many CPUs a lot of the measures are the same, so there is no confusion. Let's look at some data (Sorry for image upload, I couldn't see a good way to do a table in markdown).
As you can see, many columns are good candidates. However, I would argue that the size of the general purpose registers (green) is the most commonly understood answer.
When a processor's registers vary a lot in size, it will often be described in more detail, e.g. the Motorola 68k being described as a 16/32-bit chip.
Others have argued it is the instruction bus width (yellow) which also matches in the table. However, in today's world of pipelining I would argue this is a much less relevant measure for most applications than the size of the general purpose registers.
Original answer
Different people can mean different things, because as you say there are several measures. So for example someone talking about memory addressing might mean something different from someone talking about integer arithmetic. However, I'll try to define what I think is the common understanding.
My take is that for a CPU it means "The size of the typical register used for standard operations" or "the size of the data bus" (the two are normally equivalent).
I justify this with the following logic. The Z80 has an 8bit accumulator and an 8 bit databus, while having 16bit memory addressing registers (IX, IY, SP, PC), and a 16bit memory address bus. And the Z80 is called an 8bit microprocessor. This means people must normally mean the main integer arithmetic size, or databus size, not the memory addressing size.
It is not the size of instructions, as the Z80 (again) had 1-, 2- and 3-byte instructions, though of course the multi-byte ones were read in multiple reads. In the other direction, the 8086 is a 16-bit microprocessor and can read 8- or 16-bit instructions. So I would have to disagree with the answers that say it is instruction size.
For Operating systems, I would define it as "the code is compiled to run on a CPU of that size", so a 32bit OS has code compiled to run on a 32 bit CPU (as per the definition above).
How many bits a CPU "is" refers to its instruction word length.
On a 32-bit CPU, the word length of such an instruction is 32 bits, meaning that this is the width the CPU can handle as instructions or data, often resulting in a bus of that width.
For a similar reason, registers have the size of the CPU's word length, but you often have larger registers for different purposes.
Take the PDP-8 computer as an example. This was a 12-bit computer. Each instruction was 12 bits long. To handle data of the same width, the accumulator was also 12 bits.
But what made the PDP-8 a 12-bit machine was its instruction word length. It had twelve switches on the front panel with which it could be programmed, instruction by instruction.
This is a good example to break out of the 8/16/32 bit focus.
The bit count is also typically the size of the address bus. It therefore usually tells the maximum addressable memory.
There's a good explanation of this at Wikipedia:
In computer architecture, 32-bit integers, memory addresses, or other data units are those that are at most 32 bits (4 octets) wide. Also, 32-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size. 32-bit is also a term given to a generation of computers in which 32-bit processors were the norm.
Now let's talk about OS.
With OSes, this is far less bound to the actual "bitness" of the CPU; it usually reflects how the opcodes are assembled (for which word length of the CPU), how registers are addressed (you can't load a 32-bit value into a 16-bit register), and how memory is addressed. Think of it as the completed, compiled program: it is stored as binary instructions and therefore has to fit the CPU's word length. Task-wise, it has to be able to address the whole of memory, otherwise it couldn't do proper memory management.
But what it comes down to is that whether a program is 32-bit or 64-bit (an OS is essentially a program here) is a matter of how its binary instructions are stored and how registers and memory are addressed. All in all, this applies to all kinds of programs, not just OSes. That's why you have programs compiled for 32-bit or for 64-bit.
The difference comes down to the bit width of the data passed to a general-purpose register to operate on: a 16-bit register can operate on 2 bytes at a time, a 64-bit one on 8 bytes. You can often increase a processor's throughput by executing denser instructions per clock cycle.
The definitions are marketing terms more than precise technical terms.
In fuzzy technical terms, they are more related to architecturally visible widths than to any real implementation register or bus width. For instance, the 68008 was classed as a 32-bit CPU, but had 16-bit registers in the silicon and only an 8-bit data bus and 20-odd address bits.
See http://en.wikipedia.org/wiki/64-bit#64-bit_data_models: the data models define what the bitness means for the language.
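Those data models can be observed directly: a small C sketch that prints different sizes depending on whether the compiler targets ILP32, LP64, or LLP64 (the commented values are the usual ones, not guarantees from the C standard):

#include <stdio.h>

int main(void) {
    /* Typical results:
       ILP32 (32-bit builds):       int=4 long=4 long long=8 void*=4
       LP64  (64-bit Linux/macOS):  int=4 long=8 long long=8 void*=8
       LLP64 (64-bit Windows):      int=4 long=4 long long=8 void*=8 */
    printf("int=%zu long=%zu long long=%zu void*=%zu\n",
           sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
    return 0;
}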
The "OS is x-bit" phrase usually means that the OS was written for x-bit cpu mode, that is, 64-bit Windows uses long mode on x86-64, where registers are 64 bits and address space is 64-bits large and there are other distinct differences from 32-bits mode, where typically registers are 32-bits wide and address space is 32-bits large. On x86 a major difference between 32 and 64 bits modes is presence of segmentation in 32-bits for historical compatibility.
Usually the OS is written with CPU bitness in mind, x86-64 being a notable example of decades of backwards compatibility - you can have everything from 16-bit real-mode programs through 32-bits protected-mode programs to 64-bits long-mode programs.
Plus there are different ways to virtualise, so your program may run as if in 32-bits mode, but in reality it is executed by a non-x86 core at all.
When we talk about 2^n-bit architectures (16-, 32-, or 64-bit) in computer science, we are basically talking about register width, address-bus size, or data-bus size. The basic concept behind the term is that those 2^n bits can be used by processes to address and to transport data of size 2^n bits.
As far as I know, technically it's the width of the integer pathways. I've heard of 16-bit chips that have 32-bit addressing. However, in reality, it is the address width: sizeof(void*) is 16 bits on a 16-bit chip, 32 bits on a 32-bit chip, and 64 bits on a 64-bit chip.
This leads to problems because C and C++ allow conversions between void* and integral types, which are safe only if the integral type is large enough (the same size as the pointer). This led to all sorts of unsafe code along the lines of
void* p = something;
int i = (int)p;
which will horrifically crash and burn in 64-bit code (it works on 32-bit) because void* is now twice as big as int.
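If a pointer really has to be round-tripped through an integer, the portable option (since C99, in <stdint.h>) is intptr_t/uintptr_t, integer types defined to be wide enough to hold a pointer; a minimal sketch:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 42;
    void *p = &x;

    /* uintptr_t is wide enough to hold a pointer on both 32-bit and 64-bit
       targets, so the round trip does not truncate the address the way a
       plain int does in 64-bit code. */
    uintptr_t bits = (uintptr_t)p;
    void *q = (void *)bits;

    printf("%d\n", *(int *)q);  /* prints 42 */
    return 0;
}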
In most languages, you have to work hard to care about the width of the system you're working on.
