The question is basically a follow-up to this thread:
Using a 64 bit driver in a 32 bit program (Windows)
As I learned there, when I have a 64-bit driver that is used via a set of 64-bit DLLs, I cannot have a 32-bit process calling those DLLs. We now use some funny interprocess communication to work around this.
What's unclear is how an automatic 64<->32 bit translation happens when using a "standard device" like a graphics card. Any 32-bit application under 64-bit Windows should be able to use a printer driver or draw something with GDI by calling some Windows DLLs. Somewhere Microsoft has to make a translation from 32 bit to the 64-bit hardware driver for the graphics card or printer. I know that WoW64 does that for registry and file system access, but does it also translate calls to standard drivers?
The specific question is: if we had a 64-bit WDM driver for the hardware, could a 32-bit application easily use it, with Windows doing the 64<->32 translation?
"Standard devices" are considered "standard" because Windows themselves takes responsibility for them. In the case of 64-bits Windows, that means there are both 64 bits and 32 bits DLLs. The 32 bit DLLs are special, and can talk to the 64 bits kernel (including drivers in that kernel). In general, the 32 bits DLLs do not talk to 64 bits DLLs, as there is no 64 bits process in which the latter DLLs could be loaded.
Related
I know this question seems obvious, but I haven't managed to find a precise answer.
If my laptop says "Windows 8 64 bit", what exactly does "64 bit" refer to? (I know that "Windows 8" is just the name and version of the OS.)
I have a few interpretations, but none of them make me entirely happy:
The virtual address space of a process has size 2^64 units (with a unit being some small size). This definition does not make me happy, because even counting disk storage, the total storage of my computer is far less than that, so I would never be able to initialize an array of size 2^64 in a program.
The CPU registers have a capacity of 64 bits. This also does not make me entirely happy, because my machine could have both 64-bit and 32-bit registers, and perhaps registers of smaller size.
The maximum capacity of registers is 64 bits. This definition could be sensible, but looks "iffy".
So could anyone give me a clear definition, or at least tell that one of the above is correct?
"Windows 64 bit" means that the operating system supports 64-bit addressing.
This, in turn, implies that the CPU also supports 64-bit addressing.
The OS and the CPU are two entirely different things.
Runtime binaries (.exes and .dlls for Windows) are yet another "different thing". 32-bit and 64-bit .exe's have different binary formats, are loaded differently by the OS, and use different runtime resources.
You can't run a 64-bit OS on a 32-bit CPU. But you can run a 32-bit OS on a 64-bit CPU. Similarly, you can't use a 64-bit shared library or executable program on a 32-bit OS.
The key aspect of "64-bit" is 64-bit addressing: that both the CPU and the running program can address up to 2^64 bytes of virtual memory:
In practice, a running program will likely be able to address only a portion of that address space.
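A small illustration of that gap: the pointer width sets the theoretical ceiling on the address space, while the OS hands each process only a slice of it. This compiles as plain C on either a 32-bit or a 64-bit target:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Pointer width sets the theoretical ceiling ... */
        printf("pointer width: %zu bits\n", sizeof(void *) * 8);
        printf("largest theoretical address: %ju\n", (uintmax_t)UINTPTR_MAX);
        /* ... but the OS reserves large parts of that range, so a
           process can actually use only a portion of it. */
        return 0;
    }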
You can read more here:
https://en.wikipedia.org/wiki/64-bit_computing
PS:
Yes: CPU registers come in all different sizes. For example, ah is 8 bits, ax is 16 bits, eax is 32 bits, and rax is 64 bits. Furthermore, different registers do "different things". For "64-bit computing", we're primarily interested in the registers that load from and store to virtual memory.
I want to define a large pointer (64 bit or 128 bit) in gcc that does not depend on the platform.
I think there is something like __ptr128 or __ptr64 in MSDN.
sizeof(__ptr128) is 16 bytes.
sizeof(__ptr64 ) is 8 bytes.
Is it possible?
It can be useful when you use kernel functions in a 64-bit OS that require an 8-byte pointer argument, and you have a 32-bit application that uses 32-bit addresses and wants to call such a kernel function.
Your question makes no sense. Pointers, by definition, hold the memory address of something, so their size must depend upon the platform. How would you dereference a 128-bit pointer on a hardware platform supporting 64-bit addressing?!
You can create 64 or 128-bit values, but a pointer is directly related to the memory addressing scheme of the underlying hardware.
EDIT
With your additional statement, I think I see what you're trying to do. Unfortunately, I doubt it's possible. If the kernel function you want to use takes a 64-bit pointer argument, it's highly likely to be a 64-bit function (unless you're developing for some unusual hardware).
Even though it's technically possible to mix 64-bit instructions into a 32-bit executable, no compiler will actually let you do this. A 64-bit API call will use 64-bit code, 64-bit registers and a 64-bit stack - it would be extremely awkward for the compiler and operating system to manage arbitrary switching from a 32-bit environment to a 64-bit environment.
You should look at finding the equivalent API for a 32-bit environment. Perhaps you could post the kernel function prototype (name+parameters) you want to use and someone could help you find a better solution.
Just so there's no confusion, __ptr64 in MSDN is not platform independent:
"On a 32-bit system, a pointer declared with __ptr64 is truncated to a 32-bit pointer."
Can't comment, but the statement that you can't use 64-bit instructions in a "32 bit executable" is misleading, since the definition of "32 bit executable" is open to interpretation. If you mean an executable that uses 32-bit pointers, then nothing at all says you can't use instructions that manipulate 64-bit values while using 32-bit pointers. The processor doesn't know the difference.
Linux even supports a mode where you have a 32-bit userspace and a 64-bit kernel. Each app then has access to 4 GB of address space, but the system can use much more memory. This keeps the size of your pointers down to 4 bytes without restricting the use of 64-bit data manipulation.
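To get a rough feel for this, here's a tiny C program. Built with gcc's x32 ABI (gcc -mx32, where the kernel and libraries support it), pointers stay 4 bytes while the 64-bit arithmetic below still runs in full-width registers:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t big = 0x123456789ABCDEF0ULL;   /* a full 64-bit value */
        /* prints 4 under -mx32 (32-bit pointers), yet ... */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        /* ... 64-bit arithmetic still works; under x32 it even uses
           the native 64-bit registers. */
        printf("big * 3 = %llx\n", (unsigned long long)(big * 3));
        return 0;
    }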
I'm late to the party, but the question makes quite a lot of sense on embedded platforms.
If you combine a CPU with some additional accelerators in the same SoC, they don't necessarily have the same address space or address-space size.
For the firmware in the accelerator, you would want pointers that cover its address space from both the CPU's and the accelerator's perspective. They are not necessarily the same size.
For example, with a 64-bit CPU and a 32-bit accelerator, the firmware's pointer covers a 32-bit address space while the CPU's pointer covers a 64-bit address space. C does not have two or more void * types for the different address spaces you want to talk to.
People generally solve this by casting void * to uintN_t with N as large as needed and passing this around between different parts of the system.
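A sketch of that convention (the struct and field names here are made up): the 32-bit side widens its pointer to a fixed-width integer so both sides of the boundary agree on the message layout.

    #include <stdint.h>

    /* Hypothetical message format shared by both sides of the boundary */
    typedef struct {
        uint64_t buf_addr;  /* pointer carried as a fixed-width integer */
        uint32_t buf_len;
    } xfer_desc;

    void fill_desc(xfer_desc *d, void *buf, uint32_t len) {
        /* The double cast zero-extends a 32-bit pointer to 64 bits
           and avoids a pointer-to-wider-integer warning. */
        d->buf_addr = (uint64_t)(uintptr_t)buf;
        d->buf_len  = len;
    }

    int main(void) {
        int data[4] = {0};
        xfer_desc d;
        fill_desc(&d, data, (uint32_t)sizeof data);
        return 0;
    }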
There is none, because gcc was not designed for embedded architectures. There are architectures where multiple pointer sizes exist, for example the M16C: RAM has 16-bit addresses and ROM (flash) has 20-bit addresses in the same address space. Smaller pointers give better performance and code size.
I don't understand what 32 bit and 64 bit mean. It seems that people say 64-bit computers run faster, but why? Does it mean that there are 64-bit integers instead of 32-bit ones? If it's something like that, is there a way to write a program to determine whether we're on a 32-bit or 64-bit machine?
On 64-bit machines pointers are 8 bytes (64 bits). On 32-bit machines they are 4 bytes (32 bits). Thus we can determine by the size of a pointer what we are dealing with, in its simplest form:
#define IS_64BIT (sizeof(void *) == 8)
The only drawback is that a 64 bit computer running in 32 bit mode will register as 32 bit. Of course, this isn't actually important as for all intents and purposes a 32 bit OS on a 64 bit computer will be a 32 bit computer.
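If you'd rather decide at compile time than at run time, the usual approach is predefined macros. A sketch (macro names vary by toolchain; these are the common ones for MSVC and gcc/clang):

    #include <stdio.h>

    int main(void) {
    #if defined(_WIN64) || defined(__x86_64__) || defined(__aarch64__)
        puts("compiled as a 64-bit binary");
    #else
        puts("compiled as a 32-bit binary");
    #endif
        printf("sizeof(void *) = %zu bytes\n", sizeof(void *));
        return 0;
    }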
There are actually several different things you're asking here.
First of all, there's the CPU. Most modern-day CPUs (from roughly the past five years) support 64-bit.
Now, just because the CPU supports it doesn't mean the OS supports it; that's where you have either a 64-bit OS or a 32-bit OS. (32-bit is also known as x86; strictly speaking, x86 refers to the CPU instruction set, but in common usage x86 and 32-bit are interchangeable.)
Even if the OS supports it, it doesn't mean the specific program you're running supports 64-bit. Most (if not all?) 64-bit OSes have a 32-bit compatibility mode so you can still run 32-bit programs.
Now for your question of how to determine which architecture you're running on, the most reliable way is to ask the OS through some API call.
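For example, on Windows a hedged sketch looks like this. IsWow64Process is a real kernel32 API (XP SP2 and later), and it distinguishes a 32-bit process on 64-bit Windows from a genuinely 32-bit OS:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
    #if defined(_WIN64)
        puts("64-bit process on 64-bit Windows");
    #else
        BOOL wow64 = FALSE;
        /* IsWow64Process is a real kernel32 API (XP SP2 and later) */
        if (IsWow64Process(GetCurrentProcess(), &wow64) && wow64)
            puts("32-bit process on 64-bit Windows (WoW64)");
        else
            puts("32-bit process on 32-bit Windows");
    #endif
        return 0;
    }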
As for why 64-bit is sometimes considered faster: with 32 bits it is only possible to address 4 GB of memory, whereas with 64 bits the limit imposed by the address space is much higher (about 4 billion times higher), and the limiting factor becomes the hardware, not the address space. As to when and why more memory is faster, that's a separate topic altogether.
64-bit machines do not run faster than 32-bit machines except in cases where 64-bit math is being done or in cases where more than 4 GB of RAM is needed.
64-bit AMD (and later Intel) machines run faster than 32-bit x86 machines because when AMD designed the new instruction set they added more CPU registers and made SSE math the default.
32-bit x86 systems can waste a lot of CPU time pushing data around in RAM, while a x86_64 system can store that data in CPU registers instead. Registers are much faster than level-1 CPU cache. Having more registers also saves CPU instructions that otherwise need to store the old value of a register in RAM, load in a different value from RAM, then load the original value back from RAM.
In some especially register-starved cases the extra registers can gain 30% speed for a program. The benefit is usually much less than that.
The speed benefits from assuming SSE2 are many. On 32-bit CPUs, SSE instructions may or may not exist, so to use them the software needs clumsy test code and two (or more!) implementations of the math functions. Most software just doesn't care enough, so it never bothers, always falling back on x87 FPU math from the 486 days. The 64-bit CPUs made SSE2 a required part of the instruction set, so all x86_64 programs are free to assume it exists and use it in all cases.
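A sketch of that "clumsy test code" dance, using gcc/clang's real __builtin_cpu_supports builtin. The two kernel functions are hypothetical stand-ins, both written as plain loops so the sketch compiles anywhere; on x86_64 the fallback branch is dead weight, since SSE2 is guaranteed:

    #include <stdio.h>

    /* Hypothetical math kernels; a real SSE2 path would use intrinsics,
       but both are plain loops here so the sketch compiles. */
    static double sum_sse2(const double *a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += a[i];
        return s;
    }

    static double sum_x87(const double *a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += a[i];
        return s;
    }

    double sum(const double *a, int n) {
        /* __builtin_cpu_supports is a real gcc/clang builtin (x86 only) */
        if (__builtin_cpu_supports("sse2"))
            return sum_sse2(a, n);
        return sum_x87(a, n);  /* never taken on x86_64 */
    }

    int main(void) {
        double v[4] = {1, 2, 3, 4};
        printf("%f\n", sum(v, 4));
        return 0;
    }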
64-bit computers do not run faster, per se. They just support higher precision (larger integers, more precise floats).
In some rare cases, libraries might jam two 32-bit numbers into 64 bits to perform a large number of parallel operations, possibly resulting in up to a 2x speedup. This can occur in some highly optimized scientific/numeric libraries, or in special applications that (for one reason or another) have been highly optimized at a very low level, for example some multimedia software. It should be noted that such applications could always have made this tradeoff even in 32-bit mode but chose not to; they are merely trading away precision (which they may not need) for parallelism.
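A toy sketch of the idea (often called SWAR): two 32-bit additions ride on one 64-bit add. It is only correct under the stated assumption that neither 32-bit lane overflows, which is exactly the precision-for-parallelism trade described above:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t a = ((uint64_t)7 << 32) | 100;  /* lanes: 7 and 100 */
        uint64_t b = ((uint64_t)5 << 32) | 23;   /* lanes: 5 and 23  */
        uint64_t sum = a + b;                    /* one 64-bit add   */
        printf("high lane: %u, low lane: %u\n",
               (unsigned)(sum >> 32), (unsigned)(sum & 0xFFFFFFFFu));
        /* prints: high lane: 12, low lane: 123 */
        return 0;
    }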
Operating system benchmarks which reveal faster performance (maybe <10% improvement) are not necessarily related to 64-bit-specific optimizations. 64-bit architectures may be correlated with having, for example, more registers or advanced features that programs can take advantage of [citation: http://www.tuxradar.com/content/ubuntu-904-32-bit-vs-64-bit-benchmarks], which may be the cause of a performance difference (along with other variables).
How to determine whether a CPU is 32-bit or 64-bit depends on what OS you are using. For example, on Linux you can call uname -m, which prints just the machine architecture (uname -a works too, but prints more than you need). If you're using C/C++, see the other answer for a way to determine it in a program.
Any programming tips would be appreciated.
MMX, SSE and 3DNow! use 64- or 128-bit registers.
However, you won't "program" them yourself unless you're working at a really low level in assembler or writing a compiler. It's transparent to pretty much everyone.
With x86-64, the general-purpose registers are 64-bit rather than 32-bit (and there are 16 of them rather than 8). (You also get 16 (128-bit) SSE registers instead of the usual 8.) A decent compiler will therefore often be able to generate more efficient code (less register spill) for x86-64 than for old-school 32-bit code.
Are there any specific sectors of software engineering/computer science where there's a marked difference when developing for 64-bit systems? I've been coding for around 10 years now, and since the arrival of 64-bit systems, my code hasn't changed one bit.
What applications that a single coder could build as a side project require 64-bit technology?
Anything that requires more than 4 GB of working and program memory would certainly qualify, since that is the maximum amount of memory that a 32 bit system can address directly.
Since 64 bit numbers can reside in the CPU registers, calculations requiring numbers of these sizes would see a performance improvement.
Aside from address space or big calculations, doubling your word size helps more in the low level stuff, and mostly for people who are going to be doing kernel hacking or writing device drivers. For instance, let's say you have a stream of bytes from a network connection and you have to process them. You can now pull those bytes in from main memory to CPU registers 8 at a time rather than 4. But I would think you need a "64 bit aware" string library to take advantage of this.
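Something like this word-at-a-time sketch, which uses the classic exact zero-byte bit trick. It assumes the buffer length is a multiple of 8 and uses memcpy for safe unaligned loads:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Exact test: nonzero iff some byte of v is zero (classic bit trick) */
    static int has_zero_byte(uint64_t v) {
        return ((v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL) != 0;
    }

    int main(void) {
        unsigned char buf[16];
        memset(buf, 'x', sizeof buf);
        buf[11] = 0;                    /* plant a zero byte */
        for (size_t i = 0; i < sizeof buf; i += 8) {
            uint64_t w;
            memcpy(&w, buf + i, 8);     /* safe unaligned 8-byte load */
            if (has_zero_byte(w))
                printf("zero byte in the word starting at offset %zu\n", i);
        }
        return 0;
    }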
Anecdotally, we did observe a performance increase when upgrading from 32 bit SQL Server to 64 bit SQL Server (2005) on the same hardware (a 64 bit machine).
We recently ported some of our internally-used libraries to 64-bit. The C code didn't change at all; we just had to get the 64-bit versions of the third-party libraries we link against and figure out which new compiler directives we needed to use. The biggest headache was finding 64-bit versions of our dependencies and refactoring our build system to handle both 32-bit and 64-bit.
That's not to say that other software wouldn't require modification. For example, if you pack your data to fit within word boundaries, you might now be inclined to pack it differently when programming for a 64-bit system.
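For instance, this sketch shows how the same struct's layout shifts between builds. The exact sizes are implementation-defined, but typically 12 bytes on a 32-bit target and 24 on a 64-bit one:

    #include <stdio.h>

    struct record {
        char  tag;      /* 1 byte, then padding up to pointer alignment */
        void *payload;  /* 4 bytes on 32-bit targets, 8 on 64-bit */
        int   count;
    };

    int main(void) {
        /* typically 12 bytes on 32-bit, 24 on 64-bit */
        printf("sizeof(struct record) = %zu\n", sizeof(struct record));
        return 0;
    }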
If you need to ask, you probably will not get any advantage, as you are probably not building any assumptions about the size of ints into your code. Rather few use cases, all fairly low-level, will see any speedup. Bignums and heavy integer arithmetic on very large numbers will be quicker (crypto, for example).