Is it that hard to port an application to 64 bits? - linux

I was surprised to read that Adobe discontinued the 64-bit version of Flash for Linux. There is a new 32-bit version, and Adobe advises users to run the 32-bit version of Firefox instead.
I was wondering, since I haven't had to do it yet: is it really that hard to port an application to 64 bits? Besides library changes and recompilation (settings in the Makefile), what makes such a port difficult? (Flash is just an example.)

As noted in an Adobe blog post, Flash's ActionScript engine has a JIT compiler that compiles ActionScript code into native code.
x64 has a very different instruction set from x86. Therefore, making the JIT compiler generate x64 code is a non-trivial task, and is far more complicated than just making all the words 64 bits. :-)

The real kicker with porting apps to 64-bit is that every platform treats the primitive types as it pleases. For example, under most Linux environments a long is 4 bytes on a 32-bit system and 8 bytes on a 64-bit system (the LP64 model), while an int stays 4 bytes on both. Under 64-bit Windows (LLP64) it is different again: both int and long stay 4 bytes, and only pointers and long long grow to 8 bytes. Code that treats int, long and pointers as interchangeable therefore breaks in different ways on each platform.
That said, I have ported a medium-sized project at work from 32-bit to 64-bit Linux (about 25,000 lines of code) and only had to make changes in the assembly code (GAS), which made several faulty assumptions about data types being 4 bytes long. Other than that I had no problems, which suggests that provided you paid strict attention to your data types when you were first developing, porting should be seamless, perhaps only requiring certain compile switches to be changed (like -fpic). There were a few really bizarre corner cases that came up in my porting experience, but I think they were mostly due to undefined behaviour in some of the assembly rather than the porting itself.
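As a quick sanity check (my own illustration, not taken from that port), a few lines of C make the data-model differences visible; the sizes mentioned in the comments assume typical GCC data models (LP64 on 64-bit Linux, LLP64 on 64-bit Windows):

    #include <stdio.h>

    /* Prints the sizes of the basic integer types and of a pointer.
     * On 64-bit Linux (LP64) long and void* come out as 8 bytes, while on
     * 32-bit Linux (ILP32) and 64-bit Windows (LLP64) long stays at 4. */
    int main(void)
    {
        printf("int:       %zu bytes\n", sizeof(int));
        printf("long:      %zu bytes\n", sizeof(long));
        printf("long long: %zu bytes\n", sizeof(long long));
        printf("void *:    %zu bytes\n", sizeof(void *));
        return 0;
    }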

If you use a lot of ints and floats in raw form, for example written straight into files or onto the network, it can be amazingly complex to get it to work properly, especially if it is a networked app.
It took over two years to port xMule to 64 bits, and I don't believe its parent project, eMule, has a 64-bit build at all.
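Much of the networked-app pain comes from writing in-memory values straight onto the wire. Here is a hedged sketch of the usual fix (the message layout below is made up purely for illustration): pin the field widths with <stdint.h> types and serialize field by field in network byte order, so the wire format no longer depends on the build's word size.

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>  /* htonl() */

    /* Hypothetical wire message: fixed-width types keep the on-the-wire
     * layout identical for 32-bit and 64-bit builds. */
    struct wire_msg {
        uint32_t type;
        uint32_t length;
        uint64_t offset;
    };

    /* Serialize field by field instead of memcpy'ing the whole struct,
     * since padding and byte order can differ between builds. */
    void msg_pack(const struct wire_msg *m, unsigned char out[16])
    {
        uint32_t type   = htonl(m->type);
        uint32_t length = htonl(m->length);
        uint32_t off_hi = htonl((uint32_t)(m->offset >> 32));
        uint32_t off_lo = htonl((uint32_t)(m->offset & 0xffffffffu));

        memcpy(out,      &type,   4);
        memcpy(out + 4,  &length, 4);
        memcpy(out + 8,  &off_hi, 4);
        memcpy(out + 12, &off_lo, 4);
    }

    int main(void)
    {
        struct wire_msg m = { 1, 16, 0x123456789ULL };
        unsigned char buf[16];
        msg_pack(&m, buf);
        return 0;
    }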

Ideally it should just be a recompilation; in reality it takes a significant amount of effort. Even if the code changes are simple, there still has to be a full sweep by the QA team to prove that it works, and that always takes a while.

The obvious problems, I guess, would be variable sizes (e.g. long is 64 bits with most 64-bit compilers); this messes up anything that relies on type sizes, such as bit shifting and some pointer arithmetic. I suspect Adobe just can't be bothered to scan through the code and ensure cross-compatibility, especially when 90%+ of browser use is on 32-bit builds. Flash has never worked in 64-bit IE, and even 64-bit Windows 7 defaults to the 32-bit browser.
There is a lot of information on this here if you're interested:
http://www.viva64.com/content/articles/64-bit-development/?f=20_issues_of_porting_C++_code_on_the_64-bit_platform.html&lang=en&content=64-bit-development
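To make the "size related operations" point concrete, here is a hedged example (not taken from Flash, just the usual suspects) of code that compiles fine on 32-bit and quietly misbehaves on LP64: a pointer stored in an unsigned int, and a shift whose constant is still a 32-bit int.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int x = 42;

        /* Classic 32-bit-era bug: a pointer stuffed into an unsigned int.
         * On LP64 the upper 32 bits of the address are silently dropped. */
        unsigned int truncated = (unsigned int)(uintptr_t)&x;

        /* Portable version: uintptr_t is guaranteed to hold a pointer. */
        uintptr_t ok = (uintptr_t)&x;

        /* Another trap: the literal 1 is an int, so 1 << 40 is undefined
         * even on a 64-bit build; the constant needs a 64-bit type first. */
        uint64_t flag = (uint64_t)1 << 40;

        printf("%x %lx %llx\n", truncated, (unsigned long)ok,
               (unsigned long long)flag);
        return 0;
    }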

The coding part of porting to 64 bits generally isn't that hard, but it can require some time and a lot of hair-pulling over builds, libraries, etc. The real problem, especially for a widely deployed project like Flash, is getting proper test coverage over the many code paths and platforms. 64-bit is a horizontal feature, so it can potentially break anything, and therefore everything needs to be tested.
For Flash on Linux in particular, it's probably more of a cost-benefit issue: is catering to the percentage of users who actually run 64-bit Linux worth the development cost to Adobe? Probably not, at this point.

Related

Why do modern OSes need so much memory?

I'm looking after an old university friend's house and found two of the old computers we used to hack on. They aren't even that old, 2003-ish: single core, 256MB of RAM, 80GB hard drives. Or at least that's what the stickers proudly proclaim.
So I go: jackpot! I'm going to install Ubuntu and have some fun. But these machines have 256MB of RAM, and once the hardware takes its share it's more like 190MB usable. No modern Linux/BSD distro seems to work on them, at least not without MAJOR swapping going on, like 30 minutes plus. For every operation.
Now, these machines used to run Windows XP without a hitch. Super fast.
Why do modern OSes have such large memory requirements? I don't get it. Can someone tell me why they need so much memory? Just claiming sloppy programming or "modern" requirements isn't going to satisfy me.
Can someone give me an actual example of why a modern OS can't run on, say, 16MB of ram?
How can Windows XP be so light, yet I can't seem to find a Linux distro that can run on such a system?
I'm pretty sure I used to program, surf the web, and even game on those old pieces of junk. Now they can't even run anything beyond FreeDOS?
The simple answer is that Operating Systems are written for the available hardware.
Those that are not (looking at you, OS/2) lose market share and die.
The OS provides facilities that are achievable on the given target hardware and ignores features that would not be feasible on it.
So a modern OS expects at least a couple of GB, tunes the IO system to use large buffers, and caches lots of stuff an older OS would leave on disk. A modern OS expects a fast graphics processor and implements lots of silly eye candy like transparent windows, 3D shading, etc.
Dumb reviewers judge software packages on "feature count", so even open-source developers feel obliged to match the feature lists of commercial products -- most of which are never used.
As machines get more powerful, developers feel their time is better spent improving reliability and adding functionality rather than on space saving and performance tweaks.
<opinionated gripe>As the state of the art in computer science evolves, there is a tendency to massively over-engineer, producing complex solutions for simple problems.</opinionated gripe>
Take a look here; one of these should fit the bill.

Using x86 materials to learn assembly on a 64 bit OS?

I am teaching myself/reading up about assembly. Most of the books on assembly refer to x86: all the register names in the code begin with "e" and not "r" (as they would in x86-64). However, I use 64-bit Linux, and I was wondering whether these books have any value given that they do not cover x86-64.
So, in short: is it really worth using these resources to learn x86-64? Put differently: besides the difference in register naming, are there any other differences between the two that could make learning from x86 materials difficult?
64-bit Linux can run 32-bit applications, so you can still create 32-bit applications on your computer. That way the books and their 32-bit example code remain fully useful.
The only problem you might run into is if your assembly application dynamically links to some 32-bit shared library; to fix that you should install the 32-bit compatibility layer.
Assembly programs that use only Linux system calls work fine without this layer, which is really just a set of shared libraries compiled for 32 bits.
By the way, in my opinion writing 32-bit code is still better if you want your programs to be useful to more people. There are still many 32-bit computers around, and they will not disappear soon.
It's indeed a bit easier to learn assembly on 32-bit, since the calling conventions and stack management are simpler.
On 64-bit you need to worry about the ABI, and the conventions are not the same on every OS. For instance, the ABI rules on Mac OS X are different from those on Windows: the registers used are not the same, and Windows passes only 4 arguments in registers.
You can assemble your code as 32-bit using -arch i386 with the assembler (as); with clang or gcc you can use -m32 (at least on Mac OS X -- I haven't used it on Linux proper). You won't be able to link modules of different bitness (32-bit vs 64-bit).
Once you're ready to switch and compile your program for 64-bit, you will have to make sure that when you handle the stack you push 64-bit words instead of 32-bit ones, but that kind of goes without saying.
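If you want to see the calling-convention difference for yourself, a small C function compiled both ways works well. This is just a sketch assuming a GCC or Clang toolchain; on Linux the -m32 build needs the 32-bit multilib packages installed.

    /* add3.c -- compile twice and diff the generated assembly:
     *   gcc -m32 -O1 -S add3.c -o add3-32.s   (arguments arrive on the stack)
     *   gcc      -O1 -S add3.c -o add3-64.s   (arguments arrive in rdi/rsi/rdx
     *                                          under the System V AMD64 ABI) */
    long add3(long a, long b, long c)
    {
        return a + b + c;
    }

The 32-bit version reads its operands relative to the stack pointer; the 64-bit version never touches the stack for them.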

Can we convert a 32-bit library file to a 64-bit library file?

I have some 32-bit library files (.a files) built on Solaris. I am porting my application to a 64-bit Linux environment. Is there any way to convert the 32-bit libraries to 64-bit, or should I regenerate the libraries as 64-bit?
It is not just a question of 32-bit vs 64-bit. It's also a question of Solaris versus Linux. These are two operating systems that have different calling conventions and different ABIs. That means things like sizes of data types can be different, the way the compiler puts stuff in registers and on the stack to do a function call is different, the way system calls are done is different, etc.
It is probably possible to convert a static library in the way you want, in some cases, but you would need to write the tools yourself. Compiling from source is way easier, much more reliable, and also something you need to be able to do at will anyway (otherwise you can't easily fix problems in the library, e.g., security issues).
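One concrete reason a byte-level conversion is a non-starter: the same C declaration produces a different size and different field offsets under 32-bit ILP32 and 64-bit LP64, so object code compiled against one layout would read garbage from data built with the other. A minimal illustration (my own, not from the question):

    #include <stdio.h>
    #include <stddef.h>

    /* The same declaration produces different layouts:
     * 32-bit (ILP32): long is 4, pointers are 4 -> size 12, no padding
     * 64-bit (LP64):  long is 8, pointers are 8 -> size 24, with 4 bytes
     *                 of padding after 'id' so 'name' is 8-byte aligned. */
    struct record {
        int   id;
        char *name;
        long  count;
    };

    int main(void)
    {
        printf("sizeof(struct record) = %zu\n", sizeof(struct record));
        printf("offsetof(name)        = %zu\n", offsetof(struct record, name));
        printf("offsetof(count)       = %zu\n", offsetof(struct record, count));
        return 0;
    }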
No; you have to recompile them for 64-bit, because a lot of necessary information is lost during the compilation.
Good luck.

Is a 64-bit system safer than a 32-bit one because there are fewer exploits for it?

Most books and papers on writing software exploits are written for the x86 processor family.
So there are probably a lot of "hackers" or crackers who only know x86 assembly.
Can you conclude from this that 64-bit software is safer than 32-bit?
No. The x86_64 CPUs out there can run x86 (32bit) code natively, and most operating systems that cater for x86_64 allow this (optionally or not, transparently or not).
You get all the attack vectors that existed for x86, plus anything else that was added with x86_64.
But the x86_64 generation of chips also brought security features like the NX bit. That sort of thing can help reduce risk.
I don't think so. 64-bit OSes are becoming more common these days, so there will be exploits for them in the near future too.
What you are talking about is security through obscurity, and in this particular case it isn't security at all.

Does do_div() in Linux work on both 32-bit and 64-bit architectures?

I need to do an integer division in a kernel module and I am using do_div() for that. It seems to work on my machine (I have an i686 processor), but I am not sure that it works everywhere. Could anyone confirm whether do_div() works correctly on both 32-bit and 64-bit architectures, or whether there are any known limitations?
I use Ubuntu 10.04 with kernel 2.6.38, so I am interested in support for kernels >= 2.6.38.
I would also be interested if anyone knows a better way to do an integer division in the kernel than do_div().
Best Regards
Daniel
do_div() does work on 64-bit architectures, but unless you really need the remainder and are fully aware of what do_div() does (it modifies its first argument in place, leaving the quotient there), you should probably be using bit shifts instead when the divisor is a power of two.
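For reference, here is a minimal sketch of how do_div() is typically used in a module (the module and function names are made up; the APIs shown exist in 2.6.38): do_div() divides a u64 in place by a 32-bit divisor and evaluates to the remainder, and linux/math64.h offers div_u64() when only the quotient is needed.

    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <asm/div64.h>     /* do_div() */
    #include <linux/math64.h>  /* div_u64() */

    static int __init divdemo_init(void)
    {
        u64 bytes = 10ULL * 1024 * 1024 * 1024 + 123; /* 10 GiB + 123 bytes */
        u32 rem;

        /* do_div() turns its first argument into the quotient and
         * evaluates to the 32-bit remainder, on both 32-bit and
         * 64-bit architectures. */
        rem = do_div(bytes, 4096);
        pr_info("divdemo: quotient=%llu remainder=%u\n", bytes, rem);

        /* If only the quotient is needed, div_u64() reads more naturally. */
        pr_info("divdemo: div_u64 gives %llu\n", div_u64(10ULL << 30, 4096));

        return 0;
    }

    static void __exit divdemo_exit(void)
    {
    }

    module_init(divdemo_init);
    module_exit(divdemo_exit);
    MODULE_LICENSE("GPL");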
