I saw this question, and I would like to know if I could do something similar with D.
More specifically, I am developing on a Linux machine with an x86_64 processor.
I am targeting
Linux machines with x86_64 processors (not the biggest problem)
Macs with x86_64 (should be doable...)
Windows with x86_64 - This is where the cross-platform issues come into play. As a part-time developer without any access to a Windows machine, and a reluctance to get one, I am more interested in platform-independent coding than in porting to Windows.
Windows with i686 (32-bit) - is it even possible to compile code for both 64-bit and 32-bit targets?
The project currently lives on a Linux machine and uses only the standard Phobos library; the important part is that it heavily uses LuaD and some C source compiled to *.o files.
D makes it very easy to write portable source code, as the standard library provides abstractions for many OS-specific APIs. Otherwise, the same answers for C++ will generally apply to D, although currently all implementations compile D to native code.
You can generate Windows binaries by running a Windows D compiler under Wine. In theory, cross-compilation should be possible with GDC and LDC, but I don't know how mature it is with either toolchain.
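For example, with LDC the target is selected with the -mtriple switch. This is only a rough sketch (the triples and file name are illustrative, and you would still need the target's druntime/Phobos libraries and a suitable linker, e.g. built via LDC's ldc-build-runtime tool):
$ ldc2 -mtriple=x86_64-windows-msvc app.d   # 64-bit Windows
$ ldc2 -mtriple=i686-windows-msvc app.d     # 32-bit Windows
$ ldc2 -mtriple=x86_64-apple-darwin app.d   # 64-bit macOS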
If the D.NET project had succeeded, you would have an answer. The current state is that there is no D compiler which targets the JVM, the CLR, or any other VM; all current D compilers compile to native machine code. I have dreamed of starting a project to build a D compiler targeting the JVM, but unfortunately I have no time for such an (enormous) project.
Related
I am compiling a Rust application that will be statically linked and then placed on an external server. What setting, config, or table should I look up to find the correct compile target? For most modern Windows servers and computers, x86_64-pc-windows-msvc should work just fine, but I wanted to know if there is a more concrete way of figuring this out.
The rustup docs mention Windows installation and considerations here, but not how to figure out the target.
Try going to the system you are building for and running echo %PROCESSOR_ARCHITECTURE%. This will give you information about the CPU architecture that can help you decide.
According to the Win32 documentation, it will be a value of AMD64, IA64, ARM64, or x86. Conveniently, these line up with the available Windows Rust targets. You can find all of the Rust targets by running rustup target list and looking for the ones with windows in the name. Here is that output on my machine:
$ rustup target list | grep windows
aarch64-pc-windows-msvc
i586-pc-windows-msvc
i686-pc-windows-gnu
i686-pc-windows-msvc
x86_64-pc-windows-gnu
x86_64-pc-windows-msvc
For the values of PROCESSOR_ARCHITECTURE, we can more or less work out which is which by just googling them.
AMD64: This is just another name for x86_64, so we need either x86_64-pc-windows-msvc or x86_64-pc-windows-gnu.
IA64: ¯\_(ツ)_/¯ This is Itanium. Rust is built on top of LLVM, and since IA64 has reached its end of life and not much hardware uses it, LLVM decided not to support the architecture. I think GCC probably does support it, but we're out of luck when it comes to Rust.
ARM64: This corresponds to the aarch64 architecture, so we should use aarch64-pc-windows-msvc.
x86: This actually means we are running in 32-bit mode, so we need either i686-pc-windows-msvc or i686-pc-windows-gnu.
As for i586-pc-windows-msvc, it targets the older Pentium (P5) generation of processors. Code built for it should run on the newer i686 and x86_64 architectures, but may not be as performant. I would avoid it unless you are working with older hardware and need the compatibility. I am also assuming it will not be compatible with Windows 11, due to the new 64-bit requirement.
As for the difference between msvc and gnu, you get to pick. I imagine msvc will be easier to work with, but I have not tried the gnu version.
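Once you have picked a target, the workflow is just rustup target add followed by a cargo build with --target. A minimal sketch, using the msvc target as an example (note that if you are cross-building from a non-Windows host, the -gnu targets are usually easier to link, since they use MinGW rather than the Microsoft linker):
$ rustup target add x86_64-pc-windows-msvc
$ cargo build --release --target x86_64-pc-windows-msvc
# the binary lands in target/x86_64-pc-windows-msvc/release/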
I am working on an ARM Cortex-A7 based embedded system that runs Linux. I am looking for a C/C++ compiler that is compact in size and reliable (GCC is around 100 MB). I have shortlisted SDCC, TCC, OTCC, Digital Mars, NWCC, LCC, Small C, and the Portable C Compiler.
I want to know whether compilers are dependent on the operating system or the hardware, and how I should proceed to narrow down the list. I am not an expert, and I am still learning about Linux systems and embedded environments. If you think I am asking the wrong question or going in the wrong direction, kindly let me know.
Thank you.
Note
I already have a cross compiler on my Linux laptop, and I currently use it to compile the programs that get loaded onto the device. But the embedded system is supposed to be programmable in a particular language designed by us, and I am hoping to convert that language into equivalent C code and run it. I tried writing my own interpreter in C that accepts code in the other language, parses it, and executes it, but it is a little slow; I tried the same instructions written directly in C, with satisfactory results.
Edit:
I ended up using g++ on my system to compile the code, as the main function of the system was to use the generated code.
Generally, when dealing with embedded systems you are better off cross-compiling and sending the binaries over than compiling directly on the device. Even though setting up the toolchain may take some time in the beginning, it definitely pays you back in build time.
There are several pre-built Linaro GCC toolchains, which are cross compilers with (generally) x86 Linux as the host and ARM Linux as the target platform. This way, you should not have to worry about compiler size.
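As a rough sketch of the workflow, assuming a Debian/Linaro-style hard-float toolchain (the arm-linux-gnueabihf- prefix and the paths are examples; match the prefix to your device's ABI and libc):
$ arm-linux-gnueabihf-gcc -O2 -o app app.c   # build on the x86 laptop
$ file app                                   # should report a 32-bit ARM ELF binary
$ scp app root@device:/usr/local/bin/        # copy the result to the target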
Here is an interesting question that, if answered positively, would make cross compiling a whole lot easier.
Since gcc is written in C++, would it be possible to recompile the Linux gcc compiler with MinGW g++ or the Visual C++ compiler, so that the resulting Windows executable would be able to compile C code into Linux programs?
If so, what would be needed to do that?
So to simplify, here is what I want to do.
mingw32-g++ gcc.cpp -o gcc.exe
The command will probably not work; it would probably have been done before if it were that easy. What I am asking is whether this concept is even possible.
Edit: thanks, and expanding the question to NVCC
fvu was able to answer the question for the gcc compiler (please use the answer button next time), so if you had the same question you can thank him (or her).
As an extension to the question, would it be possible to edit or recompile nvcc, or the things it uses, so that nvcc.exe can create a Linux program from CUDA C code? I read that the Windows variant of nvcc can only use the Visual Studio cl.exe, and not MinGW or Cygwin.
Is it possible to create Linux programs with cl.exe? And if so, could that be used to generate Linux programs with nvcc.exe?
Read the chapter on cross compiling in the gcc manual; gcc's architecture makes it quite easy to set up a toolchain where the target is different from the development machine.
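For a rough idea of what that looks like, here is a hedged sketch of the configure step for a GCC whose host is Windows (MinGW) and whose target is Linux; the triplets and prefix are examples only, and you would also need binutils and the target's glibc and headers built for the same target triplet:
# configure GCC so the compiler itself runs on Windows but emits Linux binaries
$ ../gcc-source/configure --host=x86_64-w64-mingw32 --target=x86_64-linux-gnu --prefix=/opt/cross
$ make && make install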
I never went the exact route you describe, but I have built toolchains under Windows that target ARM9 embedded Linux machines, and it works like a charm (using Cygwin, by the way). Look here for a gentle introduction. Also very useful info here.
I am not going to comment on what can be done with respect to nvcc, CUDA is somewhere on my (long) list of stuff to tinker with...
Now, can cl generate Linux binaries? The answer to this question is "sort of": as long as the target processor is from a family that cl supports, the object files it generates should not contain anything that would inhibit execution on Linux, as they'll just contain machine code. That's the theory. However:
as Linux uses another executable format, you will need a Windows-hosted linker that understands Windows-style object files (afaik, COFF) and links them into a Linux-style (ELF) executable. I have never heard of such a beast, although in theory one could exist
the startup code (a tiny program that wraps around your main function) will also be different and needs to be written
and some more, e.g. library-related issues
So, the practical answer is no, although it might be a nice summer project for a bored student :)
I am teaching myself/reading up on assembly. Most of the books on assembly refer to x86: all the register names in the code begin with "e" and not "r" (as they would in x86-64). However, I use 64-bit Linux, and I was wondering if these books have any value, given that they do not cover x86-64.
So, in short: is it really worth using these resources to learn x86-64? Or, worded differently: besides the difference in register naming conventions, are there any other differences between the two that could make learning from x86 materials difficult?
64-bit Linux allows running 32-bit applications, so you can still create 32-bit applications on your computer. This way, the books and their example 32-bit code remain fully useful.
The only problem you might have is if the assembly application dynamically links to some 32-bit shared library. To fix this, you should install the 32-bit compatibility layer.
Assembly programs that use only Linux system calls work fine without this layer, which is actually a set of shared libraries compiled for 32-bit.
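As a sketch of what installing the layer and building 32-bit code looks like on a Debian/Ubuntu-style system (package and file names are examples):
$ sudo apt-get install gcc-multilib   # pulls in the 32-bit libraries and headers
$ as --32 hello.s -o hello.o          # assemble as 32-bit code
$ ld -m elf_i386 hello.o -o hello     # link a 32-bit ELF executable
$ ./hello                             # runs directly on the 64-bit kernel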
BTW, in my opinion, writing 32-bit code is still better if you want your programs to be useful to more people. There are still many 32-bit computers around, and they will not disappear soon.
It's indeed a bit easier to learn assembly on 32-bit, since the calling conventions and stack management are simpler.
On 64-bit you need to worry about the ABI. Not only that, but the conventions are not the same for every OS. For instance, the ABI rules on Mac OS X are different from those on Windows (the registers used are not the same, and the Windows convention passes only four arguments in registers).
You can compile your assembly code using -arch i386 with the assembler (as). With clang or gcc you can use -m32 (at least on Mac OS X; I haven't used it on Linux proper). You won't be able to link modules that have different bitness (32-bit vs 64-bit).
Once you're ready to switch or compile your program for 64-bit, you will have to make sure that when you handle the stack you push 64-bit words instead of 32-bit ones, but that kind of goes without saying.
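To illustrate the bitness rule, a small sketch with gcc (file names are examples; the exact linker error text varies):
$ gcc -m32 -c main.c -o main32.o    # 32-bit object file
$ gcc -m64 -c util.c -o util64.o    # 64-bit object file
$ gcc -m64 main32.o util64.o -o app # fails: ld refuses to mix 32-bit and 64-bit objects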
Does developing applications for SPARC or IBM PowerPC require separate compilers, distinct from those for x86 and x86-64 targets?
If so, how easily could x86 or x64 Linux binaries be ported to SPARC and PowerPC? Is there a way to simulate these environments using virtualization?
The first answer is: yes, to develop compiled code for Power Architecture or SPARC you need compilers that generate code for those processors. A compiler that generates x86 or x86_64 code will not generate code that runs on Power Architecture or SPARC. You might find cross compilers running on x86 (32 or 64) that generate Power or SPARC code, though. The other thing to be aware of is the object file format (ELF, XCOFF, and so on); the instruction set is just part of the picture. You might get clearer answers if you provide more details of your particular starting point and goals.
Second, one normally doesn't talk of porting binaries. We port source code, which may include assembly language as well as C or other languages. The process includes compiler selection, after which you can begin an iterative cycle of compiling, porting, and linking the code for the new hardware. I'm omitting many details. Again, if you provide more specifics in your question, you might get more specific answers.
Third, as others have said, no, you can't use virtualization in the scenarios you allude to. You might find acceptable emulation solutions. Again, please provide more specifics if you can.
No, virtualization is not the answer. Virtualization takes your hardware platform and creates an independent "virtual" machine of the same hardware. So when running on x86, you use virtualization to create a second x86 machine.
To simulate a completely different hardware architecture, you would want to look into emulation.
How easy or hard it is to port software from one architecture to another depends entirely on how the software was written. If it uses something particular to one architecture but not the other (for example, x86 handles unaligned memory accesses while SPARC does not), you are going to need to fix things like that. Another thing that could make porting difficult is if the software assumes a specific endianness of the hardware.
SPARC, IBM PowerPC require separate compilers, other than x86 and x86-64 targets?
I hate to be really snippy, but given that IBM PowerPC and SPARC do not support the x86 or x86-64 instruction sets (i.e. they speak totally different machine languages), where did you even get the idea they would be compatible?
Is there a way to simulate these environments using virtualization?
Possibly yes, but it would be REALLY slow, because you would have to either translate the machine code or, well, interpret it. Hardware virtualization would not work, given that the CPU architectures are different. SPARC and PowerPC are not just "different labels for the same thing"; they really are different internally.
Use Java or LLVM, or try QEMU to test other CPUs.
It's easy if your code was written to be portable; it's not if it wasn't. Data types whose sizes vary per platform (and code that depends on them), inline assembly, and the like will make it harder.
Home page for LLVM and QEMU:
http://llvm.org/
http://wiki.qemu.org/Main_Page
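As a concrete sketch of the emulation route on a Debian-style x86 host (the package names and source file are examples):
$ sudo apt-get install gcc-powerpc-linux-gnu qemu-user
$ powerpc-linux-gnu-gcc -static -o hello hello.c   # cross-compile for PowerPC
$ qemu-ppc ./hello                                 # run it under user-mode emulation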