I'm porting Windows MSVC code to Linux gcc.
_wfopen() seems to be an MSVC-specific function for opening a file whose name is given as a wchar_t (UTF-16) string.
Is there an alternative to _wfopen() on Linux?
Is there an alternative that could be used on both Windows and Linux for UTF-8 file names?
What is the best way to solve this without scattering OS-specific #ifdefs?
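A common way to handle this is to confine the OS-specific part to one small wrapper, so only that wrapper needs an #ifdef: on Linux, fopen() accepts UTF-8 file names directly, while on Windows the UTF-8 name can be converted to UTF-16 and passed to _wfopen(). A minimal sketch, assuming that pattern (the name fopen_utf8 is illustrative, not a standard function):

#include <cstdio>

#ifdef _WIN32
#include <windows.h>

// Windows: convert the UTF-8 path and mode to UTF-16 and use _wfopen().
// Fixed-size buffers are used here only to keep the sketch short.
std::FILE* fopen_utf8(const char* utf8_path, const char* mode)
{
    wchar_t wpath[MAX_PATH];
    wchar_t wmode[16];
    if (!MultiByteToWideChar(CP_UTF8, 0, utf8_path, -1, wpath, MAX_PATH))
        return nullptr;
    if (!MultiByteToWideChar(CP_UTF8, 0, mode, -1, wmode, 16))
        return nullptr;
    return _wfopen(wpath, wmode);
}
#else
// Linux: file names are just byte strings, so UTF-8 passes straight through.
std::FILE* fopen_utf8(const char* utf8_path, const char* mode)
{
    return std::fopen(utf8_path, mode);
}
#endif

The rest of the code then calls fopen_utf8() everywhere and stays free of platform checks.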
Related
Here is an interesting question that, if answered positively, would make cross-compiling a whole lot easier.
Since gcc is written in C++, would it be possible to recompile the Linux gcc compiler with the Windows MinGW g++ or Visual C++ compiler, so that the resulting Windows executable would be able to compile C code into Linux programs?
If so, what would be needed to do that?
So, to simplify, here is what I want to do:
mingw32-g++ gcc.cpp -o gcc.exe
The command will probably not work, because if it were that easy it would probably have been done before. What I am asking is whether this concept is even possible.
Edit: thanks, and expanding the question to NVCC
fvu was able to answer the question for the gcc compiler (please use the answer button next time), so if you had the same question you can thank him (or her).
As an extension to the question, would it be possible to edit or recompile nvcc, or the things it uses, so that nvcc.exe can create a Linux program from CUDA C code? I read that the Windows variant of nvcc can only use the Visual Studio cl.exe and not MinGW or Cygwin.
Is it possible to create Linux programs with cl.exe? And if so, could that be used to generate Linux programs with nvcc.exe?
Read the chapter on cross-compiling in the gcc manual; gcc's architecture makes it quite easy to set up a toolchain where the target is different from the development machine.
I never went the exact route you describe, but I have built toolchains under Windows that target ARM9 embedded Linux machines, and it works like a charm (using Cygwin, by the way). Look here for a gentle introduction. Also very useful info here.
I am not going to comment on what can be done with respect to nvcc; CUDA is somewhere on my (long) list of stuff to tinker with...
Now, can cl generate Linux binaries? The answer to this question is "sort of": as long as the target processor is from a processor family that cl supports, the object files it generates should probably not contain anything that would inhibit their execution on Linux, as they'll just contain machine code. That's the theory. However:
as Linux uses a different executable format, you will need a Windows-hosted linker that understands Windows-style object files (COFF, afaik) and links them together into a Linux-style (ELF) executable. I have never heard of such a beast, although in theory it could exist
the startup code (a tiny program that wraps around your main function) will also be different and needs to be written
and some more, e.g. library-related issues
So, the practical answer is no, although it might be a nice summer project for a bored student :)
I don't quite understand the difference between the following C/C++ compilers: GCC, MinGW, Cygwin and MSVC. Are the MinGW and Cygwin implementations of GCC or something entirely different? If I intend to compile for Windows, do I ever need anything other than the MSVC (Visual Studio) compiler?
GCC for Windows is mostly useful for makefiles and code written with gcc-specific non-portable syntax.
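As an illustration (this snippet is made up for this answer, not taken from any of the questions), the following relies on two GCC extensions, packed attributes and statement expressions, and therefore builds with g++/MinGW but is rejected by MSVC:

#include <cstdio>

// __attribute__((packed)) is a GCC extension; MSVC uses #pragma pack instead.
struct __attribute__((packed)) Header {
    char tag;
    int  value;
};

int main()
{
    // Statement expressions ({ ... }) are another GCC-only extension.
    int x = ({ int t = 40; t + 2; });
    std::printf("%d %u\n", x, (unsigned)sizeof(Header));
    return 0;
}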
Cygwin is not a compiler; it's a set of libraries that create a Linux-like environment inside Windows, plus common Linux tools compiled against those libraries. It is useful for code written with Unixisms (code that expects files to behave a certain way, assumes the directory separator is /, or assumes Linux paths).
If you have pure Windows code, you'll be happiest with Visual C++ (MSVC). Or, if that doesn't optimize well enough, the Intel C++ compiler.
Visual C++ also works well for portable ISO-conformant code... but Microsoft is a little behind the curve on implementing C++11 and C++14 features. So that's another reason you might want to use gcc (or clang).
I have code that I wrote in Fortran during my PhD, and now I am collaborating with some researchers who use Linux and who need my model, which is basically a single executable file. In the future I will probably make it open source, but for now they just want the executable, also because they are not programmers and have never compiled a program in their lives. So the question is: is it possible to compile it on my Linux machine and then send it to them to use on another Linux machine? Or does the Linux version and distribution matter?
Thank you very much,
A.
If you do not use many libraries you can do that. One option is statically linking the executable (-static or a similar compiler option). You need to have the static versions of all the needed libraries for that; they have a .a suffix. They are often not installed by default in Linux distributions, and often they are not supplied in the repositories at all.
In my distribution (OpenSuSE) they are in packages like glibc-devel-static, lapack-devel-static and similar.
The other option would be to compile the executable on a distribution compatible with the one the users have (the GLIBC version is important) and supply, along with your executable, all the dynamically linked .so libraries it will need.
All of this assumes you use the same platform, like i586, amd64 or ARM, as wallyk comments. I mostly assumed you are on a PC. You can force most compilers to produce a 32-bit or 64-bit executable with the -m32 or -m64 option. You need the right version of the development libraries for that.
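For the static-linking route, the build step could look something like the line below (the file name mymodel.f90 is just a placeholder; -static tells gfortran to link against the static .a libraries instead of the shared .so ones):

gfortran -static mymodel.f90 -o mymodel

Running ldd on the result should then report that it is not a dynamic executable, which is a quick way to confirm nothing will be missing on the other machine.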
I am looking to compile a .m file (a MATLAB program) for Linux. I have done it on the Windows operating system using
mcc -mv FILENAME.m
I see on the MATLAB website that I can use GNU g++.
Does this work in a similar way to the MATLAB compiler, by just writing one line of code in MATLAB, or do I have to run it in the Linux terminal?
Also, does this compiler tend to have issues regarding getting the desired output?
What you want to do is called cross-compiling. Here you want to cross-compile a MATLAB program from a Windows computer into a native Linux executable. As of 2009 this was not possible, and most likely it still isn't.
Perhaps you might try using Octave for Linux.
Download GNU Octave
I have a closed executable (with no source) which was compiled with VC++ 7.1 (VS2003).
This executable loads a DLL for which I do have the source code.
I'm trying to avoid compiling this DLL with the VS2003 toolkit, because that would mean installing the toolkit on every machine I want to compile it on and using a makefile instead of using the newer VS project directly.
I changed parameters like the runtime library (I use /MT instead of /MD to prevent runtime DLL conflicts) and some other language switches to maintain compatibility with the old compiler. Finally it compiled and linked fine with the VS2005 libs.
But then when I tried running it, it crashed. The reason: The DLL sends an std::string (and on other places - an std::vector) back to the exe, and the conflicting STL implementation versions cause something bad to happen.
So my question is: Is there a way to work around it? Or should I continue compiling the DLL with the VC7.1 toolkit?
I'm not very optimistic, but maybe someone will have a good idea regarding this.
Thanks.
As the implementations of the STL libraries change, binary compatibility breaks; that happens (to my understanding) when the size or the member variables of an object change. The two sides don't agree on how big a std::string/vector/etc. is.
If the old executable thinks a std::string is a 32-bit char* ptr and a 32-bit size_t length, while the new executable thinks a std::string is a 64-bit iterator* iterator_list, a 64-bit char* ptr and a 64-bit size_t length, then they can't pass strings back and forth. This is why long-standing formats (Windows, BMP) make the first member a 32-bit size_of_object.
Long story short, you need the older VS2003 version of the library (I don't think you necessarily need the compiler).
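If matching the VS2003 runtime really isn't an option, a workaround that is often used in this situation (it is not part of the answer above, so treat it as a sketch) is to keep STL types out of the exported interface altogether and pass only plain C types across the DLL boundary; each module then builds its own std::string from the raw data with its own runtime. The function and variable names below are purely illustrative:

// DLL side: export only C types (pointer + length), never an STL object.
#include <cstring>
#include <string>

extern "C" __declspec(dllexport)
int GetMessageText(char* buffer, int buffer_size)
{
    // Using std::string internally is fine; it just must not cross the boundary.
    static const std::string text = "hello from the DLL";
    if (buffer_size <= static_cast<int>(text.size()))
        return -1;                                    // caller's buffer is too small
    std::memcpy(buffer, text.c_str(), text.size() + 1);
    return static_cast<int>(text.size());
}

// EXE side (possibly built with a different toolkit): rebuild the string
// with its own STL, so no std::string object is ever shared.
//
//   char buf[256];
//   if (GetMessageText(buf, sizeof(buf)) >= 0) {
//       std::string msg(buf);   // constructed by the EXE's own runtime
//   }

The same idea applies to std::vector: pass a raw pointer plus an element count instead of the container itself.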