How do I distribute my Haxe application with HashLink?

I've got a Haxe application that I want to make available to people with a Windows system. I use HashLink to run the application locally and it works very nicely.
I am wondering if I'm supposed to distribute my application with HashLink. Can it build me a .exe?

It looks like generating distributable binary files isn't supported out of the box today (March 10, 2017):
> haxe -main Main -hl main.c
Code generated in main.c automatic native compilation not yet implemented
Hopefully it will be supported soon!
Note: I'm talking about building a final executable using HashLink. An entirely separate approach, which I do not cover here, is delivering the HashLink virtual machine along with your output HL bytecode.
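For reference, that bytecode route looks roughly like this (a minimal sketch; the output file name is just an example):
haxe -main Main -hl main.hl
# ship main.hl together with the hl executable and libhl, then run:
hl main.hl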
Sane people stop reading here.
But in the meantime... it is possible to generate binaries with hashlink today if you build hashlink from source.
Warnings:
This isn't a generic, cross-platform answer to your question -- it's just my experience on Linux.
There will probably soon be a better way than this.
But I wanted to jot these notes down even for myself to recall later.
Here's what I had to do on Ubuntu 14.04, 64-bit:
Install prerequisite libraries for building hl (there may be others I already have installed, like build-essential, etc)
sudo apt-get install libvorbis-dev libturbojpeg libsdl2-dev libopenal-dev libssl-dev
Clone and build the mbedtls library: (rev note: b5ba28)
cd ~/dev/
git clone https://github.com/ARMmbed/mbedtls.git
cd mbedtls
make CFLAGS='-fPIC'
Clone the hashlink repo: (rev note: eaa92b)
cd ~/dev/
git clone https://github.com/HaxeFoundation/hashlink.git
cd hashlink
In the # Linux section of the Makefile, ~line 67, add these flags:
CFLAGS += -I ../mbedtls/include
LIBFLAGS += -L../mbedtls/library
Now build with make
If everything works, you'll see two important output files, hl and libhl.so
Ok, at this point, it's easiest if you just build your project in the hashlink directory. For example:
# Still in the hashlink directory
haxe -cp /path/to/my/project -debug -main Main.hx -hl src/_main.c
Now run make hlc, and if everything works, hlc is the output executable (which depends on libhl.so):
cp libhl.so hlc /tmp/
cd /tmp/
./hlc
Prints:
Main.hx:7: Hello world!

arm-none-eabi-objdump: error while loading shared libraries: libdebuginfod.so.1: cannot open shared object file

If you have an answer for this, or further information, I'd welcome it. I'm following advice from here and offering some unsolicited help by posting this question along with an answer I've already found for it.
I have a bare-metal ARM board for which I'm building a cross-toolchain, from sources for GNU binutils, gcc and gdb, and for SourceWare's Newlib. I got those four working and cross-built a DoNothing.c into an ELF file - but I couldn't disassemble it with this:
$ arm-none-eabi-objdump -S DoNothing.elf
The error was:
$ arm-none-eabi-objdump: error while loading shared libraries: libdebuginfod.so.1: cannot open shared object file: No such file or directory
I'll follow up with a solution.
The error was correct - my system didn't have libdebuginfod.so.1 installed - but I have another cross-binutils, installed from binary for a different target, and its objdump -S works fine on the same host. Why would one build of objdump complain about missing that shared library, when clearly not all builds of objdump need it?
First I tried rebuilding cross binutils, specifying --without-debuginfod as a configure option. No change, which seems odd: surely that should build tools that not only don't use debuginfod but which don't depend on it in any way. (If someone can answer that, or point out what I've misunderstood, it may help people.)
Next I figured debuginfod was inescapable (for my cross-tools built from source at least), so I'd install it to get rid of the error. It's a component of the elfutils package, but installing the latest elfutils available for my Ubuntu 20.04 system didn't bring libdebuginfod.so.1 with it.
I found a later one, for Arch Linux, whose package contents suggested it would - but its package format doesn't match Ubuntu's and installing it was going to involve a lot of work. Instead I opted to build it from the Arch Linux source package. However, running ./configure on that gave a couple of infuriatingly similar errors:
configure: checking libdebuginfod dependencies, --disable-libdebuginfod or --enable-libdebuginfo=dummy to skip
...
configure: error: dependencies not found, use --disable-libdebuginfod to disable or --enable-libdebuginfod=dummy to build a (bootstrap) dummy library.
No combination of those suggestions would allow configure for elfutils-0.182 to run to completion.
The problem of course was my own lack of understanding. The solution came from the Linux From Scratch project: what worked was to issue configure with both of the suggested options, like this:
$ ./configure --prefix=/usr \
--disable-debuginfod \
--enable-libdebuginfod=dummy \
--libdir=/lib
That gave a clean configure; make worked first time, as did make check and then sudo make install which of course installed libdebuginfod.so.1 as required. I then had an arm-none-eabi-objdump which disassembles cross-compiled ELF files without complaining.
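For completeness, the remaining commands were just the usual sequence; the ldd check at the end is my own suggestion for confirming that the cross objdump (assuming it is on your PATH) now resolves the library:
make
make check
sudo make install
ldd $(which arm-none-eabi-objdump) | grep debuginfod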

Cross-compile a Rust application from Linux to Windows

Basically I'm trying to compile the simplest code to Windows while I am developing on Linux.
fn main() {
    println!("Hello, and bye.")
}
I found these commands by searching the internet:
rustc --target=i686-w64-mingw32-gcc main.rs
rustc --target=i686_pc_windows_gnu -C linker=i686-w64-mingw32-gcc main.rs
Sadly, none of them work. It gives me an error about the std crate missing
$ rustc --target=i686_pc_windows_gnu -C linker=i686-w64-mingw32-gcc main.rs
main.rs:1:1: 1:1 error: can't find crate for `std`
main.rs:1 fn main() {
^
error: aborting due to previous error
Is there a way to compile code on Linux that will run on Windows?
Other answers, while technically correct, are more difficult than they need to be. There's no need to use rustc (in fact it's discouraged, just use cargo); you only need rustup, cargo and your distribution's mingw-w64.
Add the target (you can also change this for whatever target you're cross compiling for):
rustup target add x86_64-pc-windows-gnu
You can build your crate easily with:
cargo build --target x86_64-pc-windows-gnu
No need for messing around with ~/.cargo/config or anything else.
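On a Debian/Ubuntu host the whole sequence is roughly this (a sketch; the mingw-w64 package name is distribution-specific):
sudo apt-get install mingw-w64   # provides x86_64-w64-mingw32-gcc
rustup target add x86_64-pc-windows-gnu
cargo build --release --target x86_64-pc-windows-gnu
# the binary ends up under target/x86_64-pc-windows-gnu/release/
file target/x86_64-pc-windows-gnu/release/*.exe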
EDIT: Just wanted to add that while you can use the above, it can sometimes be a headache. The Rust tools team also maintains a project called cross: https://github.com/rust-embedded/cross
It might be another solution that you want to look into.
The Rust distribution only provides compiled libraries for the host system. However, according to Arch Linux's wiki page on Rust, you could copy the compiled libraries from the Windows packages in the download directory (note that there are i686 and x86-64 packages) to the appropriate place on your system (in /usr/lib/rustlib or /usr/local/lib/rustlib, depending on where Rust is installed), install mingw-w64-gcc and Wine, and you should be able to cross-compile.
If you're using Cargo, you can tell Cargo where to look for ar and the linker by adding this to ~/.cargo/config (where $ARCH is the architecture you use):
[target.$ARCH-pc-windows-gnu]
linker = "/usr/bin/$ARCH-w64-mingw32-gcc"
ar = "/usr/$ARCH-w64-mingw32/bin/ar"
Note: the exact paths can vary based on your distribution. Check the list of files for the mingw-w64 package(s) (GCC and binutils) in your distribution.
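If you're unsure what to put there, you can ask the shell where your distribution's mingw-w64 tools landed (assuming the relevant packages are installed):
which x86_64-w64-mingw32-gcc
which x86_64-w64-mingw32-ar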
Then you can use Cargo like this:
$ # Build
$ cargo build --release --target "$ARCH-pc-windows-gnu"
$ # Run unit tests under wine
$ cargo test --target "$ARCH-pc-windows-gnu"
UPDATE 2019-06-11
This fails for me with:
Running `rustc --crate-name animation examples/animation.rs --color always --crate-type bin --emit=dep-info,link -C debuginfo=2 --cfg 'feature="default"' -C metadata=006e668c6384c29b -C extra-filename=-006e668c6384c29b --out-dir /home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/examples --target x86_64-pc-windows-gnu -C ar=x86_64-w64-mingw32-gcc-ar -C linker=x86_64-w64-mingw32-gcc -C incremental=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/incremental -L dependency=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps -L dependency=/home/roman/projects/rust-sdl2/target/debug/deps --extern bitflags=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/libbitflags-2c7b3e3d10e1e0dd.rlib --extern lazy_static=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/liblazy_static-a80335916d5ac241.rlib --extern libc=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/liblibc-387157ce7a56c1ec.rlib --extern num=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/libnum-18ac2d75a7462b42.rlib --extern rand=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/librand-7cf254de4aeeab70.rlib --extern sdl2=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/libsdl2-3f37ebe30a087396.rlib --extern sdl2_sys=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/libsdl2_sys-3edefe52781ad7ef.rlib -L native=/home/roman/.cargo/registry/src/github.com-1ecc6299db9ec823/winapi-x86_64-pc-windows-gnu-0.4.0/lib`
error: linking with `x86_64-w64-mingw32-gcc` failed: exit code: 1
Maybe this will help https://github.com/rust-lang/rust/issues/44787
Static compile sdl2
There is an option to statically compile SDL, but it didn't work for me.
Also, mixer is not included when used with the bundled feature.
Let's cross-compile examples from rust-sdl2 project from Ubuntu to Windows x86_64
In ~/.cargo/config
[target.x86_64-pc-windows-gnu]
linker = "x86_64-w64-mingw32-gcc"
ar = "x86_64-w64-mingw32-gcc-ar"
Then run this:
sudo apt-get install gcc-mingw-w64-x86-64 -y
# use rustup to add target https://github.com/rust-lang/rustup.rs#cross-compilation
rustup target add x86_64-pc-windows-gnu
# Based on instructions from https://github.com/AngryLawyer/rust-sdl2/
# First we need sdl2 libs
# links to packages https://www.libsdl.org/download-2.0.php
sudo apt-get install libsdl2-dev -y
curl -s https://www.libsdl.org/release/SDL2-devel-2.0.9-mingw.tar.gz | tar xvz -C /tmp
# Prepare files for building
mkdir -p ~/projects
cd ~/projects
git clone https://github.com/Rust-SDL2/rust-sdl2
cd rust-sdl2
cp -r /tmp/SDL2-2.0.9/x86_64-w64-mingw32/lib/* ~/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-pc-windows-gnu/lib/
cp /tmp/SDL2-2.0.9/x86_64-w64-mingw32/bin/SDL2.dll .
Build examples at once
cargo build --target=x86_64-pc-windows-gnu --verbose --examples
Or stop after first fail:
echo; for i in examples/*; do [ $? -eq 0 ] && cargo build --target=x86_64-pc-windows-gnu --verbose --example $(basename $i .rs); done
Run
cargo build will put binaries in target/x86_64-pc-windows-gnu/debug/examples/
Copy needed files:
cp /tmp/SDL2-2.0.9/x86_64-w64-mingw32/bin/SDL2.dll target/x86_64-pc-windows-gnu/debug/examples/
cp assets/sine.wav target/x86_64-pc-windows-gnu/debug/examples/
Then copy directory target/x86_64-pc-windows-gnu/debug/examples/ to your Windows machine and run exe files.
Run in cmd.exe
If you want to see the console output when running exe files, you may run them from cmd.exe.
To open cmd.exe in the current directory in File Explorer, shift-right-click on an empty area in the window and choose Open command window here.
Backtraces with MinGW should work now; if not, use MSVC: https://github.com/rust-lang/rust/pull/39234
There is a Docker-based solution called cross. All the required tools are in a containerized environment, so you don't need to install additional packages on your machine. See the Supported targets list.
From project's README:
Features
cross will provide all the ingredients needed for cross compilation without touching your system installation.
cross provides an environment, cross toolchain and cross compiled libraries, that produces the most portable binaries.
“cross testing”, cross can test crates for architectures other than i686 and x86_64.
The stable, beta and nightly channels are supported.
Dependencies
rustup
A Linux kernel with binfmt_misc support is required for cross testing.
One of these container engines is required. If both are installed, cross will default to docker.
Docker. Note that on Linux non-sudo users need to be in the docker group. Read the official post-installation steps. Requires version 1.24 or later.
Podman. Requires version 1.6.3 or later.
Installation
$ cargo install cross
Usage
cross has the exact same CLI as Cargo but as it relies on Docker you'll have to start the daemon before you can use it.
# (ONCE PER BOOT)
# Start the Docker daemon, if it's not already running
$ sudo systemctl start docker
# MAGIC! This Just Works
$ cross build --target aarch64-unknown-linux-gnu
# EVEN MORE MAGICAL! This also Just Works
$ cross test --target mips64-unknown-linux-gnuabi64
# Obviously, this also Just Works
$ cross rustc --target powerpc-unknown-linux-gnu --release -- -C lto
The solution that worked for me is below. It is similar to one of the accepted answers, but I did not need to add the toolchain.
rustup target add x86_64-pc-windows-gnu
cargo build --target x86_64-pc-windows-gnu
Refer to the documentation for more details.
I've had success on Debian (testing) without using Mingw and Wine just following the official instructions. They look scary, but in the end it didn't hurt that much.
The official instructions also contain info on how to cross-compile C/C++ code. I haven't needed that, so it's something I haven't actually tested.
A couple of remarks on individual points in the official instructions; the numbers below match the numbering there.
Debian: sudo apt-get install lld
Make a symlink named lld-link to lld somewhere in your $PATH. Example: ln -s /usr/bin/lld local_bin/lld-link
I don't cross-compile C/C++, haven't used this point personally.
This is probably the most annoying part. I installed Rust on a Windows box via rustup, and copied the libraries from the directories named in the official docs to the Linux box. Beware, there were sometimes uppercase library filenames, but lld wants them all lowercase (Windows isn't case-sensitive, Linux is). I've used the following to rename all files in current directory to lowercase:
for f in `find`; do mv -v "$f" "`echo $f | tr '[A-Z]' '[a-z]'`"; done
Personally, I've needed both Kit directories and just one of the VC dirs.
I don't cross-compile C/C++, haven't used this point personally.
Just make $LIB_ROOT in the script at the end of this post point to the lib directory from point 3.
Mandatory
I don't cross-compile C/C++, haven't used this point personally.
Depending on the target architecture, either of the following:
rustup target add i686-pc-windows-msvc
rustup target add x86_64-pc-windows-msvc
For cross-building itself, I'm using the following simple script (32-bit version):
#!/bin/sh
# "cargo build" for the 32-bit Windows MSVC architecture.
# Set this to proper directory
LIB_ROOT=~/opt/rust-msvc
# The rest shouldn't need modifications
VS_LIBS="$LIB_ROOT/Microsoft Visual Studio 14.0/VC/lib/"
KIT_8_1_LIBS="$LIB_ROOT/Windows Kits/8.1/Lib/winv6.3/um/x86/"
KIT_10_LIBS="$LIB_ROOT/Windows Kits/10/Lib/10.0.10240.0/ucrt/x86/"
export LIB="$VS_LIBS;$KIT_8_1_LIBS;$KIT_10_LIBS"
cargo build --target=i686-pc-windows-msvc "$@"
I'm using the script the same way I would use cargo build
Hope that helps somebody!

Sys_error using ocamlmklib on an object file

I am compiling a theorem prover on cygwin and I get this error:
$ make
ocamlmklib -o bin/minisatinterface minisat/core/Solver.o minisat/simp/SimpSolver.o bin/Ointerface.o -lstdc++
** Fatal error: Error while reading minisat/core/Solver.o: Sys_error("Invalid argument")
Makefile:49: recipe for target `bin/libminisatinterface.a' failed
make: *** [bin/libminisatinterface.a] Error 2
It is not clear what kind of invalid argument this is.
The only documentation I have found for ocamlmklib did not help me understand the error message. Could it not read the file itself, or is there a problem with the contents? ls does list the file:
$ ls -l minisat/core/Solver.o
-rw-r--r-- 1 gbuday mkpasswd 2096 jan. 22 10.42 minisat/core/Solver.o
update: if I remove Solver.o I get a different error message:
** Fatal error: Cannot find file "minisat/core/Solver.o"
So the above error message is about the contents of the object file.
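One quick diagnostic (my own suggestion, assuming the Cygwin file package is installed) is to ask file what is actually inside the object; a format the OCaml tools don't expect, e.g. ELF where COFF is wanted, would explain a read error:
file minisat/core/Solver.o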
I happen to know that this specifically has to do with the build of the ATP Satallax, which can be used with Isabelle Sledgehammer, and I was asked to look at this.
I have no expertise with makefiles and OCaml. My success at building Satallax v2.7 came purely from following the instructions in INSTALL, with some minimal ability at guessing what error codes meant, which I mainly needed when building Satallax v2.6 over a year ago.
The first important thing to do is make sure that the tar file is unzipped while working in a Cygwin terminal, rather than under Windows with something like WinZip.
Assuming that you're working in a Cygwin terminal, these are the notes which I made. After that I'll include text from the Satallax INSTALL, and a few comments.
Sources: http://www.ps.uni-saarland.de/~cebrown/satallax/
0) tar xvzf satallax-2.7.tar.gz
1) Cygwin packages (these are also needed for others like Leo-II):
zlib-devel, make, OCaml devel, gcc devel, g++ devel, libstdc++6-devel
Ubuntu 12 Packages:
sudo apt-get install build-essential
zlibg-dev using the Ubuntu Software Center
ocaml and g++ if they don't come with "build-essential"
2) Put eprover.exe in the path so that ./configure can find it.
a) There are the following lines in the configure file, which show that it's configured to find picomus; eprover has to be in the path, or `which eprover` has to be edited.
# Optionally set picomus to your picomus executable
picomus=${PWD}/picosat-936/picomus
# Optionally set eprover to your E theorem prover executable
eprover=`which eprover`
3) Follow the instructions in INSTALL.
a) export MROOT=`pwd` takes care of this next note, which I had to do
for v2.6; I keep the info here in case I need it in the future.
b) export MROOT=<minisat-dir>, where you replace "minisat-dir" with the
minisat directory, e.g. /cygdrive/e\E_2\binp\isaprove\satallax-2.6\cygwin\minisat
3) OLD v2.6 NOTE: If you get an error, delete the old source and try
untaring the sources again.
My build of v2.7 went through without problems, other than the test giving errors.
With Satallax v2.7, there is now the requirement that the build find the eprover. Note STEP 3 of INSTALL tells you to modify configure, or put eprover.exe in the path before the build. I put it in the path, which for me is
E:\E_2\dev\Isabelle2013-2\contrib\e-1.8\x86-cygwin
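In a Cygwin terminal that Windows directory is reachable under /cygdrive, so putting eprover.exe in the path comes down to something like this (adjust the path to your own install):
export PATH="/cygdrive/e/E_2/dev/Isabelle2013-2/contrib/e-1.8/x86-cygwin:$PATH"
which eprover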
The INSTALL file then gives short instructions:
* Short Instructions
cd minisat
export MROOT=`pwd`
cd core
make Solver.o
cd ../simp
make SimpSolver.o
cd ../../picosat-936
./configure
make
cd ..
./configure
make
./test | grep ERROR
After downloading all needed packages, and putting eprover.exe in the path, it built without errors for me other than the test, but the executable works when used by Isabelle Sledgehammer.
STEP 3 of INSTALL talks about providing the location of the picomus executable, but I'm pretty sure there's no need to do that, because picosat-936\picomus.exe gets built in this build.
If you watch the build messages, it'll tell you what it's looking for and what it finds.
For completeness, I include the text from INSTALL, except for the instructions related to what's pertinent for Coq.
There are a number of requirements in order to compile Satallax.
In short, you need make, ocaml, g++ and the zlib header files.
In Debian and derived Linux systems, you can get these from
the build-essential and zlib1g-dev packages. You need
ocamlopt to obtain a standalone executable.
If you're not the administrator of the computer on which you're installing,
you can quote the previous paragraph to the administrator.
* Short Instructions
cd minisat
export MROOT=`pwd`
cd core
make Solver.o
cd ../simp
make SimpSolver.o
cd ../../picosat-936
./configure
make
cd ..
./configure
make
./test | grep ERROR
./bin/satallax.opt is the native code executable to use.
See test for examples of how to use it.
* Long Instructions
STEP 1:
Compile minisat (see minisat/README)
cd minisat
export MROOT=<minisat-dir> (or setenv in cshell)
cd core
make Solver.o
cd ../simp
make SimpSolver.o
cd ../..
STEP 2 (Optional. Only needed to extract proof information for proof terms.) :
Build picosat (including picomus):
cd picosat-936
./configure
make
cd ..
STEP 3:
If desired, edit the configure script to give the location of your picomus
and eprover executables. (If the executables are not found by the configure script,
you will need to give the location of the executables to satallax via the command line
options -P <picomus> -E <eprover> if they are needed.)
Run the configure script for Satallax.
./configure
STEP 4:
make
uses ocamlopt to make a standalone executable
./bin/satallax.opt
and uses ocamlc to make a bytecode executable
./bin/satallax
that depends on ocamlrun
STEP 5:
Test satallax using the examples in the script file:
./test
As long as you don't see a line with the word ERROR, it should be working.

How do I configure Qt for cross-compilation from Linux to Windows target?

I want to cross compile the Qt libraries (and eventually my application) for a Windows x86_64 target using a Linux x86_64 host machine. I feel like I am close, but I may have a fundamental misunderstanding of some parts of this process.
I began by installing all the mingw packages on my Fedora machine and then modifying the win32-g++ qmake.conf file to fit my environment. However, I seem to be getting stuck with some seemingly obvious configure options for Qt: -platform and -xplatform. Qt documentation says that -platform should be the host machine architecture (where you are compiling) and -xplatform should be the target platform for which you wish to deploy. In my case, I set -platform linux-g++-64 and -xplatform linux-win32-g++ where linux-win32-g++ is my modified win32-g++ configuration.
My problem is that, after executing configure with these options, I see that it invokes my system's compiler instead of the cross compiler (x86_64-w64-mingw32-gcc). If I omit the -xplatform option and set -platform to my target spec (linux-win32-g++), it invokes the cross compiler but then errors when it finds some Unix related functions aren't defined.
Here is some output from my latest attempt: http://pastebin.com/QCpKSNev.
Questions:
When cross-compiling something like Qt for Windows from a Linux host, should the native compiler ever be invoked? That is, during a cross compilation process, shouldn't we use only the cross compiler? I don't see why Qt's configure script tries to invoke my system's native compiler when I specify the -xplatform option.
If I'm using a mingw cross-compiler, when will I have to deal with a specs file? Spec files for GCC are still sort of a mystery to me, so I am wondering if some background here will help me.
In general, beyond specifying a cross compiler in my qmake.conf, what else might I need to consider?
Just use M cross environment (MXE). It takes the pain out of the whole process:
Get it:
$ git clone https://github.com/mxe/mxe.git
Install build dependencies
Build Qt for Windows, its dependencies, and the cross-build tools;
this will take about an hour on a fast machine with decent internet access;
the download is about 500MB:
$ cd mxe && make qt
Go to the directory of your app and add the cross-build tools to the PATH environment variable:
$ export PATH=<mxe root>/usr/bin:$PATH
Run the Qt Makefile generator tool then build:
$ <mxe root>/usr/i686-pc-mingw32/qt/bin/qmake && make
You should find the binary in the ./release directory:
$ wine release/foo.exe
Some notes:
Use the master branch of the MXE repository; it appears to get a lot more love from the development team.
The output is a 32-bit static binary, which will work well on 64-bit Windows.
(This is an update of @Tshepang's answer, as MXE has evolved since his answer.)
Building Qt
Rather than using make qt to build Qt, you can use MXE_TARGETS to control your target machine and toolchain (32- or 64-bit). MXE started using .static and .shared as a part of the target name to show which type of lib you want to build.
# The following is the same as `make qt`, see explanation on default settings after the code block.
make qt MXE_TARGETS=i686-w64-mingw32.static # MinGW-w64, 32-bit, static libs
# Other targets you can use:
make qt MXE_TARGETS=x86_64-w64-mingw32.static # MinGW-w64, 64-bit, static libs
make qt MXE_TARGETS=i686-w64-mingw32.shared # MinGW-w64, 32-bit, shared libs
# You can even specify two targets, and they are built in one run:
# (And that's why it is MXE_TARGETS, plural, not MXE_TARGET ;)
# MinGW-w64, both 32- and 64-bit, static libs
make qt MXE_TARGETS='i686-w64-mingw32.static x86_64-w64-mingw32.static'
In @Tshepang's original answer, he did not specify an MXE_TARGETS, so the default was used. At the time he wrote his answer, the default was i686-pc-mingw32; now it's i686-w64-mingw32.static. If you explicitly set MXE_TARGETS to i686-w64-mingw32, omitting .static, a warning is printed because this syntax is now deprecated. If you try to set the target to i686-pc-mingw32, it will show an error, as MXE has removed support for MinGW.org (i.e. i686-pc-mingw32).
Running qmake
As we changed the MXE_TARGETS, the <mxe root>/usr/i686-pc-mingw32/qt/bin/qmake command will no longer work. Now, what you need to do is:
<mxe root>/usr/<TARGET>/qt/bin/qmake
If you didn't specify MXE_TARGETS, do this:
<mxe root>/usr/i686-w64-mingw32.static/qt/bin/qmake
Update: The new default is now i686-w64-mingw32.static
Another way to cross-compile software for Windows on Linux is the MinGW-w64 toolchain on Archlinux. It is easy to use and maintain, and it provides recent versions of the compiler and many libraries. I personally find it easier than MXE and it seems to adopt newer versions of libraries faster.
First, you will need an Arch-based machine (a virtual machine or Docker container will suffice). It does not have to be Arch Linux; derivatives will do as well. I used Manjaro Linux.
Most of the MinGW-w64 packages are not available in the official Arch repositories, but there is plenty in the AUR. The default package manager for Arch (pacman) does not support installation directly from the AUR, so you will need to install and use an AUR wrapper like yay or yaourt. Then installing the MinGW-w64 versions of Qt5 and the Boost libraries is as easy as:
yay -Sy mingw-w64-qt5-base mingw-w64-boost
#yaourt -Sy mingw-w64-qt5-base mingw-w64-qt5-boost #if you use yaourt
This will also install the MinGW-w64 toolchain (mingw-w64-gcc) and other dependencies.
Cross-compiling a Qt project for windows (x64) is then as simple as:
x86_64-w64-mingw32-qmake-qt5
make
To deploy your program you will need to copy corresponding dlls from /usr/x86_64-w64-mingw32/bin/. For example, you will typically need to copy /usr/x86_64-w64-mingw32/lib/qt/plugins/platforms/qwindows.dll to program.exe_dir/platforms/qwindows.dll.
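A minimal deployment sketch (program.exe is a placeholder for your binary, and the exact set of Qt/runtime DLLs depends on what your program links against):
mkdir -p deploy/platforms
cp program.exe deploy/
cp /usr/x86_64-w64-mingw32/lib/qt/plugins/platforms/qwindows.dll deploy/platforms/
# plus the Qt DLLs your program needs from /usr/x86_64-w64-mingw32/bin/, e.g.:
cp /usr/x86_64-w64-mingw32/bin/Qt5Core.dll /usr/x86_64-w64-mingw32/bin/Qt5Gui.dll deploy/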
To get a 32-bit version you simply need to use i686-w64-mingw32-qmake-qt5 instead. CMake-based projects work just as easily with x86_64-w64-mingw32-cmake.
This approach worked extremely well for me, was the easiest to set up, maintain, and extend.
It also goes well with continuous integration services. There are docker images available too.
For example, let's say I want to build QNapi subtitle downloader GUI. I could do it in two steps:
Start the docker container:
sudo docker run -it burningdaylight/mingw-arch:qt /bin/bash
Clone and compile QNapi
git clone --recursive 'https://github.com/QNapi/qnapi.git'
cd qnapi/
x86_64-w64-mingw32-qmake-qt5
make
That's it! In many cases, it will be that easy. Adding your own libraries to the package repository (AUR) is also straightforward. You would need to write a PKGBUILD file, which is as intuitive as it can get; see mingw-w64-rapidjson, for example.
Ok I think I've got it figured out.
Based in part on https://github.com/mxe/mxe/blob/master/src/qt.mk and https://www.videolan.org/developers/vlc/contrib/src/qt4/rules.mak
It appears that "initially" when you run configure (with -xtarget, etc.), it configures then runs your "hosts" gcc to build the local binary file ./bin/qmake
./configure -xplatform win32-g++ -device-option CROSS_COMPILE=$cross_prefix_here -nomake examples ...
then you run normal "make" and it builds it for mingw
make
make install
So, to answer your questions:
Yes.
Only if you need to use something other than msvcrt.dll (its default), though I have never used anything else, so I don't know for certain.
https://stackoverflow.com/a/18792925/32453 lists some configure params.
In order to compile Qt, one must run its configure script, specifying the host platform with -platform (e.g. -platform linux-g++-64 if you're building on 64-bit Linux with the g++ compiler) and the target platform with -xplatform (e.g. -xplatform win32-g++ if you're cross-compiling to Windows).
I've also added this flag:
-device-option CROSS_COMPILE=/usr/bin/x86_64-w64-mingw32-
which specifies the prefix of the toolchain I'm using, which will get prepended to 'gcc' or 'g++' in all the makefiles that are building binaries for windows.
Finally, you might get problems while building idc, which apparently is something that is used to add ActiveX support to Qt. You can avoid that by passing the flag -skip qtactiveqt to the configure script. I got this one out of this bug report: https://bugreports.qt.io/browse/QTBUG-38223
Here's the whole configure command I've used:
cd qt_source_directory
mkdir my_build
cd my_build
../configure \
-release \
-opensource \
-no-compile-examples \
-platform linux-g++-64 \
-xplatform win32-g++ \
-device-option CROSS_COMPILE=/usr/bin/x86_64-w64-mingw32- \
-skip qtactiveqt \
-v
As for your questions:
1 - Yes. The native compiler will be called in order to build some tools that are needed in the build process. Maybe things like qconfig or qmake, but I'm not entirely sure which tools, exactly.
2 - Sorry. I have no idea what specs files are in the context of compilers =/ . But as far as I know, you wouldn't have to deal with that.
3 - You can specify the cross compiler prefix in the configure command line instead of doing it in the qmake.conf file, as mentioned above. And there's also that problem with idc, whose workaround I've mentioned as well.

How do I create an 'install' package for a Qt application?

Generally, to install a package on a Linux-based operating system you use
./configure
make
make install
How does this work? And how do I create a package that can be installed this way?
My application uses the Qt framework and I think I'm aiming for something like "MyPackage.tar.gz"
You can create a Debian package from your project. As I understand it, you want a package intended for distribution, so I would suggest creating a Debian package. Here is an introduction to the Debian packaging system. In the article they at some point describe how to create a "rules" file, which is at the core of the build process. Here is a sample of the one I typically use for my Qt/KDE projects:
#!/usr/bin/make -f
#export DH_VERBOSE=1
# This is the debhelper compatibility version to use.
#export DH_COMPAT=3

DESTDIR=$(CURDIR)/debian/project
TR_DIR=$(CURDIR)/debian/project/usr/share/qt4/translations

configure:
	qmake project.pro

clean:
	dh_testdir
	dh_testroot
	dh_clean

build: configure
	dh_testdir
	lrelease translations/project_en.ts
	$(MAKE)

install: build
	mkdir -p $(TR_DIR)
	cp translations/project_en.qm $(TR_DIR)
	$(MAKE) INSTALL_ROOT=$(CURDIR)/debian/project install
	dh_installdirs

binary-arch: build install
	dh_testdir
	dh_testroot
	dh_installmenu
	dh_link
	dh_strip
	dh_compress
	dh_fixperms
	dh_installdeb
	dh_shlibdeps
	dh_gencontrol
	dh_md5sums
	dh_builddeb
This is normally sufficient for small projects.
configure is usually part of the GNU build system (autotools), which is not used in a typical Qt project. qmake is used instead for build file generation, and it internally handles most of the tasks configure does for non-Qt projects.
The typical build install process for a Qt application is
qmake
make
make install
You could create a simple ./configure script that calls qmake if you need the command names to be identical. You can also use autotools with Qt if you need it, see e.g. Qt Creator Instructions For Autotools
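Such a wrapper can be as small as this (a sketch; it just forwards everything to qmake so that the familiar ./configure && make && make install sequence works):
#!/bin/sh
# configure: thin shim that delegates to qmake
exec qmake "$@"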
Qt is often used with CMake, which I highly recommend. One notable point is that it likes out-of-source builds.
Your configure script could be
#!/bin/bash
(mkdir build; cd build; ccmake ..)
and the makefile could be
#!/bin/bash
(cd build; make)
Newer versions of debhelper support qmake. A rules file like,
#!/usr/bin/make -f
%:
	dh $@ --buildsystem=qmake
is all that is needed. You need,
bar.file = foo
bar.path = install/dir
INSTALLS += bar
inside your project's .pro (qmake) file. qmake will create install targets, and the Perl file /usr/share/perl5/Debian/Debhelper/Buildsystem/qmake.pm will get called and parse the qmake file. You need to create the 'debian/' files (changelog, compat, control, copyright) as well as the rules file.
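A rough sketch of putting that skeleton together and building the package (file contents are up to you; dpkg-buildpackage comes from the dpkg-dev package):
mkdir debian
echo 10 > debian/compat
# write debian/changelog, debian/control, debian/copyright and debian/rules by hand
# (or start from dh_make), then make the rules file executable:
chmod +x debian/rules
# build an unsigned .deb
dpkg-buildpackage -us -uc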
