Qt Creator - Build duration longer on Windows than Linux

I'm developing an application with Qt in Qt Creator and have been moving a project and its files back and forth between Windows and Linux. I've noticed that builds are much faster on Linux than on Windows. Is this normal? And if not, how can I fix it?
A build on my Linux machine normally takes a few seconds, while on Windows it takes so long that I sometimes just cancel it.

It is not unusual. Compilation typically creates lots of comparatively small files, and that's not where Windows file systems shine. In any case, double-check your virus scanner settings; you probably don't want your temporary build files scanned repeatedly.
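For example, if the machine runs Windows Defender, you can exclude the build tree from real-time scanning. A minimal sketch, assuming a hypothetical build path:

    # Run in an elevated PowerShell session.
    # Excludes the (hypothetical) build directory from Defender's real-time scans.
    Add-MpPreference -ExclusionPath "C:\dev\myproject\build"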

Related

How do I use the "nwjc" versions for Mac and Linux, on Windows?

NW.js has this feature it calls "Protect JavaScript Source Code": https://nwjs.readthedocs.io/en/latest/For%20Users/Advanced/Protect%20JavaScript%20Source%20Code/
The JavaScript source code of your application can be protected by compiling it to native code, which is then loaded by NW.js. You only have to distribute the compiled code with your app for production.
JS source code is compiled to native code with the tool nwjc, which is provided in the SDK build.
The compiled code is not cross-platform nor compatible between versions of NW.js. So you’ll need to run nwjc for each of the platforms when you package your application.
I downloaded the SDK distributions for Windows, Mac and Linux, and looked into the files inside them.
The Windows one has a "nwjc.exe" file, ready to use, and it works. Good.
But the Mac and Linux ones have no .exe, but instead just a "nwjc" executable. This is obviously for running on macOS and Linux, respectively. Huh?
My OS is Windows. I'm developing my NW.js application on Windows, to be distributed on Windows, Mac, and Linux. But I cannot run those Linux/macOS executables on a Windows system, and apparently I have to, since the compiled code is for some reason not cross-platform.
This seems like a dead end to me. I either have to not "protect" my application (and thus have it stolen/copied/broken/hacked), or buy two separate computers (one expensive Mac and one PC for Linux) and do this step on those. Which of course defeats the whole point of "simple" cross-platform development.
Before you suggest it: I have tried running Linux in a VM, and it's terrible. And Macs cannot even be legally emulated, AFAIK.
Most likely, this is going to cause my application to be Windows-only, which is really sad since a major reason for going this route was to have "simple" cross-platformness which "just works".
Is there something I'm missing about this?
PS: I already "crush" my code with UglifyJS before the "protection" stage.
The source protection works by saving a copy of the application as it is currently running in the OS's memory. This means you must run the command on the actual operating system, so it can load the app into memory and then save it as a V8 snapshot (V8 is the JS engine used by Chromium/Node/NW.js).
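For reference, the protection step itself is a single command, run once per target OS on that OS (the file names here are placeholders):

    # Run on each target OS, using the nwjc from that OS's SDK build.
    nwjc source.js binary.bin

The resulting binary.bin ships with the app in place of source.js and is loaded at runtime via NW.js's Window.evalNWBin() API.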
You can use tools like VirtualBox or VMware to run other OSes on Windows. Getting OS X to run in a VM is a pain, but there are YouTube tutorials that explain how. Linux is very easy, though; I'd start there if you are unfamiliar. That said, running your application on actual hardware and manually testing in each OS is always best.
If this is too cumbersome, you may consider not worrying about source protection until later. You can still set up your app for distribution on the other OSes without needing to virtualize them if you don't use this feature. Though again, it's always best to test manually on each OS.

What is the best practice for coding when the project is on a guest OS (VirtualBox)?

I have a project whose files are on a guest OS (Red Hat Enterprise Linux) in VirtualBox; my host OS is Mac OS. I used to code right in RHEL with the Atom editor. But my boss told me that it's inefficient to code in a guest OS. Well, it makes sense, because Mac OS or Windows is more responsive than Linux, so I changed my way:
1. Copy the whole project located on RHEL to a folder shared between Mac OS and RHEL, using rsync
2. Code with Atom in Mac OS
3. Copy the project in the shared folder back to the original project in RHEL, with rsync
I'm using Atom (not vim in RHEL) because it can edit the whole project in one window, which is convenient for my situation. But there is a problem: after copying the project back in Step 3, git status shows that everything has changed, even though I only edited a few files. That is a little annoying.
Is there any better way to code in such environment? any advice is appreciated.
BretzL's suggestion to use shared folders is a good one, but I think it's important to address the underlying issue: your boss's assumption that coding on a VM is inefficient or slow is simply not true.
It sounds like your new workflow, which was instituted as a result of their advice, is making development harder than it was on the VM. The shared folders will help with that, but if the VM is configured with access to enough cores and memory, its performance for most tasks will be fine, and there may not be any problem with developing on the VM directly. I do a significant amount of development on a VM and haven't had any issues. You may see slower builds on the VM if you're building whole kernels or other large projects, but otherwise it should be fine.
If you didn't have any performance or productivity problems before forcing yourself to work outside of the VM, then... it wasn't a problem.
(I also have an issue with the assumption that Linux is always less responsive than Windows or Mac OS, but that's a debate for a different day.)
VirtualBox supports shared folders, so you don't need to rsync back and forth. Just mount the shared folder where your application server on the RHEL guest expects the code.
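A minimal sketch, assuming the share is named project in the VM settings and the Guest Additions are installed (the paths are placeholders):

    # On the RHEL guest: mount the VirtualBox shared folder where the app expects it.
    sudo mount -t vboxsf project /srv/project
    # Or make it persistent across reboots via /etc/fstab:
    # project  /srv/project  vboxsf  defaults  0  0

This also sidesteps the git status problem: with a single working tree, no timestamps or permissions get rewritten by copying back and forth.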
I also recommend you take a look at https://www.vagrantup.com/ for managing developer VMs.
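If you go that route, the basic flow is only a few commands; the box name below is an assumption, so substitute any RHEL-compatible box:

    vagrant init generic/rhel7   # writes a Vagrantfile; box name is an example
    vagrant up                   # creates and boots the VM
    vagrant ssh                  # shell in; the project dir is synced at /vagrant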

How to simulate a ThreadX application on Windows

I have an application using ThreadX 5.1 as the kernel.
The Image is flashed on to a hardware running an ARM 9 processor.
I'm trying to build a simulator for the application that can run on Windows (say XP, 32-bit).
Is there any way I can make it run on Windows without modifying the entire source code to call Win32 system calls?
You can build a simulator for the application that runs on Windows with "ThreadX for Win32". Its specification is here:
http://rtos.com/products/threadx/Win32
Yes, you can, if you are willing to put in the work.
First observe that each ThreadX system call has an equivalent POSIX call, except for events.
So your ThreadX program can run as a single process using POSIX threads, mutexes, etc.
Events can be handled by an external library (there are a few out there).
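A minimal sketch of the thread-call mapping, assuming the standard tx_thread_create() signature; the stack, priority, and time-slice arguments are simply ignored here because the host OS manages those:

    /* Hypothetical ThreadX-over-POSIX shim; a sketch, not a complete port. */
    #include <pthread.h>

    typedef unsigned int  UINT;
    typedef unsigned long ULONG;
    typedef char  CHAR;
    typedef void  VOID;

    typedef struct TX_THREAD {
        pthread_t handle;
        VOID    (*entry)(ULONG);
        ULONG     entry_input;
    } TX_THREAD;

    /* pthread entries take void*, ThreadX entries take ULONG,
       so a small trampoline adapts the calling convention. */
    static void *trampoline(void *arg)
    {
        TX_THREAD *t = arg;
        t->entry(t->entry_input);
        return NULL;
    }

    UINT tx_thread_create(TX_THREAD *t, CHAR *name,
                          VOID (*entry)(ULONG), ULONG entry_input,
                          VOID *stack_start, ULONG stack_size,
                          UINT priority, UINT preempt_threshold,
                          ULONG time_slice, UINT auto_start)
    {
        (void)name; (void)stack_start; (void)stack_size;   /* host manages stacks */
        (void)priority; (void)preempt_threshold; (void)time_slice; (void)auto_start;
        t->entry = entry;
        t->entry_input = entry_input;
        return pthread_create(&t->handle, NULL, trampoline, t) == 0 ? 0u : 1u;
    }

Mutexes, semaphores, and queues map similarly onto pthread_mutex_t, sem_t, and a small ring buffer.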
If you find this difficult on Windows, then the simplest thing to do is set up a Linux VM. I use an Ubuntu VM running on VirtualBox; it is very easy to set up. All you will need is the CDT version of Eclipse.
Next you need to stub out all of your low level system calls.
This is also easier than you might think. For example, if you have a SPI driver to read and write to flash, you can replace your flash with a big array which is quite easy to work with at this level.
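For instance, a hypothetical flash stub might look like this (the names and size are made up for illustration):

    /* Stand-in for the SPI flash driver: backs reads and writes with a RAM array. */
    #include <stdint.h>
    #include <string.h>

    #define FLASH_SIZE (256u * 1024u)
    static uint8_t flash_image[FLASH_SIZE];

    int flash_read(uint32_t addr, void *buf, size_t len)
    {
        if (addr + len > FLASH_SIZE) return -1;   /* reject out-of-range access */
        memcpy(buf, &flash_image[addr], len);
        return 0;
    }

    int flash_write(uint32_t addr, const void *buf, size_t len)
    {
        if (addr + len > FLASH_SIZE) return -1;
        memcpy(&flash_image[addr], buf, len);
        return 0;
    }

A unit test can then pre-load flash_image with a known pattern and assert on what the module under test writes back.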
Having said all this, you may get more mileage if your ThreadX application is modular. Then you can test each module on its own, and you don't need to mess with threads, etc.
As a first approximation this may give you what you need without going the distance to port the whole thing to run under posix.
I have done this successfully in the past and developed a full set of unit tests for a module, which allowed me to develop and test it (on my Mac) before going to the target. Development is much faster and more reliable this way.
Another option you may want to consider is to find a qemu project that supports your microprocessor. With some work you can develop a complete simulator for your platform and then run the real firmware under the emulator.
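A rough sketch, assuming QEMU supports a board close to yours (the machine type and image name are placeholders):

    # Boot the (hypothetical) firmware image on an emulated ARM board.
    qemu-system-arm -M versatilepb -nographic -kernel firmware.elf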
Good luck.

Native Windows Linux

I know about several projects for interoperability between Linux and Windows.
The Wine project is great for running Windows applications inside Linux.
andLinux is a Linux running inside Windows.
My question is: can we compile a complete Linux OS with a Windows compiler (like mingw32, Visual Studio, ...) in order to get a Linux system that is fully compatible with the Windows PE executable format?
As Wine demonstrates, the PE format isn't really the problem with compatibility.
PE only defines how the program is pieced together at load time. Under Windows, the OS loader interprets it, loads all the program sections into memory, loads all the supporting DLLs, and patches up the function pointers so that there is a program sitting in memory ready to go. (See http://msdn.microsoft.com/en-us/library/ms809762.aspx for more details. It's a good read!)
There is little stopping you from writing a kernel module to do all of this. With the details in the page linked above it may not be too hard, and someone may already have done it.
The real issue is the fundamentals of the operating system. Even if Linux could load a PE, there would be problems around the fundamental difference in path separators (\ vs /), as well as the permissions model, which is different, and the Windows registry, which doesn't exist under Linux. And that's before you get into the different windowing model for GUIs.
Therefore the task of getting a Windows program to run under Linux is less about the program loader and much more about emulating all of the Windows DLLs under Linux. As I understand it, that is the main heart of Wine.
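Incidentally, Linux's binfmt_misc facility already lets you register a handler for PE binaries so that Windows executables launch through Wine transparently. A sketch, assuming Wine is installed at the usual path:

    # As root: register Wine as the interpreter for files starting with the 'MZ' magic.
    echo ':DOSWin:M::MZ::/usr/bin/wine:' > /proc/sys/fs/binfmt_misc/register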

Looking for a super tiny Linux distro whose sole purpose is running an AIR application?

I'm looking for a really, really small Linux distribution, or a process for making my own, whose sole purpose is to get an AIR application to launch full screen and stay there. Essentially I'm building a home kitchen computer that runs entirely as an AIR app.
I have looked into using Windows XP and Windows XP Embedded, but they pose so many issues that I figured I'd try modern Linux.
I have also seen Tiny Core Linux, which looks interestingly small, but I'm not sure what issues it poses with regard to running AIR and a hardware-accelerated display. I've also thought about stripping down an Ubuntu installation, but I'm sure somebody must have done this already; Google is just failing me right now...
I'm also interested in running an "embedded" version of, say, Android, and running the AIR app on some ARM-based hardware, again with just the AIR runtime, although this is less preferred as it's more complex.
I'm also hooking this up to a touch-screen monitor (not yet arrived), so I'll need to hunt down or write some drivers for translating the touch events into something AIR can understand... (this was my main reason for considering Windows, in that all the drivers would just work).
What I'm after
Minimal Linux kernel with JUST the drivers for the box I need
X Display with accelerated graphics support (Doesn't have to be X if AIR can run on a frame buffer?)
Running a Full screen AIR application (simple enough)
Ability to write back to the filesystem (enough support for AIR)
SSH Access for remote control
Samba for updating the filesystem (easier to maintain the system)
Touch screen support (3M Ex III I think...)
Audio support
Don't need
Don't need any window manager or any other GUI tools unless required by AIR
Don't need any toolbars or file managers or anything; The AIR app is the "OS"
Don't need any package managers or repos
Don't need multi user or logging in; everything can just run as an unprivileged account
I don't mind hand-crafting the filesystem and configs if that makes it easier. I'm mainly looking for a filesystem that is as tiny as possible, that I can just drop my AIR app into, with some scripts to start it when the X server starts.
Thanks,
Chris
Try an embedded Linux build system such as Buildroot. It can build an entire system from source and is very lightweight; a basic system is less than 1 MB in size.
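The basic workflow is just a couple of make targets; which packages you select (X or framebuffer support, SSH, Samba, audio) is up to your board:

    # In the Buildroot source tree: pick options, then build the whole image.
    make menuconfig     # choose toolchain, kernel, and target packages
    make                # the root filesystem image lands under output/images/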
Ended up going with Tiny Core. Very tiny and quick to boot. You can also write extensions for it, and since it doesn't use a persistent drive, you can just switch the thing off without worrying that it will break something -- exactly what you need in a kitchen :-D.
My current plan is to:
Set up a working version using Ubuntu, as this is mostly supported by Adobe
Slowly strip it back and try to get as few things as possible to start on boot
Try building my own distro/package from source and selecting only the packages I need
Compile my own kernel with nearly everything turned off and just leave on the things I need
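For the kernel step, a sketch of the usual approach (on modern kernels, make tinyconfig starts from the smallest possible configuration):

    # Start minimal, then re-enable only the drivers the box actually needs.
    make tinyconfig
    make menuconfig      # add back disk, network, input, and GPU support
    make -j"$(nproc)"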
