I used to develop using Visual Studio on Windows... (C++)
We recently migrated our app to Linux (Red Hat), and currently each employee builds the app in his own virtual machine using VMware. Our native OS is still Windows.
At first, it seemed that building with g++ was faster than with the VS compiler; however, after some time, it turned out to be pretty slow. Is it because we're using VMware?
Are there things we can do to accelerate the build process?
g++ is not a speed daemon, but it performs well. Yes, a VM can have unsteady performance, especially on disk access. You can always try ccache to avoid recompiling the parts you don't need to.
Or, ditch VMware (and Windows underneath) and do it all on Linux, either with a dedicated build box or on your own machine. If you have to have a full-featured GUI for writing code, Qt Creator is quite up to the task (no, it's not tied to writing only Qt applications).
I never really noticed g++ being slower than VS or the other way around, but there are ways to make g++ go a lot faster.
ccache, for instance. I tried it and it really speeds up compilation.
ccache is a compiler cache. It speeds up recompilation of C/C++ code by caching previous compilations and detecting when the same compilation is being done again.
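A minimal setup sketch (the package name and paths vary by distro; shown here for the Red Hat family):

    # Install ccache, then route g++ through it:
    sudo yum install ccache
    export CXX="ccache g++"
    # Alternatively, masquerade: put a 'g++' symlink to ccache early in PATH.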
If you're working on a multicore machine you probably want parallel compilation; if you're using make, you can run make -jX, where X is your number of cores. Note that you'll have to enable multiple cores in your virtual machines.
Disable compiler optimizations.
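Combining the two tips above, and assuming your Makefile respects CXXFLAGS, a typical invocation might look like:

    # Parallel build across all available cores, optimizations disabled:
    make -j"$(nproc)" CXXFLAGS="-O0 -g"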
That said, whatever you do, compilation in a virtual machine won't be as efficient as compilation on a real machine.
Related
NW.js has a feature it calls "Protect JavaScript Source Code": https://nwjs.readthedocs.io/en/latest/For%20Users/Advanced/Protect%20JavaScript%20Source%20Code/
The JavaScript source code of your application can be protected by compiling to native code and loaded by NW.js. You only have to distribute the compiled code with your app for production.
JS source code is compiled to native code with the tool nwjc, which is provided in the SDK build.
The compiled code is not cross-platform nor compatible between versions of NW.js. So you’ll need to run nwjc for each of the platforms when you package your application.
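For reference, the basic workflow looks roughly like this (file names are illustrative; evalNWBin is the loading call documented by NW.js):

    # Run on EACH target platform, using that platform's SDK build:
    nwjc src/app.js app.bin
    # Then, inside your app's JavaScript, load the compiled snapshot:
    #   nw.Window.get().evalNWBin(null, 'app.bin');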
I downloaded the SDK distributions for Windows, Mac and Linux, and looked into the files inside them.
The Windows one has a "nwjc.exe" file, ready to use, and it works. Good.
But the Mac and Linux ones have no .exe; instead each just has a "nwjc" executable, which is obviously meant to run on macOS and Linux, respectively. Huh?
My OS is Windows. I am developing my NW.js application on Windows, to be distributed on Windows, Mac and Linux. And I cannot run those Linux/macOS executables on a Windows system. But I have to, since they are for some reason not cross-platform.
This seems like a dead end to me. I either have to not "protect" my application (and thus have it stolen/copied/broken/hacked), or buy two separate computers (one expensive Mac and one PC for Linux) and do this step on those. Which of course defeats the whole point of "simple" cross-platform development.
Before you suggest it: I have tried running Linux in a VM and it's terrible. And Macs cannot even be legally emulated, AFAIK.
Most likely, this is going to cause my application to be Windows-only, which is really sad since a major reason for going this route was to have "simple" cross-platformness which "just works".
Is there something I'm missing about this?
PS: I already do "crush" my code with Uglify-JS, before the "protection" stage.
The source protection works by saving a copy of the application as it is currently running in the OS's memory. This means you must run the command on the actual operating system, so it can load the app into memory and then save it as a V8 snapshot (V8 is the JS engine used by Chromium/Node/NW.js).
You can use tools like VirtualBox or VMware to emulate other OSes on Windows. Getting macOS to run in an emulator is a pain, but there are YouTube tutorials explaining how. Linux is very easy, though; I'd start there if you are unfamiliar. Still, running your application on actual hardware and manually testing in each OS is always best.
If this is too cumbersome, you may consider not worrying about source protection until later. You can still set up your app for distribution on the other OSes without needing to emulate them, as long as you don't use this feature. Though again, it's always best to test manually on each OS.
I'm working on a side project which requires me to configure and compile a tiny Linux System based on Ubuntu.
The result should be a tiny OS with the following features:
A Bootloader
A Kernel
A Process
A Thread
Miscellaneous (if possible):
    A File System
    Virtual memory
    A Console
I read lots of documents about it, one of them being: http://users.cecs.anu.edu.au/~okeefe/p2b/buildMin/buildMin.html#toc3
I deleted the file system and recompiled the kernel using make xconfig. I've tried deactivating modules and configuration options many times, but it's not working for me.
How can I configure the kernel for the OS with only the features I listed above? What options can I disable or enable while still having a working system?
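One general approach, instead of pruning Ubuntu's full config, is to start from an empty configuration and enable only what you need. The standard kbuild targets for that (in any recent kernel tree) are:

    make allnoconfig      # start with (nearly) everything disabled
    make menuconfig       # enable only the features you listed
    make -j"$(nproc)"     # build the kernel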
Having a very small kernel is not a goal for Ubuntu, so maybe choosing Ubuntu is part of your problem. As a starting point I would look at what OpenWrt does. They do a good job of keeping the kernel small, and it is easy to get started: OpenWrt Buildroot – Usage
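For reference, getting an OpenWrt buildroot going is roughly this (repository URL current as of writing; see their documentation for details):

    git clone https://git.openwrt.org/openwrt/openwrt.git
    cd openwrt
    make menuconfig    # pick target, kernel options and packages
    make               # builds the toolchain, kernel and images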
Try Linux From Scratch. It is a step-by-step approach to building a minimal Linux system, from which you can evolve later on: http://www.linuxfromscratch.org/
Use the Gentoo Linux distribution: it's great for practicing the creation of Linux systems. Gentoo has excellent setup documentation, for example about configuring the kernel.
And Gentoo is a little easier and faster to set up than Linux From Scratch (LFS). If you want to go deeper, LFS may be a good learning step too.
I'd like to develop Haskell code that will run on Windows and interact with Windows OS APIs, but I would like to do it on a Linux machine. How do I accomplish this? Compiling on a Windows machine works, but compiling on a Linux machine does not. Haskell can use an LLVM backend, can't it? Can I use LLVM to accomplish this? Or work with MinGW somehow?
I tried many possibilities, including GHC on Wine (didn't work for me, despite many notices advertising that it works "out of the box").
For cross-compilation, one problem lies in making GHC find your C libraries and DLLs (for Windows). Template Haskell will also give you headaches (because it needs to load Linux libraries while compiling for Windows).
I never managed to get around those problems properly.
In the end, I opted for installing GHC on a Windows VM, and now I use a script to push my work to a repo, connect to the Windows machine via SSH, pull, clean, recompile and test, all executed from a Linux CLI and giving me feedback about what's happening on Windows.
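A rough sketch of that script (the host name, paths, and choice of cabal commands are illustrative):

    # Run from the Linux CLI:
    git push origin master
    ssh dev@windows-vm 'cd /c/myproject && git pull && cabal clean && cabal build && cabal test'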
EDIT: I'm not providing this answer in an attempt to discourage anyone from trying something smarter. I too am interested in real cross-compilation and, if someone has a good solution, I'm all ears. My alternative method always works, but it really is a pain, having to start a VM just for this. Furthermore, it implies using one VM per OS per architecture, which is quite heavy.
I know about several projects for cross-compiling between Linux and Windows.
The Wine project is great for running Windows applications inside Linux.
andLinux is a Linux system that runs inside Windows.
My question is: can we compile a complete Linux OS with a Windows compiler (like mingw32, Visual Studio, ...) in order to get a Linux system that is fully compatible with the Windows PE executable format?
As wine demonstrates, the PE format isn't really the problem with compatibility.
PE only defines how the program is pieced together at load time. Under Windows, the loader interprets it, loads all the program sections into memory, loads all the supporting DLLs into memory, and patches up the function pointers so that there is a program sitting in memory ready to go. (See http://msdn.microsoft.com/en-us/library/ms809762.aspx for more details. It's a good read!)
There is little stopping you from writing a kernel module to do all of this. With the details in the page linked above it may not be too hard, and someone may already have done it.
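In fact, Linux's binfmt_misc facility already provides the loader-side hook: you can register an interpreter for the PE magic number ("MZ"), which is how some distros hand PE binaries straight to Wine (the wine path may differ on your system):

    # As root, register Wine as the handler for PE executables:
    echo ':DOSWin:M::MZ::/usr/bin/wine:' > /proc/sys/fs/binfmt_misc/register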
The real issue is the fundamentals of the operating system. Even if Linux could load a PE, there would be problems around the fundamental difference in file names (\ vs /), the permissions model, which is different, and the Windows registry, which doesn't exist under Linux. And that's before you get into the different windowing model for GUIs.
Therefore the task of getting a Windows program to run under Linux is less about the program loader and much more about emulating all of the Windows DLLs under Linux. As I understand it, this is the main heart of Wine.
I'm considering doing some Linux kernel and device driver development in a VMware VM for testing (Ubuntu 9.04 as a guest under VMware Server 2.0), while doing the compiles on the Ubuntu 8.04 host.
I don't want to take the performance hit of doing the compiles under the VM.
I know that the kernel obviously doesn't link to anything outside itself, so there shouldn't be any problems in that regard, but:
Are there any special gotchas I need to watch out for when doing this?
Beyond still having a running computer when the kernel crashes, are there any other benefits to this setup?
Are there any guides to using this kind of setup?
Edit
I've seen numerous references to remote debugging in VMware via Workstation 6.0 using GDB on the host. Does anyone know if this works with any of the free versions of VMware, such as Server 2.0?
I'm not sure about the Ubuntu side of things. Given that you are not doing a real cross-compilation (i.e. x86 -> ARM), I would consider using the make-kpkg tool. This produces an installable .deb archive containing the kernel for your system. It works for me on Debian; it should work for you on Ubuntu as well.
More about make-kpkg:
http://www.debianhelp.co.uk/kernel2.6.htm
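A typical run from the top of the kernel source tree looks like this (the revision string is arbitrary):

    make-kpkg clean
    fakeroot make-kpkg --initrd --revision=custom.1.0 kernel_image
    # Install the resulting package on the target (guest) machine:
    sudo dpkg -i linux-image-*custom.1.0*.deb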
I'm not aware of any gotchas. But basically it depends on what kind of kernel part you are working with. The more specialized the hardware/driver you need, the more likely the VM won't work for you.
Probably faster boots, and my favorite: the ability to take a screenshot (cut'n'paste) of the panic message.
Try browsing the VMware Communities forums. This thread looks very promising, although it discusses the topic for macOS:
http://communities.vmware.com/thread/185781
Compiling, editing, compiling is quite quick anyway; you don't recompile your whole kernel each time you modify the driver.
Before an outright crash, you can hit deadlocks, resource misuse that leaves a module unremovable, memory leaks, etc.: all kinds of things that need a reboot even if your machine did not crash. So yes, this can be a good idea.
The gotchas can come in the form of the install step and module dependency generation, since you don't want to install your driver on the host, but on the target machine.
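For example, with the standard kbuild out-of-tree flow you can build on the host against the guest's kernel tree and stage the install rather than touching the host's /lib/modules (KDIR, paths, and the host name are illustrative):

    # Build the module against the guest's kernel source/headers:
    make -C "$KDIR" M="$PWD" modules
    # Install into a staging directory instead of the host system:
    make -C "$KDIR" M="$PWD" INSTALL_MOD_PATH="$PWD/staging" modules_install
    # Copy into the VM and regenerate module dependencies there:
    scp -r staging/lib/modules user@guest-vm:/tmp/
    ssh user@guest-vm 'sudo cp -r /tmp/modules /lib/ && sudo depmod -a'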