I'm working on a wrapper that will let C++ code containing MFC and Windows API calls compile against their Linux equivalents.
The C++ code has the following characteristics:
No GUI components are present.
It uses at most about 10 MFC classes, mostly for string parsing.
It makes heavy use of Windows-specific types and constants such as HINSTANCE, LPCTSTR, and so on.
I'm not allowed to compile with Wine on Linux. So far I've come across wxWidgets, but it seems quite vast and I doubt I'll need all of its components.
Please share your ideas on creating the wrapper. Is there any existing code available that does this task, or part of it?
There is no automatic or even semi-automatic way to convert an MFC application to wxWidgets. It is, of course, perfectly possible to do, and many, many projects have gone through this transformation, but you just have to do it by hand.
See the MFC page of the wxWiki for some starting points.
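One small part of the job that can be covered by wrapper code is the plain Windows types and constants you mention. Below is a minimal sketch of a compatibility header, assuming the code is non-GUI and built without UNICODE; the exact typedef choices are assumptions and need checking against how the code really uses each type, and MFC classes such as CString still have to be replaced separately.

// win_compat.h - minimal compatibility header sketch for non-GUI code
#pragma once
#ifndef _WIN32

#include <cstdint>

typedef void*        HINSTANCE;   // treated as an opaque handle, never dereferenced here
typedef void*        HANDLE;
typedef const char*  LPCSTR;
typedef const char*  LPCTSTR;     // assumes a narrow-character (non-UNICODE) build
typedef char*        LPSTR;
typedef char*        LPTSTR;
typedef uint32_t     DWORD;
typedef int32_t      LONG;
typedef int          BOOL;

#define TRUE  1
#define FALSE 0
#define _T(x) x                   // no-op when TCHAR is plain char

#endif // !_WIN32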
I am attempting to develop a GUI application for Tails. I'm doing the initial development on Debian 8 since development directly in Tails can be a pain.
I started out using Anjuta, but the documentation is essentially non-existent. The Anjuta website has nothing at all about how Glade is integrated or how to use it. I can't even track down documentation on how to change the main window title. The only tutorial I found has you start a project and build it using the default files that are generated for a GTKmm project.
Is there a good book or online tutorial out there for doing GUI development in Anjuta?
This is maybe not a complete answer, but it's too large to post as a comment. I use Anjuta fairly regularly, but I share your feeling about the missing documentation (which is, by the way, not unique to Anjuta). I appreciate Anjuta (and Glade) very much, so don't take the following as criticism of either program.
I would recommend you consider using PyGTK for GUI creation. It is a lot more productive. You can design the GUI in Glade - exactly the same way you would do for C/C++ - and then implement the code in Python, which you can also edit and manage from Anjuta. There are plenty of code examples, for example on the nullege code search engine.
About the workflow in Anjuta (for C/C++): it is based mainly on the Autotools system, so you should really read up a little on make, Makefiles, and related tools. Although in principle Anjuta manages this for you, you will sooner or later hit a problem, and some knowledge of Autotools will take you a long way (see also this tutorial or this one; this slide series is interesting, probably because it is more graphical, and there are even some video tutorials, like this one).
There is no real necessity to use Glade from inside Anjuta. In fact, Glade has gone through a long process of distancing itself from 'code generation'; it now only produces an XML description of the UI and can be run separately. I find the screen space left for Glade inside Anjuta insufficient for comfortable work anyway.
So, in conclusion: If you mainly need a GUI, consider Python + Gtk. If you do need C or C++, Anjuta is a great IDE, but look at Gtk Development examples (like this one). Following those, the use of Anjuta should be a lot clearer.
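As a concrete example of the kind of thing the documentation makes hard to find, here is a minimal gtkmm 3 sketch that creates a main window and sets its title entirely in code, independent of whatever Glade/Anjuta generate (the application id and title are placeholders):

// main.cc - build with: g++ main.cc -o demo `pkg-config --cflags --libs gtkmm-3.0`
#include <gtkmm/application.h>
#include <gtkmm/window.h>

int main(int argc, char* argv[])
{
    auto app = Gtk::Application::create(argc, argv, "org.example.demo");

    Gtk::Window window;
    window.set_title("My Application");   // changes the main window title
    window.set_default_size(400, 300);

    return app->run(window);
}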
EDIT:
"Very useful answer. I have some underlying legacy code that has to be C++. Is there a way to mix Python and C++ in Anjuta, or do you know of any guideposts or tutorials for such?"
You can open a C++ project in Anjuta - maybe even import your legacy code directly as a Makefile project. You can also add new files to your C/C++ project and create them as Python files. I've never tried to do that, though, and I'm not sure how Anjuta would treat them, for example, in the Makefile(s). I don't have large projects mixing languages at the moment, but for small projects I like 'Geany', because it doesn't get in the way. You do have to maintain the Makefiles manually.
I have native (unmanaged) C++ DLLs that are being wrapped by a single C++/CLI DLL (linking through the .lib files). These unmanaged C++ DLLs have quite a few classes with a ton of methods and a ton of const data (e.g. strings, hex values, etc.) defined in the included headers.
The C++/CLI wrapper DLL is only a wrapping and marshalling layer for the native DLLs, yet its binary size is as big as the native DLL.
I believe this is causing me to hit the hardcoded limit which throws the exception when it is being loaded by a C# application:
System.TypeLoadException: Internal limitation: too many fields
The C# application will never use the fields defined in the headers for the native DLLs.
I was able to alleviate this issue by enabling string pooling (shaving off a few MB), but it feels like a hack.
Why is a simple wrapper of a DLL the same size as that DLL? Is there a way to mark the const data so that the C# application won't load it?
You are falling into a pretty common trap: the C++/CLI compiler works too well. It is capable of compiling any C++03-compatible native C++ code into IL when #pragma managed or /clr is in effect. It works well at runtime too; the IL gets just-in-time compiled by the jitter to machine code, just like regular managed programs.
That's the good news. The bad news is that this code does not execute like managed code. It doesn't get verified and it doesn't get any garbage-collector love. It also doesn't run as efficiently as regularly compiled C++ code; you miss out on the extra time the C++ optimizer has available to generate the absolute best possible machine code.
And then there's the one restriction that made your program bomb: any global variables and free functions are compiled into members of the hidden <Module> class. That's required because the CLR doesn't support globals. Members of a managed class get a metadata token, a number that uniquely identifies them in the metadata tables. A token is a 32-bit value with the low 16 bits used to number the members. Kaboom when you create a <Module> class with more than 65535 members.
Clearly this is all quite undesirable. You'll need to pay more attention to which code gets compiled to IL and which code gets compiled to machine code. Your native C++ source files should be compiled without the /clr option in effect: Shift+click to select those files and set the option. Where necessary, use #pragma managed and #pragma unmanaged to switch the compiler back and forth within a single source file, as in the sketch below.
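A minimal sketch of that pragma switch inside one /clr-compiled file; the function and class names are illustrative, not from the question:

#pragma managed(push, off)          // from here on: native machine code, no IL

double heavy_native_math(double x)
{
    return x * x;                   // gets the full native optimizer treatment
}

#pragma managed(pop)                // back to IL / managed compilation

public ref class MathWrapper
{
public:
    static double Square(double x) { return heavy_native_math(x); }
};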
Why is a simple wrapper of a DLL the same size as that DLL? Is there a way to mark the const data so that the C# application won't load it?
This is typically because you're compiling the entire project with /CLR.
If you're careful to include only the absolute bare minimum into the .cpp files that are compiled with /CLR, and to compile with /CLR only the .cpp files that actually contain managed classes, wrapper projects tend to be far simpler and smaller. The main issue is that any header used by a /CLR-compiled .cpp file creates proxy types for all of the C++ types it declares, which can explode into a huge number of fields and types in the assembly.
Using the PIMPL idiom to "hide" the native code behind an opaque pointer can also dramatically shrink the number of types exposed to the managed portion of the assembly, as it allows you not to include the main native headers in the managed code.
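A sketch of that idea in the C++/CLI layer, with purely illustrative names; the point is that the wrapper header forward-declares the native class instead of pulling in the heavy native headers:

// ---- EngineWrapper.h : the only header the rest of the managed code includes ----
class NativeEngine;                      // opaque forward declaration, no native headers here

public ref class EngineWrapper
{
public:
    EngineWrapper();
    ~EngineWrapper();                    // surfaces as Dispose() to C# callers
    int Process(int input);
private:
    NativeEngine* m_impl;                // opaque pointer to the native side
};

// ---- EngineWrapper.cpp : the one /clr file that sees the native definition ----
// (in the real project this definition comes from the native library's header)
class NativeEngine
{
public:
    int Process(int input) { return input + 1; }   // stand-in for the real native work
};

EngineWrapper::EngineWrapper() : m_impl(new NativeEngine()) {}
EngineWrapper::~EngineWrapper() { delete m_impl; m_impl = nullptr; }
int EngineWrapper::Process(int input) { return m_impl->Process(input); }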
I've been using VS2010 Express, writing in C#, which I figured was a good middle step towards object-oriented languages. One thing that has made my code somewhat repetitive is that C# does not allow multiple class inheritance, i.e. I can't say class A inherits from both class B and class C.
C# is great because you can quickly and easily get windows with buttons, textboxes, and dials up and running. This is not available for C++ in the Express version, since the MFC libraries are not included.
Now, I have thought of just designing a C# front end which saves parameters to a file, then executing a C++ program which reads the file, runs, and saves a results file that I open with another (or the same) C# executable to read and play around with the results. But that would be cumbersome, always re-running the whole sequence whenever you want to change something. Not to mention debugging, which would probably require two instances running.
Reading the Visual Studio 2012 Express for Windows Desktop announcement, it states: "You can also combine C++, C#, and Visual Basic projects into a single solution, making it easy to write a single application using any of the available languages." http://blogs.msdn.com/b/visualstudio/archive/2012/09/12/visual-studio-express-2012-for-windows-desktop-is-here.aspx?PageIndex=3
Now I would be happy with that; after all, I don't expect or need, at this stage, to do any whiz-bang stuff, i.e. special button functionality/design, which is easier and provided by MFC in C++.
My question is: has anyone tried this with Visual Studio 2012 Express for Windows Desktop on Windows 7? That is, can you combine a C++ project and a C# project that interact, and trace the code from one project to the other when debugging, for example? Are there any special restrictions? If it only combines executables it's not much use, but I expect it's more than that; how much more? For example, can an object designed in C# instantiate an object designed in C++ and pass it references to other objects, such as classes holding inputs, outputs, or data, which are processed in the C++ class and still accessible in the C# code to display results, etc.?
I am asking this before downloading the new Express version because I expect it will set me back a couple of months, since going from C# to C++ is, I would think, like going from Visual Basic to C. I wouldn't want to go to all that trouble (I don't really mind, but it would be a huge step back) only to find out I can't "seamlessly" integrate a C# front end with a C++ processing solution.
You have three options for interfacing C++ with C#:
P/Invoke: you specify function signatures in C# and give them an attribute specifying which DLL they reside in. This is fairly painful if you need to pass around any complex types at all.
COM: A C++ DLL would implement a COM object called by the C# code.
C++/CLI: allows mixing managed and unmanaged C++ code in a single C++ project. It is very nice for interfacing with other libraries but, in VS 2010 at least, lacks helpful features such as IntelliSense. If you really want to go this route, I would create three projects: your core C++ code as a static library, a C++/CLI DLL to wrap it, and your C# application (see the sketch below).
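A minimal sketch of the C++/CLI "glue" DLL from that third option; ComputeResult() and its declaration stand in for whatever your native static library actually exports:

#include <string>
#include <msclr/marshal_cppstd.h>        // marshal_as<> helpers that ship with Visual C++

// In the real project this declaration comes from the native static library's header:
double ComputeResult(const std::string& parameters);

public ref class CoreBridge
{
public:
    // Seen from C# as: double Run(string parameters)
    static double Run(System::String^ parameters)
    {
        std::string native = msclr::interop::marshal_as<std::string>(parameters);
        return ComputeResult(native);    // call into the native static library
    }
};

The C# project then just adds a reference to the C++/CLI DLL and calls CoreBridge.Run() like any other managed method.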
I have a Visual C++ program which performs image matching using OpenCV. I am looking to run the exe on a Linux server, but I don't know how to compile Visual C++ code on Linux.
Can anyone please help me in this regard?
If you did things smartly while writing the C++ code in MSVC, you isolated all platform-dependent code (i.e., Microsoft extensions to C++ and uses of Windows-only libraries) from the rest right from the start, and know exactly where to do the modifications to make it run on Linux as well.
Unfortunately, your question hints at this being your first attempt at cross-platform coding, and in that case, you probably littered Microsoft-isms all over your code, and have to pick through them one by one. Start the compiler, have a look at its error messages, and go from there. Good luck, it will be a pain, but also a very valuable lesson for your next project.
(I'm not finger-pointing at MSVC here. The very same is true for people who litter their code with GNU-isms and then want to have it compile on MSVC...)
The usual construct looks like this:
#if defined( _MSC_VER )
// Microsoft version
#elif defined( __GNUC__ )
// GCC version
#else
#error Platform / compiler not supported.
#endif
Edit: In case it is not obvious, the idea is to keep the ifdef'ed code above to an absolute minimum. Use typedefs, forwarding functions (e.g. a log() function that uses either Unix or Windows logging), or, if all else fails, macros. Don't scatter the above all over the code; isolate it in a few header/implementation files kept in a separate source folder, as in the sketch below.
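A sketch of one such forwarding function; the name and the chosen backends are illustrative, not prescribed:

// platform_log.h
#pragma once
#include <string>
void log_message(const std::string& text);

// platform_log.cpp
#if defined( _MSC_VER )
  #include <windows.h>
  void log_message(const std::string& text)
  {
      OutputDebugStringA((text + "\n").c_str());   // Windows: debugger output
  }
#elif defined( __GNUC__ )
  #include <syslog.h>
  void log_message(const std::string& text)
  {
      syslog(LOG_INFO, "%s", text.c_str());        // Linux: syslog
  }
#else
  #error Platform / compiler not supported.
#endif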
You will also want to familiarize yourself with Makefiles (shameless plug: Makefile tutorial) or CMake, because MSVC project files don't work on Linux (obviously).
There's also winelib. Point your build system at winegcc/wineg++ as the compiler and go for it; it can compile a fairly large subset of Windows programs. This is a good option if all you need is to get one or two programs to work.
I've never been a big fan of MFC, but that's not really the point. I read that Microsoft is due to release a new version of MFC in 2010 and it really struck me as odd - I thought MFC was dead (no ill intention, I really did).
Is MFC used for new development? If so, what's the benefit? I can't imagine it having any advantage over something such as C# (or even just C++ using the Win32 API, for that matter).
There is a ton of code out there using MFC. I see these questions all the time: is this still used, is that still used? The answer is yes. I work in a very large organization which still employs hundreds of people who write COBOL. If something has ever been used in the enterprise, it will continue to be used until there is no more hardware to support it; then some company will pay someone to write an emulator so the old code will still work.
The Navy still uses ships whose computers have magnetic-core memory, and I'm sure they have people to work on them. Technology, once created, can never not be supported. It's a bit of a deus ex machina situation: large organizations aren't completely sure what their systems do, and they have such an overriding fear of bringing the enterprise to its knees that they have no desire to try out your newfangled technologies (BTW, we pay IBM for best-effort support on OS/2).
Also, MFC is a perfectly acceptable solution for Windows development, given that it is an object model which wraps the system API, which is pretty much all most people get out of .NET.
As an addendum, and since this question is up for a bounty, here is a quote from Microsoft regarding MFC in VS 11:
In every release we need to balance our investment across the various areas of the product. However, we still believe that MFC is the most fully-featured library for building native desktop applications. We are fully committed to supporting and maintaining MFC at a high level of quality. Here’s a short list of some of the issues that we fixed in MFC for Visual Studio 11:
Here is the link if you want to read the full post
Coolness is not a factor in choosing the technology for a new system. Yes, if you are a student or just want to play around, you choose whatever you want.
But in the real world, each technology has advantages and drawbacks. A year ago one of the teams started a new project, and it was decided it would be done in MFC.
The reason is very simple: they have to use the Windows API a lot for low-level operations with the printer, Internet Explorer, and God knows what else.
C# was not even in the game; the decision was between MFC and Qt. Both had the needed functionality and both could easily integrate the low-level work; the only difference was that some team members already had MFC experience, so they didn't have to waste time and money on training.
Let's suppose they had chosen C# and WPF:
-1: You have to wrap all native C++ and ASM code in a DLL (ouch, this can be painful; instead of coding you write wrappers).
-1: You probably need two teams now, one for the UI and one for the WinAPI stuff. It is very unlikely you'll find many people able to write both C# and WinAPI code. Agreed, either way you need someone to make the interface pretty (programmers usually suck at this, and they cost more), but at least with C++-only code there is no wait time between two teams: need a UI modification? No problem, I don't have to wait for the UI designer; he can make it pretty later.
+1: You can write the UI code in C# and WPF. Let's say the UI development is faster, but the UI is only a quarter of the project, so the total gain is probably quite small.
-1: Performance degradation: for every small operation you can't do in C#, you call an external DLL (this is a minor issue, since the program runs on quad-core machines with 8 GB of RAM).
So, in conclusion: MFC is still used for new development because requirements and costs decide the technology for a project, and it just so happens that MFC is the best choice in some cases.
MFC is still used for some new development, and a lot of maintenance development (including inside of Microsoft).
While it can be minutely slower than using the Win32 API directly, the performance loss really is tiny, rarely as much as a whole percent. Using .NET, the performance loss is considerably greater (in my testing, rarely less than 10%, with 20-30% being typical, and higher still for heavy computation). Just for example, I have a program that does eigenvector/eigenvalue computation on fairly large arrays. My original version using C++ and MFC runs one test case in just under a minute on our standard test machine. Some of my coworkers decided it would be cool to re-implement it in C#. Their version takes almost three minutes on the same machine (quad core, 16 gigs of RAM, so no, not "legacy" hardware). I'll admit I haven't looked at their code too closely, so maybe it could be improved, but they're decent coders, so a 3:1 improvement strikes me as unlikely.
With MFC, it's also easy to bypass the framework and use the Win32 API directly when/if you want to. With .NET, you can use P/Invoke for that, but it's quite painful by comparison.
MFC has been updated with every release of Visual Studio. It just isn't the headline feature item.
As for new development, yes. It is still used and will continue to be so (even though I, like you, prefer not to). Many organizations made the technology decision years ago and have no reason to change.
I do think you are talking about well-established shops, though; folks with more interest in maintaining and enhancing what has already been written than in staying on the cutting edge.
The release of the MFC Feature Pack (one or two years ago, IIRC) was the biggest extension of MFC in around 10 years, and it gave quite a boost to MFC development. I guess a lot of companies decided to maintain their legacy applications, push them forward, and develop new applications on that basis.
For me (as someone who has to maintain a large MFC application), the bigger problem is the decreasing development and support of (Microsoft and third-party) components rather than MFC itself. For instance, porting to 64-bit is not easy if a lot of old, unsupported, purely 32-bit ActiveX components are assembled in the application.
I did a project last year based on MFC. I'm not sure why MFC was chosen, but it was adequate for making a virtual 3D graphical user interface (a building management security system) with a 10 frames-per-second refresh rate that runs efficiently on Win32-based PCs dating back to the mid-1990s. The executable (which requires only core Win32 system DLLs) is less than 400K; not an easy accomplishment with modern tools.
There are advantages to staying away from managed code (maybe you're writing a driver UI, or doing COM).
That and there's tons of MFC code out there. Maybe you work for Company X, and need to use one of the zillion DLLs they've been writing over the last dozen years.
I can think of one commercial software title that benefits from using MFC over C#: Wwise[1]. Wwise is both an authoring tool and a sound engine. C++ is an obvious choice for the sound engine, so it makes sense to write the authoring tool in C++ as well. They could have built the authoring tool in C# and the sound engine in C++, but if they're debugging a problem with the sound engine that's reproducible through the Wwise authoring tool, it's easier for them to see the whole call stack at once.
I think there are ways of debugging a mixed call stack nowadays, but maybe that wasn't available when they first made Wwise? In any case, using MFC ensured that they wouldn't need a solution to the problem of mixed call stacks. The call stack just works.
[1]Wwise is built on MFC: https://www.audiokinetic.com/fr/library/edge/?source=SDK&id=plugin_frontend_windows.html