Xamarin Unified API: What about System.Int32 etc. in Shared Code?

I understand that, for example, the usage of int (System.Int32 in .NET) is being replaced by the usage of nint, which enables the compiler to compile the code for either 32-bit or 64-bit.
But what about code that is shared with other applications, say an Android app? As far as I am aware, nint etc. are only available for iOS and OS X, so one must use int in that shared code again.
A concrete example would be a PCL that is linked into a Xamarin.iOS app.
What happens to
int Add(int one, int two)
{
    return one + two;
}
from the PCL when it is used within the iOS app?

int, which in .NET is System.Int32, is being replaced by nint
That's incorrect. int remains, and will always be, a 32-bit integer.
What happens is that some of Apple's APIs (both iOS and OS X) use types like NSInteger. This type is 32 bits on 32-bit processors and 64 bits on 64-bit processors.
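For reference, this is roughly how Apple's own headers define it (a paraphrase of NSObjCRuntime.h; the exact preprocessor conditions vary between SDK versions):

/* Paraphrased from Apple's NSObjCRuntime.h: NSInteger follows the
   pointer width of the target architecture. */
#if __LP64__
typedef long NSInteger;             /* 8 bytes on arm64 / x86_64 */
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;              /* 4 bytes on armv7 / i386 */
typedef unsigned int NSUInteger;
#endif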
No such type exists in .NET; the closest is IntPtr, which is not an easy type to use in general.
At the time the original MonoTouch was written, the world (at least the mobile world) was 32-bit, and the APIs were bound using System.Int32. That worked well for years but, eventually, the 64-bit world became mobile.
This is why Xamarin introduced nint (along with nuint and nfloat). These types vary with the CPU architecture, which lets us (and you) bind the APIs just as Apple defined them (an int stays an int, but an NSInteger becomes an nint).
As for PCLs (or shared code), you should avoid those types. They are not available on all platforms (and even if the source were copied, you would be missing the JIT/AOT optimizations on them). In other words, the only place they should be used is in your platform-specific code (iOS and/or OS X).

Now I get it. The point is that everything that touches native APIs needs to be prepared (by the compiler) to run in 32-bit or 64-bit mode. That is what the new types like nint are made for.
Everything that doesn't touch native code can still use the regular .NET type system, including Int32. Which answers the question of how the sample code above can work in a PCL: since that PCL code will never depend on native APIs, it's okay.

Related

Wrapper for converting Winapi and MFC code to equivalent linux versions

I'm working on a wrapper to compile C++ code containing MFC and Windows API calls into its Linux equivalents.
The C++ code has the following characteristics:
No GUI component is present.
It has a maximum of about 10 MFC classes, used mostly for string parsing.
It has lots of Windows-specific types such as HINSTANCE, LPCTSTR, and so on.
I'm not allowed to compile using Wine on Linux. Until now I've come across wxWidgets; it seems quite vast, and I doubt I'll be needing all its components.
Please share your ideas on creating the wrapper. Is there any specific code already available that does this task, or part of it?
There is no automatic, or even semi-automatic, way to convert an MFC application to wxWidgets. It is, of course, perfectly possible to do, and many, many projects have gone through this transformation, but you just have to do it.
See the MFC page of the wxWiki for some starting points.

Why is PCTSTR not defined but LPCTSTR defined?

I have been assigned to update old code written in MSVC++ 6. I have been getting an unknown-identifier error for PCTSTR: it is not defined even though I included tchar.h. From previous experience I know there is an LPTSTR but no PCTSTR.
I grepped the C:\Program Files\Microsoft Visual Studio\VC98\Include\ folder and did not find a definition of PCTSTR. To my surprise, when I searched the Windows SDK folder [C:\Program Files\Microsoft SDK] there was no definition of PCTSTR there either, but it was used in one of the samples [C:\Program Files\Microsoft SDK\Samples\winui\Resource\Iconpro*]. So I am guessing this may just be a relic from the 16-bit Windows API, but I cannot find anything on Google.
Does anyone know what PCTSTR is for? Since this is old code, I am guessing it compiled at some point. Any ideas how to make this compile? [I changed it to LPCTSTR and it compiled; I want to know if there are other ways than changing the definition name.]
The LP in LPCTSTR means Long Pointer. It is an artifact from the Windows 3 days, a 16-bit operating system. 16-bit code had several memory models to deal with addressing more than 65536 bytes of memory when you only have 16-bit CPU registers. A short pointer used the default data segment register value and a 16-bit offset. A long pointer was 32 bits: 16 bits to load a segment register and 16 bits for the offset.
The T in LPCTSTR means TCHAR, a typedef for char or wchar_t, depending on the presence of the UNICODE macro.
Which makes PCTSTR an anachronism, humans-and-dinosaurs movie style. There was never a 16-bit Unicode version of Windows, and 32-bit versions of Windows always use 32-bit pointers, so it sounds merely like a mistake. Enshrined, though: the current version of winnt.h does have a typedef for it, making it the same as LPCTSTR. And it is used in only one place, the stralign.h header, with a strange function named TSTR_ALIGNED_STACK_COPY; however, only in a comment.
A mistake. Your workaround was the right choice.
In the Windows SDK v7.0a I have on my machine, WinNT.h contains two different typedefs for PCTSTR, depending on whether UNICODE is defined. In both cases, LPCTSTR is defined the same way, so the two are equivalent nowadays.
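For reference, a sketch of the relevant typedef chain (a paraphrase; the literal declarations in winnt.h vary between SDK versions):

/* Paraphrased from a modern winnt.h; not the literal declarations. */
#ifdef UNICODE
typedef wchar_t TCHAR;
typedef const wchar_t *PCTSTR, *LPCTSTR;   /* same type, two names */
#else
typedef char TCHAR;
typedef const char *PCTSTR, *LPCTSTR;
#endif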

Is it that hard to port an application to 64 bits?

I was surprised to read that Adobe discontinued the 64-bit version of Flash for Linux; there is a new 32-bit version, and Adobe advises users to use the 32-bit version of Firefox instead.
I was wondering, as I haven't had to do it yet: is it that hard to port an application to 64-bit? Besides the library changes and the recompilation (settings in the Makefile), what makes the port difficult? (Flash being one example.)
As noted in an Adobe blog post, Flash's ActionScript engine has a JIT compiler that compiles the ActionScript code into native code.
x64 has a very different instruction set from x86. Therefore, making the JIT compiler generate x64 code is a non-trivial task, and is far more complicated than just making all the words 64 bits. :-)
The real kicker when porting apps to 64-bit is that every OS seems to treat the primitive types as it pleases. For example, under most Linux environments a long is 4 bytes on a 32-bit system and 8 bytes on a 64-bit system, while an int stays 4 bytes on both (the LP64 model). Under Windows the story is different: both int and long stay 4 bytes, and only pointers (and long long) widen to 8 bytes (the LLP64 model).
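A quick way to see which data model you are on (a minimal sketch; build the same file with -m32 and -m64 under gcc/clang and compare the output):

#include <cstdio>

int main()
{
    // On LP64 Linux, long and void* widen to 8 bytes in a 64-bit
    // build; on LLP64 Windows, only the pointer does.
    std::printf("int:   %zu\n", sizeof(int));
    std::printf("long:  %zu\n", sizeof(long));
    std::printf("void*: %zu\n", sizeof(void *));
    return 0;
}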
That said, I have ported a medium-sized project at work from 32-bit to 64-bit Linux (about 25,000 lines of code) and only had to make changes in the assembly code (GASM), which made several faulty assumptions about datatypes being 4 bytes long. Other than that I had no problems, which suggests that, provided you paid strict attention to your data types when first developing, porting should be seamless, perhaps only requiring certain compile switches to be changed (like -fpic). There were a few really bizarre corner cases that came up in my porting experience, but I think they were mostly due to undefined behaviour in some of the GASM rather than the porting itself.
If you use a lot of ints and floats, it can be amazingly complex to get it to work suitably, especially if it is a networked app.
It took over 2 years to port xMule to 64-bit, and I don't believe its parent project, eMule, has a 64-bit version at all.
Ideally it should be just a recompilation; in reality it takes a significant amount of effort. Even if it is simple, there still has to be a full sweep by the QA team to prove it works, and that always takes a while.
The obvious problems, I guess, would be variable sizes (e.g. long is 64 bits on most 64-bit compilers), which messes up anything that relies on size-related operations such as bit shifting and some pointer arithmetic. I think Adobe just can't be bothered to scan through and ensure cross-compatibility, especially when 90%+ of browser use is on 32-bit versions. I know Flash has never worked in 64-bit IE, and even 64-bit Windows 7 defaults to the 32-bit version.
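A classic example of the kind of code that breaks (an illustrative sketch, not taken from Flash):

#include <cstdio>
#include <cstdint>

int main()
{
    int value = 42;
    int *p = &value;

    // Broken once pointers are 64-bit: the cast silently drops
    // the high 32 bits of the address.
    // unsigned int addr = (unsigned int)p;

    // Portable: uintptr_t is guaranteed wide enough for a pointer.
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
    std::printf("%d\n", *reinterpret_cast<int *>(addr));
    return 0;
}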
There is a lot of information on it here if you're interested:
http://www.viva64.com/content/articles/64-bit-development/?f=20_issues_of_porting_C++_code_on_the_64-bit_platform.html&lang=en&content=64-bit-development
The coding part of porting to 64-bit generally isn't that hard, but it can require some time and a lot of hair-pulling over builds, libs, etc. The real problem, especially for a widely deployed project like Flash, is going through proper testing coverage for the many code paths and platforms. 64-bit is a horizontal feature, so it can potentially break everything, and therefore everything needs to be tested.
For Flash on Linux in particular, it's probably more of a cost-benefit issue: is catering to the percentage of users who actually use Linux and 64-bit worth the development costs for Adobe? Probably not, at this point.

Direct3D 11 effect files deprecated?

I've been playing around with Direct3D 11 a little bit lately and have been frustrated by the lack of documentation on the basics of the API (such as simple geometry rendering). One of the points of confusion brought on by the sparse documentation is the (apparent) move away from the use of effects for shaders.
In D3D11 all of the effect (.fx) support has been removed from the D3DX libraries and buried away in a hard-to-find (sparsely documented, of course) shared-source library. None of the included examples use it, preferring instead to compile HLSL files directly. All of this says to me that Microsoft is trying to get people to stop using the effect file format. Is that true? Is there any documentation of any kind that states that? I'm fine doing it either way, but for years now they've been promoting the .fx format, so it seems odd that they would suddenly decide to drop it.
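(To be concrete, by "compile HLSL files directly" I mean roughly the following. A minimal sketch: shader.hlsl and VSMain are placeholder names, and error handling is kept to a bare minimum.)

#include <d3d11.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Compile one HLSL entry point directly, no .fx file or effect
// framework involved, then create the shader object from the blob.
HRESULT CreateVertexShaderFromHlsl(ID3D11Device *device,
                                   ID3D11VertexShader **outVs)
{
    ID3DBlob *code = nullptr;
    ID3DBlob *errors = nullptr;
    HRESULT hr = D3DCompileFromFile(L"shader.hlsl", nullptr, nullptr,
                                    "VSMain", "vs_5_0", 0, 0,
                                    &code, &errors);
    if (FAILED(hr)) {
        if (errors) errors->Release();  // compiler messages live here
        return hr;
    }
    hr = device->CreateVertexShader(code->GetBufferPointer(),
                                    code->GetBufferSize(), nullptr, outVs);
    code->Release();
    return hr;
}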
Many professional game and graphics developers don't use the effects interfaces in Direct3D, and many of the leading game engines do not use them either. Instead, custom material/effects subsystems are built on top of the lower-level shader and graphics state management facilities. This allows developers to do things like target both Direct3D and OpenGL through a common asset management pipeline.
The main issue is that the fx_5_0 profile, which is needed to compile Effects 11 shaders with the required metadata, is deprecated by the HLSL compiler team. The runtime is shared-source, but the compiler is not. The latest D3DCompiler (#47) emits a warning about this. fx_5_0 was never updated for some newer language aspects in DirectX 11.1 and 11.2, but works "as is" for Direct3D 11.
The second issue is that you need the D3DCompile APIs at runtime to make use of Effects 11. Since D3DCompile was 'development only' for Windows Store apps on Windows 8.0 and Windows Phone 8.0, it wasn't an option there. It is technically possible to use Effects 11 today with Windows Store apps for Windows 8.1 and Windows Phone 8.1, since D3DCompile #47 is part of the OS and includes the 'deprecated/as-is' compiler support for fx_5_0, but this use is not encouraged.
The bulk of the DirectX SDK samples and all the Windows Store samples avoid use of Effects 11. I did post a few Win32 desktop samples that use it to GitHub.
UPDATE: With the release of the legacy Microsoft.DXSDK.D3DX NuGet repacking of the original D3DX #43, I was able to update the rest of the legacy DirectX SDK samples so they can build with the modern Windows SDK and not require the legacy DirectX SDK to be installed. Most of the Direct3D 9 and Direct3D 10 samples, and a few Direct3D 11 samples, all use legacy Effects. See GitHub.
So in short, yes you are discouraged from using it but you still can at the moment if you can live with the disclaimers.
I'm in the exact same position, and after Googling like crazy for even the simplest sample that uses D3DX11CreateEffectFromMemory, I've also come to the conclusion that .fx file support isn't their highest priority. Although it is strange that they've added the EffectGroup concept, which is new to 11, if they don't want us to use it.
I've played a little with the new reflection API, and it looks like it will be pretty easy to hack together your own functions for setting variables etc., in essence creating your own Effect class. The next step is going to be to see what support there is for creating render-state blocks via the API. Being able to edit those directly in the .fx file was very nice, so hopefully something like that still exists (or, at worst, I can rip that part from the Effect11 code).
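(To give an idea, inspecting a compiled shader's constant buffers looks roughly like this. A minimal sketch; bytecode and size are assumed to come from a prior D3DCompile call.)

#include <d3d11.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Walk the constant buffers of a compiled shader blob; the names
// and sizes reported are what a home-grown Effect class would bind.
void DumpConstantBuffers(const void *bytecode, SIZE_T size)
{
    ID3D11ShaderReflection *reflector = nullptr;
    if (FAILED(D3DReflect(bytecode, size, IID_ID3D11ShaderReflection,
                          reinterpret_cast<void **>(&reflector))))
        return;

    D3D11_SHADER_DESC desc;
    reflector->GetDesc(&desc);
    for (UINT i = 0; i < desc.ConstantBuffers; ++i) {
        D3D11_SHADER_BUFFER_DESC cbDesc;
        reflector->GetConstantBufferByIndex(i)->GetDesc(&cbDesc);
        // cbDesc.Name and cbDesc.Size describe one cbuffer.
    }
    reflector->Release();
}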
There is an effect runtime provided as a sample in the DirectX SDK that should be able to help you to use .fx files.
Check out the directory: %DXSDK_DIR%\Samples\C++\Effects11
http://msdn.microsoft.com/en-us/library/ff476261(v=VS.85).aspx
This suggests that it can take a shader or an effect.
http://msdn.microsoft.com/en-us/library/ff476190(v=VS.85).aspx
Also, what is the difference between a shader and an effect?

Is MFC still used for new development (with any material volume)?

I've never been a big fan of MFC, but that's not really the point. I read that Microsoft is due to release a new version of MFC in 2010, and it really struck me as odd; I thought MFC was dead (no ill intention, I really did).
Is MFC used for new development? If so, what's the benefit? I couldn't imagine it having any benefit over something such as C# (or even just C++ using the Win32 API, for that matter).
There is a ton of code out there using MFC. I see these questions all the time: is this still used, is that still used? The answer is yes. I work in a very large organization which still employs hundreds of people who write in COBOL. If it has ever been used in the enterprise, it will continue to be used until there is no more hardware to support it; then some company will pay someone to write an emulator so that the old code will still work.
The Navy still uses ships with computers with magnetic-core memory, and I'm sure they have people to work on them. Technology, once created, can never not be supported. It's a bit of a case of deus ex machina, where large organizations aren't completely sure what their systems do and have such an overriding fear of bringing the enterprise to its knees that they have no desire to try out your newfangled technologies (BTW, we pay IBM for best-effort support on OS/2).
Also, MFC is a perfectly acceptable solution for Windows development, given that it is an object model which wraps the system API, which is pretty much all that most people get out of .NET.
As an addendum, and since this question is up for a bounty, here is a quote from Microsoft regarding MFC in VS 11:
In every release we need to balance our investment across the various areas of the product. However, we still believe that MFC is the most fully-featured library for building native desktop applications. We are fully committed to supporting and maintaining MFC at a high level of quality. Here’s a short list of some of the issues that we fixed in MFC for Visual Studio 11:
Here is the link if you want to read the full post
Coolness does not factor into choosing the technology for a new system. Yes, if you are a student or want to play around, you choose whatever you want.
But in the real world, each technology has advantages and drawbacks. A year ago one of the teams started a new project, and it was decided that it would be done in MFC.
The reason is very simple: they have to use the Windows API a lot for low-level operations with the printer, Internet Explorer, and God knows what else.
C# was not even in the game; the decision was made between MFC and Qt. Both had the needed functionality and both could easily integrate the low-level code; the only difference was that some team members already had MFC experience, so they didn't have to waste time and money on training.
Let's suppose they had chosen C# and WPF:
-1: You have to wrap all native C++ and ASM code in a DLL (ouch, this can be painful; instead of coding you write wrappers).
-1: You probably need two teams now, one for the UI and one for the WinAPI stuff. It is very unlikely that you'll find many people able to write both C# and WinAPI code. Agreed, either way you need someone to make the interface pretty (programmers usually suck at this, and those people cost more), but at least with C++-only code there is no more waiting between two teams: need a UI modification? No problem, I don't have to wait for the UI designer, he will make it pretty later.
+1: You can write the UI code in C# and WPF. Let's say the UI development is faster, but the UI is only 1/4 of the project, so the total gain is probably very small.
-1: Performance degradation: for every small operation you can't do in C#, you call an external DLL (this is a minor issue, since the program runs on 8 GB RAM quad cores).
So in conclusion: MFC is still used for new development because the requirements and the costs decide the technology for a project and it just so happens that MFC is the best in some cases.
MFC is still used for some new development, and a lot of maintenance development (including inside of Microsoft).
While it can be minutely slower than using the Win32 API directly, the performance loss really is tiny; rarely as much as a whole percent. Using .NET, the performance loss is considerably greater (in my testing, rarely less than 10%, with 20-30% being typical, and higher still for heavy computation). Just for example, I have a program that does eigenvector/eigenvalue computation on fairly large arrays. My original version using C++ and MFC runs one test case in just under a minute on our standard test machine. Some of my coworkers decided it would be cool to re-implement it in C#. Their version takes almost three minutes on the same machine (quad core, 16 GB of RAM, so no, not "legacy" hardware). I'll admit I haven't looked at their code too closely, so maybe it could be improved, but they're decent coders, so a 3:1 improvement strikes me as unlikely.
With MFC, it's also easy to bypass the framework and use the Win32 API directly when/if you want to. With .NET, you can use P/Invoke for that, but it's quite painful by comparison.
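(A trivial sketch of what I mean; CMyView and Nudge are hypothetical names for some CWnd-derived class and member function.)

#include <afxwin.h>

// Inside an MFC CWnd-derived class you can mix framework calls with
// raw Win32 calls on the same window through its m_hWnd handle.
void CMyView::Nudge()
{
    CRect rc;
    GetClientRect(&rc);                    // MFC wrapper call

    ::SetWindowPos(m_hWnd, nullptr, 0, 0,  // raw Win32 on the same HWND
                   rc.Width(), rc.Height(),
                   SWP_NOMOVE | SWP_NOZORDER);
}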
MFC has been updated with every release of Visual Studio. It just isn't the headline feature item.
As for new development, yes. It is still used and will continue to be so (even though I, like you, prefer not to). Many organizations made the technology decision years ago and have no reason to change.
I do think you are talking about well-established shops though, folks with more interest in maintaining / enhancing what has been written rather than stay on the cutting edge.
The release of the MFC Feature Pack (one or two years ago, iirc) was the biggest extension of MFC in around 10 years, and it gave quite a new boost to MFC development. I guess a lot of companies decided to maintain their legacy applications, push them forward, and develop new applications on their basis.
For me (as someone who has to maintain a large MFC application), the bigger problem is the decreasing development and support of (Microsoft and third-party) components rather than MFC itself. For instance, porting to 64-bit is not easy if a lot of old and unsupported pure 32-bit ActiveX components are assembled in the application.
I did a project last year based on MFC. I'm not sure why MFC was chosen, but it was adequate for making a virtual 3D graphical user interface (a building management security system) with a 10 frames-per-second refresh rate that ran efficiently on Win32-based PCs dating back to the mid-1990s. The executable (which requires only core Win32 system DLLs) is less than 400K; not an easy accomplishment with modern tools.
There are advantages to staying away from managed code (maybe you're writing a driver UI, or doing COM).
That and there's tons of MFC code out there. Maybe you work for Company X, and need to use one of the zillion DLLs they've been writing over the last dozen years.
I can think of one commercial software title that benefits from using MFC over C#: Wwise[1]. Wwise is both an authoring tool and a sound engine. C++ is an obvious choice for the sound engine, so it makes sense to write the authoring tool in C++ as well. They could have built the authoring tool in C# and the sound engine in C++, but if they're debugging a problem with the sound engine that's reproducible through the Wwise authoring tool, it's easier for them to see the whole call stack just like that.
I think there are ways of getting a mixed call stack nowadays, but maybe that wasn't there when they first made Wwise? In any case, using MFC ensured that they wouldn't need a solution to the problem of mixed call stacks. The call stack just works.
[1]Wwise is built on MFC: https://www.audiokinetic.com/fr/library/edge/?source=SDK&id=plugin_frontend_windows.html
