I really do not understand what the $ in these method names means; I'm a beginner in programming.
For example:
StartActivity.access$202(StartActivity.this, 0);
or
StartActivity.access$402(StartActivity.this, true ^ StartActivity.this.gl_bShowConfig);
The $ character is meant to be used only in mechanically generated source code or, rarely, to access pre-existing names on legacy systems. Names like access$202 are synthetic accessor methods that the Java compiler generates so an inner class can reach a private member of its enclosing class; you see them in decompiled code, not in hand-written source.
See Java specs: https://docs.oracle.com/javase/specs/jls/se7/html/jls-3.html#jls-3.8
The title says it all. But are there too many files to be replaced, and is there a risk? What I mean is, there are files like d3d11.dll. Could I replace them with something like d3d12.dll?
When code is compiled it uses headers and usually links against import libraries, which refer to functions inside the DLL. When the game loads, the DLL is mapped into the executable's address space so that the program can use the features in the DLL.
So if the game calls something like D3D11_DrawTriangles, it will end up calling that feature in d3d11.dll. Dropping in the DX12 DLL won't work because the expected function is no longer there (and besides, the executable would still be looking for the 11 DLL - it wouldn't even load).
Upgrading from DX11 to DX12 is a major undertaking; the graphics APIs are very different.
Put another way: It's like someone dropped a Fiat engine into your Volvo. Would it work? How much effort would it be to rewire all the pipes and electronics to make it work?
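To make the "expected function is no longer there" point concrete, here is a small sketch - not the game's code, just an illustration - that asks each DLL for an export by name. D3D11CreateDevice is a real export of d3d11.dll; d3d12.dll does not provide it (it exports D3D12CreateDevice instead), so the lookup comes back empty:

#include <windows.h>
#include <cstdio>

int main()
{
    // Load both system DLLs dynamically (an exe built against d3d11.lib
    // would normally import these functions implicitly at load time).
    HMODULE d3d11 = LoadLibraryW(L"d3d11.dll");
    HMODULE d3d12 = LoadLibraryW(L"d3d12.dll");

    if (d3d11)
        std::printf("d3d11.dll: D3D11CreateDevice = %p\n",            // non-null: export exists
                    (void*)GetProcAddress(d3d11, "D3D11CreateDevice"));
    if (d3d12)
        std::printf("d3d12.dll: D3D11CreateDevice = %p\n",            // null: no such export
                    (void*)GetProcAddress(d3d12, "D3D11CreateDevice"));
    return 0;
}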
I am trying to write source code in one language and have it converted to both native C++ and JS source. Ideally the converted source should be human-readable and resemble the original source as closely as possible. I was hoping Haxe could solve this problem for me: I would code in Haxe and have it convert that to the corresponding C++ and JS source. However, the examples of Haxe I'm finding seem to create the final application for you. So with C++ it will use MSBuild (or whatever compiler it finds) and create the final exe for you from the generated C++ code. Does Haxe also expose the C++ and JS source code for you to view, or is it all done internally to Haxe and not accessible? If it is accessible, is it possible to turn off the building side of Haxe so it simply creates the source code and stops?
Thanks
When you target C++, all the intermediate files are generated and kept wherever you decide to put your output (the path given using -cpp pathToOutput). The fact that you get an executable is probably because you are using the -main switch. That implies an entry point to your application, but it is not really required; you can just pass the compiler a bunch of types that you want built into your output.
For JS it is very similar: a single JS file is generated, and it only has an entry point if you used -main.
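For example, an .hxml along these lines (the paths and type names here are placeholders, not from the question) only asks for those types to be generated and, without -main, declares no executable entry point:

# build.hxml - illustrative only
-cp src
# emit the generated C++ sources into out/cpp
-cpp out/cpp
com.example.SomeType
com.example.OtherType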
Regarding the other topic, whether your Haxe code resembles the generated code: the answer is yes, but... some types (like Enum and Abstract) only exist in Haxe, so they generate code that works functionally but can look quite different. Also, Haxe has an always-on optimizer/analyzer that might mangle your code in unexpected ways (the analyzer can be disabled). I still find that it is not that difficult to recognize the Haxe source in the generated code. The JS target supports source maps, which are really useful for debugging. So in the end, Haxe doesn't do anything to obfuscate your generated code, but it doesn't do much to preserve it too strictly either.
Sorry about the short title, but I honestly can't get a better description of what is happening because I don't know enough...
Some background first: I am "converting" a multi-byte application to support Unicode. I've made the standard char/string to wchar_t/wstring changes, and my code builds without problems.
What happens is that when the application is being initialized, it hits an assert while registering the application's document templates. The code is the standard
CMultiDocTemplate* pRepDocTemplate = NULL;
pRepDocTemplate = new CMultiDocTemplate(IDR_DIAGNOSTIC_REPORT_TYPE,
    RUNTIME_CLASS(CDiagnosticReportDoc),
    RUNTIME_CLASS(CChildFrame), // custom MDI child frame
    RUNTIME_CLASS(CDiagnosticReportView));
and CDiagnosticReportView has the standard DECLARE_DYNCREATE and IMPLEMENT_DYNCREATE in the header and source.
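For reference, that pattern looks roughly like this (a sketch only, with the base class taken from the inheritance tree below):

// CDiagnosticReportView.h
class CDiagnosticReportView : public CReportViewBase
{
    DECLARE_DYNCREATE(CDiagnosticReportView)
public:
    // ...
};

// CDiagnosticReportView.cpp
IMPLEMENT_DYNCREATE(CDiagnosticReportView, CReportViewBase)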
The assert is at doctmpl.cpp line 29 (in mfc120ud.dll - so at least it is using the correct DLL), but I can't find the source code anywhere to actually see what is happening.
The inheritance tree is pretty straightforward:
CDiagnosticReportView
  \-> CReportViewBase
    \-> CXTPReportView
      \-> CView
CXTPReportView is part of a framework that we are using, provided by Codejock (Codejock Xtreme ToolkitPro). From the build pane I know that it's linking against its Unicode debug DLL (ToolkitPro1631vc120UD.dll).
Suffice it to say that in the multi-byte configuration this problem doesn't occur.
The project is configured to use the UNICODE character set (Project properties->Configuration Properties->General->Character Set).
Any help would be appreciated!
Thanks in advance!
It all had to do with the fact that I was linking against the non-Unicode build of Codejock. Even though I saw a reference to the Unicode DLL, the .lib wasn't the correct one!
Problem was solved when I opened my eyes ;)
I have a new project where I cannot use boost::format. I get a compiler error complaining that boost's override of a virtual function, ~basic_altstringbuf, lacks a "throw()". Even the most trivial attempt to use boost::format triggers it.
I have other projects where it works fine. I have verified that the new project uses the same include paths for boost and for the VC++ includes. All the projects have "Enable C++ Exceptions" set to Yes. The only explanation I can come up with is that the projects that work have some #define or some setting that disables those vile exception specs in the std:: include files. But I have no idea what or where it might be. Any ideas?
Error 1 error C2694: 'boost::io::basic_altstringbuf::~basic_altstringbuf(void)': overriding virtual function has less restrictive exception specification than base class virtual member function 'std::basic_streambuf<_Elem,_Traits>::~basic_streambuf(void) throw()'
EDIT: Corollary question: Is there a Properties item in VC++ 2012 that will cause the std:: header files to be included without exception specs - short of turning off exceptions, that is?
At the request of the original owner of the green check-mark, I am submitting this summary.
The bugs are on the Microsoft side, in the header files for the C++ standard library interfaces, and in the VC++ compiler when "Disable Language Extensions" is NOT set. The header files contain exception specifications that the standard does not call for, and when language extensions are enabled the compiler accepts invalid code. I have filed a bug report.
Boost could work around the problem in this specific case by adding seven characters to a nested include-file, i.e. "throw()" at line 65 in alt_sstream_impl.hpp. I filed a report with boost also, although I made it clear that the bug is not in their code. I am just suggesting a workaround.
All the tedious details are in the two reports linked above.
Check the preprocessor defines.
You might turn on and inspect verbose logging to see the exact flags that are passed to cl.exe
You could keep the preprocessed source and compare the version from the old (working) project with the new (failing) project.
My gut says something else is being #defined/passed using -D in the old project that is not being defined in the new project, or defined differently (think of WINVER-type macros).
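For example (from a developer command prompt; the file name is just a placeholder), something like this produces the preprocessed output you can diff between the two projects:

cl /nologo /P /C the_failing_file.cpp

This writes the_failing_file.i next to the source; /C keeps the comments so the diff stays readable.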
See new answer posted: VC++ 2012 and Boost incompatibility - `throw()` specifications in library headers
EDIT by OP, Jive Dadson - It turned out to be /Za, the switch that disables "Microsoft language extensions." It is the contention of Visual Studio that the C++ standard requires that a program shall not compile if it has a virtual function override that is less restrictive in the "throw()" category than the function it overrides. Boost has a class that derives from basic_streambuf and has a virtual destructor that lacks "throw()"; the original destructor has that evil festoon. My new project will compile boost::format if I turn MS language extensions ON.
So the question becomes, who is wrong, and how? Is it standard-compliant to put throw() on that destructor or not? Is the desired behavior (desired by me, that is) actually an "extension"? I seem to recall that MS considered some standard C++11 features to be "extensions," but I am not sure I remember correctly. Anyway, I will leave it to the boosters to decide, if they are interested. https://svn.boost.org/trac/boost/ticket/7477
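A minimal sketch of the conflict (not boost's actual code), assuming a base class whose virtual destructor carries an empty exception specification - as the VC++ 2012 headers declare for ~basic_streambuf - and a derived class whose destructor does not, as in basic_altstringbuf:

struct Base {
    virtual ~Base() throw() {}   // destructor promises not to throw
};

struct Derived : Base {
    virtual ~Derived() {}        // no throw(): a less restrictive exception
                                 // specification than the destructor it overrides
};

int main() { Derived d; return 0; }

Compiled with /Za this is rejected along the lines of the C2694 above; with Microsoft language extensions enabled it goes through, which matches the behavior described in the EDIT.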
I have a game which uses std::wstring as its basic string type in thousands of places, and it also performs operations with wchar_t and its functions: wcsicmp(), wcslen(), vsprintf(), etc.
The problem is that wstring is not supported in r5c (the latest NDK at the time of this writing).
I can't change the code to use std::string because of internationalization, and I would be breaking the game engine, which is used by many games...
What options do I have?
1 - Replace string and wstring with my own string classes
This would give me better platform independence, but it is ridiculous to reinvent the wheel.
I've already started with a COW implementation of strings. I need it to be COW because I use them as keys in hash_maps.
This is of course lots of work and error prone ... but it seems it is something I can do.
2 - Try to fix the NDK by recompiling STLport with my own implementations of the wide-char string functions of the C standard library (wcslen, mbstowcs ...)
This would be the preferable way ... but I have no idea how to do it :(
How do I replace a function (let's say wcslen) in libstdc++.a or libstlport_static.a? (Not sure where they are :()
I'm also not sure which functions I need to reimplement; I know wcslen is not working, so I guess it might be all of them... (A sketch of what one such replacement could look like is at the end of this question.)
3 - Do you have any other ideas?
I can't wait for an official fix for this, and I will have to go with option #1 if I can't figure out how to do #2.
I've read somewhere that if you target 2.3 you can use wstrings, but I ought to target Android 2.1.
PS: Forgot to say I need to use STL of course, but no RTTI and I can live without exceptions.
Thanks in advance!
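To make option #2 concrete, here is the kind of drop-in replacement I have in mind - a sketch only; whether it can simply be compiled into the engine (or into a rebuilt STLport) without clashing with the NDK's own declarations is exactly the part I don't know how to do:

#include <stddef.h>   // size_t

// Plain replacement for the broken wcslen: count wchar_t units until the terminator.
extern "C" size_t wcslen(const wchar_t* s)
{
    const wchar_t* p = s;
    while (*p)
        ++p;
    return (size_t)(p - s);
}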
Try out CrystaX's NDK. It has had STL support since long before the official Google one. The current version (r5), which is based on the official NDK r5, is still beta 3, but it does have wchar_t support.
http://www.crystax.net/android/ndk-r5.php
I'm suffering from the same problem as you, but my only other thought is to load the strings via JNI (as jstring in native land), then convert them to UTF-8 as necessary. Take a look at the available JNI string functions here:
http://download.oracle.com/javase/1.5.0/docs/guide/jni/spec/functions.html#string_operations
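For what it's worth, the conversion I mean looks roughly like this (a sketch; the helper name is mine, not from any real project):

#include <jni.h>
#include <string>

// Copy a Java string into a native std::string as (modified) UTF-8.
std::string toNativeString(JNIEnv* env, jstring jstr)
{
    const char* utf8 = env->GetStringUTFChars(jstr, NULL);
    std::string result(utf8 ? utf8 : "");
    if (utf8)
        env->ReleaseStringUTFChars(jstr, utf8);   // always release the JNI buffer
    return result;
}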
Qt provides an excellent copy-on-write, international-friendly string implementation, QString, that is LGPLed.
You could, in theory, extract it from the Qt source and use it in your own project. You will find the QString implementation in src/corelib/tools/qstring.h and .cpp in a Qt source download. You would also need the QChar, QByteArray, QAtomic, and QNamespace includes/classes (all under the corelib folder), and you should define QT_NO_STL_WCHAR when compiling. (For this I would compile by hand or use my own script/Makefile.) Not simple, but once you get it up and running your life will be a lot simpler. It's better than reinventing the wheel, because it comes with loads of convenience functions and features.
Rather than stripping out just QString, you could also just use the QtCore module as a whole. See the android-lighthouse project for a Qt port to Android. (Also, it might be better to get your sources from there than from the above "vanilla" link, regardless of what you do.)
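As a rough idea of what you get once QString (or QtCore as a whole) is in place - a sketch only, assuming the headers end up available in your build:

#include <QString>
#include <QHash>

int main()
{
    QHash<QString, int> table;                        // QString works as a hash key out of the box
    QString key = QString::fromUtf8("gr\xC3\xBCn");   // "grün" as UTF-8 bytes, no wchar_t involved
    table.insert(key, 1);
    QString copy = key;                               // implicitly shared (copy-on-write) until modified
    return table.value(copy);
}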