Why is the /Wp64 flag in Visual C++ deprecated?
cl : Command line warning D9035 :
option 'Wp64' has been deprecated and will be removed in a future release
I think that /Wp64 is deprecated mainly because compiling for a 64-bit target will catch the kinds of errors it was designed to catch (/Wp64 is only valid in 32-bit compiles). The option was added back when 64-bit targets were emerging to help people migrate their programs to 64-bit and help detect code that wasn't '64-bit clean'.
Here's an example of the kinds of problems with /Wp64 that Microsoft just isn't interested in fixing - probably rightly so (from http://connect.microsoft.com/VisualStudio/feedback/details/502281/std-vector-incompatible-with-wp64-compiler-option):
Actually, the STL isn't intentionally incompatible with /Wp64, nor is
it completely and unconditionally incompatible with /Wp64. The
underlying problem is that /Wp64 interacts extremely badly with
templates, because __w64 isn't fully integrated into the type system.
Therefore, if vector<unsigned int> is instantiated before vector<__w64 unsigned int>, then both of them will behave like vector<unsigned int>, and vice versa. On x86, SOCKET is a typedef for __w64 unsigned int. It's not obvious, but vector<unsigned int> is being instantiated
before your vector<SOCKET>, since vector<bool> is backed (in our
implementation) by vector<unsigned int>.
Previously (in VC9 and earlier), this bad interaction between /Wp64
and templates caused spurious warnings. In VC10, however, changes to
the STL have made this worse. Now, when vector::push_back() is given
an element of the vector itself, it figures out the element's index
before doing other work. That index is obtained by subtracting the
element's address from the beginning of the vector. In your repro,
this involves subtracting const SOCKET * - unsigned int *. (The latter
is unsigned int * and not SOCKET * due to the previously described
bug.) This /should/ trigger a spurious warning, saying "I'm
subtracting pointers that point to the same type on x86, but to
different types on x64". However, there is a SECOND bug here, where
/Wp64 gets really confused and thinks this is a hard error (while
adding constness to the unsigned int *).
We agree that this bogus error message is confusing. However, since
it's preceded by an un-silenceable command line deprecation warning
D9035, we believe that that should be sufficient. D9035 already says
that /Wp64 shouldn't be used (although it doesn't go on to say "this
option is super duper buggy, and completely unnecessary now").
In the STL, we could #error when /Wp64 is used. However, that would
break customers who are still compiling with /Wp64 (despite the
deprecation warning) and aren't triggering this bogus error. The STL
could also emit a warning, but the compiler is already emitting D9035.
/Wp64 on 32-bit builds is a waste of time. It is deprecated, and this deprecation makes sense. The way /Wp64 worked on 32-bit builds is it would look for a _w64 annotation on a type. This _w64 annotation would tell the compiler that, even though this type is 32-bits in 32-bit mode, it is 64-bits in 64-bit mode. This turned out to be really flakey, especially where templates are involved.
/Wp64 on 64-bit builds is extremely useful. The documentation (http://msdn.microsoft.com/en-us/library/vstudio/yt4xw8fh.aspx) claims that it is on by default in 64-bit builds, but this is not true. Compiler warnings C4311 and C4312 are only emitted if /Wp64 is explicitly set. Those two warnings indicate when a 32-bit value is put into a pointer, or vice-versa. These are very important for code correctness, and claim to be at warning level 1. I have found bugs in very widespread code that would have been stopped if the developers had turned on /Wp64 for 64-bit builds. Unfortunately, you also get the command line warning that you have observed. I know of no way to squelch this warning, and I have learned to live with it. On the bright side, if you build with warnings as errors, this command line warning doesn't turn into an error.
Because when using the 64 Bit compiler from VS2010 the compiler does the detection of 64 bit problems automatically... this switch is from back in the day when you could try to detect 64 Bit problem running the 32 Bit compiler...
See http://msdn.microsoft.com/en-us/library/yt4xw8fh%28v=VS.100%29.aspx
You could link to the deprecation warning, but couldn't go to the /Wp64 documentation?
By default, the /Wp64 compiler option is off in the Visual C++ 32-bit compiler and on in the Visual C++ 64-bit compiler.
If you regularly compile your application by using a 64-bit compiler, you can just disable /Wp64 in your 32-bit compilations because the 64-bit compiler will detect all issues.
Emphasis added
This is a complex and hard issue, but I will break it down to the best of my abilities. It comes down to when I am compiling a Rust project for ARM64 (goal is to run on rasp pi 4).
A large majority of the libraries compile (704 / 740) but it breaks during compilation when it goes to compile a zksync directory. The yagna client for golem is what I am compiling, I am using
target - target.arm-unknown-linux-musleabi
linker - arm-linux-gnueabihf-ld
I'd love to hear ideas, solutions, or what I am doing wrong so I can get this project running on ARM.
The error code I am getting is
Ok(stat.blocks_available() as u64 * stat.fragment_size())
^^^^^^^^^^^^^^^^^^^^ expected `u64`, found `u32`
among other errors, all revolving around bit differences when converting integers. This has led me to suspect usize to be the culprit as it bases its size on the CPU architecture, which would explain the ARM compilation messing it up, and not showing up until you have to handle the int (at conversion).
Let me know if there is any more information you need; I tried my best to encapsulate the problem.
stat is a Statvfs structure, the return type of Statvfs::blocks_available() is fsblkcnt_t, and the return type of Statvfs::fragment_size() is c_ulong. These two types are defined in the libc package, which is a paper-thin wrapper around the low-level C OS-calls. The types are equivalent to the C types in the OS-specific *.h files. The sizes of the types vary from platform to platform.
The library you're compiling appears to have some assumptions baked in about these sizes and their arithmetic compatibility.
Some bug reports to the library authors would be in order.
If you're willing to patch the library yourself, then you should first investigate your platform's libc package and check the definitions of all types involved in the errors you're seeing. Then fix the arithmetic expressions so the types are compatible and they don't overflow.
I don't think Rust's type system is smart enough to support one body of code that handles all potential combinations of type sizes both safely (without overflow or truncation) and efficiently (without needlessly casting everything up to u128). It might be that platform-specific patches are necessary.
I had written plenty of code using Booleans and compiled and built with no problem. Then the compiler and even the editor no longer recognize "bool". A fix I did was to "#include <stdbool.h>" to recognize the Booleans.
But I'd like to know what could possibly cause this problem?
In C11, the type bool is only defined if the standard header stdbool.h is included. Otherwise, the type has to be referred to as _Bool. This was the result of the complete absence of a boolean type in earlier revisions of the standard, and the focus on backwards compatibility in the evolution of said standard.
In C++, the bool type is available without including any header, just like int.
Your question is about GCC, not about the C standard, but while GCC does take some obscure liberties with the C standard if you do not use commandline options such as -std=c11 -pedantic to make it a standard-compliant compiler, in the case of the type bool, it follows the C standard and abstains from defining it.
It is likely that you were compiling code as C++ previously and are now compiling it as C. Another possibility is that you were including an application header that was including stdbool.h or that provided its own definition of bool, and that you ceased to include this header.
(It would even be possible to imagine in theory that the header in question was a system header that was including stdbool.h previously and ceased to when you upgraded your compilation platform. In principle, there is no guarantee about which system header may include what other system headers. In practice, though, since the only purpose of stdbool.h is to preserve compatibility with old code that does not include it, stdbool.h would never be included by another system header.)
I'm using MSVC to compile some C code which uses standard-library functions, such as getenv(), sprintf and others, with /W3 set for warnings. I'm told by MSVC that:
'getenv': This function or variable may be unsafe. Consider using _dupenv_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS
Questions:
Why would this be unsafe, theoretically - as opposed to its use on other platforms?
Is it unsafe on Windows in practice?
Assuming I'm not writing security-oriented code - should I disable this warning or actually start aliasing a bunch of standard library functions?
getenv() is potentially unsafe in that subsequent calls to that same function may invalidate earlier returned pointers. As a result, usage such as
char *a = getenv("A");
char *b = getenv("B");
/* do stuff with both a and b */
may break, because there's no guarantee a is still usable at that point.
getenv_s() - available in the C standard library since C11 - avoids this by immediately copying the value into a caller-supplied buffer, where the caller has full control over the buffer's lifetime. dupenv_s() avoids this by making the caller responsible for managing the lifetime of the allocated buffer.
However, the signature for getenv_s is somewhat controversial, and the function may even be removed from the C standard at some point... see this report.
getenv suffers like much of the classic C Standard Library by not bounding the string buffer length. This is where security bugs like buffer overrun often originate from.
If you look at getenv_s you'll see it provides an explicit bound on the length of the returned string. It's recommended for all coding by the Security Development Lifecycle best practice, which is why Visual C++ emits deprecation warnings for the less secure versions.
See MSDN and this blog post
There was an effort by Microsoft to get the C/C++ ISO Standard Library to include the Secure CRT here, some of which was approved for C11 Annex K as noted here. That also means that getenv_s should be part of the C++17 Standard Library by reference. That said, Annex K is officially considered optional for conformance. The _s bounds-checking versions of these functions are also still a subject of some debate in the C/C++ community.
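The two concerns raised in the answers above (a returned pointer that a later getenv() call may invalidate, and a string with no length bound) can both be handled by copying the value at once into a caller-owned, size-checked buffer. This is an added example, not code from the answers or from the CRT; env_copy and its status codes are hypothetical names, and it uses only standard getenv() rather than getenv_s or _dupenv_s.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical status codes for this example. */
enum env_status { ENV_OK = 0, ENV_MISSING, ENV_TRUNCATED, ENV_BAD_ARG };

/* Copy the value of `name` into buf[size]. The pointer returned by getenv()
 * belongs to the C runtime and may be overwritten by a later getenv() call,
 * so it is consumed here and never handed back to the caller. */
enum env_status env_copy(const char *name, char *buf, size_t size)
{
    if (name == NULL || buf == NULL || size == 0)
        return ENV_BAD_ARG;
    buf[0] = '\0';                      /* defined contents on every path */

    const char *value = getenv(name);
    if (value == NULL)
        return ENV_MISSING;

    size_t len = strlen(value);
    if (len >= size)                    /* would not fit with its terminator */
        return ENV_TRUNCATED;           /* refuse instead of silently cutting */

    memcpy(buf, value, len + 1);
    return ENV_OK;
}

int main(void)
{
    char path[4096];
    char home[256];

    /* Two lookups in a row: each result lives in its own buffer, so the
     * second call cannot invalidate what the first one returned. */
    enum env_status sp = env_copy("PATH", path, sizeof path);
    enum env_status sh = env_copy("HOME", home, sizeof home);

    printf("PATH status=%d length=%zu\n", (int)sp, strlen(path));
    printf("HOME status=%d length=%zu\n", (int)sh, strlen(home));
    return 0;
}
```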
I'm migrating a Visual C++ project which uses ATL/MFC from VS2010 to VS2013. The project compiles with /J ("assume char is unsigned"), and there is too much code that may or may not rely on that fact to easily remove the compiler flag.
Under VS2013, /J causes a compiler error in atldef.h: ATL doesn't support compilation with /J or _CHAR_UNSIGNED flag enabled. This can be suppressed by defining _ATL_ALLOW_UNSIGNED_CHAR. Microsoft mention this in the MSDN documentation for /J, along with the vague statement: "If you use this compiler option with ATL/MFC, an error might be generated. Although you could disable this error by defining _ATL_ALLOW_CHAR_UNSIGNED, this workaround is not supported and may not always work."
Does anyone know under what circumstances it is safe or unsafe to use _ATL_ALLOW_CHAR_UNSIGNED?
Microsoft struggles to keep ancient codebases, like ATL, compatible with changes in the compiler. The principal trouble-maker here is the AtlGetHexValue() function. It had a design mistake:
The numeric value of the input character interpreted as a hexadecimal digit. For example, an input of '0' returns a value of 0 and an input of 'A' returns a value of 10. If the input character is not a hexadecimal digit, this function returns -1.
-1 is the rub, 9 years ago that broke with /J in effect. And it won't actually return -1 today, it now returns CHAR_MAX ((char)255) if you compile with /J. Required since comparing unsigned char to -1 will always be false and the entire if() statement is omitted. This broke ATL itself, it will also break your code in a very nasty way if you use this function, given that this code is on the error path that is unlikely to get tested.
Shooting off the hip, there were 3 basic ways they could have solved this problem. They could have changed the return value type to int, risking breaking everybody. Or they could have noted the special behavior in the MSDN article, making everybody's eyes roll. Or they could have invoked the "time to move on" option. Which is what they picked, it was about time with MSVC++ being the laughing stock of the programming world back then.
That's about all you need to fear from ATL, low odds that you are using this function and easy to find back. Otherwise an excellent hint to look for the kind of trouble you might get from your own code.
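The failure mode described in this answer, reduced to portable C as an added example (not ATL's source and not code from the answer): signed char and unsigned char stand in for plain char without and with /J, and every function name here is hypothetical. It shows the `== -1` error check going dead once the return type is unsigned, next to an int-returning form that keeps the sentinel intact.

```c
#include <stdio.h>

/* Shared digit logic; -1 is the "not a hex digit" sentinel. */
static int hex_digit(int c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Return type as plain char behaves without /J (signed). */
static signed char hex_char_signed(char c) { return (signed char)hex_digit(c); }

/* Return type as plain char behaves with /J (unsigned): -1 becomes 255. */
static unsigned char hex_char_unsigned(char c) { return (unsigned char)hex_digit(c); }

/* Caller-side error checks, each returning 1 when the input is rejected. */
int rejects_signed(char c)   { return hex_char_signed(c) == -1; }

/* The unsigned result is promoted to int 255, which never equals -1,
 * so the error path silently disappears. */
int rejects_unsigned(char c) { return hex_char_unsigned(c) == -1; }

/* What the caller receives instead of an error when char is unsigned. */
int value_unsigned(char c)   { return hex_char_unsigned(c); }

/* Hardened form: keep the result in int so the sentinel is out of band. */
int rejects_int(char c)      { return hex_digit(c) < 0; }

int main(void)
{
    char bad = 'G';   /* constructed invalid hex digit */
    printf("signed char return rejects 'G':   %d\n", rejects_signed(bad));
    printf("unsigned char return rejects 'G': %d\n", rejects_unsigned(bad));
    printf("unsigned char return value:       %d\n", value_unsigned(bad));
    printf("int return rejects 'G':           %d\n", rejects_int(bad));
    return 0;
}
```

A compiler may flag the comparison in rejects_unsigned as always false; that diagnostic is the useful signal when auditing code built with /J.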
I'm experiencing a very strange problem... The following trivial test code works as it should if it is injected in a single Cocoa application, but when I use it in one of my frameworks, I get absolutely unexpected results...
wchar_t Buf[2048];
wcscpy(Buf, L"/zbxbxklbvasyfiogkhgfdbxbx/bxkfiorjhsdfohdf/xbxasdoipppwejngfd/gjfdhjgfdfdjkg.sdfsdsrtlrt.ljlg/fghlfg");
int len1 = wcslen(L"/zbxbxklbvasyfiogkhgfdbxbx/bxkfiorjhsdfohdf/xbxasdoipppwejngfd/gjfdhjgfdfdjkg.sdfsdsrtlrt.ljlg/fghlfg");
int len2 = wcslen(Buf);
char Buf2[2048];
Buf2[0]=0;
wcstombs(Buf2, Buf, 2048);
// ??? Buf2 == ""
// ??? len1 == len2 == 57, but should be 101
How can this be, have I gone mad? Even if there was a memory corruption, it couldn't possibly corrupt all these values allocated on stack... Why won't even the wcslen(L"MyWideString") work? Changing test string changes its length, but it is always wrong, wcstombs returns -1...
setlocale() is not used anywhere, test string contains only ASCII characters, in order to ease porting I use the -fshort-wchar compiler option, but it works fine in case of a test Cocoa application...
Please help!
I've just tested this again with GCC 4.6. In the standard settings, this works as expected, giving 101 for all the lengths. However, with your option -fshort-wchar I also get unexpected results (51 in my case, and 251 for the final conversion after using setlocale()).
So I looked up the man entry for the option:
Warning: the -fshort-wchar switch causes GCC to generate code that is not binary compatible with code generated without that switch. Use it to conform to a non-default application binary interface.
I think that explains it: When you're linking to the standard library, you are expected to use the correct ABI and type conventions, which you are overriding with that option.
Wide char implementation in C/C++ can be anything, including 1 byte, 2 bytes or 4 bytes. This depends on the compiler and the platform you are compiling to.
Probably wikipedia is not the best place to quote from but in this case:
http://en.wikipedia.org/wiki/Wide_character states that
... width of wchar_t is compiler-specific and can be as small as 8 bits.
and
... wide characters should be 16-bit values under C90 due to historical compatibility reasons. C and C++ compilers that comply with the 10646-1:2000 Unicode standard generally assume 32-bit values....
So, do not assume and use the sizeof(wchar_t).
-fshort-wchar changes the compiler's ABI, so you need to recompile glibc, libgcc and all libraries using wchar_t. Otherwise, wcslen and other functions in glibc still assume wchar_t is 4 bytes.
see: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42092
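The ABI mismatch described in the answers above can be reproduced without -fshort-wchar by simulating both sides, as an added example that is not code from the thread: the caller stores 2-byte units, and the scan walks the same bytes in 4-byte units the way a library built for 4-byte wchar_t would. The helper names are hypothetical and the buffer is zero-filled, which real stack memory is not.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Test string from the question. */
#define SAMPLE "/zbxbxklbvasyfiogkhgfdbxbx/bxkfiorjhsdfohdf/xbxasdoipppwejngfd/gjfdhjgfdfdjkg.sdfsdsrtlrt.ljlg/fghlfg"

/* Store ASCII text as NUL-terminated 16-bit units (the -fshort-wchar layout). */
static void store_u16(const char *s, uint16_t *out)
{
    size_t i = 0;
    for (; s[i] != '\0'; i++)
        out[i] = (unsigned char)s[i];
    out[i] = 0;
}

/* Count units of `width` bytes (2 or 4) before the first all-zero unit,
 * never reading past buf_bytes. */
static size_t unit_len(const void *buf, size_t buf_bytes, size_t width)
{
    static const unsigned char zero[4] = {0};
    const unsigned char *p = buf;
    size_t n = 0;
    while ((n + 1) * width <= buf_bytes && memcmp(p + n * width, zero, width) != 0)
        n++;
    return n;
}

/* Length reported when a 2-byte-unit string is scanned in `width`-byte units. */
size_t scanned_len(const char *ascii, size_t width)
{
    uint16_t buf[2048] = {0};
    if (strlen(ascii) >= 2048)
        return 0;                       /* does not fit the demo buffer */
    store_u16(ascii, buf);
    return unit_len(buf, sizeof buf, width);
}

int main(void)
{
    printf("characters stored:       %zu\n", strlen(SAMPLE));
    printf("scan with 2-byte units:  %zu\n", scanned_len(SAMPLE, 2));
    printf("scan with 4-byte units:  %zu\n", scanned_len(SAMPLE, 4));
    return 0;
}
```

Each 4-byte unit swallows two stored characters, and the last one also swallows the 2-byte terminator, so the scan only stops on whatever happens to follow it in memory.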