When will _ATL_ALLOW_CHAR_UNSIGNED work?

I'm migrating a Visual C++ project which uses ATL/MFC from VS2010 to VS2013. The project compiles with /J ("assume char is unsigned"), and there is too much code that may or may not rely on that fact to easily remove the compiler flag.
Under VS2013, /J causes a compiler error in atldef.h: ATL doesn't support compilation with /J or _CHAR_UNSIGNED flag enabled. This can be suppressed by defining _ATL_ALLOW_CHAR_UNSIGNED. Microsoft mentions this in the MSDN documentation for /J, along with the vague statement: "If you use this compiler option with ATL/MFC, an error might be generated. Although you could disable this error by defining _ATL_ALLOW_CHAR_UNSIGNED, this workaround is not supported and may not always work."
Does anyone know under what circumstances it is safe or unsafe to use _ATL_ALLOW_CHAR_UNSIGNED?

Microsoft struggles to keep ancient codebases, like ATL, compatible with changes in the compiler. The principal troublemaker here is the AtlGetHexValue() function. It has a design mistake, visible in its documented contract:
The numeric value of the input character interpreted as a hexadecimal digit. For example, an input of '0' returns a value of 0 and an input of 'A' returns a value of 10. If the input character is not a hexadecimal digit, this function returns -1.
That -1 is the rub: nine years ago, that contract broke with /J in effect. The function doesn't actually return -1 today: compile with /J and the return -1 statement produces 255 (CHAR_MAX when char is unsigned), the inevitable result of converting -1 to an unsigned char. Worse, a caller's check for -1 can never succeed, since comparing an unsigned char against -1 is always false, so the compiler omits the entire if() statement. This broke ATL itself, and it will also break your code in a very nasty way if you use this function, given that the check sits on an error path that is unlikely to get tested.
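A minimal, self-contained sketch of that pitfall (GetHexValue is my stand-in with the same documented contract, not ATL's actual source):

#include <cstdio>

// Stand-in with AtlGetHexValue's documented contract: -1 on bad input.
char GetHexValue(char ch)
{
    if (ch >= '0' && ch <= '9') return ch - '0';
    if (ch >= 'A' && ch <= 'F') return ch - 'A' + 10;
    if (ch >= 'a' && ch <= 'f') return ch - 'a' + 10;
    return -1; // with /J, char is unsigned, so this actually yields 255
}

int main()
{
    char digit = GetHexValue('x');
    // Both operands promote to int; with /J that is 255 == -1,
    // which is always false, so this error branch is dead code.
    if (digit == -1)
        printf("bad hex digit\n"); // never reached when compiled with /J
    return 0;
}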
Shooting from the hip, there were three basic ways they could have solved this problem. They could have changed the return type to int, risking breaking everybody. Or they could have documented the special behavior in the MSDN article, making everybody's eyes roll. Or they could have invoked the "time to move on" option. That is what they picked, and it was about time, MSVC++ being the laughing stock of the programming world back then.
That's about all you need to fear from ATL: the odds that you use this function are low, and the calls are easy to find. Otherwise it is an excellent hint of the kind of trouble to look for in your own code.

Related

allow(uncommon_codepoints) is ignored unless specified at crate level

I wrote this the other day:
let µ = ... some expression ...
(As it happens, the µ sign is easy to type on my keyboard, just AltGr+m. This is why I habitually use this letter, especially when it is about small values.)
Now I got this:
identifier contains uncommon Unicode codepoints
`#[warn(uncommon_codepoints)]` on by default
No problem, I'll just allow it, I thought, and put this at the front:
#![allow(uncommon_codepoints)]
But no, it remains utterly set against my Greek:
allow(uncommon_codepoints) is ignored unless specified at crate level
`#[warn(unused_attributes)]` on by default
I would think it is at least debatable what "uncommon" exactly is. But I'm not really interested in that discussion, as long as I can turn it off.
So please ... how exactly do I specify something at the crate level? I tried it in main.rs and lib.rs, but it won't accept it.
Edit
This really starts to become interesting:
I put the line
#![allow(uncommon_codepoints)]
as line 1 in main.rs, and it now stops complaining about the unused attribute. However, the "uncommon codepoint" warning still appears when compiling the file that contains it (i.e. with cargo build). I am on rustc 1.58.1 (stable, AFAIK).
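For reference, a minimal sketch of what crate-level placement looks like (the inner attribute must precede all items in the crate root; note that main.rs and lib.rs are the roots of two separate crates, so the crate that actually contains the identifier needs its own copy):

// src/main.rs -- crate root of the binary crate.
// An inner attribute (#![...]) must come before any items and
// applies only to this crate; a lib.rs in the same package is a
// different crate and needs its own attribute.
#![allow(uncommon_codepoints)]

fn main() {
    let µ = 0.001_f64; // U+00B5 MICRO SIGN, no warning now
    println!("{}", µ);
}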
I also found out that what my keyboard produces is not U+03BC GREEK SMALL LETTER MU but U+00B5 MICRO SIGN. It's still a letter, lowercase. Now, the interesting thing is: the uncommon Unicode warning does not appear for a genuine greek Mu, but for the micro sign it does!
Is there any other place where I can turn off annoying and (from my point of view utterly useless) warnings? In general, I highly appreciate Rust's detailed and often helpful warnings (though lately I found myself making an unused HashSet just to avoid the warnings about unused imports --- hey, I know I will need this later, so please stop nagging), but this Unicode thing is a bit overdone. It's a valid variable name according to Rust's lexical syntax, and I really do want to use it. Period.

How to retrieve the type of architecture (Linux versus Windows) within my Fortran code

How can I retrieve the type of architecture (Linux versus Windows) in my Fortran code? Is there some sort of intrinsic function or subroutine that gives this information? Then I would like to use a switch like this every time I have a system call:
if (trim(adjustl(Arch)) == 'Linux') then
   resul = system('ls > output.txt')
elseif (trim(adjustl(Arch)) == 'Windows') then
   resul = system('dir > output.txt')
else
   write(*,*) 'architecture not supported'
   stop
endif
thanks
A.
The Fortran 2003 standard introduced the GET_ENVIRONMENT_VARIABLE intrinsic subroutine. A simple form of call would be
call GET_ENVIRONMENT_VARIABLE (NAME, VALUE)
which will return the value of the variable called NAME in VALUE. The routine has other optional arguments; your favourite reference documentation will explain them all. This rather assumes that you can find an environment variable to tell you what the executing platform is.
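A minimal sketch of that approach, assuming the OS environment variable (which Windows sets to Windows_NT and which is normally absent on Linux):

program which_os
  implicit none
  character(len=64) :: os
  integer :: stat

  ! On Windows the OS variable is set to 'Windows_NT';
  ! on Linux it is normally undefined, so stat is non-zero.
  call get_environment_variable('OS', os, status=stat)
  if (stat == 0 .and. index(os, 'Windows') > 0) then
     write (*,*) 'running on Windows'
  else
     write (*,*) 'assuming Linux/Unix'
  end if
end program which_os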
If your compiler doesn't yet implement this standard approach it is extremely likely to have a non-standard approach; a routine called getenv used to be available on more than one of the Fortran compilers I've used in the recent past.
The 2008 standard introduced a standard function COMPILER_OPTIONS which will return a string containing the compilation options used for the program, if, that is, the compiler supports this sort of thing. This seems to be less widely implemented yet than GET_ENVIRONMENT_VARIABLE; as ever, consult your compiler documentation for details and availability. If it is available it may also be useful to you.
You may also be interested in the 2008-introduced subroutine EXECUTE_COMMAND_LINE which is the standard replacement for the widely-implemented but non-standard system routine that you use in your snippet. This is already available in a number of current Fortran compilers.
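For completeness, here is the asker's snippet recast with that standard subroutine, a sketch with the error handling reduced to an exit-status check:

program run_ls
  implicit none
  integer :: estat

  ! Standard (Fortran 2008) replacement for the non-standard system().
  call execute_command_line('ls > output.txt', exitstat=estat)
  if (estat /= 0) write (*,*) 'command failed with status ', estat
end program run_ls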
There is no intrinsic function in Fortran for this. A common workaround is to use conditional compilation (through the makefile or compiler-supported macros), such as here. If you really insist on this kind of solution, you might consider writing an external function, e.g. in C. However, since your code is built for a fixed platform (Windows or Linux, not both at once), the first solution is preferable.

Equality operator while checking condition in C++

Is there a difference between these two conditions:
if (a==5) and if (5==a)?
No, there is no difference at all.
People used to write 5==a instead of a==5 so they could catch a=5 errors in C/C++, where that expression is perfectly valid and here always evaluates to true (the assignment yields 5). That way, if the programmer writes the expression 5=a by mistake, they get a compiler error.
The two are normally the same.
Some people recommend putting the constant first (if (5==a)) because this way, if you mis-type and leave out one of the = to get if (5=a), the compiler will give an error message, whereas if (a=5) will compile and execute, but probably not do what you want.
Some compilers will give a warning for the accidental assignment (recent versions of GCC do, for example), but others don't (and Visual C++ is among those that stay silent).
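A short sketch of the pitfall and the constant-first spelling (the GCC warning referred to above is -Wparentheses, enabled by -Wall):

#include <cstdio>

int main()
{
    int a = 0;

    if (a = 5)        // typo for a==5: assigns 5, condition is true;
        puts("oops"); // GCC warns here with -Wall (-Wparentheses)

    // if (5 = a)     // the same typo with the constant first
    //     ;          // does not compile: you cannot assign to a literal

    if (5 == a)       // the constant-first style in question
        puts("a is five");
    return 0;
}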
If a is an object of a class type that overloads operator==, then in theory the two orderings may give different results.

Fortran77 compiler treatment of PI=4.D0*DATAN(1.D0)

When using the following to compute PI in Fortran 77, will the compiler evaluate this value or will it be evaluated at run time?
PI=4.D0*DATAN(1.D0)
EDIT: depends on the compiler: see my EDIT below. EDIT END
I second Mick Sharpe's suggestion that it will be evaluated at runtime. Just out of curiosity, I compiled PI=4.D0*DATAN(1.D0) with Silverfrost's ftn77 compiler and looked at the generated binary. The relevant part looks like so:
fld1 ; push 1.D0 onto the FPU register stack
call ATAN_X
fmul dbl_404000 ; multiply by 4.D0
So indeed, no compiler cleverness here.
This of course might be different with another compiler (e.g. g77). EDIT: apparently, with g77 (the Fortran 77 front-end for gcc) it is possible (and enabled by default) to use gcc's built-in atan function to auto-fold PI=4.D0*DATAN(1.D0) into a constant. EDIT END
Calls to math functions are normally evaluated at run time. After all, there's nothing to stop you writing your own math functions. This would not be possible if they were evaluated at compile time.
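Worth adding: in modern Fortran you can force the evaluation to compile time by making PI a named constant, since the 2008 standard allows intrinsic functions such as ATAN in constant expressions. A sketch (not valid Fortran 77):

program pi_const
  implicit none
  ! A constant expression: a Fortran 2008 compiler must fold this at
  ! compile time, as PARAMETER values cannot be computed at run time.
  double precision, parameter :: PI = 4.d0 * atan(1.d0)
  write (*,*) PI
end program pi_const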

Should I cast a CString passed to Format/printf (and varargs in general)?

I recently took over a small MFC C++ application, which is obviously in a working state. To get started, I'm running PC-Lint over the code, and lint is complaining that CStringT objects are being passed to Format. Opinion on the internet seems to be divided: some say that CString is designed to handle this use case without error, but others (and an MSDN article) say that it should always be cast when passed to a variable-argument function. Can Stack Overflow come to any consensus on the issue?
CString has been carefully designed to be passed as part of a variable argument list, so it is safe to use it that way. And you can be fairly sure that Microsoft will take care not to break this particular behavior. So I'd say you are safe to continue using it that way, if you want to.
That said, personally I'd prefer the cast. It is not common for string classes to behave that way (e.g. std::string does not), and for mental consistency it may be better to just do it the "safe" way.
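A sketch of both styles (assuming an MFC/ATL project where atlstr.h and the _T macro are available; the cast goes through CString's LPCTSTR conversion operator):

#include <atlstr.h> // CString

void demo()
{
    CString name(_T("world"));
    CString msg;

    // Relies on CString's layout guarantee for variable argument lists:
    msg.Format(_T("Hello, %s!"), name);

    // The explicit, lint-quiet version:
    msg.Format(_T("Hello, %s!"), static_cast<LPCTSTR>(name));
}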
P.S.: See this thread for implementation details and further notes on how to cast.
