I had written plenty of code using Booleans and compiled and built with no problem. Then the compiler, and even the editor, no longer recognized "bool". The fix I applied was to "#include <stdbool.h>" so that the Booleans are recognized.
But I'd like to know: what could possibly cause this problem?
In C99 and C11, the type bool is only defined if the standard header stdbool.h is included. Otherwise, the type has to be referred to as _Bool. This is the result of the complete absence of a boolean type in earlier revisions of the standard, and of the focus on backwards compatibility in the evolution of said standard.
In C++, the bool type is available without including any header, just like int.
Your question is about GCC rather than the C standard. While GCC does take some obscure liberties with the C standard if you do not use command-line options such as -std=c11 -pedantic to make it a standard-compliant compiler, in the case of the type bool it follows the C standard and abstains from defining it.
It is likely that you were compiling code as C++ previously and are now compiling it as C. Another possibility is that you were including an application header that was including stdbool.h or that provided its own definition of bool, and that you ceased to include this header.
(It would even be possible to imagine in theory that the header in question was a system header that was including stdbool.h previously and ceased to when you upgraded your compilation platform. In principle, there is no guarantee about which system header may include what other system headers. In practice, though, since the only purpose of stdbool.h is to preserve compatibility with old code that does not include it, stdbool.h would never be included by another system header.)
Working with Haskell, and particularly GHC, I see the word tinfo6 quite often. Mostly it appears in an arch-vendor-os triple, x86_64-linux-tinfo6, as if it were some sort of OS. But what does tinfo6 really mean?
it appears in arch-vendor-os triple x86_64-linux-tinfo6
I think you are confusing GNU target triplets with GHC target triplets. A GHC target triplet is <architecture>-<operating system>-<ABI>.
So, tinfo6 is the ABI. I don't know much about GHC, but I do remember that it has a calling convention that is not the C calling convention.
Fun fact: this calling convention cannot actually be expressed in C, so the C backend of GHC calls GCC to generate assembly; then a Perl(!!!) script that is part of the GHC compiler searches for calls in the assembly code and rewrites them to the GHC calling convention. After that, the compiler calls GCC (or rather GAS) again to assemble the object file. (This rather clever but somewhat crazy hack is one of the reasons for the push to native and LLVM backends.)
So, unfortunately, I don't know what tinfo6 means but I am pretty sure it is the name of the GHC calling convention or ABI.
It is a well-known fact that C++ templates are Turing-complete, that CSS is Turing-complete (!), and that C# overload resolution is NP-hard (even without generics).
But is C# 4.0 (with co/contravariance, generics, etc.) compile-time Turing-complete?
Unlike templates in C++, generics in C# (and other .NET languages) are a runtime-generated feature. The compiler does some checking to verify the types used, but the actual substitution happens at runtime. The same goes for co- and contravariance, if I'm not mistaken, and even for the preprocessor directives. Lots of CLR magic.
(At the implementation level, the primary difference is that C# generic type substitutions are performed at runtime, and generic type information is thereby preserved for instantiated objects.)
See MSDN
http://msdn.microsoft.com/en-us/library/c6cyy67b(v=vs.110).aspx
Update:
The CLR does perform type checking via information stored in the metadata associated with the compiled assemblies (vis-à-vis JIT compilation). It does this as one of its many services (ShuggyCoUk's answer on this question explains it in detail); others include memory management and exception handling. So I would infer that the compiler has an understanding of state as progression, and of the machine's internal state. (Turing completeness means, in part, being able to read data (symbols) with reference to previous data (symbols), and to evaluate conditionally.) (I hesitate to state the exact definition of Turing completeness, as I am not sure I have fully grasped it myself, so feel free to fill in the blanks and correct me where applicable.) So with that, I would say, with a bit of trepidation: yes, yes it can.
I came across an interesting error when I was trying to link to an MSVC-compiled library using MinGW while working in Qt Creator. The linker complained of a missing symbol that looked like _imp_FunctionName. When I realized that it was due to a missing extern "C", and fixed it, I also ran the MSVC compiler with /FAcs to see what the symbols are. Turns out, it was __imp_FunctionName (which is also the way I've seen it written on MSDN and on quite a few guru bloggers' sites).
I'm thoroughly confused about how the MinGW linker complains about a symbol beginning with _imp, but is able to find it nicely although it begins with __imp. Can a deep compiler magician shed some light on this? I used Visual Studio 2010.
This is fairly straightforward identifier decoration at work. The imp_ prefix is auto-generated by the compiler; it exports a function pointer that allows optimized binding to DLL exports. By language rules, imp_ is given a leading underscore, required since it lives in the global namespace and is generated by the implementation without otherwise appearing in the source code. So you get _imp_.
The next thing that happens is that the compiler decorates identifiers to allow the linker to catch declaration mismatches. This is pretty important, because the compiler cannot diagnose declaration mismatches across modules, and diagnosing them yourself at runtime is very painful.
First there's C++ decoration, a very involved scheme that supports function overloads. It generates pretty bizarre-looking names, usually including lots of ? and @ characters, with extra characters for the argument and return types so that overloads are unambiguous. Then there's decoration for C identifiers, which is based on the calling convention. A cdecl function has a single leading underscore; an stdcall function has a leading underscore and a trailing @n that permits diagnosing argument declaration mismatches before they imbalance the stack. The C decoration is absent in 64-bit code, where there is (blessedly) only one calling convention.
So you got the linker error because you forgot to specify C linkage, the linker was asked to match the heavily decorated C++ name with the mildly decorated C name. You then fixed it with extern "C", now you got the single added underscore for cdecl, turning _imp_ into __imp_.
With respect to the following link:
http://www.archlinux.org/news/libpnglibtiff-rebuilds-move-from-testing/
Could someone explain to me why a program should be rebuilt after one of its libraries has been updated?
How does that make any sense since the "main" file is not changed at all?
If the signatures of the functions involved haven't changed, then "rebuilding" the program means that the object files must be linked again. You shouldn't need to compile them again.
An API is a contract that describes the interface to the public functions in a library. When the compiler generates code, it needs to know what type of variables to pass to each function, and in what order. It also needs to know the return type, so it knows the size and format of the data that will be returned from the function. When your code is compiled, the address of a library function may be represented as "start of the library, plus 140 bytes." The compiler doesn't know the absolute address, so it simply specifies an offset from the beginning of the library.
But within the library, the contents (that is, the implementations) of the functions may change. When that happens, the length of the code may change, so the addresses of the functions may shift. It's the job of the linker to understand where the entry points of each function reside, and to fill those addresses into the object code to create the executable.
On the other hand, if the data structures in the library have changed and the library requires the callers to manage memory (a bad practice, but unfortunately common), then you will need to recompile the code so it can account for the changes. For example, if your code uses malloc(sizeof(dataStructure)) to allocate memory for a library data structure that's doubled in size, you need to recompile your code because sizeof(dataStructure) will have a larger value.
There are two kinds of compatibility: API and ABI.
API compatibility is about functions and data structures which other programs may rely on. For instance if version 0.1 of libfoo defines an API function called "hello_world()", and version 0.2 removes it, any programs relying on "hello_world()" need updating to work with the new version of libfoo.
ABI compatibility is about the assumptions of how functions and, in particular, data structures are represented in the binaries. If, for example, libfoo 0.1 also defined a data structure called "recipe" with two fields, "instructions" and "ingredients", and libfoo 0.2 introduces a "measurements" field before "ingredients", then programs built against libfoo 0.1's recipe must be recompiled, because the "instructions" and "ingredients" fields will likely be at different offsets in the 0.2 version of the libfoo.so binary.
What is a "library"?
If a "library" is only a binary (e.g. a dynamically linked library aka ".dll", ".dylib" or ".so"; or a statically linked library aka ".lib" or ".a") then there is no need to recompile; re-linking should be enough (and even that can be avoided in some special cases).
On the other hand, libraries often consist of more than just the binary object - e.g. the header files might include some inline (or macro) logic. If so, re-linking is not enough, and you might have to re-compile in order to make use of the newest version of the lib.
Why is the /Wp64 flag in Visual C++ deprecated?
cl : Command line warning D9035 :
option 'Wp64' has been deprecated and will be removed in a future release
I think that /Wp64 is deprecated mainly because compiling for a 64-bit target will catch the kinds of errors it was designed to catch (/Wp64 is only valid in 32-bit compiles). The option was added back when 64-bit targets were emerging, to help people migrate their programs to 64-bit and to help detect code that wasn't '64-bit clean'.
Here's an example of the kinds of problems with /Wp64 that Microsoft just isn't interested in fixing - probably rightly so (from http://connect.microsoft.com/VisualStudio/feedback/details/502281/std-vector-incompatible-with-wp64-compiler-option):
Actually, the STL isn't intentionally incompatible with /Wp64, nor is it completely and unconditionally incompatible with /Wp64. The underlying problem is that /Wp64 interacts extremely badly with templates, because __w64 isn't fully integrated into the type system. Therefore, if vector<unsigned int> is instantiated before vector<__w64 unsigned int>, then both of them will behave like vector<unsigned int>, and vice versa. On x86, SOCKET is a typedef for __w64 unsigned int. It's not obvious, but vector<unsigned int> is being instantiated before your vector<SOCKET>, since vector<bool> is backed (in our implementation) by vector<unsigned int>.

Previously (in VC9 and earlier), this bad interaction between /Wp64 and templates caused spurious warnings. In VC10, however, changes to the STL have made this worse. Now, when vector::push_back() is given an element of the vector itself, it figures out the element's index before doing other work. That index is obtained by subtracting the element's address from the beginning of the vector. In your repro, this involves subtracting const SOCKET * - unsigned int *. (The latter is unsigned int * and not SOCKET * due to the previously described bug.) This /should/ trigger a spurious warning, saying "I'm subtracting pointers that point to the same type on x86, but to different types on x64". However, there is a SECOND bug here, where /Wp64 gets really confused and thinks this is a hard error (while adding constness to the unsigned int *).

We agree that this bogus error message is confusing. However, since it's preceded by an un-silenceable command line deprecation warning D9035, we believe that that should be sufficient. D9035 already says that /Wp64 shouldn't be used (although it doesn't go on to say "this option is super duper buggy, and completely unnecessary now").

In the STL, we could #error when /Wp64 is used. However, that would break customers who are still compiling with /Wp64 (despite the deprecation warning) and aren't triggering this bogus error. The STL could also emit a warning, but the compiler is already emitting D9035.
/Wp64 on 32-bit builds is a waste of time. It is deprecated, and this deprecation makes sense. The way /Wp64 worked on 32-bit builds was to look for a __w64 annotation on a type. This annotation told the compiler that, even though the type is 32 bits wide in 32-bit mode, it is 64 bits wide in 64-bit mode. This turned out to be really flaky, especially where templates are involved.
/Wp64 on 64-bit builds is extremely useful. The documentation (http://msdn.microsoft.com/en-us/library/vstudio/yt4xw8fh.aspx) claims that it is on by default in 64-bit builds, but this is not true: compiler warnings C4311 and C4312 are only emitted if /Wp64 is explicitly set. Those two warnings indicate when a 32-bit value is put into a pointer, or vice versa. They are very important for code correctness, and claim to be at warning level 1. I have found bugs in very widespread code that would have been caught if the developers had turned on /Wp64 for their 64-bit builds. Unfortunately, you also get the command-line warning that you have observed. I know of no way to squelch it, and I have learned to live with it. On the bright side, if you build with warnings as errors, this command-line warning does not turn into an error.
Because when using the 64-bit compiler from VS2010, the compiler detects 64-bit problems automatically... this switch is from back in the day when you could try to detect 64-bit problems while running the 32-bit compiler...
See http://msdn.microsoft.com/en-us/library/yt4xw8fh%28v=VS.100%29.aspx
You could link to the deprecation warning, but couldn't go to the /Wp64 documentation?
By default, the /Wp64 compiler option is off in the Visual C++ 32-bit compiler and on in the Visual C++ 64-bit compiler.
If you regularly compile your application by using a 64-bit compiler, you can just disable /Wp64 in your 32-bit compilations because the 64-bit compiler will detect all issues.
Emphasis added