libc macro in autoconf

I want to write a macro for finding libc. I found that the ldd --version option can be used to find the version. It prints a lot of information, but I want only the version number. How can I get just the version?
$ ldd --version
ldd (Ubuntu EGLIBC 2.12.1-0ubuntu6) 2.12.1
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.

For glibc/eglibc you can get this information from the macros in <features.h>. Other libc implementations will vary.
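For a concrete illustration, a minimal C sketch along these lines might look like the following. This is glibc/eglibc-specific: <features.h> defines __GLIBC__ and __GLIBC_MINOR__, and <gnu/libc-version.h> provides gnu_get_libc_version(); an Autoconf macro could compile and run a small test program like this, for example with AC_RUN_IFELSE.

/* Minimal sketch: report the glibc version. glibc/eglibc only; other libc
   implementations do not provide these macros or this function. */
#include <stdio.h>
#include <features.h>          /* __GLIBC__, __GLIBC_MINOR__ */
#include <gnu/libc-version.h>  /* gnu_get_libc_version() */

int main(void)
{
    /* Version the program was compiled against. */
    printf("compile-time: %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
    /* Version actually loaded at run time. */
    printf("run-time: %s\n", gnu_get_libc_version());
    return 0;
}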

Related

What if I want to build GNU m4 from scratch, without bootstrapping from another released copy of GNU m4?

I have recently been writing a paper about open source and GNU. I need to do some tests to simulate how the pioneers of GNU developed the GNU build system from scratch in the early period. But I found one strange thing in the README, namely the statement:
If GNU 'm4' is meant to serve GNU 'autoconf', beware that 'm4' should be fully installed prior to configuring 'autoconf' itself.
Likewise, if you intend on hacking GNU 'm4' from git, the bootstrap process requires that you first install a released copy of GNU 'm4'.
If we follow this logic, what about the first released copy of GNU m4? Does anybody have a clue or hint? Thank you.
If we follow this logic, what about the first released copy of GNU m4?
Retrocomputing SE would be a better forum for questions about computing history. From a technical perspective, however, it is obvious that the first version of GNU m4 could not have depended on an Autoconf build system if Autoconf also depended on GNU m4, not even if they were part of the same package. In fact, GNU's m4 is not the first or only m4, and it did not always have an Autoconf-based build system. For its part, Autoconf did not always depend specifically on GNU m4.
Bear in mind, too, that:
- configure scripts similar to those produced by Autoconf were originally written by hand or generated by other tools, and other, less automated approaches predated that;
- an Autoconf configure script itself does not ordinarily depend on Autoconf or m4. This is by design. As long as you have a complete distribution of an Autotools-based project (which, by definition, includes a configure script), you do not need to be able to run Autoconf or m4 to build the project.
The Autoconf manual has a chapter on the tool's history that might interest you.

Where can I find the code for the Linux binary utility 'strings'?

I am interested in how the binary utilities of Linux are coded and how they work. Where can I find the source code for them?
The strings utility is usually part of binutils, and since binutils is maintained by the Free Software Foundation and licensed under the GNU General Public License, the source code is available here:
http://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git
or packages of version related snapshots here:
ftp://sourceware.org/pub/binutils/snapshots
If you want to start with a general overview, try the Wikipedia page for binutils, or a more general description of the GNU toolchain.
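If you mainly want a feel for how strings works before reading the binutils source, the core idea is simple: scan the input and print every run of printable characters that is at least four bytes long. The following is a rough, simplified sketch of that idea, not the actual binutils implementation, which also handles object-file sections, encodings, and command-line options.

/* Simplified sketch of the idea behind strings(1): print every run of at
   least MIN_LEN printable characters found on standard input. */
#include <stdio.h>
#include <ctype.h>

#define MIN_LEN 4

int main(void)
{
    char buf[4096];
    size_t len = 0;
    int c;

    while ((c = getchar()) != EOF) {
        if (isprint(c) || c == '\t') {
            if (len < sizeof buf)   /* overly long runs are truncated in this sketch */
                buf[len++] = (char)c;
        } else {
            if (len >= MIN_LEN)
                printf("%.*s\n", (int)len, buf);
            len = 0;                /* run broken by a non-printable byte */
        }
    }
    if (len >= MIN_LEN)             /* flush a run that ends at EOF */
        printf("%.*s\n", (int)len, buf);
    return 0;
}

Feeding it a binary, for example ./a.out < /bin/ls, prints roughly what strings /bin/ls would print.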

Packaging Debian files: debian/copyright file format?

I am backporting a bunch of code (gcc 4.9 dependencies, X11, VLC, etc.) to run on older kernels, as *.deb files. In this process (I am new to packaging), I need to create a copyright file. I know I can leave it blank, but I would like to know: what is the copyright format?
Do I take the license of the software I am packaging, or can I give the package a different license than the source license?
I have been reading:
https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
but I am still confused about license versus copyright, and whether the license and copyright are different for the *.deb file I am making.
Can someone clarify this?
First, according to Debian policy, it's not necessary to use any particular format for the debian/copyright file, as long as the reader can easily tell what copyrights and license terms apply to the package or to individual files (and, of course, as long as those license terms are actually met). I personally appreciate it when a packager uses the copyright-format/1.0, though; it's usually much clearer to read.
The license information in debian/copyright should cover the copyright(s) and licensing of the package you're distributing, as well as any additional copyright and licensing you want to apply to your packaging.
You can't give the package a different license than the source license unless you have permission to do so (nothing gives you automatic permission to license someone else's property on your own terms). Some open source licenses do permit redistributing the source, or derivatives of the source, under a different license, as long as the copyright notice and a disclaimer are kept intact.
It's fairly common, for example, for code licensed under an MIT/X11-style license to be incorporated into code under a BSD-style or GPL license. The resulting combined work is then distributable as long as the terms of both licenses are met (this is not a very onerous requirement in the case of MIT/X11/BSD), and both copyright notices are included. If it's possible to meet the terms of two or more licenses at the same time, we say those licenses are "compatible".
Some works meticulously keep track of what copyrights and licenses apply to each particular file when combining source from multiple origins. Some instead apply all licenses and all copyrights of component parts to the entire combined work. Both are generally accepted by the open source community, as long as it's clear that an effort is being made to identify and comply with the original licenses. Both of those are easily representable in the copyright-format/1.0 syntax.
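For illustration, a minimal debian/copyright file in the copyright-format/1.0 syntax might look like the sketch below. The package name, upstream URL, years, names, and addresses are placeholders, and the license stanzas must of course reflect what actually applies to your upstream source and to your packaging.

Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: example-package
Source: https://example.org/example-package

Files: *
Copyright: 2015-2020 Upstream Author <upstream@example.org>
License: GPL-2+

Files: debian/*
Copyright: 2021 Your Name <you@example.org>
License: GPL-2+

License: GPL-2+
 This package is free software; you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation; either version 2 of the License, or
 (at your option) any later version.
 .
 On Debian systems, the complete text of the GNU General Public
 License version 2 can be found in /usr/share/common-licenses/GPL-2.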
(I am not a lawyer, this is not legal advice, consult a real lawyer if you are worried about actual legality of relicensing, etc.)

Can Clang compile code with GCC-compiled .a libs?

My project currently compiles under gcc. It uses Boost and ZeroMQ as static .a libraries, and some .so libraries like SDL. I want to go clang all the way, but not right now. Is it possible to use clang to compile code that uses .a and .so libraries that were compiled under gcc?
Yes, you usually can use clang with GCC-compiled libraries (and vice versa, use gcc with Clang-compiled libraries), because in fact it is not compilation but linking which is relevant. You might be unlucky and get unpleasant surprises.
You could in principle have some dependencies on the version of libstdc++ used to link the relevant libraries (if they are coded in C++). Actually, that usually does not matter much.
In C++, name mangling might in theory be an issue (there might be some corner cases, even incompatibilities between two different versions of g++). Again, in practice it is usually not an issue.
So usually you can mix Clang (even different but close versions of it) with GCC, but you may have unpleasant surprises. What should be expected from any C++ compiler (be it Clang or GCC) is just to be able to compile and link an entire piece of software (and all its libraries) together using the same compiler and version (and that includes the same C++ standard library implementation). This is why upgrading a compiler in a distribution is a lot of work: the distribution makers have to ensure that all the packages compile well (and they do get surprises!).
Beware that the version of libstdc++ does matter. Both the Clang and GCC communities work hard to keep its ABI compatible across compiler upgrades, but there are subtle corner cases. Read the documentation of your particular C++ standard library implementation. These corner cases could explain mysterious crashes when using a good C++ library binary (compiled with GCC 5) in your code compiled with GCC 8. The bug is not in the library; the ABI evolved incompatibly.
At least for the Crypto++ library this does not work (verified :-( ). So for C++ code it is less likely to work, while pure C code would probably link OK.
EDIT: The problem started appearing with Mac OS X 10.9 Mavericks and Xcode-5, which switched the default C++ library for clang from libstdc++ to libc++. It did not exist on Mac OS X 10.8 and earlier.
The solution appears to be: if you need to compile C++ code with clang, and link it to a gcc-compiled library, use "clang++ -stdlib=libstdc++". The linking is successful, and the resulting binary runs correctly.
CAVEAT: It does not seem to work the other way: even though you can build a library compiled with "clang++ -stdlib=libstdc++" and link gcc-compiled code with it, this code will crash with SEGV. So far, the only way I have found to link with a clang-compiled library is to compile your code with clang, not gcc.
EDIT2: GCC 12 seems to include a -stdlib= flag. Compiling with g++ -stdlib=libc++ creates Clang++-compatible object files. Very nice.
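To make the working direction from the solution above concrete, a minimal sketch of the build commands might look like this; the file and library names are hypothetical, chosen only for illustration.

g++ -c mylib.cpp -o mylib.o        # library object built with GCC (uses GCC's libstdc++)
ar rcs libmylib.a mylib.o          # pack it into a static .a archive
clang++ -stdlib=libstdc++ main.cpp -L. -lmylib -o app   # compile and link the app with clang against libstdc++

The key point is the last line: the application is compiled and linked with clang++, but told to use GCC's libstdc++ so that the C++ ABI matches the GCC-built archive.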
I have an additional data point to contribute, on the topic of "unpleasant surprises" when mixing code from different versions of different compilers. In my case, I link Victor Shoup's C++-based NTL number theory library with a small piece of driver code that just prints out a large factorial computed by the NTL code, a number whose decimal representation might span multiple lines if sufficiently large.
I have built and installed SageMath (and its version of NTL) on my system running OS X 10.11.6, and I also have a current installation of MacPorts. In /usr/bin, gcc --version reports:
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin15.6.0
My MacPorts gcc gives
gcc (MacPorts gcc9 9.1.0_2) 9.1.0
Now, the SageMath build system requires that MacPorts be moved out of the way, so I assume SageMath builds NTL using Apple's development toolset. The SageMath build log is full of invocations of gcc. SageMath actually builds gcc from source if the system on which the makefile is run has too old a version of Apple's developer tools.
My driver code computes big factorials and uses methods of the NTL class ZZ; I initially had tested this by linking to an NTL static library I built myself, and I changed it to link to the SageMath version because I find it pleasing not to duplicate libraries. Now I understand a bit more about the pitfalls which may arise in this process.
The old makefile invoked g++ to make the executable, but this failed at the linking phase with the message:
Undefined symbols for architecture x86_64:
"NTL::operator<<(std::basic_ostream<char, std::char_traits<char> >&, NTL::ZZ const&)",
referenced from:
prn_factorial(int, NTL::ZZ&) in print.o
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
I had to think about this and run experiments for about 15 minutes before deciding on my own to change the makefile to invoke clang++, which in my current path invokes the MacPorts version:
clang version 7.0.1 (tags/RELEASE_701/final)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
InstalledDir: /opt/local/libexec/llvm-7.0/bin
This time, the makefile successfully linked and built my executable. I conclude that this represents one of those edge cases with "unpleasant surprises". Probably I should conclude that working with details of C++ is not for me; big software systems like SageMath are developed just so hobbyists don't really have to muck around with details like these.

Cross-compiling Haskell code through GHC and MinGW tools

I've tried -fvia-C and the -pgm* flags, but none of them manage to create an executable; they spit out lots of errors like Warning: retaining unknown function ``L4' in output from C compiler.
GHC can't be used as a cross-compiler out of the box. The build system has some support for cross-compilation which we're currently working on improving. For more information, see CrossCompilation on the GHC wiki. I suggest taking further discussion to the glasgow-haskell-users or cvs-ghc mailing lists.
