clang llvm very long compilation time on cygwin

Hi, I have been compiling LLVM and Clang in my Cygwin environment using CC=gcc-4 and CXX=g++-4, since gcc 3.4.x doesn't seem to compile LLVM/Clang at all. My question is about the extremely long compilation time: I started compiling at 8 pm, it is now 1:35 am, the size of my build directory has grown past 8 gigabytes, and I still see
llvm[5]: Linking Debug+Asserts executable clang-format
Is this normal? Can I somehow make this faster?

Here are some stats:
Compiler: GCC 4.5.3
Clang, LLVM: 3.2
A Debug+Asserts build took me around 8 hours, with a total build size of over 11 gigabytes.
A Release+Asserts build took a mere 1 hour and produced only about 800 megabytes.
For the Release build (configured with --enable-optimized) I used make with -j 4. Still, I doubt the long compilation time was due only to the debug build, even though the build process itself warns:
Note: Debug build can be 10 times slower than an optimized build

I suspect this is because of Cygwin. You should be able to build them with MS Visual Studio, and some have done it with MinGW.

What you're seeing is pretty much expected. LLVM and Clang are written in C++, so there is a ton of debug information, and the linker has a really hard time merging everything together.
On Linux the usual suggestion is to try gold instead of ld; this usually speeds linking up tenfold.
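A rough sketch of both suggestions, assuming the autoconf build system that LLVM 3.2 uses (as in the question) and, for the gold part, a Linux host; exact paths are illustrative:

```shell
# Release (optimized) build: far smaller and faster to link than Debug+Asserts;
# --enable-optimized is the flag mentioned in the question above.
mkdir build && cd build
../llvm/configure --enable-optimized
make -j4

# On Linux, point "ld" at gold to speed up linking of debug binaries,
# e.g. via a symlink early in PATH (symlink location is illustrative):
mkdir -p ~/bin && ln -sf /usr/bin/ld.gold ~/bin/ld
export PATH=$HOME/bin:$PATH
```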

Related

Clang huge compilation?

Good Morning.
I am compiling Clang, following the instructions here: Getting Started: Building and Running Clang.
I am on Linux and the compilation goes smoothly, but I think I am missing something.
I want to compile ONLY Clang, not all the related projects. The option -DLLVM_ENABLE_PROJECTS=clang seems to do what I want (see LLVM_ENABLE_PROJECTS here).
If I follow the instructions written there, the build works, but I think I am compiling too much: a build directory of 70 GB seems excessive to me.
To compare, I downloaded the official Debian source and built the Debian package (same source code, just using the "Debian way" to create a package). That compilation also goes smoothly, is much faster, and the build directory is much smaller, as I expected.
I noticed in the first link I provided the phrase "This builds both LLVM and Clang for debug mode."
So, does anyone know if my problem is due to the fact that I am compiling a "debug mode" version? If so, how can I compile the default version? And is there a way to compile ONLY Clang without LLVM?
Yes, debug mode binaries are typically much larger than release mode binaries.
CMake normally uses CMAKE_BUILD_TYPE to determine the build type. It can be set from the command line with -DCMAKE_BUILD_TYPE="Release" or -DCMAKE_BUILD_TYPE="Debug" (sometimes other build types are available as well).
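A minimal sketch of the corresponding CMake invocation, combining the -DLLVM_ENABLE_PROJECTS=clang option from the question with an explicit Release build type (directory names are illustrative):

```shell
# from a checkout of the llvm-project monorepo
mkdir build && cd build
# build only clang (plus the LLVM libraries it depends on) in Release mode
cmake -DLLVM_ENABLE_PROJECTS=clang -DCMAKE_BUILD_TYPE=Release ../llvm
make -j4
```

Note that Clang cannot be built entirely without LLVM: it links against the LLVM libraries, so those are always built. LLVM_ENABLE_PROJECTS only controls which additional projects (lld, lldb, and so on) are built on top.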

Linking with gfortran: _edata: invalid version 21 (max 4)

I'm working with RHEL6 systems, but need to port code that uses C++11 (and even C++14) features. This forced me to build gcc-8.2 by hand, installed under a private prefix (/prod/pfe/local). This created a number of executables under /prod/pfe/local/bin: gcc, g++, ld, and gfortran.
I'm now trying to build CBLAS, which uses the above gfortran. Building the library (cblas_LINUX.a) works fine, but creating an executable fails with the cryptic error cited in the title:
gfortran -o xscblat1 c_sblat1.o c_sblas1.o ../lib/cblas_LINUX.a
/prod/pfe/local/lib/gcc/x86_64-pc-linux-gnu/8/../../../../x86_64-pc-linux-gnu/bin/ld: /prod/pfe/local/lib/gcc/x86_64-pc-linux-gnu/8/../../../../lib64/libgfortran.so: _edata: invalid version 21 (max 4)
/prod/pfe/local/lib/gcc/x86_64-pc-linux-gnu/8/../../../../x86_64-pc-linux-gnu/bin/ld: /prod/pfe/local/lib/gcc/x86_64-pc-linux-gnu/8/../../../../lib64/libgfortran.so: error adding symbols: bad value
Did I configure and build gfortran incorrectly? If not, how do I solve this problem -- with additional FFLAGS or LDFLAGS of some kind?
OK, according to the GCC developers, this is a known bug triggered by the use of the new linker (gold).
Rebuilding the compiler suite with --disable-gold solves the problem.
Update: correction -- somehow, disabling gold is not good enough. Going back to binutils-2.30 is what I ended up doing...
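A sketch of the rollback described in the update, assuming binutils 2.30 sources and the private prefix from the question (configure options beyond --prefix may be needed on a given system):

```shell
# build binutils 2.30 (classic BFD ld, without the problematic gold)
# into the private prefix
tar xf binutils-2.30.tar.gz
mkdir binutils-build && cd binutils-build
../binutils-2.30/configure --prefix=/prod/pfe/local
make && make install
# then rebuild gcc-8.2 so it picks up the ld from /prod/pfe/local/bin
```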

Building and using Boost for Windows Visual Studio with a custom ICU

I have been trying for a few days to build a project based on the UIMA C++ framework (http://uima.apache.org/). I am currently using version 2.4.0 release candidate 6, which comes with Linux and Windows binaries so that all dependencies are easily bundled.
In particular, it comes with binary libraries for ICU (3.6, I believe).
In my project, I am building a C++ UIMA annotator, and my code makes use of the Boost C++ library v1.51.0.
Everything compiles fine, but at runtime I get Access Violation exceptions when I start to use, say, operator <<(ostream&, const icu::UnicodeString&). It may be a version incompatibility between Boost and UIMA C++.
So I'm trying to recompile Boost on my machine, telling it to reuse the ICU that comes with UIMA C++, but there seems to be a problem with the MSVC toolset, because I always get messages telling me there is no ICU available when building Boost:
c:\Users\Sylvain\boost_1_51_0>b2 toolset=msvc-10.0 variant=release -sICU_LINK=c:\users\sylvain\apache-uima\uimacpp
Building the Boost C++ Libraries.
Performing configuration checks
- 32-bit : yes
- x86 : yes
- has_icu builds : no
warning: Graph library does not contain MPI-based parallel components.
note: to enable them, add "using mpi ;" to your user-config.jam
- iconv (libc) : no
- iconv (separate) : no
- icu : no
- icu (lib64) : no
- gcc visibility : no
- long double support : yes
Has anyone managed to build Boost with the -sICU_PATH option and MSVC?
Thanks,
Sylvain
I just had to build Boost with ICU (and succeeded). Since this question is one of the first results on Google (and is not of very much help right now), I decided to share what I learned.
I was doing an x64 build of Boost 1.56 with MSVC11 (Visual Studio 2012), linking against a custom build of ICU 4.8.1.
First of all, Boost's directory detection for ICU seems a little weird. Here is what my final layout for the ICU directory looked like:
my_icu_root
+- bin
+- bin64
+- include
|  +- layout
|  +- unicode
+- lib
+- lib64
I copied all ICU DLLs (both Debug and Release versions) to bin, all libs (again, Debug and Release) to lib, and all header files to include. To make bjam happy, I also had to copy the full bin and lib directories to bin64 and lib64, respectively. Without both directories, either the detection of ICU or the compilation of Boost.Locale would fail on my machine.
With this layout, all I had to do was to add -sICU_PATH=<my_icu_root> to my usual bjam command line to get it to build.
You know that Boost successfully detected ICU if you get both
- has_icu builds : yes
[...]
- icu : yes
during configuration.
Here is some advice if, for some reason, it does not work right away.
Bjam caches configure information in bin.v2/project-cache.jam. If you re-run bjam after a failed configuration, be sure to delete that file first; otherwise bjam might decide to skip ICU detection altogether (you will see lots of (cached) in the console output).
If configuration fails, take a look at bin.v2/config.log for detailed information on what went wrong. Most likely it was unable to compile the test program at libs/regex/build/has_icu_test.cpp. The log contains the command line of the build command, which is usually enough to find out what went wrong. If the log seems suspiciously empty, you probably forgot to delete project-cache.jam.
Finally, a successful configure run is no guarantee of a successful build. On my machine, I managed to configure everything correctly but still had Boost.Locale fail during the build because of missing lib files. So be sure to check the build output for failed or skipped targets.
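The retry loop described above can be sketched as follows (the command line is assumed from the MSVC11/x64 build mentioned earlier; <my_icu_root> as in the layout above):

```shell
:: delete the cached configure results so bjam re-runs its checks
del bin.v2\project-cache.jam
bjam toolset=msvc-11.0 address-model=64 -sICU_PATH=<my_icu_root> stage
:: if "- icu : no" still shows up, inspect the failing command in
:: bin.v2\config.log
```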
Good luck!
Take a look at boost/libs/regex/build/has_icu_test.cpp. I can't remember the fix/issue off the top of my head, but you should be able to cheat and simply return 0 from main() there.
Maybe Boost doesn't work with a six-year-old ICU. Can you rebuild UIMA instead?
My command line is as follows:
bjam -sICU_PATH=c:\icu --toolset=msvc-10.0 variant=release stage
Just look into \bin.v2\config.log.
It contains the exact error. In my case it was the absence of a specific library at link time:
...found 10 targets...
...found 3 targets...
...found 66 targets...
...updating 2 targets...
msvc.link bin.v2\libs\regex\build\msvc-10.0\debug\threading-multi\has_icu.exe
LINK : fatal error LNK1181: cannot open input file 'icuind.lib'
The problem is that the Boost build looks for the debug library even when asked for variant=release.
I'm experiencing the same problem, and the way I chose to work around it is to make a copy of icuin.lib named icuind.lib, and likewise for the other libs. Then bjam says it has found ICU.
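A sketch of that workaround; the directory and the additional import-library names (icuuc.lib, icudt.lib) are assumptions based on ICU's usual naming, and the release libraries are simply aliased under the d-suffixed debug names bjam looks for:

```shell
:: alias the release import libraries under the "d"-suffixed
:: debug names that bjam insists on finding
cd c:\icu\lib
copy icuin.lib icuind.lib
copy icuuc.lib icuucd.lib
copy icudt.lib icudtd.lib
```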

How to compile NodeJS on a D-Link DNS 325 with fun_plug 0.5 installed?

I am trying to compile Node on my NAS device, but I get the error below and I don't really know how to make this work:
/node-v0.6.6/deps/v8/src/arm/constants-arm.h:33:2: error: #error ARM EABI support is
required.
scons: *** [obj/release/accessors.o] Error 1
scons: building terminated because of errors.
Waf: Leaving directory `/ffp/home/root/node-v0.6.6/out'
Build failed: -> task failed (err #2):
{task: libv8.a SConstruct -> libv8.a}
Did someone actually manage to get Node to compile on a D-Link NAS? Does anyone know of official guides for doing this, or where I should ask for help?
Many thanks.
DNS320, fun_plug, NodeJS (DNS323 was my original target); this also appears to apply to the SheevaPlug!
(As a matter of interest, the 323 takes about 7.5 hours to compile, whilst the 320 takes 1 hour 3 minutes.)
(A compile on a 1.8 GHz Intel Linux (Debian) box takes about 15 minutes.)
========================
NOTE: on fun_plug you need the following installed:
binutils
kernel-headers
pkg-config
uclibc
gcc
make
gettext
patch
bison
flex
autoconf
automake
=======================
then you can:
export TMPDIR=/ffp/tmp (need this to put tmp files on the HD, not in memory!)
export CC='gcc -march=armv5t -mfloat-abi=softfp -fno-tree-sink -O0'
export CCFLAGS='-march=armv5t -mfloat-abi=softfp -fno-tree-sink -O0'
export CXX='g++ -march=armv5t -mfloat-abi=softfp -fno-tree-sink -O0'
export GCC='-march=armv5t -mfloat-abi=softfp -fno-tree-sink -O0'
./configure --prefix=/ffp --without-snapshot
......
make
.............
make install
Notes:
I have had experience with NodeJS 0.4.9, so I have kept using it -- read on as to why!
It actually compiles without error!
BUT!
When run, "Illegal instruction" pops up.
This appears to be because V8 is EXPECTED to be compiled on a host machine that is NOT an ARM!
See below for the references I've found (it has taken me 6 months to find all these).
Someone, somewhere has decided that running on ARM means an embedded environment, and therefore you will never compile on it!
Can someone with authority fix this?!
The fact that I can get a clean compile says it can work, but V8 doesn't seem to want us to.
original instructions:
https://github.com/joyent/node/wiki/Installation
What else I have found:
http://code.google.com/p/v8/wiki/CrossCompilingForARM
http://code.google.com/p/v8/issues/detail?id=914
http://code.google.com/p/v8/issues/detail?id=1632&q=vfp%20off&colspec=ID%20Type%20Status%20Priority%20Owner%20Summary%20HW%20OS%20Area%20Stars
https://github.com/joyent/node/issues/1566
http://fastr.github.com/articles/Node.js-on-OpenEmbedded.html
http://freebsd.1045724.n5.nabble.com/problems-with-cvsup-on-FreeBSD-9-snapshot-201101-td4491053.html
http://code.google.com/p/v8/issues/detail?id=1446
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0133c/index.html
http://infocenter.arm.com/help/topic/com.arm.doc.qrc0007e/QRC0007_VFP.pdf
https://github.com/joyent/node/issues/1386
https://github.com/joyent/node/issues/2131
and from this article, the minimum set of fun_plug packages that needs to be pre-installed:
http://www.shadowandy.net/2008/08/adding-compiling-capabilities-to-your-dns-323.htm
DNS323 - wiki site (new one??!!)
http://dns323.kood.org/dns-320
http://tsd.dlink.com.tw/downloads2008detail.asp (open source for dlink find dns then 320 - or 323 etc)
compiler options:
http://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html
debugging options:
http://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html
Interest:
http://itrs.tw/wiki/Code_Sourcery_G%2B%2B_Lite
http://pandorawiki.org/Floating_Point_Optimization
http://wiki.debian.org/ArmHardFloatPort/VfpComparison
https://groups.google.com/group/nodejs-dev/browse_thread/thread/18bfc495b01e2f9b/8507143b5578ebf9#8507143b5578ebf9
http://stackoverflow.com/questions/6788768/cannot-build-node-on-sheevaplug-armv5t-with-debian-squeeze/6790823#6790823
http://www.plugcomputer.org/plugwiki/index.php/Scratchbox2_based_cross_compiling
Why 0.4.9, and not 0.6.6?
0.6.6 may be crapping out in the same place, but the error says EABI error, and the compiler that ships with fun_plug doesn't understand EABI.
I think the V8 guys (or the Node guys) have relabelled the error descriptor to EABI; I don't know enough to trace the problem.
There are heaps of other switches available for the compilers -- I have given up trying them all (other people seem to have hit the same problem, and they are MUCH more knowledgeable than I am, so I assume they have tried all the switches that could make a difference).
As a matter of interest, the V8 compile only uses the CXXFLAGS switch, whilst the rest of NodeJS seems to use the other variables I set above.
Also note that in order to get snapshot running, it points at /tmp no matter what I do, so I ended up moving /tmp aside and symlinking it to /ffp/tmp, i.e.:
mv /tmp /tmp1
ln -s /ffp/tmp /tmp

Compiled gcc4.4.6 on one machine, how to let another machine use it?

I built gcc 4.4.6 (to use CUDA) on a fast server; it takes about 10 minutes there. However, on my own desktop it takes practically forever to compile.
Both machines are 64-bit Linux, although one runs Ubuntu while the other runs Arch Linux. Arch Linux has a newer kernel version.
So on the server I installed the built gcc-4.4.6 into /opt, and I just copied /opt/gcc-4.4.6 to my PC's /opt/gcc-4.4.6.
Hmm, it doesn't quite seem to work; when I tried
./x86_64-unknown-linux-gnu-gcc ~/Development/c/hello/hello.c
it shows
x86_64-unknown-linux-gnu-gcc: error trying to exec 'cc1': execvp: No such file or directory
So what can I do now?
Thanks,
Alfred
If the systems are similar enough, you could compile GCC on the big machine (don't forget that GCC needs to be configured and built in a directory outside of its source tree), run make -j3 all and then make install DESTDIR=/tmp/gccinst/, copy that /tmp/gccinst directory to your small machine, and finally copy its contents into the root filesystem (on the small machine).
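The steps described above can be sketched as follows (source and staging paths are illustrative, and --enable-languages is an assumed option):

```shell
# configure and build GCC outside of its source tree
mkdir gcc-build && cd gcc-build
../gcc-4.4.6/configure --prefix=/opt/gcc-4.4.6 --enable-languages=c,c++
make -j3 all
# stage the install under DESTDIR instead of writing to / directly
make install DESTDIR=/tmp/gccinst/
# then copy /tmp/gccinst to the small machine and, there, merge it
# into the root filesystem:
#   cp -a /tmp/gccinst/. /
```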
However, GCC 4.4.6 is quite old today; if you are compiling GCC, try to build GCC 4.6.2 (or at least 4.6.1).
And (shameless plug for my work) if you compile a GCC 4.6, please enable plugins on it, then you might try the GCC MELT [meta-] plugin (MELT is a high level domain specific language to ease the development of GCC extensions).
