Solver libraries fail to load on Linux - Alloy

Alloy fails to load dynamic libraries of the included solvers from the jar file.
I installed Alloy from the AUR on 64-bit Linux. The jar file does contain, e.g.:
99085 Sun Feb 22 19:21:37 AST 2015 amd64-linux/libglucose.so
However, when I launch Alloy, I get messages like:
Failed to load: libglucose.so
How do I get alloy to find these libraries?
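One generic JNI workaround (not from this thread; the jar name and paths below are assumptions) is to extract the bundled native libraries from the jar and point the JVM's java.library.path at them when launching Alloy:
# Extract the bundled Linux solver libraries (jar name is an assumption)
unzip alloy4.2.jar 'amd64-linux/*' -d /tmp/alloy-natives
# Launch Alloy with java.library.path pointing at the extracted directory
java -Djava.library.path=/tmp/alloy-natives/amd64-linux -jar alloy4.2.jar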

Related

Why is "version.lib" missing in the process of building clang with clang-cl?

I successfully built standalone LLVM on Windows with clang-cl (the clang 8.0 downloadable binary) against the MSVC Build Tools 2017 back end with the Windows 10 SDK, using CMake/Ninja.
After that, when I was building standalone clang, it reported that "version.lib" was missing in the linking phase of clang-rename.exe:
LINK Pass 1: command "....
" failed (exit code 1104) with the following output:
LINK : fatal error LNK1104: cannot open file 'version.lib'
The weird thing is that the word version.lib sat in place among the various lib\clang?????.lib entries and the leading -LIBPATH:llvm\\.\lib.
I tried looking for version.lib in both the llvm and clang build folders, and found none.
Am I supposed to have version.lib in llvm\lib?
What am I missing here?
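For what it's worth, version.lib normally ships with the Windows 10 SDK (under its um library directory) and is found through the LIB environment variable set up by the Visual Studio developer environment; a quick sanity check (the vcvarsall.bat path below is an assumption about the install location) is:
:: Set up the MSVC/SDK environment, then confirm a Windows SDK "um" lib directory is on LIB
call "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x64
echo %LIB%
:: version.lib should be in the SDK's um\x64 directory listed there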

How to build glibc with reduced size?

I'm trying to download glibc 2.23 sources and build them on my Ubuntu system.
I need to build that specific version from source to get a modified glibc customized for my research; it will be used only within my research apps via the loader environment variables (e.g., LD_PRELOAD or LD_LIBRARY_PATH).
But when building it as follows, I got a huge output file (libc.so weighs about 11 MB):
download the sources to some local dir (let's say /tmp/glibc/)
create new directory for build results (/tmp/glibc/build)
run configure from build dir:
<build-dir>$ ../configure --prefix=<build-dir>
As a result, the build process produces a libc.so file under build-dir with a size of about 11 MB.
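In command form, what I ran was roughly the following (same paths as in the steps above):
# Out-of-tree build exactly as described in the steps above
mkdir -p /tmp/glibc/build
cd /tmp/glibc/build
../configure --prefix=/tmp/glibc/build
make -j"$(nproc)"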
Is there any way to reduce the size of the built libc.so?
p.s.
Here are my system details:
Linux version 4.4.0-93-generic (buildd@lgw01-03) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #116-Ubuntu SMP Fri Aug 11 21:17:51 UTC 2017
Thanks :)
Building glibc from source could be a bad idea. See this and some comments there. Its current version is GNU libc 2.26... Consider instead upgrading your entire Ubuntu distribution (Ubuntu 17.10 should be released in a few weeks, at the end of October 2017).
../configure --prefix=<build-dir>
is a misunderstanding of the role of --prefix in autoconf-ed software. It relates to where the software is installed, not to its build directory.
(And I don't know exactly what your --prefix should be; since libc is so essential to your system, perhaps it should be --prefix=/, but you should check carefully.)
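To illustrate the usual autoconf convention (the install directory below is just an example, not a recommendation for glibc specifically): --prefix only names where make install will later copy files; the build itself happens wherever you run make.
# Out-of-tree build; --prefix names the future install location, not the build directory
mkdir -p /tmp/glibc/build && cd /tmp/glibc/build
../configure --prefix=/opt/myglibc    # example install prefix
make -j"$(nproc)"
make install                          # only this step copies files under /opt/myglibc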
Is there any way to reduce the size of the built libc.so?
You might use (very carefully) strip(1), but you risk breaking your system.
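If you do try strip, a safer sketch (paths are illustrative) is to work on a copy of the freshly built library rather than on the libc your system is actually running:
# Strip debug symbols from a copy, never from the system's own libc
cp /tmp/glibc/build/libc.so /tmp/libc-stripped.so
strip --strip-debug /tmp/libc-stripped.so
ls -lh /tmp/glibc/build/libc.so /tmp/libc-stripped.so    # compare sizes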
And you might not care about reducing the size of libc, since it is used (and shared) by almost every piece of software on your Linux system!
BTW, also consider musl-libc. It can coexist nicely with GNU glibc, and in practice it is used only by programs built with musl-gcc (provided by it).
If you are doing some research, it would be reasonable to work in a chroot(2)-ed environment. See also schroot. You could install with the help of make install DESTDIR=/tmp/instmylibc, then copy that /tmp/instmylibc appropriately. Read more about autoconf.
PS. Be sure to at least back up your important data before such dangerous experiments. I don't think the size of your libc.so should be a significant concern. But you do need to use chroot, perhaps with the help of debootstrap when installing the chrooted environment.
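A rough sketch of that DESTDIR + chroot workflow (the Ubuntu suite name, mirror, and target paths are assumptions):
# Stage the freshly built glibc into a throwaway directory instead of the live system
make install DESTDIR=/tmp/instmylibc
# Create a minimal chroot to experiment in
sudo debootstrap xenial /tmp/mychroot http://archive.ubuntu.com/ubuntu
# Copy the staged tree into the chroot (adjust carefully; this replaces the chroot's own glibc files)
sudo cp -a /tmp/instmylibc/. /tmp/mychroot/
sudo chroot /tmp/mychroot /bin/bash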

Building and using Boost for Windows Visual Studio with a custom ICU

I have been trying for a few days to build a project based on the UIMA C++ framework (http://uima.apache.org/). I am currently using the version 2.4.0 release candidate 6, which comes with Linux and Windows binaries so that all dependencies are easily bundled.
In particular, it comes with binary libraries for ICU (3.6 I believe).
In my project, I am building a C++ UIMA annotator, and my code makes use of the Boost C++ libraries v1.51.0.
Everything compiles fine, but at runtime I get Access Violation exceptions when I start to use, say, operator<<(ostream&, const icu::UnicodeString&). It may be a version incompatibility between Boost and UIMA C++.
So I'm trying to recompile Boost on my machine, telling it to reuse the ICU that comes with UIMA C++, but there seems to be a problem with the MSVC toolset, because I always get messages telling me there is no ICU available when building Boost:
c:\Users\Sylvain\boost_1_51_0>b2 toolset=msvc-10.0 variant=release -sICU_LINK=c:\users\sylvain\apache-uima\uimacpp
Building the Boost C++ Libraries.
Performing configuration checks
- 32-bit : yes
- x86 : yes
- has_icu builds : no
warning: Graph library does not contain MPI-based parallel components.
note: to enable them, add "using mpi ;" to your user-config.jam
- iconv (libc) : no
- iconv (separate) : no
- icu : no
- icu (lib64) : no
- gcc visibility : no
- long double support : yes
Has anyone managed to build Boost with the -sICU_PATH option and MSVC?
Thanks,
Sylvain
I just had to build Boost with ICU (and succeeded). Since this question is one of the first results on Google (and not very helpful right now), I decided to share what I learned.
I was doing an x64 build of Boost 1.56 with MSVC11 (Visual Studio 2012), linking against a custom build of ICU 4.8.1.
First of all, Boost's directory detection for ICU seems a little weird. Here is what my final layout for the ICU directory looked like:
my_icu_root
+- bin
+- bin64
+- include
|   +- layout
|   +- unicode
+- lib
+- lib64
I copied all ICU DLLs (both Debug and Release versions) to bin, all libs (again, Debug and Release) to lib, and all header files to include. To make bjam happy, I also had to copy the full bin and lib directories to bin64 and lib64, respectively. Without both directories, either the detection of ICU or the compilation of Boost.Locale would fail on my machine.
With this layout, all I had to do was to add -sICU_PATH=<my_icu_root> to my usual bjam command line to get it to build.
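For reference, my full command line then looked roughly like this (the toolset, address-model, and ICU path are from my setup; adjust them to yours):
:: x64 release build with VS2012, linking against the custom ICU under C:\my_icu_root
b2 toolset=msvc-11.0 address-model=64 variant=release -sICU_PATH=C:\my_icu_root stage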
You know that Boost successfully detected ICU if you get both
- has_icu builds : yes
[...]
- icu : yes
during configuration.
Here is some advice if, for some reason, it does not work right away.
Bjam caches configure information to bin.v2/project-cache.jam. If you try to re-run Bjam after a failed configuration, be sure to delete that file first. Otherwise bjam might decide to just skip ICU detection altogether (you will see lots of (cached) in the console output).
If configuration fails, take a look at bin.v2/config.log to get detailed information on what went wrong. Most likely, it was unable to compile the test program at libs/regex/build/has_icu_test.cpp. This log contains the command line of the build command, which is usually enough to find out what went wrong. If the log seems suspiciously empty, you probably forgot to delete the project-cache.jam.
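In practice that boils down to something like the following, run from the Boost root (paths as in the notes above):
:: Remove the cached configure results, then inspect the detailed log of the has_icu check
del bin.v2\project-cache.jam
type bin.v2\config.log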
Finally, a successful configure run is no guarantee of a successful build. On my machine, I managed to configure everything correctly but still had Boost.Locale fail during the build because of missing lib files. So be sure to check the build output for failed or skipped targets.
Good luck!
Take a look at boost/libs/regex/build/has_icu_test.cpp. I can't remember the fix/issue off the top of my head, but you should be able to cheat and simply return 0 from main() there.
Maybe Boost doesn't work with a six-year-old ICU. Can you rebuild UIMA instead?
My command line is as follows:
bjam -sICU_PATH=c:\icu --toolset=msvc-10.0 variant=release stage
Just look into \bin.v2\config.log
It contains the exact error. In my case, it was the absence of a specific library for linking:
...found 10 targets...
...found 3 targets...
...found 66 targets...
...updating 2 targets...
msvc.link bin.v2\libs\regex\build\msvc-10.0\debug\threading-multi\has_icu.exe
LINK : fatal error LNK1181: cannot open input file 'icuind.lib'
The problem is that Boost.Build looks for the debug library anyway, even when variant=release is requested.
I'm experiencing the same problem. The way I chose to work around it was to make a copy of icuin.lib and name it icuind.lib, and likewise for the other libs. Then bjam says it has found ICU.
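In other words, something along these lines (the ICU lib directory and the exact set of libraries are assumptions; copy whichever ones bjam complains about):
:: Give bjam the debug-suffixed names it looks for by duplicating the release import libs
copy C:\icu\lib\icuin.lib C:\icu\lib\icuind.lib
copy C:\icu\lib\icuuc.lib C:\icu\lib\icuucd.lib
copy C:\icu\lib\icudt.lib C:\icu\lib\icudtd.lib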

ELFsh: inject compiled C code into an existing binary

I have hit a problem working with the ELFsh library, specifically when attempting to use the example in the testsuite which demonstrates injecting compiled C code into an existing binary.
What I've done so far is:
Check out ELFsh from SVN (svn checkout http://svn.eresi-project.org/svn/trunk/ eresi)
Run ./env.sh, ./configure --enable-32-64 --enable-m64, make, and sudo make install. I also tried sudo make install64. After installing the dependencies, the library built and installed fine.
eresi/testsuite/elf/etrel_inject/etrel_original contains the test I am interested in running. In there, I made a small change to the Makefile, which I believe has a small bug: -m32 is not explicitly specified in the all32 target, which causes the results of all32 and all64 to be the same when compiled on a 64-bit system. Next:
I ran make (it builds everything okay); the full command sequence is collected just below.
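For reference, the checkout-and-build sequence above written out as commands (flags copied from the steps; nothing new added):
# Check out and build ERESI/ELFsh as described above
svn checkout http://svn.eresi-project.org/svn/trunk/ eresi
cd eresi
./env.sh
./configure --enable-32-64 --enable-m64
make
sudo make install        # also tried: sudo make install64
# Build the injection test
cd testsuite/elf/etrel_inject/etrel_original
make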
Now, my understanding of what is supposed to happen is that there is a target, host.c, which is compiled to hijackme32 and hijackme64. These are 32- and 64-bit binaries into which rel.32.o and rel.64.o are injected, respectively.
This injection can be performed in two ways in the test: through the ELFsh scripts (relinject32.esh and relinject64.esh) and through the C API (relinject32 and relinject64).
Running relinject32 results in:
[E] Unable to load object
Running relinject64 results in:
[E] Unable to copy PLT
To get finer-grained information about where the problem lies, I loaded up ELFsh interactively and went through the steps of relinject32/64.esh one by one. Both scripts fail when loading the target:
(elfsh-0.82-b2-dev@local) load hijackme32
[E] Cannot load object
(elfsh-0.82-b2-dev@local) load hijackme64
[*] Sun Mar 18 02:49:04 2012 - New object loaded : hijackme64
Architecture EM_X86_64 : AMDx86-64 architecture not supported. No flowjack available.
[E] Libmjollnir unsupported architecture
Oddly, attempting to load the 32-bit target with its full path results in a different error:
(elfsh-0.82-b2-dev@local) load /home/mike/Desktop/eresi/testsuite/elf/etrel_inject/etrel_original/hijackme32
Architecture EM_X86_64 : AMDx86-64 architecture not supported. No flowjack available.
[E] Cannot load object
This is surprising, because file reports:
mike@mike-linux: file hijackme32
hijackme32: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped
mike@mike-linux: file hijackme64
hijackme64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped
The current system I am testing on is x86-64 Ubuntu.
My question is how do I get this test to run? Am I setting up the environment or building incorrectly?
I tried going on IRC (the preferred method of communication), but the project (and channel) seem pretty much dead now.

Build problems using MacPorts libs with GHC

I'm trying to follow a tutorial for the Diagrams library for Haskell.
I've installed Cairo and gtk2hs with MacPorts.
But when I try to run the tutorial examples, I get the following error:
$ ghc --make diagramsTutorial.lhs
Linking diagramsTutorial ...
ld: warning: in /opt/local/lib/libgtk-x11-2.0.dylib, file was built for unsupported file format which is not the architecture being linked (i386)
ld: warning: in /opt/local/lib/libgdk-x11-2.0.dylib, file was built for unsupported file format which is not the architecture being linked (i386)
ld: warning: in /opt/local/lib/libatk-1.0.dylib, file was built for unsupported file format which is not the architecture being linked (i386)
ld: warning: in /opt/local/lib/libpangocairo-1.0.dylib, file was built for unsupported file format which is not the architecture being linked (i386)
.. etc...
I'm using OS X 10.6.8, core i5 macbook pro.
EDIT: I have just found that I'm using the 32-bit Haskell Platform, which may be part of the problem. However, I can't install the 64-bit version (it fails with an unspecified error during install), and I can't find the uninstaller. According to this page, I should find an uninstaller at /Library/Frameworks/GHC.framework/Tools/Uninstaller, but there is nothing there.
It looks like the MacPorts libraries are 64-bit only. You can check with lipo -info /opt/local/lib/libgtk-x11-2.0.dylib. If that is the case, you should reinstall them using the +universal variant, which allows linking against both 32-bit and 64-bit code.
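A sketch of both steps (the port name gtk2 and the use of upgrade --enforce-variants are assumptions about how the libraries were installed):
# Check which architectures the installed library actually contains
lipo -info /opt/local/lib/libgtk-x11-2.0.dylib
# Rebuild the port with both 32-bit and 64-bit slices
sudo port upgrade --enforce-variants gtk2 +universal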

Resources