unable to execute 'x86_64-conda_cos6-linux-gnu-gcc': No such file or directory (pysam installation) - linux

I am trying to install pysam.
After executing:
python path/to/pysam-master/setup.py build
This error is produced:
unable to execute 'x86_64-conda_cos6-linux-gnu-gcc': No such file or directory
error: command 'x86_64-conda_cos6-linux-gnu-gcc' failed with exit status 1
There are similar threads, but they all seem to address the problem assuming administrator rights, which I do not have. Is there a way to work around this and install the needed files?
DISCLAIMER: This question derives from a previous post of mine:
manually installing pysam error: "ImportError: No module named version"
But since it might require a different approach, I made it a question of its own.

You can also receive the same error while installing some R packages if R was installed using conda (as in my case).
Then just install the package by executing conda install gxx_linux-64 to make that command available.
Source:
https://github.com/RcppCore/Rcpp/issues/770#issuecomment-346716808
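As a minimal sketch of that fix (the environment name r-env is only an example, and exact activation behaviour depends on your conda version):
conda activate r-env                       # the environment holding your conda-installed R
conda install gxx_linux-64 gcc_linux-64    # the prefixed C/C++ compilers
conda deactivate && conda activate r-env   # re-activate so the compiler packages' activation scripts can set CC/CXX
After re-activating, the prefixed compiler (x86_64-conda_cos6-linux-gnu-gcc) should be available inside the environment.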

It looks like Anaconda had a new release (4.3.27) that sets the C compiler path to a non-existent executable (quite an embarrassing bug; I'm sure they'll fix it soon). I had a similar issue with pip install using the latest Miniconda, which I fixed by using the 4.3.21 version and making sure I was not doing something like conda update conda.
See https://repo.continuum.io/miniconda/ which has release dates and versions.

It should now be safe to update conda. This is fixed in the following python packages for linux-64:
python-3.6.2-h0b30769_14.tar.bz2
python-2.7.14-h931c8b0_15.tar.bz2
python-2.7.13-hac47a24_15.tar.bz2
python-3.5.4-hc053d89_14.tar.bz2
The issue was as Jon Riehl described: we (Anaconda, formerly Continuum) build all of our packages with a new GCC package that we created using crosstool-ng. This package does not have gcc; it has a prefixed gcc, the missing command you're seeing: x86_64-conda_cos6-linux-gnu-gcc. This gets baked into Python, and any extension built with that Python goes looking for that compiler.
We have fixed the issue using the _PYTHON_SYSCONFIGDATA_NAME variable that was added to Python 3.6, and we have backported that to Python 2.7 and 3.5. You'll now only ever see Python using the default compilers (gcc), and you must set _PYTHON_SYSCONFIGDATA_NAME to the appropriate filename to have the new compilers used. Setting this variable is something that we'll put into the activate scripts for the compiler package, so you'll never need to worry about it.
It may take us a day or two to get new compiler packages out, though, so post issues on the conda-build issue tracker if you'd like to use the new compilers and need help getting started.
Relevant code changes are at:
py27: https://github.com/anacondarecipes/python-feedstock/tree/master-2.7.14
py35: https://github.com/anacondarecipes/python-feedstock/tree/master-3.5
py36: https://github.com/anacondarecipes/python-feedstock
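Until the patched compiler packages are out, a hedged workaround sketch for the original pysam build (not the official fix) is to point the build at the system compilers, or to install the prefixed compilers that the baked-in name refers to:
CC=gcc CXX=g++ python path/to/pysam-master/setup.py build   # distutils generally honours CC/CXX overrides
conda install gcc_linux-64 gxx_linux-64                     # alternatively, provide x86_64-conda_cos6-linux-gnu-gcc itself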

The solution that worked for me was to use conda to install the R packages:
conda install -c r r-tidyverse
or r-ggplot2, r-readr.
Also ensure that the installation is not failing because of admin privileges.
It will save you a great deal of pain.

After upgrading Golang to 1.19.1, I started to get:
# runtime/cgo
cgo: C compiler "x86_64-conda-linux-gnu-cc" not found: exec: "x86_64-conda-linux-gnu-cc": executable file not found in $PATH
Installing gcc_linux-64 from the same channel resolved it:
conda install -c anaconda gcc_linux-64
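Alternatively, a sketch that sticks with the system toolchain instead of conda's: cgo also honours the CC environment variable.
CC=gcc go build ./...   # tell cgo to use the system gcc rather than the conda-prefixed compiler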

Somewhere in your $PATH (e.g., ~/bin), do
ln -sf $(which gcc) x86_64-conda_cos6-linux-gnu-gcc
Don't put this in a system directory or conda's bin directory, and remember to remove the link when the problem is resolved upstream. gcc --version should be version 6.
EDIT: I understand the sentiment in the comments against manipulating system paths, but maybe we can apply a little critical thinking to the actual case at hand before reciting doctrine. What have we actually done with the command above? Nothing more than putting an executable (symlink) called x86_64-conda_cos6-linux-gnu-gcc in one's personal ~/bin directory.
If putting something in one's personal ~/bin directory broke a future conda (after it fixes the C compiler path to point to the gcc it embeds), then that would be a bug in conda. Would the existence of this verbosely named compiler mess with anything else? That's unlikely too. Even if something did pick it up, it's just your system gcc after all...
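Spelled out as a minimal sketch (~/bin is only an example of a personal directory on PATH):
mkdir -p ~/bin
ln -sf "$(which gcc)" ~/bin/x86_64-conda_cos6-linux-gnu-gcc
export PATH="$HOME/bin:$PATH"                 # only needed if ~/bin is not already on PATH
gcc --version                                 # per the note above, this should report GCC 6
# and once the problem is fixed upstream:
# rm ~/bin/x86_64-conda_cos6-linux-gnu-gcc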

Related

Install an external library instance of libv8-3.14 to folder

I need libv8-3.14 to run some R packages on Linux, but I don't have root/sudo access on the Linux computer I'm using, so I'd like to install libv8-3.14 into an external folder. I've seen R packages reference this external location as CDFLAG="folder/v8-3.14", so I know it is possible.
I'm new(ish) to Linux, but I've installed external libraries before from tar.gz files, which contain a configure script; I set the external folder with ./configure --prefix=/folder/loc. However, the only downloads I can find of libv8 are .git (which I can't get to work either).
How can I install libv8-3.14 to a folder so I can set:
export PATH=$PATH:/path/to/install/
and
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/install/
I had the exact same problem. In case somebody comes across this post in the future, I will leave my suggestions and how it worked out in the end. All credit goes to an experienced colleague of mine.
The surest thing to do is to consult IT, or someone who has already had the same problem; there is usually a workaround for these issues.
A way you can do it yourself:
Create an anaconda environment; you can name it 'V8' or something (make sure the environment is based on the latest Python version, or one recent enough for r-v8).
Activate it.
Install the conda version of the V8 R interface with conda install -c conda-forge r-v8.
That's it. Whenever you need V8, fire up your environment beforehand, and it should be A-OK (a condensed sketch of these steps follows below).
Further advice: if you run into errors when installing r-v8, it may be a good idea to update your conda and all the packages. However, depending on your conda version, conda update conda and conda upgrade --all MAY BREAK your conda installation, so be careful. (For further information on this problem, see the endless complaints in this issue: https://github.com/conda/conda/issues/8920.)
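A condensed sketch of those steps (the environment name 'v8env' is just an example):
conda create -n v8env python=3    # a recent Python, as noted above
conda activate v8env
conda install -c conda-forge r-v8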
V8 doesn't use autotools, so it has no ./configure. In fact, it provides no installation facilities at all, because it is meant for embedding, not installing.
What I would try is to download the Ubuntu package (guessing from your other question, you are on Ubuntu, right?) for the right architecture from https://packages.ubuntu.com/trusty/libv8-3.14.5 and extract it manually. .deb files are just ar archives, so they can be unpacked without installing them.
As a side note, there's no point in setting PATH, because libv8, being a library, provides no executables. LD_LIBRARY_PATH is all you need.
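A sketch of that manual extraction into a user-writable prefix (the target directory is an example, and the .deb filename depends on what you downloaded):
mkdir -p ~/local/libv8
dpkg-deb -x libv8-3.14.5_*_amd64.deb ~/local/libv8       # dpkg-deb -x needs no root
export LD_LIBRARY_PATH=~/local/libv8/usr/lib:$LD_LIBRARY_PATH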

Building own package for conda gcc and binutils issue

This post summarizes my painful but finally successful (partly by chance) way to build my own conda package for the netgen meshing tool with its Python interface. I found the recipe for the netgen build thanks to tpaviot.
After cloning the repository into a 'netgen-conda' folder, I ran:
conda build netgen-conda/netgen-6.2-dev
Which reports "Unsatisfiable dependencies": 'oce', 'gcc-5', 'binutils'.
So I tried to install these packages myself. Unfortunately, the documentation does not emphasize the important fact that 'conda build' uses its own temporary environment, so it doesn't matter what you have installed. Nevertheless, even installing 'gcc-5' together with 'binutils' manually turned out to be nearly impossible.
Hint for other newbies: a lot of my problems disappeared after I learned the details about channels.
My first try was installing 'gcc-5' with 'binutils' from the 'salford_systems' channel suggested by anaconda:
conda install -c salford_systems binutils gcc-5
But it results in:
ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'salford_systems::gcc-5-5.3.0-0'.
LinkError: post-link script failed for package salford_systems::gcc-5-5.3.0-0
running your command again with -v will provide additional information
location of failed script: /home/jb/miniconda3/envs/test/bin/.gcc-5-post-link.sh
Using verbose output ('-v') provides no more info. I was also confused by the fact that the script does not exist at the given path (it is probably deleted automatically).
With my current experience I admit that the reason for the problem can be dug out from the '-vv' output (reported issue). After some trying, I found that the only way to install both is to first install 'gcc-5' into a clean environment and then install 'binutils'. Since 'conda build' installs everything from scratch and there is no way to specify the order of installed packages, I was stuck.
Another issue that puzzled me is the 'conda build' long-prefix hack. For an unknown reason it uses an extremely long prefix for an auxiliary folder, which results in various kinds of issues. I have faced three such problems:
As is usual today, I have an encrypted HOME, causing a known issue.
Using the workaround '--croot /tmp' prevents creating hard links from '/tmp' into 'HOME/miniconda3' since they are on different filesystems. There is a fallback to copying instead; for a while I even thought the fallback didn't work, but it did, it just made the build run longer.
Trying to install 'gcc' (4.x) from the 'default' channel complained about a too-short prefix. So the ultimate workaround was to set the length of the prefix manually: '--prefix-length 70'.
Finally, I found that the dependency on 'binutils' is not necessary and successfully built the package with:
conda build --prefix-length 70 -c salford_systems -c conda-forge -c dlr-sc netgen-conda/netgen-6.2-dev
Summary (of open questions):
Conda channels introduce a new kind of dependency hell, one already forgotten from the 'apt-get' days. Is there a way to figure out what the canonical channel for a package is?
Has anyone succeeded in building with the combination of 'gcc-5' and 'binutils'?
There is still a lack of documentation about conda's internal mechanisms, and the error messages do not provide a clue to the problem.
Conda-build uses a problematic prefix hack and lacks the ability to control the order of installed packages. Does anybody know the reason for this hack?

Qt deployment error [duplicate]

I wrote an application for Linux which uses Qt5.
But when I try to launch it on a Linux machine without the Qt SDK installed, the console output is:
Failed to load platform plugin "xcb". Available platforms are:
How can I fix this? Maybe I need to copy some plugin file?
When I use Ubuntu with Qt5 installed but rename the Qt directory, the same problem occurs. So it uses some file from the Qt directory...
UPDATE:
When I create a "platforms" folder in the app directory with the file libqxcb.so, the app still does not start, but the error message changes:
Failed to load platform plugin "xcb". Available platforms are:
xcb
How can this happen? How can a platform plugin be available but fail to load?
Use ldd (man ldd) to show shared library dependencies. Running this on libqxcb.so
.../platforms$ ldd libqxcb.so
shows that xcb depends on libQt5DBus.so.5 in addition to libQt5Core.so.5 and libQt5Gui.so.5 (and many other system libs). Add libQt5DBus.so.5 to your collection of shared libs and you should be ready to move on.
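A short sketch of that check-and-copy step (the Qt install path and the lib/ directory are examples; adjust them to your deployment layout):
ldd platforms/libqxcb.so | grep "not found"              # whatever the plugin still cannot resolve
cp /path/to/Qt/5.x/gcc_64/lib/libQt5DBus.so.5 ./lib/     # ship it alongside your other Qt shared libs
export LD_LIBRARY_PATH=$PWD/lib:$LD_LIBRARY_PATH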
As was posted earlier, you need to make sure you install the platform plugins when you deploy your application. Depending on how you want to deploy things, there are two methods to tell your application where the platform plugins (e.g. plugins/platforms/libqxcb.so) are at runtime, either of which may work for you.
The first is to export the path to the directory through the QT_QPA_PLATFORM_PLUGIN_PATH variable.
QT_QPA_PLATFORM_PLUGIN_PATH=path/to/plugins ./my_qt_app
or
export QT_QPA_PLATFORM_PLUGIN_PATH=path/to/plugins
./my_qt_app
The other option, which I prefer, is to create a qt.conf file in the same directory as your executable, the contents of which would be:
[Paths]
Plugins=/path/to/plugins
More information regarding this can be found in the Qt documentation on using qt.conf.
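For illustration, a sketch of a deployment tree using the qt.conf approach (all paths and names are examples):
mkdir -p deploy/plugins/platforms
cp /path/to/Qt/plugins/platforms/libqxcb.so deploy/plugins/platforms/
printf '[Paths]\nPlugins=plugins\n' > deploy/qt.conf     # relative entries resolve against the executable's directory
cp my_qt_app deploy/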
I tried to start my binary, compiled with Qt 5.7, on Ubuntu 16.04 LTS, where Qt 5.5 is preinstalled. It didn't work.
At first, I inspected the binary itself with ldd, as was suggested here, and satisfied all "not found" dependencies. Then the notorious "This application failed to start because it could not find or load the Qt platform plugin "xcb"" error was thrown.
How to resolve this on Linux
First, you should create a platforms directory where your binary is, because that is the place where Qt looks for the XCB plugin. Copy libqxcb.so there. I wonder why the authors of other answers didn't mention this.
Then you may want to run your binary with the QT_DEBUG_PLUGINS=1 environment variable set to check which dependencies of libqxcb.so are not satisfied. (You may also use ldd for this, as suggested in the accepted answer.)
The command output may look like this:
me@xerus:/media/sf_Qt/Package$ LD_LIBRARY_PATH=. QT_DEBUG_PLUGINS=1 ./Binary
QFactoryLoader::QFactoryLoader() checking directory path "/media/sf_Qt/Package/platforms" ...
QFactoryLoader::QFactoryLoader() looking at "/media/sf_Qt/Package/platforms/libqxcb.so"
Found metadata in lib /media/sf_Qt/Package/platforms/libqxcb.so, metadata=
{
"IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
"MetaData": {
"Keys": [
"xcb"
]
},
"className": "QXcbIntegrationPlugin",
"debug": false,
"version": 329472
}
Got keys from plugin meta data ("xcb")
loaded library "/media/sf_Qt/Package/platforms/libqxcb.so"
QLibraryPrivate::loadPlugin failed on "/media/sf_Qt/Package/platforms/libqxcb.so" : "Cannot load library /media/sf_Qt/Package/platforms/libqxcb.so: (/usr/lib/x86_64-linux-gnu/libQt5DBus.so.5: version `Qt_5' not found (required by ./libQt5XcbQpa.so.5))"
This application failed to start because it could not find or load the Qt platform plugin "xcb"
in "".
Available platform plugins are: xcb.
Reinstalling the application may fix this problem.
Aborted (core dumped)
Note the failing libQt5DBus.so.5 library. Copy it to your library path; in my case it was the same directory my binary is in (hence LD_LIBRARY_PATH=.). Repeat this process until all dependencies are satisfied.
P.S. thanks to the author of this answer for QT_DEBUG_PLUGINS=1.
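A sketch of that copy-and-re-run loop (the Qt paths are examples):
mkdir -p platforms
cp /path/to/Qt/5.7/gcc_64/plugins/platforms/libqxcb.so platforms/
LD_LIBRARY_PATH=. QT_DEBUG_PLUGINS=1 ./Binary 2>&1 | grep "Cannot load library"
cp /path/to/Qt/5.7/gcc_64/lib/libQt5DBus.so.5 .          # copy each library reported as unloadable, then re-run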
I tried the main parts of each answer, to no avail. What finally fixed it for me was to export the following environment variables:
LD_LIBRARY_PATH=/usr/local/lib:~/Qt/5.9.1/gcc_64/lib
QT_QPA_PLATFORM_PLUGIN_PATH=~/Qt/5.9.1/gcc_64/plugins/
Ubuntu 16.04 64bit.
I got the problem for apparently no reason. The night before, I had watched a movie with my VLC instance; that night I wanted to watch another one with VLC, but it just didn't want to run because of the error in the question.
I googled a bit and found the solution that fixed my problem: from now on, VLC runs just like before. The solution is this command:
sudo ln -sf /usr/lib/x86_64-linux-gnu/qt5/plugins/platforms/ /usr/bin/
I am not able to explain its consequences, but I know it creates a missing symbolic link.
Since version 5, Qt uses a platform abstraction system (QPA) to abstract from the underlying platform.
The implementation for each platform is provided by plugins. For X11 it is the XCB plugin. See Qt for X11 requirements for more information about the dependencies.
There might be many causes of this problem. The key is to use
export QT_DEBUG_PLUGINS=1
before you run your Qt application. Then inspect the output, which will point you in the direction of the error. In my case it was:
Cannot load library /opt/nao/plugins/platforms/libqxcb.so: (/opt/nao/bin/../lib/libz.so.1: version `ZLIB_1.2.9' not found (required by /usr/lib/x86_64-linux-gnu/libpng16.so.16))
But that is solved in different threads. See for instance https://stackoverflow.com/a/50097275/2408964.
Probably this information will help. I was on Ubuntu 18.04, and when I tried to install Krita using the PPA method, I got this error:
This application failed to start because it could not find or load the Qt platform plugin "xcb" in "".
Available platform plugins are: linuxfb, minimal, minimalegl, offscreen, wayland-egl, wayland, xcb.
Reinstalling the application may fix this problem.
Aborted
I tried all the solutions that I found in this thread and on other websites, without any success.
Finally, I found a post where the author mentions that it is possible to activate the debugging tool of Qt5 using this simple command:
export QT_DEBUG_PLUGINS=1
After adding this command, I ran Krita again and got the same error, but this time I knew its cause:
libxcb-xinerama.so.0: cannot open shared object file: No such file or directory.
This error prevents the "xcb" plugin from loading properly. So the solution would be to install libxcb-xinerama.so.0, right? However, when I ran the command:
sudo apt install libxcb-xinerama
The lib was already installed. Now what, Teo? Well, then I used an old trick :) Yeah, that one, --reinstall:
sudo apt install --reinstall libxcb-xinerama
TLDR: This last command solved my problem.
I ran into a very similar problem with the same error message. First, debug by turning on Qt's plugin debug output with the command-line command:
export QT_DEBUG_PLUGINS=1
and rerun the application. For me this revealed the following:
"Cannot load library /home/.../miniconda3/lib/python3.7/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)"
"Cannot load library /home/.../miniconda3/lib/python3.7/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)"
Indeed, I was missing libxkbcommon-x11.so.0 and libxkbcommon.so.0. Next, check your architecture using dpkg from the Linux command line. (For me, the command "arch" gave a different and unhelpful result.)
dpkg --print-architecture #result for me: amd64
I then googled "libxkbcommon-x11.so.0 ubuntu 18.04 amd64", and likewise for libxkbcommon.so.0, which yields those packages on packages.ubuntu.com. That told me, in retrospect unsurprisingly, that I was missing packages called libxkbcommon-x11-0 and libxkbcommon0, and that installing those packages would include the needed files, while the dev versions would not. Then the solution:
sudo apt-get update
sudo apt-get install libxkbcommon0
sudo apt-get install libxkbcommon-x11-0
So, I spent about a day trying to figure out what the issue was. I tried all the proposed solutions, such as installing xcb libs or exporting the Qt plugins folder, but none of them worked. The suggestion to use QT_DEBUG_PLUGINS=1 to debug the issue didn't give me a direct insight like in the answer above; instead I was getting something about unresolved symbols within Qt5Core.
That gave me a hint, though: what if it was trying to use files from different Qt installations? On my machine I had the standard version installed in /home/username/Qt/ and some local builds within my project that I compiled myself (I have other custom-built kits in other locations as well). Whenever I tried to use any of the kits (installed by the Qt maintenance tool or built by myself), I would get the "xcb error".
The solution was simple: provide the Qt path through CMAKE_PREFIX_PATH and not through Qt5_DIR as I did, and it solved the problem. Example:
cmake .. -DCMAKE_PREFIX_PATH=/home/username/Qt/5.11.1/gcc_64
I faced the same problem after installing Viber. It had all the required Qt libraries in /opt/viber/plugins/.
I checked the dependencies of /opt/viber/plugins/platforms/libqxcb.so and found missing ones: libxcb-render.so.0, libxcb-image.so.0, libxcb-icccm.so.4, libxcb-xkb.so.1.
So I resolved my issue by installing the missing packages that provide these libraries:
apt-get install libxcb-xkb1 libxcb-icccm4 libxcb-image0 libxcb-render-util0
I like the solution with qt.conf.
Put qt.conf next to the executable with the following lines:
[Paths]
Prefix = /path/to/qtbase
And it works like a charm :^)
For a working example:
[Paths]
Prefix = /home/user/SDKS/Qt/5.6.2/5.6/gcc_64/
The documentation on this is here: https://doc.qt.io/qt-5/qt-conf.html
All you need to do is
pip uninstall PyQt5
and
conda install pyqt
Most PyQt problems can be fixed by this simple solution.
In my case, I needed to deploy two Qt apps on an Ubuntu VirtualBox guest. One was command-line ("app"), the other GUI-based ("app_GUI").
I used "ldd app" to find out what the required libs are, and copied them
to the Ubuntu guest.
While the command-line executable "app" worked ok, the GUI-based executable crashed, giving
the "Failed to load platform plugin "xcb" error. I checked ldd for libxcb.so, but this too had no missing dependencies.
The problem seemed to be that while I did copy all the right libraries I accidentally had copied also libraries that were already present at the guest system.. meaning that (a) they were unnecessary to copy them in the first place and (b) worse, copying them produced incompatibilities between the install libraries.
Worse still, they were undetectable by ldd like I said..
The solution? Make sure that you copy libraries shown as missing by ldd and absolutely no extra libraries.
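A small sketch of that "only what is missing" rule (the binary name is an example):
ldd ./app_GUI | awk '/not found/ {print $1}'   # exactly the libraries the guest lacks
# copy only those from the build machine; leave everything the guest already provides alone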
In my case missing header files were the reason libxcb was not built by Qt. Installing them according to https://wiki.qt.io/Building_Qt_5_from_Git#Linux.2FX11 resolved the issue:
yum install libxcb libxcb-devel xcb-util xcb-util-devel mesa-libGL-devel libxkbcommon-devel
Folks trying to get this working on Ubuntu 20.04, please try running this and see if it solves the problem. It worked for me:
sudo apt-get update -y
sudo apt-get install -y libxcb-xinerama0
I link all Qt stuff statically into the generic Linux builds of my open-source projects. It makes life a bit easier. You just need to build static versions of the Qt libraries first. Of course this cannot be applied to closed-source software due to licensing issues. The deployment of Qt5 apps on Linux is currently a bit problematic because Ubuntu 12.04, for example, doesn't have Qt5 libraries in the package repositories.
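A hedged sketch of building such a static Qt (configure options vary between Qt versions; the prefix is an example):
./configure -static -prefix /opt/qt5-static -opensource -confirm-license -nomake examples -nomake tests
make -j"$(nproc)" && sudo make install
/opt/qt5-static/bin/qmake your_project.pro && make       # a statically linked app typically needs no separate xcb plugin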
I had this problem, and on a hunch I removed the Qt configs from my environment, i.e.:
rm -rf ~/.config/Qt*
Then I started Qt Creator and it reconfigured itself with the existing state of the machine. It no longer remembered where my projects were, but that just meant I had to browse to them "for the first time" again.
More importantly, it built itself a coherent set of library paths, so I could rebuild and run my project executables again without the xcb or qxcb libraries going missing.
I faced the same situation, but on an Ubuntu 20.04 VM.
TL;DR: Check file permissions.
What I did:
I copied the required Qt libs to /usr/local/lib/x86_64-linux-gnu/ and added that directory to LD_LIBRARY_PATH.
I copied the platforms folder from Qt to my application directory and added it to QT_PLUGIN_PATH.
I ran ldd on the executable and on the offending libqxcb.so (ldd libqxcb.so), and it complained about some dependencies even though ldconfig listed them as found:
linux-vdso.so.1 (0x00007ffee19af000)
libQt5XcbQpa.so.5 => not found
libfontconfig.so.1 => /lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007f7cb18fb000)
libfreetype.so.6 => /lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007f7cb183c000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f7cb1820000)
libQt5Gui.so.5 => /usr/local/lib/x86_64-linux-gnu/libQt5Gui.so.5 (0x00007f7cb0fd4000)
libQt5DBus.so.5 => not found
I used export QT_DEBUG_PLUGINS=1 for further info. It complained about missing files, although they were there.
What I found:
For some reason, when copying to the VM through the shared folder, the file permissions were not correct.
Thus, I ran sudo chmod 775 * on the libs and voilà.
I solved the issue through this: https://github.com/NVlabs/instant-ngp/discussions/300
pip uninstall opencv-python
pip install opencv-python-headless
This seems to have been a problem with the cv2 Python package and how it pulls in Qt.
sudo ln -sf /usr/lib/<adapt-it>/qt5/plugins/platforms/ /usr/bin/
It creates the symbolic link that is missing. Good for Qt! Good for VLC!

How to link with Python3 Libs with cmake?

I have Python3 installed via brew install python3. However, cmake cannot find PythonLibs 3. Here's the header of my CMakeLists.txt.
cmake_minimum_required(VERSION 3.0)
find_package(PythonLibs 3 REQUIRED)
When I ran cmake, I got this error message
Could NOT find PythonLibs: Found unsuitable version "2.7.6", but required is at least "3" (found /usr/lib/libpython2.7.dylib)
Not sure what I did wrong.
In my experience, this happened because I was using an older version of CMake (2.8 instead of 3+) that didn't know about Python 3.4 (it gave up after 3.3).
The solution was to go into the CMakeLists.txt file and add an "additional versions" directive ABOVE the find_package:
set(Python_ADDITIONAL_VERSIONS 3.4)
find_package(PythonLibs 3 REQUIRED)
You could probably also fix it by upgrading your version of CMake, but the above worked for me with CMake 2.8.
If you are using CMake >= 3.12, you can use find_package(Python COMPONENTS Interpreter Development); see: https://cmake.org/cmake/help/v3.12/module/FindPython.html
For instance, this:
find_package(Python COMPONENTS Interpreter Development)
message("Python_FOUND:${Python_FOUND}")
message("Python_VERSION:${Python_VERSION}")
message("Python_Development_FOUND:${Python_Development_FOUND}")
message("Python_LIBRARIES:${Python_LIBRARIES}")
Results:
Python_FOUND:TRUE
Python_VERSION:3.8.0
Python_Development_FOUND:TRUE
Python_LIBRARIES:/usr/lib/x86_64-linux-gnu/libpython3.8.so
Another reason for this is that CMake can't ever find Python 3 when it is installed from brew on OSX. It looks like the CMake devs know that FindPythonLibs sucks and have a ticket to revamp it but it doesn't look like it will happen any time soon.
I believe the Python interpreter itself knows where its libraries and headers are, so I think the best thing to do would be to run it and ask. To get the path to the Python interpreter, I would force the user to specify it manually. One of the big issues with Python is that lots of software includes its own copy, so you'll end up with five copies of it on your system, and the chance of picking up the wrong one is just too high. Get the user to specify the correct one.
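A hedged sketch of that idea: ask the chosen interpreter where its headers and library live, then hand those to CMake through the classic FindPythonLibs cache variables (the interpreter path is an example, and LDLIBRARY may point into a framework on macOS, so adjust the library path if needed):
PY=/usr/local/bin/python3    # the interpreter the user actually wants
INC=$("$PY" -c 'import sysconfig; print(sysconfig.get_paths()["include"])')
LIBDIR=$("$PY" -c 'import sysconfig; print(sysconfig.get_config_var("LIBDIR"))')
LIBFILE=$("$PY" -c 'import sysconfig; print(sysconfig.get_config_var("LDLIBRARY"))')
cmake .. -DPYTHON_EXECUTABLE="$PY" -DPYTHON_INCLUDE_DIR="$INC" -DPYTHON_LIBRARY="$LIBDIR/$LIBFILE"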

autoconf error on ubuntu 11.04

I've been googling and installing libraries for a while, but I couldn't quite handle this problem with autoconf.
I have downloaded a program that I want to compile, made a few changes, and need to run autogen.sh, ./configure and make install, respectively.
However, when I try to run autogen.sh, I get the following error:
configure.ac:225: error: possibly undefined macro: AM_PATH_GTK_2_0
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
So I went on, downloaded autoconf-2.68, automake-1.11 and m4-1.4.16, and tried sudo apt-get install libgtk2.0-dev, none of which changed the outcome. When I try aclocal, I again get the error:
configure.ac:225: warning: macro `AM_PATH_GTK_2_0' not found in library
I'm currently stuck and have nowhere to go, so I'd be glad for any suggestions.
I know it's a bit late, but you just have to install the library libgtk2.0-dev:
sudo apt-get install libgtk2.0-dev
If you get errors about something related to GLIB, then install libglib2.0-dev as well:
sudo apt-get install libglib2.0-dev
Given the name AM_PATH_GTK_2_0, one makes the following sequence of observations:
1) "AM_" is in automake's namespace, so that m4 macro must come from automake.
2) Hmmm, it isn't in automake.
3) It probably comes from GTK, so the GTK developers have made an error in naming their m4 macro in conflict with automake. That's a bug in GTK, but I'll probably need to download the newest version of GTK to get the macro.
The problem is that you don't have the m4 macro that GTK expects you to have. You probably need to install libgtk-devel (or something like that). If I am correct and libgtk is indeed installing an m4 macro named AM_..., please report that as a bug to the developers. They are stomping on automake's namespace (this is, unfortunately, an extremely common error).
Since you mention downloading automake, I think the problem is that you are running an aclocal that is not looking in /usr/share/aclocal but in a different location (i.e., you installed automake in /usr/local). When you installed libgtk-dev, it probably installed the *.m4 file in /usr/share/aclocal, but you need that file in /usr/local/share/aclocal (or $prefix/share/aclocal, where prefix is what you used to install automake). The simplest solution is to copy that file to $(aclocal --print-ac-dir): run "aclocal --print-ac-dir" to see where aclocal is looking for m4 files, then find the file that libgtk-dev installed that defines the improperly named m4 macro and copy it to that location. Alternatively (and probably a better solution), you can put a file named dirlist in $(aclocal --print-ac-dir) that contains the single line "/usr/share/aclocal", so that your hand-installed aclocal will always look for m4 files installed in /usr/share; a sketch of this follows below.
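A minimal sketch of that dirlist fallback (assuming your hand-built automake lives under /usr/local and you can write to its aclocal directory):
ACDIR=$(aclocal --print-ac-dir)               # where this aclocal looks for m4 files, e.g. /usr/local/share/aclocal
echo /usr/share/aclocal > "$ACDIR/dirlist"    # make it also search the distribution's m4 directory
aclocal && autoconf                           # AM_PATH_GTK_2_0 should now be found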
In case of the same issue on CentOS/RedHat:
sudo yum install gtk2-devel
