I'm trying to install packages with Spack, following the instructions on how to install OpenMPI.
When I install packages like xpmem, the first error says ERROR: Kernel configuration is invalid. Later in the output I get compilation errors like error: passing argument 4 of ‘proc_create_data’ from incompatible pointer type.
I already tried re-installing linux-headers as done here. I also tried different versions of the GCC compiler as well as of the xpmem package, and used a clean PATH variable as recommended by Spack. But nothing seems to fix it. Another machine yielded the same error (both are Ubuntu 20.04.3).
Does anyone have an idea what could be wrong?
As a package manager, Spack should install xpmem's dependencies and then try to compile xpmem itself. When you hit build errors, the next step is to use spack cd and run the compile steps manually.
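A rough manual-debugging sequence might look like the sketch below; xpmem is the failing spec from the question, and the exact flags depend on your Spack version (check spack cd --help and spack build-env --help):
# keep the staged sources of the failed build around
spack install --keep-stage xpmem
# jump into the package's build directory
spack cd -b xpmem
# open a shell with the same build environment Spack would use, then rebuild by hand
spack build-env xpmem -- bash
make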
When I run the config script to install the GSL library on Windows 10, I get the following error:
error: Something went wrong bootstrapping makefile fragments for
automatic dependency tracking. If GNU make was not used, consider
re-running the configure script with MAKE="gmake" (or whatever is
necessary). You can also try re-running configure with the
'--disable-dependency-tracking' option to at least be able to build
the package (albeit without support for automatic dependency
tracking).
If I run ./config MAKE="gmake" I still get the error. I have searched on Stack Overflow and on the web, and still haven't found a solution.
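For reference, the two variants the error message suggests would be invoked roughly like this, assuming the script in question is the usual autoconf-generated one (substitute ./config if that is what your GSL tree ships):
# tell configure which make to use for the dependency-tracking probe
./configure MAKE=gmake
# or give up on automatic dependency tracking altogether
./configure --disable-dependency-tracking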
I wanted to test out Pocketsphinx in Node.js. It says I need to install SWIG version 3.0.7 or above.
I think I installed all the other dependencies correctly; I can even run swig commands in the terminal now. But I keep getting this error whenever I run npm install pocketsphinx:
CMake Error at /usr/local/Cellar/cmake/3.6.3/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:148 (message):
Could NOT find SWIG (missing: SWIG_DIR) (Required is at least version
"3.0.7")
Call Stack (most recent call first):
/usr/local/Cellar/cmake/3.6.3/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:388 (_FPHSA_FAILURE_MESSAGE)
/usr/local/Cellar/cmake/3.6.3/share/cmake/Modules/FindSWIG.cmake:75 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
CMakeLists.txt:4 (find_package)
I tried brew install swig, npm install swig, and npm install -g swig. I tried going to the SWIG download page and following the installation instructions, but nothing I do seems to stop the error from happening. I'm trying this on a MacBook, by the way.
I really have no clue what I'm doing here. I just wanted to test out Pocketsphinx, and now I've installed SWIG in four different places and CMake can't seem to recognise any of them.
Any help would be wonderful!
Came here looking for the Windows-based error. I found a solution that seems to work for me, so I decided to post it here.
Create two environment variables in the "System Variables" section: SWIG_DIR and SWIG_EXECUTABLE. These must point to /path/to/the/swig/dir/ and /path/to/the/swig/dir/swig.exe respectively.
After this, add one more entry to the PATH variable: /path/to/the/swig/dir. Test this by typing swig in the command prompt; it should print Must specify an input file. Use -help for available options.
Restart the computer to apply all environment variable changes. find_package(SWIG REQUIRED) should now work correctly.
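If you would rather not touch the system-wide environment variables, the same hints can be passed straight to CMake on the command line; the paths below are placeholders for wherever swigwin was unpacked (on swigwin the library files usually sit in the Lib subfolder):
cmake -DSWIG_EXECUTABLE=C:/path/to/swigwin/swig.exe -DSWIG_DIR=C:/path/to/swigwin/Lib ..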
Check the source code of FindSWIG.cmake.
Unfortunately, if a find script does not work as expected and you do not see right away why that is the case, you usually have to dig into its source. In your case, it looks like CMake was able to find and run the SWIG executable, but then failed to obtain the SWIG directory (SWIG_DIR).
Try manually running swig -swiglib and check that the printed directory indeed contains a swig.swg file. Also make sure that the swig executable found by CMake is actually the correct one (you can verify this by inspecting the value of SWIG_EXECUTABLE in the cmake-gui, in the ccmake curses interface, or in the CMakeCache.txt file directly).
Note that CMake will not update the executable path once it has been found! So if you make changes to your system that influence the executable location, you will have to clear the cache (e.g. by deleting the CMakeCache.txt) and re-run CMake for the changes to take effect.
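Run from the build directory that holds the CMakeCache.txt, those checks boil down to something like this (a sketch; adjust the final cmake invocation to however you normally configure the project):
# the directory swig reports should contain swig.swg
swig -swiglib
ls "$(swig -swiglib)/swig.swg"
# see which executable CMake has cached
grep SWIG_EXECUTABLE CMakeCache.txt
# wipe the cache and configure again after changing the SWIG installation
rm CMakeCache.txt
cmake ..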
I managed to work it out myself. The problem was that I had typed npm install swig at some point.
I forgot to set up that version of SWIG, and the build was picking it up.
I typed sudo npm uninstall swig and it worked perfectly. I feel very stupid!
I am trying to install pysam.
After executing:
python path/to/pysam-master/setup.py build
This error is produced:
unable to execute 'x86_64-conda_cos6-linux-gnu-gcc': No such file or directory
error: command 'x86_64-conda_cos6-linux-gnu-gcc' failed with exit status 1
There are similar threads, but they all seem to address the problem assuming administrator rights, which I do not have. Is there a way around this to install the needed files?
DISCLAIMER: This question derives from a previous post of mine.
manually installing pysam error: "ImportError: No module named version"
But since it might require a different approach, I made it a question of its own.
You can also get the same error while installing some R packages if R was installed using conda (as mine was).
In that case, install the compiler package by executing conda install gxx_linux-64 to make that command available.
Source:
https://github.com/RcppCore/Rcpp/issues/770#issuecomment-346716808
It looks like Anaconda had a new release (4.3.27) that sets the C compiler path to a non-existent executable (quite an embarrassing bug; I'm sure they'll fix it soon). I had a similar issue with pip installing using the latest Miniconda, which I fixed by using the 4.3.21 version and ensuring I was not doing something like conda update conda.
See https://repo.continuum.io/miniconda/ which has release dates and versions.
It should now be safe to update conda. This is fixed in the following python packages for linux-64:
python-3.6.2-h0b30769_14.tar.bz2
python-2.7.14-h931c8b0_15.tar.bz2
python-2.7.13-hac47a24_15.tar.bz2
python-3.5.4-hc053d89_14.tar.bz2
The issue is as Jon Riehl described: we (Anaconda, formerly Continuum) build all of our packages with a new GCC package that we created using crosstool-ng. This package does not have gcc; it has a prefixed gcc, which is the missing command you're seeing, x86_64-conda_cos6-linux-gnu-gcc. This gets baked into python, and any extension built with that python goes looking for that compiler.
We have fixed the issue using the _PYTHON_SYSCONFIGDATA_NAME variable that was added to python 3.6, and we have backported it to python 2.7 and 3.5. You'll now only ever see python using the default compilers (gcc), and you must set _PYTHON_SYSCONFIGDATA_NAME to the appropriate filename to have the new compilers used. Setting this variable is something that we'll put into the activate scripts for the compiler package, so you'll never need to worry about it.
It may take us a day or two to get new compiler packages out, though, so post issues on the conda-build issue tracker if you'd like to use the new compilers and need help getting started.
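Until those activate scripts ship, opting into the new compilers by hand would look roughly like the lines below; note that the exact _sysconfigdata_* module name is an assumption based on the toolchain prefix, so check your environment's lib/pythonX.Y directory for the real filename:
# module name guessed from the x86_64-conda_cos6-linux-gnu prefix; verify it exists first
export _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_x86_64_conda_cos6_linux_gnu
python setup.py build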
Relevant code changes are at:
py27: https://github.com/anacondarecipes/python-feedstock/tree/master-2.7.14
py35: https://github.com/anacondarecipes/python-feedstock/tree/master-3.5
py36: https://github.com/anacondarecipes/python-feedstock
The solution that worked for me was to use conda to install the R packages:
conda install -c r r-tidyverse
or r-ggplot2, r-readr.
Also ensure that the installation is not failing because of admin privileges.
It will save you a great deal of pain.
After upgrading Golang to 1.19.1, I started to get:
# runtime/cgo
cgo: C compiler "x86_64-conda-linux-gnu-cc" not found: exec: "x86_64-conda-linux-gnu-cc": executable file not found in $PATH
Installing gcc_linux-64 from the same channel resolved it:
conda install -c anaconda gcc_linux-64
Somewhere in your $PATH (e.g., ~/bin), do
ln -sf $(which gcc) x86_64-conda_cos6-linux-gnu-gcc
Don't put this in a system directory or conda's bin directory, and remember to remove the link when the problem is resolved upstream. gcc --version should be version 6.
EDIT: I understand the sentiment in the comments against manipulating system paths, but maybe we can use a little critical thinking for the actual case at hand before reciting doctrine. What have we actually done with the command above? Nothing more than putting an executable (symlink) called x86_64-conda_cos6-linux-gnu-gcc in one's personal ~/bin directory.
If putting something in one's personal ~/bin directory broke a future conda (after it fixes the C compiler path to point to the gcc it embeds), then that would be a bug in conda. Would the existence of this verbosely named compiler mess with anything else? That is unlikely as well. Even if something did pick it up, it's just your system gcc after all...
This post summarizes my painful but finally successful (just by chance) way of building my own conda package for the netgen meshing tool with a Python interface. I found the recipe for the netgen build thanks to tpaviot.
After cloning the repository into the 'netgen-conda' folder I ran:
conda build netgen-conda/netgen-6.2-dev
This reports "Unsatisfiable dependencies": 'oce', 'gcc-5', 'binutils'.
So I tried to install these packages myself. Unfortunately the documentation does not emphasize the important fact that 'conda build' uses its own temporary environment, so it doesn't matter what you have installed (see). Nevertheless, even installing 'gcc-5' together with 'binutils' manually turns out to be nearly impossible.
Hint for other newbies: a lot of my problems disappeared after I learned the details about channels.
My first try was installing 'gcc-5' with 'binutils' from the 'salford_systems' channel suggested by Anaconda:
conda install -c salford_systems binutils gcc-5
But it results in:
ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'salford_systems::gcc-5-5.3.0-0'.
LinkError: post-link script failed for package salford_systems::gcc-5-5.3.0-0
running your command again with -v will provide additional information
location of failed script: /home/jb/miniconda3/envs/test/bin/.gcc-5-post-link.sh
Using verbose output ('-v') provides no more info. I was also confused by the fact that the script does not exist at the given path (probably deleted automatically).
With my current experience I admit that the reason for the problem can be dug out of the '-vv' output (reported issue). After some trying I found that the only way to install both is to first install 'gcc-5' into a clean environment and then install 'binutils'. Since 'conda build' installs everything from scratch and there is no way to specify the order of installed packages, I was stuck.
Another issue that puzzled me is the 'conda build' long-prefix hack. For reasons unknown to me, it uses an extremely long prefix for an auxiliary folder, which results in various kinds of issues. I ran into three such problems:
As is common today, I have an encrypted HOME, which causes a known issue.
The workaround '--croot /tmp' prevents creating hard links from '/tmp' into 'HOME/miniconda3', since they are on different filesystems.
There is a fallback to copying instead. For a while I even thought the fallback didn't work, but it did; it just made the build run longer.
Trying to install 'gcc' (4.x) from the 'default' channel complained about a too-short prefix, so the ultimate workaround was to set the length of the prefix manually with
'--prefix-length 70'.
Finally, I found that the dependency on 'binutils' is not necessary and successfully built the package with:
conda build --prefix-length 70 -c salford_systems -c conda-forge -c dlr-sc netgen-conda/netgen-6.2-dev
Summary (of open questions):
Conda channels introduce a new kind of dependency hell, one already forgotten when using 'apt-get'. Is there a way to figure out what the canonical channel for a package is?
Has anyone succeeded in building with the combination of 'gcc-5' and 'binutils'?
There is still a lack of documentation about conda's internal mechanisms, and the error messages do not provide a clue to the problem.
Conda-build uses a problematic prefix hack and lacks the ability to control the order of installed packages. Does anybody know the reason for this hack?
I'm trying to modify GTK2 on Ubuntu Oneiric.
I download the source:
apt-get source libgtk2.0-0
cd gtk+2.0-2.24.6/
I try to compile and overwrite the current GTK2:
./configure --prefix=/usr
sudo make
Somehow I get an error (I have all the necessary libraries, the build-essential package, etc.):
In file included from gtkquery.c:26:0:
gtkquery.h:31:2: error: #error "gtkfilechooserprivate.h is not supported API for general use"
By the way, I am able to modify and recompile GTK3 with no problems with the same steps.
If I use debuild, I get thousands of errors like
dpkg-source: error: cannot represent change to gtk+2.3.0-2.24.6/gtk+2.0-2.24.6/something: binary file contents changed
You won't get anything near the Ubuntu-provided build if you try building it by hand that way -- you'll miss all the ./configure options and other settings. (Look into debian/rules for the full details of what they're setting.)
Instead, try debian/rules build.
For reasons I haven't investigated yet (possibly including me not understanding how it should work), that didn't work on the first package I tried, but setting up pbuilder let me build the package I wanted.
It might feel like overkill to get a clean chroot as a build environment, but it is way too easy to build yourself problems that no one else in the world can replicate because you've got something funny on your local system.
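For completeness, a minimal pbuilder workflow looks roughly like this; the .dsc wildcard is a placeholder for whatever apt-get source actually downloads:
sudo apt-get install pbuilder
sudo pbuilder create                  # build the clean chroot once
apt-get source libgtk2.0-0            # fetches the source plus the .dsc file
sudo pbuilder build gtk+2.0_*.dsc     # builds the package inside the chroot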