lightgbm version incompatible - python-3.x

I installed the latest version of lightGBM (lgb.__version__ == '2.2.1'), which is built with gcc 8, but I have a model that was already built with lightgbm==2.0.2, which requires gcc 7.
I need to conform to the previous version, which means downgrading my current lightgbm using pip install lightgbm==2.0.2. However, when I import it, I get Library not loaded: /usr/local/opt/gcc/lib/gcc/7/libgomp.1.dylib.
I have checked here and here; the problem is that I must use the previous version of lightgbm.
I assume the problem is caused by the gcc version, so is there a way I can install gcc 7? (By the way, I tried creating a virtualenv on my computer so that I can have both versions of lightgbm. Can I install gcc 7 under the virtual environment and keep gcc 8 on my computer as well?)
Thanks soooo much!

So to start off, it looks like your problem has more to do with gcc than with your Python module. While it is best practice to use a virtual environment for each project, that will only affect the lightgbm module, not your gcc version.
To accomplish what you are trying to do, I would recommend taking a look at the following:
Homebrew install specific version of formula?
Their solution is for postgresql, but it should translate to most other programs installed with Homebrew.
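For example, here is a minimal sketch of the same approach for gcc. It assumes Homebrew offers a versioned gcc@7 formula (verify with brew search), and the symlink note is an untested assumption based on the path in your error message:
# check which versioned gcc formulas Homebrew currently offers
brew search gcc
# install gcc 7 alongside the existing gcc 8 (versioned formulas are keg-only)
brew install gcc@7
# print where the gcc 7 keg lives, e.g. /usr/local/opt/gcc@7
brew --prefix gcc@7
# note: the old lightgbm wheel looks for /usr/local/opt/gcc/lib/gcc/7/libgomp.1.dylib,
# so you may need to symlink the corresponding file from the gcc@7 keg to that path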
The only other option I can think of would be to use the newest versions of both lightgbm and gcc, but that doesn't appear to be possible for your project.

Gensim install in Python 3.11 fails because of missing longintrepr.h file

Operating System: macOS Monterey 12.6
Chip: Apple M1
Python version: 3.11.1
I try:
pip3 install gensim
The install process starts well but fails towards the end while running clang. The error message is:
clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -arch arm64 -arch x86_64 -g -I/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -I/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/numpy/core/include -c gensim/models/word2vec_inner.c -o build/temp.macosx-10.9-universal2-cpython-311/gensim/models/word2vec_inner.o
gensim/models/word2vec_inner.c:217:12: fatal error: 'longintrepr.h' file not found
#include "longintrepr.h"
^~~~~~~~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
This issue has been raised in a couple of GitHub threads and is attributed to an incompatibility between Cython and Python 3.11. However, no suggestion is offered as to what users should do until Cython is updated. I may have misrepresented the details of the discussions on GitHub, but I think this is the gist of it.
Can anyone help me in installing gensim in the meantime?
Thanks.
I updated cython and aiohttp, the latter because I had seen a post where the aiohttp install failed for the same reason as mine (missing "longintrepr.h").
No improvement: "pip install gensim" still fails with the same message as copied above.
It seems your issue may be due to the specifics of a fairly new Python, and lagging library support, on a somewhat new system (a macOS M1 machine) which has its own somewhat-unique build toolchains.
Unless you absolutely need to use Python 3.11.1, I'd suggest using Gensim within a Python environment with a slightly older Python interpreter, where the various packages you truly need may be a little more settled. For example, on many OS/architecture/Python combinations, a standard pip install will grab precompiled libraries – so build errors of the type you're seeing can't happen.
That your installation is falling back to a local compilation (which hits a problem without an easy off-the-shelf solution) is a hint that something about the full configuration is still somewhat undersupported by one or more of the involved libraries.
If you use the conda 3rd-party system for managing Python virtual environments, it also offers you the ability to explicitly choose which Python version will be used in each environment. That is, you're not stuck with the exact version, and installed libraries, that are default/global on your OS. You could easily try Python 3.10, or Python 3.9, which might work better.
And, keeping your development/project virtual-environment distinct from the system's Python is often considered a "best practice" for other purposes, too. There's no risk that you'll cause damage to the system Python and any tools reliant on it, or face an issue where several of your Python projects need conflicting library versions. (You just use a separate environment for each project.) And, the exercise of rigorously specifying what needs to be in your project's environment helps keep its prerequisites/dependencies clear for any future relocations/installations elsewhere.
When using the conda tool for this purpose, I usually start with the miniconda version, so that I have explicit control over exactly what packages are installed, and can thus keep each environment minimally-specified for its purposes. (The larger anaconda approach pre-installs tons of popular packages instead.)
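As a concrete sketch of that workflow (the environment name gensim-env is arbitrary):
# create an isolated environment with an older Python, then install gensim into it
conda create -n gensim-env python=3.10
conda activate gensim-env
pip install gensim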
I also faced the same issue with the gensim library on a Windows laptop while using Python 3.11.1. Changing to Python 3.10 worked for me.
Hey, you can download the source archive from here.
Then unzip the source tar.gz package and install it:
python setup.py install
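For example, a minimal sketch of the full sequence (the archive name is hypothetical; substitute whatever version you downloaded):
tar -xzf gensim-4.3.0.tar.gz
cd gensim-4.3.0
python setup.py install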

How do you force Haskell's cabal-install program to ignore system libraries?

I'm trying to install the accelerate library from Hackage, but it keeps giving me an error saying that my version of the base package (4.15.1.0) is too new. Is there a way to force it to ignore the base package that's installed as a system-wide package, and instead download the correct version of the base package?
I'm using Manjaro Linux, ghc version 9.0.2, and cabal version 3.4.0.0. I can't seem to find any documentation on how to force it to ignore a system package. I've tried searching https://cabal.readthedocs.io/en/3.6/, but it doesn't seem to mention it anywhere.
The base package is always hard-fixed to the compiler version. The only way to switch base is to switch to a different GHC. That's easier with Stack than it is with Cabal-install – just select a snapshot that has a suitable base version (lts-18.24 would do), and Stack will automatically install the corresponding compiler.
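A minimal sketch of that with Stack (lts-18.24 pins GHC 8.10.7 and its base-4.14, assuming that satisfies accelerate's bounds):
# in your project's stack.yaml, pin the snapshot:
#   resolver: lts-18.24
stack setup   # installs the matching GHC if it is not already present
stack build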
But it could well be that you can actually use base-4.15, and just accelerate has conservative dependency bounds. Try installing it with --allow-newer=base. If that works, give the maintainers a PR that the version bounds can be relaxed.
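For example, a minimal sketch of using that flag:
# inside a project that depends on accelerate, relax the upper bound on base:
cabal build --allow-newer=base
# or, for a one-off install of the library into the default GHC environment:
cabal install --lib accelerate --allow-newer=base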

How to install VTK5 on Archlinux?

I need to run a program which uses VTK5 on my Arch Linux PC, but I found it really hard to install VTK5. There is only VTK6 (not compatible with VTK5) in the official repo, and when I try to install it from the AUR, it returns "makepkg was unable to build vtk5". I then tried to install it from source, but I was unable to install the VTK Python module...
Is there anybody who has any experience with or ideas about this?
I have not installed it on Arch Linux specifically, but on various other Linux machines. If you compile from source and are interested in Python, remember to enable the Python wrapping option when running cmake. By the way, once it is built, you will have to update both PYTHONPATH and LD_LIBRARY_PATH, as sketched below.
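A minimal sketch of such a build (VTK_WRAP_PYTHON is VTK's Python-wrapping switch; the install prefix and the python2.7 / vtk-5.10 path suffixes are assumptions for illustration and depend on your setup):
# from a build directory next to the VTK source tree:
cmake -DVTK_WRAP_PYTHON=ON -DCMAKE_INSTALL_PREFIX=/usr/local ..
make
sudo make install
# make the Python module and the shared libraries findable:
export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH
export LD_LIBRARY_PATH=/usr/local/lib/vtk-5.10:$LD_LIBRARY_PATH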
You can also try Enthought Canopy, which distributes a complete installation with numpy, scipy, and vtk: http://docs.enthought.com/canopy/quick-start/install_linux.html

Upgrading a package whose newest version is not still in the distribution repository [closed]

I need to upgrade libpng from version 1.2 to 1.5 because of this: libpng warning: Application built with libpng-1.2.26 but running with 1.5.2. I am using Lubuntu 11.10, and libpng 1.5 has not yet been released in the Canonical repositories, although the Debian ones have testing packages (http://packages.debian.org/search?keywords=libpng) that at first sight would fit my needs. I added the Debian repositories to Synaptic and was able to install libpng15, but those packages do not replace libpng12, so when it comes to compiling some source code the IDE uses libpng12 instead of libpng15.
To try to solve this, I downloaded the libpng15 deb package, uncompressed it, and changed the Replaces, Conflicts and Provides tags of the control file to the libpng15 text. Then I installed the modified deb, but all I got was a GDebi error and a general system failure, because (I think) libpng12 was uninstalled with no replacement and Lubuntu heavily depends on it. This forced me to reinstall Lubuntu because the computer would not boot into Linux again. Admittedly, this is not the neatest approach.
So, is there any way to upgrade a package and replace the old version when the newer version exists but is not yet in the distribution repository? I found "ubuntu repository for libpng" and "How to upgrade a package in linux that was built from source?", although neither has been conclusive so far.
I have not found a way to upgrade and replace a package whose newer version is not yet in the distribution repository. But I have realized that if a library X depends on a given version of another library Y, there is no way to change that dependency unless you change the build of X itself; that is, library X must be recompiled to point to the desired version (usually with the help of some configuration flag). Even if you try a trick such as modifying the symlink of library Y to point to the newer version, the compiler will complain and ask for the old version.
Maybe this looks obvious now. But if the software that has to be recompiled takes many hours, has unresolved dependencies, or gives build errors, you will try to avoid the compilation even if it means violating the laws of thermodynamics.
So in my case I had to recompile Qt; by using the -system-libpng configure flag, Qt understood that it had to use the system libpng libraries instead of its bundled ones. After 8 hours of compiling I got a successful build, which solved this libpng problem.
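For reference, a minimal sketch of that rebuild (any other configure options your particular Qt build needs are omitted):
./configure -system-libpng   # use the system libpng instead of the bundled copy
make                         # this is the multi-hour part
sudo make install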
Thanks everyone for the comments and suggestions.
For all the trouble you're going through, it might be easier to simply compile from source and install to /usr/local (instead of /usr, as debs do). I've done this for several library dependencies of programs I've compiled (with make-based build systems) without any trouble. However, it sounds like the program(s) you're compiling are having trouble choosing the right version of the package. In my opinion, that is the real issue. Having multiple versions of a library installed simultaneously is supported, but perhaps not by apt in the case of mixing Debian and Ubuntu repos.
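A minimal sketch of such a from-source install (using the 1.5.2 version mentioned in the question; the download step is omitted):
tar xzf libpng-1.5.2.tar.gz
cd libpng-1.5.2
./configure --prefix=/usr/local   # keeps the distro's copy in /usr untouched
make
sudo make install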
When you compile your program, use gcc -lpng15 instead of -lpng. According to the gcc info manual, an option of -lname causes the linker to look for libname.a in the library search directories. On my system (Ubuntu 10.04), libpng.a is a symlink to libpng12.a. This is why your program is choosing the wrong lib.
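For example (myprog.c is a placeholder for your own source file):
# link explicitly against libpng15 rather than the generic libpng symlink
gcc myprog.c -o myprog -lpng15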
Try adding this PPA: https://launchpad.net/~linaro-maintainers/+archive/overlay. It contains libpng 1.5 for Oneiric.
You can install it by running
sudo add-apt-repository ppa:linaro-maintainers/overlay
sudo apt-get update
sudo apt-get install libpng1.5
To properly link against libpng15, you will also need to install libpng15-dev.

What's the best way to build software that doesn't require the newest glibc?

I'm attempting to build a binary package that can be run on multiple Linux distributions. It's currently built on Ubuntu 10.04, but it fails on Ubuntu 8.04 with the following error:
./test: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by ./test)
./test: /usr/lib/libc.so.6: version `GLIBC_2.11' not found (required by ./test)
What's the preferred way to solve this problem? Is there a way to install an old glibc on a new box and build against it, or do I have to build on an old distribution? And if I build against an old glibc, will it work on a new glibc?
Or, alternatively, are there just some handy compiler flags or packages I could install to solve the problem?
The best solution I've found is to install a virtual machine running Debian stable, and build on that. Debian stable is old enough that any packages built with it will run on any other Debian-based distribution like Ubuntu. You may have to work around non-critical bugs that have been fixed in later versions of various software but not backported to Debian stable.
If you really want to make sure it runs on every recent distribution, you might also consider statically linking against a libc you select. However, you may then still run into problems if you use features that are only provided by newer kernels (e.g. newer system calls).
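A minimal sketch of the static-linking variants (note that fully static linking of glibc has known caveats, e.g. with NSS and dlopen; test.cpp is a placeholder for your own source):
# link only libstdc++ and libgcc statically; still depends on the system glibc:
g++ -static-libstdc++ -static-libgcc test.cpp -o test
# or link everything statically:
g++ -static test.cpp -o test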
