Things installed in /usr/local are not seen by others - Linux

I often have this issue when configuring software on Linux. When I install some library (for instance libsodium) by cloning the repository and then doing the usual
./autogen.sh
./configure
make
make install
I get everything installed in /usr/local/, which is absolutely fine for me.
Unfortunately, when I try to install something that depends on this library (for example libzmq), I get the error
configure: error: Package requirements (libsodium) were not met:
No package 'libsodium' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables sodium_CFLAGS
and sodium_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
I guess the problem is that configure is looking in /usr and not /usr/local. An ugly workaround would be to install everything in /usr instead of /usr/local. A more brutal approach would be to copy everything installed in /usr/local into /usr.
What is the correct solution when facing this kind of issue?
How should I adjust the PKG_CONFIG_PATH or the sodium_LIBS?

Set PKG_CONFIG_PATH to the pkgconfig directory under /usr/local by means of your shell.
Some shells work with export, some with other means.
E.g.
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

run:
$ export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/
prior to ./autogen.sh && ./configure.
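Putting the pieces together, a minimal sketch of the whole sequence (assuming libsodium's .pc file landed in /usr/local/lib/pkgconfig, the default location for an unprefixed ./configure):

```shell
# Tell pkg-config to also search the /usr/local tree, where a plain
# "make install" of libsodium puts its .pc file (assumed location):
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

# Before re-running libzmq's configure, you can verify pkg-config now
# sees the library (uncomment to try on a system with libsodium installed):
# pkg-config --modversion libsodium
# ./autogen.sh && ./configure

echo "$PKG_CONFIG_PATH"
```

To make the setting permanent, add the export line to your shell's startup file (e.g. ~/.bashrc).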

Related

Yum/apt-get before cpan to manage UNIX system-wide Perl modules?

Perl's cpan command is a powerful way to manage Perl modules. However, when maintaining modules system-wide under UNIX, Michal Ingeli notes that another possible option is
yum install 'perl(PerlModuleName)'. If available, should yum be my first resort in this case?
For example, the command cpanm CGI installs the CGI module under my ~/perl5 directory, which may be best if the CGI module is only needed by scripts run under my account. But this won't provide the CGI module to scripts run by other accounts.
I can use cpanm -l <directory> to force the cpanm command to load modules to a specific directory (e.g., cpanm -l /usr/local CGI to install CGI to /usr/local/lib/perl5), or I can edit ~/cpan/CPAN/MyConfig.pm to change the default install location cpan uses.
But on nearly all systems, multiple Perl system library locations exist (/usr/local/share/perl5, /usr/share/perl5/vendor_perl, /usr/lib64/perl5, etc.), and choosing the correct one is somewhat arbitrary since these are not generated by the cpan command.
With this in mind, should I turn to yum (if available) before cpan for system-wide UNIX Perl module management? It's easy enough to test with a command like:
yum install 'perl(LWP::Simple)'
If yum failed in this instance, I would fall back to:
cpanm -l <directory> LWP::Simple
What do you recommend in this type of case, and why?
(Note that nxadm has answered a more general question about this.)
To summarize answers so far:
If at all possible, use the system package manager to update CPAN modules. E.g., for LWP::Simple:
yum install 'perl(LWP::Simple)', or
apt-get install liblwp-simple-perl
If the preceding fails, try to implement a separate Perl environment in which to use CPAN modules not present in the system-wide libraries. Consider local::lib or Perlbrew for this;
Only if the above options don't apply, use cpanm -l <directory> to load the module to a system-wide directory.
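For the local::lib route mentioned above, a sketch of what activation looks like. In practice you would generate these lines with eval "$(perl -Mlocal::lib)" rather than writing them by hand; the ~/perl5 location is the local::lib default, and this is an illustration, not a substitute:

```shell
# Roughly what `eval "$(perl -Mlocal::lib)"` emits (it also sets
# PERL_MB_OPT/PERL_MM_OPT; paths shown are the local::lib defaults):
export PERL_LOCAL_LIB_ROOT="$HOME/perl5"
export PERL5LIB="$HOME/perl5/lib/perl5"
export PATH="$HOME/perl5/bin:$PATH"

# After that, cpanm installs land under ~/perl5 instead of the system tree:
# cpanm LWP::Simple
```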
I can't speak from experience with RPM/yum systems, but I have done a lot of work with Perl applications on Debian systems, and I highly recommend going with the system-packaged versions of CPAN modules if you can. I know a lot of people disagree, and historically they may have had good reason, but I've been doing it for a long time and find it works very well.
In the Debian world there are an enormous number of Perl modules in pre-packaged form and if you happen to need one that isn't packaged you can build your own package with dh-make-perl and put it in your local apt repository. Being able to run apt-get install your-application and have it pull in all the required dependencies is a real time saver when your code is moving through Dev -> Staging/UAT -> Production workflows. It also gives you confidence that the version of a particular module you're deploying to production is the same as the one you tested in UAT.
One thing you absolutely should not do is use cpanm or the cpan shell as root to install modules into the system directories. If you decide to install direct from CPAN, then use local::lib to install the modules in an application-specific lib directory.
[Edit] Some sample commands as requested:
On a Debian-based system, you would first install the dh-make-perl tool:
sudo apt-get install dh-make-perl
Then to download a package from CPAN and build it into a .deb file you would run a command like this*:
dh-make-perl --build --cpan Algorithm::CouponCode
You could install the resulting .deb file with:
sudo dpkg -i libalgorithm-couponcode-perl_1.005-1_all.deb
Managing your own apt repository is a whole other topic. In my case I'd copy the .deb to an appropriate directory on the local apt server and run a script to update the index (I think our script uses dpkg-scanpackages).
Note that in my opening paragraph above I recommend using system packages "if you can". To be clear, I meant the case where most of the modules you want are already packaged by Debian. The example above did not build packages for any dependencies. If your app involves installing modules with long dependency chains that are not in Debian already, then using cpanm and local::lib will simplify the install. But then you shoulder the burden of repeating that as your code advances through staging to production servers. And you may need to use cpanfile or carton to make sure you're getting the same versions at each step.
* one gotcha: if you have previously set up local::lib so that cpan installs go into a private directory (e.g.: /home/user/perl5) then that will affect the pathnames used in the .deb produced by dh-make-perl. To avoid that, run this before dh-make-perl:
unset PERL5LIB PERL_LOCAL_LIB_ROOT PERL_MB_OPT PERL_MM_OPT
Your system's perl was put there for your system's use. The folks that maintain your distribution will update it when they see fit, to another version that suits the needs of your system. Using your system's package manager to manage it is really your best option.
Feel free to use it, but if you need a different version, for whatever reason, you are best rolling your own into a separate location. When maintaining your own perl install, use CPAN.

How can I tell if Mono is installed properly on Linux?

I asked IT to install Mono on CentOS using the following commands:
$ yum install bison gettext glib2 freetype fontconfig libpng libpng-devel libX11 libX11-devel glib2-devel libgdi* libexif glibc-devel urw-fonts java unzip gcc gcc-c++ automake autoconf libtool make bzip2 wget
$ cd /usr/local/src
$ wget http://download.mono-project.com/sources/mono/mono-3.2.5.tar.bz2
$ tar jxf mono-3.2.5.tar.bz2
$ cd mono-3.2.5
$ ./configure --prefix=/opt/mono
$ make && make install
However, when I run mono myapp.exe I get
-bash: mono: command not found
I know nothing about Linux - I feel like I'm in Japan. Assuming Linux has a path variable or something like it, maybe mono isn't in the path?
I can't even find an executable called mono in /usr/local/src, just a mono folder. Mind you, I can't work out how to even search for a file, so I might not be looking properly.
How can I tell whether it's installed correctly? Maybe it's just not available to the non-admin account I use?
I'm lost. Help!
If mono were properly installed, you would not get a message like -bash: mono: command not found. If something is properly installed, its executable is typically somewhere in your $PATH.
On my system the executable is located at /usr/bin/mono (as most things are), but things may be different on an RPM-based system.
Your ./configure, however, got the prefix /opt/mono, so your executable is probably located under that special path. (And thus mono isn't properly installed.) Why did you install it there? Anyway. If this is the case, then you can execute it using something like
/opt/mono/bin/mono foo.exe
To find the executable below your prefix path you could use
find /opt/mono -name mono
to see all directory entries which are named exactly mono. One of those should be your executable.
If your program is properly installed you will usually find its executable using "which":
which program
like:
which firefox
/usr/bin/firefox
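If you want to keep the /opt/mono prefix, the alternative to typing the full path every time is to put its bin directory on your $PATH. A sketch (the /opt/mono location comes from the configure command above; add the export line to ~/.bashrc to make it permanent):

```shell
# Prepend the non-standard prefix's bin directory to the search path:
export PATH=/opt/mono/bin:$PATH

# Now `which mono` should resolve to /opt/mono/bin/mono,
# and `mono myapp.exe` works from any directory.
```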
There are many guides and tutorials out there that recommend installing in /opt/mono in order to not conflict with the mono supplied by official distribution packages (which would be installed in /usr).
However, what most of those guides miss is that /opt/mono is a non-standard prefix that will not be taken into account by the system when trying to find the executables (the system looks at the $PATH environment variable).
There are 2 possible solutions to this:
Instead of using the prefix /opt/mono use /usr/local (which is actually what ./configure or ./autogen.sh uses by default if you don't supply any prefix!). This prefix is normally included in the $PATH environment variable of most distributions.
Use your custom mono installation from a Parallel Environment. This is a bit more complicated to set up, but it's especially recommended for people who want to install two versions of mono in parallel (e.g. a very modern version, and a more stable version supplied by the official distribution packages) and have good control over when they use one or the other.
The reason that many internet tutorials recommend /opt/mono instead of /usr/local is actually because most of them are based on the wiki page (referenced above) that explains how to set up a Mono Parallel Environment, but they of course don't include the other steps to properly set up such an environment (they just borrowed the bit about how to call configure).

How can I change the directory where cabal stores the documentation

I installed a custom Haskell toolchain with the prefix $HOME/usr, so the compiler lives in $HOME/usr/bin/ghc and the documentation in $HOME/usr/share/doc/ghc/.... The toolchain consists of a ghc installation, a cabal installation and all the libs you need. I set up $PATH in a way, that all these programs are in it. There is no other installation of these tools on my system.
Now I tried to install some other libraries. But I always got the same error when cabal tried to install the documentation:
~$ cabal install --global binary
Resolving dependencies...
Configuring binary-0.5.0.2...
Preprocessing library binary-0.5.0.2...
Building binary-0.5.0.2...
... snip ...
Registering binary-0.5.0.2...
cabal: /usr/local/share/doc: permission denied
How can I tell cabal where the documentation should live? I don't want to give this information again and again in the shell, so the best would be a config file. I want to have all the haskell related stuff in my home tree, to avoid destroying my system with a wrong command.
Why are you installing with "--global"? By default this would put everything in /usr/local/. If you do a standard per-user install the docs will be installed into your home directory and it should work fine.
That being said, this is configurable via a file. The cabal config file is typically located at ~/.cabal/config. Here's the relevant section of mine:
install-dirs global
-- prefix: /usr/local
-- bindir: $prefix/bin
-- libdir: $prefix/lib
-- libsubdir: $pkgid/$compiler
-- libexecdir: $prefix/libexec
-- datadir: $prefix/share
-- datasubdir: $pkgid
-- docdir: $datadir/doc/$pkgid
-- htmldir: $docdir/html
-- haddockdir: $htmldir
You can make whatever changes you like, just be sure to uncomment the lines. There is also an "install-dirs user" section, which is used in per-user installs.
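As a concrete sketch of that edit, here is the uncommenting done with sed. This operates on a throwaway copy so nothing real is touched; in reality you would edit ~/.cabal/config in place, and /home/user is a placeholder for your actual home directory:

```shell
# Build a throwaway copy of the relevant config lines:
cfg=$(mktemp)
printf '%s\n' 'install-dirs user' '  -- docdir: $datadir/doc/$pkgid' > "$cfg"

# Uncomment the docdir line and point it into the home tree instead
# (/home/user is a placeholder; $pkgid is expanded by cabal, not the shell):
sed -i 's|-- docdir: .*|docdir: /home/user/usr/share/doc/$pkgid|' "$cfg"
cat "$cfg"
```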
I agree with the poster. Why is there no clear documentation for how to do
cabal install package --global
that prompts for sudo when permission is needed?
Doing
sudo cabal install package
is a bad idea because then you're building packages as root, and you are letting code fetched over the internet write to files owned by root (you will have to populate /root/.cabal or something like that).
Here is a good reason why one would want to do this:
If I install ghc and the haskell platform through my Linux package manager (there are good reasons for this ;), then when I do cabal install package
it will not recognize the packages that are installed globally.
Well, someone actually posted a(n almost annoyingly) detailed description of how to do global installations (with either --global or install-dirs global) without running into permission errors. The trick is to use root-cmd sudo in the cabal config file.
See,
http://jdgallag.wordpress.com/2011/05/14/cabal-install-to-global-using-sudo-but-do-not-build-as-root/

Make install, but not to default directories?

I want to run 'make install' so I have everything I need, but I'd like it to install everything in its own folder as opposed to the system's /usr/bin etc. Is that possible, even if the result references tools in /usr/bin etc.?
It depends on the package. If the Makefile is generated by GNU autotools (./configure) you can usually set the target location like so:
./configure --prefix=/somewhere/else/than/usr/local
If the Makefile is not generated by autotools, but distributed along with the software, simply open it up in an editor and change it. The install target directory is probably defined in a variable somewhere.
Since I don't know which version of automake you are using, you can use the DESTDIR environment variable.
Check the Makefile to be sure.
For example:
export DESTDIR="$HOME/Software/LocalInstall" && make -j4 install
make DESTDIR=./new/customized/path install
This quick command worked for me for the opencv release 3.2.0 installation on Ubuntu 16. The DESTDIR path can be relative as well as absolute.
Such redirection can also be useful when the user does not have admin privileges, as long as the user has write access to the DESTDIR location, e.g. /home//
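The effect of DESTDIR is easy to see with a toy Makefile. This sketch creates one in a temporary directory, so it touches nothing real; the install target mirrors the usual autotools convention of copying into $(DESTDIR)$(prefix):

```shell
# A minimal Makefile whose install target copies a file to $(DESTDIR)$(prefix):
dir=$(mktemp -d)
printf 'prefix = /usr/local\ninstall:\n\tmkdir -p $(DESTDIR)$(prefix)/bin\n\ttouch $(DESTDIR)$(prefix)/bin/hello\n' > "$dir/Makefile"

# Stage the install under a private directory instead of the real /usr/local:
make -C "$dir" DESTDIR="$dir/stage" install

# The file ends up at $dir/stage/usr/local/bin/hello
ls "$dir/stage/usr/local/bin/hello"
```

Note that DESTDIR prefixes the whole install path, so the /usr/local hierarchy is reproduced underneath it; that is what makes it useful for staging and packaging.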
It can depend on what is supported by the package you are trying to compile. If your makefile is generated by autotools, use:
--prefix=<myinstalldir>
when running the ./configure
Some packages also allow you to override it when running:
make prefix=<myinstalldir>
However, if you're not using ./configure, the only way to know for sure is to open up the makefile and check. It should be one of the first few variables at the top.
If the package provides a Makefile.PL - one can use:
perl Makefile.PL PREFIX=/home/my/local/lib LIB=/home/my/local/lib
make
make test
make install
* further explanation: https://www.perlmonks.org/?node_id=564720
I tried the above solutions; none worked.
In the end I opened the Makefile and manually changed the prefix path to the desired installation path, like below:
PREFIX ?= "installation path"
When I tried --prefix, make complained that there is no such option. However, perhaps some packages accept --prefix, which is of course a cleaner solution.
Try using INSTALL_ROOT:
make install INSTALL_ROOT=$INSTALL_DIRECTORY

Install multiple versions of a package

I want to install multiple versions of a package (say libX) from source. The package (libX) uses Autotools to build, so it follows the ./configure, make, make install convention. The version installed by default goes to /usr/local/bin and /usr/local/lib, and I want to install another version in /home/user/libX.
The other problem is that libX is a dependency of another package (say libY) which also uses autotools. How do I make libY point to the version installed in /home/user/libX? It could also be that it's a system package like ffmpeg, and I want to use the latest svn version for my own code and hence build it from source. What do I do in that case? What is the best practice here so that I do not break the system libraries?
I'm using Ubuntu 10.04 and Opensuse 10.3.
You can usually pass the --prefix option to configure to tell it to install the library in a different place. So for a personal version, you can usually run it as:
./configure --prefix=$HOME/usr/libX
and it will install in $HOME/usr/libX/bin, $HOME/usr/libX/lib, $HOME/usr/libX/etc and so on.
If you are building libY from source, the configure script usually uses the pkg-config tool to find out where a package is stored. libX should have included a .pc file in the directory $HOME/usr/libX/lib/pkgconfig which tells configure where to look for headers and library files. You will need to tell the pkg-config tool to look in your directory first.
This is done by setting the PKG_CONFIG_PATH to include your directory first.
When configuring libY, try
PKG_CONFIG_PATH=$HOME/usr/libX/lib/pkgconfig:/usr/local/lib/pkgconfig ./configure
man pkg-config should give details.
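Putting the two builds together, the sequence might look like this. A sketch only: libX and libY are the placeholder names from the question, and the paths follow the --prefix example above:

```shell
# Build libX into a private prefix under $HOME (run inside libX's source tree):
# ./configure --prefix=$HOME/usr/libX && make && make install

# Then configure libY, telling pkg-config to search libX's tree first,
# falling back to the system-wide /usr/local tree:
export PKG_CONFIG_PATH="$HOME/usr/libX/lib/pkgconfig:/usr/local/lib/pkgconfig"
# ./configure   # for libY

echo "$PKG_CONFIG_PATH"
```

Because your directory comes first in the path, pkg-config resolves libX to the private copy even when a system-wide version is also installed, which is what keeps the system libraries untouched.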
