What should Linux/Unix 'make install' consist of?

I've written a C++ program (command line, portable code) and I'm trying to release a Linux version at the same time as the Windows version. I've written a makefile as follows:
ayane: *.cpp *.h
	g++ -Wno-write-strings -o ayane *.cpp
Straightforward enough so far; but I'm given to understand it's customary to have a second step, make install. So when I put the install: target in the makefile... what command should be associated with it? (If possible I'd prefer it to work on all Unix systems as well as Linux.)

Installation
A less trivial installer will copy several things into place, first ensuring that the appropriate paths exist (using mkdir -p or similar). Typically something like this (see the sample install target after the list):
the executable goes in $INSTALL_PATH/bin
any libraries built for external consumption go in $INSTALL_PATH/lib or $INSTALL_PATH/lib/yourappname
man pages go in $INSTALL_PATH/share/man/man1 and possibly other sections if appropriate
other docs go in $INSTALL_PATH/share/yourappname
default configuration files go in $INSTALL_PATH/etc/yourappname
headers for others to link against go in $INSTALL_PATH/include/yourappname
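For example, a minimal sketch of such an install target for the questioner's program (the man page ayane.1 is hypothetical; drop that line if you don't ship one):
install: ayane
	mkdir -p $(INSTALL_PATH)/bin $(INSTALL_PATH)/share/man/man1
	install -m 755 ayane $(INSTALL_PATH)/bin/
	install -m 644 ayane.1 $(INSTALL_PATH)/share/man/man1/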
Installation path
INSTALL_PATH is an input to the build system and usually defaults to /usr/local. This gives your users the flexibility to install under their $HOME without needing elevated permissions.
In the simplest case just use
INSTALL_PATH?=/usr/local
at the top of the makefile. The user can then override it by setting an environment variable in their shell or by passing it on the make command line.
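For example, to install under your home directory instead of /usr/local (any writable prefix works):
make install INSTALL_PATH=$HOME/.local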
Deinstallation
You also occasionally see make installs that build a manifest to help with de-installation. The manifest can even be written as a script to do the work.
Another approach is just to have a make uninstall that looks for the things make install places, and removes them if they exist.
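A minimal sketch of that approach, mirroring the install target above (same hypothetical man page):
uninstall:
	rm -f $(INSTALL_PATH)/bin/ayane
	rm -f $(INSTALL_PATH)/share/man/man1/ayane.1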

In the simplest case you just copy the newly created executable to /usr/local/bin. Of course, it's usually more complicated than that.
Notice that most of these operations require special rights, which is why make install is usually invoked using sudo.
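So the typical sequence from a source tree is:
make
sudo make install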

make install is usually the step that "installs" the binary into the correct place.
For example, when compiling Vim, make install may place it in /usr/local/bin
Not all makefiles have an install target.

Related

Add software's bin directory to PATH, or just symlink the executable into an existing bin, when installing software on Linux?

I'm not root on the Linux server, so I install software under $HOME/local; I've already added the $HOME/local/bin directory to the PATH environment variable in my .bashrc.
Some software installs this way, for example:
tar xvzf ncurses-5.9.tar.gz
cd ncurses-5.9
./configure --prefix=$HOME/local
make
make install
cd ..
So it installs directly into $HOME/local/bin.
But some software, such as sbt-1.2.1.zip (Java-based), just unpacks into a folder sbt containing three folders: bin, conf, and lib. Its bin holds one executable file named sbt, alongside java9-rt-export.jar, sbt-launch-lib.bash, sbt-launch.jar, and sbt.bat.
Here I wonder:
Should I just symlink this executable sbt file into $HOME/local/bin, then source my .bashrc?
Or, after unpacking, add a line to my .bashrc: export PATH="downloadpath/sbt/bin:$PATH"?
Since that bin holds just one executable, I'm not sure it's right to add the whole bin folder to PATH. When a software's bin folder contains several executables, adding its bin in .bashrc seems more convenient, but even then I'm not sure it's correct.
I'm not familiar with installing software; I usually know how but not why. I've shown two ways to install here (there are more). Are executables always placed in bin or src? Some software has no bin, just src, with no executable files in it...
Slurm can also use modules to install software, and conda is yet another way, but I want to confirm that the two traditional ways I mentioned can still be used alongside Slurm or conda.
Any suggestion, even a reminder about a single aspect, would be appreciated!
For precompiled software, or in general software that does not ship configure scripts or (C)Make files, it is often better to leave it in its target directory and adapt the *PATH environment variables (PATH for binaries, but also LD_LIBRARY_PATH and LIBRARY_PATH for libraries, CPATH for include files, and MANPATH for man pages).
The reason is that the software might be configured to read files at hardcoded paths relative to the position of the executable, such as libraries, etc.
In your case, you might also need to set the CLASSPATH environment variable to the directory with the jar files.
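For instance, a sketch of the .bashrc additions, assuming the archive was unpacked under a hypothetical $HOME/opt/sbt (in sbt's layout the jars sit next to the launcher in bin):
export PATH="$HOME/opt/sbt/bin:$PATH"
export CLASSPATH="$HOME/opt/sbt/bin:$CLASSPATH"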
To ease software installation, you can use tools such as EasyBuild, which can help and even create user modules just like the system modules installed by the system administrators.
In my opinion there is something wrong with your setup. If you don't have a root account on your server, isn't it better to test what you have to test in a safer environment, for example a VM or container on your development machine?
However, in your situation it may be better to start sbt from a separate bash script rather than modifying your .bashrc.
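For example, a minimal wrapper saved as $HOME/local/bin/sbt (downloadpath is the question's placeholder for wherever the archive was unpacked):
#!/bin/sh
# forward all arguments to the real launcher in the unpacked folder
exec "downloadpath/sbt/bin/sbt" "$@"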

Installing dependencies in configure script

I'm writing a program that requires LLVM, and thinking of using autotools to ship it on Linux, so from the user's viewpoint the process would look like the well-known ./configure && make && sudo make install.
With autotools, one normally relies on the system package manager to install dependencies. The problem is that, for whatever reason, this doesn't work with LLVM; on Ubuntu 14.04, apt-get thinks the latest version is 3.4, whereas a more recent version would actually be needed. Thus, I need to supply a script to download and build LLVM first (a local copy thereof, not interfering with any older version that might be on the system), a process which takes a few hours.
The most obvious place to put this process is at the start of configure. Is this considered normal and reasonable? Or is there a convention that configure should only contain the things autotools normally puts in it, and installing dependencies should be another script that the user runs first and separately? In the latter case, is there a convention regarding what that separate script should be called?
Don't install anything during configure. The script's name is "configure", not "install-dependencies".
Write a configure check and, if LLVM is missing, give the user an explanation of how to install it. If necessary, provide a separate script to download LLVM.
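A sketch of such a check in configure.ac, assuming you locate LLVM via llvm-config (the error message and the suggestion of a download script are illustrative):
AC_PATH_PROG([LLVM_CONFIG], [llvm-config], [no])
AS_IF([test "x$LLVM_CONFIG" = "xno"],
      [AC_MSG_ERROR([llvm-config not found; install LLVM or run the provided download script first])])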
It is good practice to run configure (and make) as a normal unprivileged user, not as root. So you may not even have permission to install anything; you would have to check whether sudo is installed, etc.
It may also happen that the system the user is installing on has no network connectivity (firewall, etc.), so your download would fail.

Set environment variables and enable core dumps in an autotools build

I am using Autotools for my current project, on Ubuntu and Linux Mint. With Autotools I can check a user's system for any libraries my project requires in order to function properly. Now I would like to check whether the user's system has core dumps enabled and, if not, execute the command ulimit -c unlimited to enable them. How and where do I specify this?
Also, once the user has executed make to compile the source code, they execute sudo make install to place the binaries in /usr/local/bin/MYPROJECT. I want to add the location of my project's binaries to the PATH environment variable, so that the user can execute any of the binaries in my project from a terminal without typing the full path. How and where do I specify this in Autotools?
I'm thinking this is something I would add in the configure.ac file, but I haven't found any examples on how I can do this. Any help would be appreciated.
It sounds as if you basically misunderstand what installation of a software package on Linux is about.
The job of autotools is to build a portable installation package of your software. When I install your package, it does not become your decision whether programs that crash will generate core dumps on my computer when I run them. It does not become your decision what PATH I use to invoke programs by unqualified name. These are my decisions, or defaults that I have accepted from my OS distribution.
If you execute ulimit -c unlimited, the command will in any case only apply to the shell in which it is invoked. It doesn't reconfigure the host system (!).
If you would like users to be able to invoke your program by unqualified name, the normal procedure is to make your package install it by default in /usr/local/bin, the place that Unix-like OSes traditionally add to a user's default PATH for finding locally installed programs. That is where autotools will configure it to be installed, by default. Change it only if you don't want your program to be in the user's default PATH.
And in any case, a user can decide where your software is installed by passing --prefix=/path/of/my/choice to the ./configure command. Unless you have some unavoidable reason not to, make your package installation use the defaults that everybody expects and leave it up to the installing user to change them.
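For example, an unprivileged user can redirect the whole installation under their home directory:
./configure --prefix=$HOME/.local
make
make install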
Bottom line: you are asking how to do installation actions with autotools that are not meant to be done with autotools, because they are not meant to be done by package installations.

GHC Install Without Root

So I'd like to set up a Linux machine for Haskell development with one huge caveat: no root privileges on this machine. We could of course get the admins to install GHC for us, eventually. However, in the long term we'd then need to hassle them whenever we want to upgrade, etc. So it's much better to do everything in userland, which also means we'll want to install the C libs we link against in userland as well, to keep everything as hassle-free as possible.
So, the question is, how, soup-to-nuts, would I go about doing a purely userland install of GHC? The machine will have gcc, and the usual toolchain. If necessary, we can start with a typical ghc install to get the ball rolling, but it would be nice not to.
Additionally, any tips on managing an environment like this would be appreciated, especially involving how such a setup can be manageable with multiple devs/accounts.
I did this too. I created a directory ~/usr and passed --prefix=$HOME/usr to all configure scripts. Using the Haskell Platform makes this process even smoother.
You obviously need a directory that all pertinent users have at least read permission on, say /home/foo, with subdirectories bin, lib, share, and .cabal. Then ./configure --prefix=/home/foo and make && make install, and make sure that /home/foo/* comes before /usr/* in everybody's PATH, LIBRARY_PATH, etc. You should probably start by installing gcc and C libs there, and once everything C is installed, install GHC.
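A sketch of the per-user environment setup this describes, with /home/foo as the shared prefix:
export PATH="/home/foo/bin:$PATH"
export LIBRARY_PATH="/home/foo/lib:$LIBRARY_PATH"
export LD_LIBRARY_PATH="/home/foo/lib:$LD_LIBRARY_PATH"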
I managed to install GHC through stack by following these instructions. It worked like a charm; the only additional thing I had to do was install the GMP library and add it to the LD_LIBRARY_PATH.
If you want to use stack to install ghc or ghci, follow this official manual:
download the tar.gz file from the release link (curl/wget can fetch it, or scp can upload a local copy to a remote server)
extract the file with tar xvzf, enter the folder, and test whether ./stack runs properly
add
export PATH="<stack_path>:$PATH"
to ~/.bashrc
Run source ~/.bashrc to pick up the change in the current terminal; new terminals will pick it up automatically.
install ghci locally
stack ghci
It will install ghci automatically and launch it.
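For example (stack setup is stack's standard command for installing a GHC privately under ~/.stack):
stack setup   # one-time: downloads GHC into ~/.stack, no root needed
stack ghci    # launches GHCi, installing GHC first if necessary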

Make install, but not to default directories?

I want to run 'make install' so I have everything I need, but I'd like it to install everything in its own folder rather than in the system's /usr/bin etc. Is that possible, even if it references tools in /usr/bin etc.?
It depends on the package. If the Makefile is generated by GNU autotools (./configure) you can usually set the target location like so:
./configure --prefix=/somewhere/else/than/usr/local
If the Makefile is not generated by autotools but distributed along with the software, simply open it up in an editor and change it. The install target directory is probably defined in a variable somewhere.
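For example, a hand-written Makefile often defines something like this near the top (the variable name varies by project):
PREFIX ?= /usr/local
With ?= you can also override it without editing, e.g. make install PREFIX=$HOME/Software/LocalInstall.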
Since I don't know which version of automake you're using, you can try the DESTDIR variable; see the Makefile to be sure. (DESTDIR prepends a staging directory to the configured install prefix, so it's meant for staged installs rather than permanently relocating a program.)
For example:
export DESTDIR="$HOME/Software/LocalInstall" && make -j4 install
make DESTDIR=./new/customized/path install
This quick command worked for me for the OpenCV 3.2.0 installation on Ubuntu 16. The DESTDIR path can be relative as well as absolute.
Such redirection can also be useful when the user does not have admin privileges, as long as the DESTDIR location gives that user the right access, e.g. /home//
It depends on what is supported by the package you are trying to compile. If your makefile is generated using autotools, use:
--prefix=<myinstalldir>
when running ./configure.
Some packages also allow you to override the prefix when running make:
make prefix=<myinstalldir>
However, if you're not using ./configure, the only way to know for sure is to open up the makefile and check. It should be one of the first few variables at the top.
If the package provides a Makefile.PL, one can use:
perl Makefile.PL PREFIX=/home/my/local/lib LIB=/home/my/local/lib
make
make test
make install
* further explanation: https://www.perlmonks.org/?node_id=564720
I tried the above solutions and none worked. In the end I opened the Makefile and manually changed the prefix to the desired installation path, like below:
PREFIX ?= "installation path"
When I tried --prefix, make complained that there is no such option. Some packages do accept --prefix, which is of course a cleaner solution.
Try using INSTALL_ROOT:
make install INSTALL_ROOT=$INSTALL_DIRECTORY
