Installing dependencies in configure script - Linux

I'm writing a program that requires LLVM, and thinking of using autotools to ship it on Linux, so from the user's viewpoint the process would look like the well-known ./configure && make && sudo make install.
With autotools, one normally relies on the system package manager to install dependencies. The problem is that, for whatever reason, this doesn't work with LLVM; on Ubuntu 14.04, apt-get thinks the latest version is 3.4, whereas a more recent version would actually be needed. Thus, I need to supply a script to download and build LLVM first (a local copy thereof, not interfering with any older version that might be on the system), a process which takes a few hours.
The most obvious place to put this process is at the start of configure. Is this considered normal and reasonable? Or is there a convention that configure should only contain the things autotools normally puts in it, and installing dependencies should be another script that the user runs first and separately? In the latter case, is there a convention regarding what that separate script should be called?

Don't install anything during configure. The script's name is "configure", not "install-dependencies".
Write a configure check, and if LLVM is missing, give the user an explanation of how to install it. If necessary, provide a separate script to download LLVM.
It is good practice to run configure (and make) as a normal unprivileged user, not as root, so you may not even have permission to install anything; you would have to check whether "sudo" is installed, etc.
It may also happen that the system the user is installing on has no network connectivity (firewall etc.), so your download would fail.
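Such a check can be a few lines in configure.ac. A minimal sketch, assuming llvm-config is used as the probe; the helper script name download-llvm.sh is hypothetical:

# Look for llvm-config on PATH; abort with guidance if it is missing.
AC_PATH_PROG([LLVM_CONFIG], [llvm-config], [no])
AS_IF([test "x$LLVM_CONFIG" = "xno"],
  [AC_MSG_ERROR([llvm-config not found. Install LLVM via your package manager,
   or run ./download-llvm.sh to build a local copy, then re-run configure.])])
# Report the version that was found.
LLVM_VERSION=`$LLVM_CONFIG --version`
AC_MSG_NOTICE([using LLVM $LLVM_VERSION])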

Related

How to automate the installation process on Linux-like operating systems?

I wrote a simple C application, but it has some dependencies. Instead of giving my friend (who is a Linux noob) commands to run in the terminal to install the dependencies, I would like to give him a single file that installs everything my application needs.
Btw, is a makefile a good idea, or would a bash script be more appropriate? I would like to ask for the root password only once, save it somewhere (in a script/makefile variable), and then simply use it to install all the dependencies. Any ideas how to do this in the most professional way?
This is what packages are for.
Depending on the target OS/distribution, you will need to build a DEB or an RPM package.
There are tools to simplify this process that allow declaring dependencies as well as running pre/post install/uninstall scripts.
The most professional way to distribute this is using a private repository.
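For a DEB, for example, dependencies are declared in the package's control file. A minimal sketch, where the package name, version, and dependencies are all hypothetical:

Package: myapp
Version: 1.0-1
Architecture: amd64
Maintainer: Your Name <you@example.com>
Depends: libc6 (>= 2.17), libsqlite3-0
Description: Example application with its dependencies declared.

Saved as myapp/DEBIAN/control (with the binaries staged under myapp/usr/local/bin), dpkg-deb --build myapp produces a .deb; installing that with apt install ./myapp.deb pulls in the declared dependencies automatically.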

Set environment vars and enable core dumps in autotools build

I am using Autotools for my current project, on Ubuntu and Linux Mint. With Autotools I can have it check a user's system for any libraries my project requires in order to function properly. Now I would like to check whether a user's system has core dumps enabled and, if not, execute the command ulimit -c unlimited to enable them. How and where do I specify this?
Also, once the user has executed the make command to compile the source code, they execute sudo make install to move the binaries to /usr/local/bin/MYPROJECT. I want to add the location of my project's binaries to the PATH environment variable, so that the user can execute any of the binaries in my project from a terminal without typing the full path. How and where do I specify this in Autotools?
I'm thinking this is something I would add to the configure.ac file, but I haven't found any examples of how to do this. Any help would be appreciated.
It sounds as if you basically misunderstand what installation of a software package on Linux is about.
The job of autotools is to build a portable installation package of your software. When I install your package, it does not become your decision whether programs that crash will generate core dumps on my computer when I run them. It does not become your decision what PATH I use to invoke programs by unqualified name. These are my decisions, or defaults that I have accepted from my OS distribution.
If you execute ulimit -c unlimited, the command will in any case only apply to the shell in which it is invoked. It doesn't reconfigure the host system (!).
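You can see the per-shell scope for yourself (assuming bash):

ulimit -c            # prints the current core dump size limit, often 0
ulimit -c unlimited  # enables core dumps in this shell and its children only
ulimit -c            # now prints "unlimited"; other sessions are unaffected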
If you would like users to be able to invoke your program by unqualified name, the normal procedure is to make your package install it by default in the place, /usr/local/bin, that Unix-like OSes traditionally add to a user's default PATH for finding locally installed programs. That is where autotools will configure it to be installed by default. Change it only if you don't want your program to be in the user's default PATH.
And in any case, a user can decide where your software is installed by passing --prefix=/path/of/my/choice to the ./configure command. Unless you have some unavoidable reason not to, make your package installation use the defaults that everybody expects, and leave it up to the installing user to change them.
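For example, an unprivileged user who wants everything under their home directory might run (the prefix path is just an illustration):

./configure --prefix=$HOME/.local
make
make install     # no sudo needed; the binary lands in $HOME/.local/bin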
Bottom line: you are asking how to do installation actions with autotools that are not meant to be done with autotools, because they are not meant to be done by package installations.

Make - Make Install and Linux update

I am new to Linux and trying my hand at it.
The following command is very useful:
sudo apt-get install <application>;
It adds the application to the Linux program list and automatically upgrades it when the update manager runs.
But I would also like to learn more about installing programs from .tar.gz archives.
So I do:
Extract the archive
./configure;
make;
make install;
I have two questions about this process:
1) I read on the forum that "make install" is not good if we are updating the binaries. So should I just do "make" and then "install" separately?
2) Is there a way to add a program installed in this manner to the Linux software update list, so that I do not have to use the terminal for every new version that is released?
Installing programs from tarballs:
You really do not want to install packages from .tar.gz when they are available in the repositories; a manual install is much harder to update or remove than one done with apt-get.
If you really have to compile the program yourself, use checkinstall instead of make install. This creates a package that you can install via the package manager and later remove using apt-get, which is much cleaner.
Also you may want to type
./configure && make && sudo checkinstall
instead of the commands you wrote. This way the program is only compiled if configuration succeeded, and the package is only built if compilation succeeded. With ; instead of &&, every step would be attempted regardless of whether the previous ones succeeded.
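A sketch of the whole flow; the package name and version here are illustrative (checkinstall will also prompt for them interactively):

./configure
make
sudo checkinstall --pkgname=myapp --pkgversion=1.0   # builds and installs a .deb
sudo apt-get remove myapp                            # cleanly removes it later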
Graphical package managers
You can install packages through GUI programs. Kubuntu, for example, uses Muon for this, but the programs vary between distributions.
make install is "not good" if you want to be able to easily remove the files associated with a package: there is no log of the work it does and often no easy way to reverse the process. That has little to nothing to do with updating the software, though (although updates can certainly run into related issues).
No, you can't add manually compiled and installed software to your distribution's list of packaged software (other than through something like checkinstall, or by creating a package yourself), since that is exactly what you avoided in the first place.
That all being said, if the package exists for your distribution and you want to build it from source yourself, you can often just build a more-or-less official version of the package from the distribution's source package.

GHC Install Without Root

So I'd like to set up a Linux machine for Haskell development with one huge caveat -- no root privs on this machine. We could of course get the admins to install GHC for us, eventually. However, in the long term we would then need to hassle them whenever we want to upgrade, etc., so it is much better to do everything in userland. That also means we'll want to install the C libs we link against in userland, to keep everything as hassle-free as possible.
So, the question is, how, soup-to-nuts, would I go about doing a purely userland install of GHC? The machine will have gcc, and the usual toolchain. If necessary, we can start with a typical ghc install to get the ball rolling, but it would be nice not to.
Additionally, any tips on managing an environment like this would be appreciated, especially involving how such a setup can be manageable with multiple devs/accounts.
I did this too. I created a directory ~/usr and passed --prefix=$HOME/usr to all configure scripts. Using the Haskell Platform makes this process even smoother.
You obviously need a directory that all pertinent users have at least read permission on. Say /home/foo, with subdirectories bin, lib, share, and .cabal. Then ./configure --prefix=/home/foo and make && make install, and make sure that /home/foo/* comes before /usr/* in everybody's PATH, LIBRARY_PATH, etc. You should probably start by installing gcc and the C libs there, and once everything C is installed, install GHC.
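A sketch of that environment setup, assuming the shared prefix /home/foo; each developer would add lines like these to their ~/.profile:

export PREFIX=/home/foo
export PATH=$PREFIX/bin:$PATH                          # programs installed there win
export LIBRARY_PATH=$PREFIX/lib:$LIBRARY_PATH          # link-time library search
export LD_LIBRARY_PATH=$PREFIX/lib:$LD_LIBRARY_PATH    # run-time library search
export C_INCLUDE_PATH=$PREFIX/include:$C_INCLUDE_PATH  # header search for gcc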
I managed to install GHC through stack by following these instructions. It worked like a charm; the only additional thing I had to do was install the GMP library and add it to LD_LIBRARY_PATH.
If you want to use stack to install GHC or GHCi, follow the official manual:
1) Download the tar.gz file from the release link (with curl or wget; scp can even copy a local file to a remote server).
2) Extract the file with tar xvzf, enter the folder, and test whether ./stack runs properly.
3) Add
export PATH="<stack_path>:$PATH"
to ~/.bashrc.
4) Every time you start a terminal, run source ~/.bashrc.
5) Install ghci locally:
stack ghci
It will install ghci automatically and launch it.
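Concretely, the steps above might look like the following; the release version and URL are illustrative, so check the stack releases page for the current ones:

wget https://github.com/commercialhaskell/stack/releases/download/v2.9.1/stack-2.9.1-linux-x86_64.tar.gz
tar xvzf stack-2.9.1-linux-x86_64.tar.gz
cd stack-2.9.1-linux-x86_64
./stack --version                                  # sanity check that the binary runs
echo "export PATH=\"$PWD:\$PATH\"" >> ~/.bashrc    # persist the PATH entry
source ~/.bashrc
stack ghci                                         # first run downloads a local GHC, then launches GHCi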

What should Linux/Unix 'make install' consist of?

I've written a C++ program (command line, portable code) and I'm trying to release a Linux version at the same time as the Windows version. I've written a makefile as follows:
ayane: *.cpp *.h
	g++ -Wno-write-strings -o ayane *.cpp
Straightforward enough so far; but I'm given to understand it's customary to have a second step, make install. So when I put the install: target in the makefile... what command should be associated with it? (If possible I'd prefer it to work on all Unix systems as well as Linux.)
Installation
A less trivial installer will copy several things into place, first ensuring that the appropriate paths exist (using mkdir -p or similar). Typically something like this:
the executable goes in $INSTALL_PATH/bin
any libraries built for external consumption go in $INSTALL_PATH/lib or $INSTALL_PATH/lib/yourappname
man pages go in $INSTALL_PATH/share/man/man1 and possibly other sections if appropriate
other docs go in $INSTALL_PATH/share/yourappname
default configuration files go in $INSTALL_PATH/etc/yourappname
headers for others to link against go in $INSTALL_PATH/include/yourappname
Installation path
The INSTALL_PATH is an input to the build system, and usually defaults to /usr/local. This gives your user the flexibility to install under their $HOME without needing elevated permission.
In the simplest case just use
INSTALL_PATH?=/usr/local
at the top of the makefile. Then the user can override it by setting an environment variable in their shell.
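Putting this together for the makefile in the question, a minimal install target might look like this (a sketch that installs only the single executable; recipe lines must be indented with a tab):

INSTALL_PATH?=/usr/local

install: ayane
	mkdir -p $(INSTALL_PATH)/bin
	install -m 0755 ayane $(INSTALL_PATH)/bin/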
Deinstallation
You also occasionally see make installs that build a manifest to help with de-installation. The manifest can even be written as a script to do the work.
Another approach is just to have a make uninstall that looks for the things make install places, and removes them if they exist.
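For the single-binary makefile above, that reduces to a one-liner (a sketch matching the install target sketched earlier):

uninstall:
	rm -f $(INSTALL_PATH)/bin/ayane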
In the simplest case you just copy the newly created executable into the /usr/local/bin path. Of course, it's usually more complicated than that.
Notice that most of these operations require special rights, which is why make install is usually invoked using sudo.
make install is usually the step that "installs" the binary into the correct place.
For example, when compiling Vim, make install may place it in /usr/local/bin.
Not all Makefiles have a make install target.
