Converting a script/program into an installable tarball?

I am quite familiar with writing small programs or scripts, and I am also familiar with downloading a source tarball and installing it using ./configure && make && make install (or, failing that, doing whatever the INSTALL or README files tell me to do). However, I don't know how to take a script that I've written and package it up (along with its supporting files) so that it can be installed as above.
I've tried Googling, but I'm not sure what search terms to use. If I try something involving the word "package," I get guides on how to convert a tarball to a deb/rpm/miscellaneous-distro-specific-package-format, which is not what I want.
I guess I need to use something called a build system, which might be called autotools, unless it's actually cmake, rake, scons, or something else?
Also, if my program requires some supporting files, how might I need to modify my program so that it can locate those files once they have been installed in /usr/share, /usr/lib, and wherever else they end up after installation?
Basically, I'm looking to graduate from writing scripts to writing distributable programs.

Answering your question "How do I learn how to use autotools to distribute my program as a tarball?"
http://www.gnu.org/software/autoconf/manual/
http://www.gnu.org/software/automake/manual/ and
http://sources.redhat.com/autobook/ (Outdated but still useful)
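To address the supporting-files part of the question directly: a common pattern is to substitute the installed data directory into the script at build time, so the script knows where its files ended up. A minimal hand-rolled sketch, with all names hypothetical (recipe lines must start with a tab):

# Makefile for installing a script plus its data files
prefix  ?= /usr/local
bindir   = $(prefix)/bin
datadir  = $(prefix)/share/myscript

all: myscript

# myscript.in refers to its data files via the @DATADIR@ placeholder;
# the real path is baked in at build time
myscript: myscript.in
	sed 's|@DATADIR@|$(datadir)|g' myscript.in > myscript
	chmod +x myscript

install: all
	install -d $(DESTDIR)$(bindir) $(DESTDIR)$(datadir)
	install -m 755 myscript $(DESTDIR)$(bindir)/myscript
	install -m 644 data/* $(DESTDIR)$(datadir)/

clean:
	rm -f myscript

.PHONY: all install clean

Autoconf/Automake automate exactly this kind of substitution (and give you configure, dist, uninstall, and the rest), which is what the manuals above walk you through.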

Related

"make install" - Changing output destination for all builds

I am doing Linux development on a few machines, mainly Slackware 13.37 and Ubuntu 12.04. I am testing and validating the results of a few simple makefiles, and want to confirm the output of make install. However, before I go ahead testing this, I want to know if there is a portable means of changing the default output destination for make install for any makefile.
I would prefer if I could somehow stage my output, so all output goes to, for example:
/test/bin
/test/lib
/test/usr/bin
instead of:
/bin
/lib
/usr/bin
I know that in QNX development environments, for example, I can set environment variables like QCONF_OVERRIDE and INSTALL_ROOT_nto, and guarantee that no makefile is able to install anywhere other than a subdirectory of /test for example. Is there a similar mechanism for GCC on Ubuntu that just requires setting some environment variables in my ~/.bashrc file? I do all my work via command-line and VIM anyways, so I'm not worried about the case where a pretty IDE doesn't understand these environment variables due to them not being in a .kderc, .gnomerc, or equivalent.
Thank you.
Short answer: no.
Long answer:
There isn't a way to set the output destination for any Makefile; the Makefile or some other part of the build system has to be designed to make it possible. make is a very simple tool because it's intended to function identically across a wide variety of platforms. Consequently, it doesn't really use environment variables that aren't referenced in the Makefile itself. This is good for avoiding environment pollution and for keeping things less magic, but bad for achieving your goal.
A bit of context: things are a bit more complicated in part because, unlike the QNX development environment (a largely homogeneous cross-compilation environment), a large portion of software that uses make (I'm assuming GNU make, but this applies to other versions as well) is written for a heterogeneous build and run environment: it may be designed to build for different distributions, operating systems (Linux, MS Windows, Mac OS X, FreeBSD, etc.), and even hardware architectures (x86, arm, mips, power, sparc, sh, etc.). Sometimes you're building for the same system and sometimes for a different one. Unsurprisingly, there isn't really a standard way to define an install path across such a variety of systems.
Basile mentioned in his answer that for programs that use the GNU build system, you can use ./configure --prefix=/test. This will typically work for simple programs that use Autotools. For more complicated Autotools-based programs, things can get trickier and more options come into play:
Cross-compiling? You might want your prefix set to where the software will live on the target system (maybe /usr/local), and then stage the files with make install DESTDIR=/test.
Does your build system expect dependencies in the prefix directory, but you want to find them elsewhere? Better run ./configure --help to see what other options there are for providing alternate paths (often --with-foo=/prefix/of/foo).
I'm sure there's more that I'm forgetting right now, but I think you get the picture.
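To make the Autotools case concrete, a typical staged install for such a build looks something like this (paths hypothetical):

./configure --prefix=/usr/local   # prefix = where it will live on the target
make
make DESTDIR=/test install        # files are staged under /test/usr/local/...

The compiled-in paths still point at /usr/local, so the staged tree works once it is copied to the target's root.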
Keep in mind this only applies to projects that use Autotools. Other projects may have other mechanisms (perhaps a variable to set or a configuration file to edit), so ultimately your best bet is to read the project documentation, and failing that, the Makefile. Fun, eh?
P.S. Having variables defined in the environment is different from passing them as command arguments to make, i.e. SPAM="alot" make target is different from make target SPAM="alot": the latter will override makefile variables. See the GNU make docs on Variables from the Environment.
Just change the prefix variable in the Makefile:
prefix=/test
then run
make install
Alternatively, you can set the prefix on the command line when installing:
make prefix=/test install
See http://www.gnu.org/prep/standards/html_node/Directory-Variables.html
At least for GNU software (using autotools) you should configure your software with
./configure --prefix=/test
just using (without a specific --prefix to configure before)
make prefix=/test install
usually won't work correctly, because some paths are compiled into the program as constants, so the program will still look under the original prefix at run time.
You could also use make install DESTDIR=/tmp/dest and then copy /tmp/dest to /test, but that won't work correctly either, for the same reason.
For example, my /usr/local/bin/emacs binary contains /usr/local/share/emacs/24.3.50/lisp as a string (checked with the strings /usr/local/bin/emacs command), and the /usr/local/ part of that path is the configure-d prefix.
BTW, you could have a chroot-ed environment to test your applications for various distributions.

Porting VisualStudio (2008) Project to Linux

I'm trying to port a large Visual Studio (2008) project to Linux.
Does anybody know if there is a way to easily "transform" the .vcproj file into a Makefile?
Easiest would be to just learn how to write your own Makefile. It's quite simple.
But other than that you could try http://www.codeproject.com/Articles/28908/Tool-for-Converting-VC-2005-Project-to-Linux-Makef
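For the first suggestion, a minimal hand-written Makefile for a small C++ project might look like this; the names are hypothetical, and recipe lines must start with a tab:

CXX      ?= g++
CXXFLAGS ?= -O2 -Wall

SRCS := $(wildcard src/*.cpp)
OBJS := $(SRCS:.cpp=.o)

# GNU make's built-in rules handle the .cpp -> .o compilation
myapp: $(OBJS)
	$(CXX) $(CXXFLAGS) -o $@ $(OBJS)

clean:
	rm -f $(OBJS) myapp

.PHONY: clean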
Maybe this can help you, but you will need to deal with your outputs in the original code yourself:
Make-It-so
http://code.google.com/p/make-it-so/
or
sln2mak
http://www.codeproject.com/Articles/28908/Tool-for-Converting-VC-2005-Project-to-Linux-Makef
I hope this helps.
You can use Winemaker, which is part of WINE: something all major distributions already include.
On Fedora, which uses yum, you can, as root, run yum install /usr/bin/winemaker to install it. This will probably also work on other yum-based operating systems, but you may have to provide another path if winemaker is packaged to install somewhere other than /usr/bin (which I doubt will be the case).
Once you have converted the project, consider using Autotools instead; in my experience it's by far the simplest build tool available and is very easy to learn and use. Just don't be scared by the poor documentation you will often find lying around. The only files you have to edit are configure.ac and Makefile.am, as the sketch below shows.
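To give a feel for how little is needed, here is a minimal pair of files for a hypothetical C program (project name and sources are made up):

# configure.ac
AC_INIT([myapp], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am
bin_PROGRAMS = myapp
myapp_SOURCES = main.c util.c

Running autoreconf --install then produces the familiar ./configure && make && make install experience.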

Generate package config file automagically using SCons, bjam, and/or CMake

Hey Stackoverflowers: one comment and one question.
Comment: You guys/girls are great, thanks for taking a look.
Question:
Can bjam, SCons, or CMake easily install a .pc file for library projects?
I find it really annoying that I have to maintain the same library dependency list in my scons/bjam/make file, the .pc file (for libraries), and rpm/deb package config files.
It would be nice if a build tool could manage the build and installation meta-data.
Thoughts?
Because SCons is such a flexible environment, yes you can in fact use it to manage the entire process from building to deliverable package.
Our build goes through several phases with SCons:
Build - resulting .o, .os, generated files, etc under ./build
Assembly - resulting exe, so/dll, binaries, etc under ./delivery
Packing & configuration - a set of deb/rpm/msi + configuration, etc under ./package
It isn't all out of the box, and requires you to write some Python code, find tools, etc., but it does work pretty well for us.
Our project is C, C++, Java, & Python building dozens of binary targets for a distributed system with multiple delivery targets for different machine installs on Windows, Ubuntu and Redhat Linux.
Again, be prepared to have to customize your scripts and write custom builders though to wrap different processes.
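For reference, the .pc file the question asks about is just a short template; a custom SCons builder (or Autoconf's AC_CONFIG_FILES, or CMake's configure_file) can fill in the placeholders at build or install time. A hypothetical sketch:

# libfoo.pc.in -- placeholders filled in by the build system
prefix=@prefix@
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: libfoo
Description: Hypothetical example library
Version: 1.2.3
Libs: -L${libdir} -lfoo
Cflags: -I${includedir}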

Create setup for Linux C project

I want to create a setup for my project so that it can be installed on any pc without installing the header files.
How can I do that?
There are two general ways to distribute programs:
Source Distribution (source code to be built). The most common way is to use GNU autotools to generate a configure script so that your project can be installed by doing ./configure && make install
Binary Distribution (prebuilt). Instead of shipping source, you ship binaries. There are a couple of competing standards although the two main ones are RPM and DEB file.
You just changed your question (appreciated, it was kind of vague), so my answer no longer applies...
make sure you have a C compiler
I'd be surprised if you didn't, Linux normally has one
find an editor you are comfortable with
vi and emacs are the classics
write your first program and compile
learn about makefiles
learn about sub projects and libraries
In many respects, your question is too vague to be answerable. You will need to describe more what you have in mind. All else apart, if you are using an integrated development environment (IDE), then what you do should be coloured strongly by what the IDE encourages you to do. (Fighting your IDE is counter-productive; I've just never found an IDE that doesn't make me want to fight it.)
However, for a typical project on Linux, you will create a directory to hold the materials. For a small project (up to a few thousand lines of code in a few - say 5-20 - files), you might not need any more structure than a single directory. For bigger projects, you will segregate sub-sections of the project into separate sub-directories under the main project directory.
Depending on your build mechanisms, you may have a single makefile at the top of the project hierarchy (or the only directory in the 'hierarchy'). This goes in line with the 'Recursive Make Considered Harmful' paper (P Miller). Alternatively, you can create a separate makefile for each sub-directory and the top-level makefile simply coordinates builds across directories.
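A minimal sketch of that non-recursive (inclusive) style, with hypothetical names (recipe lines must start with a tab):

# Makefile (top level) -- a single make instance sees the whole project
SRCS :=
include src/module.mk     # each fragment just appends to SRCS
include lib/module.mk

OBJS := $(SRCS:.c=.o)

myapp: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# src/module.mk
SRCS += src/main.c src/util.c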
You should also consider which version control system (VCS) you will use.

Why use build tools like Autotools when we can just write our own makefiles?

Recently, I switched my development environment from Windows to Linux. So far, I have only used Visual Studio for C++ development, so many concepts, like make and Autotools, are new to me. I have read the GNU makefile documentation and got almost an idea about it. But I am kind of confused about Autotools.
As far as I know, makefiles are used to make the build process easier.
Why do we need tools like Autotools just for creating the makefiles? Since everyone knows how to create a makefile, I don't see the real use of Autotools.
What is the standard? Do we need to use tools like this or would just handwritten makefiles do?
You are talking about two separate but intertwined things here:
Autotools
GNU coding standards
Within Autotools, you have several projects:
Autoconf
Automake
Libtool
Let's look at each one individually.
Autoconf
Autoconf easily scans an existing tree to find its dependencies and creates a configure script that will run under almost any kind of shell. The configure script allows the user to control the build behavior (e.g. --with-foo, --without-foo, --prefix, --sysconfdir, etc.) as well as doing checks to ensure that the system can compile the program.
Configure generates a config.h file (from a template) which programs can include to work around portability issues. For example, if HAVE_LIBPTHREAD is not defined, use forks instead.
I personally use Autoconf on many projects. It usually takes people some time to get used to m4. However, it does save time.
You can have makefiles inherit some of the values that configure finds without using automake.
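For example, a sketch of a hand-written Makefile.in: list it in configure.ac's AC_CONFIG_FILES and configure will substitute its results into the generated Makefile, with no Automake involved (the values shown are standard Autoconf output variables; the program name is hypothetical, and recipe lines need a tab):

# Makefile.in -- configure rewrites the @...@ values into Makefile
CC      = @CC@
CFLAGS  = @CFLAGS@
prefix  = @prefix@
bindir  = $(prefix)/bin

myprog: main.o
	$(CC) $(CFLAGS) -o $@ main.o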
Automake
Given a short template that describes which programs will be built and which objects need to be linked to build them, Automake automatically creates Makefiles that adhere to the GNU coding standards. This includes dependency handling and all of the required GNU targets.
Some people find this easier. I prefer to write my own makefiles.
Libtool
Libtool is a very cool tool for simplifying the building and installation of shared libraries on any Unix-like system. Sometimes I use it; other times (especially when just building static link objects) I do it by hand.
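With Automake, bringing in Libtool is only a couple of lines; a hypothetical Makefile.am for a shared library (configure.ac additionally needs LT_INIT):

lib_LTLIBRARIES = libfoo.la
libfoo_la_SOURCES = foo.c bar.c
# current:revision:age -- libtool's interface versioning, not the release number
libfoo_la_LDFLAGS = -version-info 1:0:0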
There are other options too, see StackOverflow question Alternatives to Autoconf and Autotools?.
Build automation & GNU coding standards
In short, you really should use some kind of portable build configuration system if you release your code to the masses. What you use is up to you. GNU software is known to build and run on almost anything. However, you might not need to adhere to such (and sometimes extremely pedantic) standards.
If anything, I'd recommend giving Autoconf a try if you're writing software for POSIX systems. Just because Autotools produce part of a build environment that's compatible with GNU standards doesn't mean you have to follow those standards (many don't!) :) There are plenty of other options, too.
Edit
Don't fear m4 :) There is always the Autoconf macro archive. Plenty of examples and drop-in checks. Write your own or use what's tested. Autoconf is far too often confused with Automake. They are two separate things.
First of all, the Autotools are not an opaque build system but a loosely coupled tool-chain, as tinkertim already pointed out. Let me just add some thoughts on Autoconf and Automake:
Autoconf is the configuration system that creates the configure script based on feature checks that are supposed to work on all kinds of platforms. A lot of system knowledge has gone into its m4 macro database during the 15 years of its existence. On the one hand, I think the latter is the main reason Autotools have not been replaced by something else yet. On the other hand, Autoconf used to be far more important when the target platforms were more heterogeneous and Linux, AIX, HP-UX, SunOS, ..., and a large variety of different processor architectures had to be supported. I don't really see its point if you only want to support recent Linux distributions and Intel-compatible processors.
Automake is an abstraction layer for GNU Make and acts as a Makefile generator from simpler templates. A number of projects eventually got rid of the Automake abstraction and reverted to writing Makefiles manually because you lose control over your Makefiles and you might not need all the canned build targets that obfuscate your Makefile.
Now to the alternatives (and I strongly suggest an alternative to Autotools based on your requirements):
CMake's most notable achievement is replacing Autotools in KDE. It's probably the closest you can get if you want Autoconf-like functionality without the m4 idiosyncrasies. It brings Windows support to the table and has proven itself in large projects. My beef with CMake is that it is still a Makefile generator (at least on Linux), with all the inherent problems that entails (e.g. Makefile debugging, timestamp signatures, implicit dependency order).
SCons is a Make replacement written in Python. It uses Python scripts as build control files allowing very sophisticated techniques. Unfortunately, its configuration system is not on par with Autoconf. SCons is often used for in-house development when adaptation to specific requirements is more important than following conventions.
If you really want to stick with Autotools, I strongly suggest to read Recursive Make Considered Harmful (archived) and write your own GNU Makefile configured through Autoconf.
The answers already provided here are good, but I'd strongly recommend not taking the advice to write your own makefile if you have anything resembling a standard C/C++ project. We need the autotools instead of handwritten makefiles because a standard-compliant makefile generated by automake offers a lot of useful targets under well-known names, and providing all these targets by hand is tedious and error-prone.
Firstly, writing a Makefile by hand seems like a great idea at first, but most people will not bother to write more than the rules for all, install, and maybe clean. automake generates dist, distcheck, clean, distclean, uninstall and all these little helpers. These additional targets are a great boon to the sysadmin who will eventually install your software.
Secondly, providing all these targets in a portable and flexible way is quite error-prone. I've done a lot of cross-compilation to Windows targets recently, and the autotools performed just great, in contrast to most hand-written Makefiles, which were mostly a pain in the ass to compile. Mind you, it is possible to create a good Makefile by hand. But don't overestimate yourself; it takes a lot of experience and knowledge about a bunch of different systems, and automake creates great Makefiles for you right out of the box.
Edit: And don't be tempted to use the "alternatives". CMake and friends are a horror to the deployer because they aren't interface-compatible with configure and friends. Every half-way competent sysadmin or developer can do great things like cross-compilation, or simple things like setting a prefix, off the top of his head or with a simple --help with a configure script. But you are damned to spend an hour or three when you have to do such things with BJam. Don't get me wrong, BJam is probably a great system under the hood, but it's a pain in the ass to use because there are almost no projects using it and very little, incomplete documentation. autoconf and automake have a huge lead here in terms of established knowledge.
So, even though I'm a bit late with this advice for this question: Do yourself a favor and use the autotools and automake. The syntax might be a bit strange, but they do a way better job than 99% of the developers do on their own.
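As a concrete illustration of what those generated targets buy you, a typical Autotools release cycle is just (package name hypothetical):

autoreconf --install   # generate configure, Makefile.in, etc.
./configure && make
make distcheck         # builds, then unpacks, builds, tests, and cleans
                       # a scratch copy of myapp-0.1.tar.gz

If distcheck passes, the tarball it produced is what you ship.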
For small projects or even for large projects that only run on one platform, handwritten makefiles are the way to go.
Where autotools really shine is when you are compiling for different platforms that require different options. Autotools is frequently the brains behind the typical
./configure
make
make install
compilation and install steps for Linux libraries and applications.
That said, I find autotools to be a pain and I've been looking for a better system. Lately I've been using bjam, but that also has its drawbacks. Good luck finding what works for you.
Autotools are needed because Makefiles are not guaranteed to work the same across different platforms. If you handwrite a Makefile, and it works on your machine, there is a good chance that it won't on mine.
Do you know what unix your users will be using? Or even which distribution of Linux? Do you know where they want software installed? Do you know what tools they have, what architecture they want to compile on, how many CPUs they have, how much RAM and disk might be available to them?
The *nix world is a cross-platform landscape, and your build and install tools need to deal with that.
Mind you, the auto* tools date from an earlier epoch, and there are many valid complaints about them, but the several projects to replace them with more modern alternatives are having trouble developing a lot of momentum.
Lots of things are like that in the *nix world.
Autotools is a disaster.
The generated ./configure script checks for features that have not been present on any Unix system for the last 20 years or so. To do this, it spends a huge amount of time.
Running ./configure takes ages. Although modern server CPUs can have dozens of cores, and there may be several such CPUs per server, ./configure is single-threaded. We still have enough years of Moore's law left that the number of CPU cores will keep going up, so the time ./configure takes will stay approximately constant while parallel build times halve every couple of years. Actually, the time ./configure takes might even increase, as growing software complexity takes advantage of the improved hardware.
The mere act of adding just one file to your project requires you to run automake, autoconf and ./configure which will take ages, and then you'll probably find that since some important files have changed, everything will be recompiled. So add just one file, and make -j${CPUCOUNT} recompiles everything.
And about make -j${CPUCOUNT}: the generated build system is a recursive one. Recursive make has long been considered harmful.
Then when you install the software that has been compiled, you'll find that it doesn't work. (Want proof? Clone protobuf repository from Github, check out commit 9f80df026933901883da1d556b38292e14836612, install it to a Debian or Ubuntu system, and hey presto: protoc: error while loading shared libraries: libprotoc.so.15: cannot open shared object file: No such file or directory -- since it's in /usr/local/lib and not /usr/lib; workaround is to do export LD_RUN_PATH=/usr/local/lib before typing make).
The theory is that by using autotools, you could create a software package that can be compiled on Linux, FreeBSD, NetBSD, OpenBSD, DragonflyBSD and other operating systems. The reality? Every non-Linux system that builds packages from source has numerous patch files in its repository to work around autotools bugs. Just take a look at e.g. FreeBSD /usr/ports: it's full of patches. So, it would have been just as easy to create a small patch for a non-autotools build system on a per-project basis as to create one for an autotools build system. Or perhaps even easier, as standard make is much easier to use than autotools.
The fact is, if you create your own build system based on standard make (and make it inclusive rather than recursive, following the recommendations of the "Recursive make considered harmful" paper), things work much better. Your build time also goes down by an order of magnitude, perhaps even two if your project is a very small one of 10-100 C files and you have dozens of cores per CPU and multiple CPUs. It's also much easier to interface custom automatic code generation tools with a custom build system based on standard make than to deal with the m4 mess of autotools. With standard make, you can at least type a shell command into the Makefile.
So, to answer your question: why use autotools? Answer: there is no reason to. Autotools has been obsolete since commercial Unix became obsolete, and the advent of multi-core CPUs has made it even more so. Why programmers haven't realized that yet is a mystery. I'll happily use standard make on my build systems, thank you. Yes, it takes some work to generate the dependency files for C header inclusion, but that work is saved by not having to fight autotools.
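For what it's worth, the header-dependency generation mentioned above takes only a few lines with GNU make and GCC; a sketch (recipe lines must start with a tab):

CFLAGS += -MMD -MP          # emit .d dependency fragments while compiling

SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

prog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# pull in the generated per-file dependency lists, if they exist yet
-include $(OBJS:.o=.d)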
I don't feel I'm enough of an expert to answer this, but I can offer an analogy from my experience.
To some extent, it's similar to why we write embedded code in C (a high-level language) rather than in assembly language.
Both serve the same purpose, but the latter is lengthier, more tedious, more time-consuming, and more error-prone (unless you know the processor's ISA very well).
The same holds for the Automake tool versus writing your own Makefile: writing Makefile.am and configure.ac is much simpler than writing an individual project Makefile.