I'm trying to find an easy way to manage packages compiled from source, so that when it comes time to upgrade I'm not in a huge mess trying to uninstall the old version and install the new one.
I found a utility called CheckInstall, but it seems to be quite old, and I was wondering whether it is a reliable solution before I begin using it.
http://www.asic-linux.com.mx/~izto/checkinstall/
I'd also simply like to know about any other methods/utilities that you use to handle these installations from source.
Whatever you do, make sure that you eventually go through your distribution's package management system (e.g. rpm for Fedora/Mandriva/RH/SuSE, dpkg for Debian/Ubuntu etc). Otherwise your package manager will not know anything about the packages you installed by hand and you will have unsatisfied dependencies at best, or the mother of all messes at worst.
If you don't have a package manager, then get one and stick with it!
I would suggest that you learn to make your own packages. You can start by having a look at the source packages of your distribution. In fact, if all you want to do is upgrade to version 1.2.3 of MyPackage, your distribution's source package for 1.2.2 can usually be adapted with a simple version change (unless there are patches, but that's another story...).
Unless you want distribution-quality packages (e.g. split library/application/debugging packages, multiple-architecture support etc) it is usually easy to convert your typical configure & make & make install scenario into a proper source package. If you can convince your package to install into a directory rather than /, you are usually done.
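As a minimal sketch of that configure & make & make install case, assuming an autotools-style package that honours the standard DESTDIR variable (the paths and package name here are only illustrative):
./configure --prefix=/usr
make
make DESTDIR=/tmp/mypkg-root install    # stage the files instead of touching /
# /tmp/mypkg-root now mirrors the final layout (usr/bin, usr/lib, ...),
# and that staging tree is what a spec file's %install step (or a deb build) packages up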
As for checkinstall, I have used it in the past, and it worked for a couple of simple packages, but I did not like the fact that it actually let the package install itself onto my system before creating the rpm/deb package. It just tracked which files got installed so that it could package them, which did not protect against unwelcome changes. Oh, and it needed root privileges to work, which is another main sticking point for me. And let's not go into what happens with statically linked core utilities...
Most tools of the kind seem to work that way, so I simply learnt to build my own packages The Right Way (TM) and let checkinstall and friends mess around elsewhere. If you are still interested, however, there is a list of similar programs here:
http://www.dwheeler.com/essays/automating-destdir.html
PS: BTW checkinstall was updated at the end of 2009, which probably means that it's still adequately current.
EDIT:
In my opinion, the easiest way to upgrade to the latest version of a package that is not readily available in a repository is to adapt your distribution's source package for the newest version it ships. E.g. for CentOS the source packages for the latest version are here:
http://mirror.centos.org/centos/5.5/os/SRPMS/
http://mirror.centos.org/centos/5.5/updates/SRPMS/
...
If you want to upgrade e.g. php, you get the latest SRPM for your distribution, e.g. php-5.1.6-27.el5.src.rpm. Then you do:
rpm -hiv php-5.1.6-27.el5.src.rpm
which installs the source package (just the sources - it does not compile anything). Then you go to the rpm build directory (on my Mandriva system it's /usr/src/rpm), copy the latest php source tarball into the SOURCES subdirectory and make sure it's compressed in the same way as the tarball that just got installed there. Afterwards you edit the php.spec file in the SPECS directory to change the package version and build the binary package with something like:
rpmbuild -ba php.spec
In many cases that's all it will take for a new package. In others things might get a bit more complicated - if there are patches or if there are some major changes in the package you might have to do more.
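To make that sequence concrete, here is a rough sketch of the whole upgrade, assuming the Mandriva-style build tree mentioned above (/usr/src/rpm; it is /usr/src/redhat or ~/rpmbuild elsewhere) and a hypothetical newer php tarball:
rpm -hiv php-5.1.6-27.el5.src.rpm       # unpacks the sources and the spec file into the build tree
cd /usr/src/rpm
cp ~/php-<new-version>.tar.gz SOURCES/  # the newer upstream tarball, compressed the same way as the old one
vi SPECS/php.spec                       # bump the Version: tag, check Source0: and the patches
rpmbuild -ba SPECS/php.spec             # builds the new binary (and source) packages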
I suggest you read up on the rpm and rpmbuild commands (their manpages are quite good, if a bit extensive) and check out the documentation on writing spec files. Even if you decide to rely on official backport repositories, it is useful to know how to build your own packages. See also:
http://www.rpm.org/wiki/Docs
EDIT 2:
If you are already installing packages from source, using rpm will actually simplify the building process in the long term, apart from maintaining the integrity of your system. The reason for this is that you won't have to remember the quirks of each package on your own ("oooh, right, now I remember, foo needs me to add -lbar to its CFLAGS"), as the build process will be in the .spec file, which you could imagine as a somewhat structured build script.
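To give an idea of what "somewhat structured build script" means, here is a bare-bones, hypothetical spec skeleton (package name, version and file list are invented):
Name:       foo
Version:    1.2.3
Release:    1
Summary:    Hypothetical example package
License:    GPLv2
Source0:    foo-%{version}.tar.gz
%description
Example package, only here to show the shape of a spec file.
%prep
%setup -q
%build
# package-specific quirks (extra CFLAGS, configure switches, ...) live here
%configure
make %{?_smp_mflags}
%install
make install DESTDIR=$RPM_BUILD_ROOT
%files
%{_bindir}/foo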
As far as upgrading goes, if you already have a .spec file for a previous version of the package, there are two main issues that you may encounter, but both exist whether you use rpm to build your package or not:
A patch that was applied to the previous version by the distribution does not apply any more. In many cases the patch has already been applied to the upstream package, so you can simply drop it. In others you may have to edit it - or I suppose if you deem it unimportant you can drop it too.
The package changed in some major way which affected e.g. the layout of the files it installs. You do read the release notes for each new version, don't you?
Other than these two issues, upgrading often boils down to just changing a version number in the spec file and running rpmbuild - even easier than installing from a tarball.
I would suggest that you have a look at the tutorials or at the source package for some simple piece of software such as:
http://mirror.centos.org/centos/5.5/os/SRPMS/ipv6calc-0.61-1.src.rpm
http://mirror.centos.org/centos/5.5/os/SRPMS/libevent-1.4.13-1.src.rpm
If you have experience in building packages from a tarball, using rpm to build software is not much of a leap really. It will never be as simple as installing a premade binary package, however.
I use checkinstall on Debian. It should not be so different on CentOS. I use it like this:
./configure
make
sudo checkinstall make install # fakeroot in place of sudo usually works, for more security
# then install the generated package
Related
My knowledge of Linux administration is limited, and hence I wanted to check here about the pros and cons of installing RHEL/CentOS Linux software using rpm packages versus installing from tar/zip files.
Thanks
a non-exhaustive list of pros and cons (a few of the corresponding rpm commands are sketched after the list):
rpm
intelligent dependency management
conflict checking
allow easy and clean uninstall
allow for upgrades / downgrades
list all files owned by a package
a central database with all packages installed, which files they own, their interdependencies
from source
you choose yourself all compiler flags
you can choose a custom installation path
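For illustration, here are a few of the rpm commands behind those points (the package name foo is hypothetical):
rpm -qa                        # list every installed package (the central database)
rpm -ql foo                    # list all files owned by package foo
rpm -qf /usr/bin/foo           # which package owns this file?
rpm -q --requires foo          # what does foo depend on?
rpm -U foo-1.2-1.x86_64.rpm    # upgrade (add --oldpackage to downgrade)
rpm -e foo                     # clean uninstall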
I have tried to explain the differences, pros and cons:
Tar
Basically, tar is the old way of handling software in Linux; it has been around since Linux was created.
Usually the tarball contains source code that needs to be compiled into binary form before we can use it.
Pros:
Using tar packages you gain more control over the programs that you install.
If you want to leave out certain portions, you can do that as you go, which gives you the upper hand.
Cons:
The main issue is the maintainability of the installed packages.
They are hard to manage. Once you install, there is no way to manage the software unless it is well documented. It is also hard to track versions, so you are left guessing which version of the software you have. The likely reason for this is the non-indexed nature of the files: they can be spread across your file system, which makes the software difficult to remove or upgrade.
Hard to automate.
It is also hard to automate, because of the complexity of maintaining the packages.
Below I have tried to explain how a tar file is compiled, to give a better understanding:
Prepare (set up) the environment for building
./configure
This script has lots of options that you may want to change, like --prefix or --with-dir=/foo. That means every system can have a different configuration. ./configure also checks for missing libraries that need to be installed; anything wrong here causes your application not to build. That's why distros have packages installed in different places: every distro thinks it's better to install certain libraries and files into certain directories. The instructions say to just run ./configure, but in practice you should almost always pass options to it.
Building the system
make
This is actually make all by default, and every makefile has different actions to do: some do the build, some run tests after building, some check out code from external SCM repositories. Usually you don't have to give any parameters, but again, some packages execute things differently.
Install to the system
make install
This installs the package in the place specified with configure. If you want, you can tell ./configure to point to your home directory; however, lots of configure options point to /usr or /usr/local. That means you actually have to use sudo make install, because only root can copy files to /usr and /usr/local.
Please go through the link below for more information on the above commands:
Why always ./configure; make; make install; as 3 separate steps?
RPM
The RPM Package Manager (RPM) is an open packaging system.
RPM distributes pre-compiled binary packages (as well as source packages) for an easy, one-command installation experience. RPM by itself does not manage dependencies or resolve conflicts; when combined with Yum or PackageKit, all the dependencies for the package are resolved.
RPM makes system updates easy. Installing, uninstalling and upgrading RPM packages can be accomplished with short commands. RPM maintains a database of installed packages and their files, so you can invoke powerful queries and verifications on your system. During upgrades, RPM handles configuration files carefully, so that you never lose your customisations, something you cannot accomplish with plain tar files.
RPM also has the ability to verify packages. If you deleted an important file for some package, you can verify the package; you will be notified of changes, if any, at which point you can reinstall the package if necessary. Any configuration files that you modified are preserved during reinstallation.
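For example (the package name foo is hypothetical):
rpm -V foo                                    # report any files of foo that no longer match the rpm database
rpm -Uvh --replacepkgs foo-1.2-1.x86_64.rpm   # reinstall the package; your modified config files are kept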
Pros:
Install, reinstall, remove, upgrade and verify packages
Use a database of installed packages to query and verify packages
Use metadata to describe packages, their installation instructions, and so on
Package pristine software sources into source and binary packages
Add packages to Yum repositories
Digitally sign your packages
Querying a package (if the package is on your local file system or after the package is installed)
Validating a package (checking a package has not been tampered with, before or after installation).
Cons:
Not as customisable as tar.
E.g. on usability, let's see how to install a package using tar versus RPM:
in Tar:
$ tar xvf package.tar
$ cd package
$ ./configure --prefix=PREFIX
$ make
$ make install
in RPM:
rpm -U package-2.4.x-1.i686.rpm
That simple!
It basically depends on usability and the purpose of your use.
Each of them has its own pros and cons, depending on how and for what we use it.
I know this is a long explanation, but I hope it gives you a clearer picture. There are more aspects left untouched, such as architecture and execution, which I am not confident enough to explain here.
In simple words, you can say that RPMs are prepackaged binaries. They're ready to go, and everything is done for you. But to install an rpm or deb you need to be root, with write permission all over the system, which leaves a serious security hole: you may be unknowingly installing a Trojan horse. Also, if the packages are broken, they may cause the installation to fail altogether.
I personally recommend using tar, as you are in more control. It is old school, I know, and that makes it a bit more difficult, but in my opinion it is the best way to go.
You can further refer to the link:
https://tldp.org/HOWTO/Software-Building-HOWTO-4.html
Currently I am trying to install an rpm, secured_soft_2.0.0.rpm, and I am unable to install it because we already have secured_soft_1.3.0 installed.
The requirement is that we need to have both versions installed.
Complexities:
These packages in turn have dependent rpms (a lot of them), and all these interdependent rpms also have versions. For example, secured_soft_1.3.0 works only with packages of version 1.3, and secured_soft_2.0.0.rpm works only with dependencies of version 2.0. So all these dependencies also need to be installed, and they have to be installed in parallel, without deleting the old ones.
Finally, both versions contain shared libraries, and these shared libraries do not have version numbers in their names.
#rpm -ivh secured_soft_2.0.0.rpm
error: Failed dependencies:
init-class >= 1.4.17.1-1 is needed by secured_soft_2.0.0.rpm
init-connection-interface >= 2.0.11.0 is needed by secured_soft_2.0.0.rpm
init-logger >= 2.0.11.0 is needed by secured_soft_2.0.0.rpm
init-security >= 2.0.11.0 is needed by secured_soft_2.0.0.rpm
As I have specified, we already have secured_soft_1.3.0.rpm installed, and the above dependencies are also available, but in a different version.
So we need to install the above dependencies, and we also need the old versions of the dependencies for the old rpms to keep working.
For example, secured_soft_2.0.0.rpm contains libArt.so, libSec.so and so on, which are copied to /usr/lib.
Similarly, secured_soft_1.3.0.rpm also contains libArt.so, libSec.so and so on, which are already present in /usr/lib.
I tried renaming the .so files, but I am still not able to install.
Is it possible to change the location of these .so files and get this working? Is there any way we can do it?
At the moment I am stuck here and would appreciate any advice on this.
Since the programs use the same filenames, and you need to put them on the same machine, you might be able to move the older version to another directory tree and make it work there.
You can do this with many applications which do not have compiled-in pathnames.
For instance,
install the older version (this sounds like where you are starting from)
use rpm -ql for each of the packages containing unversioned executables, libraries and associated files.
use tar to capture an archive of those files, relative to /usr (but omitting directories not owned by the packages).
create a new directory, e.g., /usr/local/myapp and untar the older version there.
update configuration files in the new location as needed
For applications such as this, I would run the program from a wrapper script that updates PATH (and perhaps sets LD_LIBRARY_PATH) to force the program to run from the new location; a rough sketch of these steps follows below. You can verify whether this works using tools such as strace and lsof, i.e., by looking at the files that the program opens.
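A rough sketch of the capture, relocation and wrapper steps, with hypothetical package and directory names (note that rpm -ql also lists directories, which you may want to filter out):
rpm -ql secured_soft | sed 's|^/usr/||' > files.txt    # files of the old package, relative to /usr; use the exact name from rpm -qa
tar -C /usr -cf old-soft.tar -T files.txt              # capture them
mkdir -p /usr/local/myapp
tar -C /usr/local/myapp -xf old-soft.tar               # unpack the old tree in its new home
cat > /usr/local/bin/secured_soft-1.3 <<'EOF'
#!/bin/sh
# hypothetical wrapper: make the relocated old version win
export PATH=/usr/local/myapp/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/myapp/lib:$LD_LIBRARY_PATH
exec /usr/local/myapp/bin/secured_soft "$@"
EOF
chmod +x /usr/local/bin/secured_soft-1.3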
Once you have the older version working properly in the new location, you can uninstall its rpms and install the new version of the application.
Caveat: If the newer package is copied from a newer version of the operating system, however, the task is likely to be beyond your ability, whether or not you choose the alternative approach of recompiling the newer packages to fit on the existing system.
Building new/custom packages is one route to recompiling the newer version. If you have the source RPMs for each part, that is a starting point (a rough sketch follows the list):
extract the files from the source-RPM, e.g., using a script such as unrpm (see for example HowTo: Extract an RPM Package Files Without Installing It), and
copy those extracted files to their as-expected locations in your build-tree, e.g., $HOME/rpmbuild/SOURCES and $HOME/rpmbuild/SPECS
modify the spec-file to use the alternative location
build the new/modified package using the modified spec-file.
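A rough sketch of those steps, using rpm2cpio and cpio in place of unrpm, with hypothetical file names:
rpm2cpio secured_soft-2.0.0.src.rpm | cpio -idmv    # extract the spec file, tarball and patches into the current directory
cp *.tar.* *.patch ~/rpmbuild/SOURCES/
cp *.spec ~/rpmbuild/SPECS/
vi ~/rpmbuild/SPECS/secured_soft.spec               # point the install paths/prefix at the alternative location
rpmbuild -ba ~/rpmbuild/SPECS/secured_soft.spec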
No, out of the box, you cannot.
I'd highly recommend looking into Docker, where you can throw each one into their own container and let them take care of all their dependency problems.
I have the 3.0.1 version of Alex installed on my /usr/bin. I think the Haskell Platform originally put it there (although I'm not 100% sure...).
Unfortunately, version 3.0.1 is bugged, so I need to upgrade it to 3.0.5. I tried using cabal to install the latest version of Alex, but cabal install alex-3.0.5 installed the executable in .cabal/bin in my home folder instead of in /usr/bin.
Do I just manually copy the executable to /usr/bin? (That sounds like a lot of trouble to do every time.)
Do I change my PATH environment variable so that .cabal/bin comes before /usr/bin? (I'm afraid that an "ls" executable or similar over on the cabal folder might end up messing up my system)
Or is there a simpler way to go at it in general?
I want to first point out the layout that works well for me, and then suggest how you might proceed in your particular situation.
What works well for me
In general, I think that a better layout is to have the following search path:
1. directories with important non-Haskell related binaries
2. directory that cabal install installs to
3. directory that binaries from the Haskell platform are in
This way, you can use cabal install to update binaries from the Haskell platform, but they cannot accidentally shadow some non-Haskell related binary.
(On my Windows machine, this layout is easy to achieve, because the binaries from the Haskell platform are installed in a separate directory by default. So I just manually adapt the search path and that's it. I don't know how to achieve it on other platforms).
Suggestion for your particular situation
In your specific situation with the Haskell platform binaries already installed together with the non-Haskell related binaries, maybe you can use the following layout for the search path:
1. directory containing links to some of the binaries in 3
2. directory with important non-Haskell related binaries and Haskell platform binaries
3. directory that cabal install installs to
This way, binaries from cabal install cannot accidentally shadow the important stuff in 2. But if you decide you want to shadow something from the Haskell platform, you can manually add a link to 1 (sketched below). If it's a soft link, I think you only have to do that once per program name, and then you can call cabal install for that program to update it. You could even look up which executables are bundled with the Haskell platform and do that once and for all.
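A small sketch of that link directory, with a hypothetical directory name and alex as the example program:
mkdir -p ~/haskell-overrides
ln -s ~/.cabal/bin/alex ~/haskell-overrides/alex    # shadow the platform's alex, once per program name
export PATH=$HOME/haskell-overrides:$PATH           # this is directory 1 in the layout above; put it in your shell profile
# a later "cabal install alex" just updates ~/.cabal/bin/alex, and the soft link keeps pointing at it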
On second thought, putting ~/.cabal/bin in front of /usr/bin in the PATH is simpler and is what most people do already.
It's also not a big deal, since only cabal will put files in .cabal/bin, so it should be predictable, with little risk of overwriting stuff.
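For example, in ~/.bashrc or equivalent (assuming cabal's default binary directory ~/.cabal/bin, as in the question):
export PATH=$HOME/.cabal/bin:$PATH    # cabal-installed binaries now win over /usr/bin
hash -r                               # make an already-running shell forget previously located binaries
which alex                            # should report ~/.cabal/bin/alex once alex-3.0.5 is installed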
I have a question about the structure of the source code from a cygport package.
Here is the contents of a Cygports source file:
the actual source bundle for the project (tar.gz, tar.bz2, etc.)
any number of *.patch files
a .cygport file
I am trying to build gedit-3.4.2 from cygports repository.
How does the .cygport file help me run the proper options in ./configure?
For instance, with gedit, if I don't specify --disable-spell it won't proceed due to an error. How do I get the list of ./configure options that were used to build the project when the cygport was built?
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
Here is the contents of gedit-3.4.2-1.cygport:
inherit python gnome2
DESCRIPTION="GNOME text editor"
PATCH_URI="3.4.2-cygwin.patch"
DEPEND="gnome-common gtk-doc
girepository(Gtk-3.0)
pkgconfig(enchant)
pkgconfig(gtksourceview-3.0)
pkgconfig(libpeas-gtk-1.0)"
PKG_NAMES="${PN} ${PN}-devel"
PKG_HINTS="setup devel"
gedit_CONTENTS="--exclude=gtk-doc --exclude=libgedit* etc/ usr/bin/ usr/lib/gedit/ ${PYTHON_SITELIB#/} usr/share/"
gedit_devel_CONTENTS="usr/include/ usr/lib/gedit/libgedit* usr/lib/pkgconfig/ usr/share/gtk-doc/"
DIFF_EXCLUDES="*.desktop.in *.schemas.in *-marshal.h"
CYGCONF_ARGS="--libexecdir=/usr/lib --enable-python"
KEEP_LA_FILES="none"
EDIT: Someone from the Cygwin Ports mailing list said:
"The configure options are
--libexecdir=/usr/lib --enable-python
Which is from CYGCONF_ARGS."
Here is the contents of a Cygports source file:
You'd do better to think of it as a Cygwin package source file.
cygport is simply a tool for automating the creation of Cygwin binary and source packages. It is the primary tool available, but unlike with some other packaging systems, there's really nothing forcing you to use it. It is quite possible to build a Cygwin package entirely by hand, since it is really nothing more than a tarball that Cygwin's setup.exe can blindly unpack into the Cygwin root directory (typically c:\cygwin) with the expectation that this will put the package's files in sensible locations.
Before cygport existed, people did build their own ad hoc packaging systems. Many Cygwin package maintainers still use these tools they created. (Yours truly included; two of my three packages use cygport, but the third still uses a custom build system.)
Ultimately, you want to read the cygport manual, in /usr/share/doc/cygport/manual.html.
(Yes, I know, "RTFM" answers are frowned on here. But, as one who currently maintains two cygport based packages in the official Cygwin package repository, please believe me when I tell you that the manual is still the single best resource available on this topic.)
How does the .cygport file help me run the proper options in ./configure?
As you found out through other resources, you'd first need to edit the CYGCONF_ARGS value in the .cygport file.
The simplest possible step after that is cygport gedit-3.4.2-1.cygport all. That attempts to rebuild all the binary packages in a single step. It also builds a new source package containing updated .cygport and patch files.
If something breaks in the all build process, it is usually faster to switch to using the sub-commands contained by all instead of completely restarting the process. The all step just runs prep, compile, install, package, and finish for you, in that order. For instance, if all fails during the compilation step, there's probably no need to repeat the prep step.
(It is exceptionally uncommon for cygport or a sane build system to wreck the build tree, forcing you to re-run prep. Far more commonly, you end up needing to re-do prep when you manually wreck the build tree while trying to get a new package to build for the first time and need to start over.)
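So a manual run of the individual sub-commands (the same ones that all chains together) might look like this:
cygport gedit-3.4.2-1.cygport prep       # unpack the source and apply the patches
cygport gedit-3.4.2-1.cygport compile    # configure (with CYGCONF_ARGS) and make
cygport gedit-3.4.2-1.cygport install    # install into a staging tree
cygport gedit-3.4.2-1.cygport package    # build the binary and source packages
# if compile fails, fix the problem and re-run from compile onwards; there is no need to repeat prep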
For instance, with gedit, if I don't specify --disable-spell it won't proceed due to an error.
You can probably fix that by installing the libaspell-devel package from the official Cygwin package repository with setup.exe.
Personally, I wouldn't disable any feature unless it meant installing unofficial packages, such as those from the Cygwin Ports project.[*] It is nice to have Cygwin Ports repository, but because it contains so many packages, installing one can end up creating an "install the world" situation: package A depends on packages B, C and D, and C depends on E, F, G, H, and G depends on I, J, K, and... Dependency hierarchies within the Cygwin package repo tend to be flatter and narrower than those in the Cygports repo.
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
You have guessed that you just add --prefix=/my/private/program/tree to CYGCONF_ARGS, I trust.
[*] If you are feeling confused about "Cygwin Ports" and cygport, the naming similarity is no coincidence. cygport is a tool created by Yaakov Selkowitz for himself when creating the Cygwin Ports package repository. Later, it became popular enough among other Cygwin package maintainers that it pushed out most of the competing build systems.
I have a libfoo-devel rpm that I can create, using the trick to override _topdir. Now I want to build a package "bar" which has a BuildRequires 'libfoo-devel". I can't seem to find the Right Way to get access to the contents of libfoo-devel without having to install it on the build host. How should I be doing it?
EDIT:
My build and target distros are both SuSE.
I prefer solutions that don't require mock, since I believe SuSE does not include it in its stock repo.
Subsequent EDIT:
I believe that the answer I seek is in the build package. Perhaps it's SuSE's answer to mock? Or is it the distributed version of the OBS service?
DESCRIPTION
build is a tool to build SuSE Linux RPMs in a safe and clean way. build will install a minimal SuSE Linux as build system into some directory and will chroot to this system to compile the package. This way you don't risk to corrupt your working system (due to a broken spec file for example), even if the package does not use BuildRoot.
build searches the spec file for a BuildRequires: line; if such a line is found, all the specified rpms are installed. Otherwise a selection of default packages are used. Note that build doesn't automatically resolve missing dependencies, so the specified rpms have to be sufficient for the build.
Note that if you really don't need libfoo-devel installed to build package bar the most sensible alternative would be to remove libfoo-devel from the BuildRequires directive (and maybe put the requirement where it belongs).
However, if you cannot do that for some reason, create a "development" rpm database. Basically it involves using rpm --initdb --root /path/to/fake/root. Then populate it with all of the "target packages" of your standard distro installation.
That's a lot of rpm --install --root /path/to/fake/root --justdb package-name.rpm commands, but maybe you can figure out a way to copy over your /var/lib/rpm/* database files and use those as a starting point. Once you have the alternative rpm database, you can fake the installation of the libfoo-devel package with a --justdb option. Then you'll be home free on the actual rpm build.
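A bare-bones sketch of that approach, with hypothetical paths and package names (--nodeps may be needed while the fake database is still sparsely populated):
rpm --initdb --root /path/to/fake/root          # create an empty rpm database under the fake root
rpm --install --root /path/to/fake/root --justdb --nodeps libfoo-devel-1.0-1.x86_64.rpm
rpm --root /path/to/fake/root -qa               # libfoo-devel should now be listed as installed there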
If neither mock nor the openSUSE Build Service are a viable choice then you will have to buckle down and install the package, either directly or in a chroot; the package provides files that the SRPM packager has decided are required to build, and hence is in the BuildRequires tag.