Installing dkms onto an Ubuntu machine that doesn't have a compiler

I'm trying to install dkms onto machines that have no make or gcc.
I plan to push only binaries to those target machines.
On my build machine I plan to use dkms to build dkms-enabled modules and then use dkms mktarball ... --binaries-only to create tarballs for distribution.
I want to push those tarballs to target machines, and on those machines I
want to use dkms ldtarball, and so the target machines do need dkms,
but they don't need gcc (or make).
Build (host) and target machines run the same Ubuntu release.
apt-get install dkms on the target automatically brings in gcc.
Downloading the dkms .deb (apt-get download dkms) and installing it with dpkg --install --ignore-depends=gcc ... dkms.deb
does work, but leaves the dependency unresolved, so any future apt-get run (installing some other package, for instance) fails.
I could use the equivs package to create dummy installations of gcc and make, but this seems like an awful hack I'd prefer to avoid. It could also create problems if I ever want to actually install gcc on the target in the future.
There's a note about this in the dkms README (section 3), but no guidance on how to accomplish it.
"If you choose not to load module source on your system or if you choose not to load a
compiler ... DKMS can still be used to install modules."
(sorry if the tags are misleading ... there's no tag available for dkms)

Create a local dummy package which pretends to install gcc and any other deps you want to keep off your system. The equivs package can be used for this, but it's simple enough to do by hand as well.
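For example, with equivs (a minimal sketch; the control file name and field values are illustrative, and you would repeat this for make):
equivs-control gcc.control
# edit gcc.control: set at least
#   Package: gcc
#   Description: dummy package to satisfy the dkms dependency
equivs-build gcc.control
sudo dpkg -i gcc_1.0_all.deb   # the file name depends on the Package:/Version: fields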

Many thanks to Darik Horn for an excellent (offline) answer.
He suggested using /etc/apt/preferences.d to pin the unwanted packages. That solution looks promising and I will explore further (and post results here if possible).
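For reference, a pin along those lines might look like this (an untested sketch; a negative pin priority tells apt never to install the package, though whether apt will then still agree to install dkms is exactly what remains to be explored):
# /etc/apt/preferences.d/no-compiler
Package: gcc
Pin: release *
Pin-Priority: -1

Package: make
Pin: release *
Pin-Priority: -1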
I was able to find another solution at Ubuntu Forums and at Superuser which looks promising.
The steps the script performs can be done manually and are basically these:
mkdir a working directory
dpkg-deb -x dkms...deb into the working directory (extracts the package contents)
dpkg-deb --control dkms...deb into the working directory's DEBIAN subdirectory (extracts the control information)
Edit the Depends line in the extracted DEBIAN/control file
dpkg -b to rebuild the package as dkms-modified-...deb
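Spelled out as commands (the working directory and the package file name/version are illustrative):
mkdir dkms-tmp
dpkg-deb -x dkms_2.2.0.3_all.deb dkms-tmp                  # unpack the file contents
dpkg-deb --control dkms_2.2.0.3_all.deb dkms-tmp/DEBIAN    # unpack the control data
# edit dkms-tmp/DEBIAN/control: remove gcc and make from the Depends: line
dpkg -b dkms-tmp dkms-modified_2.2.0.3_all.deb             # rebuild the package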

Related

RPM Vs Tar based Installation

My knowledge of Linux administration is limited, so I wanted to ask here about the pros and cons of installing RHEL/CentOS Linux software from rpm packages versus installing from tar/zip files.
Thanks
A non-exhaustive list of pros and cons:
rpm
intelligent dependency management
conflict checking
easy and clean uninstalls
upgrades and downgrades
listing all files owned by a package
a central database of all installed packages, the files they own, and their interdependencies
from source
you choose all the compiler flags yourself
you can choose a custom installation path
I have tried to explain the differences, pros, and cons.
Tar
Tar is basically the old way of handling software on Linux; it has been around since Linux was created.
Usually a tarball contains source code that needs to be compiled into binaries before you can use it.
Pros:
Using tar packages you gain more control over the programs you install.
If you want to leave out certain portions, you can do that as you go, which gives you the upper hand.
Cons:
The main issue is the maintainability of the installed packages.
They are hard to manage. Once you install, there is no way to manage the software unless it is well documented. It is also hard to track versions, so you are left in the dark about which version you have. The likely reason is that the files are not indexed anywhere: they can be spread across your file system, which makes the software difficult to remove or upgrade.
Hard to automate.
It is also hard to automate, because of the complexity of maintaining the packages.
Below I have tried to explain how tar files are compiled, to give a better understanding.
Prepare (set up) the environment for building
./configure
This script has lots of options that you may need to change, like --prefix or --with-dir=/foo. That means every system has a different configuration. ./configure also checks for missing libraries that need to be installed; anything wrong here stops your application from building. That's why distros place packages in different locations: every distro thinks it is better to install certain libraries and files into certain directories. People say to just run ./configure, but in practice you should always review the options.
Building the system
make
This is actually make all by default, and every makefile has different actions to perform. Some do the build, some run tests after building, some check out code from external SCM repositories. Usually you don't have to pass any parameters, but again, some packages execute things differently.
Install to the system
make install
This installs the package in the place specified with configure. If you want, you can tell ./configure to point to your home directory. However, lots of configure options point to /usr or /usr/local, which means you actually have to use sudo make install, because only root can copy files to /usr and /usr/local.
Please go through the below link for more information on the above commands
Why always ./configure; make; make install; as 3 separate steps?
RPM
The RPM Package Manager (RPM) is an open packaging system.
RPM provides pre-compiled binary packages (as well as source packages) for an easy one-command installation experience. RPM by itself does not manage dependencies or resolve conflicts; when combined with Yum or PackageKit, it resolves all the dependencies for a package.
RPM makes system updates easy. Installing, uninstalling, and upgrading RPM packages can be accomplished with short commands. RPM maintains a database of installed packages and their files, so you can run powerful queries and verifications on your system. During upgrades, RPM handles configuration files carefully, so that you never lose your customisations, something you cannot accomplish with plain tar files.
RPM also has the ability to verify packages. If you delete an important file belonging to some package, you can verify the package and will be notified of changes, if any, at which point you can reinstall the package if necessary. Any configuration files that you modified are preserved during reinstallation.
Pros:
Install, reinstall, remove, upgrade and verify packages
Use a database of installed packages to query and verify packages
Use metadata to describe packages, their installation instructions, and so on
Package pristine software sources into source and binary packages
Add packages to Yum repositories
Digitally sign your packages
Query a package (whether the package file is on your local file system or the package is already installed)
Validate a package (check that it has not been tampered with, before or after installation); see the example commands below.
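A few illustrative commands for the query and verification features above (package names are placeholders):
rpm -qa                               # list all installed packages
rpm -qi somepackage                   # show metadata for an installed package
rpm -ql somepackage                   # list the files a package owns
rpm -qf /usr/bin/somefile             # find which package owns a file
rpm -V somepackage                    # verify installed files against the rpm database
rpm -K somepackage-1.0-1.x86_64.rpm   # check a package file's signature before installing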
Cons
Not as customisable as tar.
For example, on usability, here is how to install a package using tar versus RPM:
in Tar:
$ tar xvf package.tar
$ cd package
$ ./configure --prefix=PREFIX
$ make
$ make install
in RPM:
rpm -U package-2.4.x-1.i686.rpm
It's that simple!
It basically depends on usability and the purpose of your use.
Each of them has its own pros and cons, depending on how and for what you use it.
I know this is a long explanation; I hope it gives you a clear picture. There are more aspects left untouched, such as architecture and execution, which I am not confident enough to explain here.
In simple words, you can say that RPMs are prepackaged binaries. They're just ready to go; everything is done for you. But to install an rpm or deb you need to be root in order to have write permissions, which leaves a serious security hole in the system: you may be unknowingly installing a Trojan horse. Also, if the packages are broken, they may cause the installation to fail altogether.
I personally recommend using tar, as you stay in more control. It is old school, I know, and that's why it is a bit more difficult, but in my opinion it is the best way to go.
You can further refer to the link:
https://tldp.org/HOWTO/Software-Building-HOWTO-4.html

DPKG/APT-GET simulated install while "hiding" dependencies

I have some .deb files which I am currently modifying to have varying dependencies in the control file within the archive. I would like to be able to do simulated installations via:
sudo dpkg --install --simulate ./myFile.deb
The install script is meant to exercise some varying logging capabilities depending on certain combinations of dependency mismatches, etc. For example, one of my packages depends on the presence of libusb-1.0-0 > 1.0.0.16, and I already have the latest available version installed on my test system. Is it possible to pass a flag to dpkg so that it either:
Thinks that libusb is a different version than the one currently installed.
Thinks that libusb, or any other arbitrary library/package, is not already installed.
Thank you.
You could take snapshots of /var/lib/dpkg/ in the various states you wish to test, and then pass the path of those snapshots to dpkg with the --admindir=... flag.
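A sketch of that approach (the snapshot path is illustrative; the edit to the status file is manual):
# copy the dpkg database into a snapshot you can mutate freely
sudo cp -a /var/lib/dpkg /tmp/dpkg-baseline
# edit /tmp/dpkg-baseline/status: change the Version: field of libusb-1.0-0,
# or delete its stanza entirely so dpkg thinks it is not installed
sudo dpkg --admindir=/tmp/dpkg-baseline --simulate --install ./myFile.deb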

Make - Make Install and Linux update

I am new to Linux and trying to find my way around.
The following command is very useful:
sudo apt-get install <application>;
as it adds the application to the system's package list and automatically upgrades it when the update manager runs.
But I would like to get more knowledge on installing the programs from the .tar.gz archives as well.
So I do:
Extract the archive
./configure;
make;
make install;
I have two questions in this process:
1) I read in the forums that "make install" is not good if we are updating the binaries.
So should I just run "make" and then "install"?
2) Is there a way to add a program installed in this manner to the Linux software update list, so that I do not have to use the terminal for every new version that is released?
Installing programs from tarballs:
You really do not want to install packages from .tar.gz when they are in the repositories. It is much harder to update or remove them manually than it is with apt-get.
If you really have to compile the program yourself, use checkinstall instead of make install. This creates a package that you can install via the package manager and later remove using apt-get. This is much cleaner.
Also you may want to type
./configure && make && sudo checkinstall
instead of the commands you wrote. This way the program is only compiled if the configuration succeeded, and the package is only built if the compilation succeeded. With ; instead of &&, every step would be attempted regardless of whether the previous ones succeeded.
Graphical package managers
You can install your packages from GUI programs. Kubuntu, for example, uses Muon for this, but the programs vary between distributions.
make install is "not good" if you want to be able to easily remove the files associated with a package, as there is no log of the work it does and often no easy way to reverse the process. That has little to nothing to do with updating the software, though (though updates can certainly run into related issues).
No, you can't add manually compiled and installed software to your distribution's list of packaged software (other than through something like checkinstall, or by creating a package yourself), since that's exactly what you were avoiding in the first place.
That all being said, if the package exists for your distribution and you want to build it from source yourself, you can often just build a more-or-less official version from the distribution's source package, as sketched below.
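On Debian/Ubuntu, that workflow looks roughly like this (package name and version are placeholders; deb-src entries must be enabled in your apt sources):
apt-get source somepackage            # fetch and unpack the distribution's source package
sudo apt-get build-dep somepackage    # install its build dependencies
cd somepackage-1.2.3
dpkg-buildpackage -us -uc             # build unsigned .deb packages
sudo dpkg -i ../somepackage_1.2.3-1_amd64.deb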

Building rpm, overriding _topdir, but getting BuildRequires deps?

I have a libfoo-devel rpm that I can create, using the trick of overriding _topdir. Now I want to build a package "bar" which has BuildRequires: libfoo-devel. I can't seem to find the right way to get access to the contents of libfoo-devel without having to install it on the build host. How should I be doing it?
EDIT:
My build and target distros are both SuSE.
I prefer solutions that don't require mock, since I believe SuSE does not include it in its stock repo.
Subsequent EDIT:
I believe that the answer I seek is in the build package. Perhaps it's SuSE's answer to mock? Or the distributed version of the OBS (openSUSE Build Service)?
DESCRIPTION
build is a tool to build SuSE Linux RPMs in a safe and clean way. build will install a minimal SuSE Linux as build system into some directory and will chroot to this system to compile the package. This way you don't risk corrupting your working system (due to a broken spec file, for example), even if the package does not use BuildRoot.
build searches the spec file for a BuildRequires: line; if such a line is found, all the specified rpms are installed. Otherwise a selection of default packages is used. Note that build doesn't automatically resolve missing dependencies, so the specified rpms have to be sufficient for the build.
Note that if you really don't need libfoo-devel installed to build package bar the most sensible alternative would be to remove libfoo-devel from the BuildRequires directive (and maybe put the requirement where it belongs).
However, if you cannot do that for some reason, create a "development" rpm database. Basically this involves using rpm --initdb --root /path/to/fake/root, and then populating it with all of the "target packages" of your standard distro installation.
That's a lot of rpm --install --root /path/to/fake/root --justdb package-name.rpm commands, but maybe you can figure out a way to copy over your /var/lib/rpm/* database files and use those as a starting point. Once you have the alternative rpm database, you can fake the installation of the libfoo-devel package with a --justdb option. Then you'll be home free on the actual rpm build.
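A sketch of the fake-database approach (paths and file names are hypothetical, and whether rpmbuild accepts --dbpath depends on your rpm version):
# initialise an empty rpm database under a fake root
rpm --initdb --root /path/to/fake/root
# record libfoo-devel in that database only; no files are unpacked
rpm --install --root /path/to/fake/root --justdb --nodeps libfoo-devel-1.0-1.x86_64.rpm
# point the build at the fake database when it checks BuildRequires
rpmbuild --dbpath /path/to/fake/root/var/lib/rpm -ba bar.spec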
If neither mock nor the openSUSE Build Service is a viable choice, then you will have to buckle down and install the package, either directly or in a chroot; the package provides files that the SRPM packager has decided are required for the build, which is why it is listed in BuildRequires.

Install multiple versions of a package

I want to install multiple versions of a package (say libX) from source. The package (libX) uses Autotools to build, so it follows the ./configure, make, make install convention. The version installed by default goes to /usr/local/bin and /usr/local/lib, and I want to install another version of it in /home/user/libX.
The other problem is that libX is a dependency of another package (say libY) which also uses Autotools. How do I make libY point to the version installed in /home/user/libX? There is also the possibility that it is a system package like ffmpeg and I want to use the latest svn version in my source code, and hence build it from source. What do I do in that case? What is the best practice here so that I do not break the system libraries?
I'm using Ubuntu 10.04 and openSUSE 10.3.
You can usually pass the --prefix option to configure to tell it to install the library in a different place. So for a personal version, you can usually run it as:
./configure --prefix=$HOME/usr/libX
and it will install in $HOME/usr/libX/bin, $HOME/usr/libX/lib, $HOME/usr/libX/etc and so on.
If you are building libY from source, the configure script usually uses the pkg-config tool to find out where a package is stored. libX should have included a .pc file in the directory $HOME/usr/libX/lib/pkgconfig which tells configure where to look for headers and library files. You will need to tell the pkg-config tool to look in your directory first.
This is done by setting the PKG_CONFIG_PATH to include your directory first.
When configuring libY, try
PKG_CONFIG_PATH=$HOME/usr/libX/lib/pkgconfig:/usr/local/lib/pkgconfig ./configure
man pkg-config should give details.
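To confirm which copy of libX pkg-config will pick up before configuring libY (the module name libX is a placeholder and must match the name of the .pc file):
PKG_CONFIG_PATH=$HOME/usr/libX/lib/pkgconfig pkg-config --modversion libX
PKG_CONFIG_PATH=$HOME/usr/libX/lib/pkgconfig pkg-config --cflags --libs libX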
