I am trying to identify the installed software on CentOS servers. So far I have come up with the following two basic solutions:
Parse the file system for executables and libs
Run something like rpm -qa
The first one is time-consuming, while the second does not apply to all my cases. For example, I want to search for packages even if the server is not running and I can access only its file system as a remote volume, a snapshot, or an image.
What I am thinking is to try to parse the same database/files that rpm -qa reads its data from.
After running strace -o /tmp/rpm-strace.out rpm -qa, I found (without being sure) that /var/lib/rpm/Packages and /var/lib/rpm/Name are possible locations for that 'database', but I cannot parse either of those two files.
Does anyone know how to parse these files? Is there any alternative to achieve what I want?
Note: The whole idea is feasible under Ubuntu as this 'Unix & Linux' question describes.
Disclaimer: This question may be more suitable for the Server Fault site.
You really need to use rpm to parse the rpm database. If you have access to the filesystem, you could simply use chroot to run rpm inside the appropriate root context:
chroot /my/server/filesystem rpm -qa
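If the server's file system is attached as a volume, snapshot, or image, the same trick applies once it is mounted; a sketch, with the device name purely illustrative:

mount -o ro /dev/sdb1 /my/server/filesystem
chroot /my/server/filesystem rpm -qa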
Those files are various sorts of BerkeleyDB database files. Assuming that your runtime environment has the same version of BerkeleyDB available, you can use something like Python's bsddb module to read them:
>>> import bsddb
>>> name = bsddb.btopen('/var/lib/rpm/Name')
>>> for pkg in name.keys():
...     print pkg
...
GConf2
GeoIP
GeoIP-GeoLite-data
GeoIP-GeoLite-data-extra
GitPython
GraphicsMagick
[...]
But this is a terrible idea and you shouldn't do it, because who knows if the Name database has exactly what you're looking for? Maybe it includes deleted packages that are somehow marked deleted, so rpm -qa would ignore them. You would probably need to look at the rpm sources to figure out exactly how things are stored.
My (Fedora) rpm command accepts a --dbpath option to specify a different directory containing the database. There is also an rpm-python package for manipulating the database from Python.
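For example, pointing rpm at the database on a mounted snapshot might look like this (the mount point is illustrative, and the BerkeleyDB versions used by the host's rpm and by the image need to be compatible):

rpm -qa --dbpath /mnt/snapshot/var/lib/rpm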
Our company produces an RPM on Red Hat Linux, and there is a thought that we should redistribute as little third-party code as possible in our package. We are using log4j.jar and want to stop including it in our RPM, and instead have the log4j RPM installed at the client site. However, that means my Java program needs to know where the file is located so that I can import it. Without knowing where the client installed the file, in case they installed it in a non-standard location, I cannot be sure it is on the classpath. Is there a standard way to determine that? Do rpm or yum have a command that lets me see where a specific file was installed, or do I need our customer to tell me where it is so I can set the classpath value correctly? I can't just use the find command on their system to search their files. Any suggestions would be appreciated.
Thanks
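One avenue, assuming the distro ships it as a package named log4j, is rpm's file queries:

rpm -ql log4j | grep log4j.jar      # list the files the installed package owns
rpm -qf /usr/share/java/log4j.jar   # reverse lookup: which package owns this file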
I'd like to compare the contents of an RPM to an installed system (that professes to have the RPM already installed), looking for any files/directories that may be different (or missing) -- something like pkgchk on Solaris.
It looks like rpm -V can be used to compare the system's filesystem(s) against the system's RPM database -- but I want to be able to compare an offline "golden" RPM with what's on the system (e.g., to avoid depending on a potentially tainted on-system RPM database or on incorrect version information from pre-release RPM files).
I know I could write something to unpack the RPM and then walk through the contents, comparing everything. But is there any existing tool that can do the comparison in-situ?
After looking through the "Similar Questions", I found a reference to a way to do this: add the -p option to the command, yielding rpm -Vp some*.rpm. I'd missed the aside on the man page saying that the query options are also applicable to the verification operation.
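So verifying a system against an offline "golden" RPM looks like this; the output below is illustrative (each flag in the string marks a differing attribute, e.g. S = size, 5 = digest, T = mtime):

$ rpm -Vp golden/mypackage-1.0-1.x86_64.rpm
S.5....T.    /usr/bin/mytool
missing      /etc/mypackage/extra.conf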
Alright, so after trying to chase down the dependencies for various pieces of software for the n-th time, and after replicating work that various people do for all the different Linux distributions, I would like to know if there is a better way of bundling various pieces of software into one .rpm or .deb file for easier distribution.
My current setup for doing this is a Frankenstein monster of various tools, but mainly Vagrant and libguestfs (built from source, running in Fedora, because none of the distributions actually ship it with virt-diff). Here are the steps I currently follow:
Spin up a base OS, either from a Vagrant box or by creating one from live CDs.
Export the .vmdk and call it base-image.
Spin up an exact replica of the previous image and go to town: use the package manager, or some other means, to download, compile, and install all the pieces that I need. Once again, export the .vmdk and call it non-base-image.
Make both base images available to the Fedora guest OS that has libguestfs.
Use virt-diff to diff the two images and dump that data to a file called diff (a condensed sketch of this and the guestfish steps follows the list).
Run several Ruby scripts to massage diff into another format that contains the information I need and none of the stuff I don't, like things in /var.
Run another script to generate a command script for guestfish with a bunch of copy-out commands.
Run the guestfish script.
Run another script to regenerate the symlinks from diff because guestfish can't do it.
Turn the resulting folder structure into a .deb or .rpm file and ship it.
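For concreteness, the diff and copy-out steps boil down to something like this (image names and the payload path are illustrative):

virt-diff -a base-image.vmdk -A non-base-image.vmdk > diff
mkdir -p /tmp/payload
guestfish --ro -a non-base-image.vmdk -i <<'EOF'
copy-out /opt/mystack /tmp/payload
EOF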
I would like to know if there is a better way to do this. You'd think there would be, but I haven't figured it out.
I would definitely consider something along the lines of:
A)
yum list (select your packages/dependencies whatever)
use yumdownloader on the previous list (or use the packages you have already downloaded)
createrepo
ship on media with an install script that adds the CD repo to the repo list, etc. (sketched below)
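Roughly, option A could look like this (the repo id and paths are invented for illustration):

yumdownloader --resolve mypackage    # fetches the package plus its dependencies
createrepo /media/cdrom/repo         # generates the repository metadata
# the install script on the target then registers the media repo:
cat > /etc/yum.repos.d/offline.repo <<'EOF'
[offline]
name=Offline media repo
baseurl=file:///media/cdrom/repo
enabled=1
gpgcheck=0
EOF
yum install mypackage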
or B)
the first two steps as above, then pack the rpms into an archive and build a package that contains all of the above and kicks off the actual install of the rpms (along the lines of rpm -Uvh /tmp/repo/*) in a late script (in the cleanup phase, maybe). Dunno if this can be done while avoiding locks on the rpm database.
I think you have reached the point of complexity - indeed a Frankenstein monster - where you should stop being afraid of making proper packages with dependencies. We did this at my previous job - we had a set of fabricated RPM packages - and it was very easy and straightforward, including:
pre/post install scripts
uninstall scripts
dependencies
We never had to do anything you just described. And for the customer, installing even a set of packages was very easy!
You can follow a reference manual on how to build RPM packages for more info.
EDIT: If you need a single installation package, then create a master package that contains all the other packages (with dependencies set properly) and installs them in its post-install script (and uninstalls them in its uninstall script).
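A sketch of the key piece - the master package's scriptlets (every name and path here is invented, and note the rpm-database locking caveat raised in the other answer):

# %post scriptlet of the hypothetical "mystack" master package,
# which ships the component RPMs under /opt/mystack/rpms:
rpm -Uvh /opt/mystack/rpms/*.rpm || :
# matching %preun scriptlet:
rpm -e component-a component-b || :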
There are mainly three steps to make a package with all its dependencies (call them A, B & C).
A. Gather required files.
There are many ways to gather the files of the main software and its dependencies. In order to get all the dependencies and have an error-free run, you need to use a base OS (i.e. a live system).
1. Using AppDirAssistant
This app is used by www.portablelinuxapps.org to create portable app directories. It scans and watches the files accessed by the app to find which ones are required.
2. Using chroot & overlayfs
In this method you don't need to boot into the live CD; instead, chroot into it:
a. Mount the .iso at /cdrom.
b. Mount the filesystem (filesystem.squashfs) at another place, say /tmp/union/root.
c. Bind-mount /proc at /tmp/union/root/proc.
d. Overlay on it
mount -t overlayfs overlayfs /tmp/union/root -o lowerdir=/tmp/union/root,upperdir=/tmp/union/rw
e. Chroot
chroot /tmp/union/root
Now you can install packages using apt-get or another method (only from the chrooted terminal). All the changed files are stored at /tmp/union/rw. Take the files from there (steps a-e are sketched as commands below).
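Steps a-e as commands, as a sketch - the ISO name and the squashfs path inside it vary by distro:

mount -o loop ubuntu.iso /cdrom
mount -o loop -t squashfs /cdrom/casper/filesystem.squashfs /tmp/union/root
mount --bind /proc /tmp/union/root/proc
mkdir -p /tmp/union/rw
mount -t overlayfs overlayfs /tmp/union/root -o lowerdir=/tmp/union/root,upperdir=/tmp/union/rw
chroot /tmp/union/root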
3. Using manually collected packages
Use the package manager to collect dependencies. For example,
apt-get install package --print-uris will print the download URIs for the dependency packages. Using these URIs, download the packages and extract them all (dpkg -x 1.deb ./extracted).
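Scripted, that might look like this (the package name is illustrative; --print-uris only lists packages that are not already installed):

apt-get install --print-uris -qq mypackage \
  | awk -F"'" '{print $2}' \
  | while read -r uri; do wget "$uri"; done
for deb in *.deb; do dpkg -x "$deb" ./extracted; done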
B. Clean garbage files
After gathering the files, remove the unwanted ones.
C. Pack files
1. Using AppImageAssistant
If you manually gathered the files, then you need to copy the appname.desktop file from ./usr/share/applications to the root of the directory tree. Also copy the file named AppRun from another app, or extract it from AppDirAssistant.
2. Make a .deb or .rpm using the gathered files (a dpkg-deb sketch follows).
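A minimal dpkg-deb sketch for that second option - the package metadata is invented:

mkdir -p extracted/DEBIAN
cat > extracted/DEBIAN/control <<'EOF'
Package: mybundle
Version: 1.0
Architecture: amd64
Maintainer: You <you@example.com>
Description: Bundled app with its gathered dependencies
EOF
dpkg-deb --build extracted mybundle_1.0_amd64.deb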
Is the problem primarily that of ensuring that your customers have installed all the standard upstream distro packages necessary for your package to run?
If that's the case, then I believe the most straightforward solution would be to leverage the yum and apt infrastructure to have those tools track down and install the necessary prerequisite packages.
If you supply a native yum/apt repository with complete prerequisite specs (the hard work you've apparently already completed), then the standard system install tools take care of the rest. See the links below for more on creating a personal repository for yum/apt.
For off-line customers, you can supply media with your software and a mirror - or mirror subset - of the upstream distro, plus instructions for adding them to the yum/apt config.
Yum
Creating a Yum Repository in the Fedora Deployment Guide
Apt
How To Setup A Debian Repository on the Debian Wiki
So your customers aren't ever going to install any other software that might specify a different version of those dependencies that you are walking all over, right?
Why not just create your own distro if you're going to go that far?
Or you can just give them a bunch of packages and a single script that does rpm -i dep1 dep2 yourpackage
I have a Debian package that I built that contains a tarball of the files, a control file, and a postinst file. It's built using dpkg-deb and it installs properly using dpkg.
The modification I would like to make is to have the installation directory of the files be determined at runtime, based on an environment variable that will be set when dpkg -i is run on the .deb file. I echo out the environment variable in the postinst script and I can see that it's set properly.
My questions:
1) Is it possible to dynamically determine the installation directory at runtime?
2) If it's possible, how would I go about it? I have read about the rules file and the mypackage.install files, but I don't know if either of these would allow me to accomplish this.
I could hack it by copying the files to the target location in the postinst script, but I would prefer to do it the right way, if possible.
Thanks in advance!
So this is what I found out about this problem over the past couple of weeks.
With prepackaged binaries, you can't build a Debian package whose destination directory is dynamically determined at runtime. I believe this might be possible when installing a package that is built from source, where you can set the install directory using configure; but in this case, since these are embedded Ubuntu machines, they don't have make, so I didn't pursue that option. I did work out a non-traditional method (a hack) for installing that did work: since Debian packages simply contain a tarball relative to /, build your package relative to a directory under /tmp. In the postinst script you can then determine where to copy the files from that staging area into a permanent location (a postinst sketch follows).
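A postinst sketch of that relocation step - the variable name and paths are hypothetical:

#!/bin/sh
set -e
# MYAPP_INSTALL_DIR is exported in the environment before running: dpkg -i myapp.deb
TARGET="${MYAPP_INSTALL_DIR:-/opt/myapp}"
mkdir -p "$TARGET"
cp -a /tmp/myapp-staging/. "$TARGET"/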
I expected that, after rebooting and the automatic deletion of the subdirectory under /tmp, dpkg might no longer consider the package installed. This wasn't a problem: when I ran 'dpkg -l myapp' it showed as still installed. Updating the package using dpkg/apt-get also worked without a hitch.
What I did find is that if you attempted to remove the package using 'dpkg -r myapp', dpkg would try to remove /tmp, which wasn't good. However, /tmp isn't easily removed, so it never succeeded. Plus, in our situation we never remove packages but instead simply upgrade them.
I eventually had to abandon the universal package, due to code differences in the sources that meant recompiling per platform, but otherwise I would have left it this way, and it did work.
I tried using --instdir to change the install directory of the package, and it does relocate the files, but dpkg fails since its own files can't be found relative to the new instdir; using --instdir is sort of like a chroot. I also tried --admindir and --root in various combinations to see if I could keep the dpkg system relative to / but relocate the installed files, and they didn't work either. I guess rpm has a relocate option that works, but not Ubuntu.
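For comparison, rpm's relocation looks like this, though it only works for packages built as relocatable (i.e. with a Prefix: tag):

rpm -i --relocate /opt/app=/srv/app myapp-1.0-1.x86_64.rpm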
You can also write a script that runs dpkg-deb with a different environment six times, generating six different packages. When you make a modification, you simply have to run your script, and all six packages get generated; you can then install them on your machines, avoiding the postinst hacking! (A generator sketch follows.)
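A sketch of such a generator - the target list, payload directory, control template, and placeholder are all invented for illustration:

#!/bin/sh
set -e
for target in /opt/site1 /opt/site2 /srv/app; do
    suffix=$(echo "$target" | tr -d '/')
    rm -rf build
    mkdir -p "build$target" build/DEBIAN
    cp -a payload/. "build$target"/                           # stage files under the target prefix
    sed "s|@TARGET@|$target|" control.in > build/DEBIAN/control
    dpkg-deb --build build "myapp-${suffix}_1.0_amd64.deb"
done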
Why not install to a standard location, and simply use a postinst script to create symbolic links to the desired location? This is much cleaner, and shouldn't break anything in dpkg -i.
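In postinst that could be as small as (the variable is hypothetical, as above):

ln -sfn /usr/lib/myapp "${MYAPP_INSTALL_DIR:-/opt/myapp}"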
It seems that the RPM logic is quite different from what I already know, and I am having some issues understanding it. For my work, I have to write documentation on how to create an RPM package on Red Hat 5.
I'm used to Debian and its derivatives (Ubuntu, and so on) and thus to Debian packages (aka .deb files).
From what I read, it seems that one needs to be root to create an RPM package. While I understand why root could be required to install a package, I still don't understand why elevated privileges should be needed just to create one.
If I try to create an RPM package as a user, changing the buildroot, it fails on the %install step because I don't have permission to write files into /usr/bin. Fair enough, but... why does it want to copy my files into /usr/bin at this step?! I just want to create the package, not install it!
I'm sure I'm missing something here. Is there anyone who could give me at least a basic understanding of how rpmbuild works and why?
Will this do?
You don't need to be root to build RPM packages. I recommend reading this two-part article to get you started.
The official Maximum RPM book also has a chapter on Having RPM Use a Different Build Area, which allows non-root users to build RPMs.
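The usual non-root setup boils down to a per-user build area; a sketch of the classic ~/.rpmmacros approach (the spec name is illustrative):

mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
rpmbuild -ba ~/rpmbuild/SPECS/mypackage.spec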