Regarding RPM upgrade - Linux

RPM already handles the cases of installing an older version or the same version.
If I run an upgrade (rpm -Uvh) when an old version is installed, the package is updated, so there is no problem there.
But when I try to install (rpm -ivh) a new version while an older version exists, RPM installs the new version alongside the old one, so two versions end up installed.
I would like to block the new installation in my spec file, by checking in the %pre section, when an older version already exists. How can I tell inside the spec file whether rpm -ivh or rpm -Uvh was invoked?
if [ "$1" = "1" ]; then
echo Perform tasks to prepare for the initial installation
elif [ "$1" = "2" ]; then
echo You already have old version Please use -U to upgrade.
fi
"$1" = "2" true for both new installation when old is present and for upgrade.
Please let me know how to solve this.

From what I can see, you can't. Find some other way. For example, trying to use -i when the already-installed and to-be-installed packages contain conflicting files usually stops the installation.

RPM will never install two versions of the same RPM package that differ only in version/release (unless you have some old/buggy rpm). Now I will assume this is what you wanted:
I have an RPM package that should do something slightly different after an installation than after an upgrade. For example, install a database on first install, but update it otherwise.
That would mean one of a few things:
You are trying to upgrade automatically something that should be updated by the administrator and/or the program itself. Files generated by the program should not be touched by packages. Ever.
You should mark those files as %config in the %files section and leave the administrator to merge the xxx.rpmnew files with the files edited by the user (see the sketch after this list).
In other words, an RPM should only touch things it created itself, whose contents it knows and can compare against the database or another reliable source of information. This doesn't seem to be your case.
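For reference, a minimal sketch of the %config idea, with made-up package paths; %config(noreplace) keeps the user-edited file in place on upgrade and installs the packaged copy as a .rpmnew file:

%files
/usr/bin/myapp
%config(noreplace) /etc/myapp/myapp.conf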

The correct answer is:
yum _____ --nogpgcheck
have a nice day!

Related

RPM spec file: running yum groupinstall

Good afternoon,
I am currently building an RPM that has some requirements I have not found answers to on the web. I have narrowed this down to a single question.
Normally when I do an install from the command line, one of the steps has me run yum groupinstall "Compatibility libraries", which installs 32-bit compatibility libraries on my 64-bit desktop. I am wondering if there is a way to accomplish this in the Requires: field of my RPM spec file, as I have only found a way to require specific RPMs as dependencies.
I could always add the 10-15 individual packages that get installed by yum groupinstall "Compatibility libraries", but I was hoping there was a better option.
Description of RPM:
My RPM is very basic in nature. It will untar multiple tar files into various locations, overwrite files throughout the main install directory, install compatibility libraries, and then start up a service.
If anyone needs more information to what I am trying to accomplish please let me know. Thank you.
You can only require specific packages, not groups, in your Requires: lines. You should absolutely not run yum in your %post script, because (a) you would be hiding your dependencies, and nobody likes to see things get installed that they didn't expect, and (b) you will probably end up stuck, because yum in %post would have to wait for the already-running yum process to exit.
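For what it's worth, listing the group members individually in the spec preamble looks like the lines below; the package names are only examples and will differ depending on your distribution's group definition:

Requires: compat-libstdc++-33
Requires: compat-libtermcap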
For library Requires:, the rpm build process will generally figure things out for you. You still need to manually specify the appropriate BuildRequires: dependencies, which are things that are required to build the package.
If you want to update your question with more details (e.g., a link to the spec file and a description of what you're trying to do, if it's not obvious from the spec), maybe we can come up with better solutions.

How to install two different versions of the same RPM and make them work in parallel

Currently I am trying to install an RPM, secured_soft_2.0.0.rpm, and I am
unable to install it because we already have secured_soft_1.3.0 installed.
The requirement is that we need both versions installed.
Complexities:
These packages in turn have dependent RPMs (a lot of them), and all of these
interdependent RPMs also have versions.
For example, secured_soft_1.3.0 works only with packages of version 1.3,
and secured_soft_2.0.0.rpm works only with dependencies of version 2.0.
So all of these dependencies also need to be installed in parallel,
without deleting the old ones.
Finally, both versions contain shared libraries, and these shared
libraries do not have version numbers in their names.
# rpm -ivh secured_soft_2.0.0.rpm
error: Failed dependencies:
init-class >= 1.4.17.1-1 is needed by secured_soft_2.0.0.rpm
init-connection-interface >= 2.0.11.0 is needed by secured_soft_2.0.0.rpm
init-logger >= 2.0.11.0 is needed by secured_soft_2.0.0.rpm
init-security >= 2.0.11.0 is needed by secured_soft_2.0.0.rpm
As specified, we already have secured_soft_1.3.0.rpm installed, and the
above dependencies are also available, but in different versions.
So we need to install the dependencies above while keeping the old versions
of the dependencies, so that the old RPMs keep working.
For example, secured_soft_2.0.0.rpm ships libArt.so, libSec.so and so on,
which are copied to /usr/lib.
Similarly, secured_soft_1.3.0.rpm also ships libArt.so, libSec.so and so on,
which are already in /usr/lib.
I tried renaming the .so files, but I am still not able to install.
Is it possible to change the location of these .so files and get this working?
Is there any other way we can do it?
At the moment I am stuck here; I would appreciate any help or advice.
Since the programs use the same filenames, and you need to put them on the same machine, you might be able to move the older version to another directory tree and make it work there.
You can do this with many applications which do not have compiled-in pathnames.
For instance,
install the older version (this sounds like where you are starting from)
use rpm -ql for each of the packages containing unversioned executables, libraries and associated files.
use tar to capture an archive of those files, relative to /usr (but omitting directories not owned by the packages).
create a new directory, e.g., /usr/local/myapp and untar the older version there.
update configuration files in the new location as needed
For applications such as this, I would wrap the program in a script that updates PATH (and perhaps sets LD_LIBRARY_PATH) to force it to run from the new location. You can verify whether this works using tools such as strace and lsof, i.e., by looking at the files the program opens.
Once you have the older version working properly in the new location, you can uninstall its rpms and install the new version of the application.
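A rough sketch of those steps, assuming the old package is named secured_soft-1.3.0 and /usr/local/myapp is the new tree (both are placeholders):

OLD_PKG=secured_soft-1.3.0                         # placeholder package name
rpm -ql "$OLD_PKG" | grep '^/usr/' | sed 's|^/usr/||' > old-files.txt
tar -C /usr -cf old-version.tar -T old-files.txt   # capture the files relative to /usr
mkdir -p /usr/local/myapp
tar -C /usr/local/myapp -xf old-version.tar        # replicate them in the new tree
# hypothetical wrapper that forces the old version to run from the new tree
cat > /usr/local/myapp/run-old.sh <<'EOF'
#!/bin/sh
export PATH=/usr/local/myapp/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/myapp/lib:$LD_LIBRARY_PATH
exec "$@"
EOF
chmod +x /usr/local/myapp/run-old.sh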
Caveat: If the newer package is copied from a newer version of the operating system, however, the task is likely to be beyond your ability, whether or not you choose the alternative approach of recompiling the newer packages to fit on the existing system.
Building new/custom packages is one route to recompiling the newer version. If you have the source-RPMs for each part, that is a starting point:
extract the files from the source-RPM, e.g., using a script such as unrpm (see for example HowTo: Extract an RPM Package Files Without Installing It), and
copy those extracted files to their expected locations in your build tree, e.g., $HOME/rpmbuild/SOURCES and $HOME/rpmbuild/SPECS,
modify the spec-file to use the alternative location
build the new/modified package using the modified spec-file.
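A hedged sketch of that route (the package and spec names are placeholders; installing the .src.rpm is an alternative to the unrpm script mentioned above):

rpm -ivh secured_soft-2.0.0-1.src.rpm            # typically unpacks into ~/rpmbuild/SOURCES and ~/rpmbuild/SPECS
# edit ~/rpmbuild/SPECS/secured_soft.spec so it installs under the alternative prefix
rpmbuild -ba ~/rpmbuild/SPECS/secured_soft.spec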
No, out of the box, you cannot.
I'd highly recommend looking into Docker, where you can put each version into its own container and let each container take care of its own dependency problems.
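Purely for illustration (the image names and tags below are hypothetical), each version would then run in its own container:

docker run -d --name secured_soft_1 my-registry/secured_soft:1.3.0
docker run -d --name secured_soft_2 my-registry/secured_soft:2.0.0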

RPM: create a trigger that will use version information from the new (triggering) RPM

A little background: since Red Hat stopped updating gcc versions for RHEL6, we had to implement our own gcc5 deployment.
We can't upgrade to RHEL7 at this time, nor would that solve this problem, as Red Hat hasn't deployed gcc5 for RHEL7 yet.
Our current issue is with /usr/src/kernels/$(uname -r)/include/linux/compiler-gcc5.h
That path and the compiler-gcc*.h files are owned by kernel-devel, and a gcc5 one exists only for Fedora (only for distros where gcc5 has been introduced officially).
I can easily create an RPM that will own the file for the current kernel (mainly so that we get a conflict if Red Hat ever ships kernel-devel with that file, at which point we can stop deploying our own), but I'd like to use %triggerin to copy the file into any new kernel-devel installed on the system.
Using uname -r wouldn't work, for two reasons:
it's evaluated at rpmbuild time, not install time.
it evaluates the currently running kernel, not the future one just installed.
I can overcome 1 by using a %post script, I think, but I can't figure out how to overcome 2, nor how to have my RPM own the file in its final location (can I %ghost a path I don't know in advance, or use something like dkms?).
Is there a way to know the exact version of the RPM that fires the %triggerin and incorporate it into the path?
Is there an alternate solution?
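One possible shape of such a trigger (a hedged, untested sketch, not a confirmed solution; the source path /usr/share/mypkg/compiler-gcc5.h is hypothetical) is to query the RPM database for every installed kernel-devel inside the trigger, rather than relying on uname -r:

%triggerin -- kernel-devel
# %% escapes the query-format macros so rpmbuild does not expand them at build time
for krel in $(rpm -q kernel-devel --qf '%%{VERSION}-%%{RELEASE}.%%{ARCH}\n'); do
    dest=/usr/src/kernels/$krel/include/linux
    if [ -d "$dest" ]; then
        cp -p /usr/share/mypkg/compiler-gcc5.h "$dest"/
    fi
done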

Keep files generated by an RPM after running rpm -e

Is it possible to keep an RPM's content in the filesystem after uninstallation?
Basically, I don't want to erase the files generated by the RPM when someone runs rpm -e, but I still want the package removed from the list of installed RPMs. I want to be able to uninstall my application manually from the server and I cannot find a way to do so.
Thanks in advance.
Specfile creation
From the perspective of a package creator, you can flag files as configuration. John Warbrick at Cambridge University has a good rundown of the different file markings you can use in an RPM spec file and how they behave during upgrade.
Files marked %config, if changed, are:
left behind on uninstall but renamed with an .rpmsave extension
untouched on update or reinstall, but the payload copy is installed with an .rpmnew extension
Warbrick does not deal with straight package removal: if a file marked %config is unchanged, erasing the package will remove the file. (Bear in mind that, in RPM's upgrade ordering, the new package's files are laid down first and whatever remains of the old package is erased afterwards; either way, an unchanged old config ends up replaced by the new config on update or reinstall.)
It would be an abuse of RPM file classification to mark something a %config that is not really a config. This is important because sysadmins need to be able to trust the package payload. Plus, you probably don't want such a file to be renamed with an extension.
Your best hope, then, is to create the file during %post. Files copied to the filesystem in %install are checked against the %files list and stored in the RPM database, but files created in %post are not. Again, this is an abuse of the RPM spec file format, but if you mention it in the %description and any other documentation, and especially if it's logically expected behavior in the context of your package, you may get some sympathy.
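A minimal sketch of that approach, with a hypothetical path; because the file is never listed in %files, rpm -e leaves it behind:

%post
# create the persistent file only if it does not exist yet; it is not in
# %files, so the RPM database never learns about it
if [ ! -e /var/lib/mypkg/generated.conf ]; then
    mkdir -p /var/lib/mypkg
    touch /var/lib/mypkg/generated.conf
fi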
Existing packages
For sysadmins wanting to remove a package without removing its payload, use rpm --justdb. First, check the file list:
# rpm -ql $PACKAGE | tee $PACKAGE.payload
[rpm returns list of files, tee saves it in the file $PACKAGE.payload]
# rpm -e --justdb $PACKAGE
(At this stage, you may get a complaint about deps, so repeat with --nodeps if you must.)
And finally, check that the package is gone, but payload is still there:
# rpm -q $PACKAGE
package $PACKAGE is not installed
# ls -l $(<$PACKAGE.payload)
[ls queries list of files saved in $PACKAGE.payload]
(Or for a more reliable ls if filenames contain spaces, which would be unconscionable in RPM.)
# cat $PACKAGE.payload | xargs -d '\n' ls -l
Other observations
Note that for a package created using the method outlined at the top of my answer, attempting to list the files it owns with rpm -ql $PACKAGE will not show those persistent files. Another consequence of throwing down files during %post is that another package which "owns" those files could overwrite them unexpectedly - they're not listed in the RPM database, so they're not protected.
The two methods I've outlined break Best Practices for both RPM package creation and sysadmin. Please be very careful how you use this dangerous "little bit of knowledge". Maybe there's something else you could do to create the desired situation.
(Finally, I know this is a question from four years back. It wasn't answered. It needed answering. Cheers.)
rpm -e --repackage package_name will drop a re-installable copy in /var/spool/repackage. You can extract from that rpm using rpm2cpio if you need those files back individually.
One of the advantages of RPMs is exactly that: you can uninstall (remove)
all the files easily.
If you want to install the files provided by an RPM manually,
simply unpack the RPM as root, with:
cd / &&
rpm2cpio *.rpm | cpio -idmv
It's not a good idea, but you can unpack RPMs like that elsewhere (e.g., under ${HOME}),
and that's very useful. You may need to adjust some environment variables, though.
You don't pollute the RPM database but still use the contents of an RPM.
Of course your application may get broken by system updates,
because it is not protected by dependencies.
Anyway, this is how you "install" an RPM if you are not root.
Removal is also easy (if installation path chosen suitably).

Run time installation directory of debian package contents

I have a Debian package that I built that contains a tarball of the files, a control file, and a postinst file. It's built using dpkg-deb and it installs properly using dpkg.
The modification I would like to make is to have the installation directory of the files determined at runtime, based on an environment variable that will be set when dpkg -i is run on the .deb file. I echo the environment variable in the postinst script and I can see that it's set properly.
My questions:
1) Is it possible to dynamically determine the installation directory at runtime?
2) If it's possible, how would I go about it? I have read about the rules file and the mypackage.install files, but I don't know if either of these would allow me to accomplish this.
I could hack it by copying the files to the target location in the postinst script, but I would prefer to do it the right way if possible.
Thanks in advance!
So this is what I found out about this problem over the past couple of weeks.
With prepackaged binaries you can't build a Debian package whose destination directory is determined dynamically at runtime. I believe this might be possible when installing a package built from source, where you can set the install directory using configure; but since these are embedded Ubuntu machines without make, I didn't pursue that option. I did work out a non-traditional method (a hack) for installing that did work: since Debian packages simply contain a tarball relative to /, build your package relative to a directory under /tmp. In the postinst script you can then determine where to copy the files from that staging area into their permanent location (see the sketch below).
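A minimal sketch of such a postinst, assuming the payload is staged under /tmp/myapp-staging and the environment variable is called INSTALL_DIR (both names are made up here):

#!/bin/sh
set -e
if [ "$1" = "configure" ]; then
    # copy the staged payload to the directory chosen at install time,
    # falling back to a default if the variable is unset
    TARGET="${INSTALL_DIR:-/opt/myapp}"
    mkdir -p "$TARGET"
    cp -a /tmp/myapp-staging/. "$TARGET"/
fi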
I expected that after rebooting, and the automatic deletion of the subdirectory under /tmp, dpkg might no longer know that the package existed. This wasn't a problem: 'dpkg -l myapp' still showed it as installed, and updating the package using dpkg/apt-get also worked without a hitch.
What I did find is that if you attempted to remove the package using 'dpkg -r myapp', dpkg would try to remove /tmp, which wasn't good. However, /tmp isn't easily removed, so it never succeeded. Plus, in our situation we never remove packages; we only upgrade them.
I eventually had to abandon the universal package, because code differences in the sources meant recompiling per platform, but otherwise I would have left it this way, and it did work.
I tried using --instdir to change the install directory of the package, and it does relocate the files, but dpkg fails because its own files can't be found relative to the new instdir; using --instdir is sort of like a chroot. I also tried --admindir and --root in various combinations to see if I could keep the dpkg database relative to / while relocating the installed files, but they didn't work either. I gather rpm has a relocate option for this, but dpkg on Ubuntu does not.
You could also write a script that runs dpkg-deb six times with a different environment each time, generating six different packages. When you make a modification, you simply run your script, all six packages get generated, and you can install them on your machines without any postinst hacking!
Why not install to a standard location and simply use a postinst script to create symbolic links to the desired location? This is much cleaner and shouldn't break anything in dpkg -i.
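For example (paths are illustrative, assuming the package installs under /opt/myapp and DEPLOY_DIR names the desired location), the postinst would only need something like:

# point the desired location at the packaged files
ln -sfn /opt/myapp "${DEPLOY_DIR:-/srv/myapp}"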
