RPM fails to follow dependency order on install - linux

I'm trying to force rpm to follow a given install order and it is not working as expected. The Requires clause I added is not being respected.
I am doing a bare-metal Linux installer (openSUSE 42.2-based). A whole system -- hundreds of packages -- is installed with one RPM command (using --root). I am having problems with three packages -- pam-config, pam-script, and openssh. The pam-config %post scriptlet tries to modify files contained in pam-script and openssh, but pam-config is installed, by default, before them. It has no dependencies by default, so, having the source, I rectified that by adding:
Requires: pam-script
Requires: openssh
to pam-config.spec. (I also tried PreReq: with the same results.) As expected, with this change, it switches the ordering for pam-script and that error goes away. But it steadfastly refuses to change the order of installation for openssh, which is installed two packages after pam-config. [Openssh depends on coreutils and shadow (pwdutil), both of which are already installed at this point. It's also dependent (PreReq) on a mysterious macro, %{fillup_prereq}.]
Everything else installs (and runs) just fine, but I would like to understand better how rpm works. I thought that if I used Requires: to specify openssh in pam-config, openssh would invariably be installed before pam-config. It worked for pam-script.
rpm -qp --requires on the .rpm file shows openssh. I repeated the install with the -vv option instead of -v. I can see the Requires: for openssh listed just the same as pam-script (YES (added provide)). I see a pam-config-0.91xxx -> openssh-7.2p2xxx listed under SCC #8: 11 members (100 external dependencies). I see the install of pam-config, which has no dependency information and nothing remarkable except for the %post scriptlet command that generates the error (pam-config --service sshd --delete --listfile). What other kind of things should I be looking at to debug this? What are these SCCs? Am I missing something about Requires? Or is there something obscure I may have overlooked, like circular, indirect, or hidden dependencies (I've checked for that, but ruled it out)? I've looked at several RPM tutorials and done a number of web searches and come up empty.
UPDATE: It appears that unlike pam-script, openssh is caught up in a mutual-dependency critical section. Here is the order of the packages actually being installed:
ruby2.1-rubygem-ruby-dbus-0.9.3-4.3.x86_64.rpm
pam-script-1.1.6-1.os42.gb01.x86_64.rpm
suse-module-tools-12.4-3.2.x86_64.rpm
kmod-17-6.2.x86_64.rpm
kmod-compat-17-6.2.x86_64.rpm
libcurl4-7.37.0-15.1.x86_64.rpm
pam-config-0.91-1.2.os42.gb01.x86_64.rpm
systemd-sysvinit-228-15.1.x86_64.rpm
krb5-1.12.5-5.13.x86_64.rpm
openssh-7.2p2-6.1.SBC.os42.gb01.x86_64.rpm
dracut-044-12.1.x86_64.rpm
systemd-228-15.1.x86_64.rpm
If I stage an installation on a production system and stop just before pam-config, it complains about being dependent on krb5, which is in the future! If I stop at ruby, it works. If I stop at pam-script, it works. If I stop at suse-module-tools, it complains about dependencies on dracut. So I'm wondering if RPM abandons its ordering principle within a mutual-dependency critical section, or if there is a dependency I haven't uncovered yet. I am using rpm -q --requires and rpm -q --provides to work this out. Stay tuned.
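To hunt for hidden edges in that cycle, it can help to dump the requires and provides of every package in the transaction and cross-reference them; a minimal sketch, assuming all the transaction's .rpm files sit in one directory:
for p in *.rpm; do
    echo "== $p"
    rpm -qp --requires "$p"    # what this package needs (including scriptlet dependencies)
    rpm -qp --provides "$p"    # what it offers to others
done
Grepping one package's requires against the others' provides is one way to spot an indirect or scriptlet-level dependency that closes the loop.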

You can add more explicit sub-fields to the Requires tag, e.g. Requires(post): openssh-server or Requires(pre,post): openssh-server.
A single RPM transaction isn't really atomic, but is treated that way. Without this additional information, it just ensures that the packages are installed by the end of this transaction, which is "good enough" most of the time.
Another option is to put the required configuration into a %triggerin stanza, which I believe only executes once both packages are installed.
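A minimal sketch of what both suggestions could look like in pam-config.spec (the pam-config command is the one quoted in the question; whether its logic belongs in %post or in a trigger depends on what the configuration actually needs):
# Ensure the named packages are installed before %post runs
Requires(post): pam-script
Requires(post): openssh

# Alternative: run the sshd-related configuration only once openssh is also present
%triggerin -- openssh
pam-config --service sshd --delete --listfile || :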

Related

Building own conda package: gcc and binutils issue

This post summarizes my painful but finally successful (just by chance) way to build my own conda package for the
netgen meshing tool with a Python interface. I found the recipe for the netgen build thanks to tpaviot.
After cloning the repository into 'netgen-conda' folder I ran:
conda build netgen-conda/netgen-6.2-dev
Which reports "Unsatisfiable dependencies": 'oce', 'gcc-5', 'binutils'.
So I tried to install these packages myself. Unfortunately, the documentation does not emphasize the important fact that 'conda build' uses its own temporary environment, so it doesn't matter what you have installed. Nevertheless, even installing 'gcc-5' together with 'binutils' manually turned out to be nearly impossible.
Hint for other newbies: a lot of my problems disappeared after I learned the details about channels.
First try was installing 'gcc-5' with 'binutils' from the 'salford_systems' channel suggested by anaconda:
conda install -c salford_systems binutils gcc-5
But it results in:
ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'salford_systems::gcc-5-5.3.0-0'.
LinkError: post-link script failed for package salford_systems::gcc-5-5.3.0-0
running your command again with -v will provide additional information
location of failed script: /home/jb/miniconda3/envs/test/bin/.gcc-5-post-link.sh
Using verbose output ('-v') provides no more info. I was also confused by the fact that the script does not exist on the given path (probably automatically deleted).
With my current experience I admit that the reason for the problem can be dug out from the '-vv' output (reported issue). After some trying I found that the only way to
install both is to first install 'gcc-5' into a clean environment and then install 'binutils'. Since 'conda build' installs everything
from scratch and there is no way to specify the order of installed packages, I was stuck.
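For reference, the manual two-step install that did work looks roughly like this (a sketch; the environment name is a placeholder):
conda create -n gcc5-tmp                                 # throwaway environment
conda install -n gcc5-tmp -c salford_systems gcc-5       # gcc first, into the clean environment
conda install -n gcc5-tmp -c salford_systems binutils    # then binutils on top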
Another issue that puzzled me is the 'conda build' long-prefix hack. For unknown reasons it uses an extremely long prefix for an auxiliary folder,
which results in various kinds of issues. I have faced three such problems:
As is usual today, I have an encrypted HOME, causing a known issue.
Using the workaround '--croot /tmp' prevents creating hard links from '/tmp' into 'HOME/miniconda3', since they are on different filesystems.
There is a fallback to copying instead. For a while I even thought the fallback didn't work, but it did; it just made the build run longer.
Trying to install 'gcc' (4.x) from the 'default' channel complained about a too-short prefix. So the ultimate workaround was to set the length of the prefix manually:
'--prefix-length 70'.
Finally, I found that the dependency on 'binutils' is not necessary and successfully built the package with:
conda build --prefix-length 70 -c salford_systems -c conda-forge -c dlr-sc netgen-conda/netgen-6.2-dev
Summary (of open questions):
Conda channels introduce a new kind of dependency hell that I had already forgotten about while using 'apt-get'. Is there a way to figure out which channel is the canonical one for a package? (See the sketch after this list.)
Did anyone succeed in building with the combination of 'gcc-5' and 'binutils'?
There is still a lack of documentation about conda's internal mechanisms, and the error messages do not provide a clue to the problem.
Conda-build uses a problematic prefix hack and lacks the ability to control the order of installed packages. Does anybody know the reason for this hack?
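On the channel question, one partial answer is searching across channels; a sketch, assuming the anaconda-client package is installed for the first command:
anaconda search -t conda gcc-5        # list channels on anaconda.org that publish gcc-5
conda search -c conda-forge gcc       # check one specific channel directly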

Differences between "apt-get install openjdk-8-jdk" and downloading *.tar.gz

I want to install openjdk on ubuntu.
I found two ways to install it.
The first is typing "sudo apt-get install openjdk-8-jdk" on terminal.
The second is downloading the binary file such as *.tar.gz and then unpack the file and set environment variables JAVA_HOME&PATH.
So, is there any difference between these two methods?
I mean, will it cause different results?
Thanks a lot.
With the first approach, the installation is controlled by Debian's APT package manager and will receive updates; with the second one you will have to do that manually.
It will probably not end with a different result.
On Linux distributions you have what is called a package manager: yours (as on almost every Ubuntu) is APT.
So the main difference is that when you use apt, you can "trust more" what you are downloading, because, hopefully, the content in the apt repositories is checked.
However, because of this checking, apt isn't always up to date, which may mean you get a slightly different version.
Also, in my opinion, if you don't want to duplicate files or pollute your system, you should choose one option and stay with it: if you use apt, use apt to update; if you download it manually, keep updating it manually.
I personally prefer to use apt when possible.
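For comparison, the manual tarball route looks roughly like this (a sketch; the archive and directory names are placeholders that depend on the exact build you download):
sudo mkdir -p /opt/java
sudo tar -xzf openjdk-8-linux-x64.tar.gz -C /opt/java   # archive name is a placeholder
export JAVA_HOME=/opt/java/jdk1.8.0                     # adjust to the unpacked directory
export PATH="$JAVA_HOME/bin:$PATH"
With apt, the package manager does the equivalent wiring for you and later pulls in security updates along with the rest of the system.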

A mess with different Perl installs

I tried to upgrade Perl and put my computer into a complete mess.
I am currently running RHEL 6.5, 64-bit, and this is the thing:
I had perl-5.10.1 installed and working nicely. It came preinstalled,
and I could see it from yum.
I wanted to install Padre, a Perl IDE, but that required at least v5.11 [I was so close! :( ]
There was no newer version of Perl in the repos that I have access to (and I have a limitation that I can't add new repos)
I got approval from my boss to download perl-5.20.0 from www.perl.org and tried to install it
... and the mess begins!
First I installed the new perl with my own id, and that pushed perl to somewhere under my home dir.
I tested with 'perl -v' and could see that my env was pointing to the newer install; however, yum never recognized it (not really a problem).
When I tried to install Padre, it seems it had somehow hardcoded the original perl (from /usr/bin) and was still asking for something as new as 5.11.
Trying to fix it, I installed the new perl again, now as root, to make it go under the /usr tree ... it installed, but pushed perl to /usr/local/bin instead of /usr/bin.
So again, I had one more perl install, but Padre was still looking for the one in /usr/bin.
I gave up on Padre and deleted the files related to it, as well as the perl installed in my home dir; however, a couple of perl scripts that I had already written now throw errors like:
perl -cw "xmltest.pl" (in directory: /home/myid/scripts/xmltest.pl)
perl: symbol lookup error: /usr/lib64/perl5/auto/Data/Dumper/Dumper.so: undefined symbol: Perl_Istack_sp_ptr
Compilation failed.
... and Data::Dumper is not the only one ... every time I disable one of the modules, another one fails in the same, or a similar, way.
From what I have read about this, the issue seems to be related to modules that were originally installed for one perl version being called by another; however, I have already forced the modules that I use to be reinstalled directly from CPAN, and they still fail.
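One way to check exactly which perl and which module paths are in effect (a quick sketch, not from the original post):
type -a perl                              # every perl on the PATH, in order
perl -e 'print "$^X\n"'                   # the binary actually being run
perl -e 'print join("\n", @INC), "\n"'    # directories modules are loaded from
If a script resolves to one perl but @INC still points at another install's module tree, you get exactly the kind of symbol lookup errors shown above.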
Question: How can I safely get rid of the current perl installs, perform a new clean install, and be able to use it without these version conflicts?
My major concern is the numerous apps I have that depend on Perl; I must not break them with an uninstall.
Any help will be much appreciated.
You should:
cleanup
clean (comment out) your ~/.profile from any unwanted paths, and so on
clean any new perl installation out of your $HOME (but move it to a safe place first)
in short, try to return your environment to its previous working state
relog, (logout, login)
repair your system perl. That means:
read #Sam Varshavchik's answer
reinstall it from your distribution, using your package manager (5.10).
this step should overwrite the mess you caused.
test it !
don't continue until you're sure everything works as well as before.
Lesson learned: never overwrite your system perl
learning
read through perlbrew.pl
repeat the previous step once again, especially with:
the homepage
http://perlbrew.pl/Perlbrew-and-Friends.html
https://metacpan.org/pod/App::perlbrew
https://metacpan.org/pod/perlbrew
installing perlbrew
run the installation command \wget -O - http://install.perlbrew.pl | bash
it should finish without errors
follow the instructions on how to modify your startup file, e.g. ~/.profile or similar (you need to add one line at the end)
check that your ~/perl5/perlbrew/bin contains perlbrew and patchperl
relog
setup new perl, run
perlbrew init #init environment
perlbrew available #show what perl you can install
perlbrew install 5.20.0 #will take few minutes - depends on your system speed
perlbrew install-cpanm
perlbrew list #check
perlbrew switch perl-5.20.0 #activate newly installed perl 5.20
Check your installation
in ~/perl5/perlbrew/bin you should have 3 scripts: perlbrew, patchperl, cpanm
perl -v should return 5.20
type cpanm - should return ~/perl5/perlbrew/bin/cpanm
You're done.
CPAN modules
You can install new modules with cpanm, like:
applications
cpanm cpan-outdated
cpanm App::Ack
cpanm Unicode::Tussle
cpanm Perl::Tidy
cpanm Perl::Critic
collections
cpanm Task::Moose
cpanm Task::Plack
cpanm Task::Unicode
modules
cpanm Path::Tiny
cpanm Try::Tiny
cpanm JSON
cpanm YAML
etc...
Check the ~/perl5/perlbrew/perls/perl-5.20.0/bin/ for new commands
You will need to update your own perl scripts' shebang lines to
#!/usr/bin/env perl
I hope I haven't forgotten anything; maybe other, more experienced perl gurus will add/edit/correct more.
Anyway, in reality steps 5, 6 and 7 are much easier than they sound (by reading this) and can be done in a few minutes.
On rpm-based Linux distributions, you should never install system software manually like this, by trying to compile and build it yourself. RHEL's package management tool, rpm, performs the important function of keeping track of dependencies between packages and preventing package conflicts.
The errors you showed are precisely the symptoms of a corrupted system Perl installation, and rpm exists precisely to avoid this sort of thing happening. Manually building and installing random tarballs completely bypasses the safety net that rpm provides.
There's no cookie-cutter recipe for recovering from a corrupted system install of a critical system rpm like perl, but in general:
1) run "rpm -q" perl, this will show you the exact version of the perl rpm package that rpm thinks should be installed.
2) go to the RHEL installation media/directory, verify that it contains the same perl-.x86_64.rpm package. If you previously installed RHEL updates, it's possible that you already updated perl, so look for the version that rpm tells you have installed in the RHEL update directory, and verify that you have the correct rpm package.
3) Execute:
rpm -ivh --force perl-<version>.x86_64.rpm
This will reinstall the original perl RPM package that was previously installed. Your problem is not only that you have extra versions of perl installed, but that some of your custom perl builds have likely clobbered the system perl package; uninstalling them won't help, you have to reinstall the system perl.
4) In RHEL, many perl modules are installed as separate packages. The above process should be used to reinstall every perl rpm package that you have installed. Execute:
rpm -q -a | grep '^perl'
This will give you a list of all Perl packages you have installed. You will need to repeat this procedure for every Perl rpm package.
It's not a 100% guarantee that this will fix everything, there could be other things wrong too, but this is a good first step towards recovery.
What I have done:
From #Sam-Varshavchik's answer:
Found the previous perl rpm in my yum cache, and installed ...
rpm -ivh --force perl-<version>.x86_64.rpm
Checked for other previously installed "perl*" packages ... there were 260+, so I saved the list to a file: rpm -qa "perl*" > /tmp/perl.pkgs
With 260+ packages to install, I realized that doing it manually would take too much time, so it was time to put some ksh skills into practice ...
I checked my yum cache and found ~130 of the 260+ packages, so I:
took the base perl package (which I had already installed) out of the list;
for those in the cache, decided to install them with rpm, in the same way as the base package;
for those that I did not have handy, used yum, which would download the package and then do the same
as rpm, so ...
CACHE="/var/cache/yum/x86_64"
for perlpkg in $(cat /tmp/perl.pkgs)             # one installed package per line, from rpm -qa
do
    FILE=$(find $CACHE -name "${perlpkg}.rpm")   # look for a matching rpm in the yum cache
    if [[ ${FILE} != "" ]] ; then
        rpm -ivh --force ${FILE}                 # cached: reinstall directly with rpm
    else
        yum -y reinstall ${perlpkg}              # not cached: let yum download and reinstall it
    fi
done
From #jm666:
Installed perlbrew (I was able to get it from my authorized repos, so I installed it with yum) and, using perlbrew, installed 5.20.0 locally.
TODO: I haven't installed any additional modules, nor Padre, yet ... I need to learn more about the way perlbrew works and how to keep the installed version isolated from the system perl.
Once again, thanks #Sam-Varshavchik and #jm666 for your support and guidance.

NixOS and ghc-mod - Module not found

I'm experiencing a problem with the interaction between the ghc-mod plugin in Emacs and NixOS 14.04. Basically, once packages are installed via nix-env -i, they are visible from ghc and ghci, and recognised by haskell-mode, but not found by ghc-mod.
To avoid information duplication, you can find all details, and the exact replication of the problem in a VM, in the bug ticket https://github.com/kazu-yamamoto/ghc-mod/issues/269
The current, default, package management setup for Haskell on NixOS does not work well with packages that use the ghc-api or similar run-time resources (ghc-mod, hint, plugins, hell, ...). It takes a little more work to create a Nix expression that integrates them well into the rest of the environment. This is called making a wrapper expression for the package; for an example, look at how GHC is installed and operates on NixOS.
It is reasonable that this is difficult, since you are trying to make an install procedure that is atomic but interacts with an unknown number of other system packages with their own atomic installs and updates. It is doable, but there is a quicker workaround.
Look at this example on the install page on the wiki. Instead of trying to create a ghc-mod package that works atomically you weld it on to ghc so ghc+ghc-mod is an atomic update.
I installed ghc+ghc-mod with the below install script added to my ~/.nixpkgs/nixpkgs.nix file.
hsEnv = haskellPackages.ghcWithPackages (self : [
  self.ghc
  self.ghcMod
  # add more packages here
]);
Install package with something like:
nix-env -i hsEnv
or better most of the time:
nix-env -iA nixpkgs.haskellPackages.hsEnv
I have an alias for the above so I do not have to type it out every time. It is just:
nixh hsEnv
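The alias itself isn't shown in the answer; as a shell function it might look something like this (an assumption, inferred from the -iA invocation above):
nixh() { nix-env -iA "nixpkgs.haskellPackages.$1"; }   # hypothetical helper matching 'nixh hsEnv'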
The down side of this method is that other Haskell packages installed with nix-env -i[A] will not work with the above installation. If I wanted to get everything working with the lens package then I would have to alter the install script to include lens like:
hsEnv = haskellPackages.ghcWithPackages (self : [
  self.ghc
  self.ghcMod
  self.lens
  # add more packages here
]);
and re-install. Nix does not seem to use a different installation for lens or ghc-mod in hsEnv than for the ghc from nix-env -i ghc, so apparently only a little more needs to happen behind the scenes, most of the time, to combine existing packages in the above fashion.
ghc-mod installed fine with the above script but I have not tested out its integration with Emacs as of yet.
Additional notes added to the github thread
DanielG:
I'm having a bit of trouble working with this environment, I can't even get cabal install to behave properly :/ I'm just getting lots of errors like:
With Nix and NixOS you pretty much never use Cabal to install at the global level
Make sure to use sandboxes if you are going to use cabal-install. You probably do not need it, but it's there and it works (see the sketch after this list).
Use ghcWithPackages when installing packages like ghc-mod, hint, or anything that needs heavy runtime awareness of existing packages (they are hard to make atomic, and ghcWithPackages gets around this for GHC).
If you are developing, install the standard suite of POSIX tools with nix-env -i stdenv. NixOS does not force you to have your command line and PATH cluttered with tools you do not necessarily need.
cabal assumes the existence of a few standard tools such as ar, patch (I think), and a few others as well, if memory serves me right.
If you use the standard install method and/or ghcWithPackages when needed, then NixOS will dedup many packages automatically, unlike cabal sandboxes: at the package level, if you plot a dependency tree they will point to the same package in /nix/store, and nix-store --optimise can always dedup the store at a file level.
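A minimal per-project sandbox workflow with cabal-install, for the sandbox suggestion above (standard cabal commands, nothing NixOS-specific):
cabal sandbox init                   # create an isolated package DB in ./.cabal-sandbox
cabal install --only-dependencies    # install the project's deps into the sandbox
cabal build                          # build against the sandboxed packages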
Response to comment
[carlo#nixos:~]$ nix-env -iA nixos.pkgs.hsEnv
installing `haskell-env-ghc-7.6.3'
these derivations will be built:
/nix/store/39dn9h2gnp1pyv2zwwcq3bvck2ydyg28-haskell-env-ghc-7.6.3.drv
building path(s) `/nix/store/minf4s4libap8i02yhci83b54fvi1l2r-haskell-env-ghc-7.6.3'
building /nix/store/minf4s4libap8i02yhci83b54fvi1l2r-haskell-env-ghc-7.6.3
collision between `/nix/store/1jp3vsjcl8ydiy92lzyjclwr943vh5lx-ghc-7.6.3/bin/haddock' and `/nix/store/2dfv2pd0i5kcbbc3hb0ywdbik925c8p9-haskell-haddock-ghc7.6.3-2.13.2/bin/haddock' at /nix/store/9z6d76pz8rr7gci2n3igh5dqi7ac5xqj-builder.pl line 72.
builder for `/nix/store/39dn9h2gnp1pyv2zwwcq3bvck2ydyg28-haskell-env-ghc-7.6.3.drv' failed with exit code 2
error: build of `/nix/store/39dn9h2gnp1pyv2zwwcq3bvck2ydyg28-haskell-env-ghc-7.6.3.drv' failed
It is the line that starts with collision that tells you what is going wrong:
collision between `/nix/store/1jp3vsjcl8ydiy92lzyjclwr943vh5lx-ghc-7.6.3/bin/haddock' and `/nix/store/2dfv2pd0i5kcbbc3hb0ywdbik925c8p9-haskell-haddock-ghc7.6.3-2.13.2/bin/haddock' at /nix/store/9z6d76pz8rr7gci2n3igh5dqi7ac5xqj-builder.pl line 72.
It is a conflict between two different haddocks. Switch to a new profile and try again. Since this welds together ghc+packages, it should not be installed in a profile alongside other Haskell packages. That does not stop you from running binaries and interpreters from both packages at once; they just need to be in their own namespace, so that when you call haddock, cabal, or ghc, there is only one choice per profile.
If you are not familiar with profiles yet you can use:
nix-env -S /nix/var/nix/profiles/per-user/<user>/<New profile name>
The default profile is either default or channels; I do not know which one it will be for your setup, but check for it so you can switch back to it later. There are some tricks so that you do not have to use the /nix/var/nix/profiles/ directory to store your profiles, to cut down on typing, but that is the default location.

How do you uninstall in *nix?

One of the things I still can't wrap my head around is rules of thumb for uninstalling programs in *nix environments. Most of the time I'm happy to let sleeping dogs lie and not uninstall software that I no longer need. But from time to time I end up with several Apaches, svns, etc.
So far here's what I know about dealing with this:
1) if you installed using apt-get or yum, there's an uninstall command. Very rarely there's an uninstall script somewhere in the app's folder, something like uninstall.sh
2) to figure out which particular install is being called from the command line, use the "type -a" command
3) use "sudo find / | grep" to find where else stuff might be installed (from what I understand, type only looks for things that are in the PATH variable)
4) add/change the order of things in PATH to make the desired version of the app first in line, or add an alias to .bashrc
5) delete the stuff I no longer want. This one is easy if the application was installed in only one folder, but tricky if there are multiple. One trick I've heard of is running find with a time range to find all the files that changed around the time the install happened - that roughly shows what was changed and added (see the sketch after this list).
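The time-range trick from point 5 can be done with GNU find; a sketch, with the dates as placeholders:
find / -xdev -newermt "2016-05-01" ! -newermt "2016-05-02" 2>/dev/null   # files modified in that window, staying on one filesystem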
Do you have anything to add/correct?
If you didn't use a package manager (rpm, apt, etc), then you probably installed from source. To install, you performed a process along the lines of ./configure && make && make install. If the application is well-behaved, that "install" make target should be coupled with an "uninstall" target. So extract the sources again, configure again (with the same paths), and make uninstall.
Generally, if you're compiling something from source, the procedure will be
$ make
$ su
# make install
in which case, the vast majority of programs will have an uninstall target, which will let you reverse the steps that happened during install by
$ su
# make uninstall
As always, read the program's README or INSTALL files to determine what's available. In most situations you'll either install something via a package manager (which will also handle the uninstall), or you'll have invoked some kind of manual process (which should have come with a readme explaining how to uninstall it).
