I am working on migrating machines from the RHEL world over to the Debian/Ubuntu world, and I am struggling a bit with a packaging problem. I am trying to build a package for Ubuntu 16.04.
I've got a very old pre-compiled application that can only listen through xinetd. I am creating a binary-only package, similar to what this person was doing: I need my Debian rules file to simply copy files to its target. I simply need to copy pre-compiled files into directories.
I have no problem getting files into /opt and /var/log; however, I have been trying to get dpkg to copy the needed setup file into /etc/xinetd.d/
So I have a debian/package.install file something like this:
opt/oldapplication-3.10/* opt/oldapplication-3.10/
var/log/* var/log/
etc/xinetd.d/oldapplication /etc/xinetd.d
The xinetd setup file never makes it to xinetd.d, and trying to look at the dpkg install with debug doesn't give me any hints. The file is definitely in the tarball, it just simply never gets moved.
Looking through the different dh helper applications, I can't see anything that fits, and google does nothing to illuminate the problem.
Do I have to simply move the file over in a postinst script? Is that the only way to solve this, or is there a more "debian" way to do it by creating a file in the package's debian directory? Is there a more generic setup I should be doing to put files into /etc?
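For reference, the postinst workaround I have in mind would look roughly like this, shipping the file under /opt and copying it into place (the file name here is hypothetical):
#!/bin/sh
set -e
# copy the xinetd config shipped under /opt into place (hypothetical path)
cp /opt/oldapplication-3.10/oldapplication.xinetd /etc/xinetd.d/oldapplication
exit 0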
Thanks.
Related
I'm not root on the Linux server,
so I chose to install software in my $HOME/local/bin. I have already added the $HOME/local/bin directory to the PATH environment variable in my .bashrc.
Some software installs this way:
tar xvzf ncurses-5.9.tar.gz
cd ncurses-5.9
./configure --prefix=$HOME/local
make
make install
cd ..
So it installs directly into my $HOME/local/bin.
But some software, like sbt-1.2.1.zip (based on Java), just unpacks after download into a single folder sbt, which contains three folders: bin, conf, and lib. Its bin contains one executable file named sbt along with java9-rt-export.jar, sbt-launch-lib.bash, sbt-launch.jar, and sbt.bat.
Here I wonder:
Should I just symlink this executable sbt file into my $HOME/local/bin and then source my .bashrc?
Or, after decompression, should I add a line like export PATH="downloadpath/sbt/bin:$PATH" to my .bashrc?
Since there is just one executable in downloadpath/sbt/bin, I'm not sure it is right to add the whole bin folder to PATH. If a software's bin folder contains executable files (one or many), I think it is more convenient to just add its bin in .bashrc, but even so, I'm not sure that's right.
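To make the two options concrete (the download path is just a placeholder):
# Option 1: symlink only the executable into a directory already on PATH
ln -s "$HOME/downloads/sbt/bin/sbt" "$HOME/local/bin/sbt"
# Option 2: put the whole bin folder on PATH via .bashrc
export PATH="$HOME/downloads/sbt/bin:$PATH"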
I'm not familiar with installing software; I usually know how but not why. I have shown two ways to install here (more exist). Are executable files always placed in bin or src? Some software has no bin, just src, with no executable files in it...
Slurm can also use modules to install software, and conda is yet another way, but I want to confirm that the traditional ways I mentioned (those two) can still be used with Slurm or conda.
However, any suggestion, even a reminder about a single aspect, would be appreciated!
For precompiled software or, in general, software that does not offer configure scripts or (C)Make files, it is often better to leave it in its target directory and adapt the *PATH environment variables (PATH for the binaries, but also LD_LIBRARY_PATH and LIBRARY_PATH for the libraries, CPATH for the include files, and MANPATH for the man pages).
The reason is that the software might be configured to read files, such as libraries, with hardcoded paths relative to the position of the executable.
In your case, you might also need to set the CLASSPATH environment variable to the directory with the jar files.
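As a sketch, assuming the software was unpacked under $HOME/local/sbt (adapt the paths to your layout), the .bashrc lines would look like:
export PATH="$HOME/local/sbt/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/local/sbt/lib:$LD_LIBRARY_PATH"
export CLASSPATH="$HOME/local/sbt/lib:$CLASSPATH"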
To ease software installation, you can use tools such as EasyBuild, which can help and even create user modules just like the system modules installed by the system administrators.
There is something wrong, in my opinion, with your setup. If you don't have a root account on your server, isn't it better to test what you have to test in a safer environment, for example a VM/container on your development machine?
However, in your situation it may be better to start sbt from a separate bash script rather than modifying your .bashrc.
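Something along these lines (the download path is only an example):
#!/bin/bash
# wrapper script: start sbt without touching .bashrc
SBT_HOME="$HOME/downloads/sbt"
exec "$SBT_HOME/bin/sbt" "$@"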
Hello experts, and thank you for taking the time to answer my question:
Can I remove the .rpm files after having installed the program?
EXAMPLE:
I have installed the rar.rpm file, and now I have rar.rpm in the directory where I downloaded it. I want to know if I can remove that file without disrupting the rar program's function on CentOS.
Regards
Sam
Yes, you can.
Though you may want to keep it in case you need to install it again.
Also, in general, you shouldn't (and shouldn't need to) use randomly downloaded .rpm files.
You should usually be using yum or similar tool to install a package from the configured repositories on the system.
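For example, assuming the installed package is registered under the name rar (check the exact name with your package manager):
rpm -q rar     # confirm the package is recorded in the RPM database
rm rar.rpm     # the downloaded file itself is no longer needed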
Alright, so after trying to chase down the dependencies for various pieces of software for the n-th time, and replicating work that various people do for all the different Linux distributions, I would like to know if there is a better way of bundling various pieces of software into one .rpm or .deb file for easier distribution.
My current set up for doing this is a frankenstein monster of various tools but mainly Vagrant and libguestfs (built from source running in Fedora because none of the distributions actually ship it with virt-diff). Here are the steps I currently follow:
Spin up a base OS, either using a Vagrant box or by creating one from live CDs.
Export the .vmdk and call it base-image.
Spin up an exact replica of the previous image and go to town: use the package manager,
or some other means, to download, compile, and install all the pieces that I need. Once again, export the .vmdk and call it non-base-image.
Make both base images available to the Fedora guest OS that has libguestfs.
Use virt-diff to diff the two images and dump that data to file called diff.
Run several Ruby scripts to massage diff into another format that contains the information I need and none of the stuff I don't want, like things in /var.
Run another script to generate a command script for guestfish with a bunch of copy-out commands.
Run the guestfish script.
Run another script to regenerate the symlinks from diff because guestfish can't do it.
Turn the resulting folder structure into a .deb or .rpm file and ship it.
I would like to know if there is a better way to do this. You'd think there would be but I haven't figured it out.
I would definitely consider something along the lines of:
A)
yum list (select your packages/dependencies, whatever)
use yumdownloader on the previous list (or use the packages you have already downloaded)
createrepo
ship on media with an install script that adds the CD repo to the repo list, etc. (see the sketch below)
or B)
first two steps as above, then pack the rpms into an archive and build a package that contains all of the above and kicks off the actual install of the rpms (along the lines of rpm -Uvh /tmp/repo/*) as a late script (in the cleanup phase, maybe). I don't know if this can be done while avoiding locks on the RPM database.
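A rough sketch of option A, with a placeholder package name and paths you would adapt:
yumdownloader --resolve --destdir=/tmp/repo mypackage   # fetch the rpm plus its dependencies
createrepo /tmp/repo                                    # turn the directory into a yum repository
# ship /tmp/repo on media, then drop a .repo file on the target pointing at it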
I think you have reached the point of complexity - indeed a frankenstein monster - where you should stop being afraid of making proper packages with dependencies. We did this at my previous job - we had a set of fabricated rpm packages - and it was very easy and straightforward, including:
pre/post install scripts
uninstall scripts
dependencies
We never had to do anything like what you just described. And for the customer, installing even a set of packages was very easy!
You can follow a reference manual on how to build RPM packages for more info.
EDIT: If you need a single installation package, then create a master package that contains all the other packages (with dependencies set properly) and installs them in the post-install script (and uninstalls them in the uninstall script).
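A minimal sketch of what such a master package's spec could look like; note this variant pulls the sub-packages in via Requires from a repository rather than installing them from a post-install script, and all names are placeholders:
Name:      myapp-bundle
Version:   1.0
Release:   1
Summary:   Meta package that pulls in everything myapp needs
License:   Proprietary
BuildArch: noarch
Requires:  myapp, myapp-libs, myapp-config

%description
Installs myapp and its companion packages via dependencies.

%files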
There are mainly 3 steps to making a package with all dependencies (call them A, B, and C).
A. Gather required files.
There are many ways to gather the files of the main software and its dependencies. To capture all the dependencies and get an error-free run, you need to use a base OS (i.e. a live system).
1. Using AppDirAssistant
This app is used by www.portablelinuxapps.org to create a portable app directory. They scan and watch for the files accessed by the app to find what is required.
2. Using chroot & overlayfs
In this method you don't need to boot into the live CD; instead you chroot into it.
a. Mount the .iso at /cdrom.
b. Mount the filesystem (filesystem.squashfs) at another place, say /tmp/union/root.
c. Bind-mount /proc at /tmp/union/root/proc.
d. Overlay on it:
mount -t overlayfs overlayfs /tmp/union/root -o lowerdir=/tmp/union/root,upperdir=/tmp/union/rw
e. Chroot:
chroot /tmp/union/root
Now you can install packages using apt-get or another method (only from the chrooted terminal). All the changed files are stored at /tmp/union/rw. Take the files from there.
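Put together as commands, the sequence looks roughly like this (the ISO and squashfs paths are examples for an Ubuntu live CD, and the overlayfs syntax above is the older form; newer kernels call it overlay and also need a workdir):
mkdir -p /cdrom /tmp/union/root /tmp/union/rw
mount -o loop ubuntu.iso /cdrom
mount -o loop /cdrom/casper/filesystem.squashfs /tmp/union/root
mount --bind /proc /tmp/union/root/proc
mount -t overlayfs overlayfs /tmp/union/root -o lowerdir=/tmp/union/root,upperdir=/tmp/union/rw
chroot /tmp/union/root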
3. Using manually collected packages
Use the package manager to collect dependencies. For example,
apt-get install package --print-uris will print the download URIs for the package and its dependencies. Using these URIs, download the packages and extract them all (dpkg -x 1.deb ./extracted).
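For example, a sketch of that workflow (the package name is a placeholder):
apt-get install --print-uris -qq somepackage | cut -d"'" -f2 > uris.txt
wget -i uris.txt                                  # download the .deb files
for f in *.deb; do dpkg -x "$f" ./extracted; done # unpack each one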
B. Clean garbage files
After gathering the files, remove anything unwanted.
C. Pack files
1. Using AppImageAssistant
If you gathered the files manually, you need to copy the appname.desktop file from ./usr/share/applications to the root of the directory tree. Also copy a file named AppRun from another app, or extract it from AppDirAssistant.
2. Make a .deb or .rpm using gathered files.
Is the problem primarily that of ensuring that your customers have installed all the standard upstream distro packages necessary for your package to run?
If that's the case, then I believe the most straightforward solution would be to leverage the yum and apt infrastructure to have those tools track down and install the necessary prerequisite packages.
If you supply a native yum/apt repository with complete prerequisite specs (the hard work you've apparently already completed), then the standard system install tool takes care of the rest. See the links below for more on creating a personal repository for yum/apt.
For off-line customers, you can supply media with your software and a mirror - or mirror subset - of the upstream distro, along with instructions for adding them to the yum/apt config. A sketch of such a yum repo definition follows the links.
Yum
Creating a Yum Repository in the Fedora Deployment Guide
Apt
How To Setup A Debian Repository on the Debian Wiki
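For the off-line yum case, the repo definition you ship could look something like this (paths are illustrative):
# /etc/yum.repos.d/vendor-media.repo
[vendor-media]
name=Vendor offline repository
baseurl=file:///media/cdrom/repo
enabled=1
gpgcheck=0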
So your customers aren't ever going to install any other software that might specify a different version of those dependencies that you are walking all over, right?
Why not just create your own distro if you're going to go that far?
Or you can just give them a bunch of packages and a single script that does rpm -i dep1 dep2 yourpackage
I have a Debian package that I built; it contains a tarball of the files, a control file, and a postinst file. It's built using dpkg-deb and it installs properly using dpkg.
The modification I would like to make is to have the installation directory of the files determined at runtime, based on an environment variable that will be set when dpkg -i is run on the .deb file. I echo the environment variable in the postinst script and I can see that it's set properly.
My questions:
1) Is it possible to dynamically determine the installation directory at runtime?
2) If it's possible, how would I go about it? I have read about the rules file and the mypackage.install files, but I don't know if either of these would allow me to accomplish this.
I could hack it by copying the files to the target location in the postinst script, but I would prefer to do it the right way if possible.
Thanks in advance!
So this is what I found out about this problem over the past couple of weeks.
With prepackaged binaries you can't build a Debian package with a destination directory dynamically determined at runtime. I believe this might be possible when installing a package built from source, where you can set the install directory using configure; but since these are embedded Ubuntu machines, they don't have make, so I didn't pursue that option. I did work out a non-traditional method (hack) for installing that did work: since Debian packages simply contain a tarball relative to /, build your package relative to a directory under /tmp. In the postinst script you can then determine where to copy the files from the archive into a permanent location.
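As a sketch, the postinst ends up looking something like this (the variable and staging directory names are mine, not a convention):
#!/bin/sh
set -e
# MYAPP_INSTALL_DIR is expected in the environment that runs dpkg -i
DEST="${MYAPP_INSTALL_DIR:-/opt/myapp}"
mkdir -p "$DEST"
cp -a /tmp/myapp-staging/. "$DEST/"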
I expected that after rebooting, and the automatic deletion of the subdirectory under /tmp, dpkg might not know that the package existed. This wasn't a problem: when I ran 'dpkg -l myapp' it showed as still installed, and updating the package using dpkg/apt-get also worked without a hitch.
What I did find is that if you attempted to remove the package using 'dpkg -r myapp', dpkg would try to remove /tmp, which wasn't good. However, /tmp isn't easily removed, so it never succeeded. Plus, in our situation we never remove packages; we simply upgrade them.
I eventually had to abandon the universal package, because code differences in the sources meant I had to recompile per platform, but otherwise I would have left it this way, and it did work.
I tried using --instdir to change the install directory of the package, and it does relocate the files, but dpkg fails since its own files can't be found relative to the new instdir; using --instdir is sort of like a chroot. I also tried --admindir and --root in various combinations, to see if I could use the dpkg system relative to / but still relocate the files, but they didn't work. I gather rpm has a relocate option that works, but dpkg does not.
You can also write a script that runs dpkg-deb with a different environment six times, generating six different packages. When you make a modification, you simply run your script, all six packages get generated, and you can install them on your machines, avoiding postinst hacking!
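For example (the directories and the staging script are hypothetical):
for i in 1 2 3 4 5 6; do
    INSTALL_DIR="/opt/site$i" ./stage-files.sh      # lays out pkgroot for this install dir
    dpkg-deb --build pkgroot "myapp-site$i.deb"
done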
Why not install to a standard location and simply use a postinst script to create symbolic links to the desired location? This is much cleaner, and shouldn't break anything in dpkg.
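That is, a postinst fragment along these lines, where DESIRED_LOCATION stands for whatever path you derive at install time:
# expose the standard install location under the desired path
ln -sf /opt/myapp "$DESIRED_LOCATION/myapp"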
I made a perl script that creates a deb binary package from scratch.
I created the data, control, etc.
But when I run dpkg -i on my deb package, it complains that it is unable to create files from data.
Example:
unable to create '.dpkg-new' (while processing ''): No such file or directory.
I have downloaded some .deb packages to look at and they do not use the preinst script to create the directory structure.
I am thinking I am doing something wrong. I considered having to create my own directories in preinst, but it does not seem right... perhaps I am missing something?
Do I have to create the directories where my files from data will be copied in the preinst script, or should dpkg create them, and I am doing something wrong?
I had the same problem in a Ruby script I wrote.
I was generating a list of files to pass to tar when building the data.tar.gz archive. When I gunzipped and untarred the archive manually it would extract fine, but dpkg would fail.
It turns out that the list of files must also include each directory to create.
Note that when I created data.tar.gz I built it with nearly the same options as dpkg-deb/build.c does in the dpkg-1.15.8.11 source.
execlp(TAR, "tar", "-cf", "-", "--format=gnu", "--null", "-T", "-", "--no-recursion", NULL);
Instead I used
IO.popen("gnutar -czf - --format=gnu -T - --no-recursion", "r+")
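In other words, the list fed to tar through -T has to name every parent directory explicitly, e.g. (paths are an example):
./opt
./opt/myapp
./opt/myapp/bin
./opt/myapp/bin/myapp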
In addition to what @Arrowmaster said, check http://www.debian.org/doc/debian-policy/ for a detailed explanation of the files. After you build the package itself, you can check it with the lintian tool to see if there is anything obvious you might have missed.
If any one looks for a solution to the problem:
"Build a deb package from an rpm spec file."
Look here http://www.deepnet.cx/debbuild/
I have made my own Perl build script, much simpler than the one mentioned, so I can easily maintain it.
Some useful knowledge gained in the process:
0. The .deb is an ar archive that contains 3 files; the order of the files is important.
1. The scripts from control.tar.gz must be made executable.
2. It is good to have a preinstall script that makes directories if they do not exist.
3. Sometimes dpkg decides to unzip your zips (this happened if the zip was the only file in data.tar.gz), so check for that in a postinstall script.
4. When you tar.gz some files, be sure to chdir into the directory that contains the directory structure for your tar.
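For point 0, the assembly step boils down to something like this (member order matters, and debian-binary must come first):
echo "2.0" > debian-binary
ar rc mypackage.deb debian-binary control.tar.gz data.tar.gz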
You should not attempt to manually create a .deb binary package by hand. The Debian New Maintainers' Guide covers how to get started creating a Debian package using the correct tools.
Your hand-created package may look correct to you, but since it does not install, it is obviously flawed, either in a minor way that you have not noticed or in a more serious way that is not visible to you (for example, most people don't realize a .deb is actually an ar archive).
There are lots of reasons for this. You really need to run:
dpkg -i -D1110 mydeb.deb
And post the result to have any hope of someone being able to solve the problem.