How to create a Debian package which updates only required files when upgrading the package - linux

After a few weeks of struggle I am able to create a medium-sized native Debian package which works well for installing and removing the package.
As materials for beginners, http://www.quietsche-entchen.de/cgi-bin/wiki.cgi/-wiki/CreatingDebianPackages, the Debian wiki (http://wiki.debian.org/HowToPackageForDebian), and http://www.debian.org/doc/manuals/maint-guide/ are all quite good.
I have a basic problem: when upgrading the package, all the files in data.tar.gz are updated by default.
I want only a few files in data.tar.gz to be updated, based on a key variable stored in all the files.
After unpacking (which happens after the preinst script executes), all the files from data.tar.gz are already updated.
My idea was to take a backup of the files before upgrading the package and check the key variable in each file: if the key variable is greater than the current one, replace the file.
That means writing a simple backup script and executing it from the postinst file.
I do not think this is a good idea, and moreover the limitations of dash scripting make it a very tough job.

What are you trying to accomplish here? During the reinstallation (or upgrading) of a Debian package, replacing all of the non-conffiles with the latest versions is exactly what's supposed to happen. If a file hasn't changed since the last installed version of the package, then there's no harm in updating it anyway, and if it has changed, it's supposed to be updated.
If you have specific files which might be modified by the user and should be preserved across upgrades, make them conffiles. The package system will prompt the user and ask whether they want to keep the package maintainer's version or the locally modified version.
(But if you're going to make every file a conffile, then you're probably doing something wrong.)
To make a file a conffile, list it in debian/conffiles. But if the file is going to be installed under /etc then you don't need to do this because dh_installdeb will do it for you.
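For illustration, debian/conffiles is just a list of absolute paths, one per line; a sketch with made-up file names:

/etc/mypackage/mypackage.conf
/opt/mypackage/settings.ini

Each listed path must correspond to a file the package actually ships.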
EDIT following additional information in comment:
Suppose you have files test1.sh and test2.sh (among others) in your package. In the Debian world, they are either conffiles, intended to be modified by the end user, or they're not.
conffiles should be relatively few in number and as short as possible, to minimize the burden of having to reconcile changes made by the package maintainer with conflicting changes made by the end user.
If there are things mixed into the code that the end user is likely to want to tune, try to factor them out into a configuration file. If you put that file in /etc, you don't even have to manually designate it as a conffile.
If the end user needs to make a change to a non-conffile, they should use the dpkg-divert protocol to (1) move the original file aside, and (2) edit a copy. Diverted files are respected by package upgrades. The end user who uses dpkg-divert should be aware that things might break after upgrades as a result, because the package maintainer hasn't foreseen that these files would be modified by end users and the locally modified version might be incompatible with a newly upgraded version of a different file. dpkg-divert should be used carefully and sparingly.
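As a sketch of that protocol, using test1.sh from above and an assumed install path of /usr/bin, the end user would do something like:

$ dpkg-divert --local --rename --divert /usr/bin/test1.sh.distrib /usr/bin/test1.sh
$ cp /usr/bin/test1.sh.distrib /usr/bin/test1.sh
$ editor /usr/bin/test1.sh

The diversion moves the packaged file aside to test1.sh.distrib, and future upgrades unpack the package's copy under that diverted name, leaving the locally edited test1.sh alone. The diversion can be undone later with dpkg-divert --rename --remove /usr/bin/test1.sh.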

Related

How to use git with a package I am distributing

I have been using git for some time now and I feel I have a good handle on it.
I did, however, build my first small program as a distribution (something with ./configure, make, and make install), and I want to put it up on GitHub, but I am not sure exactly how to go about tracking it.
Should I, for instance, initialize git but only track the source code files, manpage, and readme (since the other files generated by autoconf and automake seem a bit superfluous)?
Or should I make an entirely different directory, put the source files in there, and then manually rebuild everything for version 0.2 when it is time?
Or do something else entirely?
I have tried searching but I cannot come up with any search terms that give me the kind of results I am looking for.
for instance initialize git but only track the source code file, manpage, and readme (since the other files generated by autoconf and automake seem a bit superfluous)
Yes: anything used to build needs to be tracked.
Anything being the result of the build does not need to be tracked.
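A common way to express that is a .gitignore at the project root; a possible starting point for an autotools project (the binary name here is hypothetical, and opinions differ on whether to also ignore the generated configure script):

# build results
*.o
myprogram
# autotools by-products
autom4te.cache/
config.log
config.status
Makefile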
should I make an entirely different directory
No: in version control, you could make a new tag to mark each version, and release branches from those tags to isolate patches which could be specific to the implementation details of a fixed release.
But you don't create folders for that (that was the Subversion way).
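For example, a minimal sketch of tagging a release and later branching from it (version numbers hypothetical):

$ git tag -a v0.1 -m "version 0.1"
$ git push origin v0.1
# later, if 0.1 needs release-specific patches:
$ git checkout -b v0.1-fixes v0.1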
should I make an entirely different directory for sources
Yes, you can (if you have a large set of source files).
But see also "Makefiles with source files in different directories"; you don't have just one Makefile.
The traditional way is to have a Makefile in each of the subdirectories (part1, part2, etc.) allowing you to build them independently.
Further, have a Makefile in the root directory of the project which builds everything.
And don't forget to put your object files in a separate folder (not tracked) as well.
See also this question as a concrete example.
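A minimal sketch of that layout, reusing the part1/part2 names above (target names assumed; note that make recipes must be indented with tabs):

# top-level Makefile: build every part in turn
SUBDIRS = part1 part2

all:
	for dir in $(SUBDIRS); do $(MAKE) -C $$dir || exit 1; done

clean:
	for dir in $(SUBDIRS); do $(MAKE) -C $$dir clean; done

Each subdirectory keeps its own Makefile with its own rules, so cd part1 && make still works on its own.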

How can I include a revision of a file in a Change Package after it was checked in

This happens occasionally in some projects I'm involved in that use MKS as the version control tool. A file is checked out, modified, and then checked in, but the developer forgot to include it in the related Change Package.
Is there a way to achieve this after the fact?
Thanks.
Delete revision (yes, the command exists), and then check in again, this time tied to the CP. Make sure you preserve the working file in the meantime.
If you have new revisions on top of it, you're out of luck.
I don't think that what SzG said really exists (if so, I'd like to know too :))
Once any operation is done without a CP, that operation can't be associated to a CP.
A solution for the future would be to make using a CP mandatory in the project setup.

TortoiseSVN update/cleanup error between Linux repository and Windows XP

For no reason that I can see, I can no longer run a TortoiseSVN Update on a development directory on my portable Windows XP Professional SP3 machine, getting the error:
Previous operation has not finished; run 'cleanup' if it was interrupted
Please execute the 'Cleanup' command.
If I try running cleanup, I get another error,
cannot process the following paths: cannot move $ROOT_DIR/.svn/tmp/tmp-... to $ROOT_DIR/path/where/thing/should/go: no such file or directory
I have verified that both files exist, and from a CMD.EXE prompt I am actually able to issue a MOVE with those two filenames and have it work correctly. That is no use, though, because the next time around SVN repeats the operation itself after creating a different tmp file name, and while CMD succeeded, SVN fails.
UPDATE: the path lengths are in both cases well below PATH_MAX, target file system is NTFS, and permissions are OK. Maybe I'll now try with FileMon to see whatever TortoiseSVN is really up to.
I tried downgrading TortoiseSVN but to no avail. Other repositories work OK between the same machines.
TortoiseSVN 1.7.9, Build 23248 - 32 Bit , 2012/08/30 18:25:37
Subversion 1.7.6,
apr 1.4.6
apr-utils 1.3.12
neon 0.29.6
OpenSSL 1.0.1c 10 May 2012
zlib 1.2.7
Both server (OpenSuSE Linux 12.2) and client now run the latest version of SVN.
On Windows, I also cannot seem to get any more informative logs or information (I'm not very skilled with TortoiseSVN, I have always used the Linux command line version).
I might delete the local copy and run a checkout, but it's about 2 GB of data, and I'm on a slow connection, so it is really more of a "fly physically to server location and hook a copper Ethernet to the local network there" alternative. I'm reserving that as a sort of last ditch, nuclear option; I'd really rather understand what the problem is, for I fear it might happen again.
UPDATE
I've tried deleting the subdirectory involved remotely, committing the deletion on the server; deleting the subdirectory locally; and emptying the .svn/tmp subdirectory, where I found sixteen tmp files, all copies of the one PNG causing problems.
I am still not able to perform any SVN subcommand without getting the "Run cleanup!" error; on cleanup, I get a failed attempt to copy a tmpfile onto the never-sufficiently-damned .PNG file, which no longer exists anywhere, in a directory that no longer exists anywhere.
I tried recreating the directory locally (but not the file!), no changes.
With FileMon, I traced the source PNG to 8e4c2389cf9d85c8b8ee54d49ea053c752a38187.svn-base in the .svn/pristine subdirectory, tried removing it, and got SVN complaining. I tried copying it to its intended destination (so that the file-as-it-should-be and the file-as-it-is were identical); no joy.
UPDATE
Well, this is weird. I decided to track everything that TortoiseSVN does using FileMon. I could see it checking wc.db and searching for the item, checking for it in .svn/pristine (and finding it), copying it (unnecessarily, if you ask me...) into .svn/tmp, and finally checking $DESTINATION_FILE (with correct case) using the Windows Open() API. And getting PATH NOT FOUND. Yet the file is there, I can see it (and the name is shorter than 8.3 characters). And why PATH not found, and not FILE not found?
Okay, it all boiled down to a directory that had been created remotely with a name ending in a space. The file itself was OK; the directory where it stood was not.
When updating, the directory apparently got created, but Windows shortened the name to exclude the final space.
To add to the difficulty of diagnosing, while TortoiseSVN did tell me what the problem was, it did so in a dialog box where the Arial font made the space in \path\to\your \file hard to recognize (it was recognizable once I knew where to look and compared that backslash with the others; this one stood a little farther from the letter to its left).
Lesson learned: check the file name in the dialog really carefully, character by character (note to self: find a way to display it in Courier New if at all possible).
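One way to hunt for such names, assuming a recursive listing is feasible (repository URL made up; svn ls appends a slash to directory names, so a directory whose name ends in a space shows up as "name /"):

$ svn ls -R http://server/repo | grep -E ' /?$'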
You may have two files in the repository that differ only in case. That's a problem on Windows. See this FAQ for details.
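With GNU sort and uniq, a recursive listing can also reveal such case-only collisions (repository URL made up):

$ svn ls -R http://server/repo | sort -f | uniq -Di

Any output means two entries differ only in case and will collide on a case-insensitive file system like NTFS.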

Let ./configure find library files in a specific directory

I'm currently installing the R software on a shared space across several servers. After installation I found that when I log in on different servers, R is not guaranteed to run, because some library files are missing on different machines.
Here is what I'm trying to do: since the installation of R is machine-dependent, I'd like to put all the missing library files, like libtermcap.so.2, libg2c.so.1, etc., into a single directory on the shared space, so that when I run ./configure it will also search that directory. Since the directory is shared, the installation would become machine-independent, and I wouldn't need to add the missing files on each server.
Is there an option to achieve this when I run ./configure? Thanks.
Assuming you have copied the library files to /shared/lib/ and the header files to /shared/include/, you can run
./configure LDFLAGS=-L/shared/lib CPPFLAGS=-I/shared/include ...other options...
Note, however, that you are bound to run into trouble at run time, when you have to convince your installation to use the shared libraries from the right directory, especially in case someone decides to upgrade the default version on the respective host. That whole business is platform and installation dependent. I think if your hosts are not at least mostly identical, you ought to install your software (R) locally in a way suitable to the respective system.
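If the hosts are close enough that a shared build is still worth trying, one common approach with a GNU toolchain is to bake the runtime search path in at link time via an rpath, for example:

$ ./configure LDFLAGS="-L/shared/lib -Wl,-rpath,/shared/lib" CPPFLAGS=-I/shared/include

Alternatively, the wrapper script that launches R can set LD_LIBRARY_PATH=/shared/lib; both only influence where the dynamic linker looks, so the platform caveats above still apply.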
Peter's answer is correct (+1), and please take special note of his suggestion to install locally. Using the local package management system and auto updating on each box is (in the long run) a much easier solution than trying to get compatible binaries/libraries on a shared drive. To simplify using Peter's solution, note that you can place the appropriate arguments in /shared/share/config.site. For example:
$ cat > /shared/share/config.site << EOF
: ${LDFLAGS=-L/shared/lib}
: ${CPPFLAGS=-I/shared/include}
EOF
Whenever you run configure with --prefix=/shared, the config.site file will be read and defaults will be set.

Capturing all the data that has changed during a Linux install

I am trying to figure out which files were changed when I run an app install via make install. I can look at the script, but that calls other scripts and may or may not touch other files, etc. How can I do this programmatically?
Implementation: http://asic-linux.com.mx/~izto/checkinstall/
Several ways come to mind. First, use some sort of LD_PRELOAD to track all files opened. Second, compare the filesystem before and after.
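A crude sketch of the before/after comparison, assuming GNU find and that nothing else is writing to the target filesystem in the meantime:

$ touch /tmp/before-install
$ make install
$ find / -xdev -newer /tmp/before-install 2>/dev/null > /tmp/changed-files

The -xdev flag keeps find on one filesystem; drop it (or run several finds) if the install spans mount points.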
If your kernel supports it, you can use inotify (a handy interface is inotify-tools) and watch your home directory, if the package was configured with --prefix=/home/myusername
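With inotify-tools installed, that watch might look like this (prefix taken from the suggestion above, log path made up):

$ inotifywait -m -r -e create -e modify -e moved_to --format '%w%f' /home/myusername > /tmp/install-log &
$ make install

Afterwards, /tmp/install-log holds one path per event; sort -u trims it down to the distinct files touched.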
I've noticed that checkinstall (using installwatch via LD_PRELOAD) does not always catch everything; the last time I used it, it did not catch empty directories that were created for spooling, which caused the subsequently generated .debs to break.
Note: don't use inotify if you are installing to /; in that case you have to use installwatch or just read all of the makefiles / install scripts closely.
