What's the best way to move a directory into place in a Makefile install? - linux

I'm currently using the usual technique in my Makefile to install individual files:
install:
	install -D executable ${BIN_DIR}
But I just ran across a situation where I need to move a whole directory and all files underneath it into place.
Is cp -r the best way or is there a more linux-y/unix-y way to do this?

Yeah, it's hard to think of a more unix-ish way than cp -r, although -r is a relatively late addition to cp. I can tell you the way we used to do it, which works neatly across filesystems and such:
Let src be the source directory you want to move, and /path/to/target be an absolute path to the target. Then you can use:
$ tar cf - src | (cd /path/to/target; tar xf -)
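For instance, a minimal sketch of how that pipe could sit inside an install target (DEST_DIR is a placeholder; adjust it to wherever the tree should land):
DEST_DIR = /path/to/target

install:
	install -D executable ${BIN_DIR}
	mkdir -p ${DEST_DIR}
	tar cf - src | (cd ${DEST_DIR} && tar xf -)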

My version of install(1) (Debian) has:
-d, --directory
treat all arguments as directory names; create all components of the specified directories
-t, --target-directory=DIRECTORY
copy all SOURCE arguments into DIRECTORY
So if you wanted to use install(1) consistently throughout your Makefile you could do:
install -d destdir
install srcdir/* -t destdir
-t isn't recursive however - if srcdir contains directories, then they won't get copied.
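If you would rather stay with install(1) and still handle nested directories, one hedged workaround is to walk the tree with find and install each file individually (a sketch assuming GNU findutils; DESTDIR is a placeholder absolute path):
DESTDIR=/path/to/destdir
(cd srcdir && find . -type f -exec install -D {} "$DESTDIR"/{} \;)
Since install -D creates the missing parent directories for each file, the only thing this misses is directories in srcdir that contain no files.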

Linking is another viable alternative. That would allow you to keep multiple directories (representing different versions) accessible.
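A hedged sketch of that idea, with made-up version numbers and paths: install each version into its own directory and point a stable symlink at whichever one should be live:
mkdir -p /opt/myapp-1.2.3
tar cf - src | (cd /opt/myapp-1.2.3 && tar xf -)
ln -sfn /opt/myapp-1.2.3 /opt/myapp    # -n replaces the old link rather than descending into it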

Related

Is it possible to create a folder with the filename into the tar file you are creating?

Let's say I'm trying to tar.gz all the files and folders in /usr/local/bin/data/*
The file name would be data-2015-10-01.tar.gz. When I untar it, is it possible that the root directory would be data-2015-10-01 followed by the contents of whatever is inside of data/* ?
If not, how can I tar /usr/local/bin/data/* but start at the /data/ folder level?
I can't do the following, unfortunately, since the program spits out /usr/local/bin/data/ and I'm unable to change it:
cd /usr/local/bin
tar ... data/*
There are a couple of ways to do what I think you're trying to accomplish. First, you can use the -C option to tar when creating the archive. That changes tar's current working directory to that directory before creating the archive. Not strictly required in your case, but probably helpful.
# tar -C /usr/local/bin -czf data-2015-10-01.tar.gz data
That at least gets you to a single directory named data. If you have control of the extraction (manually or via a script you provide to whoever is unpacking this), then you can do something like this on the extraction:
# mkdir -p data-2015-10-01 && tar -C data-2015-10-01 --strip-components=1 -xzf data-2015-10-01.tar.gz
This strips the first path component, which is "data", and extracts everything else into data-2015-10-01 (the directory given to -C). So it isn't specifically tar that's doing the renaming, but you effectively end up with the same result.
I've accomplished something similar with a symlink. This is not a great solution if you have (or might have) symlinks in the directory structure you're trying to archive. I have to say that I prefer @geis' solution of stripping the top-level directory on extract, but this gives you another option.
ln -s /usr/local/bin/data data-2015-10-01
tar -czvhf data-2015-10-01.tar.gz data-2015-10-01/
rm data-2015-10-01
(Note the additional -h option in the tar invocation.)

How to exclude binaries when using rsync

I want to rsync a directory from a Mac machine to a Linux server while excluding compiled files like .o files and binary executables. How do I exclude binary files?
What I am using at the moment:
rsync -av --compress --exclude="*.o" dir server:dir
This is a sticky problem because a Unix system does not have a hard and fast definition of the distinction between "binary" and "text" files. You can do a pretty good job by using the file command and searching for text in the output (see How to tell binary from text files in linux), so I'd run find to generate a list of files which file considers to be text, and use that as the list of files to rsync:
find dir | xargs file | awk -F: '$2 ~ /text/ { print $1 }' | \
rsync --files-from=- -av --compress dir server:dir
This will require some tweaking to make sure the pathnames are correct relative to the source dir, and so on, but it should get close to what you want.
In the long term, I'd want to rework my build process to put generated files in a dir/build directory, but this might help for now :-)
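A hedged sketch of that tweak, running find from inside dir so the listed paths are relative to the rsync source (filenames containing spaces or colons would still need more care):
(cd dir && find . -type f | xargs file | awk -F: '$2 ~ /text/ { print $1 }' | \
    rsync --files-from=- -av --compress . server:dir)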
You can add a .cvsignore file in the directories and use the option -C to rsync.
But this only loosely matches what you specified; it may suit you well, or it may make assumptions you don't intend. So be careful and test it properly.
Also, you can run a find before the rsync, scanning the complete tree for files matching your idea of "binary" (compiled executables, perhaps?), and placing all their names in an exclude file which you then use with the --exclude-from option.
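A hedged sketch of that approach, again leaning on file to decide what counts as binary (the exclude-list location is arbitrary):
(cd dir && find . -type f -exec file {} +) | \
    awk -F: '$2 !~ /text/ { sub(/^\.\//, "", $1); print $1 }' > /tmp/binary-excludes
rsync -av --compress --exclude-from=/tmp/binary-excludes dir server:dir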

How to duplicate a folder exactly

I am trying to copy a filesystem for a device I am programming for. After so much time trying to figure out why the filesystem I was installing wasn't working I found out that cp didn't get the job done. I used du -s to check the size of the original filesystem and the one that I copied with cp -r, as it turns out they differ by about 150 bytes.
Something is telling me that symbolic links or some sort of kernel objects aren't being copied correctly.
Is it possible to copy a folder/file system exactly? If so how would I go about it?
Try doing this the straightforward way:
cp -a src target
from man cp
-a, --archive
same as -dR --preserve=all
It preserves permissions, symlinks, and so on.
I tried all of the suggestions here on my Linux box. The rsync approach proposed by @seanmcl turned out to be the right one, while the others failed to keep owners and/or some special files, or stopped with a permission denied. The exact command is:
$ sudo rsync -aczvAXHS --progress /var/www/html /var/www/backup
Just remember to use the bare directory names: don't append a wildcard (/*) to the source, otherwise hidden files directly below the source are skipped (the shell glob doesn't match them), and note that a trailing slash (/) on the source changes rsync's behaviour to copying the directory's contents rather than the directory itself.
Another popular option is to use tar c source | (cd target && tar x ). See this linuxdevcenter.com article.
The most accurate way I know of copying files is with cpio:
cd /path/to/source
find . -xdev -print0 | cpio -oa0V | (cd /path/to/target && cpio -imV)
Not really easy to use, but this is very precise, preserving timestamps, owners, permissions, special files.
Rsync is the best way to copy a file system. There are myriad arguments that let you control exactly what is copied.
This is what I do, for example to duplicate directory A -> B:
$ mkdir B
$ cd A
$ cp -a ./ ../B

Ideal way to use wget to download and install using temp directory?

I am trying to work out the proper process of installing with wget; in this example I'll use nginx.
# Download nginx to /tmp/ directory
wget http://nginx.org/download/nginx-1.3.6.tar.gz -r -P /tmp
# Extract nginx into /tmp/nginx directory
tar xzf nginx-1.3.6.tar.gz -C /tmp/nginx
# Configure it to be installed in opt
./configure --prefix=/opt/nginx
# Make it
make
# Make install
make install
# Clean up temp folder
rm -r /tmp/*
Is this the idealised process? Is there anything I can improve on?
First of all, you definitely seem to be reinventing the wheel: if the problem you want to solve is automated packaging/building of software on target systems, then there are myriad solutions available, in the form of various package management systems, port builders, etc.
As for your shell script, there are a couple of things you should consider fixing:
Strings like http://nginx.org/download/nginx-1.3.6.tar.gz or nginx-1.3.6.tar.gz are constants. Try to extract all constants into separate variables to make maintaining this script a little easier, for example:
NAME=nginx
VERSION=1.3.6
FILENAME=$NAME-$VERSION.tar.gz
URL=http://nginx.org/download/$FILENAME
TMP_DIR=/tmp
INSTALL_PREFIX=/opt
wget "$URL" -r -P "$TMP_DIR"
tar xzf "$FILENAME" -C "$TMP_DIR/nginx"
You generally can't be 100% sure that wget exists on target deployment system. If you want to maximize portability, you can try to detect popular networking utilities, such as wget, curl, fetch or even lynx, links, w3m, etc.
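For instance, a minimal sketch of such detection, assuming a POSIX shell and checking only for wget and curl (the helper name fetch_url is made up):
if command -v wget >/dev/null 2>&1; then
    fetch_url() { wget -O "$2" "$1"; }
elif command -v curl >/dev/null 2>&1; then
    fetch_url() { curl -L -o "$2" "$1"; }
else
    echo "error: neither wget nor curl found" >&2
    exit 1
fi
# e.g. fetch_url "$URL" "$TMP_DIR/$FILENAME"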
Proper practice for using a temporary directory is a long separate question, but, generally, you need to take care of three things:
One should somehow find out the temporary directory location. Generally, it is wrong to assume that /tmp is always the temporary directory: it may not be mounted, it may be non-writable, it may be a tmpfs filesystem that is full, and so on. Unfortunately, there is no portable and universal way to detect the temporary directory. The very least one should do is check the contents of $TMPDIR, so that a user can point the script at the proper temporary directory. Another reasonable idea is a set of heuristic checks to make sure the desired location is writable (checking at least $TMPDIR, $HOME/tmp, /tmp, /var/tmp) and has a decent amount of space available (see the small sketch after the cleanup example below).
One should create the temporary directory in a safe manner. On Linux systems, mktemp --tmpdir -d some-unique-identifier.XXXXXXXXX is usually enough. On BSD-based systems, much more manual work is needed, as the default mktemp implementation is not particularly race-resistant.
One should clean up the temporary directory after use. Cleaning should be done not only on a successful exit, but also in case of failure. This can be handled with a signal trap and a dedicated cleanup callback, for example:
# Cleanup: remove temporary files
cleanup()
{
local rc=$?
trap - EXIT
# Generally, it's the best to remove only the files that we
# know that we have created ourselves. Removal using recursive
# rm is not really safe.
rm -f "$LOCAL_TMP/some-file-we-had-created"
[ -d "$LOCAL_TMP" ] && rmdir "$LOCAL_TMP"
exit $rc
}
trap cleanup HUP PIPE INT QUIT TERM EXIT
# Create a local temporary directory
LOCAL_TMP=$(mktemp --tmpdir -d some-unique-identifier.XXXXXXXXX)
# Use $LOCAL_TMP here
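And a minimal sketch of the temporary-directory fallback chain mentioned in the first point above (purely illustrative; it reuses the TMP_DIR variable from the earlier snippet):
# Pick the first writable candidate among $TMPDIR, $HOME/tmp, /tmp and /var/tmp
for candidate in "${TMPDIR:-}" "$HOME/tmp" /tmp /var/tmp; do
    if [ -n "$candidate" ] && [ -d "$candidate" ] && [ -w "$candidate" ]; then
        TMP_DIR=$candidate
        break
    fi
done
[ -n "${TMP_DIR:-}" ] || { echo "error: no writable temporary directory found" >&2; exit 1; }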
If you really want to use a recursive rm, then using * to glob files is bad practice. If the directory contains more than a few thousand files, * can expand to too many arguments and exceed the shell's command-line length limit. I might even say that using any globbing without a good excuse is generally bad practice. The rm line above should be rewritten at least as:
rm -f /tmp/nginx-1.3.6.tar.gz
rm -rf /tmp/nginx
Removing all subdirectories in /tmp (as in /tmp/*) is a very bad practice on a multi-user system, as you'll either get permission errors (you won't be able to remove other users' files) or you'll potentially heavily disrupt other people's work by removing actively used temporary files.
Some minor polishing:
POSIX-standard tar uses normal short UNIX options nowadays, i.e. tar -xvz, not tar xvz.
Modern GNU tar (and, AFAIR, BSD tar too) doesn't really need any of the decompression flags such as -z, -j or -y. It detects the archive/compression format itself, and tar -xf is sufficient to extract any of .tar / .tar.gz / .tar.bz2 tarballs.
That's the basic idea. You'll have to run the make install command as root (or the whole script if you want). Your rm -r /tmp/* should be rm -r /tmp/nginx because other commands might have stuff they're working on in the tmp directory.
It should also be noted that the chances that building from source like this will work with no modifications for a decently sized project are fairly low. Generally you will find you need to specify a path to a library explicitly, or some code doesn't quite compile correctly on your distribution.

How can I recursively copy a directory into another and replace only the files that have not changed?

I am looking to do a specific copy in Fedora.
I have two folders:
'webroot': holding ALL web files/images etc
'export': folder containing thousands of PHP, CSS, JS documents that are exported from my SVN repo.
The export directory contains many of the same files/folders that the root does, however the root contains additional ones not found in export.
I'd like to merge all of the contents of export with my webroot with the following options:
- Overwriting the file in webroot if export's version contains different code than what is inside webroot's version (live)
- Preserving the permissions/users/groups of the file if it is overwritten (the export version replacing the live version). NOTE: I would like webroot's permissions/ownership maintained, but with export's contents
- No prompting/stopping of the copy of any kind (i.e. not verbose)
- Recursive copy - obviously I would like to copy all files, folders and subfolders found in export
I've done a bit of research into cp - would this do the job?:
cp -pruf ./export /path/to/webroot
It might, but any time the corresponding files in export and webroot have the same content but different modification times, you'd wind up performing an unnecessary copy operation. You'd probably get slightly smarter behavior from rsync:
rsync -pr ./export /path/to/webroot
Besides, rsync can copy files from one host to another over an SSH connection, if you ever have a need to do that. Plus, it has a zillion options you can specify to tweak its behavior - look in the man page for details.
EDIT: with respect to your clarification about what you mean by preserving permissions: you'd probably want to leave off the -p option.
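A hedged variant reflecting that, as a sketch only: the trailing slash on the source makes rsync merge export's contents into webroot instead of creating webroot/export, and -c compares file contents (checksums) rather than timestamps:
rsync -rc ./export/ /path/to/webroot/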
-u copies only when the source file is newer than the destination file (or when the destination file is missing)
-p preserves mode, ownership and timestamps
-f forces the copy by removing an existing destination file that cannot be opened and trying again (it does not control verbosity; cp is quiet unless you pass -v)
-r makes the copy recursive
So it looks like you have the right arguments to cp.
Sounds like a job for cpio (and hence, probably, GNU tar can do it too):
cd export
find . -print | cpio -pvdm /path/to/webroot
If you need owners preserved, you have to do it as root, of course. The -p option is 'pass mode', meaning copy between locations; -v is verbose (but not interactive; there's a difference); -d means create directories as necessary; -m means preserve modification times. By default, without the -u option, cpio won't overwrite files in the target area that are newer than the ones in the source area.
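If you do want existing files replaced regardless of timestamps (closer to what the question asks for), a hedged variant is to add -u (unconditional):
cd export
find . -print | cpio -pvdmu /path/to/webroot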
