Can I use /dev/null for the find command? - linux

I'm updating a shell script that uses the find command that follows symlinks:
find -L somedir ...
However, on some older platforms, the -L isn't supported and the command must use the older -follow syntax:
find somedir -follow ...
The "-follow" flag is deprecated on newer systems, so my strategy is to test if the command works with the newer -L flag, and if not fall back on the -follow flag.
The script currently runs on RedHawk 5.4.11, but the find incompatibility was discovered on an older Linux version. I was directed to make this work on all Unix/Linux platforms.
So, to build a dummy find command for the test, I'm creating an empty temp directory in /tmp so the find returns quickly. I then found out that mktemp -d is not supported on the older systems, so I was going to create one the old-fashioned way.
It then dawned on me, "why not just try /dev/null as a temp dir instead of creating one?" So I tried the command:
TEMPDIR=/dev/null
FIND_L_SUPPORTED=`find -L $TEMPDIR &> /dev/null; echo $?`
and it seems to work, but I'm not sure why (since /dev/null is not a directory), or if it's reliable on all platforms.
Two questions:
Is using find against /dev/null reliable on all platforms?
Any other solutions to my original find problem, where some platforms need -L but others need -follow?
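For comparison, the same probe can be run against a real, empty scratch directory, created the old-fashioned way since mktemp -d may be missing (the directory name is illustrative, and >/dev/null 2>&1 is the portable spelling of bash's &>):
TEMPDIR=/tmp/find_probe.$$
mkdir "$TEMPDIR" || exit 1
FIND_L_SUPPORTED=`find -L "$TEMPDIR" >/dev/null 2>&1; echo $?`
rmdir "$TEMPDIR"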

Trying to support unknown platforms is not productive. Just because you get one non-standard aspect right (e.g., replacing -L with -follow) doesn't mean there aren't other non-standard behaviors you don't know about that could cause your script to break.
Instead, write your script to support the standard by default, but provide a flag that users can explicitly set to support known older platforms. For example,
if [ "$NON_STANDARD_FIND" = "no-L" ]; then
find somedir -follow ...
else
find -L somedir ...
fi
Then run the script as
my_script ...
or
NON_STANDARD_FIND=no-L my_script ...
as necessary.
In the end, document the platforms you know you can support and how to run your script correctly on those platforms. Users of other platforms run your script at their own risk.

Related

Is there a replacement for the -P option in tar, when using the POSIX version of tar (not the GNU version)?

I had a script that used tar as follows: tar -cPz, and this worked as I wanted it to.
Now, however, I'm moving away from bash to a POSIX-compliant shell, which means the -P option is no longer supported by tar. I guess this is because I'm now using the POSIX version of tar and no longer the GNU version.
So my question: how do I keep it working as it did with the -P option, i.e. how do I prevent tar from removing the leading '/'?
I've read some stuff about "this is usually not what you want", etc., but in my case I would like to keep it as it is, so I'm wondering if there is a way to keep this functionality.
Thanks in advance!
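For context, the kind of invocation meant here (archive name and paths are illustrative): with GNU tar, -P keeps the leading '/' on member names instead of stripping it.
tar -cPzf backup.tar.gz /etc/nginx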

Gitbash version does not allow grep -o, is it possible to install new grep package?

I am trying to do a directory-wide search for specific strings in JSON files. The only problem is that these JSON files are only one line, so when I cat all of them, all strings occur a magical "1" time...since there's only one line even when I string them all together.
An easy solution, which I see a lot (here and here), is grep -o. Only problem is it doesn't come standard on my Gitbash. I solved my immediate problem by just installing the latest Cygwin. However, I'm wondering if there was an easier/more granular solution. Is it possible to do the equivalent of "apt-get install" or similar on Gitbash? Or can someone explain to me a quick-and-dirty way to extract and install the tar file in Gitbash?
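For reference, grep -o prints every match on its own line, so occurrences can be counted with wc -l even when each file is a single line; the pattern and glob here are hypothetical:
grep -o '"someKey"' *.json | wc -l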
The other approach is to:
use a cmd session (using the git-cmd.bat packaged with Git for Windows)
use the grep included in Gnu for Windows, which supports the -o option (and actually allows you to use most of the other Unix commands that your script might currently be using)

Ideal way to use wget to download and install using temp directory?

I am trying to work out the proper process of installing with wget; in this example I'll use Nginx.
# Download nginx to /tmp/ directory
wget http://nginx.org/download/nginx-1.3.6.tar.gz -r -P /tmp
# Extract nginx into /tmp/nginx directory
tar xzf nginx-1.3.6.tar.gz -C /tmp/nginx
# Configure it to be installed in opt
./configure --prefix=/opt/nginx
# Make it
make
# Make install
make install
# Clean up temp folder
rm -r /tmp/*
Is this the idealised process? Is there anything I can improve on?
First of all, you definitely seem to be reinventing the wheel: if the problem you want to solve is automated packaging / building of software on target systems, then there are myriad solutions available, in the form of various package management systems, port builders, etc.
As for your shell script, there are a couple of things you should consider fixing:
Stuff like http://nginx.org/download/nginx-1.3.6.tar.gz or nginx-1.3.6.tar.gz are constants. Try to extract all constants into separate variables and use them to make maintaining this script a little easier, for example:
NAME=nginx
VERSION=1.3.6
FILENAME=$NAME-$VERSION.tar.gz
URL=http://nginx.org/download/$FILENAME
TMP_DIR=/tmp
INSTALL_PREFIX=/opt
# -r (recursive mirroring) is unnecessary for a single file;
# -P sets the download directory
wget "$URL" -P "$TMP_DIR"
# tar -C requires the target directory to exist
mkdir -p "$TMP_DIR/$NAME"
tar xzf "$TMP_DIR/$FILENAME" -C "$TMP_DIR/$NAME"
You generally can't be 100% sure that wget exists on the target deployment system. If you want to maximize portability, you can try to detect popular networking utilities, such as wget, curl, fetch or even lynx, links, w3m, etc.
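A minimal sketch of such detection, assuming nothing beyond the command names (exact flags differ between versions, so treat these as illustrative):
# Define a fetch helper around whichever downloader exists
if command -v wget >/dev/null 2>&1; then
    fetch() { wget -O "$2" "$1"; }
elif command -v curl >/dev/null 2>&1; then
    fetch() { curl -fL -o "$2" "$1"; }
else
    echo "error: no downloader (wget/curl) found" >&2
    exit 1
fi
fetch "$URL" "$TMP_DIR/$FILENAME"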
Proper practice around temporary directories is a long separate question, but, generally, you'll need to adhere to three things:
One should somehow find out the temporary directory location. Generally, it's wrong to assume that /tmp is always a temporary directory: it can be unmounted, it can be non-writable, it can be a tmpfs filesystem that is full, etc. Unfortunately, there's no portable and universal way to detect the temporary directory. The very least one should do is check the contents of $TMPDIR, so that a user can point the script at the proper temporary directory. Another possibly bright idea is a set of heuristic checks to make sure that it's possible to write to the desired location (checking at least $TMPDIR, $HOME/tmp, /tmp, /var/tmp), that a decent amount of space is available, etc.; a sketch of such a check appears after the cleanup example below.
One should create the temporary directory in a safe manner. On Linux systems, mktemp --tmpdir -d some-unique-identifier.XXXXXXXXX is usually enough. On BSD-based systems, much more manual work is needed, as the default mktemp implementation is not particularly race-resistant.
One should clean up the temporary directory after use. Cleaning should be done not only on a successful exit, but also in case of failure. This can be remedied by using a signal trap and a special cleanup callback, for example:
# Cleanup: remove temporary files
cleanup()
{
    local rc=$?
    trap - EXIT
    # Generally, it's best to remove only the files that we
    # know we have created ourselves. Removal using recursive
    # rm is not really safe.
    rm -f "$LOCAL_TMP/some-file-we-had-created"
    [ -d "$LOCAL_TMP" ] && rmdir "$LOCAL_TMP"
    exit $rc
}
trap cleanup HUP PIPE INT QUIT TERM EXIT
# Create a local temporary directory
LOCAL_TMP=$(mktemp --tmpdir -d some-unique-identifier.XXXXXXXXX)
# Use $LOCAL_TMP here
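The heuristic candidate check from the first point above might, as a sketch, look like this (the candidate list is illustrative):
# Keep the first candidate directory that exists and is writable
for dir in "${TMPDIR:-}" "$HOME/tmp" /tmp /var/tmp; do
    if [ -n "$dir" ] && [ -d "$dir" ] && [ -w "$dir" ]; then
        TMP_BASE=$dir
        break
    fi
done
[ -n "${TMP_BASE:-}" ] || { echo "no writable temp directory found" >&2; exit 1; }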
If you really want to use recursive rm, then using any * to glob files is a bad practice. If your directory has more than a few thousand files, * can expand to too many arguments and overflow the shell's command-line buffer. I might even say that using any globbing without a good excuse is generally bad practice. The rm line above should be rewritten at least as:
rm -f /tmp/nginx-1.3.6.tar.gz
rm -rf /tmp/nginx
Removing all subdirectories in /tmp (as in rm -r /tmp/*) is a very bad practice on a multi-user system: you'll either get permission errors (you won't be able to remove other users' files), or you'll potentially heavily disrupt other people's work by removing actively used temporary files.
Some minor polishing:
POSIX-standard tar uses normal short UNIX options nowadays, i.e. tar -xvz, not tar xvz.
Modern GNU tar (and, AFAIR, BSD tar too) doesn't really need any of "uncompression" flags, such as -z, -j, -y, etc. It detects archive/compression format itself and tar -xf is sufficient to extract any of .tar / .tar.gz / .tar.bz2 tarballs.
That's the basic idea. You'll have to run the make install command as root (or the whole script, if you want). Your rm -r /tmp/* should be rm -r /tmp/nginx, because other commands might have stuff they're working on in the tmp directory.
It should also be noted that the chances that building from source like that will work with no modifications for a decently sized project are fairly low. Generally you will find that you need to specify a path to a library explicitly, or that some code doesn't quite compile correctly on your distribution.

List CPAN modules that have been made but not installed

Is there an easy/clean way to do this in Linux/ a Linux-like environment?
Purpose
My aim is to run CPAN with admin permissions only during the installation phase, not at the get/make/test phases.
The CPAN configuration items make_install_make_command and mbuild_install_build_command deal with this. Change them to enable sudo support.
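For example, from the cpan shell (a sketch; the exact sudo command paths depend on your system):
o conf make_install_make_command 'sudo make'
o conf mbuild_install_build_command 'sudo ./Build'
o conf commit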
Assuming you're using CPAN.pm for that, I have a somewhat unorthodox suggestion.
Make a subclass of CPAN.pm which publishes the results/stages of each module it works with to a registry (via a supplied callback API, to make the registry implementation flexible).
Then you need to simply check that registry.
(or you can try to put that as a patch into CPAN.pm itself)
For the sake of documenting an approach that seems promising, but doesn't work - the shell command:
find . -mindepth 1 -maxdepth 1 -type d -print | while read -r DIR; do
    pushd "$DIR"
    make -q;         mk=$?
    make -q install; inst=$?
    make -q test;    tst=$?
    echo "Directory $DIR $mk $inst $tst"
    popd
done | fgrep -ve /build
when executed in the cpan build dir, lists the exit statuses of make -q for the default, "install" and "test" targets, which indicates whether each make goal still needs any work.
But all of them have nonzero exit statuses, which means they will all do something if you execute them, even when the make has already completed successfully. So you can't tell anything this way.

Move/copy files/folder in linux/solaris using only bash built-ins

There was a situation where somebody moved the whole root dir into a subdir on a remote system, so all the system tools like cp, mv, etc. didn't work anymore. We still had an active session, but couldn't find a way to copy/move the files back using only bash built-ins.
Do somebody know of a way to achieve this?
I even thought about copying the cp or mv binary into the current dir with
while read -r LINE; do echo "$LINE"; done
and then redirecting this to a file, but it didn't work. I guess that's because of all the special non-printable characters in a binary file that can't be copied/displayed using echo.
Thanks.
/newroot/lib/ld-linux.so.2 --library-path /newroot/lib \
/newroot/bin/mv /newroot/* /
(Similar for Solaris, but I think the dynamic linker is named ld.so.1 or something along those lines.)
Or, if your shell is sh-like (not csh-like),
LD_LIBRARY_PATH=/newroot/lib /newroot/bin/mv /newroot/* /
If you have prepared by pre-installing sash, then that shell is statically linked and has a copy built in (-cp).
Otherwise LD_LIBRARY_PATH=/copied/to/path/lib /copied/to/path/bin/cp might work?
I think it might have a problem with not having ld.so in the expected place.
Here's a reasonable ghetto replacement for cp. You'll want echo -E if the file ends with a newline (like most text files), echo -nE if it doesn't (like most binaries).
echo -nE "`< in.file`" > out.file
Old thread, but I hit exactly the same stupid mistake: /lib64 was moved to /lib64.bak remotely and everything stopped working.
This was an x86_64 install, so ephemient's solution didn't work as-is:
# /lib64.bak/ld-linux.so.2 --library-path /lib64.bak/ /bin/mv /lib64.bak/ /lib64
/bin/mv: error while loading shared libraries: /bin/mv: wrong ELF class: ELFCLASS64
In that case, a different ld-linux had to be used:
# /lib64.bak/ld-linux-x86-64.so.2 --library-path /lib64.bak/ /bin/mv /lib64.bak/ /lib64
Now the system is salvaged. Thanks ephemient!
/subdir/bin/mv /subdir /
or am I missing something in your explanation?
If you have access to another machine, one solution is to download and compile a Busybox binary. It is a single binary containing most of the common tools you need to restore your system. This might not work if your system is remote, though.
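A rough sketch of that build on the working machine (the version number is illustrative, and the static-link flag can vary by toolchain):
# Fetch and unpack a busybox release (example version)
wget https://busybox.net/downloads/busybox-1.36.1.tar.bz2
tar xjf busybox-1.36.1.tar.bz2
cd busybox-1.36.1
# Default configuration, then link statically so the binary
# needs no shared libraries on the broken host
make defconfig
LDFLAGS="--static" make
# Transfer the resulting ./busybox to the target and run, e.g.:
#   ./busybox mv /subdir/* /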
