Is there an easy/clean way to do this in Linux or a Linux-like environment?
Purpose
My aim is to run CPAN with admin permissions only during the installation phase, not at the get/make/test phases.
The CPAN configuration items make_install_make_command and mbuild_install_build_command deal with this. Change them to enable sudo support.
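For example, in the interactive cpan shell you can prepend sudo to the install commands (the exact make/Build commands shown here are illustrative; keep whatever values you currently have and just add sudo in front):
cpan> o conf make_install_make_command 'sudo make'
cpan> o conf mbuild_install_build_command 'sudo ./Build'
cpan> o conf commit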
Assuming you're using CPAN.pm for that, I have a somewhat unorthodox suggestion.
Make a subclass of CPAN.pm which publishes the results/stages of each module it works with to a registry (via a supplied callback API, to keep the registry implementation flexible).
Then you need to simply check that registry.
(or you can try to put that as a patch into CPAN.pm itself)
For the sake of documenting an approach that seems promising but doesn't work: the shell command
find . -mindepth 1 -maxdepth 1 -type d -print | while read -r DIR; do pushd "$DIR"; make -q; mk=$?; make -q install; inst=$?; make -q test; tst=$?; echo "Directory $DIR $mk $inst $tst"; popd; done | fgrep -ve /build
when executed in the cpan build dir lists the exit statuses of make -q for the default, "install" and "test" targets, which indicate whether each make goal still has work to do.
But all of them have nonzero exit statuses, which means they would all do something if you executed them, even when the make has already completed successfully. So you can't tell anything this way.
TL;DR
Here is the default behavior.
find ~/ -name '*.git' 2>/dev/null | dmenu
# Searches everything in the home directory and shows the output
Time taken: about 1-2 seconds.
What I want:
find ~/ -name '*.git' 2>/dev/null | less
# Shows results as soon as it finds them. How do I get similar output in dmenu?
As the number of files on my PC increases, this is going to take longer.
Detailed description:
I am piping input into dmenu from a find command which takes about 1-2 seconds. Is it possible for dmenu to show entries as soon as there is some input in the pipe? After all, that is how piping normally works. It seems like dmenu waits until all the entries are in the pipe so that the user can search through them, which also seems reasonable, but can this still be avoided? I would like dmenu to appear as soon as there is input in the buffer.
I found a workaround here to reduce the time taken compared to find: instead of find, locate can be used. So the command goes like
locate -r '/home/'"$USER"'/.*\.git$'
-r takes a regular expression as input. The argument to -r here filters all git repositories inside /home/$USER. This is a bit faster than using find.
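For example, the locate version can be dropped into the same pipeline as the original find command (assuming the locate database is up to date):
locate -r '/home/'"$USER"'/.*\.git$' | dmenu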
The catch with locate
locate uses a local database for searching, so it will only work as expected once the local database has been built/updated.
To update the database, use sudo updatedb. Whenever you add/move/delete a file (or a directory in this case), remember to update the database so that locate gives proper results.
Tip
To avoid entering a password every time for updatedb (and other frequently used commands), add them to the sudoers file by executing sudo visudo and adding an entry with the path to the command's binary.
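A hypothetical NOPASSWD entry could look like this (the username and binary path are placeholders; check the real path with which updatedb):
yourusername ALL=(ALL) NOPASSWD: /usr/bin/updatedb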
Update
I recently realized: why use locate when I can simply maintain my own database and cat all the entries into dmenu? With this I was able to achieve what I needed.
# Make a temp directory
mkdir -p "$HOME/.tmp"
# Search for all git directories and store them in ~/.tmp/gitfiles.
[ -e "$HOME/.tmp/gitfiles" ] || find "$HOME/" -regex '.*/\.git$' -type d 2>/dev/null > "$HOME/.tmp/gitfiles"
# cat this file into dmenu
cat "$HOME/.tmp/gitfiles" | dmenu
This gives fuzzy finding of directories with dmenu. It is better than using locate: a local database has to be kept up to date in both cases, but since locate does the filtering of git paths at runtime, it is a bit slower than this approach.
I can simply create an alias to update this database, analogous to sudo updatedb in the case of locate:
alias gitdbupdate="find $HOME/ -regex '.*/\.git$' -type d 2>/dev/null > $HOME/.tmp/gitfiles"
Note that I am not using /tmp/ as it won't persist across power cycles, so instead I create my own $HOME/.tmp/ directory.
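As a usage sketch (the function name is just an example), the dmenu selection can then be used directly, e.g. to jump into the chosen repository. Since the cached entries end in /.git, dirname of the selection gives the working tree; define this as a shell function (e.g. in ~/.bashrc) so the cd affects the interactive shell:
# Pick a .git path from the cached list and cd into its working tree
gitjump() {
    repo="$(dmenu < "$HOME/.tmp/gitfiles")" || return
    cd "$(dirname "$repo")"
}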
Suppose the command bam is on my system. Let's refer to it as bam1.
Suppose that I have another version of the bam binary on my system (bam2, although it may be named something quite different).
When I run some-script, I want all calls to bam in that script (and in all child processes) to use bam2. They will otherwise use bam1, as bam1 is in the $PATH by default for that environment.
Assume that I have the full path to bam2 available.
Assume that bam may run child processes that also call bam.
Assume that if anything goes wrong, bam must revert back to bam1.
Assume unix-ish systems for now, but Windows support welcome.
$ alias bam="bam2" && bam # <== doesn't quite work. see ls test below
$ alias bam="ls" && ls # <== "-bash: bam: command not found"
I need to override an application binary pointer with another one temporarily. The usage intent is for a Node.js application, so something I can do there that works well cross-platform in a Node context would be great.
I considered making a tmp symlink and prepending its folder to the system PATH, but I have a feeling there may be a simpler way.
Any tips?
You can create a temporary directory
tmpdir=`mktemp -d`
mark it for removal on exit
trap 'rm -rf "$tmpdir"' EXIT
add it to your PATH:
PATH="$tmpdir:$PATH" #<= This is the key part
and then place a link named after your override inside of $tmpdir
ln -s "$(which bam2)" "$tmpdir"/bam
Any processes you spawn from here will inherit the PATH variable (it's an environment, i.e. exported, variable), and when they search for an executable, your temporary directory is what they'll search first.
If you're concerned about security, you'll want to make it (a possibly permanent) read-only directory instead of a temporary, writable one.
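A quick sanity check before launching the real script (some-script and bam are the names from the question):
command -v bam   # should now print the symlink inside $tmpdir
some-script      # this process and its children will resolve bam to bam2 first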
I'm updating a shell script that uses the find command that follows symlinks:
find -L somedir ...
However, on some older platforms, the -L isn't supported and the command must use the older -follow syntax:
find somedir -follow ...
The "-follow" flag is deprecated on newer systems, so my strategy is to test if the command works with the newer -L flag, and if not fall back on the -follow flag.
The script currently runs on RedHawk 5.4.11, but the find incompatibility was discovered on an older Linux version. I was directed to make this work on all Unix/Linux platforms.
So, while creating a dummy find command to test, I'm creating an empty temp directory in /tmp for the find command to return quickly. I then found out that mktemp -d is not supported on the older systems, so I was going to create one the old-fashioned way.
It then dawned on me, "why not just try /dev/null as a temp dir instead of creating one?" So I tried the command:
TEMPDIR=/dev/null
FIND_L_SUPPORTED=`find -L $TEMPDIR &> /dev/null; echo $?`
and it seems to work, but I'm not sure why (since /dev/null is not a directory), or if it's reliable on all platforms.
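In case it is useful, here is the same probe written without the backticks and the bash-only &> redirection, which should be friendlier to old /bin/sh implementations (I have not verified it on the older platforms in question; note the variable holds yes/no here rather than an exit status):
if find -L /dev/null >/dev/null 2>&1; then
    FIND_L_SUPPORTED=yes
else
    FIND_L_SUPPORTED=no
fi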
Two questions:
Is using find against /dev/null reliable on all platforms?
Any other solutions to my original find problem, where some platforms need -L but others need -follow?
Trying to support unknown platforms is not productive. Just because you get one non-standard aspect right (e.g., replacing -L with -follow) doesn't mean there aren't other non-standard behaviors you don't know about that could cause your script to break.
Instead, write your script to support the standard by default, but provide a flag that users can explicitly set to support known older platforms. For example,
if [ "$NON_STANDARD_FIND" = "no-L" ]; then
find somedir -follow ...
else
find -L somedir ...
fi
Then run the script as
my_script ...
or
NON_STANDARD_FIND=no-L my_script ...
as necessary.
In the end, document the platforms you know you can support and how to run your script correctly on those platforms. Users of other platforms run your script at their own peril.
I am trying to work out the proper process of installing with Wget, in this example I'll use Nginx.
# Download nginx to /tmp/ directory
wget http://nginx.org/download/nginx-1.3.6.tar.gz -r -P /tmp
# Extract nginx into /tmp/nginx directory
tar xzf nginx-1.3.6.tar.gz -C /tmp/nginx
# Configure it to be installed in opt
./configure --prefix=/opt/nginx
# Make it
make
# Make install
make install
# Clean up temp folder
rm -r /tmp/*
Is this the idealised process? Is there anything I can improve on?
First of all, you definitely seem to be reinventing the wheel: if the problem that you want to solve is automated packaging / building of software on target systems, then there are myriad solutions available, in the form of various package management systems, port builders, etc.
As for your shell script, there are a couple of things you should consider fixing:
Stuff like http://nginx.org/download/nginx-1.3.6.tar.gz or nginx-1.3.6.tar.gz are constants. Try to extract all constants into separate variables and use them to make maintaining this script a little bit easier, for example:
NAME=nginx
VERSION=1.3.6
FILENAME=$NAME-$VERSION.tar.gz
URL=http://nginx.org/download/$FILENAME
TMP_DIR=/tmp
INSTALL_PREFIX=/opt
wget "$URL" -r -P "$TMP_DIR"
tar xzf "$FILENAME" -C "$TMP_DIR/nginx"
You generally can't be 100% sure that wget exists on the target deployment system. If you want to maximize portability, you can try to detect popular networking utilities, such as wget, curl, fetch or even lynx, links, w3m, etc.
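A minimal sketch of such a detection could look like this (the tool list, flags, and the download helper name are illustrative only):
# Define a download helper using whichever tool is available
if command -v wget >/dev/null 2>&1; then
    download() { wget -O "$2" "$1"; }
elif command -v curl >/dev/null 2>&1; then
    download() { curl -L -o "$2" "$1"; }
elif command -v fetch >/dev/null 2>&1; then
    download() { fetch -o "$2" "$1"; }
else
    echo "no supported download tool found" >&2
    exit 1
fi
download "$URL" "$TMP_DIR/$FILENAME"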
Proper practices on using a temporary directory is a long separate question, but, generally, you'll need to adhere to 3 things:
One should somehow find out the temporary directory location. Generally, it's wrong to assume that /tmp is always a temporary directory, as it can be unmounted, it can be non-writable, it can be a tmpfs filesystem that is full, etc. Unfortunately, there's no portable and universal way to detect what the temporary directory is. The very least one should do is check the contents of $TMPDIR, to make it possible for a user to point the script at the proper temporary dir. Another possibly bright idea is a set of heuristic checks to make sure that it's possible to write to the desired location (checking at least $TMPDIR, $HOME/tmp, /tmp, /var/tmp), that a decent amount of space is available, etc.; see the sketch after the cleanup example below.
One should create the temporary directory in a safe manner. On Linux systems, mktemp --tmpdir -d some-unique-identifier.XXXXXXXXX is usually enough. On BSD-based systems, much more manual work is needed, as the default mktemp implementation is not particularly race-resistant.
One should clean up the temporary directory after use. Cleaning should be done not only on a successful exit, but also in case of failure. This can be handled by using a signal trap and a special cleanup callback, for example:
# Cleanup: remove temporary files
cleanup()
{
    local rc=$?
    trap - EXIT
    # Generally, it's best to remove only the files that we
    # know we have created ourselves. Removal using recursive
    # rm is not really safe.
    rm -f "$LOCAL_TMP/some-file-we-had-created"
    [ -d "$LOCAL_TMP" ] && rmdir "$LOCAL_TMP"
    exit $rc
}
trap cleanup HUP PIPE INT QUIT TERM EXIT
# Create a local temporary directory
LOCAL_TMP=$(mktemp --tmpdir -d some-unique-identifier.XXXXXXXXX)
# Use $LOCAL_TMP here
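As for locating a usable temporary directory (the first point above), a rough heuristic could look like the following sketch; the TMP_ROOT name and the candidate list are only illustrative:
# Pick the first existing, writable candidate for a temporary directory
TMP_ROOT=
for candidate in "$TMPDIR" "$HOME/tmp" /tmp /var/tmp; do
    if [ -n "$candidate" ] && [ -d "$candidate" ] && [ -w "$candidate" ]; then
        TMP_ROOT=$candidate
        break
    fi
done
[ -n "$TMP_ROOT" ] || { echo "no writable temporary directory found" >&2; exit 1; }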
If you really want to use recursive rm, then using * to glob files is a bad practice. If your directory had more than several thousand files, * would expand to too many arguments and overflow the shell's command-line buffer. I might even say that using any globbing without a good excuse is generally a bad practice. The rm line above should be rewritten at least as:
rm -f /tmp/nginx-1.3.6.tar.gz
rm -rf /tmp/nginx
Removing all subdirectories in /tmp (as in /tmp/*) is a very bad practice on a multi-user system, as you'll either get permission errors (you won't be able to remove other users' files) or you'll potentially heavily disrupt other people's work by removing actively used temporary files.
Some minor polishing:
POSIX-standard tar uses normal short UNIX options nowadays, i.e. tar -xvz, not tar xvz.
Modern GNU tar (and, AFAIR, BSD tar too) doesn't really need any of the "uncompression" flags, such as -z, -j, -y, etc. It detects the archive/compression format itself, and tar -xf is sufficient to extract any of .tar / .tar.gz / .tar.bz2 tarballs.
That's the basic idea. You'll have to run the make install command as root (or the whole script if you want). Your rm -r /tmp/* should be rm -r /tmp/nginx because other commands might have stuff they're working on in the tmp directory.
It should also be noted that the chances that building from source like that will work with no modifications for a decently sized project are fairly low. Generally you will find you need to specify a path to a library explicitly, or some code doesn't quite compile correctly on your distribution.
What I'd like to do is to include settings from a file into my current interactive bash shell like this:
$ . /path/to/some/dir/.settings
The problem is that the .settings script also needs to use the "." operator to include other files like this:
. .extra_settings
How do I reference the relative path for .extra_settings in the .settings file? These two files are always stored in the same directory, but the path to this directory will be different depending on where these files were installed.
Whoever invokes the "." operator always knows the /path/to/some/dir/ path, as shown above. How can the .settings file know the directory where it is installed? I would rather not have an install process that records the name of the installed directory.
I believe $(dirname "$BASH_SOURCE") will do what you want, as long as the file you are sourcing is not a symlink.
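In the situation from the question, the .settings file could then pull in its sibling like this (a minimal sketch):
. "$(dirname "$BASH_SOURCE")/.extra_settings"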
If the file you are sourcing may be a symlink, you can do something like the following to get the true directory:
PRG="$BASH_SOURCE"
progname=`basename "$BASH_SOURCE"`
while [ -h "$PRG" ] ; do
ls=`ls -ld "$PRG"`
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
PRG="$link"
else
PRG=`dirname "$PRG"`"/$link"
fi
done
dir=$(dirname "$PRG")
Here is what might be an elegant solution:
script_path="${BASH_SOURCE[0]}"
script_dir="$(cd "$(dirname "${script_path}")" && pwd)"
This will not, however, work when sourcing links. In that case, one might do
script_path="$(readlink -f "$(readlink "${BASH_SOURCE[0]}")")"
script_dir="$(cd "$(dirname "${script_path}")" && pwd)"
Things to note:
arrays like ${array[x]} are not POSIX compliant - but then, the BASH_SOURCE array is only available in Bash, anyway
on macOS, the native BSD readlink does not support -f, so you might have to install GNU readlink using e.g. Homebrew (brew install coreutils) and replace readlink with greadlink
depending on your use case, you might want to use the -e or -m switches instead of -f plus possibly -n; see readlink man page for details
A different take on the problem - if you're using "." in order to set environment variables, another standard way to do this is to have your script echo variable setting commands, e.g.:
# settings.sh
echo "export CLASSPATH=${CLASSPATH}:/foo/bar"
then eval the output:
eval "$(/path/to/settings.sh)"
That's how packages like modules work. This way also makes it easy to support shells derived from sh (X=...; export X) and csh (setenv X ...)
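A sketch of what that dual-shell support might look like inside settings.sh, with the shell flavor passed as an argument (the argument convention is just an assumption of this sketch):
#!/bin/sh
# settings.sh: emit variable-setting commands for the calling shell
if [ "$1" = "csh" ]; then
    echo "setenv CLASSPATH ${CLASSPATH}:/foo/bar"
else
    echo "export CLASSPATH=${CLASSPATH}:/foo/bar"
fi
A Bourne-style shell would then run eval "$(/path/to/settings.sh sh)", while a csh user would run eval `/path/to/settings.sh csh`.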
We found $(dirname "$(realpath "$0")") to be the most reliable with both sh and bash. As teammates used them interchangeably, we ran into problems with $BASH_SOURCE, which is not supported by sh.
Instead, we now rely on dirname, which can also be stacked to get parent or grandparent folders.
The following example returns the parent dir of the folder that contains the .sh file:
parent_path=$(dirname "$(dirname "$(realpath "$0")")")
echo $parent_path
I tried messing with variants of $(dirname $0) but it fails when the .settings file is included with ".". If I were executing the .settings file instead of including it, this solution would work. Instead, the $(dirname $0) always returns ".", meaning current directory. This fails when doing something like this:
$ cd /
$ . /some/path/.settings
This sort of works. It works in the sense that you can use the $(dirname $0) syntax within the .settings file to determine its home since you are executing this script in a new shell. However, it adds an extra layer of convolution where you need to change lines such as:
export MYDATE=$(date)
to
echo "export MYDATE=\$(date)"
Maybe this is the only way?