Integrate a built-in update function in a shell script in order to receive OTA updates when they are available - linux

I'm stuck on something here that would be awesome to have integrated.
My idea is to create a function that runs at a certain time and checks whether there is a new version of the script. But I don't know how to put the commands together.
I already have a sort of sketch here:
SCRIPT_NAME="$0"
ARGS=("$@")
NEW_FILE="/tmp/blog.sh"
VERSION="1.0"
check_upgrade () {
    # check if there is a new version of this file
    # here, hypothetically, we check whether a file exists on disk.
    # it could be an apt/yum check or whatever...
    [ -f "$NEW_FILE" ] && {
        # install a new version of this file or package
        # again, in this example, this is done by just copying the new file
        echo "Found a new version of me, updating myself..."
        cp "$NEW_FILE" "$SCRIPT_NAME"
        rm -f "$NEW_FILE"
        # note that at this point this file has been overwritten on disk
        # now run this very file, in its new version!
        echo "Running the new version..."
        "$SCRIPT_NAME" "${ARGS[@]}"
        # now exit this old instance
        exit 0
    }
}
I know it's possible to do this, but I haven't found anything useful on the internet.
Any advice will be much appreciated.

Assuming the script is always running, make another script that curls the file and checks it against the original. Something like:
if [ version newer ]; then
    kill old version
    mv "new version" "old version"
    ./new_version
else
    delete tmp file
fi
Run it with cron at intervals you see fit
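For reference, here is a minimal sketch of that updater idea; the URL, install path, and cron schedule below are placeholders rather than anything from the question:
#!/bin/bash
# self_update.sh - cron-driven updater sketch (URL and paths are placeholders)
SCRIPT="/usr/local/bin/blog.sh"
URL="https://example.com/blog.sh"
TMP="$(mktemp /tmp/blog.sh.XXXXXX)"
if curl -fsS "$URL" -o "$TMP"; then
    # only replace the installed script if the downloaded copy actually differs
    if ! cmp -s "$TMP" "$SCRIPT"; then
        echo "Found a new version, installing..."
        cp "$TMP" "$SCRIPT" && chmod +x "$SCRIPT"
    fi
fi
rm -f "$TMP"
A crontab entry such as */30 * * * * /usr/local/bin/self_update.sh would then check for a new version every 30 minutes.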


Concatenating hardcoded directory and user-created text file adds root-level paths when it shouldn't

I have written a script to allow a restricted user access to deleting files on a production webserver. However, to prevent fat-fingering issues leading to accidental filesystem deletion/problems, I have hard coded the base directory in a variable... But the final result is not properly creating the desired path from hard-coded directory + user paths if they have a * wildcard...
I have an Apache 2.4.6 server that caches web content for a user. They have a jailkit user to SSH into this box. As this is production, they are severely limited in their access, however, I would like to give them the ability to clear specific cache directories on their own terms. In order to prevent this from going horribly wrong, I have hard-coded the base cache directory into a script variable, so that no matter what, the script will only run against that path.
So far, this script works well to iterate through their desired cache clear paths... A user creates a .txt file with a /cachePath defined on each line, and the script will iterate through it and delete those paths. It works just fine for /path and /content/path2/ ... But I cannot for the life of me get it working with wildcards (i.e. /path/*, /content/path2/*). There's probably a sexier way to handle this than what I've done so far (currently I have an if/else statement for handling * or /*, not included in the script below), but I am getting all kinds of undesired results trying to handle a user-inputted * or /* on a custom path.
#!/bin/bash
#For this to work, a user must create a paths.txt file in their jailed home directory, based off the /mnt/var/www/html cache location. Each location (or file) must be on a new line, and start with a /
#User-created file with custom cache directories to delete
file="/usr/jail/paths.txt"
#Setting this variable to the contents of the user-created cache file
pathToDelete=$(cat $file)
#Hard-coded cache directory to try to prevent deleting anything important outside cache directory
cacheDir="/mnt/var/www/html"
#Let's delete cache
if [ -f $file ];then
echo "Deleting the following cache directories:"
for paths in $pathToDelete
do
echo $cacheDir"$paths"
#rm command commented out until I get expected echo output
#rm -rfv $cacheDir"$paths"
done
echo "Cache cleared successfully"
mv $file "$file.`date +"%m%d%Y%H%M"`"
else
echo "Nothing to do"
fi
I've tried double quotes, single quotes, no quotes, tried treating "pathToDelete" as an array, and none of it is producing the desired output yet. For example, if paths.txt contains only "*", the result is that it grabs all directories under / and appends them to $cacheDir:
/mnt/var/www/html/testing/backup
/mnt/var/www/html/testing/bin
/mnt/var/www/html/testing/boot
/mnt/var/www/html/testing/data
/mnt/var/www/html/testing/dev
/mnt/var/www/html/testing/etc
/mnt/var/www/html/testing/home
/mnt/var/www/html/testing/lib
/mnt/var/www/html/testing/lib64
...
If paths.txt is "./*" it's adding files from the location of the script itself:
/mnt/var/www/html/testing./cacheClear.sh
/mnt/var/www/html/testing./paths.txt
Ultimately, what I'm looking for is this: if /mnt/var/www/html contains the following directories:
/content/
/content/path/
/content/path/file1.txt
/content/path/file2.txt
/content/path/subdir/
/path2/
/path2/fileA.txt
/path2/fileB.txt
Then a file containing
/content/path/*
should delete /content/path/file1.txt, file2.txt, and /subdir/, and preserve the /content/path/ directory.
If the paths.txt file contains
/content/path
/path2/*
Then the /content/path directory and its subfiles/directories should be deleted, and the files within the /path2/ directory as well... But right now, the script doesn't see the concatenated $cacheDir + $paths as a real/expected location if it contains a * anywhere in it. It works fine without * symbols.
Got a version that works well enough for my purposes:
#!/bin/bash
file="/usr/jail/paths.txt"
pathToDelete=$(cat $file)
cacheDir="/mnt/var/www/html"
if [ -f $file ]; then
if [ "$pathToDelete" == "*" ] || [ "$pathToDelete" == "/*" ]; then
echo "Full Clear"
rm -rfv /mnt/var/www/html/*
else
echo "Deleting the following cache directories:"
for i in ${pathToDelete};
do
echo ${cacheDir}${i}
rm -rfv ${cacheDir}${i}
done
echo "Cache cleared successfully"
fi
fi
The following code is a working solution:
#!/bin/bash -x
file="/usr/jail/paths.txt"
pathToDelete="$(sed 's/^\///' $file)"
cacheDir="/mnt/var/www/html"
if [ -f $file ];then
echo "Deleting the following cache directories:"
for paths in $pathToDelete
do
echo $cacheDir/$paths
rm -rfv $cacheDir/$paths
done
echo "Cache cleared successfully"
else
echo "Nothing to do"
fi
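If it helps, here is another minimal sketch that reads paths.txt line by line and lets the shell expand any * relative to $cacheDir; the empty-line and .. guards are additions not present in the original scripts, and the rm is left commented out just like in the question:
#!/bin/bash
# Sketch only: file names and the cache path mirror the question; adjust before use.
file="/usr/jail/paths.txt"
cacheDir="/mnt/var/www/html"
[ -f "$file" ] || { echo "Nothing to do"; exit 0; }
echo "Deleting the following cache directories:"
while IFS= read -r path; do
    # skip empty lines and anything that tries to climb out of the cache dir
    [ -z "$path" ] && continue
    case "$path" in *..*) echo "Skipping suspicious path: $path"; continue;; esac
    # let the shell expand any * relative to $cacheDir rather than the current directory
    for target in "$cacheDir"/${path#/}; do
        [ -e "$target" ] || continue   # an unmatched glob stays literal; skip it
        echo "$target"
        # rm -rfv "$target"            # uncomment once the echoed paths look right
    done
done < "$file"
echo "Cache clear pass finished"
mv "$file" "$file.$(date +%m%d%Y%H%M)"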

How do I rerun a bash script, skipping over lines which have previously run successfully?

I have a bash script which acts as a wrapper for an analysis pipeline. If the script errors out, I want to be able to rerun it from the point at which the errors occurred simply by re-running the original command. I have set two different traps: one removes the last file being generated on a non-zero exit from my script, the other removes all the temporary files on exit signal 0 and essentially cleans up the file system at the end of the run. I turned on noclobber in the bash environment, which allows my script to skip over lines where files have already been written, but it will only do this if I do not set the non-zero exit trap. As soon as I set that trap, the script exits at the first line where noclobber identifies a file it will not overwrite. Is there a way for me to skip over lines of code that have run successfully before, rather than having to re-run my code from the start? I know I could use conditional statements for each line, but I thought there might be a neater way of doing this.
set -o noclobber
# Function to clean up temporary folders when script exits at the end
rmfile() { rm -r "$1"; }
# Function to remove the file being currently generated
# Function executed if script errors out
rmlast() {
    if [ -n "$CURRENTFILE" ]
    then
        rm -r "$1"
        exit 1
    fi
}
# Trap to remove the currently generated file
trap 'rmlast "$CURRENTFILE"' ERR SIGINT
#Make temporary directory if it has not been created in a previous run
TEMPDIR=$(find . -name "tmp*")
if [ -z "$TEMPDIR" ]
then
TEMPDIR=$(mktemp -d /test/tmpXXX)
fi
# Set CURRENTFILE variable
CURRENTFILE="${TEMPDIR}/Variants.vcf"
# Run the first analysis step into that file
complexanalysis_tool input_file > $CURRENTFILE
# Set CURRENTFILE variable
CURRENTFILE="${TEMPDIR}/Filtered.vcf"
complexanalysis_tool2 input_file2 > $CURRENTFILE
CURRENTFILE="${TEMPDIR}/Filtered_2.vcf"
complexanalysis_tool3 input_file3 > $CURRENTFILE
# Move files to final destination folder
mv -nv $TEMPDIR/*.vcf /test/newdest/
# Trap to remove temporary folders when script finishes running
trap 'rmfile "$TEMPDIR"' 0
Update:
I have been offered answers suggesting the use of the make utility. I want to make use of its inbuilt utility to check if a dependency has been fulfilled.
In my hands, the makefile suggested by VK Kashyap does not seem to skip execution for previously accomplished tasks. For example, I ran the script above and interrupted it with Ctrl-C while it was generating filtered.vcf. When I rerun the script, it runs from the beginning again, i.e. it starts again at variants.vcf. Am I missing something in order to get the makefile to treat those targets as already fulfilled?
Answer to update:
OK, this is a rookie mistake, but since I am not familiar with writing makefiles I will post this explanation of my error. The reason my makefile was not resuming from the exit point was that I had given the targets different names from the output files being generated. So, as VK Kashyap quite correctly answered, if you name the targets, e.g.
variants.vcf
filtered.vcf
filtered2.vcf
the same as the output files being generated then the script will skip previously accomplished tasks.
make utility might be an answer for the thing you want to achieve.
it has inbuilt dependency checking (the stuff which you are trying to achieve with tmp files):
# run the all target once all of the files are available
all: variants.vcf filtered.vcf filtered2.vcf
	mv -nv $(TEMPDIR)/*.vcf /test/newdest/
variants.vcf:
	complexanalysis_tool input_file > variants.vcf
filtered.vcf:
	complexanalysis_tool2 input_file2 > filtered.vcf
filtered2.vcf:
	complexanalysis_tool3 input_file3 > filtered2.vcf
you may use a bash script to invoke this makefile as:
#!/bin/bash
export TEMPDIR=xyz
make -C "$TEMPDIR" all
make will check for already accomplished targets itself and skip the steps that are already done. it will continue from where the previous run stopped.
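for instance (the directory name is only an example), an interrupted run can simply be repeated and the targets whose files already exist are left alone:
export TEMPDIR=/test/tmpXYZ    # wherever the Makefile and the input files live
make -C "$TEMPDIR" all         # interrupted partway through, leaving e.g. variants.vcf behind
make -C "$TEMPDIR" all         # rerun: variants.vcf already exists, so make resumes with filtered.vcf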
you can find more details on the internet about the exact makefile syntax (note that the command lines under each target must be indented with a tab).
there is no built-in way to do that.
however, you could brew something like that by keeping track of the last successful line and building your own goto statement, as described here and in Is there a "goto" statement in bash? (just replace the 'labels' with actual line-numbers).
however, the question is whether this is really a smart idea.
a better way is to only run the commands that are actually needed (i.e. whose outputs are missing), rather than simply all commands that have not yet been executed.
this could be done either by explicit conditionals in your bash-script:
produce_if_missing() {
    # check if the first argument (the output file) already exists
    # if not, run the rest of the arguments and redirect their output into it
    local curfile=$1
    shift
    if [ ! -e "${curfile}" ]; then
        "$@" > "${curfile}"
    fi
}
produce_if_missing Variants.vcf complexanalysis_tool input_file
produce_if_missing Filtered.vcf complexanalysis_tool2 input_file2
or using tools that are made for such things (see VK Kashyap's answer using make, though i prefer using variables in the make-rules to minimize typos):
Variants.vcf: input_file
	complexanalysis_tool $^ > $@
Filtered.vcf: input_file2
	complexanalysis_tool2 $^ > $@

One liner to append a file into another file but only if it hasn't already been added

I have an automated process that has a number of lines like the following pattern:
sudo cat /some/path/to/a/file >> /some/other/file
I'd like to transform that into a one liner that will only append to /some/other/file if /some/path/to/a/file has not already been added.
Edit
It's clear I need some examples here.
example 1: Updating a .bashrc script for a specific login
example 2: Creating a .screenrc for different logins
example 3: Appending to the end of a /etc/ config file
Some other caveats: the text is going to be added as a block (>>). Consequently, it should be relatively straightforward to check whether the entire block has already been added near the end of a file. I am trying to come up with a simple method for determining whether or not the file has already been appended to the original.
Thanks!
Example python script...
def check_for_appended(new_file, original_file):
    """ Checks original_file to see if it has the contents of new_file """
    new_lines = reversed(new_file.split("\n"))
    original_lines = reversed(original_file.split("\n"))
    appended = None
    for new_line, orig_line in zip(new_lines, original_lines):
        if new_line != orig_line:
            appended = False
            break
        else:
            appended = True
    return appended
Maybe this will get you started - this GNU awk script:
gawk -v RS='^$' 'NR==FNR{f1=$0;next} {print (index($0,f1) ? "present" : "absent")}' file1 file2
will tell you if the contents of "file1" are present in "file2". It cannot tell you why, e.g. because you previously concatenated file1 onto the end of file2.
Is that all you need? If not update your question to clarify/explain.
Here's a technique to see if a file contains another file
contains_file_in_file() {
local small=$1
local big=$2
awk -v RS="" '{small=$0; getline; exit !index($0, small)}' "$small" "$big"
}
if ! contains_file_in_file /some/path/to/a/file /some/other/file; then
sudo cat /some/path/to/a/file >> /some/other/file
fi
EDIT: The OP just told me in the comments that the files he wants to concatenate are bash scripts -- this brings us back to the good ole C preprocessor include-guard tactic:
prepend every file with
if [ -z "$__<filename>__" ]; then __<filename>__=1
(of course replacing <filename> with the name of the file) and append at the end
fi
this way, you wrap the contents of each file in a test that only passes the first time the file is sourced.
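As a concrete illustration (the file name and the alias are made up), a fragment called snippet.sh would end up looking like this once guarded:
if [ -z "$__snippet_sh__" ]; then __snippet_sh__=1
    # ...original contents of snippet.sh...
    alias ll='ls -l'
fi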
Does this work for you?
sudo sh -c 'set -o noclobber; date > /tmp/testfile'
noclobber prevents overwriting an existing file.
On second thought it probably doesn't, since you wrote that you want to append something, but this technique might still help.
When the appending all occurs in one script, then use a flag:
if [ -z "${appended_the_file}" ]; then
cat /some/path/to/a/file >> /some/other/file
appended_the_file="Yes I have done it except for permission/right issues"
fi
I would go on to write a function appendOnce() { ... } with the content above. If you really want an ugly one-liner (ugly: a pain for the eye and for colleagues):
test -z "${ugly}" && cat /some/path/to/a/file >> /some/other/file && ugly="dirt"
Combining this with sudo:
test -z "${ugly}" && sudo "cat /some/path/to/a/file >> /some/other/file" && ugly="dirt"
It appears that what you want is a collection of script segments which can be run as a unit. Your approach -- making them into a single file -- is hard to maintain and subject to a variety of race conditions, making its implementation tricky.
A far simpler approach, similar to that used by most modern Linux distributions, is to create a directory of scripts, say ~/.bashrc.d and keep each chunk as an individual file in that directory.
The driver (which replaces the concatenation of all those files) just runs the scripts in the directory one at a time:
if [[ -d ~/.bashrc.d ]]; then
for f in ~/.bashrc.d/*; do
if [[ -f "$f" ]]; then
source "$f"
fi
done
fi
To add a file from a skeleton directory, just make a new symlink.
add_fragment() {
if [[ -f "$FRAGMENT_SKELETON/$1" ]]; then
# The following will silently fail if the symlink already
# exists. If you wanted to report that, you could add || echo...
ln -s "$FRAGMENT_SKELETON/$1" "~/.bashrc.d/$1" 2>>/dev/null
else
echo "Not a valid fragment name: '$1'"
exit 1
fi
}
Of course, it is possible to effectively index the files by contents rather than by name. But in most cases, indexing by name will work better, because it is robust against editing the script fragment. If you used content checks (md5sum, for example), you would run the risk of having an old and a new version of the same fragment, both active, and without an obvious way to remove the old one.
But it should be straight-forward to adapt the above structure to whatever requirements and constraints you might have.
For example, if symlinks are not possible (because the skeleton and the instance do not share a filesystem, for example), then you can copy the files instead. You might want to avoid the copy if the file is already present and has the same content, but that's just for efficiency and it might not be very important if the script fragments are small. Alternatively, you could use rsync to keep the skeleton and the instance(s) in sync with each other; that would be a very reliable and low-maintenance solution.
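For example, the rsync variant could be as small as this one-liner (paths are illustrative; note that --delete also removes local fragments that no longer exist in the skeleton):
rsync -a --delete /path/to/skeleton/ ~/.bashrc.d/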

BASH loop usage

I'm currently learning BASH scripting and I have a question about the IF / WHILE / UNTIL statements. I'm trying to learn which one of those is best for checking the contents of a variable for a piece of text and, if it's not found, acting on that. Can I use the until statement to check a variable? Example usage is like this (I'm using it to check if a system is updated, and if not, it updates):
#!/bin/bash
# Flush the YUM cache since we've added new directories and repo's
flushcache=$(yum clean all);
# Does our system need to be updated?
checkupdate=$(yum update | grep -i "No Packages marked for Update");
# This will update the system
updatesystem=$(yum update -y);
# Flush the YUM cache, to make sure we get the newest package list
echo "$flushcache";
# Using a LOOP (until-logic), lets make sure we're all updated!
if [[ $checkupdate != "No packages marked for update" ]]
then
echo "$updatesystem"
else
echo "They system is already updated";
fi
exit 0;
The script exits normally, so that's good, but I want to know if I'm implementing my new learnz in the most efficient way possible. Also, will this loop around until $checkupdate is a true statement? I'd love to hear some professional input!
Any help is still help! Thanks for indulging me!
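One sketch of a way to structure this, relying on yum check-update's exit status rather than grepping captured output (check-update exits 0 when the system is current, 100 when updates are available, 1 on error):
#!/bin/bash
# Flush the cache first, as in the question
yum clean all
# Loop until yum reports that nothing is left to update
until yum check-update > /dev/null; do
    rc=$?
    if [ "$rc" -ne 100 ]; then
        echo "yum check-update failed (exit $rc)" >&2
        exit 1
    fi
    echo "Updates available, installing..."
    yum update -y
done
echo "The system is up to date"
exit 0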

Can I customize the svnadmin create process?

I have many different repositories setup on my server. I need to have an identical post-commit hook file in every one of those repos. Simple enough for existing, but is there a way to have calls to svnadmin create automatically copy a post-commit stub file to the new hooks directory? Essentially I'm looking for a post-svnadmin-create hook. Thanks!
I think your best bet would be to wrap the call to svnadmin create in a script that creates the hooks after the repo.
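Something along these lines, for instance (the script name and stub path are placeholders):
#!/bin/bash
# create-repo.sh <repo-path>: create a repository, then drop in the standard hooks
set -e
repo="$1"
svnadmin create "$repo"
cp /path/to/hook-stubs/post-commit "$repo/hooks/post-commit"
chmod 755 "$repo/hooks/post-commit"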
Agreed as long as there is not some built-in way, which there seems not to be. I would have expected subversion to sport something like the customizable skeleton directory for new Linux users. Too bad.
Here is my wrapper with comments if anyone can find it useful - should be fairly extendable. If anyone notices any glaring gotchas in it, don't hesitate - I'm neither a bash nor Linux expert but I think I got most of it covered, and it works :)
# -----------------------------------------------------------------------
# A wrapper for svnadmin to allow post operations following repo creation - copying custom
# hook files into repo in this case. This should be run as root.
# capture input args; note that args[0] == $1 (the name of this script is not captured here)
args=("$@");
# redirect args to svnadmin in all cases - this script should not modify the behavior of svnadmin.
# note: the original binary "/binary_path/svnadmin" has been renamed "/binary_path/svnadmin-wrapped" and
# this script was then named "/binary_path/svnadmin" and given identical user:group & permissions as
# the original.
sudo -u svnuser svnadmin-wrapped "${args[@]}";
# capture return code so we can return on exit; svnadmin returns 0 for success
eCode=$?;
# find out if sub-command to svnadmin was "create" and, if so, note the index of the directory arg,
# which is not necessarily going to be in the same position each time (options may be specified
# before the sub-command).
path_idx=0;
found=0;
for i in "${args[@]}"
do
# track index; pre-increment
((path_idx++));
if [ $i == "create" ]
then
# found repo path
((found++));
break;
fi
done
# we now know if the subcommand was create and where the repo path is - finish up as needed.
# note that this block assumes that our hook file stubs are /stub_path/ (owned by root)
# and that there exists a custom log file at /stub_path/cust-log (also owned by root).
d=`date`;
if [ $found != 0 ]
then
# check that the command succeeded
if [ $eCode == 0 ]
then
# check that the directory exists
if [ -d "${args[$path_idx]}/hooks" ]
then
# copy our custom hooks into place
sudo -u svnuser cp "/stub_path/post-commit" "${args[$path_idx]}/hooks/post-commit";
sudo -u svnuser cp "/stub_path/post-revprop-change" "${args[$path_idx]}/hooks/post-revprop-change";
else
# unlikey failure; set custom error code here; log issue
echo "$d svnadmin wrapper error: svnadmin 'create' succeeded but the 'hooks' directory was not found! Params: ${args[#]}" >> "/stub_path/cust-log";
let "eCode=1325";
fi
else
# tried to create but svnadmin failed; log issue
echo "$d svnadmin wrapper error: svnadmin 'create' was called but failed! Params: ${args[#]}" >> "/stub_path/cust-log";
fi
fi
exit $eCode;
-Thanks to all who host and post!
