Homebrew: Pulling updates from a repository with BASH and GPG - linux

I have a fleet of Linux computers ("nodes" from here on out) that are what I'll call ephemeral members of a network. The nodes are vehicle-mounted and frequently move into and out of wifi coverage.
Of course, it's often beneficial for me to push an update of a single script, program, or file to all nodes. What I came up with is this:
Generate a key pair to be shared by all nodes.
On my workstation, which holds the public key, encrypt the new file version with a header that contains the installation path.
Place the encrypted update in a node-accessible network "staging" folder.
When a node finds itself with a good connection, it checks the staging folder.
If there are new files, they're:
copied to the node
decrypted
checked for integrity ("Does the file header look good?")
moved to the location prescribed by the header
Here's a simple version of my code. Is this a bad idea? Is there a more elegant way to deal with updating unattended nodes on a super flaky connection?
#!/bin/bash
#A method for autonomously retrieving distributed updates
#The latest and greatest files are here:
stageDir="/remoteDirectory/stage"
#Files are initially moved to a quarantine area
qDir="/localDirectory/quarantine"
#If all went well, put a copy of the encrypted file here:
aDir="/localDirectory/pulled"
#generic extension for encrypted files "Secure Up Date"
ext="sud"
for file in "$stageDir"/*."$ext"; do #For each "sud" file...
    fname=$(basename "$file")
    if [ ! -f "$aDir/$fname" ]; then #If this file has not already been worked on...
        cp "$file" "$qDir/$fname" #Copy it to the quarantine directory
    else
        echo "$fname has already been pulled" #Move along
    fi
done
if [ "$(ls -A "$qDir")" ]; then #If there's something to do (i.e. files in the directory)
    for file in "$qDir"/*."$ext"; do
        fname=$(basename "$file")
        qPath="$qDir/$fname"
        untrusted="$qPath.untrusted"
        #Decrypt file; --yes overwrites an existing output file, and --batch is
        #required for --passphrase to be honored by GnuPG 2.x
        gpg --output "$untrusted" --yes --batch --passphrase "supersecretpassphrase" --decrypt "$qPath"
        headline=$(head -n 1 "$untrusted") #Get the header (which is the first line of the file)
        #Check to see if this is a valid file
        if [[ $headline == "#LOOKSGOOD:"* ]]; then #All headers must start with "#LOOKSGOOD:" or something
            #Get install path (the stuff after the first colon)
            installPath=${headline#*:}
            tail -n +2 "$untrusted" > "$installPath" #Send everything but the header line to the install path
            #Clean up our working files
            rm "$untrusted"
            mv "$qPath" "$aDir/$fname"
            #Report what we did
            echo "$headline"
        else
            #Trash the file if it's not a legit update
            echo "$fname is not a legit update...trashing it"
            rm "$qDir/$fname"*
        fi
    done
fi
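For reference, the workstation-side packaging step this scheme implies could look roughly like the sketch below. The key ID, file names, and paths are placeholders (the original post doesn't show this side), and it assumes the fleet's shared public key has already been imported on the workstation:
#!/bin/bash
#Hypothetical publisher-side sketch: prepend the header, then encrypt into staging.
#Usage: ./package.sh <file> <install path on the nodes>
stageDir="/remoteDirectory/stage"
recipient="node-fleet-key" #placeholder ID of the shared public key

src="$1"
installPath="$2"

plain=$(mktemp)
printf '#LOOKSGOOD:%s\n' "$installPath" > "$plain" #header line the nodes check for
cat "$src" >> "$plain"

#Encrypt to the fleet's public key; nodes decrypt with the shared private key
gpg --output "$stageDir/$(basename "$src").sud" --yes \
    --recipient "$recipient" --encrypt "$plain"
rm "$plain"
Note that prepending a text header means this only works cleanly for text files; a binary payload would want base64 encoding or a separate metadata file.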

Related

Bash script deletes files older than N days using lftp - but does not remove recursive directories and files

I have finally got this script working: it logs on to my remote FTP server and removes files in a folder that are older than N days. I cannot, however, get it to remove directories recursively. What can be changed or added to make this script remove files in subfolders, as well as subfolders that are themselves older than N days? I have tried adding the -r flag in a few places, but it did not work. I think it needs to be added where the script builds the list of files to be removed. Any help would be greatly appreciated. Thank you in advance!
#!/bin/bash
# Simple script to delete files older than specific number of days from FTP.
# This script uses 'lftp', and 'date' with the '-d' option, which is not POSIX compatible.
# FTP credentials and path
FTP_HOST="xxxxxxxxxxxx"
FTP_USER="xxxxxx"
FTP_PASS="xxxxxxxxxxxxxxxxx"
FTP_PATH="/directadmin"
# Full path to lftp executable
LFTP=`which lftp`
# Get the number of days to keep from the first argument, or hardcode it; uncomment one to use
STORE_DAYS=${1:? "Usage ${0##*/} X, where X - count of daily archives to store"}
# STORE_DAYS=7
function removeOlderThanDays() {
# Make some temp files to store intermediate data
LIST=`mktemp`
DELLIST=`mktemp`
# Connect to ftp get file list and store it into temp file
${LFTP} << EOF
open ${FTP_USER}:${FTP_PASS}@${FTP_HOST}
cd ${FTP_PATH}
cache flush
cls -q -1 --date --time-style="+%Y%m%d" > ${LIST}
quit
EOF
# Print obtained list, uncomment for debug
# echo "File list"
# cat ${LIST}
# Delete list header, uncomment for debug
# echo "Delete list"
# Let's find date to compare
STORE_DATE=$(date -d "now - ${STORE_DAYS} days" '+%Y%m%d')
while read LINE; do
if [[ ${STORE_DATE} -ge ${LINE:0:8} && "${LINE}" != *\/ ]]; then
echo "rm -f \"${LINE:9}\"" >> ${DELLIST}
# Print files which are subject to deletion, uncomment for debug
#echo "${LINE:9}"
fi
done < ${LIST}
# More debug strings
# echo "Delete list complete"
# Print notify if list is empty and exit.
if [ ! -f ${DELLIST} ] || [ -z "$(cat ${DELLIST})" ]; then
echo "Delete list doesn't exist or empty, nothing to delete. Exiting"
exit 0;
fi
# Connect to ftp and delete files by previously formed list
${LFTP} << EOF
open ${FTP_USER}:${FTP_PASS}@${FTP_HOST}
cd ${FTP_PATH}
$(cat ${DELLIST})
quit
EOF
}
removeOlderThanDays
I have addressed this sort of thing a few times.
How to connect to a ftp server via bash script?
Provide commands automatically to ftp in bash script
Bash FTP upload - events to log
Better to use scp and/or ssh when you can, especially if you can set up passwordless access with public keys. Otherwise, I recommend a more robust language like Python or Perl that lets you check the return codes of these steps individually and respond accordingly.
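As a sketch of that approach: assuming key-based SSH access and GNU find on the remote host (the host, user, and path below are placeholders), the whole job, including the recursion the question asks about, collapses to one command:
#!/bin/bash
# Placeholder host and path; assumes passwordless SSH and GNU find remotely.
REMOTE="user@ftp.example.com"
REMOTE_PATH="/directadmin"
STORE_DAYS=${1:?"Usage: ${0##*/} DAYS"}
# Delete files older than DAYS, then prune any directories left empty
ssh "$REMOTE" "find '$REMOTE_PATH' -type f -mtime +$STORE_DAYS -delete && \
               find '$REMOTE_PATH' -mindepth 1 -type d -empty -delete"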

shell script to move data from other server's (or nodes) sub-dirs to current server(node) matching sub-dir

I have .parquet files for multiple dates (from 20190927 to 20200131) inside a /data/pg/export/schema.table_YYYYMMDD<random alphanumeric string> directory structure on seven different nodes. When the process ran, it created a sub-directory in schema.table_YYYYMMDD<random alphanumeric string> format (such as schema.table_20190927) inside the /data/pg/export path for each date. However, it appended some random letters to the sub-directory name on the other hosts. So, for instance, I have folders and files in the following format:
on node#1 (10.245.122.100)
/data/pg/export/schema.table_20190927 contains:
----1.parquet
----2.parquet
----3.parquet
on node#2 (10.245.122.101)
/data/pg/export/schema.table_20190927S8rW4dQ2 contains:
----4.parquet
----5.parquet
----6.parquet
on node#3 (10.245.122.102)
/data/pg/export/schema.table_20190927P5SJ9aX4 contains:
----7.parquet
----8.parquet
----9.parquet
and so on for other nodes.
How can I bring the files from /data/pg/export/schema.table_20190927S8rW4dQ2 on node#2 (10.245.122.101) and /data/pg/export/schema.table_20190927P5SJ9aX4 on node#3 (10.245.122.102) (and similarly for the other hosts) to /data/pg/export/schema.table_20190927 on node#1 (10.245.122.100), so the final output looks like:
***on node#1 (10.245.122.100)***
/data/pg/export/schema.table_20190927 will have:
----1.parquet
----2.parquet
----3.parquet
----4.parquet
----5.parquet
----6.parquet
----7.parquet
----8.parquet
----9.parquet
Welcome to SO. Since it is your first question (well the first I see), and I liked the challenge, here is a script that will do that. For your next question, you must provide your own code with a specific problem you are having, and not expect a complete script as an answer. See my comment for stuff to read on using SO.
The bash knowledge required to make this work is:
while loop
date calculation
variable value incrementation (so basic math)
I made some assumptions:
you have a single user on all nodes which can be used to do scp from node1
that user is hopefully set up to use ssh keys to log in, otherwise you will type your password a lot of times!
you have connected at least once to each node, so they are listed in your known_hosts file
on each node, there is one and only one directory with a specific date in its name
all files in each directory are copied. You can modify the scp command to get only the .parquet files if you want.
Basic ideas in the code
loop on each node, so from 2 to 7
loop on dates, so from 20190927 to 20200131
copy files for each node, each date within the loops
this was tested on Linux Mint (== Ubuntu) so the date command is the gnu version, which allows for date calculation the way I did it.
Before use, modify the value of the user variable with your user name.
DISCLAIMER: I did not have multiple systems to test the scp command, so this command was added by memory.
The code:
#!/bin/bash
#
# This script runs on node1
# The node1 IP is 10.245.122.100
#
# This script assumes that you want to copy all files under
# /data/pg/export/schema.table_YYYYMMDD<random>
#
###############################################################################
# node1 variables
targetdirprefix="/data/pg/export/schema.table_"
user="YOURUSER"
# Other nodes variables
total_number_nodes=7 # includes node1
ip_prefix=10.245.122
ip_lastdigit_start=99 # node1 == 100, so start at 99
# loop on nodes ---------------------------------------------------------------
# start at node 2, node1 is the target node
nodecount=2
# Stop at maxnode+1, here the last node is 7
(( whileexit = total_number_nodes + 1 ))
while [[ "$nodecount" -lt "$whileexit" ]]
do
    # build the current node IP
    (( currentnode_lastdigit = ip_lastdigit_start + nodecount ))
    currentnode_ip="${ip_prefix}.${currentnode_lastdigit}"
    # DEBUG
    echo "nodecount=$nodecount, ip=$currentnode_ip"
    # loop on dates ---------------------------------------
    firstdate="20190927"
    lastdate="20200131"
    loopdate="$firstdate"
    while [[ "$loopdate" -le "$lastdate" ]]
    do
        # DEBUG
        echo "loopdate=$loopdate"
        # go into the target directory (create it if required)
        targetdir="${targetdirprefix}${loopdate}"
        if [[ -d "$targetdir" ]]
        then
            cd "$targetdir"
        else
            mkdir -p "$targetdir"
            if [[ "$?" -ne 0 ]]
            then
                echo "ERROR: could not create directory $targetdir, exiting."
                exit 1
            else
                cd "$targetdir"
            fi
        fi
        # copy the date's files into the target dir (i.e. locally, since we did a cd before)
        # the source directory is the same as the targetdir, with extra chars at the end
        # this script assumes there is only 1 directory with that particular date!
        # (quoted so the glob expands on the remote side, not locally)
        scp "${user}@${currentnode_ip}:${targetdir}*/*" .
        if [[ "$?" -ne 0 ]]
        then
            echo "WARNING: copy failed from node $nodecount, date $loopdate."
            echo "         The script will continue for other dates and nodes..."
        fi
        loopdate=$(date --date "$loopdate +1 days" +%Y%m%d)
    done
    (( nodecount += 1 ))
done
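If you only want the .parquet files (per the last assumption above), the scp line could be swapped for an rsync filter. Like the scp command, this is a sketch from memory, untested; it relies on the remote shell expanding the glob and on a single matching directory per date:
# Hypothetical drop-in replacement for the scp line inside the date loop:
# fetch only *.parquet files from whichever suffixed directory matches remotely
rsync -av --include='*.parquet' --exclude='*' \
    "${user}@${currentnode_ip}:${targetdir}*/" .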

Concatenating hardcoded directory and user-created text file adds root-level paths when it shouldn't

I have written a script to allow a restricted user to delete files on a production webserver. However, to prevent fat-fingering issues leading to accidental filesystem deletion, I have hard-coded the base directory in a variable... But the final result is not properly building the desired path from the hard-coded directory + user paths if they contain a * wildcard...
I have an Apache 2.4.6 server that caches web content for a user. They have a jailkit user to SSH into this box. As this is production, they are severely limited in their access; however, I would like to give them the ability to clear specific cache directories on their own terms. In order to prevent this from going horribly wrong, I have hard-coded the base cache directory into a script variable, so that no matter what, the script will only run against that path.
So far, this script works well to iterate through their desired cache-clear paths... A user creates a .txt file with a /cachePath defined on each line, and the script will iterate through it and delete those paths. It works just fine for /path and /content/path2/ ... But I cannot for the life of me get it working with wildcards (i.e. /path/*, /content/path2/*). There's probably a sexier way to handle this than what I've done so far (I currently have an if/else statement for handling * or /*, not included in the script below), but I am getting all kinds of undesired results trying to handle a user-inputted * or /* on a custom path.
#!/bin/bash
#For this to work, a user must create a paths.txt file in their jailed home directory, based off the /mnt/var/www/html cache location. Each location (or file) must be on a new line, and start with a /
#User-created file with custom cache directories to delete
file="/usr/jail/paths.txt"
#Setting this variable to the contents of the user-created cache file
pathToDelete=$(cat $file)
#Hard-coded cache directory to try to prevent deleting anything important outside cache directory
cacheDir="/mnt/var/www/html"
#Let's delete cache
if [ -f $file ];then
echo "Deleting the following cache directories:"
for paths in $pathToDelete
do
echo $cacheDir"$paths"
#rm command commented out until I get the expected echo output
#rm -rfv $cacheDir"$paths"
done
echo "Cache cleared successfully"
mv $file "$file.`date +"%m%d%Y%H%M"`"
else
echo "Nothing to do"
fi
I've tried double quotes, single quotes, no quotes, and tried treating "pathToDelete" as an array, but none of it is producing the desired output yet. For example, if paths.txt contains only "*", the result grabs all directories under / and appends them to "cacheDir":
/mnt/var/www/html/testing/backup
/mnt/var/www/html/testing/bin
/mnt/var/www/html/testing/boot
/mnt/var/www/html/testing/data
/mnt/var/www/html/testing/dev
/mnt/var/www/html/testing/etc
/mnt/var/www/html/testing/home
/mnt/var/www/html/testing/lib
/mnt/var/www/html/testing/lib64
...
If paths.txt is "./*" it's adding files from the location of the script itself:
/mnt/var/www/html/testing./cacheClear.sh
/mnt/var/www/html/testing./paths.txt
Ultimately, what I'm looking for is this: if /mnt/var/www/html contains the following directories:
/content/
/content/path/
/content/path/file1.txt
/content/path/file2.txt
/content/path/subdir/
/path2/
/path2/fileA.txt
/path2/fileB.txt
Then a file containing
/content/path/*
should delete /content/path/file1.txt, file2.txt, and /subdir/, and preserve the /content/path/ directory.
If the paths.txt file contains
/content/path
/path2/*
Then the /content/path directory and its subfiles/directories should be deleted, and the files within the /path2/ directory as well... But right now, the script doesn't see the concatenated $cacheDir + $paths as a real/expected location if it contains a * anywhere in it. It works fine without * symbols.
Got a version that works well enough for my purposes:
#!/bin/bash
file="/usr/jail/paths.txt"
pathToDelete=$(cat $file)
cacheDir="/mnt/var/www/html"
if [ -f $file ]; then
    if [ "$pathToDelete" == "*" ] || [ "$pathToDelete" == "/*" ]; then
        echo "Full Clear"
        rm -rfv /mnt/var/www/html/*
    else
        echo "Deleting the following cache directories:"
        for i in ${pathToDelete};
        do
            echo ${cacheDir}${i}
            rm -rfv ${cacheDir}${i}
        done
        echo "Cache cleared successfully"
    fi
fi
The following code is a working solution:
#!/bin/bash -x
file="/usr/jail/paths.txt"
pathToDelete="$(sed 's/^\///' $file)"
cacheDir="/mnt/var/www/html"
if [ -f $file ]; then
    echo "Deleting the following cache directories:"
    for paths in $pathToDelete #unquoted on purpose: each line becomes its own iteration
    do
        echo $cacheDir/$paths
        rm -rfv $cacheDir/$paths #unquoted so a trailing * globs at deletion time
    done
    echo "Cache cleared successfully"
else
    echo "Nothing to do"
fi
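A slightly more defensive variant of the same idea (a sketch, not from the original answers; it assumes one path per line and adds a basic guard against path traversal):
#!/bin/bash
file="/usr/jail/paths.txt"
cacheDir="/mnt/var/www/html"
[ -f "$file" ] || { echo "Nothing to do"; exit 0; }
while IFS= read -r p; do
    [ -z "$p" ] && continue
    case $p in
        *..*) echo "Skipping suspicious path: $p"; continue ;; #block traversal
        /*)   ;;                                               #must start with /
        *)    echo "Skipping relative path: $p"; continue ;;
    esac
    #Unquoted expansion below is deliberate so a trailing * globs at rm time
    rm -rfv $cacheDir$p
done < "$file"
mv "$file" "$file.$(date +%m%d%Y%H%M)"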

variable part in a variable path in ksh script

I'm sorry if something similar has already been answered in the past, but I wasn't able to find it. I'm writing a script to perform some housekeeping tasks, and I'm stuck on the step below. To give you some background, it's a script which reads a config file so it can be used as a standard protocol in different environments.
The problem is with this code:
# Check if destination folder exist, if not create it.
if [ ! -d ${V_DestFolder} ]; then # Create folder
F_Log "${IF_ROOT} mkdir -p ${V_DestFolder}"
${IF_ROOT} mkdir -p ${V_DestFolder}
continue
fi
# If movement, check write permissions of destination folder.
V_CheckIfMovement=`echo $1|grep #`
if [ $? -eq 0 ]; then # File will be moved.
V_DestFolder=`echo $1|awk -F"#" {'print $2'}`
if [ ! -w ${V_DestFolder} ]; then # Destination folder IS NOT writable.
F_Log "Destination folder ${V_DestFolder} does not have WRITE permissions. Skipping."
continue
fi
fi
Basically, in this step I need to move some files from one path to another.
The script checks if the folder (its name read from the config file) exists, creating it if not; after that, it checks whether the destination folder has write permissions and moves the files.
Here you can see the part of config file which is read in this step:
app/tom*/instances/*/logs|+9|/.*\.gz)$/|move#/app/archive/tom*/logs
I should note that the files are moved properly when I change the tom* of the destination to anything literal, such as "test" or any word without a * (as it should).
What I need to know is how I can make the tom* in the destination a variable containing the same tom* name matched in the source, which I use as the name of the cell.
This is because I use different Tomcat cells with the reference tom7 or tom8 plus 3 letters to describe each one, for example tom7dog or tom7cat.
You should give the shell a chance to evaluate the glob: leave ${V_DestFolder} unquoted so the tom* pattern expands against the filesystem.
V_DestFolder=`echo $1|awk -F"#" {'print $2'}`
for p in ${V_DestFolder}; do # unquoted: the shell expands tom* here
if [ ! -w ${p} ]; then
F_Log "Destination folder ${p} does not have WRITE permissions. Skipping."
continue
fi
done
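To actually carry the matched cell name over to the destination, something along these lines could work (a sketch with the config line's patterns hardcoded; the ${var/pattern/string} substitution needs bash or ksh93):
# Derive the tom* cell name from each matched source path and substitute it
# into the destination pattern. Names below are illustrative.
srcPattern="app/tom*/instances/*/logs"
destPattern="/app/archive/tom*/logs"
for srcDir in $srcPattern; do            # unquoted: the shell expands tom* here
    [ -d "$srcDir" ] || continue
    cell=${srcDir#app/}                  # strip the leading "app/"
    cell=${cell%%/*}                     # keep the first component, e.g. "tom7dog"
    destDir=${destPattern/tom\*/$cell}   # replace the literal "tom*" with the cell name
    echo "would move logs from $srcDir to $destDir"
done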

In Ubuntu Bash, how do I compare a variable to a stdout value? [duplicate]

This question already has answers here:
How do I compare two string variables in an 'if' statement in Bash? [duplicate]
(12 answers)
Closed 9 years ago.
I attempted to follow the answer on
How do I compare two string variables in an 'if' statement in Bash?,
but the accepted solution did not work. As you can see from the
script below, my syntax follows the solutions on that question, which
gives me the error found here:
Bash syntax error: "[[: not found".
And yes, I tried their solution too.
I have the following script where I am trying to delete all data from a directory. Before I delete all data, I want to compare a variable to a stdout value to verify I have the correct directory.
To avoid deleting all data from the wrong directory, I am attempting to compare the variable in the script with data stored in a *.ini.php file.
Here is the script:
#!/bin/bash
#--- script variables ---
#base path of the timetrex web folder ending with a / character
timetrex_path=/var/www/timetrex/
timetrex_cache=/tmp/timetrex/
#--- initialize script---
#location of the base path of the current version
ttrexVer_path=$(ls -d ${timetrex_path}*.*.*)/
#the timetrex cache folder
ttrexCache_path=$(sed -n 's/[cache]*dir =*\([^ ]*\)/\1/p' < ${ttrexVer_path}timetrex.ini.php)/
echo $timetrex_cache
echo $ttrexCache_path
#clear the timetrex cache
if [[ "$ttrexCache_path" = "$timetrex_cache" ]]
then
#path is valid, OK to do mass delete
#rm -R $ttrexCache_path*
echo "Success: TimeTrex cache has been cleared."
else
#path could be root - don't delete the whole server
echo "Error: TimeTrex cache was NOT cleared."
fi
The output of the script shows the following:
/tmp/timetrex/
/tmp/timetrex/
Error: TimeTrex cache was NOT cleared.
As you can see from the output, both values look identical. However, when the script compares the two variables, it decides they are different values.
Is this because the values are different types? Am I using the wrong comparison operator in the if statement? Thanks in advance.
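A quick way to check whether two seemingly identical strings really are byte-for-byte equal is printf '%q', which makes stray whitespace or carriage returns visible. A minimal debugging sketch:
# Print both values with invisible characters escaped; a trailing space or
# \r in the sed output would show up here.
printf 'expected: %q\n' "$timetrex_cache"
printf 'actual:   %q\n' "$ttrexCache_path"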
After doing some more searching, I found that comparing the directory content was somewhat of an effective way of verifying that both variables pointed to the same directory.
Here is one way to do it:
#clear the timetrex cache
if [ "$(diff -q $timetrex_cache $ttrexCache_path 2>&1)" = "" ]
then
#path is valid, OK to do mass delete
rm -R ${ttrexCache_path}*
echo "Success: TimeTrex cache has been cleared."
else
#path could be root - don't delete the whole server
echo "Error: TimeTrex cache was NOT cleared."
fi
If one of the directories is an invalid path, the condition catches the problem and doesn't try to delete the directory contents.
If the directory paths are different but point to valid directories, the condition statement sees that they have different contents and doesn't try to delete the directory contents.
If both directory paths are different but point to valid directories, and the contents of those directories are the same, then the script will delete everything in one of the directories. So this is not a foolproof method.
A second method can be seen at https://superuser.com/questions/196572/check-if-two-paths-are-pointing-to-the-same-file. The problem with this method is that this code does not know the difference between /tmp/timetrex and /tmp/timetrex/ which is important when wanting to append a * at the end of the path.
In the end, the best solution for this problem is quite simple. Changing the syntax of the original code is the only thing that needed to be done.
#clear the timetrex cache
if [ "${timetrex_cache}" == "${ttrexCache_path}" ] && [[ "${timetrex_cache: -1}" = "/" ]]
then
#path is valid, OK to do mass delete
rm -R ${ttrexCache_path}*
echo "Success: TimeTrex cache has been cleared."
else
#path could be root - don't delete the whole server
echo "Error: TimeTrex cache was NOT cleared."
fi
Hope this is helpful to someone!
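Another option for the path-normalization problem mentioned above: GNU realpath resolves symlinks and strips trailing slashes, so both spellings of the path compare equal. A sketch, assuming coreutils realpath is available:
#Compare canonicalized paths so /tmp/timetrex and /tmp/timetrex/ match
if [ "$(realpath -m "$timetrex_cache")" = "$(realpath -m "$ttrexCache_path")" ]
then
    echo "Paths match: OK to do mass delete"
else
    echo "Paths differ: do NOT delete"
fi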
