Execute Command In Multiple Directories - linux

I have a large set of single install WordPress sites on my Linux server. I want to create text files that contain directory names for groups of my sites.
For instance, all my sites live in /var/www/vhosts and I may want to group a set of 100 websites in a text file such as:
site1
site2
site3
site4
How can I write a script that will loop through only the directories specified in the group text files and execute a command? My goal is to symlink some of the WordPress plugins, and I don't want to have to go directory by directory manually when I could just create groups and execute the command within each group of directories.
For each site in the group file, go to the /wp-content/plugins folder and execute the symlink command specified.

Depending on your goals, you may be able to achieve this with a one-liner using find and an -exec action. I tend to prefer a Bash loop, because it is easier to add further commands and to handle errors than with one long, unwieldy command doing it all.
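For reference, such a find one-liner might look roughly like this; note that it walks every site under /var/www/vhosts rather than only the sites in a group file, and the shared plugin path /opt/shared-plugins/my-plugin is just a placeholder:
find /var/www/vhosts -mindepth 1 -maxdepth 1 -type d \
    -exec sh -c 'ln -s /opt/shared-plugins/my-plugin "$1/wp-content/plugins/my-plugin"' _ {} \;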
I do not know if this is what you intend, but here is a proposal.
#!/bin/bash
# Argument 1: site group file
# Argument 2: command to run, located in each site's wp-content/plugins directory
# Any further arguments are passed on to that command
sites_dir="/var/www/vhosts"
commands_dir="wp-content/plugins"

if ! [[ -f $1 ]] ; then
    echo "Site list not found : $1"
    exit 1
fi

while IFS= read -r site
do
    site_dir="$sites_dir/$site"
    if ! [[ -d $site_dir ]] ; then
        echo "Unknown site : $site"
        continue
    fi
    command="$site_dir/$commands_dir/$2"
    if ! [[ -x $command ]] ; then
        echo "Missing or non-executable command : $command"
        continue
    fi
    "$command" "${@:3}"
done <"$1"
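A hypothetical invocation, assuming the script above is saved as run_in_group.sh and that each plugins directory contains an executable helper named make-links.sh (both names are placeholders), would then be:
# group1.txt contains one site directory name per line (site1, site2, ...)
./run_in_group.sh group1.txt make-links.sh /opt/shared-plugins/my-plugin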

Related

How do I find out if a parameter contains a "/" and if it does create the directory before it and the file after it

I am trying to create a script that takes a parameter with a / between the directory and the file. I then want to create the directory and create the file inside that directory. I don't really have a huge idea of what I am doing, so I don't have any code other than the basic skeleton for a bash if statement.
#!/bin/bash
if [ $1 ?? "/" ]; then
do
fi
If, for example, the parameter Website/Google is passed, a directory called Website should be created with a file called Google inside it.
if [[ "$1" = */* ]];
dir=${1#/*}
file=${1%%*/}
fi
in bash, or more generally for a POSIX-compatible shell,
case $1 in
    */*) dir=${1%/*}; file=${1##*/} ;;
esac
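To then actually create the directory and the file, a minimal sketch (assuming the parameter contains exactly one slash, as in Website/Google) could look like:
#!/bin/bash
# Split "Website/Google" into dir=Website and file=Google, then create both.
if [[ "$1" = */* ]]; then
    dir=${1%/*}
    file=${1##*/}
    mkdir -p "$dir"       # create the directory (and any missing parents)
    touch "$dir/$file"    # create an empty file inside it
fi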

Backup the first argument on bash script

I wrote a script to backup the first argument that the user input with the script:
#!/bin/bash
file=$1/$(date +"_%Y-%m-%d").tar.gz
if [ $1 -eq 0 ]
then
echo "We need first argument to backup"
else
if [ ! -e "$file" ]; then
tar -zcvf $1/$(date +"_%Y-%m-%d").tar.gz $1
else
exit
fi
fi
The result that I want from the script is to:
back up the folder given as the first argument, and
save the backup file into that folder with a date-time format in its name.
But the script does not run when I pass the argument. What's wrong with the script?
The backup part of your script seems to be working well, but not the part where you check that $1 is not empty.
Firstly, you would need quotes around $1 to prevent it from expanding to nothing. Without the quotes, the shell sees it as
if [ -eq 0 ]
and throws an error.
Secondly, it would be better to use the -z operator to test whether the variable is empty:
if [ -z "$1" ]
Now your script should work as expected.
I see several problems:
As H. Gourlé pointed out, the test for whether an argument was passed is wrong. Use if [ -z "$1" ] to check for a missing/blank argument.
Also, it's almost always a good idea to wrap variable references in double-quotes, as in "$1" above. You do this in the test for whether $file exists, but not in the tar command. There are places where it's safe to leave the double-quotes off, but the rules are complicated; it's easier to just always double-quote.
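A quick illustration of why the quoting matters (the directory name "my docs" is hypothetical):
dir="my docs"
tar -zcvf backup.tar.gz $dir      # word-splits into two arguments: "my" and "docs"
tar -zcvf backup.tar.gz "$dir"    # passes the single argument "my docs", as intended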
In addition to checking whether $1 was passed, I'd recommend checking whether it corresponds to a directory (or possibly file) that actually exists. Use something like:
if [ -z "$1" ]; then
echo "$0: We need first argument to backup" >&2
elif [ ! -d "$1" ]; then
echo "$0: backup source $1 not found or is not a directory" >&2
BTW, note how the error messages start with $0 (the name the script was run as) and are directed to error output (the >&2 part)? These are both standard conventions for error messages.
This isn't serious, but it really bugs me: you calculate $1/$(date +"_%Y-%m-%d").tar.gz, store it in the file variable, test to see whether something by that name exists, and then calculate it again when creating the backup file. There's no reason to do that; just use the file variable again. The reason it bugs me is partly that it violates the DRY ("Don't Repeat Yourself") principle, partly that if you ever change the naming convention you have to change it consistently in two places or the script will not work, and partly because in principle it's possible that the script will run just at midnight, and the first calculation will get one day and the second will get a different day.
Speaking of naming conventions, there's a problem with how you store the backup file. If you put it in the directory that's being backed up, then the first day you'll get a .tar.gz file containing the previous contents of the directory. The second day you'll get a file containing the regular contents plus the first backup file. Thus, the second day's backup will be about twice as big. The third day's backup will contain the regular contents, plus the first two backup files, so it'll be four times as big. And the fourth day's will be eight times as big, then 16 times, then 32 times, etc.
You need to either store the backup file somewhere outside the directory being backed up, or add something like --exclude="*.tar.gz" to the arguments to tar. The disadvantage of the --exclude option is that it may exclude other .tar.gz files from the backup, so I'd really recommend the first option. And if you followed my advice about using "$file" everywhere instead of recalculating the name, you only need to make a change in one place to change where the backup goes.
One final note: run your scripts through shellcheck.net. It'll point out a lot of common errors and bad practices before you discover them the hard way.
Here's a corrected version of the script (storing the backup in the directory, and excluding .tar.gz files; again, I recommend the other option):
#!/bin/bash
file="$1/$(date +"_%Y-%m-%d").tar.gz"
if [ -z "$1" ]; then
    echo "$0: We need first argument to backup" >&2
elif [ ! -d "$1" ]; then
    echo "$0: backup source $1 not found or is not a directory" >&2
elif [ -e "$file" ]; then
    echo "$0: A backup already exists for today" >&2
else
    tar --exclude="*.tar.gz" -zcvf "$file" "$1"
fi
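For completeness, a sketch of the recommended variant, which stores the archive outside the directory being backed up (the destination /var/backups is an assumption; no --exclude is needed then):
#!/bin/bash
# Variant sketch: write the archive to a separate backup directory.
backup_dir="/var/backups"    # assumed destination; adjust to your setup
file="$backup_dir/$(basename "$1")_$(date +"%Y-%m-%d").tar.gz"
if [ -z "$1" ]; then
    echo "$0: We need first argument to backup" >&2
elif [ ! -d "$1" ]; then
    echo "$0: backup source $1 not found or is not a directory" >&2
elif [ -e "$file" ]; then
    echo "$0: A backup already exists for today" >&2
else
    tar -zcvf "$file" "$1"
fi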

Bash script to iterate contents of directory moving only the files not currently open by other process

I have people uploading files to a directory on my Ubuntu Server.
I need to move those files to the final location (another directory) only when I know these files are fully uploaded.
Here's my script so far:
#!/bin/bash
cd /var/uploaded_by_users
for filename in *; do
lsof $filename
if [ -z $? ]; then
# file has been closed, move it
else
echo "*** File is open. Skipping..."
fi
done
cd -
However, it's not working: it says some files are open when that's not true. I assumed $? would be 0 if the file was closed and 1 if it wasn't, but I think that's wrong.
I'm not a Linux expert, so I'm looking for how to implement this simple script, which will run from a cron job every minute.
[ -z $? ] checks whether $? is of zero length. Since $? will never be a null string, the check will always fail and the else part will be executed.
You need to test for numeric zero, as below:
lsof "$filename" >/dev/null; lsof_status=$?
if [ "$lsof_status" -eq 0 ]; then
# file is open, skipping
else
# move it
fi
Or more simply (as Benjamin pointed out):
if lsof "$filename" >/dev/null; then
# file is open, skip
else
# move it
fi
Using negation, we can shorten the if statement (as dimo414 pointed out):
if ! lsof "$filename" >/dev/null; then
# move it
fi
You can shorten it even further, using &&:
for filename in *; do
lsof "$filename" >/dev/null && continue # skip if the file is open
# move the file
done
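Putting it together, a minimal complete sketch (the destination /var/final_location and the script path /usr/local/bin/move_uploads.sh are assumptions) could look like:
#!/bin/bash
# Move fully uploaded files out of the upload directory.
shopt -s nullglob                            # do nothing if the directory is empty
cd /var/uploaded_by_users || exit 1
for filename in *; do
    lsof "$filename" >/dev/null && continue  # still open by the uploader, skip it
    mv -- "$filename" /var/final_location/
done
Since the question mentions running this from cron every minute, a corresponding crontab entry would be:
* * * * * /usr/local/bin/move_uploads.sh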
You may not need to worry about when the write is complete, if you are moving the file to a different location in the same file system. As long as the client is using the same file descriptor to write to the file, you can simply create a new hard link for the upload file, then remove the original link. The client's file descriptor won't be affected by one of the links being removed.
cd /var/uploaded_by_users
for f in *; do
    ln "$f" /somewhere/else/"$f"
    rm "$f"
done

How to delete files after using grep function

I have the command below:
grep -rnw '/root/serviceDown/' -e "The service 'httpd' on server is currently down"
and the result is as follows:
/root/serviceDown/2946/000.conf:5:subject=The service 'httpd' on server is currently down
/root/serviceDown/2955/000.conf:5:subject=The service 'httpd' on server is currently down
How to write a script which deletes those files after the grep command and then restarts the server?
This probably is what you are looking for:
grep -lr "The service 'httpd' on server is currently down" /root/serviceDown/ 2>/dev/null | xargs rm
The -n and -w flags do not really make sense for your purpose: -n only adds line numbers that get in the way here, and -w (match whole words) is unnecessary for this fixed phrase. The -e flag is not required either: it merely marks the next argument as the search pattern, which is unnecessary when you pass a single pattern. The -l flag reduces the output to the names of the matching files. You discard the error output with 2>/dev/null and finally pipe the resulting list of files into the xargs utility, which runs a simple rm command to delete them.
Restarting the server process afterwards can be done with whatever command you usually use for that; just execute it after the above command, either manually or separated by a simple ; to do both in one go.
Obviously you need sufficient system permissions to be able to perform both commands...
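For example, in one go (assuming service httpd restart is how you normally restart it, as in the script below):
grep -lr "The service 'httpd' on server is currently down" /root/serviceDown/ 2>/dev/null | xargs rm; service httpd restart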
For more advanced processing of the result, as you ask for in your comment below, I suggest you implement a simple script. This offers much more flexibility, is easier to read and maintain, and also allows execution as a single command.
This might be a starting point for you:
#!/bin/bash
# fetch the list of matching files
list=$(grep -lr "The service 'httpd' on server is currently down" /root/serviceDown/ 2>/dev/null)
if [[ -z "$list" ]]; then
    echo "No files matched, nothing to be done..."
    exit
fi
# delete the files one by one
for match in $list
do
    echo "Removing matched file $match..."
    rm "$match"
done
# restart the server process
echo "Restarting server process..."
service httpd restart
# that's it, basically
echo "...done."
Save that script into some folder inside your PATH environment variable (e.g. /root/bin/restartFailedHttpdServer), make it executable (chmod u+x /root/bin/restartFailedHttpdServer) and finally execute it (restartFailedHttpdServer).

How to read the first line user types into terminal in bash script

I'm trying to write a script where to run the script, the user will type something along the lines of
$./cpc -c test1.txt backup
into the terminal, where ./cpc is to run the script, -c is $option, test1.txt is $source and backup is $destination.
How would I assign the values typed into the terminal so I can use them in my script, for example in
if [[ -z $option || -z $source || -z $destination ]]; then
echo "Error: Incorrect number of arguments." (etc)
as when I check the script online, the following errors are returned: 'option/source/destination is referenced but not assigned.'
Sorry in advance if any of this doesn't make sense; I'm trying to be as clear as possible.
The arguments are stored in the numbered parameters $1, $2, etc. So, just assign them:
option=$1
source=$2
destination=$3
See also man getopt or getopts in man bash.
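Since the question's -c looks like an option flag, here is a short getopts sketch; the option letter c and the variable names are taken from the question, while the overall structure is an assumption about how the script might be organized:
#!/bin/bash
# Parse a leading -c option, then take source and destination as positional arguments.
option=""
while getopts "c" opt; do
    case $opt in
        c) option="-c" ;;
        *) echo "Usage: $0 -c source destination" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))    # drop the parsed options
source=$1
destination=$2

if [[ -z $option || -z $source || -z $destination ]]; then
    echo "Error: Incorrect number of arguments." >&2
    exit 1
fi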
