Updating phpMyAdmin blowfish_secret via bash shell script in Linux

I am working on a bash script that automatically downloads phpMyAdmin and extracts it. I would like to add one more step to this installer script.
Copy config.sample.inc.php to config.inc.php and update the following line in that file with a random blowfish secret:
$cfg['blowfish_secret'] = ''; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */
So, this is what I have tried:
#!/bin/bash
wget -O phpMyAdmin-4.5.3.1-english.zip https://files.phpmyadmin.net/phpMyAdmin/4.5.3.1/phpMyAdmin-4.5.3.1-english.zip;
unzip phpMyAdmin-4.5.3.1-english.zip >/dev/null 2>/dev/null;
cd phpMyAdmin-4.5.3.1-english;
mv * ..;
cd ..;
rm -rf phpMyAdmin-4.5.3.1-english;
rm -rf phpMyAdmin-4.5.3.1-english.zip;
randomBlowfishSecret=`openssl rand -base64 32`;
cat config.sample.inc.php | sed -e "s/cfg['blowfish_secret'] = ''/cfg['blowfish_secret'] = '$randomBlowfishSecret'/" > config.inc.php
When this script runs, phpMyAdmin is downloaded and extracted and the file is copied; however, it does not appear to set $cfg['blowfish_secret'] to the random value.
Any ideas?

A few points:
You don't have to end your lines with ; – a newline has the same effect.
If you want to redirect both stdout and stderr, you can use &>/dev/null instead of >/dev/null 2>/dev/null, but in the case of unzip, you can just use unzip -q to suppress output (or even -qq, but -q was already silent for me).
Instead of
cd phpMyAdmin-4.5.3.1-english;
mv * ..;
cd ..;
you can just use mv phpMyAdmin-4.5.3.1-english/* .
There are two files starting with ., which aren't moved with your command (unless you have the dotglob shell option set), so you have to move them separately:
mv phpMyAdmin-4.5.3.1-english/.*.yml .
The phpMyAdmin-4.5.3.1-english directory is now empty, so you can remove it with rmdir instead of rm -rf (rmdir would also have let you know if it weren't empty yet).
phpMyAdmin-4.5.3.1-english.zip is just a file; no need to recursively delete it, rm -f is enough.
Instead of the deprecated backticks for command substitution, you could use the more modern $():
randomBlowfishSecret=$(openssl rand -base64 32)
The sed can be improved in three ways:
No need for cat. cat file | sed "s/x/y/g" > output (replace all x in file with y, save to output) is equivalent to sed "s/x/y/g" file > output, but the latter doesn't spawn an extra process.
Your regular expression
s/cfg['blowfish_secret'] = ''/
is interpreted as "cfg, followed by any ONE character from the list between [ and ]", but you want literal [ and ], so they have to be escaped: \[ and \]. In the replacement string, they don't have to be escaped.
The password generated by openssl rand can contain forward slashes, which confuses sed. You can use a different delimiter for sed, for example "s|x|y|" instead of "s/x/y/".
All of these are cosmetic, except the last two sed bullet points: those can break the script. Well, and the missing hidden files might be annoying, too.
Cleaned up version that works for me:
#!/bin/bash
wget -O phpMyAdmin-4.5.3.1-english.zip https://files.phpmyadmin.net/phpMyAdmin/4.5.3.1/phpMyAdmin-4.5.3.1-english.zip
unzip -q phpMyAdmin-4.5.3.1-english.zip
mv phpMyAdmin-4.5.3.1-english/* .
mv phpMyAdmin-4.5.3.1-english/.*.yml .
rmdir phpMyAdmin-4.5.3.1-english
rm -f phpMyAdmin-4.5.3.1-english.zip
randomBlowfishSecret=$(openssl rand -base64 32)
sed -e "s|cfg\['blowfish_secret'\] = ''|cfg['blowfish_secret'] = '$randomBlowfishSecret'|" config.sample.inc.php > config.inc.php

Related

execute command from the terminal vs from a script - path with wildcard - prevent glob expansion

I am experiencing a strange problem and I think it has something to do with file/directory globbing
script
echo "tar -zcvf $file $base/$target $exclude_args"
cd $base && tar -zcvf $file $base/$target $exclude_args
output
tar -zcvf www_2017-04-24.tar.gz /var/www --exclude '/var/www/bak/*/*' --exclude '/var/www/test'
When running the script, the exclude paths are ignored (every directory is gzipped)
When running the printed command directly from PuTTY, the directories under /var/www/bak/*/* are excluded from the gzip
update
parse_exclude_paths (){
    # escape forward slashes to keep the paths from expanding
    args=$(echo "$exclude" | sed 's,/,\\\/,g')
    args=$(printf " --exclude '%s'" $args)
    # strip escapes
    echo "$args" | sed 's,\\\/,/,g'
}
exclude="/var/www/bak/*/* /var/www/test"
exclude_args=''
if [ ! -z "$exclude" ]; then
    exclude_args="$(parse_exclude_paths "$exclude")"
fi
update 2
If the command is sent via SSH there is no problems and the exclude paths are excluded from the gzip
ssh root@$host 'cd '"$base"' && tar -zcvf $file '"$base/$target $exclude_args"
I snooped your question history and saw that you're familiar with PHP. Here's the equivalent problem in PHP:
function foo($arg1, $arg2) {
    echo "You passed $arg1 and $arg2\n";
}
$var = '"one", "two"';
echo "Running: foo($var);\n";
foo($var);
The echo prints Running: foo("one", "two"); and that command works just fine if you copy-paste it!
Why does foo($var); instead write PHP Warning: Missing argument 2 for foo()?
The answer is of course that literal quotes in your variables don't matter for how the function is called. This is the same in both PHP and shell.
The solution in both PHP and Bash is to use an array:
#!/bin/bash
file="www_2017-04-24.tar.gz"
base="/var"
target="www"
exclude_args=( --exclude '/var/www/bak/*/*' )
cd "$base" && tar -zcvf "$file" "$base/$target" "${exclude_args[#]}"
sh is more primitive and doesn't support arbitrary arrays, but we can reuse the positional parameters to the same effect:
#!/bin/sh
file="www_2017-04-24.tar.gz"
base="/var"
target="www"
set -- --exclude '/var/www/bak/*/*' # Now assigned to $1, $2, etc
cd "$base" && tar -zcvf "$file" "$base/$target" "$#"
Another option is to use eval to re-interpret a string as a shell command. This means that anyone who can influence your variables can take over your system, but that may be ok if all the variables come from users with equivalent privileges:
eval "tar -zcvf $file $base/$target $exclude_args"

Deleting all files except ones mentioned in config file

Situation:
I need a bash script that deletes all files in the current folder, except all the files mentioned in a file called ".rmignore". This file may contain paths relative to the current folder, which might also contain asterisks (*). For example:
1.php
2/1.php
1/*.php
What I've tried:
I tried to use GLOBIGNORE but that didn't work well.
I also tried to use find with grep, like follows:
find . | grep -Fxv $(echo $(cat .rmignore) | tr ' ' "\n")
It is considered bad practice to pipe the output of find to another command. You can use -exec or -execdir followed by the command, with '{}' as a placeholder for the file and ';' to indicate the end of your command. You can also use '+' instead of ';' to pass many files to a single invocation of the command; see the sketch below.
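A quick illustration of both forms (echo is used so nothing is actually deleted):
find . -type f -exec echo rm '{}' \;   # runs one command per file
find . -type f -exec echo rm '{}' +    # batches many files into each command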
In your case, you want to list all the content of a directory and remove files one by one.
#!/usr/bin/env bash
set -o nounset
set -o errexit
shopt -s nullglob # allows glob to expand to nothing if no match
shopt -s globstar # make ** match recursively through the current directory
my:rm_all() {
    local ignore_file=".rmignore"
    local ignore_array=()
    while read -r glob; # build the list of files to keep
    do
        ignore_array+=(${glob});
    done < "${ignore_file}"
    echo "${ignore_array[@]}"
    for file in **; # iterate over all the content of the current directory
    do
        if [ -f "${file}" ]; # path exists and is a regular file
        then
            local do_rmfile=true;
            # remove only if the file matches no ignore entry
            for ignore in "${ignore_array[@]}"; # iterate over files to keep
            do
                [[ "${file}" == "${ignore}" ]] && do_rmfile=false; #rm ${file};
            done
            ${do_rmfile} && echo "Removing ${file}"
        fi
    done
}
my:rm_all;
If we assume that none of the files in .rmignore contain newlines in their name, the following might suffice:
# Gather our exclusions...
mapfile -t excl < .rmignore
# Reverse the array (put data in indexes)
declare -A arr=()
for file in "${excl[@]}"; do arr[$file]=1; done
# Walk through files, deleting anything that's not in the associative array.
shopt -s globstar
for file in **; do
    [ -n "${arr[$file]}" ] && continue
    echo rm -fv "$file"
done
Note: untested. :-) Also, associative arrays were introduced with Bash 4.
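Since the script depends on a Bash 4 feature, you could add a guard at the top before relying on declare -A — a minimal sketch:
if ((BASH_VERSINFO[0] < 4)); then
    echo "this script requires Bash 4+ (associative arrays)" >&2
    exit 1
fi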
An alternate method might be to populate an array with the whole file list, then remove the exclusions. This might be impractical if you're dealing with hundreds of thousands of files.
shopt -s globstar
declare -A filelist=()
# Build a list of all files...
for file in **; do filelist[$file]=1; done
# Remove files to be ignored.
while read -r file; do unset filelist[$file]; done < .rmignore
# And .. delete.
echo rm -v "${!filelist[@]}"
Also untested.
Warning: rm at your own risk. May contain nuts. Keep backups.
I note that neither of these solutions will handle wildcards in your .rmignore file. For that, you might need some extra processing...
shopt -s globstar
declare -A filelist=()
# Build a list...
for file in **; do filelist[$file]=1; done
# Remove PATTERNS...
while read -r glob; do
    for file in $glob; do
        unset filelist[$file]
    done
done < .rmignore
# And remove whatever's left.
echo rm -v "${!filelist[@]}"
And .. you guessed it. Untested. This depends on $glob expanding as a glob.
Lastly, if you want a heavier-weight solution, you can use find and grep:
find . -type f -not -exec grep -q -f '{}' .rmignore \; -delete
This runs a grep for EACH file being considered. And it's not a bash solution, it only relies on find which is pretty universal.
Note that ALL of these solutions are at risk of errors if you have files that contain newlines.
This line does the job perfectly:
find . -type f | grep -vFf .rmignore
If you have rsync, you might be able to copy an empty directory to the target one, with suitable rsync ignore files. Try it first with -n, to see what it will attempt, before running it for real!
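A hedged sketch of that rsync approach, assuming the .rmignore patterns are in a form rsync accepts (note it would also want to delete .rmignore and the empty directory themselves unless you exclude them too):
mkdir empty
rsync -rn --delete --exclude-from=.rmignore empty/ ./   # -n = dry run; drop it only once the output looks right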
This is another bash solution that seems to work ok in my tests:
while read -r line; do
    exclude+=$(find . -type f -path "./$line")$'\n'
done < .rmignore
echo "ignored files:"
printf '%s\n' "$exclude"
echo "files to be deleted"
echo rm $(LC_ALL=C sort <(find . -type f) <(printf '%s\n' "$exclude") | uniq -u) # intentionally unquoted to drop the newlines
Alternatively, you may want to look at the simplest format:
rm $(ls -1 | grep -v .rmignore)

Linux: Update directory structure for millions of images which are already in prefix-based folders

This is basically a follow-up to Linux: Move 1 million files into prefix-based created Folders
The original question:
I want to write a shell command to rename all of those images into the
following format:
original: filename.jpg
new: /f/i/l/filename.jpg
Now, I want to take all of those files and add an additional level to the directory structure, e.g:
original: /f/i/l/filename.jpg
new: /f/i/l/e/filename.jpg
Is this possible to do with command line or bash?
One way to do it is to simply loop over all the directories you already have, and in each bottom-level subdirectory create the new subdirectory and move the files:
for d in ?/?/?/; do (
    cd "$d" &&
    printf '%.4s\0' * | uniq -z |
    xargs -0 bash -c 'for prefix do
        s=${prefix:3:1}
        mkdir -p "$s" && mv "$prefix"* "$s"
    done' _
) done
That probably needs a bit of explanation.
The glob ?/?/?/ matches all directory paths made up of three single-character subdirectories. Because it ends with a /, everything it matches is a directory so there is no need to test.
( cd "$d" && ...; )
executes ... after cd'ing to the appropriate subdirectory. Putting that block inside ( ) causes it to be executed in a subshell, which means the scope of the cd will be restricted to the parenthesized block. That's easier and safer than putting cd .. at the end.
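A two-line illustration of that scoping, if it helps:
( cd /tmp && pwd )   # prints /tmp
pwd                  # still the original directory; the cd ended with the subshell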
We then collect the subdirectory prefixes by finding the unique initial strings of the filenames:
printf '%.4s\0' * | uniq -z | xargs -0 ...
That extracts the first four letters of each filename, nul-terminating each one, then passes this list to uniq to eliminate duplicates, providing the -z option because the input is nul-terminated, and then passes the list of unique prefixes to xargs, again using -0 to indicate that the list is nul-terminated. xargs executes a command with a list of arguments, issuing the command several times only if necessary to avoid exceeding the command-line limit. (We probably could have avoided the use of xargs but it doesn't cost that much and it's a lot safer.)
The command called with xargs is bash itself; we use the -c option to pass it a command to be executed. That command iterates over its arguments using the for prefix do syntax (a for loop without in iterates over the positional parameters). Each argument is a unique prefix; we extract the fourth character from the prefix to construct the new subdirectory and then mv all files whose names start with the prefix into the newly created directory.
The _ at the end of the xargs invocation will be passed to bash (as with all the rest of the arguments); bash -c uses the first argument following the command as the $0 argument to the script, which is not part of the positional parameters iterated over by the for loop. So putting the _ there means that the argument list constructed by xargs will be precisely $1, $2, ... in the execution of the bash command.
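You can see the role of the _ placeholder with a tiny experiment (foo and bar are just hypothetical arguments):
bash -c 'echo "\$0=$0 \$1=$1 \$2=$2"' _ foo bar
# prints: $0=_ $1=foo $2=bar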
Okay, so I've created a very crude solution:
#!/bin/bash
for file1 in *; do
    if [[ -d "$file1" ]]; then
        cd "$file1"
        for file2 in *; do
            if [[ -d "$file2" ]]; then
                cd "$file2"
                for file3 in *; do
                    if [[ -d "$file3" ]]; then
                        cd "$file3"
                        for file4 in *; do
                            if [[ -f "$file4" ]]; then
                                echo "mkdir -p ${file4:3:1}/; mv $file4 ${file4:3:1}/;"
                                mkdir -p ${file4:3:1}/; mv $file4 ${file4:3:1}/;
                            fi
                        done
                        cd ..
                    fi
                done
                cd ..
            fi
        done
        cd ..
    fi
done
I should warn that this is untested, as my actual structure varies slightly, but I wanted to keep the question/answer consistent with the original question for clarity.
That being said, I'm sure a much more elegant solution exists than this one.

prompt list of files before execution of rm

I started using "sudo rm -r" to delete files/directories. I even made it an alias for rm.
I normally know what I am doing, and I am quite an experienced Linux user.
However, I would like that when I press ENTER, before rm executes, a list of the files shows up on the screen together with a prompt at the end to OK the deletion.
The options -i, -I and -v do not do what I want: I want only one prompt for all the files printed on screen.
Thank you.
##
# Double-check files to delete.
delcheck() {
    printf 'Here are the %d files you said you wanted to delete:\n' "$#"
    printf '"%s"\n' "$@"
    read -p 'Do you want to delete them? [y/N] ' doit
    case "$doit" in
        [yY]) rm "$@";;
        *) printf 'No files deleted\n';;
    esac
}
This is a shell function that (when used properly) will do what you want. However, if you define the function in your current shell and then try to use it with sudo, it won't do what you expect, because sudo runs the command in a separate process that doesn't know about your shell functions. So you'd need to make this a shell script…
#!/bin/bash
… same code as above …
# All this script does is create the function and then execute it.
# It's lazy, but functions are nice.
delcheck "$#"
…then make sure sudo can access it: put it somewhere in sudo's execution PATH (this depends on your sudo configuration). Then if you really want to execute it precisely as sudo rm -r *, you will still need to name the script rm (which in my opinion is dangerous) and make sure its directory comes before /bin in your PATH (also dangerous). But there you go.
Here's a nice option:
Alias rm to echo | xargs -p rm
The -p option means "interactive": it will display the entire command (including any expanded file lists) and ask you to confirm.
It will NOT ask about the recursively removed files, but it will expand
rm -rf * .o
to
rm -rf program.cc program.cc~ program program.o backup?... # NO NO NO NO NO!
Which is much nicer than receiving the error
rm: .o file not found
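If you want to try this, the alias definition would look something like the following (a sketch; test it on expendable files first):
alias rm='echo | xargs -p rm'
# typing:  rm *.o
# prompts with the fully expanded command before running it, e.g.:
# rm program.o backup.o ?...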
Edit: corrected the solution based on chepner's comment. My previous solutions had a bug :(
This simple script prompts for a y response before deleting the files specified.
rmc script file:
read -p "ok to delete? " ans
case $ans in
    [yY]*) sudo rm "$@" ;;
    *) echo "Nothing deleted";;
esac
Invoke thus
./rmc *.tmp
I created a script to do this. The solution is similar to @kojiro's.
Save the script below with the filename del and run chmod +x del to make it executable. Then add the script's directory to your PATH by putting export PATH=$PATH:/path/to/the/del/executable in your ~/.bashrc file and running source ~/.bashrc.
Here, the syntax of rm is preserved, except instead of typing rm ..., type del ... where del is the name of the bash script below.
#! /bin/bash
# Safely delete files
args=("$@") # store all arguments passed to the shell
N=$#        # number of arguments passed to the shell
#echo $@
#echo $#
#echo ${args[@]:0}
echo "Files to delete:"
echo
n=`expr $N - 1`
for i in `seq 0 $n`
do
    str=${args[i]}
    if [ ${str:0:1} != "-" ]; then
        echo $str
    fi
done
echo
read -r -p "Delete these files? [y/n] " response
case $response in
    [yY][eE][sS]|[yY])
        rm ${args[@]:0}
esac

Help with Replacing Strings on Solaris

I am on a Solaris 8 box whose sed does not support the -i option, so I am using the following from a google search on the topic:
# find . -name cancel_submit.cgi | while read file; do
> sed 's/ned.dindo.com\/confluence\/display\/CESDT\/CETS+DocTools>DOC Team/wwwin-dev.dindo.com\/Eng\/CntlSvcs\/InfoFrwk\/GblEngWWW\/Public\/index.html>EDCS Team/g' ${file} > ${file}.new
> mv ${file}.new ${file}
> done
This works, except that it messes up the file permissions and owner:group.
How can I retain the original information?
You may use cat to overwrite the original file in place, which preserves its metadata:
cat ${file}.new > ${file} && rm ${file}.new
cp -p preserves the stuff you want. Personally I would do this (to imitate sed -i.bak):
...
cp -p ${file} ${file}.bak
sed 's/..../g' ${file}.bak > ${file}
...
You could add rm ${file}.bak to the end if desired, in which case you wouldn't technically need the -p in the cp line above. But with the above you can do mv ${file}.bak ${file} to recover if the replacement goes awry.
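Putting it together, the whole loop might look like this (a sketch following the same pattern, with s/old/new/g standing in for the long substitution above):
find . -name cancel_submit.cgi | while read file; do
    cp -p "${file}" "${file}.bak"
    sed 's/old/new/g' "${file}.bak" > "${file}"
    rm "${file}.bak"
done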
