Customization of regular expressions in Linux shell

I am working on the definition of regular expressions. With the command
file=`echo $2 | sed -e "s/&/\&amp;/g" \
-e "s/</\&lt;/g" \
-e "s/>/\&gt;/g" \
-e "s/'/\&apos;/g"`
a shell script accesses a file in a file system and then continues editing the file. That works pretty well. However, it cannot handle files whose file path contains two spaces in succession.
Is it possible to adapt this command so that such special cases in the file path are covered as well?

The easiest thing to do is put the filename in quotes on the command line. For example:
$ script.sh arg "file name"
The other thing you can do, if the file name is the last argument the script receives, is to take all of the remaining command line args. E.g.:
shift # Shifts off the first argument, so $2 is the first one
file=`echo $* | sed ....`
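Incidentally, part of the double-space problem is the unquoted $2 inside the backticks: without quotes the shell word-splits the value, so two consecutive spaces collapse into one before sed ever sees them. A minimal sketch of the same substitutions with the argument quoted (assuming the path still arrives as $2):
# Quoting "$2" preserves consecutive spaces in the path
file=$(printf '%s' "$2" | sed -e 's/&/\&amp;/g' \
    -e 's/</\&lt;/g' \
    -e 's/>/\&gt;/g' \
    -e "s/'/\&apos;/g")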

I already pass the file name, including the file path, to the command.
The program "set_attributes" uses the option -w to set the value 1 on the specified file "path to file":
./set_attributes -u user -p password -w 1 server_2 "path to file"
./set_attributes -u user -p password -w 1 server_2 "/example/folder  1/filename.jpg"
The program "set_attributes" cannot handle this file because there are two consecutive blank characters in the file path.

Related

Shell Script With sed and Random number

How do I make a shell script that receives one or more text files and removes whitespace and blank lines from them? Afterwards, each new file should have a random 2-digit number added to its name.
For example, File1.txt generates File1_56.txt
Tried this:
#!/bin/bash
for file in "$*"; do
sed -e '/^$/d;s/[[:blank:]]//g' $* >> "$*_$$.txt"
done
But when I give 2 files as input, the script merges them into one single file, whereas I want a separate output file for each.
Try:
#!/bin/bash
for file in "$@"; do
sed -e '/^$/d;s/[[:blank:]]//g' "$file" >> "${file%.txt}_$$.txt"
done
Notes
To loop over each argument without word splitting or other hazards, use for file in "$@", not for file in "$*".
To run the sed command on one file instead of all, specify "$file" as the file, not $*.
To save the output to the correct file, use "${file%.txt}_$$.txt" where ${file%.txt} is an example of suffix removal: it removes the final .txt from the file name.
$$ is the process ID. The title mentions a "random" number; if you want a random number, replace $$ with $RANDOM.
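For example, assuming the corrected script is saved as clean.sh, a run might look like this (the numeric suffix is the shell's process ID, so it differs from run to run but is the same for every file processed in a single run):
$ ./clean.sh File1.txt File2.txt
$ ls File1_* File2_*
File1_4321.txt  File2_4321.txt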

perl -p -i -e inside a shell script

#!/usr/bin/env bash
DOCUMENT_ROOT=/var/www/html/
rm index.php
cd modules/mymodules/
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' `grep -ril y.yy.yy.y *`
shows a warning:
-i used with no filenames on the command line, reading from STDIN.
It prevents the rest of the script from running.
Any solutions?
Actually I need to run
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' `grep -ril y.yy.yy.y *`
inside a shell script.
I am using ubuntu with docker.
Let's look at this a step at a time. First, you're running this grep command:
grep -ril y.yy.yy.y *
This recursively searches all files and directories in your current directory. It looks for files containing the string "y.yy.yy.y" in any case and returns a list of the files which contain this text.
This command will return either a list of filenames or nothing.
Whatever is returned from that grep command is then passed as arguments to your Perl command:
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' [list of filenames]
If grep returns a list of files, then the -p option here means that each line in every file in the list is (in turn) run through that substitution and then printed to a new file. The -i means there's one new file for each old file and the new files are given the same names as the old files (the old files are deleted once the command has run).
So far so good. But what happens if the grep doesn't return any filenames? In that case, your Perl command doesn't get any filenames and that would trigger the error that you are seeing.
So my guess is that your grep command isn't doing what you want it to and is returning an empty list of filenames. Try running the grep command on its own and see what you get.
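If you want the script to carry on cleanly even when there are no matches, one option is to run the grep first and only call perl when the list is non-empty; a sketch (it inherits the original's assumption that the matched filenames contain no spaces):
# Collect matching files first; only run perl if grep found anything
files=$(grep -ril y.yy.yy.y *)
if [ -n "$files" ]; then
    perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' $files
fi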

bash escape exclamation character inside variable with backtick

I have this bash script:
databases=`mysql -h$DBHOST -u$DBUSER -p$DBPASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`
and the issue is when the password can contain any possible character. How can I escape $DBPASSWORD in this case, given that the password contains '!' and the command is inside backticks? I have no experience with bash scripts, but I've tried "$DBPASSWORD" and '$DBPASSWORD' and neither works. Thank you
LATER EDIT: link to script here, line 170 -> https://github.com/Ardakilic/backmeup/blob/master/backmeup.sh
First: The answer from @bishop is spot on: Don't pass passwords on the command line.
Second: Use double quotes for all shell expansions. All of them. Always.
databases=$(mysql -h"$DBHOST" -u"$DBUSER" -p"$DBPASSWORD" -e "SHOW DATABASES;" | tr -d "| " | grep -v Database)
Don't pass the MySQL password on the command line. One, it can be tricky with passwords containing shell meta-characters (as you've discovered). Two, importantly, someone using ps can sniff the password.
Instead, either put the password into the system my.cnf, your user configuration file (eg .mylogin.cnf) or create an on-demand file to hold the password:
function mysql() {
    local tmpfile=$(mktemp)
    # Write the password to a temporary option file instead of the command line
    cat > "$tmpfile" <<EOCNF
[client]
password=$DBPASSWORD
EOCNF
    # "command" bypasses this function so the real mysql client is invoked
    command mysql --defaults-extra-file="$tmpfile" -u"$DBUSER" -h"$DBHOST" "$@"
    rm "$tmpfile"
}
Then you can run it as:
mysql -e "SHOW DATABASES" | tr -d "| " ....
mysql -e "SELECT * FROM table" | grep -v ...
See the MySQL docs on configuration files for further examples.
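For reference, an option file is just a small INI-style file; a sketch of a per-user ~/.my.cnf (the values are placeholders, and the file should be readable only by you, e.g. chmod 600 ~/.my.cnf):
[client]
user=dbuser
password=s3cret!pass
With that in place, a plain mysql -e "SHOW DATABASES" picks up the credentials without anything sensitive appearing on the command line or in ps output.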
I sometimes have the same problem when automating activities:
I have a variable containing a string (usually a password) that is set in a config file or passed on the command-line, and that string includes the '!' character.
I need to pass that variable's value to another program, as a command-line argument.
If I pass the variable unquoted, or in double-quotes ("$password"), the shell tries to interpret the '!', which fails.
If I pass the variable in single quotes ('$password'), the variable isn't expanded.
One solution is to construct the full command in a variable and then use eval, for example:
#!/bin/bash
username=myuser
password='my_pass!'
cmd="/usr/bin/someprog -user '$username' -pass '$password'"
eval "$cmd"
Another solution is to write the command to a temporary file and then source the file:
#!/bin/bash
username=myuser
password='my_pass!'
cmd_tmp=$HOME/.tmp.$$
touch $cmd_tmp
chmod 600 $cmd_tmp
cat > $cmd_tmp <<END
/usr/bin/someprog -user '$username' -pass '$password'
END
source $cmd_tmp
rm -f $cmd_tmp
Using eval is simple, but writing a file allows for multiple complex commands.
P.S. Yes, I know that passing passwords on the command-line isn't secure - there is no need for more virtue-signalling comments on that topic.

Listing directories with spaces using Bash in linux

I would like to create a bash script to list all the directories in a directory provided by the user via input, or all the directories in the current directory (given no input).
Here's what I have thus far, but when I execute it I encounter two problems.
1) The script completely ignores my input. The file is located on my desktop but when I type in "home" as the input, the script simply prints the directories of the Desktop (current directory).
2) The directories are printed on their own lines (intended) but each word in a folder name is treated as its own folder, i.e. "this folder" is printed as:
this
folder
Here's the code I have so far:
#!/bin/bash
echo -n "Enter a directory to load files: "
read d
if [ $d="" ]; #if input is blank, assume d = current directory
then d=${PWD##*/}
for i in $(ls -d */);
do echo ${i%%/};
done
else #otherwise, print sub-directories of given directory
for i in $(ls -d */);
do echo ${i%%/};
done
fi
Also in your response please explain your answer as I'm very new to bash.
Thanks for looking, I appreciate your time.
EDIT: Thanks to John1024's answer, I came up with the following:
#!/bin/bash
echo -n "Enter a directory to load files: "
IFS= read d
ls -1 -d "${d:-.}"/*/
And it does everything I need. Much appreciated!
I believe that this script accomplishes what you want:
#!/bin/sh
ls -1 -d "${1:-.}"/*/
Usage example:
$ bash ./script.sh /usr/X11R6
/usr/X11R6/bin
/usr/X11R6/man
Explanation:
-1 tells ls to print each file/directory on a separate line
-d tells ls to list directories by name instead of their contents
The shell expands ${1:-.} to the first argument to the script if there is one, or . (which means the current directory) if there isn't, as illustrated below.
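In other words, ${1:-.} is the shell's "use a default if the parameter is unset or empty" expansion. A quick illustration at the prompt:
$ set -- /usr/X11R6   # pretend the script received this argument
$ echo "${1:-.}"
/usr/X11R6
$ set --              # no arguments at all
$ echo "${1:-.}"
.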
Enhancement
The above script displays a / at the end of each directory name. If you don't want that, we can use sed to remove trailing slashes from the output:
#!/bin/sh
ls -1d "${1:-.}"/*/ | sed 's|/$||'
Revised Version of Your Script
Starting with your script, some simplifications can be made:
#!/bin/bash
echo -n "Enter a directory to load files: "
IFS= read d
d=${d:-$PWD}
for i in "$d"/*/
do
echo "${i%%/}"
done
Notes:
IFS= read d
Normally, leading and trailing white space are stripped before the input is assigned to d. By setting IFS to an empty value, however, leading and trailing white space are preserved. Thus this will work even in the pathologically strange case where the user specifies a directory whose name begins or ends with white space.
If the user enters a backslash, the shell will try to process it as an escape. If you don't like that, use IFS= read -r d and backslashes will be treated as normal characters, not escapes.
d=${d:-$PWD}
If the user supplied a value for d, this leaves it unchanged. If not, this assigns it the value of $PWD.
for i in "$d"/*/
This will loop over every subdirectory of $d and will correctly handle subdirectory names with spaces, tabs, or any other odd character.
By contrast, consider:
for i in $(ls -d */)
After ls executes here, the shell will split up the output into individual words. This is called "word splitting" and is why this form of the for loop should be avoided.
Notice the double-quotes in for i in "$d"/*/. They are there to prevent word splitting on $d.
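A quick way to see the difference (a sketch; "my dir" is just a hypothetical subdirectory with a space in its name):
$ mkdir -p "my dir" other
$ for i in $(ls -d */); do echo "[$i]"; done
[my]
[dir/]
[other/]
$ for i in */; do echo "[$i]"; done
[my dir/]
[other/]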

How to test linux variable within sed file?

I have a sed file that contains a few substitutions; it is executed on a file using the following syntax:
sed -f mysedfile file.txt > fixed_file.txt
I would like to test a system variable and depending what that variable contains, execute different sed operations on file.txt.
Would it be possible to put this logic into mysedfile?
Thank you for the help.
Perl was explicitly created to get around limitations of sed and awk. The -p mode runs a script for each line in the file. You can put it on the commandline:
perl -p -e "s/foo/\$ENV{'HOME'}/e" < files.txt
Or move the script to a file (you can remove the '\' before the $)
perl -p file.pl < files.txt
Or make the first line of your script like this so you can run it directly.
#!/usr/bin/perl -p
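If you would rather keep using sed, another option is to do the test in the shell that invokes sed and pick the sed script there; a sketch, where MODE, prod.sed and dev.sed are hypothetical names:
#!/bin/bash
# Pick a sed script based on an environment variable
if [ "$MODE" = "production" ]; then
    sed -f prod.sed file.txt > fixed_file.txt
else
    sed -f dev.sed file.txt > fixed_file.txt
fi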
