Removing an additional leading slash if one exists - linux

I'm writing a script where I have a default directory for outputting data, or the user can specify a directory. The problem is, I don't know how to do this elegantly. Here is what I have:
#!/bin/bash
OUTPUT="$1"
DEFAULT_DIR=/Default/Dir/For/Me
if [ -z "$OUTPUT" ]
then
    OUTPUT="${DEFAULT_DIR}"
else
    OUTPUT="${OUTPUT}${DEFAULT_DIR}"
fi
echo "$OUTPUT"
If I do this ./script / I get //Default/Dir/For/Me
If I do this ./script /home I get /home/Default/Dir/For/Me
If I do this ./script /home/ I get /home//Default/Dir/For/Me
Is there any way to make this pretty and handle the first scenario properly? Obviously, the first scenario won't work because the directory // does not exist.

(Just to make it clear from the comments)
What I suggest is to pipe the output through tr -s "/", which squeezes runs of repeated slashes into one:
$ echo "/home//Default/Dir/For/Me" | tr -s "/"
/home/Default/Dir/For/Me
$ echo "/home//Default/Dir/For/M//////////e" | tr -s "/"
/home/Default/Dir/For/M/e

Here's another solution, without having to fork another process:
OUTPUT=${OUTPUT//\/\///}
That replaces all occurrences of // with / in the string, using bash parameter expansion.
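For illustration, the join plus the slash-squeezing can be wrapped in a small helper. This is a hypothetical function (the name join_path is mine, not from the question) using only parameter expansion:

```shell
# Hypothetical helper (name is illustrative): join a user-supplied prefix
# with a default directory, then collapse any run of slashes to one.
join_path() {
    local joined="$1/$2"
    # ${var//\/\///} replaces non-overlapping // pairs per pass, so a run
    # of three or more slashes needs more than one pass; loop until clean.
    while [[ $joined == *//* ]]; do
        joined=${joined//\/\///}
    done
    printf '%s\n' "$joined"
}
```

With this, join_path / /Default/Dir/For/Me prints /Default/Dir/For/Me, and join_path /home/ /Default/Dir/For/Me prints /home/Default/Dir/For/Me.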

Related

Bash: Cut current path until a certain folder

lets say I have three bash files in different directories:
/a/b/c/d/e/f/script1.sh
/a/bb/c/d/script2.sh
/aa/b/c/d/e/f/g/h/script3.sh
If I call $(pwd) I get the path of the current directory. Is there a way to somehow "crop" this path until a certain folder? In the following an example is shown if the certain folder would be called "c":
In the case of script1.sh I would like to have the path: /a/b/c
In the case of script2.sh I would like to have the path: /a/bb/c
In the case of script3.sh I would like to have the path: /aa/b/c
Thank you for your help
I assume what you want is parameter expansion:
$ path="/a/b/c/d/e/f/script1.sh"
$ echo "${path#*/c}"
/d/e/f/script1.sh
Edit
Inverted:
$ path="/a/b/c/d/e/f/script1.sh"
$ echo "${path%/d*}"
/a/b/c
Regards!
Use cut command:
echo '/a/b/c/d/e/f/script1.sh' | cut -d '/' -f 1-4
echo '/a/bb/c/d/script2.sh' | cut -d '/' -f 1-4
echo '/aa/b/c/d/e/f/g/h/script3.sh' | cut -d '/' -f 1-4
Bash regex:
#!/bin/bash
[[ "$PWD" =~ (^/[^/]+/[^/]+/[^/]+)(.*) ]] && echo ${BASH_REMATCH[1]}
It returns the first three components of your path (if there are three components). You could also set the path to, for example, $pwd and:
$ pwd=/a/b/c/d/e/f/script1.sh
$ [[ "$pwd" =~ (^/[^/]+/[^/]+/[^/]+)(.*) ]] && echo ${BASH_REMATCH[1]}
/a/b/c
Also, note @123's comment below; that is the correct way, my mind was off. Thank you, sir.
Given a situation like this:
$ pwd
/a/b/c/d/cc/e/c/f
$ FOLDER=c
You could use shell parameter expansion on the $PWD variable like this:
$ echo "${PWD%/${FOLDER}/*}/${FOLDER}"
/a/b/c/d/cc/e/c
$ echo "${PWD%%/${FOLDER}/*}/${FOLDER}"
/a/b/c
The difference is in the single or double %. With %, the result of the expansion is the value of ${PWD} with the shortest matching suffix removed; with %%, the longest matching suffix is removed.
Just make sure you enclose the ${FOLDER} variable in forward slashes, because otherwise it could match directories that merely contain the name (like the directory cc in this example).
Because you want the folder you were searching for included, it is appended at the end of the string, prefixed with a forward slash.
Try this:
[sahaquiel@sahaquiel-PC h]$ pwd
/home/sahaquiel/a/b/c/d/e/f/g/h
[sahaquiel@sahaquiel-PC h]$ pwd | cut -d"/" -f1-6
/home/sahaquiel/a/b/c
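Putting the %% expansion to work, here is a hypothetical helper (crop_to is my name for it, not from the question) that crops a path at the first occurrence of a given folder name; it assumes the folder is followed by at least one more path component:

```shell
# Hypothetical helper: crop a path at the first occurrence of a folder,
# keeping that folder. Quoting "$folder" inside the pattern keeps it
# literal, and the surrounding slashes stop it matching folders like "cc".
crop_to() {
    local path="$1" folder="$2"
    printf '%s\n' "${path%%/"$folder"/*}/$folder"
}
```

For example, crop_to /a/b/c/d/e/f/script1.sh c prints /a/b/c.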

Linux : check if something is a file [ -f not working ]

I am currently trying to list the size of all files in a directory which is passed as the first argument to the script, but the -f test is not working. Am I missing something?
Here is the code :
for tmp in "$1/*"
do
    echo $tmp
    if [ -f "$tmp" ]
    then
        num=`ls -l $tmp | cut -d " " -f5`
        echo $num
    fi
done
How would I fix this problem?
I think the error is your glob syntax: globs do not expand inside quotes, whether single or double.
for tmp in "$1"/*; do
..
Do the above to expand the glob outside the quotes.
There are a couple more improvements possible in your script:
Double-quote your variables to prevent word-splitting, e.g. echo "$tmp"
Backtick command substitution (``) is legacy syntax with several issues; use the $(..) syntax instead.
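Folding those fixes together, a sketch of the corrected loop (list_sizes is a hypothetical name; wc -c is used here instead of cutting a column out of ls -l, which is fragile):

```shell
# Sketch applying the fixes above: glob expanded outside the quotes,
# variables double-quoted, $(..) instead of backticks, and wc -c
# (byte count) instead of parsing ls -l output.
list_sizes() {
    local dir="$1" f
    for f in "$dir"/*; do
        if [ -f "$f" ]; then
            printf '%s %s\n' "$(wc -c < "$f")" "$f"
        fi
    done
}
```

Called as list_sizes /some/dir, it prints one "size path" line per regular file.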
The [ -f "filename" ] condition checks that a file exists and is a regular file. For reference, here is the list of test operators:
-b FILE
FILE exists and is block special
-c FILE
FILE exists and is character special
-d FILE
FILE exists and is a directory
-e FILE
FILE exists
-f FILE
FILE exists and is a regular file
-g FILE
FILE exists and is set-group-ID
-G FILE
FILE exists and is owned by the effective group ID
I suggest you try with [ -e "filename" ] and see if it works.
Cheers!
At least on the command line, this piece of script does it:
for tmp in *; do echo $tmp; if [ -f $tmp ]; then num=$(ls -l $tmp | sed -e 's/ */ /g' | cut -d ' ' -f5); echo $num; fi; done;
When cut uses a space as delimiter, it cuts at every space character. Sometimes there is more than one space between columns, so the field count can easily go wrong. I'm guessing that in your case you just happened to echo a space, which looks like nothing. The sed command squeezes the extra spaces into one.

Linux Bash Script - match lower case path in argument with actual filesystem path

I have a Linux script that gets an argument passed to it that originates from MSDOS (actually DOSEMU running MS-DOS 6.22). The argument that gets passed is case-insensitive (DOS didn't do case), but of course Linux is case-sensitive.
I am trying to get from the following passed argument
/media/zigg4/vol1/database/scan/stalbans/docprint/wp23452.wpd
to
/media/zigg4/vol1/Database/SCAN/STALBANS/DOCPRINT/Wp23452.WPD
I do not know the actual case sensitive path so I need to somehow determine it from the argument that is passed to the script. I have absolutely no idea where to start with this so any help is greatly appreciated.
edited for extra information and clarity
UPDATE
Thanks to the answer by @anubhava I used the following:
#!/bin/bash
copies=1
if [ ! -z "$2" ]; then
copies=$2
fi
find / -readable -ipath "$1" 2>&1 | grep -v "Permission denied" | while IFS= read -r FILE; do
    lpr -o Collate=True -#"$copies" -sP "$FILE"
done
Works great :-)
You can use the -ipath option of find for case-insensitive path matching:
# assuming $arg contains path argument supplied
find . -ipath "*$arg*"
I would employ awk for this (of course without salary)
#!/bin/bash
awk -varg="$1" -vactual="/media/zigg4/vol1/Database/SCAN/STALBANS/DOCPRINT/Wp23452.WPD" 'BEGIN{
if (tolower(arg)==tolower(actual)){
printf "Argument matches actual filepath\n"
}
}'
Run the script as
./script "/media/zigg4/vol1/database/scan/stalbans/docprint/wp23452.wpd"
Something like this:
if [ "$( echo $real | tr A-Z a-z )" = "$lower" ]; then
echo "matchy"
else
echo "no is matchy"
fi
Some notes:
tr is doing a to-lower translate.
The $( ... ) bit is placing the result of the enclosed command into a string.
You could do the translate on either side if you aren't sure if your "lower case" string can be trusted...
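If bash 4 or later is available, the same comparison can be done without forking tr, using the built-in ${var,,} case-conversion expansion. A minimal sketch (the paths are the ones from the question):

```shell
#!/usr/bin/env bash
# ${var,,} (bash 4+) expands to the value of var lowercased,
# so no tr subprocess is needed for the comparison.
real="/media/zigg4/vol1/Database/SCAN/STALBANS/DOCPRINT/Wp23452.WPD"
lower="/media/zigg4/vol1/database/scan/stalbans/docprint/wp23452.wpd"
if [ "${real,,}" = "$lower" ]; then
    echo "matchy"
else
    echo "no is matchy"
fi
```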

How to remove all non-single files from a Linux directory?

I have duplicated files in a directory on a Linux machine, listed like this:
ltulikowski@lukasz-pc:~$ ls -1
abcd
abcd.1
abcd.2
abdc
abdc.1
acbd
I want to remove all files which aren't single, so as a result I should have:
ltulikowski@lukasz-pc:~$ ls -1
acbd
The function uses extglob, so before execution, set extglob: shopt -s extglob
rm_if_dup_exist(){
    arr=()
    for file in *.+([0-9]); do
        base=${file%.*}
        if [[ -e $base ]]; then
            arr+=("$base" "$file")
        fi
    done
    rm -f -- "${arr[@]}"
}
This will also support file names with several digits after the . e.g. abcd.250 is also acceptable.
Usage example with your input:
$ touch abcd abcd.1 abcd.2 abdc abdc.1 acbd
$ rm_if_dup_exist
$ ls
acbd
Please notice that if, for example, abcd.1 exist but abcd does not exist, it won't delete abcd.1.
Here is one way to do it:
for f in *.[0-9]; do rm "${f%.*}"*; done
This may report errors, since some files will be deleted more than once (abcd in your example). If the numbered versions always start with .1 you can restrict the match to that.
You can use:
while read -r f; do
rm "$f"*
done < <(printf "%s\n" * | cut -d. -f1 | uniq -d)
printf, cut, and uniq are used to find the duplicate entries (the part before the dot) in the current directory.
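To see what the pipeline selects before actually deleting anything, here is a dry-run sketch in a scratch directory, echoing the matches instead of removing them:

```shell
# Dry run in a throwaway directory: print what the loop would remove.
tmpd=$(mktemp -d) && cd "$tmpd" || exit 1
touch abcd abcd.1 abcd.2 abdc abdc.1 acbd
while read -r f; do
    # "$f"* expands to the base name plus all its numbered copies
    echo "would remove:" "$f"*
done < <(printf "%s\n" * | cut -d. -f1 | uniq -d)
```

This prints "would remove: abcd abcd.1 abcd.2" and "would remove: abdc abdc.1", leaving acbd untouched.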
The command
rm *.*
Should do the trick if I understand you correctly
Use ls to confirm first

How to loop a shell script across a specific file in all directories?

Shell Scripting sed Errors:
Cannot view /home/xx/htdocs/*/modules/forms/int.php
/bin/rm: cannot remove `/home/xx/htdocs/tmp.26758': No such file or directory
I am getting an error in my shell script. I am not sure this for loop will work; it is intended to climb a large directory tree of PHP files and prepend a function in every int.php file with a little validation. Don't ask me why this wasn't centralized/OO, but it wasn't. I copied the script as best I could from here: http://www.cyberciti.biz/faq/unix-linux-replace-string-words-in-many-files/
#!/bin/bash
OLD="public function displayFunction(\$int)\n{"
NEW="public function displayFunction(\$int)\n{if(empty(\$int) || !is_numeric(\$int)){return '<p>Invalid ID.</p>';}"
DPATH="/home/xx/htdocs/*/modules/forms/int.php"
BPATH="/home/xx/htdocs/BAK/"
TFILE="/home/xx/htdocs/tmp.$$"
[ ! -d $BPATH ] && mkdir -p $BPATH || :
for f in $DPATH
do
    if [ -f $f -a -r $f ]; then
        /bin/cp -f $f $BPATH
        sed "s/$OLD/$NEW/g" "$f" > $TFILE && mv $TFILE "$f"
    else
        echo "Error: Cannot view ${f}"
    fi
done
/bin/rm $TFILE
Do wildcards like this even work? Can I check in every subdirectory across a tree like this? Do I need to precode an array and loop over that? How would I go about doing this?
Also is, the $ in the PHP code breaking the script at all?
I am terribly confused.
Problems in your code:
You cannot use sed to replace multiple lines this way.
You are using / in OLD, which is also the delimiter of the s/// sed command. This won't work.
[ ! -d $BPATH ] && mkdir -p $BPATH || : is horrible; use mkdir -p "$bpath" 2>/dev/null instead.
Yes, wildcards like this will work, but only because your string has no spaces.
Double-quote your variables, or your code will be very dangerous.
Single-quote your strings, or you won't understand what you are escaping.
Do not use capital variable names; you could accidentally overwrite a built-in shell variable.
Do not rm a file that does not exist.
Your backups will be overwritten, as all the files are named int.php.
This assumes you are using GNU sed; I'm not used to other sed flavors. If you are not using GNU sed, replacing the \n with a literal newline inside the string should work.
Fixed Code
#!/usr/bin/env bash
old='public function displayFunction(\$int)\n{'
old=${old//,/\\,} # escaping eventual commas
# the \$ is for escaping the sed-special meaning of $ in the search field
new='public function displayFunction($int)\n{if(empty($int) || !is_numeric($int)){return "<p>Invalid ID.</p>";}\n'
new=${new//,/\\,} # escaping eventual commas
dpath='/home/xx/htdocs/*/modules/forms/int.php'
for f in $dpath; do
    if [ -r "$f" ]; then
        sed -i.bak ':a;N;$!ba;'"s,$old,$new,g" "$f"
    else
        echo "Error: Cannot view $f" >&2
    fi
done
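The ':a;N;$!ba;' prefix in that sed call is what makes the multi-line match possible: it defines a label, appends each following line to the pattern space, and branches back until the last line, so the whole file sits in the pattern space and a \n in the search pattern can match real newlines. A minimal GNU sed illustration:

```shell
# GNU sed: slurp all lines into the pattern space, then substitute
# across the embedded newline between "foo" and "bar".
printf 'foo\nbar\nbaz\n' | sed ':a;N;$!ba;s/foo\nbar/MATCHED/'
```

This prints MATCHED on the first line and baz on the second.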
Links
Replace newline in sed
Inplace replace with sed with a backup
Using a different sed substitution separator
Existency not necessary if readable
Bash search and replace inside a variable
Bash guide
