How to create directories automatically in Linux?

I have a file named temp.txt that contains the following content:
https://abcdef/12345-xyz
https://ghifdfg/5426525-abc
I need to create directories automatically in Linux using only the number part from each line in the file.
So the output should be two directories created, named 12345 and 5426525.
Any approach on how to do this would be helpful.
This is code I found on the internet; in this code, new directories are created from the names of files that start with BR and W0:
for file in {BR,W0}*.*; do
    dir=${file%%.*}
    mkdir -p "$dir"
    mv "$file" "$dir"
done

Assuming each URL is of the form
http[s]://any/symbols/some_digits-some_letters
then you can indeed use the simple prefix and suffix modifiers in shell variable expansion.
${x##*/} expands to the suffix part of x that starts after the last slash /.
${y%%-*} expands to the prefix part of y before the first -.
while read -r x ; do
    y=${x##*/}
    z=${y%%-*}
    mkdir "$z"
done < temp.txt
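As a quick illustration (just a sketch you can try in an interactive shell), here is what those two expansions produce for the first sample line:
x='https://abcdef/12345-xyz'
y=${x##*/}    # 12345-xyz  (everything after the last /)
z=${y%%-*}    # 12345      (everything before the first -)
echo "$z"     # 12345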

Related

Linux bash: How to use a result of a wildcard as a file name in a copy command

I'm writing a Linux script to copy files from a folder structure into one folder. I want to use a varying folder name as the prefix of the file name.
My current script looks like this, but I can't seem to find a way to use the folder name from the wildcard as the file name:
for f in /usr/share/storage/*/log/myfile.log*; do cp "$f" /myhome/docs/log/myfile.log; done
My existing folder structure/files are as follows, and I want the files copied as:
/usr/share/storage/100/log/myfile.log --> /myhome/docs/log/100.log
/usr/share/storage/100/log/myfile.log.1 --> /myhome/docs/log/100.log.1
/usr/share/storage/102/log/myfile.log --> /myhome/docs/log/102.log
/usr/share/storage/103/log/myfile.log --> /myhome/docs/log/103.log
/usr/share/storage/103/log/myfile.log.1 --> /myhome/docs/log/103.log.1
/usr/share/storage/103/log/myfile.log.2 --> /myhome/docs/log/103.log.2
You could use a regular expression match to extract the desired component, but it is probably easier to simply change to /usr/share/storage so that the desired component is always the first one on the path.
Once you do that, it's a simple matter of using various parameter expansion operators to extract the parts of paths and file names that you want to use.
cd /usr/share/storage
for f in */log/myfile.log*; do
    pfx=${f%%/*}             # 100, 102, etc
    dest=$(basename "$f")    # myfile.log, myfile.log.1, ...
    dest=$pfx.${dest#*.}     # 100.log, 100.log.1, ...
    cp -- "$f" /myhome/docs/log/"$dest"
done
One option is to wrap the for loop in another loop:
for d in /usr/share/storage/*; do
    dir="$(basename "$d")"
    for f in "$d"/log/myfile.log*; do
        file="$(basename "$f")"
        # test we found a file - glob might fail
        [ -f "$f" ] && cp "$f" /myhome/docs/log/"${dir}.${file}"
    done
done
for f in /usr/share/storage/*/log/myfile.log*; do cp "$f" "$(echo "$f" | sed -re 's%^/usr/share/storage/([^/]*)/log/myfile(\.log.*)$%/myhome/docs/log/\1\2%')"; done
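If you would rather not run sed once per file, roughly the same renaming can be done with plain parameter expansion (a sketch in the spirit of the first answer, assuming the fixed /usr/share/storage layout):
for f in /usr/share/storage/*/log/myfile.log*; do
    rest=${f#/usr/share/storage/}    # e.g. 100/log/myfile.log.1
    num=${rest%%/*}                  # e.g. 100
    ext=${f##*/myfile}               # e.g. .log.1
    cp -- "$f" /myhome/docs/log/"$num$ext"
done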

Bash script to get all files with desired extensions

I'm trying to write a bash script that, if I pass it a text file containing some extensions and a folder, returns an output file with the list of all files that match the desired extensions, searching recursively in all sub-directories.
The folder is my second parameter; the extension list file is my first parameter.
I have tried:
for i in $1 ; do
find . -name $2\*.$i -print>>result.txt
done
but it doesn't work.
As noted in a comment:
It is not a good idea to write to a hard-coded file name.
The given example only fixes the code from the OP's question.
Yes, of course, it is even better to call it with
x.sh y . > blabla
and remove the file name from the script itself. But my intention is not to fix the question...
The following bash script, named x.sh,
#!/bin/bash
echo -n > result.txt                          # delete old content
while read -r i; do                           # read a line (an extension) from the file
    find "$2" -name "*.$i" -print >> result.txt   # for every extension, do a find
done < "$1"                                   # read from the file named by the first cmdline arg
with a text file named y with the following content
txt
sh
and called with:
./x.sh y .
results in a file result.txt whose contents are:
a.txt
b.txt
x.sh
OK, let's give some additional hints, as gathered from the comments:
If the result file should not collect any other content from other runs of the script, it can be simplified to:
#!/bin/bash
while read -r i; do                 # read a line (an extension) from the file
    find "$2" -name "*.$i" -print   # for every extension, do a find
done < "$1" > result.txt            # extensions come from $1; all output goes to result.txt
And as already mentioned:
The hard-coded result.txt could be removed and the call can be something like
./x.sh y . > result.txt
Give this one-liner command a try.
Replace /mydir with the folder to search.
Change the list of extensions passed as argument to the egrep command:
find /mydir -type f | egrep "[.]txt|[.]xml" >> result.txt
In the egrep pattern, each extension is separated with |.
The . character must be written as [.] (or escaped as \.) so that it matches a literal dot.
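One possible refinement (not part of the original answer): anchoring each extension with $ ensures it only matches at the very end of a file name, e.g.:
find /mydir -type f | egrep "[.]txt$|[.]xml$" >> result.txt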

Listing directories with spaces using Bash in Linux

I would like to create a bash script to list all the directories in a directory provided by the user via input, or all the directories in the current directory (given no input).
Here's what I have thus far, but when I execute it I encounter two problems.
1) The script completely ignores my input. The file is located on my desktop but when I type in "home" as the input, the script simply prints the directories of the Desktop (current directory).
2) The directories are printed on their own lines (intended), but each word in a folder name is treated as its own folder, i.e. a folder named "this folder" is printed as:
this
folder
Here's the code I have so far:
#!/bin/bash
echo -n "Enter a directory to load files: "
read d
if [ $d="" ]; #if input is blank, assume d = current directory
then d=${PWD##*/}
for i in $(ls -d */);
do echo ${i%%/};
done
else #otherwise, print sub-directories of given directory
for i in $(ls -d */);
do echo ${i%%/};
done
fi
Also in your response please explain your answer as I'm very new to bash.
Thanks for looking, I appreciate your time.
EDIT: Thanks to John1024's answer, I came up with the following:
#!/bin/bash
echo -n "Enter a directory to load files: "
IFS= read d
ls -1 -d "${d:-.}"/*/
And it does everything I need. Much appreciated!
I believe that this script accomplishes what you want:
#!/bin/sh
ls -1 -d "${1:-.}"/*/
Usage example:
$ bash ./script.sh /usr/X11R6
/usr/X11R6/bin/
/usr/X11R6/man/
Explanation:
-1 tells ls to print each file/directory on a separate line
-d tells ls to list directories by name instead of their contents
The shell expands ${1:-.} to the first argument to the script if there is one, or to . (which means the current directory) if there isn't.
Enhancement
The above script displays a / at the end of each directory name. If you don't want that, we can use sed to remove trailing slashes from the output:
#!/bin/sh
ls -1d "${1:-.}"/*/ | sed 's|/$||'
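With the same /usr/X11R6 argument as above, the output should then look something like:
$ bash ./script.sh /usr/X11R6
/usr/X11R6/bin
/usr/X11R6/man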
Revised Version of Your Script
Starting with your script, some simplifications can be made:
#!/bin/bash
echo -n "Enter a directory to load files: "
IFS= read d
d=${d:-$PWD}
for i in "$d"/*/
do
echo ${i%%/}
done
Notes:
IFS= read d
Normally, leading and trailing white space are stripped before the input is assigned to d. By setting IFS to an empty value, however, leading and trailing white space will be preserved. Thus this will work even in the pathologically strange case where the user specifies a directory whose name begins or ends with white space.
If the user enters a backslash, the shell will try to process it as an escape. If you don't like that, use IFS= read -r d and backslashes will be treated as normal characters, not escapes.
d=${d:-$PWD}
If the user supplied a value for d, this leaves it unchanged. If they didn't, it assigns $PWD to d.
for i in "$d"/*/
This will loop over every subdirectory of $d and will correctly handle subdirectory names with spaces, tabs, or any other odd character.
By contrast, consider:
for i in $(ls -d */)
After ls executes here, the shell will split up the output into individual words. This is called "word splitting" and is why this form of the for loop should be avoided.
Notice the double-quotes in for i in "$d"/*/. They are there to prevent word splitting on $d.
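To see the difference in practice, imagine a subdirectory literally named this folder (a small sketch of the failure mode described above):
mkdir -p "this folder"
for i in $(ls -d */); do echo "$i"; done   # prints "this" and "folder/" on separate lines
for i in */; do echo "$i"; done            # prints "this folder/" as a single entry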

Copy text from multiple files, same names to different path in bash (linux)

I need help copying content from various files to others (same name and format, different path).
For example, $HOME/initial/baby.desktop has text which I need to write into $HOME/scripts/baby.desktop. This is very simple for a single file, but I have 2500 files in $HOME/initial/ and the same number in $HOME/scripts/ with corresponding names (same names and format). I want to append (copy) the content of each file in path A to the end of the corresponding file in path B (which has the same name and format), without erasing the existing content of the file in path B.
Example: content from $HOME/initial/*.desktop goes to the final $HOME/scripts/*.desktop. I tried the following, but it doesn't work:
cd $HOME/initial/
for i in $( ls *.desktop ); do egrep "Icon" $i >> $HOME/scripts/$i; done
Firstly, I would back up $HOME/initial and $HOME/scripts, because there is lots of scope for people misunderstanding your question. Like this:
cd $HOME
tar -cvf initial.tar initial
tar -cvf scripts.tar scripts
That will put all the files in $HOME/initial into a single tarfile called initial.tar and all the files in $HOME/scripts into a single tarfile called scripts.tar.
Now for your question... in general, if you want to put the contents of FileB onto the end of FileA, the command is
cat FileB >> FileA
Note the DOUBLE ">>" which means "append" rather than single ">" which means overwrite.
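As a quick illustration of the difference (a sketch using throwaway files):
echo one > FileA
echo two > FileB
cat FileB >> FileA   # FileA now contains "one" followed by "two"
cat FileB >  FileA   # FileA would now contain only "two" (overwritten)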
So, I think you want to do this:
cd $HOME/initial/baby.desktop
cat SomeFile >> $HOME/scripts/baby.desktop/SomeFile
where SomeFile is the name of any file you choose to test with. I would check that it has worked and then, if you are happy with that, go ahead and run the same command inside a loop:
cd $HOME/initial/baby.desktop
for SOURCE in *
do
    DESTINATION="$HOME/scripts/baby.desktop/$SOURCE"
    echo Appending "$SOURCE" to "$DESTINATION"
    #cat "$SOURCE" >> "$DESTINATION"
done
When the output looks correct, remove the "#" at the start of the penultimate line and run it again.
I solved it; if anyone wants to learn how, the solution is very simple:
Using sed
I only needed the matching (pattern) line, e.g. Icon=/usr/share/some_picture.png, from $HOME/initial/example.desktop copied to the other file with the same name and format, $HOME/scripts/example.desktop, but I had a lot of .desktop files (2500 files):
cd $HOME/initial
STRING_LINE=`grep -l -R "Icon=" *.desktop`
for i in $STRING_LINE; do sed -ne '/Icon=/ p' $i >> $HOME/scripts/$i ; done
_________
If you only need to copy the whole file to the other file with the same name and format:
Using cat
cd $HOME/initial
STRING_LINE=`grep -l -R "Icon=" *.desktop`
for i in $STRING_LINE; do cat $i >> $HOME/scripts/$i ; done

How to remove the extension of a file?

I have a folder that is full of .bak files and some other files also. I need to remove the extension of all .bak files in that folder. How do I make a command which will accept a folder name and then remove the extension of all .bak files in that folder ?
Thanks.
To remove a string from the end of a BASH variable, use the ${var%ending} syntax. It's one of a number of string manipulations available to you in BASH.
Use it like this:
# Run in the same directory as the files
for FILENAME in *.bak; do mv "$FILENAME" "${FILENAME%.bak}"; done
That works nicely as a one-liner, but you could also wrap it as a script to work in an arbitrary directory:
# If we're passed a parameter, cd into that directory. Otherwise, do nothing.
if [ -n "$1" ]; then
cd "$1"
fi
for FILENAME in *.bak; do mv "$FILENAME" "${FILENAME%.bak}"; done
Note that while quoting your variables is almost always a good practice, the for FILENAME in *.bak is still dangerous if any of your filenames might contain spaces. Read David W.'s answer for a more-robust solution, and this document for alternative solutions.
There are several ways to remove file suffixes:
In bash and Kornshell, you can use shell variable filtering. Search for ${parameter%word} in the bash man page for complete information. Basically, # is a left filter and % is a right filter. You can remember this because # is to the left of %.
If you use a double filter (i.e. ## or %%), you are filtering on the biggest match. If you use a single filter (i.e. # or %), you are filtering on the smallest match.
What matches is filtered out and you get the rest of the string:
file="this/is/my/file/name.txt"
echo ${file#*/} #Matches "this/" and will print out "is/my/file/name.txt"
echo ${file##*/} #Matches "this/is/my/file/" and will print out "name.txt"
echo ${file%/*} #Matches "/name.txt" and will print out "this/is/my/file"
echo ${file%%/*} #Matches "/is/my/file/name.txt" and will print out "this"
Notice this is a glob match and not a regular expression match! If you want to remove a file suffix:
file_sans_ext=${file%.*}
The .* will match the period and all characters after it. Since it is a single %, it will match the smallest glob on the right side of the string. If the filter can't match anything, the result is the same as your original string.
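For example, if the name contains no period at all, the filter matches nothing and you simply get the original string back:
file="name.txt"; echo "${file%.*}"   # name
file="README";   echo "${file%.*}"   # README (no match, string unchanged)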
You can verify a file suffix with something like this:
if [ "${file}" != "${file%.bak}" ]
then
echo "$file is a type '.bak' file"
else
echo "$file is not a type '.bak' file"
fi
Or you could do this:
file_suffix=${file##*.}
echo "My file is a file '.$file_suffix'"
Note that the period is removed from the file suffix, which is why it is added back in the echo above.
Next, we will loop:
find . -name "*.bak" -print0 | while read -d $'\0' file
do
echo "mv '$file' '${file%.bak}'"
done | tee find.out
The find command finds the files you specify. The -print0 separates the names of the files with a NUL character -- which is one of the few characters not allowed in a file name. The -d $'\0' means that your input separator is the NUL character. See how nicely the find -print0 and the read -d $'\0' work together?
You should almost never use the for file in $(ls *.bak) method. This will fail if the files have any white space in their names.
Notice that this command doesn't actually move any files. Instead, it produces a find.out file with a list of all the file renames. You should always do something like this when you do commands that operate on massive amounts of files just to be sure everything is fine.
Once you've determined that all the commands in find.out are correct, you can run it like a shell script:
$ bash find.out
rename .bak '' *.bak
(rename is in the util-linux package)
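Note that on Debian/Ubuntu systems the default rename is usually the Perl-based one, which takes an expression rather than the util-linux syntax; there the equivalent call would be something like:
rename 's/\.bak$//' *.bak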
Caveat: there is no error checking:
#!/bin/bash
cd "$1"
for i in *.bak ; do mv -f "$i" "${i%%.bak}" ; done
You can always use the find command to get all the subdirectories as well:
for FILENAME in `find . -name "*.bak"`; do mv --force "$FILENAME" "${FILENAME%.bak}"; done
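If the file names may contain spaces, a safer variant (a sketch that hands the results straight to a small shell via find -exec) would be:
find . -name "*.bak" -exec sh -c 'for f; do mv -f "$f" "${f%.bak}"; done' sh {} +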
