I am trying to write a bash script (display) that will allow me to access a directory, list the files, and then display the content of all of the files. So far I am able to access the directory and list the files.
#!/bin/bash
# Check for folder name
if [ "$#" -ne 1 ]; then
    echo "Usage: display [folder name]"
    exit 1
fi
# Check if it is a directory
if [ ! -d "$1" ]; then
    echo "Not a valid directory"
    exit 2
fi
# Look at the directory
target=$1
echo "In Folder: $target"
for entry in `ls $target`; do
    echo $entry
done
So if I use the command ./display [directory], it will list the files. I want to display the contents of all of the files as well, but I am stuck. Any help would be appreciated, thanks!
Use find to find files. Use less to display files interactively or cat otherwise.
find "$target" -type f -exec less {} \;
I think a loop similar to your "look at the directory" loop would suffice, but using the cat command instead of ls.
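A minimal sketch of that idea, reusing the question's $target variable and skipping anything that is not a regular file:
for entry in "$target"/*; do
    # print a small header, then the file's contents
    if [ -f "$entry" ]; then
        echo "=== $entry ==="
        cat "$entry"
    fi
done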
I have ~10,000 directories. Most of them have a similarly named text file.
I would like to take these .txt files and copy them to a folder in the main directory, ALL_RESULTS. How can I accomplish this? What I have is below:
for d in *_directories/; do
    # go into directory
    cd "$d"
    # check if file exists using wildcard, then copy it into ALL_RESULTS
    # and print the name of the directory
    if ls *SCZ_PGC3_GWAS.sumstats.gz*.txt 1> /dev/null 2>&1; then
        cp *SCZ_PGC3_GWAS.sumstats.gz*.txt ../ALL_RESULTS && echo "$d"
    # if the file does not exist, print the name of the directory we're in
    else
        echo "$d"
        echo "files do not exist"
        cd ..
    fi
done
I keep getting errors saying the directories themselves don't exist. What am I doing wrong?
All relative paths are interpreted relative to the directory you are in (the "current working directory"). So imagine you cd into the first directory; now you are inside it. Then the loop continues and you try to cd into the second directory, but that directory is not there, because it is a sibling, not a child, of where you now are. That is why the directory "does not exist": you have to go back up one level for each directory you cd into.
So you need to cd .. at the end of your loop (not only in the else branch) to go back to the directory you started from.
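A sketch of the corrected loop, same logic as the question's, with the cd .. moved out so it runs on every iteration:
for d in *_directories/; do
    cd "$d"
    # copy matching files into ALL_RESULTS, or report that none exist
    if ls *SCZ_PGC3_GWAS.sumstats.gz*.txt 1> /dev/null 2>&1; then
        cp *SCZ_PGC3_GWAS.sumstats.gz*.txt ../ALL_RESULTS && echo "$d"
    else
        echo "$d"
        echo "files do not exist"
    fi
    # always go back up before the next iteration
    cd ..
done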
I have ~10,000 directories. ... I would like to take these .txt files and move them to a folder in the main directory, ALL_RESULTS
If you don't need to output anything, just use find for that with a proper regex. Doing ls and cd in a loop will be very slow. Something along the lines of:
find . -maxdepth 2 -type f -regex '\./.*_directories/.*SCZ_PGC3_GWAS.sumstats.gz.*\.txt' -exec cp {} ALL_RESULTS \;
You can also add -v to cp to see what it copies.
You're missing
shopt -s nullglob
and don't parse ls output:
#!/bin/bash
shopt -s nullglob
for d in *_directories/; do
    # check if files exist using the wildcard, then copy them into ALL_RESULTS
    # and print the name of the directory
    files=( "$d"/*SCZ_PGC3_GWAS.sumstats.gz*.txt )
    if (( ${#files[@]} )); then
        cp "${files[@]}" ALL_RESULTS && echo "$d"
    # if no file exists, print the name of the directory we're in
    else
        echo "$d"
        echo "files do not exist"
    fi
done
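(With nullglob set, a pattern that matches nothing expands to an empty list instead of staying as the literal pattern text, so the files array is simply empty when no matching file exists.)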
I need help finishing a script to rename folders and files.
eg: my current folders and files like below:
Gideon/gideon_lisha/Gideon_samuel/Gideon_nathan.xml
Gideon/lisha_gideon/Gideon_noah.xml
...
I want a shell command to rename them like below:
Liang/Liang_lisha/Liang_samuel/Liang_nathan.xml
Liang/lisha_Liang/Liang_noah.xml
...
I tried:
#!/bin/bash
path=$1
filename=$2
newfilename=$3
echo "We are finding '$filename' under the folder '$path'"
count=1
for i in `find $path -iname "*$filename*"`
do
    newpath=`echo $i | sed "s/$filename/$newfilename/g"`
    sudo mv "$i" "$newpath"
    echo "${count}: Renaming $i to $newpath"
    let count++
done
but the script stops at:
Liang/gideon_lisha/Gideon_samuel/Gideon_nathan.xml
because it has already changed the parent folder name, so it cannot find the next path. I do not know how to make the script run from the innermost paths outward instead of from the outermost inward.
Finally, I found the answer:
#!/bin/bash
path=$1
filename=$2
newfilename=$3
echo "We are finding '$filename' under the folder '$path'"
count=1
for i in `find $path -iname "*$filename*" | tac`
do
    newpath=`echo $i | sed "s#\(.*\)$filename#\1$newfilename#i"`
    sudo mv "$i" "$newpath"
    echo "${count}: Renaming $i to $newpath"
    let count++
done
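(The tac reverses find's output so deeper paths are renamed before their parents - find -depth would give the same inner-to-outer order - and the greedy \(.*\) makes sed replace only the last occurrence of the name, so parent directories keep their old names until their own turn comes.)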
Really, thanks @susbarbatus!
I have a directory that has symbolic links - some of them point to files and some of them to directories - how do I identify the ones pointing to a directory in a shell script (without any prejudice to names, of course)?
Use the ls -L option to follow symlinks.
This is the script that I used to differentiate directories with contents from files/empty directories.
(This will work only if the directory has some contents -- in my case I am only interested in directories that have some content anyway, so I am happy - but do suggest better options if any.)
cd dir
for i in `ls`
do
    if [ 1 -lt `ls -l -L $i | wc -l` ]
    then
        echo "$i is a non empty directory"
    else
        echo "$i is either an empty directory or a file"
    fi
done
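If you only need to pick out the symlinks that point to directories, a simpler sketch is to combine the -h test (is a symlink) with the -d test (target is a directory, following the link); "dir" here is just a placeholder path:
for entry in dir/*; do
    # -h: entry is a symlink; -d: what it points to is a directory
    if [ -h "$entry" ] && [ -d "$entry" ]; then
        echo "$entry is a symlink to a directory"
    fi
done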
How to change the extension of all *.dat files in a directory to *.txt? The shell script should take the directory name as an argument, and can take multiple directories as arguments. It should print a log of the command result in append mode, with date and timestamp.
Bash can do all of the heavy lifting such as extracting the extension and tagging on a new one. For example:
for file in "$1"/*.dat ; do mv "$file" "${file%.*}.txt" ; done
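(${file%.*} strips the shortest trailing match of ".*", i.e. the old extension, and .txt is appended in its place.)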
Batch File Rename By File Extension in Unix
# change .htm files to .html
for file in *.htm ; do mv $file `echo $file | sed 's/\(.*\.\)htm/\1html/'` ; done
# change .html files to .htm
for file in *.html ; do mv $file `echo $file | sed 's/\(.*\.\)html/\1htm/'` ; done
#change .html files to .shtml
for file in *.html ; do mv $file `echo $file | sed 's/\(.*\.\)html/\1shtml/'` ; done
#change .html files to php
for file in *.html ; do mv $file `echo $file | sed 's/\(.*\.\)html/\1php/'` ; done
so ==>
# change .dat files to .txt
for file in *.dat ; do mv $file `echo $file | sed 's/\(.*\.\)dat/\1txt/'` ; done
#!/bin/bash
for d in "$@"; do
    for f in "$d"/*.dat; do
        echo "$(date) $(mv -v "$f" "${f%.dat}.txt")"
    done
done
Output redirection should be done by the shell when running the script.
Argument validity checks are left out.
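For example, the appended, timestamped log the question asks for could be produced at invocation time (the script and log names here are just placeholders):
./change_ext.sh dir1 dir2 >> rename.log 2>&1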
Simple script:
#!/bin/bash
if [ $# -lt 1 ]; then
    echo "Usage: `basename $0` <any number of directories, space separated>"
    exit 85 # exit status for wrong number of arguments
fi
for directories
do
    for files in "$directories"/*.dat; do
        echo "$(date) $(mv -v "$files" "${files%.dat}.txt")"
    done
done
The first for loop, written without an in list, by default loops over the positional parameters ("$@"), i.e. the command-line arguments passed.
Following Pben's solution: if your filename contains blank spaces, you should put double quotation marks around the variable, like the following:
#remove the space in the file name
#example file name: 19-014-0100.mp3 .mp3
#result file name: 19-014-0100.mp3.mp3
$ for file in *.mp3 ;
do target=`echo "$file" | sed 's/ //g'`;
echo "$target";
mv "$file" "$target";
done;
#remove the duplicate file extension in file name
#example file name:19-014-0100.mp3.mp3
#result file name:19-014-0100.mp3
$ for file in *.mp3 ;
do target=`echo "$file" | sed 's/\.mp3\.mp3$/.mp3/g'`;
echo "$target";
mv "$file" "$target";
done;
To rename (changing the extension) all my html files to epub files, I use this command line:
find . -name "*.html*" -exec rename -v 's/\.html$/\.epub/i' {} \;
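(Note that this relies on the Perl-style rename that takes an s/// expression - on some distributions it is packaged as prename or file-rename - not the util-linux rename, which uses a different, non-regex syntax.)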
The script first finds the names of the files with the given extension. It strips the extension from each name, then escapes spaces with backslashes so they survive the trip through the shell. Then the 'mv' command is executed. The '.temp' folder is used to hide the intermediate files from the user in a GUI.
#!/bin/sh
if [ $# -ne 3 ]
then
    echo "Usage: ./script folder current_extension modify_extension"
    exit
fi
mkdir .temp
find "$1" -name "*.$2" > .temp/output_1 && sed "s/$2\$//" .temp/output_1 > .temp/output_2 && sed -e "s/[ \t]/\\\ /g" .temp/output_2 > .temp/output_3
while read line
do
    mv -v "$line""$2" "$line""$3"
done < .temp/output_3
rm -rf .temp
The intermediate files are saved inside the '.temp' folder; afterwards the '.temp' folder is removed.
The top voted answer didn't really work for me. I may have been doing something wrong. My scenario was trying to create a file with the original name, but with the date appended to it, along with changing the extension from .xlsx to .csv. This is what worked for me:
csvname=`echo $xlsx |sed 's/\.xlsx//'`"-$now"`echo $xlsx | sed 's/\(.*\.\)xlsx/\.csv/'`
So, for all the .dat files in a directory (without the date addition), you could run something like this:
for i in *.dat
do mv $i `echo $i |sed 's/\.dat//'``echo $i | sed 's/\(.*\.\)dat/\.txt/'`
done
From the above, this section of code just removed the extension:
echo $i |sed 's/\.dat//'
And this section changes the .dat to .txt:
echo $i | sed 's/\(.*\.\)dat/\.txt/'
And by bumping them next to each other, it concatenates the two outputs into the filename. It's like doing this:
mv [filename][.dat] [filename] + [.txt]
Though in my case I captured the STDOUT into a variable instead of passing it straight to the 'mv' command.
The following command changes the file extension from .c to .h:
find . -depth -name "*.c" -exec sh -c 'dname=$(dirname "$1") && fname=$(basename "$1" .c) && mv "$1" "$dname/$fname.h"' _ {} ";"
Change .js files to the .cjs extension recursively:
cd dist # where you place your .js
find . -type f -name "*.js" | while IFS= read -r file; do mv "$file" "${file%.*}.cjs"; done
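If filenames might contain newlines, a null-delimited variant (find . -type f -name "*.js" -print0 | while IFS= read -r -d '' file; do ...; done) is more robust.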