How do I split a parent folder into 2 or more folders without creating subfolders?
For example, split folderA into folderA1 and folderA2, but with both in the same directory rather than as subfolders of folderA.
This is the script I use, but it only ends up creating subfolders:
let fileCount=3000
let dirNum=1
for f in *
do
    [ -d "$f" ] && continue
    [ "$fileCount" -eq 3000 ] && {
        dir=$(printf "%03d" "$dirNum")
        mkdir "$dir"
        let dirNum=$dirNum+1
        let fileCount=0
    }
    mv "$f" "$dir"
    let fileCount=$fileCount+1
done
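A minimal tweak that keeps this structure (a sketch, assuming the script is still run from inside folderA) is to create the target directories one level up via ../, so they end up beside folderA instead of inside it:
let fileCount=3000
let dirNum=1
for f in *
do
    [ -d "$f" ] && continue
    [ "$fileCount" -eq 3000 ] && {
        # ../ puts the new directory next to folderA, not inside it
        dir=$(printf "../%03d" "$dirNum")
        mkdir "$dir"
        let dirNum=$dirNum+1
        let fileCount=0
    }
    mv -- "$f" "$dir"
    let fileCount=$fileCount+1
done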
In the parent directory of folderA, run the following script:
#!/bin/bash
i=0            # counter for current file
j=0            # counter for current directory
batchsize=1000 # size of each batch
find folderA -type f -print0 | while read -r -d $'\0' file
do
    if (( i % batchsize == 0 ))
    then
        (( j++ ))
        mkdir "dir_$j"
    fi
    mv -- "$file" "dir_$j"
    (( i++ ))
done
If all files in folderA have "normal" names, i.e. no whitespace, no glob characters, etc., the script can be written as:
#!/bin/bash
find folderA -maxdepth 2 -type f | xargs -n 1000 | while read files
do
    mkdir dir_$((++i))
    mv $files dir_$i/
done
This is briefer, and also much more performant, since mv is invoked once per batch of up to 1000 files rather than once per file.
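To sanity-check the result afterwards (a small illustrative snippet; dir_* is the naming used by the scripts above), count how many files ended up in each batch directory:
# Show each batch directory and the number of files it received.
for d in dir_*/; do
    printf '%s\t%s\n' "$d" "$(find "$d" -type f | wc -l)"
done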
I need to write a script with a loop that counts the number of files and directories and indicates which is greater and by how much, e.g. "there are 10 more files than directories."
I was trying something like the following, but it just counts the files and directories, and I have no idea how to indicate which is greater. Thanks for any help.
shopt -s dotglob
count=0
for dir in *; do
    test -d "$dir" || continue
    test . = "$dir" && continue
    test .. = "$dir" && continue
    ((count++))
done
echo $count

fcount=0
for fname in *; do
    test -f "$fname" || continue
    ((fcount++))
done
echo $fcount
Here is a recursive dir walk I used for something a while back. Added counting of dirs and files:
#!/bin/sh
# recursive directory walk, counting directories and files as it goes
dir=0
file=0
loop() {
    for i in *
    do
        if [ -d "$i" ]
        then
            dir=$((dir+1))
            cd "$i" || continue   # skip directories we cannot enter
            loop
        else
            file=$((file+1))
        fi
    done
    cd ..
}
loop
echo dirs: $dir, files: $file
Paste it into a script.sh and run it with:
$ sh script.sh
dirs: 1, files: 11
You can use the find command to make things simpler.
The following command will list all the files in the given path:
find "path" -mindepth 1 -maxdepth 1 -type f
Using -type d instead will get you the directories.
Piping find into wc -l will give you the count instead of the actual file and directory names, so:
root="${1:-.}"
files=$( find "$root" -mindepth 1 -maxdepth 1 -type f | wc -l)
dirs=$( find "$root" -mindepth 1 -maxdepth 1 -type d | wc -l)
if [ "$files" -gt "$dirs" ]; then
    echo "there are $((files - dirs)) more files"
elif [ "$files" -lt "$dirs" ]; then
    echo "there are $((dirs - files)) more dirs"
else
    echo "there are the same number of files and dirs"
fi
You could use find to get the number of files/folders in a directory. Use wc -l to count the number of found paths, which you can then use to calculate and show the result:
#!/bin/bash
# Path to search
search="/Users/me/Desktop"
# Get number of files
no_files=$(find "$search" -type f | wc -l )
# Number of folders
no_folders=$(find "$search" -type d | wc -l )
echo "Files: ${no_files}"
echo "Folders: ${no_folders}"
# Calculate diff
diff=$((no_files - no_folders))
# Check if there are more folders or files
if [ "$diff" -gt 0 ]; then
    echo "There are $diff more files than folders!"
else
    diff=$((diff * -1)) # Invert negative number to positive (-10 -> 10)
    echo "There are $diff more folders than files!"
fi
Files: 13
Folders: 2
There are 11 more files than folders!
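A compact single-pass alternative (a sketch, assuming GNU find for -printf): emit one type letter per entry and let awk do both the counting and the comparison:
# %y prints the entry type: f for regular files, d for directories.
find . -mindepth 1 -maxdepth 1 -printf '%y\n' |
awk '/f/ { f++ } /d/ { d++ } END {
    if      (f > d) print "there are " f-d " more files than directories"
    else if (d > f) print "there are " d-f " more directories than files"
    else            print "same number of files and directories"
}'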
I have quite a simple script I'd like to write just using bash.
Given a folder with 0..N *.XML files, I want to sort those by name and remove N-10 files (leaving the last 10 in place).
I've been tinkering with find and tail/head but couldn't figure out a way:
find /mnt/user/Temporary/1 -name *.xml | tail -n +10 | rm
Please read carefully: it is about keeping the last 10. If there are 10 or fewer files, none should be deleted!
EDIT:
As someone closed, but did not reopen, the question, here is the solution for those getting here with the same question.
#!/bin/bash
files=()
while IFS= read -r -d $'\0'; do
    files+=("$REPLY")
done < <(find . -name '*.xml' -print0 | sort -z)
Limit=$((${#files[@]}-10))
count=0
while [ $Limit -gt $count ]; do
    rm "${files[count]}"
    let count=count+1
done
Maybe some linux "pro" can optimize it or give it some parameters (like limit, path and file pattern) to make it callable anywhere.
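A sketch of such a parameterized version (assuming GNU find/sort for -print0/-z and bash 4.4+ for mapfile -d ''); path, pattern and keep-count become optional arguments:
#!/usr/bin/env bash
# Usage: keep_last.sh [dir] [pattern] [keep]
dir=${1:-.}           # directory to clean up
pattern=${2:-*.xml}   # file-name pattern
keep=${3:-10}         # number of files (last by name) to keep

# Collect matching files, NUL-delimited and sorted by name.
# -maxdepth 1 looks only at the folder itself; drop it to recurse.
mapfile -d '' -t files < <(find "$dir" -maxdepth 1 -type f -name "$pattern" -print0 | sort -z)

total=${#files[@]}
(( total <= keep )) && exit 0        # keep-count or fewer files: delete nothing

rm -- "${files[@]:0:total-keep}"     # delete everything except the last $keep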
EDIT: New answer
#!/usr/bin/env bash
files=$(find *.xml | wc -l)
[ "$files" -lt 10 ] && echo "Files are less than 10..." && exit 1
count=$(($files-10))
for i in $(find *.xml | sort -V); do
    [ $count -eq 0 ] && echo "Done" && exit 1
    rm $i
    ((count--))
done
$files stores the number of *.xml files in the folder
if the number is less than 10, exit
set a counter to the number of files to delete
loop through each file in order
if the counter is equal to 0, exit
if not, remove the file and decrement the counter
I found a script, here on StackOverflow, which I modified a little.
The script sorts all files from a folder into subfolders, each subfolder having only 8 files. But I have files with names like 0541_2pcs.jpg, where 2pcs means two pieces (copies).
So I would like the script to take this into account when dividing files among the folders, e.g. a folder may have 6 files plus this 0541_2pcs.jpg, which effectively counts as 2 files, and so on, depending on the number indicated in the file's name.
This is the script:
cd photos;
dir="${1-.}"
x="${1-8}"
let n=0
let sub=0
while IFS= read -r file ; do
    if [ $(bc <<< "$n % $x") -eq 0 ] ; then
        let sub+=1
        mkdir -p "Page-$sub"
        n=0
    fi
    mv "$file" "Page-$sub"
    let n+=1
done < <(find "$dir" -maxdepth 1 -type f)
Can anyone help me?
You can add a test for whether a file name contains the string "2pcs" via code like [ ${file/2pcs/x} != $file ]. In the script shown below, if that test succeeds then n is incremented a second time. Note that if the eighth file tested for inclusion in a directory is a multi-piece file, you will end up with one file too many in that directory. That could be handled by additional testing the script doesn't do (a sketch of such a test follows after the script). Also note that there is no good reason for your script to call bc to do a modulus, and setting both dir and x from the first parameter doesn't work; my script uses two parameters.
#!/bin/bash
# To test pagescript, create dirs four and five in a tmp dir.
# In four, say
#   for i in {01..30}; do touch $i.jpg; done
#   for i in 03 04 05 11 16 17 18; do mv $i.jpg ${i}_2pcs.jpg; done
# Then in the tmp dir, say
#   rm -rf Page-*; cp four/* five/; ../pagescript five; ls -R
#cd photos; # Leave commented out when testing script as above
dir="${1-.}" # Optional first param is source directory
x=${2-8}     # Optional 2nd param is files-per-result-dir
let n=x
let sub=0
for file in $(find "$dir" -maxdepth 1 -type f)
do  # Uncomment next line to see values as script runs
    #echo file is $file, n is $n, sub is $sub, dir is $dir, x is $x
    if [ $n -ge $x ]; then
        let sub+=1
        mkdir -p Page-$sub
        n=0
    fi
    mv "$file" Page-$sub
    [ ${file/2pcs/x} != $file ] && let n+=1
    let n+=1
done
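A sketch of that additional testing (same no-whitespace-in-names assumption as above): compute each file's "weight" first, and open a new Page-N directory whenever adding it would exceed x, so no directory ends up overfilled.
#!/bin/bash
dir="${1-.}"   # source directory
x=${2-8}       # capacity (in "pieces") per Page-N directory
n=$x
sub=0
for file in $(find "$dir" -maxdepth 1 -type f)
do
    w=1
    [ "${file/2pcs/x}" != "$file" ] && w=2   # a *_2pcs* file counts as two pieces
    if [ $((n + w)) -gt "$x" ]; then
        sub=$((sub+1))
        mkdir -p "Page-$sub"
        n=0
    fi
    mv "$file" "Page-$sub"
    n=$((n + w))
done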
I want to move all my files older than 1000 days, which are distributed over various subfolders, from /home/user/documents into /home/user/archive. The command I tried was
find /home/user/documents -type f -mtime +1000 -exec rsync -a --progress --remove-source-files {} /home/user/archive \;
The problem is, that (understandably) all files end up being moved into the single folder /home/user/archive. However, what I want is to re-construct the file tree below /home/user/documents inside /home/user/archive. I figure this should be possible by simply replacing a string with another somehow, but how? What is the command that serves this purpose?
Thank you!
I would take this route instead of rsync:
Change directories so we can deal with relative path names instead of absolute ones:
cd /home/user/documents
Run your find command and feed the output to cpio in copy-pass mode (-p), asking it to make hard links (-l) to the files, create the leading directories (-d), and preserve attributes (-m). The -print0 and -0 options use nulls as record terminators to correctly handle file names with whitespace in them. The -l option to cpio uses links instead of actually copying the files, so very little additional space is used (just what is needed for the new directories).
find . -type f -mtime +1000 -print0 | cpio -dumpl0 /home/user/archive
Re-run your find command and feed the output to xargs rm to remove the originals:
find . -type f -mtime +1000 -print0 | xargs -0 rm
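If you would rather stay with rsync, its -R/--relative option rebuilds the source-relative path under the destination; a hedged sketch (run from inside the source tree so the reproduced paths start there):
cd /home/user/documents || exit 1
# With --relative, rsync recreates the ./sub/dir/ part of each path under
# the destination before copying, then removes each transferred source file.
find . -type f -mtime +1000 \
    -exec rsync -aR --progress --remove-source-files {} /home/user/archive/ \;
# Optional: clean up directories left empty in the source tree.
find . -type d -empty -delete
Note that this starts one rsync process per file, so for very large trees the cpio approach above will be noticeably faster.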
Here's a script too.
#!/bin/bash
[ -n "$BASH_VERSION" ] && [[ BASH_VERSINFO -ge 4 ]] || {
echo "You need Bash version 4.0 to run this script."
exit 1
}
# SOURCE=/home/user/documents/
# DEST=/home/user/archive/
SOURCE=$1
DEST=$2
declare -i DAYSOLD=10
declare -a DIRS=()
declare -A DIRS_HASH=()
declare -a FILES=()
declare -i E=0
# Check directories.
[[ -n $SOURCE && -d $SOURCE && -n $DEST && -d $DEST ]] || {
    echo "Source or destination directory may be invalid."
    exit 1
}
# Format source and dest variables properly:
SOURCE=${SOURCE%/}
DEST=${DEST%/}
SOURCE_LENGTH=${#SOURCE}
# Copy directories first.
echo "Creating directories."
while read -r FILE; do
    DIR=${FILE%/*}
    if [[ -z ${DIRS_HASH[$DIR]} ]]; then
        PARTIAL=${DIR:SOURCE_LENGTH}
        if [[ -n $PARTIAL ]]; then
            TARGET=${DEST}${PARTIAL}
            echo "'$TARGET'"
            mkdir -p "$TARGET" || (( E += $? ))
            chmod --reference="$DIR" "$TARGET" || (( E += $? ))
            chown --reference="$DIR" "$TARGET" || (( E += $? ))
            touch --reference="$DIR" "$TARGET" || (( E += $? ))
            DIRS+=("$DIR")
        fi
        DIRS_HASH[$DIR]=.
    fi
done < <(exec find "$SOURCE" -mindepth 1 -type f -mtime +"$DAYSOLD")
# Copy files.
echo "Copying files."
while read -r FILE; do
    PARTIAL=${FILE:SOURCE_LENGTH}
    cp -av "$FILE" "${DEST}${PARTIAL}" || (( E += $? ))
    FILES+=("$FILE")
done < <(exec find "$SOURCE" -mindepth 1 -type f -mtime +"$DAYSOLD")
# Remove old files.
if [[ E -eq 0 ]]; then
    echo "Removing old files."
    # Note: DIRS is removed recursively, which also deletes any newer
    # files still left inside those directories.
    rm -fr "${DIRS[@]}" "${FILES[@]}"
else
    echo "An error occurred during copy. Not removing old files."
    exit 1
fi
I am trying to create a script that will find all the files in a folder that contain, for example, the string 'J34567' and process them. Right now I can process all the files in the folder with my code; however, the script does not limit itself to the files containing that string, it processes every file in the folder. In other words, even when I run the script with a string name, e.g. ./bashexample 'J37264', it still processes all the files, including those without that string. Here is my code below:
#!/bin/bash
directory=$(cd `dirname .` && pwd)
tag=$1
echo find: $tag on $directory
find $directory . -type f -exec grep -sl "$tag" {} \;
for files in $directory/*$tag*
do
    for i in *.std
    do
        /projects/OPSLIB/BCMTOOLS/sumfmt_linux < $i > $i.sum
    done
    for j in *.txt
    do
        egrep "device|Device|\(F\)" $i > $i.fail
    done
    echo $files
done
Kevin, you could try the following:
#!/bin/bash
directory='/home'
tag=$1
for files in $directory/*$tag*
do
    if [ -f "$files" ]
    then
        #do your stuff
        echo "$files"
    fi
done
where directory is your directory name (you could pass it as a command-line argument too) and tag is the search term you are looking for in a filename.
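A small sketch of the variant that passes the directory on the command line as well (here it defaults to /home when no second argument is given):
#!/bin/bash
# Usage: ./script.sh TAG [DIRECTORY]
tag=$1
directory=${2:-/home}
for files in "$directory"/*"$tag"*
do
    if [ -f "$files" ]
    then
        #do your stuff
        echo "$files"
    fi
done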
The following script will give you the list of files that contain the given pattern (inside the file, not in the file name).
#!/bin/bash
directory=`pwd`
tag=$1
for file in $(find "$directory" -type f -exec grep -l "$tag" {} \;); do
    echo "$file"
    # use $file for further operations
done
What is the relevance of the .std, .txt, .sum and .fail files to the files containing the given pattern?
It is assumed there are no special characters, spaces, etc. in the file names.
If that is not the case, the following should help in working around those:
How can I escape white space in a bash loop list?
Capturing output of find . -print0 into a bash array
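A whitespace-safe variant of the loop above (a sketch, assuming GNU grep for -Z/--null) reads the matching file names NUL-delimited instead of relying on word splitting:
#!/bin/bash
directory=$PWD
tag=$1
# grep -lZ prints each matching file name terminated by a NUL byte,
# which read -d '' consumes safely even if names contain spaces.
while IFS= read -r -d '' file; do
    echo "$file"
    # use "$file" for further operations
done < <(find "$directory" -type f -exec grep -slZ "$tag" {} +)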
There are multiple issues in your script.
The following is not required just to set the operating directory to the current directory:
directory=$(cd `dirname .` && pwd)
find is executed twice over the current directory because both $directory and . are given as paths:
find $directory . -type f -exec grep -sl "$tag" {} \;
Also, the result/output of the above find is not used in the for loop.
The for loop runs over the files in $directory (subdirectories not considered) whose file names contain the given pattern:
for files in $directory/*$tag*
The following for loop will run for all .txt files in the current directory, but will produce only one output file due to the use of $i from the previous loop:
for j in *.txt
do
    egrep "device|Device|\(F\)" $i > $i.fail
done
This is my temporary solution. Please check if it follows your intention.
#!/bin/bash
directory=$(cd `dirname .` && pwd) ## Should this be just directory=$PWD ?
tag=$1
echo "find: $tag on $directory"
find "$directory" . -type f -exec grep -sl "$tag" {} \; ## Shouldn't you add -maxdepth 1 ? Are the files listed here the one that should be processed in the loop below instead?
for file in "$directory"/*"$tag"*; do
if [[ $file == *.std ]]; then
/projects/OPSLIB/BCMTOOLS/sumfmt_linux < "$file" > "${file}.sum"
fi
if [[ $file == *.txt ]]; then
egrep "device|Device|\(F\)" "$file" > "${file}.fail"
fi
echo "$file"
done
Update 1
#!/bin/bash
directory=$PWD ## Change this to another directory if needed.
tag=$1
echo "find: $tag on $directory"
while IFS= read -rd $'\0' file; do
    echo "$file"
    case "$file" in
        *.std)
            /projects/OPSLIB/BCMTOOLS/sumfmt_linux < "$file" > "${file}.sum"
            ;;
        *.txt)
            egrep "device|Device|\(F\)" "$file" > "${file}.fail"
            ;;
        *)
            echo "Unexpected match: $file"
            ;;
    esac
done < <(exec find "$directory" -maxdepth 1 -type f -name "*${tag}*" \( -name '*.std' -or -name '*.txt' \) -print0) ## Change or remove the maxdepth option as wanted.
Update 2
#!/bin/bash
directory=$PWD
tag=$1
echo "find: $tag on $directory"
while IFS= read -rd $'\0' file; do
    echo "$file"
    /projects/OPSLIB/BCMTOOLS/sumfmt_linux < "$file" > "${file}.sum"
done < <(exec find "$directory" -maxdepth 1 -type f -name "*${tag}*" -name '*.std' -print0)
while IFS= read -rd $'\0' file; do
    echo "$file"
    egrep "device|Device|\(F\)" "$file" > "${file}.fail"
done < <(exec find "$directory" -maxdepth 1 -type f -name "*${tag}*" -name '*.txt' -print0)