Getting file names with a bash script - Linux

I am trying to get all the file names from a directory "blabla",
and only from that directory, without its sub-directories,
and I need all those names without the X first names and Y last names,
and without the path (only the file names themselves).
I tried
#!/bin/bash
find blabla | sort
but it gave me all the files, including those in the subfolders,
and it gave me the FULL names (with the path),
and I have no idea how to read the list without the X first and Y last names.
I tried searching online and reading man find but didn't find anything.

Use the following command:
find . -maxdepth 1 -type f -exec basename {} ';' | \
sort | \
awk 'BEGIN { X = 2; Y = 2 } { lines[NR] = $0 } END { for (i=1 + X; i<=NR - Y; i++) print lines[i] }'
Set X and Y to how many file names you want to skip at the beginning and at the end of the list respectively.
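If you prefer not to edit the awk program every time, a small variant (my sketch, not part of the original answer) passes X and Y in with awk's -v option; here X=3 and Y=1 are example values:
find . -maxdepth 1 -type f -exec basename {} ';' | \
sort | \
awk -v X=3 -v Y=1 '{ lines[NR] = $0 } END { for (i = 1 + X; i <= NR - Y; i++) print lines[i] }'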

Try this (substitute Y and X with actual values):
cd blabla && find . -maxdepth 1 -type f | head -n -Y | tail -n +(X+1)
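For example, with X=2 and Y=3, a concrete sketch of the same idea (assuming GNU find, whose -printf '%f\n' prints just the file name without the leading ./, plus an added sort so that "first" and "last" are well defined) is:
cd blabla && find . -maxdepth 1 -type f -printf '%f\n' | sort | head -n -3 | tail -n +3
head -n -3 drops the last 3 names and tail -n +3 starts printing at line 3, i.e. it skips the first 2.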

Related

Linux - is there a way to get the file size of a directory BUT only including the files that have a last modified / creation date of x?

As per the title, I am trying to find a way to get the size of a directory (using du) but only counting the files in that directory that have been created (or modified) after a specific date.
Is it something that can be done using the command line?
Thanks :)
From @Bodo's comment. Using GNU find:
find directory/ -type f -newermt 2021-11-25 -printf "%s\t %f\n" | \
awk '{s += $1 } END { print s }' | \
numfmt --to=iec-i
find looks in directory/ (change this),
looks for files (-type f)
that have a modified time newer than 2021-11-25 (-newermt, change this)
and prints each file's size (%s) and name (%f) on its own line.
awk adds up all the sizes from those lines ({ s += $1 })
and prints the total (END { print s }).
numfmt --to=iec-i formats the byte total into a human-readable value.
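If you need to run this for different directories or cut-off dates, here is a small parameterized sketch of the same pipeline (my variant; it still assumes GNU find, awk and numfmt, and prints only the total):
dir=directory/        # change this
since=2021-11-25      # change this
find "$dir" -type f -newermt "$since" -printf '%s\n' |
  awk '{ s += $1 } END { print s }' |
  numfmt --to=iec-i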

How do I use perl-rename to replace . with _ on Linux recursively, except for extensions

I am trying to rename some files and folders recursively to format the names, and figured find and perl-rename might be the tools for it. I've managed to find most of the commands I want to run, but for the last two:
I would like for every . in a directory name to be replaced by _ and
for every . but the last in a file name to be replaced with _
So that ./my.directory/my.file.extension becomes ./my_directory/my_file.extension.
For the second task, I don't even have a command.
For the first task, I have the following command:
find . -type d -depth -exec perl-rename -n "s/([^^])\./_/g" {} +
Which renames ./the_expanse/Season 1/The.Expanse.S01E01.1080p.WEB-DL.DD5.1.H264-RARBG to ./the_expanse/Season 1/Th_Expans_S01E0_1080_WEB-D_DD__H264-RARBG, so it doesn't work, because the character before each . is eaten.
If I instead type:
find . -type d -depth -exec perl-rename -n "s/\./_/g" {} +, it renames ./the_expanse/Season 1/The.Expanse.S01E01.1080p.WEB-DL.DD5.1.H264-RARBG into _/the_expanse/Season 1/The_Expanse_S01E01_1080p_WEB-DL_DD5_1_H264-RARBG, which doesn't work either because the leading . of the current directory is replaced by _.
If someone could give me a solution to:
replace every . in a directory name by _ and
replace every . but the last in a file name with _
I'd be very grateful.
First, tackling the directories with a . in their names.
# find all directories and remove the './' part of each and save to a file
$ find -type d | perl -lpe 's#^(\./|\.)##g' > list-all-dir
#
# dry run
# just print the result without actual renaming
$ perl -lne '($old=$_) && s/\./_/g && print' list-all-dir
#
# if it looked fine, rename them
$ perl -lne '($old=$_) && s/\./_/g && rename($old,$_)' list-all-dir
The s/\./_/g part matches every . and replaces it with _.
Second, tackling the files: rename every . except the last one before the file extension.
# find all *.txt files (replace .txt with your own match) and save the list
$ find -type f -name \*.txt | perl -lpe 's#^(\./|\.)##g' > list-all-file
#
# dry run
$ perl -lne '($old=$_) && s/(?:(?!\.txt$)\.)+/_/g && print ' list-all-file
#
# if it looked fine, rename them
$ perl -lne '($old=$_) && s/(?:(?!\.txt$)\.)+/_/g && rename($old,$_) ' list-all-file
The (?:(?!\.txt$)\.)+ part matches every . except the one that starts the final .txt extension.
NOTE
Here I used .txt; replace it with your own match. The second command will rename input like this:
/one.one/one.one/one.file.txt
/two.two/two.two/one.file.txt
/three.three/three.three/one.file.txt
to such an output:
/one_one/one_one/one_file.txt
/two_two/two_two/one_file.txt
/three_three/three_three/one_file.txt
and you can test the regex with an online regex tester.
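If you would rather drive perl-rename directly from find, as in your original attempts, here is a sketch of one way to do it (my own, untested on your tree; it assumes GNU find and a perl-rename that takes a Perl s/// expression). The lookaheads confine the substitution to the last path component, so the leading ./ and the parent directories are never touched:
# dry run (-n): replace every . in the last path component of each directory name
find . -mindepth 1 -depth -type d -exec perl-rename -n 's{\.(?=[^/]*$)}{_}g' {} +
# dry run: replace every . in a file name except the last one (the extension separator)
find . -type f -exec perl-rename -n 's{\.(?=[^/]*\.)}{_}g' {} +
Drop -n once the printed renames look right; run the directory pass first, then re-run the file pass on the renamed tree.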

Write a specific text/string into a text file for each file present in a specified folder

I am trying to prepare a txt file containing a specific string of text (X tab Y) per line for each file in a folder matching my search parameter.
So far I've got:
find ./directory/*.extension -type f | wc -l
This gives me the number of files with *.extension, but I can't find a way to print a line (X separated by a tab from Y) once for each file that find matches.
I.e. for 3 files matching my search, the txt file should contain:
X Y
X Y
X Y
Sorry if this is too basic, but any help would be appreciated.
I would do:
find ./directory/ -name "*.extension" -type f -exec echo -e "X\tY" \; > yourfile.txt
This will execute the echo -e "X\tY" command for each file found by find and redirect the output to the file yourfile.txt.
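A close alternative (a sketch assuming GNU find, which sidesteps echo -e portability quirks) is to let find print the line itself with -printf; it expands \t and \n on its own, so no extra process is spawned per file:
find ./directory/ -name '*.extension' -type f -printf 'X\tY\n' > yourfile.txt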

Finding the largest file for each directory

I am currently stuck with listing the largest file of each subdirectory in a specific directory.
I succeeded in listing the largest file in a directory by entering the following command (in Debian):
find . -type f -printf "%p\n" | ls -rS |tail -1
I expected that putting the command into a shell file (searchHelper.sh) and running the following would return the expected file names for each subdirectory:
find -type d -execdir ./searchHelper.sh {} +
Unfortunately it does not return the largest file for each subdirectory, but something else.
May I get a hint for getting the filename (with absolute path) of the largest file of each subdirectory?
Many thanks in advance
Give this safe and tested version a try:
find "$(pwd)" -depth -type f -printf "d%h\0%s %p\0" | awk -v RS="\0" '
/^d/ {
directoryname=substr($0,2);
}
/^[0-9]/ {
if (!biggestfilesizeindir[directoryname] || biggestfilesizeindir[directoryname] < $1) {
biggestfilesizeindir[directoryname]=$1;
biggestfilesizefilenameindir[directoryname]=substr($0,index($0," ")+1);
}
}
END {
for (directoryname in biggestfilesizefilenameindir) {
print biggestfilesizefilenameindir[directoryname];
}
}'
This is safe even if the names contain special chars: ' " \n etc.
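If you can assume GNU tools and that no file name contains a newline or a tab, a shorter sketch (mine, not part of the answer above) gets the same result by sorting on size and keeping the last entry seen per directory:
find "$(pwd)" -type f -printf '%h\t%s\t%p\n' |
  sort -t $'\t' -k1,1 -k2,2n |
  awk -F '\t' '{ biggest[$1] = $3 } END { for (d in biggest) print biggest[d] }'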

Getting directory list with serial number in shell script

I want to list all the subdirectories in a main directory with serial numbers.
Example:
If a directory A contains B, C, D and E as subdirectories, then the output should look like:
1 B
2 C
3 D
4 E
ls | nl
where ls lists the directory contents and nl numbers the lines.
You could use a loop:
i=0
for f in A/*/; do
echo "$((++i)) $f"
done
The pattern matches all directories in A. $((++i)) increments the variable $i by 1 for each directory that is found.
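A slight variant of the loop (just a sketch) prints only the directory names, matching the desired output, and does nothing at all when A has no subdirectories:
shopt -s nullglob   # with an empty A, the pattern expands to nothing instead of itself
i=0
for f in A/*/; do
  f=${f%/}                   # strip the trailing slash
  echo "$((++i)) ${f#A/}"    # strip the leading "A/" so only the name is printed
done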
Use this:
find * -type d | nl
find * -type d: prints the names of all directories under the current path
nl: adds line numbers to the output
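Note that find * -type d descends into nested subdirectories as well. If only the immediate subdirectories of the current directory should be numbered, a variant of the same idea (assuming GNU find) is:
find . -mindepth 1 -maxdepth 1 -type d | nl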
It depends on what you want to do with the number. If you plan to use it later, and anyone could create or delete directories between looking at the list and using the number, you will be in trouble.
If that is the case, you can use the inode numbers of the directories like this as they are constant and unique across the entire filesystem:
ls -di */
19866918 f/ 19803132 other/ 19705681 save/
On a Mac, you can also do
stat -f '%i %N' */
19866918 f/
19803132 other/
19705681 save/
and I believe the Linux equivalent is
stat -c '%i %N' */
