Open header files in an editor based on content in the corresponding source - Linux

I have several files that have the same name, but a different extension. For example
echo "array" > A.hpp
echo "..." > A.h
echo "content" > B.hpp
echo "..." > B.h
echo "content" > C.hpp
echo "..." > C.h
I want to get a list of *.h files based on some content in the corresponding *.hpp file. In particular I am looking for a one-liner to open them in my editor.
It is fair to assume that for each *.hpp file the corresponding *.h file exists. Also, since they are source files, it may be assumed that the filenames do not contain whitespace.
Current approach
I know how to get a list of *.hpp files based on their content. An approach (but surely not the only or the best) is to
find . -type f -iname '*.hpp' -print | xargs grep -i 'content' | cut -d":" -f1
which gives
./B.hpp
./C.hpp
Opening in my editor is then done by
st `find . -type f -iname '*.hpp' -print | xargs grep -i 'content' | cut -d":" -f1`
But how can I get/open the corresponding *.h files?

You say you want to get a list of *.h files based on some content in the corresponding *.hpp file.
while read -r line ; do
echo "${line%.hpp}.h"
done < <(grep -i 'content' *.hpp| cut -d":" -f1)
BashFAQ 001 recommends using a while loop with the read command to read a data stream.
One-liner as requested
st `while IFS= read -r line ; do echo "${line%.hpp}.h"; done < <(grep -i 'content' *.hpp| cut -d":" -f1)`
If you are dealing with filenames containing whitespace, you need to use printf instead of echo.
st `while IFS= read -r line ; do printf '%q\n' "${line%.hpp}.h"; done < <(grep -i 'content' *.hpp| cut -d":" -f1)`
The %q lets printf format the output so that it can be reused as shell input.
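To see the effect, here is a quick illustration with a hypothetical filename containing a space:
$ printf '%q\n' 'foo bar.h'
foo\ bar.h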
Explanation
You have to read it from back to front. First we grep all files ending in .hpp in the current directory for the string 'content' and cut away everything after the colon, leaving only the filename.
The while loop then reads the output of grep and assigns each filename to the variable line.
Inside the while loop we use bash's parameter expansion to change the file extension from .hpp to .h.
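A quick demonstration of that parameter expansion (the filename is hypothetical):
$ line=./B.hpp
$ echo "${line%.hpp}.h"
./B.h
${line%.hpp} strips the shortest trailing match of .hpp, and .h is then appended.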

Your question still isn't clear but is this all you're trying to do (using GNU awk for gensub())?
$ awk '/content/{print gensub(/[^.]+$/,"h",1,FILENAME)}' *.hpp
B.h
C.h
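To open the matches directly, the output can be fed to the editor the same way as before (a sketch assuming st is your editor command and the filenames contain no whitespace):
st $(awk '/content/{print gensub(/[^.]+$/,"h",1,FILENAME)}' *.hpp)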

Related

Find and delete files, but leave X newest [duplicate]

Is there a simple way, in a pretty standard UNIX environment with bash, to run a command to delete all but the most recent X files from a directory?
To give a bit more of a concrete example, imagine some cron job writing out a file (say, a log file or a tar-ed up backup) to a directory every hour. I'd like a way to have another cron job running which would remove the oldest files in that directory until there are less than, say, 5.
And just to be clear, if there's only one file present, it should never be deleted.
The problems with the existing answers:
inability to handle filenames with embedded spaces or newlines.
in the case of solutions that invoke rm directly on an unquoted command substitution (rm `...`), there's an added risk of unintended globbing.
inability to distinguish between files and directories (i.e., if directories happened to be among the 5 most recently modified filesystem items, you'd effectively retain fewer than 5 files, and applying rm to directories will fail).
wnoise's answer addresses these issues, but the solution is GNU-specific (and quite complex).
Here's a pragmatic, POSIX-compliant solution that comes with only one caveat: it cannot handle filenames with embedded newlines - but I don't consider that a real-world concern for most people.
For the record, here's the explanation for why it's generally not a good idea to parse ls output: http://mywiki.wooledge.org/ParsingLs
ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {}
Note: This command operates in the current directory; to target a directory explicitly, use a subshell ((...)) with cd:
(cd /path/to && ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {})
The same applies analogously to the commands below.
The above is inefficient, because xargs has to invoke rm separately for each filename.
However, your platform's specific xargs implementation may allow you to solve this problem:
A solution that works with GNU xargs is to use -d '\n', which makes xargs consider each input line a separate argument, yet passes as many arguments as will fit on a command line at once:
ls -tp | grep -v '/$' | tail -n +6 | xargs -d '\n' -r rm --
Note: Option -r (--no-run-if-empty) ensures that rm is not invoked if there's no input.
A solution that works with both GNU xargs and BSD xargs (including on macOS) - though technically still not POSIX-compliant - is to use -0 to handle NUL-separated input, after first translating newlines to NUL (0x0) chars., which also passes (typically) all filenames at once:
ls -tp | grep -v '/$' | tail -n +6 | tr '\n' '\0' | xargs -0 rm --
Explanation:
ls -tp prints the names of filesystem items sorted by how recently they were modified, in descending order (most recently modified items first) (-t), with directories printed with a trailing / to mark them as such (-p).
Note: It is the fact that ls -tp always outputs file / directory names only, not full paths, that necessitates the subshell approach mentioned above for targeting a directory other than the current one ((cd /path/to && ls -tp ...)).
grep -v '/$' then weeds out directories from the resulting listing, by omitting (-v) lines that have a trailing / (/$).
Caveat: Since a symlink that points to a directory is technically not itself a directory, such symlinks will not be excluded.
tail -n +6 skips the first 5 entries in the listing, in effect returning all but the 5 most recently modified files, if any.
Note that in order to retain N files (exclude them from deletion), N+1 must be passed to tail -n +; here, +6 retains the 5 most recent.
xargs -I {} rm -- {} (and its variations) then invokes rm on all these files; if there are no matches at all, xargs won't do anything.
xargs -I {} rm -- {} defines placeholder {} that represents each input line as a whole, so rm is then invoked once for each input line, but with filenames with embedded spaces handled correctly.
-- in all cases ensures that any filenames that happen to start with - aren't mistaken for options by rm.
A variation on the original problem, in case the matching files need to be processed individually or collected in a shell array:
# One by one, in a shell loop (POSIX-compliant):
ls -tp | grep -v '/$' | tail -n +6 | while IFS= read -r f; do echo "$f"; done
# One by one, but using a Bash process substitution (<(...),
# so that the variables inside the `while` loop remain in scope:
while IFS= read -r f; do echo "$f"; done < <(ls -tp | grep -v '/$' | tail -n +6)
# Collecting the matches in a Bash *array*:
IFS=$'\n' read -d '' -ra files < <(ls -tp | grep -v '/$' | tail -n +6)
printf '%s\n' "${files[#]}" # print array elements
Remove all but 5 (or whatever number) of the most recent files in a directory.
rm `ls -t | awk 'NR>5'`
(ls -t|head -n 5;ls)|sort|uniq -u|xargs rm
This version supports names with spaces:
(ls -t|head -n 5;ls)|sort|uniq -u|sed -e 's,.*,"&",g'|xargs rm
Simpler variant of thelsdj's answer:
ls -tr | head -n -5 | xargs --no-run-if-empty rm
ls -tr displays all the files, oldest first (-t newest first, -r reverse).
head -n -5 displays all but the 5 last lines (ie the 5 newest files).
xargs rm calls rm for each selected file.
find . -maxdepth 1 -type f -printf '%T@ %p\0' | sort -r -z -n | awk 'BEGIN { RS="\0"; ORS="\0"; FS="" } NR > 5 { sub("^[0-9]*(.[0-9]*)? ", ""); print }' | xargs -0 rm -f
Requires GNU find for -printf, and GNU sort for -z, and GNU awk for "\0", and GNU xargs for -0, but handles files with embedded newlines or spaces.
All these answers fail when there are directories in the current directory. Here's something that works:
find . -maxdepth 1 -type f | xargs -x ls -t | awk 'NR>5' | xargs -L1 rm
This:
works when there are directories in the current directory
tries to remove each file even if the previous one couldn't be removed (due to permissions, etc.)
fails safe when the number of files in the current directory is excessive and xargs would normally screw you over (the -x)
doesn't cater for spaces in filenames (perhaps you're using the wrong OS?)
ls -tQ | tail -n+4 | xargs rm
List filenames by modification time, quoting each filename. Exclude first 3 (3 most recent). Remove remaining.
EDIT after helpful comment from mklement0 (thanks!): corrected -n+3 argument, and note this will not work as expected if filenames contain newlines and/or the directory contains subdirectories.
Ignoring newlines is ignoring security and good coding. wnoise had the only good answer. Here is a variation on his that puts the filenames in an array $x
while IFS= read -rd ''; do
x+=("${REPLY#* }");
done < <(find . -maxdepth 1 -printf '%T@ %p\0' | sort -r -z -n )
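For example, to then keep the 5 newest entries and delete the rest (a sketch building on the loop above; note that x is sorted newest first, and that the find above doesn't filter out directories, so add -type f to it if directories may be present):
(( ${#x[@]} > 5 )) && rm -- "${x[@]:5}"
The slice ${x[@]:5} expands to every element from index 5 onward, i.e. everything except the 5 newest.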
For Linux (GNU tools), an efficient & robust way to keep the n newest files in the current directory while removing the rest:
n=5
find . -maxdepth 1 -type f -printf '%T@ %p\0' |
sort -z -nrt ' ' -k1,1 |
sed -z -e "1,${n}d" -e 's/[^ ]* //' |
xargs -0r rm -f
For BSD, find doesn't have the -printf predicate, stat can't output NULL bytes, and sed + awk can't handle NULL-delimited records.
Here's a solution that doesn't support newlines in paths but that safeguards against them by filtering them out:
#!/bin/bash
n=5
find . -maxdepth 1 -type f ! -path $'*\n*' -exec stat -f '%.9Fm %N' {} + |
sort -nrt ' ' -k1,1 |
awk -v n="$n" -F'^[^ ]* ' 'NR > n {printf "%s%c", $2, 0}' |
xargs -0 rm -f
note: I'm using bash because of the $'\n' notation. For sh you can define a variable containing a literal newline and use it instead.
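For example, in plain sh the newline-exclusion test could look like this (a sketch of the workaround just mentioned):
#!/bin/sh
nl='
'
find . -maxdepth 1 -type f ! -path "*${nl}*" -print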
Solution for UNIX & Linux (inspired from AIX/HP-UX/SunOS/BSD/Linux ls -b):
Some platforms don't provide find -printf, nor stat, nor support NUL-delimited records with stat/sort/awk/sed/xargs. That's why using perl is probably the most portable way to tackle the problem, because it is available by default in almost every OS.
I could have written the whole thing in perl but I didn't. I only use it for substituting stat and for encoding-decoding-escaping the filenames. The core logic is the same as the previous solutions and is implemented with POSIX tools.
note: perl's default stat has a resolution of a second, but starting from perl-5.8.9 you can get sub-second resolution with the stat function of the module Time::HiRes (when both the OS and the filesystem support it). That's what I'm using here; if your perl doesn't provide it then you can remove the -MTime::HiRes=stat from the command line.
n=5
find . '(' -name '.' -o -prune ')' -type f -exec \
perl -MTime::HiRes=stat -le '
foreach (@ARGV) {
@st = stat($_);
if ( @st > 0 ) {
s/([\\\n])/sprintf( "\\%03o", ord($1) )/ge;
print sprintf( "%.9f %s", $st[9], $_ );
}
else { print STDERR "stat: $_: $!"; }
}
' {} + |
sort -nrt ' ' -k1,1 |
sed -e "1,${n}d" -e 's/[^ ]* //' |
perl -l -ne '
s/\\([0-7]{3})/chr(oct($1))/ge;
s/(["\n])/"\\$1"/g;
print "\"$_\"";
' |
xargs -E '' sh -c '[ "$#" -gt 0 ] && rm -f "$@"' sh
Explanations:
For each file found, the first perl gets the modification time and outputs it along with the encoded filename (each newline and backslash character is replaced with the literal \012 or \134 respectively).
Now each timestamp-filename pair is guaranteed to be single-line, so POSIX sort and sed can safely work with this stream.
The second perl decodes the filenames and escapes them for POSIX xargs.
Lastly, xargs calls rm for deleting the files. The sh command is a trick that prevents xargs from running rm when there are no files to delete.
I realize this is an old thread, but maybe someone will benefit from this. This command will find files in the current directory:
for F in $(find . -maxdepth 1 -type f -name "*_srv_logs_*.tar.gz" -printf '%T@ %p\n' | sort -r -n | tail -n+5 | awk '{ print $2; }'); do rm "$F"; done
This is a little more robust than some of the previous answers as it allows you to limit your search domain to files matching expressions. First, find files matching whatever conditions you want. Print those files with the timestamps next to them.
find . -maxdepth 1 -type f -name "*_srv_logs_*.tar.gz" -printf '%T@ %p\n'
Next, sort them by the timestamps:
sort -r -n
Then, knock off the 4 most recent files from the list:
tail -n+5
Grab the 2nd column (the filename, not the timestamp):
awk '{ print $2; }'
And then wrap that whole thing up into a for statement:
for F in $(); do rm $F; done
This may be a more verbose command, but I had much better luck being able to target conditional files and execute more complex commands against them.
If the filenames don't have spaces, this will work:
ls -C1 -t| awk 'NR>5'|xargs rm
If the filenames do have spaces, something like
ls -C1 -t | awk 'NR>5' | sed -e "s/^/rm '/" -e "s/$/'/" | sh
Basic logic:
get a listing of the files in time order, one column
get all but the first 5 (n=5 for this example)
first version: send those to rm
second version: gen a script that will remove them properly
With zsh
Assuming you don't care about present directories and you will not have more than 999 files (choose a bigger number if you want, or create a while loop).
[ 6 -le `ls *(.)|wc -l` ] && rm *(.om[6,999])
In *(.om[6,999]), the . means plain files, o means sort, and m means by modification date, newest first (use a for access time or c for inode change); [6,999] then selects a range of the sorted files, so the first 5 (the newest) are not removed.
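If your zsh accepts negative indices in qualifier subscripts (they follow the same rules as array subscripts), the 999 cap can be avoided; a sketch:
[ 6 -le `ls *(.)|wc -l` ] && rm -- *(.om[6,-1])
Here [6,-1] selects from the 6th-newest file through the last one.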
Adaptation of @mklement0's excellent answer with some parameters and without needing to navigate to the folder containing the files to be deleted...
TARGET_FOLDER="/my/folder/path"
FILES_KEEP=5
ls -tp "$TARGET_FOLDER"**/* | grep -v '/$' | tail -n +$((FILES_KEEP+1)) | xargs -d '\n' -r rm --
[Ref(s).: https://stackoverflow.com/a/3572628/3223785 ]
Thanks! 😉
Found an interesting cmd in Sed One-Liners - Delete last 3 lines - and found it perfect for another way to skin the cat (okay, not really), but here's the idea:
#!/bin/bash
# change the 2 in the sed cmd to the number of files you wish to retain
cd /opt/depot
ls -1 MyMintFiles*.zip > BigList
sed -n -e :a -e '1,2!{P;N;D;};N;ba' BigList > DeList
for i in `cat DeList`
do
echo "Deleted $i"
rm -f "$i"
#echo "File(s) gonzo "
#read junk
done
exit 0
Removes all but the 10 latest (most recent) files:
ls -t1r | head -n $(echo $(ls -1 | wc -l) - 10 | bc) | xargs rm
(ls -1 | wc -l counts the files; ls -t1r lists them oldest first, so head selects everything beyond the 10 newest.)
If there are fewer than 10 files, no file is removed (with BSD head), but you will have:
head: illegal line count -- 0
I needed an elegant solution for busybox (on a router); all xargs or array solutions were useless to me - no such command is available there. find and mtime is not the proper answer as we are talking about 10 items and not necessarily 10 days. Espo's answer was the shortest and cleanest and likely the most universal one.
Errors with spaces and with no files to delete are both simply solved the standard way:
rm "$(ls -td *.tar | awk 'NR>7')" 2>&-
Bit more educational version: we can do it all if we use awk differently. Normally, I use this method to pass (return) variables from awk to the shell. As we read all the time that this cannot be done, I beg to differ: here is the method.
Example for .tar files with no problem regarding the spaces in the filename. To test, replace "rm" with the "ls".
eval $(ls -td *.tar | awk 'NR>7 { print "rm \"" $0 "\""}')
Explanation:
ls -td *.tar lists all .tar files sorted by the time. To apply to all the files in the current folder, remove the "d *.tar" part
awk 'NR>7... skips the first 7 lines
print "rm \"" $0 "\"" constructs a line: rm "file name"
eval executes it
Since we are using rm, I would not use the above command in a script! Wiser usage is:
(cd /FolderToDeleteWithin && eval $(ls -td *.tar | awk 'NR>7 { print "rm \"" $0 "\""}'))
Using the ls -t command will not do any harm on such silly examples as: touch 'foo " bar' and touch 'hello * world'. Not that we ever create files with such names in real life!
Sidenote. If we wanted to pass a variable to the sh this way, we would simply modify the print (simple form, no spaces tolerated):
print "VarName="$1
to set the variable VarName to the value of $1. Multiple variables can be created in one go. This VarName becomes a normal sh variable and can be normally used in a script or shell afterwards. So, to create variables with awk and give them back to the shell:
eval $(ls -td *.tar | awk 'NR>7 { print "VarName=\""$1"\"" }'); echo "$VarName"
leaveCount=5
fileCount=$(ls -1 *.log | wc -l)
tailCount=$((fileCount - leaveCount))
# avoid negative tail argument
[[ $tailCount -lt 0 ]] && tailCount=0
ls -t *.log | tail -n "$tailCount" | xargs rm -f
I made this into a bash shell script. Usage: keep NUM DIR where NUM is the number of files to keep and DIR is the directory to scrub.
#!/bin/bash
# Keep last N files by date.
# Usage: keep NUMBER DIRECTORY
echo ""
if [ $# -lt 2 ]; then
echo "Usage: $0 NUMFILES DIR"
echo "Keep last N newest files."
exit 1
fi
if [ ! -e "$2" ]; then
echo "ERROR: directory '$2' does not exist"
exit 1
fi
if [ ! -d "$2" ]; then
echo "ERROR: '$2' is not a directory"
exit 1
fi
pushd "$2" > /dev/null
ls -tp | grep -v '/$' | tail -n +"$(($1 + 1))" | xargs -I {} rm -- {}
popd > /dev/null
echo "Done. Kept $1 most recent files in $2."
ls "$2" | wc -l
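Hypothetical usage, keeping the 5 newest files in /var/log/myapp (the path is illustrative):
./keep 5 /var/log/myapp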
Modified version of @Fabien's answer if you want to specify a path. Useful if you're running the script elsewhere.
ls -tr /path/foo/ | head -n -5 | xargs -I% --no-run-if-empty rm /path/foo/%

How to move files where the first line contains a string?

I am currently using the following command:
grep -l -Z -E '.*?FindMyRegex' /home/user/folder/*.csv | xargs -0 -I{} mv {} /home/destination/folder
This works fine. The problem is it uses grep on the entire file.
I would like to use the grep command on the FIRST line of the file only.
I have tried to use head -1 file | at the beginning, but it did not work.
A change I would make to your script: since grep -l on a pipe prints "(standard input)" rather than the file's name, test the first line with grep -q and move the file on a match -
for file in *.csv; do
head -1 "$file" | grep -q -E 'FindMyRegex' && mv -- "$file" /home/destination/folder
done
You can also try sed '1q' file.csv | grep ... to search the regexp in the first line only.
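Spelled out as a loop over the question's directory (a sketch; sed '1q' prints only the first line, so grep never sees the rest of the file):
for f in /home/user/folder/*.csv; do
sed '1q' "$f" | grep -q 'FindMyRegex' && mv -- "$f" /home/destination/folder
done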
You don't need grep or find, as long as your files don't have embedded newlines.
I don't know an easy way off the top of my head to get sed to delimit with nulls.
mv $( for f in /home/user/folder/*.csv;
do sed -ns '1 { /yourPattern/F; q; }' "$f";
done ) /home/destination/folder/
EDIT
Rewrote with a loop. This will run a separate instance of sed to check each file, but at least it shouldn't read beyond the first line. It will fail syntactically if there are no hits.
You might need -E depending on your regex.
-n says don't print records from the files.
-s says treat each file as a distinct input - this is so the filenames aren't always the first one.
This does require GNU sed for the F.
gawk 'FNR==1{if($0~/PATTERN/)
printf "mv %s %s\n",FILENAME, "/target";nextfile}' /path/*.csv
First of all, in your regex, .*?FindMyRegex, the .*? doesn't make any sense and could be removed.
The above awk (gawk) one-liner will build up mv file target command lines for you. Check them, and if you are satisfied with them, pipe the output to sh; the commands will then be executed.
replace PATTERN by your regex pattern, and /target by the real target dir.
The one-liner assumes that the filenames don't contain special chars (e.g. spaces); if they do, add quotes ("s) around the filenames in the mv command.
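A quoted variant of the same one-liner (a sketch; PATTERN and /target remain placeholders, and names containing double quotes would still break it):
gawk 'FNR==1{if($0~/PATTERN/)
printf "mv \"%s\" \"%s\"\n",FILENAME,"/target";nextfile}' /path/*.csv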
Using GNU awk to find the filenames and piping them, NUL-terminated, into xargs:
gawk -v pattern="myRegex" '
FNR == 1 {if ($0 ~ pattern) printf "%s\0", FILENAME; nextfile}
' *.csv | xargs -0 echo mv -t destination
If it looks OK, remove "echo"
Try this Shellcheck-clean Bash code:
#! /bin/bash
shopt -s nullglob # Globs that match nothing expand to nothing
shopt -s dotglob # Globs match files whose names start with '.'
dest=/home/destination/folder
for file in *.csv ; do
head -n 1 -- "$file" | grep -qE '.*?FindMyRegex' && mv -- "$file" "$dest"
done
shopt -s nullglob prevents an error if there are no .csv files in the directory.
shopt -s dotglob ensures that files whose name starts with '.' are handled.
The -- in the options for head and mv ensures that files whose names begin with - are handled correctly.
The quotes in "$file" and "$dest" ensure that names that contain whitespace (actually $IFS) characters (including newlines) or glob metacharacters are handled correctly.
Note that the .*? in the regular expression is probably redundant, and may not do what you think it does (grep -E doesn't do non-greedy matching).

change name of file in nested folders

I have been trying to think of a way to rename files that are listed in nested folders and am having an issue resolving this matter. As a test I have been able to cut out the part of the name I would like to alter, but can't think of how to put that into a variable and chain the name together. The file format looks like this.
XXX_XXXX_YYYYYYYYYY_100426151653-all.mp3
I have been testing this format out to cut out the part I was looking to change, but I am not sure this is the best way of doing it.
echo XXX_XXXX_YYYYYYYYYY_100426095135-all.mp3 | awk -F_ '{print $4}' | cut -c 1-6
I would like to change the 100426151653 to the 20100426-151653 format in the name.
I tried to rename the files with rename 's/ //g', but that format did not work; I had to resort to rename ' ' '' to remove a blank space.
so the file would start as this
XXX_XXXX_YYYYYYYYYY_100426151653-all.mp3
and end like this
XXX_XXXX_YYYYYYYYYY_20100426-151653-all.mp3
How about using find and a bash function
#!/bin/bash
modfn () {
suffix=$2
fn=$(basename "$1")
path=$(dirname "$1")
fld1=$(echo "$fn" | cut -d '_' -f1)
fld2=$(echo "$fn" | cut -d '_' -f2)
fld3=$(echo "$fn" | cut -d '_' -f3)
fld4=$(echo "$fn" | cut -d '_' -f4)
fld5=${fld4%$suffix}
l5=${#fld5}
fld6=${fld5:0:$(($l5 - 6))}
fld7=${fld5:$(($l5 - 6)):6}
newfn="${fld1}_${fld2}_${fld3}_20${fld6}-${fld7}${suffix}"
echo "moving ${path}/${fn} to ${path}/${newfn}"
mv "${path}/${fn}" "${path}/${newfn}"
}
export -f modfn
suffix="-all.mp3"
export suffix
find . -type f -name "*${suffix}" ! -name "*-*${suffix}" -exec bash -c 'for f in "$@"; do modfn "$f" "$suffix"; done' bash {} +
The above bash script uses find to search the current folder and its contents for files like WWW_XXXX_YYYYYYYYYY_AAAAAABBBBBB-all.mp3, yet excludes ones that are already renamed and look like WWW_XXXX_YYYYYYYYYY_20AAAAAA-BBBBBB-all.mp3.
W,X,Y,A,B can be any character other than underscore or dash.
All the found files are renamed
NOTE: There are ways to shrink the above script but doing that makes the operation less obvious.
This perl one-liner does the job:
find . -name "XXX_XXXX_YYYYYYYYYY_*-all.mp3" -printf '%P\n' 2>/dev/null | perl -nle '$o=$_; s/_[0-9]{6}/_20100426-/; $n=$_; rename($o,$n)if!-e$n'
Note: I came up with just the find command and the regex part. The credit for the perl one-liner goes to a perlmonks user at http://www.perlmonks.org/?node=823355
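The substitution above hardcodes the date 20100426, so it only handles files from that day. A more general sketch (assuming the timestamp is always 12 digits, every year starts with 20, and already-renamed files should be skipped):
find . -name "*-all.mp3" -printf '%P\n' 2>/dev/null | perl -nle '$o=$_; s/_(\d{6})(\d{6})-/_20$1-$2-/; rename($o,$_) if $_ ne $o && !-e $_'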

Find files in different directories and operate on the filenames

$ ls /tmp/foo/
file1.txt file2.txt
$ ls /tmp/bar/
file20.txt
$ ls /tmp/foo/file*.txt | grep -o -E '[0-9]+' | sort -n | paste -s -d,
1,2
How can I fetch the numbers in the filenames from both directories? In the above example, I need to get 1,2,20. It's in the bash shell.
UPDATE:
$ ls /tmp/foo/file*.txt /tmp/bar/file*.txt /tmp/jaz99/file*.txt /tmp/nah/file*.txt | grep -o -E '[0-9]+' | sort -n | paste -s -d,
ls: cannot access /tmp/nah/file*.txt: No such file or directory
1,2,20,30,99
in this case, it should not print 99 (as it's not matched by *), and should not print the error if a file is not found.
You can get this done using a loop over the output of find:
s=
# run a loop using find command in a process substitution
while IFS= read -d '' -r file; do
file="${file##*/}" # strip down all directory paths
s+="${file//[!0-9]/}," # remove all non-numeric characters and append comma
done < <(find /tmp/{foo,bar,nah,jaz99} -name '*.txt' -print0 2>/dev/null)
echo "${s%,}" # remove last comma from string
Output
1,2,20,30
Here's my take on this. Use arrays. No need to use external tools like sed or awk or find.
#!/usr/bin/env bash
declare -a a=()
shopt -s nullglob # so a pattern with no matches expands to nothing
for f in /tmp/{foo,bar,nah}/file*.txt; do
[[ $f =~ .*file([0-9]+).* ]] && a+=( "${BASH_REMATCH[1]}" )
done
IFS=,
echo "${a[*]}"
The [[...]] expression populates the $BASH_REMATCH array with regex components. You can use that to extract the numbers and place them in a new temporary array, which you can express with comma separators using $IFS.
Results:
$ mkdir /tmp/foo /tmp/bar
$ touch /tmp/foo/file{1,2}.txt /tmp/bar/file20.txt
$ ./doit
1,2,20

Output filename/lines/type for given directory

I'm trying to teach myself basic file manipulation and scripting in linux but I've hit a wall. Right now I'm trying to output a table that gives something like
FILENAME LINES TYPE
File1 22 File
File2 56 File
Folder1 N/A Directory
when given any directory to search. I've been researching how to format output using awk and using maybe grep and wc to try and get my data but I'm a bit lost. For all I know I'm barking up the wrong tree entirely.
Look at printf to format your output, then look at the commands file to find your file type, wc to print out the number of lines, etc.
All this could be done via a find | while read loop:
printf "%-20.20s %-3.3s %s\n", "File", "Lines", "Type"
find . -type f -print0 | while read -d $'\0' file
do
file_name=$(basename $file)
lines="$(cat $file | wc -l | sed 's/^ *//')"
desc="$(file --brief "$file")"
printf "%-20.20s %3.3s %s\n", "$file_name", $lines, "$desc"
done
The $(...) syntax returns the output of the enclosed command as a string that can be assigned to a variable. I use cat "$file" | wc -l rather than wc -l "$file" to keep the file's name out of the output, and then use sed to remove leading spaces.
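The question's sample output also lists directories with N/A in the LINES column; a sketch covering that case (limited to one level, to mirror the sample):
find . -mindepth 1 -maxdepth 1 -print0 | while IFS= read -r -d '' item
do
name=$(basename "$item")
if [ -d "$item" ]; then
printf "%-20.20s %5.5s %s\n" "$name" "N/A" "Directory"
else
lines=$(wc -l < "$item" | tr -d ' ')
printf "%-20.20s %5.5s %s\n" "$name" "$lines" "File"
fi
done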
