Display only hidden regular files excluding directories using AWK - linux

I have something that looks like this to display regular files, but I don't know how to get it to display only hidden files (names starting with ".").
ls -al | awk ' /^-/ {print $9}'
.ghost1.c
.ghost2
.ghost3.cpp
input4.txt
lab1.cpp
Lab2.cpp
proc
prog1.c
prog2.c
prog3.c.txt
prog.4c
script1_t03.sh
The file name is the 9th field, and the teacher recommends we use the && operator to display only REGULAR HIDDEN files.

You can use find command for this :
find -maxdepth 1 -type f -name ".*"

The shell expands the pattern .* to all hidden entries, including directories and special files (and the . and .. entries). Filtering the output of ls -ld for regular files does the trick, e.g.
ls -ld .* | awk ' /^-/ {print $9}'
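Following the teacher's hint, the two conditions can be combined with && in a single awk pattern: /^-/ keeps regular files and $9 ~ /^\./ keeps names starting with a dot. A minimal sketch in a throwaway directory (the file names below are invented for the demo, and it assumes the usual 9-column ls -al layout):

```shell
# Create a scratch directory with some hidden and non-hidden entries
tmp=$(mktemp -d)
cd "$tmp"
touch .ghost1.c .ghost2 input4.txt lab1.cpp
mkdir .hiddendir
# /^-/       -> regular files only (permission string starts with "-")
# $9 ~ /^\./ -> file name (9th field) starts with a dot
result=$(ls -al | awk '/^-/ && $9 ~ /^\./ {print $9}')
echo "$result"
```

Names containing spaces would break the $9 assumption; for real scripts, find -maxdepth 1 -type f -name ".*" is more robust.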

Related

Bash How to find directories of given files

I have a folder with several subfolders and in some of those subfolders I have text files example_1.txt, example_2.txt and so on. example_1.txt may be found in subfolder1, some subfolders do not contain text files.
How can I list all directories that contain a text file starting with example?
I can find all those files by running this command
find . -name "example*"
But what I need to do is to find the directories these files are located in? I would need a list like this subfolder1, subfolder4, subfolder8 and so on. Not sure how to do that.
You can use the following command (note that sort -u is needed to de-duplicate, since uniq only collapses adjacent lines):
find . -name "example*" | awk -F/ '{print $2}' | sort -u
find all files in subfolders with -mindepth 2 to avoid the current folder (.)
get the directory name with xargs dirname
sort the list and make the folders unique with sort -u
print only the basenames with awk (the delimiter is /, so the last field is $NF) and append "," after every subfolder
translate newlines to blanks with tr
remove the trailing ", " with sed
list=$(find ./ -mindepth 2 -type f -name "example*"|xargs dirname|sort -u|awk -F/ '{print $NF","}'|tr '\n' ' '|sed 's/, $//')
echo $list
flow, over, stack
Suggesting a find command that prints only the directories' paths, then sorts them and removes duplicates:
find . -type f -name "example*" -printf "%h\n" | sort | uniq
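A quick sanity check of the -printf "%h" approach (the folder and file names below are invented for the demo; -printf requires GNU find):

```shell
# Build a throwaway tree and list the directories that contain
# files whose names start with "example"
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p subfolder1 subfolder4 subfolder8 empty
touch subfolder1/example_1.txt subfolder4/example_2.txt subfolder8/example_3.txt
dirs=$(find . -type f -name "example*" -printf "%h\n" | sort | uniq)
echo "$dirs"
```

The empty folder does not appear, since %h is only printed for matched files.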

How to grep files that has different letters?

I have thousands of files in a directory with names like abc.txt, srr.txt, eek.txt, abb.txt, and so on. I want to match only those files whose last two letters (before .txt) are different. Example:
Good output: abc.txt eek.txt
Bad output: ekk.txt dee.txt.
Here is what I am trying to do:
#!/bin/bash
ls -l directory |grep .txt
It greps every file that has .txt in it.
How do I match only the files whose last two letters are different?
I'd go with find to list the *.txt files, and grep to filter out the ones that have the last two letters the same (using a backreference):
find . -type f -name '*.txt' | grep -v '\(.\)\1\.txt$'
It essentially picks up a character then immediately tries to back-reference it before .txt, and -v provides a reverse match leaving only files that do not have the same last two characters.
UPDATE: To move the found files you can chain mv to the command:
find . -type f -name '*.txt' | grep -v '\(.\)\1\.txt$' | xargs -i -t mv {} DESTINATION
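To see the backreference at work, you can feed it some sample names (the names below are made up):

```shell
# grep -v drops every name whose last two letters before ".txt" are equal
names='abc.txt
ekk.txt
dee.txt
eek.txt'
kept=$(printf '%s\n' "$names" | grep -v '\(.\)\1\.txt$')
echo "$kept"
```

ekk.txt and dee.txt are filtered out; abc.txt and eek.txt survive.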
It's not a good idea to parse the result of ls (read this doc to understand why). Here is what you could do in pure Bash, without using any external commands:
#!/bin/bash
shopt -s nullglob # make sure the glob yields nothing if there are no matches
for file in *.txt; do # grab all .txt files
[[ -f $file ]] || continue # skip if not a regular file
last6="${file: -6}" # get the last 6 characters of the file name, e.g. "bc.txt"
[[ "${last6:0:1}" != "${last6:1:1}" ]] && printf '%s\n' "$file" # keep files whose two letters before ".txt" differ
# change printf to mv "$file" "$target_dir" above if you want to move the files
done
I seem to have accomplished what I wanted by using this:
ls -l |awk '{print $9}' | grep -vE "(.).?\1.?\."
awk '{print $9}' prints only the file names (the 9th field of ls -l)
grep -vE '(.).?\1.?\.' filters out any name in which the three characters before the period are not all distinct: aaa.txt, aab.txt, aba.txt and baa.txt are all filtered out.

How to generate DEBIAN/md5sums file in this file structure?

I have the following file structure to build a Debian package, which doesn't contain any binary files (there is no compilation step):
source/
source/DEBIAN
source/etc
source/usr
build.sh
The content of build.sh file is:
#!/bin/bash
md5sum `find . -type f | awk 'source/.\// { print substr($0, 3) }'` > DEBIAN/md5sums
dpkg-deb -b source <package-name_version>.deb
The problem is that the md5sum command here considers also DEBIAN/ files when making DEBIAN/md5sums file. I want to except DEBIAN/ files from the md5sum process.
find can ignore files by specifying a pattern to match against their path:
find . -type f -not -path "*DEBIAN*"
Your Awk script contains a syntax error and probably some sort of logic error as well. I guess you mean something like
md5sum $(find ./source/ -type f |
awk '!/^\.\/source\/DEBIAN/ { print substr($0, 3) }') > DEBIAN/md5sums
Equivalently, you could exclude source/DEBIAN from the find command line; but since you apparently want to postprocess the output with Awk anyway, factoring the exclusion into the Awk script makes sense.
The upgrade from `backticks` to $(dollar-paren) command substitution is not strictly necessary, but nevertheless probably a good idea.
Apparently, this code was copy/pasted from a script which uses substr to remove the leading ./ from the output from find. If (as indicated in comments) you wish to remove more, the script has to be refactored, because you cannot (easily) feed relative paths to md5sum which are not relative to the current directory. But moving more code to find and trimming the output with a simpler Awk script works fine:
find ./source -path '*/DEBIAN' -prune -o -type f -exec md5sum {} \; |
awk '{ print $1 " " substr($2, 10) }'
Try filtering the results of find through e.g. grep -v to exclude:
find . -type f | grep -v '^./source/DEBIAN/' | ...
Or you can probably do the filtering in awk as well...
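Putting the exclusion into the md5sums step, a minimal sketch (the tree and file names below are invented for the demo; a real build.sh would go on to call dpkg-deb as above):

```shell
# Recreate a toy version of the package tree
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p source/DEBIAN source/etc source/usr
echo conf > source/etc/app.conf
echo prog > source/usr/app
echo ctrl > source/DEBIAN/control
cd source
# Hash every file except those under DEBIAN/, with paths relative to source/
find . -type f -not -path "./DEBIAN/*" -exec md5sum {} \; |
  sed 's|\./||' > DEBIAN/md5sums
cat DEBIAN/md5sums
```

The sed step strips the leading ./ so the paths in md5sums are relative to the package root, which is what dpkg expects.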

shell must parse ls -Al output and get last field (file or directory name) ANY SOLUTION

I must parse ls -Al output and get file or directory name
ls -Al output :
drwxr-xr-x 12 s162103 studs 12 march 28 2012 personal domain
drwxr-xr-x 2 s162103 studs 3 march 28 22:32 public_html
drwxr-xr-x 7 s162103 studs 8 march 28 13:59 WebApplication1
I should use only ls -Al | <something>
for example:
ls -Al | awk '{print $8}'
but this doesn't work because $8 is not the whole name if there are spaces in the directory name; it is only part of the name. Maybe there are utilities that cut out the last field or delete everything before it? I need to find any solution. Please, help!
EDITED: I know that parsing ls -Al is a bad idea, but I have to parse exactly that output with a construction like the one above! There is no way to use something like this:
for f in *; do
somecommand "$f"
done
Don't parse ls -Al, if all you need is the file name.
You can put all file names in an array:
files=( * )
or you can iterate over the files directly:
for f in *; do
echo "$f"
done
If there is something specific from ls that you need, update your question to specify what you need.
How about this: ls -Al | awk '{$1=$2=$3=$4=$5=$6=$7=$8="";print $0}'
I know it's a cheap trick (assigning to the fields rebuilds $0, so the output keeps eight leading blanks and runs of spaces inside names are squeezed to one), but since you don't want to use anything other than ls -Al I can't think of anything better...
Based on @squiguy's request in the comments, I post my comment as an answer:
What about just this?
ls -1A
instead of l (L, the letter), a 1 (one, the number). It will only list the names of the files.
It's also worth noting that find can do what you're looking for:
Everything in this directory, equivalent to ls:
find . -maxdepth 1
Recursively, similar to ls -R:
find .
Only directories in a given directory:
find /path/to/some/dir -maxdepth 1 -type d
md5sum every regular file:
find . -type f -exec md5sum {} \;
Hope awk works for you:
ls -Al | awk 'NR>1{for(i=9;i<NF;i++)printf $i" ";print $i}'
In case you're interested in sed:
ls -Al | sed '1d;s/^\([^ ]* *\)\{8\}//'
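You can check the sed expression against the sample line from the question (the first line below mimics the "total" header that ls -Al prints, which the 1d deletes):

```shell
# The sed script deletes the header line, then strips the first
# eight whitespace-separated fields, leaving the name with spaces intact
sample='total 12
drwxr-xr-x 12 s162103 studs 12 march 28 2012 personal domain'
name=$(printf '%s\n' "$sample" | sed '1d;s/^\([^ ]* *\)\{8\}//')
echo "$name"
```

Unlike the field-blanking awk trick, this keeps repeated spaces inside the name as-is.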

Copy the three newest files under one directory (recursively) to another specified directory

I'm using bash.
Suppose I have a log file directory /var/myprogram/logs/.
Under this directory I have many sub-directories and sub-sub-directories that include different types of log files from my program.
I'd like to find the three newest files (modified most recently), whose name starts with 2010, under /var/myprogram/logs/, regardless of sub-directory and copy them to my home directory.
Here's what I would do manually
1. Go through each directory and run ls -lt 2010* to see which files starting with 2010 were modified most recently.
2. Once I go through all directories, I'd know which three files are the newest. So I copy them manually to my home directory.
This is pretty tedious, so I wondered if maybe I could somehow pipe some commands together to do this in one step, preferably without using shell scripts?
I've been looking into find, ls, head, and awk that I might be able to use but haven't figured the right way to glue them together.
Let me know if I need to clarify. Thanks.
Here's how you can do it:
find -type f -name '2010*' -printf "%T@\t%P\n" | sort -rn -k1,1 | head -3 | cut -f 2-
This outputs a list of files prefixed by their last modification time (seconds since the epoch), sorts them numerically on that value, takes the top 3 and removes the timestamp.
The other answers feel very complicated; how about
for FILE in `find . -type d`; do ls -t -1 -F $FILE | grep -v "/" | grep "^2010" | head -n3 | xargs -I{} mv {} ~; done
or laid out nicely
for FILE in `find . -type d`;
do
ls -t -1 -F $FILE | grep -v "/" | grep "^2010" | head -n3 | xargs -I{} mv {} ~;
done;
My "shortest" answer after quickly hacking it up.
for file in $(find . -iname '*.php' -mtime -1 | xargs ls -l | awk '{ print $6" "$7" "$8" "$9 }' | sort | sed -n '1,3p' | awk '{ print $4 }'); do cp $file ../; done
The main command stored in $() does the following:
Find all files recursively in current directory matching (case insensitive) the name *.php and having been modified in the last 24 hours.
Pipe to ls -l, required to be able to sort by modification date, so we can have the first three
Extract the modification date and file name/path with awk
Sort these files based on datetime
With sed print only the first 3 files
With awk print only their name/path
Used in a for loop and as action copy them to the desired location.
Or use @Hasturkun's variant, which popped up as an answer while I was editing this post :)
