Recursively traverse Samba shares? - linux

With bash on Linux, how would I write a command to recursively traverse mounted shares and run commands on each file, to get the file type, size, permissions etc., and then output all of this to a file?

A CIFS share mount looks like a regular directory tree in the Linux shell, so the command you need is a generic one.
From the base directory:
find . -type f -exec ls -lsrt {} \; > file.txt
OK, this does not give you the file-type detail; that can be added with an -exec file {} \; on each file.
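Since find accepts several actions in one pass, the two can be combined; a minimal sketch (keeping file.txt from above as the output name, and dropping the -rt sort flags, which do nothing when ls sees one file at a time):
find . -type f -exec ls -ls {} \; -exec file {} \; > file.txt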

mount -v | grep smbfs | awk '{print $3}' | xargs ls -lsR
which you can redirect to a file.
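Note that on current Linux kernels the SMB filesystem type is cifs rather than smbfs, so the same pipeline becomes the following (this assumes mount points without whitespace, since xargs splits on it):
mount -t cifs | awk '{print $3}' | xargs ls -lsR > file.txt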

mount -v | awk '/smbfs/{
    cmd="ls -lsR "$3
    while((cmd | getline d)>0){
        print d "->file "$3
    }
    close(cmd)
}'

find $(mount -t smbfs | awk '{print $3}') -mount -type f -ls -execdir file {} \;
...
33597911 4 -rw-rw-r-- 2 peter peter 5 Dec 6 00:09 ./test.d\ ir/base
./base: ASCII text
3662 4 -rw-rw-r-- 2 peter peter 4 Dec 6 02:26 ./test.txt...
./test.txt...: ASCII text
3661 0 -rw-rw-r-- 2 peter peter 0 Dec 6 02:45 ./foo.txt
./foo.txt: empty
...
If you used -exec file {} +, it would run file once with multiple arguments, but then the output wouldn't be nicely interleaved with find's -ls output. (GNU find's -execdir {} + currently behaves the same as -execdir {} \;, due to a bug workaround.) Use -exec file {} \; if you want the full path in the file output as well as in the -ls output above it.
find -ls output is not quite the same as ls -l, since it includes the inode and the number of blocks as the first two fields.

Related

Find Most Recent File in a Directory That Matches Certain File Size

I need to find the most recently modified file in a directory that matches 3.0 MB.
First Attempt
ls -t /home/weather/some.cool*.file | head -n +1 | grep "3.0M"
Second Attempt
find /home/weather/ -maxdepth 1 -type f -name "some.cool*.file" -size 3M -exec ls -t "{}" +; | head -n +1
Am I close?
I hope this is of some use -
ls -ltr --block-size=MB | grep 3MB
The latest modified files will be displayed at the bottom of the output.
The -r flag shows the output in reverse order, and --block-size=MB shows file sizes in MB.
This should work:
ls -lh --sort=time /path/to/directory/*.file | grep "3.0M" | head -n 1
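For something less fragile than grepping ls output, GNU find can apply the size test itself and print modification times for sorting; a sketch along the lines of the second attempt (path and pattern taken from the question):
find /home/weather -maxdepth 1 -type f -name 'some.cool*.file' -size 3M -printf '%T@ %p\n' | sort -nr | head -n 1 | cut -d' ' -f2-
The %T@ timestamp sorts numerically, so the first line after sort -nr is the most recently modified match.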

Command to check for a very large number of .gz files in a directory [duplicate]

This question already has answers here:
Argument list too long error for rm, cp, mv commands
(31 answers)
Closed 5 years ago.
Below is the listing for the current day's files. The previous day's files were converted to .gz by the system. I wanted to find the total count of specific .gz files from the previous day. I tried the commands below, which give me the error shown. Please suggest a fix.
bash-3.2$ ls -lrth|tail
299K Mar 23 2017 N08170323091903766
333K Mar 23 2017 N08170323091903771
328K Mar 23 2017 N09170323091903776
367K Mar 23 2017 N09170323091903782
347K Mar 23 2017 N04170323092003784
368K Mar 23 2017 N08170323092003783
bash-3.2$ ls -lrth N08170322*|wc -l
bash: /usr/bin/ls: Arg list too long
0
bash-3.2$ zcat N08170322*.gz|wc -l
bash: /usr/bin/zcat: Arg list too long
0
This is happening because you have too many files in the directory.
You can easily get around the first issue:
ls | grep -c N08170322
or, to be even more precise:
ls | grep -c '^N08170322'
would give you the count of matching files. However, a better way to do this is:
find . -name "N08170322*" -exec ls {} + | wc -l
which will address the ls parsing issue mentioned in #hek2mgl's comment.
If you really want to count the lines of all the zipped files in one shot, you can do this:
find . -name "N08170322*" -exec zcat {} + | wc -l
See also:
Argument list too long error for rm, cp, mv commands
Use this:
find . -name "N08170322*" -exec ls {} \; | wc -l
As explained in the other answer, you are getting "argument list too long" because there are too many files in the directory. To overcome it, you can combine the command with find and -exec.
Edit: I created a test case to check whether the command works with and without ls.
These are the 3 empty files I created.
$ find -name "file*" -exec ls {} \;
./file1
./file2
./file3
Running wc -l without ls prints the number of lines in each file.
$ find -name "file*" -exec wc -l {} \;
0 ./file1
0 ./file2
0 ./file3
Running it with ls and piping into wc -l gives the count of files, which is what the OP wants.
$ find -name "file*" -exec ls {} \; | wc -l
3

How to redirect output of xargs when using sed

Since switching over to a better management system, I want to remove all the redundant logs at the top of each of our source files. In Notepad++ I was able to achieve this by using "replace in files" and replacing matches of \A(//.*\n)+ with blank. On Linux, however, I am having no such luck and need to resort to xargs and sed.
The sed expression I'm using is:
sed '1,/^[^\/]/{/^[^\/]/b; d}'
Ugly to be sure but it does seem to work.
The problem I'm having is that when I try to run that through xargs, in order to feed it all the source files in our system, I am unable to redirect the output to "stripped" files, which I then intend to copy over the originals.
I want something along the lines of:
find . -name "*.com" -type f -print0 | xargs -0 -I file sed '1,/^[^\/]/{/^[^\/]/b; d}' "file" > "file.stripped"
However I'm having grief passing the ">" through to the receiving environment (shell) as I'm already using too many quote marks. I have tried all manner of escaping and shell "wrappers" but I just can't get it to play ball.
Anyone care to point me in the right direction?
Thanks,
Slarti.
I made a similar scenario with a simpler sed expression just as an example; see if it works for you.
I created 3 files with the string "abcd" inside each:
# ls -l
total 12
-rw-r--r-- 1 root root 5 Oct 6 09:05 test.aaaaa.com
-rw-r--r-- 1 root root 5 Oct 6 09:05 test2.aaaaa.com
-rw-r--r-- 1 root root 5 Oct 6 09:05 test3.aaaaa.com
# cat test*
abcd
abcd
abcd
Running the find command as you showed, but using the -exec option instead of xargs, and replacing the sed expression with a silly one that simply replaces every "a" with "b", plus the -i option, which writes directly to the input file:
# find . -name "*.com" -type f -print0 -exec sed -i 's/a/b/g' {} \;
./test2.aaaaa.com./test3.aaaaa.com./test.aaaaa.com
# cat test*
bbcd
bbcd
bbcd
In your case it should look like this:
# find . -name "*.com" -type f -print0 -exec sed -i '1,/^[^\/]/{/^[^\/]/b; d}' {} \;
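If you do want separate .stripped output files rather than in-place editing, the usual trick is to let a child shell perform the redirection, since find and xargs never interpret ">" themselves. A sketch (the .stripped suffix comes from the question; the bracket expression is written as [^/], which GNU sed accepts inside an address):
find . -name '*.com' -type f -exec sh -c '
    for f do
        sed "1,/^[^/]/{/^[^/]/b; d}" "$f" > "$f.stripped"
    done
' sh {} +
Each batch of files is handed to one sh invocation, and the > is evaluated inside that shell, once per file.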

shell must parse ls -Al output and get last field (file or directory name) ANY SOLUTION

I must parse ls -Al output and get the file or directory name.
ls -Al output :
drwxr-xr-x 12 s162103 studs 12 march 28 2012 personal domain
drwxr-xr-x 2 s162103 studs 3 march 28 22:32 public_html
drwxr-xr-x 7 s162103 studs 8 march 28 13:59 WebApplication1
I should use only ls -Al | <something>
for example:
ls -Al | awk '{print $8}'
but this doesn't work, because $8 is not the name if there are spaces in the directory name; it is only part of the name. Maybe there are some utilities that cut out the last field or delete everything before it? I need to find a solution. Please help!
EDITED: I know that parsing ls -Al is a bad idea, but I have to parse it exactly with the construction above! There is no way to use something like this:
for f in *; do
somecommand "$f"
done
Don't parse ls -Al if all you need is the file name.
You can put all file names in an array:
files=( * )
or you can iterate over the files directly:
for f in *; do
echo "$f"
done
If there is something specific from ls that you need, update your question to specify what you need.
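If you do go the array route, a minimal sketch of working with it safely:
files=( * )
printf '%s\n' "${files[@]}"   # one name per line, spaces preserved
echo "${#files[@]} entries"
Quoting "${files[@]}" is what keeps names with spaces intact.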
How about this?
ls -Al | awk '{$1=$2=$3=$4=$5=$6=$7=$8="";print $0}'
I know it's a cheap trick, but since you don't want to use anything other than ls -Al, I can't think of anything better...
Based on #squiguy request on comments, I post my comment as an answer:
What about just this?
ls -1A
instead of l (L, the letter), a 1 (one, the number). It will only list the names of the files.
It's also worth noting that find can do what you're looking for:
Everything in this directory, equivalent to ls:
find . -maxdepth 1
Recursively, similar to ls -R:
find .
Only directories in a given directory:
find /path/to/some/dir -maxdepth 1 -type d
md5sum every regular file:
find . -type f -exec md5sum {} \;
Hope awk works for you:
ls -Al | awk 'NR>1{for(i=9;i<NF;i++)printf $i" ";print $i}'
In case you're interested in sed:
ls -Al | sed '1d;s/^\([^ ]* *\)\{8\}//'

How can I add text to the same line?

I used this command to find mp3 files and write their names to log.txt:
find -name *.mp3 >> log.txt
I want to move the files using the mv command, and I would like to append to the log file so that it shows the path each file has been moved to.
For example if the mp3 files are 1.mp3 and 2.mp3 then the log.txt should look like
1.mp3 >>>> /newfolder/1.mp3
2.mp3 >>>> /newfolder/2.mp3
How can I do that using unix commands? Thank you!
Using only mv:
mv -v *.mp3 tmp/ > log.txt
or using find:
find -name '*.mp3' -exec mv -v {} test/ >> log.txt \;
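If you want the log lines in exactly the "1.mp3 >>>> /newfolder/1.mp3" format from the question, a small loop does it (assuming /newfolder is the destination, as in the example):
for f in *.mp3; do
    mv -- "$f" /newfolder/ && printf '%s >>>> %s\n' "$f" "/newfolder/$f" >> log.txt
done
The && ensures a line is logged only when the move actually succeeded.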
You should probably use some scripting language like Perl or Python; text processing is rather awkward in the shell.
E.g. in Perl you can just postprocess the output of find, and print out what you did.
#!/usr/bin/perl -w
use strict;
use File::Find;

my @directories_to_search = ("/tmp/");

sub wanted {
    print "$File::Find::name >>> newdir/$_\n";
    # do what you want with the file, e.g. invoke commands on it using system()
}

find(\&wanted, @directories_to_search);
Doing it in Perl or similar makes some things easier than in the shell; in particular, handling of funny filenames (embedded spaces, special characters) is easier. Be careful when invoking system() commands, though.
For docs on the File::Find module see http://perldoc.perl.org/File/Find.html .
GNU find
find /path -type f -iname "*.mp3" -printf "%f/%p\n" | while IFS="/" read -r filename path
do
mv "$path" "$destination"
echo "$filename >>> $destination/$filename " > newfile.txt
done
output
$ touch 'test"quotes.txt'
$ ls -ltr
total 0
-rw-r--r-- 1 root root 0 2009-11-20 10:30 test"quotes.txt
$ mkdir temp
$ ls -l temp
total 0
$ find . -type f -iname "*\"*" -printf "%f:%p\n" | while IFS=":" read filename path; do mv "$filename" temp ; done
$ ls -l temp
total 0
-rw-r--r-- 1 root root 0 2009-11-20 11:53 test"quotes.txt
