#!/bin/sh
file_list=`find . -type f`
IFS=$(echo) # intended to make the for loop split on newlines
for file_ in $file_list; do
file $file_
done
Surprisingly, this shell script reports "File name too long". My guess is that the script is feeding file the whole of $file_list as a single argument!
But if I replace the file command with a simple echo, the script prints all the files in the current directory line by line, as expected.
It's not a good idea to iterate over the results of find. Use its -exec option instead:
find . -type f -exec file {} \;
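Incidentally, the reason the original script misbehaves: command substitution strips trailing newlines, so IFS=$(echo) actually sets IFS to the empty string, which disables word splitting entirely and hands file the whole of $file_list as one huge argument. If you really do want newline-only splitting, a minimal sketch in plain sh (bash users can write IFS=$'\n' instead):
IFS='
' # IFS is now a single literal newline
for file_ in $file_list; do
file "$file_" # quote to prevent globbing of the expanded name
done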
So I have a file called allTextFiles.txt which contains the paths of all my regular text files.
eg:
./test/file1.txt
./test/file2.txt
./test/file3.txt
My task is to formulate a shell command such that cat goes through all of these paths and displays the content of each file.
Is it possible?
Using xargs with allTextFiles.txt
You can use cat to emit the list of paths and xargs (see the xargs documentation) to run cat on every line of that file. For example:
cat allTextFiles.txt | xargs cat
kk #./test/file1.txt
jj #./test/file2.txt
yy #./test/file3.txt
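One caveat: xargs splits its input on any whitespace by default, so a path containing spaces would be mangled. With GNU xargs you can tell it to split on newlines only, a sketch:
xargs -d '\n' cat < allTextFiles.txt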
Using find without allTextFiles.txt
You can use find (see the find documentation) to go through the main folder and search recursively, running cat on every file it finds.
This shows the content of all .txt files recursively with only one command.
find . -type f -name "*.txt" -exec cat {} \;
kk #./test/file1.txt
jj #./test/file2.txt
yy #./test/file3.txt
Here . means the current directory, -type f restricts matches to regular files, -name "*.txt" keeps only files ending in the .txt extension, and the -exec part is where each file found is processed.
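If you would rather not start one cat process per file, the POSIX + terminator passes many file names to a single cat invocation:
find . -type f -name "*.txt" -exec cat {} +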
Does it cover your need?
You can use a loop; something like
while IFS= read -r file; do
cat "$file"
done < allTextFiles.txt
See Bash FAQ 001 for more discussion on reading a file a line at a time (much of the content applies to all shells, not just bash).
If you are using bash:
readarray -t files < allTextFiles.txt
cat "${files[#]}"
is an alternative that avoids the explicit loop.
Why does running this command give me the error message "No such file or directory"?
for i in `find ~/desktop -name '*.py'` ; do ./$i ; done
The complete error message makes it much more clear what the problem is:
bash: .//home/youruser/desktop/foo.py: No such file or directory
You can see that there is indeed no such file:
$ .//home/youruser/desktop/foo.py
bash: .//home/youruser/desktop/foo.py: No such file or directory
$ ls -l .//home/youruser/desktop/foo.py
ls: cannot access './/home/youruser/desktop/foo.py': No such file or directory
Here's instead how you can run a file /home/youruser/desktop/foo.py:
$ /home/youruser/desktop/foo.py
Hello World
So to run it in your loop, you can do:
for i in `find ~/desktop -name '*.py'` ; do $i ; done
Here's a better way of doing the same thing:
find ~/desktop -name '*.py' -exec {} \;
or with a shell loop:
find ~/desktop -name '*.py' -print0 | while IFS= read -d '' -r file; do "$file"; done
For an explanation of what ./ is and does, and why it makes no sense here, see this question
Try find with its -exec option: http://man7.org/linux/man-pages/man1/find.1.html
-exec command ;
Execute command; true if 0 status is returned. All following
arguments to find are taken to be arguments to the command
until an argument consisting of `;' is encountered. The
string `{}' is replaced by the current file name being
processed everywhere it occurs in the arguments to the
command, not just in arguments where it is alone, as in some
versions of find. Both of these constructions might need to
be escaped (with a `\') or quoted to protect them from
expansion by the shell. See the EXAMPLES section for examples
of the use of the -exec option. The specified command is run
once for each matched file. The command is executed in the
starting directory. There are unavoidable security problems
surrounding use of the -exec action; you should use the
-execdir option instead.
-exec command {} +
This variant of the -exec action runs the specified command on
the selected files, but the command line is built by appending
each selected file name at the end; the total number of
invocations of the command will be much less than the number
of matched files. The command line is built in much the same
way that xargs builds its command lines. Only one instance of
`{}' is allowed within the command, and (when find is being
invoked from a shell) it should be quoted (for example, '{}')
to protect it from interpretation by shells. The command is
executed in the starting directory. If any invocation with
the `+' form returns a non-zero value as exit status, then
find returns a non-zero exit status. If find encounters an
error, this can sometimes cause an immediate exit, so some
pending commands may not be run at all. This variant of -exec
always returns true.
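A concrete contrast of the two forms for this question, using wc purely as an illustration:
find ~/desktop -name '*.py' -exec {} \; # one invocation per script, as needed here
find ~/desktop -name '*.py' -exec wc -l {} + # one wc call counts lines in many files at once
Note that the + form only suits commands that accept many file names in a single call; for running each script on its own, stick with \;.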
The paths returned by find will be absolute paths like /home/youruser/desktop/program.py, because the shell expands ~/desktop before find runs. If you put ./ in front of them, you get paths like .//home/youruser/desktop/program.py, which don't exist.
Replace ./$i with "$i" (the quotes to take care of file names with spaces etc.).
You should use "$i", not ./$i.
I was doing the same thing at this exact moment: I wanted a script that finds out whether there are any flac files in the directory and, if so, converts them to opus.
Here is my solution:
if test -n "$(find ./ -maxdepth 1 -name '*.flac' -print -quit)"
then
echo "flac files found" # do the conversion here
else
: # no flac files; nothing to do
fi
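For the conversion step itself, here is a minimal sketch, assuming bash (for read -d '') and the opusenc tool from the opus-tools package, which reads FLAC input directly; ffmpeg would work equally well:
find . -maxdepth 1 -name '*.flac' -print0 |
while IFS= read -r -d '' f; do
opusenc "$f" "${f%.flac}.opus" # encode each file next to the original
done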
I have a series of numbered sub-directories that may or may not contain zip files, and within those zip files are some single-line .txt files I need. Is it possible to use a combination of find and unzip -p to list the file path and the single line contents on the same output line? I'd like to save the results to a .txt and import it into excel to work with.
From the main directory I can successfully find and output the single line:
find . -name 'file.zip' -exec unzip -p {} file.txt \;
How can I prefix the find output (i.e. the file path) to the output of this unzip command? Ideally, I'd like each line of the text file to resemble:
./path/to/file1.zip "Single line of file1.txt file"
./path/to/file2.zip "Single line of file2.txt file"
and so on. Can anyone provide some suggestions? I'm not very experienced with the Linux command line beyond simple commands.
Thank you.
Put all the code you want to execute into a shell script, then use find's -exec feature to call that shell script, i.e.
cat finder.bash
#!/bin/bash
printf "$# : " # prints just the /path/to/file/file.zip
unzip -p "$#" file.txt
For now, get that to work; you can make it generic later to handle files other than file.txt.
Make the script executable
chmod 755 finder.bash
Call it from find. i.e.
find . -name 'file.zip' -exec /path/to/finder.bash {} \;
(I don't have an easy way to test this, so reply in comments with error msgs).
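If you would rather skip the helper script entirely, the same idea can be written inline with -exec and sh -c; an untested sketch along the same lines:
find . -name 'file.zip' -exec sh -c 'for z in "$@"; do printf "%s : " "$z"; unzip -p "$z" file.txt; done' sh {} +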
Here is my problem: I have a directory that contains multiple sub-directories, and each sub-directory contains at least one .sh script.
I want to write a script that executes all of these scripts sequentially.
I am pretty new to Linux.
Thanks for your help.
find . -name "*.sh" -exec {} \;
This is a shell command which, beginning in the directory it is run from (specified by .), finds file names that end in .sh and then executes those files (each found file is substituted for the {}). The backslash prevents the semicolon from being interpreted by the shell (here, bash).
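Note that -exec {} \; relies on each script being executable and having a shebang line. If some scripts are not executable, you can hand them to an interpreter explicitly instead:
find . -name "*.sh" -exec sh {} \;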
Try doing it using find and for:
for file in `find . -type f -name "*.sh"`; do sh "$file"; done
You can also store the results in an array and loop over it:
array=($(find . -type f -name "*.sh"))
for file in "${array[@]}"; do sh "$file"; done
From the top directory, run the following command:
for f in `find . -type f -name \*.sh`; do "$f"; done
The find command locates all the .sh files. The output of find (a whitespace-separated list of pathnames to the scripts) becomes the input to the for command, which assigns each entry to the variable f. "$f" then executes each script.
The following bash script finds a .txt file under the given directory path, then changes one word (mountain to sea) in that file:
#!/bin/bash
FILE=`find /home/abc/Documents/2011.11.* -type f -name "abc.txt"`
sed -e 's/mountain/sea/g' $FILE
The output I get in this case is fine.
My problem is that if I pass the directory path as a command line argument, it does not work. Suppose I modify my bash script to:
#!/bin/bash
FILE=`find $1 -type f -name "abc.txt"`
sed -e 's/mountain/sea/g' $FILE
and invoke it like:
./test.sh /home/abc/Documents/2011.11.*
Error is:
./test.sh: line 2: /home/abc/Documents/2011.11.10/abc.txt: Permission denied
Can anybody suggest how to pass the directory path as a command line argument?
The find line of your script should be:
FILE=`find "$#" -type f -name "abc.txt"`
The wildcard is expanded by the shell before your script is called, so you need "$@" to pick up all the directories it expands to and pass them as arguments to find.
You don't need to pass the .* to your script. Write it like this:
#!/bin/bash
# some sanity checks here
path="$1"
find "$path".* -type f -name "abc.txt" -exec sed -i.bak 's/mountain/sea/g' '{}' \;
And run it like:
./test.sh "/home/abc/Documents/2011.11"
PS: Note how sed can be invoked directly from find itself using the -exec option.