linux: include all directories

how would I type a file path in ubuntu terminal to include all files in all sub-directories?
If I had a main directory called "books" but had a ton of subdirectories with all sorts of different names containing files, how would I type a path to include all files in all subdirectories?
/books/???

From within the books top directory, you can use the command:
find . -type f
Then, if you wanted to, say run each file through cat, you could use the xargs command:
find . -type f | xargs cat
For more info, use commands:
man find
man xargs
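A minimal runnable sketch of the find/xargs combination above. The books/scifi and books/history directory names here are invented for the demo; -print0 and xargs -0 are used so that file names containing spaces survive the pipe intact:

```shell
# Demo setup in a temporary directory (names are made up for illustration).
tmp=$(mktemp -d)
mkdir -p "$tmp/books/scifi" "$tmp/books/history"
echo "dune" > "$tmp/books/scifi/dune.txt"
echo "rome" > "$tmp/books/history/rome.txt"
cd "$tmp/books"

# List all files in all subdirectories:
find . -type f

# Run cat on each file; -print0 / xargs -0 keep names with spaces intact:
find . -type f -print0 | xargs -0 cat
```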

It is unclear what you actually want. You will probably get a better solution if you ask for your original problem directly, rather than for a side problem you've come across while trying to circumvent it.
do you mean something like the following?
file */*
where the first * expands for all subdirectories and the second * for all contained files ?
I have chosen the file command arbitrarily. You can choose whatever command you want to run on the files you get shell-expanded.
Also note that directories will be included as well (unless excluded by name, e.g. *.png or *.txt).
The wildcard * is not exactly a file path that includes all files in all subdirectories; rather, it expands to a list of all files (or directories) matching the wildcard expression, e.g. file1 file2 file3 file4. See a tutorial on shell expansion for details.
Note that there may be easy solutions to related problems. Like to copy all files in all subdirectories (cp -a for example, see man cp).
I also like find very much. It's quite easy to generate more flexible search patterns in combination with grep. To provide a random example:
du `find . | grep some_pattern_to_occur | grep -v some_pattern_to_not_occur`
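The backtick form above breaks on file names containing spaces (the shell word-splits the substituted list). The same include/exclude idea can be sketched with find's own path tests; the keep/skip directory names and patterns below are invented for the demo:

```shell
# Demo setup (invented names).
tmp=$(mktemp -d); cd "$tmp"
mkdir -p keep skip
echo data > keep/report_final.txt
echo data > skip/report_draft.txt

# Disk usage of paths matching one pattern but not another,
# with no word-splitting problems:
find . -path '*final*' ! -path '*draft*' -exec du {} +
```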

./books/*
For example, assuming I'm in the parent directory of 'books':
ls ./books/*
EDIT:
Actually, to list the whole tree recursively you should use:
ls -R ./books/*
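If you literally want a path pattern that reaches into every subdirectory, bash 4+ offers the ** glob once globstar is enabled (an assumption: this is bash-specific, not POSIX sh). A minimal sketch, with invented directory names:

```shell
# Demo setup (invented names).
tmp=$(mktemp -d); cd "$tmp"
mkdir -p books/a books/b
touch books/a/x.txt books/b/y.txt

shopt -s globstar        # enable ** (bash 4+ only)
ls -l books/**/*.txt     # matches .txt files at any depth under books/
```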

Related

Linux command line: Find all files with a certain extension in a directory tree containing specific text

I would like to recursively traverse a directory tree and extract all files which contain a certain text in a remote Linux machine. I found a helpful command in this website:
grep -iRl "your-text-to-find" ./
Now however, I would like to modify this slightly by searching only in python (.py) files. I tried the following but it doesn't seem to work:
grep -iRl "your-text-to-find" ./*.py
How can I modify the command so that it recursively finds all .py files containing "your-text-to-find"?
After looking through a few extra posts including this one, I found a command which works.
I am not exactly sure what xargs does, but the post I mentioned explains a bit more:
find ./ -name '*.py' | xargs grep "text-to-find"
(Note the quotes around '*.py'; without them the shell may expand the glob before find sees it.)
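A runnable sketch of the quoted variant; the pkg directory and file contents are invented for the demo. Using find's -exec ... + instead of xargs also avoids problems with spaces in file names:

```shell
# Demo setup (invented names and contents).
tmp=$(mktemp -d); cd "$tmp"
mkdir -p pkg
printf 'text-to-find\n' > pkg/match.py
printf 'nothing\n'      > pkg/other.py

# Quote the pattern so the shell passes it to find unexpanded;
# -exec ... + runs grep on batches of found files:
find . -name '*.py' -exec grep -l 'text-to-find' {} +
```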

How to grep/find for a list of file names?

So, for example, I have a text document listing file names that may exist in a directory. I want to use grep or find to check whether those files exist in a specific directory and its subdirectories. Currently I can do it manually via find . | grep filename, but that's one file at a time, and when I have over 100 file names to check, that gets really pesky and time-consuming.
What's the best way to go about this?
xargs is what you want here. The case is the following:
Assume you have a file named filenames.txt that contains a list of files:
a.file
b.file
c.file
d.file
e.file
and only e.file doesn't exist.
The command in the terminal is:
cat filenames.txt | xargs -I {} find . -type f -name {}
the output of this command is:
a.file
b.file
c.file
d.file
Maybe this is helpful.
If the files haven't moved since the last time updatedb ran (often less than 24 hours ago), your fastest search is locate.
Read the file list into an array and search with locate. In case the file names are common (or occur as parts of other paths), grep the results for the base directory where you expect them:
mapfile -t filearr < file.lst
locate "${filearr[@]}" | grep /path/where/to/find
If the file names may contain whitespace or characters that the shell might interpret, the usual quoting mechanisms have to be applied.
A friend helped me figure it out via find . | grep -i -Ff filenames.txt
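The grep -Ff approach reports the names that exist; it does not tell you which names from the list are missing. One sketch that reports the missing ones (filenames.txt and the file layout here are invented for the demo):

```shell
# Demo setup (invented names): e.file is deliberately absent.
tmp=$(mktemp -d); cd "$tmp"
printf 'a.file\nb.file\ne.file\n' > filenames.txt
touch a.file b.file

# For each listed name, report it if find locates nothing:
while IFS= read -r name; do
    if [ -z "$(find . -type f -name "$name")" ]; then
        echo "MISSING: $name"
    fi
done < filenames.txt
```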

Shell Script to finding files in specific folder and based on "age" of files

Hello, I got homework to make a shell script in Linux that finds files in a specific folder based on the "age" of those files, and then moves them to another specific folder.
Thank you in advance.
One way is to use the find command, and specify the "age" with -mtime (or -newer if age relative to other files). See man find for more details.
To move the files you can use mv (again, see man mv).
Directories can be passed as arguments or stored in variables and then used
as variables in the commands.
Without knowing anything else about your assignment I'd say use something like this:
find <directory> -mtime <n> | xargs mv -t <destination>
where xargs is used to pass the results from find to the mv command.
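A concrete, runnable sketch of that pipeline, with invented src/dest directory names. It assumes GNU tools (touch -d for backdating the demo file, mv -t, xargs -0), and uses -print0/-0 so file names with spaces survive:

```shell
# Demo setup (invented names); backdate one file 10 days (GNU touch).
tmp=$(mktemp -d); cd "$tmp"
mkdir -p src dest
touch src/old.log
touch -d '10 days ago' src/old.log
touch src/new.log

# Move files older than 7 days from src to dest (GNU find/mv assumed):
find src -type f -mtime +7 -print0 | xargs -0 mv -t dest
```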

Linux terminal: Recursive search for string only in files w given file extension; display file name and absolute path

I'm new to Linux terminal; using Ubuntu Peppermint 5.
I want to recursively search all directories for a given text string (eg 'mystring'), in all files which have a given file extension (eg. '*.doc') in the file name; and then display a list of the file names and absolute file paths of all matches. I don't need to see any lines of content.
This must be a common problem. I'm hoping to find a solution which does the search quickly and efficiently, and is also simple to remember and type into the terminal.
I've tried using 'cat', 'grep', 'find', and 'locate' with various options, and piped together in different combinations, but I haven't found a way to do the above.
Something similar was discussed on:
How to show grep result with complete path or file name
and:
Recursively search for files of a given name, and find instances of a particular phrase AND display the path to that file
but I can't figure a way to adapt these to do the above, and would be grateful for any suggestions.
According to the grep manual, you can do this using the --include option (combined with the -l option if you want only the name — I usually use -n to show line numbers):
--include=glob
Search only files whose name matches glob, using wildcard matching as described under --exclude.
-l
--files-with-matches
Suppress normal output; instead print the name of each input file from which output would normally have been printed. The scanning of each file stops on the first match. (-l is specified by POSIX.)
A suitable glob would be "*.doc" (ensure that it is quoted, so the shell passes it to grep unexpanded).
GNU grep also has a recursive option -r (not in POSIX grep). Together with the globbing, you can search a directory-tree of ".doc" files like this:
grep -r -l --include="*.doc" "mystring" .
If you wanted to make this portable, then find is the place to start. But using grep's extension makes searches much faster, and is available on any Linux platform.
find . -name '*.doc' -exec grep -l 'mystring' {} \; -print
How it works:
find searches recursively from the given path .
for all files whose name matches '*.doc';
-exec grep runs grep on each file found,
-l suppresses grep's normal output
while it searches inside the files for 'mystring';
the expression for grep ends with {} \;
and -print prints the names of all files in which grep found 'mystring'.
EDIT:
To get only results from the current directory, without recursion, you can add
-maxdepth 1 to find (with -maxdepth 0, find would examine only the starting point . itself and match no files).

Retrieving the sub-directory, which had most recently been modified, in a Linux shell script?

How can I retrieve the sub-directory, which had most recently been modified, in a directory?
I am using a shell script on a Linux distribution (Ubuntu).
Sounds like you want the ls options
-t sort by modification time, newest first
To show only directories, use something like this answer suggests (Listing only directories using ls in bash: An examination):
ls -d */
And if you want each directory listed on its own line (assuming your file/directory names contain no newlines or crazy characters), I'd add -1. All together, this should list the directories in the current directory, with the most recently modified at the top:
ls -1td */
And only the single newest directory:
ls -1td */ | head -n 1
Or, if you want to compare against a specific time, you can use find and its options like -cmin, -cnewer, -ctime, -mmin, and -mtime; find can also handle crazy names (newlines, spaces, etc.) with null-terminated output options like -print0.
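A find-based sketch that survives crazy names, assuming GNU find (-printf is not POSIX). The older/newer directory names are invented for the demo, and GNU touch -d backdates one of them:

```shell
# Demo setup (invented names); backdate one directory (GNU touch).
tmp=$(mktemp -d); cd "$tmp"
mkdir older newer
touch -d '2 days ago' older

# Print "mtime path" for each immediate subdirectory,
# sort newest first, and keep only the top path:
find . -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\n' \
    | sort -rn | head -n 1 | cut -d' ' -f2-
```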
How much the subdirectory is modified is irrelevant. Do you know the name of the subdirectory? Get its content like this:
files=$(ls subdir-name)
for file in ${files}; do
echo "I see there is a file named ${file}"
done
