Linux find command questions

I do not have a working Linux system to try these commands on, so I am asking here whether what I am planning to do is correct. (I am doing this while downloading an ISO over a connection so slow that I think dial-up would be faster.)
1. I am trying to find all files with the .log extension in the /var/log directory and its sub-directories, writing standard out to logdata.txt and standard error to logerrors.txt.
I believe the command would be:
$ find /var/log/ -name *.log 1>logdata.txt 2>/home/username/logs/logerrors.txt
2. Find all files with the .conf extension in the /etc directory. Standard out should go to a file called etcdata and standard error to etcerrors.
$ find /etc -name *.conf 1>etcdata 2>etcerrors
3. Find all files that have been modified in the last 30 minutes in the /var directory. Standard out is to go into vardata and errors into varerrors.
Would that be:
$ find /var -mmin 30 1>vardata 2>varerrors
Are these correct? If not what am I doing wrong?

1. I am trying to find all files with the .log extension in the /var/log directory and its sub-directories, writing standard out to logdata.txt and standard error to logerrors.txt.
Here you go:
find /var/log/ -name '*.log' >logdata.txt 2>/home/username/logs/logerrors.txt
Notes:
You need to quote '*.log'; otherwise the shell may expand the pattern itself (if matching files exist in the current directory) before passing it to find.
No need to write 1>file; >file is enough.
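To see the expansion problem in action, here is a hypothetical session (assuming the current directory happens to contain a file named foo.log):
$ touch foo.log
$ echo find /var/log/ -name *.log
find /var/log/ -name foo.log
find would then look for files literally named foo.log rather than matching the pattern, and with more than one .log file in the directory it would instead fail with a "paths must precede expression" error.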
2. Find all files with the .conf extension in the /etc directory. Standard out should go to a file called etcdata and standard error to etcerrors.
As earlier:
find /etc -name \*.conf >etcdata 2>etcerrors
Here I escaped the * another way, for the sake of an example. This is equivalent to '*.conf'.
3. Find all files that have been modified in the last 30 minutes in the /var directory. Standard out is to go into vardata and errors into varerrors.
find /var -mmin -30 >vardata 2>varerrors
I changed -mmin 30 to -mmin -30. This way it matches files modified within the last 30 minutes. Otherwise it matches only files that were modified exactly 30 minutes ago.
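As a quick reference for find's numeric arguments (a sketch; times are rounded to whole minutes, see man find for the exact rules):
find /var -mmin -30    # modified less than 30 minutes ago
find /var -mmin 30     # modified exactly 30 minutes ago
find /var -mmin +30    # modified more than 30 minutes ago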

When using wildcards in the command, you need to make sure they are not interpreted by the shell, so it is better to put the expression containing wildcards in quotes. Thus, the first one will be:
find /var/log/ -name "*.log" 1>logdata.txt 2>/home/username/logs/logerrors.txt
The same comment applies to the second one, where you should have "*.conf".

How does the find command search for files

Branching off from another thread, Move file that has aged x minutes, this question came up:
How does the find command typically found on Linux search for files in the current directory?
Consider a directory that contains a fairly large number of files. Then:
Firstly, find MY_FILE.txt returns immediately, and secondly, find . -name MY_FILE.txt takes much longer.
I used strace -c to see what happens for both and I learned that the second command invokes a directory scan, which explains why it's slower.
So, the first command must be optimized. Can anybody point me to the appropriate resource or provide a quick explanation how this might be implemented?
The syntax for find is find <paths> <expression>, where paths is a list of files and directories to start the search from. find starts from those locations and then recurses (if they're directories).
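For instance (a hypothetical invocation), you can pass several starting points at once, and find searches each of them in turn:
find /etc /usr/local -name '*.conf'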
When you write find . -name MY_FILE.txt it performs a recursive search under the ./ directory. But if you write find MY_FILE.txt then you're telling it to start the search at ./MY_FILE.txt, and so it does:
$ strace -e file find MY_FILE.txt
...
newfstatat(AT_FDCWD, "MY_FILE.txt", 0x556688ecdc68, AT_SYMLINK_NOFOLLOW) = -1 ENOENT (No such file or directory)
...
find: ‘MY_FILE.txt’: No such file or directory
+++ exited with 1 +++
Since the path doesn't exist, it only takes a single system call to determine that there's no such file. It calls newfstatat(), gets a No such file or directory error, and that's that.
In other words, find MY_FILE.txt isn't equivalent to find . -name MY_FILE.txt. Heck, I wouldn't even call it useful because you're not asking it to search. You're just asking it to tell you if MY_FILE.txt exists in the current directory or not. But you could find that out by simply calling ls MY_FILE.txt.
Here's the difference:
[~]$ cd /usr
[/usr]$ find . -name sha384sum
./bin/sha384sum
[/usr]$ find sha384sum
find: ‘sha384sum’: No such file or directory
The first one performs a recursive search and finds /usr/bin/sha384sum. The second one doesn't recurse and immediately fails because /usr/sha384sum doesn't exist. It doesn't look any deeper. It's done in a nanosecond.

When to use -type d -exec chmod 750 / 640 in Docker

I've started to work with Docker for local development, moving from installing everything on my Mac to containers. Looking through a number of projects, I regularly see the following shell commands in particular:
find /www -type d -exec chmod 750 {} \;
find /www -type f -exec chmod 640 {} \;
Firstly, what are they trying to achieve; secondly, what do the commands actually mean; and lastly, why/when would you want or need to use this?
I recently duplicated and modified another project and found that pulling these commands out seemed to make no difference (fair enough, it was no longer based on the same base container... but still).
Any glimmer of enlightenment would be greatly appreciated.
EDITS:
That handy link in the comments below to explainshell tells us:
What: find all the folders in /www and execute the chmod command, changing the permissions to 750
- still unsure of 750, and more importantly why you would do this.
The commands set all files and directories to be readable and writable by the owner and readable by the group; the files cannot be executed by anyone.
You might want to read up on Unix permissions in a bit more detail first.
find /www -type f -exec chmod 640 {} \;
Find all files under /www and give the owner read and write access (6) and the group read access (4). Other users have no access (0).
find /www -type d -exec chmod 750 {} \;
Find all directories under /www and set the user to have read, write and execute permissions (7) and the group to have read and execute permissions (5) to those directories. Other users have no permissions (0).
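To check the result, you could run something like this (a hypothetical session; the paths are placeholders and GNU stat is assumed):
$ stat -c '%a %A %n' /www /www/index.html
750 drwxr-x--- /www
640 -rw-r----- /www/index.html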
The \; after each -exec terminates the command to be executed; it must be escaped when run in a shell so it is not interpreted as a plain ;, which would end the shell command. The same effect can be achieved with +, as shown below, which is easier to read (it doesn't need to be escaped) and more efficient, since find passes many filenames to a single chmod invocation instead of running one per file. That batching can cause differences in output if you are relying on stdout/stderr somewhere else.
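For example, the same commands written with the + terminator (a sketch using standard find syntax):
find /www -type d -exec chmod 750 {} +
find /www -type f -exec chmod 640 {} +
Here find collects the matched names and runs chmod a handful of times rather than once per file.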
Execute permission on a directory means that a user can change into the directory and access the files inside. Since a directory can't be executed in the sense of an executable file, the execute bit was overloaded to mean something else.
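A quick hypothetical demonstration (the directory name is made up):
$ mkdir demo && touch demo/file && chmod 640 demo
$ cd demo
bash: cd: demo: Permission denied
$ chmod 750 demo && cd demo    # works now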
The link Cyrus posted to explainshell.com is an excellent tool as well.

Linux "find" returns all files

A few days ago I was reading about the Linux find tool and based on that I issued the following command to see if I have the Python.h file:
find . 'Python.h'
The problem is that all files in the current dir and subdirs are returned. Shouldn't I only get what I'm looking for?
You left out the parameter specifier -name:
find ./ -name 'Python.h'
find will recurse through all directories in the current directory. If you just want to see whether you have a file in the current directory, use ls:
ls Python.h
Use the -name switch:
find . -name 'Python.h'
Otherwise it takes the name as a location to start searching from.
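To illustrate (a hypothetical session in a directory containing only a.txt and b.txt):
$ find . 'Python.h'
.
./a.txt
./b.txt
find: ‘Python.h’: No such file or directory
With no expression, find . prints everything under the current directory, and Python.h is treated as a second starting point that doesn't exist.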

Linux Shell Script to change file mode

I am entirely new to shell scripts. I know how to change the mode for a single file.
But I need to meet the following requirement, and my operating system is Red Hat Enterprise Linux.
I need to find all files in a directory that have mode 640 and then change them to 644.
Likewise, I have 10 directories where I need to recursively find all the files and change their mode to 644.
Later I need to send an email listing the file names whose mode was changed.
Expecting your kind assistance to complete this requirement.
Some research points to get you going.
The find command can be used to find files of varying properties under a given point.
You can use it with -type to select specific file types (regular files, directories, etc).
You can use it with -maxdepth to restrict how deep it will go down the tree.
You can use it with -perm to select only files with specific permissions.
You can use it with -print to output the filenames, including capturing them to a text file for later mailing (with a tool like mailx).
You can use it with -exec to carry out arbitrary commands on each file matching the conditions.
chmod is the command to change permissions.
For example, the following command will find all regular files matching *.dat, in the current directory only (no subdirectories) and with permission 640, then change those permissions to 644:
find . -maxdepth 1 -type f -name '*.dat' -perm 640 -exec chmod 644 {} ';'
(-maxdepth is placed first because GNU find warns if it appears after tests like -type.)
All these options, and more, can be found in the manpage for find with the command:
man find
or by looking for some good explanations, such as the GNU find documentation.
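For the mailing requirement, here is a rough sketch (untested; it assumes mailx is installed, and admin@example.com is a placeholder address):
find . -maxdepth 1 -type f -name '*.dat' -perm 640 -exec chmod 644 {} ';' -print > changed.txt
[ -s changed.txt ] && mailx -s 'Permissions changed' admin@example.com < changed.txt
The -print after -exec only fires for files chmod actually succeeded on, so changed.txt lists exactly the changed files.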
However, find is not a tool for the faint of heart, it will bite you at every opportunity. Expect to ask at least another ten questions here before you get what you need :-)

How to have cron delete files over a certain size

Can someone please tell me what command line to put into cron to delete all files over a certain size in a certain directory? Thank you.
(I'm on an apache server... and I'm using the cpanel cron program)
Try using
find path/to/directory -type f -size +150k -delete
for specifying the file size in KB (the -delete action removes the matches; without it, find only lists them). In case you need a limit in MB some other day, use 150M instead.
The current command will delete all files within that directory and its subdirectories, so you may want to use the -maxdepth option to delete files within a directory but not in its subdirectories:
find path/to/directory -maxdepth 1 -type f -size +150k -delete
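A hypothetical crontab entry (the schedule and path are placeholders) that would run the cleanup daily at 02:00:
0 2 * * * find path/to/directory -maxdepth 1 -type f -size +150k -delete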
