How to find a particular folder through terminal in fedora - linux

I am currently using Linux (Fedora 15) and trying to search for a folder across the entire file system with the command below:
find / -name "apache-tomcat*"
The execution of the above command takes so long that a user can't wait for it, and the results look something like this:
[root@user fedrik]# find / -name "apache-tomcat*"
find: `/proc/6236/task/6236/ns/net': No such file or directory
find: `/proc/6236/task/6236/ns/uts': No such file or directory
find: `/proc/6236/task/6236/ns/ipc': No such file or directory
find: `/proc/6236/ns/net': No such file or directory
find: `/proc/6236/ns/uts': No such file or directory
find: `/proc/6236/ns/ipc': No such file or directory
find: `/proc/6462/task/6462/ns/net': No such file or directory
.................
.................
But as I mentioned, it takes a long time to run and sometimes it gets stuck. Can anyone please tell me how to search for a particular folder by name from the Linux terminal in a way that is very fast and still searches the entire file system, as I did above with /?
Edit
Actually my intention is to search for a folder named something like apache-tomcat-7.0.37 in the entire filesystem;
for example, there may be many folders like apache-tomcat-6.0.45, apache-tomcat-5.1.7, apache-tomcat-5.0.37, ... in different locations on the filesystem.
As you can see, only the last (numerical) part changes while the rest of the folder name stays the same. So is there a way to search for these kinds of folders irrespective of the numerical suffix, for example by using a regular expression or something like that?
Ultimately I want to find folders of the form apache-tomcat-xxxxxxx on the entire file system, because if we search for just apache-tomcat we can get hundreds or even thousands of results, which are difficult to analyze and sift through.

Try this:
locate apache-tomcat
It uses a database (updated by the hilariously-named updatedb, which you can run with sudo updatedb to refresh the search index).

locate apache-tomcat | grep -E '/apache-tomcat-[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+$'
(locate prints full paths, so the pattern matches the final path component rather than anchoring at the start of the line.)
or just use [0-9] instead of [[:digit:]]. That's probably more readable. Or
locate apache-tomcat | perl -ne 'print if m{/apache-tomcat-\d+\.\d+\.\d+$}'
Whatever you do, you definitely want to use locate instead of find, as it will be much faster.
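That said, if the locate database is stale or unavailable and you need a live search, here is a sketch using GNU find's -regex (the version pattern is an assumption, matching names like apache-tomcat-7.0.37):
find / -type d -regextype posix-extended \
    -regex '.*/apache-tomcat-[0-9]+\.[0-9]+\.[0-9]+' 2> /dev/null
The 2> /dev/null discards the /proc permission noise shown in the question.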

Related

Need a Linux command to search for a particular file in all directories

I need to search for a particular file name "i2need.txt" in all directories.
I tried this:
find * -type f
But this lists all the files present in the directories; I need only the i2need.txt file. Can anyone please help me with this?
If I'm understanding this question correctly, you should just be able to start the search at the root directory /:
find / -name <filename>
Of course you'll likely get a lot of permission-related errors, but the paths of the files you do have access to will be printed on stdout.
You can also filter out the errors with 2> /dev/null (redirecting stderr to /dev/null, effectively dropping the error messages).
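Putting the two together for this question:
find / -name 'i2need.txt' 2> /dev/null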
Alternately
locate i2need.txt
You might need to do this first
sudo updatedb

Find out the path to the original file of a hard link

Now, I get the feeling that some people will think that there is no original file of a hard link, but I would strongly disagree because of the following experiment I did.
Let's create a file with the content pwd and make a hard link to a subfolder:
echo "pwd" > original
mkdir subfolder
cp -l original subfolder/hardlink
Now let's see what the files output if I run them with the shell:
sh original
sh subfolder/hardlink
The output is the same, even though the file hardlink is in a subfolder!
Sorry for the long intro, but I wanted to make sure that nobody says that my following question is irrelevant.
So my question now is: If the content of the original file was not conveniently pwd, how do I find out the path to the original file from a hard link file?
I know that Linux programs seem to know the path somehow, but not the filename, because some programs returned error messages saying that <path to original file>/hardlinkname was not found. But how do they do that?
Thanks in advance for an answer!
Edit: Btw, I fixed the error messages mentioned above by naming the hard links the same as the original file.
But how do they do that?
By looking for the same inode value. Here's one way you can list files with the same inode:
find /home -xdev -samefile original
replace /home with any other starting directory for find to start searching.
how do I find out the path to the original file from a hard link file?
For hard links there are no multiple files, just one file (inode) with multiple (file) names.
ADDENDUM:
is there no other way to find the hard links of an inode than searching through folders?
ln, ls, find, and stat are the common ways of discovering and querying the filesystem for inodes. Then depending on what next you want to accomplish, many file, directory, archiving, and searching commands recognize inode values. Some may require a special -inum or --follow or equivalent option to specify inodes.
The find example I gave above is just one such usage. Another is to combine with xargs to operate on all the found files. Here's one way to delete them all:
find /home -xdev -samefile original -print0 | xargs -0 rm
(The -print0 and -0 options keep filenames containing spaces or newlines safe.)
Look under --help for other standard os commands. Most Linux distributions also come with help files that explain inodes and which tools work with inodes.
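For instance, here is a minimal sketch combining stat with find's -inum option (the file name and starting directory are illustrative):
# look up the inode number of a known name
inum=$(stat -c '%i' original)
# print every name on this filesystem that shares that inode
find /home -xdev -inum "$inum"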
pwd prints the present working directory, so of course the output is the same, since you didn't cd into your subfolder.
Sorry to say, but there is no "original" file once you create other hard links. If you want to find the other hard links of a file, look at How to find all hard links to a given file? for example.
Agree with @Emacs User. Your example of pwd is irrelevant and has confused you.
There is no concept of an original file for hard links. Each file name is just a reference to the content on disk pointed to by the inode, which keeps a count of those references (see ls -li original subfolder/hardlink). So even if you delete the original file, the hard link still points to the same content.
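A quick illustration (the inode number, owner, and timestamps below are made up; the point is that both names show the same inode in the first column and a link count of 2 in the third):
$ ls -li original subfolder/hardlink
1843231 -rw-r--r-- 2 user user 4 Jan  1 12:00 original
1843231 -rw-r--r-- 2 user user 4 Jan  1 12:00 subfolder/hardlink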
It is impossible to find out, as all hard links are treated the same way, pointing to one inode.

Fast way to find file names in Linux and specify directory

This command is slow: find / -name 'program.c' 2>/dev/null
1) Any faster alternatives?
2) Is there an alternative to the above command to search for a file within a specific nested directory (but not the entire system)?
The first / in your command is the base directory from which find will begin searching. You can specify any directory you like, so if you know, for example, that program.c is somewhere in your home directory, you could do find ~ -name 'program.c', or if it's in, say, /usr/src, do find /usr/src -name 'program.c'.
That should help with both 1 and 2.
If you want a command other than find that can be faster, check out the mlocate package. If you've done a recent updatedb (or had cron do it for you overnight), you can do locate <pattern> and it will show you everywhere that pattern matches in a file or directory name, and that's usually quite fast.
For fast searching, you probably want locate
It is usually set up to scan the filesystem daily and index the files.
http://linux.die.net/man/1/locate
Although locate and updatedb index the whole system, the search is usually much faster.

Linux shell: Is it possible to speed up finding files using "find" by using a predefined list of files/folders?

I primarily program in Linux, using the tcsh shell. By default, my current directory is the root of my code base; I use find to locate whichever file I'm interested in modifying, and once find shows the location of the file, I edit it in Vim.
The problem is that, due to the size of the code base, every time I ask find for the location of a file it takes at least 4-5 seconds to complete the search: waits that are too long to ignore but too short to be used for anything else! So, since the rate at which new files are added to the code base is very small, I'm looking for a way to:
1) Generate the list of all files in my code base
2) Have find look in only those locations/files to answer my query
I've seen how opening files in cscope is lightning fast, since it stores the list of files ahead of time. I'd like to use the same mechanism for find, just not from within the cscope window but from the ordinary command line.
Any ideas?
Install the locate, mlocate, or slocate package from your distribution, and either wait for cron to run the update task :) or run the updatedb command manually via the /etc/cron.daily/mlocate or similar file.
$ time locate kernel.txt
/home/sarnold/Local/linux-2.6/Documentation/sysctl/kernel.txt
/home/sarnold/Local/linux-2.6-config-all/Documentation/sysctl/kernel.txt
/home/sarnold/Local/linux-apparmor/Documentation/sysctl/kernel.txt
/usr/share/doc/libfuse2/kernel.txt.gz
real 0m0.595s
Yes. See slocate (or updatedb & locate).
The -U flag of updatedb is particularly interesting because you can index just the directory that contains your code (and thus updating or creating the database will be quick).
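For example, a sketch using mlocate's updatedb with a private database (the paths are assumptions):
# index only the code base into a private, user-owned database
updatedb -l 0 -U ~/codebase -o ~/.codebase.db
# search just that database
locate -d ~/.codebase.db foo.c
Here -l 0 turns off the permission-visibility checks that otherwise require the database to be built by root.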
You could write a list of directories to a file and use them in your find command:
$ find /path/to/src -type d > dirs
$ find $(cat dirs) -type f -name "foo"
Alternatively, write a list of files to a file and use grep on it. The list of files is more likely to change than the list of dirs though.
$ find /path/to/src -type f > files
$ vi $(grep foo files)
find in conjunction with xargs (substituted for -exec) can differ significantly in execution time:
http://forrestrunning.wordpress.com/2011/08/01/find-exec-xargs/
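The difference comes down to process count: -exec ... \; spawns one process per matched file, while xargs batches many arguments into few processes. A small illustration (the *.o pattern is just an example):
# one rm process per matching file (slow when there are many)
find . -name '*.o' -exec rm {} \;
# a handful of rm processes, each handling many files
find . -name '*.o' -print0 | xargs -0 rm
GNU find's -exec ... + form batches arguments in the same way.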

Find folders with specific name and no symlink pointing to them

I'm trying to write a shell script under Linux which lists all folders (recursively) with a certain name that have no symlink pointing to them.
For example, I have:
/home/htdocs/cust1/typo3_src-4.2.11
/home/htdocs/cust2/typo3_src-4.2.12
/home/htdocs/cust3/typo3_src-4.2.12
Now I want to go through all subdirectories of /home/htdocs and find those typo3_* folders that no symlink points to.
Should be possible with a shellscript or a command, but I have no idea how.
Thanks for your help
Stefan
I think none of the common file systems record in a file's inode whether any symlinks point to it, so you would have to scan all other files to see whether any of them is a symlink to this one. If you don't limit the depth of your search, this might take a very long time. If you want to perform that search in /home/htdocs, for example, it would work something like this:
# find the specified folders:
find /home/htdocs -name 'typo3_*' -type d | while IFS= read -r folder; do
    # list all symlinks that resolve to $folder (excluding $folder itself)
    find -L /home/htdocs -samefile "$folder" | grep -Fxv "$folder"
done
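To answer the original question directly, here is a sketch (assuming the same /home/htdocs layout) that prints only the folders no symlink resolves to:
# print each typo3_* folder that no symlink under /home/htdocs points to
find /home/htdocs -name 'typo3_*' -type d | while IFS= read -r folder; do
    # count names other than $folder itself that resolve to it
    links=$(find -L /home/htdocs -samefile "$folder" | grep -Fxvc "$folder")
    if [ "$links" -eq 0 ]; then
        echo "$folder"
    fi
done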

Resources