I need to grep for a particular port number from a huge set of files.
I am using a command:
find . |xargs grep "9461"
But it does not find all the occurrences of the number 9461.
Can anyone suggest a better Unix/Linux command to do so?
The kinds of files it finds are x.log, y.txt, z.htm, a.out, etc.
But it was not able to find abc.conf files.
You surely have some reason for using find in combination with grep, but just in case:
You can replace your command with:
grep -r "9461" .
and if you want line numbers as well:
grep -rn "9461" .
As Jonathan Leffler commented, there is also the option -e that makes grep match against a regular expression, so the ultimate command would be
grep -rne 9461 .
You should take a look at the grep man page.
A final note: check whether what you want to grep for is "9461" (with the quotes in the file) or just 9461 without them.
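If you do stay with find, note that filenames containing spaces break a plain pipe into xargs, which is a common reason for missed files. A null-delimited pipeline (a sketch, assuming GNU find and xargs) avoids that, and -type f keeps directories out of the argument list:
find . -type f -print0 | xargs -0 grep -n "9461"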
I am using this command to search for all files containing this word. I want to remove all files containing this word in a specific directory. The grep command works perfectly; suggest how I can use
rm -rf
with the command below:
grep -l -r -i "Pending" . | grep -n . | wc -l
This could be done by using the -l flag and piping the filenames to xargs:
-l
(The letter ell.) Write only the names of files containing selected
lines to standard output. Pathnames are written once per file searched.
If the standard input is searched, a pathname of (standard input) will
be written, in the POSIX locale. In other locales, standard input may be
replaced by something more appropriate in those locales.
grep -l -r 'Pending' . | xargs rm
The above will delete all files in the current directory containing the word Pending.
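If any of the filenames contain spaces or newlines, the GNU versions of grep and xargs can pass the list null-delimited instead (a sketch; -Z terminates each printed filename with a NUL byte):
grep -lrZ 'Pending' . | xargs -0 rm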
If I have two strings, for example "class" and "btn", what is the Linux command that would allow me to search for these two strings in an entire directory?
To be more specific, let's say I have a directory that contains a few folders with a bunch of .php files. My goal is to be able to search throughout those .php files so that it prints out only files that contain "class" and "btn" on one line. Hopefully this clarifies things.
Thanks,
I normally use the following to search for strings inside my source code. It searches for the string and shows the exact line number where that text appears, which is very helpful when searching source files. You can always pipe the output to another grep to filter the results.
grep -rn "text_to_search" directory_name/
example:
$ grep -rn "angular" menuapp
$ grep -rn "angular" menuapp | grep some_other_string
output would be:
menuapp/public/javascripts/angular.min.js:251://# sourceMappingURL=angular.min.js.map
menuapp/public/javascripts/app.js:1:var app = angular.module("menuApp", []);
grep -rE 'class|btn' /path/to/directory
grep is used to search for a string in files. With the -r flag it searches recursively through all files in a directory, and -E enables extended regular expressions so the | alternation works. Note that the pattern must come before the directory, and that this matches lines containing either string; see the sketch below for requiring both on one line.
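To require both strings on the same line, as the question asks, one option (a sketch) is to alternate the two possible orders:
grep -rE 'class.*btn|btn.*class' /path/to/directory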
Or, alternatively using the find command to "identify" the files to be searched instead of using grep in recursive mode:
find /path/to/your/directory -type f -exec grep "text_to_search" {} +
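A note on the terminator: + batches many files into each grep invocation (much like xargs), while \; runs grep once per file. For example (the -name filter and -l are assumptions to match the question's .php setup):
find /path/to/your/directory -type f -name '*.php' -exec grep -l "text_to_search" {} +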
I have several documents hosted on a cloud instance. I want to extract all words conforming to a specific pattern into a .txt file. This is the pattern:
ABC123A
ABC123B
ABC765A
and so on. Essentially the words start with a specific character string 'ABC', have a fixed number of numerals, and end with a letter. This is my code:
grep -oh ABC[0-9].*[a-zA-Z]$ > /home/user/abcLetterMatches.txt
When I execute the query, it runs for several hours without generating any output. I have over 1100 documents. However, when I run this query:
grep -r ABC[0-9].*[a-zA-Z]$ > /home/user/abcLetterMatches.txt
the list of files with the strings is generated in a matter of seconds.
What do I need to correct in my query? Also, what is causing the delay?
UPDATE 1
Based on the answers, it's evident that the command is missing the file name on which it needs to be executed. I want to run the code on multiple document files (>1000).
The documents I want searched are in multiple sub-directories within a directory. What is a good way to search through them? Doing
grep -roh ABC[0-9].*[a-zA-Z]$ > /home/user/abcLetterMatches.txt
only returns the file names.
UPDATE 2
If I use the updated code from the answer below:
find . -exec grep -oh "ABC[0-9].*[a-zA-Z]$" >> ~/abcLetterMatches.txt {} \;
I get a "no such file or directory" error.
UPDATE 3
The pattern can be anywhere in the line.
You can use this regexp:
grep -E "^ABC[0-9]{3}[A-Z]$" docs > filename
ABC123A
ABC123B
ABC765A
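Since UPDATE 3 says the pattern can appear anywhere in the line, a recursive variant without the anchors, using -o to print just the matching words, might look like this (a sketch; GNU grep and the question's output path assumed):
grep -rhoE "ABC[0-9]{3}[A-Z]" . > /home/user/abcLetterMatches.txt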
There is no delay; grep is just waiting for the input you didn't give it (it reads standard input by default). You can correct your command by supplying a filename argument:
grep -oh "ABC[0-9].*[a-zA-Z]$" file.txt > /home/user/abcLetterMatches.txt
Source (man grep):
SYNOPSIS
grep [OPTIONS] PATTERN [FILE...]
To perform the same grepping on several files recursively, combine it with the find command:
find . -type f -exec grep -oh "ABC[0-9].*[a-zA-Z]$" {} \; >> ~/abcLetterMatches.txt
(The redirection goes at the end, after the \;, and -type f keeps grep from being run on directories.)
This does what you ask for:
grep -hr '^ABC[0-9]\{3\}[A-Za-z]$'
-h to not get the filenames.
-r to search recursively. If no directory is given (as above) the current one is used. Otherwise just specify one as the last argument.
Quotes around the pattern to avoid accidental globbing, etc.
^ at the beginning of the pattern, together with $ at the end, makes it match only whole lines. (Not sure if this was a requirement, but the sample data suggests it.)
\{3\} to specify that there should be three digits.
No .* as that would match a whole lot of other things.
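If you prefer extended regular expressions, the same pattern works with grep -E and drops the backslashes:
grep -hrE '^ABC[0-9]{3}[A-Za-z]$'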
I'm trying to rename the files like:
Name1_searchstats_metrics_20141230T133000036.log
to something like: Name2_searchstats_metrics_20141230T133000036.log
I'm trying: rename -n 's/\Name1_/\Name2_/' *.log but am getting the error:
bash: /usr/bin/rename: Argument list too long
Can someone please help?
Probably the easiest solution, since you're using bash, is to iterate over the list of files with a for loop:
$ for i in *; do rename -n 's/Name1_/Name2_/' "$i"; done
You can also filter the files if needed by using any wildcard in the loop, like *.log. (Keep in mind that rename's -n flag only shows what would be renamed; drop it to actually perform the rename.)
There are other, more convoluted ways to achieve this, especially if you need to do particular string manipulation of the file names, e.g. using awk or find -exec (a sketch of the latter follows), but hopefully this helps you sort things out in a clear way.
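For instance, a find -exec variant (a sketch; assumes the Perl rename and files in the current directory). find batches the arguments itself, so the "argument list too long" limit never applies:
find . -maxdepth 1 -name 'Name1_*.log' -exec rename -n 's/Name1_/Name2_/' {} +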
Edited answer as suggested by @glglgl.
A more comprehensive and detailed explanation of the above can be found on Super User:
https://superuser.com/questions/31464/looping-through-ls-results-in-bash-shell-script
If the argument list is too long for a Linux command, xargs usually comes to the rescue. Note that ls *.log would expand the same glob and hit the same limit, so this sketch feeds xargs from the shell built-in printf instead (built-ins are not subject to the kernel's argument-length limit), null-delimited to survive spaces in filenames:
printf '%s\0' *.log | xargs -0 rename -n 's/Name1_/Name2_/'
Hello
I'm looking for a script or program that uses keywords or pattern search in files (php, html, etc.) and shows where each file is.
I use the command cat /home/* | grep "keyword"
but I have too many folders and files, and this command takes far too long. :/
I need this script to find fake websites (paypal, ebay, etc)
find /home -exec grep -s "keyword" {} \; -print
You don't really say what OS (and shell) you are using. You might want to retag your question to help us out.
Because you mention cat | ..., I am assuming you are using a Unix/Linux variant, so here are some pointers for looking at files. (bmargulies' solution is good too.)
I'm looking some script or program that use keywords or pattern search in files
grep is the basic program for searching files for text strings. Its usage is
grep [-options] 'search target' file1 file2 .... filen
(Note that 'search target' contains a space; if you don't surround spaces in your search target with double or single quotes, you will have a minor error to debug.)
(Also note that 'search target' can use a wide range of wild-card characters, like ., ?, +, *, and many more, but that is beyond the scope of your question.) ... anyway ...
As I guess you have discovered, you can only cram so many files at a time onto the command line, even when using wild-card filename expansion. Unix/Linux almost always has a utility that can help with that:
startDir=/home
find ${startDir} -print | xargs grep -l 'Search Target'
This, as one person will be happy to remind you, will require further enhancements if your filenames contain whitespace characters or newlines.
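With GNU find and xargs, the usual enhancement is to pass the names null-delimited, which survives both whitespace and newlines:
find ${startDir} -type f -print0 | xargs -0 grep -l 'Search Target'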
The options available for grep can vary wildly based on which OS you are using. If you're lucky, you can type the following to get the man page for your local grep:
man grep
If you don't have your page buffer set up for a large size, you might need to do
man grep | page
so you can see the top of the 'document'. Press any key to advance to the next page, and when you are at the end of the document the last key press returns you to the command prompt.
Some options that most greps have that might be useful to you are
-i (ignore case)
-l (list filenames only, where the text is found)
There is also fgrep, which stands for 'fixed-string' grep;
with the -f option you can give it a file of search targets to scan for, and it is used like
fgrep [-other_options] -f srchTargetsFile file1 file2 ... filen
I need this script to find fake websites (paypal, ebay, etc)
Final solution
You can make a srchFile like:
paypal.fake.com
ebay.fake.com
etc.fake.com
and then, combining it with the above, run the following:
startDir=/home
find ${startDir} -print | xargs fgrep -il -f srchFile
Some greps require that -f and srchFile be written together, as -fsrchFile.
Now you are finding all files starting at /home and searching with fgrep for paypal, ebay, etc. in all of them. The -l says it will ONLY print the filename where a match is found. You can remove the -l and then you will see the output of what is found, prepended with the filename.
IHTH.