A quick way to search for certain lines of code through many files in a project [closed] - linux

I am currently working on a C project that contains over 50 .h and .c files. I would like to know if there is a quick way to search for certain lines of code (like Ctrl+F within a single window, for example) without having to search each file one by one.
Thank you in advance

On Linux/Unix there's a command-line tool called grep that you can use to search multiple files for a string. For example, if I wanted to search for strcpy in all files:
~/sandbox$ grep -rs "strcpy" *
test.c: strcpy(OSDMenu.name,"OSD MENU");
-r makes the search recursive, so all files in all directories (from the current one down) are searched. -s suppresses error messages, in case you run into non-readable files.
Now if you want to search for something but can't remember the case, there are options like -i to allow case-insensitive searches.
~/sandbox$ grep -rsi "myint" *
test.c: int myInt = 5;
test.c: int MYINT = 10;
You can also use regular expressions, in case you forgot exactly what the thing you were looking for was called (indeed, the name 'grep' comes from the ed command g/re/p -- global/regular expression/print):
~/sandbox$ grep -rsi "my.*" *
test.c: int myInt = 5;
test.c: int MYINT = 10;
test.c: float myfloat = 10.9;

Install Cygwin if you aren't using *nix, and use find/grep, e.g.
find . -name '*\.[ch]' | xargs grep -n 'myfuncname'
In fact, I made a little script out of this, findinsrc, which can be called as findinsrc path1 [path2 ...] pattern. The central line, after checking arguments etc., is
find "${#:1:$#-1}" -type f \( -iname '*.c' -o -iname '*.cpp' -o -iname '*.h' -o -iname '*.hpp' \) -print0 | xargs -0 grep -in "${#:$#}"
"${#:1:$#-1}" are the positional parameters 1 .. n-1, that is, the path(s), supplied as the starting points for find. "${#:$#}" is the last parameter, the pattern supplied to grep.
the -o "option" to find is a logical OR combining the search criteria; because the "default" combination of options is AND, all the ORs must be parenthesized for correct evaluation. Because parentheses have special meaning to the shell, they must be escaped so that they are passed through to find as command line arguments.
-print0 instructs find to separate its output items not with a newline or space but with a null character which cannot appear in path names; this way, there is a clear distinction between whitespace in a path ("My Pictures" nonsense) and separation between paths.
-iname performs a case-insensitive name match, in case files end in .CPP etc.
xargs -0 is there specifically to digest find -print0 output: xargs will separate arguments read from stdin at null bytes, not at whitespace.
grep -in: -i instructs grep to perform a case insensitive search (which suits my bad memory and is catered exactly to this "find the bloody function no matter the capitalization you know what I mean" use case). The -n prints the line number, in addition to the file name, where the match occurred.
I have similar scripts findinmake, where the find pattern includes regular Makefiles, CMakeLists.txt and a proprietary file name; and findinscripts, which looks through bat, cmd and sh files. That seemed easier than introducing options to a generic script.
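For reference, here is a minimal sketch of what such a findinsrc script could look like as a whole; the argument check is my own addition, and only the central find/xargs line comes from above:
#!/bin/bash
# Sketch of a findinsrc-style script. Usage: findinsrc path [path ...] pattern
if [ "$#" -lt 2 ]; then
    echo "usage: ${0##*/} path [path ...] pattern" >&2
    exit 1
fi
# All arguments but the last are starting points for find; the last one is the grep pattern.
find "${@:1:$#-1}" -type f \( -iname '*.c' -o -iname '*.cpp' -o -iname '*.h' -o -iname '*.hpp' \) -print0 |
    xargs -0 grep -in "${@:$#}"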

You can use grep to search through the files using the terminal/command line.
grep -R "string_to_search" .
-R makes the search recursive, so all subdirectories are searched too.
Then comes the string you want to search for.
Then the location: . for the current directory.

On Windows you can use findstr, which will find files that contain strings that either exactly match or regular-expression match the specified string/pattern.
findstr /?
from the command line will give you the usage. It can also recurse subdirectories (/s).
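For instance, something along these lines searches all .c and .h files under the current directory (/s recurses into subdirectories, /i is case-insensitive, /n prints line numbers, /c: treats the argument as a literal string; myfuncname is just a placeholder):
findstr /s /i /n /c:"myfuncname" *.c *.h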

If you're using a text editor and the shell, then you can use shell tools like grep.
grep -R "some pattern" directory
However you should consider using an IDE such as Eclipse (it's not just for Java), Netbeans (there is a C plugin) or KDevelop. IDEs have keyboard shortcuts for things like "find everywhere the highlighted function is called".
Or of course there's Emacs...

Related

Finding .php files with certain variable declaration (string search) on command line (Shell)

I'm attempting to find all .PHP files that are at a certain depth in a directory tree (at least 4 levels down, but not more than 5 levels in).
I'm logged into my Centos server with root authority via shell.
The string I want to search for is:
$slides='';
Here is what I have in front of me; I would expect it to work. I tried to escape the $ with a \ (I thought perhaps it works like a regex, needing special chars escaped). I tried without the ='' portion, tried adding \'\' to that part, and tried removing the ='' altogether to simplify. Nothing.
find . -maxdepth 5 -mindepth 4 -type f -name '*.php' -print | xargs grep "\$slides=''" *
I'm already running it under the directory under which I want to recursively search.
Also - I have the filter to look for *.php only, but I still get a bunch of directory names in the output, each with a warning that says grep: [dir_name]: Is a directory
Clearly I am missing something here as far as syntax of grep command goes, or how the filter works here. I use grep more in PHP so this is quite a transition for me!
So you were almost right. The problem looks to have been the grep part of the command
grep "\$slides=''" *
Namely, the * was the issue. From the bash manual:
After word splitting, unless the -f option has been set (see The Set
Builtin), Bash scans each word for the characters ‘*’, ‘?’, and ‘[’.
If one of these characters appears, and is not quoted, then the word
is regarded as a pattern, and replaced with an alphabetically sorted
list of filenames matching the pattern
When you piped the files found by find into xargs and then left a * on the grep command, the shell expanded that * into every entry in the current directory, so grep searched those files and directories in addition to the filenames supplied by xargs. You cannot grep a directory without supplying the -r flag, so it returned an error for each one.
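A tiny self-contained demonstration of that effect (the scratch directory and file names are made up for illustration):
mkdir -p /tmp/globdemo/somedir
echo "\$slides='';" > /tmp/globdemo/match.php
cd /tmp/globdemo
grep "\$slides=''" *        # the * expands to match.php and somedir
# grep: somedir: Is a directory
# match.php:$slides='';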
Instead, what you wanted was to pipe the files found by find into xargs so that xargs appends that list of filenames to the grep command - that's what xargs does. From the xargs manual:
xargs reads items from the standard input, delimited by blanks (which
can be protected with double or single quotes or a backslash) or
newlines, and executes the command (default is /bin/echo) one or more
times with any initial-arguments followed by items read from
standard input. Blank lines on the standard input are ignored.
Making the correct command
find . -maxdepth 5 -mindepth 4 -type f -name '*.php' -print0 | xargs -0 grep "\$slides=''"
Using the -print0 flag in find and the -0 flag in xargs makes them use NUL as the delimiter, in case any filenames contain newlines.
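If you would rather skip xargs altogether, find can invoke grep directly; a roughly equivalent sketch (the -H just forces the file name to be printed even when only one file ends up being searched):
find . -maxdepth 5 -mindepth 4 -type f -name '*.php' -exec grep -H "\$slides=''" {} +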
If you want to use shell_exec from your PHP code, it is a program execution function which allows you to run a command like 'ls -al' in the operating system shell and get the result returned into a variable. Querystrings are not commands you can use in this way.
Do you mean running PHP from the command line so that it runs from the shell, not from the web server:
php -r 'echo "hello world\n";'
If you are running PHP 4.3 or above, you can use the PHP command line interface (CLI), which can also execute scripts stored in files. Have a look at the syntax and examples at: http://php.net/features.commandline
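For example (the script path here is hypothetical), checking which binary the CLI uses and running a stored script from the shell:
php -v
php /path/to/myscript.php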

Why does the find -regex command differ from find | grep?

The find command below outputs nothing, and does not find any "include" files or directories.
find -regex "*include*" 2>/dev/null
However piping the find command into grep -E seems to find most include files.
find ./ 2>/dev/null | grep -E "*include*"
I've left out the output since the first is blank and the second matches quite a few files.
I'm starting to need to dig through Linux system files to find the answers I need (especially to find macro values). In order to do that I have been using find | grep -E to find the files that should have the macro I am looking for.
Below is the line I tried today with find (my root directory is /), and nothing is output. I don't want to run the command as root, so I pipe the errors out to /dev/null. I checked those errors for regex syntax errors, but found nothing. It's still looping through all directories, since I still get a "find: /var/lib: Permission Denied" error.
find -regex "*include*" 2>/dev/null
However this seems to work and give me everything I want.
find ./ 2>/dev/null | grep -E "*include*"
So my main question is why does find -regex not output the same as find | grep -E ?
Regular expressions are not a language, but a general mathematical construct with many different notations and dialects thereof.
For simple patterns you can often get away with ignoring this fact, since most dialects use very similar notation, but since you are specifying an ill-defined pattern with a leading asterisk, you get into engine-specific behavior.
grep -E uses the GNU implementation of POSIX ERE, and interprets your pattern as ()*includ(e)* and therefore matches includ followed by zero or more es. (POSIX says that the behavior of a leading asterisk is undefined).
find uses Emacs Regex, and interprets it as \*includ(e)* and therefore requires a literal asterisk in the filename.
If you want the same result from both, you can use find -regextype posix-egrep, or you can specify a regex that is equivalent in both such as .*include.* to match include as a substring.
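For example, either of these should report the same paths (assuming GNU find; note that -regex matches against the whole path, hence the .* on both sides):
find . -regextype posix-egrep -regex '.*include.*'
find . | grep -E 'include'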
As I understand your question, you want to find files by name in Linux directories.
You can use the locate tool for this. On systems that use yum, install it with:
yum install locate
If you use Ubuntu:
sudo apt-get install locate
Prepare its database:
sudo updatedb
Then start the search:
locate include

Defining custom emacs find-grep shortcut

Emacs supports M-x find-grep, which searches for a string and opens two buffers: one with the search results, and the other showing the file that contains the search string.
Currently M-x find-grep expands to the following prompt: Run find (like this): find . -type f -exec grep -nH -e {} +.
How can I modify find-grep (or define a new shortcut?) to add more options to the grep and find commands
(e.g. ignore log files, or include only Java files with find . -iname '*.java')?
Do not modify find-grep. Write your own, similar, command. Start with a copy of its code, if you like. Instead of the part where it invokes the find program to implement find . -type f -exec grep -nH -e {} +, substitute your own preferred command line. Simplify and adjust to taste (e.g., find . -iname '*.java').
Both find and grep have their own languages (syntax) -- find, in particular. To use them, you need to know (1) what you are trying to do and (2) how to do that using their languages.
Unless you specify exactly what you are trying to do, the only help we can give you here is general guidance about invoking find and grep from Emacs. For that, the find-grep code is a good guide -- see above.
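For example, the substituted command line could end up looking like one of these (the pattern is a placeholder); the first restricts the search to Java files, the second skips log files:
find . -iname '*.java' -type f -exec grep -nH -e 'myPattern' {} +
find . -type f ! -name '*.log' -exec grep -nH -e 'myPattern' {} +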
The quick and dirty way is to prefix find-grep, i.e. C-u M-x find-grep. It allows you to edit the command line before executing it.
If you want to permanently change it, you can define a wrapper. This is for rgrep, but find-grep should be similar.
(defvar grep-context-lines 2
  "Default number of context lines (non-matching lines before and
after the matching line) for `rgrep-context'.")

(defun rgrep-context (arg)
  "Like `rgrep', but adds a '-C' parameter to get context lines around matches.
Default number of context lines is `grep-context-lines', and can
be specified with a numeric prefix."
  (interactive "p")
  (setq arg (or arg grep-context-lines))
  (let ((grep-find-template
         (format "find <D> <X> -type f <F> -print0 | xargs -0 -e grep <C> -nH -C %d -e <R>"
                 arg))
        grep-host-defaults-alist
        current-prefix-arg)
    (call-interactively 'rgrep)))
Note: your grep-find-template might be different; you're probably best off if you modify your default rather than just copying this one. The default is generated by grep-compute-defaults.

How do I create a bash file that creates a symbolic link (linux) for each file moved from Folder A to Folder B [closed]

How do I create a bash script that creates a symbolic link (Linux) for each file moved from Folder A to Folder B?
However, this should be done selecting only the 150 biggest files currently in Folder A.
You can probably write it as a one-liner, but it's easier in a simple bash script like:
#!/bin/bash
FolderA="/some/folder/to/copy/from/"
FolderB="/some/folder/to/copy/to/"
while read -r size file; do
    mv -iv "$file" "$FolderB"
    ln -s "${FolderB}${file##*/}" "$file"
done < <(find "$FolderA" -maxdepth 1 -type f -printf '%s %p\n' | sort -rn | head -n150)
Note ${file##*/} removes everything before the last /, per
${parameter##word}
Remove matching prefix pattern. The word is expanded to produce a pattern just as in pathname expansion. If the
pattern matches the beginning of the value of parameter, then the result of the expansion is the expanded value of
parameter with the shortest matching pattern (the ``#'' case) or the longest matching pattern (the ``##'' case)
deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and
the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the pattern removal
operation is applied to each member of the array in turn, and the expansion is the resultant list.
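A quick illustration of that prefix removal (the path is made up):
file="/some/folder/to/copy/from/My Pictures.tar"
echo "${file#*/}"     # shortest match of */ removed: some/folder/to/copy/from/My Pictures.tar
echo "${file##*/}"    # longest match of */ removed: My Pictures.tar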
Also, it may seem like a good idea to just do for file in $(command), but process substitution with while/read works better in general, avoiding word-splitting issues such as splitting up file names that contain spaces.
As with any task, break it up into smaller pieces, and things will fall into place.
Select the biggest 150 files from FolderA
This can be done with du, sort, and awk, the output of which you stuff into an array:
du -h /path/to/FolderA/* | sort -rh | head -n150 | awk '{print $2}'
Move files from FolderA into FolderB
Take the list from the last command, and iterate through it:
for file in "${myarray[@]}"; do mv "$file" /path/to/FolderB/; done
Make a symlink to the new location
Again, just iterate through the list:
for file in "${myarray[@]}"; do ln -s "/path/to/FolderB/${file/*FolderA\/}" "$file"; done
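Putting the three steps together, a rough sketch (the folder paths are placeholders, mapfile is my addition for filling the array, and, like the awk step above, it assumes file names without embedded whitespace):
#!/bin/bash
FolderA="/path/to/FolderA"
FolderB="/path/to/FolderB"
# 1. the 150 biggest entries directly under FolderA
mapfile -t myarray < <(du -h "$FolderA"/* | sort -rh | head -n150 | awk '{print $2}')
# 2. move each one, and 3. leave a symlink at the old path pointing to the new location
for file in "${myarray[@]}"; do
    mv "$file" "$FolderB"/
    ln -s "$FolderB/${file##*/}" "$file"
done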

How can I use grep to show just filenames on Linux? [closed]

How can I use grep to show just file-names (no in-line matches) on Linux?
I am usually using something like:
find . -iname "*php" -exec grep -H myString {} \;
How can I just get the file-names (with paths), but without the matches? Do I have to use xargs? I didn't see a way to do this on my grep man page.
The standard option grep -l (that is a lowercase L) could do this.
From the Unix standard:
-l
(The letter ell.) Write only the names of files containing selected
lines to standard output. Pathnames are written once per file searched.
If the standard input is searched, a pathname of (standard input) will
be written, in the POSIX locale. In other locales, standard input may be
replaced by something more appropriate in those locales.
You also do not need -H in this case.
From the grep(1) man page:
-l, --files-with-matches
Suppress normal output; instead print the name of each input
file from which output would normally have been printed. The
scanning will stop on the first match. (-l is specified by
POSIX.)
For a simple file search, you could use grep's -l and -r options:
grep -rl "mystring"
All the searching is done by grep. Of course, if you need to select files on some other parameter, find is the correct solution:
find . -iname "*.php" -execdir grep -l "mystring" {} +
The -execdir option runs grep from within each directory that contains matched files, and the + concatenates filenames so that as few grep commands as possible are run.
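If the only reason for using find here is to limit the file types, GNU grep can also do that filtering itself (--include takes a glob and applies it during the recursive search):
grep -rl --include='*.php' "mystring" .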
