How can I use grep to show just filenames on Linux?

How can I use grep to show just file-names (no in-line matches) on Linux?
I am usually using something like:
find . -iname "*php" -exec grep -H myString {} \;
How can I just get the file-names (with paths), but without the matches? Do I have to use xargs? I didn't see a way to do this in the grep man page.

The standard option grep -l (that is a lowercase L) can do this.
From the Unix standard:
-l
(The letter ell.) Write only the names of files containing selected
lines to standard output. Pathnames are written once per file searched.
If the standard input is searched, a pathname of (standard input) will
be written, in the POSIX locale. In other locales, standard input may be
replaced by something more appropriate in those locales.
You also do not need -H in this case.
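Applied to the command from the question, that means dropping -H and adding -l:
find . -iname "*php" -exec grep -l myString {} \;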

From the grep(1) man page:
-l, --files-with-matches
Suppress normal output; instead print the name of each input
file from which output would normally have been printed. The
scanning will stop on the first match. (-l is specified by
POSIX.)

For a simple file search, you can use grep's -l and -r options:
grep -rl "mystring" .
All the searching is done by grep. Of course, if you need to select files on some other parameter, find is the correct solution:
find . -iname "*.php" -execdir grep -l "mystring" {} +
The -execdir option runs grep from the directory containing each matched file, and the + terminator batches many filenames into a single grep invocation instead of one per file. Note that because -execdir runs grep inside each subdirectory, the printed names lose their leading path; use plain -exec ... + if you need full relative paths.
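If find is only there to filter by name, GNU grep can do the filtering itself with --include (this option is GNU-specific):
grep -rl --include='*.php' "mystring" .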

Combine PDFs with spaces in file names

I have a directory with lots of PDFs that have spaces in their file names.
file 1.pdf
file 2.pdf
file 3.pdf
# And so on
I ran this command in that directory.
pdftk `ls -v` cat output combined-report.pdf
But the terminal spat out a bunch of errors like this.
Error: Unable to find file.
Error: Failed to open PDF file:
file
Error: Unable to find file.
Error: Failed to open PDF file:
1.pdf
How do I combine the PDFs using pdftk, or any other package available on Arch Linux? To clarify, I want to combine the files in the order printed by ls -v.
Just use a wildcard when combining the PDFs:
pdftk *.pdf cat output newfile.pdf
Or you could escape each space individually:
pdftk file\ 1.pdf file\ 2.pdf cat output newfile.pdf
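Quoting works just as well as escaping each space:
pdftk "file 1.pdf" "file 2.pdf" cat output newfile.pdf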
Try this:
find . -name 'file*.pdf' -print0 | sort -z -V | xargs -0 sh -c 'pdftk "$@" cat output combined-report.pdf' sh
or this:
ls -v file*.pdf | xargs -d '\n' sh -c 'pdftk "$@" cat output combined-report.pdf' sh
In the first line, -print0, -z, and -0 tell the corresponding commands to use the null character as the delimiter. The -V parameter makes sort use "version sort", which produces the ordering you wanted. xargs normally appends the names it reads to the end of the command; the sh -c wrapper collects them in "$@" so they land in the middle, between pdftk and cat. (Avoid xargs -I{} for this: -I runs the command once per input item, so each pdftk invocation would overwrite combined-report.pdf with just one file.)
The second line is similar, except that it takes the names from ls -v and uses newline as the delimiter.
Note: there are potential problems with parsing the output of ls. See the link posted by @stephen-p.
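If you would rather avoid xargs altogether, a bash array can hold the sorted list; a sketch assuming bash 4.4+ (for mapfile -d '') and GNU sort:
mapfile -d '' -t files < <(find . -name 'file*.pdf' -print0 | sort -zV)
pdftk "${files[@]}" cat output combined-report.pdf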

List only numerical directory names in Linux

How can I list only numerical directory names in Linux?
That is, only directories whose names consist entirely of numeric characters?
There are multiple ways to do it.
1. List just the directories, strip the leading ./ and trailing / from the names, then grep the purely numerical ones:
ls -d ./*/ | sed 's/\.\///g' | sed 's/\///g' | grep -E '^[0-9]+$'
2. With ls, grep and awk: list with details, grep the directories (lines starting with d), print the 9th column (the name), then again keep only the numerical ones:
ls -lh | grep '^d' | awk '{print $9}' | grep -E '^[0-9]+$'
Good luck in Arbaeen.
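A find-based alternative avoids parsing ls output altogether (this relies on GNU find for -regextype and -printf):
find . -maxdepth 1 -type d -regextype posix-extended -regex '\./[0-9]+' -printf '%f\n'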
In bash, you can benefit from extended globbing:
shopt -s extglob
ls -d +([0-9])/
Where
+(pattern-list)
Matches one or more occurrences of the given patterns
The / at the end limits the list to directories, and -d prevents ls from listing their contents.
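For example (the directory names here are hypothetical):
$ mkdir -p 12 345 abc
$ shopt -s extglob
$ ls -d +([0-9])/
12/  345/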

A quick way to search for certain lines of code through many files in a project

I am currently working on a C project that contains over 50 .h and .c files. I would like to know if there is a quick way to search for certain lines of code (like Ctrl+F in a window, for example) without having to search each file one by one.
Thank you in advance
On Linux/Unix there's a command-line tool called grep; you can use it to search multiple files for a string. For example, to search for strcpy in all files:
~/sandbox$ grep -rs "strcpy" *
test.c: strcpy(OSDMenu.name,"OSD MENU");
-r searches recursively, so all files in all directories (starting from the current one) are searched. -s suppresses error messages, in case you run into non-readable files.
Now if you want to search for something custom and can't remember the case, there are options like -i to allow case-insensitive searches:
~/sandbox$ grep -rsi "myint" *
test.c: int myInt = 5;
test.c: int MYINT = 10;
You can also use regular expressions, in case you forgot exactly what the thing you were looking for was called (indeed, the name grep comes from the ed command g/re/p -- global/regular expression/print):
~/sandbox$ grep -rsi "my.*" *
test.c: int myInt = 5;
test.c: int MYINT = 10;
test.c: float myfloat = 10.9;
Install Cygwin if you aren't using *nix, and use find/grep, e.g.:
find . -name '*.[ch]' | xargs grep -n 'myfuncname'
In fact, I made this into a little script, findinsrc, that can be called as findinsrc path1 [path2 ...] pattern. The central line, after checking arguments etc., is
find "${@:1:$#-1}" -type f \( -iname '*.c' -o -iname '*.cpp' -o -iname '*.h' -o -iname '*.hpp' \) -print0 | xargs -0 grep -in "${@:$#}"
"${@:1:$#-1}" expands to the positional parameters 1 .. n-1, that is, the path(s) supplied as the starting points for find. "${@:$#}" is the last parameter, the pattern handed to grep.
the -o "option" to find is a logical OR combining the search criteria; because the "default" combination of options is AND, all the ORs must be parenthesized for correct evaluation. Because parentheses have special meaning to the shell, they must be escaped so that they are passed through to find as command line arguments.
-print0 instructs find to separate its output items not with a newline or space but with a null character which cannot appear in path names; this way, there is a clear distinction between whitespace in a path ("My Pictures" nonsense) and separation between paths.
-iname is a case insensitive search, in case files are ending in .CPP etc.
xargs -0 is there specifically to digest find -print0 output: xargs will separate arguments read from stdin at null bytes, not at whitespace.
grep -in: -i instructs grep to perform a case insensitive search (which suits my bad memory and is catered exactly to this "find the bloody function no matter the capitalization you know what I mean" use case). The -n prints the line number, in addition to the file name, where the match occurred.
I have similar scripts: findinmake, where the find pattern includes regular Makefiles, CMakeLists.txt and a proprietary file name; and findinscripts, which looks through bat, cmd and sh files. That seemed easier than adding options to a generic script.
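Fleshed out, a minimal version of such a findinsrc script might look like this (the usage check is an illustrative addition, not the author's exact code):
#!/bin/bash
# findinsrc: search for a pattern in C/C++ source files under the given paths.
# Usage: findinsrc path1 [path2 ...] pattern
if [ "$#" -lt 2 ]; then
    echo "usage: findinsrc path... pattern" >&2
    exit 1
fi
# All arguments but the last are paths for find; the last is the grep pattern.
find "${@:1:$#-1}" -type f \( -iname '*.c' -o -iname '*.cpp' -o -iname '*.h' -o -iname '*.hpp' \) -print0 | xargs -0 grep -in "${@:$#}"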
You can use grep to search through the files using the terminal/command line.
grep -R "string_to_search" .
-R to be recursive: search in all subdirectories too.
Then the string you want to search for.
Then the location: . means the current directory.
On Windows you can use findstr, which will find files that contain strings that either exactly match or regular-expression match the specified string/pattern.
findstr /?
from the command line will give you the usage. It can also recurse subdirectories (/s).
If you're using a text editor and the shell, then you can use shell tools like grep.
grep -R "some pattern" directory
However you should consider using an IDE such as Eclipse (it's not just for Java), Netbeans (there is a C plugin) or KDevelop. IDEs have keyboard shortcuts for things like "find everywhere the highlighted function is called".
Or of course there's Emacs...

Meaning of command ls -lt | wc -l

My friend just passed me this command to count the number of files in a directory:
$ ls -lt | wc -l
Can someone please help me flesh out the meaning of this command? I know that ls lists the files. But what does -lt mean?
Also, I get a different count if I use ls | wc -l without the -lt option. Why is that the case?
You'll want to get familiar with the "man (manual) pages":
$ man ls
In this case you'll see:
-l (The lowercase letter ``ell''.) List in long format. (See below.) If
the output is to a terminal, a total sum for all the file sizes is
output on a line before the long listing.
-t Sort by time modified (most recently modified first) before sorting the
operands by lexicographical order.
Another way you can see the effect of the options is to run ls without piping to the wc command. Compare
$ ls
with
$ ls -l
and
$ ls -lt
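As for the different counts: the long format prints a "total N" summary line above the listing, and wc -l counts that line too, so ls -lt | wc -l is one higher than ls | wc -l. For example (the counts shown are illustrative):
$ ls | wc -l
5
$ ls -lt | wc -l
6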

Trimming Linux log files

It seems like a trivial issue, but I did not find a solution.
I have a number of log files in a PHP installation on Debian/Linux that tend to grow quite a bit, and I would like to trim them nightly to the last 500 lines or so.
How do I do it, ideally in shell, applying a command to *log?
For this, I would suggest using logrotate with a configuration to your liking instead of writing your own script.
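A minimal logrotate configuration might look like this (the path and retention values are illustrative, and note that logrotate rotates whole files rather than trimming to a line count):
# rotate PHP logs nightly, keeping a week of compressed history;
# copytruncate empties each file in place so the writing process keeps its handle
/var/log/php/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}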
There might be a more elegant way to do this programmatically, but it is possible to use tail and a for loop:
for file in *.log; do
    tail -n 500 "$file" > "$file.tmp"
    mv -- "$file.tmp" "$file"
done
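One caveat: mv replaces the log with a new file, so a process that still has the old log open keeps writing to the deleted inode. Writing the trimmed content back over the original file avoids that (a sketch under the same assumptions):
for file in *.log; do
    # overwrite in place with cat, keeping the original inode
    tail -n 500 "$file" > "$file.tmp" && cat "$file.tmp" > "$file" && rm -- "$file.tmp"
done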
If you want to save history of older files, you should check out logrotate.
Otherwise, this can be done trivially from the command line:
LOGS="/var/log"
MAX_LINES=500
find "$LOGS" -type f -name '*.log' -print0 | while IFS= read -r -d '' file; do
    tmp=$(mktemp)
    tail -n "$MAX_LINES" "$file" > "$tmp"
    mv -- "$tmp" "$file"
done
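Either variant can then be run nightly from cron; the script path below is hypothetical:
# run every night at 00:30
30 0 * * * /usr/local/sbin/trim-logs.sh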
