How to show a 'grep' result with the complete path or file name [closed] - linux

How can I get the complete file path when I use grep?
I use commands like
cat *.log | grep somethingtosearch
I need to show each result with the complete path of the file it was matched in.
How can I do it?

Assuming you have two log-files in:
C:/temp/my.log
C:/temp/alsoMy.log
'cd' to C: and use:
grep -r somethingtosearch temp/*.log
It will give you a list like:
temp/my.log:somethingtosearch
temp/alsoMy.log:somethingtosearch1
temp/alsoMy.log:somethingtosearch2

I think the real solution is to drop the cat, since grep only knows file names when it is given them directly (through a pipe, -H can only report (standard input)):
grep -H somethingtosearch *.log

Command:
grep -rl --include="*.js" "searchString" ${PWD}
Returned output:
/root/test/bas.js

If you want to see the full paths, I would recommend cd'ing to the top directory (of your drive, if you are using Windows):
cd C:\
grep -r somethingtosearch C:\Users\Ozzesh\temp
Or on Linux:
cd /
grep -r somethingtosearch ~/temp
If you really insist on filtering by file name (*.log) and you need recursion (the files are not all in the same directory), combining find and grep is the most flexible way:
cd /
find ~/temp -iname '*.log' -type f -exec grep -H somethingtosearch '{}' \;
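The -H is needed above because with -exec ... \; grep is handed one file at a time and would otherwise omit the file name. If your find supports terminating -exec with + (POSIX finds do), a faster variant batches many files into each grep invocation; -H keeps the output consistent even when a batch happens to contain a single file:
find ~/temp -iname '*.log' -type f -exec grep -H somethingtosearch {} +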

It is similar to BVB Media's answer.
grep -rnw 'blablabla' `pwd`
It works fine on my Ubuntu 16.04 (Xenial Xerus) Bash.

For me,
grep -b "searchsomething" *.log
worked as I wanted.

This works when searching files in all directories.
sudo ls -R | grep -i something_bla_bla
The output shows all files and directories which include "something_bla_bla". The directories are shown with their paths, but the files are not.
Then use locate on the wanted file.

The easiest way to print full paths is to replace the relative start path with the absolute path:
grep -r --include="*.sh" "pattern" ${PWD}

Use:
grep somethingtosearch *.log
and the filenames will be printed out along with the matches.
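One caveat: grep only prefixes file names when it is given more than one file, so if the glob happens to match a single log, the prefix disappears. A common trick is to append /dev/null as an extra, always-empty file (it exists on any Unix system), forcing the multi-file behavior without any GNU-specific flags:
grep somethingtosearch *.log /dev/null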

Related

How to search for a string in entire linux system? [closed]

I am looking to search for a string, for example "/uniquexx", across the entire hard drive and find the files where it is referenced. How could I do that? I tried grep and find /, but no luck.
grep -r blablastring /
-r means searching recursively through all subdirectories.
If you want to search only text files, try ack. It's like grep, but defaults to skipping file types it recognizes as binary. It also highlights matches by default, when searching recursively in a directory.
Some answers point to plain use of grep. What the pipeline below actually does is make a list of every matching file on the system and then, for each file, execute grep with the given arguments and the file's name. Use:
find / -xdev '(' -type f -a -name '*.txt' -a -size -2M -a -mtime -5 ')' -print0 | xargs -0 grep -H "800x600"
Read more: How to search text throughout entire file system?
You can try:
grep -r -H "your string" /home/yourdir
-H means the name of the file that contains your string will be shown.
Anyway, if you want to search the WHOLE Linux filesystem, you need sudo privileges.
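When scanning from the root, it is usually worth skipping pseudo-filesystems such as /proc and /sys. The sketch below assumes GNU grep, whose --exclude-dir option is not available in every grep implementation:
sudo grep -r -H --exclude-dir=proc --exclude-dir=sys --exclude-dir=dev "your string" /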

finding files and moving their folders [closed]

I have a huge number of text files, organized in a big folder tree, on Debian Linux. What I need is to find all text files having a specific name pattern and then move the containing folder to a destination.
Example:
/home/spenx/src/a12/a1a22.txt
/home/spenx/src/a12/a1a51.txt
/home/spenx/src/a12/a1b61.txt
/home/spenx/src/a12/a1x71.txt
/home/spenx/src/a167/a1a22.txt
/home/spenx/src/a167/a1a51.txt
/home/spenx/src/a167/a1b61.txt
/home/spenx/src/a167/a1x71.txt
The commands:
find /home/spenx/src -name "a1a2*txt"
mv /home/spenx/src/a12 /home/spenx/dst
mv /home/spenx/src/a167 /home/spenx/dst
The result:
/home/spenx/dst/a12/a1a22.txt
/home/spenx/dst/a167/a1a22.txt
Thank you for your help.
SK
A combination of find, dirname, and mv, along with xargs, should solve your problem:
find /home/spenx/src -name "a1a2*txt" | xargs -n 1 dirname | xargs -I list mv list /home/spenx/dst/
find will fetch the list of matching files
dirname will extract the path of each file; note that it can only take one argument at a time
mv will move the source directories to the destination
xargs is the key that allows the output of one command to be passed as arguments to the next
For details of the options used with xargs, refer to its man page, or just run man xargs in a terminal. A whitespace-safe variant is sketched below.
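Here is that whitespace-safe sketch of the same idea, assuming GNU find (for -printf '%h', which prints each file's directory directly) and GNU sort (for the NUL-terminated -z mode); sort -zu also removes duplicate directories so each one is moved only once:
find /home/spenx/src -name 'a1a2*txt' -printf '%h\0' | sort -zu | xargs -0 -I{} mv {} /home/spenx/dst/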
You can execute:
find /home/spenx/src -name "a1a2*txt" -exec mv {} /home/spenx/dst \;
Source: http://www.cyberciti.biz/tips/howto-linux-unix-find-move-all-mp3-file.html
Create this mv.sh script in the current directory that will contain this:
o=$1                                        # file name passed in by find
d=$(dirname "$o")                           # its containing directory
mkdir -p "/home/spenx/dst/$d" 2>/dev/null   # recreate that directory under dst
mv "$o" "/home/spenx/dst/$d"                # move the file there
Make sure it is executable by this command:
chmod +x mv.sh
Next call this command:
find /home/spenx/src -name "a1a2*txt" -exec ./mv.sh {} \;
find /home/spenx/src -name "a1a2*txt" -exec mv "{}" yourdest_folder \;
There are probably multiple ways to do this, but since it seems you might have multiple matches in a single directory, I would probably do something along these lines:
find /home/spenx/src -name "a1a2*txt" -print0 | xargs -0 -n 1 dirname | sort -u |
while read -r d
do
    mv "${d}" /home/spenx/dst
done
It's kind of long, but the steps are:
Find the list of all matching files (the find part), using -print0 to cope with names that contain spaces or other odd characters
Extract the directory part of each file name (the xargs ... dirname part)
Sort the list and remove duplicates (sort -u)
Feed the resulting list into a loop that moves each directory in turn

List of All Folders and Sub-folders [closed]

In Linux, I want to find all folder and sub-folder names and redirect them to a text file.
I tried ls -alR > list.txt, but it gives all files plus folders.
You can use find:
find . -type d > output.txt
or tree:
tree -d > output.txt
If tree is not installed on your system:
If you are using Ubuntu:
sudo apt-get install tree
If you are using macOS:
brew install tree
find . -type d > list.txt
This will list all directories and subdirectories under the current path. If you want to list the directories under a path other than the current one, change the . to that path.
If you want to exclude certain directories, you can filter them out with a negative condition:
find . -type d ! -name "~snapshot" > list.txt
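In the same spirit, here is a sketch that hides every hidden directory (any path component starting with a dot, .git for example); both GNU and BSD find support -path:
find . -type d ! -path '*/.*' > list.txt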
As well as find, covered in other answers, better shells allow both recursive globs and filtering of glob matches, so in zsh for example...
ls -lad **/*(/)
...lists all directories while keeping all the "-l" details that you want, which you'd otherwise need to recreate using something like...
find . -type d -exec ls -ld {} \;
(not quite as easy as the other answers suggest)
The benefit of find is that it's more independent of the shell: more portable, even for system() calls from within a C/C++ program, etc.
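For comparison, here is a rough bash equivalent of the zsh glob above, a sketch assuming bash 4+ where the globstar option is available; the trailing slash restricts the match to directories:
shopt -s globstar   # let ** match recursively (bash 4+)
ls -lad **/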

How can I run dos2unix on an entire directory? [closed]

I have to convert an entire directory using dos2unix. I am not able to figure out how to do this.
find . -type f -print0 | xargs -0 dos2unix
This will recursively find all files inside the current directory and call the dos2unix command on each of them.
If it's a large directory you may want to consider running with multiple processors:
find . -type f -print0 | xargs -0 -n 1 -P 4 dos2unix
This will pass 1 file at a time, and use 4 processors.
As I happened to be poorly satisfied with dos2unix, I rolled my own simple utility. Apart from a few advantages in speed and predictability, the syntax is also a bit simpler:
endlines unix *
And if you want it to go down into subdirectories (skipping hidden dirs and non-text files):
endlines unix -r .
endlines is available here: https://github.com/mdolidon/endlines
A common use case appears to be to standardize line endings for all files committed to a Git repository:
git ls-files -z | xargs -0 dos2unix
Keep in mind that certain files (e.g. *.sln, *.bat) are only used on Windows operating systems and should keep the CRLF ending:
git ls-files -z '*.sln' '*.bat' | xargs -0 unix2dos
If necessary, use .gitattributes
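As a sketch of that last suggestion, a .gitattributes along these lines (the patterns here are illustrative, adjust them to your repository) normalizes text files to LF in the repository while keeping the Windows-only files CRLF on checkout:
cat > .gitattributes <<'EOF'
* text=auto
*.sln text eol=crlf
*.bat text eol=crlf
EOF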
It's probably best to skip hidden files and folders, such as .git. So instead of using find, if your bash version is recent enough or if you're using zsh, just do:
dos2unix **
Note that for Bash, this will require:
shopt -s globstar
...but this is a useful enough feature that you should honestly just put it in your .bashrc anyway.
If you don't want to skip hidden files and folders, but you still don't want to mess with find (and I wouldn't blame you), you can provide a second recursive-glob argument to match only hidden entries:
dos2unix ** **/.*
Note that in both cases, the glob will expand to include directories, so you will see the following warning (potentially many times over): Skipping <dir>, not a regular file.
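If those warnings bother you, one possible workaround is to filter the glob down to regular files before handing them to dos2unix, a sketch again assuming bash with globstar enabled:
shopt -s globstar
for f in **; do
    [ -f "$f" ] && dos2unix "$f"   # skip anything that is not a regular file
done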
For any Solaris users (I am using 5.10; this may apply to newer versions too, as well as other Unix systems):
dos2unix doesn't default to overwriting the file; it just prints the updated version to stdout, so you have to specify the source and target, i.e. the same name twice:
find . -type f -exec dos2unix {} {} \;
I think the simplest way is:
dos2unix $(find . -type f)
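Note that the command substitution splits on whitespace, so file names containing spaces will break it. A safer equivalent that stays POSIX and needs no xargs:
find . -type f -exec dos2unix {} +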
I've googled this like a million times, so my solution is to just put this bash function in your environment (.bashrc or .profile or whatever):
dos2unixd() {
    find "$1" -type f -print0 | xargs -0 dos2unix
}
Usage
$ dos2unixd ./somepath
This way you still have the original dos2unix command, and this one, dos2unixd, is easy to remember.
I had the same problem, and thanks to the posts here I solved it. I knew that I had around a hundred files and needed to run it on *.js files only.
find . -type f -name '*.js' -print0 | xargs -0 dos2unix
Thank you all for your help.
for FILE in /var/www/html/files/*
do
    /usr/bin/dos2unix "$FILE"
done
If there are no subdirectories, you can also use:
ls | xargs -I {} dos2unix "{}"

"rm" (delete) 8 million files in a directory? [closed]

I have 8 million files in my /tmp and I need to remove them. This server also runs a pretty important app, and I cannot overload it.
$ ls | grep . | xargs rm
The above makes my app unresponsive.
Do you have any ideas how to remove these files? Thanks in advance!
Well yes: don't use ls (it may sort the files, and the file list may use more memory than you would like), and don't add pointless indirections like a pipe or xargs.
find . -type f -delete
grep . matches any non-empty line, so it filters out essentially nothing here.
Cut it out of your chain to remove a pointless extra process. That should speed things up nicely.
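Since the question stresses not overloading the server, a possible refinement (assuming a Linux system where ionice is available, as on most distributions) runs the deletion at idle I/O and minimum CPU priority; -maxdepth 1 keeps it to the files directly under /tmp:
ionice -c3 nice -n19 find /tmp -maxdepth 1 -type f -delete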
ls | xargs rm -rf
Note that this will choke on whitespace, so an improvement is
ls | xargs -I{} rm -v {}
Of course, a much faster method is to remove the directory and recreate it. However, you do need to take care that your script doesn't get "lost" in the directory tree and remove stuff it shouldn't.
rm -rf dir
mkdir dir
Note that there are some subtle differences between removing all files, and removing and recreating the directory. Removing all files will only remove visible files and directories; while removing the directory and recreating will remove all files and directories, visible and hidden.
Try this:
ls -1 | grep -v -e "ignoreFile" -e "ignoreFile2" | xargs rm -rf
ls -1 is a simpler version of ls | grep .
grep -v removes lines from the list; just give it any files that should not be deleted, separating the patterns with -e flags
And just for a complete explanation (I'm guessing this is already known):
rm -rf:
-r recursive
-f force
