What is the purpose of the command below?
grep -ir nashorn ./ | grep "^[^:]*\.java"
It finds all lines that contain the string nashorn, case-insensitively, in files in the current directory hierarchy whose names contain .java.
The -i option makes grep match case-insensitively. The -r option makes it recurse into the given directory arguments and search every file within. So the first part of the pipeline matches nashorn in all files under the current directory, recursively.
The output of that command will be in the format:
filename:matching line
The second grep filters those lines. ^ anchors the match at the beginning of the line, [^:]* matches a sequence of characters that doesn't include :, which restricts the match to the filename part of the line, and \.java matches .java literally. So it only keeps lines where .java appears in the filename part.
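If you only want to search .java files in the first place, GNU grep (assuming you have the GNU version) can do the filtering itself with --include, which matches each file's base name against a glob. Note this is slightly narrower than the original pipeline: it restricts the search to names ending in .java, rather than names merely containing .java somewhere:
grep -ir --include='*.java' nashorn .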
I have a source tree containing both text and binary files. I have to find and collect all the human-unreadable files present in the source code. How can I do this?
Although Far Had's answer is correct, you don't even need a for loop for this. As you state yourself, all your files are within one directory, so you can simply run:
file *
The output lines containing "text" (be it ASCII, Unicode or something else) indicate human-readable files.
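To go one step further and collect just the unreadable files, a minimal sketch is to invert the match; this assumes file labels every human-readable file with the word "text" (and that no filename itself contains "text"):
file * | grep -v text | cut -d : -f 1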
This piece of code returns a list of all non-text files in the current directory, recursively.
Hope this will help:
for i in `find . -type f`; do file "$i"; done | grep -v text | cut -d : -f 1
You could replace the . (dot) after find with any other location in your filesystem.
One way is to use perl (File::Find module) like this:
perl -MFile::Find -e '@directories=shift || "."; sub wanted { ! -T && print "$File::Find::name\n"; }; find(\&wanted, @directories);'
NOTE: The above command defaults to searching the current directory.
To search a specific directory e.g. /tmp, just type the above command followed by a space and /tmp
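For example, the same one-liner with /tmp as the directory argument:
perl -MFile::Find -e '@directories=shift || "."; sub wanted { ! -T && print "$File::Find::name\n"; }; find(\&wanted, @directories);' /tmp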
I would like to understand better what this command does. When I ran it, the number 77 appeared, as shown in the image. Does this represent the number of words in the list, or is there something more to explain?
Let's go through the commands:
ls /bin - lists the files in the directory /bin
sort - sorts standard input, so in this case, the list of files in /bin
tee /tmp/lista - writes standard input (the sorted list) into /tmp/lista and passes the list on
wc -l - counts the lines (-l = lines), giving the count of files
To sum it up, the command saves a sorted list of the files in /bin to /tmp/lista and prints the number of files in /bin.
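Putting the pieces back together, the full pipeline is:
ls /bin | sort | tee /tmp/lista | wc -l
The 77 you saw is simply the number of files in your /bin, not a word count; it will differ from system to system, and the sorted list itself ends up in /tmp/lista.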
How do I search multiple files or folders for a string in plain text files?
For example, I need to find the string "foo" in all files in the folder "/home/thisuser/bar/baz/".
You need to have read privileges on the files you will be searching in. If you do have them, then simply use
grep -r "foo" /home/thisuser/bar/baz/*
to search in a certain folder or
grep "foo" /home/thisuser/bar/baz/somefile.txt
if you need to search in a specific file, in this case "somefile.txt".
Basically the syntax is
grep [options] [searched string] [path]
where -r is the option that makes the search recursive.
Other useful options are "-n" to show on which line in which file the string is located, "-i" to ignore case, "-s" to suppress some messages like "can't read file" or "not found", and "-I" to ignore binary files.
If you use
grep -rnisI "foo" /home/thisuser/bar/baz/*
you will know exactly where to look.
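With -n included, each match is printed as file:line number:matching line, so a hypothetical hit might look like:
/home/thisuser/bar/baz/somefile.txt:12:a line mentioning foo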
Use
grep -r "foo" /home/thisuser/bar/baz/
When I use * with cp, I thought it follows the same rules as a regex, so "cp temp/* test/" should copy everything over. However, when the temp folder is empty, it reports an error saying it cannot find the file or directory, which indicates * cannot match "nothing".
Then I create a file test.txt under temp and do:
cp temp/test.txt* test/
It works, which indicates * can indeed match "nothing".
I get confused about the behavior. Can anyone explain a little bit?
Thanks
What's happening is that the * expansion is done by your shell (probably bash). The pattern temp/test.txt* did match temp/test.txt (* matches zero or more characters), so bash passed the expansion on to cp.
However, bash is set, by default, to pass the wildcard as-is on to the app if it doesn't match anything (there's an option called nullglob to turn this non-intuitive behavior off). So it passed temp/* literally to cp, which complained that it didn't exist.
The shell does the expansion, so it's not cp specific.
If no match is found, there's no substitution; the original string (temp/*) is preserved and passed to the application. Of course cp cannot find a file by that name.
# echo nosuchfile*
nosuchfile*
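With bash's nullglob option enabled, an unmatched pattern expands to nothing instead of being passed through literally:
# shopt -s nullglob
# echo nosuchfile*
Here echo prints only an empty line, because the pattern expanded to no arguments at all.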
Some clarification for "nothing":
temp/* means entries (files/directories/...) in the temp directory, but there weren't any, so the expansion failed.
temp/test.txt* means entries starting with test.txt in the temp directory.
Wildcard globbing is not the same as regular expressions; globs come with their own rules. Different shells have different rules, too; you may want to look at Wikipedia to get an overview.
How is it possible to delete files containing a string as an embedded string, i.e. anywhere except at the beginning or end, by using wildcards?
I'm an amateur; I started using Ubuntu less than a month ago.
rm ?*foo?*
removes files containing foo provided that there is at least one character before and after, so "foobar" and "barfoo" will NOT be deleted, whereas "barfoobar" will be.
As a precaution, do
ls ?*foo?*
first to make sure that you aren't deleting the wrong stuff. And be very careful not to accidentally include any spaces, as rm ?* foo?* is almost certainly very bad. Note that wrapping the argument in quotes, as in rm "?*foo?*", is not a safeguard: quoting stops the shell from expanding the glob at all, so rm would look for a file literally named ?*foo?*. For some real protection, use rm -i ?*foo?* so that rm prompts before each removal.
Instead of a single expansion pattern, you can also use grep for filtering:
ls -d *foo* | egrep -v '^foo|foo$' | xargs rm
So the ls lists everything containing foo, then egrep removes the names that match at the beginning or end, and finally xargs runs a command (rm in this case) on each remaining name.
The dangerous thing about this technique is that filenames may contain special characters like line breaks or asterisks, so use at your own risk!
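A safer sketch is to let find do both the matching and the deleting, so filenames never pass through a pipe; this assumes GNU find with the -delete action:
find . -maxdepth 1 -type f -name '?*foo?*' -delete
Because find hands each name directly to the system call, line breaks, asterisks and other special characters in filenames are no longer a problem.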