GNU grep on FreeBSD not working properly

I have a weird problem on FreeBSD 8.4-STABLE with grep (GNU grep) 2.5.1-FreeBSD.
If I run grep -Hnr searchstring, I get no output, but according to ps aux grep is running, and it keeps running until I kill the process.
If I copy a test file into an empty directory and do
cat testfile | grep searchstring
it works.
But if I run
grep -Hnr searchstring
in that directory, I again get no output; grep keeps running and running but never produces any matches.
Does anybody know how to solve this?

Even though you gave -r, you still have to give grep a file argument. Otherwise, as you've discovered, it just sits there waiting for input on stdin.
You want
grep -Hnr searchstring .
# ....................^^
That will recursively find files under the current directory.
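With -H and -n, each match is printed as filename:linenumber:matchingline. A purely hypothetical illustration (the paths and file contents here are made up):
grep -Hnr searchstring .
./notes/todo.txt:3:remember the searchstring here
./src/main.c:42:/* searchstring appears in this comment */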

Though it doesn't seem to be documented, if you invoke grep with the -r option and no file or directory name arguments, it defaults to the current directory, almost as if you had typed grep -r pattern . except that ./ does not appear in the output.
Apparently this is a fairly new feature, so an older release such as the grep 2.5.1 in the question will instead sit and wait for input on stdin.
If you do a recursive grep in a directory with a lot of contents, it could simply take a long time -- perhaps forever if there are device files such as /dev/zero that can produce infinite output.
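If device files are the concern, GNU grep has a -D (--devices) option; a minimal sketch, assuming your grep build supports it:
# skip devices, FIFOs and sockets instead of reading from them
grep -Hnr -D skip searchstring .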

Getting the most recent filename where the extension name is case *in*sensitive

I am trying to get the most recent .CSV or .csv file name among other comma-separated-value files, where the extension is case-insensitive.
I am achieving this with the following command, provided by someone else without any explanation:
ls -t ~(i:*.CSV) | head -1
or
ls -t -- ~(i:*.CSV) | head -1
I have two questions:
What is the use of ~ and -- in this case? Does -- help here?
How can I get a blank response when there is no .csv or .CSV file in the folder? At the moment I get:
/bin/ls: cannot access ~(i:*.CSV): No such file or directory
I know I can test the exit code of the last command, but I was wondering whether there is a --silent option or something.
Many thanks for your time.
PS: I searched online quite thoroughly and was unable to find an answer.
The ~ is just a literal character; the intent would appear to be to match filenames starting with ~ and ending with .csv, with i: being a flag to make the match case-insensitive. However, I don't know of any shell that supports that particular syntax. The closest thing I am aware of would be zsh's globbing flags:
setopt extended_glob # Allow globbing flags
ls ~(#i)*.csv
Here, (#i) indicates that anything after it should be matched without regard to case.
Update: as @baptistemm points out, ~(i:...) is syntax defined by ksh93.
The -- is a conventional argument, supported by many commands, to mean that any arguments that follow are not options, but should be treated literally. For example, ls -l would mean ls should use the -l option to modify its output, while ls -- -l means ls should try to list a file named -l.
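A quick hypothetical demonstration of the -- convention (the filename -l is chosen deliberately so that it looks like an option):
touch -- '-l'   # create a file literally named "-l"
ls -- '-l'      # list it; without the --, ls would treat -l as an option
rm -- '-l'      # clean up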
~(i:*.CSV) tells the shell (apparently this is only supported in ksh93) that the pattern after the : must be matched case-insensitively, so in this example it could match any case variant of the extension, such as:
*.csv or
*.Csv or
*.cSv or
*.csV or
*.CSv or
*.CSV
Note this could have been written ls -t *.[Cc][Ss][Vv] in bash (each bracket expression matches a single character in either case; *.[CcSsVv] would only match one-letter extensions such as .c or .s).
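For the second question, a minimal bash sketch (assuming bash's nocaseglob and nullglob options) that prints nothing, rather than an error, when no matching file exists:
shopt -s nocaseglob nullglob   # case-insensitive globs; unmatched globs expand to nothing
files=(*.csv)
# only run ls when at least one file matched
[ "${#files[@]}" -gt 0 ] && ls -t -- "${files[@]}" | head -1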
To silence the errors, I suggest searching this site for "standard error /dev/null"; that will help.
I tried running commands like yours in both bash and zsh and neither worked, so I can't help you out with that part, but if you want to discard the error, you can add 2>/dev/null to the end of the ls command, so your command would look like the following:
ls -t ~(i:*.CSV) 2>/dev/null | head -1
This will redirect anything written to STDERR to /dev/null (i.e. throw it out), which, in your case, would be /bin/ls: cannot access ~(i:*.CSV): No such file or directory.

perl -p -i -e inside a shell script

#!/usr/bin/env bash
DOCUMENT_ROOT=/var/www/html/
rm index.php
cd modules/mymodules/
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' `grep -ril y.yy.yy.y *`
shows a warning:
-i used with no filenames on the command line, reading from STDIN.
This prevents the rest of the script from running.
Any solutions?
Actually I need to run
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' `grep -ril y.yy.yy.y *`
inside a shell script.
I am using Ubuntu with Docker.
Let's look at this a step at a time. First, you're running this grep command:
grep -ril y.yy.yy.y *
This recursively searches all files and directories in your current directory. It looks for files containing the string "y.yy.yy.y" in any case (the -i flag) and returns a list of the files that contain this text (the -l flag). Note that each . in the pattern is a regular expression wildcard matching any character, not just a literal dot.
This command will return either a list of filenames or nothing.
Whatever is returned from that grep command is then passed as arguments to your Perl command:
perl -p -i -e 's/x.xx.xx.y/dd.dd.ddd.dd/g' [list of filenames]
If grep returns a list of files, then the -p option here means that each line in every file in the list is (in turn) run through that substitution and then printed to a new file. The -i means there's one new file for each old file and the new files are given the same names as the old files (the old files are deleted once the command has run).
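As a minimal illustration of -p and -i, consider a run against a hypothetical file.txt (using -i.bak instead of plain -i keeps the original around as file.txt.bak):
perl -p -i.bak -e 's/foo/bar/g' file.txt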
So far so good. But what happens if the grep doesn't return any filenames? In that case, your Perl command doesn't get any filenames and that would trigger the error that you are seeing.
So my guess is that your grep command isn't doing what you want it to and is returning an empty list of filenames. Try running the grep command on its own and see what you get.
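One way to avoid the warning is to make sure perl only runs when grep actually finds files. A minimal sketch, assuming GNU grep and GNU xargs (both standard on Ubuntu):
# -Z and -0 pass NUL-terminated filenames safely; --no-run-if-empty skips
# perl entirely when grep finds nothing; the dots are escaped so they
# match literal dots rather than any character
grep -rilZ 'y\.yy\.yy\.y' . | xargs -0 --no-run-if-empty perl -p -i -e 's/x\.xx\.xx\.y/dd.dd.ddd.dd/g'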

Finding multiple strings in a directory using Linux commands

If I have two strings, for example "class" and "btn", what is the Linux command that would allow me to search for these two strings in an entire directory?
To be more specific, let's say I have a directory that contains a few folders with a bunch of .php files. My goal is to be able to search through those .php files so that it prints out only the files that contain both "class" and "btn" on one line. Hopefully this clarifies things.
Thanks,
I normally use the following to search for strings inside my source code. It searches for the string and shows the exact line number where that text appears. Very helpful for finding a string in source code files. You can always pipe the output to another grep to filter the results.
grep -rn "text_to_search" directory_name/
example:
$ grep -rn "angular" menuapp
$ grep -rn "angular" menuapp | grep some_other_string
output would be:
menuapp/public/javascripts/angular.min.js:251://# sourceMappingURL=angular.min.js.map
menuapp/public/javascripts/app.js:1:var app = angular.module("menuApp", []);
grep -rE 'class|btn' /path/to/directory
grep is used to search for a string in a file. With the -r flag, it searches recursively through all files in a directory, and -E enables the extended regular expression syntax that the | alternation requires.
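Note that the alternation matches files containing either string. To list only the files where both strings appear on the same line, as the question asks, one option (a sketch, assuming GNU grep) is:
grep -rlE 'class.*btn|btn.*class' /path/to/directory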
Or, alternatively using the find command to "identify" the files to be searched instead of using grep in recursive mode:
find /path/to/your/directory -type f -exec grep "text_to_search" {} +
(The trailing + makes find pass many filenames to each grep invocation instead of starting one grep per file.)
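If you also want line numbers, and want the filename printed even when grep is handed a single file, a variation (the *.php filter is an assumption based on the question):
find /path/to/your/directory -type f -name '*.php' -exec grep -Hn "text_to_search" {} +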

How to grep lines in a cron job whose crontab appears to be deleted?

I'm trying to search for a specific word in my cron job, which we'll call word for now.
The machine is an AWS server, and I've only ever used one username, so I do not think it is an issue of cron jobs being under a different user.
So, in the root directory, I do egrep -r ".*word.*" . Nothing comes up. I would assume the original crontab was deleted at some point, even though the process is still running.
However, when I do crontab -l and comb through the entire output (it takes a while), I can see that there definitely is a cron job with this word.
What is the best way to grep lines in a cronjob? Or, does egrep only work on certain types of files? Thanks.
Wait. If you're doing this:
egrep -r ".*word.*"
It won't return because you haven't pointed egrep at a file to grep through. Generally, you use grep like this:
egrep "word" filename
egrep -r "word" directory/
egrep -r "word" *
You can grep the output of crontab -l like this:
crontab -l | egrep "word"
Other notes:
the .*word.* is unnecessary, as plain word will match the same lines; it should also be faster, since the greedy .* makes the regex engine do extra work
egrep -r word * in your home directory wouldn't match the crontab, as that isn't stored in your homedir (it's likely at /var/spool/cron/crontabs/$USER, but that's not terribly relevant)
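If you do want to grep the crontab files of all users directly on disk, a sketch assuming a Debian/Ubuntu-style layout (the spool path varies between systems):
sudo grep -r 'word' /var/spool/cron/crontabs/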
You simply have to do:
crontab -l | grep 'word'
That's all.

XARGS, GREP and GNU parallel

Being a Linux newbie I am having trouble figuring out some of the elementary aspects of text searching.
What I want to accomplish is as follows:
I have a file containing a list of absolute paths to files.
I want to go through this list of files and grep each one for a particular pattern.
If the pattern is found in a file, I would like to redirect the result to a different output file.
Since these files are spread out over NFS, I would like to speed up the lookup using GNU parallel.
So..what I did was as follows:
cat filepaths|xargs -iSomePath echo grep -Pl '\d+,\d+,\d+,\d+' \"SomePath\"> FoundPatternsInFile.out| parallel -v -j 30
When I run this command, I am getting the following error repeatedly:
grep: "/path/to/file/name": No such file or directory
The file and the path exist. Can somebody point out what I might be doing wrong with xargs and grep?
Thanks
cat filepaths | parallel -j 30 grep -Pl '\d+,\d+,\d+,\d+' {} > FoundPatternsInFile.out
In this case you can even leave out {}.
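The cat can also be dropped by letting GNU parallel read the arguments from the file itself with its :::: operator (a sketch, keeping the same 30-job setting):
parallel -j 30 grep -Pl '\d+,\d+,\d+,\d+' {} :::: filepaths > FoundPatternsInFile.out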
