I have a file containing a list of malicious file names. Many of the file names contain blank spaces. I need to find those files and change their permissions. I have tried the following:
grep -E ". " suspicious.txt | xargs -0 chmod 000
But I am getting an error:
:File name too long
Any ideas?
OK: you have one filename per line in your file. The problem is that xargs without -0 treats spaces and tabs, as well as newlines, as separators between filenames, while xargs with -0 expects the filenames to be separated by NUL characters and won't care about the newlines at all.
So turn the newlines into NULs before feeding the result into the xargs -0 command:
grep -E ". " suspicious.txt | tr '\n' '\0' | xargs -0 chmod 000
Update:
See Mark Reed's correct answer. This was wrong because the NULs were needed to separate the filenames read from the file (grep's matched lines), not file names that grep itself reports, which is all that -Z affects.
Original:
You need something more like this:
grep -Z -E ". " suspicious.txt | xargs -0 chmod 000
From the xargs man page:
Because Unix filenames can contain blanks and newlines, this default behaviour is often problematic; filenames containing blanks and/or newlines are incorrectly processed by xargs. In these situations it is better to use the -0 option, which prevents such problems. When using this option you will need to ensure that the program which produces the input for xargs also uses a null character as a separator.
From the grep man page:
-Z, --null
Output a zero byte (the ASCII NUL character) instead of the character that normally follows a file name. For example, grep -lZ outputs a zero byte after each file name instead of the usual newline. This option makes the output unambiguous, even in the presence of file names containing unusual characters like newlines. This option can be used with commands like find -print0, perl -0, sort -z, and xargs -0 to process arbitrary file names, even those that contain newline characters.
I am trying to search for files with specific text but excluding a certain text and showing only the files.
Here is my code:
grep -v "TEXT1" *.* | grep -ils "ABC2"
However, it returns:
(standard input)
Please suggest. Thanks a lot.
The output should only show the filenames.
Here's one way to do it, assuming you want to match these terms anywhere in the file.
grep -LZ 'TEXT1' *.* | xargs -0 grep -li 'ABC2'
-L will match files not containing the given search term
use -LiZ if you want to match TEXT1 irrespective of case
The -Z option is needed to separate the filenames with NUL characters; xargs -0 will then split its input on NUL as well
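A quick sanity check with two throwaway files (the names and contents here are made up):
printf 'ABC2\n' > a.txt
printf 'TEXT1\nABC2\n' > b.txt
grep -LZ 'TEXT1' *.txt | xargs -0 grep -li 'ABC2'   # prints only a.txt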
If you want to check these two conditions on same line instead of anywhere in the file:
grep -lP '^(?!.*TEXT1).*(?i:ABC2)' *.*
-P enables PCRE, which I assume you have since linux is tagged
(?!regexp) is a negative lookahead construct, so ^(?!.*TEXT1) will ensure the line doesn't have TEXT1
(?i:ABC2) will match ABC2 case insensitively
Use grep -liP '^(?!.*TEXT1).*ABC2' if you want to match both terms irrespective of case
(standard input)
This output is due to the use of grep -l in a pipeline: the second grep command is reading its input from stdin, not from a file, so the -l option prints (standard input) instead of a filename.
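You can reproduce it in isolation; grep -l simply has no filename to report when reading a pipe:
echo 'ABC2' | grep -li 'ABC2'   # prints: (standard input)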
You can use this alternate solution in a single awk command (note that, like the grep -P version above, it tests both conditions on the same line):
awk '/ABC2/ && !/TEXT1/ {print FILENAME; nextfile}' *.* 2>/dev/null
$ find ~/AppData/Local/atom/ -name atom.sh -type f | grep -FzZ 'cli/atom.sh' | sed '/^\s*$/d' | cat -n
1 /c/Users/devuser/AppData/Local/atom/app-1.12.1/resources/cli/atom.sh
2 /c/Users/devuser/AppData/Local/atom/app-1.12.2/resources/cli/atom.sh
3
I tried a number of sed/awk-based options to get rid of the blank line. (#3 in the output). But I can't quite get it right...
I need to get the last line into a variable...
The actual command I am working with, below, fails to give any output...
find ~/AppData/Local/atom/ -name atom.sh -type f | grep -FzZ 'cli/atom.sh' | sed '/^$/d' | tail -n 1
The sed command below removes all empty lines:
sed '/^$/d' file
or
The sed command below removes all empty lines as well as lines containing only blanks:
sed '/^[[:blank:]]*$/d' file
Add the -i option to edit the file in place:
sed -i '/^[[:blank:]]*$/d' file
The problem is the -z flag of grep. It makes grep treat its input as NUL-separated records; since the output of find contains no NUL bytes, the entire input becomes a single record, which grep matches and prints terminated by a NUL instead of a newline. The newlines inside the record survive, so sed still sees the two paths as ordinary lines, but the trailing NUL arrives as a final pseudo-line, and the pattern /^\s*$/ doesn't match it, because a NUL byte is not whitespace. That is the "blank" line 3 in your output.
Furthermore, the -z flag would only make sense if the filenames in the output of find were terminated by NUL characters, that is, with the -print0 flag. But you're not using that, so the -z flag in grep is counterproductive here. And the -Z flag is pointless too, because it should be used when the next command in the pipeline expects NUL-terminated records, which is not your case.
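You can see the stray NUL by dumping the pipeline's output with od -c; the last byte shown will be \0 rather than \n:
find ~/AppData/Local/atom/ -name atom.sh -type f | grep -FzZ 'cli/atom.sh' | od -c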
Do it like this:
find ~/AppData/Local/atom/ -name atom.sh -type f -print0 | grep -zF 'cli/atom.sh' | tr '\0' '\n' | tail -n 1
Not all sed implementations support the token \s for any whitespace character.
You can use the character class [[:blank:]] (for space or tab), or [[:space:]] (for any whitespace):
sed '/^[[:blank:]]*$/d'
sed '/^[[:space:]]*$/d'
I have two .txt files.
'target.txt' is a list of target files
'destination.txt' is a list (on corresponding lines) of destinations.
I'd like to create a command that does the following:
cp [line 1 from target.txt] [line 1 from destination.txt]
For each line of the files.
paste target.txt destination.txt | sed -e 's/^/cp /' > cp.cmds
Then, after inspecting cp.cmds for correctness, you can just run it as a shell script.
sh cp.cmds
The paste command merges two files by concatenating corresponding lines.
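A quick illustration with two hypothetical two-line lists (paste joins corresponding lines with a tab):
$ cat target.txt
a.txt
b.txt
$ cat destination.txt
backup/a.txt
backup/b.txt
$ paste target.txt destination.txt | sed -e 's/^/cp /'
cp a.txt	backup/a.txt
cp b.txt	backup/b.txt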
paste target.txt destination.txt | while read target dest; do
    cp "$target" "$dest"
done
This will not work if any of the filenames contain spaces, though. If that's a requirement, I would use awk to read the first file into an array, then when reading the second file print a cp command with the corresponding lines and quotes around them, and pipe this to sh to execute it.
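A minimal sketch of that awk approach (assuming, as a hedge, that the filenames contain no double quotes, backslashes, or dollar signs, since the generated commands are interpreted by the shell):
awk 'NR==FNR { t[FNR] = $0; next }                  # first file: remember each target line
     { printf "cp \"%s\" \"%s\"\n", t[FNR], $0 }    # second file: emit a quoted cp command
' target.txt destination.txt | sh
Drop the | sh at the end to inspect the generated commands before executing them.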
To handle whitespace in the filenames:
paste -d\\n target.txt destination.txt | xargs -d\\n -n2 -x cp
paste -d\\n interleaves lines of the argument files
xargs -d\\n -n2 reads two complete lines at a time and applies them as two arguments at the end of the command line. The -d flag disables all special processing of quotes, apostrophes and backslashes in the input lines, as well as the eof character (by default _).
The -d command-line option to xargs is a GNU extension. If you are stuck with a POSIX-standard xargs, you can use the following alternative, courtesy of the Open Group (see example 2, near the end of the page):
paste -d\\n target.txt destination.txt |
sed 's/[^[:alnum:]]/\\&/g' |
xargs -E "" -n 2 -x cp
The sed command backslash-escapes every non-alphanumeric character
xargs -E "" disables the end-of-file character handling.
I have the following problem.
I've got a file which lists certain paths/files of a filesystem.
For some reason these include the whole range of special characters, like spaces, single/double quotes, sometimes even the copyright sign.
I need to run each line of the file and pass it to another command.
What I tried so far is:
<input_file xargs -I % command %
Which was working until I got this message from xargs
xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
But using this option did not work at all for me:
xargs: argument line too long
Does anybody have a solution that works with special characters?
Doesn't have to be with xargs, but I need to pass the line as it is to the command.
Many thanks in advance.
You should separate the filenames with the NUL character (\0) for processing.
This can be done with
find . <args> -print0 | xargs -0
or, if you must process a file of filenames, change the '\n' to '\0', e.g.:
tr '\n' '\0' < filename | xargs -0 -n1 -I% echo "==%=="
The -n 1 option means:
-n max-args
Use at most max-args arguments per command line.
and you should enclose the % in quotes, as in "==%==" above.
The xargs -0 -n1 -I% echo "==%==" solution didn't work for me on my Mac OS X, so I found another one.
<input_with_single_quotes cat | sed "s/'/\\\'/g" | xargs -I {} echo {}
This replaces each ' character with \', which works well as input to the command run by xargs.
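For instance, with a made-up filename containing an apostrophe:
echo "pat o'brien.txt" | sed "s/'/\\\'/g" | xargs -I {} echo "=={}=="   # prints: ==pat o'brien.txt==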
I'm trying to use grep to find all occurrences of μ under a directory; unfortunately, μ is not a keyboard character. Any ideas?
BTW, for normal keyboard words, I could use
find / -type f -print | xargs grep -inE <search_word> 2>/dev/null
to find out all plain text files that contain the search word.
Would you mind using sed instead of grep?
sed -n '/\xb5/p'
However, grep should also work:
grep -P '\xb5'
In Bash, you can use the shell's quoting facilities to pass non-ASCII content. In order to correctly identify the search string, we need to know the encoding of the files you are grepping. If they are in UTF-8, you need a different search string than if they are in ISO-8859-1 or UTF-16.
If your shell's locale agrees with the contents of the file, this should all work undramatically out of the box, but here are a couple of workarounds.
# grep ISO-8859-1 \xB5
grep $'\xB5' file
# grep UTF-8 U+03BC
grep $'\xCE\xBC' file
# grep UTF-16be U+03BC
grep $'\x03\xBC' file
# grep UTF-16le U+03BC
grep $'\xBC\x03' file
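If you're not sure which encoding your files use, dump a sample character with od; in a UTF-8 locale, for example, μ (U+03BC) comes out as the two bytes ce bc:
printf 'μ' | od -An -tx1   # prints: ce bc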
Some older versions of grep have a problem with non-ASCII characters; as a workaround, you can also use Perl.
perl -ne 'print if m/\x{03BC}/' file
You might have to play around with Perl's Unicode facilities to get this to work.
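If the files are UTF-8, telling Perl to decode its streams is usually the missing piece; a hedged variant:
perl -CSD -ne 'print if m/\x{03BC}/' file   # -CSD: treat stdin/stdout and opened files as UTF-8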