I need to count the occurrences of the hex string 0xFF 0x84 0x03 0x07 in a binary file, without too much hassle... is there a quick way of grepping for this data from the linux command line or should I write dedicated code to do it?
Patterns without linebreaks
If your version of grep takes the -P parameter, then you can use grep -a -P to search for an arbitrary binary string (with no linebreaks) inside a binary file. This is close to what you want:
grep -a -c -P '\xFF\x84\x03\x07' myfile.bin
-a ensures that binary files will not be skipped
-c outputs the count
-P specifies that your pattern is a Perl-compatible regular expression (PCRE), which allows strings to contain hex characters in the above \xNN format.
Unfortunately, grep -c will only count the number of "lines" the pattern appears on - not actual occurrences.
To get the exact number of occurrences with grep, it seems you need to do:
grep -a -o -P '\xFF\x84\x03\x07' myfile.bin | wc -l
grep -o separates out each match onto its own line, and wc -l counts the lines.
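As a quick sanity check (three.bin is a hypothetical test file; assumes GNU grep with PCRE support), three back-to-back occurrences are counted correctly even though they share one "line":
# build a file containing three occurrences and no newlines at all
printf '\xFF\x84\x03\x07AB\xFF\x84\x03\x07\xFF\x84\x03\x07' > three.bin
LC_ALL=C grep -a -o -P '\xFF\x84\x03\x07' three.bin | wc -l
# prints 3 (grep -c would print 1 here, since all matches sit on one "line")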
Patterns containing linebreaks
If you do need to grep for linebreaks, one workaround I can think of is to use tr to swap the newline character for another one that's not in your search term.
# set up test file (0a is newline)
xxd -r <<< '0:08 09 0a 0b 0c 0a 0b 0c' > test.bin
# grep for '\xa\xb\xc' doesn't work
grep -a -o -P '\xa\xb\xc' test.bin | wc -l
# swap newline with oct 42 and grep for that
tr '\n\042' '\042\n' < test.bin | grep -a -o -P '\042\xb\xc' | wc -l
(Note that 042 octal is the double quote " sign in ASCII.)
Another way, if your string doesn't contain Nulls (0x0), would be to use the -z flag, and swap Nulls for linebreaks before passing to wc.
grep -a -o -P -z '\xa\xb\xc' test.bin | tr '\0\n' '\n\0' | wc -l
(Note that -z and -P may be experimental in conjunction with each other. But with simple expressions and no Nulls, I would guess it's fine.)
Use hexdump like this:
hexdump -v -e '"0x" 1/1 "%02X" " "' <filename> | grep -oh "0xFF 0x84 0x03 0x07" | wc -w
hexdump will output the binary file in the given format: one 0xNN token per byte, separated by spaces, all on a single line
grep -o will find every occurrence of the string, even when several occur on the same line
wc -w will give you the final count
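To see what that hexdump format produces, a minimal illustration:
# two bytes in, two 0xNN tokens out, no newline
printf 'AB' | hexdump -v -e '"0x" 1/1 "%02X" " "'
# 0x41 0x42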
did you try grep -a?
from grep man page:
-a, --text
Process a binary file as if it were text; this is equivalent to the --binary-files=text option.
How about:
$ hexdump a.out | grep -Ec 'ff ?84 ?03 ?07'
(Caveat: hexdump's default format groups bytes into 16-bit words using the host byte order, so on little-endian machines the pattern would need its byte pairs swapped, and matches spanning a line break are still missed.)
This doesn't quite answer your question, but does solve the problem when the search string is ASCII but the file is binary:
cat binaryfile | sed 's/SearchString/SearchString\n/g' | grep -c SearchString
Basically, grep was almost there, except it only counted one occurrence when there was no newline byte in between, so I added the newline bytes.
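A minimal illustration of the idea (assumes GNU sed, which understands \n in the replacement):
# three matches on a single line are split apart before counting
printf 'fooXfooYfoo\n' | sed 's/foo/foo\n/g' | grep -c foo
# prints 3; without the sed step, grep -c would print 1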
Related
The hexdump command converts any file to hex values.
But what if I have hex values and I want to reverse the process, is this possible?
There is a similar tool called xxd. If you run xxd with just a file name it dumps the data in a fairly standard hex dump format:
# xxd bdata
0000000: 0001 0203 0405                           ......
Now if you pipe the output back to xxd with the -r option and redirect that to a new file, you can convert the hex dump back to binary:
# xxd bdata | xxd -r >bdata2
# cmp bdata bdata2
# xxd bdata2
0000000: 0001 0203 0405                           ......
I've written a short AWK script which reverses hexdump -C output back to the original data. Use it like this:
reverse-hexdump.sh hex.txt > data
It handles '*' repeat markers and regenerates the original data, even if it is binary. hexdump -C and reverse-hexdump.sh make a data round-trip pair. It is available here:
GitHub reverse-hexdump repo
Direct to reverse-hexdump.sh
Restore file, given only the output of hexdump file
If you only have the output of hexdump file and want to restore the original file, first note that hexdump's default output depends on the endianness of the system you ran hexdump on!
If you have access to the system that created the dump, you can determine its endianness using the command below:
[[ "$(printf '\01\03' | hexdump)" == *0103* ]] && echo big || echo little
Reversing little-endian hexdump
This is the most common case. All x86/x64 systems are little-endian. If you don't know the endianness of the system that ran hexdump file, try this.
sed 's/ \(..\)\(..\)/ \2\1/g;$d' dump | xxd -r
The sed part converts hexdump's format into xxd's format, at least so far that xxd -r works.
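A hypothetical round-trip check (file names are placeholders; keep in mind the odd-length and * caveats listed under Known Bugs below):
hexdump sample.bin > dump
sed 's/ \(..\)\(..\)/ \2\1/g;$d' dump | xxd -r > restored.bin
cmp sample.bin restored.bin && echo "round-trip OK"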
Reversing big-endian hexdump
sed '$d' dump | xxd -r
Known Bugs (see comment section)
A trailing null byte is added if the original file was of odd length (e.g. 1, 3, 5, 7, ... bytes long).
Repeating sections of the original file are not restored correctly if they were hexdumped using a *.
You can check your dump for the above problematic cases by running the command below:
grep -qE '^\*|^[0-9a-f]*[13579bdf] *$' dump && echo bug || echo ok
Better alternative to create hexdumps in the first place
Besides the non-POSIX (and therefore less portable) xxd, there is od (octal dump), which should be available on all Unix-like systems, as it is specified by POSIX:
od -tx1 -An -v
This will print a hexadecimal dump, grouping digits as single bytes (-tx1), with no address prefixes (-An, similar to xxd -p) and without abbreviating repeated sections as * (-v). You can reverse such a dump using xxd -r -p.
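For example, a sketch of a round trip that uses only POSIX od for the dump side (file names are hypothetical):
od -tx1 -An -v file.bin > file.hex    # portable hex dump
xxd -r -p file.hex > restored.bin     # xxd -r -p ignores the whitespace
cmp file.bin restored.bin && echo OK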
As someone who sucks at bash, I could not understand the examples already posted.
Here is what would have helped me when I was originally searching:
Take your text file "AYE.TXT" and convert it into a hex dump called "BEE.TXT"
xxd -p "AYE.TXT" > "BEE.TXT"
Take your hex dump file ("BEE.TXT") and convert it back to an ASCII file "CEE.TXT"
xxd -r -p "BEE.TXT" > "CEE.TXT"
Now that you have some simple working code, feel free to check out
"xxd -help" on the command line for an explanation of what all those flags do.
(That part is the easy part, the hard part is the specifics of the bash syntax)
There are a tonne of more elegant ways to get this done, but I've quickly hacked something together that Works for Me (tm) when regenerating a binary file from a hex dump generated by hexdump -C some_file.bin:
sed 's/\(.\{8\}\) \(..\) \(..\) \(..\) \(..\) \(..\) \(..\) \(..\) \(..\)/\1: \2\3 \4\5 \6\7 \8\9/g' some_file.hexdump | sed 's/\(.*\) \(..\) \(..\) \(..\) \(..\) \(..\) \(..\) \(..\) \(..\) |/\1 \2\3 \4\5 \6\7 \8\9 /g' | sed 's/.$//g' | xxd -r > some_file.restored
Basically, it uses two sed processes, each handling its part of each line. Ugly, but someone might find it useful.
If you don't have xxd, use hexdump, od, perl or python:
The following all give the same output:
# If you only have hexdump
hexdump -ve '1/1 "%.2x"' mybinaryfile > mydump
# This gives exactly the same output as:
xxd -p mybinaryfile > mydump
# Or, much slower:
od -v -t x1 -An < mybinaryfile | tr -d "\n " > mydump
# Or, the fastest:
perl -pe 'BEGIN{$/=\1e6} $_=unpack "H*"' < mybinaryfile > mydump
# Or, if you somehow have Python, and not Perl:
python -c "print(open('mybinaryfile','rb').read().hex())" > mydump
Then you can copy and paste, or pipe the output, and convert back with:
xxd -r -p mydump mybinaryfileagain
# Or
xxd -r -p < mydump > mybinaryfileagain
The hexdump command is available almost everywhere, and is usually part of the default busybox - if it's not linked, you can try running busybox hexdump or busybox xxd.
If xxd is not available to reverse the data, then you can try awk, as sketched below.
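A rough sketch of such an awk reversal, assuming a plain dump in xxd -p style and GNU awk for strtonum (file names are hypothetical; run under LC_ALL=C, since in multibyte locales %c can mangle bytes above 127, and some awk implementations are not fully binary-safe):
# walk each line two hex digits at a time and emit the corresponding byte
LC_ALL=C awk '{ for (i = 1; i <= length($0); i += 2) printf "%c", strtonum("0x" substr($0, i, 2)) }' mydump > mybinaryfileagain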
The old days: Zmodem
In the old days we used to use X/Y/Zmodem, which is available in the package lrzsz. It can tolerate lossy comms, but it's a bidirectional protocol, so both ends need to be running at the same time and there needs to be bidirectional comms:
# Demo on local machine, using FIFOs
mkfifo /tmp/fifo-in
mkfifo /tmp/fifo-out
# in one terminal:
sz -b mybinaryfile > /tmp/fifo-out < /tmp/fifo-in
# in another terminal:
mkdir out; cd out
rz -b < /tmp/fifo-out > /tmp/fifo-in
Luckily, screen supports receiving Zmodem, so if you're in a screen session:
screen
telnet somehost
Then type Ctrl+A and : and then zmodem catch and Enter. Then inside the screen on the remote host, issue:
# sz -b mybinaryfile
Press Enter when you see the string starting with "!!!".
When you see "Transfer Complete", you may want to run reset if you want to continue the terminal session normally.
This program reverses hexdump -C output back to the original data.
Usage:
make
make test
./unhexdump -i inputfile -o outputfile
See https://github.com/zhouzq-thu/unhexdump.
I found a simpler solution:
bin2hex
echo -n "abc" | hexdump -ve '1/1 "%02x"'
hex2bin
echo -n "616263" | grep -Eo ".{2}" | sed 's/\(.*\)/\\x\1/' | tr -d '\n' | xargs -0 echo -ne
If we use grep's -c option, it counts each matching line only once, no matter how many occurrences it contains. But I need the total number of matched occurrences, not the line count.
Use this
grep -o pattern path | wc -l
You can use the -o flag to output only the matched part and then pipe it to wc -w to get a word count.
Eg: ls ~ | grep -o pattern | wc -w
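A quick illustration of the difference between counting occurrences and counting lines:
printf 'foo foo\n' | grep -o foo | wc -l   # prints 2 (occurrences)
printf 'foo foo\n' | grep -c foo           # prints 1 (matching lines)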
On Ubuntu 10.04.4 LTS, I did the following small test and got a surprising result:
First, I created a file with 5 lines and name it as a.txt:
echo -e "1\n2\n3\n4\n5" > a.txt
$ cat a.txt
1
2
3
4
5
Then I run wc to count the number of lines
$ wc -l a.txt
5 a.txt
However, when I run grep to count the number of lines that have line breaks I got an answer that I did not understand:
$ grep -c -P '\n' a.txt
3
My question is: how does grep get this number? Shouldn't it be 4?
Please Read The Fine Manual!
seq 1 5 | wc -l
5
seq 1 5 | grep -ac $'\n'
5
I don't understand where the problem is!?
seq 1 5 | hd
00000000 31 0a 32 0a 33 0a 34 0a 35 0a |1.2.3.4.5.|
Explanation:
The -a switch tells grep to open the file in binary mode, i.e. to not care about text formatting.
The $'\n' syntax is resolved by bash itself, before grep runs. This gives you the ability to pass control characters as arguments to any command under bash.
grep cannot see the newline character; it searches for a pattern within each line.
Consider using grep -c -P '$' a.txt to match the ending of each line.
The newline character is not part of lines. grep uses the newline character as the record separator and removes it from the lines, so that patterns with $ work as expected. For example, to search for lines ending with foo you can use the pattern foo$ instead of having to write foo\n$, which would be very inconvenient.
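A small demonstration of this (the exact -P behaviour may vary with grep version, as discussed below):
printf '1\n2\n3\n' | grep -c '$'       # 3: $ matches at every (stripped) line end
printf '1\n2\n3\n' | grep -c -P '\n'   # 0 on a current GNU grep: \n never appears in a line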
So grep -c -P '\n' a.txt should give you 0. If you're getting 3, that sounds extremely strange, but perhaps it can be explained by the "highly experimental" remark in man grep:
-P, --perl-regexp
Interpret PATTERN as a Perl regular expression (PCRE, see
below). This is highly experimental and grep -P may warn of
unimplemented features.
I'm on Debian/Wheezy, which is much more recent than Ubuntu 10.04. If -P is "highly experimental" today, it's not too difficult to imagine it was buggy in older systems. This is just a guess, though.
To count the number of newlines, use wc -l, not a grep -c hack.
Btw, interestingly:
$ printf hello >> a.txt
$ wc -l a.txt
5 a.txt
$ grep -c '' a.txt
6
That is, printf doesn't print a newline, so after we append "hello" to a.txt, there won't be a newline at the end of the file. So wc -l counts newline characters, not exactly "lines", and grep '' (empty string) matches all lines.
I think you want to use
$ grep -c -P "." a.txt
5
$ echo "6" >> a.txt
$ grep -c -P "." a.txt
6
$ cat a.txt
1
2
3
4
5
6
How to check if a large file contains only zero bytes ('\0') in Linux using a shell command? I can write a small program for this, but it seems to be overkill.
If you're using bash, you can use read -n 1 to exit early if a non-NUL character has been found:
<your_file tr -d '\0' | read -n 1 || echo "All zeroes."
where you substitute the actual filename for your_file.
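A small demonstration with two hypothetical files (bash):
printf '\0\0\0' > zeros.bin
printf '\0a\0' > mixed.bin
<zeros.bin tr -d '\0' | read -n 1 || echo "All zeroes."   # prints the message
<mixed.bin tr -d '\0' | read -n 1 || echo "All zeroes."   # prints nothing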
The "file" /dev/zero returns a sequence of zero bytes on read, so a cmp file /dev/zero should give essentially what you want (reporting the first different byte just beyond the length of file).
If you have Bash,
cmp file <(tr -dc '\000' <file)
If you don't have Bash, the following should be POSIX (but I guess there may be legacy versions of cmp which are not comfortable with reading standard input):
tr -dc '\000' <file | cmp - file
Perhaps more economically, assuming your grep can read arbitrary binary data,
tr -d '\000' <file | grep -q -m 1 ^ || echo All zeros
I suppose you could tweak the last example even further with a dd pipe to truncate any output from tr after one block of data (in case there are very long sequences without newlines), or even down to one byte. Or maybe just force there to be newlines.
tr -d '\000' <file | tr -c '\000' '\n' | grep -q -m 1 ^ || echo All zeros
It won't win a prize for elegance, but:
xxd -p file | grep -qEv '^(00)*$'
xxd -p prints a file in the following way:
23696e636c756465203c6572726e6f2e683e0a23696e636c756465203c73
7464696f2e683e0a23696e636c756465203c7374646c69622e683e0a2369
6e636c756465203c737472696e672e683e0a0a766f696420757361676528
63686172202a70726f676e616d65290a7b0a09667072696e746628737464
6572722c202255736167653a202573203c
So we grep to see whether there is a line that is not made up entirely of 0's, which would mean there is a char different from '\0' in the file. If not, the file is made up entirely of zero-chars.
(The return code signals which one happened, I assumed you wanted it for a script. If not, tell me and I'll write something else)
EDIT: added -E for grouping and -q to discard output.
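For use in a script, the return code can be tested directly, e.g.:
if xxd -p file | grep -qEv '^(00)*$'; then
    echo "file contains a nonzero byte"
else
    echo "file is all zeros"
fi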
Straightforward:
if [ -n "$(tr -d '\000' < file | head -c 1)" ]; then
  echo a nonzero byte
fi
The tr -d removes all null bytes. If there are any bytes left, the [ -n test sees a nonempty string. (Note the quotes around the command substitution: without them, an empty result would make [ -n ] always true. Also, the octal escape is \000, not \0000.)
Completely changed my answer based on the reply here
Try
perl -0777ne'print /^\x00+$/ ? "yes" : "no"' file
How do I replace whitespace with tabs in Linux in a given text file?
Use the unexpand(1) program
UNEXPAND(1) User Commands UNEXPAND(1)
NAME
unexpand - convert spaces to tabs
SYNOPSIS
unexpand [OPTION]... [FILE]...
DESCRIPTION
Convert blanks in each FILE to tabs, writing to standard output. With
no FILE, or when FILE is -, read standard input.
Mandatory arguments to long options are mandatory for short options
too.
-a, --all
convert all blanks, instead of just initial blanks
--first-only
convert only leading sequences of blanks (overrides -a)
-t, --tabs=N
have tabs N characters apart instead of 8 (enables -a)
-t, --tabs=LIST
use comma separated LIST of tab positions (enables -a)
--help display this help and exit
--version
output version information and exit
. . .
STANDARDS
The expand and unexpand utilities conform to IEEE Std 1003.1-2001
(``POSIX.1'').
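For instance, using the options quoted above (file names are hypothetical):
# convert every run of blanks, not just leading ones, using 4-column tab stops
unexpand -a -t 4 input.txt > output.txt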
I think you can try with awk
awk -v OFS="\t" '$1=$1' file1
or sed if you prefer
sed -E 's/[[:blank:]]+/\t/g' thefile.txt > the_modified_copy.txt
or even tr
tr -s ' ' '\t' < thefile.txt > the_modified_copy.txt
or a simplified version of the tr solution suggested by Sam Bisbee
tr ' ' '\t' < someFile > someFile.new
(Don't redirect back to the input file itself; the shell would truncate it before tr reads it.)
Using Perl:
perl -p -i -e 's/ /\t/g' file.txt
better tr command (with the character class quoted so the shell cannot glob-expand it):
tr '[:blank:]' '\t'
This will clean up the output of say, unzip -l , for further processing with grep, cut, etc.
e.g.,
unzip -l some-jars-and-textfiles.zip | tr '[:blank:]' '\t' | cut -f 5 | grep jar
Example command for converting each .js file under the current dir to tabs (only leading spaces are converted):
find . -name "*.js" -exec bash -c 'unexpand -t 4 --first-only "$0" > /tmp/totabbuff && mv /tmp/totabbuff "$0"' {} \;
Download and run the following script to recursively convert soft tabs to hard tabs in plain text files.
Place and execute the script from inside the folder which contains the plain text files.
#!/bin/bash
find . -type f -not -path './.git/*' -exec grep -Iq . {} \; -print | while read -r file; do
  echo "Converting... $file"
  data=$(unexpand --first-only -t 4 "$file")
  rm "$file"
  echo "$data" > "$file"
done
Using sed:
T=$(printf "\t")
sed "s/[[:blank:]]\+/$T/g"
or
sed "s/[[:space:]]\+/$T/g"
You can also use astyle. I found it quite useful and it has several options too:
Tab and Bracket Options:
If no indentation option is set, the default option of 4 spaces will be used. Equivalent to -s4 --indent=spaces=4. If no brackets option is set, the brackets will not be changed.
--indent=spaces, --indent=spaces=#, -s, -s#
Indent using # spaces per indent. Between 1 to 20. Not specifying # will result in a default of 4 spaces per indent.
--indent=tab, --indent=tab=#, -t, -t#
Indent using tab characters, assuming that each tab is # spaces long. Between 1 and 20. Not specifying # will result in a default assumption of
4 spaces per tab.
This will replace consecutive spaces with one space (but not tab).
tr -s '[:blank:]'
This will replace consecutive spaces with a tab.
tr -s '[:blank:]' '\t'
If you are talking about replacing all consecutive spaces on a line with a tab then tr -s '[:blank:]' '\t'.
[root@sysresccd /run/archiso/img_dev]# sfdisk -l -q -o Device,Start /dev/sda
Device Start
/dev/sda1 2048
/dev/sda2 411648
/dev/sda3 2508800
/dev/sda4 10639360
/dev/sda5 75307008
/dev/sda6 96278528
/dev/sda7 115809778
[root@sysresccd /run/archiso/img_dev]# sfdisk -l -q -o Device,Start /dev/sda | tr -s '[:blank:]' '\t'
Device Start
/dev/sda1 2048
/dev/sda2 411648
/dev/sda3 2508800
/dev/sda4 10639360
/dev/sda5 75307008
/dev/sda6 96278528
/dev/sda7 115809778
If you are talking about replacing all whitespace (e.g. space, tab, newline, etc.) then tr -s '[:space:]'.
[root@sysresccd /run/archiso/img_dev]# sfdisk -l -q -o Device,Start /dev/sda | tr -s '[:space:]' '\t'
Device Start /dev/sda1 2048 /dev/sda2 411648 /dev/sda3 2508800 /dev/sda4 10639360 /dev/sda5 75307008 /dev/sda6 96278528 /dev/sda7 115809778
If you are talking about fixing a tab-damaged file then use expand and unexpand as mentioned in other answers.
sed 's/[[:blank:]]\+/\t/g' original.out > fixed_file.out
This will, for example, reduce runs of tabs or spaces to one single tab.
You can also handle the case of multiple spaces/tabs becoming one space:
sed 's/[[:blank:]]\+/ /g' original.out > fixed_file.out