I want to see the sizes of images within a directory. For this purpose I do
$ for file in *.jpg; do jpegtopnm $file | pnmfile; done
Then I can see
jpegtopnm: WRITING PPM FILE
stdin: PPM raw, 960 by 1280 maxval 255
jpegtopnm: WRITING PPM FILE
stdin: PPM raw, 960 by 1280 maxval 255
jpegtopnm: WRITING PPM FILE
stdin: PPM raw, 1200 by 1600 maxval 255
and so on.
I would like to see
960 by 1280
960 by 1280
1200 by 1600
.............
How can one do this?
Answer
The command jpegtopnm is part of netpbm, a package of graphics manipulation programs and libraries:
$ apt-file -l find pnmfile
netpbm
Then we must read "man netpbm":
-quiet Suppress all informational messages
Thus we solved our problem:
$ for file in *.jpg; do jpegtopnm $file -quiet | pnmfile | cut -c 16-28; done
4000 by 3000
2592 by 1944
4000 by 3000
............
About "cut -c 16-28":
This is a filter that selects characters 16 through 28 of a string such as
"stdin: PPM raw, 960 by 1280 maxval 255".
If your directory holds images with sizes of varying widths, such as 4000x5000, 300x400, 2x3, or 40x67, fixed character positions won't work properly. For that reason you have to use cut in a more robust way: selecting by fields (-f), with the space character as the field separator (-d ' ').
$ for file in *.jpg; do jpegtopnm $file -quiet | pnmfile | cut -d ' ' -f 3-5; done
700 by 900
65 by 40
2 by 3
7000 by 9000
4000 by 3000
............
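If cut's positional selection feels brittle, awk (assumed to be installed) does the same extraction while splitting on any run of whitespace; counting fields from the end works here because pnmfile's size always sits just before "maxval N". A sketch under that assumption:
$ for file in *.jpg; do jpegtopnm "$file" -quiet | pnmfile | awk '{print $(NF-4), $(NF-3), $(NF-2)}'; done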
Related
I have a raw image that was taken with v4l2-ctl after the camera had been setup like:
# media-ctl -d /dev/media0 -l "'rzg2l_csi2 10830400.csi2':1 -> 'CRU output':0 [1]"
# media-ctl -d /dev/media0 -V "'rzg2l_csi2 10830400.csi2':1 [fmt:UYVY8_2X8/1280x960 field:none]"
# media-ctl -d /dev/media0 -V "'ov5645 0-003c':0 [fmt:UYVY8_2X8/1280x960 field:none]"
and then the picture got snapped with:
# v4l2-ctl --device /dev/video0 --stream-mmap --stream-to=frame.raw --stream-count=1
Now I've tried multiple methods to convert this into a JPEG, but nothing seems to yield the expected output.
the raw file can be downloaded here: https://drive.google.com/file/d/1VqXnrJDYbzdtSsWfTlm2mX9rl1-Rl_7F/view?usp=sharing
I tried out the following command:
convert -verbose -size 1280x960 UYVY:frame.raw frame.bmp
which I found on Converting from YUV(UYVY) to RGB using imagemagick
but it doesn't do the trick
Your frame is 2457600 bytes and your pixel dimensions are 1280x960, so you have:
bits per pixel = 2457600 * 8 / (1280 * 960) = 16
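You can reproduce that arithmetic in the shell (a quick sanity check; stat -c%s is the GNU coreutils spelling for a file's size in bytes):
$ echo $(( $(stat -c%s frame.raw) * 8 / (1280 * 960) ))
16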
You can get a list of the pixel formats that ffmpeg supports using:
ffmpeg -pix_fmts 2> /dev/null
Sample Output
FLAGS NAME            NB_COMPONENTS BITS_PER_PIXEL
-----
IO... yuv420p                3            12
IO... yuyv422                3            16
IO... rgb24                  3            24
IO... bgr24                  3            24
IO... yuv422p                3            16
IO... yuv444p                3            24
IO... yuv410p                3             9
...
...
That means you can get a list of pixel formats that contain Y, U and V with 16 bits per pixel like this:
ffmpeg -pix_fmts 2> /dev/null | awk '/y/ && /u/ && /16$/ {print}'
IO... yuyv422                3            16
IO... yuv422p                3            16
IO... yuvj422p               3            16
IO... uyvy422                3            16
IO... yuv440p                3            16
IO... yuvj440p               3            16
IO... yvyu422                3            16
Now you can run a loop, iterating over all the 16-bit per pixel YUV formats and see what ffmpeg makes of your image - naming each result after the format so you can identify which is which:
ffmpeg -pix_fmts 2> /dev/null |
awk '/y/ && /u/ && /16$/ {print $2}' |
while read f; do
    ffmpeg -y -s:v 1280x960 -pix_fmt $f -i frame.raw $f.jpg
done
That gives you these files:
-rw-r--r-- 1 mark staff 304916 3 Feb 09:38 yuv440p.jpg
-rw-r--r-- 1 mark staff 227123 3 Feb 09:38 yuvj422p.jpg
-rw-r--r-- 1 mark staff 39543 3 Feb 09:38 yuyv422.jpg
-rw-r--r-- 1 mark staff 39545 3 Feb 09:38 yvyu422.jpg
And I guess that yuyv422.jpg is your image, so that means you can extract it with:
ffmpeg -y -s:v 1280x960 -pix_fmt yuyv422 -i frame.raw result.jpg
If you wanted to do that with ImageMagick, you could do something like this:
#!/bin/bash
python3 <<EOF
import numpy as np
h, w = 960, 1280
# Load raw file into Numpy array
raw = np.fromfile('frame.raw', np.uint8)
raw[0::2].tofile('Y') # Starting at the 1st byte, write every 2nd byte to file "Y"
raw[1::4].tofile('U') # Starting at the 2nd byte, write every 4th byte to file "U"
raw[3::4].tofile('V') # Starting at the 4th byte, write every 4th byte to file "V"
EOF
# Load the Y channel, then the U and V channels forcibly resizing them, then combine and go to sRGB
magick -depth 8 -size 1280x960 gray:Y \
\( -size 640x960 gray:U gray:V -resize 1280x960\! \) \
-set colorspace YUV -combine -colorspace sRGB result.jpg
If you don't like/have Python, that part can be replaced with some basic C as follows:
#include <stdint.h>
#include <stdio.h>
// Split YUYV file called "frame.raw" into separate channels with filenames "Y", "U" and "V"
// Compile with: clang -O3 splitter.c -o splitter
int main(void){
    FILE *in, *Y, *U, *V;
    uint8_t buffer[4];
    // Open input file and 1 output file per channel
    in = fopen("frame.raw", "rb");
    Y = fopen("Y", "wb");
    U = fopen("U", "wb");
    V = fopen("V", "wb");
    if (!in || !Y || !U || !V) {
        fprintf(stderr, "Unable to open a file\n");
        return 1;
    }
    // Read complete 4-byte YUYV groups; any trailing partial group is ignored
    while (fread(buffer, 1, sizeof(buffer), in) == sizeof(buffer))
    {
        fputc(buffer[0], Y);
        fputc(buffer[1], U);
        fputc(buffer[2], Y);
        fputc(buffer[3], V);
    }
    fclose(in); fclose(Y); fclose(U); fclose(V);
    return 0;
}
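A usage sketch, assuming frame.raw sits in the current directory; after the split, the same magick command as in the Python variant rebuilds the JPEG:
$ clang -O3 splitter.c -o splitter
$ ./splitter
$ magick -depth 8 -size 1280x960 gray:Y \
    \( -size 640x960 gray:U gray:V -resize 1280x960\! \) \
    -set colorspace YUV -combine -colorspace sRGB result.jpg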
Having had so much fun doing ffmpeg, Python, and C versions, I thought I'd try doing it in just the shell, converting bytes to lines so I could pick alternate lines instead of alternate bytes. This works the same as the above:
#!/bin/bash
# Build JPEG image from YUYV image with packed bytes in order YUYVYUYV...
# Use "xxd" to convert bytes into lines, then extract alternate lines - which is easier than extracting bytes
H=960
W=1280
INPUT="frame.raw"
# Take top byte of every uint16 and put into "Y.pgm"
xxd -c1 -p "$INPUT" | sed -n 'p;n' | xxd -r -p | magick -size ${W}x${H} -depth 8 gray:- Y.pgm
# Take bottom byte of every 2nd uint16, starting at the 1st, resize up to full width and put into "U.pgm"
xxd -c1 -p "$INPUT" | sed -n 'n;p' | sed -n 'p;n' | xxd -r -p | magick -size $((W/2))x${H} -depth 8 gray:- -resize ${W}x${H}\! U.pgm
# Take bottom byte of every 2nd uint16, starting at the 2nd, resize up to full width and put into "V.pgm"
xxd -c1 -p "$INPUT" | sed -n 'n;p' | sed -n 'n;p' | xxd -r -p | magick -size $((W/2))x${H} -depth 8 gray:- -resize ${W}x${H}\! V.pgm
# Load the 3 channels, combine and convert to JPEG
magick {Y,U,V}.pgm -set colorspace YUV -combine -colorspace sRGB result.jpg
# Remove litter
rm {Y,U,V}.pgm
As regards colour cast removal: as I said in the comments, the "normal" way, AFAIK, is to get the average colour of the image, invert its hue, then blend that "negated cast" back with the original image to offset the original colour cast. Here is a crude attempt - if anyone knows better, please ping me!
Step 1: Get average colour cast
magick result.jpg -resize 1x1\! cast.png
Step 2: Invert the cast
magick cast.png -modulate 100,100,0 correction.png
Step 3: Blend the original with the correction and brighten maybe
magick result.jpg correction.png -define compose:args=50,50 -compose blend -composite -auto-level result.jpg
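If you prefer a single command, the three steps can be chained with a clone inside parentheses; this is a sketch of the same idea rather than a tested recipe, and corrected.jpg is just an illustrative output name:
$ magick result.jpg \( +clone -resize 1x1\! -modulate 100,100,0 \) \
    -define compose:args=50,50 -compose blend -composite -auto-level corrected.jpg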
Here are the original and corrected versions:
Obviously you can change the percentages for different degrees of "correction".
In order to count the lines of my repository, I typed the code below, and found out that images and pdfs are also included in the word count.
git ls-files | xargs wc -l
When someone asks you for the scale of the repository, would you include the images/pdfs?
If not, could someone help me answer the questions below?
How to exclude the files under the "/pdfs" directory?
How to exclude .jpg and .png files?
You can make use of cloc. It counts blank lines, comment lines, and physical lines of source code in many programming languages, and it can take file, directory, and/or archive names as inputs. For instance, if you want to count the number of lines of code in your repository and exclude some directories while counting, you can specify those directories, separated by commas, like this:
cloc --exclude-dir=imagedir,pdfdir your_repository
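For the second part of the question (skipping .jpg and .png wherever they live), cloc can also exclude files by extension with its --exclude-ext option, so something like this should work:
cloc --exclude-dir=pdfs --exclude-ext=jpg,png your_repository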
cloc will show you the report like this:
387 text files.
387 unique files.
22 files ignored.
github.com/AlDanial/cloc v 1.88 T=0.97 s (376.5 files/s, 152866.0 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Go                             235          17216          11769          95308
InstallShield                    2            410              0          11178
XML                             41           1418            159           2738
Python                           5            516            523           1792
Bourne Shell                    21            266            283           1512
JSON                            19             24              0           1005
Markdown                        23            452              0            797
AsciiDoc                         4            119              0            312
Ruby                             4             44             31            238
YAML                             4              4              2            113
WiX source                       1             19             24            112
make                             3             16             25             68
DOS Batch                        2             13              2             38
WiX include                      1              0              0             28
Dockerfile                       1             13              9             17
-------------------------------------------------------------------------------
SUM:                           366          20530          12827         115256
-------------------------------------------------------------------------------
You can also use CLOC with Git like this:
cloc $(git ls-files)
which is equivalent to
git ls-files | xargs cloc
cloc sounds like it does the job. You should remove space+tab from IFS if you use command sub though: IFS=$'\n' cloc $(git ls-files)
If you just want to know a word count or line count, you could bodge it together like this. It gives you the language too. Clone the repo, test for text file / file type, count lines, delete files.
#!/bin/sh -e
# Get dir name from URL + remove trailing slashes - works for _most_ urls
url=${1:? No URL given}
url=${url%/}; url=${url%/}
repo=${url##*/}
repo=${repo%.git}
dir=./$repo
# Clone repo in tmp
cd "${TMPDIR:-/tmp}"
[ -e "$dir" ] && { echo Exists: "$dir" >&2; exit 1; }
trap 'rm -rf "$dir"' EXIT INT
git clone "$url"
# Get column 1 width, for alignment
max_path_length=$(printf '%s\n' "$dir/"* | wc -L)
# Extract and print the data
printf '\n%s\n\n' "$repo text files details:"
for file in "$dir"/*; do
    mime=$(file --brief --mime-type "$file")
    type=${mime%%/*}
    if [ "$type" = text ]; then
        lines=$(grep -c . "$file") || true
        lang=${mime##*/}
        printf "%-${max_path_length}s %s\n" "${file#$dir}" "[$lang, $lines lines]"
        total_lines=$((total_lines + lines))
    fi
done
printf '\n%s\n\n' "${dir#./} total lines: $total_lines"
Example output:
$ git-wc 'git://git.savannah.gnu.org/sed.git'
Cloning into 'sed'...
remote: Counting objects: 6276, done.
remote: Compressing objects: 100% (1134/1134), done.
remote: Total 6276 (delta 4994), reused 6276 (delta 4994)
Receiving objects: 100% (6276/6276), 2.14 MiB | 495.00 KiB/s, done.
Resolving deltas: 100% (4994/4994), done.
sed text files details:
/AUTHORS             [plain, 6 lines]
/BUGS                [plain, 101 lines]
/COPYING             [plain, 553 lines]
/ChangeLog-2014      [plain, 2586 lines]
/Makefile.am         [x-makefile, 123 lines]
/NEWS                [plain, 498 lines]
/README              [plain, 12 lines]
/README-hacking      [plain, 58 lines]
/THANKS.in           [plain, 63 lines]
/basicdefs.h         [x-c, 83 lines]
/bootstrap           [x-shellscript, 930 lines]
/bootstrap.conf      [plain, 121 lines]
/cfg.mk              [plain, 343 lines]
/configure.ac        [x-m4, 294 lines]
/init.cfg            [plain, 163 lines]
/thanks-gen          [x-perl, 12 lines]
sed total lines: 5946
If the repo is local, you can just adjust the input methods. I'm sure the idea is clear. I know cloning the whole repo may be the dumbest way to do something like this, but sometimes you just want to know a thing. Plus you can use bash/sh pattern tests to exclude paths - e.g. [[ "$file" == "$dir"/<exclude-dir>/* ]] - as sketched below.
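For example, an exclusion test at the top of the for loop might look like this (a sketch; the pdfs directory and the .jpg/.png extensions come from the question, and case is the portable sh form since the script uses #!/bin/sh):
case "$file" in
    "$dir"/pdfs|"$dir"/pdfs/*|*.jpg|*.png) continue ;;   # skip the pdfs directory and image files
esac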
I've got a file test1.log:
04/15/2016 02:22:46 PM - kneaddata.knead_data - INFO: Running kneaddata v0.5.1
04/15/2016 02:22:46 PM - kneaddata.utilities - INFO: Decompressing gzipped file ...
Input Reads: 69766650 Surviving: 55798391 (79.98%) Dropped: 13968259 (20.02%)
TrimmomaticSE: Completed successfully
04/15/2016 02:32:04 PM - kneaddata.utilities - DEBUG: Checking output file from Trimmomatic : /home/liaoming/kneaddata_v0.5.1/WGC066610D/WGC066610D_kneaddata.trimmed.fastq
04/15/2016 05:32:31 PM - kneaddata.utilities - DEBUG: 55798391 reads; of these:
55798391 (100.00%) were unpaired; of these:
55775635 (99.96%) aligned 0 times
17313 (0.03%) aligned exactly 1 time
5443 (0.01%) aligned >1 times
0.04% overall alignment rate
and other files in the same format but with different contents, like test2.log, test3.log, up to test60.log.
I would like to extract two numbers from each of these files. For example, for test1.log the two numbers would be 55798391 and 55775635.
So the final generated file counts.txt would be something like this:
test1 55798391 55775635
test2 51000000 40000000
.....
test60 5000000 30000000
awk to the rescue!
$ awk 'FNR==9{f=$1} FNR==10{print FILENAME,f,$1}' test{1..60}.log
If the files are not in the same directory, either call awk within a loop or create the file list and pipe it to xargs awk:
$ for i in {1..60}; do awk ... test$i/test$i.log; done
$ for i in {1..60}; do echo test$i/test$i.log; done | xargs awk ...
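Since the desired counts.txt shows test1 rather than test1.log, you can strip the suffix from FILENAME inside the same script (same line-position assumptions as above):
$ awk 'FNR==9{f=$1} FNR==10{n=FILENAME; sub(/\.log$/,"",n); print n, f, $1}' test{1..60}.log > counts.txt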
I am alright at writing Linux scripts but could use some advice. I know the problem is sort of vague, so if you can provide any help whatsoever I will appreciate it!
The following issue is for personal growth, and because I am writing some network tools for fun/learning. No homework involved (I'm a senior in college, none of my classes require this stuff!)
I am using tshark to get information about packet captures. This is what it looks like:
rachel@Ubuntu-1:~/PCAP$ tshark -r LargeTorrent.pcap -q -z io,phs
===================================================================
Protocol Hierarchy Statistics
Filter:
eth frames:4309 bytes:3984321
  ip frames:4119 bytes:3969006
    icmp frames:1316 bytes:1308988
    udp frames:1408 bytes:1350786
      data frames:1368 bytes:1346228
      dns frames:16 bytes:1176
      nbns frames:14 bytes:1300
      http frames:8 bytes:1596
      nbdgm frames:2 bytes:486
        smb frames:2 bytes:486
          mailslot frames:2 bytes:486
            browser frames:2 bytes:486
    tcp frames:1395 bytes:1309232
      data frames:1300 bytes:1294800
      http frames:6 bytes:3763
        data-text-lines frames:2 bytes:324
        xml frames:2 bytes:3205
          tcp.segments frames:1 bytes:787
      nbss frames:34 bytes:5863
        smb frames:17 bytes:3047
          pipe frames:4 bytes:686
            lanman frames:4 bytes:686
        smb2 frames:13 bytes:2444
      bittorrent frames:10 bytes:1709
        tcp.segments frames:2 bytes:433
          bittorrent frames:2 bytes:433
            bittorrent frames:1 bytes:258
        bittorrent frames:2 bytes:221
          bittorrent frames:2 bytes:221
  arp frames:146 bytes:8760
  ipv6 frames:44 bytes:6555
    udp frames:40 bytes:6211
      dns frames:18 bytes:1711
      dhcpv6 frames:14 bytes:2114
      http frames:6 bytes:1014
      data frames:2 bytes:1372
    icmpv6 frames:4 bytes:344
===================================================================
What I would like for it to look like:
rachel@Ubuntu-1:~/PCAP$ tshark -r LargeTorrent.pcap -q -z io,phs
===================================================================
Protocol Hierarchy Statistics
Filter:
Protocol Bytes
=====================================
eth                     3984321
  ip                    3969006
    icmp                1308988
    udp                 1350786
      data              1346228
      dns               1176
      nbns              1300
      http              1596
      nbdgm             486
        smb             486
          mailslot      486
            browser     486
    tcp                 1309232
      data              1294800
      http              3763
        data-text-lines 324
        xml             3205
          tcp.segments  787
      nbss              5863
        smb             3047
          pipe          686
            lanman      686
        smb2            2444
      bittorrent        1709
        tcp.segments    433
          bittorrent    433
            bittorrent  258
        bittorrent      221
          bittorrent    221
  arp                   8760
  ipv6                  6555
    udp                 6211
      dns               1711
      dhcpv6            2114
      http              1014
      data              1372
    icmpv6              344
===================================================================
Edit: I am going to add the original question for the purpose of making sense of the (great) answer that was provided.
Originally, I wanted to print statistics only for "leaves", because eth, ip, etc. are all parents and their statistics are not necessary for my purposes. In addition, instead of having a god-awful block of text with only spaces to show hierarchy, I wanted to erase all the statistics for parents and show the parents as breadcrumbs behind the child.
Example:
eth frames:4309 bytes:3984321
  ip frames:4119 bytes:3969006
    icmp frames:1316 bytes:1308988
    udp frames:1408 bytes:1350786
      data frames:1368 bytes:1346228
      dns frames:16 bytes:1176
Should become
eth:ip:icmp - 1308988 bytes
eth:ip:udp:data - 1346228 bytes
eth:ip:udp:dns - 1176 bytes
To preserve the hierarchy and avoid printing useless statistics.
Anyway, the accepted answer by Etan solved this perfectly! And for those of you who are on my level and unsure how to proceed after this answer, this will help you finish up:
Save the given script as a filename.awk file
Save the block of text you want to manipulate as a filename.txt file
Call awk -f filename.awk filename.txt
Optionally redirect the output to a file ( awk -f filename.awk filename.txt >> output.txt )
The output I originally thought you wanted could be achieved with this awk script. (I think this can probably be done cleaner but this seems to work well enough.)
function entry() {
    # Don't want to print empty entries.
    if (ind[0]) {
        printf "%s", ind[0]
        for (i = 1; i <= ls; i++) {
            printf ":%s", ind[i]
        }
        split(b, a, /:/)
        printf " - %s %s\n", a[2], a[1]
    }
}
# Found our data marker. Note that and print the current line.
$1 == "Filter:" {d=1; print; next}
# Print lines until we see our data marker.
!d {print; next}
# Print empty lines.
!NF {print; next}
# Save our trailing line for later.
/===/ {suf=$0; next}
{
    # Save our previous indentation level.
    ls = s
    # Find our new indentation level (by where the first field starts).
    s = (match($0, /[^[:space:]]/)-1) / 2
    # If the current line is at or below the last indent level print the last line.
    if (s <= ls) {
        entry()
    }
    # Save the current line's byte count.
    b = $NF
    # Save the current line's field name.
    ind[s] = $1
}
END {
    # Print a final line if we had one.
    entry()
    # Print the suffix line if we have one.
    if (suf) {
        print suf
    }
}
Which, on the sample input, gets you this output.
===================================================================
Protocol Hierarchy Statistics
Filter:
eth:ip:icmp - 1308988 bytes
eth:ip:udp:data - 1346228 bytes
eth:ip:udp:dns - 1176 bytes
eth:ip:udp:nbns - 1300 bytes
eth:ip:udp:http - 1596 bytes
eth:ip:udp:nbdgm:smb:mailslot:browser - 486 bytes
eth:ip:tcp:data - 1294800 bytes
eth:ip:tcp:http:data-text-lines - 324 bytes
eth:ip:tcp:http:xml:tcp.segments - 787 bytes
eth:ip:tcp:nbss:smb:pipe:lanman - 686 bytes
eth:ip:tcp:nbss:smb2 - 2444 bytes
eth:ip:tcp:bittorrent:tcp.segments:bittorrent:bittorrent - 258 bytes
eth:ip:tcp:bittorrent:bittorrent:bittorrent - 221 bytes
eth:arp - 8760 bytes
eth:ipv6:udp:dns - 1711 bytes
eth:ipv6:udp:dhcpv6 - 2114 bytes
eth:ipv6:udp:http - 1014 bytes
eth:ipv6:udp:data - 1372 bytes
eth:ipv6:icmpv6:data - 344 bytes
===================================================================
Output like what your edit indicates you want is probably more easily handled with sed, though.
/Filter:/a \
Protocol Bytes \
=====================================
s/frames:[^ ]*//
s/ b/b/
s/bytes:\([^ ]*\)/\1/
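Saved under a name of your choice (phs.sed here, purely for illustration), the script runs via sed's -f option against the captured tshark output:
sed -f phs.sed input.txt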
Running it against the sample input ends up with this output:
===================================================================
Protocol Hierarchy Statistics
Filter:
Protocol Bytes
=====================================
eth 3984321
  ip 3969006
    icmp 1308988
    udp 1350786
      data 1346228
      dns 1176
      nbns 1300
      http 1596
      nbdgm 486
        smb 486
          mailslot 486
            browser 486
    tcp 1309232
      data 1294800
      http 3763
        data-text-lines 324
        xml 3205
          tcp.segments 787
      nbss 5863
        smb 3047
          pipe 686
            lanman 686
        smb2 2444
      bittorrent 1709
        tcp.segments 433
          bittorrent 433
            bittorrent 258
        bittorrent 221
          bittorrent 221
  arp 8760
  ipv6 6555
    udp 6211
      dns 1711
      dhcpv6 2114
      http 1014
      data 1372
    icmpv6 344
===================================================================
A simple script with sed will work as well.
$ printf "\n==========================================================\n"; printf "Protocol Hierarchy Statistics\nFilter:\n\n";printf "\nProtocol\t\t\t\t Bytes\n================================================\n" && sed -e 's/\(frames[:].*bytes[:]\)\(.*$\)/\2/' dat/tshark.txt | tail -n+4 | head -n-1 && printf "================================================\n"
broken down into script form (where dat/tshark.txt is the filename holding the tshark output):
printf "\n==========================================================\n"
printf "Protocol Hierarchy Statistics\nFilter:\n\n"
printf "\nProtocol\t\t\t\t Bytes\n================================================\n"
sed -e 's/\(frames[:].*bytes[:]\)\(.*$\)/\2/' dat/tshark.txt | tail -n+4 | head -n-1
printf "================================================\n"
Output
==========================================================
Protocol Hierarchy Statistics
Filter:
Protocol Bytes
================================================
eth 3984321
  ip 3969006
    icmp 1308988
    udp 1350786
      data 1346228
      dns 1176
      nbns 1300
      http 1596
      nbdgm 486
        smb 486
          mailslot 486
            browser 486
    tcp 1309232
      data 1294800
      http 3763
        data-text-lines 324
        xml 3205
          tcp.segments 787
      nbss 5863
        smb 3047
          pipe 686
            lanman 686
        smb2 2444
      bittorrent 1709
        tcp.segments 433
          bittorrent 433
            bittorrent 258
        bittorrent 221
          bittorrent 221
  arp 8760
  ipv6 6555
    udp 6211
      dns 1711
      dhcpv6 2114
      http 1014
      data 1372
    icmpv6 344
================================================
Formatting
Following on from your comment about how to align the bytes info given the variable length of the protocol tags, you can make use of printf to format the output as you have indicated. Like Etan, I started working on your original question that had the tags consolidated. My initial approach was to read the different levels into different associative arrays that could be combined into what you initially specified. Doing so, I had to line the output up using printf. Here is the first attempt I made, working with the first 4 levels of your tshark data:
declare -i ln=0
declare -A l1 l2 l3 l4
## read each line in file and assign to associative arrays for each level
while read -r line; do
    ln=${#line} # base level on length of line read
    [ $ln -gt 66 ] && continue;
    [ $ln -eq 66 ] && { iface="${line%% *}"; l1[${iface}]="${line##* }"; }
    [ $ln -eq 64 ] && { proto="${iface}:${line%% *}"; l2[${proto}]="${line##* }"; }
    [ $ln -eq 62 ] && { ptype="${proto}:${line%% *}"; l3[${ptype}]="${line##* }"; }
    [ $ln -le 60 ] && { data="${ptype}:${line%% *}"; l4[${data}]="${line##* }"; }
done < "$1"
## output a summary of the file
printf "\n4-level deep summary of file '%s':\n\n" "$1"
for i in "${!l1[@]}"; do
    for j in "${!l2[@]}"; do
        printf " %-32s %s\n" "$j" "${l2[$j]}"
        for k in "${!l3[@]}"; do
            printf " %-32s %s\n" "$k" "${l3[$k]}"
            for l in "${!l4[@]}"; do
                [ "${l%:*}" == "$k" ] && printf " %-32s %s\n" "$l" "${l4[$l]}"
            done
        done
    done
done
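The script reads the tshark output from the file named by its first argument, so a hypothetical invocation looks like this (summary.sh being whatever name you saved the script under):
$ tshark -r LargeTorrent.pcap -q -z io,phs > tshark.txt
$ ./summary.sh tshark.txt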
The output it produced was for example:
eth:ip frames:4119 bytes:3969006
eth:ip:udp frames:1408 bytes:1350786
eth:ip:udp:data frames:1368 bytes:1346228
eth:ip:udp:nbdgm frames:2 bytes:486
eth:ip:udp:nbns frames:14 bytes:1300
You can look at the various printf statements in the code above and see how the alignment is handled. Let me know if you have further questions.
I'm a little surprised that tshark doesn't have a JSON or machine-readable way to get the -z io,phs info, when it has so many ways to extract packet info.
I tried playing with some of the above, but bash seems to have changed over the years (or has different defaults depending on the environment). I am also not sure which shell or version of it was used to produce the above.
The line lengths/output from tshark have also changed: My debugging showed different line lengths, so the trick above using line lengths, e.g. [ $ln -gt 66 ] didn't work for me.
It seems that read -r strips out leading/trailing whitespaces. If you actually want it, you need IFS= to make it give you the spaces:
## read each line in file
while IFS= read -r line ; do
...
done
The "nested" levels in associative arrays are clever but hard to work with - it shows what rabbit holes you can go down with bash - although when iterating through one, bash produces the keys in "hash" order and not the order they were added.
Since I actually needed the data in the rest of my script, the nested arrays made it particularly fiddly to deal with. Fine for printf purposes where you just print the line, but what if you actually want to get the frames count for each item and then do something with it?
Here was my attempt that simplified it a bit. I implemented it as a bash function which gets a few other bits of info from the sample file:
TSHARK=/usr/bin/tshark
CAPINFOS=/usr/bin/capinfos
declare -A fcount
declare -A bcount
declare -A capinfo
function loadcapinfo
{
    local sample=$1
    local statstofile=$2
    local bytes
    local frames
    local key
    if [ ! -f "$sample" ] ; then
        echo "FATAL: loadcapinfo: file does not exist: $sample"
        exit 1
    fi
    capinfo[start_time_epoch]=$($CAPINFOS -Tr -Sa $sample | cut -f2)
    capinfo[start_time]=$($CAPINFOS -Tr -a $sample | cut -f2)
    capinfo[end_time_epoch]=$($CAPINFOS -Tr -Se $sample | cut -f2)
    capinfo[end_time]=$($CAPINFOS -Tr -e $sample | cut -f2)
    capinfo[size]=$($CAPINFOS -Tr -s $sample | cut -f2)
    declare -i ln=0
    while IFS= read -r line ; do
        ln=${#line} # base level on length of line read
        [ $ln -le 1 ] && continue;
        pat=".*frames:([0-9]+)\s+bytes:([0-9]+)"
        pat_1="^(\w+)"
        pat_2="^\s{2}(\w+)"
        pat_3="^\s{4}(\w+)"
        pat_4="^\s{6}(\w+)"
        ethertype="ethertype"
        [[ $line =~ $pat ]] && { frames=${BASH_REMATCH[1]}; bytes=${BASH_REMATCH[2]}; } || continue;
        [[ $line =~ $pat_1 ]] && { encap="${BASH_REMATCH[1]}:${ethertype}"; key="${encap}"; }
        [[ $line =~ $pat_2 ]] && { proto=${BASH_REMATCH[1]}; key="${encap}:${proto}"; }
        [[ $line =~ $pat_3 ]] && { ptype=${BASH_REMATCH[1]}; key="${encap}:${proto}:${ptype}"; }
        [[ $line =~ $pat_4 ]] && { data=${BASH_REMATCH[1]}; key="${encap}:${proto}:${ptype}:${data}"; }
        [ "$proto" = "llc" ] && { key=${key/eth:ethertype:llc/eth:llc} ; }
        fcount[${key}]=${frames:=0}
        bcount[${key}]=${bytes:=0}
        if [ -n "$statstofile" ] ; then
            echo "${capinfo[start_time_epoch]},${key},${frames},${bytes}" >> $statstofile
        fi
    done < <($TSHARK -qr $sample -z io,phs)
    unset fcount[0]
}
Now, after this in the script, we can do:
loadcapinfo /my/sample/file.pcap /tmp/stats.txt
Optionally write the counts to a file, /tmp/stats.txt
This uses one associative array for each count, and puts other info into capinfo so now we can do things like:
echo "IPv4 Packet Count is: ${fcount[eth:ethertype:ip]}"
echo "IPv6 Packet Count is: ${fcount[eth:ethertype:ipv6]}"
echo "ARP Count is: ${fcount[eth:ethertype:arp]}"
echo "STP Count is: ${fcount[eth:llc:stp]}"
echo "Start time: ${capinfo[start_time]}"
echo "End time: ${capinfo[end_time]}"
echo "File size: ${capinfo[size]}"
I made the keys match Wireshark's frame.protocols field, which inserts some "pseudo protocol" for most things called "ethertype". This way, if you want to then iterate through the associative array to find the packet(s) in the pcap file, you can use the information to find packets with a given protocol.
tshark -r /my/sample/file.pcap -Y "frame.protocols == eth:ethertype:ip:udp:snmp" -Tfields -e frame.number -e eth.src_resolved -e eth.dst_resolved -e ip.src -e ip.dst -e frame.protocols
for i in "${!fcount[@]}"; do
    tshark -r /my/sample/file.pcap -Y "frame.protocols == $i" -Tfields -e frame.number -e eth.src_resolved -e eth.dst_resolved -e ip.src -e ip.dst -e frame.protocols > /tmp/$i.txt
done
If you look up how to clone an entire disk to another one on the web, you will find something like that:
dd if=/dev/sda of=/dev/sdb conv=notrunc,noerror
While I understand the noerror, I am having a hard time understanding why people think that notrunc is required for "data integrity" (as ArchLinux's Wiki states, for instance).
Indeed, I agree that it matters if you are copying a partition to another partition on another disk and you do not want to overwrite the entire disk, just the one partition. In this case notrunc, according to dd's manual page, is what you want.
But if you're cloning an entire disk, what does notrunc change for you? Just time optimization?
TL;DR version:
notrunc is only important to prevent truncation when writing into a file. This has no effect on a block device such as sda or sdb.
Educational version
I looked into the coreutils source code which contains dd.c to see how notrunc is processed.
Here's the segment of code that I'm looking at:
int opts = (output_flags
| (conversions_mask & C_NOCREAT ? 0 : O_CREAT)
| (conversions_mask & C_EXCL ? O_EXCL : 0)
| (seek_records || (conversions_mask & C_NOTRUNC) ? 0 : O_TRUNC));
/* Open the output file with *read* access only if we might
need to read to satisfy a `seek=' request. If we can't read
the file, go ahead with write-only access; it might work. */
if ((! seek_records
|| fd_reopen (STDOUT_FILENO, output_file, O_RDWR | opts, perms) < 0)
&& (fd_reopen (STDOUT_FILENO, output_file, O_WRONLY | opts, perms) < 0))
error (EXIT_FAILURE, errno, _("opening %s"), quote (output_file));
We can see here that if notrunc is not specified (and no seek= is given), the output file will be opened with O_TRUNC: the expression reduces to opts = output_flags | O_CREAT | O_TRUNC. Looking below at how O_TRUNC is treated, we can see that a normal file will get truncated if written into.
O_TRUNC
If the file already exists and is a regular file and the open
mode allows writing (i.e., is O_RDWR or O_WRONLY) it will be truncated
to length 0. If the file is a FIFO or terminal device file, the
O_TRUNC flag is ignored. Otherwise the effect of O_TRUNC is
unspecified.
Effects of notrunc / O_TRUNC I
In the following example, we start out by creating junk.txt of size 1024 bytes. Next, we write 512 bytes to the beginning of it with conv=notrunc. We can see that the size stays the same at 1024 bytes. Finally, we try it without the notrunc option and we can see that the new file size is 512. This is because it was opened with O_TRUNC.
$ dd if=/dev/urandom of=junk.txt bs=1024 count=1
$ ls -l junk.txt
-rw-rw-r-- 1 akyserr akyserr 1024 Dec 11 17:08 junk.txt
$ dd if=/dev/urandom of=junk.txt bs=512 count=1 conv=notrunc
$ ls -l junk.txt
-rw-rw-r-- 1 akyserr akyserr 1024 Dec 11 17:10 junk.txt
$ dd if=/dev/urandom of=junk.txt bs=512 count=1
$ ls -l junk.txt
-rw-rw-r-- 1 akyserr akyserr 512 Dec 11 17:10 junk.txt
Effects of notrunc / O_TRUNC II
I still haven't answered your original question of why, when doing a disk-to-disk clone, conv=notrunc is important. According to the above definition, O_TRUNC seems to be ignored when opening certain special files, and I would expect this to be true for block device nodes too. However, I don't want to assume anything and will attempt to prove it here.
openclose.c
I've written a simple C program here which opens and closes a file given as an argument with the O_TRUNC flag.
#include <stdio.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <time.h>
int main(int argc, char * argv[])
{
    if (argc < 2)
    {
        fprintf(stderr, "Not enough arguments...\n");
        return (1);
    }
    int f = open(argv[1], O_RDWR | O_TRUNC);
    if (f >= 0)
    {
        fprintf(stderr, "%s was opened\n", argv[1]);
        close(f);
        fprintf(stderr, "%s was closed\n", argv[1]);
    } else {
        perror("Opening device node");
    }
    return (0);
}
Normal File Test
We can see below that the simple act of opening and closing a file with O_TRUNC will cause it to lose anything that was already there.
$ dd if=/dev/urandom of=junk.txt bs=1024 count=1^C
$ ls -l junk.txt
-rw-rw-r-- 1 akyserr akyserr 1024 Dec 11 17:26 junk.txt
$ ./openclose junk.txt
junk.txt was opened
junk.txt was closed
$ ls -l junk.txt
-rw-rw-r-- 1 akyserr akyserr 0 Dec 11 17:27 junk.txt
Block Device File Test
Let's try a similar test on a USB flash drive. We can see that we start out with a single partition on the flash drive. If it gets 'truncated', perhaps the partition will go away (considering it's defined in the first 512 bytes of the disk)?
$ ls -l /dev/sdc*
brw-rw---- 1 root disk 8, 32 Dec 11 17:22 /dev/sdc
brw-rw---- 1 root disk 8, 33 Dec 11 17:22 /dev/sdc1
$ sudo ./openclose /dev/sdc
/dev/sdc was opened
/dev/sdc was closed
$ sudo ./openclose /dev/sdc1
/dev/sdc1 was opened
/dev/sdc1 was closed
$ ls -l /dev/sdc*
brw-rw---- 1 root disk 8, 32 Dec 11 17:31 /dev/sdc
brw-rw---- 1 root disk 8, 33 Dec 11 17:31 /dev/sdc1
It looks like it has no effect whatsoever to open either the disk or the disk's partition 1 with the O_TRUNC option. From what I can tell, the filesystem is still mountable and the files are accessible and intact.
Effects of notrunc / O_TRUNC III
Okay, for my final test I will use dd on my flash drive directly. I will start by writing 512 bytes of random data, then write 256 bytes of zeros at the beginning. Finally, we will verify that the last 256 bytes remained unchanged.
$ sudo dd if=/dev/urandom of=/dev/sdc bs=256 count=2
$ sudo hexdump -n 512 /dev/sdc
0000000 3fb6 d17f 8824 a24d 40a5 2db3 2319 ac5b
0000010 c659 5780 2d04 3c4e f985 053c 4b3d 3eba
0000020 0be9 8105 cec4 d6fb 5825 a8e5 ec58 a38e
0000030 d736 3d47 d8d3 9067 8db8 25fb 44da af0f
0000040 add7 c0f2 fc11 d734 8e26 00c6 cfbb b725
0000050 8ff7 3e79 af97 2676 b9af 1c0d fc34 5eb1
0000060 6ede 318c 6f9f 1fea d200 39fe 4591 2ffb
0000070 0464 9637 ccc5 dfcc 3b0f 5432 cdc3 5d3c
0000080 01a9 7408 a10a c3c4 caba 270c 60d0 d2f7
0000090 2f8d a402 f91a a261 587b 5609 1260 a2fc
00000a0 4205 0076 f08b b41b 4738 aa12 8008 053f
00000b0 26f0 2e08 865e 0e6a c87e fc1c 7ef6 94c6
00000c0 9ced 37cf b2e7 e7ef 1f26 0872 cd72 54a4
00000d0 3e56 e0e1 bd88 f85b 9002 c269 bfaa 64f7
00000e0 08b9 5957 aad6 a76c 5e37 7e8a f5fc d066
00000f0 8f51 e0a1 2d69 0a8e 08a9 0ecf cee5 880c
0000100 3835 ef79 0998 323d 3d4f d76b 8434 6f20
0000110 534c a847 e1e2 778c 776b 19d4 c5f1 28ab
0000120 a7dc 75ea 8a8b 032a c9d4 fa08 268f 95e8
0000130 7ff3 3cd7 0c12 4943 fd23 33f9 fe5a 98d9
0000140 aa6d 3d89 c8b4 abec 187f 5985 8e0f 58d1
0000150 8439 b539 9a45 1c13 68c2 a43c 48d2 3d1e
0000160 02ec 24a5 e016 4c2d 27be 23ee 8eee 958e
0000170 dd48 b5a1 10f1 bf8e 1391 9355 1b61 6ffa
0000180 fd37 7718 aa80 20ff 6634 9213 0be1 f85e
0000190 a77f 4238 e04d 9b64 d231 aee8 90b6 5c7f
00001a0 5088 2a3e 0201 7108 8623 b98a e962 0860
00001b0 c0eb 21b7 53c6 31de f042 ac80 20ee 94dd
00001c0 b86c f50d 55bc 32db 9920 fd74 a21e 911a
00001d0 f7db 82c2 4d16 3786 3e18 2c0f 47c2 ebb0
00001e0 75af 6a8c 2e80 c5b6 e4ea a9bc a494 7d47
00001f0 f493 8b58 0765 44c5 ff01 42a3 b153 d395
$ sudo dd if=/dev/zero of=/dev/sdc bs=256 count=1
$ sudo hexdump -n 512 /dev/sdc
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0000100 3835 ef79 0998 323d 3d4f d76b 8434 6f20
0000110 534c a847 e1e2 778c 776b 19d4 c5f1 28ab
0000120 a7dc 75ea 8a8b 032a c9d4 fa08 268f 95e8
0000130 7ff3 3cd7 0c12 4943 fd23 33f9 fe5a 98d9
0000140 aa6d 3d89 c8b4 abec 187f 5985 8e0f 58d1
0000150 8439 b539 9a45 1c13 68c2 a43c 48d2 3d1e
0000160 02ec 24a5 e016 4c2d 27be 23ee 8eee 958e
0000170 dd48 b5a1 10f1 bf8e 1391 9355 1b61 6ffa
0000180 fd37 7718 aa80 20ff 6634 9213 0be1 f85e
0000190 a77f 4238 e04d 9b64 d231 aee8 90b6 5c7f
00001a0 5088 2a3e 0201 7108 8623 b98a e962 0860
00001b0 c0eb 21b7 53c6 31de f042 ac80 20ee 94dd
00001c0 b86c f50d 55bc 32db 9920 fd74 a21e 911a
00001d0 f7db 82c2 4d16 3786 3e18 2c0f 47c2 ebb0
00001e0 75af 6a8c 2e80 c5b6 e4ea a9bc a494 7d47
00001f0 f493 8b58 0765 44c5 ff01 42a3 b153 d395
Summary
Through the above experimentation, it seems that notrunc is only important when you have a regular file you want to write into without truncating it. It has no effect on a block device such as sda or sdb.