I'm trying to find a line in a yml configuration file and replace the line after it with a specific value. I tried sed, but it seems it is either not replacing or not able to find the pattern. Below is a snippet of that yml file:
applicationConnectors:
- type: http
port: 14080
bindHost: 15.213.48.154
headerCacheSize: 512 bytes
outputBufferSize: 32KiB
maxRequestHeaderSize: 8KiB
maxResponseHeaderSize: 8KiB
inputBufferSize: 8KiB
idleTimeout: 30 seconds
minBufferPoolSize: 64 bytes
bufferPoolIncrement: 1KiB
maxBufferPoolSize: 64KiB
acceptorThreads: 1
selectorThreads: 2
acceptQueueSize: 1024
reuseAddress: true
useServerHeader: false
useDateHeader: true
useForwardedHeaders: true
adminConnectors:
- type: http
port: 14180
I want to change the port value to 14081 for applicationConnectors, as another port already exists for adminConnectors.
After the script execution it should look like:
applicationConnectors:
- type: http
port: 14081
bindHost: 15.213.48.154
headerCacheSize: 512 bytes
outputBufferSize: 32KiB
maxRequestHeaderSize: 8KiB
maxResponseHeaderSize: 8KiB
inputBufferSize: 8KiB
idleTimeout: 30 seconds
minBufferPoolSize: 64 bytes
bufferPoolIncrement: 1KiB
maxBufferPoolSize: 64KiB
acceptorThreads: 1
selectorThreads: 2
acceptQueueSize: 1024
reuseAddress: true
useServerHeader: false
useDateHeader: true
useForwardedHeaders: true
adminConnectors:
- type: http
port: 14180
I have tried the below code:
var1="14081"
var2="port"
sed '/applicationConnectors:/{n;s/\($var2\).*\$/\1${var1}/}' configuration.yml > newfile
mv newfile configuration.yml
but it seems this code is not replacing anything.
sed is best for s/old/new, that is all. For anything else just use awk for clarity, portability, robustness, etc. Look:
$ awk -v rec='applicationConnectors' -v tag='port' -v val='14081' '
/^ [^ ]/{name=$1} name==(rec":") && $1==(tag":"){sub(/[^ ]+$/,""); $0=$0 val}
1' file
applicationConnectors:
- type: http
port: 14081
bindHost: 15.213.48.154
headerCacheSize: 512 bytes
outputBufferSize: 32KiB
maxRequestHeaderSize: 8KiB
maxResponseHeaderSize: 8KiB
inputBufferSize: 8KiB
idleTimeout: 30 seconds
minBufferPoolSize: 64 bytes
bufferPoolIncrement: 1KiB
maxBufferPoolSize: 64KiB
acceptorThreads: 1
selectorThreads: 2
acceptQueueSize: 1024
reuseAddress: true
useServerHeader: false
useDateHeader: true
useForwardedHeaders: true
adminConnectors:
- type: http
port: 14180
Want to change acceptQueueSize: to 17 instead? It's the same script with just different variable values:
$ awk -v rec='applicationConnectors' -v tag='acceptQueueSize' -v val='17' '
/^ [^ ]/{name=$1} name==(rec":") && $1==(tag":"){sub(/[^ ]+$/,""); $0=$0 val}
1' file
applicationConnectors:
- type: http
port: 14080
bindHost: 15.213.48.154
headerCacheSize: 512 bytes
outputBufferSize: 32KiB
maxRequestHeaderSize: 8KiB
maxResponseHeaderSize: 8KiB
inputBufferSize: 8KiB
idleTimeout: 30 seconds
minBufferPoolSize: 64 bytes
bufferPoolIncrement: 1KiB
maxBufferPoolSize: 64KiB
acceptorThreads: 1
selectorThreads: 2
acceptQueueSize: 17
reuseAddress: true
useServerHeader: false
useDateHeader: true
useForwardedHeaders: true
adminConnectors:
- type: http
port: 14180
Only try that with your currently accepted sed solution if you enjoy counting n commands :-). Note also that this will work no matter what order the lines appear in within each record, since it keys off the name (port) rather than assuming it will appear some specific number of lines after applicationConnectors:. Finally, this will work even if the strings you're searching for or replacing with contain RE metachars (e.g. .), backreference chars (e.g. \1 or &), or sed delimiters (e.g. /).
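If you need to make this kind of edit repeatedly, the one-liner drops naturally into a small wrapper script. A sketch only; the script name, argument order and temp-file dance below are illustration, not part of the answer above:

#!/bin/sh
# set_yaml_value.sh <record> <tag> <value> <file>  (hypothetical wrapper around the awk above)
rec=$1 tag=$2 val=$3 file=$4
awk -v rec="$rec" -v tag="$tag" -v val="$val" '
/^ [^ ]/{name=$1} name==(rec":") && $1==(tag":"){sub(/[^ ]+$/,""); $0=$0 val}
1' "$file" > "$file.tmp" && mv "$file.tmp" "$file"

Usage would then be, for example, ./set_yaml_value.sh applicationConnectors port 14081 configuration.yml, mirroring the redirect-and-mv flow from the question.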
Since the port line is the second one after applicationConnectors:, you need a double n;, and you should use double quotation marks around the sed command to allow variable interpolation inside it:
sed "/applicationConnectors:/{n;n;s/\($var2\).*/\1: ${var1}/}" configuration.yml > newfile
staging:
datasource:
jdbcUrl: xxx
driverclassname: yyy
username: zzzz
password: dddd
platform: wwww
The sed command to replace the value of jdbcUrl:
sed -i "/staging:/{n;n;s/\(jdbcUrl\).*/\1: AAAAA/}" application.yml
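If you would rather edit the original configuration.yml in place than redirect to newfile and mv it back, the -i option shown above works there too; a small sketch using the variables from the question, with -i.bak keeping a backup copy (GNU sed):

var1="14081"
var2="port"
sed -i.bak "/applicationConnectors:/{n;n;s/\($var2\).*/\1: ${var1}/}" configuration.yml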
Hi Wonderful People/My Gurus and all kind-hearted people.
I have a fixed-width file and I'm trying to find the length of the rows that contain a given number of bytes. I tried a couple of awk commands, but they are not giving me the result that I wanted. My fixed-width records should contain 208 bytes, but there are a few rows that don't. I'm trying to discover the records that don't have 208 bytes.
This command gave me the length of the first line:
awk '{print length;exit}' file.text
Here I tried to print the rows that contain 101 bytes, but it didn't work:
awk '{print length==101}' file.text
Any help/insights here would be highly helpful
With awk:
awk 'length() < 208' file
Well, length() gives you the number of characters, not bytes, and the two can differ in a Unicode locale. You can use the LANG environment variable to force awk to count bytes:
LANG=C awk 'length() < 208' file
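If you also want to see which rows are off and by how much, the same idea extends to printing the record number next to the length; a small sketch (keep the LANG=C prefix when you need byte counts rather than characters):

LANG=C awk 'length() != 208 {print FNR": "length()}' file.text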
Perl to the rescue!
perl -lne 'print "$.:", length if length != 208' -- file.text
-n reads the input line by line
-l removes newlines from the input before processing it and adds them to print
The one-liner will print line number ($.) and the length of the line for each line whose length is different than 208.
If you're using gawk, then it's no issue, even in the typical UTF-8 locale mode:
length(s)         = # of chars native to the locale
                    (typically that means # of utf-8 chars)
match(s, /$/) - 1 = # of raw bytes (this also works for pure-binary
                    inputs, without triggering any error messages
                    in gawk Unicode mode)
Best illustrated by example :
0000000 3347498554 3381184647 3182945161 171608122
: Ɔ ** LJ ** Ȉ ** ɉ ** 㷽 ** ** : 210 : \n
072 306 206 307 207 310 210 311 211 343 267 275 072 210 072 012
: ? 86 ? 87 ? 88 ? 89 ? ? ? : 88 : nl
58 198 134 199 135 200 136 201 137 227 183 189 58 136 58 10
3a c6 86 c7 87 c8 88 c9 89 e3 b7 bd 3a 88 3a 0a
0000020
# gawk profile, created Sat Oct 29 20:32:49 2022
BEGIN {
1 __ = "\306\206\307\207\310" (_="\210") \
"\311\211\343\267\275"
1 print "",__,_
1 STDERR = "/dev/stderr"
1 print ( match(_, /$/) - 1, "_" ) > STDERR # *A
1 print ( length(__), match(__, /$/) - 1 ) > STDERR # *B
1 print ( (__~_), match(__, (_) ".*") ) > STDERR # *C
1 print ( RSTART, RLENGTH ) > STDERR # *D
}
1 | _ *A # of bytes off "_" because it was defined as 0x88 \210
5 | 11 *B # of chars of "__", and
# of bytes of it :
# 4 x 2-byte UC
# + 1 x 3-byte UC = 11
1 | 3 *C # does byte \210 exist among larger string (true/1),
# and which unicode character is 1st to
# contain \210 - the 3rd one, by original definition
3 | 3 *D # notice I also added a ".*" to the tail of this match() :
# if the left-side string being tested is valid UTF-8,
# then this will match all the way to the end of string,
# inclusive, in which you can deduce :
#
# "\210 first appeared in 3rd-to-last utf-8 character"
Combining that inferred understanding (RLENGTH = "3 chars to the end, inclusive") with knowledge of how many characters sit to its left (RSTART - 1 = "2 chars before") yields a total count of 3 + 2 = 5, affirming length()'s result.
I have a file test1.log:
04/15/2016 02:22:46 PM - kneaddata.knead_data - INFO: Running kneaddata v0.5.1
04/15/2016 02:22:46 PM - kneaddata.utilities - INFO: Decompressing gzipped file ...
Input Reads: 69766650 Surviving: 55798391 (79.98%) Dropped: 13968259 (20.02%)
TrimmomaticSE: Completed successfully
04/15/2016 02:32:04 PM - kneaddata.utilities - DEBUG: Checking output file from Trimmomatic : /home/liaoming/kneaddata_v0.5.1/WGC066610D/WGC066610D_kneaddata.trimmed.fastq
04/15/2016 05:32:31 PM - kneaddata.utilities - DEBUG: 55798391 reads; of these:
55798391 (100.00%) were unpaired; of these:
55775635 (99.96%) aligned 0 times
17313 (0.03%) aligned exactly 1 time
5443 (0.01%) aligned >1 times
0.04% overall alignment rate
and other files in the same format but with different contents, like test2.log, test3.log, up to test60.log.
I would like to extract two numbers from each of these files. For test1.log, for example, the two numbers would be 55798391 and 55775635.
So the final generated file counts.txt would be something like this:
test1 55798391 55775635
test2 51000000 40000000
.....
test60 5000000 30000000
awk to the rescue!
$ awk 'FNR==9{f=$1} FNR==10{print FILENAME,f,$1}' test{1..60}.log
If the files are not all in the same directory, either call awk within a loop or create the file list and pipe it to xargs awk:
$ for i in {1..60}; do awk ... test$i/test$i.log; done
$ for i in {1..60}; do echo test$i/test$i.log; done | xargs awk ...
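If the interesting numbers do not always land on lines 9 and 10, a variant keyed on the wording of the lines avoids hard-coding FNR. A sketch, assuming the phrases "reads; of these:" and "aligned 0 times" are stable across your logs and that you want the bare test1 prefix rather than the full file name:

awk '
/reads; of these:/ { for (i = 1; i < NF; i++) if ($(i+1) == "reads;") n1 = $i }
/aligned 0 times/  {
    base = FILENAME
    sub(/.*\//, "", base)       # strip any leading directory, e.g. test1/test1.log
    sub(/\.log$/, "", base)     # strip the .log extension, leaving test1
    print base, n1, $1
}
' test{1..60}.log > counts.txt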
I am alright at writing Linux scripts but could use some advice. I know the problem is sort of vague, so if you can provide any help whatsoever I will appreciate it!
The following issue is for personal growth, and because I am writing some network tools for fun/learning. No homework involved (I'm a senior in college, none of my classes require this stuff!)
I am using tshark to get information about packet captures. This is what it looks like:
rachel@Ubuntu-1:~/PCAP$ tshark -r LargeTorrent.pcap -q -z io,phs
===================================================================
Protocol Hierarchy Statistics
Filter:
eth frames:4309 bytes:3984321
ip frames:4119 bytes:3969006
icmp frames:1316 bytes:1308988
udp frames:1408 bytes:1350786
data frames:1368 bytes:1346228
dns frames:16 bytes:1176
nbns frames:14 bytes:1300
http frames:8 bytes:1596
nbdgm frames:2 bytes:486
smb frames:2 bytes:486
mailslot frames:2 bytes:486
browser frames:2 bytes:486
tcp frames:1395 bytes:1309232
data frames:1300 bytes:1294800
http frames:6 bytes:3763
data-text-lines frames:2 bytes:324
xml frames:2 bytes:3205
tcp.segments frames:1 bytes:787
nbss frames:34 bytes:5863
smb frames:17 bytes:3047
pipe frames:4 bytes:686
lanman frames:4 bytes:686
smb2 frames:13 bytes:2444
bittorrent frames:10 bytes:1709
tcp.segments frames:2 bytes:433
bittorrent frames:2 bytes:433
bittorrent frames:1 bytes:258
bittorrent frames:2 bytes:221
bittorrent frames:2 bytes:221
arp frames:146 bytes:8760
ipv6 frames:44 bytes:6555
udp frames:40 bytes:6211
dns frames:18 bytes:1711
dhcpv6 frames:14 bytes:2114
http frames:6 bytes:1014
data frames:2 bytes:1372
icmpv6 frames:4 bytes:344
===================================================================
What I would like it to look like:
rachel@Ubuntu-1:~/PCAP$ tshark -r LargeTorrent.pcap -q -z io,phs
===================================================================
Protocol Hierarchy Statistics
Filter:
Protocol Bytes
=====================================
eth 3984321
ip 3969006
icmp 1308988
udp 1350786
data 1346228
dns 1176
nbns 1300
http 1596
nbdgm 486
smb 486
mailslot 486
browser 486
tcp 1309232
data 1294800
http 3763
data-text-lines 324
xml 3205
tcp.segments 787
nbss 5863
smb 3047
pipe 686
lanman 686
smb2 2444
bittorrent 1709
tcp.segments 433
bittorrent 433
bittorrent 258
bittorrent 221
bittorrent 221
arp 8760
ipv6 6555
udp 6211
dns 1711
dhcpv6 2114
http 1014
data 1372
icmpv6 344
===================================================================
Edit: I am going to add the original question for the purpose of making sense of the (great) answer that was provided.
Originally, I wanted to only print statistics for "leaves" because eth, ip, etc. are all parents and their statistics are not necessary for my purposes. In addition, instead of having a god-awful block of text with only spaces to show hierarchy, I wanted to erase all the statistics for parents, and show them as breadcrumbs behind the child.
Example:
eth frames:4309 bytes:3984321
ip frames:4119 bytes:3969006
icmp frames:1316 bytes:1308988
udp frames:1408 bytes:1350786
data frames:1368 bytes:1346228
dns frames:16 bytes:1176
Should become
eth:ip:icmp - 1308988 bytes
eth:ip:udp:data - 1346228 bytes
eth:ip:udp:dns - 1176 bytes
To preserve the hierarchy and avoid printing useless statistics.
Anyway, the accepted answer by Etan solved this perfectly! And for those of you who are on my level and unsure how to proceed after this answer, this will help you finish up:
Save the given script as a filename.awk file
Save the block of text you want to manipulate as a filename.txt file
Call awk -f filename.awk filename.txt
Optionally redirect the output to a file (awk -f filename.awk filename.txt >> output.txt)
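Alternatively (a minor convenience, assuming awk lives at /usr/bin/awk on your system), put #!/usr/bin/awk -f as the very first line of filename.awk and make it executable, so it can be run directly:

chmod +x filename.awk
./filename.awk filename.txt > output.txt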
The output I originally thought you wanted could be achieved with this awk script. (I think this can probably be done cleaner but this seems to work well enough.)
function entry() {
    # Don't want to print empty entries.
    if (ind[0]) {
        printf "%s", ind[0]
        for (i = 1; i <= ls; i++) {
            printf ":%s", ind[i]
        }
        split(b, a, /:/)
        printf " - %s %s\n", a[2], a[1]
    }
}
# Found our data marker. Note that and print the current line.
$1 == "Filter:" {d=1; print; next}
# Print lines until we see our data marker.
!d {print; next}
# Print empty lines.
!NF {print; next}
# Save our trailing line for later.
/===/ {suf=$0; next}
{
    # Save our previous indentation level.
    ls = s
    # Find our new indentation level (by where the first field starts).
    s = (match($0, /[^[:space:]]/)-1) / 2
    # If the current line is at or below the last indent level print the last line.
    if (s <= ls) {
        entry()
    }
    # Save the current line's byte count.
    b=$NF
    # Save the current line's field name.
    ind[s] = $1
}
END {
    # Print a final line if we had one.
    entry()
    # Print the suffix line if we have one.
    if (suf) {
        print suf
    }
}
Which, on the sample input, gets you this output.
===================================================================
Protocol Hierarchy Statistics
Filter:
eth:ip:icmp - 1308988 bytes
eth:ip:udp:data - 1346228 bytes
eth:ip:udp:dns - 1176 bytes
eth:ip:udp:nbns - 1300 bytes
eth:ip:udp:http - 1596 bytes
eth:ip:udp:nbdgm:smb:mailslot:browser - 486 bytes
eth:ip:tcp:data - 1294800 bytes
eth:ip:tcp:http:data-text-lines - 324 bytes
eth:ip:tcp:http:xml:tcp.segments - 787 bytes
eth:ip:tcp:nbss:smb:pipe:lanman - 686 bytes
eth:ip:tcp:nbss:smb2 - 2444 bytes
eth:ip:tcp:bittorrent:tcp.segments:bittorrent:bittorrent - 258 bytes
eth:ip:tcp:bittorrent:bittorrent:bittorrent - 221 bytes
eth:arp - 8760 bytes
eth:ipv6:udp:dns - 1711 bytes
eth:ipv6:udp:dhcpv6 - 2114 bytes
eth:ipv6:udp:http - 1014 bytes
eth:ipv6:udp:data - 1372 bytes
eth:ipv6:icmpv6:data - 344 bytes
===================================================================
The output you edited your question to ask for is probably more easily handled with sed, though.
/Filter:/a \
Protocol Bytes \
=====================================
s/frames:[^ ]*//
s/ b/b/
s/bytes:\([^ ]*\)/\1/
Which ends up with this output:
===================================================================
Protocol Hierarchy Statistics
Filter:
Protocol Bytes
=====================================
eth 3984321
ip 3969006
icmp 1308988
udp 1350786
data 1346228
dns 1176
nbns 1300
http 1596
nbdgm 486
smb 486
mailslot 486
browser 486
tcp 1309232
data 1294800
http 3763
data-text-lines 324
xml 3205
tcp.segments 787
nbss 5863
smb 3047
pipe 686
lanman 686
smb2 2444
bittorrent 1709
tcp.segments 433
bittorrent 433
bittorrent 258
bittorrent 221
bittorrent 221
arp 8760
ipv6 6555
udp 6211
dns 1711
dhcpv6 2114
http 1014
data 1372
icmpv6 344
===================================================================
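As with the awk script earlier, those sed commands can be saved to a file and run with sed -f; a small usage sketch (phs.sed and capture.txt are just placeholder names for the script and the saved tshark output):

sed -f phs.sed capture.txt > output.txt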
A simple script with sed will work as well.
$ printf "\n==========================================================\n"; printf "Protocol Hierarchy Statistics\nFilter:\n\n";printf "\nProtocol\t\t\t\t Bytes\n================================================\n" && sed -e 's/\(frames[:].*bytes[:]\)\(.*$\)/\2/' dat/tshark.txt | tail -n+4 | head -n-1 && printf "================================================\n"
broken down into script form (where dat/tshark.txt is the filename holding the tshark output):
printf "\n==========================================================\n"
printf "Protocol Hierarchy Statistics\nFilter:\n\n"
printf "\nProtocol\t\t\t\t Bytes\n================================================\n"
sed -e 's/\(frames[:].*bytes[:]\)\(.*$\)/\2/' dat/tshark.txt | tail -n+4 | head -n-1
printf "================================================\n"
Output
==========================================================
Protocol Hierarchy Statistics
Filter:
Protocol Bytes
================================================
eth 3984321
ip 3969006
icmp 1308988
udp 1350786
data 1346228
dns 1176
nbns 1300
http 1596
nbdgm 486
smb 486
mailslot 486
browser 486
tcp 1309232
data 1294800
http 3763
data-text-lines 324
xml 3205
tcp.segments 787
nbss 5863
smb 3047
pipe 686
lanman 686
smb2 2444
bittorrent 1709
tcp.segments 433
bittorrent 433
bittorrent 258
bittorrent 221
bittorrent 221
arp 8760
ipv6 6555
udp 6211
dns 1711
dhcpv6 2114
http 1014
data 1372
icmpv6 344
================================================
Formatting
Following on from your comment about how to align the byte info given the variable length of the protocol tags, you can make use of printf to format the output as you have indicated. Like Etan, I started working on your original question that had the tags consolidated. My initial approach was to read the different levels into different associative arrays that could be combined into what you initially specified. Doing so, I had to line the output up using printf. Here is my first attempt, working with the first four levels of your tshark data:
declare -i ln=0
declare -A l1 l2 l3 l4
## read each line in file and assign to associative arrays for each level
while read -r line; do
    ln=${#line} # base level on length of line read
    [ $ln -gt 66 ] && continue;
    [ $ln -eq 66 ] && { iface="${line%% *}"; l1[${iface}]="${line##* }"; }
    [ $ln -eq 64 ] && { proto="${iface}:${line%% *}"; l2[${proto}]="${line##* }"; }
    [ $ln -eq 62 ] && { ptype="${proto}:${line%% *}"; l3[${ptype}]="${line##* }"; }
    [ $ln -le 60 ] && { data="${ptype}:${line%% *}"; l4[${data}]="${line##* }"; }
done < "$1"
## output a summary of the file
printf "\n4-level deep summary of file '%s':\n\n" "$1"
for i in "${!l1[@]}"; do
    for j in "${!l2[@]}"; do
        printf " %-32s %s\n" "$j" "${l2[$j]}"
        for k in "${!l3[@]}"; do
            printf " %-32s %s\n" "$k" "${l3[$k]}"
            for l in "${!l4[@]}"; do
                [ "${l%:*}" == "$k" ] && printf " %-32s %s\n" "$l" "${l4[$l]}"
            done
        done
    done
done
The output it produced was for example:
eth:ip frames:4119 bytes:3969006
eth:ip:udp frames:1408 bytes:1350786
eth:ip:udp:data frames:1368 bytes:1346228
eth:ip:udp:nbdgm frames:2 bytes:486
eth:ip:udp:nbns frames:14 bytes:1300
You can look at the various printf statements in the code above and see how the alignment is handled. Let me know if you have further questions.
I'm a little surprised that tshark doesn't have a JSON or machine-readable way to get the -z io,phs info, when it has so many ways to extract packet info.
I tried playing with some of the above, but bash seems to have changed over the years (or has different defaults depending on the environment). I am also not sure which shell or version of it was used to produce the above.
The line lengths/output from tshark have also changed: My debugging showed different line lengths, so the trick above using line lengths, e.g. [ $ln -gt 66 ] didn't work for me.
It seems that read -r strips leading/trailing whitespace. If you actually want to keep it, you need IFS= to make it give you the spaces:
## read each line in file
while IFS= read -r line ; do
...
done
The "nested levels" associative arrays approach is clever (it shows what rabbit holes you can go down with bash), but hard to work with: when iterating through it, bash produces the keys in "hash" order, not the order they were added.
Since I actually needed the data in the rest of my script, the nested arrays made it particularly fiddly to deal with. They're fine for printf purposes where you just print the line, but what if you actually want to get the frame count for each item and then do something with it?
Here was my attempt that simplified it a bit. I implemented it as a bash function which gets a few other bits of info from the sample file:
TSHARK=/usr/bin/tshark
CAPINFOS=/usr/bin/capinfos
declare -A fcount
declare -A bcount
declare -A capinfo
function loadcapinfo
{
    local sample=$1
    local statstofile=$2
    local bytes
    local frames
    local key
    if [ ! -f "$sample" ] ; then
        echo "FATAL: loadcapinfo: file does not exist: $sample"
        exit 1
    fi
    capinfo[start_time_epoch]=$($CAPINFOS -Tr -Sa $sample | cut -f2)
    capinfo[start_time]=$($CAPINFOS -Tr -a $sample | cut -f2)
    capinfo[end_time_epoch]=$($CAPINFOS -Tr -Se $sample | cut -f2)
    capinfo[end_time]=$($CAPINFOS -Tr -e $sample | cut -f2)
    capinfo[size]=$($CAPINFOS -Tr -s $sample | cut -f2)
    declare -i ln=0
    while IFS= read -r line ; do
        ln=${#line} # base level on length of line read
        [ $ln -le 1 ] && continue;
        pat=".*frames:([0-9]+)\s+bytes:([0-9]+)"
        pat_1="^(\w+)"
        pat_2="^\s{2}(\w+)"
        pat_3="^\s{4}(\w+)"
        pat_4="^\s{6}(\w+)"
        ethertype="ethertype"
        [[ $line =~ $pat ]] && { frames=${BASH_REMATCH[1]}; bytes=${BASH_REMATCH[2]}; } || continue;
        [[ $line =~ $pat_1 ]] && { encap="${BASH_REMATCH[1]}:${ethertype}"; key="${encap}"; }
        [[ $line =~ $pat_2 ]] && { proto=${BASH_REMATCH[1]}; key="${encap}:${proto}"; }
        [[ $line =~ $pat_3 ]] && { ptype=${BASH_REMATCH[1]}; key="${encap}:${proto}:${ptype}"; }
        [[ $line =~ $pat_4 ]] && { data=${BASH_REMATCH[1]}; key="${encap}:${proto}:${ptype}:${data}"; }
        [ "$proto" = "llc" ] && { key=${key/eth:ethertype:llc/eth:llc} ; }
        fcount[${key}]=${frames:=0}
        bcount[${key}]=${bytes:=0}
        if [ -n "$statstofile" ] ; then
            echo "${capinfo[start_time_epoch]},${key},${frames},${bytes}" >> $statstofile
        fi
    done < <($TSHARK -qr $sample -z io,phs)
    unset fcount[0]
}
Now, after this in the script, we can do:
loadcapinfo /my/sample/file.pcap /tmp/stats.txt
The second argument is optional and writes the counts to a file, here /tmp/stats.txt.
This uses one associative array for each count, and puts other info into capinfo so now we can do things like:
echo "IPv4 Packet Count is: ${fcount[eth:ethertype:ip]}"
echo "IPv6 Packet Count is: ${fcount[eth:ethertype:ipv6]}"
echo "ARP Count is: ${fcount[eth:ethertype:arp]}"
echo "STP Count is: ${fcount[eth:llc:stp]}"
echo "Start time: ${capinfo[start_time]}"
echo "End time: ${capinfo[end_time]}"
echo "File size: ${capinfo[size]}"
I made the keys match Wireshark's frame.protocols field, which inserts a "pseudo protocol" called "ethertype" into most entries. This way, if you want to iterate through the associative array to find the packet(s) in the pcap file, you can use the keys to find packets with a given protocol:
tshark -r /my/sample/file.pcap -Y "frame.protocols == eth:ethertype:ip:udp:snmp" -Tfields -e frame.number -e eth.src_resolved -e eth.dst_resolved -e ip.src -e ip.dst -e frame.protocols
for i in "${!fcount[@]}"; do
    tshark -r /my/sample/file.pcap -Y "frame.protocols == $i" -Tfields -e frame.number -e eth.src_resolved -e eth.dst_resolved -e ip.src -e ip.dst -e frame.protocols > /tmp/$i.txt
done
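Since fcount and bcount are ordinary associative arrays, they can also be dumped as a quick summary table once loadcapinfo has run; a sketch (the column widths are arbitrary):

for key in "${!fcount[@]}"; do
    printf '%-55s %10s %12s\n' "$key" "${fcount[$key]}" "${bcount[$key]}"
done | sort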
When I run pactl list I get a lot of information. Out of all that information, I am trying to get only the part starting with Sink #0, up to the end of that section.
1) The information:
Sink #0
State: SUSPENDED
Name: auto_null
Description: Dummy Output
Driver: module-null-sink.c
Sample Specification: s16le 2ch 44100Hz
Channel Map: front-left,front-right
Owner Module: 14
Mute: no
Volume: 0: 0% 1: 0%
0: -inf dB 1: -inf dB
balance 0.00
Base Volume: 100%
0.00 dB
Monitor Source: auto_null.monitor
Latency: 0 usec, configured 0 usec
Flags: DECIBEL_VOLUME LATENCY
Properties:
device.description = "Dummy Output"
device.class = "abstract"
device.icon_name = "audio-card"
Source #0
State: SUSPENDED
Name: auto_null.monitor
Description: Monitor of Dummy Output
Driver: module-null-sink.c
Sample Specification: s16le 2ch 44100Hz
Channel Map: front-left,front-right
Owner Module: 14
Mute: no
Volume: 0: 80% 1: 80%
0: -5.81 dB 1: -5.81 dB
balance 0.00
Base Volume: 100%
0.00 dB
Monitor of Sink: auto_null
Latency: 0 usec, configured 0 usec
Flags: DECIBEL_VOLUME LATENCY
Properties:
device.description = "Monitor of Dummy Output"
device.class = "monitor"
device.icon_name = "audio-input-microphone"
2) What I am trying:
#!/bin/bash
command=$(pactl list);
# just get Sink #0 section not one line
Part1=$(grep "Sink #0" $command);
for i in $Part1
do
    # show only Sink #0 lines
    echo $i;
done
3) The output is very strange:
grep: dB: No such file or directory
How can I get that section using my bash script? Is there another, better way to do this kind of filtering?
Follow-up: I was also trying to keep it simple, such as:
pactl list | grep Volume | head -n1 | cut -d' ' -f2- | tr -d ' '
# pactl list      - the command
# grep Volume     - the target to list
# head -n1        - get/show 1 row
# cut -d' ' -f2-  - cut empty
# tr -d ' '       - don't know..
You can use several features of the sed editor to achieve your goal.
sed -n '/^Sink/,/^$/p' pactl_Output.txt
-n says "don't perform the standard action of printing each line of input".
/^Sink/,/^$/ is a regular-expression range that says: find a line that begins with Sink, then keep looking at lines until you find an empty line (/^$/).
The final character, p, says print what you have matched.
If there are spaces or tabs on the "empty" line, use " ...,/^[${spaceChar}${tabChar}]*\$/p" instead. Note the change from single quoting to double quoting, which allows the variables ${spaceChar} and ${tabChar} to be expanded to their real values; you may also need to escape the closing $. You'll need to define spaceChar and tabChar before you use them, like spaceChar=" ". There is no way here on S.O. for you to see the tab character, and not all seds support the \t version, so it's your choice whether to press the Tab key or use \t; I would go with the Tab key as it is more portable.
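A minimal sketch of that whitespace-tolerant version, with the two helper variables defined first (the tab is produced with printf so it survives copy/paste):

spaceChar=" "
tabChar="$(printf '\t')"
sed -n "/^Sink/,/^[${spaceChar}${tabChar}]*\$/p" pactl_Output.txt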
While it is probably possible to accomplish your goal with bash, sed was designed for this sort of problem.
I hope this helps.
Try:
Part1=`echo $command | grep "Sink #0"`
instead of
Part1=$(grep "Sink #0" $command);
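For completeness, a sketch combining the grep idea with the sed range from the other answer: capture the pactl output once, then pull the Sink #0 block out of the variable (the double quotes around "$command" preserve the newlines that an unquoted echo $command would lose):

command=$(pactl list)
Part1=$(printf '%s\n' "$command" | sed -n '/^Sink #0/,/^$/p')
printf '%s\n' "$Part1"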
We have a very simple TCP messaging script that cats some text to a server port, which returns and displays a response.
The part of the script we care about looks something like this:
cat someFile | netcat somehost 1234
The response the server returns is 'complete' once we get a certain character code (specifically 0x1C) returned.
How can I close the connection when I receive this special character?
(Note: The server won't close the connection for me. While I currently just CTRL+C the script when I can tell it's done, I wish to be able to send many of these messages, one after the other.)
(Note: netcat -w x isn't good enough because I wish to push these messages through as fast as possible)
Create a bash script called client.sh:
#!/bin/bash
cat someFile
while read FOO; do
    echo $FOO >&3
    if [[ $FOO =~ `printf ".*\x00\x1c.*"` ]]; then
        break
    fi
done
Then invoke netcat from your main script like so:
3>&1 nc -c ./client.sh somehost 1234
(You'll need bash version 3 for the regexp matching).
This assumes that the server is sending data in lines - if not you'll have to tweak client.sh so that it reads and echoes a character at a time.
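For the character-at-a-time tweak mentioned above, the read loop could look roughly like this (a sketch, untested against a real server; it keeps the fd 3 redirection from the invocation above and stops on the 0x1c byte):

while IFS= read -r -d '' -n 1 c; do
    printf '%s' "$c" >&3
    if [[ $c == $'\x1c' ]]; then
        break
    fi
done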
How about this?
Client side:
awk -v RS=$'\x1c' 'NR==1;{exit 0;}' < /dev/tcp/host-ip/port
Testing:
# server side test script
while true; do ascii -hd; done | { netcat -l 12345; echo closed...;}
# Generate 'some' data for testing & pipe to netcat.
# After netcat connection closes, echo will print 'closed...'
# Client side:
awk -v RS=J 'NR==1; {exit;}' < /dev/tcp/localhost/12345
# Changed end character to 'J' for testing.
# Didn't wish to write a server side script to generate 0x1C.
Client side produces:
0 NUL 16 DLE 32 48 0 64 # 80 P 96 ` 112 p
1 SOH 17 DC1 33 ! 49 1 65 A 81 Q 97 a 113 q
2 STX 18 DC2 34 " 50 2 66 B 82 R 98 b 114 r
3 ETX 19 DC3 35 # 51 3 67 C 83 S 99 c 115 s
4 EOT 20 DC4 36 $ 52 4 68 D 84 T 100 d 116 t
5 ENQ 21 NAK 37 % 53 5 69 E 85 U 101 e 117 u
6 ACK 22 SYN 38 & 54 6 70 F 86 V 102 f 118 v
7 BEL 23 ETB 39 ' 55 7 71 G 87 W 103 g 119 w
8 BS 24 CAN 40 ( 56 8 72 H 88 X 104 h 120 x
9 HT 25 EM 41 ) 57 9 73 I 89 Y 105 i 121 y
10 LF 26 SUB 42 * 58 : 74
After 'J' appears, server side closes & prints 'closed...', ensuring that the connection has indeed closed.
Try:
(cat somefile; sleep $timeout) | nc somehost 1234 | sed -e '{s/\x01.*//;T skip;q;:skip}'
This requires GNU sed.
How it works:
{
    s/\x01.*//; # search for \x01, if we find it, kill it and the rest of the line
    T skip;     # goto label skip if the last s/// failed
    q;          # quit, printing current pattern buffer
    :skip       # label skip
}
Note that this assumes there'll be a newline after \x01 - sed won't see it otherwise, as sed operates line-by-line.
Maybe have a look at Ncat as well:
"Ncat is the culmination of many key features from various Netcat incarnations such as Netcat 1.x, Netcat6, SOcat, Cryptcat, GNU Netcat, etc. Ncat has a host of new features such as "Connection Brokering", TCP/UDP Redirection, SOCKS4 client and server support, ability to "Chain" Ncat processes, HTTP CONNECT proxying (and proxy chaining), SSL connect/listen support, IP address/connection filtering, plus much more."
http://nmap-ncat.sourceforge.net
This worked best for me. Just read the output with a while loop and then check for "0x1c" using an if statement.
while read i; do
    if [ "$i" = "0x1c" ] ; then # Read until "0x1c". Then exit
        break
    fi
    echo $i;
done < <(cat someFile | netcat somehost 1234)