Wireshark unable to open btsnoop file - bluetooth

I followed the instructions in the answer to Bluetooth HCI snoop log not generated to create a btsnoop file using btsnooz.py from my Android bugreport file. When I opened the resulting btsnoop.log file in Wireshark I got the error The file "btsnoop.log" isn't a capture file in a format Wireshark understands.
The adb bugreport was done against an S7 Edge running Android 8.0.0.
A copy of the btsnoop.log file can be found here https://drive.google.com/file/d/1Y3544DrhPbI9YxktL6rSWkpAe-YeoPn4/view?usp=sharing
How can I analyze this file?

While this doesn't exactly answer the question, perhaps it will help find the answer (I'm not allowed to comment yet). Your trace starts with FFFE6200740073006E006F006F007000: FFFE looks like a UTF-16LE BOM, followed by "btsnoop" interleaved with zero bytes. I got exactly the same thing when running btsnooz.py under the Python 2.7 that had been installed on my Windows 10 x64 laptop in the past, like this:
C:\python27-x64\python.exe btsnooz.py bugreport-2021-01-09-22-52-09.txt >bugreport.pcap
I then tried running it directly like this instead, hoping that Windows 10 would pick up some built-in Python of its own:
.\btsnooz.py bugreport-2021-01-09-22-52-09.txt >bugreport.pcap
and got a file half the size that starts with "btsnoop", as valid ones do. Although I'm having a different issue with it (Wireshark can only decode the first few packets), it looks like a step forward, as method #1 of running btsnooz.py above appears to corrupt the output file somehow.
Here's what btsnooz.py should write at the beginning of the file, as follows from its code, so anything else there means the script executed incorrectly:
sys.stdout.write('btsnoop\x00\x00\x00\x00\x01\x00\x00\x03\xea')
HTH...
Update: Apparently the btsnooz.py script is intended for use on Linux: I just verified it on Ubuntu with Python 2.7 and it produces correct traces that Wireshark parses without any such errors. When used on Windows it writes 0D0A in place of every 0A (CRLF line-ending translation on text-mode stdout), causing the Wireshark parsing errors described above. Replacing those 0D0A sequences back with 0A in a hex editor fixed the errors in my case.
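The corruption can also be undone programmatically instead of in a hex editor. A minimal sketch, assuming the only 0x0D bytes in the file are the ones Windows inserted before 0x0A (true for my trace, but not guaranteed for arbitrary binary data):

```python
# Repair btsnooz.py output whose 0x0A bytes were expanded to 0x0D 0x0A by
# Windows text-mode stdout. Caution: if the original binary legitimately
# contained a 0x0D 0x0A pair, this collapses it as well.
def fix_crlf(data: bytes) -> bytes:
    return data.replace(b"\r\n", b"\n")

# Example: a record byte 0x0A that Windows expanded to 0x0D 0x0A
print(fix_crlf(b"\x01\r\n\x02"))  # b'\x01\n\x02'
```

In practice you would read the corrupted file in binary mode ("rb"), apply fix_crlf, and write the result back in binary mode ("wb"), so that no further line-ending translation happens.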

I've just run into the same issue, and have managed to modify btsnooz.py to work correctly with Python 3 on Windows (and I expect Linux etc too, as I didn't do anything specifically with line endings, just encodings etc to match Python 2 behaviour). I've also made it output to a file called "btsnoop.log" in the current directory (overwriting any that might already exist) rather than trying to write to stdout.
I have included my modified code below - just save it as "btsnooz.py" and run python btsnooz.py <name of bugreport file>.txt, using Python 3
#!/usr/bin/env python
"""
This script extracts btsnooz content from bugreports and generates
a valid btsnoop log file which can be viewed using standard tools
like Wireshark.

btsnooz is a custom format designed to be included in bugreports.
It can be described as:

base64 {
  file_header
  deflate {
    repeated {
      record_header
      record_data
    }
  }
}

where the file_header and record_header are modified versions of
the btsnoop headers.
"""

import base64
import fileinput
import struct
import sys
import zlib

# Enumeration of the values the 'type' field can take in a btsnooz
# header. These values come from the Bluetooth stack's internal
# representation of packet types.
TYPE_IN_EVT = 0x10
TYPE_IN_ACL = 0x11
TYPE_IN_SCO = 0x12
TYPE_IN_ISO = 0x17
TYPE_OUT_CMD = 0x20
TYPE_OUT_ACL = 0x21
TYPE_OUT_SCO = 0x22
TYPE_OUT_ISO = 0x2d


def type_to_direction(type):
    """
    Returns the inbound/outbound direction of a packet given its type.
    0 = sent packet
    1 = received packet
    """
    if type in [TYPE_IN_EVT, TYPE_IN_ACL, TYPE_IN_SCO, TYPE_IN_ISO]:
        return 1
    return 0


def type_to_hci(type):
    """
    Returns the HCI type of a packet given its btsnooz type.
    """
    if type == TYPE_OUT_CMD:
        return '\x01'
    if type == TYPE_IN_ACL or type == TYPE_OUT_ACL:
        return '\x02'
    if type == TYPE_IN_SCO or type == TYPE_OUT_SCO:
        return '\x03'
    if type == TYPE_IN_EVT:
        return '\x04'
    if type == TYPE_IN_ISO or type == TYPE_OUT_ISO:
        return '\x05'
    raise RuntimeError("type_to_hci: unknown type (0x{:02x})".format(type))


def decode_snooz(snooz):
    """
    Decodes all known versions of a btsnooz file into a btsnoop file.
    """
    version, last_timestamp_ms = struct.unpack_from('=bQ', snooz)
    if version != 1 and version != 2:
        sys.stderr.write('Unsupported btsnooz version: %s\n' % version)
        exit(1)
    # Oddly, the file header (9 bytes) is not compressed, but the rest is.
    decompressed = zlib.decompress(snooz[9:])
    fp.write('btsnoop\x00\x00\x00\x00\x01\x00\x00\x03\xea'.encode("latin-1"))
    if version == 1:
        decode_snooz_v1(decompressed, last_timestamp_ms)
    elif version == 2:
        decode_snooz_v2(decompressed, last_timestamp_ms)


def decode_snooz_v1(decompressed, last_timestamp_ms):
    """
    Decodes btsnooz v1 files into a btsnoop file.
    """
    # An unfortunate consequence of the file format design: we have to do a
    # pass of the entire file to determine the timestamp of the first packet.
    first_timestamp_ms = last_timestamp_ms + 0x00dcddb30f2f8000
    offset = 0
    while offset < len(decompressed):
        length, delta_time_ms, type = struct.unpack_from('=HIb', decompressed, offset)
        offset += 7 + length - 1
        first_timestamp_ms -= delta_time_ms

    # Second pass does the actual writing out.
    offset = 0
    while offset < len(decompressed):
        length, delta_time_ms, type = struct.unpack_from('=HIb', decompressed, offset)
        first_timestamp_ms += delta_time_ms
        offset += 7
        fp.write(struct.pack('>II', length, length))
        fp.write(struct.pack('>II', type_to_direction(type), 0))
        fp.write(struct.pack('>II', (first_timestamp_ms >> 32), (first_timestamp_ms & 0xFFFFFFFF)))
        fp.write(type_to_hci(type).encode("latin-1"))
        fp.write(decompressed[offset:offset + length - 1])
        offset += length - 1


def decode_snooz_v2(decompressed, last_timestamp_ms):
    """
    Decodes btsnooz v2 files into a btsnoop file.
    """
    # An unfortunate consequence of the file format design: we have to do a
    # pass of the entire file to determine the timestamp of the first packet.
    first_timestamp_ms = last_timestamp_ms + 0x00dcddb30f2f8000
    offset = 0
    while offset < len(decompressed):
        length, packet_length, delta_time_ms, snooz_type = struct.unpack_from('=HHIb', decompressed, offset)
        offset += 9 + length - 1
        first_timestamp_ms -= delta_time_ms

    # Second pass does the actual writing out.
    offset = 0
    while offset < len(decompressed):
        length, packet_length, delta_time_ms, snooz_type = struct.unpack_from('=HHIb', decompressed, offset)
        first_timestamp_ms += delta_time_ms
        offset += 9
        fp.write(struct.pack('>II', packet_length, length))
        fp.write(struct.pack('>II', type_to_direction(snooz_type), 0))
        fp.write(struct.pack('>II', (first_timestamp_ms >> 32), (first_timestamp_ms & 0xFFFFFFFF)))
        fp.write(type_to_hci(snooz_type).encode("latin-1"))
        fp.write(decompressed[offset:offset + length - 1])
        offset += length - 1


fp = None


def main():
    if len(sys.argv) > 2:
        sys.stderr.write('Usage: %s [bugreport]\n' % sys.argv[0])
        exit(1)
    iterator = fileinput.input(openhook=fileinput.hook_encoded("latin-1"))
    found = False
    base64_string = ""
    for line in iterator:
        if found:
            if line.find('--- END:BTSNOOP_LOG_SUMMARY') != -1:
                global fp
                fp = open("btsnoop.log", "wb")
                decode_snooz(base64.standard_b64decode(base64_string))
                fp.close()  # note: `fp.close` without parentheses never actually closes the file
                sys.exit(0)
            base64_string += line.strip()
        if line.find('--- BEGIN:BTSNOOP_LOG_SUMMARY') != -1:
            found = True
    if not found:
        sys.stderr.write('No btsnooz section found in bugreport.\n')
        sys.exit(1)


if __name__ == '__main__':
    main()
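A quick sanity check before opening the result in Wireshark is to verify the 16-byte btsnoop file header the script writes (magic "btsnoop\0", version 1, datalink 0x03EA = HCI UART/H4). This is a sketch; point it at whatever file the script produced:

```python
import struct

BTSNOOP_MAGIC = b"btsnoop\x00"

def check_btsnoop_header(data: bytes) -> bool:
    """Return True if data starts with a valid btsnoop header
    (magic, version 1, datalink 0x03EA = HCI UART/H4)."""
    if len(data) < 16 or data[:8] != BTSNOOP_MAGIC:
        return False
    version, datalink = struct.unpack(">II", data[8:16])
    return version == 1 and datalink == 0x03EA

good = b"btsnoop\x00\x00\x00\x00\x01\x00\x00\x03\xea"
bad = b"\xff\xfeb\x00t\x00s\x00"  # the UTF-16LE-mangled output from the question
print(check_btsnoop_header(good), check_btsnoop_header(bad))  # True False
```

If this returns False for your file, Wireshark will reject it no matter what follows the header.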


CRC-32 values not matching

I am using SPI communication to communicate between a Raspberry Pi and a microcontroller. I am sending a value of 32 (a 32-bit integer, i.e. 0x00000020), for which the CRC value calculated by the microcontroller is 2613451423. I am running CRC-32 on this 32-bit integer. The polynomial being used on the MCU is 0x04C11DB7. Below is the snippet of code I am using on the microcontroller:
GPCRC_Init_TypeDef initcrc = GPCRC_INIT_DEFAULT;
initcrc.initValue = 0xFFFFFFFF; // Standard CRC-32 init value
initcrc.reverseBits = true;
initcrc.reverseByteOrder = true; // Above line and this line convert data from big endian to little endian

/********* other code here **************/

for (int m = 0; m < 1; m++) {
    data_adc = ((uint8_t *)(StartPage + m)); // reading data from flash memory; StartPage is the address to read from (data stored as 32-bit integers); here reading only a byte
    ecode = SPIDRV_STransmitB(SPI_HANDLE, data_adc, 4, 0); // transmitting 4 bytes (32-bit value) through SPI over to RPi
    GPCRC_Start(GPCRC); // Set CRC parameters such as polynomial
    for (int i = 0; i < 1; i++) {
        GPCRC_InputU32(GPCRC, ((uint32_t *)(StartPage + i))); // generating CRC for the same 32-bit value
    }
    // I also tried using the code below:
    /*
    for (int i = 0; i < 4; i++) {
        GPCRC_InputU8(GPCRC, ((uint8_t *)(StartPage + i))); // generating CRC for the same 32-bit value
    }
    */
    checksum[0] = ~GPCRC_DataRead(GPCRC); // CRC value inverted and stored in array
    ecode = SPIDRV_STransmitB(SPI_HANDLE, checksum, 4, 0); // sending this value through SPI (in chunks of 4 bytes)
On the RPi, I am collecting this value (i.e. 32; I receive the value correctly), but the CRC calculated by the RPi is 2172022818. I am using zlib to calculate the CRC-32. I have added a code snippet as well:
import datetime
import os
import struct
import time

import pigpio
import spidev
import zlib

bus = 0
device = 0
spi = spidev.SpiDev()
spi.open(bus, device)
spi.max_speed_hz = 4000000
spi.mode = 0
pi = pigpio.pi()  # missing from the original snippet; pi is used below
pi.set_mode(25, pigpio.INPUT)
rpi_crc = 0

def output_file_path():
    return os.path.join(os.path.dirname(__file__),
                        datetime.datetime.now().strftime("%dT%H.%M.%S") + ".csv")

def spi_process(gpio, level, tick):
    print("Detected")
    data = bytes([0] * 4)
    crc_data = bytes([0] * 4)
    spi.xfer2([0x02])
    with open(output_file_path(), 'w') as f:
        t1 = datetime.datetime.now()
        for x in range(1):
            recv = spi.xfer2(data)
            values = struct.unpack("<" + "I" * 1, bytes(recv))
            print(values)
            rpi_crc = zlib.crc32(bytes(recv))
            print('RPis own CRC generated:')
            print(rpi_crc)
            f.write("\n")
            f.write("\n".join([str(x) for x in values]))
        mcu_crc_bytes = spi.xfer2(crc_data)
        mcu_crc = struct.unpack("<" + "I" * 1, bytes(mcu_crc_bytes))
        mcu_crc_int = int(''.join(map(str, mcu_crc)))
        print('MCU sent this CRC:')
        print(mcu_crc_int)
        if (rpi_crc != mcu_crc_int):
            spi.xfer([0x03])
    t2 = datetime.datetime.now()
    print(t2 - t1)

input("Press Enter to start the process ")
spi.xfer2([0x01])
cb1 = pi.callback(25, pigpio.RISING_EDGE, spi_process)
while True:
    time.sleep(1)
From this forum itself I got to know that it might be an endianness issue, so I tried changing the endianness of one value and comparing it with the other, but the values still differ.
For example, the value computed by the RPi is 2172022818 (decimal), which is 0x81767022 in hex; swapping the byte order gives 0x22707681.
The value sent by the microcontroller is 2613451423 (decimal), which is 0x9BC61A9F in hex.
As you can see, the two values do not match either way. Please let me know if I am doing something wrong or what could be going on here. Thanks!
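The byte-order swap described above can be checked mechanically rather than by hand (a sketch; the values are the ones from the question):

```python
import struct

def swap32(value: int) -> int:
    """Reverse the byte order of a 32-bit unsigned integer."""
    return struct.unpack("<I", struct.pack(">I", value))[0]

print(hex(swap32(0x81767022)))  # 0x22707681, matching the manual swap above
```

This only reorders whole bytes; if the MCU's CRC unit also reflects bits within each byte (the reverseBits option), a byte swap alone will never make the values line up.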
EDIT:
Added more code to provide better overview of certain aspects which were missing before. Datasheet for microcontroller (CRC on Pg. 347): https://www.wless.ru/files/ZigBee/EFR32MG21/EFR32xG21_reference_manual.pdf
I was able to figure out the issues. I used https://crccalc.com/?crc=C5&method=crc32&datatype=hex&outtype=0 to confirm the CRC values I was getting on the microcontroller and the RPi.
The first issue was on the microcontroller: I was not performing the CRC on the data at all, but on the address where the data was stored.
The second issue was that the MCU was performing the CRC on the value stored in little-endian form. On the RPi, the CRC was also being performed on values stored in little-endian form. Since the endianness was the same on both devices, I did not need to reverse the bits or bytes at all.
After making these changes, I got correct, matching CRC values on both the RPi and the microcontroller.
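The RPi side of that conclusion can be sketched in a few lines: zlib.crc32 over the little-endian bytes of the value 32 reproduces exactly the number reported in the question:

```python
import struct
import zlib

value = 32
le_bytes = struct.pack("<I", value)   # b'\x20\x00\x00\x00', as received over SPI
crc = zlib.crc32(le_bytes)
print(crc)  # 2172022818 == 0x81767022, the RPi value from the question
```

So the fix was entirely on the MCU side: once it hashes the same little-endian bytes, both sides agree without any bit or byte reversal.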

How to compute the value for `version made by` in zip header?

I'm struggling to compute the correct value for version made by in adm-zip.
In my opinion, the ZIP spec is unclear about how to map an option (e.g. option 3, Unix) to the corresponding 2 bytes in the central header.
The adm-zip docs for the header settings do not help at all.
Mapping from the zip spec (4.4.2):
4.4.2.2 The current mappings are:
0 - MS-DOS and OS/2 (FAT / VFAT / FAT32 file systems)
1 - Amiga
2 - OpenVMS
3 - UNIX
4 - VM/CMS
I have found one possible solution by setting the entry.header.made property to 788:
(entry.header as any).made = 788;
(This value was only found by inspecting a zip created by another zip utility.)
Can anyone explain how to compute this value 788 starting from the desired option 3?
Or how to compute the value for another option, e.g. 10 - Windows NTFS?
Short description:
According to the specification the upper byte represents the OS which created the ZIP file. The lower byte is the version of the used ZIP specification.
In your example:
788 = 0x0314
OS which created the ZIP file:
0x03 (Upper Byte): UNIX
4.4.2.1 The upper byte indicates the compatibility of the file
attribute information. If the external file attributes
are compatible with MS-DOS and can be read by PKZIP for
DOS version 2.04g then this value will be zero. If these
attributes are not compatible, then this value will
identify the host system on which the attributes are
compatible. Software can use this information to determine
the line record format for text files etc.
ZIP specification version:
0x14 (Lower Byte): Version 2.0
0x14 / 10 = 2 (Major version number)
0x14 % 10 = 0 (Minor version number)
4.4.2.3 The lower byte indicates the ZIP specification version
(the version of this document) supported by the software
used to encode the file. The value/10 indicates the major
version number, and the value mod 10 is the minor version
number.
For Windows NTFS, the correct "version made by" value should be:
0x0A14 = 2580
0x0A (Upper Byte): Windows NTFS (Win32)
0x14 (Lower Byte): Version 2.0
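The arithmetic is simply "host OS id in the upper byte, spec version as major*10 + minor in the lower byte". A quick check (Python here purely for illustration; adm-zip itself is JavaScript):

```python
def version_made_by(os_id: int, spec_major: int, spec_minor: int) -> int:
    """Upper byte: host OS id; lower byte: ZIP spec version (major*10 + minor)."""
    return (os_id << 8) | (spec_major * 10 + spec_minor)

print(version_made_by(3, 2, 0))   # UNIX, spec 2.0  -> 788
print(version_made_by(10, 2, 0))  # Windows NTFS, spec 2.0 -> 2580
```

This reproduces both the 788 value from the question and the 2580 value for Windows NTFS.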
Extract from adm-zip source:
var _verMade = 0x14,
    _version = 0x0A,
    _flags = 0,
    _method = 0,
    _time = 0,
    _crc = 0,
    _compressedSize = 0,
    _size = 0,
    _fnameLen = 0,
    _extraLen = 0,
    _comLen = 0,
    _diskStart = 0,
    _inattr = 0,
    _attr = 0,
    _offset = 0;

switch (process.platform) {
    case 'win32':
        _verMade |= 0x0A00;
    case 'darwin':
        _verMade |= 0x1300;
    default:
        _verMade |= 0x0300;
}
Here you can see that the version 2.0 value (0x14) from the ZIP specification is used, OR-ed with the left-shifted id of the OS which created the ZIP file.
UPDATE:
I wrote a few simple JavaScript example functions which return the right value for verMade, and which extract the OS, major, and minor version numbers from verMade.
Set version:
function zip_version_set(os, spec_major, spec_minor)
{
    var ret = (parseInt(spec_major, 10) * 10) + parseInt(spec_minor, 10);

    switch (os) {
        case "dos":
            ret |= 0x0000;
            break;
        case "win32":
            ret |= 0x0A00;
            break;
        case "darwin":
            ret |= 0x1300;
            break;
        default:
            ret |= 0x0300;
    }
    return ret;
}
Usage:
Argument os:
Put here the OS string. Currently possible values are dos (MS-DOS), win32 (Windows NTFS), darwin (OS X), and the default is unix.
Argument spec_major:
Put here the major version number from the used ZIP specification.
Argument spec_minor:
Put here the minor version number from the used ZIP specification.
Return:
Returns verMade.
Get OS:
function zip_version_get_os(verMade)
{
    var tmp;
    var ret;

    tmp = (verMade & 0xFF00) >> 8;
    switch (tmp) {
        case 0x00:
            ret = "dos";
            break;
        case 0x03:
            ret = "unix";
            break;
        case 0x0A:
            ret = "win32";
            break;
        case 0x13:
            ret = "darwin";
            break;
        default:
            ret = "unimplemented";
    }
    return ret;
}
Usage:
Argument verMade:
Put here the verMade value.
Return:
Returns the OS as string.
Get major version number (ZIP specification):
function zip_version_get_major(verMade)
{
    return ((verMade & 0xFF) / 10);
}
Usage:
Argument verMade:
Put here the verMade value.
Return:
Returns the major version from the used ZIP specification.
Get minor version number (ZIP specification):
function zip_version_get_minor(verMade)
{
    return ((verMade & 0xFF) % 10);
}
Usage:
Argument verMade:
Put here the verMade value.
Return:
Returns the minor version from the used ZIP specification.

In Scapy/Kamene python, how do I find the global headers for a pcap file?

So in C, while reading from a Pcap file, you can use the C libpcap library to get all this information related to the global headers:
typedef struct pcap_hdr_s {
    guint32 magic_number;  /* magic number */
    guint16 version_major; /* major version number */
    guint16 version_minor; /* minor version number */
    gint32 thiszone;       /* GMT to local correction */
    guint32 sigfigs;       /* accuracy of timestamps */
    guint32 snaplen;       /* max length of captured packets, in octets */
    guint32 network;       /* data link type */
} pcap_hdr_t;
I've searched for a long time, to no avail, for how to find these fields in the Python libraries Scapy/Kamene.
Can somebody please show me sample code using Scapy/Kamene that will find all these fields, or at least a way to find them?
This isn't possible in Scapy as of the time of this writing. You can still do it with plain Python, though, by reading the file as a struct of bytes:
import struct

LITTLE_ENDIAN = "<"
BIG_ENDIAN = ">"

with open("temp.pcap", "rb") as f:
    filebytes = f.read()

if filebytes[:2] == b"\xa1\xb2":
    endianness = BIG_ENDIAN
elif filebytes[:2] == b"\xd4\xc3":
    endianness = LITTLE_ENDIAN
# pcapng is a completely different filetype and has different headers.
# Its magic number is also the same between big/little endian.
elif filebytes[:2] == b"\n\r":
    raise ValueError("This capture is a pcapng file (expected pcap).")
else:
    raise ValueError("This capture is the wrong filetype (expected pcap).")

# Endianness is < or > and is handled by checking the magic number.
pcap_headers = struct.unpack(endianness + "IHHIIII", filebytes[:24])
print(pcap_headers)
---
(2712847316, 2, 4, 0, 0, 524288, 1)
Here, we unpack with < for little-endian on my macOS system (> for big-endian). H reads 2 bytes, while I reads 4 bytes. You can read more about format characters in the Python 3 struct documentation.
We can check the magic number easily:
>>> int(str("A1B2C3D4"), 16)
2712847316
It looks like this is indeed a pcap with the correct magic number. For magic-number byte-order shenanigans, take a look at this SO answer (there can be multiple correct pcap "magic numbers").
Thanks to @Cukic0d: the scapy source code is a great place to look at parsing pcap magic numbers.
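The field layout can also be double-checked without a capture on disk by round-tripping a synthetic global header (a sketch; magic 0xA1B2C3D4, snaplen 65535, and linktype 1 are just example values):

```python
import struct

# Build a big-endian pcap global header: magic, version 2.4,
# thiszone 0, sigfigs 0, snaplen 65535, network 1 (Ethernet).
header = struct.pack(">IHHIIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
fields = struct.unpack(">IHHIIII", header)
print(fields)  # (2712847316, 2, 4, 0, 0, 65535, 1)
```

The packed header is exactly 24 bytes, which is why the answer above slices filebytes[:24].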

How to capture escape sequences sent by terminal?

How would one capture the escape sequences as they are sent by a terminal application (say, Konsole)? For example, if you hit PgDown, what is sent to the virtual console?
I would like to record the byte stream sent to the virtual console (e.g. when I hit Ctrl+C, what escape sequence is produced) to a file I can then read with hexdump.
I wrote a small Python script to do the trick:
#!/bin/env python
import curses
from pprint import pprint

buf = ''

def main(stdscr):
    global buf
    curses.noecho()
    curses.raw()
    curses.cbreak()
    stdscr.keypad(False)
    stop = stdscr.getkey()
    c = stdscr.getkey()
    buf = ''
    while c != stop:
        buf += c
        c = stdscr.getkey()

def run():
    curses.wrapper(main)
    pprint(buf)
    tmp = buf.encode('latin1')
    pprint([hex(x) for x in tmp])
    pprint([bin(x) for x in tmp])

run()
It clears the screen; type any key (e.g. a) to pick the stop key, then type whatever you want to record, and press the same key again to stop. The script then displays all the bytes received. For example, a [start recording] Alt+b [stop recording] a produces the bytes ['0x1b', '0x62'] with my terminal.
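The final dump step can be reproduced without curses, which makes it easy to inspect any recorded byte string after the fact (a sketch; the Alt+b sequence is the example from above):

```python
def dump_bytes(data: bytes):
    """Show recorded terminal input as hex and binary, like the script above."""
    return [hex(b) for b in data], [bin(b) for b in data]

hexes, bins = dump_bytes(b"\x1bb")  # ESC followed by 'b', i.e. Alt+b on many terminals
print(hexes)  # ['0x1b', '0x62']
```

This also shows why Alt+b arrives as two bytes: many terminals encode Alt as an ESC (0x1b) prefix before the key's own byte.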

MPD + Lua + conky

I need help writing a conkyrc file. It should call a Lua script which reads a FIFO pipe of PCM data generated by MPD. This FIFO is used to generate the visualization in NCMPCPP, but I'd rather it go to a bar graph on my desktop. And yeah, I'm aware it would need to be updated pretty fast in order to look good. I'm hoping that instead of a full spectral analysis I can do a simpler visualization that doesn't require an expensive FFT or wavelets: just something I can pipe into a graph showing the activity of the music at the moment.
[edit] Progress made, albeit with a lot of fudge factor...
my conkyrc file
lua_load /home/adam/Programming/myWork/conky/mpd.lua
update_interval .05
TEXT
HERP DEE DERP DEE DERP DEE DUUUR
${lua_bar fifo_func}
My lua file
do
    -- configuration
    local interval = 5

    -- local variables protected from the evil outside world
    local next_update
    local buf
    local int = 0
    local colour = 0

    local function update_buf()
        buf = os.time()
    end

    local f = assert(io.open("/tmp/mpd.fifo", "rb"))
    local block = 2048 * 2 -- 2048 samples, 2 bytes per sample
    local list = {}

    function conky_fifo_func()
        local bytes = f:read(block) -- read a sample of block bytes
        local power = 0
        for i = 0, 2047 do
            --j = string.byte(bytes, 2*i, 2*i+1) -- extract 2 bytes
            j = string.format("%u", string.byte(bytes, i*2, i*2+1))
            power = power + math.abs(j - (256/2))
            --io.write(j..'\n')
        end
        r = ((power/10000) - 20) * 15
        io.write(r .. '\n')
        return r
    end

    -- returns a percentage value that loops around
    function conky_int_func()
        int = int + 1
        return int % 100
    end
end
Based on the NCMPCPP source code, the FIFO is an array of 2048 16-bit ints.
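For reference, a cheap FFT-free "activity" metric over one block of 2048 signed 16-bit samples can be sketched like this (Python for illustration only; note the Lua above reads the FIFO byte-wise, which mixes the high and low bytes of each 16-bit sample):

```python
import struct

def mean_abs_level(block: bytes) -> float:
    """Mean absolute amplitude of signed 16-bit little-endian PCM samples,
    normalized to 0..1. A crude but FFT-free activity measure."""
    n = len(block) // 2
    samples = struct.unpack("<%dh" % n, block[:n * 2])
    return sum(abs(s) for s in samples) / (n * 32768)

silence = b"\x00\x00" * 2048                            # one block of digital silence
loud = struct.pack("<4h", 16384, -16384, 16384, -16384)  # half-scale square wave
print(mean_abs_level(silence), mean_abs_level(loud))  # 0.0 0.5
```

Scaling the 0..1 result to 0..100 gives exactly the percentage range that lua_bar expects, without the hand-tuned fudge constants in the Lua version.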
