How to set byte size of my fields in a custom scapy class?

Can someone please help me with some example code?
I would like to use scapy to create this packet structure:
MyPacketStructure:
field1: 2 bytes
field2: 16 bytes
field3: 12 bytes
field4: 17 bytes
data: 83 bytes
How do I define a scapy class for the above packet structure?
class MyPacketStructure(Packet):
What type do I use for "field1" (2 bytes)?
What type do I use for "field2" (16 bytes)?
What type do I use for "field3" (12 bytes)?
What type do I use for "field4" (17 bytes)?
What type do I use for "data" (83 bytes)?
I see ByteField, but it's only one byte. Which scapy field type lets me set the number of bytes for a field?
For the data field or payload, what is the recommended type? What happens if I only have 43 bytes? How do I pad the remaining 40 bytes of the data field?
Any help would be appreciated. Thank you in advance.
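A minimal sketch of one way to define this: ShortField holds a 2-byte unsigned integer, and StrFixedLenField takes an explicit byte length, null-padding shorter values when the packet is built (which covers the 43-byte case: the remaining 40 bytes become zero bytes).

from scapy.all import Packet, ShortField, StrFixedLenField

class MyPacketStructure(Packet):
    name = "MyPacketStructure"
    fields_desc = [
        ShortField("field1", 0),              # 2-byte unsigned integer
        StrFixedLenField("field2", b"", 16),  # fixed 16-byte string
        StrFixedLenField("field3", b"", 12),  # fixed 12-byte string
        StrFixedLenField("field4", b"", 17),  # fixed 17-byte string
        StrFixedLenField("data", b"", 83),    # fixed 83-byte payload
    ]

# A 43-byte payload is null-padded to 83 bytes when the packet is built:
pkt = MyPacketStructure(field1=1, data=b"A" * 43)
assert len(bytes(pkt)) == 2 + 16 + 12 + 17 + 83  # 130 bytes total

If field1 should hold raw bytes rather than an integer, StrFixedLenField("field1", b"", 2) works there as well.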

Related

Scapy How to create custom protocol/packet structure?

Converting a string to base64

I'm trying to understand how Base64 works.
If you wanted to send !"# using Base64, what would it look like?
Here's my working out:
String: ! " #
Hex: 21 22 23
Binary: 00100001 00100010 00100011
Base64 conversion:
Hex: 8 12 8 23
Binary: 001000 010010 001000 100011
None of the final binary values can be represented using any of the ASCII chars in Base64.
I've obviously misunderstood something here, if anyone can point me in the right direction with an example that would be great.
If I understand your question correctly, you are trying to re-interpret the Base64 values as characters using an ASCII table (i.e. 0x04 would be EOT). However, you have to use the Base64 index table to convert the resulting numbers back to characters (note that the index values are in decimal, not in hex, there).
Here, your values will be
Base64:
Index (decimal): 8 18 8 35
String: I S I j
Does that make sense?
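To double-check in Python: the standard library agrees, and a by-hand 6-bit split reproduces the table above.

import base64

print(base64.b64encode(b'!"#'))  # b'ISIj'

# The same result by hand: concatenate the bits of 0x21 0x22 0x23,
# split into 6-bit groups, and look each group up in the index table.
table = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
bits = "".join(f"{b:08b}" for b in b'!"#')     # 001000010010001000100011
groups = [int(bits[i:i + 6], 2) for i in range(0, len(bits), 6)]
print(groups)                                  # [8, 18, 8, 35]
print("".join(table[g] for g in groups))       # ISIj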

Combination of two bytes calculation

I'm trying to calculate the result from two bytes returned from a string.
I know that the expected value should be 250, and the two bytes have values of [3E, 80].
When working this out on a calculator, do I take the 3E value and move it 8 bits to the left using the RoL function? If so, how do I then combine the result with the 80 value? Add the two values together?
Thanks.
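For what it's worth, the standard way to combine a high byte and a low byte is a plain left shift by 8 bits (not a rotate), followed by OR-ing in the low byte. A small Python sketch follows; note that 0x3E80 is 16000, not 250, so reaching 250 would need device-specific scaling on top (a divide-by-64 would fit, but that is only an assumption about this instrument, so check its datasheet).

high, low = 0x3E, 0x80
combined = (high << 8) | low      # shift high byte left 8 bits, OR in the low byte
print(hex(combined), combined)    # 0x3e80 16000

# Equivalent one-liner using int.from_bytes with big-endian byte order:
print(int.from_bytes(bytes([high, low]), "big"))  # 16000

# 16000 / 64 == 250 -- IF the device scales by 64 (an assumption)
print(combined / 64)              # 250.0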

node.js: get byte length of the string "あいうえお"

I think I should be able to get the byte length of a string by:
Buffer.byteLength('äáöü') // returns 8 as I expect
Buffer.byteLength('あいうえお') // returns 15, expecting 10
However, when getting the byte length with a spreadsheet program (LibreOffice) using =LENB("あいうえお"), I get 10 (which is what I expect).
So, why do I get for 'あいうえお' a byte length of 15 rather than 10 using Buffer.byteLength?
PS.
Testing the "あいうえお" on these two sites, I get two different results
http://bytesizematters.com/ returns 10 bytes
https://mothereff.in/byte-counter returns 15 bytes
What is correct? What is going on?
node.js is correct. The UTF-8 representation of the string "あいうえお" is 15 bytes long:
E3 81 82 = U+3042 'あ'
E3 81 84 = U+3044 'い'
E3 81 86 = U+3046 'う'
E3 81 88 = U+3048 'え'
E3 81 8A = U+304A 'お'
The other string is 8 bytes long in UTF-8 because the Unicode characters it contains are below the U+0800 boundary and can each be represented with two bytes:
C3 A4 = U+E4 'ä'
C3 A1 = U+E1 'á'
C3 B6 = U+F6 'ö'
C3 BC = U+FC 'ü'
From what I can see in the documentation, LibreOffice's LENB() function is doing something different and confusing:
For strings which contain only ASCII characters, it returns the length of the string (which is also the number of bytes used to store it as ASCII).
For strings which contain non-ASCII characters, it returns the number of bytes required to store it in UTF-16, which uses two bytes for all characters under U+10000. (I'm not sure what it does with characters above that, or if it even supports them at all.)
It is not measuring the same thing as Buffer.byteLength, and should be ignored.
With regard to the other tools you're testing: Byte Size Matters is wrong. It's assuming that all Unicode characters up to U+FF can be represented using one byte, and all other characters can be represented using two bytes. This is not true of any character encoding. In fact, it's impossible. If you encode every character up to U+FF using one byte, you've used up all possible values for that byte, and you have no way to represent anything else.
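For reference, the same byte counts can be reproduced in Python by encoding the strings explicitly; the UTF-16 figure matches what LENB appears to report.

s_jp = "あいうえお"
s_eu = "äáöü"

print(len(s_jp.encode("utf-8")))      # 15 -- matches Buffer.byteLength
print(len(s_eu.encode("utf-8")))      # 8
print(len(s_jp.encode("utf-16-le")))  # 10 -- matches LibreOffice's LENB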

node: converting buffers to decimal values

I have a buffer that is filled with data and begins with <Buffer 52 49 ...>
Assuming this buffer is defined as buf, if I run buf.readInt16LE(0) the following is returned:
18770
Now, the binary representation of hex values 52 and 49 are:
01010010 01001001
If I were to convert the first 15 bits to decimal, omitting the 16th bit for two's complement I would get the following:
21065
Why didn't my results give me the value of 18770?
18770 is 01001001 01010010 (0x4952), which is your two bytes reversed. That is exactly what the readInt*LE functions do: LE means little-endian, so the least significant byte comes first.
Use readInt16BE to read the bytes in the order they appear; it returns 21065 (0x5249), the value you calculated.
You could also do this: parseInt("0x" + buf.toString("hex")). Probably a lot slower, but it would do in a pinch.
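For reference, here are the two byte orders side by side in Python, using int.from_bytes with an explicit byte order:

buf = bytes([0x52, 0x49])

print(int.from_bytes(buf, "little"))  # 18770 (0x4952) -- what readInt16LE returns
print(int.from_bytes(buf, "big"))     # 21065 (0x5249) -- what readInt16BE returns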
