scapy - add a new protocol - length variable

I'm trying to add a new protocol and I'm having difficulty with the following fields:
X - a 2-byte field which is supposed to hold the length of the string A.
A - a Unicode string which is not null-terminated.
I need to read the length X and, according to it, take X bytes of the string A.
Can someone help me understand how to implement the length field and the Unicode string?
Thanks
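In Scapy, this kind of length/value pairing is usually expressed with FieldLenField and StrLenField. A minimal sketch, assuming a 2-byte big-endian length and illustrative field names:
from scapy.packet import Packet
from scapy.fields import FieldLenField, StrLenField

class MyProto(Packet):
    name = "MyProto"
    fields_desc = [
        # 2-byte length ("H" = unsigned short), computed from the data field when building
        FieldLenField("len", None, fmt="H", length_of="data"),
        # string whose size on dissection is read from the len field
        StrLenField("data", b"", length_from=lambda pkt: pkt.len),
    ]
If the string is UTF-16, the length presumably counts bytes, and the raw value can be decoded afterwards with data.decode('utf-16-le').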

Related

Converting a string to bytes

I'm fairly new to OCaml, so excuse any stupid mistakes.
I'm trying to modify the elements of a string, yet it won't let me, saying it expects an expression of type bytes. After doing some reading, I understand why, so I tried to convert my string to bytes, without success. I've looked everywhere for how to convert a string to bytes but can't seem to find anything. Can someone help me?
There is a function Bytes.of_string that does this.
Bytes.of_string "my string abc";;
- : bytes = Bytes.of_string "my string abc"
It's interesting to note that the toplevel expression printer (the P part of REPL) prints byte values using a Bytes.of_string call.
You can convert back to a string with Bytes.to_string.
# let b = Bytes.of_string "hal9000";;
val b : bytes = Bytes.of_string "hal9000"
# Bytes.to_string (Bytes.map (fun c -> Char.chr (Char.code c + 1)) b);;
- : string = "ibm:111"
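Since the original goal was to modify elements in place, here is a minimal sketch using Bytes.set (the string content is illustrative):
let () =
  let b = Bytes.of_string "hello" in
  Bytes.set b 0 'H';  (* mutate the first byte in place *)
  print_endline (Bytes.to_string b)  (* prints "Hello" *)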

How to add leading zero(s) in an Azure Logic App for a variable?

Ideally, there should be 6 digits in a variable called 'subject'.
How do I pad with zeros if the subject variable has fewer than 6 digits?
Examples:
subject = 387592 (continue the process)
subject = 35885 (add a zero to make 6 digits = 035885)
subject = 7161 (add zeros = 007161)
Python's zfill(6) solves this kind of issue.
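For comparison, the Python version referenced above (assuming the value arrives as a string):
str(7161).zfill(6)  # -> '007161'
'35885'.zfill(6)    # -> '035885'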
An old school string parsing approach should do the trick:
substring(concat('000000',variables('IntString4')),sub(length(concat('000000',variables('IntString4'))),6),6)
Here's the breakdown:
Concat the existing string with leading zeros
concat('000000',variables('IntString4'))
Measure the length of the string and subtract the desired length to calculate the starting index of the eventual substring:
sub(length(concat('000000',variables('IntString4'))),6)
Use substring to capture the last 6 characters:
substring(<expression from step 1>, <index from step 2>, 6)
You could make this a bit more readable and dynamic with variables, but same logic applies.
Have you tried using the "formatNumber" function? Note that its first argument must be a number, not a string, so convert first if needed (e.g. int(variables('subject'))).
For example, in your case:
formatNumber(35885, '000000') -> Result "035885"
formatNumber(7161, '000000') -> Result "007161"
https://learn.microsoft.com/es-es/azure/logic-apps/workflow-definition-language-functions-reference#formatNumber

How to convert string representing n bits to a byte string?

I'm pretty new to Python (I've never coded in this language before) and I need to pass a byte string to a function (as specified here[1]). I have a string representing a binary number and I want to generate a byte string from it. I've tried countless things but I'm really stuck on how to do it; can you help me, please?
I'm trying to pass this value to a library that handles DES, so I don't have any other option than doing it this way.
[1] https://www.dlitz.net/software/pycrypto/api/current/Crypto.Cipher.DES-module.html#new
from Crypto.Cipher import DES
key = "1101101110001000010000000000000100101000010000011000110100010100"
param = key.tobytes() # <-- The function I need
cipher = DES.new(param, DES.MODE_ECB)
Your key is, in its current form, a string representing a binary number.
You could get the bytes (still as strings) from it simply with:
length = 8
input_l = [key[i:i+length] for i in range(0,len(key),length)]
And then convert each of these in a list to ints:
input_b = [int(b,base=2) for b in input_l]
The bytearray is then simply given by:
bytearray(input_b)
or
bytes(input_b)
depending on your usecase.
Converting this to a function is left as an exercise to the reader.
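Putting the steps together, a minimal end-to-end sketch (assuming PyCrypto's DES API from the linked docs):
from Crypto.Cipher import DES

key = "1101101110001000010000000000000100101000010000011000110100010100"
# split into 8-bit chunks, parse each chunk as base 2, and pack the ints into bytes
key_bytes = bytes(int(key[i:i+8], 2) for i in range(0, len(key), 8))
cipher = DES.new(key_bytes, DES.MODE_ECB)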

Buffer constructor not treating encoding correctly for buffer length

I am trying to construct a UTF-16-LE string from a JavaScript string as a new Buffer object.
It appears that new Buffer('xxxxxxxxxx', 'utf16le') will actually have a length of half what it is expected to have, such that we only see 5 x's in the console logs.
var test = new Buffer('xxxxxxxxxx', 'utf16le');
for (var i = 0; i < test.length; i++) {
    console.log(i + ':' + String.fromCharCode(test[i]));
}
Node version is v0.8.6
It is really unclear what you want to accomplish here. Your statement can mean (at least) two things:
How to convert a JS string into a UTF-16-LE byte array
How to convert a byte array containing a UTF-16-LE string into a JS string
What your code sample actually does is encode a JS string into its UTF-16-LE bytes and store those in a buffer. Until you actually state what you want to accomplish, you have no chance of getting a coherent answer.
new Buffer('FF', 'hex') will yield a buffer of length 1 with all bits of the octet set. Which is likely the opposite of what you think it does.
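For reference, a minimal sketch using the modern Buffer.from API (which replaced the deprecated constructor), showing the expected UTF-16-LE byte layout:
const buf = Buffer.from('xxxxxxxxxx', 'utf16le');
console.log(buf.length);          // 20: two bytes per character
console.log(buf.toString('hex')); // '78007800...': each 'x' (0x78) followed by a 0x00 byte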

How do int-to-string casts work in Go?

I only started Go today, so this may be obvious but I couldn't find anything on it.
What does var x uint64 = 0x12345678; y := string(x) give y?
I know var x uint8 = 65; y := string(x) would give y the byte 65, character A, and common sense would suggest (since types larger than uint8 are allowed to be converted to strings) that the bytes would simply be packed in native byte order (i.e., little-endian) and assigned to the variable.
This does not seem to be the case:
hex.EncodeToString([]byte(y)) ==> "efbfbd"
My first thought was that this is an address with the last byte left off because of some weird null-terminator thingy, but if I allocate two variables with two different values and print them out, I get the same result.
var x, x2 uint64 = 0x10000000, 0x20000000
y, y2 := string(x), string(x2)
fmt.Println(hex.EncodeToString([]byte(y))) // "efbfbd"
fmt.Println(hex.EncodeToString([]byte(y2))) // "efbfbd"
Maddeningly, I can't find the implementation of the string type anywhere, although I probably haven't looked hard enough.
This is covered in the Spec: Conversions: Conversions to and from a string type:
Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".
So effectively, when you convert a numeric value to string, it can only yield a string containing one rune (character). And since Go stores strings in memory as UTF-8-encoded byte sequences, that is what you will see if you convert your string to []byte:
Converting a value of a string type to a slice of bytes type yields a slice whose successive elements are the bytes of the string.
When you try to convert the 0x12345678, 0x10000000 and 0x20000000 values to string, since they are outside the range of valid Unicode code points, per the spec they are converted to "\uFFFD", which in UTF-8 encoding is []byte{239, 191, 189}; when encoded to a hex string:
fmt.Println(hex.EncodeToString([]byte("\uFFFD"))) // Output: efbfbd
Or simply:
fmt.Printf("%x", "\uFFFD") // Output: efbfbd
Read the blog post Strings, bytes, runes and characters in Go for more details about string internals.
And btw since Go 1.5 the Go runtime is implemented (mostly) in Go, so these conversions are now implemented in Go and can be found in the runtime package: runtime/string.go, look for the intstring() function.
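To illustrate the difference, a minimal sketch contrasting the rune conversion with strconv (for decimal text) and encoding/binary (for the little-endian packing the question originally expected):
package main

import (
	"encoding/binary"
	"fmt"
	"strconv"
)

func main() {
	var x uint64 = 0x12345678

	// 0x12345678 is not a valid Unicode code point, so the conversion yields "\uFFFD"
	fmt.Printf("%x\n", string(rune(x))) // efbfbd

	// decimal text representation instead
	fmt.Println(strconv.FormatUint(x, 10)) // 305419896

	// raw little-endian bytes, as the question expected
	b := make([]byte, 8)
	binary.LittleEndian.PutUint64(b, x)
	fmt.Printf("%x\n", b) // 7856341200000000
}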
