Can I get all custom sections from a WebAssembly module in Node.js?

I see there is a way to get a specific custom section with a given name:
var nameSections = WebAssembly.Module.customSections(module, "sec_name");
Is there any way I can get all the custom sections in a given WebAssembly module?

The JavaScript interface does not offer that at this time. The customSections function was discussed and approved here. There is currently an open issue that addresses exactly your question here. The status of that issue at the moment is that providing such an API would bring too much complexity (not so much about the sections as a list, but about the content of each). That makes sense to me, because the WebAssembly standard is in very active development (many post-MVP features are in progress).
That said, it seems you are left to parse the binary yourself. The binary format is documented here. Basically, after you parse the magic number and the version there is a list of sections: a 1-byte ID, a u32 (LEB128 format, trailing zeroes allowed) giving the byte length of the section's content, and then the content bytes themselves (with that length). After this header and the content bytes comes the next section, if any, until the end.
Custom sections have an ID of 0. The content of these sections starts with a name, and after it come the section's "user" bytes. The name is a vector of bytes: a length (again a u32) followed by that many bytes, which must be a UTF-8 encoded string.
So the code is something like this (a minimal Node.js sketch, assuming bytes holds the module as a Buffer or Uint8Array):
function customSections(bytes) {
  let pos = 8; // skip the 4-byte magic number and the 4-byte version
  const u32 = () => { // decode one LEB128-encoded u32, advancing pos
    let n = 0, shift = 0, b;
    do { b = bytes[pos++]; n |= (b & 0x7f) << shift; shift += 7; } while (b & 0x80);
    return n >>> 0;
  };
  const sections = [];
  while (pos < bytes.length) {
    const id = bytes[pos++]; // 1-byte section ID
    const end = pos + u32(); // u32 content length; `end` marks the next header
    if (id === 0) { // custom section
      const nameLen = u32(); // the name is a length-prefixed UTF-8 vector
      const name = Buffer.from(bytes.subarray(pos, pos += nameLen)).toString("utf8");
      sections.push({ name, data: bytes.subarray(pos, end) }); // the "user" bytes
    }
    pos = end; // jump to the end of the section
  }
  return sections;
}

Related

How does `Map<String, String>` get to know the `String` endings in `MethodChannel` arguments?

If Dart and Kotlin code communicate through binary (an array of 8-bit integers, 0-255), then how is the end of a String, or even of an int, represented in or determined from the binary sequence of bytes? Is there some special charCode or something else?
Also, is there a way to save a List<int> as-is to a file.txt, so it can be read back directly into a List<int> instead of going through serialization?
Please guide this new dev.
Thanking you...
Since Flutter handles the MethodChannel on both the Dart side and the Kotlin side, it is free to use its own internal protocol to communicate between the native layer and Flutter. In theory it could be JSON, but based on the supported types it is probably something else that is also more efficient: https://docs.flutter.dev/development/platform-integration/platform-channels?tab=type-mappings-kotlin-tab#codec
For saving a List<int> to a file, you need to decide how you want to encode the content in the file and, later, how you want to decode it. It can be as simple as saving the numbers separated by commas, or encoding the list as JSON.
If your list of numbers can be represented as a Uint8List or Int8List, you can basically just save the numbers as raw bytes to the file and read them back again.
But a List<int> is a list of 64-bit numbers, so you need to decide exactly how you want to encode those.
For writing to files, there are several ways to do it, but the specific way depends on what exactly you want. Without more details I can only suggest you check the API: https://api.dart.dev/stable/2.17.3/dart-io/File-class.html

gcloud translate submitting lists

The codelab example for using gcloud translate via python only translates one string:
sample_text = "Hello world!"
target_language_code = "tr"
response = client.translate_text(
    contents=[sample_text],
    target_language_code=target_language_code,
    parent=parent,
)
for translation in response.translations:
    print(translation.translated_text)
But since it puts sample_text in a list and iterates over the response, I take it one can submit a longer list. Is this true, and can I count on the items in the response corresponding to the order of the items in contents? This must be the case, but I can't find a clear answer in the docs.
translate_text's contents is a Sequence[str], but the total content must be less than 30k codepoints.
For anything longer than 30k, use batch_translate_text.
APIs Explorer provides an explanation of the request and response types for the translateText method. It lets you call the underlying REST API method and generates a 'form' for you in which content is an array of strings (as expected).
The TranslateTextResponse describes translations as having the same length as contents.
There's no obvious other way to map entries in contents with translations so these must be in the same order, translations[foo] being the translation of contents[foo].
You can prove this to yourself by:
making the call with multiple known translations
including one word not in the source language (e.g. notknowninenglish for English) to confirm the translation result.
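For example, a minimal sketch (assuming client and parent are already set up as in the codelab):

texts = ["Hello world!", "Good morning", "notknowninenglish"]
response = client.translate_text(
    contents=texts,
    target_language_code="tr",
    parent=parent,
)
# translations line up index-for-index with contents
for source, translation in zip(texts, response.translations):
    print(source, "->", translation.translated_text)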

SCAPY: How to skip bytes when dissecting a packet?

I am working with SCAPY for dissecting network packets. Some of them have a lot of random padding between strings.
Is there a way to tell SCAPY to just "skip the next 4 bytes after this field"?
Currently I am using a StrFixedLenField for every one of them, which just blows up the output and is very space-consuming and distracting.
I can't use the padded fields because they require a fixed padding byte, but in my case the padding consists of random bytes.
Thanks for any advice!
EDIT: here is an example of a packet:
\x01\x07\x08\x01
\x13
ThisIsAString
\x02\x04\x01\x01
\x10
AnotherOne
The packet is TLV-ordered: the first 4 bytes are the Tag, the 5th one is the Length of the string, and then comes the Value (the string). As there are many tags and I am not and will not be able to distinguish them all, I just want to extract the Length and Value (string) parts of the packet data and ignore the 4-byte Tags. Currently I am doing this by declaring a StrFixedLenField which just reads 4 bytes. But that field is then shown in the summary of the packet, because it is interpreted as a legitimate field. I just want it ignored, so that Scapy simply skips the 4 bytes of a TLV field when dissecting.
EDIT 2:
Here is a code snippet of what I am currently doing vs what I want to do:
StrFixedLenField("Padding_1", None, length=4),
FieldLenField("Field_1_Length", None, length_of="Field_1"),
StrLenField("Field_1", "", length_from=lambda x: x.Field_1_Length),
StrFixedLenField("Padding_2", None, length=4),
FieldLenField("Field_2_Length", None, length_of="Field_2"),
StrLenField("Field_2", "", length_from=lambda x: x.Field_2_Length)
Instead of the StrFixedLenField I'd like to have something like skip_bytes(4) so that SCAPY completely ignores these bytes.
Based on this GitHub fork, in which the author added functionality somewhat similar to what I was searching for, I found a solution.
I added a class to fields.py that just inherits from the field class I want to hide, which in my case is StrFixedLenField:
class StrFixedLenFieldHidden(StrFixedLenField):
pass
Then in packet.py, in the definition of _show_or_dump(...), in the for loop which iterates through the fields of the packet, I added this:
for f in self.fields_desc:
if isinstance(f, StrFixedLenFieldHidden):
continue
...
With this, Scapy simply ignores the field when building the summary. In my case it was also important to hide the fields in the PDF dump; this can be done by adding the same lines to the for loop in the do_ps_dump(...) function in packet.py.
Keep in mind that .show() or .show2() will no longer display these fields, although they are still there, so the packet is still assembled correctly when you send it with sr(...) or whatever.
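For completeness, this is roughly how the fields from the question would use the new class (a sketch; it assumes the patched Scapy from above, the layer name MyTLV is hypothetical, and fmt="B" matches the 1-byte Length in the example packet):

from scapy.packet import Packet
from scapy.fields import FieldLenField, StrLenField
from scapy.fields import StrFixedLenFieldHidden  # the class added to fields.py above

class MyTLV(Packet):
    name = "MyTLV"
    fields_desc = [
        StrFixedLenFieldHidden("Tag_1", None, length=4),  # dissected, but hidden from show()
        FieldLenField("Field_1_Length", None, length_of="Field_1", fmt="B"),
        StrLenField("Field_1", "", length_from=lambda pkt: pkt.Field_1_Length),
    ]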

Reading a .dbf file with Rust throws an invalid character error

I am new to Rust and am creating a POC to convert a .dbf file to CSV. I am reading the .dbf file using the Rust library dbase.
The issue is, when I create a sample .dbf file using dbfview, the code works fine. But when I use the .dbf file I will actually be working with, I get the following error:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: InvalidFieldType('M')', src/libcore/result.rs:999:5
Here is the code I am using, from the given link:
use dbase::FieldValue;
let records = dbase::read("tests/data/line.dbf").unwrap();
for record in records {
    for (name, value) in record {
        println!("{} -> {:?}", name, value);
        match value {
            FieldValue::Character(string) => println!("Got string: {}", string),
            FieldValue::Numeric(value) => println!("Got numeric value of {}", value),
            _ => {}
        }
    }
}
I think the 'M' is the ^M character appended by Windows.
What can I do to handle this error and read the file successfully?
Any help will be much appreciated.
The short answer to your question is no: you will not be able to read this file with dbase-rs (or any current Rust library), and you'll most likely have to rework the file so it does not contain a memo field.
A deep dive into the DBF file format
The InvalidFieldType error points at a structural feature of the file that your library cannot handle: a Memo field. We're going to deep-dive into the file to figure out why that is, and whether there is anything we can do to fix it.
Looking at the header definition, of particular importance is byte 28 (offset 0x0000010, column 0C in a hex dump), which is a bitmask indicating whether the table contains a number of possible things, most notably:
0x01 if the file comes with an associated .cdx file
0x02 if it contains a memo
0x04 if the file is actually a .dbc file (a database)
Yours is 0x03, so your file both comes with an associated .cdx file and contains a memo. Since we know (ahead of time) that dbase-rs does not handle memos, that's looking increasingly likely to be the problem.
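You can verify this on your own file; a minimal sketch in Python (the path is taken from the question):

# read the 32-byte DBF header and test the table-flags byte (byte 28, 0-based)
with open("tests/data/line.dbf", "rb") as f:
    flags = f.read(32)[28]
print("has .cdx index:", bool(flags & 0x01))
print("has memo field:", bool(flags & 0x02))
print("is a .dbc:", bool(flags & 0x04))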
Let's keep looking. From here on, each field descriptor is 32 bytes long. Bytes 0-10 contain the field name and byte 11 is the type. Since the library you wanted to use can only parse certain field types, we only really care about byte 11.
In order of appearance, by what the library can parse:
[x] CALL_ID (integer)
[x] CONTACT_ID (integer)
[x] CALL_DATE (DateTime)
[x] SUBJECT (char[])
[ ] NOTES (memo)
The last field is the problematic one. Looking into the library itself, this field type is not supported and therefore yields an Err value, which you are trying to unwrap(). This is the source of your error.
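The same kind of check works for the field descriptors (a sketch; the descriptors start right after the 32-byte header, and the 0x0D terminator byte comes from the dBASE format spec):

# walk the 32-byte field descriptors and print each field's name and type
with open("tests/data/line.dbf", "rb") as f:
    data = f.read()
pos = 32
while data[pos] != 0x0D:  # 0x0D terminates the descriptor list
    name = data[pos:pos + 11].split(b"\x00")[0].decode("ascii")
    ftype = chr(data[pos + 11])  # e.g. 'C', 'N', 'I', 'T', 'M'
    print(name, ftype)
    pos += 32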
There are three ways around it:
The "long" way is to patch the library to handle memo fields. This sounds easy, but in practice it really isn't. As the memos are stored in another file (typically a dbt file in the same folder), you're going to have to make that library read both files and reference both. The point of the memo type itself is to store more than 255 bytes of data in a field. You are the only one able to evaluate whether this work is worth the effort.
If your data is less than 255 bytes in size, you can replace the memo field with a char field, and dbfview should allow you to do this.
If your field is longer than 255 bytes and you are able to run sub-processes (i.e. std::process::Command), you can sneak-convert it using a library in another language that can process memo fields; this Node.js library can, for example, but read-only.

struct.pack doesn't pack tightly

I want to create, and later send, a 5-byte struct like this:
import struct
struct.pack("?i", True, 0x01020304)
>>b'\x01\x00\x00\x00\x04\x03\x02\x01'
but as you can see, the 1-byte boolean gets padded with 3 bytes, filled up to the size of an integer, for some reason.
what I want as a result is:
>>b'\x01\x04\x03\x02\x01'
How can I do this, and why does my solution not work? I seem to be using it correctly according to the documentation.
My problem was solved by an answer to this question. The = character at the start of the format string tells the pack method not to align the data, but to produce a byte string of exactly the specified length. Whether a format character is aligned or not is specified in this chapter. Somehow I missed that.
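For example (the pack result shown is for a little-endian machine; "<?i" would pin the byte order as well):

import struct

struct.calcsize("?i")   # 8: native mode pads the bool up to the int's alignment
struct.calcsize("=?i")  # 5: standard sizes, no alignment
struct.pack("=?i", True, 0x01020304)
# b'\x01\x04\x03\x02\x01'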
