Cannot call bson-ext's native functions - node.js

I am using bson-ext as a native binding.
Building the native .node file has worked well so far: I installed GCC and make, then ran node-gyp configure and node-gyp build. I am now including the file in my Node app like so:
var bson = require('./bson/build/Release/bson.node');
var object = bson.BSON.prototype;
console.log(object.serialize({test: "turkey"}));
The problem is, I am getting this error:
console.log(object.serialize({test: "turkey"}))
TypeError: Illegal invocation
I get confused here because when I do console.log(object)
It outputs:
BSON {
  calculateObjectSize: [Function: calculateObjectSize],
  serialize: [Function: serialize],
  serializeWithBufferAndIndex: [Function: serializeWithBufferAndIndex],
  deserialize: [Function: deserialize],
  deserializeStream: [Function: deserializeStream] }
These are the native functions from C++ that were built into bson.node, right? But now I am just not sure how to call them. I have checked their GitHub page for documentation as well, but with no luck.
Edit: The following:
var bson = require('./bson/build/Release/bson');
console.log(bson.BSON)
outputs:
[Function: BSON]
Next, I run:
console.log(bson.BSON().serialize({turkey: 'hey'}));
But I receive:
console.log(bson.BSON().serialize({turkey: 'hey'}));
TypeError: First argument passed in must be an array of types
Then I run:
console.log(bson.BSON(['BSON']).serialize({turkey: 'hey'}));
And I receive:
Segmentation fault

I found a solution inside the mongodb package (specifically at \mongodb\node_modules\mongodb-core\lib\replset.js, line 15).
They are using .native():
BSON = require('bson').native().BSON;
var newBSON = new BSON();
console.log(newBSON.serialize('hello'));
Which does work:
<Buffer 32 00 00 00 02 30 00 02 00 00 00 68 00 02 31 00 02 00 00 00 65 00 02 32 00 02 00 00 00 6c 00 02 33 00 02 00 00 00 6c 00 02 34 00 02 00 00 00 6f 00 00>
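For completeness, here is a minimal sketch of the working usage based on the snippet above (the deserialize round-trip is my addition; it relies on the deserialize function visible in the object dump earlier):
const BSON = require('bson').native().BSON;

// Instantiate the native implementation rather than calling methods on
// the prototype, which is what caused the "Illegal invocation" error.
const bson = new BSON();

const buffer = bson.serialize({ turkey: 'hey' });
console.log(buffer);                   // raw BSON bytes
console.log(bson.deserialize(buffer)); // { turkey: 'hey' }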
However (slightly off-topic): I asked a question yesterday, Why is JSON faster than BSON, and the conclusion was native code vs. non-native code. But now, even after installing bson natively, the performance results are roughly the same... bummer.

Related

Handling ArrayBuffer in mqtt message callback

An MQTT client sends a binary message to a certain topic. My Node.js client subscribes to the topic and receives the binary data. In our architecture, the payload is an Int16Array, but I cannot cast it successfully to a JavaScript array.
// uint16 array [255, 256, 257, 258] sent as the message payload; its contents are <ff 00 00 01 01 01 02 01>
When I do this:
mqttClient.on("message", (topic, payload) => {
console.log(payload.buffer);
})
The output looks like:
ArrayBuffer {
  [Uint8Contents]: <30 11 00 07 74 65 73 74 6f 7a 69 ff 00 00 01 01 01 02 01>,
  byteLength: 19
}
which can't be cast to an Int16Array because of its odd length.
It also contains more bytes than the original message; the original bytes seem to sit at the end of the payload, shifted by some offset.
The extra bytes appear because small Node.js Buffers are views into a shared internal ArrayBuffer pool, so payload.buffer exposes neighbouring data as well. The Buffer itself carries the byteOffset and byteLength needed to carve out just this message; using them, the cast succeeds:
let arrayBuffer = payload.buffer.slice(payload.byteOffset, payload.byteOffset + payload.byteLength);
let int16Array = new Int16Array(arrayBuffer);
let array = Array.from(int16Array);
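Putting it together, a minimal sketch of the whole handler (assuming the MQTT.js client; the broker URL and topic name are placeholders):
const mqtt = require('mqtt');

const mqttClient = mqtt.connect('mqtt://localhost'); // placeholder broker
mqttClient.subscribe('sensor/raw');                  // placeholder topic

mqttClient.on('message', (topic, payload) => {
    // payload is a Node.js Buffer, possibly a view into a shared pool,
    // so slice out exactly the bytes that belong to this message.
    const arrayBuffer = payload.buffer.slice(
        payload.byteOffset,
        payload.byteOffset + payload.byteLength
    );
    const int16Array = new Int16Array(arrayBuffer);
    console.log(Array.from(int16Array)); // e.g. [ 255, 256, 257, 258 ]
});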

BPF verifier rejects code: "invalid bpf_context access"

I'm trying to write a simple socket filter eBPF program that can access the socket buffer data.
#include <linux/bpf.h>
#include <linux/if_ether.h>

#define SEC(NAME) __attribute__((section(NAME), used))

SEC("socket_filter")
int myprog(struct __sk_buff *skb)
{
    void *data = (void *)(long)skb->data;
    void *data_end = (void *)(long)skb->data_end;
    struct ethhdr *eth = data;

    if ((void *)eth + sizeof(*eth) > data_end)
        return 0;
    return 1;
}
And I'm compiling using clang:
clang -I./ -I/usr/include/x86_64-linux-gnu/asm \
-I/usr/include/x86_64-linux-gnu/ -O2 -target bpf -c test.c -o test.elf
However, when I try to load the program, I get the following verifier error:
invalid bpf_context access off=80 size=4
My understanding of this error is that it should be thrown when you try to access context data that hasn't been checked to be within data_end; however, my code does do that.
Here are the instructions for my program:
0000000000000000 packet_counter:
0: 61 12 50 00 00 00 00 00 r2 = *(u32 *)(r1 + 80)
1: 61 11 4c 00 00 00 00 00 r1 = *(u32 *)(r1 + 76)
2: 07 01 00 00 0e 00 00 00 r1 += 14
3: b7 00 00 00 01 00 00 00 r0 = 1
4: 3d 12 01 00 00 00 00 00 if r2 >= r1 goto +1 <LBB0_2>
5: b7 00 00 00 00 00 00 00 r0 = 0
which would imply that the error is caused by reading the pointer to data_end? However, it only happens if I don't check the bounds later.
This is because your BPF program is a "socket filter", and such programs are not allowed to do direct packet access (see sk_filter_is_valid_access(), where the kernel returns false on an attempt to read skb->data or skb->data_end, for example). I do not know the specific reason why it is not available, although I suspect it is a security precaution, since socket filter programs may be available to unprivileged users.
Your program loads just fine as a TC classifier, for example: bpftool prog load foo.o /sys/fs/bpf/foo type classifier. (By the way, thanks for the standalone working reproducer, much appreciated!)
If you want to access data for a socket filter, you can still use the bpf_skb_load_bytes() (or bpf_skb_store_bytes()) helper, which does the length check for you. Something like this:
#include <linux/bpf.h>

#define SEC(NAME) __attribute__((section(NAME), used))

/* Helper declared by hand; returns 0 on success, a negative error otherwise. */
static int (*bpf_skb_load_bytes)(const struct __sk_buff *, __u32,
                                 void *, __u32) =
        (void *) BPF_FUNC_skb_load_bytes;

SEC("socket_filter")
int myprog(struct __sk_buff *skb)
{
    __u32 foo;

    if (bpf_skb_load_bytes(skb, 0, &foo, sizeof(foo)))
        return 0;
    if (foo == 3)
        return 0;
    return 1;
}
Regarding your last comment:
However it only happens if I don't try to check the bounds later.
I suspect clang compiles out the assignments for data and data_end if you do not use them in your code, so they are no longer present and no longer a problem for the verifier.

BSON/Binary to string for MongoDB Object ID in Node JS

I am working on Change Streams, introduced in MongoDB version 3.6. Change Streams have a feature where I can specify to start streaming changes from a particular change in history. On resuming a change stream, the documentation for the native Node.js driver says (documentation here):
Specifies the logical starting point for the new change stream. This should be the _id field from a previously returned change stream document.
When I print it in the console, this is what I get:
{ _id:
   { _data:
      Binary {
        _bsontype: 'Binary',
        sub_type: 0,
        position: 49,
        buffer: <Buffer 82 5a 61 a5 4f 00 00 00 01 46 64 5f 69 64 00 64 5a 61 a5 4f 08 c2 95 31 d0 48 a8 2e 00 5a 10 04 7c c9 60 de de 18 48 94 87 3f 37 63 08 da bb 78 04> } },
  ...
}
My problem is that I do not know how to store an _id of this format in a database or a file. Is it possible to convert this Binary object to a string, so I can use it later to resume my change stream from that particular _id? Example code would be greatly appreciated.
Convert BSON Binary to buffer and back
const Binary = require('mongodb').Binary;
const fs = require('fs');
Save _data from _id:
var wstream = fs.createWriteStream('/tmp/test');
wstream.write(lastChange._id._data.read(0));
wstream.close();
Then rebuild resumeToken:
fs.readFile('/tmp/test', void 0, function (err, data) {
    const resumeToken = { _data: new Binary(data) };
});
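From there you can feed the rebuilt token back to the driver. A minimal sketch, assuming db is a connected Db instance; the collection name is hypothetical:
fs.readFile('/tmp/test', void 0, function (err, data) {
    if (err) throw err;
    const resumeToken = { _data: new Binary(data) };
    // Resume streaming from the change identified by the stored token.
    const changeStream = db.collection('inventory')
        .watch([], { resumeAfter: resumeToken });
    changeStream.on('change', change => console.log(change));
});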

Decoding UDP Data with Node_pcap

I am trying to build a quick Node.js script to look at some data on my network. Using node_pcap, I manage to decode almost everything except the payload data sent through the UDP protocol. This is my code (fairly basic, but it gives me the headers and payloads):
const interface = 'en0';
let filter = 'udp';
const pcap = require('pcap'),
    pcap_session = pcap.createSession(interface, filter);

pcap_session.on('packet', function (raw_packet) {
    let packet = pcap.decode.packet(raw_packet);
    let data = packet.payload.payload.payload.data;
    console.log(data.toString()); // not full data
});
When I print data using the toString() method, it gives me most of the data, but not the beginning. I get something like this:
Li?��ddn-�*ys�{"Id":13350715,... (I've cut the rest, which is just the remainder of the JSON.)
I suspect that the bit of data I can't read contains some useful info, such as the packet count, packet offset, and so on.
I managed to get a part of it from the buffer of a payload:
00 00 00 01 52 8f 0b 4a 4d 3f cb de 08 00 01 00 00 00 04 a4 00 00 26 02 00 00 26 02 00 00 00 03 00 00 00 00 00 00 09 2d 00 00 00 00 f3 03 01 00 00 2a 00 02 00 79 00 05 73 01 d2
Although I have an idea of what kind of data it might be, I have no idea of its structure.
Is there a way to decode this bit of the buffer? I tried some of the Buffer methods, such as readInt32LE and readInt16LE, but in vain. Is there some reading somewhere that can guide me through the process of decoding it?
[Edit] The more I look into it, the more I suspect the data to be BSON rather than JSON; that would explain why I can read some of it but not all. Has anyone managed to decode BSON from a packet?
How does the library know which packet decoder to use?
It starts at Layer 2 of the TCP/IP stack by identifying which protocol is used: https://github.com/node-pcap/node_pcap/blob/master/decode/pcap_packet.js#L29-L56
switch (this.link_type) {
    case "LINKTYPE_ETHERNET":
        this.payload = new EthernetPacket(this.emitter).decode(buf, 0);
        break;
    case "LINKTYPE_NULL":
        this.payload = new NullPacket(this.emitter).decode(buf, 0);
        break;
    case "LINKTYPE_RAW":
        this.payload = new Ipv4(this.emitter).decode(buf, 0);
        break;
    case "LINKTYPE_IEEE802_11_RADIO":
        this.payload = new RadioPacket(this.emitter).decode(buf, 0);
        break;
    case "LINKTYPE_LINUX_SLL":
        this.payload = new SLLPacket(this.emitter).decode(buf, 0);
        break;
    default:
        console.log("node_pcap: PcapPacket.decode - Don't yet know how to decode link type " + this.link_type);
}
Then it moves up the stack and tries to decode the proper protocol based on the flags it finds in the header: https://github.com/node-pcap/node_pcap/blob/master/decode/ipv4.js#L12-L17 (in this particular case, for the IPv4 protocol):
IPFlags.prototype.decode = function (raw_flags) {
    this.reserved = Boolean((raw_flags & 0x80) >> 7);
    this.doNotFragment = Boolean((raw_flags & 0x40) > 0);
    this.moreFragments = Boolean((raw_flags & 0x20) > 0);
    return this;
};
Then, in your case, it matches the UDP protocol: https://github.com/node-pcap/node_pcap/blob/master/decode/ip_protocols.js#L15
protocols[17] = require("./udp");
Hence, if you check https://github.com/node-pcap/node_pcap/blob/master/decode/udp.js#L32, the packet is automatically decoded and exposes a toString method:
UDP.prototype.toString = function () {
    var ret = "UDP " + this.sport + "->" + this.dport + " len " + this.length;
    if (this.sport === 53 || this.dport === 53) {
        ret += (new DNS().decode(this.data, 0, this.data.length).toString());
    }
    return ret;
};
What does this mean for you?
To decode a UDP (or any other) packet, you just call the high-level API pcap.decode.packet(raw_packet) and then call its toString method to display the decoded payload:
pcap_session.on('packet', function (raw_packet) {
    let packet = pcap.decode.packet(raw_packet);
    console.log(packet.toString());
});
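And if the payload really is BSON, as the question's edit suspects, the bson package can decode it once you know where the document starts. A minimal sketch, assuming a bson version that exports deserialize directly; the 4-byte header length is only a guess to adjust against your capture:
const BSON = require('bson');

pcap_session.on('packet', function (raw_packet) {
    const packet = pcap.decode.packet(raw_packet);
    const data = packet.payload.payload.payload.data;

    // Hypothetical framing: skip the custom header bytes in front of
    // the BSON document before handing the rest to the decoder.
    const headerLength = 4;
    try {
        console.log(BSON.deserialize(data.slice(headerLength)));
    } catch (e) {
        console.log('not BSON at this offset:', e.message);
    }
});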

Java Card has a weird response to an APDU with INS = 0x70

Below you see a simple applet that returns 0x6781 for incoming APDU commands with INS=0x70 or INS=0x71:
package testPack;

import javacard.framework.*;

public class TestApp extends Applet
{
    public static void install(byte[] bArray, short bOffset, byte bLength)
    {
        new TestApp().register(bArray, (short) (bOffset + 1), bArray[bOffset]);
    }

    public void process(APDU apdu)
    {
        if (selectingApplet())
        {
            return;
        }

        byte[] buf = apdu.getBuffer();
        switch (buf[ISO7816.OFFSET_INS])
        {
        case (byte) 0x70:
            ISOException.throwIt((short) 0x6781);
            break;
        case (byte) 0x71:
            ISOException.throwIt((short) 0x6781);
            break;
        default:
            ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}
The problem is that I receive 0x6C01 in response to the APDU command with INS=0x70:
Send: 00 A4 04 00 07 01 02 03 04 05 00 00 00
Recv: 90 00
Send: 00 70 00 00 00
Recv: 6C 01
Send: 00 70 00 00 01
Recv: 01 90 00
Send: 00 71 00 00 00
Recv: 67 81
I tried two different Java Cards (an NXP JCOP v2.4.2 r3 and a KONA Java card) through both contact and contactless interfaces, using two different CAP files generated on two different laptops in two different IDEs!!! (How suspicious am I? :D) But the response is the same.
I suspect the PC/SC layer or the Card Manager is behind this weird response, because in the simulator the process method isn't even called for this particular INS value.
What's wrong with it?
INS = 70 with CLA = 00 is the MANAGE CHANNEL command according to the ISO 7816 specification, just as INS = A4 means SELECT.
If you want to use these INS codes, you must use CLA >= 0x80 in order to mark the command as proprietary.
If the class byte indicates an interindustry class, then the INS byte is interpreted as defined in the standard. Here CLA = 00 denotes an interindustry command; therefore the card behaved exactly as it would for a MANAGE CHANNEL command, because you used INS = 70.
6.16.4 Response message (nominal case)
Table 73 - MANAGE CHANNEL response APDU
Data field: logical channel number if P1-P2 = '0000'; empty if P1-P2 != '0000'
SW1-SW2: status bytes
Your card was actually returning the logical channel number (01) to you; see MANAGE CHANNEL.
It seems to me that if the class is proprietary, the INS byte is not treated as an INS defined in the standard. CLA = 80 with the same INS = 0x70 should give you the required result.
Hope it helps.
[Bit 8 set to 1 indicates the proprietary class.]
