Parsing ByteBuffer in node? - node.js

I have data from an API that was returned like this:
var body = data;
data is equal to:
ByteBuffer {
buffer:
<Buffer 09 62 61 1f 04 01 00 10 01 11 61 99 5d 05 01 00 10 01>,
offset: 0,
markedOffset: -1,
limit: 18,
littleEndian: true,
noAssert: false
}
I've tried calling different read functions on it to try to get the data out. (I'm expecting at least two IDs.) Here is what I have tried so far and the results:
var message = body.readUint32(); // 526475785
var message = body.readCString(); // [blank]
var message = body.readUint8(); // 16
var message = body.readUint64(); // Long { low: -1721691903, high: 66909, unsigned: true }
I also tried:
var message = new ByteBuffer(8 + 8 + 4 + Buffer.byteLength(body.buffer) + 1, ByteBuffer.LITTLE_ENDIAN);
which returned:
ByteBuffer {
buffer:
<Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00>,
offset: 0,
markedOffset: -1,
limit: 39,
littleEndian: true,
noAssert: false
}
I also tried passing just 'body' in but that didn't work at all. Should I be parsing this differently? What exactly should I change to get the data? Thank you

You have to flip the ByteBuffer first to make it ready for read operations.
Once the buffer is ready for reading, use readIString to read the whole buffer as a string. If you expect the buffer to contain values other than a string, you can use other read operations such as readInt32 (I'm assuming a string since it is coming from an API).
body.flip().readIString();
A link to the ByteBuffer docs:
https://github.com/dcodeIO/bytebuffer.js/wiki/API
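For reference, a minimal sketch of both approaches, assuming body is the ByteBuffer shown in the question; whether the payload is a string or fixed-width numeric IDs depends on what the API actually encodes, so treat the numeric variant as an assumption:
// flip() sets limit to the current offset and resets offset to 0,
// so subsequent reads start at the beginning of the written data.
body.flip();
var text = body.readIString();        // length-prefixed string

// If the payload is numeric instead (assumption), read field by field:
// var firstId  = body.readUint64();  // returns a Long object
// var secondId = body.readUint64();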

Related

Strange scancode 02 2A (554) on SendInput LShift

I have written a program to use an external (wireless) numpad as an input device for gaming. I am using NUMPAD0, NUMPAD. and ENTER for the modifier keys shift, ctrl and alt respectively, and I've keymapped the rest of the numpad keys to WASD and some other keys. I've set up a LowLevelKeyboardProc to intercept and eat the "real" input of the numpad and send a custom window message to my application's Windows message loop, which then sends the keymapped "new" input via SendInput (using the scan codes of the keys I want to simulate).
This all works fine and as expected. Except for the left shift key:
Whenever I press NUMPAD0, which is mapped to left shift, not only is the correct scan code 0x2A = 42 sent, but also another key with scan code 0x022A = 554. This looks like some sort of flag in bit #9, since 554 is 42 + 2^9, but I can't find any documentation on this.
I've read about extended 2-byte scan codes with prefix 0xE0, but not 0x02 as in this case. This also only happens with the simulated shift key; ctrl and alt are behaving just fine.
Relevant parts of both the keyboard hook and the windows messaging stuff:
LowLevelKeyboardProc:
LPWSTR convertToHex(LPBYTE in, size_t size_in_Bytes)
{
    std::stringstream tempS;
    tempS << std::hex;
    for (int i = 0; i < size_in_Bytes; i++)
    {
        tempS << std::setfill('0') << std::setw(2) << int(in[i]) << " ";
    }
    std::string tempString = tempS.str();
    std::wstring tempW(tempString.begin(), tempString.end());
    LPWSTR out = (LPWSTR) malloc((tempW.size() + 2) * sizeof(WCHAR));
    StringCbCopyW(out, tempW.size() * sizeof(WCHAR), tempW.c_str());
    return out;
}
LRESULT CALLBACK LowLevelKeyboardProc(__in int nCode, __in WPARAM wParam, __in LPARAM lParam)
{
    KBDLLHOOKSTRUCT *key = (KBDLLHOOKSTRUCT *)lParam;
    wchar_t tempString[128];
    LPWSTR hexOut = convertToHex((LPBYTE) key, sizeof(KBDLLHOOKSTRUCT));
    StringCbPrintfW(tempString, sizeof(tempString), L"HEX: %s \n", hexOut);
    OutputDebugStringW(tempString);
    free(hexOut);

    //if(key->scanCode > 0xFF)
    //    return -1; //ignore strange scancode

    if (nCode == HC_ACTION)
    {
        if (isNumpad)
        {
            auto it = KeyMap.find(key->scanCode);
            if (it != KeyMap.end())
            {
                PostMessage(hwndMain, WM_MY_KEYDOWN, (key->flags & 128), it->first);
                StringCbPrintfW(tempString, sizeof(tempString), L"Intercept Key, VK: %d, scan: %d, flags: %d, time: %d, event: %d \n", key->vkCode, key->scanCode, key->flags, key->time, wParam);
                OutputDebugStringW(tempString);
                return -1;
            }
        }
    }

    // delete injected flag
    //UINT mask = (1<<4);
    //if(key->flags & mask)
    //    key->flags ^= mask;
    //key->scanCode = MapVirtualKey(key->vkCode, MAPVK_VK_TO_VSC);

    StringCbPrintfW(tempString, sizeof(tempString), L"Output key: VK: %d, scan: %d, flags: %d, time: %d, event: %d \n", key->vkCode, key->scanCode, key->flags, key->time, wParam);
    OutputDebugStringW(tempString);
    return CallNextHookEx(hhkHook, nCode, wParam, lParam);
}
Windows Message stuff:
case WM_MY_KEYDOWN:
{
    DWORD OLD_KEY = lParam;
    DWORD NEW_KEY = KeyMap[OLD_KEY];
    bool keyUP = wParam;

    INPUT inputdata;
    inputdata.type = INPUT_KEYBOARD;
    inputdata.ki.dwFlags = KEYEVENTF_SCANCODE;
    inputdata.ki.wScan = NEW_KEY;
    inputdata.ki.wVk = MapVirtualKey(NEW_KEY, MAPVK_VSC_TO_VK);
    inputdata.ki.time = 0;        //key->time;
    inputdata.ki.dwExtraInfo = 0; //key->dwExtraInfo;
    if (keyUP)
    {
        inputdata.ki.dwFlags |= KEYEVENTF_KEYUP;
    }

    Sleep(1);

    bool sendKey = false;
    switch (inputdata.ki.wScan)
    {
        //there once was some additional logic here to handle the modifier keys ctrl, alt and shift separately, but not anymore
        default:
        {
            sendKey = true;
        }
        break;
    }

    if (sendKey)
    {
        UINT result = SendInput(1, &inputdata, sizeof(INPUT));
        if (!result)
            ErrorExit(L"SendInput didn't work!");
    }
}
break;
Output when pressing the normal shift key on a physical keyboard:
//key down
HEX: a0 00 00 00 2a 00 00 00 00 00 00 00 29 cf d2 00 00 00 00 00 00 00 00 00
Output key: VK: 160, scan: 42, flags: 0, time: 13815593, event: 256
//key up
HEX: a0 00 00 00 2a 00 00 00 80 00 00 00 e5 cf d2 00 00 00 00 00 00 00 00 00
Output key: VK: 160, scan: 42, flags: 128, time: 13815781, event: 257
Output when pressing the simulated shift key on the numpad:
//keydown event numpad
HEX: 60 00 00 00 52 00 00 00 00 00 00 00 a1 ea d2 00 00 00 00 00 00 00 00 00
Intercept Key, VK: 96, scan: 82, flags: 0, time: 13822625, event: 256
//keydown event simulated shift key
HEX: a0 00 00 00 2a 00 00 00 10 00 00 00 a1 ea d2 00 00 00 00 00 00 00 00 00
Output key: VK: 160, scan: 42, flags: 0, time: 13822625, event: 256
// key up event with strange scancode 554 <- THIS IS WHAT I AM TALKING ABOUT!
HEX: a0 00 00 00 2a 02 00 00 80 00 00 00 3d eb d2 00 00 00 00 00 00 00 00 00
Output key: VK: 160, scan: 554, flags: 128, time: 13822781, event: 257
// key up event numpad
HEX: 2d 00 00 00 52 00 00 00 80 00 00 00 3d eb d2 00 00 00 00 00 00 00 00 00
Intercept Key, VK: 45, scan: 82, flags: 128, time: 13822781, event: 257
// key down with strange scan code
HEX: a0 00 00 00 2a 02 00 00 00 00 00 00 3d eb d2 00 00 00 00 00 00 00 00 00
Output key: VK: 160, scan: 554, flags: 0, time: 13822781, event: 256
//key up event simulated shift key
HEX: a0 00 00 00 2a 00 00 00 90 00 00 00 3d eb d2 00 00 00 00 00 00 00 00 00
Output key: VK: 160, scan: 42, flags: 128, time: 13822781, event: 257
Another thing to note is that the key-down and key-up events for this strange scan code 554 arrive in reverse order compared to everything else (see output).
As I can simply eat the key event for this strange scan code 554, this is actually not much of a problem functionality-wise, but I would still like to know what this scan code 554 is all about.

node.js USB (hid) Barcode Scanner read Buffer

I'm using node.js to read data from a barcode scanner. This is my code:
var HID = require('node-hid');
var usb = require('usb');

// Honeywell Scanner
var vid = 0xc2e;
var pid = 0xbe1;

var d = new HID.HID(vid, pid);

d.on("data", function (data) {
    console.log(data);
});

d.on("error", function (error) {
    console.log(error);
    d.close();
});
My problem is that I get a Buffer that looks like <Buffer 00 00 00 00 00 00 00 00>. After scanning a barcode (for example a barcode with the ID 12), the console returns something like this:
<Buffer 00 00 53 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 53 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 1e 00 00 00 00 00>
<Buffer 00 00 1f 00 00 00 00 00>
How can I convert this Buffer output into a readable string? In this case it would be 12.
Thanks for your help!
I think what you want to do is to decode your data buffer.
To decode a buffer, you simply use the built-in .toString() method, passing in the character encoding to decode to:
data.toString('hex'); //<-- Decodes to hexadecimal
data.toString('base64'); //<-- Decodes to base64
If you don't pass anything to toString, utf8 will be the default.
EDIT
If you'd like to know which character encodings are currently supported by Node (other than hex, base64 and utf8), visit the official docs.
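For example, a quick sketch (the sample bytes here are made up purely to illustrate the different encodings):
var buf = Buffer.from([0x31, 0x32]);   // two example bytes
console.log(buf.toString());           // '12'   (utf8 is the default)
console.log(buf.toString('hex'));      // '3132'
console.log(buf.toString('base64'));   // 'MTI='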

How do I decode a node-hid data buffer from an A&D scale

I have an A&D scale that I am monitoring input from using node-hid. I am successfully reading the input, but I can't figure out how to decode the binary data. Any help is appreciated.
This is the code I am using:
var HID = require('node-hid');
var devices = HID.devices();
var device = new HID.HID('USB_0dbc_0005_14400000');

device.on("data", function(data){
    console.log(data);
});
And this is what gets spat out when the scale is at zero.
<Buffer 00 00 53 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 57 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 62 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 62 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 62 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 62 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 62 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 63 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 62 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 62 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 58 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
<Buffer 00 00 53 00 00 00 00 00>
<Buffer 00 00 00 00 00 00 00 00>
I ended up just writing the buffer up on my whiteboard and thinking freely. I found some patterns in the data and wrote code for decoding and parsing the scale data. The code is included below for anyone that might need a push in the right direction.
"use strict";
var HID = require('node-hid');
var device = new HID.HID('USB_0dbc_0005_14400000');
var weight = []
var count = 0
module.exports.show_devices = function(){
console.log(devices);
}
module.exports.start_logging = function(){
device.on("data", function(data){
var bad_array = [
'\u0000S\u0000\u0000',
'\u0000W\u0000\u0000',
]
if (count < 20 && count % 2 == 0 && !contains(bad_array, data.toString('utf16le'))){
weight.push(key[data.toString('utf16le')])
}
if(count == 19){
display_weight()
}
if(count == 23){
count = -1
weight = []
}
count++
});
}
function display_weight(){
console.log(weight.join(''));
}
function contains(a, obj){
for(var i = 0; i < a.length; i++){
if(a[i] === obj){
return true;
}
}
return false;
}
var key = {
"\u0000Y\u0000\u0000" : "1",
"\u0000Z\u0000\u0000" : "2",
"\u0000[\u0000\u0000" : "3",
"\u0000\\\u0000\u0000" : "4",
"\u0000]\u0000\u0000" : "5",
"\u0000^\u0000\u0000" : "6",
"\u0000_\u0000\u0000" : "7",
"\u0000`\u0000\u0000" : "8",
"\u0000a\u0000\u0000" : "9",
"\u0000b\u0000\u0000" : "0",
"\u0000c\u0000\u0000" : "."
}

nodejs add null terminated string to buffer

I am trying to replicate a packet.
This packet:
2C 00 65 00 03 00 00 00 00 00 00 00 42 4C 41 5A
45 00 00 00 00 00 00 00 00 42 4C 41 5A 45......
2c 00 is the size of the packet...
65 00 is the packet id 101...
03 00 is the number of elements in the array...
Now here comes my problem: 42 4C 41 5A 45 is a string. There are exactly 3 instances of that string in the packet if it is complete. But my problem is that it is not just null-terminated; it has 00 00 00 00 padding between those instances.
My code:
function channel_list(channels) {
    var packet = new SmartBuffer();
    packet.writeUInt16LE(101); // response packet for list of channels
    packet.writeUInt16LE(channels.length);
    channels.forEach(function (key){
        console.log(key);
        packet.writeStringNT(key);
    });
    packet.writeUInt16LE(packet.length + 2, 0);
    console.log(packet.toBuffer());
}
But how do I add the padding?
I am using this package, https://github.com/JoshGlazebrook/smart-buffer/
Smart-Buffer keeps track of its position for you so you do not need to specify an offset to know where to insert the data to pad your string. You could do something like this with your existing code:
channels.forEach(function (key){
    console.log(key);
    packet.writeString(key); // This is the string with no padding added.
    packet.writeUInt32BE(0); // Four 0x00's are added after the string itself.
});
I'm assuming you want: 42 4C 41 5A 45 00 00 00 00 42 4C 41 5A 45 00 00 00 00 etc.
Editing based on comments:
There is no built-in way to do what you want, but you could do something like this:
channels.forEach(function (key){
    console.log(key);
    packet.writeString(key);
    for(var i = 0; i <= (9 - key.length); i++)
        packet.writeInt8(0);
});
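Putting that together with the original function, a sketch along these lines should produce the padded layout (the field width implied by the loop above is an assumption about the protocol, so adjust it to match the real packet):
var SmartBuffer = require('smart-buffer'); // v1-style import, matching the usage in the question

function channel_list(channels) {
    var packet = new SmartBuffer();
    packet.writeUInt16LE(101);                  // packet id
    packet.writeUInt16LE(channels.length);      // number of elements in the array
    channels.forEach(function (key) {
        packet.writeString(key);                // the name itself, no terminator
        for (var i = 0; i <= (9 - key.length); i++) {
            packet.writeInt8(0);                // zero padding between entries (assumed width)
        }
    });
    packet.writeUInt16LE(packet.length + 2, 0); // prepend the total size, as in the original code
    return packet.toBuffer();
}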

SCSI Read (10) and Write (10) with the SCSI Generic Interface

I am trying to issue a SCSI READ(10) and WRITE(10) to an SSD. I am using this example code as a reference/starting point.
This is my scsi read:
#define READ_REPLY_LEN 32
#define READ_CMDLEN 10
void scsi_read()
{
    unsigned char Readbuffer[ SCSI_OFF + READ_REPLY_LEN ];
    unsigned char cmdblk [ READ_CMDLEN ] =
        { 0x28,           /* command */
          0,              /* lun/reserved */
          0,              /* lba */
          0,              /* lba */
          0,              /* lba */
          0,              /* lba */
          0,              /* reserved */
          0,              /* transfer length */
          READ_REPLY_LEN, /* transfer length */
          0 };            /* reserved/flag/link */

    memset(Readbuffer, 0, sizeof(Readbuffer));
    memcpy( cmd + SCSI_OFF, cmdblk, sizeof(cmdblk) );

    /*
     * +------------------+
     * | struct sg_header | <- cmd
     * +------------------+
     * | copy of cmdblk   | <- cmd + SCSI_OFF
     * +------------------+
     */

    if (handle_scsi_cmd(sizeof(cmdblk), 0, cmd,
                        sizeof(Readbuffer) - SCSI_OFF, Readbuffer )) {
        fprintf( stderr, "read failed\n" );
        exit(2);
    }
    hex_dump(Readbuffer, sizeof(Readbuffer));
}
And this is my scsi write:
void scsi_write ( void )
{
    unsigned char Writebuffer[SCSI_OFF];
    unsigned char cmdblk [] =
        { 0x2A, /* 0: command */
          0,    /* 1: lun/reserved */
          0,    /* 2: LBA */
          0,    /* 3: LBA */
          0,    /* 4: LBA */
          0,    /* 5: LBA */
          0,    /* 6: reserved */
          0,    /* 7: transfer length */
          0,    /* 8: transfer length */
          0 };  /* 9: control */

    memset(Writebuffer, 0, sizeof(Writebuffer));
    memcpy( cmd + SCSI_OFF, cmdblk, sizeof(cmdblk) );

    cmd[SCSI_OFF+sizeof(cmdblk)+0] = 'A';
    cmd[SCSI_OFF+sizeof(cmdblk)+1] = 'b';
    cmd[SCSI_OFF+sizeof(cmdblk)+2] = 'c';
    cmd[SCSI_OFF+sizeof(cmdblk)+3] = 'd';
    cmd[SCSI_OFF+sizeof(cmdblk)+4] = 'e';
    cmd[SCSI_OFF+sizeof(cmdblk)+5] = 'f';
    cmd[SCSI_OFF+sizeof(cmdblk)+6] = 'g';
    cmd[SCSI_OFF+sizeof(cmdblk)+7] = 0;

    /*
     * +------------------+
     * | struct sg_header | <- cmd
     * +------------------+
     * | copy of cmdblk   | <- cmd + SCSI_OFF
     * +------------------+
     * | data to write    |
     * +------------------+
     */

    if (handle_scsi_cmd(sizeof(cmdblk), 8, cmd,
                        sizeof(Writebuffer) - SCSI_OFF, Writebuffer )) {
        fprintf( stderr, "write failed\n" );
        exit(2);
    }
}
In the following example I do
scsi read
scsi write
scsi read
And I print the hex dumps of the data that is written (scsi write) and of what is read (scsi read):
Read(10)
[0000] 00 00 00 44 00 00 00 44 00 00 00 00 00 00 00 00 ...D...D ........
[0010] 00 2C 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
[0020] 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
[0030] 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
[0040] 00 00 00 00 ....
Write(10):
[0000] 00 00 00 00 00 00 00 24 00 00 00 00 00 00 00 00 ........ ........
[0010] 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
[0020] 00 00 00 00 2A 00 00 00 00 00 00 00 00 00 41 62 ........ ......Ab
[0030] 63 64 65 66 67 00 cdefg.
Read(10):
[0000] 00 00 00 44 00 00 00 44 00 00 00 00 00 00 00 00 ...D...D ........
[0010] 04 00 20 00 70 00 02 00 00 00 00 0A 00 00 00 00 ....p... ........
[0020] 04 00 00 00 41 62 63 64 65 66 67 00 00 00 00 00 ....Abcd efg.....
[0030] 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
[0040] 00 00 00 00 ....
After running the three commands again, I should read Abcdefg with the first read, right? But running them again changes nothing. You could now assume that the memory I use still holds the data from previous function calls, but I get the same result even though I run memset(Readbuff, 0, sizeof(Readbuff)) before the sys_read() happens.
I assumed that the LBA I try to write to is perhaps forbidden to write, and that I am reading the cache. But iterating over LBA addresses from 0x00-0xFF changes nothing - that means I read the same data (Abcdefg).
Do you know an example implementation of scsi read or writes with the scsi generic interface?
In SCSI, the units of the LBA and the transfer length are in blocks, sometimes called sectors. This is almost always 512 bytes. So, you can't read or write just 32 bytes. At a minimum, you'll have to do 512 bytes == one block. This one point is most of what you need to fix.
Your transfer length is zero in your scsi_write implementation, so it's not actually going to write any data.
You should use different buffers for the CDB and the write/read data. I suspect that confusion about these buffers is leading your implementation to write past the end of one of your statically-allocated arrays and over your ReadBuffer. Run it under valgrind and see what shows up.
And lastly, a lot could go wrong in whatever is in handle_scsi_cmd. It can be tricky to set up the data transfer... in particular, make sure you're straight on which way the data is going in the I/O header's dxfer_direction: SG_DXFER_TO_DEV for write, SG_DXFER_FROM_DEV for read.
Check out this example of how to do a read(16). This is more along the lines of what you're trying to accomplish.
https://github.com/hreinecke/sg3_utils/blob/master/examples/sg_simple16.c
