Node.js serial port to TCP

Is there a way to set up a host that streams a serial connection over TCP using Node.js? I want to stream the sensor data from an IoT device attached to my computer to a remote computer running a web server. Streaming the raw data is fine - the remote computer will process it. I was looking into the net and serialport npm packages, but I'm unsure how to marry the two...
Thanks!

Preparation
Pretty much every vendor or device has its own serial communication protocol. Usually these devices also use packets with headers and checksums, but each device does this in a different way.
The first question is really to what extent you want to forward the packet headers and checksum information. You may want to translate incoming packets to events, or perhaps even to some kind of JSON message.
Assuming that you just want to forward the data in raw format without any pre-processing, it is still valuable to determine where a packet starts and ends. When you flush data over TCP/IP, it's best not to do so halfway through one of those serial packets.
For instance, it could be that your device is a barcode scanner. Most barcode scanners send a CR (carriage return) at the end of each scan. It would make sense to actively read incoming bytes looking for a CR character. Then each time a CR character is noticed you flush your buffer of bytes.
But well, it isn't always a CR. Some devices package their data between STX (0x02) and ETX (0x03) characters. And there are some that send fixed-length packages (e.g. 12 bytes per message).
Just for clarity: you could end up sending your data every 100 bytes while a message is actually 12 bytes. That would split some of the packets, and once in a while your TCP receiver would receive an incomplete packet. Having said all that, you could also add all this logic on the TCP receiving side: when an incomplete packet is received, keep it in a buffer on the assumption that the next incoming chunk will contain the missing bytes.
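For example, a receiver that frames on a CR delimiter (as with the barcode scanner above) could reassemble messages from incoming TCP chunks roughly like this; a minimal sketch, assuming CR-terminated messages and a plain net server on the receiving side:
import * as net from "net";

// Minimal sketch: reassemble CR-terminated messages on the TCP receiving side.
// Assumes each serial message ends with a CR (0x0d); adjust for your device.
const CR = 0x0d;

const server = net.createServer((socket) => {
  let pending = Buffer.alloc(0); // holds an incomplete message between chunks

  socket.on("data", (chunk) => {
    pending = Buffer.concat([pending, chunk]);
    let idx: number;
    while ((idx = pending.indexOf(CR)) !== -1) {
      const message = pending.subarray(0, idx); // one complete message
      pending = pending.subarray(idx + 1);      // keep the remainder for the next round
      console.log("message:", message.toString("ascii"));
    }
  });
});

server.listen(3000); // port number is a placeholder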
Consider if it's worth it
Note that there are commercial RS232-to-Ethernet devices that you can buy off the shelf and configure (~100 EUR) that do exactly what you want. And often in the setup of that kind of device you would have the option to configure a flush character (e.g. that CR). MOXA is probably the best you can get; ADAM also makes decent devices. These vendors have been making this kind of device for ~30 years.
To get you started
But for the sake of exercise, here we go.
First of all, you would need something to communicate with your serial device.
I used this one:
npm install serialport@^9.1.0
You can pretty much blindly copy the following code, but obviously you need to set your own RS232 or USB port settings. Look in the manual of your device to determine the baud rate, data bits, stop bits, parity and optionally RTS/DTR.
import SerialPort from "serialport";
import { Transform } from "stream";

// toHexString is defined further below.

export class RS232Port {
  private port: SerialPort;

  constructor(private listener: (buffer: Buffer) => any, private protocol: Transform) {
    this.port = new SerialPort("/dev/ttyS0", {
      baudRate: 38400,
      dataBits: 8,
      stopBits: 1,
      parity: "none",
    });

    // check your RTS/DTR settings.
    // this.port.on('open', () => {
    //   this.port.set({rts: true, dtr: false}, () => {
    //   });
    // });

    const parser = this.port.pipe(this.protocol);
    parser.on('data', (data) => {
      console.log(`received packet:[${toHexString(data)}]`);
      if (this.listener) {
        this.listener(data);
      }
    });
  }

  sendBytes(buffer: Buffer) {
    console.log(`write packet:[${toHexString(buffer)}]`);
    this.port.write(buffer);
  }
}
The code above continuously reads data from a serial device and uses a "protocol" to determine where messages start and end. It also has a "listener", which is a callback, and it can send bytes with its sendBytes function.
That brings us to the protocol, which as explained earlier is something that should read until a separator is found.
Because I have no clue what your separator is, I will present you with an alternative, which just waits for a silence: it assumes that when there is no incoming data for a certain time, the message is complete.
import { Transform } from "stream";

export class TimeoutProtocol extends Transform {
  maxBufferSize: number;
  currentPacket: number[];
  interval: number;
  intervalID: any;

  constructor(options: { interval: number, maxBufferSize?: number }) {
    super()
    const _options = { maxBufferSize: 65536, ...options }
    if (!_options.interval) {
      throw new TypeError('"interval" is required')
    }
    if (typeof _options.interval !== 'number' || Number.isNaN(_options.interval)) {
      throw new TypeError('"interval" is not a number')
    }
    if (_options.interval < 1) {
      throw new TypeError('"interval" is not greater than 0')
    }
    if (typeof _options.maxBufferSize !== 'number' || Number.isNaN(_options.maxBufferSize)) {
      throw new TypeError('"maxBufferSize" is not a number')
    }
    if (_options.maxBufferSize < 1) {
      throw new TypeError('"maxBufferSize" is not greater than 0')
    }
    this.maxBufferSize = _options.maxBufferSize
    this.currentPacket = []
    this.interval = _options.interval
    this.intervalID = -1
  }

  _transform(chunk: Buffer, encoding, cb) {
    clearTimeout(this.intervalID)
    for (let offset = 0; offset < chunk.length; offset++) {
      this.currentPacket.push(chunk[offset])
      if (this.currentPacket.length >= this.maxBufferSize) {
        this.emitPacket()
      }
    }
    this.intervalID = setTimeout(this.emitPacket.bind(this), this.interval)
    cb()
  }

  emitPacket() {
    clearTimeout(this.intervalID)
    if (this.currentPacket.length > 0) {
      this.push(Buffer.from(this.currentPacket))
    }
    this.currentPacket = []
  }

  _flush(cb) {
    this.emitPacket()
    cb()
  }
}
Then finally the last piece of the puzzle is a TCP/IP connection. Here you have to determine which end is the client and which end is the server. I skipped that for now, because there are plenty of tutorials and code samples that show you how to set up a TCP/IP client-server connection.
In some of the code above I use a function toHexString(Buffer) to convert the content of a buffer to a hex format which makes it easier to print it to the console log.
export function toHexString(byteArray: Buffer) {
  let s = '0x';
  byteArray.forEach(function (byte) {
    s += ('0' + (byte & 0xFF).toString(16)).slice(-2);
  });
  return s;
}

Related

Communicating with ESP32 via serial with serialports.js (node.js)

I have an ESP32 board loaded with this software, which logs the number of detected Wi-Fi and Bluetooth devices and reports them via LoRaWAN. In senddata.cpp it seems to be logging out the values that I need (though I'm not quite sure I understand how or where it is sending them via serial):
ESP_LOGD(TAG, "Sending count results: pax=%d / wifi=%d / ble=%d", count.pax, count.wifi_count, count.ble_count);
I set up a node.js app with the SerialPort.io library to be able to read data coming over serial. I've successfully identified the COM port on my PC that is receiving data, and I can log out the data buffer as follows:
const SerialPort = require("serialport").SerialPort;
const serialPort = new SerialPort({
  path: "COM4",
  baudRate: 9600,
  autoOpen: false,
});

serialPort.open(function (error) {
  if (error) {
    console.log("failed to open: " + error);
  } else {
    console.log("serial port opened");
    serialPort.on("data", function (data) {
      // get buffered data and parse it to an utf-8 string
      console.log(data);
      data = data.toString("utf-8");
      console.log(data);
    });
    serialPort.on("error", function (data) {
      console.log("Error: " + data);
    });
  }
});
Which yields output in node.js as a buffer, e.g. <Buffer bc 08 AD>, but after the toString("utf-8") it is a bunch of gibberish. Clearly I am not encoding or decoding the serial output properly, but I'm not sure where to make adjustments. Does anyone know how I can get this serial output into the proper format to use in node.js?
--- Update Re: Questions ---
The board is a TTGO / LilyGO LoRa32 - the library I used seems to say it supports both this board and communication over SPI. I am able to get readable data via the debug console with the PlatformIO extension for VS Code on Windows / Mac. I believe the baud rate is 9600, which was the only thing I seemed to need to specify on the serialport side.
I did receive this advice from the library author:
You need
a messagebuffer, to store the payload
a queue, as buffer for the serial data
a protocol, suitable for your application
1+2: see spislave.cpp (change the SPI transmit calls by serial port calls)
3: consider overhead and checksum, e.g. transfer the payload as byte array or UTF8 string, e.g. comma separated string with checksum, as used in NMEA.
Unfortunately I'm a bit out of my depth to make sense of that (though I'm working on it).
Also - the JavaScript code that has successfully worked via The Things Network to decode the payload from the board is here.
Does anyone know how I can get this serial output into the proper format to use in node.js?
It looks like many of the smaller/older boards this software interfaces with use baud 9600, but line 98 of main.cpp specifies a baud rate of 115200 for the debug messages:
// setup debug output or silence device
#if (VERBOSE)
Serial.begin(115200);
esp_log_level_set("*", ESP_LOG_VERBOSE);
I suspect switching to a baud rate of 115200 will help:
const serialPort = new SerialPort({
  path: "COM4",
  baudRate: 115200,
  autoOpen: false,
});
If that doesn't do the trick, you can make sure the other serial parameters match those set in main.cpp, starting at line 394. UART0 is the one that emits the log messages on the ESP:
static void ble_spp_uart_init(void)
{
    uart_config_t uart_config = {
        .baud_rate = 115200,
        .data_bits = UART_DATA_8_BITS,
        .parity = UART_PARITY_DISABLE,
        .stop_bits = UART_STOP_BITS_1,
        .flow_ctrl = UART_HW_FLOWCTRL_RTS,
        .rx_flow_ctrl_thresh = 122,
        .source_clk = UART_SCLK_DEFAULT,
    };
    ...
Looking at the API, you could specify the other parameters like so:
const serialPort = new SerialPort({
  path: "COM4",
  baudRate: 115200,
  dataBits: 8,
  parity: "none",
  stopBits: 1,
});
though I'm not quite sure I understand how or where it is sending them via serial
The ESP_LOG functions are really just special wrappers around vprintf. vprintf writes to stdout by default, but the ESP redirects that to a dedicated UART (serial port). Check out the source:
static vprintf_like_t s_log_print_func = &vprintf;

void esp_log_writev(esp_log_level_t level,
                    const char *tag,
                    const char *format,
                    va_list args)
{
    if (!esp_log_impl_lock_timeout()) {
        return;
    }
    esp_log_level_t level_for_tag = s_log_level_get_and_unlock(tag);
    if (!should_output(level, level_for_tag)) {
        return;
    }
    (*s_log_print_func)(format, args);
}
That UART handles the encoding and buffers required to transmit over the serial port.
You need to check the following things:
Check that the baud rates of the ESP32 and the Node.js side are the same - most of the time garbage data is received on the COM port when the port configuration is not done properly.
Check what data you are expecting and in what format - is it text (UTF-8) or some binary format? You need to parse the output accordingly.
It would be clearer once you post a sample of the data the ESP32 is sending and the buffer you are receiving in the Node.js code.
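If the baud rate matches and the log output really is plain text, splitting the stream on newlines is usually enough. A minimal sketch, assuming the ReadlineParser that ships with recent serialport releases (@serialport/parser-readline); treat the package name and delimiter as assumptions to check against your installed version:
import { SerialPort } from "serialport";
import { ReadlineParser } from "@serialport/parser-readline";

// Assumption: the ESP32 debug log is newline-terminated UTF-8 text at 115200 baud.
const serialPort = new SerialPort({ path: "COM4", baudRate: 115200 });

// The parser buffers bytes until it sees the delimiter, so partial chunks
// like <Buffer bc 08 ad> are assembled into complete lines before decoding.
const parser = serialPort.pipe(new ReadlineParser({ delimiter: "\n" }));

parser.on("data", (line: string) => {
  console.log("log line:", line);
});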

native websocket api NodeJS for larger messages?

I was following an article about writing a socket server from scratch, and it's mostly working with small frames / packets, but when I try to send about 2 kB of data, I get this error:
internal/buffer.js:77
throw new ERR_OUT_OF_RANGE(type || 'offset',
^
RangeError [ERR_OUT_OF_RANGE]: The value of "offset" is out of range. It must be >= 0 and <= 7. Received 8
at boundsError (internal/buffer.js:77:9)
at Buffer.readUInt8 (internal/buffer.js:243:5)
at pm (/home/users/me/main.js:277:24)
at Socket.<anonymous> (/home/users/me/main.js:149:15)
at Socket.emit (events.js:315:20)
at addChunk (_stream_readable.js:297:12)
at readableAddChunk (_stream_readable.js:273:9)
at Socket.Readable.push (_stream_readable.js:214:10)
at TCP.onStreamRead (internal/stream_base_commons.js:186:23) {
code: 'ERR_OUT_OF_RANGE'
}
Here's my server code (some details were changed for security, but here it is in its entirety for the line numbers etc.), but the relevant part here is the function pm [= parseMessage] (towards the bottom):
let http = require('http'),
ch = require("child_process"),
crypto = require("crypto"),
fs = require("fs"),
password = fs.readFileSync("./secretPasswordFile.txt"),
callbacks = {
CHANGEDforSecUrITY(m, cs) {
if(m.password === password) {
if(m.command) {
try {
cs.my = ch.exec(
m.command,
(
err,
stdout,
stderr
) => {
cs.write(ans(s({
err,
stdout,
stderr
})));
}
);
} catch(e) {
cs.write(ans(
s({
error: e.toString()
})
))
}
}
if(m.exit) {
console.log("LOL", cs.my);
if(cs.my && typeof cs.my.kill === "function") {
cs.my.kill();
console.log(cs.my, "DID?");
}
}
cs.write(
ans(
s({
hi: 2,
youSaid:m
}))
)
} else {
cs.write(ans(s({
hey: "wrong password!!"
})))
}
console.log("hi!",m)
}
},
banned = [
"61.19.71.84"
],
server = http.createServer(
(q,r)=> {
if(banned.includes(q.connection.remoteAddress)) {
r.end("Hey man, " + q.connection.remoteAddress,
"I know you're there!!");
} else {
ch.exec(`sudo "$(which node)" -p "console.log(4)"`)
console.log(q.url)
console.log(q.connection.remoteAddress,q.connection.remotePort)
let path = q.url.substring(1)
q.url == "/" &&
(path = "index.html")
q.url == "/secret" &&
(path = "../main.js")
fs.readFile(
"./static/" + path,
(er, f) => {
if(er) {
r.end("<h2>404!!</h2>");
} else {
r.end(f);
}
}
)
}
}
)
server.listen(
process.env.PORT || 80,
c=> {
console.log(c,"helo!!!")
server.on("upgrade", (req, socket) => {
if(req.headers["upgrade"] !== "websocket") {
socket.end("HTTP/1.1 400 Bad Request");
return;
}
let key = req.headers["sec-websocket-key"];
if(key) {
let hash = gav(key)
let headers = [
"HTTP/1.1 101 Web Socket Protocol Handshake",
"Upgrade: WebSocket",
"Connection: Upgrade",
`Sec-WebSocket-Accept: ${hash}`
];
let protocol = req.headers[
"sec-websocket-protocol"
];
let protocols = (
protocol &&
protocol.split(",")
.map(s => s.trim())
|| []
);
protocols.includes("json") &&
headers
.push("Sec-WebSocket-Protocol: json");
let headersStr = (
headers.join("\r\n") +
"\r\n\r\n"
)
console.log(
"Stuff happening",
req.headers,
headersStr
);
fs.writeFileSync("static/logs.txt",headersStr);
socket.write(
headersStr
);
socket.write(ans(JSON.stringify(
{
hello: "world!!!"
}
)))
}
socket.on("data", buf => {
let msg = pm(buf);
console.log("HEY MAN!",msg)
if(msg) {
console.log("GOT!",msg);
for(let k in msg) {
if(callbacks[k]) {
callbacks[k](
msg[k],
socket
)
}
}
} else {
console.log("nope");
}
});
});
}
)
function pm(buf) {
/*
*structure of first byte:
1: if its the last frame in buffer
2 - 4: reserved bits
5 - 8: a number which shows what type of message it is. Chart:
"0": means we continue
"1": means this frame contains text
"2": means this is binary
"0011"(3) - "0111" (11): reserved values
"1000"(8): means connection closed
"1001"(9): ping (checking for response)
"1010"(10): pong (response verified)
"1010"(11) - "1111"(15): reserved for "control" frames
structure of second byte:
1: is it "masked"
2 - 8: length of payload, if less than 126.
if 126, 2 additional bytes are added
if 127, 8 additional bytes are added (a 64-bit length)
* */
const myFirstByte = buf.readUInt8(0);
const isThisFinalFrame = isset(myFirstByte,7) //first bit
const [
reserved1,
reserved2,
reserved3
] = [
isset(myFirstByte, 6),
isset(myFirstByte, 5),
isset(myFirstByte, 4) //reserved bits
]
const opcode = myFirstByte & parseInt("1111",2); //checks last 4 bits
//check if closed connection ("1000"(8))
if(opcode == parseInt("1000", 2))
return null; //shows that connection closed
//look for text frame ("0001"(1))
if(opcode == parseInt("0001",2)) {
const theSecondByte = buf.readUInt8(1);
const isMasked = isset(theSecondByte, 7) //1st bit from left side
let currentByteOffset = 2; //we are theSecondByte now, so 2
let payloadLength = theSecondByte & 127; //chcek up to 7 bits
if(payloadLength > 125) {
if(payloadLength === 126) {
payloadLength = buf.readUInt16BE(
currentByteOffset
) //read next two bytes from position
currentByteOffset += 2; //now we left off at
//the fourth byte, so thats where we are
} else {
//if only the second byte is full,
//that shows that there are 6 more
//bytes to hold the length
const right = buf.readUInt32BE(
currentByteOffset
);
const left = buf.readUInt32BE(
currentByteOffset + 4 //the 8th byte ??
);
throw new Error("brutal " + currentByteOffset);
}
}
//if we have masking byte set to 1, get masking key
//
//
//now that we have the lengths
//and possible masks, read the rest
//of the bytes, for actual data
const data = Buffer.alloc(payloadLength);
if(isMasked) {
//can't just copy it,
//have to do some stuff with
//the masking key and this thing called
//"XOR" to the data. Complicated
//formulas, llook into later
//
let maskingBytes = Buffer.allocUnsafe(4);
buf.copy(
maskingBytes,
0,
currentByteOffset,
currentByteOffset + 4
);
currentByteOffset += 4;
for(
let i = 0;
i < payloadLength;
++i
) {
const source = buf.readUInt8(
currentByteOffset++
);
//now mask the source with masking byte
data.writeUInt8(
source ^ maskingBytes[i & 3],
i
);
}
} else {
//just copy bytes directly to our buffer
buf.copy(
data,
0,
currentByteOffset++
);
}
//at this point we have the actual data, so make a json
//
const json = data.toString("utf8");
return p(json);
} else {
return "LOL IDK?!";
}
}
function p(str) {
try {
return JSON.parse(str);
} catch(e){
return str
}
}
function s(ob) {
try {
return JSON.stringify(ob);
} catch(e) {
return e.toString();
}
}
function ans(str) {
const byteLength = Buffer.byteLength(str);
const lengthByteCount = byteLength < 126 ? 0 : 2;
const payloadLength = lengthByteCount === 0 ? byteLength : 126;
const buffer = Buffer.alloc(
2 +
lengthByteCount +
byteLength
);
buffer.writeUInt8(
parseInt("10000001",2), //opcode is "1", at firstbyte
0
);
buffer.writeUInt8(payloadLength, 1); //at second byte
let currentByteOffset = 2; //already wrote second byte by now
if(lengthByteCount > 0) {
buffer.writeUInt16BE(
byteLength,
2 //more length at 3rd byte position
);
currentByteOffset += lengthByteCount; //which is 2 more bytes
//of length, since not supporting more than that
}
buffer.write(str, currentByteOffset); //the rest of the bytes
//are the actual data, see chart in function pm
//
return buffer;
}
function gav(ak) {
return crypto
.createHash("sha1")
.update(ak +'258EAFA5-E914-47DA-95CA-C5AB0DC85B11', "binary")
.digest("base64")
}
function isset(b, k) {
return !!(
b >>> k & 1
)
}
Given that this error does not happen with smaller packets, I'm taking an educated guess that this is due to the limitations of the code here, as mentioned in the official RFC documentation:
5.4. Fragmentation
The primary purpose of fragmentation is to allow sending a message
that is of unknown size when the message is started without having to
buffer that message. If messages couldn't be fragmented, then an
endpoint would have to buffer the entire message so its length could
be counted before the first byte is sent. With fragmentation, a
server or intermediary may choose a reasonable size buffer and, when
the buffer is full, write a fragment to the network.
A secondary use-case for fragmentation is for multiplexing, where
it is not desirable for a large message on one logical channel to
monopolize the output channel, so the multiplexing needs to be free to
split the message into smaller fragments to better share the output
channel. (Note that the multiplexing extension is not described in
this document.)
Unless specified otherwise by an extension, frames have no semantic
meaning. An intermediary might coalesce and/or split frames, if no
extensions were negotiated by the client and the server or if some
extensions were negotiated, but the intermediary understood all the
extensions negotiated and knows how to coalesce and/or split frames
in the presence of these extensions. One implication of this is that
in absence of extensions, senders and receivers must not depend on
the presence of specific frame boundaries.
The following rules apply to fragmentation:
o An unfragmented message consists of a single frame with the FIN
bit set (Section 5.2) and an opcode other than 0.
o A fragmented message consists of a single frame with the FIN bit
clear and an opcode other than 0, followed by zero or more frames
with the FIN bit clear and the opcode set to 0, and terminated by
a single frame with the FIN bit set and an opcode of 0. A
fragmented message is conceptually equivalent to a single larger
message whose payload is equal to the concatenation of the
payloads of the fragments in order; however, in the presence of
extensions, this may not hold true as the extension defines the
interpretation of the "Extension data" present. For instance,
"Extension data" may only be present at the beginning of the first
fragment and apply to subsequent fragments, or there may be
"Extension data" present in each of the fragments that applies
only to that particular fragment. In the absence of "Extension
data", the following example demonstrates how fragmentation works.
EXAMPLE: For a text message sent as three fragments, the first
fragment would have an opcode of 0x1 and a FIN bit clear, the
second fragment would have an opcode of 0x0 and a FIN bit clear,
and the third fragment would have an opcode of 0x0 and a FIN bit
that is set.
o Control frames (see Section 5.5) MAY be injected in the middle of
a fragmented message. Control frames themselves MUST NOT be
fragmented.
o Message fragments MUST be delivered to the recipient in the order
sent by the sender.
o The fragments of one message MUST NOT be interleaved between the
fragments of another message unless an extension has been
negotiated that can interpret the interleaving.
o An endpoint MUST be capable of handling control frames in the
middle of a fragmented message.
o A sender MAY create fragments of any size for non-control
messages.
o Clients and servers MUST support receiving both fragmented and
unfragmented messages.
o As control frames cannot be fragmented, an intermediary MUST NOT
attempt to change the fragmentation of a control frame.
o An intermediary MUST NOT change the fragmentation of a message if
any reserved bit values are used and the meaning of these values
is not known to the intermediary.
o An intermediary MUST NOT change the fragmentation of any message
in the context of a connection where extensions have been
negotiated and the intermediary is not aware of the semantics of
the negotiated extensions. Similarly, an intermediary that didn't
see the WebSocket handshake (and wasn't notified about its
content) that resulted in a WebSocket connection MUST NOT change
the fragmentation of any message of such connection.
o As a consequence of these rules, all fragments of a message are of
the same type, as set by the first fragment's opcode. Since
control frames cannot be fragmented, the type for all fragments in
a message MUST be either text, binary, or one of the reserved
opcodes.
NOTE: If control frames could not be interjected, the latency of a
ping, for example, would be very long if behind a large message.
Hence, the requirement of handling control frames in the middle of a
fragmented message.
IMPLEMENTATION NOTE: In the absence of any extension, a receiver
doesn't have to buffer the whole frame in order to process it. For
example, if a streaming API is used, a part of a frame can be
delivered to the application. However, note that this assumption
might not hold true for all future WebSocket extensions.
In the words of the article above:
Alignment of Node.js socket buffers with WebSocket message frames
Node.js socket data (I’m talking about net.Socket in this case, not
WebSockets) is received in buffered chunks. These are split apart with
no regard for where your WebSocket frames begin or end!
What this means is that if your server is receiving large messages
fragmented into multiple WebSocket frames, or receiving large numbers
of messages in rapid succession, there’s no guarantee that each data
buffer received by the Node.js socket will align with the start and
end of the byte data that makes up a given frame.
So, as you’re parsing each buffer received by the socket, you’ll need
to keep track of where one frame ends and where the next begins.
You’ll need to be sure that you’ve received all of the bytes of data
for a frame — before you can safely consume that frame’s data.
It may be that one frame ends midway through the same buffer in which
the next frame begins. It also may be that a frame is split across
several buffers that will be received in succession.
The following diagram is an exaggerated illustration of the issue. In
most cases, frames tend to fit inside a buffer. Due to the way the
data arrives, you’ll often find that a frame will start and end in
line with the start and end of the socket buffer. But this can’t be
relied upon in all cases, and must be considered during
implementation. This can take
some work to get right.
For the basic implementation that follows below, I have skipped any
code for handling large messages or messages split across multiple
frames.
So my problem here is that the article skipped the fragmentation code, which is kind of what I need to know... but in the RFC documentation, some examples of fragmented and unfragmented packets are given:
5.6. Data Frames
Data frames (e.g., non-control frames) are identified by opcodes
where the most significant bit of the opcode is 0. Currently defined
opcodes for data frames include 0x1 (Text), 0x2 (Binary). Opcodes
0x3-0x7 are reserved for further non-control frames yet to be
defined.
Data frames carry application-layer and/or extension-layer data.
The opcode determines the interpretation of the data:
Text
The "Payload data" is text data encoded as UTF-8. Note that a
particular text frame might include a partial UTF-8 sequence;
however, the whole message MUST contain valid UTF-8. Invalid
UTF-8 in reassembled messages is handled as described in
Section 8.1.
Binary
The "Payload data" is arbitrary binary data whose interpretation
is solely up to the application layer.
5.7. Examples
o A single-frame unmasked text message
* 0x81 0x05 0x48 0x65 0x6c 0x6c 0x6f (contains "Hello")
o A single-frame masked text message
* 0x81 0x85 0x37 0xfa 0x21 0x3d 0x7f 0x9f 0x4d 0x51 0x58
(contains "Hello")
o A fragmented unmasked text message
* 0x01 0x03 0x48 0x65 0x6c (contains "Hel")
* 0x80 0x02 0x6c 0x6f (contains "lo")
o Unmasked Ping request and masked Ping response
* 0x89 0x05 0x48 0x65 0x6c 0x6c 0x6f (contains a body of "Hello",
but the contents of the body are arbitrary)
* 0x8a 0x85 0x37 0xfa 0x21 0x3d 0x7f 0x9f 0x4d 0x51 0x58
(contains a body of "Hello", matching the body of the ping)
o 256 bytes binary message in a single unmasked frame
* 0x82 0x7E 0x0100 [256 bytes of binary data]
o 64KiB binary message in a single unmasked frame
* 0x82 0x7F 0x0000000000010000 [65536 bytes of binary data]
So it would appear that is an example of a fragment.
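To make the section 5.7 example concrete, here is a small sketch that builds those two fragments and reads the FIN bit and opcode back out of each; the byte values come straight from the RFC excerpt above:
// The fragmented unmasked text message "Hel" + "lo" from RFC 6455, section 5.7.
const fragment1 = Buffer.from([0x01, 0x03, 0x48, 0x65, 0x6c]); // FIN=0, opcode=0x1 (text), "Hel"
const fragment2 = Buffer.from([0x80, 0x02, 0x6c, 0x6f]);       // FIN=1, opcode=0x0 (continuation), "lo"

function describe(frame: Buffer) {
  const firstByte = frame.readUInt8(0);
  const fin = (firstByte & 0x80) !== 0;              // highest bit: is this the final frame?
  const opcode = firstByte & 0x0f;                   // lowest 4 bits: frame type
  const payloadLength = frame.readUInt8(1) & 0x7f;   // small unmasked frames: 7-bit length
  const payload = frame.subarray(2, 2 + payloadLength);
  console.log({ fin, opcode, payload: payload.toString("utf8") });
}

describe(fragment1); // { fin: false, opcode: 1, payload: 'Hel' }
describe(fragment2); // { fin: true, opcode: 0, payload: 'lo' }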
Also this seems relevant:
6.2. Receiving Data
To receive WebSocket data, an endpoint listens on the underlying
network connection. Incoming data MUST be parsed as WebSocket frames
as defined in Section 5.2. If a control frame (Section 5.5) is
received, the frame MUST be handled as defined by Section 5.5. Upon
receiving a data frame (Section 5.6), the endpoint MUST note the
/type/ of the data as defined by the opcode (frame-opcode) from
Section 5.2. The "Application data" from this frame is defined as
the /data/ of the message. If the frame comprises an unfragmented
message (Section 5.4), it is said that A WebSocket Message Has Been
Received with type /type/ and data /data/. If the frame is part of
a fragmented message, the "Application data" of the subsequent data
frames is concatenated to form the /data/. When the last fragment is
received as indicated by the FIN bit (frame-fin), it is said that A
WebSocket Message Has Been Received with data /data/ (comprised of
the concatenation of the "Application data" of the fragments) and type
/type/ (noted from the first frame of the fragmented message).
Subsequent data frames MUST be interpreted as belonging to a new
WebSocket message.
Extensions (Section 9) MAY change the semantics of how data is
read, specifically including what comprises a message boundary.
Extensions, in addition to adding "Extension data" before the
"Application data" in a payload, MAY also modify the "Application
data" (such as by compressing it).
The problem:
I don't know how to check for fragments and line them up with the Node buffers as mentioned in the article; I'm only able to read very small buffer amounts.
How can I parse larger data chunks using the fragmentation methods mentioned in the RFC documentation and the lining-up of Node.js buffers alluded to (but not explained) in the article?
I came across your question when I was working on my own "pure Node.js WebSocket server". All worked fine for payloads less than 1-2 KiB. When I tried to send more, but still within the [64 KiB - 1] limit (16-bit payload length), it randomly blew up the server with an ERR_OUT_OF_RANGE error.
Side note: https://medium.com/hackernoon/implementing-a-websocket-server-with-node-js-d9b78ec5ffa8 "Implementing a WebSocket server with Node.js" by Srushtika Neelakantam is an excellent article! Before I found it, WebSocket was always a black box to me. She describes a very simple and easy-to-understand implementation of a WebSocket client/server from scratch. Unfortunately it lacks (on purpose, to not make the article hard) support for larger payloads and buffer alignment. I just wanted to give Srushtika Neelakantam credit, because without her article I would never have written my own pure Node.js WebSocket server.
The solution described in the article fails only because the Node.js buffer simply runs out: there are no more bytes to read, but the function's logic expects more. You end up with the ERR_OUT_OF_RANGE error. The code simply wants to read bytes that are not yet available but will be available in the next 'data' event.
The solution to this problem is simply to check whether the next byte you want to read from the buffer is really available. As long as there are bytes, you are fine. The challenge starts when there are too few or too many bytes. In order to be more flexible, the function that parses the buffer should return not only the payload but a pair: payload and bufferRemainingBytes. That allows you to concat the buffers in the main data event handler.
We need to handle three cases:
When there is exactly the right number of bytes in the buffer to build a valid WebSocket frame, we return
{ payload: payloadFromValidWebSocketFrame, bufferRemainingBytes: Buffer.alloc(0) }
When there are enough bytes to build a valid WebSocket frame but there are still a few left over in the buffer, we return
{ payload: payloadFromValidWebSocketFrame, bufferRemainingBytes: bufferBytesAfterValidWebSocketFrame }
This case also forces us to wrap all getParsedBuffer calls in a do-while loop. The bufferRemainingBytes could still contain a second (or third, or more) valid WebSocket frame. We need to parse them all in the currently processed socket data event.
When there are not enough bytes to build a valid WebSocket frame, we return an empty payload and the entire buffer as bufferRemainingBytes:
{ payload: null, bufferRemainingBytes: buffer }
How to merge buffers together with bufferRemainingBytes in the subsequent socket data events? Here is the code:
server.on('upgrade', (req, socket) => {
  let bufferToParse = Buffer.alloc(0); // at the beginning we just start with 0 bytes
  // .........
  socket.on('data', buffer => {
    let parsedBuffer;
    // concat 'past' bytes with the 'current' bytes
    bufferToParse = Buffer.concat([bufferToParse, buffer]);
    do {
      parsedBuffer = getParsedBuffer(bufferToParse);
      // the output of the debugBuffer calls will be on the screenshot later
      debugBuffer('buffer', buffer);
      debugBuffer('bufferToParse', bufferToParse);
      debugBuffer('parsedBuffer.payload', parsedBuffer.payload);
      debugBuffer('parsedBuffer.bufferRemainingBytes', parsedBuffer.bufferRemainingBytes);
      bufferToParse = parsedBuffer.bufferRemainingBytes;
      if (parsedBuffer.payload) {
        // .........
        // handle the payload as you like, for example send to other sockets
      }
    } while (parsedBuffer.payload && parsedBuffer.bufferRemainingBytes.length);
    console.log('----------------------------------------------------------------\n');
  });
  // .........
});
Here is what my getParsedBuffer function looks like (it was called parseMessage in the article):
const getParsedBuffer = buffer => {
  // .........
  // whenever I want to read X bytes I simply check if I really can read X bytes
  if (currentOffset + 2 > buffer.length) {
    return { payload: null, bufferRemainingBytes: buffer };
  }
  payloadLength = buffer.readUInt16BE(currentOffset);
  currentOffset += 2;
  // .........
  // in 99% of cases this will prevent the ERR_OUT_OF_RANGE error to happen
  if (currentOffset + payloadLength > buffer.length) {
    console.log('[misalignment between WebSocket frame and NodeJs Buffer]\n');
    return { payload: null, bufferRemainingBytes: buffer };
  }
  payload = Buffer.alloc(payloadLength);
  if (isMasked) {
    // ......... I skip masked code as it's too long and not masked shows the idea same way
  } else {
    for (let i = 0; i < payloadLength; i++) {
      payload.writeUInt8(buffer.readUInt8(currentOffset++), i);
    }
  }
  // it could also happen at this point that we already have a valid WebSocket payload
  // but there are still some bytes remaining in the buffer
  // we need to copy all unused bytes and return them as bufferRemainingBytes
  bufferRemainingBytes = Buffer.alloc(buffer.length - currentOffset);
  // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this value could be >= 0
  for (let i = 0; i < bufferRemainingBytes.length; i++) {
    bufferRemainingBytes.writeUInt8(buffer.readUInt8(currentOffset++), i);
  }
  return { payload, bufferRemainingBytes };
}
Real life test of the described solution (64 KiB - 1 bytes):
In short - the above solution should work fine with payloads up to [64 KiB - 1] bytes. It's written entirely in pure NodeJs without any external library. I guess that is what you were looking for in your project ;)
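For payloads of 64 KiB and above, the length byte is 127 and the next 8 bytes hold a 64-bit length (see the 0x7F example in the RFC excerpt earlier). That case is not covered by the solution above; a hedged sketch of how such a branch could look, assuming Node 12+ for readBigUInt64BE (readExtendedLength is a hypothetical helper, not part of the gists):
// Sketch only: reading the extended payload length (126 / 127 cases) from a frame buffer.
// `buffer` is the raw socket data and `offset` points at the first extended-length byte.
function readExtendedLength(buffer: Buffer, basicLength: number, offset: number) {
  if (basicLength === 126) {
    if (offset + 2 > buffer.length) return null;            // not enough bytes yet
    return { length: buffer.readUInt16BE(offset), offset: offset + 2 };
  }
  if (basicLength === 127) {
    if (offset + 8 > buffer.length) return null;            // not enough bytes yet
    const longLength = buffer.readBigUInt64BE(offset);      // 64-bit length, Node 12+
    if (longLength > BigInt(Number.MAX_SAFE_INTEGER)) throw new Error('frame too large');
    return { length: Number(longLength), offset: offset + 8 };
  }
  return { length: basicLength, offset };                   // 0-125: length fits in the second byte
}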
Please find below the links to full version of my Binary Broadcast App on GitHub gist:
server https://gist.github.com/robertrypula/b813ffe23a9489bae1b677f1608676c8
client https://gist.github.com/robertrypula/f8da8f89819068a97bef4f27d04ad5b7
For some time (until I deploy the updated app with more features) the live demo of the gists above can be found here:
http://sndu.pl - let's send you the file
This is not a perfect answer but an approach; this is how I would do what you are trying to do. I'm writing pseudo code just to save time ;)
First I will create a custom object to communicate with:
class Request {
  id?: string;        // unique id of the request; the same request id can be used to continue a request or to reply to a request
  api?: string;       // the request type, i.e. what kind of request it is or how you want this data to be used; a client can perform multiple operations on the server like API_AUTH or API_CREATE_FILE etc.
  complete?: boolean; // a flag telling whether the request is complete or needs to be added to the queue to wait for more data
  error?: boolean;    // helpful in request replies, when the server has processed the request for an api and wants to respond with error or success
  message?: string;   // just a sample message that can be shown, or a helpful raw note for the developer to debug
  data?: any;         // the actual data being sent
}
Now, for communicating between the two sides (I'm taking a server-client approach in this example), we'll use this object.
Here is some pseudo code showing how to process it on the server:
class Server {
  requestQueue: Map<string, Request> = new Map();

  onRequestReceived(request: Request) {
    if (request !== undefined) {
      switch (request.api) {
        case "API_LONG_DATA":
          if (this.requestQueue.get(request.id) !== undefined) {
            if (request.complete) {
              // add this data to the request in the queue, process the request and remove it from the queue
            } else {
              // add data to the request in the queue and resave it to the map
            }
          } else {
            if (request.complete) {
              // process your request here
            } else {
              // add this request to the queue
            }
          }
          break;
        case "API_AUTH":
          // just a sample api
          break;
      }
    } else {
      // respond with error
    }
  }
}
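The placeholder branches above could be filled in roughly like this. This is a sketch only, reusing the requestQueue map and the Request shape from the snippet above; processRequest is a hypothetical stand-in for whatever the application does with a complete request, and the string concatenation of data is an assumption:
// Sketch: reassembling an API_LONG_DATA request from several Request chunks.
// Assumes the Request class from the snippet above is in scope and that data is a string.
declare function processRequest(request: Request): void; // placeholder for the real handler

function handleLongData(requestQueue: Map<string, Request>, request: Request) {
  const id = request.id ?? "";                   // id is optional on Request, so default it
  const pending = requestQueue.get(id);
  if (pending !== undefined) {
    // append this chunk's data to what has been buffered so far
    pending.data = (pending.data ?? "") + (request.data ?? "");
    if (request.complete) {
      requestQueue.delete(id);                   // done: remove it from the queue...
      processRequest(pending);                   // ...and hand the fully assembled request on
    } else {
      requestQueue.set(id, pending);             // resave the partially assembled request
    }
  } else if (request.complete) {
    processRequest(request);                     // short request that fits in one message
  } else {
    requestQueue.set(id, request);               // first chunk of a long request: queue it
  }
}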
This is easier than playing with buffers, I believe, and I have used this approach many times myself. Sending a large chunk of data in one go is not good practice anyway, because it can be used by someone to exploit your resources and it might fail on slow networks.
So I hope you get some hints from my approach ;)
UPDATE [full implementation]
First we need the websocket package, so:
npm install websocket
This is how we create a WebSocket server in Node.js using the websocket package and process incoming requests.
server.ts
import { server as WebSocketServer } from 'websocket';
import * as http from 'http';

// this is the request data object which will serve as a common data entity that both server and client are aware of
class Request {
  id?: string;        // unique id of the request; the same request id can be used to continue a request or to reply to a request
  api?: string;       // the request type, i.e. what kind of request it is or how you want this data to be used; a client can perform multiple operations on the server like API_AUTH or API_CREATE_FILE etc.
  complete?: boolean; // a flag telling whether the request is complete or needs to be added to the queue to wait for more data
  error?: boolean;    // helpful in request replies, when the server has processed the request for an api and wants to respond with error or success
  message?: string;   // just a sample message that can be shown, or a helpful raw note for the developer to debug
  data?: any;         // the actual data being sent
}

// this is optional if you want to show 404 on the page
const server = http.createServer((request, response) => {
  response.writeHead(404);
  response.end();
});
server.listen(8080, function() {
  console.log((new Date()) + ' Server is listening on port 8080');
});

const wsServer = new WebSocketServer({
  httpServer: server,
  autoAcceptConnections: false
});

function originIsAllowed(origin) {
  // put logic here to detect whether the specified origin is allowed.
  return true;
}

wsServer.on('request', (request) => {
  if (originIsAllowed(request.origin)) {
    const connection = request.accept('echo-protocol', request.origin);
    // this is the request queue, in case there are any heavy requests which can't fit into one message
    const requestQueue: Map<string, Request> = new Map();
    connection.on('message', (message) => {
      // I assume that the data being sent to the server is a utf8 string
      if (message.type === 'utf8') {
        // here we construct the request object from the incoming data
        const request: Request = JSON.parse(message.utf8Data);
        // here we process the request
        switch (request.api) {
          case "API_LONG_DATA":
            if (requestQueue.get(request.id) !== undefined) {
              if (request.complete) {
                // add this data to the request in the queue, process the request and remove it from the queue
              } else {
                // add data to the request in the queue and resave it to the map
              }
            } else {
              if (request.complete) {
                // process your request here
              } else {
                // add this request to the queue
              }
            }
            break;
          case "API_AUTH":
            // just a sample api
            break;
        }
      } else {
        // handle other data types
      }
    });
    connection.on('close', (reasonCode, description) => {
      // a connection was closed, do cleanup here
    });
  } else {
    // Make sure we only accept requests from an allowed origin
    request.reject();
  }
});
Here is how you send data from the client:
client.ts
import { client as WebSocketClient } from 'websocket';

// this is the request data object which will serve as a common data entity that both server and client are aware of
class Request {
  id?: string;        // unique id of the request; the same request id can be used to continue a request or to reply to a request
  api?: string;       // the request type, i.e. what kind of request it is or how you want this data to be used; a client can perform multiple operations on the server like API_AUTH or API_CREATE_FILE etc.
  complete?: boolean; // a flag telling whether the request is complete or needs to be added to the queue to wait for more data
  error?: boolean;    // helpful in request replies, when the server has processed the request for an api and wants to respond with error or success
  message?: string;   // just a sample message that can be shown, or a helpful raw note for the developer to debug
  data?: any;         // the actual data being sent
}

const client = new WebSocketClient();

client.on('connectFailed', (error) => {
  // handle error when connection failed
});

client.on('connect', (connection) => {
  connection.on('error', (error) => {
    // handle when some error occurs in an existing connection
  });
  connection.on('close', () => {
    // connection closed
  });
  connection.on('message', function(message) {
    // I'm considering that we are using utf8 data to communicate
    if (message.type === 'utf8') {
      // here we parse the request object
      const request: Request = JSON.parse(message.utf8Data);
      // here you can handle the request object
    } else {
      // handle other data types
    }
  });

  // here you start communicating with the server
  // example 1. normal request
  const authRequest: Request = {
    id: "auth_request_id",
    api: "API_AUTH",
    complete: true,
    data: {
      user: "testUser",
      pass: "testUserPass"
    }
  }
  connection.sendUTF(JSON.stringify(authRequest));

  // example 2. long data request
  const longRequestChunk1: Request = {
    id: "long_chunk_request_id",
    api: "API_LONG_CHUNK",
    complete: false, // observe this flag: as this is the first part of the chunked data, it needs to be added to the queue on the server
    data: "..."      // part one of the long data
  }
  const longRequestChunk2: Request = {
    id: "long_chunk_request_id", // request id must be the same
    api: "API_LONG_CHUNK",       // same api
    complete: true,              // as this is the last part of the chunked data, this flag is true
    data: "..."                  // part two of the long data
  }
  connection.sendUTF(JSON.stringify(longRequestChunk1));
  connection.sendUTF(JSON.stringify(longRequestChunk2));
});

client.connect('ws://localhost:8080/', 'echo-protocol');
I can explain it further if you want ;)

How to prevent repeated responses from Node.js server

We're running into a problem where we're getting multiple responses sent from our Node server to a web client, which are connected via a socket server (socket.io). By listening with Docklight, I can see that we're really only getting a single response from the serial device, but for some reason the Node server is sending multiples, and they accumulate: the first time you send a serial command (and it doesn't matter which command) you might only get a couple, next time a couple more, next time a couple more, and so on. So if you run several serial commands, you'll get back lots of duplicate responses.
Our environment is Windows 7 64 bit, Node V 4.5.0, serialport V 4.0.1. However, this needs to run on Windows, Mac & Linux when we're done. The dev team (me & one other guy) are both fairly new to Node, but otherwise capable developers.
I think what's happening is I'm not using the .flush() & .drain() functions properly and the serialport buffer still contains serial data. Our proprietary devices return either S>, or <Executed/> prompts when a command has completed, so I store the serial response in a buffer until I see one or the other, then process the data (in this example just providing a boolean response whether the device is responding with one or the other or not). For example, if I send a <CR><LF> to one of our devices, it should respond with S> (or <Executed/> depending).
The client calls into the server with this:
socket.on('getDeviceConnected', readDeviceResponse);

function readDeviceResponse(isDeviceResponding) {
  console.log('getDeviceConnected');
  console.log(isDeviceResponding);
}

function getDeviceConnected() {
  console.log("Sending carriage return / line feed.");
  socket.emit('getDeviceConnected', '\r\n');
}
And on the server, here's what I'm trying:
socket.on('getDeviceConnected', function (connectionData) {
  //write over serial buffer before the write occurs to prevent command accumulation in the buffer.
  serialBuffer = '';
  sbeSerialPort.write(connectionData, function (err, results) {
    //since there's no way to tell if the serial device hasn't responded, set a time out to return a false after allowing one second to elapse
    setTimeout(function () {
      console.log('Inside getDeviceConnected setTimeout');
      console.log('Is serial device responding:', isSerialDeviceResponding);
      if (!isSerialDeviceResponding) {
        console.log('Serial device timed out.');
        socket.emit('getDeviceConnected', false);
      }
    }, 1000);
    if (err) {
      console.log('Serial port error level:', err);
    }
    if (results) {
      if (results === 2) {
        console.log('Serial port is responding');
      }
    }
  });

  sbeSerialPort.on('data', function (serialData) {
    isSerialDeviceResponding = true;
    console.log('Does S> prompt exist?', serialData.lastIndexOf('S>'));
    while (!serialData.lastIndexOf('S>') > -1 || !serialData.lastIndexOf('<Executed/>') > -1) {
      serialBuffer += serialData;
      break;
    }
    if (isSerialDeviceResponding) {
      socket.emit('getDeviceConnected', true);
      isSerialDeviceResponding = true;
    }
    sbeSerialPort.flush(function (err, results) {
      if (err) {
        console.log(err);
        return;
      }
      if (results) {
        console.log('Serial port flush return code:', results);
      }
    });
  });
});
I'm not very sure about the .flush() implementation here, and I've omitted the .drain() part because neither of them seems to do much of anything (assuming they were correctly implemented).
How do I ensure that there is no data left in the serialport buffer when the .write() command is complete? Or do you see other problems with how I'm handling the serial data?
Edit, Source code up on pastebin.com:
Server.js
Client.js
HTML
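One thing that stands out in the server code above: sbeSerialPort.on('data', ...) is registered inside the socket.on('getDeviceConnected', ...) handler, so every command adds another 'data' listener and each listener emits its own reply, which would produce exactly the kind of accumulating duplicates described. A sketch of registering the serial listener once, outside the socket handler (variable names reused from the question's code, so treat this as an outline rather than a drop-in fix):
// Sketch: attach the serial 'data' listener once, not per socket event.
// sbeSerialPort, serialBuffer and socket are the names from the question's code.
let serialBuffer = '';

sbeSerialPort.on('data', function (serialData) {
  serialBuffer += serialData;
  // only answer once a complete prompt has arrived
  if (serialBuffer.indexOf('S>') > -1 || serialBuffer.indexOf('<Executed/>') > -1) {
    socket.emit('getDeviceConnected', true);
    serialBuffer = '';
  }
});

socket.on('getDeviceConnected', function (connectionData) {
  serialBuffer = '';
  sbeSerialPort.write(connectionData); // the single 'data' handler above fires as the reply comes in
});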

USB-to-RS485 using Nodejs

I am trying to receive and send data from a vacuum gauge (a previous model of https://www.pfeiffer-vacuum.com/en/products/measurement/digiline/gauges/?detailPdoId=13238&request_locale=en_US) with a computer (Linux 16.04) via a USB-to-RS485 interface (the half-duplex USB485-STISO from http://www.hjelmslund.dk/). When I send a request to the gauge using a specific protocol, it is supposed to answer the request, and I should be able to receive the answer with the interface. I managed to send data, but whenever I do, it seems that nothing comes back. I'm trying to do this with Node.js. The code that I have used so far is:
function pack(address, action, parameter, data) {
  var length = String('00' + data.length.toString()).slice(-2);
  var bufferAsString = address + action + parameter + length + data;
  var check = 0;
  for (var i = 0; i < bufferAsString.length; ++i) {
    check += bufferAsString.charCodeAt(i)
  }
  var checkSum = String('000' + String(check % 256)).slice(-3);
  var buffer = Buffer.from(bufferAsString + checkSum),
      carriageReturn = Buffer.from('\r');
  return Buffer.concat([buffer, carriageReturn]);
}

var serialPort = require('serialport');
var SerialPort = serialPort.SerialPort;
var port = new SerialPort('/dev/ttyUSB0', {
  baudrate: 9600,
  dataBits: 8,
  stopBits: 1,
  parity: 'none'
}, false);

port.open(function(err) {
  if (err) {
    return console.log('Error opening port: ', err.message);
  }
  console.log(port.isOpen());
  port.on('data', function (data) {
    console.log('Data: ' + data);
  });
  port.on('close', function() {
    console.log('port closed')
  });
  var sendBuffer = pack('001', '00', '740', '=?');
  setInterval(function() {
    port.write(sendBuffer, function(err, bytes) {
      console.log('send' + bytes)
    });
    port.drain();
  }, 1000)
});
That is supposed to send a request to the gauge every second to measure the pressure. I know that the request is being sent, since the TxD LED blinks briefly every second. But I receive no answer to that request.
I also tried other methods of sending data (mostly via Python and the terminal) but with similar success. The green lamp for sending always flashes, but then nothing happens and no answer is received.
I am at a loss as to what to try next and would really appreciate any help that you could give me.
UPDATE:
Ok, so I seem to have found one possible error in the whole thing. I was working with an oscilloscope to capture the signal that comes out of the interface when I send something. I started with single ASCII characters to see if the most basic signals are coming out right. For ASCII '0' the signal that is being sent is 10000011001, for ASCII '1' it is 10100011001. So those are almost what I would expect, except that there seem to be 2 start bits. Normally I would expect there to be only 1 start bit. Is there a way to change the number of start bits sent?
Here are the outputs of the oscilloscope:
This is a communication problem:
1. Check the protocol-based communication parameters like baud rate, parity and start/stop bits - they have to be consistent
(if you use the UART protocol on RS-485; other protocols like Modbus, Profibus, ... are also possible - this is a difference from normal RS-232).
If the gauge uses 9600 baud for communication, you cannot use 115200 baud in your command. In the Node.js code you do not set any parameters (I assume you use the UART protocol because of your Node.js). If the gauge uses any other protocol, the Node.js code will also not work, quite apart from whether parameters like baud rate, parity, ... are set in the code.
https://en.wikipedia.org/wiki/RS-485
For other protocols the Node.js serial module cannot be used:
http://libmodbus.org/
http://www.pbmaster.org/
2. Check the proprietary commands you send to the gauge. When I want to read out the data of my multimeter, I have to send an ASCII 'D' = 0100 0100 (bin) to get an answer (endianness?). If I send any other value, the multimeter stays silent.
http://electronicdesign.com/what-s-difference-between/what-s-difference-between-rs-232-and-rs-485-serial-interfaces
Unless you have DE pulled high and R̄Ē (receive enable, active low) tied to ground, your conversation will be rather one-sided.
And if you do wire it as above, you need to be able to deal with your own echo in the received data.
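A small sketch of one way to deal with that echo in Node: remember the last command written and strip it from the front of the received data. The names here (send, onData, lastCommand) are placeholders, not from the question's code:
// Sketch: on a half-duplex RS-485 link the bytes you transmit may be echoed back.
// Remember the last command sent and strip it from the front of the received data.
let lastCommand: Buffer = Buffer.alloc(0);
let received: Buffer = Buffer.alloc(0);

function send(port: { write: (b: Buffer) => void }, command: Buffer) {
  lastCommand = command;
  port.write(command);
}

function onData(chunk: Buffer) {
  received = Buffer.concat([received, chunk]);
  // once we have at least as many bytes as we sent, check for the echo and drop it
  if (lastCommand.length > 0 && received.length >= lastCommand.length) {
    if (received.subarray(0, lastCommand.length).equals(lastCommand)) {
      received = received.subarray(lastCommand.length);
    }
    lastCommand = Buffer.alloc(0); // only strip the echo once per command
  }
  // whatever remains in `received` is the gauge's actual reply
}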

Nodejs: Set highWaterMark of socket object

Is it possible to set the highWaterMark of a socket object after it has been created:
var http = require('http');
var server = http.createServer();

server.on('upgrade', function(req, socket, head) {
  socket.on('data', function(chunk) {
    var frame = new WebSocketFrame(chunk);
    // skip invalid frames
    if (!frame.isValid()) return;
    // if the length in the head is unequal to the chunk
    // node has maybe split it
    if (chunk.length != WebSocketFrame.getLength()) {
      socket.once('data', listenOnMissingChunks);
    }
  });
});

function listenOnMissingChunks(chunk, frame) {
  frame.addChunkToPayload(chunk);
  if (WebSocketFrame.getLength()) {
    // if still corrupted listen once more
  } else {
    // else proceed
  }
}
The above code example does not work. But how do I do it instead?
Further explanation:
When I receive big WebSocket frames, they get split into multiple data events. This makes it hard to parse the frames because I do not know whether a given frame is split or corrupted.
I think you misunderstand the nature of a TCP socket. Despite the fact that TCP sends its data over IP packets, TCP is not a packet protocol. A TCP socket is simply a stream of data. Thus, it is incorrect to view the data event as a logical message. In other words, one socket.write on one end does not equate to a single data event on the other.
There are many reasons that a single write to a socket does not map 1:1 to a single data event:
The sender's network stack may combine multiple small writes into a single IP packet. (The Nagle algorithm)
An IP packet may be fragmented (split into multiple packets) along its journey if its size exceeds any one hop's MTU.
The receiver's network stack may combine multiple packets into a single data event (as seen by your application).
Because of this, a single data event might contain multiple messages, a single message, or only part of a message.
In order to correctly handle messages sent over a stream, you must buffer incoming data until you have a complete message.
var net = require('net');

var max = 1024 * 1024 // 1 MB, the maximum amount of data that we will buffer (prevent a bad server from crashing us by filling up RAM)
  , allocate = 4096 // how much memory to allocate at once, 4 kB (there's no point in wasting 1 MB of RAM to buffer a few bytes)
  , buffer = new Buffer(allocate) // create a new buffer that allocates 4 kB to start
  , nread = 0 // how many bytes we've buffered so far
  , nproc = 0 // how many bytes in the buffer we've processed (to avoid looping over the entire buffer every time data is received)
  , client = net.connect({host: 'example.com', port: 8124}); // connect to the server

client.on('data', function(chunk) {
  if (nread + chunk.length > buffer.length) { // if the buffer is too small to hold the data
    var need = Math.min(chunk.length, allocate); // allocate at least 4kB
    if (nread + need > max) throw new Error('Buffer overflow'); // uh-oh, we're all full - TODO you'll want to handle this more gracefully
    var newbuf = new Buffer(buffer.length + need); // because Buffers can't be resized, we must allocate a new one
    buffer.copy(newbuf); // and copy the old one's data to the new one
    buffer = newbuf; // the old, small buffer will be garbage collected
  }
  chunk.copy(buffer, nread); // copy the received chunk of data into the buffer
  nread += chunk.length; // add this chunk's length to the total number of bytes buffered
  pump(); // look at the buffer to see if we've received enough data to act
});

client.on('end', function() {
  // handle disconnect
});

client.on('error', function(err) {
  // handle errors
});

function find(byte) { // look for a specific byte in the buffer
  for (var i = nproc; i < nread; i++) { // look through the buffer, starting from where we left off last time
    if (buffer.readUInt8(i, true) == byte) { // we've found one
      return i;
    }
  }
}

function slice(bytes) { // discard bytes from the beginning of a buffer
  buffer = buffer.slice(bytes); // slice off the bytes
  nread -= bytes; // note that we've removed bytes
  nproc = 0; // and reset the processed bytes counter
}

function pump() {
  var pos; // position of a NULL character
  while ((pos = find(0x00)) >= 0) { // keep going while there's a NULL (0x00) somewhere in the buffer
    if (pos == 0) { // if there's more than one NULL in a row, the buffer will now start with a NULL
      slice(1); // discard it
      continue; // so that the next iteration will start with data
    }
    process(buffer.slice(0, pos)); // hand off the message
    slice(pos + 1); // and slice the processed data off the buffer
  }
}

function process(msg) { // here's where we do something with a message
  if (msg.length > 0) { // ignore empty messages
    // here's where you have to decide what to do with the data you've received
    // experiment with the protocol
  }
}
You don't need to. Incoming data will almost certainly be split across two or more reads: this is the nature of TCP and there is nothing you can do about it. Fiddling with obscure socket parameters certainly won't change it. And the data will be split but certainly not corrupted. Just treat the socket as what it is: a byte stream.
