I made a TCP client socket in C#, and it works without any problem against a C# server.
Before sending data to the client, I first send a header containing the data length and some other fields.
Now I tried the same client code against a Node.js TCP server and ran into some issues with the Buffer size in Node.js.
First, this is my header function:
function writeInt(stream, int){
    var bytes = new Array(6); // six entries: marker + 4 length bytes + terminator
    bytes[0] = 2;
    bytes[1] = (int >> 24) & 0xff;
    bytes[2] = (int >> 16) & 0xff;
    bytes[3] = (int >> 8) & 0xff;
    bytes[4] = int & 0xff;
    bytes[5] = 0;
    stream.write(Buffer.from(bytes));
}
// bytes 1,2,3,4 carry the size of the buffer, big-endian
If I want to send a message to the client, I use:
var buf = Buffer.from("HELLO THIS IS A MESSAGE FROM SERVER");
writeInt(sock, buf.length)
sock.write(buf);
That works because I use a string, but when I tried to send integers I got an issue, like:
var buf = Buffer.from([1,2222,999,666]);
writeInt(sock, buf.length)
sock.write(buf);
First of all, the length of the buffer is wrong: when I use buf.length it returns 4!!
I tried other methods found on the Node.js website, like:
let buf = Buffer.alloc(3); // allocates a 3-byte buffer (zero-filled)
buf.writeUInt16BE(220);
and
let buf = Buffer.allocUnsafe(2);
buf.writeUInt16BE(1234);
All of those return the wrong buffer length, or a buffer length that doesn't match what I sent with the writeInt function.
This only happens with integers; when I send strings it works without any issue.
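A likely explanation (my note, since no answer is quoted for this question): Buffer.from([1,2222,999,666]) stores one byte per array element, truncating each value to the 0-255 range, so buf.length is 4 by design; 2222, 999 and 666 simply don't fit into one byte each. A minimal sketch of sending the integers as explicit 32-bit values instead:
var ints = [1, 2222, 999, 666];
var buf = Buffer.alloc(ints.length * 4); // 4 bytes per 32-bit integer
ints.forEach(function(n, i){
    buf.writeInt32BE(n, i * 4); // big-endian, matching the writeInt header
});
writeInt(sock, buf.length); // length is now 16, not 4
sock.write(buf);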
How to write a single file while reading from multiple input streams of the exact same file from different locations with Node.js?
In case it's still not clear, maybe this helps:
I want more download performance. Let's say we have 2 locations for the same file, each of which can serve only 10 MB/s downstream, so I want to download one part from the first location and another from the second in parallel, to get the file at 20 MB/s.
So both streams need to get joined somehow, and both streams need to know the range they are downloading.
I have 2 examples:
var http = require('http')
var fs = require('fs')

// will write to disk __dirname/file1.zip
function writeFile(fileStream){
    //...
}

// This example assumes downloading from 2 HTTP locations
http.get('http://location1/file1.zip', writeFile)
http.get('http://location2/file1.zip', writeFile)
var fs = require('fs')

// will write to disk __dirname/file1.zip
function writeFile(fileStream){
    //...
}

// this example is reading the same file from 2 different disks
writeFile(fs.createReadStream('/mount/volume1/file1.zip'))
writeFile(fs.createReadStream('/mount/volume2/file1.zip'))
How I think it would work:
Each read stream needs to check whether a given content range has already been written before reading the next chunk from its source, and maybe each stream should start reading at a different location in the file.
If the total file content length is X, we divide it into smaller chunks and create a map where each entry has a fixed content length, so we know which parts we already have and which parts we are still downloading; see the sketch right after this.
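A minimal sketch of such a chunk map (my own illustration; buildChunkMap and the state field are hypothetical names, not from the question):
function buildChunkMap(totalLength, chunkSize) {
    const chunks = [];
    for (let start = 0; start < totalLength; start += chunkSize) {
        chunks.push({
            start,                                             // first byte of this range
            end: Math.min(start + chunkSize, totalLength) - 1, // last byte, inclusive
            state: 'pending'                                   // 'pending' | 'downloading' | 'done'
        });
    }
    return chunks;
}
// e.g. buildChunkMap(1000, 256) yields ranges 0-255, 256-511, 512-767, 768-999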
Trying to answer this question myself:
We can start with a simple, optimistic read loop:
const fs = require('fs')

let SIZE = 64; // read in 64-byte intervals
let buffers = []
let bytesRead = 0

function readParallel(filepath, callback){
    fs.open(filepath, 'r', function(err, fd) {
        fs.fstat(fd, function(err, stats) {
            let fileSize = stats.size;
            while (bytesRead < fileSize) {
                let size = Math.min(SIZE, fileSize - bytesRead);
                let buffer = Buffer.alloc(size); // new Buffer() is deprecated
                // offset 0 = start of the target buffer; position = where to read in the file
                let read = fs.readSync(fd, buffer, 0, size, bytesRead);
                buffers.push(buffer);
                bytesRead += read;
            }
            callback(Buffer.concat(buffers));
        });
    });
}
// At the end: Buffer.concat(buffers) === the file content
fs.createReadStream() has an option you can pass to specify the start position:
let f = fs.createReadStream("myfile.txt", {start: 1000});
You could also open a normal file descriptor with fs.open(), then fs.read() one byte from a position right before where you want the stream to be positioned, using the position argument to fs.read(). You can then pass that file descriptor into fs.createReadStream() as an option, and the stream will start with that file descriptor and position (though obviously the start option to fs.createReadStream() is a bit simpler).
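A minimal sketch of the file-descriptor variant (my illustration of the description above, using the fd option together with start rather than the one-byte fs.read() trick):
const fs = require('fs');

fs.open('myfile.txt', 'r', (err, fd) => {
    // The fd option makes the stream reuse this descriptor; start positions the first read
    const stream = fs.createReadStream(null, { fd: fd, start: 1000 });
    stream.on('data', (chunk) => console.log(chunk.length));
});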
Using csv-parse with csv-stringify from the CSV Project.
const fs = require('fs');
const parse = require('csv-parse');
const stringify = require('csv-stringify');

const stringifier = stringify();
const writeFile = fs.createWriteStream('out.csv');
stringifier.pipe(writeFile);

// end: false keeps the stringifier open until the second source is done
fs.createReadStream('file1.csv').pipe(parse()).pipe(stringifier, { end: false });
fs.createReadStream('file2.csv').pipe(parse()).pipe(stringifier);
Here I parse each file separately (using a different parse stream for each source), then pipe both into the same stringify stream (kept open with end: false until the second source finishes) and write the result to the destination. Note that because both sources are read concurrently, the relative row order is not guaranteed.
Range Locking
The answer is advisory locking; it is as simple as what BitTorrent does:
split the whole file, or a part of it, into multiple smaller parts
lock a file range and fetch that range from a list of sources
use the chunk map created in step 1 to drive a FIFO queue; it contains all the metadata
To get a file from multiple sources, a JS implementation would look like this
(assuming all sources are valid; I put no error handling in here):
const queue = [];
const sources = ['https://example.com/file', 'https://example1.com/file'];
// HEAD request to learn the total size from the Content-Length header
const fileSize = await fetch(sources[0], { method: 'HEAD' })
    .then(({ headers }) => Number(headers.get('content-length')));
const targetBuffer = new Uint8Array(fileSize);
const charset = 'x-user-defined';
// Maps bytes to the Unicode Private Use Area so you can get bits as chars
const binaryRawEnablingHeader = `text/plain; charset=${charset}`;
const requestDefaults = {
    headers: {
        'Content-Type': binaryRawEnablingHeader,
        'range': 'bytes=2-5,10-13'
    }
}
const downloadPlan = /* some logic that puts that bytes into the target WiP */
// use response.text() and then convert that to bytes via the
// Unicode Private Use Area 0xF700-0xF7FF
// (Array.from is used because .map() skips the holes of new Array(n))
const convertToAbyte = (chars) =>
    Array.from(chars, (char) => char.charCodeAt(0) & 0xff);
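A hedged sketch of how one range fetch could land in targetBuffer (fetchRange is a hypothetical helper of mine, not part of the plan above):
// Fetch one byte range from a source and copy the decoded bytes
// into the shared target buffer at the right offset.
async function fetchRange(source, start, end) {
    const response = await fetch(source, {
        headers: {
            'Content-Type': binaryRawEnablingHeader,
            'range': `bytes=${start}-${end}`
        }
    });
    targetBuffer.set(convertToAbyte(await response.text()), start);
}
// e.g. two sources working on the two halves of the file in parallel:
// const mid = Math.floor(fileSize / 2);
// await Promise.all([
//     fetchRange(sources[0], 0, mid - 1),
//     fetchRange(sources[1], mid, fileSize - 1)
// ]);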
I am using zlib in Node.js to compress a string. Compressing the string gives me a Buffer. I want to send that buffer in a PUT request, but the PUT request rejects the Buffer because it accepts only a string. I am not able to convert the Buffer to a string in a way that lets the receiving end decompress that string to recover the original data. I am not sure how I can convert the buffer to a string, convert that string back to a buffer, and then decompress the buffer to get the original string.
let zlib = require('zlib');
// compressing 'str' and getting the result converted to string
let compressedString = zlib.deflateSync(JSON.stringify(str)).toString();
//decompressing the compressedString
let decompressedString = zlib.inflateSync(compressedString);
The last line is causing an issue saying the input is invalid.
I tried converting 'compressedString' to a buffer and then decompressing it, but that does not help either.
//converting the string to a buffer
let bufferedString = Buffer.from(compressedString, 'utf8');
//decompressing the buffer
let decompressedBufferString = zlib.inflateSync(bufferedString);
This code also throws an exception saying the input is not valid.
I would suggest reading the documentation for zlib, but the usage is pretty clear.
var Buffer = require('buffer').Buffer;
var zlib = require('zlib');

// create the buffer first and pass the result to zlib
let input = Buffer.from(str); // new Buffer() is deprecated
// start the compression by passing the buffer to zlib
let compressedString = zlib.deflateSync(input);

// To inflate you do the same thing, but pass the compressed
// buffer to inflateSync() and chain toString()
let decompressedString = zlib.inflateSync(compressedString).toString();
There are a number of ways to handle streams but this is what you are trying to achieve with the code provided.
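If the transport really does require a string (as the PUT request in the question does), one safe round trip is base64; this is my suggestion on top of the quoted answer, not part of it, reusing str from the question:
let zlib = require('zlib');

// Buffer -> base64 string, which survives any text-only transport
let wireString = zlib.deflateSync(JSON.stringify(str)).toString('base64');

// receiving end: base64 string -> Buffer -> original data
let original = JSON.parse(zlib.inflateSync(Buffer.from(wireString, 'base64')).toString());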
Try sending the buffer as a latin1 string, not a utf8 string. For instance, if your buffer is in the mybuf variable:
mybuf.toString('latin1');
And send mybuf to your API. Then in your frontend code you can do something like this, supposing your response is in the response variable:
const byteNumbers = new Uint8Array(response.length);
for (let i = 0; i < response.length; i++) {
byteNumbers[i] = response[i].charCodeAt(0);
}
const blob = new Blob([byteNumbers], {type: 'application/gzip'});
In my experience the transferred size will be just a little higher this way compared to sending the raw buffer, but at least, unlike with utf8, you get your original binary data back. I still don't know how to do it with utf8 encoding; according to this SO answer it doesn't seem possible.
I'm now trying to send bytes continuously from Node.js (server) to Android (client). Let me show a code example:
var net = require('net');
var server = net.createServer(function(c){
    c.on('data', function(data){
        if(data == 'foo'){
            for(var i = 1; i <= 255; i++){
                var byteData = makeBytedata();
                c.write(byteData);
                wait(100); // pseudocode: pause 100 ms between writes
            }
        }
    });
});
This code does not work reliably because it sometimes combines several byteData writes into one packet. Does anyone have a solution for sending the bytes separately?
net.createServer creates a TCP server, and TCP does not send messages separately. TCP is a stream protocol, which means that when you write bytes to the socket you get the same bytes in the same order at the receiving end, but with no message boundaries preserved.
One workaround: define a format for your messages, so that your client can determine the beginning and end of a message within the socket stream. For example, you could use a \n to mark the end of a message:
for(var i = 1; i <= 255; i++){
    var byteData = makeBytedata();
    c.write(byteData + '\n');
}
Then the client could separate them by \n.
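A delimiter works for text, but for binary data a length prefix is safer. This sketch is my addition, echoing the header format from the first question above; writeFrame is a hypothetical name:
// Prefix each message with its length so the client can re-split the stream
function writeFrame(socket, payload) {
    var header = Buffer.alloc(4);
    header.writeUInt32BE(payload.length, 0); // 4-byte big-endian length
    socket.write(Buffer.concat([header, payload]));
}
// usage inside the loop: writeFrame(c, makeBytedata());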
The other way could be to use UDP/Dgram
var dgram = require("dgram"),
server = dgram.createSocket('udp4');
server.on("message", function(msg, rinfo) {
// send message to client
});
I encountered a strange WebSocket problem.
Server in Node.js:
var websocket = require('websocket');
var http = require('http');

var transportServer = http.createServer(function(request, response) {});
var wsServer = new websocket.server({ httpServer: transportServer });

wsServer.on('request', function(request) {
    var connection = request.accept(null, request.origin);
    console.log('connected');
    connection.on('message', function(message) {
        console.log('message received');
    });
    connection.on('close', function(){
        console.error('connection closed');
        process.exit(0);
    });
});

transportServer.listen(5000);
Client in the browser:
var ws = new ReconnectingWebSocket('ws://localhost:5000');
ws.onopen = function(){
    var buf = '';
    for(var i = 1; i <= 65536; ++i) buf = buf + 'a';
    ws.send(buf);
};
The example above works, but if I change 65536 in the for loop to 65537 it fails: the server does not print 'message received', prints 'connection closed' instead, and no error is reported on the server or the client. Is there a maximum message length in WebSocket?
WebSocket frames have their payload length encoded in bytes. Two bytes can hold at most 0xffff (65535); to represent 65536 you need at least three bytes (0x010000).
So this is a limitation of the server-side WebSocket implementation. You could consider using node-walve as the server-side WebSocket package, as it supports lengths up to 4 MiB. Serving bigger frames is currently not straightforward in Node.js without downsides, as an unsigned 32-bit integer is the biggest number supported by Node's buffer module.
More detailed answer:
A single WebSocket frame supports three different length encodings:
Single byte (values up to 0x7d)
Two bytes (the first length byte is 0x7e, so we know to interpret the following two bytes as the length)
Eight bytes (the first length byte is 0x7f, so we interpret the following eight bytes as the length)
However, as pointed out at the beginning, eight bytes would be an unsigned 64-bit integer, which is not natively supported by JavaScript.
The "websocket" package seems to support only two length bytes; "walve" supports up to four.
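A sketch of that length encoding when building an unmasked server-to-client text frame header, following the three cases above (encodeFrameHeader is a hypothetical name of mine):
function encodeFrameHeader(payloadLength) {
    const FIN_TEXT = 0x81; // FIN bit set, text opcode
    if (payloadLength <= 0x7d) {
        return Buffer.from([FIN_TEXT, payloadLength]); // single length byte
    }
    if (payloadLength <= 0xffff) {
        const header = Buffer.alloc(4);
        header[0] = FIN_TEXT;
        header[1] = 0x7e; // marker: the next two bytes are the length
        header.writeUInt16BE(payloadLength, 2);
        return header;
    }
    const header = Buffer.alloc(10);
    header[0] = FIN_TEXT;
    header[1] = 0x7f; // marker: the next eight bytes are the length
    header.writeUInt32BE(0, 2);             // high 32 bits (zero below 4 GiB)
    header.writeUInt32BE(payloadLength, 6); // low 32 bits
    return header;
}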
WebSocket on the client:
socket.send('helloworld');
WebSocket on Node.js:
socket.ondata = function(d, start, end){
    // I suppose that start and end indicate the location of the
    // actual data 'helloworld' after the headers
    var data = d.toString('utf8', start, end);
    // Then I'm just printing it
    console.log(data);
};
but I'm getting this: �����]���1���/�� on the terminal :O
I have tried to understand this doc: https://www.rfc-editor.org/rfc/rfc6455#section-5.2 but it's hard to follow because I don't know what I should be working with; I mean, I can't see the data even with toString?
I have tried to follow and test the answer to this question: How can I send and receive WebSocket messages on the server side? But I can't get it to work; with that answer I was getting an array like [false, true, false, false, true, true, false] etc... and I don't really know what to do with it.. :\
So I'm a bit confused: what the hell should I do after I get the data from the client side to get the real message?
I'm using the plain client-side and Node.js APIs, without any library.
Which node.js library are you using? Judging by the fact that you are hooking socket.ondata, it looks like the HTTP server API. WebSocket is not HTTP. It has an HTTP-compatible handshake so that the WebSocket and HTTP services can live on the same port, but that's where the similarity ends. After the handshake, WebSocket is a framed, full-duplex, long-lived message transport, more similar to regular TCP sockets than to HTTP.
If you want to implement your own WebSocket server in node.js you are going to want to use the socket library directly (or build on/borrow existing WebSocket server code).
Here is a Node.js based WebSocket server that bridges from WebSocket to TCP sockets: https://github.com/kanaka/websockify/blob/master/other/websockify.js Note that it is for the previous Hixie version of the protocol (I haven't had opportunity or motivation to update it yet). The modern HyBI version of the protocol is very different but you might be able to glean some useful information from that implementation.
You can in fact start with Node's HTTP API. That is exactly what I did when writing the WebSocket-Node module https://github.com/Worlize/WebSocket-Node
If you don't want to use an existing WebSocket library (though you really should), then you need to be able to parse the binary data format defined by the RFC. It is very clear about the format and exactly how to interpret the data. From each frame you have to read all the flags, interpret the frame size, possibly read the masking key, and unmask the contents as you read them from the wire.
That is one reason you're not seeing anything recognizable: in WebSockets, all client-to-server communication is obfuscated by applying a random mask to the contents using XOR, as a security precaution against poisoning the cache of older proxy servers that don't know about WebSockets.
There are two things:
Which node.js version are you using? I have never seen a data event with start and end; the emitted event is just data, with a buffer/string as its argument.
More importantly, if you are hooking into the HTTP socket you should take care of the HTTP packet itself. It contains headers, a body and a trailer. There might be garbage in there.
Here is a solution from this post:
https://medium.com/hackernoon/implementing-a-websocket-server-with-node-js-d9b78ec5ffa8
function parseMessage(buffer) {
    const firstByte = buffer.readUInt8(0);
    // const isFinalFrame = Boolean((firstByte >>> 7) & 0x1);
    // const [reserved1, reserved2, reserved3] = [Boolean((firstByte >>> 6) & 0x1),
    //     Boolean((firstByte >>> 5) & 0x1), Boolean((firstByte >>> 4) & 0x1)];
    const opCode = firstByte & 0xF;
    // We can return null to signify that this is a connection termination frame
    if (opCode === 0x8)
        return null;
    // We only care about text frames from this point onward
    if (opCode !== 0x1)
        return;
    const secondByte = buffer.readUInt8(1);
    const isMasked = Boolean((secondByte >>> 7) & 0x1);
    // Keep track of our current position as we advance through the buffer
    let currentOffset = 2;
    let payloadLength = secondByte & 0x7F;
    if (payloadLength > 125) {
        if (payloadLength === 126) {
            payloadLength = buffer.readUInt16BE(currentOffset);
            currentOffset += 2;
        } else {
            // 127
            // If this has a value, the frame size is ridiculously huge!
            // const leftPart = buffer.readUInt32BE(currentOffset);
            // const rightPart = buffer.readUInt32BE(currentOffset += 4);
            // Honestly, if the frame length requires 64 bits, you're probably doing it wrong.
            // In Node.js you'd need the BigInt type, or a special library, to handle this.
            throw new Error('Large payloads not currently implemented');
        }
    }
    const data = Buffer.alloc(payloadLength);
    // Only unmask the data if the masking bit was set to 1
    if (isMasked) {
        const maskingKey = buffer.readUInt32BE(currentOffset);
        currentOffset += 4;
        // Loop through the source buffer one byte at a time, keeping track of which
        // byte in the masking key to use in the next XOR calculation
        for (let i = 0, j = 0; i < payloadLength; ++i, j = i % 4) {
            // Extract the correct byte mask from the masking key
            const shift = j == 3 ? 0 : (3 - j) << 3;
            const mask = (shift == 0 ? maskingKey : (maskingKey >>> shift)) & 0xFF;
            // Read a byte from the source buffer
            const source = buffer.readUInt8(currentOffset++);
            // XOR the source byte and write the result to the data buffer
            data.writeUInt8(mask ^ source, i);
        }
    } else {
        // Not masked - we can just copy the data as-is
        buffer.copy(data, 0, currentOffset);
    }
    return data;
}
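A minimal usage sketch (my addition, assuming a raw net socket after a completed handshake and a frame that arrives in a single chunk):
// After the HTTP upgrade handshake, feed incoming chunks to the parser
socket.on('data', function(chunk) {
    const payload = parseMessage(chunk);
    if (payload === null) return socket.end(); // connection termination frame
    if (payload) console.log(payload.toString('utf8'));
});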