Node.js - reading the fields of a DNS packet

I want to read a DNS packet caught by a UDP server's on('message') event. How can I read fixed-size pieces of the packet data? I want to read every field of the DNS packet separately, by its size; that is, read a Node.js Buffer object a specific number of bits at a time.
var dgram = require('dgram');

var s = dgram.createSocket('udp4');
s.bind(53, function() {
});
s.on('message', function(msg, rinfo) {
    console.log("Length = " + msg.length);
    console.log(msg.toString('binary'));
    console.log(msg);
    console.log("-----------------------------------------------------------------------");
});
How can I get at each field's data from the buffer passed to the on('message') handler?
Thanks.

Please check https://www.npmjs.org/package/native-dns-packet
It provides exactly what you need:
Packet.parse(buffer) returns an instance of Packet
Packet.write(buffer, packet) writes the given packet into the buffer, truncating where appropriate
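If you would rather read the fields yourself, the DNS header is six fixed-size 16-bit big-endian fields (RFC 1035, section 4.1.1), so it can be read straight off the Buffer with readUInt16BE, and the sub-byte flag fields extracted with shifts and masks. A minimal sketch of header parsing (the variable-length question and answer sections take more work):
var dgram = require('dgram');

var s = dgram.createSocket('udp4');
s.on('message', function (msg, rinfo) {
    // The DNS header is six 16-bit big-endian words (RFC 1035, section 4.1.1).
    var header = {
        id:      msg.readUInt16BE(0),  // query identifier
        flags:   msg.readUInt16BE(2),  // QR, OPCODE, AA, TC, RD, RA, Z, RCODE
        qdcount: msg.readUInt16BE(4),  // number of question entries
        ancount: msg.readUInt16BE(6),  // number of answer records
        nscount: msg.readUInt16BE(8),  // number of authority records
        arcount: msg.readUInt16BE(10)  // number of additional records
    };
    // Fields smaller than a byte are pulled out of the flags word
    // with shifts and masks.
    var qr = (header.flags >> 15) & 0x1;     // 0 = query, 1 = response
    var opcode = (header.flags >> 11) & 0xf;
    var rcode = header.flags & 0xf;
    console.log(header, { qr: qr, opcode: opcode, rcode: rcode });
});
s.bind(53);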

Node.js - Is it acceptable to save a file stream to disk, solely for the purpose of getting the file length?

I have multiple Node.js servers running on my backend. One is an API server, which can accept image files from a 3rd party. The image files are streamed to the stdin of an ImageMagick process, and then the stdout of the ImageMagick process is streamed to a Node.js TCP server where the file is ultimately saved locally. The TCP server then needs to send a response to the API server after the file is successfully saved, so I need some way for the TCP server to know when it has the entire file (i.e. I can't simply close the socket on the API server after the file is sent).
One solution I could use is to save the stdout of the ImageMagick process to a temporary file on the API server, so I can get the length of the full file before I send it and embed the length at the beginning of the stream. Writing to disk throws a bit of a knot into the system, though.
Is it acceptable to write a temp file to disk for the purpose of getting the length of the file, or is there a better / more efficient way to solve this problem?
Thanks
In case anybody else with a similar problem stumbles upon this, I found an alternative solution, which avoids the need to specify the file size altogether.
Create the TCP server using the {allowHalfOpen: true} option. The client can send the file to the TCP server, and then simply call the "end" method on the socket to signify that no more data will be written by the client (the socket is still readable by the client). With the "allowHalfOpen" option set on the server, the server can simply listen for the "end" event (which signifies that all of the data has been received), with the writable side of the server still open (allowing the server to send a response back to the client).
Note that the "allowHalfOpen" option defaults to false. If this option isn't set, the server automatically closes the socket when the writable side of the client is closed.
e.g.
SERVER
const fs = require('fs');
const net = require('net');

const server = net.createServer({allowHalfOpen: true});

server.on('connection', (socket) => {
    const writeStream = fs.createWriteStream('myFile.png');

    socket.on('data', (chunk) => {
        writeStream.write(chunk);
    });

    socket.on('end', () => {
        writeStream.end();
        socket.end('File Written To Disk'); // send response and close socket
    });
});

server.listen(8000);
CLIENT
const fs = require('fs');
const net = require('net');

const socket = net.connect(8000);
fs.createReadStream('myFile.png').pipe(socket);

socket.on('data', (chunk) => {
    // response from TCP server
    console.log(chunk.toString()); // File Written To Disk
});

socket.on('end', () => {
    // server closed socket
});
I do not know if this is relevant, but ImageMagick's miff: format is a streaming format, so you can write the dimensions of the image to a text file while streaming it out again via miff:.
convert rose: miff:- | convert - -format "%wx%h" -write info:tmp.txt miff:-
Here I use convert rose: miff:- to simulate your input stream.
Then I pipe it to the next convert, which reads the input stream, writes the WxH information to a tmp.txt text file, which you could access subsequently. The second convert also creates a miff:- output stream.
You could use NetPBM format in place of miff, since it also is a streaming format.

node.js TCP server and client: sent messages arrive merged

I have a Node.js (TCP) server, i.e.
net.createServer
And I have a Node.js client. I've created a module export with a method
this.connect = function() {
    var client = new net.Socket();
    client.setTimeout(5000);
    client.connect(settings.collectorport, settings.collectorhost, function() {
        console.log('connected');
    });
    ....
And another method, like
this.write = function(data, client) {
    client.write(some-string);
}
From another node.js file, I invoke such methods from inside a loop like:
while (something) {
    agent.write(data, client);
}
What happens is that sometimes the server receives, or the client sends, two "some-string" messages all together.
I mean, if I log what the server is receiving, I see sometimes only one "message" and sometimes two or three "messages" merged.
What can it be?
Sorry for my lexicon...
Assuming the number of messages you are sending matches the number of messages you are receiving/expecting on the other end, then I think what you're describing is basically how TCP works.
TCP is a stream, and so you can't/shouldn't make any assumptions about "message boundaries." You could receive one byte or 10 kilobytes at a time; it just depends on the networking stack and some other factors. So you should be prepared to handle any amount of data that comes in (e.g. create a protocol, perhaps using a delimiter character, to sort out individual "messages").
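For example, a minimal sketch of newline-delimited framing on the receiving side (the newline delimiter is an assumption; any byte sequence that can never appear inside a message would work):
const net = require('net');

const server = net.createServer((socket) => {
    let buffered = '';
    socket.on('data', (chunk) => {
        // A single 'data' event may carry a partial message or several messages.
        buffered += chunk.toString('utf8');
        let idx;
        while ((idx = buffered.indexOf('\n')) !== -1) {
            const message = buffered.slice(0, idx); // one complete message
            buffered = buffered.slice(idx + 1);     // keep the remainder
            console.log('message:', message);
        }
    });
});
server.listen(9000);

// The sender must append the delimiter to every message:
// client.write(someString + '\n');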

Node TCP socket: read binary data from Enfora MT4000

I'm working with an Enfora MT4000 device. The device sends data to a TCP or UDP server when a certain event occurs. Data can be sent in binary or ASCII format, but I need to use binary.
Enfora device is configured with AT commands like this:
AT$EVENT=14,0,7,1,1
AT$EVENT=14,3,52,14,1578098
When I configure the device with ASCII, the server receives data in this format:
r 13 0 0 $GPRMC,211533.00,A,3321.856934,S,07040.240234,W,0.0,0.0,120514,2.3,W,A*2B
But, when I use binary, the data looks like this:
$2K� �Dk����a�H
Does anyone know how Node.js can decode binary data from a socket? I'm trying to do this with a very simple script.
// server
require('net').createServer(function (socket) {
    console.log("connected");
    socket.setEncoding(null);
    socket.on('data', function (data) {
        console.log(data.toString());
    });
}).listen(3041);
thanks.
The data argument in your 'data' event handler is already a Buffer. By calling data.toString() you are converting that Buffer to a (UTF-8 by default) string. You're probably better off keeping it as a Buffer and using that if you need the original binary data.
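For example, a minimal sketch that logs the Buffer as hex and reads fields at fixed offsets (the offsets and widths below are hypothetical placeholders for illustration; the real layout comes from the Enfora binary protocol documentation):
require('net').createServer(function (socket) {
    socket.on('data', function (data) {
        // `data` is a Buffer; print it as hex rather than decoding it as UTF-8.
        console.log('received %d bytes: %s', data.length, data.toString('hex'));

        // Fields can then be read at known offsets. These offsets are
        // hypothetical, for illustration only:
        if (data.length >= 4) {
            var firstByte = data.readUInt8(0);
            var someWord = data.readUInt16BE(2);
            console.log(firstByte, someWord);
        }
    });
}).listen(3041);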

TCP Communication Behaviour

I am sending data asynchronously over a TCP socket. I am currently connected to an SMSC simulator on my local computer, just to check that all the packets are created correctly before connecting to the real thing.
I am only sending a PDU once, and the SMSC receives it perfectly and generates a response PDU and sends it back, but after that, an error message pops up on the simulator specifying that it cannot receive 100 messages. The problem is that I only send it once; there is no loop running that constantly sends the messages, and I have debugged and checked that it sends only once.
I think that the problem might be with the creation of the PDU. I start by creating a byte array of size 1024 and then filling it as necessary. When filled up, it does not use the entire space of the array. So I am thinking that when the simulator receives it and retrieves the data from the array, it reads the '0' bytes after the actual data as a new message, since it gives me a response saying that the data is not valid.
Is there any way to avoid this, or am I just missing something here? As I understand it, when receiving a value in a byte array, you should only decode the actual data, and the trailing '0' bytes should be ignored?
Sorry if my question is vague.
The problem was indeed the 0 bytes in the array.
I solved it by removing the trailing 0 bytes from the array, after reading an article posted on Stack Overflow:
Here is the Solution:
private byte[] CleanArray(byte[] array)
{
    // Find the last non-zero byte (guarding against an all-zero array).
    int i = array.Length - 1;
    while (i >= 0 && array[i] == 0)
    {
        i--;
    }
    byte[] cleanedArray = new byte[i + 1];
    Array.Copy(array, cleanedArray, i + 1);
    return cleanedArray;
}

Linux raw Ethernet socket: bind to a specific protocol

I'm writing code to send raw Ethernet frames between two Linux boxes. To test this I just want to get a simple client-send and server-receive.
I have the client correctly making packets (I can see them using a packet sniffer).
On the server side I initialize the socket like so:
fd = socket(PF_PACKET, SOCK_RAW, htons(MY_ETH_PROTOCOL));
where MY_ETH_PROTOCOL is a 2-byte constant I use as an EtherType so I don't hear extraneous network traffic.
When I bind this socket to my interface, I must pass a protocol again in the sockaddr_ll struct:
socket_address.sll_protocol = htons(MY_ETH_PROTOCOL);
If I compile and run the code like this then it fails. My server does not see the packet. However if I change the code like so:
socket_address.sll_protocol = htons(ETH_P_ALL);
The server can then see the packet sent from the client (as well as many other packets), so I have to check each packet to see whether it matches MY_ETH_PROTOCOL.
But I don't want my server to hear traffic that isn't sent on the specified protocol, so this isn't a solution. How do I do this?
I have resolved the issue.
According to http://linuxreviews.org/dictionary/Ethernet/, referring to the 2-byte field following the MAC addresses:
"values of that field between 64 and 1522 indicated the use of the new 802.3 Ethernet format with a length field, while values of 1536 decimal (0600 hexadecimal) and greater indicated the use of the original DIX or Ethernet II frame format with an EtherType sub-protocol identifier."
so I have to make sure my ethertype is >= 0x0600.
According to http://standards.ieee.org/regauth/ethertype/eth.txt, use of 0x88b5 and 0x88b6 is "available for public use for prototype and vendor-specific protocol development." So this is what I am going to use as an EtherType. I shouldn't need any further filtering, as the kernel should make sure to only pick up Ethernet frames with the right destination MAC address and using that protocol.
I've worked around this problem in the past by using a packet filter.
Hand Waving (untested pseudocode)
#include <linux/filter.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <string.h>

struct sock_filter my_filter[] = { /* Linux classic BPF uses sock_filter */
    ...
};
int s = socket(PF_PACKET, SOCK_DGRAM, htons(protocol));

struct sock_fprog pf;
pf.filter = my_filter;
pf.len = sizeof(my_filter) / sizeof(my_filter[0]);
setsockopt(s, SOL_SOCKET, SO_ATTACH_FILTER, &pf, sizeof(pf));

struct sockaddr_ll sll;
memset(&sll, 0, sizeof(sll));
sll.sll_family = PF_PACKET;
sll.sll_protocol = htons(protocol);
sll.sll_ifindex = if_nametoindex("eth0");
bind(s, (struct sockaddr *)&sll, sizeof(sll));
Error checking and getting the packet filter right is left as an exercise for the reader...
Depending on your application, an alternative that may be easier to get working is libpcap.
