Node.js: Disable UDP DNS lookup and use the given IP instead

I have a simple CentOS node.js server that is supposed to consume high frequency UDP messages and then forward them to another service.
The trouble is that dgram.send does a DNS lookup on EVERY call. This lookup both slows down the processing of the messages and occasionally gets the DNS server to blacklist the node.js host, thinking it is being DoS'd.
The question is: how do I send a UDP packet in node.js WITHOUT incurring a DNS lookup?
Thanks for your time.

Glancing through the code for Node, it looks like you can pass an IP address to dgram.send and it won't do anything with DNS. Is it possible to look up or cache your IPs manually and then pass them to the send method?
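A minimal sketch of that approach, assuming a hypothetical collector hostname and port: resolve the name once at startup with dns.lookup, cache the result, and pass the cached IP straight to dgram.send so no per-packet lookup happens.

var dgram = require('dgram');
var dns = require('dns');

var socket = dgram.createSocket('udp4');
var target = { address: null, port: 4000 };   // port is a placeholder

// Resolve once at startup; every later send reuses the cached IP.
dns.lookup('collector.example.com', 4, function (err, address) {
  if (err) throw err;
  target.address = address;
});

function forward(buf) {
  if (!target.address) return;                // not resolved yet
  socket.send(buf, 0, buf.length, target.port, target.address);
}

If the target's address can change, re-run the lookup on a timer rather than on every send.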

Related

Designing a DSR load balancer

I want to build a DSR load balancer for an application I am writing. I won't go into the application because it is irrelevant for this discussion. My goal is to create a simple load balancer that does direct server return for TCP packets. The idea is to receive all packets at the load balancer, then, using something like round robin, select a server from a list of available servers defined in some config file. The next step would be to alter the received packet, changing the destination IP to that of the chosen backend server. Finally, the packet would be sent on to the backend server using normal system calls for sending packets. In theory the backend server should receive the packet, reply directly to the original requester, and from then on the requester can communicate directly with the backend server rather than going through the load balancer.
I am concerned that this design will not work as I expect it to. The main question is: what happens when computer A sends a packet to IP Y, but receives a packet back in the same TCP stream from a computer at IP X? Will it continue to send packets to IP Y? Or will it switch over to IP X?
So it turns out this is possible, but only halfway so, and I will explain what I mean by that. I have three processes: netcat, used to initiate a TCP request; the dsr-lb, which receives packets on a certain port, changes the destination IP to a backend server (passed in via command-line argument), and forwards them on using raw sockets; and a basic echo server. I got this working on a local setup: netcat running on my desktop, and the dsr-lb and echo servers running on two different Linux VMs on the same desktop. The path of the packets was like this:
nc -> dsr-lb -> echo -> nc
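For reference, the "echo" leg of that setup is just a few lines of Node.js (a stand-in; the port number is arbitrary). The dsr-lb itself needs raw sockets and is not shown here.

var net = require('net');
net.createServer(function (conn) {
  conn.pipe(conn);   // write back whatever the client sends
}).listen(7777, function () {
  console.log('echo listening on 7777');
});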
When I said it only half works, what I meant was that outgoing traffic has to always go through the dsr-lb, but returning traffic can go directly to the client. The client does not send further traffic directly to the backend server, but still goes through the dsr-lb. This makes sense since the client opened a socket to the dsr-lb ip, and internally still remembers this ip, regardless of where the packet came from.
The comment saying "if its from a different IP, it's not the same stream. tcp is connection-based" is incorrect. I read through the Linux source code, specifically the TCP receive path, and it turns out that Linux uses source IP, source port, destination IP, and destination port to calculate a hash which it uses to find the socket that should receive the traffic. However, if no socket matches that hash, it tries again using only the destination IP and destination port, and that is how this "magic" works. I have no idea if this would work on a Windows machine though.
One caveat to this answer: I also spun up two remote VMs and tried the same experiment, and it did not work. I am guessing it worked while all the machines were on the same switch, but there might be more work to do to get it working across different routers. I am still trying to figure this out, but from using tcpdump to analyze the traffic, for some reason the dsr-lb is forwarding to the wrong port on the echo server. I am not sure if something is corrupted, or if the checksum is wrong after changing the destination IP and some router along the way is dropping or changing the packet (I suspect this might be the case), but hopefully I can get it working over an actual network.
The theory should still hold though. The IP layer is basically a packet forwarding layer and routers should not care about the contents of the packets, they should just forward packets based on their routing tables, so changing the destination of the packet while leaving the source the same should result in the source receiving any answer. The fact that the linux kernel ultimately resolves packets to sockets just using destination ip and port means the only real roadblock to this working does not really exist.
Also, if anyone is wondering why bother doing this: it may be useful for a load balancer in front of WebSocket servers. It's not as good as a direct connection from client to WebSocket server, but it is better than a load balancer that handles both requests and responses, which makes it more scalable and able to run on fewer resources.

Writing linux conntrack module for DNS requests

I set up a netfilter rule that balances DNS requests using the random mode of the statistic module together with some NAT rules. That part worked well; however, when a DNS client sends all its requests from the same source port, they are all balanced to the same backend server.
I'm assuming this happens because the connection tracking identifies all the UDP packets as part of the same UDP connection. I couldn't find an easy fix for this, is there one?
In the case there isn't I will have to write some code to make things behave how I'd like. What is the proper approach to doing this?
My first thought was to create something similar to ip_conntrack_ftp that identifies DNS connections by using the ip source/dest as well as the DNS sequence number.
You shouldn't need anything that complicated - you just need to find a way to make the load balancer work "per packet" instead of "per flow".
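For illustration, per-flow balancing rules of the kind described above usually look roughly like this (backend addresses are made up). Because conntrack tracks the UDP flow, the random choice is made only for the first packet and every later packet from the same source port follows it.

iptables -t nat -A PREROUTING -p udp --dport 53 -m statistic --mode random --probability 0.5 -j DNAT --to-destination 10.2.0.11:53
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 10.2.0.12:53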

Linux Sockets/Connections

I have a gateway server that is also acting as a web proxy for clients and I need to get some information about the network connections. The gateway server has an internal and an external interface/IP-address.
If I use the 'netstat' or 'ss' command I get a display showing all the clients' internal IP addresses/ports connecting to the gateway's internal IP address/squid port. But if I run 'iftop' I get a display of the clients' internal IP addresses and the external IP address/port they are ultimately connecting to; it seems to ignore the proxy middleman.
The information from iftop is what I need, that is, internal ip:port to final ip:port ignoring the proxy, but I need to parse the output and can't seem to do that with iftop as it is interactive. Does anyone know a way to get iftop like information from a standard Linux command?
Thanks
If you are using a typical HTTP proxy setup, ss -at will show two entries: one for the client-to-proxy connection and one for the proxy-to-web-server connection, just as it should, because two separate TCP connections are live in such a case.
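For example, assuming Squid is listening on its default port 3128, the two legs can be listed separately:

ss -atn sport = :3128
ss -atn '( dport = :80 or dport = :443 )'

Mapping a client to its final destination still has to come from the proxy itself (e.g. Squid's access log), since the kernel only ever sees the two separate connections.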

How to randomly choose the outgoing address from an IPv6 pool using Node.JS?

I'm trying to create and run a Node.JS proxy on a machine that has a pool of IPv6 addresses. I want the proxy to randomly choose one of these addresses for each request (making it difficult for websites to keep a record of users' requests).
With wget I can achieve this by using the --bind-address option as follows:
wget --bind-address OUTGOING_IP http://www.example.com/
Is there any way to achieve the same behavior using Node.JS?
If you want to make outbound HTTP requests from different IPs, have a look at the "localAddress" option of "http.request":
http://nodejs.org/docs/latest/api/http.html#http_http_request_options_callback
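A rough sketch of that approach (the pool addresses below are placeholders and must already be assigned to an interface on the host):

var http = require('http');

var pool = ['2001:db8::10', '2001:db8::11', '2001:db8::12'];

function randomAddress() {
  return pool[Math.floor(Math.random() * pool.length)];
}

var options = {
  hostname: 'www.example.com',
  path: '/',
  family: 6,                        // force an IPv6 connection
  localAddress: randomAddress()     // picked per request
};

http.request(options, function (res) {
  console.log(res.statusCode, 'via', options.localAddress);
}).end();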
If you want to start a TCP server listening on a particular IP bound to your host, you would probably want to specify it when you start the server listening [i.e. server.listen(PORT, HOST)]:
http://nodejs.org/docs/latest/api/net.html#net_class_net_server
-- ab1

DNS Server Refusing Connection

I am implementing a DNS client in which I try to connect to a local DNS server, but the DNS server is returning a message with error code 5, which means that it is refusing the connection.
Any thoughts on why this might be happening? Thanks
DNS response error code 5 ("Refused") doesn't mean that the connection to the DNS server is refused.
It means that the DNS server refuses to provide whatever data you asked for, or to do whatever action you asked it to do (for example a dynamic update).
Since you mention a "connection", I assume that you are using TCP?
DNS primarily uses UDP, and some DNS servers will refuse all requests over TCP.
So the solution might be as simple as switching to UDP.
Otherwise, assuming you are building your own DNS client from scratch, my first guess would be that you are formatting the request incorrectly. Even though the DNS protocol seems fairly simple, it is very easy to get this wrong.
Finally, the DNS server may of course simply be configured to refuse requests for whatever you are asking.
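If it helps, here is a rough sketch of a hand-built query sent over UDP from Node.js, so you can compare your bytes against RFC 1035 and read the RCODE out of the reply (the resolver address is a placeholder):

var dgram = require('dgram');

function buildQuery(name) {
  var header = Buffer.from([
    0x12, 0x34,  // ID
    0x01, 0x00,  // flags: standard query, recursion desired
    0x00, 0x01,  // QDCOUNT = 1
    0x00, 0x00,  // ANCOUNT
    0x00, 0x00,  // NSCOUNT
    0x00, 0x00   // ARCOUNT
  ]);
  // QNAME: length-prefixed labels terminated by a zero byte
  var labels = name.split('.').map(function (label) {
    return Buffer.concat([Buffer.from([label.length]), Buffer.from(label)]);
  });
  var question = Buffer.concat(labels.concat([
    Buffer.from([0x00]),        // end of QNAME
    Buffer.from([0x00, 0x01]),  // QTYPE = A
    Buffer.from([0x00, 0x01])   // QCLASS = IN
  ]));
  return Buffer.concat([header, question]);
}

var socket = dgram.createSocket('udp4');
socket.on('message', function (msg) {
  var rcode = msg[3] & 0x0f;    // low 4 bits of the second flags byte
  console.log('RCODE:', rcode, '(0 = NOERROR, 5 = REFUSED)');
  socket.close();
});
var query = buildQuery('example.com');
socket.send(query, 0, query.length, 53, '192.168.1.1');  // resolver IP is a placeholder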
Explicitly adding the network from which I wanted to allow recursion fixed this problem for me:
These two lines added to /etc/bind/named.conf.options:
recursion yes;
allow-recursion { 10.2.0.0/16; };
Policy enforcement?
The DNS server could be configured to accept only connections from certain hosts.
Hmm, if you're able to access StackOverflow you have a working DNS server SOMEwhere. Try doing
host -v stackoverflow.com
and look for messages like
Received 50 bytes from 192.168.1.1#53 in 75 ms
then pick the address out of that line and use THAT as your DNS - it's obviously willing to talk to you.
If you're on Windows, use NSLOOKUP for the same purpose. Your name server's address will be SOMEwhere in the output.
EDIT:
When I'm stuck for a DNS server, I use the one whose address I can remember most easily: 4.2.2.2. See how that works for you.
You might try monitoring the conversation using WireShark. It can also decode the packets for you, which might help you determine if your client's packets are correctly encoded. Just filter on port 53 (DNS) to limit the packets captured by the trace.
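Wireshark's command-line counterpart, tshark, can do the same capture non-interactively (the interface name is an assumption):

tshark -i eth0 -f "port 53"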
Also, make sure you're using UDP and not TCP for queries; TCP should be used primarily for zone transfers, not queries.
