DNS amplification attacks

I have this question on my homework about DNS Amplification attacks that I am having trouble figuring out.
In order to implement a DNS amplification attack, the attacker must
trigger the creation of a sufficiently large volume of DNS response
packets from the intermediary to exceed the capacity of the link to
the target organization. Consider an attack where the DNS response
packets are 500 bytes in size (ignoring framing overhead). How many of
these packets per second must the attacker trigger to flood a target
organization using a 0.5-Mbps link? A 2-Mbps link? A 10-Mbps link?
If the DNS request packet to the intermediary is 60 bytes in size, how
much bandwidth does the attacker consume to send the necessary rate
of DNS request packets for each of the three cases?
I know that an amplification attack has to do with converting a small request into a large response. The information in my book doesn't give an exact amplification factor, however; at one point it says "over 4000 bytes" and that's it. I assume the first part of the question is simply 0.5 Mbps / 500 bytes = the number of packets per second it takes to flood the target, but that seems too simple (I'm new to this topic). That might just be me overthinking it. The second part, I assume, is just 60 bytes * the answer from the first part for each of the three cases, but I'm unsure of the answer to the first part. Am I overthinking this, or do I already have it solved?

You are correct that they are simply divisions. But you must first understand how the amplification attack works: the attacker makes a request to the intermediary but spoofs the source address. The intermediary then responds to the spoofed source instead of the attacker.
This works only under a few conditions: the protocol is connectionless (like UDP or ICMP), the intermediary is well connected with lots of available bandwidth, and the responses are much larger than the request packets. That is the amplification part. So if the attacker sends a 60-byte request and in return triggers a 500-byte response, the attacker needs far less bandwidth to overwhelm the victim's network connection.
Hopefully this is sufficient for you to figure out the rest of the homework.
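
To make the divisions concrete, here is a minimal sketch of the arithmetic (my own illustration, not part of the original answer): convert the link capacity to bits per second, divide by the response size in bits to get the required packet rate, then multiply that rate by the request size to get the attacker's bandwidth.

    # Sketch of the amplification arithmetic; mind the bits-vs-bytes conversion.
    RESPONSE_BITS = 500 * 8  # 500-byte DNS response
    REQUEST_BITS = 60 * 8    # 60-byte DNS request

    for link_mbps in (0.5, 2, 10):
        link_bps = link_mbps * 1_000_000
        pps = link_bps / RESPONSE_BITS       # packets/s needed to saturate the link
        attacker_bps = pps * REQUEST_BITS    # bandwidth the attacker must spend
        print(f"{link_mbps} Mbps link: {pps:.0f} pkt/s, "
              f"attacker uses {attacker_bps / 1_000_000:.2f} Mbps")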

Related

How to deal with millions of queries to a DNS server?

I'm wondering how modern DNS servers deal with millions of queries per second, given that the txnid field is a uint16.
Let me explain. There is an intermediate server: on one side, clients send DNS requests to it, and on the other side, the server itself sends requests to an upstream DNS server (8.8.8.8, for example). According to the DNS protocol, there is a txnid field in the DNS header which should remain unchanged between request and response. Obviously, an intermediate DNS server with multiple clients replaces this value with its own txnid (which is a counter), sends the request to the external DNS server, and after resolution replaces the value with the client's original one. All of this works fine for up to 65535 simultaneous requests because of the uint16 field type. But what if we have hundreds of millions of them, like Google's DNS servers?
Going from your Google DNS server example:
In mid-2018 their servers were handling 1.2 trillion queries per day; extrapolating that growth suggests their service is currently handling ~20 million queries per second.
They say that a successful resolution of a cache miss takes ~130 ms, but taking timeouts into account pushes the average up to ~400 ms.
I can't find any numbers on their cache-hit rates, but I'd assume they're above 90%, and presumably they increase with the popularity of the service.
Putting the above together (2e7 * 0.4 * (1 - 0.9)), we get ~1M transactions active at any one time. So you have to find at least 20 bits of state somewhere. 16 bits come for free because of the txnid field. As Steffen points out, you can also use port numbers, which might give you another ~15 bits of state. Just these two sources give you more than enough state to run something orders of magnitude bigger than Google's DNS system.
That said, you could also just relegate transaction IDs to preventing any cache-poisoning attacks, i.e. reject any answers where the txnid doesn't match the inflight query for that question. If this check passes, then add the answer to the cache and resume any waiting clients.
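
As a back-of-the-envelope illustration (my own sketch with hypothetical names, not from the answer above), the state estimate and the txnid check might look like this:

    # Sketch of the state estimate above: qps * avg latency * cache-miss rate.
    active = 2e7 * 0.4 * (1 - 0.9)
    print(f"~{active:,.0f} transactions in flight")  # ~800,000 -> needs ~20 bits

    # Hypothetical matching rule from the last paragraph: key in-flight queries
    # by (source port, txnid) and reject any answer that doesn't match.
    inflight = {}  # (src_port, txnid) -> question name

    def accept_answer(src_port: int, txnid: int, question: str) -> bool:
        """True only if this answer matches an in-flight query exactly."""
        return inflight.get((src_port, txnid)) == question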

How to temporarily buffer incoming network traffic for latency-sensitive HFT application?

We are running a Java-based trading application, and there are certain periods where we want to prioritize outgoing network traffic as much as possible for about 10 ms. Is there a way to temporarily buffer all incoming network traffic during such a short period, either on the network card or via a process or buffer on our Red Hat Linux box?
The rationale is that incoming network traffic spikes during this same period, and the application processing that traffic steals CPU cycles from the process we are trying to prioritize. We do not have fine-grained control over the application handling the incoming traffic.
We're on a 1 Gbps connection, so a buffer of about 1 MB should be sufficient. We would prefer not to drop the incoming traffic and request retransmission, as this would increase load on our network during these quite busy periods.
This may be possible using QoS on the router, or by using trickle to control your bandwidth via a configuration in /etc/trickled.conf; see the example in the linked URL.
I am not sure whether I understand your problem correctly. Your concern is that sometimes you need to prioritize outgoing network traffic, and during that time the incoming traffic will build up and eventually cause packet drops or retransmissions, which you don't want. Therefore, you want to buffer your incoming traffic.
If my understanding is correct and you are using TCP, try making your TCP receive buffer bigger: http://kaivanov.blogspot.com/2010/09/linux-tcp-tuning.html. Then use netstat to check whether your change is effective.
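As a rough illustration of that advice (a sketch under my own assumptions, not from the linked article): at 1 Gbps, 10 ms of traffic is about 1.25 MB, so a per-socket receive buffer of a couple of megabytes should absorb the spike.

    import socket

    # Hypothetical sketch: enlarge one socket's kernel receive buffer so ~10 ms
    # of 1 Gbps traffic (~1.25 MB) can queue without drops. The kernel caps the
    # value at net.core.rmem_max, so that sysctl may need raising as well.
    BUFFER_BYTES = 2 * 1024 * 1024  # 2 MB, comfortably above the ~1.25 MB estimate

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUFFER_BYTES)
    print("effective rcvbuf:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))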
Adrian, have you tried setting the priority of your outgoing communication process higher than that of the process receiving the incoming data? This can be achieved with the nice command; note that in Unix/Linux, the lower the number, the higher the priority.
Otherwise I am not sure this is possible without a direct tie-in between the sending and receiving applications, which would let you effectively ignore incoming connections that are ready to read until any pending outgoing data has been sent.
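
For instance, a hypothetical launcher (the jar names are placeholders, not from the question) could start the two processes at different priorities:

    import subprocess

    # Hypothetical sketch of the nice-based approach: run the latency-critical
    # sender at higher priority (lower nice value) than the ingest process.
    # Negative nice values require root or CAP_SYS_NICE.
    subprocess.Popen(["nice", "-n", "-10", "java", "-jar", "sender.jar"])  # higher priority
    subprocess.Popen(["nice", "-n", "10", "java", "-jar", "ingest.jar"])   # lower priority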

I need advice about the interval setting in a small pinging app

I'm creating an application that lets me ping an IP or IP range with a time interval between each ping. My concern is that if the interval is allowed to be too small, my program would effectively perform a ping flood.
What should I allow the minimum interval in milliseconds to be in my small app?
I would think that pinging a publicly available IP address more than once per second would look highly suspicious.
In general you should not ping any more frequently than is useful; it will only lead to needless network traffic and congestion. For example, if the purpose of your app were to notify a user visually of a network issue, pinging more frequently than a user can respond serves no purpose.
Perhaps a better solution would be to use a statistical based algorithm that takes into account packet loss, response times and network loading. The algorithm could be adaptive in that it would trade off network loading against the value of the information being collected.
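
A minimal sketch of such an adaptive interval (my own illustration; the thresholds and scaling factors are arbitrary assumptions):

    # Hypothetical adaptive scheduler: probe faster when packets are being lost
    # (the information is valuable) and back off when the link is quiet, while
    # never going below a 1-second floor that could look like a flood.
    MIN_INTERVAL = 1.0   # seconds
    MAX_INTERVAL = 60.0

    def next_interval(current: float, loss_rate: float) -> float:
        if loss_rate > 0.05:                      # losing packets: probe more often
            return max(MIN_INTERVAL, current / 2)
        return min(MAX_INTERVAL, current * 1.5)   # healthy link: slow down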

HTTP download and multiple threads

This may be a duplicate but I have not seen this being fully answered.
Does HTTP download throughput increase when using threads?
My thinking is that when the TCP stack on the server is waiting for a ack from the receiver before sending the next chunk of data, another thread is sending out a request for data which is then serviced, leading to an increase in throughput.
Is this correct?
Yes, that is pretty much correct. Threading HTTP requests will increase throughput up until the server's maximum number of connections is reached, at which point the increase plateaus. The gain is, of course, also limited by the threading abilities of both the server and the client.
It's correct only at startup; during the transfer, TCP has a dynamic window of data that can be sent without receiving an ACK.
So while the data transfer is going on, in most situations every chunk of data that can be sent is sent, resulting in maximum throughput.
When you use multiple threads, you mainly reduce the dead time in the TCP handshaking.
It can also be useful if you have to download files from different servers, or if the server limits the bandwidth per connection.
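
To illustrate (a sketch with placeholder URLs, not from the answers above), a threaded downloader might look like this:

    # Fetch several URLs in parallel threads. Per the answers above, the gains
    # come mostly from overlapping connection setup and per-connection limits,
    # not from TCP itself (which already pipelines via its window).
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URLS = ["http://example.com/a", "http://example.com/b"]  # placeholders

    def fetch(url):
        with urlopen(url) as resp:
            return url, len(resp.read())

    with ThreadPoolExecutor(max_workers=4) as pool:
        for url, size in pool.map(fetch, URLS):
            print(url, size, "bytes")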

Can bittorrent peers handle seeding large numbers of idle torrents

I'm considering using BitTorrent for a large data-dissemination problem where the data source is petascale and users will want up to several terabytes. Some details:
Number of torrents potentially in the millions
Torrent sizes ranging from 100 MB to 100 GB
A stable set of clusters around the world capable of acting as seeders, each holding a large subset of the total torrents (say 60% on average)
A relatively small number of simultaneous users (fewer than 100) wanting to download, on average, a few terabytes of data.
I expect the number of active torrents to be small compared to the total available but quality of service is important so there must be several seeders for each torrent or some mechanism for launching new seeders.
My question is: can BitTorrent clients handle seeding huge numbers of torrents, most of which are idle? Would I need to stripe torrents across the seeders in a cluster, or could each node seed all the torrents it has access to? Which client would do the best job? Are there any tools for managing clusters of seeders?
I am assuming that trackers can be made to scale to this level.
There are two main problems:
Each torrent (typically) needs to announce to a tracker periodically; this might end up using a significant amount of bandwidth.
The BitTorrent client itself needs to be written in a way that scales to a large number of torrents.
As for the tracker traffic: assume you have 1 million torrents. The typical re-announce interval is 30 minutes, but some trackers set it to 1 hour. Let's be conservative and assume your tracker uses 1-hour announce intervals. You will have to make 1 million GET requests per hour; say each request is 400 bytes up and 100 bytes down (assuming most responses will not contain any peers). That's about 111 kB/s up and 28 kB/s down, constantly. Not so bad, but keep in mind that TCP requires an extra round-trip to establish each connection, so that's another 40 bytes down and 40 bytes up per announce.
This can be mitigated by using UDP trackers only. Then you need just a single connect message, and you can reuse the connection ID for each announce. Each announce message would then be 100 bytes, and the returned message would be a bit more compact as well; let's assume 60 bytes. That gets you 28 kB/s up and 16 kB/s down, just to keep the torrents announced. For this you would need a client with decent UDP tracker support (one that caches the connection ID, for instance).
Not too bad, assuming that's insignificant compared to the actual data your seeds would send.
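
A quick sketch of that arithmetic (my own illustration, using the per-request sizes the answer assumes for HTTP and UDP announces):

    # Steady-state tracker bandwidth for 1M torrents re-announcing hourly.
    TORRENTS = 1_000_000
    INTERVAL_S = 3600  # 1-hour re-announce interval

    for proto, up_bytes, down_bytes in [("HTTP", 400, 100), ("UDP", 100, 60)]:
        rate = TORRENTS / INTERVAL_S  # announces per second
        print(f"{proto}: {rate * up_bytes / 1000:.1f} kB/s up, "
              f"{rate * down_bytes / 1000:.1f} kB/s down")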
However, you don't necessarily need to stripe your torrents across separate data centers; you could also use an HTTP server to seed the torrents. All major BitTorrent clients support HTTP seeding, and you wouldn't have to worry about announcing to the tracker (the URL is burned into the .torrent itself).
As for a client that scales well to many torrents, I don't know for sure; I haven't done any measurements. It should be fairly straightforward to generate a million random torrents and try loading them.
I have done some optimization work in libtorrent (rasterbar) to make it scale well with many torrents, though I haven't tried millions.
I've written a blog post on this topic, here.
You may be looking for Hekate
It's in, at best, pre-alpha right now, but it's quite nearly what you're describing.
To avoid collapsing under the overhead of millions of useless tracker announces and scrapes (in every announce interval, no less), you have to restrict your seeding clusters to loading only the current working set of items being requested right now. Downloaders need to get (download) the .torrent file from a central place anyway, and that could trigger loading it into the seeding clusters. Alternatively, determine activity for a particular info-hash by recognizing announces that do NOT originate from one of your seed clusters.
rTorrent has fast resume (meaning no hashing happens when an appropriately prepared .torrent is loaded) and is controllable via XML-RPC, so you can decommission idle items. That way, a .torrent download can trigger the actual data to be available for the next 24 hours, or for as long as there's activity in the swarm.
The protocol allows for this, but I do not know which clients would scale to millions of torrents. In the worst case, you would have to write your own seed-only client.
The protocol feature most relevant to your use case is that, when a peer connects to another, the connecting peer is expected to send the torrent's info-hash first. This means a single listening TCP port can be used to seed an unlimited number of torrents, with almost zero resources used when idle.
This can be found in The BitTorrent Protocol Specification:
If both sides don't send the same value, they sever the connection. The one possible exception is if a downloader wants to do multiple downloads over a single port, they may wait for incoming connections to give a download hash first, and respond with the same one if it's in their list.
I also found the same in the Bittorrent Protocol Specification v1.0:
The initiator of a connection is expected to transmit their handshake immediately. The recipient may wait for the initiator's handshake, if it is capable of serving multiple torrents simultaneously (torrents are uniquely identified by their info_hash).
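
As an illustration of that quoted behavior (a hypothetical seed-only sketch; the info-hash, port, and peer-id are placeholders), a single listening socket can dispatch on whatever info_hash it receives:

    import socket

    SEEDED = {bytes.fromhex("aa" * 20)}   # placeholder info-hashes we actually seed
    HEADER = b"\x13BitTorrent protocol"   # pstrlen (19) + pstr

    srv = socket.socket()
    srv.bind(("", 6881))
    srv.listen()
    conn, _ = srv.accept()

    hs = conn.recv(68)           # 1+19 header, 8 reserved, 20 info_hash, 20 peer_id
    info_hash = hs[28:48]
    if info_hash in SEEDED:
        # Echo the same info_hash back and keep serving this torrent.
        conn.send(HEADER + bytes(8) + info_hash + b"-XX0001-placeholder0")
    else:
        conn.close()             # not a torrent we seed: sever the connection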
However, there is one thing that would increase your load: the tracker. With the normal tracker protocol, each client has to periodically announce each torrent it has to the tracker, together with information like how much it has uploaded. With millions of torrents, this would present a somewhat high load. If you were writing your own mass-seed-only client, a separate protocol for announcing your seeders to the tracker would be a good idea.