I am a computer science student and in my telecommunications course, we were asked a question that I can't solve. Here is the statement:
Consider the local network below, which accesses the Internet via a 20 Mbps access link.
The local network clients generate an average of 200 requests per second, and each request corresponds to an average of 100,000 bits. It is assumed that the average RTT from the access router to any web server is 8 seconds and that the local network capacity is 100 Mbps.
What is the traffic intensity on the access link?
I can't find a formula that could help me. Could I have at least one clue to move forward?
Thank you.
You can calculate it by using this formula:
(L * a) / R

where L is the average request length in bits, a is the average arrival rate of requests, and R is the transmission rate of the link. In other words:

(arrival rate of bits) / (service rate of bits)
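Applied to the numbers in your statement, here is a quick sketch in Python (the variable names are just for illustration):

```python
L = 100_000         # average request size in bits
a = 200             # average arrival rate, requests per second
R = 20_000_000      # access link rate in bits per second (20 Mbps)

intensity = (L * a) / R
print(intensity)    # 1.0 -> the arriving bits exactly match the link's service rate
```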
Hope this helps you
I'm wondering how modern DNS servers deal with millions of queries per second, given that the txnid field is a uint16.
Let me explain. There is an intermediate server; on one side, clients send DNS requests to it, and on the other side, the server itself sends requests to an upstream DNS server (8.8.8.8, for example). According to the DNS protocol, there is a txnid field in the DNS header which must be the same in the request and the corresponding response. Obviously, an intermediate DNS server with multiple clients replaces this value with its own txnid (which is a counter), sends the request to the external DNS server, and after resolution replaces the value with the client's original one. All of this works fine for up to 65,536 simultaneous requests because the field is a uint16. But what if we have hundreds of millions of them, like Google's DNS servers?
Going from your Google DNS server example:
In mid-2018 their servers were handling 1.2 trillion queries per day; extrapolating that growth suggests their service is currently handling ~20 million queries per second.
They say that successful resolution of a cache miss takes ~130 ms, but taking timeouts into account pushes the average time up to ~400 ms.
I can't find any numbers on their cache-hit rates, but I'd assume they're above 90%, and presumably they increase with the popularity of the service.
Putting the above together (2e7 * 0.4 * (1 - 0.9)) we get ~1M transactions active at any one time, so you have to find at least 20 bits of state somewhere. 16 of those bits come for free from the txnid field. As Steffen points out, you can also use port numbers, which might give you another ~15 bits of state. Just these two sources give you more than enough state to run something orders of magnitude bigger than Google's DNS system.
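As a rough sanity check of that arithmetic (all the inputs below are the estimates from above, not measured values), a quick sketch:

```python
import math

queries_per_second = 2e7       # ~20 million queries/s (extrapolated estimate)
avg_miss_latency = 0.4         # ~400 ms average resolution time for a cache miss
cache_miss_ratio = 1 - 0.9     # assuming a ~90% cache-hit rate

# Little's law: concurrently outstanding transactions = arrival rate * time in system
in_flight = queries_per_second * avg_miss_latency * cache_miss_ratio
print(f"{in_flight:.0f} in-flight transactions")          # ~800k, i.e. on the order of 1M
print(f"{math.ceil(math.log2(in_flight))} bits of state needed")   # 20
```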
That said, you could also just relegate transaction IDs to preventing cache-poisoning attacks, i.e. reject any answer whose txnid doesn't match the in-flight query for that question. If the check passes, add the answer to the cache and resume any waiting clients.
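A minimal sketch of that last idea, assuming the resolver keys its in-flight table by (question, txnid) as sent upstream; the function and variable names here are purely illustrative:

```python
# In-flight upstream queries, keyed by (question, txnid); values are the waiting clients.
in_flight = {}
cache = {}

def deliver(client_addr, answer):
    print(f"answer {answer!r} -> {client_addr}")   # stand-in for the real socket send

def on_upstream_answer(question, txnid, answer):
    waiting = in_flight.pop((question, txnid), None)
    if waiting is None:
        # txnid doesn't match the in-flight query for this question:
        # treat it as a possible cache-poisoning attempt and reject the answer
        return
    cache[question] = answer            # add the answer to the cache
    for client_addr in waiting:         # resume any waiting clients
        deliver(client_addr, answer)

# Example: one outstanding query for example.com sent upstream with txnid 0x1234
in_flight[("example.com", 0x1234)] = [("10.0.0.5", 53124)]
on_upstream_answer("example.com", 0x9999, "93.184.216.34")   # wrong txnid: dropped
on_upstream_answer("example.com", 0x1234, "93.184.216.34")   # accepted and delivered
```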
I'm trying to get the speed limit at a specific point on the map (lat, lng) using an API, but I can't find it in the Azure Maps documentation. I found it in Bing Maps, but I wanted to use Azure Maps instead if possible, as it gives you 250k free map requests per month.
Thanks!
Yes, you can access speed limit data in Azure Maps by using the reverse geocoding service and setting the "returnSpeedLimit" parameter to true: https://learn.microsoft.com/en-us/rest/api/maps/search/getsearchaddressreverse
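For example, here is a rough sketch of such a call in Python with the requests library; the coordinates, the key placeholder, and the exact response field are assumptions on my part, so check the reference above for the precise response shape:

```python
import requests

resp = requests.get(
    "https://atlas.microsoft.com/search/address/reverse/json",
    params={
        "api-version": "1.0",
        "query": "47.59118,-122.33265",             # "lat,lng" of the point of interest
        "returnSpeedLimit": "true",
        "subscription-key": "YOUR_AZURE_MAPS_KEY",  # placeholder for your key
    },
    timeout=10,
)
resp.raise_for_status()
address = resp.json()["addresses"][0]["address"]
print(address.get("speedLimit"))    # e.g. "50.00KPH" where the data is available
```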
You can also use the batch reverse geocoding service if you have a lot of data points: https://learn.microsoft.com/en-us/rest/api/maps/search/postsearchaddressreversebatch
You might also find the Traffic flow segment API interesting. It will tell you the current speed of traffic on a section of road: https://learn.microsoft.com/en-us/rest/api/maps/traffic/gettrafficflowsegment The free-flow speed isn't the speed limit, but the average speed vehicles travel on that section of road when there is no traffic.
Similarly, the routing service can return the current speed due to traffic over each segment of a route if you set the "sectionType" parameter to "traffic". https://learn.microsoft.com/en-us/rest/api/maps/route/getroutedirections
We have lots of images in Azure Blob Storage (LRS Hot). We calculate around 15 million downloads per month for a total of 5000 GB egress (files are on average 350kB). I can calculate the price for the Blob Storage but the Function proxy is unknown. The Azure Functions pricing document doesn't say anything about proxy functions and specifically about bandwidth.
Question 1: Are these calculations correct?
The execution count price is €0,169 per million executions, which comes to 15 * €0,169 = €2,54/month.
The GB-s price is €0,000014/GB-s and memory usage is rounded to the nearest 128 MB. If the file download time is 0,2 s and memory is 128 MB, we have 0,2 * (128/1024) * 15 000 000 * 0,000014 = €5,25/month.
Question 2: What about bandwidth? Is there any cost for that?
Q1: Mostly yes.
Azure Functions Proxies (Preview) work just like regular functions, meaning that any routing done by your proxy counts as one execution. Also, just like standard functions, a proxy consumes GB-s while it's running. Your calculation approach is correct, with the caveat that reading from blob storage is actually a streaming activity, which consumes a fixed amount of memory multiplied by the time it takes each file to download.
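For what it's worth, here is your Q1 arithmetic as a quick Python check, using the figures from your question (the 0,2 s download time is your assumption, of course):

```python
executions = 15_000_000            # downloads per month
price_per_million_exec = 0.169     # EUR per million executions
gb_s_price = 0.000014              # EUR per GB-s
duration_s = 0.2                   # assumed download time per file
memory_gb = 128 / 1024             # billed memory, 128 MB

execution_cost = executions / 1_000_000 * price_per_million_exec
gb_s_cost = executions * duration_s * memory_gb * gb_s_price
print(execution_cost, gb_s_cost)   # ~2.5 EUR and ~5.25 EUR per month
```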
Q2: This works the same way as Azure App Service. From the pricing page:
165 MB outbound network traffic included, additional outbound network bandwidth charged separately.
I have this question on my homework about DNS Amplification attacks that I am having trouble figuring out.
In order to implement a DNS amplification attack, the attacker must
trigger the creation of a sufficiently large volume of DNS response
packets from the intermediary to exceed the capacity of the link to
the target organization. Consider an attack where the DNS response
packets are 500 bytes in size (ignoring framing overhead). How many of
these packets per second must the attacker trigger to flood a target
organization using a 0.5-Mbps link? A 2 Mbps link? Or a 10 Mbps link?
If the DNS request packet to the intermediary is 60 bytes in size, how
much bandwidth does the attacker consume to send the necessary rate
of DNS request packets for each of the three cases?
I know that an amplification attack has to do with turning a small request into a large response. The information in my book doesn't give an exact value for how much they can amplify; at one point it says "over 4000 bytes" and that's it. I assume the first part of the question is simply 0.5 Mbps / 500 bytes = how many packets per second it takes to flood the target, but that seems too simple (I'm new to this topic). Then again, that might just be me overthinking it. The second part I assume is just 60 bytes * the answer from the first part for each of the three cases, but I am unsure about the first part. Am I overthinking this, or do I already have it solved?
You are correct that they are simply divisions. But you must first understand how the amplification attack works: the attacker makes a request to the intermediary, but spoofs the source address. The intermediary will then respond to the spoofed source instead of the attacker.
This works only under a few conditions: a connectionless protocol (like UDP or ICMP), an intermediary that is well connected with lots of available bandwidth, and responses that are much larger than the request packets. That is the amplification part. So if the attacker sends a 60-byte request and in return triggers a 500-byte reply, the attacker needs far less bandwidth to overwhelm the network connection of the victim.
Hopefully this is sufficient for you to figure out the rest of the homework.
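If you want to check your numbers afterwards, here is that arithmetic as a small Python sketch for the three link speeds in the question; note the byte-to-bit conversion, which is the step that is easiest to miss (and I'm assuming 1 Mbps = 10^6 bits/s):

```python
RESPONSE_BITS = 500 * 8    # 500-byte DNS response, in bits
REQUEST_BITS = 60 * 8      # 60-byte DNS request, in bits

for link_mbps in (0.5, 2, 10):
    link_bps = link_mbps * 1_000_000
    responses_per_sec = link_bps / RESPONSE_BITS      # responses needed to saturate the link
    attacker_bps = responses_per_sec * REQUEST_BITS   # bandwidth the attacker must spend
    print(f"{link_mbps} Mbps link: {responses_per_sec:.0f} responses/s, "
          f"attacker sends {attacker_bps / 1000:.0f} kbps of requests")
```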
I'm creating an application that lets me ping an IP or an IP range with a time interval between each ping. My concern is that if the interval is allowed to be too small, my program would appear to be a ping flood.
What should I allow the minimum interval in milliseconds to be in my small app?
I would think that pinging a publicly available IP address more than once per second would look highly suspicious.
In general you should not ping any more frequently than is useful; it will only lead to needless network traffic and congestion. For example, if the purpose of your app is to notify a user visually of a network issue, pinging more frequently than the user can respond serves no purpose.
Perhaps a better solution would be a statistics-based algorithm that takes into account packet loss, response times, and network load. The algorithm could be adaptive, trading off network load against the value of the information being collected.
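As a starting point, here is a minimal sketch of such an adaptive interval in Python; it only reacts to packet loss, the base and maximum intervals are arbitrary placeholders rather than recommendations, and it shells out to the Linux ping binary for simplicity:

```python
import subprocess
import time

def ping_once(host, timeout_s=1):
    """Send a single ICMP echo via the system ping; return True if a reply came back."""
    # -c 1: send one packet, -W: reply timeout in seconds (Linux ping syntax)
    result = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), host],
                            capture_output=True)
    return result.returncode == 0

def monitor(host, base_interval=1.0, max_interval=30.0):
    """Ping no more often than once per base_interval, backing off when packets are lost."""
    interval = base_interval
    while True:
        if ping_once(host):
            interval = base_interval                    # host healthy: stay at the base rate
        else:
            interval = min(interval * 2, max_interval)  # losses: back off exponentially
        time.sleep(interval)

# monitor("192.0.2.1")   # example invocation (TEST-NET address, replace with your target)
```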