Average internet delay

Just wondering, what is the average packet transmission delay between two hosts over the internet (ignoring packet loss and retransmission)?
Now, hang on a second before you write that it's too general and depends on too many factors (the location of the two hosts and the network load at a specific time, just to name a few); I'm aware of that.
Yet that's exactly why I'm asking for the AVERAGE delay. There must be some record of that.
Maybe it's appropriate to ask for separate countrywide/continent-wide/intercontinental averages, too. Whatever makes sense.

However you ask it, this question is WAAAYYYY too general. Ping times can give you a reasonable approximation, though. My average to a Google host:
round-trip (ms) min/avg/max/med = 20/23/37/21
Yahoo:
round-trip (ms) min/avg/max/med = 19/23/38/23
Baidu (China):
round-trip (ms) min/avg/max/med = 269/272/275/272
Pair (Pittsburgh):
round-trip (ms) min/avg/max/med = 63/66/73/67
Google and Y! are using content-distribution networks, so I am most likely hitting servers very nearby. Baidu is across the world from me; Pair is across the country. These are all from a relatively fast connection.
I'd expect a dialup user to see figures approximately 100-200 ms higher (depending on network activity at the time). Similarly, my figures would increase significantly if my network were heavily loaded (it's not at the moment).
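If you want to collect your own numbers instead of eyeballing ping output, here's a minimal sketch that approximates RTT by timing TCP handshakes (the hosts and port 443 are just assumptions; ICMP ping needs raw sockets in Python, so this sidesteps that):

```python
import socket
import time

def tcp_rtt(host, port=443, samples=10):
    """Estimate round-trip time by timing TCP handshakes."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; the handshake took roughly 1 RTT
        rtts.append((time.monotonic() - start) * 1000)  # milliseconds
    return min(rtts), sum(rtts) / len(rtts), max(rtts)

for host in ("www.google.com", "www.baidu.com"):
    lo, avg, hi = tcp_rtt(host)
    print(f"{host}: min/avg/max = {lo:.0f}/{avg:.0f}/{hi:.0f} ms")
```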
Does that help at all?

You may find the discussion of this stuff at this page interesting. The author argues that traffic travels at about half the speed of light (the speed of light being the best you can possibly do for traffic speed, assuming various scientists are right).
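That rule of thumb gives a handy lower bound on RTT. A rough sketch (the 4000 km distance is an illustrative assumption, roughly coast-to-coast across the US):

```python
# Rough lower bound on RTT from the "half the speed of light" rule of thumb.
C_KM_PER_MS = 300_000 / 1000       # speed of light: ~300 km per millisecond
effective_speed = C_KM_PER_MS / 2  # traffic moves at roughly c/2 in fiber
distance_km = 4000                 # assumed distance between the two hosts

one_way_ms = distance_km / effective_speed
print(f"one-way floor: {one_way_ms:.0f} ms, RTT floor: {2 * one_way_ms:.0f} ms")
# -> one-way floor: 27 ms, RTT floor: 53 ms
```

No amount of network tuning gets you below that floor; real paths are longer and add queueing and routing delay on top.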

Related

What does BandwidthIn and BandwidthOut graph represent for a service?

I have a service and its bandwidth graph looks like this.
What does it represent? I am using Tutum, which shows me these graphs.
Should I worry about it? Please explain! Any help is appreciated.
Bandwidth is the amount of data sent (Out) or received (In) in a period of time. Mbps stands for megabits per second, i.e., how many bits you sent or received during the past second.
I am sure you have heard about XX Mbps from your internet provider; that is the maximum speed you can have, but you are not required to use the whole bandwidth all the time.
Same thing on Tutum: depending on your hosting provider / instance type you will also have a maximum bandwidth, but at any given time t you are using YY Mbps out of your XX Mbps maximum.
When the graph increases, it simply means that you are sending/receiving more data, which can mean that you have higher traffic or are doing some other kind of network activity.
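To make the unit concrete, here is a minimal sketch of the conversion such graphs do (the counter readings are made-up sample values): take the difference between two readings of a byte counter, divide by the interval, and scale to megabits:

```python
def mbps(bytes_start, bytes_end, interval_s):
    """Convert a byte-counter delta over an interval to megabits per second."""
    bits = (bytes_end - bytes_start) * 8
    return bits / interval_s / 1_000_000

# Hypothetical counter readings taken 60 seconds apart:
print(mbps(bytes_start=1_200_000_000, bytes_end=1_275_000_000, interval_s=60))
# -> 10.0 (Mbps sustained over that minute)
```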

BitTorrent Optimistic Unchoke / Bandwidth probing

While thinking about how BitTorrent works, a few questions come to mind. I would appreciate it if somebody could share a few possible responses.
Suppose a BitTorrent client gets 50 peers from the tracker and then establishes connections with 20 of them to form the peer set. Is this peer set selected randomly or based on bandwidth? (I understand that the peers which will be unchoked are selected based on their offered bandwidth.) Subsequently, how is this bandwidth determined for each connection? (A ping can give us the latency but not the bandwidth, I assume.)
The optimistic unchoke leads to the problem of free riders in the system. Considering that an unchoke might not always result in better peers, why is it not possible to discard this policy altogether? I assume this policy helps peers with low bandwidth to fulfil requests, so why can't BitTorrent adopt a policy that probes the bandwidth of the optimistic peer without sending data packets, and reserve another connection (maybe a 5th) for low-bandwidth peers so that they don't starve? This 5th channel would transmit at only a fraction of the bandwidth of the other 4 channels. That might at least discourage free riding?
Traditionally the peers are selected at random. Some clients may have had weak biases based on previous interactions with the peers or on CIDR distance. However, there is a recent proposal (which uTorrent and libtorrent implement) suggesting a consistent but uniformly distributed peer selection/priority algorithm. For more information, see this blog post. The unchoke algorithm is triggered every 15 seconds. The peers are sorted by the number of bytes they sent during the last 15 seconds; the ones sending the most are unchoked, and the rest are choked. So the download rate used is the 15-second average.
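A rough sketch of that 15-second round (the peer structure and the 4 regular slots are illustrative assumptions, not any client's actual code):

```python
import random

UNCHOKE_SLOTS = 4  # assumed number of regular unchoke slots

def run_unchoke_round(peers):
    """peers: dict mapping peer id -> bytes received from that peer in the
    last 15 seconds. Returns (unchoked peer ids, optimistic unchoke)."""
    # Rank peers by how much they sent us during the last 15 seconds.
    ranked = sorted(peers, key=peers.get, reverse=True)
    unchoked = ranked[:UNCHOKE_SLOTS]
    # Optimistically unchoke one random peer from the rest, so that new
    # peers with no history get a chance to prove themselves.
    rest = ranked[UNCHOKE_SLOTS:]
    optimistic = random.choice(rest) if rest else None
    return unchoked, optimistic

peers = {"a": 90_000, "b": 250_000, "c": 10_000, "d": 120_000, "e": 0}
print(run_unchoke_round(peers))  # e.g. (['b', 'd', 'a', 'c'], 'e')
```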
If you don't optimistically unchoke peers, there's no way for you to prove to them that you are better than the other peers in their unchoke set, and they will never unchoke you back. Without optimistic unchokes (and assuming you don't have the allow-fast extension), there is no way to start a download: when you first join, you won't have any pieces, so you can't trade for the first piece; you have to rely on being optimistically unchoked. Estimating someone's bandwidth without sending bulk data is hard and probably unreliable. Even if you got a good estimate of someone's capacity, that wouldn't necessarily mean that capacity was available to you. The current mechanism is very robust in that it doesn't need to make assumptions about the network equipment between the peers (the way packet-train bandwidth estimation does), and it looks at actual data.

iBeacon / Bluetooth Low Energy (BLE devices) - maximum number of beacons

I would like to track a large number of beacons (~500) at once within a 50-100 m radius via an app on an iPhone (5s). I've had a look at the spec and online, and I can't see any stated limit on the number of beacons you can track at once using BLE. Does anyone know if there is a limit on the number of beacons you can track, or whether an iPhone 5s would be up to the task of tracking that many beacons?
You used the word track, but iOS has two different methods: monitoring and ranging.
You can set a maximum of 20 regions to monitor (found in the documentation for the startMonitoringForRegion: method). Region limits mostly come into play if your app is in the background. The OS will alert your app when you enter or leave a region that you're monitoring (give or take a few minutes). The OS will even launch your app just to let it know what happened (although only for a short time).
The other method is ranging, which finds all the beacons within the Bluetooth range of the device (typically around 100 feet, give or take). If your beacons are spread out over 100 miles, then you probably won't run into any practical limit here. I have not found any documentation on a limit; I have only four beacons to test with, and four at a time works.
Here's one way to handle your situation: make all 500 of your beacons use the same UUID, and make a beacon region using the initWithProximityUUID:identifier: method (the identifier is just for you; it doesn't affect anything). Start monitoring for that beacon region. That way, your app will be notified whenever one of your 500 beacons is found (give or take a few minutes). Once notified, you can use startRangingBeaconsInRegion: to find all the beacons around that area, then use the major and minor values to figure out which beacons the user is near.
I'll add to Tim Tisdall's answer, which sets out the right framework. I can't speak to the specific capabilities of the iPhone 5s, or iOS in general, but I don't see any reason why it wouldn't return every ADV_IND packet (i.e. beacon transmission) that it receives.
The question is, will the 500 beacons be able to transmit their ADV_IND packets without collisions?
It takes about 0.128ms to transmit an ADV_IND packet. The time between advertising transmissions is configurable between 20ms and 10240ms (at intervals of 0.625ms), so the probability of collisions depends on the configuration of the beacons.
Based on the Poisson distribution, the probability of a collision for any given ADV_IND packet is 1-exp(-2*N*(0.128/AI)), where N is the number of beacons within range, AI is the time in milliseconds of the advertising interval (assuming all the beacons are configured the same), and the 0.128 is the time in milliseconds it takes to send the ADV_IND packet. (See http://www3.cs.stonybrook.edu/~jgao/CSE590-fall09/aloha-analysis.pdf if you want an explanation.)
For 500 beacons with the maximum advertising interval of about 10 seconds, there will be a collision about once every 81 packets (or about 6 out of 500). If you're willing to wait for a couple intervals (i.e. 30 seconds), there's a good chance you'll be able to receive all 500 ADV_IND packets.
On the other hand, if the advertising interval is smaller, say 500ms, you'll have a collision about 23% of the time (or 113 out of 500). You'd have to wait for several more intervals to improve the probability that you'd see the broadcasts from all the beacons.
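Here's a small sketch that reproduces those numbers from the formula above (the 0.128 ms packet time and the two advertising intervals are taken straight from this answer):

```python
import math

T_PACKET_MS = 0.128  # time to transmit one ADV_IND packet

def collision_probability(n_beacons, interval_ms):
    """P(collision) for one ADV_IND packet under the ALOHA-style Poisson
    model used above: 1 - exp(-2 * N * (t_packet / interval))."""
    return 1 - math.exp(-2 * n_beacons * (T_PACKET_MS / interval_ms))

for interval in (10_240, 500):  # max interval (~10 s) vs. a 500 ms interval
    p = collision_probability(500, interval)
    print(f"AI={interval} ms: P(collision)={p:.3f}, "
          f"~{500 * p:.0f} of 500 packets collide per interval")
# -> AI=10240 ms: P ~ 0.012 (about 1 in 81 packets, ~6 of 500)
# -> AI=500 ms:   P ~ 0.226 (about 113 of 500)
```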
The other way to look at it is that the more beacons you have, the longer you have to wait to make sure you receive all their packets. (The math to calculate the delay to receive the packets with a certain probability from the number of beacons and the advertising interval is too much for me today.)
One caveat: if you want to connect to these beacons, as opposed to just receiving the ADV_IND packet, that requires an exchange of two more packets on the advertising channels, and the probability of a collision in the advertising channels goes up a bit.
If I am reading your question right, you want to put all 500 iBeacons within 100 meters of each other, meaning their transmissions will overlap. You will probably run into radio congestion problems long before you run into any limitations of iOS7 or your phone.
I have successfully tested 20 iBeacons in close proximity without problems, but 500 iBeacons is an extreme density. This discussion of the hardware issue suggests you may run into trouble.
At a minimum, collisions between the transmissions of 500 iBeacons will make it take longer for your iOS device to see each one. Normally, iOS 7 provides a ranging update once per second for each iBeacon, but you may find that you get updates much less often. Whether or not less frequent updates are acceptable depends entirely on your application.
Even if delays are acceptable, I would absolutely test this before counting on it working at all. Unfortunately, that means getting your hands on lots of iBeacons.
I don't agree. It is true that BLE beacons only transmit advertising data, but the transmission of that data lasts about 3 ms (considering the three advertising channels).
With 500 beacons, WITHOUT considering any collisions, the scanner will take about 1.5 s to see them all.
But if all the beacons are configured the same way (same advertising interval), collisions are inevitable, which leads to undiscovered beacons. Even if the advertising interval differs between beacons, collisions still occur. To reduce the collision probability one should use a longer advertising interval, but that leads to longer discovery latency.
This reasoning is very rough; it ignores many effects and is just an order-of-magnitude calculation.
By the way, the question is not easy: there are many parameters at play, some known and some unknown. But I've been working with BLE for about a year now, and to me 500 is a huge number; there is a real possibility that you won't see the majority of the nodes because of collisions.
I was doing some research into iBeacons because of this question (I had no idea what it was about).
It seems that on the "beacon" side of things, all that happens is that general advertising packets are sent out. It's similar to how a device advertises that you can connect to it. However, you don't actually connect to iBeacons; the phone just reads those advertising packets. There's no built-in limitation on how many advertising packets a device can receive.
So it wouldn't surprise me if 500 iBeacons ran with no issues. The advertising packets are small and spaced out in time (they are repeated every X ms). There's no communication going from the phone to the iBeacon; the phone is simply receiving the packets it hears. If there's interference on one packet, it'll likely manage to get the next one.

Bluetooth Ping Latency

I am currently working on a project involving a Lego Mindstorms kit. The brick is the NXT, and I was curious about the Bluetooth ping rates.
I ran a test of 100 pings on it and got some interesting results. The latencies seemed to fall into bands. I increased to 10,000 pings and it highlighted this trend even more clearly. Does anyone know what could cause this to happen?
In case it is relevant, the distance between the sender and receiver was about 3 metres.
A few reasons:
Buffering, and internal timers to flush buffers, can cause it.
It also depends on the ping interval (i.e., the time between subsequent pings), as the link might go into power-save mode during inactivity, and it will take a finite time to come back up.
The size of the ping packets matters.
Which Bluetooth profile is being used here?
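If you want to see the banding directly, a quick sketch like this can bucket your recorded round-trip times into a text histogram (the sample data and the 5 ms bucket width are assumptions; feed in your own 10,000 measurements):

```python
from collections import Counter

def latency_histogram(rtts_ms, bucket_ms=5):
    """Bucket round-trip times so that distinct latency bands show up
    as separate clusters of counts."""
    buckets = Counter(int(rtt // bucket_ms) * bucket_ms for rtt in rtts_ms)
    for start in sorted(buckets):
        print(f"{start:4d}-{start + bucket_ms:4d} ms: {'#' * buckets[start]}")

# Hypothetical sample showing two "bands", around ~30 ms and ~55 ms:
latency_histogram([28, 31, 29, 33, 30, 56, 54, 58, 55, 32, 57])
```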

Getting UDP traffic statistics between particular hosts on Linux

I need to gather some network statistics to test my server application. I've tried many Linux tools, but nothing I've found suits my needs.
Basically I want to gather some UDP statistics (bytes/time_interval, packets/time_interval, packet_loss), but regarding only two particular hosts; for example, I want to get UDP statistics for traffic going from IP_A:PORT_A to IP_B:PORT_B.
Tools like tcpdump/wireshark can easily dump such traffic, but I have problems getting statistics like momentary speed (to see throughput peaks), and the Linux system statistics give me numbers for all traffic.
Text output would be best, so that it's possible to parse.
Does anyone have an idea how I can achieve this?
Thanks in advance
Harnen
Here's a tutorial for the libpcap library:
http://www.systhread.net/texts/200805lpcap1.php
To determine packets lost, your program will probably want to work on a pair of logs and make sure the UDP messages seen at the source are also found at the destination. A good method is to maintain a window of packets spanning the same amount of time as your timeout: load all the packets into the window, sort them, then search for all the packets in the desired time frame, marking them as found as you go. Once you've exhausted a minute, drop the older half of that minute from the buffer, load the next thirty seconds, and re-sort.
If you have lots of packets (millions? you should probably profile it), it may be faster to use what's called a counting Bloom filter, so you can determine very quickly whether your packet is "probably" in there.
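If a capture file is acceptable as input, here's a minimal sketch of the per-interval statistics using scapy, a Python packet library (the addresses, ports, and filename are placeholders; capture with e.g. tcpdump -w first). It counts packets and bytes per one-second bucket for a single IP_A:PORT_A to IP_B:PORT_B flow:

```python
from collections import defaultdict

from scapy.all import IP, UDP, rdpcap  # pip install scapy

SRC, SPORT = "10.0.0.1", 5000   # placeholder for IP_A:PORT_A
DST, DPORT = "10.0.0.2", 6000   # placeholder for IP_B:PORT_B

buckets = defaultdict(lambda: [0, 0])  # second -> [packets, bytes]
for pkt in rdpcap("capture.pcap"):
    if (IP in pkt and UDP in pkt
            and pkt[IP].src == SRC and pkt[UDP].sport == SPORT
            and pkt[IP].dst == DST and pkt[UDP].dport == DPORT):
        sec = int(pkt.time)
        buckets[sec][0] += 1
        buckets[sec][1] += len(pkt[UDP].payload)

for sec in sorted(buckets):
    pkts, nbytes = buckets[sec]
    print(f"{sec}\t{pkts} pkts\t{nbytes} bytes")  # plain text, easy to parse
```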
If you weren't looking for programming advice, take your question to serverfault.
