Bandwidth loss over distance - am I being fed a line of bull or do I have research to do? - packet-loss

I just had a strange conversation with a man who was trying to explain to me that it is impossible for two healthy networks to communicate with each other across the ocean without significant bandwidth loss.
For example: if a machine connected at 100 Mb/sec here (http://www.hetzner.de/en/hosting/unternehmen/rechenzentrum) attempts to communicate with a machine in the US with exactly the same setup, you'd only achieve a fraction of the original connection speed. This would be true no matter how you distributed the load; the total loss over distance would be the same. "Full capacity" between the US and Germany would be less than half of what it would be to a data center a mile from the originator with the same setup.
If this is true, then my entire understanding of how packets work is wrong. If there's no packet loss, why would there be any issue other than latency? I'm trying to understand his argument but am at a loss. He seems intelligent and 100% sure of his information. It was very difficult to follow because he explained data as a river, while I was thinking of it as a series of packets.
Can someone explain what I'm missing, or am I just dealing with a madman in a position of authority and confidence?

He could be referring to the number of packets you would be able to have 'in flight' at any one time.
Take a look at Wikipedia's entry on Bandwidth Delay Product for some more information on this:
http://en.wikipedia.org/wiki/Bandwidth-delay_product
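To put rough numbers on it: with a fixed window, TCP can only have one window's worth of data in flight per round trip, so a single connection's throughput is capped at window size divided by RTT, no matter how fast the link is. A quick back-of-the-envelope sketch (the 64 KB window and ~90 ms transatlantic RTT are assumptions for illustration, not measurements):

    # TCP throughput ceiling: one receive window per round trip.
    window_bytes = 64 * 1024  # classic 64 KB window (no window scaling)
    rtt = 0.090               # rough Germany <-> US round-trip time, in seconds
    print(window_bytes / rtt * 8 / 1e6, "Mbit/s")    # ~5.8 Mbit/s ceiling
    print(window_bytes / 0.001 * 8 / 1e6, "Mbit/s")  # ~524 Mbit/s to a DC 1 ms away

So a single unscaled TCP connection genuinely is slower across an ocean, even with zero packet loss; window scaling or several parallel connections removes that particular cap.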
That said, depending on the link between those two places, I don't think the latency would be high enough to cause serious problems here (assuming a fibre connection, not satellite).
He could also be referring to the fact that setting up a TCP connection takes a number of round trips, so the apparent speed for an end user who is opening lots of small connections (web browsing) might be lower.
-Matt

No depot VRP - roadside assistance

I am researching a problem that is fairly unusual.
Imagine a roadside assistance company that wants to dynamically route its vehicles: for each batch of new incidents, it wants to create routes that satisfy them, subject to some constraints (time constraints, road accessibility, vehicle-incident matching).
The company has a heterogeneous fleet of vehicles (motorbikes for the easy cases, up to tow trucks for the hard cases), and each incident states its requirements (we know whether it needs just fuel or needs towing).
There is no depot, only the vehicles roaming the streets.
The objective is to dynamically create routes on the fly, minimizing total time and total traveled distance.
Have you ever met such a problem? Do you have any idea in which VRP variant it belongs?
I have seen two previous questions, but unfortunately they don't fit my problem:
optaplanner - VRP but with no depot and Does optaplanner out of box support VRP with multiple trips and no depot, which are both open VRPs.
Unfortunately I don't have code right now, as I am still modelling my approach to the problem.
I am really sorry for asking a suggestion question rather than a concrete one.
Thank you so much in advance.
It's a rich dynamic/realtime vehicle routing problem. You won't find an exact name for it; when VRPs get this complex, they no longer fit into any of the standard categories.
It's clearly a dynamic/realtime problem (the terms are used interchangeably) as you would typically only find out about roadside breakdowns at short notice.
Sometimes you're servicing a broken-down car, which is a single stop (a standard vehicle routing problem). Sometimes you're towing a car, which is a pickup-and-delivery problem. So you have a mix of both.
You want to get to the broken-down vehicles ASAP, and some need attention sooner than others (think of a car broken down in a dangerous position on a motorway). You therefore need soft time windows, so you can penalise lateness, instead of the hard time windows supported in most standard VRP formulations.
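To make that concrete, here is a minimal sketch of a soft time-window penalty (the stop ETAs, due times and weights are invented for the example):

    def lateness_penalty(arrival, due, weight):
        # Soft time window: zero cost if on time, linear cost per unit late.
        return weight * max(0.0, arrival - due)

    # (eta, due, urgency weight) per stop; a breakdown in a dangerous spot
    # on a motorway gets a much higher weight than one on a quiet street.
    stops = [(10.0, 15.0, 1.0), (42.0, 30.0, 10.0), (55.0, 50.0, 1.0)]
    print(sum(lateness_penalty(eta, due, w) for eta, due, w in stops))  # 125.0

The solver then minimises this penalty alongside travel time and distance, instead of rejecting late arrivals outright.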
Also, to scale to larger problems, you need an incremental optimiser that can restart from the previous (possibly now infeasible) solution when new jobs are added, vehicle positions change, and so on. This isn't supported out of the box in the open-source solvers I know of.
We developed a commercial engine that does the above. We started off using the jsprit library, which supports mixing single-stop and pickup-and-delivery problems. We later had to replace jsprit because of the amount of code we had to override to get it running happily on realtime problems; however, jsprit may still prove a useful starting point for you. We discuss some of the early technical obstacles we had to overcome in getting jsprit to handle realtime problems in this white paper.

Should error detection mechanisms always be relied on?

I know that in networking, error detection (or sometimes correction) mechanisms are applied in the data link layer, the network layer, in TCP, and even in higher layers. But, for example, for each 4 KB of data, counting the error detection of all layers, as much as 200 bytes may be spent on error checking. Even so, with good checksum functions, collisions are still theoretically possible. Why do people rely on these error detection mechanisms, then? Are anomalies really that unlikely to occur?
If you want the short answer: no, they cannot always be relied on. If you have really critical data, then you should encapsulate the data yourself, or transfer a good hash (e.g. SHA-256) over a separate channel to confirm that the data was transferred without errors.
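A minimal sketch of that end-to-end check (the names and payload are illustrative):

    import hashlib

    def sha256_hex(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    payload = b"critical data"
    sent_digest = sha256_hex(payload)  # shipped over a separate channel
    received = payload                 # whatever actually arrived
    assert sha256_hex(received) == sent_digest, "corruption detected"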
The Ethernet CRC will catch most errors, e.g. any single-bit error or any odd number of bit errors. Some errors can go undetected, but that is extremely rare; the exact probability is debatable, but it is less than 1 in 2^32. Moreover, every Ethernet device between source and destination recalculates the CRC, so the scheme is more robust to errors, assuming every device is working properly.
These remaining errors should be caught by the IP and TCP checksums. But those checksums cannot detect all errors either, e.g. reordering of two-byte words, or multiple errors that sum to zero.
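You can see the word-reordering weakness directly: the Internet checksum is a one's complement sum of 16-bit words, and addition doesn't care about order. A small demo with my own RFC 1071-style implementation (written for illustration):

    def internet_checksum(data: bytes) -> int:
        # RFC 1071: one's complement sum over 16-bit big-endian words.
        if len(data) % 2:
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
        while total >> 16:  # fold the carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    original  = b"\x12\x34\x56\x78"
    reordered = b"\x56\x78\x12\x34"  # same 16-bit words, swapped
    print(internet_checksum(original) == internet_checksum(reordered))  # True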
In "Performance of Checksums and CRCs over Real Data" by Jonathan Stone, Michael Greenwald, Craig Partridge and Jim Hughes you could find some real data that suggest that about one in billion TCP segments have correct checksum while containing corrupted data.
So I would say that the error detection mechanisms in the ISO/OSI model give us enough protection for most applications: they remove most errors while staying efficient and fast. But if you add a hash on top, you are almost completely robust to errors. Just check the table from the article on hash collisions.

How to simulate the ALOHA protocol using a programming language?

I'm new to networking in general. I read about a protocol called ALOHA, and I would like to build a simple simulator for the Pure version of it.
Although I understand the concepts, I find it difficult to start.
Basically, we have N senders in the network. Each sender wants to send a packet. A sender doesn't care whether the network is busy or occupied by some other sender: if it wants to send data, it just sends it.
The problem is that if two senders send data at the same time, the transmissions collide and both packets are destroyed.
Since they are destroyed, the two senders will need to send the same packets again.
I understand this simple concept; the difficulty is how to model it using probabilities.
Now, I need to find the throughput, which is the rate of (successful) transmission of frames.
Before going any further, we have to make some assumptions:
All frames have the same length.
Stations cannot generate a frame while transmitting or trying to transmit. (That is, if a station keeps trying to send a frame, it cannot be allowed to generate more frames to send.)
The population of stations attempts to transmit (both new frames and old frames that collided) according to a Poisson distribution.
I can't really understand the third assumption: how do I apply this probability in ALOHA?
I can't find a single code example online to get an idea of how this would be done...
Here is some further information on this protocol:
http://en.wikipedia.org/wiki/ALOHAnet#Pure_ALOHA
I think you have to split time into intervals; in each interval, a certain number of stations attempt to transmit. This number is the number of events occurring in a fixed interval of time, as described at http://en.wikipedia.org/wiki/Poisson_distribution.
You have to model it according to the Poisson distribution.
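As a concrete starting point, here is a small simulation sketch along those lines (the constants and structure are one possible choice, not the canonical way): generate frame start times as a Poisson process with rate G, the offered load in frames per frame time, and count a frame as successful if no other frame starts within one frame time on either side.

    import math
    import random

    def simulate_pure_aloha(G, num_frames=200_000):
        # Poisson process: exponential inter-arrival gaps with mean 1/G
        # (the frame time is taken as the unit of time).
        starts, t = [], 0.0
        for _ in range(num_frames):
            t += random.expovariate(G)
            starts.append(t)
        # A frame starting at t succeeds if no other frame starts in
        # (t - 1, t + 1): Pure ALOHA's vulnerable period is 2 frame times.
        successes = 0
        for i, t in enumerate(starts):
            prev_ok = i == 0 or starts[i - 1] <= t - 1
            next_ok = i == len(starts) - 1 or starts[i + 1] >= t + 1
            successes += prev_ok and next_ok
        return successes / (starts[-1] - starts[0])  # throughput S

    for G in (0.25, 0.5, 1.0):
        print(G, simulate_pure_aloha(G), G * math.exp(-2 * G))

The simulated throughput should track the textbook curve S = G * e^(-2G), which peaks at about 0.184 frames per frame time at G = 0.5.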
My 2 cents, hope this helps

Do I need to worry about the overall audio gain in my program?

Is there level limiting somewhere in the digital audio chain?
I'm working on a tower defence game with OpenAL, and there will be many towers firing at the same time, all using the same sound effect (at least for now). My concern is that triggering too many sounds at the same time could lead to speakers being blown, or at the very least a headache for the user.
It seems to me that there should be a level limiter either in software, or at the sound card hardware level to prevent fools like me from doing this.
Can anyone confirm this, and if so, tell me where this limiter exists? Thanks!
as it is, you'd be lucky if the signal were simply clipped in software before it hit the DAC. you can easily implement this yourself. when i say 'clipped', i mean that amplitudes exceeding the maximum are set to the maximum, rather than allowed to overflow, wrap, or produce other even less pleasant results. clipping at this stage often sounds terrible, but the alternatives i mentioned sound worse.
there's actually a big consideration here: are you rendering in float or int? if int, what is your headroom? with int, you could clip or overflow at practically any stage. with floating point, that will only happen through a serious design flaw. of course, you'll usually have to convert to int eventually, when interfacing with the DAC/hardware. the DAC will limit the output because it handles signals within very specific limits. at worst, the result is the equivalent of (sampled) white noise at 0 dB FS (which can be an awful experience for the user). so... the DAC serves as a limiter of sorts, but this stage only makes it less probable that a signal will cause hearing or equipment damage.
at any rate, you can easily avoid all this, and i recommend you handle it yourself, since you're directly in control of the number of sounds and their amplitude. at worst, samples with peaks of 0 dB FS will all line up at the same sample, and you'll need to multiply the signal (the sum of the shots) by the reciprocal of the number of shots:
output[i] = numShots > 1 ? allThoseShots[i]*(1.0/numShots) : allThoseShots[i];
that's not ideal in many cases (you'll get an exaggerated ducking sound). so you should actually introduce a limiter in addition to the overall reduction for the number of simultaneous shots; then you can back off the shots signal by a smaller factor, since their peaks are unlikely to line up at the same point in time. a simple limiter with ~10 ms of lookahead should prevent you from doing something awful. it's also a good idea to detect heavy limiting in debug mode, as it catches upstream design issues.
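here's a rough sketch of such a limiter (numpy, mono float signal in [-1, 1]; the 0.9 threshold and 441-sample lookahead, ~10 ms at 44.1 kHz, are placeholder numbers):

    import numpy as np

    def lookahead_limiter(x, threshold=0.9, lookahead=441):
        # for each sample, scale down just enough that the loudest peak
        # within the next ~10 ms stays at the threshold.
        gain = np.ones(len(x))
        for i in range(len(x)):
            peak = np.max(np.abs(x[i:i + lookahead]))
            if peak > threshold:
                gain[i] = threshold / peak
        # smooth the gain so the correction fades in instead of jumping;
        # taking the minimum keeps the output at or below the threshold.
        smoothed = np.convolve(gain, np.ones(lookahead) / lookahead, mode="same")
        return x * np.minimum(gain, smoothed)

this is O(n * lookahead) and not production dsp, but it shows the idea: the gain starts dropping before the peak arrives instead of reacting after the fact.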
in any case, you should definitely consider appropriate gain compensation your responsibility - you never want to clip the output DAC. in fact, you want to leave some headroom (ref: intersample peaks).

Optimizations in security systems

I would like to know what we mean when we say an optimized security system (physical or logical).
Does it mean something like a system that can monitor the performance of services, SQL, DB maintenance, logs, etc.?
Thanks
"Optimized" is a general term; you will have to get specific and define what the system needs before you can consider it optimized to an "acceptable" level. Plus, there are different kinds of "optimization": for speed, for memory usage, for maintainability, etc.
Are you trying to figure out some criteria so that you can market your product as "optimized" and be able to explain it if someone asks what you mean?
If so, you need to figure out what your customers (or potential customers) actually care about. If they care about video resolution and disk space usage (how much the system can store before having to archive elsewhere), then you need to make your application smart (optimized! :) in those areas.
THEN, you could be more specific in your marketing and say, "optimized to use XYZ resolution and store up to 2 weeks of video on a standard hardware setup!" - which would actually mean something tangible to your customers, and show them that you care about what they care about.
