On Jan 14, 2016, at 10:33 PM, Johnny Billquist <bqt
at softjar.se> wrote:
...
DECnet does *not* deal gracefully with massive packet loss. :-/
Performance really goes down the drain. Doing morse with a flashlight bouncing off the
moon will be faster.
No ARQ (automatic repeat request) protocol deals gracefully with significant packet loss. There was a paper about
TCP decades ago, way back when the Internet was still called "ARPAnet". It
showed that a 1% packet loss rate would cause a 50% performance drop. With modern link
speeds, the impact is even greater. This is unavoidable because the response to packet
loss is timeout and retransmission, and a timeout by definition has to take longer than
the likely round trip latency.
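To make the timeout penalty concrete, here is a toy stop-and-wait ARQ model of my own (not the ARPAnet paper's analysis, which involved windowing effects this ignores). The RTT and timeout values are illustrative assumptions; the point is just that every loss costs a full timeout, which by definition exceeds the RTT:

```python
# Toy model of timeout-based ARQ under random packet loss.
# Assumed parameters: 50 ms RTT, 1 s retransmission timeout.

def expected_time_per_packet(loss_rate, rtt, timeout):
    """Stop-and-wait ARQ: mean time to deliver one packet.

    Each failed attempt costs `timeout`; the final successful attempt
    costs `rtt`.  Expected failures per packet = loss_rate / (1 - loss_rate).
    """
    expected_failures = loss_rate / (1.0 - loss_rate)
    return expected_failures * timeout + rtt

def relative_throughput(loss_rate, rtt, timeout):
    """Throughput relative to a loss-free link (1.0 = no slowdown)."""
    return rtt / expected_time_per_packet(loss_rate, rtt, timeout)

if __name__ == "__main__":
    rtt, timeout = 0.05, 1.0
    for p in (0.001, 0.01, 0.05, 0.10):
        print(f"loss {p:5.1%}: "
              f"{relative_throughput(p, rtt, timeout):5.1%} of lossless throughput")
```

Even this crude model shows throughput roughly halving at a few percent loss with these numbers; a real windowed protocol like TCP fares worse still, since a loss also collapses the send window.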
There do exist protocols designed for substantial packet loss, but they are far from
mainstream. (Basically, they use FEC -- redundancy in transmission -- rather than timeout
and retry.) You might find them in deep space satellite downlinks, for example, or
shortwave or weak signal radio data networks like JT65. But DDCMP, X.25, TCP, TP4, and
DECnet are all examples of protocols designed with an assumption that packet loss is
fairly rare. It isn't usually stated explicitly, but as the ARPAnet paper showed, you
really need to aim for well under 1 percent loss, especially on fast links.
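To illustrate the FEC idea (this is my own minimal sketch, not the actual coding used by JT65 or deep-space links, which use far stronger codes like Reed-Solomon or convolutional codes): a single XOR parity packet over a block of k data packets lets the receiver rebuild any one lost packet without a retransmission round trip.

```python
# Minimal FEC sketch: one XOR parity packet per block of equal-length
# data packets recovers a single loss with no retransmission.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """Parity packet = XOR of all data packets in the block."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = xor_bytes(parity, p)
    return parity

def recover(received, parity):
    """received: the block with exactly one entry replaced by None.

    XORing the parity with every surviving packet yields the lost one.
    """
    missing = received.index(None)
    rebuilt = parity
    for i, p in enumerate(received):
        if i != missing:
            rebuilt = xor_bytes(rebuilt, p)
    return rebuilt

if __name__ == "__main__":
    block = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
    parity = make_parity(block)
    damaged = [b"pkt0", None, b"pkt2", b"pkt3"]  # packet 1 lost in transit
    assert recover(damaged, parity) == b"pkt1"
```

The price, of course, is that the redundancy is transmitted whether or not anything is lost, which is exactly the trade these protocols make when retransmission is impractical.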
Many of these protocols also assume that packet loss is random (caused by link errors). The
exception is loss due to congestion, which congestion control addresses; recent versions of
DECnet have it, and so does TCP (whose congestion control was originally based in part on
the DECnet work, and then extended further).
paul