On Mar 27, 2021, at 4:51 PM, Thomas DeBellis
<tommytimesharing at gmail.com> wrote:
I believe our site had three situations. Our DN20's had both KMC's and
DUP's. The KMC's were used for local computer center communications over distances of 10 to 30
feet (the 20's were running in a star configuration). The DECnet of the time was
reliable enough until you put a continuous load on the KMC's. If you wanted to
guarantee transmission, then you had to write an application-level protocol on top of that
to declare a connection down and to renegotiate. This still works with the inconsistent
NI implementation on KLH10.
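To sketch the shape of that application-level layer: a toy in modern Python (every name and number here is invented, and it is nothing like what we actually ran), where the idea is a sequence-numbered send, an acknowledgement timeout, and a declare-down-and-renegotiate step after enough misses.

    MAX_MISSES = 3        # consecutive unacknowledged sends before giving up

    class ReliableChannel:
        # Toy "guaranteed transmission" layer over an unreliable link.

        def __init__(self, link):
            self.link = link   # anything with send(bytes) and recv(timeout) -> bytes or None
            self.seq = 0
            self.up = True

        def send(self, data, timeout=1.0):
            misses = 0
            while misses < MAX_MISSES:
                self.link.send(bytes([self.seq]) + data)   # one-byte sequence number in front
                ack = self.link.recv(timeout)
                if ack and ack[0] == self.seq:             # matching acknowledgement arrived
                    self.seq = (self.seq + 1) % 256
                    return True
                misses += 1                                # lost or late; send it again
            self.up = False         # declare the connection down...
            self.renegotiate()      # ...and renegotiate before anyone sends again
            return False

        def renegotiate(self):
            self.seq = 0
            self.up = True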
The KMC was a microprocessor (same or similar to the one in the DMC-11) which sat
on the Unibus and converted the device under control to look like a DMA device to the
host system. Microcode was written to support KMC/DUP, KMC/DZ, and KMC/LP, although I don't
know if that last one ever made it into customer hands. The main problem with using the
KMC was that it had to poll the device it was controlling and would burn through Unibus
cycles even when there was no traffic.
John.
I do not recall whether we ever determined where the
problem was: the Tops-20 DECnet III implementation of the time, the DN20 (MCB), or the KMC.
I don't recall what the KMC had for hardware error detection; since the lines were
synchronous, I would imagine it would flag certain framing errors, at a minimum. The
DN20's were left largely unused once we got NI-based DECnet, whose performance blew
the KMC right out of the water. The CI speeds were just about incomprehensible for the
time. We put timing code into our transfer application; I can't remember the exact
speeds, but they were highly admired.
We had considered removing the DN20's, but conservatively kept them as a fallback in
case of a CI or NI outage. The other reason was the 'non-local' lines
that ran to other parts of the campus and long distance to CCnet nodes. These both used
DUP's. I can't remember how the campus lines were run, but the long-distance lines were
handled by a pair of 9600 baud modems on leased lines. I can't remember whether the modems
were configured as synchronous or asynchronous.
In the case of the DUP's, there were plenty of errors to be had, but they were not as
highly loaded as the KMC's, which were saturated until the NI and CI came along.
The third case was that of the DN65 we used to talk to our IBM hardware. That was data-center
local and ran a KMC. The protocol spoken was HASP bisync. I don't recall
what the error rate was; HASP would correct for it. A bucketload of data went over that
link. While I had to mess a tiny bit with the DN60 PDP-11 code to change a translate
table entry, I don't remember having to fiddle with the DN60 much beyond that. IBMSPL
was another thing entirely; we were an early site, and I had known one of the developers at
Marlboro. It had some teething problems but eventually was fairly reliable.
I was wondering whether the problem of running DDCMP over UDP might be one of error
timing. If you blew it on a KMC or DUP, the hardware would let you know pretty quickly;
milliseconds. The problem with UDP is how soon you declare an error. If you have a
packet going a long way, it might take too long to declare the error. It's a thought,
but you can get delays in TCP, too, so I'm not sure whether the idea is half-baked.
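To put the timing point in concrete terms, here is a trivial sketch (Python over UDP, with made-up timer values, and not anything PyDECnet actually does): whatever timeout you choose has to split the difference between the milliseconds a KMC or DUP gave you and the round trip of a long-haul path.

    import socket

    # Made-up numbers, purely for illustration.
    LOCAL_ERROR_LATENCY = 0.002   # a KMC or DUP flagged trouble within milliseconds
    INTERNET_RTT        = 0.150   # a UDP packet crossing the internet and back
    LISTEN_TIMEOUT      = 0.050   # how long we wait before declaring the exchange failed

    def exchange(sock, frame, peer, timeout=LISTEN_TIMEOUT):
        # Send one encapsulated frame over UDP and wait for a reply.
        # Returns True if a reply arrived within 'timeout', else False.
        # A False is ambiguous: the datagram may be lost, or the path may
        # simply be longer than the timeout allows for.
        sock.settimeout(timeout)
        sock.sendto(frame, peer)
        try:
            sock.recvfrom(2048)
            return True
        except socket.timeout:
            return False

Pick LISTEN_TIMEOUT short enough to mimic the hardware and every exchange over a 150 ms path "fails"; pick it long enough for the long haul and a genuinely dead local line goes unnoticed far longer than it ever did on the KMC.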
On 3/27/2021 1:59 PM, John Forecast wrote:
>
>> On Mar 27, 2021, at 11:06 AM, Mark Berryman <mark at theberrymans.com> wrote:
>>
>> DDCMP was originally designed to run over intelligent synchronous controllers,
such as the DMC-11 or the DMR-11, although it could also be run over async serial lines.
Either of these could be local or remote. If remote, they were connected to a modem to
talk over a circuit provided by a common carrier, and async modems had built-in error
correction. From the DMR-11 user manual describing its features:
>> DDCMP implementation which handles message sequencing and error correction by
automatic retransmission
>>
>
> No. DDCMP was designed way before any of those intelligent controllers. DDCMP V3.0
was refined during 1974 and released as part of DECnet Phase I. The customer I was working
with had a pair of PDP-11/40s, each having a DU-11 for DECnet communication at 9600 bps.
DDCMP V4.0 was updated in 1977 and released in 1978 as part of DECnet Phase II, which
included DMC-11 support. The DMC-11/DMR-11 included an onboard implementation of DDCMP to
provide message sequencing and error correction. Quite frequently, customers would have a
DMC-11 on a system communicating with a DU-11 or DUP-11 on a remote system.
>
> John.
>
>> In other words, DDCMP expected the underlying hardware to provide guaranteed
transmission or be running on a line where the incidence of data loss was very low. UDP
provides neither of these.
>>
>> DDCMP via UDP over the internet is a very poor choice and will result in exactly
what you are seeing. This particular connection choice should be limited to your local
LAN, where UDP packets have a much higher chance of surviving.
>>
>> GRE survives much better on the internet than UDP does, and TCP guarantees
delivery. If possible, I would recommend using one of these encapsulations, rather than UDP,
for DECnet packets going to any neighbors over the internet.
>>
>> Mark Berryman
>>
>>> On Mar 27, 2021, at 4:40 AM, Keith Halewood <Keith.Halewood at pitbulluk.org> wrote:
>>>
>>> Hi,
>>>
>>> I might have posted this to just Paul and Johnny, but it's probably good for a
bit of general discussion, and it might enlighten me, because I often have a lot of
difficulty in separating the layers and functionality around tunnels of various types,
carrying one protocol on top of another.
>>>
>>> I use Paul's excellent PyDECnet, and about half the circuits I have connecting
to others consist of DDCMP running over UDP. I feel as though there's something missing,
but that might be a misunderstanding. A DDCMP packet is encapsulated in a UDP one and sent.
The receiver gets it or doesn't, because that's the nature of UDP. I'm discovering it's
often the latter. A dropped HELLO or its response brings a circuit down. This may explain
why there's a certain amount of flapping between PyDECnet's DDCMP-over-UDP circuits. I
notice it a lot between area 31 and me, but much less so with others.
>>>
>>> In the old days, DDCMP was run over a line protocol (sync or async) that had
its own error correction/retransmit protocol, was it not? So a corrupted packet containing
a HELLO would be handled at the line level and retransmitted, usually long before a listen
timer expired?
>>>
>>> Are we missing that level of correction and relying on what happens higher up
in DECnet to handle missing packets?
>>>
>>> I'm having similar issues (at least on paper) with an implementation of the
CI packet protocol over UDP, having initially (and quite fatally) assumed that a packet
transmitted over UDP would arrive and therefore wouldn't need any of the lower-level
protocol that a real CI needed. TCP streams are more trouble in other ways.
>>>
>>> Just some thoughts
>>>
>>> Keith
>>
>