On Mar 28, 2021, at 4:40 PM, Johnny Billquist <bqt at softjar.se> wrote:
On 2021-03-28 22:10, Paul Koning wrote:
On Mar 28, 2021, at 3:33 PM, Johnny Billquist <bqt at softjar.se> wrote:
...
And at least in RSX, there is no generic "point-to-point" type of link.
The types that exist are:
. DDCMP
. Bisync
. SDLC/ADCCP/HDLC
. X.25
. Ethernet
. Hardware protocol
Where hardware is a bit like just saying that the hardware does something, and it's
good enough. No software required, and it will work.
But all except Ethernet are more or less point-to-point, though DDCMP can also be used
for multidrop connections. The others imply point-to-point, but may have other software
layers added to handle the specifics of that link protocol.
It's certainly possible to define any number of data link types, with a wide
variety of services. Some are useable with DECnet, others are not.
That list was pulled from the actual DECnet/RSX sources. It's not an arbitrary or
generic description of DECnet or links; it's telling you what is in RSX.
For point to point, DDCMP and X.25 are both DNA
standard and work. Among the others you mentioned, I would think that Bisync and
SDLC/HDLC/ADCCP would work since they have comparable semantics. "Hardware"
does not appear to meet the stated requirements, not unless it has some sort of control
wires like the modem control signals of RS-232 to deliver the "restart
detection" requirement.
As I mentioned, it basically just assumes the hardware does something. DECnet/RSX
doesn't have a clue, and does not get involved, leaving it all to the hardware.
There are no specific details that can be deduced further for that one. But it is one of
the possible values to assign to a line type.
That might have been used for PCL11-B support which was added in the Phase III RSX
product. It was a parallel TDM bus (at 1 Mb/s I think) connecting up to 16 systems. For
DECnet, each node appeared as the master of a multipoint link with up to 15 tributaries.
There was a full mesh of logical point-to-point connections between all nodes. Like the
DMC-11, it plugged into the system at the Data Link Control layer.
Although I've never seen it mentioned, DEC did build a prototype Unibus Ethernet
controller prior to the DEUNA. It had 1 transmit and 1 receive buffer on the board and
transfers were handled by copying data through an I/O page address, sort of like some of
the early 3Com PC adapters. Hardware engineering used it for debugging the H4000
transceiver and only about a dozen were built. I got DECnet-RSX Phase III up and running
on this hardware using a similar addressing scheme to the PCL11-B.
John.
Then there is
GRE, which was done as a broadcast datalink but differs in that it doesn't have
addresses. It works reasonably well because for a two-station connection you can live
without having real addresses. It does have the Ethernet protocol type field; living
without that would be more painful though in theory still possible.
Obviously, GRE does not exist in the DECnet/RSX world. :-)
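For reference, the GRE encapsulation being discussed is tiny. A minimal sketch, assuming
the RFC 2784 base header and the DECnet Phase IV EtherType of 0x6003; this is
illustrative, not PyDECnet's actual code:

    import struct

    ETHERTYPE_DECNET = 0x6003   # DECnet Phase IV routing, same value as on Ethernet

    def gre_encap(payload, ethertype=ETHERTYPE_DECNET):
        # RFC 2784 base header: 16 bits of flags/version (all zero here:
        # no checksum, version 0), then the 16-bit protocol type, which
        # is an EtherType.  No addresses anywhere.
        return struct.pack("!HH", 0, ethertype) + payload

    def gre_decap(frame):
        flags, ethertype = struct.unpack("!HH", frame[:4])
        hlen = 8 if flags & 0x8000 else 4   # checksum bit adds 4 header bytes
        return ethertype, frame[hlen:]

The whole thing rides directly on IP as protocol 47, so with only two stations the
protocol type field does the demultiplexing that Ethernet addresses would otherwise
help with.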
Multinet is quite another matter. It claims to
be a point to point datalink, but it doesn't obey any of the clearly written
requirements of a point to point datalink. That failure to conform is the reason it
doesn't work right. In the UDP case that is more obvious, but both versions are
broken, it's just that the TCP flavor doesn't show it quite so often.
Technically, TCP should work just fine. But yes, Multinet even has some funny specific
behavior that makes even TCP a bit tricky.
It's really annoying how they seem to have gone out of their way to make it harder.
But we've talked about this before.
It would have been easier and far more correct to
model the Multinet datalink as a broadcast link, just like GRE did. Unfortunately we
can't do that unilaterally because the routing layer behavior for those two subtypes
is different -- different hello protocols and different data packet headers. So we're
stuck with the bad decision the original fools made.
Right.
And we can't do it anyway, since the existing VMS implementation is the way it is,
and won't change.
If I wanted to, I could do something on the RSX side which would be more appropriate, and
you could obviously do that also in PyDECnet. But we'd still have to deal with the
broken VMS ways...
But I wouldn't actually model it on Ethernet. For a TCP connection, a ptp link is the
perfect way to look at it.
And I probably wouldn't use UDP at all. But if I did, then yes, it would be modelled as
a broadcast link, even though there would only ever be two points to it.
DDCMP is an entirely different beast. It provides that
reliable data link, and it sits on top of an unreliable physical layer. It handles packet
loss mostly because it handles bit error, which causes packet CRC error and is handled by
timeout and retransmit ("ARQ").
Right. Well... It handles transmission errors through CRC. It also handles packet loss,
by using sequence numbers for packets. (Otherwise a fully lost packet would not be
detected; CRC doesn't spot such things.)
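For reference, DDCMP's CRC is the classic CRC-16, polynomial x^16 + x^15 + x^2 + 1. A
bit-at-a-time sketch, written for clarity rather than speed:

    def crc16(data):
        # CRC-16 (x^16 + x^15 + x^2 + 1) in its bit-reflected form,
        # initial value 0.  A frame that arrives with bit errors will
        # almost always fail this check; a frame that never arrives at
        # all obviously can't be checked, hence the sequence numbers.
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return crc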
Sequence numbers also detect duplicates and
reordered packets, with limits. The design of the sequence number space doesn't
account for reordering, since wires don't do that. But if the max queue size is well
below 255, and if the packet lifetime in the connecting network is less than the time it
takes the endpoints to send 256 packets, then reordering will be detected. This is
actually the same constraint as applies to any sequence numbered protocol (like TCP or
NSP) but DDCMP has comparatively small sequence numbers. Not as small as Bisync, though.
:-)
Right. And 255 outstanding packets is plenty in this case, I think.
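A sketch of how that modulo-256 sequencing behaves on the receive side; this is a
hypothetical helper, not code from any of the real implementations:

    class SeqReceiver:
        # DDCMP numbers data messages modulo 256, starting at 1.  The
        # receiver only accepts the next expected number; a duplicate,
        # reordered, or stale packet fails the test and is dropped, to
        # be recovered by the sender's timeout and retransmission.
        def __init__(self):
            self.expected = 1

        def accept(self, seqno):
            if seqno != self.expected:
                return False            # duplicate or out of order
            self.expected = (self.expected + 1) % 256
            return True

This only stays unambiguous under the constraint described above: fewer than 256 packets
in flight or lingering in the connecting network at once, or an old packet's number
could be mistaken for a current one.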
> ...
> But as far as DDCMP goes, you should essentially not use UDP as a transport. That is
> just bad. Use TCP instead. And then you're good.
No. It's certainly ok to use DDCMP over TCP -- except that it has to deal with the
oddity of doing both connect and listen concurrently. But UDP works fine.
Right. My bad. I was totally mixing up DDCMP with the DECnet routing layer and link
management.
As far as using TCP, that comes down to the problem of how to establish the link in the
first place. It's outside of DDCMP itself, and is no different than if you were to run
DDCMP over a dial up line. Only one side is supposed to dial the number, while on the
other side you would be answering the incoming call.
But yes, you need some way to tell which end should be listening, and which end should do
the connecting.
Actually, the SIMH implementation, and therefore the PyDECnet one,
does not do so. That's different from Multinet, which comes with a connect vs. listen
setting. In the case of DDCMP, both endpoints connect and both listen. If a connection
is made the other pending one is abandoned. If both connections are made at essentially
the same time, things get a bit messy, which is why I don't really like this
technique.
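A rough sketch of that scheme, hypothetical rather than the actual SIMH or PyDECnet
code; both ends run the same function, and the first attempt to complete wins:

    import selectors, socket

    def make_link(local_port, peer_host, peer_port):
        sel = selectors.DefaultSelector()

        # Listen for the peer's incoming connect...
        listener = socket.socket()
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("", local_port))
        listener.listen(1)
        sel.register(listener, selectors.EVENT_READ)

        # ...while also connecting outbound to the peer.
        out = socket.socket()
        out.setblocking(False)
        out.connect_ex((peer_host, peer_port))
        sel.register(out, selectors.EVENT_WRITE)

        while True:
            for key, _ in sel.select():
                if key.fileobj is listener:
                    conn, _ = listener.accept()   # inbound won the race
                    out.close()                   # abandon our own attempt
                    listener.close()
                    return conn
                if not out.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR):
                    listener.close()              # our connect won
                    return out
                sel.unregister(out)               # connect failed; wait
                                                  # for the inbound side

If both connects complete at essentially the same time, the two ends can each end up
holding a different socket, which is exactly the messy case being complained about.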
Yeah. I wouldn't do it that way. That is ugly.
Better to designate one as the connector and the other as the listener.
I mean, really. This was already handled the same way in the real world back then.
Dial-up modems as well as X.25 are the exact same type of thing.
Both sides cannot initiate the connection. One is the initiator, and the other accepts
the connection.
Johnny
--
Johnny Billquist                  || "I'm on a bus
                                  ||  on a psychedelic trip
email: bqt at softjar.se          ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol