On 2026-03-02 19:00, Paul Koning wrote:
On Mar 2, 2026, at 12:46 PM, Johnny Billquist <bqt(a)softjar.se> wrote:
On 02/03/2026 18.42, Paul Koning wrote:
On Mar 2, 2026, at 11:44 AM, Johnny Billquist <bqt(a)softjar.se> wrote:
Hm. The command "MNC SET CIR IP-0-5 MODE DDCMP" should have given a warning
that this has been deprecated. In essence, it does nothing.
All links are just multinet links, which is DDCMP, with a small header to just get whole
packets when using TCP.
Johnny
Well, Multinet is not DDCMP, it isn't even in the same province.
PyDECnet certainly supports that, and when used over TCP should work ok in spite of its
fundamental design errors.
Believe it or not, but my Multinet links in RSX are run through the DDCMP point-to-point
handler.
All I do is add/remove the Multinet 4 byte header (well, I also make use of that header,
obviously). The rest I never had to touch.
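For illustration, that add/remove step might look like the sketch below. This is not the RSX code; it assumes the 4-byte Multinet header is a 16-bit little-endian payload length followed by two padding bytes, which should be checked against a real implementation:

```python
import struct

def multinet_frame(payload: bytes) -> bytes:
    # Prefix a DECnet packet with the 4-byte Multinet header.
    # Assumed layout: 16-bit little-endian length, then two padding bytes.
    return struct.pack("<HH", len(payload), 0) + payload

def multinet_unframe(frame: bytes) -> bytes:
    # Strip the 4-byte header and hand back exactly one whole packet,
    # which is the point of the header when running over a TCP byte stream.
    (length,) = struct.unpack("<H", frame[:2])
    return frame[4 : 4 + length]
```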
DECnet packet sizes are controlled by the
executor parameters, in particular the segment buffer size you mentioned. Yes, Ethernet
supports bigger frames, but 576 was the conventional max size used with DECnet. Actually,
DDCMP also supports much bigger frames (up to 16k is the max the protocol can handle) but
that certainly isn't normally done, especially not on PDP-11s for obvious reasons...
The reason for the size error in the logs is that RSX expects a Multinet header, which
encodes the packet length, in front of the packet. I do a little bit of sanity checking
on it, and that's why you see the error. I suspect/expect that when you say DDCMP on
pyDECnet, it doesn't add the Multinet header, so the first 4 bytes are then interpreted
by RSX as a Multinet header, which is not what they actually are.
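The receive side of that sanity checking could be sketched as follows. The names and the bound are hypothetical (not RSX's actual limit), and the little-endian length layout is the same assumption as above:

```python
import struct

MAX_MULTINET_LEN = 16384  # hypothetical sanity bound, not RSX's actual limit

def read_multinet_packet(recv) -> bytes:
    # recv(n) is assumed to return exactly n bytes from the TCP stream.
    header = recv(4)
    (length,) = struct.unpack("<H", header[:2])
    if length == 0 or length > MAX_MULTINET_LEN:
        # Without a Multinet header, whatever bytes arrive first get
        # interpreted as one, and an implausible length shows up here.
        raise ValueError(f"implausible Multinet length field: {length}")
    return recv(length)
```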
Um, what? Now I'm confused.
Yeah. Sorry. My fault. Using bad terminology here, as well as not really
thinking enough before answering.
Multinet, as implemented in VMS, is a datalink, a "point to point" one in DECnet
terminology, apart from the fact that it doesn't include the required semantics. It has
the 4-byte header you mentioned, followed by the payload, so after those 4 bytes you see
a routing header.
Right. In RSX, such lines are "controlled" by the DDCMP Point
"driver".
But it is named so because all point-to-point links that existed back in the DEC days
were DDCMP. The actual DDCMP link-layer processing happens in a lower-level driver, and
the DDCMP Point driver is just fed the packet after the DDCMP bits have been stripped.
Which is the same as what you get from the Multinet driver, once the Multinet header is
stripped.
DDCMP is also a datalink, and there too you have a header (which includes the data
length, so it's all self-describing) followed by payload, with a CRC at the end.
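Extracting that self-describing length can be sketched like this, based on the published DDCMP data-message layout (SOH, a 14-bit count whose top two bits in the second count byte are the Q/S flags, RESP, NUM, ADDR, then a 16-bit header CRC); CRC verification is omitted, and this is a sketch rather than any particular implementation:

```python
SOH = 0x81  # DDCMP data-message start byte

def ddcmp_data_length(header: bytes) -> int:
    # Parse the payload byte count out of an 8-byte DDCMP data-message
    # header. The count is 14 bits, which is where the ~16k protocol
    # maximum mentioned above comes from.
    if len(header) < 8 or header[0] != SOH:
        raise ValueError("not a DDCMP data message")
    return header[1] | ((header[2] & 0x3F) << 8)
```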
Right.
Did you really mean that you have a Multinet header followed by a DDCMP header? Or did
you just mean that you use the point-to-point datalink-dependent routing sublayer,
passing it the Multinet payload or the DDCMP payload depending on which datalink is used?
That second option is what I would expect, and for Multinet over TCP it will work
tolerably well -- at least if you use disconnect and reconnect at the TCP layer to do the
datalink reinitialization that's a required point-to-point service the Multinet
"designers" didn't bother to implement.
The latter. Sorry.
But just to point out something else that should be somewhat obvious: Multinet over TCP
is better than DDCMP over TCP. You don't need the DDCMP layer processing, since TCP
already guarantees what DDCMP otherwise provides. So using DDCMP just adds extra
processing and bytes to transfer compared to Multinet, without any actual gain.
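A rough back-of-envelope for that overhead argument, assuming the classic DDCMP data-message layout (6 header bytes, 2-byte header CRC, 2-byte data CRC) versus the 4-byte Multinet header; treat the figures as illustrative only:

```python
MULTINET_OVERHEAD = 4        # the 4-byte Multinet header
DDCMP_OVERHEAD = 6 + 2 + 2   # header + header CRC + data CRC (assumed layout)

def overhead_pct(payload: int, overhead: int) -> float:
    # Framing bytes as a percentage of the bytes on the wire.
    return 100.0 * overhead / (payload + overhead)

# For a conventional 576-byte DECnet segment, Multinet framing costs
# roughly 0.7% of the wire bytes and DDCMP roughly 1.7%, before counting
# the CPU time spent on CRC-16 computations that TCP makes redundant.
```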
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol