Speaking of VMS and DECnet, I did a fix yesterday to the routing layer
in RSX in order to work with older versions of VMS that have a bug that
prevents connectivity from being established.
Basically, the problem is that when the link is being established, both
sides exchange some information, which contains things like the link
maximum buffer size. Commonly this is 576 bytes. But if you announce a
larger buffer size, the remote end can use that if it wants to. This is
separate from the segment buffer size, which is used by DECnet itself
when communicating.
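To make the negotiation a bit more concrete, here is a rough Python
model of the idea. It is not the actual RSX or VMS code; the names are
made up, though the 576/1500 sizes match the ones discussed here.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        link_max_buf: int   # largest frame the datalink can receive
        seg_buf: int        # DECnet segment buffer size

    def max_tx_toward(sender: "Node", peer_announced: int) -> int:
        # A sender may use the peer's announced size if it wants to,
        # but never more than its own link buffer allows.
        return min(sender.link_max_buf, peer_announced)

    rsx = Node("RSX", link_max_buf=1500, seg_buf=576)
    vms = Node("VMS", link_max_buf=576, seg_buf=576)

    # Stock behaviour: each side announces its link maximum buffer size.
    print(max_tx_toward(vms, rsx.link_max_buf))  # 576, limited by VMS's own link
    print(max_tx_toward(rsx, vms.link_max_buf))  # 576, limited by what VMS announced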
However, older versions of VMS go bonkers if the announced link maximum
size is larger than some value. My guess would be that it happens when
the size is larger than the local link maximum buffer size, but it might
be when it's larger than the segment buffer size, or something else
entirely. I have not explored this deeply.
But I changed RSX so that instead of announcing the actual link maximum
buffer size, it announces the segment buffer size.
This will cause the routing layer to split packets up into smaller parts
when it might not have needed to, but apart from that, things will work.
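In pseudo-Python the change amounts to something like this (a sketch of
the behaviour only; the real change is in the RSX DECnet routing module,
and the function name is made up):

    def announced_blksize(link_max_buf: int, seg_buf: int,
                          workaround: bool = True) -> int:
        # Workaround: never announce more than the segment buffer size,
        # so that old VMS versions don't trip over a large announced
        # value. The cost is that routing may fragment packets it
        # otherwise would not have needed to.
        if workaround:
            return seg_buf
        return link_max_buf

    print(announced_blksize(link_max_buf=1500, seg_buf=576))  # -> 576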
Newer versions of VMS do not have this problem, but it's observable on
VMS V5.4 at least.
The effect, when the problem is hit, is that the other side will report
the circuit coming up, followed pretty soon by it going down again, and
this repeats forever. The link never actually becomes operational.
The reason I observed this, did the patch, and had the problem in the
first place is, in the end, a "problem" in the Qbus ethernet
controllers. It's a known problem: if the controller runs out of buffers
in the middle of a receive that requires multiple buffers, the
controller will never cause an interrupt, and the host will not know
that reception of packets has stopped. (And reception has stopped, since
there are no free buffers...)
This is usually not a problem for DECnet with 576 byte buffers, since
the segment size is 576 bytes. However, with TCP/IP you normally have an
MTU of 1500, which requires three buffers to receive full-sized packets.
So I've seen this happen lots of times on loaded systems with high
TCP/IP packet rates, which is what started my investigation.
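A toy model of that failure mode, just to make the arithmetic explicit
(illustrative only; it models nothing beyond what is described above):

    def buffers_needed(frame_len: int, buf_size: int) -> int:
        return -(-frame_len // buf_size)   # ceiling division

    def receive(frame_len: int, buf_size: int, free_buffers: int) -> str:
        if buffers_needed(frame_len, buf_size) > free_buffers:
            # Runs out partway through the frame: no interrupt is
            # raised, and the host never learns reception has stopped.
            return "hung, no interrupt"
        return "frame received"

    print(buffers_needed(576, 576))    # 1: a DECnet segment fits in one buffer
    print(buffers_needed(1500, 576))   # 3: a full-sized TCP/IP packet needs three
    print(receive(1500, 576, 2))       # hung, no interrupt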
The obvious solution to all of this is to instead tell DECnet to use
1500 byte buffers. That way you will never have a packet that requires
more than one buffer, and you will not get the hung ethernet controller.
But when you tell DECnet to use 1500 byte buffers, the routing layer
will then announce that the link maximum size is 1500, which leads to
the problem mentioned above.
And of course, I know of one VMS V5.4 host on HECnet, to which Multinet
links then started failing, while things worked fine pretty much
everywhere else.
So I had to avoid using large buffers, and instead artificially tell
TCP/IP not to use large packets, in order to try to avoid the ethernet
controller stops. (Oh, the joy!)
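For what it's worth, the back-of-the-envelope for "small enough" TCP
packets looks roughly like this (20-byte IPv4 and TCP headers, no
options, link-level overhead ignored; the actual knobs on the TCP/IP
side are not shown):

    def mss_for(datagram_size: int, ip_hdr: int = 20, tcp_hdr: int = 20) -> int:
        # Largest TCP segment that keeps the whole IP datagram within
        # the given size.
        return datagram_size - ip_hdr - tcp_hdr

    print(mss_for(1500))  # 1460, what a normal 1500-byte MTU allows
    print(mss_for(576))   # 536, keeps the datagram within 576 bytes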
Anyway, I'm happy to report that I've now included a "fixed" version of
the DECnet routing module for RSX, so that this now works, with a slight
potential penalty for DECnet.
Johnny
On 2022-03-11 16:47, Paul Koning wrote:
On Mar 11, 2022, at 10:40 AM, Trevor Warwick
<twarwick(a)gmail.com> wrote:
I'm a recent joiner to HECnet, and wondered if anyone else has played around with
Phase V software at all ?
Obviously HECnet itself is a Phase IV network, but you can connect Phase V VAX endnodes
using the Phase IV compatibility mode.
All my systems are simulated with simh, and getting an Ethernet connection to work was
very straightforward, there's just a bit more local configuration required than with
Phase IV.
However, I spent quite a few years at DEC working on VAX synchronous device drivers, so I
was interested to see whether I could get a WAN connection going. The biggest issue was
device support. DECnet-VAX Phase V doesn't support any of the older microcoded
interfaces (DMC11, DMV11, KMX11 etc), and the later devices (DSV11, DSB32, DST32, etc)
aren't supported by simh. The only intersection is actually the DUP/DPV11, where there
was a new VMS driver (SEDRIVER) for Phase V, that replaces the individual
protocol-specific drivers that were previously used with these devices.
I've spent quite a long time this week dredging stuff out of memory from 30 years
ago, and still failing to get a DPV11 link to come up, until I eventually realised that a)
there are some significant differences between the DUP and DPV, and b) simh does not
support the DPV properly (how is one supposed to know this?)! So after changing my
simulated microVAX into an 8600, and the DPV into a DUP, I've managed to get a DDCMP
link to come up between this and a PyDecnet router.
There still seem to be some rough edges, as it won't always come up until PyDecnet is
bounced, and it's also a bit slow for interactive use, as if there's something
that's only working on a retransmission, or timer expiry. Anyway, it's been a fun
nostalgia exercise to get this far, I'll probably mess about with it some more. Let me
know if you have any questions...
Could you send me traces of the PyDECnet issues you're seeing? I have never seen a
VMS DDCMP, but I've run the PyDECnet DDCMP against several others and it all works
without the sort of troubles you mentioned. I'd like to understand what the problem
is and if PyDECnet either needs a bugfix, or a workaround for a VMS bug.
Some day I may do at least partial Phase V in PyDECnet. Not soon, it's a big job.
paul
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol