The functioning DUP simulation only sends packets. If the connection is TCP, a 2-byte packet size is stuffed ahead of the packet data on transmit, and the receiving side strips it from the data presented to the simulated system. In UDP mode, the UDP packet size is stuffed into the beginning of the packet buffer and is then stripped just as in the TCP case.
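As a rough Python sketch of that framing (the helper names and the big-endian byte order are my assumptions, not taken from the SimH source):

    import struct

    def frame_packet(payload: bytes) -> bytes:
        """Prepend the 2-byte packet size before transmitting over TCP."""
        return struct.pack(">H", len(payload)) + payload

    def deframe_stream(buf: bytes):
        """Strip the 2-byte size headers from a received TCP byte stream.
        Returns (complete packets, leftover bytes still waiting for data)."""
        packets = []
        while len(buf) >= 2:
            (size,) = struct.unpack(">H", buf[:2])
            if len(buf) < 2 + size:
                break                      # rest of the packet not here yet
            packets.append(buf[2:2 + size])
            buf = buf[2 + size:]
        return packets, buf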
The problems that Reindert is seeing are probably related to the timing of byte delivery into the simulation, or to the timing of successive packets being delivered, which may overrun buffer space in the simulated OS.
From: owner-hecnet at Update.UU.SE <owner-hecnet at Update.UU.SE> On Behalf Of Paul
Koning
Sent: Sunday, December 12, 2021 10:29 AM
To: hecnet at update.uu.se
Subject: Re: [HECnet] native Dup sync line revisited --> preliminary tests reveals
problems
On Dec 11, 2021, at 7:41 PM, R. Voorhorst <R.Voorhorst at swabhawat.com> wrote:
L.S.
This is a follow-up to the earlier discussion of Dup-without-Kdp in character mode, where I stated that it was not supported per the specifications in the Simh comments, and Mark P commented that it was supported.
During a moment of spare time I reactivated a triple-node Rsx-11M+ test set, used for specific Decnet testing from Phase I to IV. These triple nodes were set up to use every possible device for Decnet communications, amongst them the native (non-kdp) Dup. In multipoint mode this device is character based and is driven through an Rsx Ddcmp software driver to participate in Decnet communications.
If the Simh Dup simulation is supported in native character mode, this would establish sufficient proof of a well-functioning character-based synchronous line.
Yes, if you use TCP mode. It's not likely to work in UDP mode because then the DMC
emulation expects to get a full DDCMP frame in a single UDP packet. But in TCP mode it
just picks apart the byte stream.
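For context, roughly how a receiver can pick complete DDCMP messages out of a TCP byte stream, going by the DDCMP message formats (a sketch, not the actual SimH DUP/DMC code):

    SOH, ENQ, DLE = 0x81, 0x05, 0x90   # DDCMP data, control, maintenance classes

    def next_ddcmp_frame(buf: bytes):
        """Return (frame, rest) if a complete DDCMP message starts the byte
        stream, else (None, buf).  Sketch only: real code also has to handle
        sync padding and resynchronisation after errors."""
        if len(buf) < 8:                  # 6-byte header plus 2-byte header CRC
            return None, buf
        kind = buf[0]
        if kind == ENQ:                   # control messages carry no data field
            return buf[:8], buf[8:]
        if kind in (SOH, DLE):            # data/maintenance: 14-bit count field
            count = (buf[1] | (buf[2] << 8)) & 0x3FFF
            total = 8 + count + 2         # header+CRC, data bytes, data CRC
            if len(buf) < total:
                return None, buf
            return buf[:total], buf[total:]
        return None, buf[1:]              # unexpected byte: drop it and retry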
When the Dup line is interfaced to a Dmc line, the test reveals flapping line behaviour: line up, circuit fault, line down; when trying to transfer data, an immediate circuit fault appears.
When the Dup is interfaced to another Dup there is packet exchange, so basically it should be able to work. However, although there are no problems with the shorter (8-byte) signaling packets, the moment 22-character packets are transferred the receiver complains about bad header and bad data CRC checksums, so the packets are somewhat mangled. This prevents Decnet communications from starting.
It looks like this in the snippet below:
DBG(10561953465)> DUP RCV: Line:0 0000 81 0C C0 00 01 01 1D DF 01 28 04 01 40 02 02 00  .........(..@...
DBG(10561953465)> DUP RCV: Line:0 0010 00 0F 00 00 1D 46  .....F
DBG(10561953465)> DUP RCV: rxnexttime=10561870955 (-20000 usecs)
DBG(10561953465)> DUP PKT: Line0: <<< RCV Packet len: 22
DBG(10561953465)> DUP PKT: Data Message, Count: 12, Num: 1, Flags: SQ, Resp: 0, HDRCRC: BAD, DATACRC: BAD
...
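To reproduce the check, the sketch below re-parses the captured data message and recomputes the two CRCs, assuming the standard DDCMP CRC-16 (polynomial x^16 + x^15 + x^2 + 1, zero initial value); both checks come out bad, consistent with the trace:

    def crc16(data: bytes, crc: int = 0) -> int:
        """CRC-16 (reflected polynomial 0xA001, initial value 0), the checksum
        DDCMP applies to its header and data fields."""
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return crc

    # The 22-byte data message captured in the trace above.
    pkt = bytes.fromhex("81 0c c0 00 01 01 1d df "
                        "01 28 04 01 40 02 02 00 00 0f 00 00 1d 46")

    count_word = pkt[1] | (pkt[2] << 8)       # little-endian count + flag bits
    print("count:", count_word & 0x3FFF)      # 12, as SimH reports
    print("flag bits:", count_word >> 14)     # 3; SimH shows these as Flags: SQ
    print("resp:", pkt[3], "num:", pkt[4], "addr:", pkt[5])
    print("header CRC ok:", crc16(pkt[0:6]) == (pkt[6] | (pkt[7] << 8)))
    print("data CRC ok:", crc16(pkt[8:20]) == (pkt[20] | (pkt[21] << 8)))
    # Both CRC checks print False for this capture, matching the
    # HDRCRC/DATACRC: BAD report in the trace.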
Per Mark P's comment that some kind of filtering takes place in non-kdp Dup mode, the source of the problems might be located in that corner. That behaviour needs to be examined.
I wonder what kind of filtering that might be. The header looks completely normal for a
DDCMP data frame, it indicates a data length of 12 bytes which matches what you see in
that packet. 12 would be the expected length of a Routing Init message, normally the
first data packet sent after DDCMP startup is complete. Then again, its contents make no
sense as an Init message. Sending node 1.40, does that seem right? But the
"tiiinfo" (node type etc.) is 0x40 which isn't valid. Nor is the routing
version (0.0.15) or the hello timer value (0).
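As a quick cross-check of that address (a Phase IV node address packs the area into the top 6 bits and the node number into the low 10 bits of a 16-bit value):

    raw = 0x28 | (0x04 << 8)            # payload bytes 2-3, little-endian
    area, node = raw >> 10, raw & 0x3FF
    print(f"{area}.{node}")             # prints 1.40, the sending node above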
Could it be that one side is set in TELNET mode while the other is not? The settings must
match or 0xff bytes are mishandled.
paul
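For reference on that TELNET point: Telnet framing doubles every literal 0xFF data byte (IAC IAC) on transmit and collapses the pair on receive, so a small sketch of why a mismatch mangles frames whose bytes include 0xFF:

    IAC = 0xFF

    def telnet_escape(data: bytes) -> bytes:
        """Double every 0xFF byte so it is not read as a Telnet IAC command."""
        return data.replace(bytes([IAC]), bytes([IAC, IAC]))

    def telnet_unescape(data: bytes) -> bytes:
        """Collapse doubled 0xFF bytes back into single data bytes."""
        return data.replace(bytes([IAC, IAC]), bytes([IAC]))

    # A DDCMP frame whose trailing CRC bytes happen to be 0xFF illustrates the
    # hazard: if only one side applies the escaping, the frame arrives with a
    # different length and the header/data CRC checks fail.
    frame = bytes([0x81, 0x0C, 0xC0, 0x00, 0x01, 0x01, 0xFF, 0xFF])
    print(len(telnet_escape(frame)))      # 10 bytes on the wire if escaped
    print(len(telnet_unescape(frame)))    # 7 bytes if a pair is collapsed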