On Thursday, June 05, 2014 at 10:53 AM, Johnny Billquist wrote:
On 2014-06-05 19:23, Paul_Koning at Dell.com wrote:
On Jun 5, 2014, at 12:47 PM, Johnny Billquist <bqt at softjar.se> wrote:
...
It doesn't. Believe me, I've seen this, and investigated it years ago.
When you have a real PDP-11 running on half-duplex 10 Mb/s talking to
something on a 1 Gb/s full-duplex link, the PDP-11 simply can't keep up.
Makes sense. The issue isn't the 10 Mb/s Ethernet. The switch deals with
that part. The issue is that a PDP-11 isn't fast enough to keep up with a 10
Mb/s Ethernet going flat out. If I remember right, a Unibus is slower than
Ethernet, and while a Q22 bus is slightly faster and could theoretically keep
up, a practical system cannot.
It's several things. The Unibus is definitely slower than the Ethernet, if I
remember right. The Qbus, while faster than the Unibus, is still slower than
the Ethernet. So there is definitely a bottleneck at that level.
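To put rough numbers on that (the bus figures below are commonly quoted
ballpark values, not measurements, and quoted Qbus rates vary with transfer
mode, which may be why the recollections above differ):

    # Ballpark throughput comparison; the bus figures are commonly
    # quoted rough numbers (assumptions, not measurements).

    ETHERNET_10 = 10e6 / 8   # 10 Mb/s wire rate in bytes/s = 1.25 MB/s
    UNIBUS_DMA = 1.0e6       # ~1 MB/s sustained Unibus DMA (ballpark)
    QBUS_SINGLE = 1.7e6      # ~1.7 MB/s Qbus single-transfer DMA (ballpark)
    QBUS_BLOCK = 3.3e6       # ~3.3 MB/s Qbus block-mode DMA (ballpark)

    for name, rate in [("Unibus DMA", UNIBUS_DMA),
                       ("Qbus single-transfer", QBUS_SINGLE),
                       ("Qbus block-mode", QBUS_BLOCK)]:
        print(f"{name}: ~{rate/1e6:.1f} MB/s "
              f"({100 * rate / ETHERNET_10:.0f}% of the 10 Mb/s wire rate)")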
However, there is also an issue in the switch. If one system is pumping out
packets on a 1 Gb/s port, and the switch is forwarding them to a 10 Mb/s port,
the switch needs to buffer, and might need to buffer a lot.
There are limitations at that level as well, and I would not be surprised if
they also come into play here.
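A quick sketch of how much buffering that can take (illustrative numbers
only): the queue on the slow port grows at roughly the difference of the two
rates for the whole length of a burst:

    # Rough sizing of the switch buffer when a 1 Gb/s sender bursts
    # into a 10 Mb/s port. Illustrative numbers only.

    IN_RATE = 1e9 / 8        # 1 Gb/s ingress, bytes/s
    OUT_RATE = 10e6 / 8      # 10 Mb/s egress, bytes/s
    FRAME = 1518             # maximum-size Ethernet frame, bytes

    burst = 100 * FRAME                # the fast host dumps 100 full frames
    burst_time = burst / IN_RATE       # the burst arrives in ~1.2 ms
    drained = OUT_RATE * burst_time    # egress moves barely one frame meanwhile
    peak_queue = burst - drained

    print(f"peak queue ~{peak_queue/1024:.0f} KiB "
          f"({peak_queue/FRAME:.0f} frames), "
          f"taking {peak_queue/OUT_RATE*1e3:.0f} ms to drain")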
Thirdly, even given the limitations above, there is also the software on
the PDP-11, which needs to set up new buffers to receive packets into,
and the system itself will not be able to keep up here. So the Ethernet
controller is probably running out of buffers to DMA data into as well.
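A toy model of that last failure mode (the ring size and per-buffer service
time are made-up values, not real DEQNA/DELQA figures):

    # Receive-ring exhaustion: frames arrive back to back faster than
    # the host can recycle receive buffers, so the controller runs dry
    # and drops. All parameters are assumed for illustration.

    RING_SIZE = 16       # receive buffers posted to the controller (assumed)
    ARRIVAL_US = 67.2    # back-to-back minimum frames at 10 Mb/s
    SERVICE_US = 150.0   # host time to process and repost one buffer (assumed)

    free = RING_SIZE     # buffers the controller can DMA into
    pending = 0          # buffers handed to the host, not yet reposted
    budget = 0.0         # host CPU time accumulated, microseconds
    delivered = dropped = 0

    for _ in range(1000):                    # 1000 back-to-back frames
        budget += ARRIVAL_US
        while budget >= SERVICE_US and pending > 0:
            budget -= SERVICE_US             # host recycles one buffer
            pending -= 1
            free += 1
        if free > 0:
            free -= 1                        # controller DMAs the frame in
            pending += 1
            delivered += 1
        else:
            dropped += 1                     # ring empty: frame lost

    print(f"delivered {delivered}, dropped {dropped} of 1000 frames")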
All of this is absolutely true, but it would seem that no one is trying to push
full wire-speed traffic between systems. Given good signal quality on the wires
in the data path (i.e. no excessive collisions due to speed/duplex mismatching),
the natural protocol on the wire (with acknowledgements, etc.) should be able to
move data at the speed of the slowest component in the data path. That may be
the Unibus, the Qbus, or the PDP-11's CPU.
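A small sketch of why that pacing works (all figures are assumptions for
illustration, not any particular DECnet parameter): with a window of W
unacknowledged frames, the sender can never get more than W frames ahead of
the receiver, so throughput settles at the slowest component and the data in
flight stays bounded:

    # ACK pacing bounds in-flight data: throughput is the slower of
    # the window ceiling and the slowest hop. Figures are assumed.

    W = 4                    # send window in frames (assumed)
    FRAME = 576              # bytes per frame (assumed)
    RTT = 0.002              # round-trip time in seconds (assumed 2 ms)

    # 1 Gb/s hop, 10 Mb/s hop, ~1 MB/s bus -- the slowest wins
    path_floor = min(1e9 / 8, 10e6 / 8, 1.0e6)
    window_ceiling = W * FRAME / RTT

    throughput = min(window_ceiling, path_floor)
    print(f"window ceiling {window_ceiling/1e3:.0f} KB/s, "
          f"path floor {path_floor/1e3:.0f} KB/s -> "
          f"effective ~{throughput/1e3:.0f} KB/s "
          f"with at most {W} frames in flight")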
Clearly we can't make old hardware work any faster than it ever did, and I certainly
didn't think you were raising an issue about that when you said:
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So
you seem to have hit some other kind of limitation or something...
I wouldn't think that traffic between PDP-11 systems would put so much data in
flight that all of the above issues would come into play.
Hmmm...Grind...Grind... I do seem to have some vague recollection of an issue
with some DEQNA devices not being able to handle back-to-back packets coming in
from the wire. This issue might have been behind DEC's wholesale
replacement/upgrading of every DEQNA in the field, and it also may have had
something to do with the DEQNA not being officially supported as a cluster
device...
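For a sense of what back-to-back means at 10 Mb/s (these are the standard
Ethernet framing figures):

    # Minimum-size frames with preamble and the standard 9.6 us
    # interframe gap arrive roughly every 67 us at 10 Mb/s, i.e.
    # ~15,000 frames per second to service.

    BIT_US = 0.1                 # one bit time at 10 Mb/s, microseconds
    frame_bits = (8 + 64) * 8    # preamble/SFD + minimum 64-byte frame
    gap_us = 9.6                 # mandated interframe gap at 10 Mb/s

    per_frame_us = frame_bits * BIT_US + gap_us
    print(f"one minimum frame every {per_frame_us:.1f} us "
          f"(~{1e6/per_frame_us:,.0f} frames/s)")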
Hey, just do:
sim> SET XQ TYPE=DELQA-T
and you're all good. :-) Too bad you can't just upgrade real hardware like that.
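For completeness, a typical session might look something like this (the
ATTACH argument is a placeholder; SHOW ETHERNET lists what your host actually
has):

sim> SET XQ TYPE=DELQA-T
sim> ATTACH XQ eth0
sim> SHOW XQ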
- Mark