On Thursday, June 05, 2014 at 10:53 AM, Johnny Billquist wrote:
On 2014-06-05 19:23, Paul_Koning at Dell.com wrote:
On Jun 5, 2014, at 12:47 PM, Johnny Billquist <bqt at softjar.se> wrote:
...
It doesn't. Believe me, I've seen this, and investigated it years ago.
When you have a real PDP-11 running on half-duplex 10Mb/s talking to
something on a 1Gb/s full duplex, the PDP-11 simply can't keep up.
Makes sense. The issue isn't the 10 Mb/s Ethernet. The switch deals with
that part. The issue is that a PDP-11 isn't fast enough to keep up with a 10
Mb/s Ethernet going flat out. If I remember right, a Unibus is slower than
Ethernet, and while a Q22 bus is slightly faster and could theoretically keep
up, a practical system cannot.
It's several things. The Unibus is definitely slower than the ethernet if I
remember right. The Qbus, while faster, is also slower than ethernet.
So there is definitely a bottleneck at that level.
However, there is also an issue in the switch. If one system is pumping out
packets on a 1Gb/s port, and the switch is forwarding them to a 10Mb/s port,
the switch needs to buffer, and might need to buffer a lot.
There are limitations at that level as well, and I would not be surprised if
that can also come into play here.
Thirdly, even given the limitations above, we then also have the software on
the PDP-11, which also needs to set up new buffers to receive packets into,
and the system itself will not be able to keep up here. So the ethernet
controller is probably running out of buffers to DMA data into as well.
All of this is absolutely true, but it would seem that no one is trying to push full wire-speed traffic between systems. Given clean signal levels on the wires in the data path (i.e. no excessive collisions due to speed/duplex mismatching), the natural protocol on the wire (with acknowledgements, etc.) should be able to move data at the speed of the slowest component in the data path. That may be the Unibus, the Qbus, or the PDP-11's CPU.
Clearly we can't make old hardware work any faster than it ever did, and I certainly didn't think you were raising an issue about that when you said:
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So
you seem to have hit some other kind of limitation or something...
I wouldn't think that traffic between PDP11 systems would put so much data in flight that all of the above issues would come into play.
Hmmm...Grind...Grind... I do seem to have some vague recollection of an issue with some DEQNA devices not being able to handle back-to-back packets coming in from the wire. This issue might have been behind DEC's wholesale replacement/upgrading of every DEQNA in the field, and it may also have had something to do with the DEQNA not being officially supported as a cluster device...
Hey, just do:
sim> SET XQ TYPE=DELQA-T
and you're all good. :-) Too bad you can't just upgrade real hardware like that.
- Mark
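(For reference, a fuller version of that XQ setup might look like the following; the MAC and the host interface name here are illustrative, so substitute your own:

    sim> SET XQ TYPE=DELQA-T
    sim> SET XQ MAC=AA-00-04-00-0A-04
    sim> ATTACH XQ eth0

The MAC line is optional under DECnet, since the OS will reset the station address to its AA-00-04-00 form from the node number anyway.)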
On 2014-06-05 19:23, Paul_Koning at Dell.com wrote:
On Jun 5, 2014, at 12:47 PM, Johnny Billquist <bqt at softjar.se> wrote:
...
It doesn't. Believe me, I've seen this, and investigated it years ago.
When you have a real PDP-11 running on half-duplex 10Mb/s talking to something on a 1Gb/s full duplex, the PDP-11 simply can't keep up.
Makes sense. The issue isn't the 10 Mb/s Ethernet. The switch deals with that part. The issue is that a PDP-11 isn't fast enough to keep up with a 10 Mb/s Ethernet going flat out. If I remember right, a Unibus is slower than Ethernet, and while a Q22 bus is slightly faster and could theoretically keep up, a practical system cannot.
It's several things. The Unibus is definitely slower than the ethernet if I remember right. The Qbus, while faster, is also slower than ethernet.
So there is definitely a bottleneck at that level.
However, there is also an issue in the switch. If one system is pumping out packets on a 1Gb/s port, and the switch is forwarding them to a 10Mb/s port, the switch needs to buffer, and might need to buffer a lot. There are limitations at that level as well, and I would not be surprised if that can also come into play here.
Thirdly, even given the limitations above, we then also have the software on the PDP-11, which also needs to set up new buffers to receive packets into, and the system itself will not be able to keep up here. So the ethernet controller is probably running out of buffers to DMA data into as well.
Johnny
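(To make Johnny's third point concrete, here is a toy model in C of a controller with a fixed pool of receive buffers being filled faster than the host frees them. The buffer count and the 100:1 rate ratio, roughly 1Gb/s against 10Mb/s, are invented purely for illustration:

    #include <stdio.h>

    /* Toy model: a controller with a fixed pool of receive buffers,
     * filled at in_rate frames per tick and drained at out_rate.
     * Once the pool is full, every further arriving frame is lost --
     * the same shape as a DEQNA running out of buffers to DMA into,
     * or a switch queue overflowing toward a 10Mb/s port.          */
    int main(void) {
        const int nbufs = 32;      /* receive buffers (made-up count)  */
        const int in_rate = 100;   /* arrivals per tick (fast sender)  */
        const int out_rate = 1;    /* frames the slow side frees/tick  */
        int queued = 0;
        long delivered = 0, dropped = 0;

        for (int tick = 0; tick < 1000; tick++) {
            for (int i = 0; i < in_rate; i++) {
                if (queued < nbufs)
                    queued++;      /* a buffer was free: accept frame  */
                else
                    dropped++;     /* overrun: frame silently lost     */
            }
            if (queued >= out_rate) {
                queued -= out_rate;
                delivered += out_rate;
            }
        }
        printf("delivered %ld, dropped %ld (%.1f%% loss)\n",
               delivered, dropped,
               100.0 * dropped / (double)(delivered + dropped));
        return 0;
    }

Run it and virtually everything is dropped; no amount of buffering saves a sustained 100:1 rate mismatch, only pacing the sender does.)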
Mark, a DEUNA couldn't even keep up with thick-wire Ethernet speed. IIRC it could read up to 3 Mb/s average over a period of time, with full-sized frames. Probably the same for the DEQNA.
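(For scale, assuming full-sized 1518-byte frames, i.e. 12,144 bits each: 10 Mb/s is roughly 800 such frames per second on the wire, while 3 Mb/s works out to only about 250, so such a controller would start dropping long before wire saturation.)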
Sent from my BlackBerry 10 smartphone.
Original message
From: Mark Pizzolato - Info Comm
Sent: Thursday, June 5, 2014 13:58
To: hecnet at Update.UU.SE
Reply-To: hecnet at Update.UU.SE
Subject: RE: [HECnet] Emulated XQ polling timer setting and data overrun
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both
simh instances run on the same hardware. XQ set to different MAC
addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled an RM03 in seconds).
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So you
seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or VAX, but in the past there were no issues with multiple simh VAX simulators talking to real VAX systems on the same LAN.
Let me know.
Thanks.
- Mark
On Jun 5, 2014, at 12:47 PM, Johnny Billquist <bqt at softjar.se> wrote:
...
It doesn't. Believe me, I've seen this, and investigated it years ago.
When you have a real PDP-11 running on half-duplex 10Mb/s talking to something on a 1Gb/s full duplex, the PDP-11 simply can't keep up.
Makes sense. The issue isn't the 10 Mb/s Ethernet. The switch deals with that part. The issue is that a PDP-11 isn't fast enough to keep up with a 10 Mb/s Ethernet going flat out. If I remember right, a Unibus is slower than Ethernet, and while a Q22 bus is slightly faster and could theoretically keep up, a practical system cannot.
paul
On 2014-06-05 17:55, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 5:59 AM, Johnny Billquist wrote:
On 2014-06-05 13:58, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization.
Both simh instances run on the same hardware. XQ set to different
MAC addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled an RM03 in seconds).
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So you
seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or
VAX, but in the past there were no issues with multiple simh VAX simulators
talking to real VAX systems on the same LAN.
Let me know.
Same as I have talked about several times here. Doing file transfers from the
simh system to the real PDP-11 gets horrible performance. It's a simple
question of data overflow and loss, and DECnet performs poorly here
because recovery from lost packets is so bad when you are losing lots of
packets.
Nothing you really can do about it, unless you want to throttle the ethernet
transmissions.
Hmmm...
There are several possibilities which may help or otherwise be relevant:
1) Your real PDP11 is connected to the LAN via some sort of transceiver. Hopefully you're not using Thickwire or Thinwire Ethernet, but some sort of 10BaseT transceiver connected to a switch port. I've seen switch ports which are VERY POOR at auto-detecting link speed and duplex (especially when dealing with 10Mbit devices). If you're connected through a managed switch, try to hard set the switch port's link speed to 10Mbit and Half Duplex.
2) Try to extend the number of receive buffers available using CFE as you mentioned.
Hopefully one of these will help.
It doesn't. Believe me, I've seen this, and investigated it years ago.
When you have a real PDP-11 running on half-duplex 10Mb/s talking to something on a 1Gb/s full duplex, the PDP-11 simply can't keep up.
This is (repeating myself once more) why I implemented the throttling in the bridge. Talking to the PDP-11 with the bridge sitting in between, it works just fine.
Traffic also works fine if the fast machine isn't trying to totally drown the PDP-11. So things like interactive traffic and small stuff work just fine. File transfers are the obvious problem child.
Johnny
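(For anyone wanting to do the same in their own code, the throttling idea is essentially a token bucket in front of the slow host. A minimal C sketch follows; this is the general technique, not Johnny's actual bridge code, and the names are made up:

    #include <stddef.h>
    #include <time.h>

    /* Token-bucket pacing toward a slow receiver. */
    typedef struct {
        double tokens;   /* bytes we may still send right now */
        double rate;     /* refill rate, bytes per second     */
        double burst;    /* bucket depth, bytes               */
        double last;     /* time of the last refill, seconds  */
    } throttle_t;

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (double)ts.tv_sec + ts.tv_nsec / 1e9;
    }

    void throttle_init(throttle_t *t, double bytes_per_sec, double burst) {
        t->rate = bytes_per_sec;
        t->burst = burst;
        t->tokens = burst;
        t->last = now_sec();
    }

    /* Block until it is OK to send 'len' bytes downstream. */
    void throttle_wait(throttle_t *t, size_t len) {
        for (;;) {
            double now = now_sec();
            t->tokens += (now - t->last) * t->rate;   /* refill credit */
            if (t->tokens > t->burst)
                t->tokens = t->burst;
            t->last = now;
            if (t->tokens >= (double)len) {
                t->tokens -= (double)len;             /* spend and go  */
                return;
            }
            struct timespec nap = { 0, 1000000 };     /* 1 ms, retry   */
            nanosleep(&nap, NULL);
        }
    }

For example, throttle_init(&t, 3000000.0/8.0, 1518.0) and a throttle_wait() before each forwarded frame would pace a sender at about the 3 Mb/s a DEUNA reportedly sustains.)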
On Thursday, June 05, 2014 at 5:59 AM, Johnny Billquist wrote:
On 2014-06-05 13:58, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization.
Both simh instances run on the same hardware. XQ set to different
MAC addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled an RM03 in seconds).
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So you
seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or
VAX, but in the past there were no issues with multiple simh VAX simulators
talking to real VAX systems on the same LAN.
Let me know.
Same as I have talked about several times here. Doing file transfers from the
simh system to the real PDP-11 gets horrible performance. It's a simple
question of data overflow and loss, and DECnet performs poorly here
because recovery from lost packets is so bad when you are losing lots of
packets.
Nothing you really can do about it, unless you want to throttle the ethernet
transmissions.
Hmmm...
There are several possibilities which may help or otherwise be relevant:
1) Your real PDP11 is connected to the LAN via some sort of transceiver. Hopefully you're not using Thickwire or Thinwire Ethernet, but some sort of 10BaseT transceiver connected to a switch port. I've seen switch ports which are VERY POOR at auto-detecting link speed and duplex (especially when dealing with 10Mbit devices). If you're connected through a managed switch, try to hard set the switch port's link speed to 10Mbit and Half Duplex.
2) Try to extend the number of receive buffers available using CFE as you mentioned.
Hopefully one of these will help.
Good Luck.
- Mark
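(If the managed switch happens to speak a Cisco-style CLI, hard-setting the port as in suggestion 1) looks something like the following; the port name is illustrative, and other vendors have equivalents:

    switch# configure terminal
    switch(config)# interface FastEthernet0/1
    switch(config-if)# speed 10
    switch(config-if)# duplex half
    switch(config-if)# end

Forcing both ends avoids the classic failure where one side autonegotiates to full duplex against a fixed half-duplex peer.)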
On 2014-06-05 13:58, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both
simh instances run on the same hardware. XQ set to different MAC
addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled an RM03 in seconds).
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So you
seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or VAX, but in the past there were no issues with multiple simh VAX simulators talking to real VAX systems on the same LAN.
Let me know.
Same as I have talked about several times here. Doing file transfers from the simh system to the real PDP-11 gets horrible performance. It's a simple question of data overflow and loss, and DECnet performs poorly here because recovery from lost packets is so bad when you are losing lots of packets.
Nothing you really can do about it, unless you want to throttle the ethernet transmissions.
Johnny
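(To put rough numbers on why bulk loss is so punishing, using generic sliding-window arithmetic rather than DECnet NSP specifics: if a sender fires a window of, say, 16 packets and the receiver's buffers fill after the first 4, a go-back style recovery retransmits the remaining 12 after a timeout, overruns again, and delivers only a handful of packets per timeout interval instead of per round trip. Interactive traffic, one small packet at a time, never builds such a window, which matches the observation that only file transfers suffer.)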
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both
simh instances run on the same hardware. XQ set to different MAC
addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled an RM03 in seconds).
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So you
seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or VAX, but in the past there were no issues with multiple simh VAX simulators talking to real VAX systems on the same LAN.
Let me know.
Thanks.
- Mark
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
At 4:59 PM -0700 4/6/14, Mark Pizzolato - Info Comm wrote:
Did you change the number of receive buffers?
That does not seem possible under RSX.
It is, but you need to do it in CFE. No way of dynamically changing at runtime.
That really only indicates that you're running on a platform which can
do Ethernet network I/O in a parallel thread. Most platforms did not
use the threaded model. Previously, in 3.9, polling was
still necessary since the simulator didn't really support an
asynchronous I/O model. The latest codebase has support for
asynchronous network AND disk I/O. On the current codebase if you've
got threads available, then network reads and writes are performed
asynchronously, and interrupts (I/O completion) are triggered with a few
microseconds of latency. As I recall from earlier in these threads,
you're running under CentOS on VirtualBox. You didn't mention what
host OS you're running on. In any case, the threaded model might
just work better in the virtual environment.
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both
simh instances run on the same hardware. XQ set to different MAC
addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled an RM03 in seconds).
Interesting. I still have real network issues with the latest simh talking to a real PDP-11 sitting on the same physical network. So you seem to have hit some other kind of limitation or something...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On Jun 4, 2014 5:38 PM, Jean-Yves Bernier <bernier at pescadoo.net> wrote:
At 4:59 PM -0700 4/6/14, Mark Pizzolato - Info Comm wrote:
Did you change the number of receive buffers?
That does not seem possible under RSX.
That really only indicates that you're running on a platform which can do Ethernet network I/O in a parallel thread. Most platforms did not use the threaded model. Previously, in 3.9, polling was still necessary since the simulator didn't really support an asynchronous I/O model. The latest codebase has support for asynchronous network AND disk I/O. On the current codebase if you've got threads available, then network reads and writes are performed asynchronously, and interrupts (I/O completion) are triggered with a few microseconds of latency. As I recall from earlier in these threads, you're running under CentOS on VirtualBox. You didn't mention what host OS you're running on. In any case, the threaded model might just work better in the virtual environment.
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both simh instances run on the same hardware. XQ set to different MAC addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled an RM03 in seconds).
I didn't notice you were not testing traffic between VAXes. The threaded network I/O buffers more traffic, which could naturally avoid overruns.
Disk I/O with RQ and RP is now asynchronous on VAX and PDP11.
- Mark
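(To illustrate why a threaded network model buffers more traffic: a dedicated reader thread drains the host interface continuously and parks frames in a ring, so bursts are absorbed even while the main simulation loop is busy elsewhere. The C sketch below is just the shape of the idea, not simh's actual code; recv_frame() is a fake stand-in for the real host capture call. Build with -pthread:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define RING  512
    #define FRAME 1518

    static unsigned char ring[RING][FRAME];
    static int ring_len[RING];
    static int head, tail, count;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

    /* Fake stand-in for the blocking host capture call. */
    static int recv_frame(unsigned char *buf) {
        memset(buf, 0xAA, 64);
        usleep(1000);
        return 64;
    }

    /* Reader thread: drain the wire into the ring without waiting
     * for the simulator; drop only when the ring itself is full.  */
    static void *reader(void *arg) {
        unsigned char buf[FRAME];
        (void)arg;
        for (;;) {
            int len = recv_frame(buf);
            pthread_mutex_lock(&lock);
            if (count < RING) {
                memcpy(ring[tail], buf, (size_t)len);
                ring_len[tail] = len;
                tail = (tail + 1) % RING;
                count++;
                pthread_cond_signal(&nonempty);
            }                      /* ring full: frame is dropped */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, reader, NULL);
        for (int i = 0; i < 10; i++) {   /* the "simulator" consuming */
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&nonempty, &lock);
            printf("frame %d: %d bytes\n", i, ring_len[head]);
            head = (head + 1) % RING;
            count--;
            pthread_mutex_unlock(&lock);
        }
        return 0;
    }

With a polled model the same burst would have to fit in the device's few receive buffers between polls, which is exactly where the overruns discussed above come from.)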