On 2014-06-05 17:55, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 5:59 AM, Johnny Billquist wrote:
On 2014-06-05 13:58, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization.
Both simh instances run on the same hardware. XQ set to different
MAC addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled an RM03 in seconds).
Interesting. I still have real network issues with the latest simh talking to a real PDP-11 sitting on the same physical network. So you seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or
VAX, but in the past there were no issues with multiple simh VAX simulators
talking to real VAX systems on the same LAN.
Let me know.
Same as I have talked about several times here. Doing file transfers from the simh system to the real PDP-11 gets horrible performance. It's a simple question of data overflow and loss, and DECnet performs poorly here because recovery from lost packets is so bad when you are losing lots of packets.
Nothing you can really do about it, unless you want to throttle the ethernet transmissions.
Hmmm...
There are several possibilities which may help or otherwise be relevant:
1) Your real PDP11 is connected to the LAN via some sort of transceiver. Hopefully you're not using Thickwire or Thinwire Ethernet, but some sort of 10BaseT transceiver connected to a switch port. I've seen switch ports which are VERY POOR at auto-detecting link speed and duplex (especially when dealing with 10Mbit devices). If you're connected through a managed switch, try to hard set the switch port's link speed to 10Mbit and Half Duplex.
2) Try to extend the number of receive buffers available using CFE as you mentioned.
Hopefully one of these will help.
It doesn't. Believe me, I've seen this and investigated it years ago.
When you have a real PDP-11 running on half-duplex 10Mb/s talking to something on 1Gb/s full duplex, the PDP-11 simply can't keep up: the fast side can source frames roughly a hundred times faster than the PDP-11 can drain them.
This is (repeating myself once more) why I implemented the throttling in the bridge. Talking to the PDP-11 with the bridge sitting in between, it works just fine.
Traffic also works fine if the fast machine isn't trying to totally drown the PDP-11, so things like interactive traffic and other small stuff work just fine. File transfers are the obvious problem child.
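To make the throttling idea concrete, here is a minimal token-bucket pacer in C. This is only a sketch of the approach, not the actual bridge sources; the rate, burst size, and per-frame overhead constants are illustrative assumptions.

/* Pace forwarded Ethernet frames to roughly 10 Mbit/s so a slow,
 * half-duplex receiver is not drowned.  Sketch only. */
#include <stddef.h>
#include <time.h>
#include <unistd.h>

#define LINE_BPS   10.0e6            /* assumed target rate: 10 Mbit/s */
#define BURST_BITS (4 * 1514 * 8.0)  /* assumed burst: ~4 full frames  */

static double bucket;                /* bits we may send right now     */
static double last_fill;             /* time of the last bucket refill */

static double mono_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Block until a frame of 'len' octets fits within the rate budget. */
void pace_frame(size_t len)
{
    double need = (len + 20) * 8.0;  /* +20 octets: preamble and gap   */

    for (;;) {
        double now = mono_now();
        bucket += (now - last_fill) * LINE_BPS;
        last_fill = now;
        if (bucket > BURST_BITS)
            bucket = BURST_BITS;
        if (bucket >= need) {
            bucket -= need;          /* budget available; forward now  */
            return;
        }
        usleep((useconds_t)(((need - bucket) / LINE_BPS) * 1e6));
    }
}

A bridge would call pace_frame(frame_len) immediately before writing each frame toward the 10 Mbit/s side, so bursts from the fast side are spread out to a rate the PDP-11 can actually absorb.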
Johnny
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
At 4:59 PM -0700 4/6/14, Mark Pizzolato - Info Comm wrote:
Did you change the number of receive buffers?
That does not seem possible under RSX.
It is, but you need to do it in CFE. No way of changing it dynamically at runtime.
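(For reference: in DECnet-RSX, CFE edits the permanent database using NCP-style syntax, so the change takes effect the next time the network is loaded. Presumably something along the lines of

CFE> DEFINE LINE QNA-0 RECEIVE BUFFER 32

with the exact keyword and the allowed maximum depending on the DECnet-RSX version; the CFE> prompt and spelling here are an assumption, patterned on the NCP command shown later in this thread.)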
[...]
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On Jun 4, 2014 5:38 PM, Jean-Yves Bernier <bernier at pescadoo.net> wrote:
[...]
I didn't notice you were not testing traffic between VAXes. The threaded network I/O buffers more traffic, which could naturally avoid overruns.
Disk I/O with RQ and RP is now asynchronous on VAX and PDP11.
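The shape of that asynchronous disk path, as a simplified sketch: the CPU thread queues a transfer and keeps simulating, while a worker thread performs the host I/O and then raises the completion interrupt. This is not the actual simh RQ/RP code; disk_fd, post_io_done(), and the single-slot request queue are invented for illustration.

/* Simplified asynchronous disk I/O: a worker thread services queued
 * transfers so the CPU thread never blocks on the host file. */
#include <pthread.h>
#include <sys/types.h>
#include <unistd.h>

struct io_req { off_t pos; void *buf; size_t len; int is_write; };

static struct io_req pending;          /* one outstanding request        */
static int have_req;
static pthread_mutex_t mx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

extern int  disk_fd;                   /* host file of the disk image    */
extern void post_io_done(void);        /* raise the completion interrupt */

void *disk_worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&mx);
        while (!have_req)
            pthread_cond_wait(&cv, &mx);
        struct io_req r = pending;
        have_req = 0;
        pthread_mutex_unlock(&mx);

        if (r.is_write)
            (void)pwrite(disk_fd, r.buf, r.len, r.pos);
        else
            (void)pread(disk_fd, r.buf, r.len, r.pos);
        post_io_done();                /* CPU thread sees I/O complete   */
    }
}

The CPU side just fills in 'pending', signals 'cv', and returns to instruction simulation; the device interrupt fires when the worker finishes.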
- Mark
Are there any Phase II implementations out there that could be downloaded and run on an emulator such as SIMH? DECnet/E for RSTS V7 would be great since I know that well, but something else would also be interesting.
I'm trying to make my DECnet/Python allow Phase II, as opposed to the standard one-version-back backward compatibility. That's not documented, but it is perfectly doable. Finding a real test system is the tricky part.
It would be really neat to be able to find one that uses the marginally documented intercept feature. Does anyone here know where that was used? I have a vague impression that TOPS-10 and/or TOPS-20 treated the front end PDP-11 like another node, so that would be the relay node and the 10 or 20 the end node in a Phase II star network. Is that correct?
paul
On Wednesday, June 04, 2014 at 4:08 PM, Jean-Yves Bernier wrote:
At 10:44 AM -0700 4/6/14, Mark Pizzolato - Info Comm wrote:
If you're going to drive this deeply into testing it would really be best if you ran with the latest code, since your results may ultimately suggest code changes, AND the latest code may behave significantly differently than the older versions.
Independent of which codebase you're running, and since you're now
tweaking the behavior of the simulated hardware, you may want to look
at sim> SHOW XQ STATS and try to analyze the relationship between these
stats and the ones the OS sees.
Also, you may want to explore what happens if:
NCP> DEFINE LINE QNA-0 RECEIVE BUFFER 32
is done prior to starting your network... The limit of 32 may be different on different operating systems...
Once again the latest code is available from:
https://github.com/simh/simh/archive/master.zip
Mark, what kind of magic did you perform on XQ?
I tried commit 753e4dc9 and overruns are gone. No more NFT/FAL hangups. Transfer speed is higher than ever: 10000. blocks in 4 seconds. I can't believe it. This ROCKS.
Significant changes have been made since 3.9, but I haven't seen substantial throughput differences.
Did you change the number of receive buffers?
Maybe the secret is here:
sim> sh xq
XQ address=17774440-17774457, vector=120, MAC=08:00:2B:AA:BB:01
type=DEQNA, polling=disabled, sanity=OFF
leds=(ON,ON,ON)
attached to eth0
"polling=disabled"
That really only indicates that you're running on a platform which can do Ethernet network I/O in a parallel thread. Most platforms did not use the threaded model previously: in 3.9, polling was still necessary since the simulator didn't really support an asynchronous I/O model. The latest codebase has support for asynchronous network AND disk I/O. On the current codebase, if you've got threads available, then network reads and writes are performed asynchronously and interrupts (I/O completion) are triggered within a few microseconds. As I recall from earlier in these threads, you're running under CentOS on VirtualBox. You didn't mention what host OS you're running on. In any case, the threaded model might just work better in the virtual environment.
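For readers who haven't followed the simh internals, the receive side of the threaded model has roughly the following shape. This is a simplified sketch, not the simh sources; eth_read_blocking(), post_rx_interrupt(), and the ring dimensions are stand-ins.

/* Threaded receive model: a dedicated thread blocks in the host packet
 * read and queues frames for the simulator, instead of the simulator
 * polling on a timer.  Simplified sketch with stand-in externals. */
#include <pthread.h>
#include <string.h>

#define QDEPTH 64
#define MAXFRM 1518

static struct { unsigned char d[MAXFRM]; int n; } q[QDEPTH];
static int q_head, q_tail;             /* simulator thread drains q_tail */
static pthread_mutex_t q_mx = PTHREAD_MUTEX_INITIALIZER;

extern int  eth_read_blocking(unsigned char *buf, int max); /* stand-in */
extern void post_rx_interrupt(void);                        /* stand-in */

void *rx_thread(void *arg)
{
    unsigned char frm[MAXFRM];
    (void)arg;
    for (;;) {
        int n = eth_read_blocking(frm, sizeof frm);  /* blocks; no poll */
        if (n <= 0)
            continue;
        pthread_mutex_lock(&q_mx);
        if ((q_head + 1) % QDEPTH != q_tail) {       /* drop if full    */
            memcpy(q[q_head].d, frm, (size_t)n);
            q[q_head].n = n;
            q_head = (q_head + 1) % QDEPTH;
        }
        pthread_mutex_unlock(&q_mx);
        post_rx_interrupt();   /* I/O completion within microseconds    */
    }
}

Because the read blocks in its own thread, a received frame turns into a device interrupt almost immediately, rather than waiting for the next poll tick; that is what "polling=disabled" reflects.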
- Mark