Afternoon all,
What's the latest version of VWS/the workstation stuff? I've heard DECwindows is a bit heavy for a VS2000. ;)
I found JVWS044 (This...might be the Japanese version...) over on slave.hecnet.eu's archive...but I can't manage anything beyond 10K/sec there. Anyone have a newer version on a faster, more local link anywhere?
Thanks!
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
I know Chrissie is/was a member of the HECnet list, so hopefully I'm not stepping on any toes, but there may be some here interested in this announcement, or even in taking over the project. In any case I thought some would like to know.
John H. Reinhardt
-------- Original Message --------
Subject: [Linux-decnet-user] All finished
Date: Tue, 10 Jun 2014 09:02:37 +0100
From: Chrissie <christine.caulfield at googlemail.com>
To: linux-decnet-user at lists.sourceforge.net
So this is it, I'm announcing the end of my involvement in all things
Linux/DECnet related.
I've orphaned all the Debian packages and I'm going to leave Sourceforge
and this mailing list. I'm not going to delete the project (if that's
even possible) as I think there's value in leaving the code online in
case people want it.
Thank you to everyone who has contributed to this project in the form of
code, documentation, help on the mailing list, and just general
encouragement. It's been fun.
But life moves on and this project has been almost dead for a while and
I need to stop pretending that it's something I'm doing.
XX
Chrissie
_______________________________________________
Project Home Page: http://linux-decnet.wiki.sourceforge.net/
Linux-decnet-user mailing list
Linux-decnet-user at lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-decnet-user
On 2014-06-07 00:03, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 4:50 PM, Mark Pizzolato wrote:
On Thursday, June 05, 2014 at 4:33 PM, Johnny Billquist wrote:
On 2014-06-06 01:16, Mark Pizzolato - Info Comm wrote:
OK. So you sound like you really need a throttling option for the LAN
devices. I'll look over the throttling logic in the bridge and fold
in something similar.
Well, I'm not so sure that code should be a model to pattern anything after.
Everything about my bridge is just a hack. Done just to fix an
immediate problem without any proper design at any corner. :-)
It definitely has more going for it than my first thoughts. Before looking at
your bridge code, I was merely going to create an option to measure the time
between successive packets. Your model allows for some bursting but then
starts throttling.
I'll make it an option and the control variables (TIMEWINDOW, BURSTSIZE,
DELAY) configurable. Default will be no throttle.
The current simh code base has Ethernet transmit throttling support for XQ and XU devices.
The default behavior is unchanged (i.e. no throttling).
Throttling for a particular LAN interface can be enabled with:
sim> SET {XQ|XQB|XU|XUB} THROTTLE=DISABLE
sim> SET {XQ|XQB|XU|XUB} THROTTLE=ENABLE
sim> SET {XQ|XQB|XU|XUB} THROTTLE=TIME=n{;BURST=n{;DELAY=n}}
Where:
TIME=n specifies an inter-packet gap (in ms) which can trigger throttling
BURST=n specifies the number of successive packets which, when sent with a gap < TIME, will trigger throttling
DELAY=n specifies the number of milliseconds to delay before transmitting the next packet
Defaults for these are TIME=5, BURST=4 and DELAY=10
The defaults were taken from Johnny's Bridge program. Since I don't have working physical systems to test with, I'm looking for feedback on good working choices for these default values.
Cool. I'll test this in a couple of days. This would actually be a big improvement for me.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
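As a rough illustration of how the TIME, BURST and DELAY settings described above interact, here is a minimal sketch in Python. It is not the actual simh (or bridge) code; the class name and the detail that a long enough gap resets the burst count are my assumptions.

import time

class TransmitThrottle:
    """Toy model of a TIME/BURST/DELAY style transmit throttle (times in ms)."""

    def __init__(self, time_ms=5, burst=4, delay_ms=10):
        self.time_ms = time_ms      # a gap shorter than this counts toward a burst
        self.burst = burst          # back-to-back packets tolerated before throttling
        self.delay_ms = delay_ms    # pause inserted once the burst limit is reached
        self.last_xmit = None
        self.run_length = 0

    def before_transmit(self):
        now = time.monotonic()
        if self.last_xmit is not None and (now - self.last_xmit) * 1000.0 < self.time_ms:
            self.run_length += 1
        else:
            self.run_length = 0                 # assumed: a long gap resets the count
        if self.run_length >= self.burst:
            time.sleep(self.delay_ms / 1000.0)  # give the slow receiver time to drain
            self.run_length = 0
        self.last_xmit = time.monotonic()

throttle = TransmitThrottle()   # TIME=5, BURST=4, DELAY=10, matching the defaults above
for _ in range(20):             # stand-in for handing 20 frames to the NIC
    throttle.before_transmit()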
On Thursday, June 05, 2014 at 4:50 PM, Mark Pizzolato wrote:
On Thursday, June 05, 2014 at 4:33 PM, Johnny Billquist wrote:
On 2014-06-06 01:16, Mark Pizzolato - Info Comm wrote:
OK. So you sound like you really need a throttling option for the LAN
devices. I'll look over the throttling logic in the bridge and fold
in something similar.
Well, I'm not so sure that code should be a model to pattern anything after.
Everything about my bridge is just a hack. Done just to fix an
immediate problem without any proper design at any corner. :-)
It definitely has more going for it than my first thoughts. Before looking at
your bridge code, I was merely going to create an option to measure the time
between successive packets. Your model allows for some bursting but then
starts throttling.
I'll make it an option and the control variables (TIMEWINDOW, BURSTSIZE,
DELAY) configurable. Default will be no throttle.
The current simh code base has Ethernet transmit throttling support for XQ and XU devices.
The default behavior is unchanged (i.e. no throttling).
Throttling for a particular LAN interface can be enabled with:
sim> SET {XQ|XQB|XU|XUB} THROTTLE=DISABLE
sim> SET {XQ|XQB|XU|XUB} THROTTLE=ENABLE
sim> SET {XQ|XQB|XU|XUB} THROTTLE=TIME=n{;BURST=n{;DELAY=n}}
Where:
TIME=n specifies an inter-packet gap (in ms) which can trigger throttling
BURST=n specifies the number of successive packets which, when sent with a gap < TIME, will trigger throttling
DELAY=n specifies the number of milliseconds to delay before transmitting the next packet
Defaults for these are TIME=5, BURST=4 and DELAY=10
The defaults were taken from Johnny's Bridge program. Since I don't have working physical systems to test with, I'm looking for feedback on good working choices for these default values.
- Mark
On Thursday, June 05, 2014 at 4:33 PM, Johnny Billquist wrote:
On 2014-06-06 01:16, Mark Pizzolato - Info Comm wrote:
OK. So you sound like you really need a throttling option for the LAN
devices. I'll look over the throttling logic in the bridge and fold in something
similar.
Well, I'm not so sure that code should be a model to pattern anything after.
Everything about my bridge is just a hack. Done just to fix an immediate
problem without any proper design at any corner. :-)
It definitely has more going for it than my first thoughts. Before looking at your bridge code, I was merely going to create an option to measure the time between successive packets. Your model allows for some bursting but then starts throttling.
I'll make it an option and the control variables (TIMEWINDOW, BURSTSIZE, DELAY) configurable. Default will be no throttle.
- Mark
On 2014-06-06 01:16, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 12:04 PM, Johnny Billquist wrote:
On 2014-06-05 20:46, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 10:53 AM, Johnny Billquist wrote:
On 2014-06-05 19:23, Paul_Koning at Dell.com wrote:
On Jun 5, 2014, at 12:47 PM, Johnny Billquist <bqt at softjar.se> wrote:
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So
you seem to have hit some other kind of limitation or something...
I wouldn't think that traffic between PDP11 systems would put so much
data in flight that all of the above issues would come into play.
Hmmm...Grind...Grind... I do seem to have some vague recollection of an
issue with some DEQNA devices not being able to handle back-to-back
packets coming in from the wire. This issue might have been behind DEC's
wholesale replacement/upgrading of every DEQNA in the field, and it also
may have had something to do with the DEQNA not being officially
supported as a cluster device...
Hey, just do:
sim> SET XQ TYPE=DELQA-T
and you're all good. :-) Too bad you can't just upgrade real hardware like
that.
Uh... It's the PDP-11 that has problems receiving packets, not simh.
I knew that. It was a joke. Notice ":-)"....
Well, it was ambiguous. :-)
Also, my PDP-11 already has a DELQA-T. :-) Finally, of course a simulated
PDP-11 running on a fast machine will be able to output data at a high rate.
Why would it not?
It would. I never argued that it wouldn't.
I read it as you were thinking that it would not. Maybe I'm being too literal tonight.
Two real PDP-11 systems do not get any problems.
Sending data from a real PDP-11 to the one in simh (or whatever) does not
have any problems either.
It is only when you send lots of data from a simulated machine (be that a
PDP-11 or a VAX) to a real PDP-11 that you get these issues. I would suspect
you should be able to see similar issues if the receiving end was physical VAX
as well.
I just tried to fire up the old VAXstation 4000 I've got on the shelf. This system hasn't been booted in more than 5 years (maybe 10). When it last booted, it didn't have any working disks, so I was planning to boot it into a cluster from my simh host. Without a disk, I'll have to create a RAM disk to test file copies from the simh side to the real side... I haven't had a monitor for the system for about 15 years, but the last booting activities worked fine with a cable to one of the serial ports. But today it doesn't work. :-(
You would have to boot VMS, since you don't have DECnet under much else. (I know you could have Ultrix with DECnet, but I somehow don't think that's the right way... :-) )
The problem is also partly DECnet. DECnet does not seem to keep packets that
arrive out of order. So if a packet in a sequence is lost, DECnet is going to
retransmit all packets from that point forward. Meaning that when the
session timer times out, the retransmission happens, and then you will yet
again drop a packet in the whole sequence of packets that are sent. Each
time the session timer times out, DECnet also does a backoff on the timeout
time of that timer, until the session timer is about 2 minutes. So after a while
you end up with DECnet sending a burst of packets, some of which are lost. It
then takes about 2 minutes before a retransmission happens, at which point
you get another 2 minute timeout. Thus, performance sucks.
TCP/IP is better (well, my TCP/IP anyway), in that when I lose a packet, I still
keep whatever later packets I get, so after a while I get to a stable mode
where TCP only sends one packet at a time, since the window is full. Only
actually lost packets need to be retransmitted, so I actually do get to this
stable point.
OK. So you sound like you really need a throttling option for the LAN devices. I'll look over the throttling logic in the bridge and fold in something similar.
Well, I'm not so sure that code should be a model to pattern anything after. Everything about my bridge is just a hack. Done just to fix an immediate problem without any proper design at any corner. :-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
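The difference Johnny describes between DECnet's retransmit-everything-from-the-loss recovery (made worse by the session timer backing off toward two minutes) and a TCP that buffers out-of-order segments can be made concrete with a toy model. The Python below is not DECnet NSP or any real TCP stack, and the packet counts and loss pattern are invented purely for illustration.

def go_back_n_sends(total, lost):
    """Receiver discards everything after a lost packet, so the sender
    retransmits from the first loss onward on each timeout."""
    lost = set(lost)
    sends = 0
    next_seq = 0
    while next_seq < total:
        window = range(next_seq, total)
        sends += len(window)
        dropped = sorted(s for s in window if s in lost)
        if dropped:
            lost.discard(dropped[0])   # assume the retransmission of that packet gets through
            next_seq = dropped[0]
        else:
            next_seq = total
    return sends

def buffering_receiver_sends(total, lost):
    """Out-of-order packets are kept, so only the lost ones are resent."""
    return total + len(lost)

losses = {10, 40, 70}
print("retransmit-from-loss sends:", go_back_n_sends(100, losses))
print("buffering receiver sends  :", buffering_receiver_sends(100, losses))

With three losses in a 100-packet transfer, the first model resends 280 packets over four rounds, each extra round also paying one (ever longer) session-timer timeout, while the buffering receiver needs only 103 sends and no long stalls.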
On Thursday, June 05, 2014 at 12:04 PM, Johnny Billquist wrote:
On 2014-06-05 20:46, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 10:53 AM, Johnny Billquist wrote:
On 2014-06-05 19:23, Paul_Koning at Dell.com wrote:
On Jun 5, 2014, at 12:47 PM, Johnny Billquist <bqt at softjar.se> wrote:
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So
you seem to have hit some other kind of limitation or something...
I wouldn't think that traffic between PDP11 systems would put so much
data in flight that all of the above issues would come into play.
Hmmm...Grind...Grind... I do seem to have some vague recollection of an
issue with some DEQNA devices not being able to handle back-to-back
packets coming in from the wire. This issue might have been behind DEC's
wholesale replacement/upgrading of every DEQNA in the field, and it also
may have had something to do with the DEQNA not being officially
supported as a cluster device...
Hey, just do:
sim> SET XQ TYPE=DELQA-T
and you're all good. :-) Too bad you can't just upgrade real hardware like
that.
Uh... It's the PDP-11 that has problems receiving packets, not simh.
I knew that. It was a joke. Notice ":-)"....
Also, my PDP-11 already has a DELQA-T. :-) Finally, of course a simulated
PDP-11 running on a fast machine will be able to output data at a high rate.
Why would it not?
It would. I never argued that it wouldn't.
Two real PDP-11 systems do not get any problems.
Sending data from a real PDP-11 to the one in simh (or whatever) does not
have any problems either.
It is only when you send lots of data from a simulated machine (be that a
PDP-11 or a VAX) to a real PDP-11 that you get these issues. I would suspect
you should be able to see similar issues if the receiving end was physical VAX
as well.
I just tried to fire up the old VAXstation 4000 I've got on the shelf. This system hasn't been booted in more than 5 years (maybe 10). When it last booted, it didn't have any working disks, so I was planning to boot it into a cluster from my simh host. Without a disk, I'll have to create a RAM disk to test file copies from the simh side to the real side... I haven't had a monitor for the system for about 15 years, but the last booting activities worked fine with a cable to one of the serial ports. But today it doesn't work. :-(
The problem is also partly DECnet. DECnet does not seem to keep packets that
arrive out of order. So if a packet in a sequence is lost, DECnet is going to
retransmit all packets from that point forward. Meaning that when the
session timer times out, the retransmission happens, and then you will yet
again drop a packet in the whole sequence of packets that are sent. Each
time the session timer times out, DECnet also does a backoff on the timeout
time of that timer, until the session timer is about 2 minutes. So after a while
you end up with DECnet sending a burst of packets, some of which are lost. It
then takes about 2 minutes before a retransmission happens, at which point
you get another 2 minute timeout. Thus, performance sucks.
TCP/IP is better (well, my TCP/IP anyway), in that when I lose a packet, I still
keep whatever later packets I get, so after a while I get to a stable mode
where TCP only sends one packet at a time, since the window is full. Only
actually lost packets need to be retransmitted, so I actually do get to this
stable point.
OK. So you sound like you really need a throttling option for the LAN devices. I'll look over the throttling logic in the bridge and fold in something similar.
- Mark
On Jun 5, 2014, at 3:24 PM, Johnny Billquist <bqt at softjar.se> wrote:
On 2014-06-05 21:12, Paul_Koning at Dell.com wrote:
...
True, provided congestion control is working. In the days of DECnet Phase IV, congestion control was a topic of active research, rather than a well understood problem. (Things like the TCP/IP DEC bit are an outcome of that work as well as a lot of other less obvious knowledge that made its way into other protocols.) So in Phase IV, you probably don't have effective congestion control, and scenarios with widely differing bandwidth points are likely to behave poorly. In Phase V, that should all be much better.
I don't even know what the "DEC bit" in TCP/IP is. Never heard of it. (Feel free to educate me.)
But TCP has the slow start control, the ICMP source quench, handling of out of order packets, and I'm sure a few more tricks to better deal with this kind of situation.
It's officially the Congestion Experienced bit. http://minnie.tuhs.org/PhD/th/2Existing_Congestion_Contro.html has a large amount of stuff on the topic; section 4.2 mentions the DEC Bit. In fact, that whole page is full of references to DECnet work on the subject.
paul
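For the curious, the DECbit scheme as described in the congestion-avoidance literature works roughly like this: routers mark packets when their average queue builds up, the receiver echoes the mark in its acknowledgements, and the sender grows its window additively or shrinks it multiplicatively depending on how many marks came back. A rough Python sketch of the sender-side policy; the constants and the "at least half" threshold are quoted from memory and may not be exact.

def decbit_adjust(window, ce_bits, increase=1.0, decrease=0.875):
    """If at least half of the acks for the last window carried the Congestion
    Experienced bit, shrink the window multiplicatively; otherwise grow it
    additively."""
    if sum(ce_bits) * 2 >= len(ce_bits):
        return max(1.0, window * decrease)
    return window + increase

w = 8.0
for acks in ([0] * 8, [0] * 8, [1] * 5 + [0] * 3, [1] * 8):
    w = decbit_adjust(w, acks)
    print(f"window is now {w:.2f}")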
On 2014-06-05 21:12, Paul_Koning at Dell.com wrote:
On Jun 5, 2014, at 2:46 PM, Mark Pizzolato - Info Comm <Mark at infocomm.com> wrote:
...
All of this is absolutely true, but it would seem that no one is trying to push full wire speed traffic between systems. It would seem that given high quality signal levels on the wires in the data path (i.e. no excessive collisions due to speed/duplex mismatching), that the natural protocol on the wire (with acknowledgements, etc.) should be able to move data at the speed of the lowest component in the datapath. That may be either the Unibus or Qbus or the PDP11's CPU.
True, provided congestion control is working. In the days of DECnet Phase IV, congestion control was a topic of active research, rather than a well understood problem. (Things like the TCP/IP DEC bit are an outcome of that work as well as a lot of other less obvious knowledge that made its way into other protocols.) So in Phase IV, you probably don't have effective congestion control, and scenarios with widely differing bandwidth points are likely to behave poorly. In Phase V, that should all be much better.
I don't even know what the "DEC bit" in TCP/IP is. Never heard of it. (Feel free to educate me.)
But TCP has the slow start control, the ICMP source quench, handling of out of order packets, and I'm sure a few more tricks to better deal with this kind of situation.
...
Hmmm...Grind...Grind... I do seem to have some vague recollection of an issue with some DEQNA devices not being able to handle back-to-back packets coming in from the wire. This issue might have been behind DEC's wholesale replacement/upgrading of every DEQNA in the field, and it also may have had something to do with the DEQNA not being officially supported as a cluster device...
I'm not sure about that for QNA. It certainly was an incorrigible device, which is why VMS dropped it, but I don't remember back-to-back packets being its specific issue.
I do remember that the 3C901 had this issue, and DECnet/DOS (Pathworks) ran into big trouble with that. There was even a proposal to throttle sending speeds across all DECnet implementations as a workaround for that design error; that proposal went down in flames very quickly indeed. So at that point it was even more clearly understood that back to back packets at the wire end of a NIC must always be handled.
Yeah. And I do not think that it is actually back-to-back packets that are the issue.
Johnny
On Jun 5, 2014, at 2:46 PM, Mark Pizzolato - Info Comm <Mark at infocomm.com> wrote:
...
All of this is absolutely true, but it would seem that no one is trying to push full wire speed traffic between systems. It would seem that given high quality signal levels on the wires in the data path (i.e. no excessive collisions due to speed/duplex mismatching), that the natural protocol on the wire (with acknowledgements, etc.) should be able to move data at the speed of the lowest component in the datapath. That may be either the Unibus or Qbus or the PDP11's CPU.
True, provided congestion control is working. In the days of DECnet Phase IV, congestion control was a topic of active research, rather than a well understood problem. (Things like the TCP/IP DEC bit are an outcome of that work as well as a lot of other less obvious knowledge that made its way into other protocols.) So in Phase IV, you probably don't have effective congestion control, and scenarios with widely differing bandwidth points are likely to behave poorly. In Phase V, that should all be much better.
...
Hmmm...Grind...Grind... I do seem to have some vague recollection of an issue with some DEQNA devices not being able to handle back-to-back packets coming in from the wire. This issue might have been behind DEC's wholesale replacement/upgrading of every DEQNA in the field, and it also may have had something to do with the DEQNA not being officially supported as a cluster device...
I'm not sure about that for QNA. It certainly was an incorrigible device, which is why VMS dropped it, but I don't remember back-to-back packets being its specific issue.
I do remember that the 3C901 had this issue, and DECnet/DOS (Pathworks) ran into big trouble with that. There was even a proposal to throttle sending speeds across all DECnet implementations as a workaround for that design error; that proposal went down in flames very quickly indeed. So at that point it was even more clearly understood that back to back packets at the wire end of a NIC must always be handled.
paul
On 2014-06-05 20:46, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 10:53 AM, Johnny Billquist wrote:
On 2014-06-05 19:23, Paul_Koning at Dell.com wrote:
On Jun 5, 2014, at 12:47 PM, Johnny Billquist <bqt at softjar.se> wrote:
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So
you seem to have hit some other kind of limitation or something...
I wouldn't think that traffic between PDP11 systems would put so much data in flight that all of the above issues would come into play.
Hmmm...Grind...Grind... I do seem to have some vague recollection of an issue with some DEQNA devices not being able to handle back-to-back packets coming in from the wire. This issue might have been behind DEC's wholesale replacement/upgrading of every DEQNA in the field, and it also may have had something to do with the DEQNA not being officially supported as a cluster device...
Hey, just do:
sim> SET XQ TYPE=DELQA-T
and you're all good. :-) Too bad you can't just upgrade real hardware like that.
Uh... It's the PDP-11 that has problems receiving packets, not simh. Also, my PDP-11 already has a DELQA-T. :-)
Finally, of course a simulated PDP-11 running on a fast machine will be able to output data at a high rate. Why would it not?
Two real PDP-11 systems do not get any problems.
Sending data from a real PDP-11 to the one in simh (or whatever) does not have any problems either.
It is only when you send lots of data from a simulated machine (be that a PDP-11 or a VAX) to a real PDP-11 that you get these issues. I would suspect you should be able to see similar issues if the receiving end was physical VAX as well.
The problem is also partly DECnet. DECnet does not seem to keep packets that arrive out of order. So if a packet in a sequence is lost, DECnet is going to retransmit all packets from that point forward. Meaning that when the session timer times out, the retransmission happens, and then you will yet again drop a packet in the whole sequence of packets that are sent. Each time the session timer times out, DECnet also does a backoff on the timeout time of that timer, until the session timer is about 2 minutes. So after a while you end up with DECnet sending a burst of packets, some of which are lost. It then takes about 2 minutes before a retransmission happens, at which point you get another 2 minute timeout. Thus, performance sucks.
TCP/IP is better (well, my TCP/IP anyway), in that when I lose a packet, I still keep whatever later packets I get, so after a while I get to a stable mode where TCP only sends one packet at a time, since the window is full. Only actually lost packets need to be retransmitted, so I actually do get to this stable point.
Johnny
On Thursday, June 05, 2014 at 10:53 AM, Johnny Billquist wrote:
On 2014-06-05 19:23, Paul_Koning at Dell.com wrote:
On Jun 5, 2014, at 12:47 PM, Johnny Billquist <bqt at softjar.se> wrote:
...
It doesn't. Believe me, I've seen this, and investigated it years ago.
When you have a real PDP-11 running on half-duplex 10Mb/s talking to
something on a 1Gb/s full duplex, the PDP-11 simply can't keep up.
Makes sense. The issue isn't the 10 Mb/s Ethernet. The switch deals with
that part. The issue is that a PDP-11 isn't fast enough to keep up with a 10
Mb/s Ethernet going flat out. If I remember right, a Unibus is slower than
Ethernet, and while a Q22 bus is slightly faster and could theoretically keep
up, a practical system cannot.
It's several things. The Unibus is definitely slower than the ethernet if I
remember right. The Qbus, while faster, is also slower than ethernet.
So there is definitely a bottleneck at that level.
However, there is also an issue in the switch. If one system is pumping out
packets on a 1Gb/s port, and the switch is forwarding them to a 10Mb/s port,
the switch needs to buffer, and might need to buffer a lot.
There are limitations at that level as well, and I would not be surprised if that
also can come into play here.
Thirdly, even given the limitations above, we then also have the software on
the PDP-11, which also needs to set up new buffers to receive packets into,
and the system itself will not be able to keep up here. So the ethernet
controller is probably running out of buffers to DMA data into as well.
All of this is absolutely true, but it would seem that no one is trying to push full wire speed traffic between systems. It would seem that given high quality signal levels on the wires in the data path (i.e. no excessive collisions due to speed/duplex mismatching), that the natural protocol on the wire (with acknowledgements, etc.) should be able to move data at the speed of the lowest component in the datapath. That may be either the Unibus or Qbus or the PDP11's CPU.
Clearly we can't make old hardware work any faster than it ever did, and I certainly didn't think you were raising an issue about that when you said:
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So
you seem to have hit some other kind of limitation or something...
I wouldn't think that traffic between PDP11 systems would put so much data in flight that all of the above issues would come into play.
Hmmm...Grind...Grind... I do seem to have some vague recollection of an issue with some DEQNA devices not being able to handle back-to-back packets coming in from the wire. This issue might have been behind DEC's wholesale replacement/upgrading of every DEQNA in the field, and it also may have had something to do with the DEQNA not being officially supported as a cluster device...
Hey, just do:
sim> SET XQ TYPE=DELQA-T
and you're all good. :-) Too bad you can't just upgrade real hardware like that.
- Mark
On 2014-06-05 19:23, Paul_Koning at Dell.com wrote:
On Jun 5, 2014, at 12:47 PM, Johnny Billquist <bqt at softjar.se> wrote:
...
It doesn't. Believe me, I've seen this, and investigated it years ago.
When you have a real PDP-11 running on half-duplex 10Mb/s talking to something on a 1Gb/s full duplex, the PDP-11 simply can't keep up.
Makes sense. The issue isn't the 10 Mb/s Ethernet. The switch deals with that part. The issue is that a PDP-11 isn't fast enough to keep up with a 10 Mb/s Ethernet going flat out. If I remember right, a Unibus is slower than Ethernet, and while a Q22 bus is slightly faster and could theoretically keep up, a practical system cannot.
It's several things. The Unibus is definitely slower than the ethernet if I remember right. The Qbus, while faster, is also slower than ethernet.
So there is definitely a bottleneck at that level.
However, there is also an issue in the switch. If one system is pumping out packets on a 1Gb/s port, and the switch is forwarding them to a 10Mb/s port, the switch needs to buffer, and might need to buffer a lot. There are limitations at that level as well, and I would not be surprised if that also can come into play here.
Thirdly, even given the limitations above, we then also have the software on the PDP-11, which also needs to set up new buffers to receive packets into, and the system itself will not be able to keep up here. So the ethernet controller is probably running out of buffers to DMA data into as well.
Johnny
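To put a rough number on the switch-buffering point above: a 1 Gb/s sender hands frames to the switch about 100 times faster than a 10 Mb/s port can drain them, so nearly the whole burst has to sit in the switch's output queue. A back-of-the-envelope sketch in Python; the 100-frame burst is an arbitrary illustration and framing overhead is ignored.

frame_bytes  = 1518                  # maximum Ethernet frame
burst_frames = 100                   # illustrative burst size
in_bps, out_bps = 1_000_000_000, 10_000_000

burst_bits   = burst_frames * frame_bytes * 8
t_in         = burst_bits / in_bps   # time for the whole burst to arrive
drained_bits = out_bps * t_in        # what the slow port sends in that time
backlog_kib  = (burst_bits - drained_bits) / 8 / 1024

print(f"burst arrives in {t_in * 1000:.2f} ms, "
      f"drains in {burst_bits / out_bps * 1000:.0f} ms, "
      f"peak backlog about {backlog_kib:.0f} KiB")

That works out to roughly 147 KiB of backlog for a burst that arrives in just over a millisecond and takes about 120 ms to drain; if the switch has less per-port buffering than that, frames are simply dropped, which is exactly the loss DECnet then recovers from so badly.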
Mark, a DEUNA couldn't even keep up with thickwire Ethernet speed. IIRC it could read up to 3 Mb/s average over a period of time, with full-sized frames. Probably the same for the DEQNA.
Sent from my BlackBerry 10 smartphone.
Original message
From: Mark Pizzolato - Info Comm
Sent: Thursday, June 5, 2014 13:58
To: hecnet at Update.UU.SE
Reply-To: hecnet at Update.UU.SE
Subject: RE: [HECnet] Emulated XQ polling timer setting and data overrun
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both
simh instances run on the same hardware. XQ set to different MAC
addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled a RM03 in seconds).
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So you
seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or VAX, but in the past there were no issues with multiple simh VAX simulators talking to real VAX systems on the same LAN.
Let me know.
Thanks.
- Mark
On Jun 5, 2014, at 12:47 PM, Johnny Billquist <bqt at softjar.se> wrote:
...
It doesn't. Believe me, I've seen this, and investigated it years ago.
When you have a real PDP-11 running on half-duplex 10Mb/s talking to something on a 1Gb/s full duplex, the PDP-11 simply can't keep up.
Makes sense. The issue isn't the 10 Mb/s Ethernet. The switch deals with that part. The issue is that a PDP-11 isn't fast enough to keep up with a 10 Mb/s Ethernet going flat out. If I remember right, a Unibus is slower than Ethernet, and while a Q22 bus is slightly faster and could theoretically keep up, a practical system cannot.
paul
On 2014-06-05 17:55, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 5:59 AM, Johnny Billquist wrote:
On 2014-06-05 13:58, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization.
Both simh instances run on the same hardware. XQ set to different
MAC addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled a RM03 in seconds).
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So you
seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or
VAX, but in the past there were no issues with multiple simh VAX simulators
talking to real VAX systems on the same LAN.
Let me know.
Same as I have talked about several times here. Doing file transfers from the
simh system to the real PDP-11 gets horrible performance. It's a simple
question of data overflow and loss, and DECnet performs poorly here
because recovery from lost packets is so bad when you are losing lots of
packets.
Nothing you really can do about it, unless you want to throttle the ethernet
transmissions.
Hmmm...
There are several possibilities which may help or otherwise be relevant:
1) Your real PDP11 is connected to the LAN via some sort of transceiver. Hopefully you're not using Thickwire or Thinwire Ethernet, but some sort of 10BaseT transceiver connected to a switch port. I've seen switch ports which are VERY POOR at auto-detecting link speed and duplex (especially when dealing with 10Mbit devices). If you're connected through a managed switch, try to hard set the switch port's link speed to 10Mbit and Half Duplex.
2) Try to extend the number of receive buffers available using CFE as you mentioned.
Hopefully one of these will help.
It doesn't. Believe me, I've seen this, and investigated it years ago.
When you have a real PDP-11 running on half-duplex 10Mb/s talking to something on a 1Gb/s full duplex, the PDP-11 simply can't keep up.
This is (repeating myself once more) why I implemented the throttling in the bridge. Talking to the PDP-11 with the bridge sitting in between, it works just fine.
Traffic also works fine if the fast machine isn't trying to totally drown the PDP-11. So things like interactive traffic and small stuff work just fine. File transfers are the obvious problem child.
Johnny
On Thursday, June 05, 2014 at 5:59 AM, Johnny Billquist wrote:
On 2014-06-05 13:58, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization.
Both simh instances run on the same hardware. XQ set to different
MAC addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled a RM03 in seconds).
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So you
seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or
VAX, but in the past there were no issues with multiple simh VAX simulators
talking to real VAX systems on the same LAN.
Let me know.
Same as I have talked about several times here. Doing file transfers from the
simh system to the real PDP-11 gets horrible performance. It's a simple
question of data overflow and loss, and DECnet performs poorly here
because recovery from lost packets is so bad when you are losing lots of
packets.
Nothing you really can do about it, unless you want to throttle the ethernet
transmissions.
Hmmm...
There are several possibilities which may help or otherwise be relevant:
1) Your real PDP11 is connected to the LAN via some sort of transceiver. Hopefully you're not using Thickwire or Thinwire Ethernet, but some sort of 10BaseT transceiver connected to a switch port. I've seen switch ports which are VERY POOR at auto-detecting link speed and duplex (especially when dealing with 10Mbit devices). If you're connected through a managed switch, try to hard set the switch port's link speed to 10Mbit and Half Duplex.
2) Try to extend the number of receive buffers available using CFE as you mentioned.
Hopefully one of these will help.
Good Luck.
- Mark
On 2014-06-05 13:58, Mark Pizzolato - Info Comm wrote:
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both
simh instances run on the same hardware. XQ set to different MAC
addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled a RM03 in seconds).
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So you
seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or VAX, but in the past there were no issues with multiple simh VAX simulators talking to real VAX systems on the same LAN.
Let me know.
Same as I have talked about several times here. Doing file transfers from the simh system to the real PDP-11 gets horrible performance. It's a simple question of data overflow and loss, and DECnet performs poorly here because recovery from lost packets is so bad when you are losing lots of packets.
Nothing you really can do about it, unless you want to throttle the ethernet transmissions.
Johnny
On Thursday, June 05, 2014 at 1:27 AM, Johnny Billquist wrote:
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
[...]
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both
simh instances run on the same hardware. XQ set to different MAC
addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled a RM03 in seconds).
Interesting. I still have real network issues with the latest simh
talking to a real PDP-11 sitting on the same physical network. So you
seem to have hit some other kind of limitation or something...
Can you elaborate on these 'real network issues'?
This is the first I've heard of anything like this.
I have not had any experience with a real PDP11 talking to a simh PDP or VAX, but in the past there were no issues with multiple simh VAX simulators talking to real VAX systems on the same LAN.
Let me know.
Thanks.
- Mark
On 2014-06-05 02:38, Jean-Yves Bernier wrote:
At 4:59 PM -0700 4/6/14, Mark Pizzolato - Info Comm wrote:
Did you change the number of receive buffers?
That does not seem possible under RSX.
It is, but you need to do it in CFE. No way of dynamically changing at runtime.
That really only indicates that you're running on a platform which can
do Ethernet network I/O in a parallel thread. Most platforms did not
use the threaded model previously. Previously, in 3.9, Polling was
still necessary since the simulator didn't really support an
asynchronous I/O model. The latest codebase has support for
asynchronous network AND disk I/O. On the current codebase if you've
got threads available, then network reads and writes are performed
asynchronously and interrupts (I/O completion) are triggered with a few
microseconds of latency. As I recall from earlier in these threads,
you're running under CentOS on VirtualBox. You didn't mention what
host OS you're running on. In any case, the threaded model might
just work better in the virtual environment.
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both
simh instances run on the same hardware. XQ set to different MAC
addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I
have filled a RM03 in seconds).
Interesting. I still have real network issues with the latest simh talking to a real PDP-11 sitting on the same physical network. So you seem to have hit some other kind of limitation or something...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
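As a sketch of the difference between the polled and threaded network models Mark describes above, here is a minimal Python illustration. The names read_frame and raise_interrupt are placeholders, not simh APIs; the point is only that a blocking reader thread can deliver a frame and its completion interrupt within microseconds of arrival, while a polled loop waits for the next poll tick.

import queue
import threading
import time

rx_queue = queue.Queue()

def read_frame():
    """Placeholder for a blocking packet read from the host NIC (e.g. via pcap)."""
    time.sleep(0.001)
    return b"\x00" * 64

def raise_interrupt():
    """Placeholder for scheduling the simulated device's completion interrupt."""
    pass

def reader_thread():
    """Asynchronous model: block in the host OS, hand over each frame and raise
    the interrupt as soon as the frame arrives."""
    while True:
        rx_queue.put(read_frame())
        raise_interrupt()

threading.Thread(target=reader_thread, daemon=True).start()

for _ in range(3):                   # pretend to be the simulator consuming frames
    print("received a", len(rx_queue.get()), "byte frame")

# A purely polled model, by contrast, only notices new frames when a periodic
# poll timer fires, adding up to one poll interval of receive latency:
#   if poll_timer_expired():
#       while (frame := nonblocking_read()) is not None:
#           rx_queue.put(frame); raise_interrupt()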
On Jun 4, 2014 5:38 PM, Jean-Yves Bernier <bernier at pescadoo.net> wrote:
At 4:59 PM -0700 4/6/14, Mark Pizzolato - Info Comm wrote:
Did you change the number of receive buffers?
That does not seem possible under RSX.
That really only indicates that you're running on a platform which can do Ethernet network I/O in a parallel thread. Most platforms did not use the threaded model previously. Previously, in 3.9, Polling was still necessary since the simulator didn't really support an asynchronous I/O model. The latest codebase has support for asynchronous network AND disk I/O. On the current codebase if you've got threads available, then network reads and writes are performed asynchronously and interrupts (I/O completion) are triggered with a few microseconds of latency. As I recall from earlier in these threads, you're running under CentOS on VirtualBox. You didn't mention what host OS you're running on. In any case, the threaded model might just work better in the virtual environment.
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both simh instances run on the same hardware. XQ set to different MAC addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled a RM03 in seconds).
I didn't notice you were not testing traffic between VAXes. The threaded network I/O buffers more traffic, which could naturally avoid overruns.
Disk I/O with RQ and RP is now asynchronous on VAX and PDP11.
- Mark
At 4:59 PM -0700 4/6/14, Mark Pizzolato - Info Comm wrote:
Did you change the number of receive buffers?
That does not seem possible under RSX.
That really only indicates that you're running on a platform which can do Ethernet network I/O in a parallel thread. Most platforms did not use the threaded model previously. Previously, in 3.9, Polling was still necessary since the simulator didn't really support an asynchronous I/O model. The latest codebase has support for asynchronous network AND disk I/O. On the current codebase if you've got threads available, then network reads and writes are performed asynchronously and interrupts (I/O completion) are triggered with a few microseconds of latency. As I recall from earlier in these threads, you're running under CentOS on VirtualBox. You didn't mention what host OS you're running on. In any case, the threaded model might just work better in the virtual environment.
I have tested commit 753e4dc9 on Mac OS 10.6, no virtualization. Both simh instances run on the same hardware. XQ set to different MAC addresses, since this is now enforced.
Asynchronous network/disk IO may explain the uncommon transfer speed (I have filled a RM03 in seconds).
--
Jean-Yves Bernier
Are there any Phase II implementations out there that could be downloaded and run on an emulator such as SIMH? DECnet/E for RSTS V7 would be great since I know that well, but something else would also be interesting.
I'm trying to make my DECnet/Python allow Phase II, as opposed to the standard one-version-back backward compatibility. That's not documented but it is perfectly doable. Finding a real test system is the tricky part.
It would be really neat to be able to find one that uses the marginally documented intercept feature. Does anyone here know where that was used? I have a vague impression that TOPS-10 and/or TOPS-20 treated the front end PDP-11 like another node, so that would be the relay node and the 10 or 20 the end node in a Phase II star network. Is that correct?
paul
On Wednesday, June 04, 2014 at 4:08 PM, Jean-Yves Bernier wrote:
At 10:44 AM -0700 4/6/14, Mark Pizzolato - Info Comm wrote:
If you're going to drive this deeply into testing it would really be
best if you ran with the latest code since your results may ultimately
suggest code changes, AND the latest code may behave significantly
different than the older versions.
Independent of which codebase you're running, and since you're now
tweaking the behavior of the simulated hardware, you may want to look
at sim> SHOW XQ STATS and try to analyze the relationship between these
stats and the ones the OS sees.
Also, you may want to explore what happens if:
NCP> DEFINE LINE QNA-0 RECEIVE BUFFER 32
Is done prior to starting your network... The limit of 32 may be
different on different Operating Systems...
Once again the latest code is available from:
https://github.com/simh/simh/archive/master.zip
Mark, what kind of magic did you perform on XQ?
I tried commit 753e4dc9 and overruns are gone. No more NFT/FAL hangups.
Transfer speed is higher than ever : 10000. blocks in 4 seconds. I can't believe
it. This ROCKS.
Significant changes have been made since 3.9, but I haven't seen substantial throughput differences.
Did you change the number of receive buffers?
Maybe the secret is here:
sim> sh xq
XQ address=17774440-17774457, vector=120, MAC=08:00:2B:AA:BB:01
type=DEQNA, polling=disabled, sanity=OFF
leds=(ON,ON,ON)
attached to eth0
"polling=disabled"
That really only indicates that you're running on a platform which can do Ethernet network I/O in a parallel thread. Most platforms did not use the threaded model previously. Previously, in 3.9, Polling was still necessary since the simulator didn't really support an asynchronous I/O model. The latest codebase has support for asynchronous network AND disk I/O. On the current codebase if you've got threads available, then network reads and writes are performed asynchronously and interrupts (I/O completion) are triggered with a few microseconds of latency. As I recall from earlier in these threads, you're running under CentOS on VirtualBox. You didn't mention what host OS you're running on. In any case, the threaded model might just work better in the virtual environment.
- Mark
At 10:44 AM -0700 4/6/14, Mark Pizzolato - Info Comm wrote:
If you're going to drive this deeply into testing it would really be best if you ran with the latest code since your results may ultimately suggest code changes, AND the latest code may behave significantly different than the older versions.
Independent of which codebase you're running, and since you're now tweaking the behavior of the simulated hardware, you may want to look at sim> SHOW XQ STATS and try to analyze the relationship between these stats and the ones the OS sees.
Also, you may want to explore what happens if:
NCP> DEFINE LINE QNA-0 RECEIVE BUFFER 32
Is done prior to starting your network... The limit of 32 may be different on different Operating Systems...
Once again the latest code is available from: https://github.com/simh/simh/archive/master.zip
Mark, what kind of magic did you perform on XQ?
I tried commit 753e4dc9 and overruns are gone. No more NFT/FAL hangups. Transfer speed is higher than ever : 10000. blocks in 4 seconds. I can't believe it. This ROCKS.
Maybe the secret is here:
sim> sh xq
XQ address=17774440-17774457, vector=120, MAC=08:00:2B:AA:BB:01
type=DEQNA, polling=disabled, sanity=OFF
leds=(ON,ON,ON)
attached to eth0
"polling=disabled"
--
Jean-Yves Bernier