On 2019-07-16 20:04, John Forecast wrote:
On Jul 16, 2019, at 1:08 PM, Johnny Billquist
<bqt at softjar.se> wrote:
On 2019-07-16 17:50, John Forecast wrote:
On Jul 15, 2019, at 7:48 PM, Johnny Billquist <bqt at softjar.se> wrote:
On 2019-07-15 17:36, John Forecast wrote:
On Jul 14, 2019, at 11:55 PM, Thomas DeBellis <tommytimesharing at gmail.com> wrote:
> 2. RSX-11M/11S - any PDP-11 system with sufficient memory + devices
> Device drivers ran as part of the "kernel" serialized by the fork block mechanism
Right. And I/O completion is normally handled by a couple of routines in the kernel, so
the driver itself can be pretty minimal.
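The fork-block idea described above (the interrupt service routine does minimal work and queues a block, so the bulk of I/O completion runs serialized at fork level) can be sketched roughly in C. All names here, such as fork_block and fork_dispatch, are illustrative and are not the actual RSX-11M symbols:

```c
#include <stddef.h>

/* Hypothetical sketch of the RSX-11M fork-block mechanism: interrupt
 * service does the minimum, then queues a "fork block" so completion
 * processing runs serialized with respect to other drivers. */

struct fork_block {
    struct fork_block *next;        /* singly linked fork queue */
    void (*routine)(void *ctx);     /* completion routine to run */
    void *ctx;                      /* driver context (e.g. a UCB) */
};

static struct fork_block *fork_head;
static struct fork_block **fork_tail = &fork_head;

/* Called from interrupt level: just enqueue, never do real work. */
static void fork_enqueue(struct fork_block *fb)
{
    fb->next = NULL;
    *fork_tail = fb;
    fork_tail = &fb->next;
}

/* Called once the interrupt is dismissed: drain the queue in order,
 * so all completion routines run one at a time -- the serialization
 * property the fork mechanism provides. */
static void fork_dispatch(void)
{
    while (fork_head) {
        struct fork_block *fb = fork_head;
        fork_head = fb->next;
        if (!fork_head)
            fork_tail = &fork_head;
        fb->routine(fb->ctx);
    }
}

/* Demo completion routine for the usage example. */
static int completions;
static void demo_done(void *ctx) { (void)ctx; completions++; }
```

Because every completion routine runs from the same single-threaded queue drain, drivers need no locking against each other, which is part of why a driver can stay so minimal.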
> I guess RSX-11M+ could be considered a separate class adding I/D space, supervisor mode
> and multi-processor support.
Nah. 11M+ is really very similar to 11M. They pretty much share the same codebase, but M+
extends some data structures with a bit more information, and also adds a couple of more
structures to make the system more flexible.
The splitting of I/D space, as well as supervisor mode actually have very little impact
on the kernel. There is obviously a bit more work at a context switch, and there are a
couple of details around ASTs, which can switch between user and supervisor, but much of
the rest doesn't know or care. And drivers can pretty much be moved straight over,
although you normally would want to modify them to take advantage of some nifty additional
capabilities provided by M+.
There were a lot of small changes throughout the networking code to get it running
on our dual-processor. We eventually shipped with the kernel networking code running with
cache bypass enabled and the intention of revisiting it in a later release. Once the 11/74
was cancelled, we were never given the opportunity…
Ah, yes, I tend to skim over the mP changes, but you are right. DECnet is one of those
things that got hit because of that, as well as some kernel parts. It's still fairly
contained. It's just that CEX knows a little too much to not get affected. Most device
drivers are totally unaware of mP aspects.
It's right on the hairy edge since it does its own driver loading and interrupt
handling. We first got DECnet-11M+ running on a standard 11/70 configuration and then I
managed to get time on the RSX-11M+ development quad processor (CASTOR::/POLLUX::) which
had its own machine room within the larger lab space - pretty much every known Unibus
device was attached to that machine. Booted my image and started DECnet running (this was
Phase III so probably DMC/DMR connectivity) and everything went silent followed by each
console typing an XDT> prompt in sequence - my total test run had probably lasted 5
minutes. Somewhere in the interrupt logic I had missed a cache bypass or cache flush and
all 4 processors had the same memory location in their caches and were quite happily using
it to modify memory.
Heh! I can imagine... :-)
Interesting
that you think you might have improved DECnet with a bit more work. As it stands, it works
fine today. But I haven't looked into how much might be running without cache enabled.
But I am running a simulated 11/74. That is Mim.Update.UU.SE. You can both browse to it,
and telnet to it. Usually I only have two CPUs online, since there are still some issues
with the emulation. But otherwise it works fine.
Does the simulator actually simulate the caches or just the control registers like
SIMH?
It doesn't really simulate the cache; however, with mP, it is horrendously
more complicated. The RSX code has timing dependency assumptions
between the processors, which you need to track properly, or RSX will
bugcheck on you. Which is what will normally happen, since the
threads for the different processors are just so damn fast today, and
the timing is done with tight loops in RSX, which only spin 64K times
before bugchecking.
So whenever there is an IIST interrupt, you essentially need to run all
the CPUs in lockstep for a while.
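A minimal sketch of the kind of bounded spin described above, assuming a 16-bit count that gives up after 64K iterations. The names and structure are illustrative, not the actual RSX code; the point is only that a fixed iteration budget which was generous on real hardware can expire instantly on a fast simulator unless the simulated CPUs run in near-lockstep:

```c
/* Illustrative bounded spin: wait for another processor to set *flag,
 * but only SPIN_LIMIT times, then declare a bugcheck. On an 11/74
 * each iteration took real time; in a simulator the whole budget can
 * elapse before the other simulated CPU thread even gets scheduled. */

#define SPIN_LIMIT 65536u

enum spin_result { SPIN_OK, SPIN_BUGCHECK };

static enum spin_result spin_wait(volatile int *flag)
{
    unsigned n;
    for (n = 0; n < SPIN_LIMIT; n++)
        if (*flag)
            return SPIN_OK;
    return SPIN_BUGCHECK;
}
```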
The cache as such is hard to simulate unless you simulate the whole
memory system at a rather low level. If you just use the actual memory,
then the underlying system the simulator runs on guarantees cache
coherency, since that is a basic property of any multiprocessor system
these days. The 11/74 is a rather odd ball in this context, as it has
shared memory, but individual caches on each CPU, which do not have any
form of cache coherency in the hardware, and the OS has to solve it all
"by hand".
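A toy model of that situation, shared memory plus a private per-CPU cache with no hardware coherency, makes it easy to see the bug described earlier in the thread: without an explicit bypass or invalidate, a CPU keeps reading its own stale copy. Everything here is illustrative; real 11/70-class caches work on physical addresses in hardware, not via functions like these:

```c
/* Toy model: one shared memory word and private per-CPU caches with
 * no hardware coherency, so software must invalidate "by hand".
 * All names are illustrative. */

static int shared_mem;              /* the shared memory word */

struct cpu_cache {
    int value;                      /* this CPU's cached copy */
    int valid;                      /* is the copy valid? */
};

/* Cached read: returns the (possibly stale) cached copy if valid,
 * otherwise fills the cache from shared memory. */
static int cached_read(struct cpu_cache *c)
{
    if (!c->valid) {
        c->value = shared_mem;
        c->valid = 1;
    }
    return c->value;
}

/* What the missing "cache bypass or cache flush" would have done:
 * discard the private copy so the next read goes to memory. */
static void cache_invalidate(struct cpu_cache *c)
{
    c->valid = 0;
}
```

In the usage below, CPU A still sees the old value after the shared word changes, until it explicitly invalidates; multiply that by four processors modifying memory through stale copies and you get the silent hang followed by XDT> prompts described above.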
Latest 11M is
V4.8, and latest 11M+ is V4.6.
I think they are even available at trailing-edge?
That's the base 11M/11M+ distribution? Do you know if there was a corresponding
DECnet release?
The M+ 4.6 disk image at trailing-edge actually contains the DECnet code as well. And
yes, DECnet-11M-PLUS V4.5 and V4.6 were released in parallel with RSX-11M-PLUS V4.5 and
V4.6.
Yes, I saw that one. I'm still looking for a router implementation - the latest
I've seen is for 11M+ V4.2
The disk image on trailing-edge has the full-functionality DECnet unless
I remember wrong. Mim is also running area routing, 11M+ V4.6, and as
mentioned mP.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol