On 27.6.2012 15:07, Peter Coghlan wrote:
My original plan was to make the domain available for all to receive email with either:
<username>@hecnet.org or
<username>@<DECnet-nodename>.hecnet.org (much easier to do BTW)
I'm running a PMDF mailserver on VMS which can gateway mail from the internet
to DECnet Mail-11. I could set it up to do this if there is interest.
The web address was (is) an after-thought.
I'm also running the OSU webserver but it looks like web hosting might be
addressed by others with more bandwidth. (I'm not great at providing content
either!).
Regards,
Peter Coghlan.
One thought occurred to me today: there used to be the Mailbus and Message Router products running on VMS. They could be used to route messages between different mail systems. They were used at DEC until about the late '90s. The addresses used the format user.name(a)*.mts.dec.com.
Unfortunately I don't remember much about them anymore, but I thought they could be used as a message gateway between HECnet and the Internet. If I remember correctly, the VMS Hobbyist licenses included them.
If someone has recent experience with them, perhaps they could give more exact details about the feasibility.
Regards,
Kari
On 2012-07-09 03:21, Paul_Koning at Dell.com wrote:
On Jul 8, 2012, at 9:01 PM, Bob Armstrong wrote:
I don't know if you understand the concept of memory partitions in RSX
properly.
No, I guess not. The usefulness of partitions in an unmapped system is
easy to see - since tasks have to be linked (er, "TKBed") to run at a
specific virtual address and all tasks have to share the same address space
at the same time, you have to fit the tasks into memory. It's really the
address space that's being partitioned as much as it is the memory. The
concept is not unlike overlays.
But once you have an MMU the whole procedure is pointless. Every task and
every processor mode can have its own contiguous virtual address space.
Nobody needs to share virtual address space with anybody else. And the
mapping to physical memory is (for practical purposes) completely arbitrary
and easily changed on the fly. What's gained by partitioning physical
memory into fixed chunks that are allocated only to specific uses and can't
be shared?
The point that I know of (having seen it at work in Typeset-11, which is based on RSX-11D or IAS) is to have some critical tasks be guaranteed memory, while others contend with each other (but not with the critical tasks). That simple example results in two partitions, a specific one and a general one.
Right. Which is why DECnet-11M uses a bunch of partitions of its own. In M+ this was solved by actually being able to lock regions in memory.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On Jul 8, 2012, at 9:01 PM, Bob Armstrong wrote:
I don't know if you understand the concept of memory partitions in RSX
properly.
No, I guess not. The usefulness of partitions in an unmapped system is
easy to see - since tasks have to be linked (er, "TKBed") to run at a
specific virtual address and all tasks have to share the same address space
at the same time, you have to fit the tasks into memory. It's really the
address space that's being partitioned as much as it is the memory. The
concept is not unlike overlays.
But once you have an MMU the whole procedure is pointless. Every task and
every processor mode can have its own contiguous virtual address space.
Nobody needs to share virtual address space with anybody else. And the
mapping to physical memory is (for practical purposes) completely arbitrary
and easily changed on the fly. What's gained by partitioning physical
memory into fixed chunks that are allocated only to specific uses and can't
be shared?
The point that I know of (having seen it at work in Typeset-11, which is based on RSX-11D or IAS) is to have some critical tasks be guaranteed memory, while others contend with each other (but not with the critical tasks). That simple example results in two partitions, a specific one and a general one.
paul
On 2012-07-09 03:01, Bob Armstrong wrote:
I don't know if you understand the concept of memory partitions in RSX
properly.
No, I guess not. The usefulness of partitions in an unmapped system is
easy to see - since tasks have to be linked (er, "TKBed") to run at a
specific virtual address and all tasks have to share the same address space
at the same time, you have to fit the tasks into memory. It's really the
address space that's being partitioned as much as it is the memory. The
concept is not unlike overlays.
Well, on an unmapped system, not only do you have to worry about the region, but you actually have to task build your program for a specific address in that region, since you don't have virtual memory.
Once you have an MMU, you (almost) stop worrying about the address offset parts of regions in TKB, but you still have to say which partition it should request the memory in.
Of course, you can always override which partition a program tries to run in when you install it. The information specified in TKB is just the default values. Nothing forces you to actually use those values.
This is what the help in RSX says:
.help tkb opt par
PAR=par-name[:base:length]
PAR specifies the partition for which the task is built.
In a mapped system, you can install your task in any system partition
or user partition large enough to contain it. In an unmapped system,
your task is bound to physical memory. Therefore, you must install
your task in a partition starting at the same memory address as that
of the partition for which it was built.
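As a concrete (made-up) example of both halves of that, the build-time default and the install-time override, this is roughly what I mean; the task and partition names are only placeholders and the numbers are just illustrative:
>TKB
TKB>MYTASK/MM=MYTASK        ; /MM = build for a mapped system
TKB>/
Enter Options:
TKB>PAR=GEN:0:40000         ; default partition, base and length in octal
TKB>//
>INS MYTASK/PAR=SYSPAR      ; override the default partition at install time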
But once you have an MMU the whole procedure is pointless. Every task and
every processor mode can have its own contiguous virtual address space.
Right. Except that contiguous virtual address space does not necessarily map to a contiguous physical address space.
Nobody needs to share virtual address space with anybody else. And the
mapping to physical memory is (for practical purposes) completely arbitrary
and easily changed on the fly. What's gained by partitioning physical
memory into fixed chunks that are allocated only to specific uses and can't
be shared?
Tasks that share the same region compete for the memory resources. Tasks in separate regions do not.
Also, sometimes tasks want to share parts of their memory space with others.
Also, memory partitions are not allocated to a specific user. Memory partitions are managed by the system manager, and are normally set up at boot time (although they can be created and modified at any point).
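As a sketch of what that boot-time setup looks like, these are the kind of lines the system manager puts in the system's VMR command file (SYSVMR.CMD). The partition name and size here are invented, and the operand details (sizes are in 64-byte blocks, octal, if I remember right) should be checked against the VMR documentation:
SET /MAIN=TSTPAR:*:2000:SYS   ; dedicated partition at the first free address (made-up name/size)
SET /MAIN=GEN:*:*:SYS         ; GEN gets the rest of memory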
Actually I thought RSX worked the way you described for M+ -> everything
ran in a "GEN" partition, and the system just made as many GEN partitions as
it wanted. Only if you were unmapped did you care which partition something
ran in.
Um... No. There is only one GEN partition. There can only be one partition called GEN, since partition names need to be unique.
A memory partition is more like a meta object. When a task is run, it allocates a region within the partition. Many tasks can run in the GEN partition (for example), but they all have their own regions. Otherwise they would all share the same memory.
And if more tasks try to allocate memory in a partition than there is room for, they are constantly swapped in and out by the scheduler, based on their swapping priority.
When you don't have the ability to lock regions in memory, you can constantly be swapped out if some other process in the same partition requests more memory than there is free in that partition.
Good idea. But there are SCSI controllers for the Q-bus, as well as the
KDA50...
I have both of those, but I'm not going to waste 'em on a 11/23! :-)
:-)
Actually that's a pretty realistic attitude back in the day - these are
more expensive mass storage devices and anybody who could have afforded them
probably would have bought a faster CPU too.
I know of places that still ran 11/23+ machines in production until at least a couple of years ago. They moved from RQDX2 to SCSI about six years ago...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I don't know if you understand the concept of memory partitions in RSX
properly.
No, I guess not. The usefulness of partitions in an unmapped system is
easy to see - since tasks have to be linked (er, "TKBed") to run at a
specific virtual address and all tasks have to share the same address space
at the same time, you have to fit the tasks into memory. It's really the
address space that's being partitioned as much as it is the memory. The
concept is not unlike overlays.
But once you have an MMU the whole procedure is pointless. Every task and
every processor mode can have its own contiguous virtual address space.
Nobody needs to share virtual address space with anybody else. And the
mapping to physical memory is (for practical purposes) completely arbitrary
and easily changed on the fly. What's gained by partitioning physical
memory into fixed chunks that are allocated only to specific uses and can't
be shared?
Actually I thought RSX worked the way you described for M+ -> everything
ran in a "GEN" partition, and the system just made as many GEN partitions as
it wanted. Only if you were unmapped did you care which partition something
ran in.
Good idea. But there are SCSI controllers for the Q-bus, as well as the
KDA50...
I have both of those, but I'm not going to waste 'em on a 11/23! :-)
Actually that's a pretty realistic attitude back in the day - these are
more expensive mass storage devices and anybody who could have afforded them
probably would have bought a faster CPU too.
Bob
On 2012-07-08 23:21, Mark Benson wrote:
On 8 Jul 2012, at 22:00, Dave McGuire wrote:
Oh MAN! A manual beside you is REQUIRED! ;)
I first did an 11M v4.1 SYSGEN when I was about 17. I was fortunate
to have a friend at work who gave me lots of advice, but as he was at
work and my 11/34 was at home, the "question/answer latency" was very
high! It took me a few nights and a few tries to get it right, but in
the end those scripts were very well-written and everything worked out fine.
I've done a few RSX-11M+ 4.2 SYSGENs in the last year on SimH. I am pretty good at them now; I can *almost* roll one for my default PDP-11 setup (11/73 with dual RD54s) from memory.
I found this invaluable though: http://9track.net/pdp11/rsx4_sysgen
And for doing the accompanying DECnet NETGEN: http://9track.net/pdp11/decnet4_netgen
Like Dave, I had to redo it a few times the first time through to get it right, but once you do it's quite satisfying (even in an emulator).
My big issue is *redoing* SYSGENs: I can't work out how to do a subsequent re-SYSGEN after the original one in 11M+ to change the hardware config, compiled drivers, etc. It doesn't seem to want to do an auto-config. Back to the endless reams of PDF files on my Kindle I suppose :D
Believe me. An 11M SYSGEN is *nothing* like M+. Even knowledge and understanding of M+ SYSGEN is of basically no help if you ever try an 11M SYSGEN.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2012-07-08 18:54, Bob Armstrong wrote:
Yes, 4MB is plenty of memory. But normal memory is often not the big issue
in RSX,
Yep, that's what killed the PDP-11. Huge physical address space; tiny
virtual address space. We used to have to go thru all kinds of contortions
(overlays and crap) to fit programs on the -11, even when there were loads
of physical RAM available.
Yep.
Thanks for the tip on pool, though.
Just out of curiosity, does RSX use supervisor mode on the processors that
have it (typically that goes together with I&D space)? 2BSD uses super mode
just to get extra address space for the networking code.
11M+ can use supervisor mode. You can have shared libraries in supervisor space. You can place code or data in supervisor space; you can do just about anything with it. It's just more address space for your program. The kernel does not really use it itself. Not much to gain that way. RSX already has networking, as well as most everything else, outside of the "kernel". Think of RSX as a microkernel, and you get close.
Network runs as a process. File system runs as a process. ANSI magtapes? That's its own process as well.
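For example, mapping the FCS resident library in supervisor space from TKB looks roughly like this. This is from memory, so check the M+ Task Builder manual for the exact option syntax:
>TKB
TKB>MYTASK/MM=MYTASK
TKB>/
Enter Options:
TKB>SUPLIB=FCSRES:SV     ; map FCSRES as a supervisor-mode library
TKB>//
That way FCSRES no longer takes up user-mode address space in the task, which is the whole point of the extra address space.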
Also, don't forget to have the options included to allow for networking in
the kernel. :-)
Umm, Sure.... I trust that those will be obvious when I see them :-)
You should. :-)
your requirements on "user friendly" are pretty high.
Well, you can boot the same VMS distro on any VAX ever made and it pretty
much just figures it out :-) Just kidding, though - I'm not looking to
start an argument about which OS is better.
:-)
Lucky you. Or else I'd start in on the difference between user friendly and assuming the user is dumb. :-)
DECnet on 11M also means that you need to understand partitions,
Partitions? You mean memory partitions? On a processor with an MMU?? I
thought that was pretty much all dynamic on M and M+, and only unmapped
systems (like 11S) had to worry about that. Ok, you recently pointed out
that M can run on unmapped systems too, but the 11/23+ has a perfectly nice
22bit MMU and that's not an issue here.
Yes, partitions as in memory partitions. Yes, on a processor with an MMU. I don't know if you understand the concept of memory partitions in RSX properly. It doesn't really have anything to do with the MMU.
All memory in RSX belongs to one partition or another. Processes (or rather tasks) always compete for memory resources within a partition. Anything in another partition is just unrelated.
If a process wants to be guaranteed to get memory when needed, you create a separate memory partition, where it doesn't need to compete for the memory with anyone else.
If there isn't enough free memory in a partition when a task wants to run, it will not be scheduled. Possibly other regions of memory will be swapped out from that partition to make space for your needs. That is decided by the swap priority.
In M+, tasks can allocate memory and then lock it in place. Since 11M doesn't have all those features of M+, you instead solve the same problem by using separate partitions. But that becomes extra work for you, since you need to create those partitions and make sure they are large enough to hold the memory regions that the tasks need.
You have memory partitions whether you have an MMU or not. And you have them in both 11M and M+. However, in most cases, you have rather few partitions in M+, since you can just let most memory belong to the GEN partition, and allow all tasks to share from there. Special cases like DECnet create their regions when they start, and then lock them in memory, so that they can't get swapped out. Thus DECnet is happy.
Looking at PONDUS::, for example (my home machine), I have the following partitions:
SECPOL 117734 00200400 01000000 SEC POOL
SYSPAR 117670 01200400 00205600 MAIN
DRVPAR 116334 01406200 00146500 MAIN
GEN 113134 01554700 16203100 MAIN
and those are all I have. And that is pretty normal. SECPOL is the secondary pool (used to offload the normal POOL even more). SYSPAR holds some extensions to the kernel, and some very important special tasks that you never want to be affected by other tasks' behavior.
DRVPAR holds pretty much all device drivers that are SYSGENned in, while GEN holds everything else: tasks, shared regions, shared libraries, later-loaded device drivers, as well as most everything used by DECnet.
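That listing, by the way, is just what you get from the MCR partition display, so if you want to look at your own system it's simply:
>PAR
The columns are, if I remember right, the partition name, its PCB address, its base and size (octal bytes), and the partition type.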
You know that an RQDX and RD32 are bog slow?
Yes, but there are not a lot of other options on the 11/23+. There's an
RL02 drive, but it's not clear that's actually faster; it's also lots
smaller (in Mb, that is, not in cubic feet!) and probably not as reliable
(although that last one is arguable). Besides, I'd rather keep the RL02 as
removable media anyway.
Good idea. But there are SCSI controllers for the Q-bus, as well as the KDA50...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 8 Jul 2012, at 22:00, Dave McGuire wrote:
Oh MAN! A manual beside you is REQUIRED! ;)
I first did an 11M v4.1 SYSGEN when I was about 17. I was fortunate
to have a friend at work who gave me lots of advice, but as he was at
work and my 11/34 was at home, the "question/answer latency" was very
high! It took me a few nights and a few tries to get it right, but in
the end those scripts were very well-written and everything worked out fine.
I've done a few RSX-11M+ 4.2 SYSGENs in the last year on SimH. I am pretty good at them now; I can *almost* roll one for my default PDP-11 setup (11/73 with dual RD54s) from memory.
I found this invaluable though: http://9track.net/pdp11/rsx4_sysgen
And for doing the accompanying DECnet NETGEN: http://9track.net/pdp11/decnet4_netgen
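For what it's worth, the SimH side of that 11/73 / dual-RD54 setup is roughly this (the disk image file names are placeholders, and the memory size is just what I happen to use):
set cpu 11/73
set cpu 4m               ; full 22-bit memory
set rq0 rd54
attach rq0 rsx11mp.dsk   ; system disk (placeholder file name)
set rq1 rd54
attach rq1 user.dsk      ; second RD54 (placeholder file name)
set xq enabled           ; DELQA Ethernet (still needs attaching to a host interface for HECnet)
boot rq0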
Like Dave, I had to redo it a few times the first time through to get it right, but once you do it's quite satisfying (even in an emulator).
My big issue is *redoing* SYSGENs: I can't work out how to do a subsequent re-SYSGEN after the original one in 11M+ to change the hardware config, compiled drivers, etc. It doesn't seem to want to do an auto-config. Back to the endless reams of PDF files on my Kindle I suppose :D
--
Mark Benson
http://DECtec.info
Twitter: @DECtecInfo
HECnet: STAR69::MARK
Online Resource & Mailing List for DEC Enthusiasts.
On 07/08/2012 12:30 PM, Johnny Billquist wrote:
Either you have never done an 11M SYSGEN, or else your requirements on
"user friendly" are pretty high. The M+ SYSGEN gives you help and
information at all times, gives sane questions that are understandable,
and can be answered in a pretty straightforward way.
11M SYSGEN on the other hand does not give much information, asks that
you provide whole sequences of magic as responses sometimes, and is
pretty much arcane. A manual beside you is recommended. :-)
Oh MAN! A manual beside you is REQUIRED! ;)
I first did an 11M v4.1 SYSGEN when I was about 17. I was fortunate
to have a friend at work who gave me lots of advice, but as he was at
work and my 11/34 was at home, the "question/answer latency" was very
high! It took me a few nights and a few tries to get it right, but in
the end those scripts were very well-written and everything worked out fine.
This was in 1986, the band Genesis was touring, and I went to see
them. I came into work the next day wearing the T-shirt I got at the
concert, and my friend Richard (the guy helping me along with the
SYSGEN) told me that "Genesis" should've been spelled "GEN-A-SYS"!
I was intent on doing a SYSGEN because the installation I had on the
machine had no printer support, and I was given an LA180 with an LS11
controller and was dying to get it running.
(I have a weird duality about printers and books...I firmly believe
that paper has been obsolete for decades, but I've got a library of
1000+ books here and a HUGE printer fetish...I have no explanation!)
Eventually I did get it running and was happily printing out my
Macro-11 and Swedish Pascal program listings all day long. :-) The
LA180 is a unidirectional printer, so its characteristic noise is
"aaaWEEEEE! aaaWEEE! aaaWEEE!" That sound on my current LA180 always
brings back great memories.
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA
On 07/08/2012 11:54 AM, Göran Åhling wrote:
Just to "point on the nitty gritty details"
To my knowledge; M+ will run on 11/23+ but not upon 11/23 ! (22 bit
adressing required).
This is not quite true! Most KDF11-A (dual-wide, non-"+") 11/23 CPUs
DO in fact support 22-bit addressing.
It is a common misconception that all dual-wide 11/23 CPUs only have
18-bit addressing. It is only the early revs of the dual-wide board,
those prior to Rev C, that are limited to 18-bit addressing. The 11/23 was a
very popular system in its day; DEC made a LOT of them...the vast
majority of the boards you'll see in the field are Rev C or later.
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA