One thought occurred to me today: there used to be the Mailbus and Message Router products running on VMS. They could be used to route messages between different mail systems. They were used at DEC until about the late '90s. The addresses used the format user.name@*.mts.dec.com.
Unfortunately I don't remember much about them anymore, but I just thought that they could be used as a message gateway between HECnet and the Internet. If I remember correctly, the VMS Hobbyist licenses included those.
If someone has recent experience with them, they could perhaps give more exact details about the feasibility.
In my mind, using PMDF would be a better choice for SMTP<->DECnet connectivity. I believe that it's available from Process Software through their hobbyist program.
Of course, I'm one of the former PMDF developers, so I'm biased. :-)
--Marc
X.400? Well, that's kinda amusing in and of itself, no?
Ok, I'll confess - I'd never heard of x.400 until now. Like everything
else in the universe, it's in Wikipedia
http://en.wikipedia.org/wiki/X.400
"G=Bob;S=Armstrong;O=SpareTimeGizmos;P=SpareTimeGizmos;C=us" ?????
Give me "bob at jfcl.com" any day :-)
Bob
X.400? Well, that's kinda amusing in and of itself, no?
Sampsa
On 10 Jul 2012, at 22:59, Dennis Boone wrote:
One thought occurred to me today: there used to be the Mailbus and
Message Router products running on VMS. They could be used to route
messages between different mail systems. They were used at DEC until
about the late '90s. The addresses used the format user.name@*.mts.dec.com.
Unfortunately I don't remember much about them anymore, but I just thought
that they could be used as a message gateway between HECnet and the
Internet. If I remember correctly, the VMS Hobbyist licenses included
those.
The packages are in at least the June '04 SPL:
MAILbus 400 Application Program Interface for OpenVMS VAX   2.0C   04RAA   SSB 4   [MTAC020]
MAILbus 400 Message Transfer Agent for OpenVMS VAX          3.0    04QAA   SSB 4   [MTA030]
MAILbus 400 Message Router Gateway for OpenVMS              1.2C   342AA   SSB 4   [XMRC012]
and my PAKs from the hobbyist program do include these.
De
On 27.6.2012 15:07, Peter Coghlan wrote:
My original plan was to make the domain available for all to receive email with either:
<username>@hecnet.org or
<username>@<DECnet-nodename>.hecnet.org (much easier to do BTW)
I'm running a PMDF mailserver on VMS which can gateway mail from the internet
to DECnet Mail-11. I could set it up to do this if there is interest.
The web address was (is) an after-thought.
I'm also running the OSU webserver but it looks like web hosting might be
addressed by others with more bandwidth. (I'm not great at providing content
either!).
Regards,
Peter Coghlan.
One thought occurred to me today: there used to be the Mailbus and Message Router products running on VMS. They could be used to route messages between different mail systems. They were used at DEC until about the late '90s. The addresses used the format user.name@*.mts.dec.com.
Unfortunately I don't remember much about them anymore, but I just thought that they could be used as a message gateway between HECnet and the Internet. If I remember correctly, the VMS Hobbyist licenses included those.
If someone has recent experience with them, they could perhaps give more exact details about the feasibility.
Regards,
Kari
On 2012-07-09 03:21, Paul_Koning at Dell.com wrote:
On Jul 8, 2012, at 9:01 PM, Bob Armstrong wrote:
I don't know if you understand the concept of memory partitions in RSX
properly.
No, I guess not. The usefulness of partitions in an unmapped system is
easy to see - since tasks have to be linked (er, "TKBed") to run at a
specific virtual address and all tasks have to share the same address space
at the same time, you have to fit the tasks into memory. It's really the
address space that's being partitioned as much as it is the memory. The
concept is not unlike overlays.
But once you have an MMU the whole procedure is pointless. Every task and
every processor mode can have its own contiguous virtual address space.
Nobody needs to share virtual address space with anybody else. And the
mapping to physical memory is (for practical purposes) completely arbitrary
and easily changed on the fly. What's gained by partitioning physical
memory into fixed chunks that are allocated only to specific uses and can't
be shared?
The point that I know of (having seen it at work in Typeset-11, which is based on RSX-11D or IAS) is to have some critical tasks be guaranteed memory, while others contend with each other (but not with the critical tasks). That simple example results in two partitions, a specific one and a general one.
Right. Which is why DECnet-11M uses a bunch of partitions of its own. In M+ this was solved by actually being able to lock regions in memory.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2012-07-09 03:01, Bob Armstrong wrote:
I don't know if you understand the concept of memory partitions in RSX
properly.
No, I guess not. The usefulness of partitions in an unmapped system is
easy to see - since tasks have to be linked (er, "TKBed") to run at a
specific virtual address and all tasks have to share the same address space
at the same time, you have to fit the tasks into memory. It's really the
address space that's being partitioned as much as it is the memory. The
concept is not unlike overlays.
Well, on an unmapped system, not only do you have to worry about the region, but you actually have to task build your program for a specific address in that region, since you don't have virtual memory.
Once you have an MMU, you (almost) stop worrying about the address offset parts of regions in TKB, but you still have to say which partition it should request the memory in.
Of course, you can always override which region a program tries to run in when you install it. The information specified in TKB is just the default values. Nothing forces you to actually use those values.
This is what the help in RSX says:
.help tkb opt par
PAR=par-name[:base:length]
PAR specifies the partition for which the task is built.
In a mapped system, you can install your task in any system partition
or user partition large enough to contain it. In an unmapped system,
your task is bound to physical memory. Therefore, you must install
your task in a partition starting at the same memory address as that
of the partition for which it was built.
But once you have an MMU the whole procedure is pointless. Every task and
every processor mode can have its own contiguous virtual address space.
Right. Except that contiguous virtual address space does not necessarily map to a contiguous physical address space.
Nobody needs to share virtual address space with anybody else. And the
mapping to physical memory is (for practical purposes) completely arbitrary
and easily changed on the fly. What's gained by partitioning physical
memory into fixed chunks that are allocated only to specific uses and can't
be shared?
Tasks that share the same region compete for the memory resources. Tasks in separate regions do not.
Also, sometimes tasks want to share parts of their memory space with others.
Also, memory partitions are not allocated to a specific user. Memory partitions are managed by the system manager, and are normally set up at boot time (even if this can be handled and modified at any point).
Actually I thought RSX worked the way you described for M+ -> everything
ran in a "GEN" partition, and the system just made as many GEN partitions as
it wanted. Only if you were unmapped did you care which partition something
ran in.
Um... No. There is only one GEN partition; there can only be one partition called GEN. The name of a partition needs to be unique.
A memory partition is more like a meta object. When a task is run, it allocates a region within the partition. Many tasks can run in the GEN partition (for example), but they all have their own regions. Otherwise they would all share the same memory.
And if more tasks try to allocate memory in a region than there is room for, they are constantly swapped in and out by the scheduler, based on their swapping priority.
When you don't have the ability to lock regions in memory, you can constantly be swapped out if some other process in the same partition requests more memory than there is free in that region.
Good idea. But there are SCSI controllers for the Q-bus, as well as the
KDA50...
I have both of those, but I'm not going to waste 'em on a 11/23! :-)
:-)
Actually that's a pretty realistic attitude back in the day - these are
more expensive mass storage devices and anybody who could have afforded them
probably would have bought a faster CPU too.
I know of places that still ran 11/23+ machines in production until at least a couple of years ago. They moved from RQDX2 to SCSI about 6 years ago...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2012-07-08 23:21, Mark Benson wrote:
On 8 Jul 2012, at 22:00, Dave McGuire wrote:
Oh MAN! A manual beside you is REQUIRED! ;)
I first did an 11M v4.1 SYSGEN when I was about 17. I was fortunate
to have a friend at work who gave me lots of advice, but as he was at
work and my 11/34 was at home, the "question/answer latency" was very
high! It took me a few nights and a few tries to get it right, but in
the end those scripts were very well-written and everything worked out fine.
I've done a few RSX-11M+ 4.2 SYSGENs in the last year on SimH. I'm pretty good at them now; for my default PDP-11 setup (an 11/73 with dual RD54s) I can *almost* roll one from memory.
I found this invaluable though: http://9track.net/pdp11/rsx4_sysgen
And for doing the accompanying DECnet NETGEN: http://9track.net/pdp11/decnet4_netgen
Like Dave, I had to redo it a few times at first to get it right, but once you get it right it's quite satisfying (even in an emulator).
My big issue is *redoing* SYSGENs: I can't work out how to do a subsequent re-SYSGEN after the original one in 11M+ to change the hardware config, compiled drivers, etc. It doesn't seem to want to do an auto-config. Back to the endless reams of PDF files on my Kindle I suppose :D
Believe me. An 11M SYSGEN is *nothing* like M+. Even knowledge and understanding of M+ SYSGEN is of basically no help if you ever try an 11M SYSGEN.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol