Well, there used to be big plans for X.400 to become the global messaging standard...
Kari
On 10.7.2012 23:03, Sampsa Laine wrote:
X.400? Well, that's kinda amusing in and of itself, no?
Sampsa
On 10 Jul 2012, at 22:59, Dennis Boone wrote:
One thought occurred to me today: there used to be the MAILbus and
Message Router products running on VMS. They could be used to route
messages between different mail systems, and they were used at DEC until
about the late '90s. The addresses used the format user.name(a)*.mts.dec.com.
Unfortunately I don't remember much about them anymore, but I thought
that they could be used as a message gateway between HECnet and the
Internet. If I remember correctly, the VMS Hobbyist licenses included
them.
The packages are in at least the June '04 SPL:
MAILbus 400 Application Program Interface for OpenVMS VAX    2.0C   04RAA   SSB   4   [MTAC020]
MAILbus 400 Message Transfer Agent for OpenVMS VAX           3.0    04QAA   SSB   4   [MTA030]
MAILbus 400 Message Router Gateway for OpenVMS               1.2C   342AA   SSB   4   [XMRC012]
and my PAKs from the hobbyist program do include these.
De
On 10.7.2012 23:22, Marc Chametzky wrote:
One thought occurred to me today: there used to be the MAILbus and
Message Router products running on VMS. They could be used to route
messages between different mail systems, and they were used at DEC until
about the late '90s. The addresses used the format user.name(a)*.mts.dec.com.
Unfortunately I don't remember much about them anymore, but I thought
that they could be used as a message gateway between HECnet and the
Internet. If I remember correctly, the VMS Hobbyist licenses included
them.
If someone has recent experience with them, perhaps they could give more
exact details about the feasibility.
In my mind, using PMDF would be a better choice for SMTP<->DECnet
connectivity. I believe that it's available from Process Software
through their hobbyist program.
Of course, I'm one of the former PMDF developers, so I'm biased. :-)
--Marc
I understand your point well. :)
You definitely know better because of your background; I know PMDF even less well than MR & MB-400.
For me, any choice that works well is fine. Of course, someone has to take responsibility for administering the gateway system.
I just happen to have most of the VMS SPLs and the documentation. I can also dedicate a VAX or Alpha for the purpose. If needed, I could also take care of the administration after I've refreshed my knowledge of MR & MB-400.
Let the jury make their verdict. :)
Kari
using PMDF would be a better choice for SMTP<->DECnet connectivity.
I believe that it's available from Process Software through their hobbyist
program.
PMDF is available through the Hobbyist program, as is PMAS (anti-spam
software for VMS).
However, MultiNet already has an SMTP <-> MAIL-11 gateway built in, if that's
all you want. No extra software is needed.
Bob
X.400? Well, that's kinda amusing in and of itself, no?
OK, I'll confess: I'd never heard of X.400 until now. Like everything
else in the universe, it's in Wikipedia:
http://en.wikipedia.org/wiki/X.400
"G=Bob;S=Armstrong;O=SpareTimeGizmos;P=SpareTimeGizmos;C=us" ?????
Give me "bob at jfcl.com" any day :-)
Bob
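For comparison, the attribute=value syntax Bob quotes is easy to pull apart mechanically. A toy sketch follows; the derived RFC 822-style address (and the `.example` domain) are illustrative guesses, not any real gateway's mapping:

```python
# Toy illustration: split an X.400 O/R address of the form
# "G=Bob;S=Armstrong;O=SpareTimeGizmos;P=SpareTimeGizmos;C=us"
# into its attributes, then build a rough RFC 822-style equivalent.
# The mapping below is hypothetical, not any real MTA's behavior.

def parse_or_address(addr):
    """Split an X.400 attribute list into a {key: value} dict."""
    return dict(pair.split("=", 1) for pair in addr.split(";") if pair)

fields = parse_or_address("G=Bob;S=Armstrong;O=SpareTimeGizmos;P=SpareTimeGizmos;C=us")
# G = given name, S = surname, O = organization, P = PRMD, C = country
rfc822 = "{}.{}@{}.example".format(
    fields["G"].lower(), fields["S"].lower(), fields["O"].lower())
print(rfc822)   # bob.armstrong@sparetimegizmos.example
```

It is easy to see which form people preferred to type.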
On 27.6.2012 15:07, Peter Coghlan wrote:
My original plan was to make the domain available for all to receive email with either:
<username>@hecnet.org or
<username>@<DECnet-nodename>.hecnet.org (much easier to do BTW)
I'm running a PMDF mailserver on VMS which can gateway mail from the internet
to DECnet Mail-11. I could set it up to do this if there is interest.
The web address was (is) an after-thought.
I'm also running the OSU webserver but it looks like web hosting might be
addressed by others with more bandwidth. (I'm not great at providing content
either!).
Regards,
Peter Coghlan.
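A minimal sketch of the address mapping such a gateway might perform, assuming the <username>@<DECnet-nodename>.hecnet.org form described above; the rewrite rule is a guess for illustration, not how PMDF actually implements it:

```python
# Illustrative only: map an internet-style address of the form
# user@node.hecnet.org to the DECnet Mail-11 form NODE::USER.
# The domain is from the thread; the mapping logic is a guess.

def to_mail11(address, gateway_domain="hecnet.org"):
    local, _, domain = address.partition("@")
    suffix = "." + gateway_domain
    if domain.endswith(suffix):
        node = domain[: -len(suffix)]
        return "%s::%s" % (node.upper(), local.upper())
    return None   # not an address this gateway handles

print(to_mail11("kari@mynode.hecnet.org"))   # MYNODE::KARI
print(to_mail11("kari@example.com"))         # None
```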
On 2012-07-09 03:21, Paul_Koning at Dell.com wrote:
On Jul 8, 2012, at 9:01 PM, Bob Armstrong wrote:
I don't know if you understand the concept of memory partitions in RSX
properly.
No, I guess not. The usefulness of partitions in an unmapped system is
easy to see - since tasks have to be linked (er, "TKBed") to run at a
specific virtual address and all tasks have to share the same address space
at the same time, you have to fit the tasks into memory. It's really the
address space that's being partitioned as much as it is the memory. The
concept is not unlike overlays.
But once you have an MMU the whole procedure is pointless. Every task and
every processor mode can have its own contiguous virtual address space.
Nobody needs to share virtual address space with anybody else. And the
mapping to physical memory is (for practical purposes) completely arbitrary
and easily changed on the fly. What's gained by partitioning physical
memory into fixed chunks that are allocated only to specific uses and can't
be shared?
The point that I know of (having seen it at work in Typeset-11, which is based on RSX-11D or IAS) is to have some critical tasks be guaranteed memory, while others contend with each other (but not with the critical tasks). That simple example results in two partitions: a specific one and a general one.
Right. Which is why DECnet-11M uses a bunch of partitions of its own. In M+ this was solved by actually being able to lock regions in memory.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
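The two-partition arrangement described above (guaranteed memory for critical tasks, contention in a general pool) can be sketched as a toy allocator. The class, sizes, and task names below are made up purely for illustration and say nothing about real RSX internals:

```python
# Toy model of the two-partition idea: critical tasks draw from a
# dedicated partition whose memory is guaranteed, while everything
# else contends for the general partition.

class Partition:
    def __init__(self, name, size):
        self.name, self.free = name, size

    def allocate(self, task, size):
        if size > self.free:
            return False          # contention: no room in this partition
        self.free -= size
        return True

critical = Partition("CRITICAL", 16)   # reserved for must-run tasks
general = Partition("GENPAR", 32)      # everyone else fights over this

assert critical.allocate("TYPESET", 12)    # guaranteed space
assert general.allocate("EDITOR", 20)
assert not general.allocate("SORT", 20)    # general pool exhausted...
assert critical.free == 4                  # ...but critical space untouched
```

The critical tasks can never be squeezed out by general-pool activity, which is exactly the guarantee the fixed split buys at the cost of flexibility.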