Gents,
It's been a while since I sent out the T1.1 beta 2. I had collected a number of cleanups and bugfixes as well as some doc updates. I don't know how many are using this release; I see one in the mapper but there might be more for all I know.
Earlier today I committed those pending improvements and created a new "T1.1 third beta" kit. You can get that code from the Subversion server:
svn://akdesign.dyndns.org/pydecnet/branches/t1.1/pydecnet
or alternatively from the Downloads link on the mapper website:
http://akdesign.dyndns.org:8080/resources/public/index.html
I'd like to do the V1.1 release reasonably soon, so I'd appreciate any feedback from people who have tried the beta (the earlier version or this one).
paul
Hey folks. This is a bit OT. I've been attempting to build APL-11
for a machine with neither FPP nor FIS, and have hit a wall. I get
multiple "Z" errors from the assembler, which is flagging instructions
whose behaviors differ between PDP-11 models.
Has anyone built this from source for something without FPP/FIS?
This is driving me nuts.
Thanks,
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA
Area 60 is back online, things have cooled back down, and the max temp for the next 10 days is only supposed to be about 80F.
The bad news is, I lost a Hard Drive. The good news is, it was the drive with my backups.
Zane
Hail all you RSTS buffs!
One of my nodes (PIRSTS) is running RSTS V10.1L and DECnet V4.1
Ever since I have been on HECnet, I have observed that job slots fill up over
time with jobs under the DECnet account [29,206] in HB state, up to the
point that the job max is reached, and I cannot log in anymore.
My working hypothesis is that the polling processes (for HECnet mapping and
other inquiring minds - you know who you are) keep creating new jobs,
instead of reusing old ones.
Is there a way to tell DECnet/E that it should not keep jobs in HB state,
but log them out after use? Or reuse existing jobs on incoming connection
requests? On DECnet/VMS there are timer and other logicals that steer this
behavior.
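(On VMS, the logical I have in mind is NETSERVER$TIMEOUT, a delta time after
which an idle network server process logs itself out, e.g.
$ DEFINE/SYSTEM NETSERVER$TIMEOUT "0 00:05:00", if I remember the details
correctly; what I'm after is the DECnet/E equivalent.)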
Running a daily kill job seems, well, overkill.
Thanks,
Wilm
I've discovered that three of my four RA8x drives have failed due to bad
tachometer optical sensors. Apparently the material these guys were potted
with decades ago gradually turns opaque and, being as they're "optical",
that's Really Bad. Has anybody else had this issue? Anybody found a
replacement for them? It's just an IR LED and a phototransistor in a fancy
plastic housing, and those are still pretty common devices these days. I just
need one that's mechanically compatible with the RA8x HDA. A modern
production replacement seems like a better plan than a new old stock
replacement since any old production ones are just as likely to be bad.
Bob
Is anyone who's familiar with configuring pyDECnet available on a
communication system that's less async than email? I've got Matrix, IRC,
Discord, and Slack as well as WhatsApp and Signal available to me.
Thanks!
-brian
I thought I'd share an example of a non-optimal setup for people, so
that you can understand a little better what we currently have.
This is from area 1 to area 34. More specifically ANKE to area 34.
Now, physically, ANKE is in Stockholm, Sweden, while A34RTR is in
Båtbyggartorp, Sweden. They are actually not that far from each other,
physically, if you look on a map. Maybe 40 kilometers at the most.
However, in HECnet, it is 3 hops, and a cost of 20.
Now, when ANKE wants to talk to area 34, the next hops are:
PYTHON - New Boston, NH, USA (cost 8)
IMPRTR - Washington DC, USA (cost 4)
and then I *think* it must be A34RTR, since that should be the final
hop, but since both IMPRTR and A34RTR are Cisco boxes, I can't check
directly. That last hop would then have to cost 8, since the total is 20
and the first two hops account for 8 + 4 = 12.
But clearly, such a roundabout way to talk to such a close node is
kind of silly. :-) We should have reasonable links, in reasonable
directions, and with appropriate costs, so that we don't have things
like this. No good reason to. It's not like we need to pay money to have
physical cables installed between places.
(This must have been such fun work back in the day when you needed to
actually pay for the physical cables...)
If the link to PYTHON went down, the alternative route would be through
A39RTR (9), PYRTR (2), IMPRTR (4) and then A34RTR, with the costs in
parentheses; assuming the same final hop cost of 8, that path adds up to
23. (A cost of 2 between A39RTR and PYRTR seems rather cheap, but what
do I know?)
A few suggestions on how to look at things:
If you have a machine that talks NICE, whether VMS, RSX or PyDECnet, you
can examine what the next hop towards an area is. Here are some examples
on MIM:
.ncp tell anke sho area 34 stat
Area status as of 10-SEP-22 15:34:49
Next
Area State Cost Hops Circuit Node
34 Reachable 20 3 DMC-15 41.1 (PYTHON)
.ncp tell anke sho cir dmc-15 cha
Circuit characteristics as of 10-SEP-22 15:36:02
Circuit = DMC-15
Level one cost = 8
Hello timer = 60, Listen timer = 630
This can then be repeated for node PYTHON and so on. But as noted, when
you get to a Cisco box, you can't do this. Cisco boxes do not speak NICE.
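(Concretely, following the trail just means aiming the same commands at the
next hop, e.g. .ncp tell python sho area 34 stat and then a show of whatever
circuit that reports, until you run into a Cisco.)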
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I just had a conversation with one of the operators of a HECnet L2 router who has demoted it to L1 because of connectivity issues to other areas. That prompted some thinking about the DECnet requirements for fault tolerance.
The basic principle of DECnet Phase IV is that L1 routing (within an area) involves only routers of that area, and L2 routing (across areas) involves only L2 routers. Phase V changes that to some extent, but HECnet is not Phase V and isn't likely to be. :-) In addition, L1 routers send out of area traffic to some L2 router in their area, but without any awareness of the L2 topology.
This has several consequences:
1. If some of the L2 routers in your area can't see the destination area but others can, you may not be able to communicate even though it would seem that there is a way to get there from here.
2. If your area is split, i.e., some of its L2 routers can see one subset of the nodes in the area and other L2 routers can see a different subset, then out of area traffic inbound to that area may not reach its destination -- if it enters at the "wrong" L2 entry point.
I believe the issue I mentioned at the top was #1: one of the L2 routers went down and the remaining L2 routers of that area ended up at two sides of a partitioned L2 network.
Obviously HECnet isn't a production network, but still it would be nice for it to be tolerant of outages. Especially since we can insert additional routers easily with PyDECnet or Robert Jarratt's C router. The HECnet map can be set to show just the L2 network (using the layers menu, accessible via the layers icon in the top right corner of the map). It's easy to see a number of L2 routers that have only one connection to the rest of HECnet. It's also clear that a large fraction of the connectivity is via Sweden, which certainly is a fine option but it's a bit odd for a node in, say, western Canada to have only that one connection and none to nodes much closer to it.
The map display doesn't give a visual clue about singly-connected area routers for which there is no location information in the database (the ones plotted at Inaccessible Island). The data is there in the map data table; it wouldn't be too hard to do some post-processing on that data to find cases of no redundancy.
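To give a flavor of what that post-processing might look like, here is a rough
Python sketch. The input format is invented purely for illustration (a JSON
list of L2 adjacencies), so the loading step would need to be adapted to
whatever the map data table actually provides:

    import json
    from collections import defaultdict

    # Hypothetical export of the map data table: a list of L2 adjacencies,
    # each something like {"from": "PYTHON", "to": "MIM"}.
    with open("hecnet-l2-links.json") as f:
        links = json.load(f)

    # Count the distinct L2 neighbors of each router.
    neighbors = defaultdict(set)
    for link in links:
        neighbors[link["from"]].add(link["to"])
        neighbors[link["to"]].add(link["from"])

    # Any L2 router with a single neighbor has no redundancy at all.
    for node, peers in sorted(neighbors.items()):
        if len(peers) < 2:
            print(f"{node}: only connected via {', '.join(sorted(peers))}")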
I'm curious if people would be interested in trying to make HECnet more fault tolerant. My router (PYTHON) can definitely help, especially for North American nodes, and I'm sure there are a number of others that feel the same.
paul
I think I already sent this to all the people who are directly connected
to A2RTR, but just in case I missed anybody, here it is for general
consumption -
I have moved A2RTR to an Amazon cloud server. The new IP is now
35.82.76.235, although I strongly recommend that you use the FQDN
decnet.jfcl.com instead. I've already updated the latter to point to the
new IP.
Those of you with passive (listen) connections on your end who aren't
checking the source IP don't actually need to do anything. The new A2RTR
will just connect to you as before and you won't notice a difference.
Those who have active connections to A2RTR, or who have some kind of
source IP based filtering in place, will need to update the IP for A2RTR.
Once again, I really suggest that you use decnet.jfcl.com if at all
possible, but if not then the new IP is 35.82.76.235.
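(For those of you running pyDECnet over, say, a Multinet circuit, the change
should just be a matter of editing the address in the circuit definition,
something along the lines of
circuit mul-2 Multinet decnet.jfcl.com:700 --cost 4
where the circuit name, port number and cost are placeholders for whatever you
already have configured.)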
Bob
After I put the changes into Tops-20 Kermit to recognize Ultrix and
properly check the protocol, I finally got around to testing it against a few
hosts on HECnet. As I had dimly recalled from the late 1980's, Ultrix
supports Tops-10/Tops-20 NRT, viz:
Kermit-20>c nofar/st
[Remote system *NOFAR*:: is running _ULTRIX-32_]
[KERMIT-20: Connecting to DECnet node NOFAR::]
and also:
Kermit-20>c ostara/stay
[Remote system *OSTARA*:: is running _ULTRIX-32_]
[KERMIT-20: Connecting to DECnet node OSTARA::]
Sure made my day!
When I finish wringing some more bugs out of Kermit-20 (I'm redoing the
parity routines to use string instructions), I'll see about testing
against Ultrix in addition to Tops-10 and older versions of Tops-20
Kermit (I've got them going back to the 1980s).
SETHOST currently cannot connect to these hosts; it flags their
configuration messages as incorrect. That's probably an easy enough fix.
I recently added HDLC framing support to the DUP11 and DPV11 devices in
open_simh. So far this has only been tested with DECnet-Plus on VMS.
I wrote up another couple of examples, showing how to use HDLC as a DECnet
datalink, and also how to try a basic configuration of VAX P.S.I. for X.25
communication.
They're here: https://notlikethatlikethis.blogspot.com/
Paul (or anybody) - has the -daemon (aka "-d") option been removed from
pyDECnet v596? It's still listed in the doc files, but running it gives
pydecnet: error: unrecognized arguments: --daemon
Bob
In my changes to Kermit-20 and SETHOST, I check the configuration byte
to make sure that I'm communicating with either a Tops-10 or Tops-20
system. If it's not one of those, then I want to give a nifty,
informative error message, such as:
Kermit-20>connect APOLLO::
?VMS type systems do not support Tops-10/20 NRT communications.
Kermit-20>connect MIM::
?RSX-11M type systems do not support Tops-10/20 NRT communications.
Kermit-20>connect TRON::
?RSTS/E type systems do not support Tops-10/20 NRT communications.
Pretty nifty. I do this by doing a lookup into a handy table, indexed
by the OS type, viz:
hsttyp: eascii <RSTS>           ;^d0
        eascii <RT-11>          ;^d1
        eascii <RSTS/E>         ;^d2
        eascii <RSX-11S>        ;^d3
        eascii <RSX-11M>        ;^d4
        eascii <RSX-11D>        ;^d5
        eascii <IAS>            ;^d6
        eascii <VMS>            ;^d7
        eascii <TOPS-20>        ;^d8 (TOPS20)
        eascii <TOPS-10>        ;^d9 (TOPS10)
        eascii <RTS-8>          ;^d10
        eascii <OS-8>           ;^d11 (!!)
        eascii <RSX-11M+>       ;^d12
        eascii <MCB>            ;^d13 (the DN20!!)
hsttyn=.-hsttyp-1               ; Number of defined operating system types
So if the number I get is outside of this range, I give an unknown
error, such as:
Kermit-20>connect ZITI::
?Remote system sent an illegal configuration message
ZITI:: shows as a Linux system. Does anyone know what its configuration
byte would be? How about Windows? Ultrix? Any others?
I don't recall whether we used NRT or CTERM to get into our Ultrix
machine (it was an 8650, subsequently upgraded to an 8700). I think we
used TCP/IP TELNET.
Hey Johnny, I installed DECmail-11 on RSTS the other evening, and I'm
noticing some undesirable behavior with the mim.stupi.net email gateway
when sending out from inside HECnet.
Firstly, it's worth mentioning that gmail won't accept a recipient address
with colons in it, unless you put double quotes around it.
For example:
MARDUK::PHIBER@mim.stupi.net
...won't be accepted as a recipient, and the message is immediately
red-flagged in gmail.
However, if you do the following:
"MARDUK::PHIBER"@mim.stupi.net
...gmail accepts and delivers it, MIM relays it correctly, and I receive
the email in RSTS. Nice.
Unfortunately, the reverse is failing. If I respond or send mail out from
DECmail, mim.stupi.net's sender address rules reject the message:
This is Postmaster <MIM::POSTMASTER> at MIM::.
I'm sorry, but I could not deliver your mail.
An error occured while trying to send it, and I cannot recover.
Orignal recipient was "PHIBER(a)PHIBER.COM"
Actual error is: Fatal address error.
Additional information:
5.1.7 The sender address <MARDUK::PHIBER@mim.stupi.net> is not a valid
5.1.7 RFC-5321 address. h19-20020a05651c125300b0025e725ef592si33185ljh.300 - gsmtp
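(My guess is that the envelope sender just needs the same quoting gmail wants
on the inbound side, i.e. "MARDUK::PHIBER"@mim.stupi.net with the quotes, since
RFC 5321 only allows colons inside a quoted local part; but that quoting would
have to be applied on MIM's side when it relays the message.)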
Please let us know if you can correct the sender address restrictions on
mim. I tried with quotes, but that didn't help either. :)
Thanks,
Mark
Hi Jordi, I've been using your BITXOZ as my DECdns HECNET nodename
resolution server for OpenVMS for at least eight years (since you put it
up), but it seems that area 7 went offline fairly recently. Is everything
ok?
Regards,
-Mark
On Sat, Apr 26, 2014 at 2:56 PM Jordi Guillaumes i Pons <
jg(a)jordi.guillaumes.name> wrote:
> Hello,
>
> I'm playing again a little bit with DECNET/OSI. Does anybody have the X500
> directory server install kit at hand? The PAKs are included in the hobbyist
> set, but the CSD I have is the 1992 one so I guess there has to be a more
> recent one (for VAX, by the way). The DECDns stuff changed from 6.X to 7.X
> so I'm pretty sure the version I have in my CDs won't work.
>
> By the way, BITXOZ is configured as a DECDns server, and "owns" the
> HECNET: namespace. All HECNET nodes are loaded in the form HECNET:.NAME. I
> toyed previously with a schema like HECNET:.AREAn.NAME but I have ditched
> it. My internet domain (jguillaumes.dyndns.org or jordi.guillaumes.name)
> has also opened the access to BITXOZ via rfc1006, so anyone with a
> DECNET/OSI stack should be able to do a $SET HOST jguillaumes.dyndns.org
> or a $DIR/APP=FTAM jguillaumes.dyndns.org:: and access that system even
> without a working HECNET link.
>
> Jordi Guillaumes i Pons
> jg(a)jordi.guillaumes.name
> HECnet: BITXOV::JGUILLAUMES
>
I mentioned here a few months ago that I'd got DECnet-VAX Phase V working
under simh using the DUP11. In the meantime, I've fixed some simh DUP11
bugs and also added DPV11 support, all of which is available in the latest
version of open-simh on github.
In the unlikely event that anyone else wants to try this out, I've written
a blog post that I hope contains all the relevant information:
https://notlikethatlikethis.blogspot.com/2022/06/decnet-vax-phase-v-wan-con…
Announcing the Open SIMH project
SIMH is a framework and family of computer simulators, initiated by Bob Supnik and continued with contributions (large and small) from many others, with the primary goal of enabling the preservation of knowledge contained in, and providing the ability to execute/experience, old/historic software via simulation of the hardware on which it ran. This goal has been successfully achieved and has over the years created a diverse community of users and developers.
This has mapped to some core operational principles:
First, preserve the ability to run old/historically significant software. This means functionally accurate, sometimes bug-compatible, but not cycle-accurate, simulation.
Second, make it reasonably easy to add new simulators for other hardware while leveraging common functions between the simulators.
Third, exploit the software nature of simulation and make SIMH convenient for debugging a simulated system, by adding non-historical features to the environment.
Fourth, make it convenient for users to explore old system environments, with interfaces as close to the historical ones as practical, by mapping them to features that modern host operating systems provide.
Fifth, be inclusive of people and new technology. It's serious work, but it should be fun.
Previously, we unfortunately never spent the time to codify how we would deliver on these concepts. Rather, we have relied on an informal use of traditional free and open-source principles.
Recently a situation has arisen that compromises some of these principles and thus the entire status of the project, creating consternation among many users and contributors.
For this reason, a number of us have stepped up to create a new organizational structure, which we call "The Open SIMH Project", to be the keeper and provide formal governance for the SIMH ecosystem going forward. While details of the structure and how it operates are likely to be refined over time, what will not change is our commitment to maintaining SIMH as a free and open-source project, licensed under an MIT-style license as shown on the "simh" repository page.
It is our desire that all of the past users and contributors will come to recognize that the new organizational structure is in the best interests of the community at large and that they will join us in it. However, this project, as defined, is where we intend to contribute our expertise and time going forward. At this point, we have in place the following, although we foresee other resources being added in the future as we identify the need and execute against them:
A Github "organization" for the project at https://github.com/open-simh
A Git repository for the simulators themselves at https://github.com/open-simh/simh
The license for the SIMH simulator code base, found in LICENSE.txt in the top level of the "simh" repository.
The "SIMH related tools" in https://github.com/open-simh/simtools. This is also licensed under MIT style or BSD style open source licenses (which are comparable apart from some minor wording differences).
A "SIMH Steering Group" -- project maintainers and guides.
The conventional git style process is used for code contributions, via pull request to the project repository. The Steering Group members have approval authority; this list is likely to change and grow over time.
By formalizing the underlying structure, we can ensure that our operational principles and guidance best benefit the community. These are being developed and formalized, with a plan to publish them soon.
We have used our best judgment in setting up this structure but are open to discussion and consideration of other ideas, and to making improvements. Many of us have been part of different projects and understand that past mistakes are real. We have tried to learn from these experiences and apply the collected wisdom appropriately. We desire to hear from the community as we update and refine the operating structure for the Open SIMH project.
We hope for your patience and look forward to your support as we refine the organization and continue to provide this wonderful resource for anyone to use, while evolving the technology provided by the SIMH system.
The SIMH Steering Group
Clem Cole
Richard Cornwell
Paul Koning
Timothe Litt
Seth Morabito
Bob Supnik
Hi all,
Is there a way to have PDR accept arbitrary TCP Multinet connections?
This works:
circuit mul-123 Multinet 207.123.123.123:15001:listen --cost 8 --t3 120
While this does not:
circuit mul-123 Multinet 0.0.0.0:15001:listen --cost 8 --t3 120
I'm sure I've had it working before...
Problem is, that remote 207 address is dynamic.
Cheers,
iain
I've got some proposed changes to the DUP11 simulation code in simh that
could do with a quick regression test, to make sure they don't break
anything when using RSX as the OS.
I don't know RSX myself, and I haven't managed to turn up any useful info
via google on how to configure a DUP11 with RSX. If anyone has a fairly
simple recipe for getting this working then I could have a go at testing
it.
However, the test required is pretty simple, basically just to confirm that
DECnet still works when using a version of simh that contains my fix. So
ideally, if one of you out there is running this configuration, we could
work together to try it out.
Thanks,
Trevor