I've seen and heard various stories about how people update the
nodename databases on their machines, hacking together scripts and
processing files. So I figured I should write a short mail about the
topic (I should create a web page with this information as well).
The basic point is that people are creating work for themselves that
they really don't need.
Exactly how you update the nodename database on your machine depends on
what OS you are running, but ready-made tools and scripts already exist
for pretty much every scenario. And if you happen to have a system or
need that isn't currently covered, I can easily create one for you as
well.
But before going into the solutions, let me explain a bit about the
source of the data here.
DECnet phase IV does not have a centralized nodename system like DNS.
Each node in the DECnet network has its own nodename database, and every
machine can have its own name for another machine, independent of what
that other machine thinks its own nodename is.
However, in order to make it easier for multiple people and machines to
talk, it helps if everyone has a somewhat similar database. And this is
where the nodename database on MIM comes in. The nodename database that
I have on MIM is not the regular DECnet nodename database. Instead I'm
using DATATRIEVE to maintain a nodename database, which contains more
information than just the number and name. It contains the owner,
information about the software and hardware of the node, the location,
and when things were updated. This database is what is queried when
someone goes to http://mim.stupi.net/nodedb , and that page is generated
by just making queries in DATATRIEVE. If someone has a host with
DATATRIEVE on it, it is even possible to remotely access this DATATRIEVE
database over DECnet (you'll only have read-only access).
I have been considering adding a web interface so that people could
update their own information remotely, but so far that's been a low
priority. Maybe one day...
From this DATATRIEVE database I can then generate the DECnet nodename
database on MIM. This is actually driven by a simple makefile. Whenever
I run it, it creates a bunch of different files (I'll get to those in a
moment) and detects whether any changes have happened at the DECnet
level. If so, it sends a mail to the people who have requested it,
informing them that the nodename database has been updated and that
they should update the nodename databases on their own machines.
I hope this makes it apparent that creating various files based on the
nodename database is actually very simple. This is exactly what
DATATRIEVE is good at: all of these output files are essentially
reports.
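The generate-compare-notify cycle that the makefile drives can be
sketched in Python. Everything below is illustrative: the NCP command
text, the node data, and the function names are my assumptions, not the
actual makefile or FIX.COM contents.

```python
import hashlib

def generate_fix_com(nodes):
    """Emit a VMS-style NCP script from (address, name) pairs.
    The exact command text in the real FIX.COM is an assumption here."""
    lines = [f"$ MCR NCP SET NODE {addr} NAME {name}" for addr, name in nodes]
    return "\n".join(lines) + "\n"

def has_changed(new_text, old_digest):
    """True when the freshly generated file differs from the stored digest,
    i.e. when subscribers should be mailed about an update."""
    return hashlib.sha256(new_text.encode()).hexdigest() != old_digest

# Hypothetical node data; the real source is the DATATRIEVE database on MIM.
script = generate_fix_com([("1.13", "MIM"), ("1.14", "TOMMYT")])
```

The point of the digest comparison is that mail only goes out when the
DECnet-visible data actually changed, not on every regeneration.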
So - what files do I create today? Well, here is a short list:
FIX.CMD - This is a script file suitable for RSX systems using CFE.
However, it's specially tailored for MIM, so it's not a file I would
recommend anyone else use.
FIX.COM - This is a script for VMS systems using phase IV.
FIX.PHV - This is a script for VMS systems using phase V.
FIX.IMP - This is a script for VMS for anyone using DECdns.
FIX.T20 - This is a script for TOPS-20.
HECNET.PY - This is a definition file for PyDECnet.
FIX.RST - This is a script for RSTS/E.
NODENAMES.DAT - This is just the basic information in a simple output
format from DATATRIEVE. It exists mostly for historical reasons, but I
understand that lots of people actually take this file and then write
code to process, extract, and apply the information in it.
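For anyone writing such processing code, here is a minimal Python
parser sketch. The two-column "address name" layout assumed here is a
guess; check the actual NODENAMES.DAT before relying on it.

```python
def parse_nodenames(text):
    """Parse NODENAMES.DAT-style lines into a name -> address mapping.
    The whitespace-separated "address name" format is an assumption;
    the real DATATRIEVE output may contain more fields per line."""
    nodes = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            # First field assumed to be the DECnet address, second the name.
            nodes[fields[1].upper()] = fields[0]
    return nodes
```

From the resulting mapping you can then emit whatever per-OS commands
your system needs.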
In addition, some systems can directly import nodenames from another
machine on DECnet, meaning you do not have to fetch and run any scripts
at all.
So here are the actions you need to take on each system, in summarized
form:
RSX:
In RSX, there is a tool called NNC which copies definitions from
another node. Copy over MIM::HECNET:NNC.BAT, a batch file that does all
the work of importing the latest definitions from MIM and updating your
local system. All you need to do is "SUBMIT NNC.BAT" and you are done.
VMS:
With phase IV, the node copy capability is built into NCP. All you need
to do is: "NCP COPY KNOWN NODES FROM MIM TO BOTH" and you are done.
With phase V, copy over FIX.PHV and run it, or just directly run it from
MIM like this: "@MIM::HECNET:FIX.PHV"
If you run DECdns, grab FIX.IMP and run it with whatever tool is used
to manage this (sorry that I can't help more; I don't really have any
experience with DECdns).
TOPS-20:
Grab MIM::HECNET:FIX.T20 and run it in the NCP submode of OPR (if I
remember the setup correctly).
RSTS/E:
Grab MIM::HECNET:FIX.RST, and run it with "@FIX.RST".
PyDECnet:
Fetch hecnet.py by doing "wget mim.stupi.net/hecnet.py". Place it where
you have configured PyDECnet to get the nodenames from, and you are
good (I'm not sure whether you need to restart PyDECnet).
Now, if you have some other system with some specific format you need,
just let me know and I'll create such a file as well. It's trivial for
me to do this from DATATRIEVE. If you spot something wrong or bad in a
file created today, let me know and we'll fix it. If you see any errors
or omissions in the information in this mail, let me know and I'll get
it corrected. I will create a web page with this information as well.
If you want to get a mail whenever the nodename database is updated,
just let me know and I'll add you to the list.
And HECnet is slowly growing. Occasionally a completely new person or
site gets connected. Occasionally people add more nodes. The online
presence seems pretty constant. At the moment 19 areas are online. In
area 1, there are currently 19 machines online. Looking at Paul's
HECnet map (http://akdesign.dyndns.org:8080/map), there are machines
online in quite different locations, covering a large part of the
world. I find this cool, and even though there isn't a lot being done,
it's still fun.
Well. Have a nice weekend everyone, and I hope some people find this
information useful.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I've finished the timeout functionality so that the Tops-20 finger
server will only wait so long before punting the connection (don't
worry, I'm very generous...)
I'm now working on the logging functionality. Again, on a connection
interrupt, the finger server grabs a bunch of meta-data about the
connection and displays it:
FNGSRV(1): Sunday, 6-October-2024 12:23:36.52412 AM-EDT
From: VENTI2, User: SLOGIN, Data: STREAM*
Object Type: Generic (FINGER), Segment Size: 1459
Local Flow Control: Segment, Remote Flow Control: Segment
Link Quota: Input Percentage: 50, Buffers: 16, Input Goal: 0
I have three concerns here (or maybe two and a half):
1) I want to write this information to a log file.
2) Since the data is written by concurrent server sub-forks,
   a) data can get overwritten, and
   b) latency to start the finger sub-sub-forks is increased.
The solution is simple enough: I grab everything in the server sub-fork
and ship it up to the controller through shared memory for later
formatting and storage (and maybe printing).
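The single-writer hand-off can be illustrated with a Python
thread-and-queue analogue. Tops-20 forks and shared memory don't map
directly to Python, so threads and a `queue.Queue` stand in for them
here; the metadata fields echo the FNGSRV display above and are
otherwise invented.

```python
import queue
import threading

def server_subfork(worker_id, log_queue):
    """Stand-in for a FNGSRV sub-fork: on a connection interrupt, gather
    the connection metadata and ship it up to the controller instead of
    writing the log file itself (so no data gets overwritten)."""
    meta = {"worker": worker_id, "from": "VENTI2", "user": "SLOGIN",
            "object": "FINGER", "segment_size": 1459}
    log_queue.put(meta)

def controller(n_workers):
    """Stand-in for the controller fork: the single consumer that does
    all formatting and storage, so concurrent writers can't clobber
    each other's records."""
    log_queue = queue.Queue()
    workers = [threading.Thread(target=server_subfork, args=(i, log_queue))
               for i in range(n_workers)]
    for w in workers:
        w.start()
    records = [log_queue.get() for _ in range(n_workers)]
    for w in workers:
        w.join()
    return records
```

Because only the controller touches the log, the sub-forks return to
waiting for connections immediately after the cheap enqueue.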
When reviewing what meta-data can be gotten, I noticed that an explicit
connection confirm (MTOPR% function .MOCC) can take a pointer to an
optional sixteen bytes of data. What I can't quite tell from the
documentation is whether the server reads this data or writes it; I
find myself unsure of the wording.
1. If the former (server sends), how does the client read it?
2. If the latter (server reads), how does the client send it?
I don't see a single instance of any Tops-20 DECnet program using this,
so no help there. I /think/ this is case 1, that is, the server has the
option of writing it. However, I was wondering what specification this
is in (maybe NSP 4?) and where, so I could read it before looking at
the monitor sources to see how the client would read it. I'm assuming
it would show up at the client as optional data?
I thought I would update everyone with where things stand with the
Tops-20 finger server.
Johnny and I agreed that, by default, the response to a DECnet finger
query is a sequence of records, each no longer than 132 bytes (more
typically 90), terminated by a line-feed. If the remote finger client
or operating system can handle larger buffers, then it can connect with
optional data=STREAM. Tops-20 will then dump everything in one giant
record. Only the Tops-20 finger client can handle this, although I
suspect a VAX client might also be able to do it.
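The agreed framing can be sketched as a small Python function. This is
an illustration of the record/STREAM convention described above, not
the actual Tops-20 code; the function name and sample text are made up.

```python
def format_response(text, stream=False, max_record=132):
    """Frame a finger response for the wire.
    Default mode: a sequence of LF-terminated records, each at most
    max_record bytes. STREAM mode (negotiated via optional connect
    data) instead dumps everything as one giant record."""
    if stream:
        return [text]
    records = []
    for line in text.split("\n"):
        # Lines longer than the record limit are split across records.
        while len(line) > max_record:
            records.append(line[:max_record])
            line = line[max_record:]
        records.append(line)
    return records
```

A client that did not negotiate STREAM only ever has to buffer 132
bytes at a time, which is why that is the safe default.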
I had been making steady progress until I started stress testing it,
meaning having a session active on MIM::, TOMMYT::, and VENTI2::
(local), each with a finger command ready to go, and then pushing
carriage return on all three as simultaneously as possible.
This started generating errors on MIM:: that the "object wasn't
available". What was going on was that the structure of the Tops-20
finger server really wasn't architected for real-time response to that
much curiosity. It only opens a single SRV:117 (finger) object at a
time, waits for a connection, reads the data, hands it off to an idle
finger client fork via a pipe, and then gets a new SRV:117 object.
In other words, it isn't until the finger client is started with the
connection redirected to DECnet that the server is ready to accept
another connection. That has what I would consider to be noticeable
latency, particularly on failure. An error doesn't really matter for
SMTP, as it is a background task and just tries again later. An
interactive finger, on the other hand, has a user sitting there waiting
for a response, so it seemed to me that this wasn't really going to cut it.
I took the model of the Tops-20 FAL server, which has a single control
fork looking for illness in sub-forks, all of which open their own FAL
server object. The new finger controller now starts separate FNGSRV
sub-server forks, creating a FINGER sub-fork for each, gets all the
communications lined up, and starts all the FNGSRV sub-forks listening
for connections.
This has the advantage of not clobbering the system on FNGSRV startup,
because resources are acquired and created sequentially; after that,
there isn't much for a FNGSRV sub-fork to do except wait for a
connection and manage its own single FINGER sub-fork.
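A minimal sketch of this pre-opened-pool design, again using Python
threads as a stand-in for FNGSRV sub-forks: each blocking `get` plays
the role of a sub-fork waiting on its own pre-opened SRV:117 object,
so several simultaneous requests can be accepted without waiting for a
serial accept loop to come back around. Class and method names are
invented.

```python
import queue
import threading

class ListenerPool:
    """Thread/queue analogue of the FAL-style design: N workers are set
    up front, each permanently holding its own 'server object', so up
    to N connections can be in progress at once."""

    def __init__(self, n_workers):
        self.requests = queue.Queue()
        self.handled = []
        self._lock = threading.Lock()
        self._workers = [threading.Thread(target=self._serve, daemon=True)
                         for _ in range(n_workers)]
        for w in self._workers:
            w.start()

    def _serve(self):
        while True:
            user = self.requests.get()     # sub-fork waits for a connection
            if user is None:               # shutdown sentinel
                break
            with self._lock:
                self.handled.append(user)  # hand off to its FINGER sub-fork
            self.requests.task_done()

    def finger(self, user):
        """An incoming connection; any idle worker picks it up."""
        self.requests.put(user)

    def shutdown(self):
        for _ in self._workers:
            self.requests.put(None)
        for w in self._workers:
            w.join()
```

With three workers, three simultaneous finger requests no longer race
for a single listener, which is what was producing the "object wasn't
available" errors.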
Startup resource allocation looks like this:
[Fork FNGSRV opening PIP:1;RECORD-LENGTH:200 for writing]
[Fork FNGSRV opening PIP:1.1 for reading]
[Fork FNGSRV opening SRV:117 for reading, writing]
[Fork FNGSRV opening PIP:2;RECORD-LENGTH:200 for writing]
[Fork FNGSRV opening PIP:2.2 for reading]
[Fork FNGSRV opening SRV:117 for reading, writing]
[Fork FNGSRV opening PIP:3;RECORD-LENGTH:200 for writing]
[Fork FNGSRV opening PIP:3.3 for reading]
[Fork FNGSRV opening SRV:117 for reading, writing]
The resulting JFNs are:
JFN File Mode Bytes(Size)
2 FNGSRV.EXE.330 Read, Execute
3 FINGER.EXE.116 Read, Execute
15 PIP:1;RECORD-LENGTH:200 Append 0.(8)
16 PIP:1.1 Read 0.(8)
17 SRV:117 Read, Append 0.(8)
20 PIP:2;RECORD-LENGTH:200 Append 0.(8)
21 PIP:2.2 Read 0.(8)
22 SRV:117 Read, Append 0.(8)
23 PIP:3;RECORD-LENGTH:200 Append 0.(8)
24 PIP:3.3 Read 0.(8)
25 SRV:117 Read, Append 0.(8)
The resulting fork structure is:
=> FNGSRV (2): HALT at STARTS+13, 0.02719
Fork 12: HALT at 0, 0.00018
Fork 13: HALT at 0, 0.00016
Fork 10: HALT at 0, 0.00012
Fork 11: HALT at 0, 0.00012
Fork 6: HALT at 0, 0.00008
Fork 7: HALT at 0, 0.00007
So fork 2 is the finger server controller; forks 12, 10, and 6 are the
finger server sub-forks; and forks 13, 11, and 7 are the respective
Tops-20 finger clients. Times are in tens of microseconds, the maximum
resolution that Tops-20 supports. What can be seen is that fork
creation happens in sub-millisecond time. This was not the case in the
1980s with KL10's (I /think/), and modifications were necessary to
Tops-20 and the EXEC to capture the increased resolution.
I guess I'll have another version ready in about two weeks.
MIM can be used as an intermediate node, yes.
If you send your mails to MIM::<whatever>::USER, the mail will be
queued up if the <whatever>:: node isn't responding right now.
Johnny
On 2024-10-04 09:16, jdmail(a)zeelandnet.nl wrote:
> Then we need a node as a mailserver? Can that be done?
>
> Johan
>
>
> Johnny Billquist schreef op 2024-10-04 02:29:
>
>> And just if anyone wonders - yes, that also means you cannot send mail
>> to users on systems that are not currently online.
>> A thing that I always found a bit annoying/irritating.
>>
>> Another reason why I wrote the mail handling for BQTCP to also do the
>> writing and sending as two separate steps, with a queuing in between.
>>
>> Here is how it looks from VMS:
>>
>> MAIL> sen
>> To: josse::bqt
>> %MAIL-E-LOGLINK, error creating network link to node JOSSE
>> -SYSTEM-F-UNREACHABLE, remote node is not currently reachable
>>
>> MAIL> sen
>> To: anke::bqt
>> %MAIL-E-LOGLINK, error creating network link to node ANKE
>> -SYSTEM-F-NOSUCHOBJ, network object is unknown at remote node
>>
>> MAIL>
>>
>>
>> It instantly tries to connect, and if that doesn't work, it
>> immediately fails.
>>
>> Johnny
>>
>> On 2024-10-04 01:38, Johnny Billquist wrote:
>>> Sounds like a good improvement.
>>>
>>> However, I saw something about mail here, that you should be aware of...
>>> My MAIL11 implementation is doing the queuing of mails, because that
>>> is normal with SMTP, and I just carried over the same behavior to all
>>> parts of the system.
>>>
>>> However, the "standard" MAIL11 application for RSX as well as
>>> RSTS/E, along with the DECUS MAIL for RSX (and I suspect also VMS),
>>> do not behave that way. If the connection fails, the sending of
>>> mail immediately fails, and it is not queued for retrying. So it
>>> would appear that the mail for TOPS-20 you are looking at is also
>>> potentially flawed, which could cause issues.
>>>
>>> Of course, in that case, you'd normally have a human sitting in front
>>> of the terminal, who would then try again...
>>>
>>> Johnny
>>>
>>> On 2024-10-04 00:05, Thomas DeBellis wrote:
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol