Hi,
It has been mentioned elsewhere and maybe a while back too but I'm not quite sure when DECnet (VAX/VMS 7.3) started serving up jewels like:
%%%%%%%%%%% OPCOM 13-NOV-2024 15:15:13.82 %%%%%%%%%%%
Message from user DECNET on TUPILE
DECnet event 4.10, circuit up
From node 29.109 (TUPILE), 1-JAN-1977 00:00:53.64
Circuit UNA-0
Regards
Keith
I was just made aware that there is a bug in the proxy access handling
under RSX.
If you enable incoming proxy access, but do not have a proxy database
set up, RSX will actually allow remote users to gain access to files (or
whatever) as any user they specify, without having to give any password.
This is a bug in the network verification program. I just fixed this,
and a fixed version is available on MIM::LB:[5,54]NVPFSL.TSK. People
should be able to just copy that one over, and then either remove NVP...
and install LB:[5,54]NVPFSL, or just reboot; either way, the fixed
version should be active.
You can verify if you have the correct version by checking the version
of NVP, like this:
.tas nvp...
NVP... V08.20 GEN 150. 00021400 DU0:-00012167042
.
The fixed version is V08.20.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
[I previously sent this reply from an email address not registered
on this list, so presumably it got bit-binned.]
On 2024-11-13 14:20, Paul Koning wrote:
> By "bug" I meant "specification bug". The author of the DNA NetMan
> spec messed up here. Yes, VMS faithfully follows that bad design.
>
> The bugfix amounts to fixing the text of the spec, then changing the
> implementations to match the corrected text. I don't know if the
> former has ever been done but obviously a number of DECnet
> implementations acted as if it had been. :-)
I have Alpha VMS listings kits, but stupidly gave away my VAX listings.
Should someone send me a pointer to where a 7.3 listings kit can be
found, I'll try to find and fix it when I get back from vacation next
month.
On VMS, DECnet is a System Integrated Product (SIP) so it _should_ be
in the listings kit, but no guarantees.
Yes, I do have a listings license and had VAX/VMS on support at the
time V7.3 came out, so if someone wants me to "show my papers" before
they'll give me access, I can do that...
>> On Nov 13, 2024, at 1:53 PM, Johnny Billquist <bqt(a)softjar.se> wrote:
>>
>> I would not call it a bug. It was initially defined that way. However,
>> RSX officially decided to start treating it as an unsigned 16-bit
>> value with the last releases, and documented this.
>>
>> And it's a change that does make sense.
>>
>> But if others have not applied the same update, then it makes sense
>> that they would zero the value if it has the high bit set.
>>
>> Johnny
>>
>> On 2024-11-13 19:26, Paul Koning wrote:
>>> The statement in the DECnet spec that julian half-day is 15 bits is
>>> an obvious bug with an obvious fix; clearly RSX and others have made
>>> that fix. VMS needs to do likewise.
>>> paul
>>>> On Nov 13, 2024, at 12:50 PM, John Forecast <john(a)forecast.name
>>>> <mailto:john@forecast.name>> wrote:
>>>>
>>>> Depends on the implementation. Nov 9, 2021 on VMS, Ultrix and
>>>> probably some others. RSX is good until 2065.
>>>>
>>>> John.
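As a rough illustration of why the two limits land where they do, here
is a small back-of-the-envelope sketch (Python, purely illustrative),
assuming the field simply counts half-days since 1-JAN-1977, the base
date visible in the OPCOM message above. The exact rollover dates
depend on implementation details, so treat the output as approximate.

   from datetime import date, timedelta

   # Assumption from the thread: the timestamp carries a count of
   # half-days since 1-JAN-1977.  A 15-bit field overflows after 2**15
   # half-days, an unsigned 16-bit field after 2**16 half-days.
   BASE = date(1977, 1, 1)

   def rollover(bits):
       half_days = 2 ** bits
       return BASE + timedelta(days=half_days / 2)

   print("15-bit limit:", rollover(15))   # roughly November 2021
   print("16-bit limit:", rollover(16))   # roughly the mid-2060s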
I've gotten far enough along with the Tops-20 finger server that I
thought it would be a good idea to capture some of the common
assumptions and requirements into a DECnet finger specification document
of sorts. The current working version can be found at:
_VENTI2::DECNET-FINGER-SPECIFICATION.TXT_.
I emphasize /WORKING VERSION/ as I have been working with (a very
patient) Johnny to run tests, and chase down documentation bugs, gaps,
inaccuracies and other delusions. It's an active work in progress.
In particular, the limits of certain connection fields are based on what
is documented in the January 1980 version of the TOPS-20 DECnet-20
Programmer's Guide and Operations Manual, Order Number AA-5091A-TM.
This is quite old as it is based on Tops-20 version 4 and Tops-20 DECnet
version 2. However, it is what I had handy and what I remember coding
to, back in the day. Connection parameters are partially specified as
attributes and are as follows:
* ;*USERID*:/userid/ where /userid/ consists of 1 to 16 contiguous
alphanumeric ASCII characters (including the hyphen, dollar sign,
and underscore) identifying the source task. _Example_: ;USERID:ALIBABA
* ;*PASSWORD*:/password/ where /password/ consists of contiguous
alphanumeric ASCII characters (including the hyphen, dollar sign,
and underscore) required by the target task to validate the connection.
_Example_: ;PASSWORD:SESAME
* ;*CHARGE*:/acctno/ where /acctno/ consists of 1 to 16 contiguous
alphanumeric ASCII characters (including the hyphen, dollar sign,
and underscore) representing the source task's account
identification. _Example_: ;CHARGE:ACCT-13C
* ;*DATA*:/userdata/ where /userdata/ consists of 1 to 16 contiguous
alphanumeric ASCII characters (including the hyphen, dollar sign,
and underscore) representing user data. _Example_: ;DATA:THIS-IS-A-TEST
I've since found out that I'm *wrong* about USERID: it actually allows
up to 39 characters, and I have tested this.
What specification has these field definitions and limits? I'd like to
look at it before I go digging into Tops-20 and, of course, fixing the
finger server.
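To make the limits above concrete, here is a minimal validation sketch
(Python). The character set and the 16/39-character limits come from
the text above; the function name, the regular expression, and the
PASSWORD limit (which is garbled in my copy of the manual) are my own
assumptions, not anything from the finger server or a DEC spec.

   import re

   # Allowed characters per the attribute list above: alphanumerics
   # plus hyphen, dollar sign, and underscore.
   VALID_CHARS = re.compile(r'^[A-Za-z0-9$_-]+$')

   # Limits as described above; USERID was later found to allow 39.
   # The PASSWORD limit is assumed here for illustration only.
   MAX_LEN = {"USERID": 39, "PASSWORD": 16, "CHARGE": 16, "DATA": 16}

   def check_attribute(name, value):
       """True if value looks like a legal ;NAME:value attribute."""
       limit = MAX_LEN.get(name.upper())
       if limit is None:
           return False
       return 1 <= len(value) <= limit and bool(VALID_CHARS.match(value))

   # Examples from the list above:
   assert check_attribute("USERID", "ALIBABA")
   assert check_attribute("PASSWORD", "SESAME")
   assert check_attribute("CHARGE", "ACCT-13C")
   assert not check_attribute("DATA", "THIS IS A TEST")  # spaces not allowed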
I've seen and heard various stories about how people update the
nodename databases on their machines, hacking together scripts and
processing files. So I figured I should write a small mail about the
topic (I should create a web page with this information as well).
The main point is that people are creating work for themselves that
they really don't need.
Exactly how you update the nodename database on your machine depends on
what OS you are running, but prepared tools and scripts already exist
for pretty much any scenario. And if you happen to have a system or a
need that isn't currently covered, I can easily create one for you as
well.
But before going into the solutions, let me explain a bit about the
source of the data here.
DECnet phase IV does not have a centralized nodename system like DNS.
Each node in the DECnet network has its own nodename database, and every
machine can have its own name for another machine, independent of what
that other machine thinks its own nodename is.
However, in order to make it easier for multiple people and machines to
talk, it helps if everyone has a somewhat similar database. And here is
where the nodename database on MIM comes in. The nodename database that
I have on MIM is not the regular DECnet nodename database. Instead I'm
using DATATRIEVE to maintain a nodename database, which contains more
information than just the number and name. It contains the owner,
information about the software and hardware of the node, the location,
and when things were updated. This database is what is queried when
someone goes to http://mim.stupi.net/nodedb . And that page is generated
by just making queries in DATATRIEVE. If someone has a host with
DATATRIEVE on it, it is even possible to remotely access this DATATRIEVE
database over DECnet (you'll only have read-only access).
I have been considering adding a web interface so that people could
update their own information remotely, but so far that's been a
low-priority thing. Maybe one day...
From this DATATRIEVE database I can then generate the DECnet nodename
database on MIM. This is actually just a simple makefile. Whenever I run
it, it will create a bunch of different files (I'll get to that in a
moment), and detect if any changes have happened on the DECnet level of
things. If so, it will send a mail to people who have requested it,
informing them that the nodename database has been updated and that they
should update the nodename databases on their own machines.
I hope this makes it apparent that creating various files based on the
nodename database is actually very simple. This is in a sense what
DATATRIEVE is good at; creating reports is sort of what all these output
files are.
So - what files do I create today? Well, here is a short list:
FIX.CMD - This is a script file suitable for RSX systems using CFE.
However, it's sort of specially tailored for MIM, so it's not a file I
would recommend anyone else use.
FIX.COM - This is a script for VMS systems using phase IV.
FIX.PHV - This is a script for VMS systems using phase V.
FIX.IMP - This is a script for VMS for anyone using DECdns.
FIX.T20 - This is a script for TOPS-20.
HECNET.PY - This is a definition file for PyDECnet.
FIX.RST - This is a script for RSTS/E.
NODENAMES.DAT - This is basically just the basic information in a simple
output form from DATATRIEVE. It exists mostly for historical reasons,
but I understand that lots of people actually take this file and then
write code to process, extract and apply information from it.
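Since a lot of people apparently write their own code around
NODENAMES.DAT, here is a tiny parsing sketch (Python). The layout it
assumes -- one node per line, "area.node name" followed by anything
else -- is a guess for illustration only; look at the actual file
before relying on it.

   # Hypothetical parser for a NODENAMES.DAT-style listing.
   def parse_nodenames(text):
       nodes = {}
       for line in text.splitlines():
           fields = line.split()
           # Skip headers, blanks and anything not shaped like "a.n name"
           if len(fields) < 2 or fields[0].count(".") != 1:
               continue
           area, node = (int(x) for x in fields[0].split("."))
           nodes[(area, node)] = fields[1]
       return nodes

   # Example, with made-up lines in the assumed format:
   sample = "1.13  MIM\n29.109  TUPILE\n"
   print(parse_nodenames(sample))  # {(1, 13): 'MIM', (29, 109): 'TUPILE'}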
In addition, some systems can directly import nodenames from another
machine on DECnet, meaning you do not have to fetch and run any scripts
at all.
So here are the actions you need to take on each system, in summarized form:
RSX:
In RSX, there is a tool called NNC which copies definitions from another
node. Copy over MIM::HECNET:NNC.BAT, a batch file which does all the
work of importing the latest definitions from MIM and updating your
local system. All you need to do is "SUBMIT NNC.BAT" and you are done.
VMS:
With phase IV, the node copy capability is built into NCP. All you need
to do is: "NCP COPY KNOWN NODES FROM MIM TO BOTH" and you are done.
With phase V, copy over FIX.PHV and run it, or just directly run it from
MIM like this: "@MIM::HECNET:FIX.PHV"
If you run DECdns, grab FIX.IMP, and run it with whatever tool is used
to manage this (sorry that I can't help more, I don't really have any
experience with DECdns).
TOPS-20:
Grab MIM::HECNET:FIX.T20 and run it in the NCP submode of OPR (if I
remember the setup correctly).
RSTS/E:
Grab MIM::HECNET:FIX.RST, and run it with "@FIX.RST".
PyDECnet:
Fetch hecnet.py by doing "wget mim.stupi.net/hecnet.py". Place that
where you have configured PyDECnet to get the nodenames from, and you
are good (not sure if you need to restart PyDECnet).
Now. If you have some other system with some specific format you need,
just let me know, and I'll create such a file as well. It's trivial for
me to do this from DATATRIEVE. If you spot something wrong/bad in some
file created today, let me know, and we'll fix it. If you see any errors
or omissions in the information in this mail, let me know, and I'll get
it corrected. I will create a web page with this information as well.
If you want to get a mail whenever the nodename database is updated,
just let me know and I'll add you to the list.
And HECnet is slowly growing. Occasionally a completely new person/site
gets connected. Occasionally people add more nodes. The online presence
seems pretty constant. At the moment 19 areas are online. In area 1,
currently there are 19 machines online. Looking at Paul's HECnet map
(http://akdesign.dyndns.org:8080/map), there are machines online in
quite different locations, covering a large part of the world. I find
this cool, and even though there isn't a lot being done, it's still fun.
Well. Have a nice weekend everyone, and I hope some people find this
information useful.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I've finished the timeout functionality so that the Tops-20 finger
server will only wait so long before punting the connection (don't
worry, I'm very generous...)
I'm now working on the logging functionality. Again, on a connection
interrupt, the finger server grabs a bunch of meta-data about the
connection and displays it:
FNGSRV(1): Sunday, 6-October-2024 12:23:36.52412 AM-EDT
   From: VENTI2, User: SLOGIN, Data: STREAM*
   Object Type: Generic (FINGER), Segment Size: 1459
   Local Flow Control: Segment, Remote Flow Control: Segment
   Link Quota: Input Percentage: 50, Buffers: 16, Input Goal: 0
I have three concerns here (or maybe two and a half):
1) I want to write this information to a log file,
2) Since the data is written by concurrent server sub-forks,
   a) Data can get overwritten
   b) Latency to start the finger sub-sub-forks is increased
The solution is simple enough: I grab everything in the server sub-fork
and ship it up to the controller through shared memory for later
formatting and storage (and maybe printing).
When reviewing what meta-data can be gotten, I noticed that an explicit
connection confirm (MTOPR% function .MOCC) can take a pointer to an
optional sixteen bytes of data. What I can't quite tell from the
documentation is whether the server writes this data or whether the
server reads it. I find myself being unsure of the wording.
1. If the former (server sends), how does the client read it?
2. If the latter (server reads), how does the client send it?
I don't see a single instance of any Tops-20 DECnet program using this,
so no help there. I /think/ this is case 1., that is, the server has
the option of writing it. However, I was wondering what specification
this is in (maybe NSP 4?) and where, so I could read it before I look at
the monitor sources to see how the client would read it. I'm assuming
it would show up at the client as optional data?
I thought I would update everyone with where things stand with the
Tops-20 finger server.
Johnny and I agreed that, by default, the response to a DECnet finger
query is a sequence of records, each no longer than 132 bytes (more
typically 90), terminated by a line-feed. If the remote finger client
or operating system can handle larger buffers, then it can connect with
optional data=STREAM. Tops-20 will then dump everything in one giant
record. Only the Tops-20 finger client can handle this, although I
suspect a VAX client might also be able to do it.
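For anyone writing a client on another system, the default (record)
mode amounts to reading line-feed terminated records of at most 132
bytes. A minimal reading loop might look like the sketch below; the
framing comes from the agreement described above, while read_bytes() is
just a stand-in for whatever DECnet receive call your system provides.

   MAX_RECORD = 132

   def read_records(read_bytes):
       buf = b""
       records = []
       while True:
           chunk = read_bytes(MAX_RECORD)
           if not chunk:                  # link closed: end of reply
               break
           buf += chunk
           while b"\n" in buf:
               rec, buf = buf.split(b"\n", 1)
               records.append(rec.decode("ascii", "replace"))
       return records

   # Quick self-test with canned data:
   sample = iter([b"OINKY   Guinea Pig\n", b"No new mail\n", b""])
   print(read_records(lambda n: next(sample)))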
I had been making steady progress until I started stress testing it,
meaning having a session active on MIM::, TOMMYT::, and VENTI2::
(local), each with a finger command ready to go and then pushing
carriage return on all three as simultaneously as possible.
This started generating errors on MIM:: that the "object wasn't
available". What was going on was that the structure of the Tops-20
finger server really wasn't architected for real-time response to
that much curiosity. It only opens a single SRV:117 (finger) object at a
time, waits for a connection, reads the data, hands it off to an idle
finger client fork via a pipe, and then gets a new SRV:117 object.
In other words, it isn't until the finger client is started with the
connection redirected to DECnet that the server is ready to accept
another connection. That has what I would consider to be noticeable
latency, particularly on failure. An error doesn't really matter for
SMTP as it is a background task and just tries again later. An
interactive finger, on the other hand, has a user sitting there waiting
for a response, so it seemed to me that this wasn't really going to cut it.
I took the model of the Tops-20 FAL server, which has a single control
fork, looking for illness in sub-forks, all of which open their own FAL
server object. The new finger controller now starts separate FNGSRV
sub-server forks, creating a FINGER sub-fork for each, gets all the
communications lined up, and starts all the FNGSRV sub-forks to listen
for connections.
This has the advantage of not clobbering the system on FNGSRV startup
because resources are gotten or created sequentially, so there isn't
much for a FNGSRV sub-fork to do except wait for a connection and manage
its own single FINGER sub-fork.
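In other words, the structure went from "accept, then set things up"
to a pre-created pool where every worker already owns its own listener
and helper. A compact sketch of that shape (Python, with ordinary
processes and pipes standing in for Tops-20 forks, SRV:117 objects, and
PIP: devices; names are illustrative, not from FNGSRV):

   import multiprocessing as mp

   N_WORKERS = 3

   def finger_helper(pipe):
       # Corresponds to a FINGER sub-fork: wait for work, answer it.
       while True:
           request = pipe.recv()
           if request is None:
               break
           pipe.send("finger reply for %s" % request)

   def server_worker(worker_id):
       # Corresponds to a FNGSRV sub-fork: it owns one listener and one
       # helper, so accepting a connection never waits on global setup.
       parent_end, child_end = mp.Pipe()
       helper = mp.Process(target=finger_helper, args=(child_end,))
       helper.start()
       # A real worker would open its own listener here (its SRV:117
       # object) and loop: accept, pass the request down, send the
       # reply back.  This sketch just shuts the helper down again.
       parent_end.send(None)
       helper.join()

   if __name__ == "__main__":
       workers = [mp.Process(target=server_worker, args=(i,))
                  for i in range(N_WORKERS)]
       for w in workers:            # the controller starts everything
           w.start()
       for w in workers:            # ...and would normally watch health
           w.join()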
Startup resource allocation looks like this:
   [Fork FNGSRV opening PIP:1;RECORD-LENGTH:200 for writing]
   [Fork FNGSRV opening PIP:1.1 for reading]
   [Fork FNGSRV opening SRV:117 for reading, writing]
   [Fork FNGSRV opening PIP:2;RECORD-LENGTH:200 for writing]
   [Fork FNGSRV opening PIP:2.2 for reading]
   [Fork FNGSRV opening SRV:117 for reading, writing]
   [Fork FNGSRV opening PIP:3;RECORD-LENGTH:200 for writing]
   [Fork FNGSRV opening PIP:3.3 for reading]
   [Fork FNGSRV opening SRV:117 for reading, writing]
The resulting JFN's being:
       JFN File Mode     Bytes(Size)
        2  FNGSRV.EXE.330            Read, Execute
        3  FINGER.EXE.116            Read, Execute
        15 PIP:1;RECORD-LENGTH:200   Append         0.(8)
        16 PIP:1.1                   Read           0.(8)
        17 SRV:117                   Read, Append   0.(8)
        20 PIP:2;RECORD-LENGTH:200   Append         0.(8)
        21 PIP:2.2                   Read           0.(8)
        22 SRV:117                   Read, Append   0.(8)
        23 PIP:3;RECORD-LENGTH:200   Append         0.(8)
        24 PIP:3.3                   Read           0.(8)
        25 SRV:117                   Read, Append   0.(8)
The resulting fork structure is:
   => FNGSRV (2): HALT at STARTS+13, 0.02719
         Fork 12: HALT at 0, 0.00018
            Fork 13: HALT at 0, 0.00016
         Fork 10: HALT at 0, 0.00012
            Fork 11: HALT at 0, 0.00012
         Fork 6: HALT at 0, 0.00008
            Fork 7: HALT at 0, 0.00007
So fork 2 is the finger server controller, forks 12, 10, and 6 are
finger server sub-forks, and forks 13, 11, and 7 are the respective
Tops-20 finger clients. Times are in tens of microseconds, the maximum
resolution that Tops-20 supports. What can be seen is that fork
creation is happening in sub-millisecond time. This was not the case in
the 1980's with KL10's (I /think/), and modifications were necessary to
Tops-20 and the EXEC to capture the increased resolution.
I guess I'll have another version ready in about two weeks.
MIM can be used as an intermediate node, yes.
If you send your mail to MIM::<whatever>::USER, the mail will be queued
up if the <whatever>:: node isn't responding right now.
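For the curious, the store-and-forward idea is just "writing and
sending as two separate steps with a queue in between". A toy
illustration (Python; the spool layout and names are invented for this
sketch and have nothing to do with the actual MAIL11 code):

   import os, time

   SPOOL = "mail-spool"

   def queue_mail(dest_node, user, body):
       # Writing a mail only drops a file into the spool; it never
       # fails just because the destination node is unreachable.
       os.makedirs(SPOOL, exist_ok=True)
       path = os.path.join(
           SPOOL, "%d.%s.%s" % (time.time_ns(), dest_node, user))
       with open(path, "w") as f:
           f.write(body)
       return path

   def run_queue(try_deliver):
       # try_deliver(node, user, body) -> bool stands in for the actual
       # delivery attempt; unreachable nodes simply stay queued and get
       # retried on the next pass.
       if not os.path.isdir(SPOOL):
           return
       for name in os.listdir(SPOOL):
           _, node, user = name.split(".", 2)
           path = os.path.join(SPOOL, name)
           with open(path) as f:
               body = f.read()
           if try_deliver(node, user, body):
               os.remove(path)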
Johnny
On 2024-10-04 09:16, jdmail(a)zeelandnet.nl wrote:
> Then we need a node as a mailserver? Can that be done?
>
> Johan
>
>
> Johnny Billquist schreef op 2024-10-04 02:29:
>
>> And just if anyone wonders - yes, that also means you cannot send mail
>> to users on systems that are not currently online.
>> A thing that I always found a bit annoying/irritating.
>>
>> Another reason why I wrote the mail handling for BQTCP to also do the
>> writing and sending as two separate steps, with a queuing in between.
>>
>> Here is how it looks from VMS:
>>
>> MAIL> sen
>> To:   josse::bqt
>> %MAIL-E-LOGLINK, error creating network link to node JOSSE
>> -SYSTEM-F-UNREACHABLE, remote node is not currently reachable
>>
>> MAIL> sen
>> To:   anke::bqt
>> %MAIL-E-LOGLINK, error creating network link to node ANKE
>> -SYSTEM-F-NOSUCHOBJ, network object is unknown at remote node
>>
>> MAIL>
>>
>>
>> It instantly tries to connect, and if that doesn't work, it immediately
>> fails.
>>
>>  Johnny
>>
>> On 2024-10-04 01:38, Johnny Billquist wrote:
>>> Sounds like a good improvement.
>>>
>>> However, I saw something about mail here, that you should be aware of...
>>> My MAIL11 implementation is doing the queuing of mails, because that
>>> is normal with SMTP, and I just carried over the same behavior to all
>>> parts of the system.
>>>
>>> However, the "standard" MAIL11 application for RSX as well as RSTS/E,
>>> along with the DECUS MAIL for RSX (and I suspect also VMS), do not
>>> behave that way. If the connecting fails, the sending of mail
>>> immediately fails, and it is not queued for retrying. So it would
>>> appear that the mail for TOPS-20 you are looking at is also
>>> potentially flawed, which could cause issues.
>>>
>>> Of course, in that case, you'd normally have a human sitting in front
>>> of the terminal, who would then try again...
>>>
>>>   Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Some time ago, I asked about a list of DECnet generic objects and what
they meant. I remember seeing a response and now I can't find it or
what I thought I saved as a file. Could somebody send me that again?
For now, all I have is what SYSDPY shows. This is an annotated list
(note that numbers are octal unless otherwise specified):
_Object#_ _Name_ _Comment_
        0     TASK
        1     FAL1
        2     URDS           Unit Record Device Service (DN200)
        3     ATS
        4     CTS
        5     TCL1
        6     OSI
        7     NRM
       10     3270           3270 Terminal
       11     2780           2780 Remote Job Entry protocol
       12     3790
       13     TPS
       14     DIBOL
       15     T20TRM         Tops-20/Tops-10/Ultrix Network Remote
Terminal (NRT)
       16     T20RSP
       17     TCL
       20     TLK
       21     FAL            File Access Listener
       22     RTL
       23     NCU            NICE?
       24     NETCPY
       25     ONCTH
       26     MAIL           How different from MAIL11?
       27     NVT
       30     TCON
       31     LOOP           Loopback Testing
       32     EVENT          DECnet Event Reporting
       33     MAIL11         VAX mail?
       34     FTS            File Transfer Service
       35     PHONE          Phone
       36     DDMF
       37     X25GAT         X.25 gateway
       40     UETP           User Environment Test Package
       41     VXMAIL
       42     X29SRV
       43     RDS
       44     X25HST         X.25 Host
       45     SNAGAT         SNA Gateway
       46     SNARJE         SNA Remote Job Entry
       47     SNAGIS
       50     MTSS
       51     ELF
       52     CTERM          Control Terminal
       53     DNSTA
       54     DNSUL
       55     DHCF
     ^D47     POSI           Remote OPR
     ^D63     DTR
     ^D65     TOPOL
     ^D66     DQS            Digital Queue Service (LAT?)
    ^D117     FINGER         Personal Name service
    ^D123     PMR
    ^D201     MS
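Since the SYSDPY column is octal (with ^D marking the decimal entries)
and most DECnet documentation quotes object numbers in decimal, a tiny
conversion helper may be handy. The cross-checks below use object
numbers I'm confident of (FAL is 17, MAIL11 is 27, CTERM is 42 decimal);
I have not verified the rest of the table. It is also consistent with
the "Generic (165)" seen in the finger server logs further down, since
165 octal is 117 decimal.

   def sysdpy_to_decimal(entry):
       # Entries prefixed ^D are already decimal; the rest are octal.
       if entry.startswith("^D"):
           return int(entry[2:], 10)
       return int(entry, 8)

   assert sysdpy_to_decimal("21") == 17      # FAL, file access listener
   assert sysdpy_to_decimal("33") == 27      # MAIL11
   assert sysdpy_to_decimal("52") == 42      # CTERM
   assert sysdpy_to_decimal("^D117") == 117  # FINGER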
I needed to take a break from finishing up the Tops-20 DECnet/SMTP code,
so I finally looked into doing a Tops-20 finger server. Over the weekend,
I cobbled something together. For example, on TOMMYT::
FINGER>oinky@venti2 /no-plan
 OINKY     Guinea Pig                OINKY not logged in
Last logout Sun 1-Sep-2024 1:40AM from TTY6 (NRT)
No new mail, last read on Tue 12-Apr-2022 10:00PM
The finger server is multi-forking and works by creating a group of
forks, putting the finger program in each fork and changing the primary
input and output to whatever connection is being served. It is general
enough so that you could probably have it run any program with little
tinkering.
Right now, it is in debugging mode and typing a lot of diagnostic
information about the link. So, for the above attempt from the client,
the finger server console output was:
FNGSRV: Connection from TOMMYT, User: SLOGIN, Data: Tops-20_Finger
Object Type: Generic (165), Segment Size: 1459
Local Flow Control: Segment, Remote Flow Control: Segment
Link Quota: Input Percentage: 50, Buffers: 16, Input Goal: 0
As can be seen, the Tops-20 finger client has been modified to send the
name of the local user and optional data identifying the finger
client. Probably I’ll change that to the finger version. The RFC for
TCP/IP finger is (naturally) silent on what you can send over DECnet, so
this doesn’t break anything. So, I can finger myself on MIM:: just fine, viz:
FINGER>debellis@mim
[Default directory: US00:[DEBELLIS]  CLI: DCL  SID: TDB
Last seen Sep 16 2024 23:04:37 on RT0: from VENTI2::
Logged on 16 times.
No plan.
Unfortunately, it /doesn’t/ work when I sign on to MIM:: and try the
local finger client there, viz:
$ finger -d VENTI2::OINKY
[VENTI2::]
$
So, no obvious failure, but no output, either. On VENTI2::, what I’m
seeing appears to be a successful connection and that the finger program
running in the sub-fork is opening OINKY’s finger plan and sending it
back over the link, viz:
[Fork FNGSRV opening SRV:117 for reading, writing]
FNGSRV: Connection from MIM, Task: DEBELLIS
Object Type: Generic (165), Segment Size: 558
Local Flow Control: Segment, Remote Flow Control: Message
Link Quota: Input Percentage: 50, Buffers: 16, Input Goal: 0
[Fork 2 opening TOMMYT:FINGER.BIN.5 for reading thawed]
[Fork 2 opening TOMMYT:<OINKY>FINGER.PLAN.4 for reading]
Currently, the finger server is highly instrumented, so if it detected
that something was amiss (like the finger sub-fork was flat out
croaking), it would complain about it. Or it should… Finger itself is
not as highly instrumented, since I wanted to keep the changes there as
small as possible.
If you have a finger client that can do DECnet connections on another
platform (I’m thinking VMS or maybe RSTS) and can try it, let me know
how you make out. Be aware that I did just hack this together in two and
a half days, so I make no claims to any sort of stability. I'm just
trying to troubleshoot at this point.
—T