On Jun 18, 2024, at 12:01 PM, Johnny Billquist
<bqt@softjar.se> wrote:
...
Correct. And the answer is the same as with
accept: if you issue connect and then immediately a data send, you get WrongState. The
connect API creates a connection object, builds the Connect Initiate message, and sends
it. That's all it does; retransmits and the processing of accept or reject happen
afterwards.
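To make that concrete, here is a rough sketch of the state check involved. None of this is the actual PyDECnet source; WrongState, Connection, and the state names are all invented to mirror the behavior just described:

    # Illustrative only: names and states are made up for this sketch.
    class WrongState(Exception):
        """Operation not legal in the connection's current state."""

    class Connection:
        def __init__(self):
            self.state = "closed"

        def connect(self, dest):
            # Build the Connect Initiate message and send it, then return;
            # retransmits and accept/reject processing happen later.
            self.state = "ci-sent"

        def send(self, data):
            # Data is only legal once the remote end has accepted.
            if self.state != "run":
                raise WrongState(self.state)

    c = Connection()
    c.connect("remote")
    c.send(b"hi")   # raises WrongState: still waiting for accept/reject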
But until you get an accept or reject, you don't know if it's even ok to send
packets, so it would be crazy to just allow the program to continue before it has
received the connect confirm or reject. Those messages also carry data related to the
connect request, which I would expect a caller to be able to read immediately after the
connect call completes.
That's certainly a good API structure.
...
Actually, I had forgotten how the "connectors" API works. It already does the
"wait for accept/reject" thing on a Connect request. But it doesn't have
the equivalent on the accept request, mostly because there isn't a response message
there (the ACK doesn't count; it's handled inside NSP). I can add a
pseudo-message to the underlying machinery for reporting state changes, and then wait for
that in the accept call. A state change "message" would make sense anyway.
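Roughly like this, as a sketch of the idea rather than the real code (the queue and the message format are invented for illustration):

    import queue

    class Connector:
        def __init__(self):
            self.messages = queue.Queue()   # filled by the NSP machinery

        def accept(self):
            # ... tell NSP to send the Connect Confirm here ...
            # then block until the synthetic state-change message shows up
            while True:
                msg = self.messages.get()
                if msg.get("type") == "statechange" and msg["state"] == "run":
                    return   # connection is now usable

    c = Connector()
    c.messages.put({"type": "statechange", "state": "run"})
    c.accept()   # returns once the pseudo-message is seen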
Hmm. So you are in fact delaying the return from the connect call. That sounds good. Now
you just need to do the same for the accept. And I'd say it should be tied into the
state change. If you need some pseudo-message to make it easy to implement, I think that
would be perfectly fine. Not something anyone on the outside would notice anyway.
There's a bit of history here. The original API (the "module" one) is tied
very closely to the internal machinery of the NSP layer, not surprisingly. And that's
basically asynchronous: messages are queued and sent on their way, other messages are
received, and the state changes as needed.
Then there is the "connector" API, which I created as a wrapper around pipes and
messages on those pipes, for use by applications constructed as separate processes.
It's also designed to be simpler, especially the basic pipe structure, which is
optimized for handling a single connection. So while the NSP internal API treats connect
as a "send message on its way" request, with the accept or reject delivered in a
separate step, the connector has an "exchange" notion where a message expects a
response and the API call completes only when the response arrives. So the exercise is to
make that model apply to accept as well.
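In sketch form the exchange pattern is just "write one request, block until the matching reply arrives". The pipe interface and message shapes here are made up for illustration, not the connector's actual wire format:

    import json

    def exchange(pipe, request, expected):
        # Send one JSON request, then read replies until one of the
        # expected types arrives; that reply completes the call.
        pipe.write((json.dumps(request) + "\n").encode())
        pipe.flush()
        while True:
            reply = json.loads(pipe.readline())
            if reply.get("type") in expected:
                return reply

    # A connect is then one exchange that ends on accept or reject:
    # exchange(pipe, {"type": "connect", "dest": "remote"},
    #          {"accept", "reject"})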
I should also document more clearly that the connector API is the preferred one, with the
others (the low level JSON messages over pipes, or the module level API) discouraged. As
I said, the module API does apply to applications that need to reach into the guts of
PyDECnet, like NML. If someone wanted to write a DECnet MIB listener for PyDECnet, that
would likely also want to be a module, for the same reason. But things like FAL or PHONE or
FINGER are better done with connectors; if multiple DECnet connections are needed, async
connectors are best. PMR is an example of that.
Aren't there other state changes you should do
things on as well? If a link is lost, NSP will retransmit and eventually time out. If the
application is trying to read, that read should then return an error when the state
changes, right? How is that handled then? (Just the first case that popped into my head.)
The timeout produces an internally generated disconnect message, so from the application's
point of view it would see a receive that delivers the disconnect, just with a different
reason code than, say, a remote application doing a disconnect request. Also, disconnect
and reject (sent or received) make the connection disappear as a side effect; the handle
is no longer valid after that point. So it turns out that accept is the only case where
we have a state change not already accompanied by a message.
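So an application receive loop would look roughly like this; recv and the message fields are hypothetical, not the connector's actual names:

    def receive_loop(conn, on_data):
        # Drain messages until the connection goes away.
        while True:
            msg = conn.recv()               # hypothetical blocking receive
            if msg["type"] == "data":
                on_data(msg["payload"])
            elif msg["type"] == "disconnect":
                # The reason code tells us which case this is: a remote
                # disconnect request vs. an NSP retransmit timeout.
                print("connection closed, reason:", msg["reason"])
                return   # the handle is no longer valid past this point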
paul