On Nov 8, 2021, at 5:18 PM, Johnny Billquist <bqt at softjar.se> wrote:
On 2021-11-08 16:32, Paul Koning wrote:
On Nov 8, 2021, at 10:13 AM, Johnny Billquist <bqt at softjar.se> wrote:
On 2021-11-08 15:27, Paul Koning wrote:
> On Nov 7, 2021, at 9:54 PM, Peter Lothberg <roll at stupi.com> wrote:
>
> In the "old days" we did SMTP over DECnet. Has anyone considered doing
> Telnet, FTP, HTTP etc. over DECnet transport?
I've thought about http over DECnet. That would be really easy. The obvious way to
deal with any TCP-based protocol is just to send the byte streams over DECnet messages, in
the same fashion as DECnet/Ultrix streaming mode DECnet sockets do. (Precisely how that
works I don't remember.) In the case of HTTP, a natural simplification would be to
send the entire header, in both directions, as a single message, with any data following
in one or more additional messages.
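To make that concrete, here is a minimal sketch in C of the client side, assuming a Linux-style message-based DECnet socket API; connect_decnet_object() and the "HTTP" object name are invented for illustration:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Hypothetical helper: connect to a named object on a DECnet node
     * and return a connected message-based socket. */
    extern int connect_decnet_object(const char *node, const char *object);

    int http_get(const char *node, const char *path)
    {
        int sock = connect_decnet_object(node, "HTTP");
        if (sock < 0)
            return -1;

        /* As suggested above: the entire request header goes out as a
         * single NSP message; no extra framing layer is needed. */
        char req[512];
        int len = snprintf(req, sizeof(req),
                           "GET %s HTTP/1.0\r\nConnection: close\r\n\r\n",
                           path);
        if (send(sock, req, len, 0) != len)
            return -1;

        /* The reply header arrives as one message, with the body in
         * zero or more additional messages. */
        char buf[8192];
        ssize_t n;
        while ((n = recv(sock, buf, sizeof(buf), 0)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        return 0;
    }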
Streaming mode DECnet sockets would be interesting to learn something about.
Because otherwise the problem is really that DECnet is packet based, and not byte based
like TCP. So there are potentially some problems with adapting TCP protocols to DECnet.
Not really. Stream based protocols are protocols that do not rely on message
boundaries -- more precisely, do not rely on boundaries being marked by lower layers. If
you have a transport that does report message boundaries, and you want to carry a stream
protocol, the simple answer is to ignore those boundaries.
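In code, "ignore those boundaries" can be as simple as a read loop that refills the caller's buffer across message edges. A sketch, assuming the transport hands back partial messages on short reads rather than discarding the tail (see the MSG_TRUNC/MSG_EOR discussion further down):

    #include <sys/socket.h>
    #include <sys/types.h>

    /* Read exactly `want` bytes of a byte stream carried over a
     * message-based transport, paying no attention to where the
     * message boundaries happen to fall. */
    ssize_t read_stream(int sock, char *buf, size_t want)
    {
        size_t got = 0;
        while (got < want) {
            ssize_t n = recv(sock, buf + got, want - got, 0);
            if (n <= 0)
                return got ? (ssize_t)got : n;  /* EOF or error */
            got += (size_t)n;
        }
        return (ssize_t)got;
    }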
Well. The question here becomes:
Is there any relevance you want to attach to the packets as such, or do you actually
intend it to be stream based, using CR+LF as the termination of lines? Which also
means you need to implement another layer in all the software to reformat the data into
that stream, and process a line at a time from it.
And of course, for protocols implementing the network virtual terminal, you then want to
escape 0xff, in order to implement all the other functions that might be implied. And
padding of standalone CRs.
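For concreteness, those two NVT rewrites are: double every 0xFF (the Telnet IAC byte), and send a CR that is not followed by LF as CR NUL. A sketch:

    #include <stddef.h>

    #define IAC 0xff   /* Telnet "interpret as command" */

    /* Rewrite `in` into `out` per the NVT rules above: escape IAC by
     * doubling it, and pad a standalone CR with a NUL.  `out` must be
     * at least twice `len` bytes.  Returns the output length. */
    size_t nvt_encode(const unsigned char *in, size_t len,
                      unsigned char *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < len; i++) {
            out[o++] = in[i];
            if (in[i] == IAC)
                out[o++] = IAC;        /* IAC IAC */
            else if (in[i] == '\r' && (i + 1 >= len || in[i + 1] != '\n'))
                out[o++] = '\0';       /* CR NUL for a bare CR */
        }
        return o;
    }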
And how much should you buffer before you actually should send it?
And for how long?
The reason I'm interested in how Ultrix did
streaming sockets is the question "when do you send the message".
That's actually a question with TCP as well, which is why it has a "no
delay" option. The trivial answer is "for each send() call to the socket, send
that data as a complete NSP message". That would obviously work; the only question
is whether something fancier is a useful optimization.
That's basically what DECnet-Ultrix did, although I seem to remember that if there was data
waiting for initial transmission, we would merge the new data in with the old.
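Something like the following, done in user space purely for illustration (the real logic lived inside the DECnet-Ultrix kernel, and every name here is invented):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    struct stream_out {
        int    sock;
        char   pend[8128];  /* room for one max-size message */
        size_t len;         /* bytes queued but not yet transmitted */
    };

    /* Transmit everything still pending as one complete NSP message. */
    static int stream_flush(struct stream_out *o)
    {
        if (o->len && send(o->sock, o->pend, o->len, 0) != (ssize_t)o->len)
            return -1;
        o->len = 0;
        return 0;
    }

    /* Queue data for sending.  If earlier data is still waiting for
     * its first transmission, merge the new bytes into that pending
     * message; otherwise start a new one.  Assumes `n` fits in one
     * message. */
    static int stream_write(struct stream_out *o, const char *data,
                            size_t n)
    {
        if (o->len + n > sizeof(o->pend) && stream_flush(o) < 0)
            return -1;
        memcpy(o->pend + o->len, data, n);
        o->len += n;
        return 0;
    }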
DECnet-Ultrix was the first network implementation to make use of SOCK_SEQPACKET. There
was a crash-causing bug whenever a read was issued on such a socket; the problem was a
zero-day bug in the original BSD socket implementation.
You might well want to do something smart here
at some point. But it's not trivial to decide how.
Actually, there might be one other question.
Consider HTTP: if the server wants to send a file (say, a .jpg image) can it send the
entire file as a single NSP message? From the DECnet architecture point of view, sure.
Do DECnet implementations put some limit on the length of an NSP message? I don't
know, except for RSTS, which doesn't, simply because it leaves segmentation and
reassembly to the application.
There are definitely limits here.
The problem is that in DECnet, these limits are not global. Consider RSX. You can send
any size you want (well, within limits - DECnet I/O in RSX is limited to max 8128 bytes).
However, the receiver must set up a receive that is large enough to hold the whole
message, or else parts are lost. (Remember the term "data overrun"?)
Remember - DECnet is packet based, not stream based. That has further implications.
Data not received in one call, because the packet was too big, is not received in the
next call. It is lost.
At the time when DECnet-Ultrix was written, this was indeed the case. Newer
versions of SOCK_SEQPACKET (e.g. Linux) include support for MSG_TRUNC and MSG_EOR, so
an application can use a small buffer: overflow data will not be discarded, and the user
will be told that truncation has occurred.
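A sketch of reassembling one record with a deliberately small buffer under those semantics: keep reading until MSG_EOR marks the end of the record (MSG_TRUNC being the signal that a given read did not get it all):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    /* Receive one logical message, handing each piece to `consume`. */
    ssize_t recv_record(int sock, void (*consume)(const char *, size_t))
    {
        char buf[512];                   /* deliberately small */
        struct iovec iov = { buf, sizeof(buf) };
        struct msghdr msg;
        ssize_t total = 0;

        do {
            memset(&msg, 0, sizeof(msg));
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;

            ssize_t n = recvmsg(sock, &msg, 0);
            if (n < 0)
                return n;
            consume(buf, (size_t)n);
            total += n;
        } while (!(msg.msg_flags & MSG_EOR));  /* record not done yet */

        return total;
    }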
John.
Very analogous to magtape.
The mismatch that's harder to deal with is an
application that needs message boundaries, running over a transport that doesn't have
these. The TCP/IP world is full of these, and in every instance the application protocol
concocts an ad-hoc solution to this issue. Consider iSCSI and NFS as two examples, with
different solutions (or multiple solutions, if you take the "markers" hack in
iSCSI seriously).
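As a concrete example of one such ad-hoc solution: ONC RPC (which NFS rides on) frames each message on TCP with a 4-byte record mark whose top bit flags the last fragment and whose low 31 bits give the fragment length. The sending side, sketched as a single-fragment record:

    #include <arpa/inet.h>   /* htonl */
    #include <stdint.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Send one logical message over a byte stream using RPC-style
     * record marking: high bit = last fragment, low 31 bits = length. */
    int send_record(int sock, const void *data, uint32_t len)
    {
        uint32_t mark = htonl(0x80000000u | len);  /* one final fragment */

        if (send(sock, &mark, sizeof(mark), 0) != (ssize_t)sizeof(mark))
            return -1;
        if (send(sock, data, len, 0) != (ssize_t)len)
            return -1;
        return 0;
    }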
It's not that easy. See above...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol