To come back to the MIM connection issue described below: it seems
to be caused by an unnecessary congestion problem.
When connecting to a node, a configuration message contains
the remote buffer size. MIM/RSX returns 2086.
Accordingly, set_socket_buffer_size in dnprogs' libdap/connection.cc calls
setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &blocksize, sizeof(blocksize));
and likewise for SO_SNDBUF, as the protocol specifies. But this triggers
"on/off" flow control in the kernel module: the per-packet overhead of
holding a packet in the DECnet socket buffers is such that the above
blocksize is too small, even though the kernel doubles the requested
value by default, and especially so with the low value of 2086.
Increasing it in exchange_config() with
set_blocksize(min(MAX_READ_SIZE, cm->get_bufsize()<<1));
(or perhaps just set_blocksize(MAX_READ_SIZE); ?)
helps a lot. Note that the kernel module and VMS announce "no flow control"
(apart from "on/off"), while RSX uses "Session Control Message" flow
control, which the kernel module implements for outbound data; so buffers
that are perhaps too large may not be much of a problem.
PS: To create a decnet.conf, the 'dncopynodes' utility may be used.
On Wed, Jan 03, 2018 at 04:02:19PM +0100, Erik Olofsen wrote:
On a virtual machine with lubuntu 12.04, kernel 3.2.0,
I have
DECnet for Linux V.2.5.68s working, with dnprogs version 2.65.
It is node RULLFL.
The daemons and utilities use decnet.conf with node information,
so something useful would be to create it from mim::nodenames.dat.
On a VAX, TYPEing it works well, but on the Linux machine, dntype
hangs after printing part of the file; dndir mim:: works well.
Does anyone have dntype mim::nodenames.dat working properly (with
perhaps different versions of the above)?
Thanks!