On 2013-01-06 03:40, Johnny Billquist wrote:
On 2013-01-06 03:17, Brian Schenkenberger, VAXman- wrote:
Johnny Billquist <bqt at softjar.se> writes:
{...snip...}
Well, this code runs on any Qbus, so all Qbus machines "benefit" from
the code, not just the PDT-11. It's probably that on the Unibus, the
speed penalty for a read-modify-write is less than reading a byte and
pushing it on the stack, and then popping the stack and writing the word
to the bus. Also, the DU-11 might respond much faster than the DUV-11.
The buses are asynchronous, so the time for the execution is totally
dependent on the speed of the device to respond to the bus cycles.
OK. That wasn't clear from the choice of the symbol in the conditional.
I know that the selection of whether to call the driver DU or DUV is
based on the L$$SI1 symbol. Sorry, I probably didn't point that out
clearly enough. That tells me that all Qbus machines get that variant.
(The choice of symbol name is probably simply because the LSI-11 was
the first Qbus machine.)
I assume the PDT-11 mentioned is the one with a Qbus, so it's the same
controller as all other Qbus machines. But that part is really a guess.
It might be that I'm wrong on that, and that the PDT-11 has something
else which looks and works just the same way as any other Qbus machine,
but DEC didn't have any convenient way of setting up a specific
variation for the PDT-11, so they took the small hit on Qbus machines in
order to save the PDT-11, while avoiding even that small hit on Unibus
machines, since the PDT-11 isn't even in the picture there.
And based on John's response, it seems that this latter guess is actually the right one.
The cost on any other Qbus machine is pretty minimal, even if going through the stack is slower.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
From: "Brian Schenkenberger, VAXman-" <system at TMESIS.COM>
OK, I'll bite. Why is moving a character in the deferred location in
R5 to the stack and then, from the stack to the address in R4 faster
than just going from the deferred R5 location to the R4 address?
I don't know the exact answer, but the I/O page in a PDT-11 is emulated by
an 8085A and is always super slow. I don't know why DATOB would be any
worse than DATO, but if DEC thought it was, I'm sure it's true (it shouldn't
have to be a read-modify-write, but some PDP-11 models do gratuitous extra
cycles, so it may well be). So this isn't actually a Q vs. U difference, it's
a PDT vs. real-bus difference, but the LSI-11 conditionals will catch PDTs.
Anyway, thanks Johnny! Good to know that DU and DUV are 2 for the price of 1.
(And it hadn't even clicked that the thing in a PDT is emulating a DUV, so I
guess it's 3 for the price of 1!)
John Wilson
D Bit
On 2013-01-06 03:07, Brian Schenkenberger, VAXman- wrote:
Johnny Billquist <bqt at softjar.se> writes:
{...snip...}
But it seems very likely that it's the byte write that is the problem.
That becomes a read-modify-write cycle on the bus, since everything on
the bus is accessed as words. And accessing the DUV-11 on a PDT-11 is
extremely slow. Just guessing, mind you...
But wouldn't all benefit from that? Why conditionalize it?
Well, this code runs on any Qbus, so all Qbus machines "benefit" from the code, not just the PDT-11. It's probably that on the Unibus, the speed penalty for a read-modify-write is less than reading a byte and pushing it on the stack, and then popping the stack and writing the word to the bus.
Also, the DU-11 might respond much faster than the DUV-11. The buses are asynchronous, so the time for the execution is totally dependent on the speed of the device to respond to the bus cycles.
Johnny
On 2013-01-06 02:46, Brian Schenkenberger, VAXman- wrote:
Johnny Billquist <bqt at softjar.se> writes:
{...snip...}
One of them is a simple optimization for the PDT-11. It's not strictly
necessary, but apparently DEC thought the time gain was enough to make
it worth exploiting. Code looks like this:
.IF DF L$$SI1
MOVB @(R5)+,-(SP) ;;; COPY CHARACTER FOR WORD MOVE
MOV (SP)+,(R4) ;;; (SAVES 85 USECS ON PDT-11)
.IFF ; DF L$$SI1
MOVB @(R5)+,(R4) ;;; OUTPUT A CHARACTER
.ENDC ; DF L$$SI1
OK, I'll bite. Why is moving a character in the deferred location in
R5 to the stack and then, from the stack to the address in R4 faster
than just going from the deferred R5 location to the R4 address?
I'll have to make a guess, since I don't have the hardware, nor have I checked if any documentation at that level can be found anywhere.
But it seems very likely that it's the byte write that is the problem. That becomes a read-modify-write cycle on the bus, since everything on the bus is accessed as words. And accessing the DUV-11 on a PDT-11 is extremely slow. Just guessing, mind you...
Johnny
On 2013-01-03 22:19, Johnny Billquist wrote:
On 2013-01-03 18:37, John Wilson wrote:
Trivia question: what are the major differences between programming
the DU11 and the DUV11? At least from the handbook descriptions, it
sounds like the DUV11's isochronous (= async?) mode works and the DU11's
doesn't, but that's the only difference I can see and I don't see how
it matters for DDCMP use. And yet, DECnet/RSX has different names for
them, as if it's using separate drivers for each.
If DECnet/RSX uses different names, then it is also definitely using
different drivers.
However, who knows if the actual contents of the drivers differ, or if
they just liked having different names for them. :-)
(I can check next week. Please remind me...)
Ok. Just checked.
There are about two differences between the DU and DUV driver in DECnet under RSX.
One of them is a simple optimization for the PDT-11. It's not strictly necessary, but apparently DEC thought the time gain was enough to make it worth exploiting. Code looks like this:
.IF DF L$$SI1
MOVB @(R5)+,-(SP) ;;; COPY CHARACTER FOR WORD MOVE
MOV (SP)+,(R4) ;;; (SAVES 85 USECS ON PDT-11)
.IFF ; DF L$$SI1
MOVB @(R5)+,(R4) ;;; OUTPUT A CHARACTER
.ENDC ; DF L$$SI1
The other difference is probably more important. It appears there might be a hardware glitch on the DUV-11. Code looks like this:
.IF DF L$$SI1
INHIB$ ;;; LOCK OUT INTERRUPTS
.ENDC ; DF L$$SI1
BIC #TXINT,(R3) ;;; CLEAR TRANSMIT INTERRUPT ENABLE
.IF DF L$$SI1
ENABL$ ; ENABLE INTERRUPTS AGAIN
.ENDC ; DF L$$SI1
The only way I can understand that code is that there might be a spurious interrupt when manipulating the interrupt-enable bit on the controller.
All else in the code is the same between the two controllers, and (as you might have guessed) the source code is shared, with just a conditional selecting which version to assemble.
Johnny
On Jan 5, 2013, at 4:15 PM, Rob Jarratt wrote:
...
Actually I meant the code you added to the pdp11_dmc.c file, but the above
is still useful
diff --git a/PDP11/pdp11_dmc.c b/PDP11/pdp11_dmc.c
index dae4db0..c367e4c 100644
--- a/PDP11/pdp11_dmc.c
+++ b/PDP11/pdp11_dmc.c
@@ -809,12 +809,13 @@ void dmc_dumpregsel0(CTLR *controller, int trace_level, char * prefix, uint16 da
sim_debug(
trace_level,
controller->device,
- "%s SEL0 (0x%04x) %s%s%s%s%s%s%s%s\n",
+ "%s SEL0 (0x%04x) %s%s%s%s%s%s%s%s%s\n",
prefix,
data,
dmc_bitfld(data, SEL0_RUN_BIT, 1) ? "RUN " : "",
dmc_bitfld(data, SEL0_MCLR_BIT, 1) ? "MCLR " : "",
dmc_bitfld(data, SEL0_LU_LOOP_BIT, 1) ? "LU LOOP " : "",
+ dmc_bitfld(data, SEL0_ROMI_BIT, 1) ? "ROMI " : "",
dmc_bitfld(data, SEL0_RDI_BIT, 1) ? "RDI " : "",
dmc_bitfld(data, SEL0_DMC_IEI_BIT, 1) ? "IEI " : "",
dmc_bitfld(data, SEL0_DMC_RQI_BIT, 1) ? "RQI " : "",
@@ -2105,6 +2101,14 @@ void dmc_process_command(CTLR *controller)
{
dmc_start_input_transfer(controller);
}
+ else if (dmc_is_dmc (controller) &&
+ controller->csrs->sel0 & ROMI_MASK &&
+ controller->csrs->sel6 == DSPDSR)
+ /* DMC-11 or DMR-11, see if ROMI bit is set. If so, if SEL6 is
+ 0x22b3 (read line status instruction), set the DTR bit in SEL2. */
+ {
+ dmc_setreg (controller, 2, 0x800, 0);
+ }
}
}
diff --git a/PDP11/pdp11_dmc.h b/PDP11/pdp11_dmc.h
index bfa9104..cfacc5b 100644
--- a/PDP11/pdp11_dmc.h
+++ b/PDP11/pdp11_dmc.h
@@ -90,6 +90,7 @@ extern int32 int_req[IPL_HLVL];
#define DMC_RDYI_MASK 0x0080
#define DMC_IEI_MASK 0x0040
#define DMP_IEI_MASK 0x0001
+#define ROMI_MASK 0x0200
#define LU_LOOP_MASK 0x0800
#define MASTER_CLEAR_MASK 0x4000
#define RUN_MASK 0x8000
@@ -107,9 +108,12 @@ extern int32 int_req[IPL_HLVL];
#define LOST_DATA_MASK 0x0010
#define DISCONNECT_MASK 0x0040
+#define DSPDSR 0x22b3 /* KMC opcode to move line unit status to SEL2 */
+
#define SEL0_RUN_BIT 15
#define SEL0_MCLR_BIT 14
#define SEL0_LU_LOOP_BIT 11
+#define SEL0_ROMI_BIT 9
#define SEL0_RDI_BIT 7
#define SEL0_DMC_IEI_BIT 6
#define SEL0_DMP_IEI_BIT 0
By the way, the background for this can be found in the KMC-11 manual.
Next, once DSR has been seen on for 2 seconds, it does another master clear, then the Base In operation.
Since Master Clear needs to drop the connection, I don't think that tying DSR to the connection being alive will work. It does seem like an obvious thing to do, but that second master clear suggests it isn't the way to go.
Not sure I follow. It seems to me that the driver is using DSR just to tell it that the line is up. Of course, if we use Master Clear to close the connection then that would not work, but if we didn't close the connection then it would work. Wouldn't it?
Yes. But since drivers use MCLR to kill the DDCMP connection, you need to close the connection on MCLR, otherwise the cleanup on DDCMP restart isn't emulated.
...
Ok, but then why did I see a "Discarding received data while controller is not running" message? That's the problem, I believe.
I don't often set the level of tracing that would give this message, but even when I have, I have not seen it. It happens if the receiving end has not made any buffers available yet. There is not enough protocol between the two ends to handle this correctly; we would probably have to implement DDCMP. I would argue that in practice it does not have a significant effect: circuits do get established, and DECnet can operate over the link without seeing any errors, at least on VMS; I am not sure about other OSs, which may implement their drivers differently. I would have to test a bit more to be fully sure of all this, though, as I usually start both ends at more or less the same time; it might happen if there is a significant delay between starting DECnet at each end.
DDCMP links aren't supposed to lose packets. As it happens, it looks like the DECnet machinery does recover from such a packet loss, but still, it's not expected behavior.
paul