On May 27, 2014, at 10:35 AM, Bob Armstrong <bob at jfcl.com> wrote:
On 2014-05-27 07:07, Paul_Koning at Dell.com wrote:
In other words, the listen timeout for
Ethernet is 3 * hello time, for point to point it is 2 * hello time.
But can you clarify who is dropping who (i.e. which end is timing out)? The message on LEGATO says "dropped by adjacent node" not "dropping adjacent node", which makes it sound as if MIM is timing out, not LEGATO (and then somehow telling LEGATO that, which is another mystery).
Or is this just a poorly worded message?
No, the message is very precise.
If you see a message on LEGATO which reports adjacency MIM down for that reason, it means LEGATO received an Ethernet router hello message from MIM that no longer lists LEGATO as one of the routers that MIM can see.
MIM will have a reason for not reporting LEGATO any longer. It should have logged a message stating that reason.
paul
When we were chasing another problem, it was found that packets sometimes "disappear" in the Ethernet fabric at Update.
Maybe Johnny can make a small map of the current topology with all the involved components and the LAN speeds?
--P
On May 27, 2014, at 10:48 AM, <Paul_Koning at Dell.com> <Paul_Koning at Dell.com> wrote:
On May 27, 2014, at 10:35 AM, Bob Armstrong <bob at jfcl.com> wrote:
On 2014-05-27 07:07, Paul_Koning at Dell.com wrote:
In other words, the listen timeout for
Ethernet is 3 * hello time, for point to point it is 2 * hello time.
But can you clarify who is dropping who (i.e. which end is timing out)? The message on LEGATO says "dropped by adjacent node" not "dropping adjacent node", which makes it sound as if MIM is timing out, not LEGATO (and then somehow telling LEGATO that, which is another mystery).
Or is this just a poorly worded message?
No, the message is very precise.
If you see a message on LEGATO which reports adjacency MIM down for that reason, it means LEGATO received an Ethernet router hello message from MIM that no longer lists LEGATO as one of the routers that MIM can see.
MIM will have a reason for not reporting LEGATO any longer. It should have logged a message stating that reason.
Let me spell out the sequence of protocol exchanges a bit; that may help make this clear.
Suppose I have an Ethernet with two routers on it (and nothing else, to keep it simple).
1. Start A. It will periodically send router hello messages with an empty R/S List (see Phase IV routing spec, page 92).
2. Start B. It will send out a router hello message with an empty R/S List.
3. A receives that hello. It adds B to the list of routers it has heard from. It sends out (immediately, usually) a router hello with one R/S List entry: B, NOT known to be 2-way.
4. B receives that hello from A. It adds A to the list of routers it has heard from. It adds A to the R/S List, known 2-way (because A says it has heard B). It sends that updated hello. It also generates an Adjacency Up for A.
5. A receives that hello from B. It sees itself mentioned in the R/S List, so it changes its own R/S list entry for B to say that it now has 2-way connectivity. It sends out that updated router hello, and generates an Adjacency Up event for B.
So the key thing here is that the adjacency with B is not up at A unless A hears from B that B can hear A. When that stops being true, you get the dropped event.
paul
On May 25, 2014, at 10:20 PM, Johnny Billquist <bqt at softjar.se> wrote:
On 2014-05-26 04:17, Johnny Billquist wrote:
On 2014-05-26 03:47, Bob Armstrong wrote:
Increased it to 32, ...
That's probably a good idea, but I don't actually think that's the problem. Like I said, I see this happening with several nodes, all on the bridge QNA - MIM, PONDUS, A5RTR, SGC, etc.
Yeah. I checked some more, and I actually only have 18 adjacent nodes at MIM::, so this is definitely not the problem. From my point of view, it only seems to be LEGATO:: that is currently acting like a yo-yo...
I would guess that either some packets are lost, or else the packet trip times vary a *lot*. We should investigate more, but it's really getting late for me...
Varying round trip time itself doesn't matter. Routing layer hellos are sent out periodically; the round trip time is not considered. What matters is that they are delivered reliably enough. For Ethernet, two lost packets are allowed; for point to point links, only one. (Beware of braindead tunnel protocols that look like point to point links but run over UDP.) In other words, the listen timeout for Ethernet is 3 * hello time, for point to point it is 2 * hello time.
paul
On May 25, 2014, at 9:33 PM, Johnny Billquist <bqt at softjar.se> wrote:
On 2014-05-26 03:06, Bob Armstrong wrote:
Do you have a max routers set too low on your machine maybe?
We have quite a few routers on the bridge segment.
$ NCP SHOW EXEC CHAR
...
Max broadcast nonrouters = 512
Max broadcast routers = 128
...
$ NCP TELL MIM SHOW EXEC CHAR
...
Max broadcast nonrouters = 64
Max broadcast routers = 20
...
Maybe it's too low on MIM?? The message actually makes it sound like MIM
is dropping LEGATO, not the other way around.
Could be... In fact you are probably right. Just checked, and MIM has a max of 20 right now, which I believe is too low here. Increased it to 32, but I need to reboot for it to take effect...
In DECnet, we do not assume that "I can hear A" means "A can hear me". Instead, the protocol explicitly tests for that (in the case of routers). If the test fails, you don't get an adjacency. If there was an adjacency before, and then the test fails, that adjacency goes down.
The way this is done is that the Ethernet router hello message contains a list of routers the sender has heard. If a router doesn't see itself listed, it doesn't bring up the adjacency with the sender of that router hello. If it was there and goes away, you get the "dropped by adjacent node" event.
To find out why the adjacent router stopped mentioning you, you need to look in its event log. If the reason is "too many routers", there should be an event that says so. If the reason is something else, the event should say what it is. For example, it might be "adjacency listener timeout", meaning no hello messages were seen in 3 * hello time.
paul
Sent from mobile device that advertises itself for no good reason
On 26 May 2014, at 08:48, "Jerome H. Fine" <jhfinedp3k at compsys.to> wrote:
Cory Smelosky wrote:
On Mon, 19 May 2014, Jerome H. Fine wrote:
Since you are obviously using either V05.06 or V05.07
Yup. 5.07. It has Mentec branding!
(only the last two versions of RT-11 have the RT11ZM
Monitor), you can use the VRUN command to support
giving LINK all 64 KB of memory. Naturally, you
will probably need at least 256 KB of total physical
memory on even a PDP-11/23 (to run RT11XM as well as
to provide the needed extended memory to provide
LINK with the full 64 KB to run in) although 128 KB
might do in a pinch depending on which device you use
for the system device.
Thanks. I'll look into VRUN.
Any good results (or bad) to report?
Got sidetracked with other projects.
How much physical memory do you have?
256 kilo words. Had more but half the board is bad. :(
Jerome Fine
[ Summary : File transfers between two simh PDPs hang,
DECnet reports Data overruns and Response timeouts ]
At 2:14 AM +0200 26/5/14, Johnny Billquist wrote:
This is a problem inside of DECnet on the simulated host. It gets packets faster than it can process them, so some packets are dropped.
Unfortunately DECnet deals very badly with systematic packet loss like this. You get retransmissions, and after a while the retransmission timeout backs off until you have more than a minute between retransmission attempts.
Anyway, if you can get simh to throttle the ethernet interface, that might help you. (I don't remember offhand if it supports such functionality.)
The service polling timer can be adjusted:
SET XQ POLL={DEFAULT|4..2500}
Set to 100 by default.
Changing the polling timer makes a huge difference. Have a look at:
http://pastebin.com/AZ1U6bh3
Although it still hangs sometimes, reliability has vastly improved over the erratic behavior at the beginning. Remember, the completion time was about 3 minutes.
We're almost there :)
This turns into an interesting challenge: tune the XQ service timer to minimize overruns. This depends on many factors, among them the data sink bandwidth.
You may have flawless copy to TI:, but it will fail to disk. The terminal is actually throttling the transfer. Disks are faster, and emulated disks are orders of magnitude faster than the original ones. Emulation is pushing DECnet to speeds it was never designed for.
I'm running here as low as 10 polls/sec. Maybe 50 would be optimal, and what about 500? I need metrics. And tools. Here, I am using AT. to time a 100-block file transfer. Overruns and timeouts still rise slowly, but DECnet recovers happily most of the time.
--
Jean-Yves Bernier
On 2014-05-26 02:52, Paul_Koning at Dell.com wrote:
On May 24, 2014, at 10:15 AM, Johnny Billquist <bqt at softjar.se> wrote:
...
Do I need to spell it out? :-)
The hardware address is the address the card has from the factory. The physical address is the address the software has programmed the card to have. Since DECnet uses specific addresses, the address is changed from the hardware address, since you do not want/need that when running DECnet.
Not necessarily exactly that way.
Well, we are talking about old hardware here, Paul... :-)
The hardware address is the default physical address. It is supposed to be globally unique (not just unique on each LAN). If you have virtual devices, like in SIMH, chances are you're responsible for this (you're in essence the manufacturer). Pedantically, if you administer MAC addresses, they should be from the locally administered address space, i.e., second bit set in the 1st byte. In practice that doesn't matter, but it avoids conflict with real hardware addresses.
Right. But you will be really unlucky if you manage to hit an address that you also happen to have some real hardware using, unless you explicitly set it so. But even more, once DECnet starts up, it becomes irrelevant again, since DECnet changes the MAC address, and does not even consider retaining the ability to use the original MAC address.
DECnet Phase IV uses a physical address it supplies rather than the default. Other protocols (including DECnet Phase V) don't. If your NIC type (or its driver) supports only a single physical address, the physical address changes for all protocols when you turn on DECnet Phase IV. That's why you have to turn on DECnet before LAT.
Right.
However... if your NIC and driver allow per-protocol physical addresses, then only DECnet Phase IV uses the aa-00-04-00 address and the others continue to use the hardware address. For such systems, you have to be careful that the hardware address is unique even if DECnet is used.
DECnet on neither the PDP-11 nor the VAX tries any such tricks. They just assume you will only have one hardware address per interface, set it to what DECnet thinks it should be, and that's it. I have not checked Alpha, but I suspect it never does such a thing either. And I know that DECnet under Linux doesn't play this way either (or didn't, last I looked). So while you are right that it could do this in theory, it is not done by anything as far as I know, and the additional complexity without any real gain outweighs the potential use of such a behavior.
Most newer DEC NICs (Tulip and beyond) support multiple physical addresses, as does QNA. UNA and LANCE do not. Whether a particular OS/driver implements that is another matter.
simh and other things normally use libpcap, which does not add extra addresses but just puts the device into promiscuous mode and then deals with it in software.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
At 2:14 AM +0200 26/5/14, Johnny Billquist wrote:
Aha. So it is 10.2 that gets timeouts when sending packets to 10.1. So 10.1 is dropping packets.
Good. Now, in which direction was the transfer attempted?
10.2 -> 10.1, initiated from 10.1:
10.1>NFT TI:=10.2::SOME.FILE
This is a problem inside of DECnet on the simulated host. It gets packets faster than it can process them, so some packets are dropped.
Unfortunately DECnet deals very badly with systematic packet loss like this. You get retransmissions, and after a while the retransmission timeout backs off until you have more than a minute between retransmission attempts.
I now have to find the reason for the packet loss.
Anyway, if you can get simh to throttle the ethernet interface, that might help you. (I don't remember offhand if it supports such functionality.)
The service polling timer can be adjusted:
SET XQ POLL={DEFAULT|4..2500}
Set to 100 by default.
--
Jean-Yves Bernier
On 2014-05-26 01:37, Jean-Yves Bernier wrote:
At 1:00 AM +0200 26/5/14, Johnny Billquist wrote:
Well, that is not the full story... :-)
10.1>NCP SHO NOD 10.2 COU
Node counters as of 30-MAR-82 00:21:30
Remote node = 10.2 (SNAKE)
1266 Seconds since last zeroed
272319 Bytes received
1723 Bytes sent
821 Messages received
373 Messages sent
1 Connects received
4 Connects sent
0 Response timeouts
0 Received connect resource errors
2 Node maximum logical links active
10.2>NCP SHO NOD 10.1 COU
Node counters as of 30-MAR-82 01:10:34
Remote node = 10.1 (SHARK)
4212 Seconds since last zeroed
13475 Bytes received
1382813 Bytes sent
2626 Messages received
5177 Messages sent
26 Connects received
2 Connects sent
73 Response timeouts
0 Received connect resource errors
3 Node maximum logical links active
Aha. So it is 10.2 that gets timeouts when sending packets to 10.1. So 10.1 is dropping packets.
Good. Now, in which direction was the transfer attempted?
Summary of my last experiments:
Node A & node B inside VirtualBox (Linux): that's the configuration I am running since the beginning of this discussion.
Node A & node B on the same MacMini (BSD). VirtualBox out of the loop.
=> same problem.
Node A on a MacPro, node B on the MacMini. VirtualBox out of the loop
again.
=> same problem.
Node A inside VirtualBox Linux (MacPro), node B on the MacMini (BSD).
=> same problem.
We can rule out VirtualBox.
We can rule out a Linux/BSD/pcap problem.
We can rule out running multiple nodes on the same host (MAC addresses,
etc).
Two versions of simh were used, 3.6 and 3.9, in Mac and Linux build.
Right. And I never expected any of the things listed above to be the problem to start with.
This is a problem inside of DECnet on the simulated host. It gets packets faster than it can process them, so some packets are dropped.
Unfortunately DECnet deals very badly with systematic packet loss like this. You get retransmissions, and after a while the retransmission timeout backs off until you have more than a minute between retransmission attempts.
It is a known problem.
[ Note: since simh 3.7, set console pchar=37777777777 if you want to use EDT/RMD/NTD at the console; took me an hour to figure out. ]
:-)
Anyway, if you can get simh to throttle the ethernet interface, that might help you. (I don't remember offhand if it supports such functionality.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On Mon, 26 May 2014, Johnny Billquist wrote:
That is the minimum. I hope you generated an endnode system... :-)
I did.
Well, basically, since your system complains about GEN being too fragmented, you have just managed to start a few things before DECnet tries to start up, and there is not enough contiguous memory left around for DECnet to be happy.
VNP is the third component of DECnet management, NCP and CFE being the other two.
VNP is used to configure networking in the actual system image file, so that it is already installed and in order when you boot. That means no other stuff has yet started, and you have more free memory to play with, so VNP can probably set you up the same way NETINS does at runtime.
Ahhh.
So, you should go over what NETINS.CMD does, and do the same thing in VNP instead, with the exception that some things cannot be done in VNP. Those steps need to be left in a modified NETINS.CMD...
But VNP only understands a subset of the NCP commands. For all the installation parts, you instead need to use VMR.
Now, if you don't feel comfortable playing around with this stuff, I should warn you that you can render your system unbootable.
Well, I fiddled with pool:
set /plctl
PLCTL=896.:384.:448.:51.
set /secpol
SECPOL=400.:524.:76%
Then it came up. Closing a LAT connection also doesn't cause a crash now. ;)
Memory utilisation is 78%...but I DO intend to eventually get more memory. I'm just running with half a bad board disabled. ;)
Johnny
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
On Mon, 26 May 2014, Johnny Billquist wrote:
Ho hum. Well, how much memory do you have, and what else have you managed to start before this goes down?
256 Kwords. ;)
There are possible solutions as well. But it might require that you learn a little about VNP...
Well I'm staring at a (correct!) Manager's Guide, so I'm not opposed to that.
Johnny
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
On 2014-05-25 22:47, Cory Smelosky wrote:
Okay. Copied!
However...
NTL -- Config File -- Partition GEN Too Fragmented
PAR$DF ,GEN,15.,TOP,2.,12.
NCP -- Set failed, operation failure
Network Initializer function failed
It seems it's a bit too big. :(
Ho hum. Well, how much memory do you have, and what else have you managed to start before this goes down?
There are possible solutions as well. But it might require that you learn a little about VNP...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2014-05-25 23:05, Bob Armstrong wrote:
From the VMS log on LEGATO -
%%%%%%%%%%% OPCOM 25-MAY-2014 14:01:52.62 %%%%%%%%%%%
Message from user DECNET on LEGATO
DECnet event 4.18, adjacency down
On 2014-05-25 21:54, Jean-Yves Bernier wrote:
At 8:27 PM +0200 25/5/14, Johnny Billquist wrote:
More interesting would be to see EXEC COUNTERS, as well as counters
for the two nodes, from both sides.
Full story:
-----------------------
NCP SHO EXE COUNT
Node counters as of 30-MAR-82 00:20:53
Executor node = 10.1 (SHARK)
0 Buffer unavailable
0 Peak logical links active
0 Aged packet loss
0 Node unreachable packet loss
0 Node out-of-range packet loss
0 Oversized packet loss
0 Packet format error
0 Partial routing update loss
0 Verification reject
NCP SHO LINE QNA-0 COUNT
Line counters as of 30-MAR-82 00:20:56
Line = QNA-0
1242 Seconds since last zeroed
5309886 Bytes received
270380 Bytes sent
73284 Multicast bytes received
13435 Data blocks received
6356 Data blocks sent
240 Multicast blocks received
0 Blocks sent, single collision
0 Blocks sent, multiple collision
0 Send failure
0 Collision detect check failure
0 Receive failure
1 Unrecognized frame destination
23 Data overrun
0 System buffer unavailable
-----------------------
NCP SHO EXE COUNT
Node counters as of 30-MAR-82 00:20:56
Executor node = 10.2 (SNAKE)
0 Buffer unavailable
0 Peak logical links active
0 Aged packet loss
0 Node unreachable packet loss
0 Node out-of-range packet loss
0 Oversized packet loss
0 Packet format error
0 Partial routing update loss
0 Verification reject
NCP SHO LINE QNA-0 COUNT
Line counters as of 30-MAR-82 00:20:59
Line = QNA-0
1244 Seconds since last zeroed
359441 Bytes received
5259859 Bytes sent
73238 Multicast bytes received
6352 Data blocks received
13533 Data blocks sent
239 Multicast blocks received
0 Blocks sent, single collision
0 Blocks sent, multiple collision
0 Send failure
0 Collision detect check failure
0 Receive failure
0 Unrecognized frame destination
0 Data overrun
0 System buffer unavailable
-----------------------
10.1 runs NFT.
10.2 runs FAL.
Well, that is not the full story... :-)
Now, on 10.1:
NCP SHO NOD 10.2 COU
and on node 10.2:
NCP SHO NOD 10.1 COU
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol