Hello Keith,
Indeed, there is a way to make the white AlphaServers run VMS. The difference between the servers themselves is minimal, but sometimes the adapters and peripherals in the white Alphas are different and don't work with VMS, so you should check which adapters are in the box. Examples of adapters that don't work are Adaptec SCSI adapters and various graphics adapters.
You need to enable the SRM firmware instead of the AlphaBIOS and then create a short script which is run at boot time. If you want to do it I can send you the detailed instructions.
Kari
-------- Original message --------
From: Keith Halewood <Keith.Halewood at pitbulluk.org>
Date: 10/8/20 14:00 (GMT+02:00)
To: hecnet at Update.UU.SE
Subject: [HECnet] VMS on AlphaServer 5000
Hi,
I've trawled through HECnet emails for enlightenment around Alpha systems running VMS and I believe I'm not repeating the question. So here goes.

There's a DEC AlphaServer 5000 (white case) (EV56) floating a few miles away from where I live. It's running NT, so there's an AlphaBIOS. Is there a 'hidden' SRM in that machine, and is it therefore able to run current (HPE at least) versions of VMS? I don't want to end up with a machine running NT - my life has enough horror in it already. I had a look on HP's FTP site at firmware updates and the AlphaServer 5000 isn't even mentioned, or at least not in plain text.

I nearly bought a reasonably loaded Itanic RX2600 a few days ago but chickened out at the last minute. What I really should do is abandon all of this and buy a new motorbike.
Keith
Gentlepeople,
I committed rev 560, which may be of interest to a number of you.
It's basically a change of internal machinery, with no functional changes. The main items are:
1. Restructured the point-to-point (DDCMP and Multinet) implementations. Error recovery is much cleaner now and should be faster. Retry for persistent problems has a holdoff that starts out pretty fast and slows down to (typically) one retry per two minutes (see the first sketch after this list).
2. Ethernet in PCAP mode now uses the PCAP filter mechanism to request only the frames we want. This means the Python code no longer spends a bunch of CPU time tossing away non-DECnet frames. If your machine runs other traffic in significant quantities, as mine does, this can be quite useful (see the second sketch after this list).
3. Improved the performance of packet logging.
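(First sketch: this is not the actual PyDECnet source, just a minimal Python illustration of the kind of capped holdoff described in item 1; the function name and delay values are invented.)

    import time

    def reconnect_with_holdoff(connect, first_delay=1.0, max_delay=120.0):
        # Keep calling connect() until it succeeds.  The holdoff between
        # attempts starts short and doubles after each failure, but is
        # capped so a persistent problem settles at roughly one retry
        # per two minutes.
        delay = first_delay
        while not connect():
            time.sleep(delay)
            delay = min(delay * 2, max_delay)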
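(Second sketch: PyDECnet has its own pcap binding, so this only illustrates the filtering idea in item 2 using the third-party pcapy module; the interface name and the Phase IV routing Ethertype 0x6003 are assumptions on my part.)

    import pcapy

    # Open the interface and install a BPF filter so the kernel hands us
    # only DECnet routing frames (Ethertype 0x6003) instead of everything.
    cap = pcapy.open_live("eth0", 1518, 1, 100)   # iface, snaplen, promisc, timeout ms
    cap.setfilter("ether proto 0x6003")

    header, frame = cap.next()   # frames arrive pre-filtered; no Python-side discarding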
This code has been running on PYTHON for several days now without problems (except for one bug that has been fixed). You're invited to give it a try; as always, problem reports or other comments are welcome.
paul
Hi, guys!
I'm trying to boot the latest SIMH (4.0-current) as a LAVC satellite node
over a link with about 100 ms of latency.
The boot node is VAX/VMS 5.5.2 with the latest PEDRIVER ECO.
Result: the MOP part of the boot sequence works without a hitch, but the
SCS part fails miserably.
The most frequent result: the SIMH console fills with 10
"%VAXcluster-W-NOCONN, No connection to disk server" messages, then halts with
"%VAXcluster-F-CTRLERR, boot driver virtual circuit to SCSSYSTEM 0000
Failed"
Sometimes it gets a little further:
...
%VAXcluster-W-RETRY, Attempting to reconnect to a disk server
%VAXcluster-W-NOCONN, No connection to disk server
%VAXcluster-W-RETRY, Attempting to reconnect to a disk server
%VAXcluster-W-NOCONN, No connection to disk server VULCAN
%VAXcluster-W-RETRY, Attempting to reconnect to a disk server
%VAXcluster-W-NOCONN, No connection to disk server
%VAXcluster-W-RETRY, Attempting to reconnect to a disk server
%VAXcluster-I-CONN, Connected to disk server VULCAN
%VAXcluster-W-NOCONN, No connection to disk server VULCAN
%VAXcluster-W-RETRY, Attempting to reconnect to a disk server
...
It then halts after a minute or so of filling the console with those messages.
When I set up throttling in SIMH to 2500K ops/s, the node boots successfully,
joins the cluster, and works flawlessly, but slowly.
The boot process takes about half an hour. After boot, changing the throttle
value to 3500K ops/s still works.
Increasing the throttle value further breaks the system, with the same
messages about the disk server.
Throttled SIMH performance is about 5 VUPS.
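(For reference, the throttle settings above are plain SIMH console or command-file commands; the numbers are simply the ones reported here, not a recommendation.)

    ; limit the emulated CPU so the satellite boot succeeds (slowly)
    set throttle 2500k
    ; after the node has joined the cluster, 3500k is still stable:
    ; set throttle 3500k
    ; "set nothrottle" removes the limit and brings the NOCONN failures back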
The only information about maximum channel latency restrictions that I found,
in the "Guidelines for OpenVMS Cluster Configurations" manual, is this:
"When an FDDI is used for OpenVMS Cluster communications, the ring latency
when the FDDI ring is idle should not exceed 400 ms."
So I suppose that a 100 ms latency link should be good enough for booting
satellite nodes over it.
My understanding of the situation is that the combination of
PEDRIVER/[PEBTDRIVER within NISCS_LOAD] with fast hardware
and slow links is the primary reason for this behavior. Please correct me if
I'm wrong.
Does anyone have experience with booting VMS clusters over slow links? Any OS
version recommendations?
Perhaps some VMS tunable variables exist for making PEDRIVER happy on
fast hardware?
Having the PEDRIVER listings could shed light on this buggy PEDRIVER behavior.
Link details:
Two Cisco 1861 routers, connected to the Internet via ADSL on one side and 3G
HSDPA on the other.
TCP/IP between the sites is routed over an IPsec site-to-site VPN. Ping between
the sites is about 100 ms.
On top of that VPN, a DECnet-family (eth.type = 0x6000..0x600F) bridge is
carried over an L2TPv3 VPN.
--
BR, Vladimir Machulsky
Gentlepeople,
Currently the details of what PyDECnet circuits connect to are not displayed. So you can see that a Multinet circuit is up and the other end is node 42.73, but you don't see the IP addresses or the like.
When things are working that's fine; when they are broken it might be helpful to see what something is trying to talk to.
On the other hand, hiding IP addresses is arguably a security feature. So I have this question:
1. Should the addressing info (basically, what's in the --device config argument) be shown in the PyDECnet web interface?
2. Should the addressing info be visible via NCP / NML?
The difference is that #1 can be limited to local access only, if you use an internal address for the web service. That's what I do for my nodes except for the mapper, though perhaps there isn't a strong argument for being so restrictive. #2, on the other hand, is visible to all HECnet users, assuming you haven't disabled NML in your config settings.
I'd be interested in comments. Am I too concerned about hiding information, or is it sensible to be cautious?
paul
I previously created a Github repository for various DEC things, including updated DECnet/E utilities. I thought that the RSTS patches I had posted in the past were there also, but that wasn't the case.
I've added a "patches" subdirectory, which contains the patches I have collected. I just added a new one, which fixes a bug encountered when running SIMH set to be an 11/94. In that case (and possibly some other similar variations) RSTS tries to figure out the line frequency and gets it wrong because SIMH executes much faster.
https://github.com/pkoning2/decstuff is the repository.
paul
The patches I posted are mostly for both. The kdj11e.cmd patch is for a problem seen on SIMH that probably can't happen on real hardware. The nsp1.pat file Tony mentioned is certainly more likely on SIMH but I suspect could happen on real hardware also. In any case, none of them will create problems on real hardware.
paul
> On Sep 15, 2020, at 9:20 AM, W2HX via cctalk <cctalk at classiccmp.org> wrote:
>
> Are these patches discussed below only for patching SIMH to fix problems with it? Or are these fixes that are for actual PDP hardware implementations of RSTS?
>
> -----Original Message-----
> From: cctalk <cctalk-bounces at classiccmp.org> On Behalf Of Tony Nicholson via cctalk
> Sent: Monday, September 14, 2020 6:10 PM
> To: hecnet at update.uu.se
> Cc: cctalk at classiccmp.org
> Subject: Re: [HECnet] RSTS/E patches
>
> On Tue, Sep 15, 2020 at 7:28 AM Paul Koning <paulkoning at comcast.net> wrote:
>
>> I previously created a Github repository for various DEC things,
>> including updated DECnet/E utilities. I thought that the RSTS patches
>> I had posted in the past were there also, but that wasn't the case.
>>
>> I've added a "patches" subdirectory, which contains the patches I have
>> collected. I just added a new one, which fixes a bug encountered when
>> running SIMH set to be an 11/94. In that case (and possibly some
>> other similar variations) RSTS tries to figure out the line frequency
>> and gets it wrong because SIMH executes much faster.
>>
>> https://github.com/pkoning2/decstuff is the repository.
>>
>> paul
>>
>>
> Thanks for this Paul.
>
> There's also your NSP1.PAT patch to improve data flow using RSTS/E V10.1 under SIMH (posted to the SIMH mailing list in May 2016).
>
> You'll find it and the NSP1.TXT describing it in my repository at
>
> https://github.com/agn453/RSTS-E
>
> in the "decnete" subdirectory.
>
> I've recently joined HECnet and will be making some of my updates available soon.
>
> Tony
>
> --
> Tony Nicholson <tony.nicholson at computer.org>
I've just thrown together the RSTS/E updates mentioned in my previous
message into the DECnet default account on HECnet node DINGO:: (35.619)
The file DINGO::FILES.TXT contains the following details (and will be added
to as I make available more files).
=== FILES.TXT ===
These are the files available from HECnet node DINGO:: (a SIMH
PDP-11/70 emulation running RSTS/E V10.1 on a Raspberry Pi 3B)
in the Default DECnet account (no password required).
There's also a GitHub repository at https://github.com/agn453/RSTS-E
where you can get further RSTS/E related information (with a link
to Paul Koning's repository too).
Name .Typ Size Prot Access Date Time Clu RTS
DINGO::
Patch for improved Ethernet throughput using DECnet/E V4.1
on RSTS/E V10.1
NSP1 .PAT 2 < 62> 03-May-16 03-May-16 08:38 AM 64 ...RSX
NSP1 .TXT 9 < 62> 19-Sep-16 19-Sep-16 02:29 PM 64 ...RSX
Year 2003 update for FIT (File Interchange Transfer program)
FIT .TSK 92C < 62> 03-Jul-20 03-Jul-20 02:09 PM 64 RSX
FIT .DIF 5 < 62> 02-Jul-20 02-Jul-20 07:49 AM 64 ...RSX
FITBLD.COM 2 < 62> 03-Jul-20 03-Jul-20 08:09 AM 64 DCL
This file
FILES .TXT 2 < 60> 15-Sep-20 15-Sep-20 09:31 AM 64 RT11
A directory of Kermit-11 T3.63 files (with updates to 3-Oct-2006) available
from DINGO::KERMIT:
KERMIT.TXT 35 < 60> 15-Sep-20 15-Sep-20 09:32 AM 64 RT11
Please report any problems to me via e-mail to either
tony.nicholson at computer.org or (HECnet) FAUNA::TONY
--
Tony Nicholson <tony.nicholson at computer.org>
Peter Lothberg <roll at stupi.com> writes:
>Yes, it says DEC on some and Compaq on some. The DS20 diagnostics and
>microcode update CD works on it.
What does it say in the SRM?
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG
I speak to machines with the voice of humanity.