Hi list,
I have opened a notes conference named MAINFRAMES to talk about... IBM mainframes (yeah, I know, I really outdid myself on the originality of the conference name). Everyone interested is invited to join. Just point your NOTES client to BITXOW:: and you will be set (unless I have done something wrong).
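For example, from a VMS system with DECnet access to the node (VAX Notes syntax from memory; the exact commands may vary by Notes version):

```
$ NOTES
Notes> OPEN BITXOW::MAINFRAMES
Notes> ADD ENTRY BITXOW::MAINFRAMES   ! add it to your notebook for next time
```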
On Thursday, March 21, 2013 at 12:55 PM, Mark Pizzolato wrote:
On Thursday, March 21, 2013 at 12:40 PM, Cory Smelosky wrote:
On 2013-03-19 23:38, Cory Smelosky wrote:
I am now having an issue...
%SYSTEM-F-ABORT, abort, fatal hardware error
%NCP-W-UNRCMP, Unrecognized component, Circuit
Circuit = QNA-0
The xq device is attached correctly, and it works in other OSes...but
OpenVMS seems to not see it.
Device XQA0:, device type unknown, is online, network device, device is a template only.
    Error count                5    Operations completed            0
    Owner process             ""    Owner UIC                [SYSTEM]
    Owner process ID    00000000    Dev Prot        S:RWPL,O:RWPL,G,W
    Reference count            0    Default buffer size           512
Please send along the configuration file you are booting the simh vax
instance with AND the output of:
sim> SHOW VERSION
sim> SHOW ETHERNET
We can take this offline if you want...
Please also send what is output when you initially invoke the simh vax instance (before the boot command).
Thanks.
- Mark
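For anyone following along, a minimal sketch of the kind of configuration file being requested might look like the following (the disk image name and the host NIC name `eth0` are assumptions; adjust to your setup):

```
; vax.ini sketch -- disk image name and host NIC (eth0) are assumptions
set cpu 64m
set rq0 ra82
attach rq0 vms.dsk
set xq enable
set xq type=DEQNA    ; a DEQNA shows up in VMS DECnet as circuit QNA-0
attach xq eth0       ; or tap:tap0 / nat:, depending on the host
boot cpu
```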
On 2013-03-19 23:38, Cory Smelosky wrote:
On 19 Mar 2013, at 23:37, "Dave McGuire" <mcguire at neurotica.com> wrote:
On 03/19/2013 11:03 PM, Cory Smelosky wrote:
I've gotten OpenSXCE installed and I have managed to get zones to work. It took a
little bit of effort and a lot of time but I have done it.
Nice work. You really should document how you did it.
Thank you. I only had to modify one file to make it work; it was a surprisingly simple fix. I will definitely document it if my next task succeeds. I'm going to do a bizarre chain: starting at OpenSolaris build 134, jumping to the experimental OpenIndiana 150 for SPARC via IPS, then creating a zone there on my zones zpool to use as a template. I will then detach that zone, go back to OpenSXCE, and clone it for use with the real zones. Having 3 working drives makes this quick and safe. ;)
If all of this works, I can share my templates with you if you'd like.
Yow. You have absorbed Solaris amazingly quickly.
I can pick things up quite quickly if I put my mind to it. ;)
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA
I am now having an issue...
%SYSTEM-F-ABORT, abort, fatal hardware error
%NCP-W-UNRCMP, Unrecognized component, Circuit
Circuit = QNA-0
The xq device is attached correctly, and it works in other OSes...but OpenVMS seems to not see it.
Device XQA0:, device type unknown, is online, network device, device is a
template only.
Error count 5 Operations completed 0
Owner process "" Owner UIC [SYSTEM]
Owner process ID 00000000 Dev Prot S:RWPL,O:RWPL,G,W
Reference count 0 Default buffer size 512
Besides that, OpenVMS doesn't show it anywhere: multinet doesn't come up on it and SHO KNOWN CIR doesn't list the circuit.
I can't test 3.9-0 as it doesn't build, so I'm limited to the latest git. ;) Is this a bug or am I doing something wrong? ;)
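For reference, the usual way to check whether DECnet's permanent and volatile databases know about the circuit at all is via NCP (standard DECnet Phase IV management commands, not specific to this setup):

```
$ MCR NCP
NCP> LIST KNOWN CIRCUITS        ! permanent database
NCP> SHOW KNOWN CIRCUITS        ! volatile database
NCP> EXIT
$ @SYS$MANAGER:NETCONFIG.COM    ! rebuild the databases if the circuit is missing
```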
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Experiments
On Thu, Mar 21, 2013 at 9:20 AM, Johnny Billquist <bqt at softjar.se> wrote:
They call it a "hybrid", whatever that might mean
A performance hack is what it means.
Ok, I was a tad sloppy and skipped some steps for brevity - my apologies.
CMU created Mach by starting with Accent, the Pascal-based ukernel from the CMU SPICE project, and marrying it with BSD UNIX to bring the user space along. Accent is a true ukernel, written for the "Pascalto" (aka the Triple Drip Perq). CMU rewrote the portions they wanted into C and grafted them onto BSD 4.1, ripping out primarily the memory and tasking systems; the widely released result, after the BSD 4.2 upgrade, became Mach 2.5.
So yes, the API is BSD 4.2 plus the Mach calls. The messaging, tasking and memory support is all CMU (modeled/based on Accent) and supports the communications concept of a "port" - which is a distributed message capability. The drivers were BSD, and it originally ran on the VAX. The BSD 4.2 command system was also the basis for the distribution. I do not remember when the "BSD NET2" code replaced the BSD 4.2 code - but for the purposes of this discussion I will call them the same [grab me offline if you really want to know the differences].
The key is that CMU envisioned Accent and Mach (and for that matter Accent's forerunner, RIG) as distributed OSes, which is why the ukernel idea was important. ukernels are message based, and message-based systems are >>much<< more natural for NORMA (no remote memory access) "multicomputers." But many people (like me) felt that ukernels were just a better way to build a kernel from a software engineering standpoint, because of the way the system is structured.
But ... Mach 2.5 was a monolithic kernel, and because of the BSD basis it had support for many different architecture families, from VAX to 68K, NS32000, i386, etc., and thus was able to migrate from CMU to places like NeXT, OSF et al.
At the time of the Mach 2.5 release, CMU had started work on a pure ukernel called Mach 3.0, which was not completed at CMU, but rather at the OSF/RI and a number of partners - including my then employer, Locus Computing Corp, who worked as kernel hackers for hire for most of the major players [DEC, Sun, HP, IBM, or in the case of OSF/1 - Intel].
At OSF and friends, a lot of time and study went into the performance issues of a pure ukernel vs. the monolithic kernel (using the 2.5- and 3.0-based kernels on a 386). The hybrid approach is a hack that allows some of the system "servers" to remain bound into the address space of the kernel, so that two messages are not needed for those system calls.
The "hybrid" idea came from the fact that in the 3.0 ukernel, the memory support is still part of the kernel itself (in many ukernels, like the granddaddy of them all - Dijkstra's THE - the memory support is also a "server layer" in "user" space). In fact, the Mach 3.0 "micro kernel" was about 1-1.5 megabytes on a 386, which is hardly "micro" (at the time people were proposing a "nano-kernel" to solve that).
So the hybrid kernel allows the kernel to behave like a pure ukernel when desired, but still allows the performance benefits of a monolithic kernel. Of course, because it's not a pure ukernel, it suffers all of the security issues that monolithic kernels have, along with their greater difficulty of distributing the kernel across multiprocessors.
Clem
On 2013-03-21 14:02, Clem Cole wrote:
That said, it amazes me that today Linus still rejects the idea of a
ukernel. The benefits are so much better than the cost, and tricks
like universes are not needed and IMO easier to manage. To this day, if
I have to deal with Winders, I install whatever MSFT calls "SFU" these
days - which is the posix system call layer and Unix utilities for the
NT ukernel.
As much as I am a fan of what Linux has done for the market (because I'm
basically a UNIX junkie at heart), I personally think that it is
interesting to note that the most popular OSs in total installed base
(Winders, MacOS and iOS) are based on uKernels and only Linux is a hold
out.
Can't comment much about Windows (I know way too little), but OS X is not really a microkernel either. They call it a "hybrid", whatever that might mean. OS X is cruft on top of Darwin. Darwin has a BSD API. Darwin in turn sits on top of XNU. XNU is parts of Mach, and parts of (Free)BSD. All the BSDs are very monolithic.
http://en.wikipedia.org/wiki/XNU
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On Wed, Mar 20, 2013 at 6:16 PM, Bill Pechter <pechter at gmail.com> wrote:
I'd kill to see the sources for dual universe Unix so I could look to see about implementing a BSD/Linux dual universe clone... I'm mostly a sysadmin -- but I'm crazy.
It's not that hard. I wrote that code in a weekend for Masscomp. It's one system call to set it, and then a number of places in the kernel to use it. Send me email offline, and I'll explain in more detail if you like. I did it on a bet with Gourd to prove it could be done. Then Bonnie Johnson picked up on it and showed it to the late Lorin Gale (VP of Eng). We sold it to marketing as a solution to the BSD vs. AT&T "UNIX Wars" that was going around.
As you said, Pyramid took it way farther than we did. We looked at it as a crutch, so we could bring code in from the other flavor.
I think it's interesting that HP took the attitude of keeping their BSD-based kernel (like we had at Masscomp) and splicing System V on top of it, including using the System V command system, and then claiming they were System V. Sun would try to mate with AT&T, and we all know how well that worked ;-)
Truth is, there are better ways to do that these days than the idea of universes. While microkernels did exist in the research community at the time, they were nascent. I know tjt and I were aware of them, but using a ukernel was not a popular way to build systems in those days. Particularly since we were worried about real time, I suspect we rejected it because we would have thought the extra code path / system delay to be detrimental. Remember, RSX, RT11 and VMS were Masscomp's "competition."
That said, it amazes me that today Linus still rejects the idea of a ukernel. The benefits are so much better than the cost, and tricks like universes are not needed and IMO easier to manage. To this day, if I have to deal with Winders, I install whatever MSFT calls "SFU" these days - which is the posix system call layer and Unix utilities for the NT ukernel.
As much as I am a fan of what Linux has done for the market (because I'm basically a UNIX junkie at heart), I personally think that it is interesting to note that the most popular OSs in total installed base (Winders, MacOS and iOS) are based on uKernels and only Linux is a hold out.
BTW: It's been done by others, but if you want something cool to try, take Darwin (which is a FreeBSD/Mach blend and what both iOS and MacOS sit on), and build a Linux "personality."
A couple of years ago, a few of us were toying with trying to splice a VMS system call layer on top of Darwin (you'd pick up all the support for the x86 family for free, and you'd get compilers and tools). But one of the people involved soured a number of us when said party would not listen to the rest of us. After a number of us dropped out, I think the idea fell apart.
Clem
On 21 Mar 2013, at 00:16, Bill Pechter <pechter at gmail.com> wrote:
The sick part was the 3 UUCP variants that could be configured to talk to each other like they were separate machines.
That sounds AWESOME. I'd love to have played with that.
sampsa
>> What happened to Masscomp?
>
> One of the Drexel Burnham Lambert leveraged buy-outs of the late 1980s, Milken et al.
> The guppy swallowed a whale. DBL organized a leveraged buy-out of Perkin-Elmer's computer division to create Concurrent Computer Corp (ticker: CCUR). Masscomp was actually the surviving legal entity, and actually the surviving technology, but the PE guys were clueless and they were the surviving management team. Funny part is CCUR still exists.
>
> Clem
Actually, having been there at CCUR at the time, the mess that was caused by the merger was amazing.
Concurrent had been around for two years when they merged with (swallowed) Masscomp.
I was told Concurrent thought they could dump the existing manufacturing in Westford -- because they used to build legacy Perkin-Elmer 7350 boxes (IIRC). Those were small UniPlus systems based on the 68000, with no virtual memory paging and a limited number of options.
They didn't understand the product, the diagnostics, or the manufacturing processes. ECOs were not always documented in the stuff CCUR got. As an old DEC guy, I was at least familiar with the diagnostic supervisor and such. Masscomp was very DEC-like in the diags. So was Alliant when I was there.
The guys at Masscomp found other jobs, and ECO changes never got transferred in the knowledge transfer.
Training at Concurrent didn't have much understanding of the hardware. By the time they got manufacturing up, they had lost too much time and the PC platform had begun to be dominant. I think they were delayed in shipping some models by about a quarter.
The only thing keeping them afloat in the early 90's was the use of OS/32 boxes (old Perkin-Elmer 32xx's) for various military and security uses. Their non-military uses were aircraft simulators and industrial control stuff. Both of those had VAX boxes as competitors, and the 68k stuff was also moving into that space.
I remember the folks pushing the 32xx iron plugged their fast context switch time vs. DEC.
I seem to remember their high-end box when I left in 1988 was the 68030, with the 68040 being new.
When I came back in 1992 they were looking at PowerPC.
I kept saying they should port the stuff to x86, make RTU a FreeBSD-based OS, get out of the hardware business, and just do Real-Time Unix software. They didn't see that they needed someone with cheaper costs doing the assembly and design -- so they could concentrate on the high-end value-add controllers and software.
Embedded stuff kept getting cheaper and smaller and the RTU hardware was expensive.
Harris split their computer division into a defense/security piece and a commercial piece. Concurrent was the surviving legal entity for the Harris-CCUR merger. Harris was a big RTU OEM and reseller to the government... I think they even had sources to RTU.
I left for Pyramid Technology training, only to come back when AT&T dropped Pyramid in the NCR deal and half the Pyramid business went bye-bye. When I tell people I know OSx they think OS X.
Masscomp was a good place to learn about dual-universe Unix. When I hit Pyramid I went to town.
Not only did they do dual libraries -- they did two versions of every command -- the UCB universe version and the ATT version.
The sick part was the 3 UUCP variants that could be configured to talk to each other like they were separate machines.
I'd kill to see the sources for dual universe Unix so I could look to see about implementing a BSD/Linux dual universe clone... I'm mostly a sysadmin -- but I'm crazy.
A lot of Unix guys tell me how ugly dual universe is -- but I actually liked it.
bill