The parent code base (Locus TNC) had been in BSD and Mach before. As I mentioned, it was in HP-UX, and the Intel SSD Paragon is based on the same code. Frank Mayhar was at one time working on a FreeBSD port. Tandem/HP shipped the UnixWare version, and post-merger Bruce started to work on the Linux port - hence the OpenSSI project.
Assuming a cluster file system for a uniform name space (NFS will work to start with but is not good enough in the long run), a membership service (aka CMS), and a cluster-wide internode communications service (ICS), the process code can be moved in about 2-3 months of work. Basically, you have to perform heavy surgery on the process code and add the vproc layer (think of the vfs layer that was added to support multiple file systems). Once you have the proper hooks, the process code is pretty much the same - modulo differences in memory systems.
At Locus we had it all in 14 "packages" [with two - CMS & ICS - being required]. The others were à la carte. Intel took the process technology, DEC primarily took the filing stuff, and Tandem took everything.
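The vproc layer Clem describes is, loosely, an operations vector interposed between the process-management API and its implementation, so that a pid can resolve to either a local or a remote process - the same trick the vfs layer plays for file systems. A minimal sketch of the dispatch idea in Python (every name here is illustrative; none of it is the actual Locus/OpenSSI interface):

```python
# Sketch of a vproc-style dispatch layer: callers look up a vproc by pid
# and invoke operations through it, without knowing where the process runs.

class LocalProcOps:
    """Operations backed by the local process table."""
    def signal(self, pid, sig):
        return f"local kill({pid}, {sig})"

class RemoteProcOps:
    """Operations forwarded over the cluster interconnect (ICS)."""
    def __init__(self, node):
        self.node = node
    def signal(self, pid, sig):
        return f"ICS send to node {self.node}: kill({pid}, {sig})"

class VprocTable:
    """Maps each pid to the ops vector that knows how to reach it."""
    def __init__(self):
        self._table = {}
    def register(self, pid, ops):
        self._table[pid] = ops
    def signal(self, pid, sig):
        return self._table[pid].signal(pid, sig)

vprocs = VprocTable()
vprocs.register(101, LocalProcOps())
vprocs.register(202, RemoteProcOps(node=3))
print(vprocs.signal(101, 15))  # dispatches to the local implementation
print(vprocs.signal(202, 15))  # dispatches over the (simulated) interconnect
```

The point of the indirection is exactly what Clem notes: once the hooks are in place, the rest of the process code is unchanged regardless of where the target process lives.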
Clem
On Tue, Jan 8, 2013 at 10:49 AM, Brian Hechinger <wonko at 4amlunch.net> wrote:
On 1/6/2013 8:04 PM, Clem Cole wrote:
Well, the cluster stuff that TruCluster is based on is already FOSS. Check out OpenSSI.org. Sadly that tree is dead, or nearly so. Bruce stopped working on it and never was able to get the vproc layer into the Linux upstream sources, which is a real shame.
Oh, nice!
Maybe we should get that somewhere else. Like NetBSD or rolled into the Illumos stuff. :)
-brian
On 2013-01-08 17:15, Brian Hechinger wrote:
On 1/8/2013 9:24 AM, Johnny Billquist wrote:
Data mining is difficult, since there are different systems, with
different possibilities of extracting it, and in different formats.
Sampsa's goal here is mapping HECnet. My goal is to write a data mining
service that just happens to provide the data that a mapper would use.
To that end, I'll be stashing all this data in a database of some sort
(details unknown as yet, see below).
Right. So, once we have the data, it can be used by other people in various ways. So let's focus on the data.
A centralized repository of data is nice in many ways, but it is a
headache to manage.
Absolutely it is, but if people are already putting INFO.TXT files out
there they are doing 99% of the work already, we just need to get the
data in a single place.
I think it's better to keep the information separate. INFO.TXT was created for one purpose; this would be reusing it for another, additional purpose. Better to create a separate file. In addition, we can make some better design choices as we go about this.
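For illustration only, such a purpose-built file could be a simple key-value text - trivial to emit from any OS and to parse on the collector side. Every field name below is invented for the sketch, not an agreed format:

```python
# Parse a hypothetical node-description file into a dict.
# The file layout and field names are invented for illustration.
SAMPLE = """\
node: MIM
address: 1.13
os: RSX-11M-PLUS
hardware: PDP-11
location: Switzerland
"""

def parse_node_info(text):
    """Turn 'key: value' lines into a dict, ignoring blanks and comments."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        info[key.strip().lower()] = value.strip()
    return info

print(parse_node_info(SAMPLE)["node"])     # MIM
print(parse_node_info(SAMPLE)["address"])  # 1.13
```

A format this dumb also leaves room for the better design choices mentioned above: new fields can be added later without breaking old parsers.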
That said, I could be convinced of setting something semi-automatic
up. A reasonable way would be for people to give me machines to poll,
and then I'd setup an automated process to poll those machines for
files in a specific format. I can then create a database out of that,
and make it available through the web, as well as over DECnet, and
also as a summarized file. Anything would be pretty easy if we just
have the data collected.
I think this would be fantastic.
Ok. So let's set about working on this. Anyone else want to join? We should probably keep this off list, as it will be rather technical, and present a solution when we have it.
I already have something of a start for this in the form of my
database of nodes in HECnet. I'd need to extend it with more fields,
but that would be pretty easy. It's all in Datatrieve today, and that
should be accessible over DECnet right now (even though I seem to
remember that VMS hosts had some problems with that).
I'll have to learn how to access that db.
It should be trivial. Datatrieve has a programming interface; it should be callable from any language.
I'm already extracting information from that database for the hecnet
web-page on MIM (accessible as Madame).
So, if we can just decide on what we want, and how to make the
information available, I'll sit down and write the code to fix it.
Do you want to also store my data or should I do that myself? I might do
it myself at least for now until I know what exactly I need/want to save.
Let's start by talking about exactly what we want to store, how to retrieve it, and possible uses of it, to make it meaningful. We can then work out how and where to store it. I even think it wouldn't be a problem to store it in several places, scrape the source from several places, and present data and services based on this from several places.
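On the storage side, a minimal sketch of what a collector's central store could look like, using SQLite purely as a stand-in (Johnny's actual database is in Datatrieve, and nothing here reflects its real schema - the table and field names are illustrative):

```python
import sqlite3

# Stand-in schema for collected node data; all field names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE nodes (
        address  TEXT PRIMARY KEY,   -- DECnet area.node, e.g. '1.13'
        name     TEXT,
        os       TEXT,
        hardware TEXT,
        owner    TEXT
    )
""")

def store(record):
    """Upsert one polled record, so repeated polls refresh existing rows."""
    conn.execute(
        "INSERT INTO nodes VALUES (:address, :name, :os, :hardware, :owner) "
        "ON CONFLICT(address) DO UPDATE SET name=excluded.name, "
        "os=excluded.os, hardware=excluded.hardware, owner=excluded.owner",
        record,
    )

store({"address": "1.13", "name": "MIM", "os": "RSX-11M-PLUS",
       "hardware": "PDP-11", "owner": "Johnny"})
# Polling the same node again updates rather than duplicates.
store({"address": "1.13", "name": "MIM", "os": "RSX-11M-PLUS 4.6",
       "hardware": "PDP-11", "owner": "Johnny"})

for row in conn.execute("SELECT name, os FROM nodes ORDER BY address"):
    print(row)
```

The upsert is the design point: the poller can simply re-submit everything it scrapes on each pass, and the store converges on the latest data without any duplicate-detection logic. Serving the data over the web, over DECnet, or as a summarized file then becomes a set of read-only views on the same table.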
Johnny
On 2013-01-08 17:47, sampsa at mac.com wrote:
On 7 Jan 2013, at 22:37, Johnny Billquist <bqt at softjar.se> wrote:
It would. However, I don't expect us to run out of addresses any time soon. However, areas are a rather limited resource, and we are slowly running out of them. Sampsa, do you really think three areas are motivated? DECnet was not designed with the idea that physically separate places needed separate areas. Areas are more of a logical division thing (although some constraints do exist on areas).
I'll be happy to renumber should this happen...
I'm still not getting why you need three areas. You have nowhere near 3000 nodes. In reality, even two areas are excessive. :-)
Johnny
Brian Hechinger <wonko at 4amlunch.net> writes:
On 1/8/2013 10:55 AM, Brian Schenkenberger, VAXman- wrote:
What version of VMS and how do you get back the NCP$_NETIO error?
OpenVMS E8.4 on node RHESUS 8-JAN-2013 19:03:23.78 Uptime 8 04:55:33
If you get an actual text string back, you might try:
"PIPE NCP TELL.... ; SHOW SYMBOL $STATUS"
Oooh, that's nifty. I'll have to do that.
I don't see why you can't just issue the NCL TELL from the command line just once to get the failure results.
Because the failure is transient. It's not happening 100% of the time.
Fair 'nuff...
Make certain you maintain space around the ';' or DCL parsing may think
it's a file version separator. Since the ';' was already spoken for in
VMS, applying unix pipeline syntax requires that spacing for the ';'.
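Concretely, the difference looks like this (the NCP TELL arguments are elided here just as in the example above):

```
$ PIPE NCP TELL ... ; SHOW SYMBOL $STATUS   ! space around ';' - parsed as a pipeline separator
$ PIPE NCP TELL ...; SHOW SYMBOL $STATUS    ! no space - DCL may read ';' as a file version delimiter
```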
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG
Well I speak to machines with the voice of humanity.
On 1/8/2013 10:55 AM, Brian Schenkenberger, VAXman- wrote:
What version of VMS and how do you get back the NCP$_NETIO error?
OpenVMS E8.4 on node RHESUS 8-JAN-2013 19:03:23.78 Uptime 8 04:55:33
If you get an actual text string back, you might try:
"PIPE NCP TELL.... ; SHOW SYMBOL $STATUS"
Oooh, that's nifty. I'll have to do that.
I don't see why you can't just issue the NCL TELL from the command line
just once to get the failure results.
Because the failure is transient. It's not happening 100% of the time.
-brian
Brian Hechinger <wonko at 4amlunch.net> writes:
On 1/7/2013 3:21 PM, Brian Schenkenberger, VAXman- wrote:
Can't you issue it from the command line? Python, if you are issuing the command from os.system("TELL...") is probably just spawning off commands. The context will be lost if that's the case.
I'm using os.popen(), so yeah, I think I'm out of luck in that case.
I'll just have to add some logic to maybe re-try the node if I get this.
What version of VMS and how do you get back the NCP$_NETIO error?
If you get an actual text string back, you might try:
"PIPE NCP TELL.... ; SHOW SYMBOL $STATUS"
I don't see why you can't just issue the NCL TELL from the command line
just once to get the failure results.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG
Well I speak to machines with the voice of humanity.
I'll send out a "let me know if you aren't showing up" email when I'm ready to tackle stuff that's being missed.
Hang tight. :)
-brian
On 1/8/2013 9:47 AM, hvlems at zonnet.nl wrote:
Area 44 is also missing. Anything I ought to have done?
Hans
-----Original Message-----
From: Brian Hechinger <wonko at 4amlunch.net>
Sender: owner-hecnet at Update.UU.SE
Date: Tue, 08 Jan 2013 09:33:27
To: <hecnet at Update.UU.SE>
Reply-To: hecnet at Update.UU.SE
Subject: Re: [HECnet] _PROVISIONAL_ map of HECnet, courtesy largely of Brian H.
On 1/7/2013 9:25 PM, Ian McLaughlin wrote:
Sampsa,
I appear to be missing. Are you able to add me?
No, you are missing because you can't currently be found.
We'll find you, be patient. :)
My area router is A42RTR 42.1023. It is adjacent to SUN 52.1 and GW 61.1. Unfortunately for your scanning program, 42.1023 is a Cisco router.
Yeah, Cisco routers have yet to be tackled.
-brian
On 1/7/2013 3:21 PM, Brian Schenkenberger, VAXman- wrote:
Can't you issue it from the command line? Python, if you are issuing the
command from os.system("TELL...") is probably just spawning off commands.
The context will be lost if that's the case.
I'm using os.popen(), so yeah, I think I'm out of luck in that case.
I'll just have to add some logic to maybe re-try the node if I get this.
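That retry logic could be as simple as wrapping the poll in a bounded retry loop. A sketch, assuming the poll is a function that raises on the transient failure (the names and the error-detection are stand-ins, not Brian's actual script - on the real system you would check $STATUS or scan the NCP output for the NETIO error):

```python
import time

def with_retries(poll, attempts=3, delay=0.0):
    """Call poll(); on failure, retry up to `attempts` times in total."""
    last_error = None
    for _ in range(attempts):
        try:
            return poll()
        except RuntimeError as exc:   # stand-in for detecting NCP$_NETIO
            last_error = exc
            time.sleep(delay)        # brief pause before retrying
    raise last_error

# Simulate a transient failure: fails twice, then succeeds.
calls = {"n": 0}
def flaky_poll():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("NCP$_NETIO stand-in")
    return "node data"

print(with_retries(flaky_poll))  # node data
```

Since the failure is intermittent rather than persistent, a small fixed retry count with a short pause should recover most polls while still letting genuinely dead nodes fail quickly.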
-brian