On Tue, Jan 8, 2013 at 3:55 PM, Brian Schenkenberger, VAXman- <system at tmesis.com> wrote:
The "listing" portion is far more valuable than the source
itself.
Amen - although I might say "was often more valuable." If you had a bug and needed to fix it fast, the key was that you could (which is the cry of the FOSS movement). You did not have to wait for the vendor.
But more often than not, the reason you wanted to see the source was to understand behavior that did not make sense. Sometimes you might find a bug, but more often than not I would find a way to do what I wanted with what I had in hand.
On Tue, Jan 8, 2013 at 3:20 PM, <Paul_Koning at dell.com> wrote:
What DEC and others did is to provide listings you could read, but you weren't allowed to use or modify the code. Sources, maybe, but even then I don't think you could hand the resulting executable bits to anyone outside your organization.
I think you are splitting hairs a little. We should take this offline if you want to discuss it much more.
I'm not a lawyer and do not pretend to be one. I did live through a lot of this, including the AT&T/UCB case, and today I help teach a course at Intel on copyright and IP protection that is required for all our SW developers, so I am comfortable in saying I think I understand most of the subtleties.
I will point out that the Amdahl machines ran OS/360, TSS, VM, etc.
Foonly and Xerox's MAXC ran Tenex and the stock DEC compiler suite. The Xerox Altos were a Nova clone, and a lot of the code that ran on them was just that (although Xerox would be known for the boatload of SW they would write for them).
And as Ken O'Mundro used to remind everyone at USENIX conferences, it was not the instruction set and SW that got him in trouble with KO. The CalData machine, when it had Nova microcode in it, could run DG's SW. What KO did not like was that Ken made a UNIBUS replacement.
Simply put, in the late 1960s and early 1970s, different firms made and sold "clones" of different machines that used the original developer's code base. The "cloners" made the market bigger and sometimes (like Foonly) could serve a need the primary manufacturer was not going to supply.
You are mentioning the "copyright" part, and basically anyone with a common license was allowed to share code. This is what we in UNIX land did extensively. So folks like me used to keep a pile of "signature pages" from different people's licenses in our filing cabinets before we sent them tapes of our modified versions.
The rules about copyright and SW were really unclear in the 1960s and 70s, and it would not actually get decided in real case law until the early-to-mid 1980s with the Apple v. Franklin Computer case (Apple had copyrighted its ROMs and Franklin "cloned" them). But using SW IP, if the copyright license did not tell you that you could not, was considered OK for a long time. In fact, that was the basis of Franklin's argument: to be compatible it needed to use Apple's ROMs.
The original IBM user group was called "SHARE" and they did just that, as did the DECUS folks. We all traded patches because we had the sources. My first job at CMU was looking at IBM source patches and figuring out how to make them work, or whether they still mattered, for our custom TSS system. We also got patches from other users, and we pushed our changes out. IIRC, there were more Amdahl machines running TSS than IBM HW, but TSS was an IBM product and ran IBM "layered products."
FYI: >>Free<< and Open Source is what Linux, FreeBSD, et al. are based on today. The key point is "Free" in some manner, typically with a license that allows freer use of the IP.
BTW: I've always said the real father of the OSS movement was the late Prof. Don Pederson of UCB. His many generations of students would "publish" code. He started that practice in the late 1960s, and it would become the guts of UCB's "Industrial Liaisons Office," and even later CSRG, etc. (BSD: Berkeley Software Distribution). The predecessor to the UNIX BSD tape was the EE "Tools Tape" (SPICE, SPLICE, MOTIS, et al.).
As "dop" would say: "I always give away our source; that way I come in the back door. Other places [such as CMU and MIT, which would license code sources, as he mentioned] came in the front door like any other salesman."
Clem
John Wilson <wilson at dbit.com> writes:
From: <Paul_Koning at Dell.com>
Can't search microfiche, and a fiche reader is a clumsy and hard to use
contraption.
I always figured that was the whole point. You had to do a *lot* of
squinting and typing before you could do anything that DEC didn't want
you to.
How do you figure? Fiche was the medium of its day, and listings were
for reference; they were not intended for you to rebuild VMS. When CD-ROM
first appeared on the scene, DEC put the listings on CD-ROM. You could run
utilities like UNLIS (I believe my good buddy Hunter wrote that) which
could give you sanitized source, but I wouldn't try to use it due to the
copyright. The "listing" portion is far more valuable than the source
itself.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG
Well I speak to machines with the voice of humanity.
From: <Paul_Koning at Dell.com>
Can't search microfiche, and a fiche reader is a clumsy and hard to use
contraption.
I always figured that was the whole point. You had to do a *lot* of squinting
and typing before you could do anything that DEC didn't want you to.
John Wilson
D Bit
::-)
------Original message------
From: Brian Schenkenberger, VAXman-
Sender: owner-hecnet at Update.UU.SE
To: hecnet at Update.UU.SE
Reply-To: hecnet at Update.UU.SE
Subject: Re: [HECnet] Re: VMS sources
Sent: 8 January 2013 21:43
<Paul_Koning at Dell.com> writes:
On Jan 8, 2013, at 3:04 PM, Brian Schenkenberger, VAXman- wrote:
Dave McGuire <mcguire at neurotica.com> writes:

>> On 01/08/2013 02:55 PM, Cory Smelosky wrote:
>>> I've always wondered about VMS sources: how they are actually
>>> distributed today and how much space they take? Is a CD-ROM enough
>>> for everything? And in which format are they? Just simple text files
>>> in a bunch of directories or is there something fancier such as some
>>> cross references and indexes?
>>
>> They used to make source *listings* available on fiche; I have several
>> sets of those. It's a stack of fiche maybe 3-4" thick.
>
> How old are these "listings"?

>> I haven't looked at them in years, but I think I have at least 5.1 and
>> 5.2, possibly 4.7.
>
> But who needs the source listings when you've got access to the
> poor-man's microfiche?

Can't search microfiche, and a fiche reader is a clumsy and hard to use
contraption.
The "poor-man's microfiche" refers to:
$ ANALYZE/SYSTEM
SDA> EXAMINE/INSTRUCTION address;range
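In other words, you disassemble the running system rather than pulling out the fiche. A minimal sketch of such a session, disassembling a stretch of code at a symbolic address; the routine name EXE$QIO and the byte count here are only illustrative placeholders, not anything from this thread:

$ ANALYZE/SYSTEM          ! run SDA against the live system rather than a dump
SDA> EXAMINE/INSTRUCTION EXE$QIO;40
SDA> EXIT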
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG
Well I speak to machines with the voice of humanity.
Peter,
Everything is now set up on my end. I now have 3 Cisco tunnels. The metric is 10 on all of them. The majority of areas now route through you according to my router:
a42rtr#sh dec rout
Area      Cost  Hops  Next Hop to Node        Expires  Prio
*1          11     2  Tunnel2          -> 59.11
*2          11     2  Tunnel2          -> 59.11
*3          14     3  Tunnel2          -> 59.11
*4          11     2  Tunnel2          -> 59.11
*5          11     2  Tunnel2          -> 59.11
*6          11     2  Tunnel2          -> 59.11
*7          12     3  Tunnel2          -> 59.11
*8          11     2  Tunnel2          -> 59.11
*11         11     2  Tunnel2          -> 59.11
*12         11     2  Tunnel2          -> 59.11
*18         12     3  Tunnel2          -> 59.11
*19         11     2  Tunnel2          -> 59.11
*20         12     3  Tunnel2          -> 59.11
*28         11     2  Tunnel2          -> 59.11
*33         12     3  Tunnel2          -> 59.11
*42          0     0  (Local)          -> 42.1023
*44         11     2  Tunnel2          -> 59.11
*47         11     2  Tunnel2          -> 59.11
*52         10     1  Tunnel1          -> 52.1       39   64  A+
*59         10     1  Tunnel2          -> 59.11      38   64  A+
*61         10     1  Tunnel0          -> 61.1       43   64  A+
*62         11     2  Tunnel2          -> 59.11

Node      Cost  Hops  Next Hop to Node        Expires  Prio
*(Area)      0     0  (Local)          -> 42.1023
*42.1        1     1  FastEthernet5/0  -> 42.1       38
*42.2        1     1  FastEthernet5/0  -> 42.2       41
*42.42       1     1  FastEthernet5/0  -> 42.42      33
*42.1023     0     0  (Local)          -> 42.1023
a42rtr#
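For anyone setting up something similar, here is a minimal sketch of the IOS configuration behind one such GRE tunnel. The executor address 42.1023 and "decnet cost 10" come from the output and discussion above; the tunnel number, source interface and peer IP address are placeholders rather than the actual values used here:

! global DECnet setup
decnet routing 42.1023
decnet node-type area
!
! one GRE tunnel to a HECnet neighbour; source/destination are placeholders
interface Tunnel2
 tunnel mode gre ip
 tunnel source FastEthernet5/0
 tunnel destination 192.0.2.1
 decnet cost 10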
On 2013-01-08, at 12:15 PM, Peter Lothberg <roll at Stupi.SE> wrote:
Do we have a 'standard' metric for Cisco routers on Hecnet? I've been
using 10 for each link so far. How do these devices behave with
asymmetrical costs?
You get asymmetric routes.

I will (tomorrow) get it all correct on my side, but as the Uppsala
box is "central" to HECnet, you are better off forcing the traffic
there, as we have no metrics/topology info from the bridged Ethernet.
Then we use the Stockholm/Reston boxes as backup, together with the
Multinet links.
--P
Listings or sources you can obtain for a fee are not open source. What DEC and others did was to provide listings you could read, but you weren't allowed to use or modify the code. Sources, maybe, but even then I don't think you could hand the resulting executable bits to anyone outside your organization.
On the other hand, there are some interesting cases of code accidentally landing in the public domain, due to having been let out of the building without a copyright notice in place. Until 1976, that would make it public domain. I don't think that happened with any DEC software -- I remember being at the receiving end of some very stern lectures about copyright notices. But it did happen to CDC (in the early mainframe operating system COS) and to IBM (with OS/360).
paul
On Jan 8, 2013, at 3:08 PM, Clem Cole wrote:
You have to understand, the concept of "Open Source" is not new. Most vendors supplied the source listings, and sometimes even the code. There was a fee to copy it all (it was said in the old days that it was impossible to write a mag tape anywhere for less than $100). So the fees were really set high enough to keep the idiots away, but low enough that the customers that needed them could get them.
Remember, a lot of it was in assembler, so it did you little good unless you had the vendor's HW. A few things changed all that. First, the practice became less prevalent by the later 1970s, primarily because of the Amdahl Corp making and selling a 360/370 clone. Interestingly enough, DEC did not sue CalData because of the SW; it was because they cloned the Unibus AND used the PDP-11 instruction set. Second, once writing more and more of the OS in a high-level language became de rigueur, the ability to "steal" SW IP seemed to be more of an issue (although DEC was in good shape because no one but DEC would use BLISS).
So around the late 1970s, DEC and most other vendors began to be more protective.
On Tue, Jan 8, 2013 at 2:56 PM, Dave McGuire <mcguire at neurotica.com> wrote:
On 01/08/2013 02:51 PM, Brian Schenkenberger, VAXman- wrote:
You need to first sign and pay for a source listings license agreement.
Back many years ago, IIRC, it was about $2K. There's then maintenance
that must be paid yearly to get the listings CDs/DVDs when produced.
I am nothing short of astonished that it was that cheap!
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA
Fair enough, I'll run my Beirut network in area 8..
sampsa
On 8 Jan 2013, at 16:24, Johnny Billquist <bqt at softjar.se> wrote:
I'm probably going to come back and make more comments on this later...
However, a couple of points and responses right now...
Even today, areas do not correspond to administrative control that much. There are areas in which several different people are responsible for different machines, and at least I have "sub-delegated" parts of area 1 from time to time.
Yes, we will be running out of areas eventually. Based on the current growth, I'd guess a couple of years at most, and then we're out.
Areas should be used for administrative reasons, and relate less to any geographical constraints. DECnet areas work perfectly fine spanning large distances. "Sub-letting" areas should definitely be a valid approach. Each user in one area can still have their own area router, and their own connections to the rest of HECnet, as long as they also have interconnects within the area.
Data mining is difficult, since there are different systems, with different possibilities for extracting the data, and in different formats.
A centralized repository of data is nice in many ways, but it is a headache to manage.
That said, I could be convinced to set something semi-automatic up. A reasonable way would be for people to give me machines to poll, and then I'd set up an automated process to poll those machines for files in a specific format. I can then create a database out of that, and make it available through the web, as well as over DECnet, and also as a summarized file. Anything would be pretty easy if we just have the data collected.
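Just to make "a specific format" concrete, here is one hypothetical shape such a per-node file could take; every field name and value below is my own invention for illustration, not a format Johnny has specified:

node:     42.1
name:     A42VAX
hw:       VAX (simh)
os:       OpenVMS 7.3
owner:    N.N. <someone at example.org>
location: City, Country
updated:  2013-01-08

Something this simple could be produced by a DCL or shell script on each polled machine and parsed trivially on the collecting side.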
I already have something of a start for this in the form of my database of nodes in HECnet. I'd need to extend it with more fields, but that would be pretty easy. It's all in Datatrieve today, and that should be accessible over DECnet right now (even though I seem to remember that VMS hosts had some problems with that).
I'm already extracting information from that database for the hecnet web-page on MIM (accessible as Madame).
So, if we can just decide on what we want, and how to make the information available, I'll sit down and write the code to fix it.
Johnny
On 2013-01-08 06:45, Cory Smelosky wrote:
On 8 Jan 2013, at 00:25, Dave McGuire <mcguire at neurotica.com> wrote:
On 01/07/2013 09:57 PM, Ian McLaughlin wrote:
Yeah more and more of us are using Ciscos to do this. We really
need to find a way around this issue that doesn't involve manual
maintenance of routing info.
Perhaps an agreed-upon entry in INFO.TXT ? That's still manually
managed, but it's managed by the individual link owners.
Well Brian raised the good point of "on which host?" ...I think the
problem here is that INFO.TXT really looks like, to me as a relative
HECnet n00b, a per-"domain" file...but there's no clear delineation of
administrative domains here. We've been using areas, but we're running
out of those, and there's no consistency in the node numbering within
each area.
We could all agree to have "an info node" with a particular node
number within each area, but that won't work when we start having
multiple administrative domains within a single area. Johnny talked
about exactly this just today, in the context of Sampsa's relocation.
Dividing lines between regions of administrative control will not
correspond to area numbers for much longer, it sounds like.
Yeah, I've been noticing that...I've up-to-now used a specific "info node" approach...but it DOES get a bit wonky when I divide my stuff, or skip a node number or re-use a node number.
(On a semi-related note...I might implement personal node-number schemes: separating PDP-11 sims from DEC-20 sims from VMS sims from physical hardware and so on.)
Perhaps a centralized database that maintains per-NODE info, not
per-AREA info. Then that database could have a field that denotes the
point of administrative control that is responsible for each node.
Centralising the NODE info could solve a lot of problems and make data mining easier. ;)
I'd also like basic per-area info (geographic location(s) (see below for further comments), owner, that kind of stuff) to be defined in this central database.
(To be honest, I'd then break it down into sub- and sub-sub-areas, but at times I can go a bit overboard with creating subcategories...I doubt anyone other than myself would like breaking down their areas /that/ much.)
Then, some mechanism (either automated, manual, whatever) would then
populate that database. Perhaps there could be several population
mechanisms...a program that runs under VMS, RSX, RSTS/E, or whatever,
and something over IP for everything else.
A web interface to the database would also be nice.
How would it be done? Flatfile and having Johnny or someone add all node info by hand? ;)
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA