A GigaSwitch/FDDI, or a DEChub 900 module?
The GigaSwitches were very reliable, a pleasure to manage.
From: Brian Hechinger
Sent: Thursday, 10 October 2013 14:44
To: hecnet at Update.UU.SE
Reply-To: hecnet at Update.UU.SE
Subject: Re: [HECnet] FDDI advice
On Thu, Oct 10, 2013 at 02:36:04PM +0200, Peter Lothberg wrote:
>
> FDDI/CDDI is a dual ring token ring bus, with 4470-byte MTU packets;
> it has 802.-- frames. DEC had a mode where you turned the token off
> and used it for ptp full duplex.
I didn't know about the ptp thing. That's nifty.
> For example Cisco/Cabletron/Crescendo had Ethernet switches with a
> FDDI uplink that you could use.
DEC made one as well, it was that large modular thingie. I used to have
one. Never got it powered on as it was enormous.
> But you need nothing to build an FDDI ring; it's an A and a B ring, and you
> can just plug the cards together with fiber patch cables.
Unless you have one of those obnoxious single attached station cards.
-brian
DEC had solid-state disks, all DAS IIRC.
From: Johnny Billquist
Sent: Thursday, 10 October 2013 11:29
To: hecnet at Update.UU.SE
Reply-To: hecnet at Update.UU.SE
Cc: Mark Wickens
Subject: Re: [HECnet] SSD for Vax
On 2013-10-10 09:46, Mark Wickens wrote:
> On 10/10/2013 08:17, Daniel Soderstrom wrote:
>> Has anyone tried this? Running my VAXstation non-stop reminds me how
>> much noise these old drives make.
>>
>> Has anyone tried SSD drives? Must be good for the PSU in terms of heat
>> and current draw.
>>
>> Daniel
>>
>> Sent from my iPhone
> There has been very little success with IDE to SCSI converters on this
> age of machine.
Hmm. Bad IDE to SCSI converters?
> Remember also that even 4000/90 vintage VAXstations generally have an
> upper limit of 18GB.
Uh? That's got to be a limit in VMS in that case. I can't see how the
hardware would have that limit.
> I don't recall anyone even getting an IDE drive to work, let alone an SSD.
IDE and SSD are two completely unrelated things as such. However, if the
SSD has an IDE interface, then you obviously need the IDE to SCSI
converter.
> Would love to be proved wrong however! I'm surprised someone hasn't
> written a software based SCSI drive emulator the same way that you get
> floppy emulators.
Probably mostly because of speed issues. You have some very tight timing
requirements, and a SCSI interface runs way faster than a floppy.
> Consider running the machine diskless, booted off the network with
> off-node disks.
That definitely also works.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Bridges create an extended LAN, and a DECnet area may use the entire LAN: all segments that are not behind a DECnet router.
The LAN doesn't have to be just Ethernet. As long as bridges are used to connect Ethernet to FDDI, ATM, or Token Ring, DECnet areas can easily be used across that network.
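Since all nodes in one area end up on the same extended LAN, it helps to recall how a DECnet Phase IV address maps onto the MAC address the bridges actually see. A minimal Python sketch of that well-known mapping (illustrative only, not from the thread):

```python
def decnet_mac(area: int, node: int) -> str:
    """Return the Phase IV MAC address for DECnet node area.node.

    The 16-bit DECnet address packs the area in the high 6 bits and
    the node in the low 10 bits; on the wire the station address is
    the prefix AA-00-04-00 followed by that 16-bit value in
    little-endian byte order.
    """
    assert 1 <= area <= 63 and 1 <= node <= 1023
    addr = (area << 10) | node
    return "AA-00-04-00-%02X-%02X" % (addr & 0xFF, addr >> 8)

# A node at DECnet address 4.1 (area 4, node 1):
print(decnet_mac(4, 1))  # AA-00-04-00-01-10
```

Because every node in area 4 shares the same extended LAN, the bridges simply learn and forward these MAC addresses like any others; no area router is involved.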
From: Mark Wickens
Sent: Thursday, 10 October 2013 10:19
To: hecnet at Update.UU.SE
Reply-To: hecnet at Update.UU.SE
Subject: [HECnet] Stupid question about areas...
Even though I think I know the answer to this question, I'll ask it
anyway...
Is it OK to run machines that are in the same area on different subnets?
For example, passing traffic between SLAVE:: and RIPLEY:: both on DECnet
area 4 but on different subnets connected via a bridge?
I think it is and that the bridge is transparent, but for some reason in
the back of my mind I have the concept of 'area router' and that doesn't
fit with this model.
Thanks, Mark.
--
http://www.wickensonline.co.uk
http://hecnet.eu
http://declegacy.org.uk
http://retrochallenge.net
https://twitter.com/#!/%40urbancamo
On 10/10/2013 10:35 AM, Brian Hechinger wrote:
The other two: the DECbridge 900, which plugged into the 900 series
modular enclosure. It's about the size of a 400-page hardcover
book: FDDI to 6 Ethernet ports, 60,000 packets per second using a
MC68040 at 25 MHz. I'm still proud of that. (I wrote the "fast
path" packet forwarding firmware.)
Neat!
That's not "neat"...that's AWESOME. Just pointing it out.
Then there is the Gigaswitch, a large modular chassis with lots of
line cards, some FDDI, some Ethernet, possibly some with other
stuff I don't remember.
I think this is the one I had. Big modular thing. Maybe (and going
by really fuzzy memory here) 8U high?
A GigaSwitch is a lot more than 8U high. Try nearly half a rack.
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA
On 10.10.2013 17:26, Paul_Koning at Dell.com wrote:
On Oct 10, 2013, at 8:44 AM, Brian Hechinger <wonko at 4amlunch.net> wrote:
On Thu, Oct 10, 2013 at 02:36:04PM +0200, Peter Lothberg wrote:
FDDI/CDDI is a dual ring token ring bus, with 4470-byte MTU packets;
it has 802.-- frames. DEC had a mode where you turned the token off
and used it for ptp full duplex.
I didn't know about the ptp thing. That's nifty.
For example Cisco/Cabletron/Crescendo had Ethernet switches with a
FDDI uplink that you could use.
DEC made one as well, it was that large modular thingie. I used to have
one. Never got it powered on as it was enormous.
But you need nothing to build an FDDI ring; it's an A and a B ring, and you
can just plug the cards together with fiber patch cables.
Unless you have one of those obnoxious single attached station cards.
-brian
Ah, time to dust off some dormant memories. I used to work on the FDDI standard at DEC; this stuff is familiar.
"CDDI" is marketing slang; it is not standard terminology.
FDDI is different from Ethernet; the MAC layer protocol is completely unrelated. It's quite similar to token bus (802.4), actually. (The only thing it has in common with 802.5 is the words "token" and "ring" -- apart from that, the two protocols operate completely differently.)
FDDI connections have a "type", which can be "A", "B", "S", or "M". "M" ports exist on concentrators. NICs will have A and B ports if there are two connectors on the NIC ("dual attached station" or DAS), or an S port if there is one connector ("single attached station" or SAS).
You have a number of topology options.
If you have DAS NICs, you can wire any number of them together in a "dual ring". That's the original FDDI topology, before DEC forced concentrators to be added into the standard. To do that, connect the NICs in circular fashion, A to B. Connected that way, loss of any single connection is handled transparently.
If you have SAS NICs, you can connect a pair of them (S to S).
If you have DAS NICs plus one or two SAS, wire the DAS NICs A-B in a chain (essentially a dual ring cut open). Then connect a SAS to each end (or just to one end, if you have one SAS). There is no redundancy in this config.
Finally, if you have any concentrators, you can build a tree config out of those. If so, the M connectors connect to the NIC connectors (A, B, or S), and the concentrator's A and B connectors either connect to M ports higher up in the hierarchy, or in a dual ring if you have a ring of concentrators, or nowhere if you're at the root of a tree.
FDDI fiber connectors are standardized but different from the fiber connectors used by other networks. The connector is fairly large: flat and rectangular, with a shroud covering the fiber ends. Connectors are keyed to match the port type, though you can omit the keys and just wire carefully. Standard fiber is 62.5/125 micrometer, but 50/125 also works.
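The port-pairing rules described above can be condensed into a small table. A hedged Python sketch (simplified; the real SMT standard also classifies some pairings, such as A-A, as undesirable but workable):

```python
# Simplified FDDI port-pairing rules, per the topology options above.
VALID_PAIRS = {
    frozenset("AB"),  # trunk dual ring: an A port to a B port
    frozenset("S"),   # S to S: a pair of SAS NICs back to back
    frozenset("MA"),  # concentrator M port down to a DAS A port,
    frozenset("MB"),  # to a DAS B port,
    frozenset("MS"),  # or to a SAS S port
}

def can_connect(p: str, q: str) -> bool:
    """True if FDDI port types p and q may be cabled together."""
    return frozenset((p, q)) in VALID_PAIRS

print(can_connect("A", "B"))  # True: a dual-ring segment
print(can_connect("A", "S"))  # False: a SAS hangs off an M or another S
```

This is why a lone SAS card is awkward in a dual ring: its S port only mates with another S or with a concentrator's M port.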
You can find more in the DEC Technical Journal, Vol. 3 No. 2, Spring 1991, or in the relevant ANSI/ISO standards if you are a masochist. The topology rules are described fairly well in the concentrator article in the DTJ, but their full glory can be found in the FDDI "SMT" (station management) standard.
paul
.
Very clearly summarized!
FDDI was the network equipment of choice long before 100 Mbit/s Ethernet reached customers, and in fact for many years after that as well, because it took quite a while before 100 Mbit Ethernet switches could perform close to FDDI. Of course, the larger packet size was a great advantage when moving lots of data.
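The packet-size advantage is easy to quantify: fewer packets means less per-packet overhead for the same payload. A back-of-the-envelope sketch using the 4470-byte FDDI MTU mentioned earlier versus classic Ethernet's 1500 bytes (payload figures are illustrative):

```python
# Packets needed to move a 100 MB transfer at each MTU
# (ceiling division via the -(-a // b) idiom).
data = 100 * 1024 * 1024           # 100 MB payload
fddi_pkts = -(-data // 4470)       # FDDI MTU
eth_pkts = -(-data // 1500)        # classic Ethernet MTU

print(fddi_pkts, eth_pkts)         # roughly 3x fewer packets on FDDI
```

Every packet saved is one less header, one less interrupt, and one less trip through the forwarding path, which mattered a lot on the CPUs of the day.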
I used to work with the DEChub900 and its various modules when working at DEC CS.
They were really top of the line technology at the time. You could plug almost anything into a DEChub900.
I still have many kinds of modules at home. :)
Kari
On 10/10/2013 10:29 AM, Brian Hechinger wrote:
All this talk of FDDI makes me want to go get the 4000/500s. I have a
pair of QBus FDDI cards. I suppose I would have to make the Octane a
router between FDDI and ethernet. :)
Are any of those 4000/500s mine? ;) I had quite a few machines up
here in your old place at one point. Don't worry, I'd only be after one
of them, and even that is low-priority at this point.
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA
IIRC, solid-state SCSI drives existed.
They did, and they still exist. They tend to be industrial-grade stuff,
however, and also tend to be silly expensive.
Yeah.
It's unfortunate, really. :(
Yeah. :(
Sounds like a business opportunity: basically, build an enclosure, get 5-6 consumer SSDs, RAID6 them, and expose a SCSI/SAS/eSATA interface to the host. If one of the drives breaks or runs out of write cycles, the box indicates the slot and we provide a new SSD for it.
Sampsa
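A quick back-of-the-envelope for the enclosure idea above: RAID6 keeps two drives' worth of parity, so n drives yield (n - 2) drives of usable space while surviving any two simultaneous failures. A small sketch (drive counts and sizes are illustrative):

```python
def raid6_usable_gb(n_drives: int, drive_gb: int) -> int:
    """Usable capacity of a RAID6 array: two drives go to parity."""
    assert n_drives >= 4, "RAID6 needs at least 4 drives"
    return (n_drives - 2) * drive_gb

# Six 256 GB consumer SSDs in the proposed box:
print(raid6_usable_gb(6, 256))  # 1024 GB usable, tolerating 2 failures
```

With write-cycle wear being the expected failure mode, the two-drive fault tolerance is what makes the swap-a-slot service model workable.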
On Thu, Oct 10, 2013 at 04:59:09PM +0100, Mark Wickens wrote:
Unfortunately, as nice a box as it is (that counts for most things
with a 68k in ;) FDDI to 10MB doesn't solve any problems for me as
the sole benefit is the jump to 100MB over the standard ethernet
card in the 3000/800, so any solution that is going to work would
need to give me access to 100MB+ at the other end of the FDDI.
A cheap little box running linux/*BSD/etc should be able to bridge
between FDDI and FastEthernet for you nicely.
Sure, it's not as sexy as some "real" gear, but will likely do the
trick.
It'll take far more than a 68K to do it, however. :)
-brian
On 10/10/2013 16:32, Paul_Koning at Dell.com wrote:
On Oct 10, 2013, at 10:59 AM, Mark Wickens <mark at wickensonline.co.uk> wrote:
... I have a DEC VNswitch 900XX plugged into a DEChub One MX - there are clearly modular parts to that, but I'm presuming there isn't a FDDI copper module that I would be able to use?
That looks different (same size, though). The one I was talking about is the DECbridge 900. I see some variations on the net -- DECbridge 900MX seems to be the same except that it has two AUI connectors instead of being all 10Base-T. And I think the original was a SAS (S port) while the MX is a DAS (A and B ports). http://decdoc.itsx.net/dec94mds/defbaina.pdf has details and a picture.
paul
Unfortunately, as nice a box as it is (that counts for most things with a 68k in ;) FDDI to 10MB doesn't solve any problems for me as the sole benefit is the jump to 100MB over the standard ethernet card in the 3000/800, so any solution that is going to work would need to give me access to 100MB+ at the other end of the FDDI.
Cheers, Mark.
--
http://www.wickensonline.co.uk
http://hecnet.eu
http://declegacy.org.uk
http://retrochallenge.net
https://twitter.com/#!/%40urbancamo