On Oct 10, 2013, at 8:44 AM, Brian Hechinger <wonko at 4amlunch.net> wrote:
On Thu, Oct 10, 2013 at 02:36:04PM +0200, Peter Lothberg wrote:
...
For example Cisco/Cabletron/Crescendo had Ethernet switches with an
FDDI uplink, that you could use.
DEC made one as well, it was that large modular thingie. I used to have
one. Never got it powered on as it was enormous.
DEC made at least three.
The original one is the DECbridge 500, a 3U rack-mounted device, 3 or 4 cards, 3 Ethernets (10 Mb/s) to FDDI. See the DTJ issue I mentioned in my previous note.
The other two: the DECbridge 900, which plugged into the 900 series modular enclosure. It's about the size of a 400-page hardcover book, FDDI to 6 Ethernet ports, 60,000 packets per second using a MC68040 at 25 MHz. I'm still proud of that. (I wrote the "fast path" packet forwarding firmware.)
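[Editorial aside: a back-of-envelope check of that forwarding figure. The clock rate and packet rate are the ones quoted above; the per-packet cycle budget is my own arithmetic.]

```python
# Cycle budget per forwarded packet on the DECbridge 900 fast path,
# using the figures quoted in the message above: a 25 MHz MC68040
# sustaining 60,000 packets per second.
clock_hz = 25_000_000
packets_per_sec = 60_000

cycles_per_packet = clock_hz / packets_per_sec
print(f"{cycles_per_packet:.0f} CPU cycles per forwarded packet")  # prints 417
```

Roughly 417 cycles to receive, look up, and retransmit each packet, which gives a sense of why the fast path was something to be proud of.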
Then there is the GIGAswitch, a large modular chassis with lots of line cards: some FDDI, some Ethernet, possibly some with other stuff I don't remember.
paul
On Thu, 10 Oct 2013, Brian Hechinger wrote:
On Thu, Oct 10, 2013 at 10:21:37AM -0400, Cory Smelosky wrote:
If you could use SAS as pure SCSI, SAS/SCSI converters are dirt cheap. ;)
Links? It's an obnoxious thing to search for. :)
Oops. I meant SAS/SATA. They're dirt cheap as you just grab the right SFF cable. ;)
This is patently untrue. The 146G disks in my 4000/90 prove that. :)
I'd be surprised if you weren't using an adapter to attach an SCA
drive. I tried that but the VAX didn't like seeing my 36G 10k RPM
drive. I also kept bumping it and shorting out the adapter on the
case...
I did, but that shouldn't make any difference. It just adapts between
different physical connectors. There is no magic in a 50-pin/SCA
adapter.
I must've had IDs and termination set wrong then.
IIRC, solid-state SCSI drives existed.
They did, and they still exist. They tend to be industrial-grade stuff,
however, and also tend to be silly expensive.
Yeah.
It's unfortunate, really. :(
Yeah. :(
-brian
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
All this talk of FDDI makes me want to go get the 4000/500s. I have a
pair of QBus FDDI cards. I suppose I would have to make the Octane a
router between FDDI and Ethernet. :)
-brian
On Thu, Oct 10, 2013 at 02:26:23PM +0000, Paul_Koning at Dell.com wrote:
On Oct 10, 2013, at 8:44 AM, Brian Hechinger <wonko at 4amlunch.net> wrote:
On Thu, Oct 10, 2013 at 02:36:04PM +0200, Peter Lothberg wrote:
FDDI/CDDI is a dual-ring token ring bus, with 4470-byte MTU packets;
it has 802.-- frames. DEC had a mode where you turned the token off
and used it for ptp full duplex.
I didn't know about the ptp thing. That's nifty.
For example Cisco/Cabletron/Crescendo had Ethernet switches with an
FDDI uplink, that you could use.
DEC made one as well, it was that large modular thingie. I used to have
one. Never got it powered on as it was enormous.
But you need nothing to build a FDDI ring; it's an A and a B ring, you
can just plug the cards together with fiber patch cables.
Unless you have one of those obnoxious single attached station cards.
-brian
Ah, time to dust off some dormant memories. I used to work on the FDDI standard at DEC; this stuff is familiar.
"CDDI" is marketing slang; it is not standard terminology.
FDDI is different from Ethernet; the MAC layer protocol is completely unrelated. It's quite similar to token bus (802.4), actually. (The only thing it has in common with 802.5 is the words "token" and "ring" -- apart from that, the two protocols operate completely differently.)
FDDI connections have a "type", which can be "A", "B", "S", or "M". "M" ports exist on concentrators. NICs will have A and B ports, if there are two connectors on the NIC ("dual attached station" or DAS) or an S port, if there is one connector (single attached station or SAS).
You have a number of topology options.
If you have DAS NICs, you can wire any number of them together in a "dual ring". That's the original FDDI topology, before DEC forced concentrators to be added into the standard. To do that, connect the NICs in circular fashion, A to B. Connected that way, loss of any single connection is handled transparently.
If you have SAS NICs, you can connect a pair of them (S to S).
If you have DAS NICs plus one or two SAS, wire the DAS NICs A-B in a chain (essentially a dual ring cut open). Then connect a SAS to each end (or just to one end, if you have one SAS). There is no redundancy in this config.
Finally, if you have any concentrators, you can build a tree config out of those. If so, the M connectors connect to the NIC connectors (A, B, or S), and the concentrator's A and B connectors either connect to M ports higher up in the hierarchy, or in a dual ring if you have a ring of concentrators, or nowhere if you're at the root of a tree.
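[Editorial aside: the port-pairing rules above condense into a small table. The sketch below is my own simplified reading of just the pairings described in this message; the actual SMT standard has more cases and nuance, e.g. pairings it permits but flags as undesirable.]

```python
# Simplified FDDI port-pairing check, derived only from the topology
# rules described above (the real SMT standard is more nuanced):
#   A-B     : dual ring, or an open DAS chain
#   A-M/B-M : DAS station or concentrator uplinked into a concentrator M port
#   S-M     : SAS hanging off a concentrator
#   S-S     : two SAS back to back
#   S-A/S-B : SAS terminating an open DAS chain
ALLOWED_PAIRS = {
    frozenset("AB"),
    frozenset("AM"), frozenset("BM"),
    frozenset("SM"),
    frozenset("S"),   # S-S collapses to a one-element set
    frozenset("SA"), frozenset("SB"),
}

def valid_connection(port1: str, port2: str) -> bool:
    """True if plugging port1 into port2 matches one of the pairings above."""
    return frozenset((port1, port2)) in ALLOWED_PAIRS

print(valid_connection("A", "B"))  # prints True
print(valid_connection("M", "M"))  # prints False
```

Note that M-M never appears: M ports only face station or uplink ports, which is what makes the concentrator topology a tree.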
FDDI fiber connectors are standardized but different from the fiber connectors used by other networks. The connector is fairly large: flat and rectangular, with a shroud covering the fiber ends. Connectors are keyed to match the port type, though you can omit the keys and just wire carefully. Standard fiber is 62.5/125 micrometer, but 50/125 also works.
You can find more in the Digital Technical Journal, Vol. 3 No. 2, Spring 1991. Or the relevant ANSI/ISO standards if you are a masochist. The topology rules are described fairly well in the concentrator article in DTJ, but their full glory can be found in the FDDI "SMT" (station management) standard.
paul
On Thu, 10 Oct 2013, Brian Hechinger wrote:
On Thu, Oct 10, 2013 at 10:08:29AM -0400, Cory Smelosky wrote:
There has been very little success with IDE to SCSI converters on this
age of machine.
Hmm. Bad IDE to SCSI converters?
That's my theory. Anything having to do with IDE is hit or miss,
though. It's a terrible interface. :)
There are SCSI/SATA converters, but the cheapest ones I found are US$250,
which is extremely pricey. :(
If you could use SAS as pure SCSI, SAS/SCSI converters are dirt cheap. ;)
Remember also that even 4000/90 vintage VAXstations generally have an
upper limit of 18GB.
Uh? That's got to be a limit in VMS in that case. I can't see how
the hardware would have that limit.
This is patently untrue. The 146G disks in my 4000/90 prove that. :)
I'd be surprised if you weren't using an adapter to attach an SCA drive. I tried that but the VAX didn't like seeing my 36G 10k RPM drive. I also kept bumping it and shorting out the adapter on the case...
IIRC, solid-state SCSI drives existed.
They did, and they still exist. They tend to be industrial-grade stuff,
however, and also tend to be silly expensive.
Yeah.
-brian
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
On 10/10/2013 15:08, Cory Smelosky wrote:
On Thu, 10 Oct 2013, Johnny Billquist wrote:
On 2013-10-10 09:46, Mark Wickens wrote:
On 10/10/2013 08:17, Daniel Soderstrom wrote:
Has anyone tried this? Running my VAXstation non-stop reminds me how
much noise these old drives make.
Has anyone tried SSD drives? They must be good for the PSU in terms of heat
and current draw.
Daniel
Sent from my iPhone
There has been very little success with IDE to SCSI converters on this
age of machine.
Hmm. Bad IDE to SCSI converters?
Remember also that even 4000/90 vintage VAXstations generally have an
upper limit of 18GB.
Uh? That's got to be a limit in VMS in that case. I can't see how the hardware would have that limit.
I don't recall anyone even getting an IDE drive to work, let alone an SSD.
IDE and SSD are two completely unrelated things as such. However, if the SSD has an IDE interface, then you obviously need the IDE to SCSI converter.
IIRC, solid-state SCSI drives existed.
Would love to be proved wrong however! I'm surprised someone hasn't
written a software based SCSI drive emulator the same way that you get
floppy emulators.
Probably mostly because of speed issues. You have some very tight timing requirements, and a SCSI interface runs way faster than a floppy.
Which is a good thing. ;)
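[Editorial aside: a rough sense of the speed gap being described. The rates below are typical nominal figures of my own choosing, not from the thread: a HD floppy interface streams about 500 kbit/s, while Fast SCSI-2 moves 10 MB/s.]

```python
# Rough data-rate gap between the interfaces a drive emulator must keep
# up with. Figures are typical nominal rates (my assumption, not from
# the thread): HD floppy ~500 kbit/s, Fast SCSI-2 ~10 MB/s.
floppy_bytes_per_sec = 500_000 / 8       # 500 kbit/s -> 62,500 B/s
fast_scsi_bytes_per_sec = 10_000_000     # 10 MB/s

ratio = fast_scsi_bytes_per_sec / floppy_bytes_per_sec
print(f"SCSI must be serviced roughly {ratio:.0f}x faster")  # prints 160x
```

Two orders of magnitude in bus rate, on top of SCSI's command/response protocol, is a plausible reason floppy emulators appeared long before SCSI ones.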
Consider running the machine diskless, booted off the network with
off-node disks.
That definitely also works.
Johnny
I believe Nemonix have both solid state and new drop-in replacements for VAX drives, but I'm quite sure you'll be into mega-$$$.
--
http://www.wickensonline.co.uk
http://hecnet.eu
http://declegacy.org.uk
http://retrochallenge.net
https://twitter.com/#!/%40urbancamo
On Thu, 10 Oct 2013, Mark Wickens wrote:
I guess this is sort of on-topic, given I am talking about networking.
The Alpha 3000/800 I have contains a PMAF-FU card, which is a DEC FDDIcontroller TURBOchannel card. It has a CAT 5 copper FDDI interface. I understand that it will support networking at 100 Mb/s.
I have no knowledge of FDDI. I believe that I would need a concentrator to make use of this connection. Is there such a device which I could use to bridge the 3000/800 using FDDI to a standard Ethernet network?
Yeah. I had an ethernet switch that could take FDDI until a fan failed, the PSU got covered in weird goo, and it began arcing and tripping breakers. The switch powered on fine a day or so later though...
As a second question, I have a number of DEC/Compaq/HP PCI FDDI cards which were given to me. They have optical connectors. If I were to make use of these would I need another concentrator, or is there a box which will support both media with plug in modules, for example.
If you get a modular concentrator I /think/ you can get optical and copper modules in it at once.
Thanks for the help,
Mark.
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects