On Thu, 10 Oct 2013, Brian Hechinger wrote:
On Thu, Oct 10, 2013 at 07:18:03PM +0200, Sampsa Laine wrote:
On 10 Oct 2013, at 19:12, Brian Hechinger <wonko at 4amlunch.net> wrote:
On Thu, Oct 10, 2013 at 06:33:15PM +0200, Sampsa Laine wrote:
Sounds like a business opportunity, basically build an enclosure, get 5-6 consumer SSDs, RAID6 them, expose a SCSI/SAS/eSATA interface to the host. If one of the drives breaks/runs out of write cycles, the box indicates the slot and we provide a new SSD for the slot.
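As a back-of-the-envelope for the proposed box (the drive count and sizes below are made-up examples), RAID6 always spends two drives' worth of capacity on parity, whatever the array size:

```python
def raid6_usable(drive_count: int, drive_size_gb: int) -> int:
    """RAID6 stores two parity blocks per stripe, so two drives'
    worth of capacity is lost regardless of array size."""
    if drive_count < 4:
        raise ValueError("RAID6 needs at least 4 drives")
    return (drive_count - 2) * drive_size_gb

# Six 256 GB consumer SSDs, as in the proposed enclosure:
print(raid6_usable(6, 256))  # -> 1024 GB usable, survives any 2 drive failures
```

So five or six small consumer SSDs gets you a usable volume bigger than most of the drives these machines shipped with, while tolerating two simultaneous failures.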
FreeBSD can do this easily enough.
In most cases (like my 4000/90, for example) that's way overkill. I just
want to be able to plug something in that would replace the internal
disks.
If I were doing this, I still think my 4000/90 is going to get hooked
up to a little MSA of some sort.
I was more thinking about selling this to "enterprise" users, not hobbyists :)
The device would be packaged as a black box with no configuration etc. needed - it just looks like a SCSI drive to the bus.
It could provide more space, redundancy and speed for lower cost if built right.
It would be very surprising if something like this didn't already exist
in the "enterprise" market and cost a lot. :)
It reminds me of DigitalOcean. "SSD-backed cloud VPS!!!"
They've had at least TWO incidents where customers could run tcpdump and see other customers' packets. And they gave me free credit despite me owing them $0.40 - thanks to that $5 credit I will now owe them $0.40 permanently.
-brian
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
I remember "height unit", but that is very likely my failing memory :-)
From: Dave McGuire
Sent: Thursday, 10 October 2013 19:12
To: hecnet at Update.UU.SE
Reply-To: hecnet at Update.UU.SE
Subject: Re: [HECnet] FDDI advice
If you mean an RU, a Rack Unit...that's 1.75".
-Dave
On 10/10/2013 01:08 PM, Hans Vlems wrote:
> A hu is two inches, right?
>
> *From:* Brian Hechinger
> *Sent:* Thursday, 10 October 2013 16:35
> *To:* hecnet at Update.UU.SE
> *Reply-To:* hecnet at Update.UU.SE
> *Subject:* Re: [HECnet] FDDI advice
>
>
> On Thu, Oct 10, 2013 at 02:32:26PM +0000, Paul_Koning at Dell.com wrote:
>> >
>> > On Oct 10, 2013, at 8:44 AM, Brian Hechinger <wonko at 4amlunch.net> wrote:
>> >
>> >> On Thu, Oct 10, 2013 at 02:36:04PM +0200, Peter Lothberg wrote:
>> >>> ...
>> >>> For example cisco/cabletron/Crescendo had ethernet switches with a
>> >>> FDDI uplink, that you could use.
>> >>
>> >> DEC made one as well, it was that large modular thingie. I used to have
>> >> one. Never got it powered on as it was enormous.
>>
>> The original one is the DECbridge 500, a 3U rack mounted device, 3 or
>> 4 cards, 3 Ethernets (10 Mb/s) to FDDI. See the DTJ issue I mentioned in
>> my previous note.
>>
>> The other two: the DECbridge 900, which plugged into the 900 series
>> modular enclosure. It's about the size of a 400-page hardcover book,
>> FDDI to 6 Ethernet ports, 60,000 packets per second using an MC68040 at
>> 25 MHz. I'm still proud of that. (I wrote the "fast path" packet
>> forwarding firmware.)
>
> Neat!
>
>> Then there is the Gigaswitch, a large modular chassis with lots of
>> line cards, some FDDI, some Ethernet, possibly some with other stuff I
>> don't remember.
>
> I think this is the one I had. Big modular thing. Maybe (and going by
> really fuzzy memory here) 8U high?
>
> -brian
--
Dave McGuire, AK4HZ
New Kensington, PA
We ran DECnet Phase IV across a Cisco MGS router to token ring; the rest of the net was bridged. Cisco was a new name in 1992 (?). The MGS ran version 7, IIRC.
Reading all those DECnet addresses bit-reversed on token ring was weird. AA-00- becomes 55-00- on TR.
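The AA to 55 flip is a bit reversal within each byte (token ring transmits the most significant bit of each byte first, the opposite of Ethernet). A quick sketch of the conversion (the sample address below is just an illustration):

```python
def bit_reverse_byte(b: int) -> int:
    """Reverse the bit order within one byte (canonical <-> token ring form)."""
    result = 0
    for i in range(8):
        if b & (1 << i):
            result |= 1 << (7 - i)
    return result

def to_token_ring(mac: str) -> str:
    """Convert a canonical MAC address string to its non-canonical
    (token ring wire order) form, byte by byte."""
    return "-".join("%02X" % bit_reverse_byte(int(octet, 16))
                    for octet in mac.split("-"))

# The DECnet prefix AA-00-04-00-... as it appears on token ring:
print(to_token_ring("AA-00-04-00-0D-04"))  # -> 55-00-20-00-B0-20
```

Since 0xAA is 10101010 in binary, reversing the bits gives 01010101, i.e. 0x55 - exactly the weirdness described above.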
From: Paul_Koning at Dell.com
Sent: Thursday, 10 October 2013 19:09
To: hecnet at Update.UU.SE
Reply-To: hecnet at Update.UU.SE
Subject: Re: [HECnet] Stupid question about areas...
On Oct 10, 2013, at 12:39 PM, Hans Vlems <hvlems at zonnet.nl> wrote:
> Bridges create an extended LAN and DECnet areas may use the entire LAN, all segments that are not behind a DECnet router.
> The LAN doesn't have to be just ethernet. As long as bridges are used to connect ethernet to FDDI, or ATM or token ring then DECnet areas can easily be used across that network.
Mostly yes. When mixing LAN types, the differences in max packet size can make trouble. The simplest answer is to use the Ethernet limit.
Also, token ring is not compatible with real LANs because it doesn't use real multicast. (Well, it could in theory, but IBM insisted on being incompatible.) There's a DECnet variant that copes with this, but regular Phase IV will not work over token ring. Any real LAN, including oddballs like token bus (802.4) will be fine.
paul
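As background to the point above: Phase IV maps DECnet node addresses directly onto LAN MAC addresses, putting the 16-bit address (area * 1024 + node) in little-endian order after the fixed AA-00-04-00 prefix. A small sketch (the area/node values are arbitrary examples):

```python
def phase4_mac(area: int, node: int) -> str:
    """Derive the MAC address a DECnet Phase IV node sets for itself,
    given its area.node address."""
    if not (1 <= area <= 63 and 1 <= node <= 1023):
        raise ValueError("area must be 1-63, node 1-1023")
    addr = area * 1024 + node            # 16-bit DECnet address
    low, high = addr & 0xFF, addr >> 8   # stored little-endian on the wire
    return "AA-00-04-00-%02X-%02X" % (low, high)

# Node 1.13 (area 1, node 13 -> address 1037 = 0x040D):
print(phase4_mac(1, 13))  # -> AA-00-04-00-0D-04
```

This station-address rewriting, plus the reliance on true multicast for router and endnode hellos, is what plain bridged token ring can't accommodate.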
While we're on FDDI... I have an FDDI NIC free to a good home. It's a DEFPA-DA -- dual attached fiber, PCI (5V 32 bit). I have no way to test it, but the person who gave it to me believed it to be operational. No drivers or any other software.
First response gets it (email with shipping info direct to me, please).
paul
Hmm, then that's not what I had. I could swear it didn't look like a
DEChub 900 though.
-brian
On Thu, Oct 10, 2013 at 07:17:51PM +0200, Hans Vlems wrote:
No, it is one frame, cannot be split. There is a half-sized GS but it has its boards mounted horizontally.

From: Brian Hechinger
Sent: Thursday, 10 October 2013 17:25
To: hecnet at Update.UU.SE
Reply-To: hecnet at Update.UU.SE
Subject: Re: [HECnet] FDDI advice

On Thu, Oct 10, 2013 at 03:19:53PM +0000, Paul_Koning at Dell.com wrote:
>
>> I think this is the one I had. Big modular thing. Maybe (and going by
>> really fuzzy memory here) 8U high?
>
> I was going to say "that sounds right" based on my memory of seeing one gathering dust around here. But the picture here: http://www.global-itcorp.com/products/digital-dec/networking/gigaswitch/ shows a much taller enclosure, half line card space and half power supply. Each section does look like 8U or so.

Hmmm. I wonder if those can be separated. I wonder if that also means
mine never would have worked. I don't remember having the bottom half.

That was also 10 years ago, so who knows, maybe I did have the whole
thing. :)

> Some searching turns up refurbished Gigaswitch modules. Some are pretty cheap, but it looks like those are ATM ones; the FDDI ones I see quoted are more expensive. Perhaps because FDDI was fairly successful at least for a short time, while ATM (as a LAN) was an utter failure.

The *only* thing I even needed it to do was bridge FDDI/FastEthernet so
it just ended up not being worth the effort.

It's not a small switch. :)

-brian
On Thu, 10 Oct 2013, Brian Hechinger wrote:
On Thu, Oct 10, 2013 at 01:19:02PM -0400, Dave McGuire wrote:
On 10/10/2013 01:16 PM, Hans Vlems wrote:
FDDI was the answer for production plants that required 100% uptime.
Only once did FDDI let me down and make me go back to work at 3:30 am,
the worst time to wake up. One of the boards in a GS/FDDI failed,
isolating two plants.
I did manage to explode a power supply in a GS. Made one hell of a bang;
fortunately that part was redundant so the net stayed up.
Yuck!
Compared to Fast Ethernet, I prefer FDDI.
Same here.
Too expensive for private or hobbyist use though.
Hardly. You just need the right connections. I was nearly 100% FDDI
on my home network in the mid-1990s...didn't take all that much money.
I didn't have much! ;)
Hell, when I got into FDDI in the early 2000s it was cheaper than
FastEthernet!
-brian
I was contemplating making my house 4-way redundant: MoCA, FE, FibreChannel, and FDDI. All to link to the basement.
If I had the cables I woulda done it, too...
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
On Thu, Oct 10, 2013 at 01:19:30PM -0400, Dave McGuire wrote:
On 10/10/2013 01:16 PM, Brian Hechinger wrote:
Going and rescuing all that stuff is something I really want to do one
day. Not anytime soon though. :(
Save me one! ;)
If that stuff is indeed still in that barn, you can have most of it. :)
If memory serves, a good bit of that is mine. ;)
I'll take what isn't yours. ;)
Nah, you can have Dave's stuff too. :)
I'm going to fart on your head. ;)
FINE. You can have your stuff. :)
-brian
On Thu, Oct 10, 2013 at 01:10:48PM -0400, Dave McGuire wrote:
On 10/10/2013 12:55 PM, Brian Hechinger wrote:
All this talk of FDDI makes me want to go get the 4000/500s. I have a
pair of QBus FDDI cards. I suppose I would have to make the Octane a
router between FDDI and ethernet. :)
Are any of those 4000/500s mine? ;)
Nope. I picked these up from Villanova University several years ago.
I was interviewing at a place recently and the subject of VAXen came up.
Interviewer: "I remember having an account on a pair of 4000/500s in a
cluster when I was at Villanova years ago."
Me: "Those machines are at my house now!"
Interviewer: "NO WAY!"
That's cool. :-)
It was. I don't remember exactly why I didn't end up working there, but
I seem to remember them not wanting to pay me enough. :)
I had quite a few machines up
here in your old place at one point. Don't worry, I'd only be after one
of them, and even that is low-priority at this point.
Going and rescuing all that stuff is something I really want to do one
day. Not anytime soon though. :(
Well, let me know when you're able to. Even if you can't move it all
down there, if you can just come up by car, you and I can head over
there with a truck, and the stuff (even yours) can sit here for awhile.
At least then it won't be in a barn, and won't be at risk in any way.
We could probably do it over a weekend trip, even on a leisurely schedule.
Yeah, that's definitely something we should do. There's no way I can
move the majority of that stuff down here. A lot of it I'll just never
run and so will likely put up to be given away.
Still, it won't be until at least early next year that I can even
consider this. Time and money are both really tight right now.
-brian