Sampsa Laine wrote:
Is it possible to run a cluster over both ethernet and FDDI?
Basically, I'd like to connect Machine A -> B using ethernet and B -> C using FDDI - is this possible?
Yes, it works just fine. The only tricky bit comes when only machine C is up and you want to clusterboot machine A.
My configurations usually involve some combination of ethernet and DSSI.
Peace... Sridhar
Zane H. Healy wrote:
On Tue, 29 Sep 2009, Kari Uusimäki wrote:
first when you do a disk shadow copy and second when you boot a satellite over the Ethernet.
This is where I've always wanted to play with FDDI. I'd love to boot a
MicroVAX II off of either a high end VAX or an Alpha via a FDDI link. I'm
curious as to which would be faster. Running the MicroVAX II off of native
disks, or over the FDDI link.
Depends on what disks and controller you have on the MVII of course...
If it's an RQDX3, then just about anything else will be faster. :-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Yes, indeed, the Qbus is definitely the bottleneck here. The same applies if you use SCSI disks, say through a KFQSA and an HSD* controller.
The KZQSA is very limited, IIRC. I'm not aware of any support for SCSI disks (except for CD drives) on it.
Kari
Zane H. Healy wrote:
That's part of what makes the idea interesting. Of course I suspect you'd
saturate your Q-Bus backplane with it. IIRC, the backplane can handle
roughly 3MB/s.
I suspect such a solution might be faster than a MicroVAX II with SCSI
drives. I've not tested the theory: while PCI FDDI cards are cheap
enough, the Q-Bus ones were not, the last time I checked.
Zane
On Tue, 29 Sep 2009, Kari Uusimäki wrote:
About the comparison between FDDI and RD* disks: theoretically FDDI should be faster, because it is capable of delivering about 12MB/s, compared to the RD* disks, which might be able to deliver about 1MB/s.
Kari
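To put rough numbers on that comparison (back-of-the-envelope, not measured): FDDI runs at 100 Mbit/s, i.e. about 100/8 = 12.5 MB/s of raw bandwidth, while the MFM-era RD-series drives transfer about 5 Mbit/s off the platter, well under 1 MB/s sustained through an RQDX3. On a MicroVAX II the Q-bus itself tops out around 3.3 MB/s in block mode, so that, not 12.5 MB/s, would be the practical ceiling for an FDDI adapter there.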
Yup, I am aware that I either need to go Itanium + Alpha or VAX + Alpha.
As my only working VAX at the moment is a SIMH box that mostly does HECnet routing, I'll probably cluster the two Alphaservers (CHIMP + CHIMPY) and the Itanium box (RHESUS).
Sampsa
On 29 Sep 2009, at 21:08, Zane H. Healy wrote:
I haven't run a cluster in close to a decade, but when I was, I had a
combination of 10Mbit and 100Mbit NICs. I think it is safe to assume that
the heaviest traffic you'll see for your use is access to drives on another
cluster member.
What architectures are you planning to cluster? I gather you want an
Itanium system in there. If so, I don't believe you can have a VAX in your
cluster. I don't know if you can do VAX/Alpha/Itanium, but I do know it
isn't supported.
Zane
On Tue, 29 Sep 2009, Sampsa Laine wrote:
Yeah, you guys are right - mind you, the FDDI stuff would've been sort of fun to play with, maybe later.
How much bandwidth can I expect a cluster to use? (I know this depends on the use, of course, but assume a fairly light load, no crazy 2000-user apps and DBs.) Is a 100 Mbps NIC enough?
Sampsa
On 29 Sep 2009, at 20:50, Kari Uusimäki wrote:
Why don't you leave TCP/IP as it is now?
You don't have to dedicate the Ethernet just to cluster traffic (that is a feature of those inferior creations which some people call [*nix] clusters - even if they aren't). You can run any traffic on the same Ethernet interface as cluster traffic. Just remember that if you connect the machines with an Ethernet switch, there mustn't be any protocol or MAC filtering which interferes with the cluster traffic.
DECnet routing has nothing to do with TCP/IP routing. They are completely separate protocols and live their own life in a VMS machine.
I suggest you run everything on the Ethernet and forget about the FDDI.
Kari
Sampsa Laine wrote:
I've come up with an alternative solution:
I'll use ethernet for the cluster interconnect as I have ethernet ports on all the machines.
However, this leaves CHIMPY without an ethernet-based TCP/IP connection. So my question is this:
Can I hook up two machines (running VMS) with FDDI and have the second machine route packets onto the ethernet segment for me? This wasn't immediately clear from the documentation. I assume that if the second machine is set up as an L1 router, DECnet over this setup will be possible?
Sampsa
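A minimal sketch of what the DECnet side of that could look like under Phase IV, assuming the stack on the second machine supports routing and the circuits show up as, say, MFA-0 (FDDI) and SVA-0 (Ethernet) - the actual circuit names depend on the adapters:

$ MCR NCP
NCP> DEFINE EXECUTOR TYPE ROUTING IV    ! make this node a level 1 router
NCP> DEFINE CIRCUIT MFA-0 STATE ON      ! FDDI circuit (example name)
NCP> DEFINE CIRCUIT SVA-0 STATE ON      ! Ethernet circuit (example name)
NCP> EXIT
$ @SYS$MANAGER:STARTNET                 ! restart DECnet to pick up the change

Note this only routes DECnet; it does nothing for TCP/IP, which is a separate question.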
On 29 Sep 2009, at 19:50, Kari Uusimäki wrote:
Unfortunately that configuration violates the rule of full connectivity: all cluster members must have direct connectivity to all other members.
But if you add an FDDI-to-Ethernet bridge, you should get a "legal" configuration.
Kari
The Alpha should be faster, provided it doesn't happen to be, say, a DEC3000-300L. :-)
I assume the DEFQA could be installed into a MicroVAX II, but I've never seen such a setup.
It is fully possible to run VAX (V7.3), Alpha (V8.3) and Itanium (V8.3) in the same cluster. It is not supported, but it works.
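For anyone trying that mixed setup, the LAN-cluster parameters are the same on all three architectures. A sketch of the relevant MODPARAMS.DAT entries - the vote counts are illustrative, not from this thread:

VAXCLUSTER = 2          ! always participate in a cluster
NISCS_LOAD_PEA0 = 1     ! load PEDRIVER, the LAN cluster port driver
EXPECTED_VOTES = 3      ! total votes expected in the full cluster
VOTES = 1               ! this node's vote

Then run @SYS$UPDATE:AUTOGEN GETDATA REBOOT on each node so the new parameters take effect.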
Most VAXen used to run cluster traffic through a 10Mbit/s Ethernet interface, with tens to hundreds of users. It shouldn't be a bottleneck.
Two scenarios where there is heavy traffic on the cluster interconnect: first, when you do a disk shadow copy, and second, when you boot a satellite over the Ethernet.
Kari
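If you want to see what the cluster traffic actually amounts to on a given interface, V7.3 and later ship the SCACP utility; a quick sketch, from memory (output format varies by version):

$ MCR SCACP
SCACP> SHOW LAN_DEVICE      ! per-device cluster (SCS) traffic counters
SCACP> SHOW CHANNEL         ! per-channel state between cluster nodes
SCACP> EXIT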
Didn't think about that actually :)
I just figured it would be better to dedicate interfaces and a VLAN to clustering.
The HECnet bridge doesn't bridge clustering traffic, I hope?
Sampsa
On 29 Sep 2009, at 20:44, Zane H. Healy wrote:
Why would this leave CHIMPY without an ethernet-based TCP/IP connection? You can use an ethernet interface for clustering, DECnet, and TCP/IP at the same time.
Zane
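One quick way to convince yourself of this on a running node - stock commands, assuming HP TCP/IP Services (MultiNet and TCPware spell things differently):

$ MCR NCP SHOW KNOWN CIRCUITS    ! DECnet circuit on the shared interface
$ TCPIP SHOW INTERFACE           ! IP running on the same device
$ SHOW CLUSTER                   ! cluster membership (SCS over PEDRIVER)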
They aren't supported on Integrity.
Only Ethernet LANs are.
Sampsa Laine wrote:
Hmm, not ideal.
I do have 3 FDDI cards, do you know if the DEFPA cards work on OpenVMS for Integrity boxes?
Sampsa
Yeah. Saw that afterwards. I saw that you hadn't specified a port number, but if the problem had been with running simh, that detail would not have been relevant. :-)
Johnny
Sampsa Laine wrote:
It was a misconfiguration of the bridge - I forgot the port number, so the bridge tried to open an interface named silverback.sampsa.com, which obviously didn't work.
Sampsa
On 26 Sep 2009, at 23:18, Johnny Billquist wrote:
Sampsa Laine wrote:
I'm setting up a Gentoo VM that runs SIMH-VAX and keep getting the following error message:
Error opening device.
The relevant part of my bridge.conf reads as follows:
[bridge]
local eth0
sampsa silverback.sampsa.com
Any ideas?
Not sure what you are doing. Is the problem coming from running simh, or from running the bridge program???
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt at softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Yup, that was it, forgot the port number after the IP. Works now, cheers.
Sampsa
On 26 Sep 2009, at 22:53, gerry77 at mail.com wrote:
If I remember correctly, you must tell it a port number for any remote bridge, or it will consider "silverback.sampsa.com" and such as a local device, which obviously does not exist. Try appending ":4711" or whatever at the end of the second line :)
G.
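For the record, the fix is one token in bridge.conf: a remote bridge entry needs an explicit UDP port after the host name, otherwise the program treats the name as a local device. A sketch - 4711 is just the example port from above; use whatever the far end actually listens on:

[bridge]
local eth0
sampsa silverback.sampsa.com:4711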