On 6 Mar 2013, at 15:09, Gregg Levine <gregg.drwho8 at gmail.com> wrote:
On Wed, Mar 6, 2013 at 3:07 PM, Cory Smelosky <b4 at gewt.net> wrote:
On 6 Mar 2013, at 15:05, Gregg Levine <gregg.drwho8 at gmail.com> wrote:
On Wed, Mar 6, 2013 at 3:02 PM, Dave McGuire <mcguire at neurotica.com> wrote:
On 03/06/2013 02:13 PM, Cory Smelosky wrote:
Um
Linux kernels like changing often is more my point. ;)
New kernel releases seldom bring down my production systems. ;)
That and an inability to live-patch the kernel safely.
This doesn't either.
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA
Hello!
This does not explain why there are four Yeti in those areas trying to
do that...
Sure it does.
"How many yetis does it take to build a kernel?"
Did you see the odd looking yellow car outside your place today?
-----
Gregg C Levine gregg.drwho8 at gmail.com
"This signature fought the Time Wars, time and again."
Hello!
They aren't trying to build it, they are trying to crash the system,
and from inside the building.
Ahh. Well, they are yetis. Why do they need to crash the kernel when they can just trample the hardware?
--
-----
Gregg C Levine gregg.drwho8 at gmail.com
"This signature fought the Time Wars, time and again."
On 03/06/2013 03:05 PM, Gregg Levine wrote:
On Wed, Mar 6, 2013 at 3:02 PM, Dave McGuire <mcguire at neurotica.com> wrote:
On 03/06/2013 02:13 PM, Cory Smelosky wrote:
Um
Linux kernels like changing often is more my point. ;)
New kernel releases seldom bring down my production systems. ;)
That and an inability to live-patch the kernel safely.
This doesn't either.
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA
Hello!
This does not explain why there are four Yeti in those areas trying to
do that...
Did you see the odd looking yellow car outside your place today?
That's MY car!
--
Dave McGuire, AK4HZ
New Kensington, PA
On 6 Mar 2013, at 15:02, Dave McGuire <mcguire at neurotica.com> wrote:
On 03/06/2013 02:13 PM, Cory Smelosky wrote:
Um
Linux kernels like changing often is more my point. ;)
New kernel releases seldom bring down my production systems. ;)
Good thing the linux kernel doesn't magically patch itself. ;)
That and an inability to live-patch the kernel safely.
This doesn't either.
Okay, that's just maybe a dream of mine. ;)
-Dave
--
Dave McGuire, AK4HZ
New Kensington, PA
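A side note on the live-patching point: short of something like Ksplice, the practical question is just whether the kernel you booted is still the newest one installed on disk. Here is a minimal sketch of that check, assuming an ordinary Linux layout with kernel images under /boot; the script and paths are illustrative, not anything the posters were actually running:

#!/bin/sh
# Compare the running kernel version against the newest kernel image on
# disk, to see whether a reboot (or a live patch) is still outstanding.
running=$(uname -r)
newest=$(ls /boot/vmlinuz-* 2>/dev/null | sed 's|.*/vmlinuz-||' | sort -V | tail -n 1)
echo "running kernel: $running"
echo "newest on disk: $newest"
if [ "$running" != "$newest" ]; then
    echo "the booted kernel is older than the newest installed one"
fi

On a box that only gets patched by rebooting, the two drift apart between maintenance windows; live patching is what would close that gap without the reboot.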
On 6 Mar 2013, at 14:35, Brett Bump <bbump at rsts.org> wrote:
On Wed, 6 Mar 2013, Cory Smelosky wrote:
FreeBSD 4.10 was a good release. It was far more stable than 5.x. It was used long past its use-by date. ;)
And still is (not for work, but on the RSTS hobby domain):
Nice. ;)
How often do you get break-in attempts? Also, what hardware is it running on?
mail# uptime
12:38PM up 65 days, 4:15, 1 user, load averages: 0.00, 0.00, 0.00
mail# uname -a
FreeBSD mail.rsts.org 4.10-RELEASE FreeBSD 4.10-RELEASE #1: Tue Mar 7 20:59:05 MST 2006 bbump at mail.rsts.org:/usr/src/sys/compile/Firewall i386
mail# date
Wed Mar 6 12:38:46 MST 2013
Brett
On 6 Mar 2013, at 14:15, Brett Bump <bbump at rsts.org> wrote:
On Wed, 6 Mar 2013, Cory Smelosky wrote:
On 6 Mar 2013, at 14:03, Brett Bump <bbump at rsts.org> wrote:
Um?
Linux kernels like changing often is more my point. ;)
That and an inability to live-patch the kernel safely.
Well, I guess I would have to agree with Dave on that point. What is the
point of replacing it if it does exactly what you want? The FreeBSD boxen
that we were running were V4.10. No, it would not do 64-bit, no, it was not
the latest and greatest. But the systems were fully tested long before
they were put into operation, and we had a specific goal in mind. These
were the production boxen so they were not used for new ideas. The new
machines that we put in are expected to live as they are for about 5 years
(give or take a year for some measure of budget).
FreeBSD 4.10 was a good release. It was far more stable than 5.x. It was used long past its use-by date. ;)
Brett
On Wed, 6 Mar 2013, Cory Smelosky wrote:
----- Original Message -----
| From: "Dave McGuire" <mcguire at neurotica.com>
| To: hecnet at Update.UU.SE
| Sent: Wednesday, 6 March, 2013 12:11:41 PM
| Subject: Re: [HECnet] Vt100 tester
|
| On 03/06/2013 07:29 AM, Jerome H. Fine wrote:
| >> I've mentioned this to one or two folks here privately, but now
| >> that
| >> it has come up...My mother is a journalist with Associated Press,
| >> and
| >> she recently took a new assignment in a different city. Their
| >> office
| >> has a VAX-4000 running VMS, handling some sort of database. They
| >> love
| >> it, and they have no plans to migrate away from it.
| >>
| > It would be appreciated if a few interesting aspects concerning
| > the system were shared so we could understand how such a
| > mature system manages to compete.
|
| I suspect it's not trying to "compete", at least not any more than,
| say, the desks (not the desktops, but the DESKS) in the offices, etc.
| It's an appliance; it sits there and does its job. There's no valid
| reason to change it.
|
| There's an odd consumerist attitude that goes something like "oh,
| the
| manufacturer has introduced a new model, this one must somehow suck
| now,
| I'd better replace it!"...That attitude is common in the worlds of
| computers and cars, but not much else. If Great Neck (a
| common-in-USA
| manufacturer of cheap-but-usable hand tools) introduces a new model
| of
| hammer, I'm not going to throw away my old one (probably twenty years old)
| and rush out to buy the new one. That would be stupid...and it's
| just
| as stupid with computers and cars.
How do you even add new features to a hammer? Do you make it electric and capable of making coffee? ;)
|
| I'm pretty sure I'm preaching to the choir here...at least I really
| hope I am. ;)
|
| > (a) When was the system first installed?
|
| I have no idea. She says she thinks it was a 4000-500, which would
| put it in the mid-1990s.
|
| > (b) Approximately how long is the up-time between re-boots?
|
| Again I have no idea. (this is my mother's place of employment,
| 1200mi
| from here, not mine) Let's put it this way, though...it's likely
| that
| this machine is running VMS, and it's not at all unusual for VMS
| systems
| to have uptimes in the 5+ year range. If it didn't get those sorts
| of
| uptimes, it probably would've annoyed someone and gotten replaced by
| now.
Try managing uptimes like that with linux! ;)
Linux 2.6.37.6.
bbump at mail:~$ uptime
12:07:26 up 260 days, 10:09, 1 user, load average: 0.78, 0.80, 0.84
I like this part: ;-)
bbump at mail:~$ free
             total       used       free     shared    buffers     cached
Mem:     296973320  199845248   97128072          0     506812  185232136
-/+ buffers/cache:   14106300  282867020
Swap:            0          0          0
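For anyone squinting at that output: the "-/+ buffers/cache" row is simply the Mem: row with buffers and cache backed out, i.e. memory actually claimed by programs rather than by the page cache. A quick sanity check against the numbers above (plain shell arithmetic, nothing system-specific):

# used minus buffers minus cached, all in kB, taken from the Mem: row above
echo $((199845248 - 506812 - 185232136))   # prints 14106300, the figure on the -/+ row

So of roughly 283 GB of RAM, only about 13.5 GB is genuinely in use, the rest is cache, and swap is untouched.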
Linux 2.6.37.6.
bbump at www:~$ uptime
12:08:01 up 260 days, 11:02, 1 user, load average: 0.15, 0.18, 0.22
Linux 2.6.37.6.
bbump at moo:~$ uptime
12:08:19 up 260 days, 10:39, 1 user, load average: 0.00, 0.08, 0.18
Linux 2.6.37.6.
bbump at vc:~$ uptime
12:08:48 up 205 days, 6:55, 1 user, load average: 0.07, 0.20, 0.21
bbump at ns1:~$ uptime
12:08:46 up 260 days, 10:21, 1 user, load average: 0.02, 0.04, 0.05
Linux 2.6.37.6.
bbump at ns2:~$ uptime
12:09:30 up 260 days, 11:28, 1 user, load average: 0.24, 0.19, 0.10
Actually, these are all new systems. The old ones were taken out last
summer. Most of them were FreeBSD systems with over 4 years of uptime.
The only reason they were replaced was old hardware starting to fail.
FreeBSD is a bit safer to leave on an older kernel for longer periods of time.
Although you find the occasional RedHat 5 box running 2.2...
Most any system that is managed properly (hmm..except maybe Wanderz) can
yield the same results. Our RSTS systems always had ?????? for uptimes
as the field could only hold enough information for a month.
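Collecting those uptimes from one seat rather than logging in to each box is only a few lines of shell; a sketch, assuming the host names shown above and key-based ssh logins (purely illustrative):

# Print each host's uptime, prefixed with its name
for h in mail www moo vc ns1 ns2; do
    printf '%-5s ' "$h"
    ssh "$h" uptime
done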
Hey, even NT 4 can remain up for a long time if properly managed. It's just that absolutely nobody knows how to properly manage NT 4. Mind, this only applies to NT 4.
Brett
|
| > (c) Do they have any virus problems?
| > (d) Have they ever been hacked into?
|
| ROFL!!! I haven't had a laugh this good in a long time. ;)
|
| > Other information such as the physical details would also be
| > interesting along with the number of users. Anything else
| > your mother felt willing to share would provide the list
| > members with good hard information.
|
| She's pretty busy in her new assignment, but I'm sure she can do
| some
| digging. I'd love to find out more myself.
|
| -Dave
|
| --
| Dave McGuire, AK4HZ
| New Kensington, PA
|
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Experiments
http://dev.gimme-sympathy.org Home experiments