Subject: Re: NT crashes From: Rex Ballard Date: Tue, 2 Jul 1996 19:54:58 -0400
How the Web Was Won
References: <4qs369$7sl@peabody.colorado.edu> <4qscdu$4bg@canton.charm.net> <01bb63b5.cf600ef0$810a399d@jandersnt1>  <4qu72f$bl0@news.tamu.edu> 
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII



On 28 Jun 1996, Nils Nieuwejaar wrote:

> Walter Eric Johnson  wrote:
> >Andrew Mcnab (mcnab@sp067.cern.ch) wrote:
> >: 
> >:  NT is allowing Borland C++ to crash the whole machine. How can an ordinary
> >: user do that in Linux (or another decent Unix) with g++ or whatever? I 
> >: thought NT was supposed to be this bullet proof wonder-OS.

There are a couple of areas where NT can go south in an ugly way...
Was your friend trying to implement a VXD?  Was he invoking VXD routines
from an application (rather than through the kernel)?  Was he trying to
use shared memory to "pipeline" applications?

I am an intense power user.  With Windows 3.1 I average 3-5 crashes/day.
With OS/2, I average 3-5 crashes/week but the recovery is very poor (I
used to lose the desktop every other day and corrupt the file-system about
twice/week).  I was actually an Alpha Tester for OS/2 2.0 and they would
send my drives to Boca because I could break it so easily.

Windows NT is actually pretty stable.  I only crash it about once/week,
and usually it dies in a VXD (not Microsoft's fault).

> >Hmm.  When I used Linux, it crashed on me fairly often.  I sure had
> >a long wait when it rebooted.  

Most of the problems I've had with Linux crashes are related to the
initial hardware configuration.  When the Ethernet card and the Serial
port get assigned to the same I/O port and interrupt, things act a bit
strange.  The early (0.98) systems were especially volatile.
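On a Linux box with /proc mounted, you can spot that kind of IRQ and
I/O-port collision without opening the case.  A quick diagnostic sketch
(just reading the proc filesystem; nothing here changes the config):

```shell
# List interrupt assignments -- two ISA-era drivers claiming the
# same IRQ line is a classic source of flaky behavior.
cat /proc/interrupts

# List the I/O port ranges each driver has registered; overlapping
# ranges here mean two cards were jumpered onto the same ports.
cat /proc/ioports
```

If the Ethernet driver and the serial driver show up on the same line in
either listing, that's the conflict.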

I have been able to get several Linux systems up.  Ever since SlackWare
2.1, I have been very impressed with the stability.  I've had a bit of
trouble with the ELF Adaptec 2940 release.  Both Slackware 3.0 and Red Hat
gave me trouble.

Once I get the Linux system up and running though, I usually end up
rebooting it every few weeks because I want to do maintenance of some
sort.  I especially like to use it on the machines that get "Swapped Out"
because the old workstations aren't big enough to run NT.  That "Boat
Anchor" turns into a great Workstation, File Server, or Intranet Host,
all for about $40 and a few hours of overtime.


> That's certainly different than my experience.  Linux almost never
> crashes on me.  The Linux machine in my lab has been up for 24 days now.
> If I hadn't brought it down to ugrade to Linux 2.0, the uptime would be
> measured in months.

It's a good idea to reboot once/month just to make sure that your spool,
mail, and tmp directories don't get too full.  Scheduled rebooting also
gets rid of zombies.
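That kind of housekeeping check is easy to script before deciding whether
a reboot is actually due.  A minimal sketch (the paths assume a typical
Slackware-era layout; adjust for your system):

```shell
#!/bin/sh
# Report how full the spool, mail, and tmp areas are getting (in KB).
du -sk /var/spool /var/spool/mail /tmp 2>/dev/null

# Count zombie (defunct) processes -- "Z" in the stat column.
# A reboot, or restarting the parent daemon, clears these out.
ps -eo stat,pid,comm | awk '$1 ~ /Z/ { n++ } END { print n+0, "zombies" }'
```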

> As for booting, Linux seems to take a little longer to boot than DOS.
> It certainly boots faster than DOS/Windows, Win95, or OS/2.  Perhaps what
> you are seeing is the time to check the consistency of your filesystems
> after a crash.

Most of the delay in rebooting is disk-drive related.  If you boot all the
way up to XDM and include the time setting up your post-login window
manager desktop, it's pretty close to the same time for Windows NT or for
Linux.

> >When I used Windows 3.x, it crashed quite often, too.  It didn't
> >take near as long to reboot, though.
> 
> Of course not.  It didn't bother to check the state of your file system.
> Without checking on bootup, you might go hours or days before you
> realized that some files and/or directory structures were hosed during
> the crash.  By then, of course, there is no way to fix the problem.

One of the best things about Linux is that hitting "The Big Red Button" is
usually the LAST resort.  Even when I have a hung console, I can use
Alt-F2 to get to another virtual console, or telnet in from another
workstation, or even hit Ctrl-Alt-Delete, which causes an orderly shutdown
and assures file-system integrity.
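That orderly Ctrl-Alt-Delete behavior is wired up in /etc/inittab on a
SysV-style Linux.  A typical entry looks something like this (the exact
shutdown flags vary by distribution):

```
# Trap Ctrl-Alt-Delete and run a clean shutdown instead of letting
# anyone hard-reset the box from the console.
ca::ctrlaltdel:/sbin/shutdown -t3 -r now
```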

Even when I hard reboot (I once accidentally unplugged the machine in the
middle of receiving a news feed), e2fsck is pretty good at rebuilding the
lost files.  It's much easier to rebuild from I-Nodes than from FAT
records.
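You can watch e2fsck do its work safely on a scratch filesystem image
instead of a live disk.  A small sketch, assuming e2fsprogs (mke2fs and
e2fsck) is installed:

```shell
#!/bin/sh
# Build a 1 MB scratch file and put an ext2 filesystem on it.
# -F tells mke2fs it's OK that the target is a plain file.
dd if=/dev/zero of=/tmp/scratch.img bs=1024 count=1024 2>/dev/null
mke2fs -F -q /tmp/scratch.img

# Force a full check; -y answers yes to any repair prompts.
# Recovered orphan files would land in lost+found.
e2fsck -f -y /tmp/scratch.img
```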

Once in a while, I have had other UNIX systems which power-glitched in
such a way as to corrupt the I-Node table.  That was a bit uglier.  It
messed up the e-mail spooler files and the tmp files (big deal).  You can
turn write caching off and still cache reads.

Unix was tested the hard way.  Unix "Wizards" used to play a game called
"Core Wars".  The object of the game was to write a process that could
kill opponents' processes and spawn its own processes before it was
killed.  Generally 4-5 people would play.  Of course, one of the other
rules was that you couldn't harm a productive process.  The winner was the
guy who filled the spare slots of the process table with his processes
while leaving the system fully functional.  It wasn't unusual to have 500
or more processes "Nicely" running amok in the CPU.

Can you imagine trying that game on NT or OS/2? :-).




	Rex Ballard - Director of Electronic Distribution
	Standard & Poor's/McGraw-Hill
	Opinions expressed do not necessarily reflect
	the Management of the McGraw-Hill Companies.
	http://cnj.digex.net/~rballard




Newsgroups: comp.os.linux.hardware,comp.os.linux.advocacy