Experiencing unreproducible internal compiler errors <<whinge>>
Matt Lowry
mclowry@cs.adelaide.edu.au
Thu Mar 16 17:46:00 GMT 2000
On Thu, 16 Mar 2000, Zack Weinberg wrote:
> GCC may be the most stressful program your machine ever runs. It runs
> the CPU at full throttle for minutes to hours, depending on the size
> of the build. It has almost-random memory access patterns, and its
> active set - memory being referenced constantly - can grow to hundreds
> of megs. This puts way more strain on the hardware than any casual
> testing utility. If you'd actually read the FAQ I pointed you at, it
> would have explained this in great detail.
Ha! Like compilation is the only way to thrash a box. As it happens I did
read the FAQ. "If you'd read the message I originally sent, it would have
explained" that only one of the errors I got was a segfault in cc1, and
none of the other three were vaguely close to the "other possibilities" the
FAQ lists for errors a hardware problem may induce. While I accept the FAQ's
assertion that gcc is more stressful than some little test proggy looking
for bad memory, I don't think the situations in which I experienced the
errors involved "the most stressful program" my machine has ever run.
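(Aside: by a "little test proggy looking for bad memory" I mean something
along these lines. This is only a rough illustrative sketch in Python, not
a real tester; pure Python is slow at this, the 32 MB buffer size is an
arbitrary assumption of mine, and a dedicated tool like memtest86 is vastly
more thorough.)

    #!/usr/bin/env python
    # Minimal "bad memory" check: fill a large buffer with fixed bit
    # patterns, read everything back, and report any word that differs.
    # Purely illustrative; a real memory tester does far more than this.

    import array

    WORDS = 32 * 1024 * 1024 // 4                 # 32 MB of 4-byte words
    PATTERNS = (0x00000000, 0xFFFFFFFF, 0xAAAAAAAA, 0x55555555)

    buf = array.array('I', [0]) * WORDS           # grab one big chunk of RAM

    for pattern in PATTERNS:
        for i in range(WORDS):                    # write the pattern everywhere
            buf[i] = pattern
        errors = 0
        for i in range(WORDS):                    # then read it all back
            if buf[i] != pattern:
                print("bad word at index %d: wanted %08x, got %08x"
                      % (i, pattern, buf[i]))
                errors += 1
        print("pattern %08x: %d errors" % (pattern, errors))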
Now, for the last hour and a half or so I have, concurrently, been:
a) Running 8 Xmames. Collectively these are emulating the operation of
around 25 CPUs and numerous mother and daughter boards. My new K7-650 is
the first box I've had that can run a single Xmame with a "decent"
(i.e. complex) game at full speed. A single Xmame is very effective at
running a (real) CPU at "full throttle for minutes to hours".
b) Repeatedly compressing and uncompressing (bzip2) a tarball containing
all the Linux modules sitting on my machine (roughly what the sketch after
this list does).
c) Two concurrent compilations of the Linux kernel with support for
_everything_.
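(The bzip2 part of (b) boils down to something like the following
throwaway sketch, done in memory with Python's bz2 module rather than by
looping over the bzip2 binary in a shell; "modules.tar" is a stand-in name
for the real tarball, not an actual path on my machine.)

    #!/usr/bin/env python
    # Roughly the bzip2 half of the workload above: repeatedly compress
    # and decompress a tarball in memory and check that each round trip
    # is lossless. Runs forever by design; stop it with Ctrl-C.

    import bz2

    TARBALL = "modules.tar"                       # stand-in path

    original = open(TARBALL, "rb").read()

    cycle = 0
    while True:
        cycle += 1
        packed = bz2.compress(original)           # compress ...
        unpacked = bz2.decompress(packed)         # ... and uncompress again
        if unpacked != original:
            print("round trip corrupted on cycle %d!" % cycle)
            break
        print("cycle %d ok: %d -> %d bytes"
              % (cycle, len(original), len(packed)))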
Neither the kernel compilations nor any of the other stuff has fallen
over. Here's a nice top shot:
11:28am up 20:43, 10 users, load average: 9.07, 9.22, 8.83
84 processes: 72 sleeping, 12 running, 0 zombie, 0 stopped
CPU states: 93.5% user, 6.4% system, 0.0% nice, 0.0% idle
Mem: 257680K av, 253124K used, 4556K free, 89396K shrd, 18224K buff
Swap: 136512K av, 220K used, 136292K free 32892K cached
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
8507 root 18 0 7760 7760 344 R 0 10.9 3.0 0:07 bzip2
8586 root 12 0 7572 7572 1904 R 0 10.9 2.9 0:01 cc1
2878 mclowry 11 0 6728 6728 3012 R 0 9.0 2.6 8:38 xmame.x11
2882 mclowry 11 0 4252 4252 2108 R 0 9.0 1.6 8:09 xmame.x11
2883 mclowry 11 0 4252 4252 2108 R 0 9.0 1.6 8:11 xmame.x11
2909 mclowry 11 0 7480 7480 2360 R 0 9.0 2.9 8:15 xmame.x11
2877 mclowry 11 0 6728 6728 3012 R 0 8.3 2.6 8:43 xmame.x11
2881 mclowry 11 0 6728 6728 3012 R 0 8.3 2.6 8:31 xmame.x11
2908 mclowry 11 0 7468 7468 2360 R 0 8.3 2.8 8:25 xmame.x11
2910 mclowry 11 0 12268 11M 2508 R 0 8.3 4.7 8:12 xmame.x11
645 root 9 0 77740 75M 3468 R 0 5.1 30.1 11:21 X
2874 mclowry 4 0 1084 1084 860 R 0 2.5 0.4 0:44 top
1 root 0 0 480 480 416 S 0 0.0 0.1 0:05 init
As you can see, the CPU is being thrashed quite effectively, as is the memory.
Here's another top shot:
11:37am up 20:51, 10 users, load average: 8.37, 9.73, 9.31
91 processes: 81 sleeping, 10 running, 0 zombie, 0 stopped
CPU states: 84.0% user, 15.9% system, 0.0% nice, 0.0% idle
Mem: 257680K av, 249624K used, 8056K free, 93560K shrd, 19388K buff
Swap: 136512K av, 220K used, 136292K free 31844K cached
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
2908 mclowry 11 0 7468 7468 2360 S 0 8.8 2.8 9:09 xmame.x11
645 root 11 0 77740 75M 3468 R 0 8.6 30.1 12:00 X
2881 mclowry 11 0 6728 6728 3012 R 0 8.6 2.6 9:14 xmame.x11
2877 mclowry 11 0 6728 6728 3012 S 0 8.4 2.6 9:27 xmame.x11
2909 mclowry 11 0 7480 7480 2360 R 0 8.4 2.9 8:59 xmame.x11
2910 mclowry 10 0 12268 11M 2508 S 0 8.4 4.7 8:56 xmame.x11
9028 root 16 0 3996 3996 344 R 0 8.4 1.5 0:02 bzip2
2878 mclowry 11 0 6728 6728 3012 R 0 8.2 2.6 9:22 xmame.x11
2883 mclowry 9 0 4252 4252 2108 S 0 8.0 1.6 8:55 xmame.x11
2882 mclowry 9 0 4252 4252 2108 R 0 7.6 1.6 8:53 xmame.x11
9081 root 11 0 3640 3640 1328 R 0 4.3 1.4 0:00 cc1
9084 root 11 0 2408 2408 1316 R 0 1.5 0.9 0:00 cc1
9078 root 5 0 1756 1756 1088 S 0 1.1 0.6 0:00 gcc
9074 root 5 0 1756 1756 1088 S 0 0.9 0.6 0:00 gcc
9083 root 9 0 1268 1268 444 S 0 0.9 0.4 0:00 cpp
2874 mclowry 2 0 1084 1084 860 R 0 0.7 0.4 0:49 top
9080 root 6 0 1320 1320 444 S 0 0.7 0.5 0:00 cpp
Does this look stressful on my machine? Sigh. The apparent dismissiveness
of the email I'm responding to has me feeling moderately antagonistic (do
you take me for the kind of AkorZ dOOd that usually expresses such
attitudes?). I should leave this one alone now, I think.
However, I do feel compelled to once again assert that the symptoms I have
experienced are inconsistent with your explanation, if for no other reason
than that they only occurred when gcc was compiling two particular
packages. It has not happened for any other compilations.
Anyhoo, let me close on a philosophical note by invoking the spirit of the
great Douglas Adams and reminding you of that wonderful device in one of
his books which could render an object invisible by projecting a "Somebody
Else's Problem" field. "A bug in gcc! Where? All I can see is your dodgy
hardware ... it's somebody else's problem ..."
Enjoy!
------------------------------------------------
Matt Lowry ( mclowry@cs.adelaide.edu.au )
------------------------------------------------
A social life ?
Where can I download that from ?
------------------------------------------------
PS: Here's another top shot for good luck ;)
11:53am up 21:07, 10 users, load average: 9.90, 10.27, 10.00
88 processes: 76 sleeping, 12 running, 0 zombie, 0 stopped
CPU states: 92.4% user, 7.5% system, 0.0% nice, 0.0% idle
Mem: 257680K av, 254352K used, 3328K free, 91400K shrd, 18548K buff
Swap: 136512K av, 220K used, 136292K free 37408K cached
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
10001 root 20 0 7760 7760 344 R 0 16.6 3.0 0:13 bzip2
645 root 13 0 77740 75M 3468 S 0 12.8 30.1 13:01 X
2908 mclowry 14 0 7468 7468 2360 R 0 7.7 2.8 10:30 xmame.x11
2881 mclowry 14 0 6728 6728 3012 R 0 7.3 2.6 10:35 xmame.x11
2877 mclowry 14 0 6728 6728 3012 R 0 7.1 2.6 10:48 xmame.x11
2878 mclowry 14 0 6728 6728 3012 R 0 7.1 2.6 10:43 xmame.x11
2882 mclowry 15 0 4252 4252 2108 R 0 6.9 1.6 10:15 xmame.x11
2909 mclowry 14 0 7480 7480 2360 R 0 6.7 2.9 10:20 xmame.x11
2910 mclowry 14 0 12268 11M 2508 R 0 6.7 4.7 10:18 xmame.x11
2883 mclowry 14 0 4252 4252 2108 R 0 6.1 1.6 10:17 xmame.x11
10138 root 14 0 3348 3348 1328 R 0 3.5 1.2 0:00 cc1
10135 root 5 0 1756 1756 1088 S 0 1.3 0.6 0:00 gcc
10137 root 6 0 1300 1300 444 S 0 0.9 0.5 0:00 cpp
2874 mclowry 1 0 1084 1084 860 R 0 0.7 0.4 0:57 top
1499 mclowry 0 0 2804 2804 1524 S 0 0.3 1.0 0:00 Eterm