This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Revised release criteria for GCC 4.0
- From: Mark Mitchell <mark at codesourcery dot com>
- To: Benjamin Kosnik <bkoz at redhat dot com>
- Cc: gcc at gcc dot gnu dot org
- Date: Mon, 13 Dec 2004 16:03:20 -0800
- Subject: Re: Revised release criteria for GCC 4.0
- Organization: CodeSourcery, LLC
- References: <20041213174849.694408c3.bkoz@redhat.com>
Benjamin Kosnik wrote:
> Hi Mark. Thanks for the update. I must say, the clarification is
> really nice, but puzzling.
>
> 1) Platform support
>
> Any chance you could elaborate on the rationale used to pick the
> primary platforms? The decision is cool, but the process behind it
> would be nice to understand.
It's a combination of factors. Yes, some aspects are historical. One
reason to include certain systems is their "canary" aspect; for
example, including systems without weak symbols helps to smoke out
issues with templates. We'd all love for all systems to become more
SVR4-ish, but the SC feels that it's important to retain support for a
relatively wide variety of systems for the foreseeable future. We wanted
to continue to support the same major processor families as in previous
releases, with the exception of Alpha, which has been EOL'd.
> 2) Complete dropping of code quality, applications
>
> WTF?? Why drop the glibc and kernel baselines?? I think these have
> helped in the past to keep initial releases from being of the
> brown-paper-bag variety.
We're not dropping code quality as a criterion; we're simply treating
code quality regressions as regressions like any other.
As for kernel/application baselines, how many releases have I done where
(a) I had that baseline data to examine, and (b) the results were good?
Zero.
Instead, we're taking the point of view that, realistically, we're not
going to have that data, but that, fortunately, many people test their
code with prerelease versions of GCC, bugs get reported, and so we'll
get much of the same information in the form of bug reports.
> 3) Drop compile-time performance as a factor.
Likewise, compile-time performance regressions are regressions, and
therefore legitimate issues.
But, I'd be unlikely to hold up an otherwise functional release because
of some compile-time regressions on some inputs. (I think that's true
of most software, other than extremely performance-oriented software;
for example, I doubt Microsoft would hold up a release of Word because
it was 15% slower in repaginating certain long documents.)
Partly, that's because this isn't something that's easy to fix right
before a release. If you want to fix it, you have to deal with it
during the earlier development stages, when there's more flux. It would
be a bad decision to (say) substantially reorganize a tree data
structure to save space in the week before a release. However, adding a
few lines of code to check for error_mark_node, or to deal with an
obscure argument-passing problem, is quite reasonable.
--
Mark Mitchell
CodeSourcery, LLC
mark@codesourcery.com
(916) 791-8304