Patch for c/13801

Joseph S. Myers jsm@polyomino.org.uk
Mon Aug 2 23:34:00 GMT 2004


On Mon, 2 Aug 2004, Zack Weinberg wrote:

> You cannot do it this way.  At end of compilation, the decls must have
> types which reflect all the details we have amassed from all
> declarations seen -- possibly across multiple translation units.

The decls in the external scope do: the type stored there is the
composite of all the types seen, so the decls get that composite type when
we pop back to the external scope.  I do not in fact expect problems
arising from static declarations with types refined in inner scopes, but
if any arise (and the testsuite showed none) then a scope analogous to the
external scope could be added to store the composite types of such
declarations.

> Consider the case where the nested block scope declaration (or the
> declaration in a different translation unit, equivalently) has an
> initializer: DECL_INITIAL will be inconsistent with the type.  We'll
> either ICE or fail to emit a correct initializer.

Nested block scope declarations can't have initializers, so if there are
problems there then we should discard the initializer found at block
scope.

As for the implicit initializer: an array that is still incomplete at the
end of the translation unit is implicitly treated as having one element,
so if the type is completed incompatibly in a block (to a size bigger than
1) then the code has compile-time undefined behavior (of which diagnosis
would be nice).

> I also object to any solution to this bug which increases memory usage
> for correct code.  The effect of this bug is to fail to diagnose
> certain cases where a program is ill-formed; as such, correct code
> never encounters the bug, so it is better to leave the bug unfixed
> than to penalize compilation of correct code.  (Your 2.6% compile time

You can do things an awful lot quicker and cheaper if you don't care about
diagnosing incorrect code!  I can also argue that it causes valid code to
be wrongly diagnosed:

typedef int IA[];
typedef int A5[5];
typedef int A10[10];

A10 array10;

A5 *ap;              /* file scope: pointer to int[5] */
void
f (void)
{
  int ap;            /* hides the file-scope ap */
  {
    extern IA *ap;   /* redeclares it as pointer to incomplete array */
    ap = &array10;   /* int (*)[10] is compatible with the visible
                        type int (*)[] */
  }
}

(Whether this is valid isn't entirely clear, but there's a good case that
it is; certainly the pointers to differently sized arrays must have the
same representation and alignment requirements.  There's even a plausible
argument that it is strictly conforming where the equivalent code with a
pointer cast instead of the shadowing declaration wouldn't be, because so
little about pointer casts is defined in ISO C.)

> improvement is probably just noise -- unless you've been thorough
> enough in your measurements that you can nail down the error bars, any
> number that small should be viewed with great suspicion, never mind
> the state of --enable-checking.)

Unmodified compiler, timings of three runs:

user    3m59.300s
sys     0m4.500s

user    3m57.290s
sys     0m4.360s

user    3m56.600s
sys     0m4.570s

Modified compiler, timings of three runs:

user    3m51.380s
sys     0m4.500s

user    3m51.130s
sys     0m4.690s

user    3m51.620s
sys     0m4.220s

Both were configured, installed and tested the same way; the only variable
changed was the patch applied.  (That is a drop in mean user time from
about 237.7s to 231.4s, roughly 2.7%.)

> A correct solution to this bug could take the form of a flag bit on
> struct c_binding (there is room for two more of these without
> increasing the size of the structure) which records whether or not the
> declaration is incomplete in that scope.  This then has to propagate
> to everywhere that issues diagnostics for inappropriate use of decls
> with incomplete types.

It's not just a matter of the decl itself having incomplete type: any type
nested within it might be incomplete.  See the testcase examples of
pointers to incomplete types and of functions returning such pointers.
The same applies to function parameter types (and so to whether function
calls with pointer-to-array arguments get diagnosed, as in the example of
valid code above).  Effectively, the type of a decl at every point in
compilation and constraint checking should be the type the standard says
it has at that point in the source, rather than duplicating checks for
special cases everywhere - just as the proper solution to the bit-field
problems was to ensure that bit-fields have the proper types everywhere in
the compiler, and to fix the fallout from such types as it was found.

> Submit this change separately.  With regard to the incompatible
> declarations of built-ins, I agree with Geoff, and I'd add that you
> should not diagnose incompatible redeclarations of built-ins that are
> not covered by the currently selected standard.

While not diagnosing redeclarations of built-ins not covered by the
current standard sounds nice, there's still the question of whether the
user wants the built-in function (in which case the incompatible
declaration surely should be diagnosed) or doesn't.  The presumption at
present is that they want the built-in function; the testcases certainly
assume that.  It also probably saves bug reports from users complaining,
e.g., that cbrt doesn't work when they don't include <math.h>.  The
alternative is to discard the built-in declaration completely in favour of
the new implicit one.

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)


