Re: Question about bitsizetype


On Wed, May 9, 2012 at 10:52 PM, William J. Schmidt
<wschmidt@linux.vnet.ibm.com> wrote:
> On Wed, 2012-05-09 at 13:47 -0700, Andrew Pinski wrote:
>> On Wed, May 9, 2012 at 1:36 PM, William J. Schmidt
>> <wschmidt@linux.vnet.ibm.com> wrote:
>> > Greetings,
>> >
>> > I've been debugging a Fedora 17 build problem on ppc64-redhat-linux, and
>> > ran into an issue with bitsizetype.  I have a patch that fixes the
>> > problem, but I'm not yet convinced it's the right fix.  I'm hoping
>> > someone here can help me sort it out.
>> >
>> > The problem occurs when compiling some Java code at -O3.  The symptom is
>> > a segv during predictive commoning.  The problem comes when analyzing a
>> > data dependence between two field references.  The access functions for
>> > the data refs are determined in tree-data-ref.c: dr_analyze_indices ():
>> >
>> >       else if (TREE_CODE (ref) == COMPONENT_REF
>> >                && TREE_CODE (TREE_TYPE (TREE_OPERAND (ref, 0))) == RECORD_TYPE)
>> >         {
>> >           /* For COMPONENT_REFs of records (but not unions!) use the
>> >              FIELD_DECL offset as constant access function so we can
>> >              disambiguate a[i].f1 and a[i].f2.  */
>> >           tree off = component_ref_field_offset (ref);
>> >           off = size_binop (PLUS_EXPR,
>> >                             size_binop (MULT_EXPR,
>> >                                         fold_convert (bitsizetype, off),
>> >                                         bitsize_int (BITS_PER_UNIT)),
>> >                             DECL_FIELD_BIT_OFFSET (TREE_OPERAND (ref, 1)));
>> >           VEC_safe_push (tree, heap, access_fns, off);
>> >         }
>> >
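For illustration, a hypothetical C loop of the kind that comment describes
(not from the original report; the field offsets assume a typical 32-bit int):

  struct pair { int f1; int f2; };

  /* a[i].f1 and a[i].f2 are two COMPONENT_REFs into the same record.
     Their constant field offsets (0 and 4 bytes here) are what lets
     the dependence analyzer prove the two accesses never overlap.  */
  int
  sum_fields (struct pair *a, int n)
  {
    int s = 0;
    for (int i = 0; i < n; i++)
      s += a[i].f1 + a[i].f2;
    return s;
  }
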
>> > Note the use of bitsizetype.  On a 64-bit target that defines TImode,
>> > this is apparently set to a 128-bit unsigned type, verified in gdb:
>> >
>> > (gdb) ptr bitsizetype
>> >  <integer_type 0xfffb5d700a8 bitsizetype public unsigned sizetype TI
>> >     size <integer_cst 0xfffb5c82380 type <integer_type 0xfffb5d700a8
>> > bitsizetype> constant 128>
>> >     unit size <integer_cst 0xfffb5c823a0 type <integer_type
>> > 0xfffb5d70000 sizetype> constant 16>
>> >     align 128 symtab 0 alias set -1 canonical type 0xfffb5d700a8
>> > precision 128 min <integer_cst 0xfffb5c823c0 0> max <integer_cst
>> > 0xfffb5c82360 -1>>
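
(For reference, the 128-bit precision follows from how initialize_sizetypes
in stor-layout.c sets things up: bitsizetype needs roughly sizetype's
precision plus LOG2_BITS_PER_UNIT more bits -- 64 + 3 on this target -- and
that is rounded up to the smallest integer mode that fits, which is TImode
when the target provides it.  Hence 128 bits.)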
>> >
>> > The problem arises in tree-data-ref.c: analyze_ziv_subscript:
>> >
>> >   type = signed_type_for_types (TREE_TYPE (chrec_a), TREE_TYPE (chrec_b));
>> >   chrec_a = chrec_convert (type, chrec_a, NULL);
>> >   chrec_b = chrec_convert (type, chrec_b, NULL);
>> >   difference = chrec_fold_minus (type, chrec_a, chrec_b);
>> >
>> > Both input types are bitsizetype of mode TImode.  This call reduces to a
>> > call to tree.c: signed_or_unsigned_type_for ():
>> >
>> >   return lang_hooks.types.type_for_size (TYPE_PRECISION (t), unsignedp);
>>
>> And that was fixed by no longer calling type_for_size, with the following patch:
>> r185226 | rguenth | 2012-03-12 06:04:43 -0700 (Mon, 12 Mar 2012) | 9 lines
>>
>> 2012-03-12  Richard Guenther  <rguenther@suse.de>
>>
>>         * tree.c (signed_or_unsigned_type_for): Use
>>         build_nonstandard_integer_type.
>>         (signed_type_for): Adjust documentation.
>>         (unsigned_type_for): Likewise.
>>         * tree-pretty-print.c (dump_generic_node): Use standard names
>>         for non-standard integer types if available.
>> Thanks,
>> Andrew Pinski
>>
>>
> Ah, Andrew, you're a life-saver.  Thanks!
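
For context, a minimal sketch of what the r185226 change amounts to, per the
ChangeLog above (simplified; the real signed_or_unsigned_type_for in tree.c
also handles pointer types):

  tree
  signed_or_unsigned_type_for (int unsignedp, tree type)
  {
    /* Nothing to do if the type already has the requested signedness.  */
    if (TREE_CODE (type) != INTEGER_TYPE || TYPE_UNSIGNED (type) == unsignedp)
      return type;

    /* Build (or reuse) a middle-end integer type of the same precision
       instead of asking the front end via the type_for_size lang hook,
       so a 128-bit bitsizetype works even when the front end -- Java
       here -- has no 128-bit type to offer.  */
    return build_nonstandard_integer_type (TYPE_PRECISION (type), unsignedp);
  }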

The above is of course not exactly safe to backport ... (well, maybe it is,
I'm not sure ;)).

Another possibility would be to not use bitsizetype here and truncate
the result to sizetype (if it fits; if it doesn't fit, give up - unlikely).
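
A rough sketch of that alternative as it might look in dr_analyze_indices
(hypothetical and untested):

  tree off = component_ref_field_offset (ref);
  off = size_binop (PLUS_EXPR,
                    size_binop (MULT_EXPR,
                                fold_convert (bitsizetype, off),
                                bitsize_int (BITS_PER_UNIT)),
                    DECL_FIELD_BIT_OFFSET (TREE_OPERAND (ref, 1)));
  /* Truncate the bit offset to sizetype when it fits; otherwise give
     up on recording a constant access function for this ref.  */
  if (TREE_CODE (off) == INTEGER_CST && int_fits_type_p (off, sizetype))
    VEC_safe_push (tree, heap, access_fns, fold_convert (sizetype, off));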

But well, maybe we should backport the above.

Richard.

> Bill
>
>>
>> >
>> > So this is the interesting point.  We are calling back to the front end
>> > to find a type having the same precision as bitsizetype, in this case
>> > 128.  The C lang hook handles this fine, but the Java one does not:
>> >
>> > tree
>> > java_type_for_size (unsigned bits, int unsignedp)
>> > {
>> >   if (bits <= TYPE_PRECISION (byte_type_node))
>> >     return unsignedp ? unsigned_byte_type_node : byte_type_node;
>> >   if (bits <= TYPE_PRECISION (short_type_node))
>> >     return unsignedp ? unsigned_short_type_node : short_type_node;
>> >   if (bits <= TYPE_PRECISION (int_type_node))
>> >     return unsignedp ? unsigned_int_type_node : int_type_node;
>> >   if (bits <= TYPE_PRECISION (long_type_node))
>> >     return unsignedp ? unsigned_long_type_node : long_type_node;
>> >   return 0;
>> > }
>> >
>> > This returns zero, causing the first call to chrec_convert in
>> > analyze_ziv_subscript to segfault.
>> >
>> > I can cause the build to succeed with the following patch...
>> >
>> > Index: gcc/java/typeck.c
>> > ===================================================================
>> > --- gcc/java/typeck.c   (revision 187158)
>> > +++ gcc/java/typeck.c   (working copy)
>> > @@ -189,6 +189,12 @@ java_type_for_size (unsigned bits, int unsignedp)
>> >      return unsignedp ? unsigned_int_type_node : int_type_node;
>> >    if (bits <= TYPE_PRECISION (long_type_node))
>> >      return unsignedp ? unsigned_long_type_node : long_type_node;
>> > +  /* A 64-bit target with TImode requires 128-bit type definitions
>> > +     for bitsizetype.  */
>> > +  if (int128_integer_type_node
>> > +      && bits == TYPE_PRECISION (int128_integer_type_node))
>> > +    return (unsignedp ? int128_unsigned_type_node
>> > +            : int128_integer_type_node);
>> >    return 0;
>> >  }
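
(A side note on the guard: int128_integer_type_node is, if I recall the
tree-node setup correctly, only created when the target supports TImode,
so on other targets the new test is skipped and behavior is unchanged.)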
>> >
>> > ...but I wonder whether this is the correct approach.  Is the problem
>> > really that the lang hook is missing handling for bitsizetype for
>> > certain targets, or is the problem that bitsizetype is 128 bits?  All of
>> > the other front ends seem to get along fine with a 128-bit bitsizetype;
>> > it's just kind of an odd choice on a 64-bit machine.  Or is the problem
>> > in the dr_analyze_indices code that's using bitsizetype?
>> >
>> > The thing that gives me pause here is that other machines would likely
>> > have the same problem.  Any machine using a 128-bit bitsizetype would
>> > hit this problem sooner or later when optimizing Java code.  Perhaps
>> > it's just that few people compile Java statically anymore -- certainly
>> > we don't even build it during normal development.
>> >
>> > I had myself convinced that all 64-bit machines with a TImode would have
>> > a 128-bit bitsizetype, but I'm having trouble connecting the dots on
>> > that at the moment, so that may or may not be true.  If it is, though,
>> > then this would seemingly come up periodically on Intel building Java.
>> > That makes me suspicious that I don't understand this well enough yet.
>> >
>> > Thanks in advance for any help!  I'd like to get this resolved quickly
>> > for the Fedora folks, but I want to do it properly.
>> >
>> > Thanks,
>> > Bill
>> >
>> >
>>
>

