This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [006/nnn] poly_int: tree constants
Martin Sebor <msebor@gmail.com> writes:
> On 10/23/2017 11:00 AM, Richard Sandiford wrote:
>> +#if NUM_POLY_INT_COEFFS == 1
>> +extern inline __attribute__ ((__gnu_inline__)) poly_int64
>> +tree_to_poly_int64 (const_tree t)
>
> I'm curious about the extern inline and __gnu_inline__ here and
> not in poly_int_tree_p below. Am I correct in assuming that
> the combination is a holdover from the days when GCC was compiled
> using a C compiler, and that the way to write the same definition
> in C++ 98 is simply:
>
> inline poly_int64
> tree_to_poly_int64 (const_tree t)
>
>> +{
>> + gcc_assert (tree_fits_poly_int64_p (t));
>> + return TREE_INT_CST_LOW (t);
>> +}
>
> If yes, I would suggest to use the C++ form (and at some point,
> changing the existing uses of the GCC/C idiom to the C++ form
> as well).
>
> Otherwise, if something requires the use of the C form I would
> suggest to add a brief comment explaining it.
You probably saw that this is based on tree_to_[su]hwi. AIUI the
differences from plain C++ inline are that:
a) with __gnu_inline__, an out-of-line definition must still exist.
That fits this use case well, because the inline is conditional on
the #ifdef and tree.c has an out-of-line definition either way.
If we used normal inline, we'd need to add extra #ifs to tree.c
as well, to avoid multiple definitions.
b) __gnu_inline__ has the strength of __always_inline__, but without the
correctness implications if inlining is impossible for any reason.
I did try normal inline first, but it wasn't strong enough. The
compiler ended up measurably faster if I copied the tree_to_[su]hwi
approach.
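Concretely, the arrangement described in (a) looks something like the sketch below (the bodies are the ones from the patch; the tree.c half is elided, since only its existence matters for the linkage argument):

```cpp
// tree.h (sketch, single-coefficient configuration):
#if NUM_POLY_INT_COEFFS == 1
/* gnu_inline: this body is used only for inlining.  Any call that is
   not inlined binds to the out-of-line copy in tree.c, so the header
   never emits a definition of its own and tree.c needs no matching
   #if to avoid multiple definitions.  */
extern inline __attribute__ ((__gnu_inline__)) poly_int64
tree_to_poly_int64 (const_tree t)
{
  gcc_assert (tree_fits_poly_int64_p (t));
  return TREE_INT_CST_LOW (t);
}
#endif

// tree.c (sketch): the unconditional out-of-line definition that the
// linker resolves against whenever the call above is not inlined.
poly_int64
tree_to_poly_int64 (const_tree t)
{
  ...
}
```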
> ...
>> +
>> +inline bool
>> +poly_int_tree_p (const_tree t, poly_int64_pod *value)
>> +{
> ...
[This one is unconditionally inline.]
>> /* The tree and const_tree overload templates. */
>> namespace wi
>> {
>> + class unextended_tree
>> + {
>> + private:
>> + const_tree m_t;
>> +
>> + public:
>> + unextended_tree () {}
>
> Defining no-op ctors is quite dangerous and error-prone. I suggest
> to instead default initialize the member(s):
>
> unextended_tree (): m_t () {}
>
> Ditto everywhere else, such as in:
This is really performance-sensitive code though, so I don't think
we want to add any unnecessary initialisation. Primitive types are
uninitialised by default too, and the point of this class is to
provide an integer-like interface.
In your other message you used the example of explicit default
initialisation, such as:
class foo
{
foo () : x () {}
unextended_tree x;
};
But I think we should strongly discourage that kind of thing.
If someone wants to initialise x to a particular value, like
integer_zero_node, then it would be better to do it explicitly.
If they don't care what the initial value is, then for these
integer-mimicking classes, uninitialised is as good as anything
else. :-)
Note that for this class NULL_TREE is not a safe default value.
The same goes for the wide-int.h classes, where a length or precision
of 0 is undefined and isn't necessarily going to be handled gracefully
or predictably.
>> template <int N>
>> class extended_tree
>> {
>> @@ -5139,11 +5225,13 @@ extern bool anon_aggrname_p (const_tree)
>> const_tree m_t;
>>
>> public:
>> + extended_tree () {}
>> extended_tree (const_tree);
> ...
>> Index: gcc/tree.c
>> ===================================================================
> ...
>> +
>> +/* Return true if T holds a polynomial pointer difference, storing it in
>> + *VALUE if so. A true return means that T's precision is no greater
>> + than 64 bits, which is the largest address space we support, so *VALUE
>> + never loses precision. However, the signedness of the result is
>> + somewhat arbitrary, since if B lives near the end of a 64-bit address
>> + range and A lives near the beginning, B - A is a large positive value
>> + outside the range of int64_t. A - B is likewise a large negative value
>> + outside the range of int64_t. All the pointer difference really
>> + gives is a raw pointer-sized bitstring that can be added to the first
>> + pointer value to get the second. */
>
> I'm not sure I understand the comment about the sign correctly, but
> if I do, I don't think it's correct.
>
> Because their difference wouldn't be representable in any basic integer
> type (i.e., in ptrdiff_t), the pointers described above could never
> point to the same object (or array), and so their difference is not
> meaningful. C/C++ only define the semantics of a difference between
> pointers to the same object. That restricts the size of the largest
> possible object typically to SIZE_MAX / 2, or at most SIZE_MAX on
> the handful of targets where ptrdiff_t has greater precision than
> size_t. But even on those targets, the difference between any two
> pointers to the same object must be representable in ptrdiff_t,
> including the sign.
But does that apply even when no pointer difference of that size
occurs in the original source? I.e., is:
char *x = malloc (0x80000001);
undefined in itself on 32-bit targets? Does it become undefined after:
for (unsigned int i = 0; i < 0x80000001; ++i)
  x[i] = 0;
where no large pointer difference is calculated? But I realise
gcc's support for this kind of thing is limited, and that we do
try to emit a diagnostic for obvious instances...
In the (two) places that need this -- both conversions from
cst_and_fits_in_hwi -- the immediate problem is that the sign
of the type doesn't necessarily match the logical sign of the
difference. E.g. a negative offset can be represented as a large
unsigned value of sizetype.
Thanks,
Richard