
Re: Remove redundant AND from count reduction loop


Richard Biener <richard.guenther@gmail.com> writes:
> On Wed, Jun 24, 2015 at 1:10 PM, Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>> Richard Biener <richard.guenther@gmail.com> writes:
>>>>> I'm fine with using tree_nop_conversion_p for now.
>>>>
>>>> I like the suggestion about checking TYPE_VECTOR_SUBPARTS and the element
>>>> mode.  How about:
>>>>
>>>>  (if (VECTOR_INTEGER_TYPE_P (type)
>>>>       && TYPE_VECTOR_SUBPARTS (type) == TYPE_VECTOR_SUBPARTS (TREE_TYPE (@0))
>>>>       && (TYPE_MODE (TREE_TYPE (type))
>>>>           == TYPE_MODE (TREE_TYPE (TREE_TYPE (@0)))))
>>>>
>>>> (But is it really OK to be adding more mode-based compatibility checks?
>>>> I thought you were hoping to move away from modes in the middle end.)
>>>
>>> The TYPE_MODE check makes the VECTOR_INTEGER_TYPE_P check redundant
>>> (the type of a comparison is always a signed vector integer type).
>>
>> OK, will just use VECTOR_TYPE_P then.
>
> Given we're in a VEC_COND_EXPR that's redundant as well.

Hmm, but is it really guaranteed in:

 (plus:c @3 (view_convert (vec_cond @0 integer_each_onep@1 integer_zerop@2)))

that the @3 and the view_convert are also vectors?  I thought we allowed
view_converts from vector to non-vector types.
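
For example, something along these lines (illustrative pseudo-GIMPLE
only, with made-up SSA names) is the kind of situation I have in mind,
assuming a same-size vector-to-scalar view_convert really is allowed:

  vector(4) signed char mask_1 = VEC_COND_EXPR <cmp_2, { 1, 1, 1, 1 }, { 0, 0, 0, 0 }>;
  int scalar_3 = VIEW_CONVERT_EXPR<int>(mask_1);
  int sum_4 = scalar_3 + count_5;

There the type of the plus (and of @3) is a scalar int, so a
VECTOR_TYPE_P check on the result type wouldn't be redundant.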

>>>>>>> +/* We could instead convert all instances of the vec_cond to negate,
>>>>>>> +   but that isn't necessarily a win on its own.  */
>>>>>
>>>>> so p ? 1 : 0 -> -p?  Why isn't that a win on its own?  It looks
>>>>> more compact
>>>>> at least ;)  It would also simplify the patterns below.
>>>>
>>>> In the past I've dealt with processors where arithmetic wasn't handled
>>>> as efficiently as logical ops.  Seems like an especial risk for 64-bit
>>>> elements, from a quick scan of the i386 scheduling models.
>>>
>>> But then expansion could undo this ...
>>
>> So do the inverse fold and convert (neg (cond)) to (vec_cond cond 1 0)?
>> Is there precedent for doing that kind of thing?
>
> Expanding it like this, yes.  Whether there is precedent, no idea, but
> surely the expand_unop path could, if there is no optab for neg:vector_mode,
> try expanding as vec_cond .. 1 0.

Yeah, that part isn't the problem.  The problem is when there is an
implementation of (neg ...) (which I'd hope all real integer vector
architectures would support) but it's not as efficient as the (and ...)
that most targets would use for a (vec_cond ... 0).
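
To spell out the equivalence in question (plain GNU C vector
extensions; the v4si typedef and count_ones function are just made up
for illustration):

  typedef int v4si __attribute__ ((vector_size (16)));

  v4si count_ones (v4si a, v4si b, v4si acc)
  {
    v4si mask = a == b;            /* each element is -1 or 0 */
    v4si ones = { 1, 1, 1, 1 };
    v4si and_form = mask & ones;   /* the (and ...) most targets use for (vec_cond mask 1 0) */
    v4si neg_form = -mask;         /* the (neg ...) form */
    return acc + and_form;         /* and_form and neg_form hold the same values */
  }

Both forms compute the same thing; the question is purely which one
the target executes more cheaply, and the fold itself can't know that.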

> There is precedent for different
> expansion paths dependent on optabs (or even rtx cost?).  Of course
> expand_unop doesn't get the original tree ops (expand_expr.c does,
> where there is some special-casing using get_gimple_for_expr).  Not sure
> if expand_unop would get 'cond' in a form where it can recognize that
> the result is either -1 or 0.

It just seems inconsistent to have the optabs machinery try to detect
this ad-hoc combination opportunity while still leaving the vcond optab
to handle more arbitrary cases, like (vec_cond (eq x y) 0xbeef 0).
The vcond optabs would still have the logic needed to produce the
right code, but we'd be circumventing it and trying to reimplement
one particular case in a different way.

Thanks,
Richard

