This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Re: [PATCH] Improve x86 and + rotate (PR target/82498)


On Thu, Oct 12, 2017 at 8:32 AM, Uros Bizjak <ubizjak@gmail.com> wrote:
> On Wed, Oct 11, 2017 at 10:59 PM, Jakub Jelinek <jakub@redhat.com> wrote:
>> Hi!
>>
>> As can be seen in the testcase below, the *<rotate_insn><mode>3_mask
>> insn/splitter is able to optimize only the case where the AND is
>> performed in SImode and the result is then subregged into QImode;
>> if the computation is already done in QImode, we don't handle it.
>>
>> Fixed by adding another pattern, bootstrapped/regtested on x86_64-linux and
>> i686-linux, ok for trunk?
>
> We probably want to add this variant to *all* *_mask splitters (there
> are a few of them in i386.md; please grep for "Avoid useless
> masking"). Which finally raises the question: should we implement this
> simplification in a generic, target-independent way? OTOH, we already
> have the SHIFT_COUNT_TRUNCATED macro and the shift_truncation_mask
> hook, but the last time I tried the former, there were some problems
> in the testsuite on x86. I guess there are several targets that would
> benefit from removing useless masking of count operands.
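
To make the case concrete, here is a minimal C sketch of the kind of
redundant masking under discussion (the function name is mine; it
assumes a 32-bit unsigned int on x86, whose rotate instructions
already truncate a register count modulo 32):

  /* The count is masked in QImode (unsigned char), the case the new
     pattern handles; x86 ROL truncates a register count to 5 bits
     anyway, so the AND can be dropped.  */
  unsigned int
  rotl (unsigned int x, unsigned char c)
  {
    c &= 31;                    /* useless masking of the count */
    /* Rotate-left idiom GCC recognizes, well-defined for c == 0.  */
    return (x << c) | (x >> (-(unsigned int) c & 31));
  }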

Oh, and there is a strange x86 exception in the comment for
SHIFT_COUNT_TRUNCATED. I'm not sure what "(real or pretended)
bit-field operation" means, but the variable-count BT instruction with
a non-memory operand (we never generate a variable-count BTx with a
memory operand) masks its count operand as well.

Uros.

