This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH] Optimise the fpclassify builtin to perform integer operations when possible


On Tue, Sep 13, 2016 at 6:15 PM, Jeff Law <law@redhat.com> wrote:
> On 09/13/2016 02:41 AM, Jakub Jelinek wrote:
>>
>> On Mon, Sep 12, 2016 at 04:19:32PM +0000, Tamar Christina wrote:
>>>
>>> This patch adds an optimized route to the fpclassify builtin
>>> for floating point numbers which are similar to IEEE-754 in format.
>>>
>>> The goal is to make it faster by:
>>> 1. Trying to determine the most common case first
>>>    (e.g. the float is a Normal number) and then the
>>>    rest. The amount of code generated at -O2 is
>>>    about the same, +/- 1 instruction, but the code
>>>    is much better.
>>> 2. Using integer operations in the optimized path
>>>    (see the sketch below).
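
For illustration only (not the patch itself), a minimal sketch of the kind
of integer-only classification described above, assuming IEEE-754 binary32;
the helper name classify_binary32 is made up for this example:

#include <math.h>
#include <stdint.h>
#include <string.h>

/* Classify an IEEE-754 binary32 value using only integer operations,
   testing the common "normal" case first.  */
static int
classify_binary32 (float f)
{
  uint32_t bits;
  memcpy (&bits, &f, sizeof bits);     /* reinterpret the bits, no FP ops */
  uint32_t exp = (bits >> 23) & 0xff;  /* 8 exponent bits */
  uint32_t mant = bits & 0x7fffff;     /* 23 mantissa bits */

  if (exp != 0 && exp != 0xff)         /* most common case first */
    return FP_NORMAL;
  if (exp == 0)
    return mant ? FP_SUBNORMAL : FP_ZERO;
  return mant ? FP_NAN : FP_INFINITE;  /* exp == 0xff */
}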
>>
>>
>> Is it generally preferable to use integer operations for this instead
>> of floating-point operations?  I mean, various targets have quite high
>> costs of moving data between the general-purpose and floating-point
>> register files; often it has to go through memory, etc.
>
> Bit testing/twiddling is obviously a trade-off for a non-addressable object.
> I don't think there's any reasonable way to always generate the most
> efficient code as it's going to depend on (for example) register allocation
> behavior.
>
> So what we're stuck doing is relying on the target costing bits to guide
> this kind of thing.

I think the reason for this patch is to provide a general optimized
integer version.

The only reason not to use integer operations (compared to what
fold_builtin_classify does currently) is that the folding is done very
early at the moment, and it's harder to optimize the integer
bit-twiddling with more FP context known.  For example, if we know
if (! isnan ()), then unless we also expand that check inline via
bit-twiddling, nothing will optimize the follow-up test from the
fpclassify.  This might be somewhat moot at the moment given our lack
of FP value-range propagation, but it should be a general concern
(of doing this too early).
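
To make that concrete, a small sketch (the function g and the switch shape
are invented for illustration): once the fpclassify below has been lowered
early to integer bit tests, the FP-level guard above it gives later passes
nothing with which to prove the FP_NAN arm dead.

#include <math.h>

int
g (double d)
{
  if (!isnan (d))
    {
      /* Early folding may already have turned this fpclassify into
         integer bit-twiddling, so the isnan guard above provides no
         FP context that would let the FP_NAN case be removed.  */
      switch (fpclassify (d))
        {
        case FP_NAN:
          return 0;   /* dead given the guard, but hard to prove so */
        case FP_NORMAL:
          return 1;
        default:
          return 2;
        }
    }
  return -1;
}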

I think it asks for an FP (class) propagation pass somewhere (maybe as
part of complex lowering, which already has a similar "coarse" lattice
-- not that I like its implementation very much) and doing the
"lowering" there.

Not something that should block this patch though.

Richard.

> jeff

