[PATCH] rs6000: Add optimizations for _mm_sad_epu8

Segher Boessenkool segher@kernel.crashing.org
Fri Nov 19 18:09:32 GMT 2021


Hi!

On Fri, Oct 22, 2021 at 12:28:49PM -0500, Paul A. Clarke wrote:
> Power9 ISA added `vabsdub` instruction which is realized in the
> `vec_absd` intrinsic.
> 
> Use `vec_absd` for `_mm_sad_epu8` compatibility intrinsic, when
> `_ARCH_PWR9`.
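For reference, the semantics `_mm_sad_epu8` must match can be modeled in
scalar C (a sketch for illustration, not the rs6000 implementation in the
patch): for each 8-byte half of the two 16-byte inputs, sum the absolute
differences of the unsigned bytes and place the 16-bit result in the low
word of that half, zeroing the rest.

```c
#include <stdint.h>

/* Hypothetical scalar reference model of the _mm_sad_epu8 semantics
   described above; `sad_epu8_ref` is an illustrative name, not an
   identifier from the patch.  */
static void sad_epu8_ref(const uint8_t a[16], const uint8_t b[16],
                         uint64_t out[2])
{
    for (int half = 0; half < 2; half++) {
        uint64_t sum = 0;
        for (int i = 0; i < 8; i++) {
            int d = (int)a[half * 8 + i] - (int)b[half * 8 + i];
            sum += (uint64_t)(d < 0 ? -d : d);
        }
        /* Maximum is 8 * 255 = 2040, so the sum fits in 16 bits.  */
        out[half] = sum;
    }
}
```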
> 
> Also, the realization of `vec_sum2s` on little-endian includes
> two shifts in order to position the input and output to match
> the semantics of `vec_sum2s`:
> - Shift the second input vector left 12 bytes. In the current usage,
>   that vector is `{0}`, so this shift is unnecessary, but is currently
>   not eliminated under optimization.

The vsum2sws implementation uses an unspec, so there is almost no chance
of anything with it being optimised :-(

It rotates it right by 4 bytes, by the way; it's not a shift.

> - Shift the vector produced by the `vsum2sws` instruction left 4 bytes.
>   The two words within each doubleword of this (shifted) result must then
>   be explicitly swapped to match the semantics of `_mm_sad_epu8`,
>   effectively reversing this shift.  So, this shift (and a subsequent swap)
>   are unnecessary, but not currently removed under optimization.

Rotate left by 4 -- same thing once you consider words 0 and 2 are set
to zeroes by the sum2sws.
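That equivalence can be shown with a scalar model of `vsum2sws` (my reading
of the instruction, written out for illustration): each doubleword's odd
word receives the sum of the two signed words of that doubleword of the
first input plus the odd word of the second input, and the even words are
zeroed.  With words 0 and 2 zero, rotating the result by one word and
shifting it by one word produce the same vector.

```c
#include <stdint.h>

/* Scalar sketch of big-endian vsum2sws semantics (an assumption for
   illustration, not GCC's expansion).  Words 0 and 2 of the result
   are always zero.  */
static void vsum2sws_model(const int32_t a[4], const int32_t b[4],
                           int32_t out[4])
{
    out[0] = 0;
    out[1] = a[0] + a[1] + b[1];
    out[2] = 0;
    out[3] = a[2] + a[3] + b[3];
}
```

Since words 0 and 2 are zero, the word rotated out on the left is always
zero, which is exactly the word a shift would have discarded.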

Not sure why it is not optimised -- what do the dump files say?  Try
-dap; I'd start looking at the combine dump.

> Using `__builtin_altivec_vsum2sws` retains both shifts, so is not an
> option for removing the shifts.
> 
> For little-endian, use the `vsum2sws` instruction directly, and
> eliminate the explicit shift (swap).
> 
> 2021-10-22  Paul A. Clarke  <pc@us.ibm.com>
> 
> gcc
> 	* config/rs6000/emmintrin.h (_mm_sad_epu8): Use vec_absd
> 	when _ARCH_PWR9, optimize vec_sum2s when LE.

Please don't break changelog lines early.

> -  vmin = vec_min (a, b);
> -  vmax = vec_max (a, b);
> +#ifndef _ARCH_PWR9
> +  __v16qu vmin = vec_min (a, b);
> +  __v16qu vmax = vec_max (a, b);
>    vabsdiff = vec_sub (vmax, vmin);
> +#else
> +  vabsdiff = vec_absd (a, b);
> +#endif

So hrm, maybe we should have the vec_absd macro (or the builtin) always,
just expanding to three insns if necessary.
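The reason three instructions suffice is the identity behind the existing
fallback: for unsigned values, max(x, y) - min(x, y) equals |x - y|, and
the subtraction can never wrap.  A minimal scalar sketch, per byte (the
vector form applies the same identity lane-wise):

```c
#include <stdint.h>

/* Sketch of the min/max/sub fallback: unsigned absolute difference
   without a dedicated absd instruction.  `absd_u8` is an illustrative
   name, not an identifier from the patch.  */
static uint8_t absd_u8(uint8_t a, uint8_t b)
{
    uint8_t vmin = a < b ? a : b;   /* vec_min */
    uint8_t vmax = a < b ? b : a;   /* vec_max */
    return (uint8_t)(vmax - vmin);  /* vec_sub: never wraps */
}
```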

Okay for trunk with appropriate changelog and commit message changes.
Thanks!


Segher
