This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: builtin_bswap plus enhancements
- From: Chris Lattner <clattner at apple dot com>
- To: Falk Hueffner <falk at debian dot org>
- Cc: Eric Christopher <echristo at apple dot com>, "gcc-patches at gcc dot gnu dot org Patches" <gcc-patches at gcc dot gnu dot org>, paul at codesourcery dot com, iant at google dot com
- Date: Thu, 10 Aug 2006 12:57:10 -0700
- Subject: Re: builtin_bswap plus enhancements
- References: <44D29BA2.email@example.com> <2895761B-5D52-4C32-894C-B19DDFBD9F4D@apple.com> <firstname.lastname@example.org>
On Aug 10, 2006, at 12:43 PM, Falk Hueffner wrote:
> Chris Lattner <email@example.com> writes:
>> Why not add bswap16 as well?
> It should be unnecessary, since any attempt to express it should be
> picked up by the rot idiom recognizer, and the backends should then
> emit optimal code for constant-8 rots (and if that doesn't actually
> happen, we should rather fix that).
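For illustration, the idiom Falk refers to: a 16-bit byte swap is exactly a rotate by 8, so a straightforward C expression of it should be caught by the rotate recognizer. The function name below is illustrative, not from the patch:

```c
#include <stdint.h>

/* A 16-bit byte swap is a rotate by 8: the high byte moves low
 * and the low byte moves high. A rotate-idiom recognizer can turn
 * this into a single rotate (or equivalent) instruction. */
static uint16_t my_bswap16(uint16_t x)
{
    return (uint16_t)((x << 8) | (x >> 8));
}
```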
Sure, makes sense. I was wondering more for the sake of consistency
than about what GCC can and cannot do.
LLVM, for example, recognizes the common bswap idioms for 16/32/64
bits and generates good code for them, but also exposes intrinsics
for each. The intrinsics are important to clients who want to *know*
they are going to get good code, without having to know that a
particular version of the compiler will do the right thing, or worry
about regressions in future versions.
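As a sketch of the kind of shift-and-mask idiom compilers recognize for the wider sizes (the function name is illustrative; the builtins under discussion, e.g. __builtin_bswap32, let callers bypass idiom recognition entirely):

```c
#include <stdint.h>

/* Open-coded 32-bit byte swap: reverse the four bytes with shifts
 * and masks. A compiler that recognizes this idiom can replace the
 * whole expression with a single byte-swap instruction. */
static uint32_t my_bswap32(uint32_t x)
{
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}
```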
In any case, I have no particular interest in what GCC does; it just
seemed odd to have builtins for 32/64-bit but not 16-bit.