This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: #define SLOW_BYTE_ACCESS in i386.h
- From: Eric Christopher <echristo at apple dot com>
- To: Hui-May Chang <hm dot chang at apple dot com>
- Cc: gcc-patches at gcc dot gnu dot org
- Date: Fri, 01 Sep 2006 11:03:31 -0700
- Subject: Re: #define SLOW_BYTE_ACCESS in i386.h
- References: <4796D556-1CD8-40E6-9A96-1D506F4D9799@apple.com>
Hui-May Chang wrote:
I have a question regarding "#define SLOW_BYTE_ACCESS" in i386.h. It is
used in the "get_best_mode" routine, which finds the best mode to use when
referencing a bit field. It is currently set to 0. Setting it to 1
means "accessing less than a word of memory is no faster than
accessing a word of memory". I experimented with it and observed a
significant performance improvement. It is set to 1 for some other
configurations (e.g., rs6000, pa, ia64). Is it always a win to set it?
Or is it better to set it only for certain i386 architectures?
I'd bet it's advantageous to set it, at least on the newer chips, for a
couple of reasons:
1) You avoid the problem that got you here: large bit fields needing
shift/insert operations.
2) You avoid length changing, since you're mostly operating on things in
word mode.
However, I'm not an expert on the chip, so I'd suggest posting a small
testcase that demonstrates #1, along with the resulting code
differences, so people can see the effect. Hopefully someone with more
Intel experience (like HJ or Jan) can comment on whether or not this is
a good general idea for the processor.
-eric
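For reference, the experiment described in the question amounts to flipping the macro's value in the i386 target header, roughly as below (a sketch, not the actual patch from this thread):

```c
/* In gcc/config/i386/i386.h: tell get_best_mode that sub-word memory
   accesses are no faster than word accesses, so it may prefer wider
   modes when extracting or storing bit fields. */
#define SLOW_BYTE_ACCESS 1
```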