This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [PING] lfloor/lceil and rint SSE expansion for x86_64/i?86
- From: Roger Sayle <roger at eyesopen dot com>
- To: Jan Hubicka <hubicka at ucw dot cz>
- Cc: Richard Guenther <rguenther at suse dot de>, <gcc-patches at gcc dot gnu dot org>
- Date: Sun, 29 Oct 2006 12:36:55 -0700 (MST)
- Subject: Re: [PING] lfloor/lceil and rint SSE expansion for x86_64/i?86
On Sun, 29 Oct 2006, Jan Hubicka wrote:
> this patch seems to noticeably increase memory consumption.
> Perhaps setup cost of optabs?
After a bit of digging, it turns out that the way we handle optabs is
becoming increasingly inefficient. Back when targets had a small
handful of machine modes, there wasn't a problem. But with architectures
becoming increasingly complex, with numerous vector modes, etc., these
huge tables (richi reported 130K each) are rapidly becoming unreasonably
large and incredibly sparse.
My current thinking is that we can probably save a significant amount of
memory by rewriting genopinit.c to use machine generated functions to
select the target's appropriate insn_code, using switches. Indeed, for
targets without special instructions, there shouldn't be any overhead.
Hence, instead of a NUM_MACHINE_MODES x NUM_MACHINE_MODES array that's
completely empty, we use the default "return CODE_FOR_nothing;" hook.
The only fly in the ointment is the way we allow targets to tweak these
tables at runtime, to either rename the libcall or alter the insn_code.
But I see no reason to dynamically allocate all these large optab tables
at start-up, even on machines where they are not relevant.