
Re: [testcase] simplify_binary_operation test (was Re: -freduce-all-givs and fortran)


> > So, in my mind, the question is where are we _still_ failing?  I
> > have some examples that go wrong for Sparc (due to existence of
> > reg+reg but not reg+reg+disp addressing), but that's a different
> > problem.
> 
> We fail as soon as the current heuristic tells the compiler *not* to
> reduce givs while the reduction of *all* givs will actually lead to
> fewer integer registers used than *not* reducing all givs.
Does anyone know how other compilers handle that?
I've looked through some sources (impact, sgipro), and they always
reduce all givs.  That is understandable, as both target only modern
machines with no smart addressing modes and plenty of registers.
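
For illustration, here is a minimal C sketch (hypothetical, not taken
from any of the sources above) of what reducing a giv means: the
indexed access a[i], a giv recomputed as base + i*4 each iteration,
is strength-reduced into a pointer that is simply incremented.

    /* Before reduction: a[i] is a giv, recomputed as base + i*4
       on a target without a cheap reg+reg addressing mode.  */
    int
    sum_indexed (const int *a, int n)
    {
      int i, sum = 0;
      for (i = 0; i < n; i++)
        sum += a[i];
      return sum;
    }

    /* After reducing the giv: the address is its own induction
       variable.  The per-iteration address arithmetic disappears
       and the index i can die, but the pointer p now occupies an
       integer register for the whole loop.  */
    int
    sum_reduced (const int *a, int n)
    {
      const int *p = a, *end = a + n;
      int sum = 0;
      while (p < end)
        sum += *p++;
      return sum;
    }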

But the behaviour of the Intel compiler appears to be that all givs
are reduced first and then "unreduced" as needed.
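
If that is right, the idea in C terms might look like the sketch
below (my guess at the "unreduce" step, not taken from the Intel
compiler): reduce everything first, then rewrite a reduced pointer
back into base-plus-index form wherever the target's addressing
modes make the indexed access free and the extra pointer register
hurts.

    /* Fully reduced form: each array walks its own pointer, so
       three induction registers stay live across the loop.  */
    void
    vadd_reduced (int *c, const int *a, const int *b, int n)
    {
      const int *pa = a, *pb = b, *end = a + n;
      int *pc = c;
      while (pa < end)
        *pc++ = *pa++ + *pb++;
    }

    /* Selectively "unreduced": with a reg+reg addressing mode each
       access is still one instruction, and only the single index i
       is updated per iteration instead of three pointers.  */
    void
    vadd_unreduced (int *c, const int *a, const int *b, int n)
    {
      int i;
      for (i = 0; i < n; i++)
        c[i] = a[i] + b[i];
    }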

The loop optimizer's heuristics appear to be just a nightmare;
perhaps we can come up with a better model after some thinking.

Honza
