
[PATCH] Fix PR c/11420 (broken amd64 movabs*) (take 2)


On Thu, Jul 03, 2003 at 11:02:37AM -0700, Richard Henderson wrote:
> On Thu, Jul 03, 2003 at 01:46:36PM -0400, Jakub Jelinek wrote:
> > Do you think x86_64_movabs_operand should include the MEM and
> > add new constraints for MEM with immediate operand or register operand,
> 
> Yes, I think this would be best.

Unfortunately, it does not seem to work (what I've tried is attached as
the P2 patch).  reload expects strict_memory_address_p for MEM operands
in way too many places.  Given that the movabs* patterns exist precisely
for MEMs whose addresses are not legitimate, reload always ends up
reloading the address into a register, so e.g.:
long bar (void);
void foo (void)
{
  *(long *)0xabcdef0000000000 = bar ();
}
which ought to be
        call    bar
        movabsq %rax, -6066930339719151616
is suddenly:
        movabsq $-6066930339719151616, %rdx
        movq    %rax, (%rdx)
(and similarly with SYMBOL_REFs).
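
For what it's worth, the load direction would hit the other movabs form
(movabsq from a 64-bit absolute address into %rax, the only register the
moffs64 encodings allow); a minimal companion test along these lines
(the function name and the reuse of the same address are purely
illustrative, not part of the patch) shows it:
/* Illustrative only: a load through the same 64-bit absolute address.
   The ideal code is a single movabsq from the absolute address into
   %rax, not an address reload followed by an ordinary movq.  */
long
read_abs (void)
{
  return *(long *) 0xabcdef0000000000;
}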

Below I've implemented the uglier, but apparently working, alternative
of checking the insn for volatile MEMs.
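
Just to illustrate the idea (the helper name, the way the MEM is dug out
of the insn pattern, and the use of volatile_ok are my assumptions here,
not necessarily what the attached patch does; it would live in i386.c
with the usual GCC headers in scope), a rough sketch of such a check:
/* Sketch only; the actual implementation is in the attached patch and
   may well differ.  Given the insn and which side of the SET the MEM
   sits on, reject the movabs pattern for volatile MEMs unless
   volatile_ok allows them.  */
static int
check_movabs_mem (rtx insn, int opnum)
{
  rtx set = PATTERN (insn);
  rtx mem;

  if (GET_CODE (set) == PARALLEL)
    set = XVECEXP (set, 0, 0);
  if (GET_CODE (set) != SET)
    abort ();
  mem = XEXP (set, opnum);
  while (GET_CODE (mem) == SUBREG)
    mem = SUBREG_REG (mem);
  if (GET_CODE (mem) != MEM)
    abort ();
  return volatile_ok || !MEM_VOLATILE_P (mem);
}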

Also, while working on the patch, I noticed that when the 2 SSE
alternatives were expanded into 3 back in 2001, the type attribute was
not updated, so the Y<-m alternative is considered imov rather than
ssemov.

	Jakub

Attachment: P4
Description: Text document

Attachment: P2
Description: Text document

