I'm sorry, this is perhaps not the correct component, but my knowledge of gcc internals does not allow me to do more than guess.

For all versions of gcc I've tried, the following code:

struct point { int x, y; };

bool f(point a, point b) {
    return a.x == b.x && a.y == b.y;
}

bool f(unsigned long long a, unsigned long long b) {
    return a == b;
}

is compiled to:

f(point, point):
        xor     eax, eax
        cmp     edi, esi
        je      .L5
        ret
.L5:
        sar     rdi, 32
        sar     rsi, 32
        cmp     edi, esi
        sete    al
        ret
f(unsigned long long, unsigned long long):
        cmp     rdi, rsi
        sete    al
        ret

I'd expect f(point, point) to have the same assembly as f(unsigned long long, unsigned long long).

Yours,
-- Jean-Marc Bourguet
Confirmed.
Confirmed. Happens on aarch64 too:

        cmp     w0, w1
        beq     .L5
        mov     w0, 0
        ret
        .p2align 2,,3
.L5:
        asr     x0, x0, 32
        asr     x1, x1, 32
        cmp     w0, w1
        cset    w0, eq
        ret

I wonder if we could expose that point is passed via a 64-bit argument at the tree level and then use BIT_FIELD_REF to do the extraction, or lower the field extractions to BIT_FIELD_REF.

Also, we don't optimize:

bool f1(unsigned long long a, unsigned long long b) {
    return (((int)a) == ((int)b)) && ((int)(a>>32) == (int)(b>>32));
}

into just "return a == b;" either, which is another thing that needs to happen after the BIT_FIELD_REF change ...