[Bug sanitizer/55309] gcc's address-sanitizer 66% slower than clang's
jakub at gcc dot gnu.org
gcc-bugzilla@gcc.gnu.org
Fri Feb 8 09:02:00 GMT 2013
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55309
--- Comment #27 from Jakub Jelinek <jakub at gcc dot gnu.org> 2013-02-08 09:02:23 UTC ---
A zero-based shadow offset has the big disadvantage of imposing strict requirements
on the executable layout.
Could we, on x86_64, consider mem_to_shadow(x) = (x >> 3) + 0x7fff8000 (note:
+, not |)?
Then instead of something like:
	movq	%rdi, %rdx
	movabsq	$17592186044416, %rax
	shrq	$3, %rdx
	cmpb	$0, (%rdx,%rax)
	jne	.L5
	movq	(%rdi), %rax
	ret
.L5:
	pushq	%rax
	call	__asan_report_load8
we could emit:
	movq	%rdi, %rdx
	shrq	$3, %rdx
	cmpb	$0, 0x7fff8000(%rdx)
	jne	.L5
	movq	(%rdi), %rax
	ret
.L5:
	pushq	%rax
	call	__asan_report_load8
which is a 7-byte-shorter sequence that needs neither an extra register nor the
not-so-cheap movabs insn. By forcing PIE for everything, you force the PIC
overhead of unnecessary extra indirections in many places (and on non-x86_64
targets it is usually much more expensive).