This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug rtl-optimization/82677] Many projects (linux, coreutils, GMP, gcrypt, openSSL, etc) are misusing asm(divq/divl) etc, potentially resulting in faulty/unintended optimisations


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82677

Niels Möller <nisse at lysator dot liu.se> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |nisse at lysator dot liu.se

--- Comment #10 from Niels Möller <nisse at lysator dot liu.se> ---
Out of curiosity, how is this handled for division instructions generated by
gcc, with no __asm__ involved? E.g., consider

int foo(int d) {
  int r = 1234567;
  if (d != 0)   /* guard: the division must not execute when d == 0 */
    r = r / d;
  return r;
}

On an architecture where the div instruction doesn't raise any exception on
divide by zero, this function could be compiled to a division instruction plus
a conditional move, with no branch instruction at all. Right?
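
For concreteness, here is a hand-written sketch of that branchless shape,
assuming AArch64, where sdiv is defined to yield 0 for a zero divisor rather
than trapping (foo_branchless is my own name, not anything from the bug):

/* Branchless equivalent of foo(), valid only because AArch64's
   sdiv does not trap: dividing by zero simply yields 0.  */
int foo_branchless(int d) {
  int q;
  /* Do the division unconditionally, in asm, where the target
     instruction's divide-by-zero behavior is fully defined.  */
  __asm__ ("sdiv %w0, %w1, %w2" : "=r" (q) : "r" (1234567), "r" (d));
  return d != 0 ? q : 1234567;  /* can become a csel, not a branch */
}

On x86, by contrast, div raises #DE on a zero divisor, so the same
transformation would introduce a trap that the source program could never hit.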

But on most architectures, that optimization would be invalid, and the compiler
must somehow know that. Is that a property of the representation of the
division expression? Or is it tied to some property of the instruction pattern
for the divide instruction?

My question really is: What would it take to mark an __asm__ expression so that
it's treated in the same way as a plain C division?
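
For reference, the sort of __asm__ the bug title is about looks roughly like
GMP's udiv_qrnnd on x86-64 (a sketch; udiv_qr is my name for it, following the
divq convention: dividend in rdx:rax, quotient returned in rax, remainder in
rdx):

static inline unsigned long
udiv_qr(unsigned long *rem, unsigned long hi, unsigned long lo,
        unsigned long d)
{
  unsigned long q;
  /* Nothing here tells gcc that divq traps when d == 0, so the asm
     may be scheduled or hoisted as if it were an ordinary trap-free
     computation -- the misuse this bug is about.  */
  __asm__ ("divq %4"
           : "=a" (q), "=d" (*rem)
           : "a" (lo), "d" (hi), "rm" (d));
  return q;
}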
