This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug tree-optimization/79564] [missed optimization][x86] relaxed atomic counting compiled the same as seq_cst
- From: "rguenth at gcc dot gnu.org" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Fri, 17 Feb 2017 08:46:03 +0000
- Subject: [Bug tree-optimization/79564] [missed optimization][x86] relaxed atomic counting compiled the same as seq_cst
- Auto-submitted: auto-generated
- References: <bug-79564-4@http.gcc.gnu.org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79564
Richard Biener <rguenth at gcc dot gnu.org> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2017-02-17
                 CC|                            |rguenth at gcc dot gnu.org
     Ever confirmed|0                           |1
--- Comment #2 from Richard Biener <rguenth at gcc dot gnu.org> ---
Confirmed. Atomics are basically black-box optimization barriers on GIMPLE.
I once had some patches to at least make alias-analysis not choke on them
too much, but never got to the point of submitting them...
Here you are asking to apply store-motion, which isn't trivial given the IL
in its current form:
int count_relaxed(const char*) (const char * str)
{
  static struct atomic_int counter = {.D.8730={._M_i=0}};
  char _1;
  unsigned int _7;
  int _8;
  char _14;

  <bb 2> [15.00%]:
  str_13 = str_4(D) + 1;
  _14 = *str_4(D);
  if (_14 != 0)
    goto <bb 3>; [85.00%]
  else
    goto <bb 4>; [15.00%]

  <bb 3> [85.00%]:
  # str_17 = PHI <str_6(3), str_13(2)>
  __atomic_fetch_add_4 (&MEM[(struct __atomic_base *)&counter]._M_i, 1, 0);
  str_6 = str_17 + 1;
  _1 = MEM[base: str_6, offset: -1B];
  if (_1 != 0)
    goto <bb 3>; [85.00%]
  else
    goto <bb 4>; [15.00%]

  <bb 4> [15.00%]:
  _7 = __atomic_load_4 (&MEM[(const struct __atomic_base *)&counter]._M_i, 5); [tail call]
  _8 = (int) _7;
  return _8;
}