During the training part the run fails with:

200.c: In function 'htls_.clone.35':
200.c:26853:7: benchmark internal error: in ?, at fold-const.c:2677

during fdo1 with -mcpu=neoverse-v1 -Ofast -flto=auto.

I'll bisect and reduce, but this is a preliminary submission. The failure reappeared somewhere between 6e0b048fb5bc0809048ef8f487830ad26f4b87cf..9a4bb95a4e68b6f90a16f337b0b4cdb9af957ab1, but it could have just been temporarily hidden. Before that it was also failing between a7ae0c31245a7db7abf2e80d0016510afe9c8ad0..979ca3ba366da7177f427e049f67673ec3e35442.
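Roughly, the plan is a plain git bisect over the compiler tree between the hashes above (the rebuild/retest step here stands in for the actual harness):

# bad = newer end of the range, good = older end
git bisect start 9a4bb95a4e68b6f90a16f337b0b4cdb9af957ab1 6e0b048fb5bc0809048ef8f487830ad26f4b87cf
# at each step: rebuild the compiler, rerun the failing training job,
# then mark the commit accordingly:
git bisect good   # if the run succeeds
git bisect bad    # if the internal error reproduces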
Please add -fno-strict-aliasing and try again.
(In reply to Richard Biener from comment #1)
> Please add -fno-strict-aliasing and try again.

Already on. Full options are:

-fprofile-generate -mcpu=neoverse-v1 -Ofast -fomit-frame-pointer -flto=auto -g3 -fno-strict-aliasing -fgnu89-inline -std=gnu17
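As a sketch, the build boils down to invoking the compiler under test with that option set on each unit (unit.c is a placeholder, not an actual benchmark source):

OPTS="-fprofile-generate -mcpu=neoverse-v1 -Ofast -fomit-frame-pointer \
      -flto=auto -g3 -fno-strict-aliasing -fgnu89-inline -std=gnu17"
gcc $OPTS -c unit.c   # unit.c stands in for each benchmark translation unit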
Isn't this a dup of bug 115450?
(In reply to Andrew Pinski from comment #3)
> Isn't this a dup of bug 115450?

I don't believe so. This only shows up with PGO for me, and only during training, so I suspect -fprofile-generate is doing something, as it doesn't show up on the job without it.
OK, I just realized why it fails for PGO: PGO uses the train dataset. And indeed I can now reproduce it on `train` without LTO. `ref` is still fine, which is likely why the other CIs didn't see it.
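For context, the standard PGO flow sketched with placeholder names (bench, unit.c, train.in); the training step is exactly where the train dataset gets exercised, which is why a PGO build hits it and a plain `ref` run doesn't:

gcc -fprofile-generate -mcpu=neoverse-v1 -Ofast -o bench unit.c   # 1) instrumented build
./bench train.in   # 2) training run on the train dataset, writes .gcda profiles
gcc -fprofile-use -mcpu=neoverse-v1 -Ofast -o bench unit.c        # 3) rebuild using the profile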
Works for me on x86_64-linux with -Ofast -march=znver4
(In reply to Richard Biener from comment #6)
> Works for me on x86_64-linux with -Ofast -march=znver4

Yeah, it's still failing here. Tracking down the offending change is on my list for this week.
Fixed by g:589d79e6268b055422a7b6c11cd0a8a4f2531a8c