This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug tree-optimization/56982] [4.8/4.9 Regression] Bad optimization with setjmp()
- From: "rguenther at suse dot de" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Wed, 17 Apr 2013 09:07:10 +0000
- Subject: [Bug tree-optimization/56982] [4.8/4.9 Regression] Bad optimization with setjmp()
- Auto-submitted: auto-generated
- References: <bug-56982-4 at http dot gcc dot gnu dot org/bugzilla/>
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56982
--- Comment #7 from rguenther at suse dot de <rguenther at suse dot de> 2013-04-17 09:07:10 UTC ---
On Wed, 17 Apr 2013, jakub at gcc dot gnu.org wrote:
>
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56982
>
> --- Comment #6 from Jakub Jelinek <jakub at gcc dot gnu.org> 2013-04-17 08:56:00 UTC ---
> I don't see how we could declare the testcase invalid; why would n need to be
> volatile? It isn't live across the setjmp call, it is even declared after the
> setjmp call, and it is always initialized after the setjmp call.
Then there is no other way but to model the abnormal control flow
properly. Even simple CSE can break things otherwise. Consider

  int tmp = a + 1;
  setjmp ();
  int tmp2 = a + 1;

Even on RTL, CSE would break that, no? setjmp doesn't even
forcefully start a new basic block.
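To make the CSE hazard concrete, here is a small, well-defined sketch (illustrative only, not the actual testcase from this bug; the jmp_buf, the global a, and the function name demo are invented for the example). Because setjmp returns a second time via longjmp, the value of a can change between the two evaluations of a + 1, so a pass that merged tmp2 into tmp would miscompile the function:

```c
#include <setjmp.h>

static jmp_buf env;
static int a = 1;

int demo (void)
{
  int tmp = a + 1;              /* first return path: 2; tmp is not
                                   modified between setjmp and longjmp,
                                   so its value stays determinate */
  if (setjmp (env) == 0)
    {
      a = 10;                   /* change a, then take the abnormal edge */
      longjmp (env, 1);
    }
  /* second return from setjmp: a is now 10 */
  int tmp2 = a + 1;             /* must be recomputed as 11; CSE reusing
                                   tmp here would yield the stale value 2 */
  return tmp2 - tmp;            /* 11 - 2 = 9 under correct compilation */
}
```

With a correct compiler demo () returns 9; a pass that CSEs the second a + 1 into tmp would make it return 0, which is exactly the kind of transform that modeling the abnormal edge is meant to forbid.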
Hmm, maybe doing just that (starting a new BB for all returns-twice
calls and adding an abnormal edge from the function entry) is enough to
avoid all possibly dangerous transforms.
Richard.