unsigned int xornot (unsigned int a)
{
  unsigned int b = a ^ 0xffff;
  unsigned int c = ~b;
  return c;
}

unsigned int notxor (unsigned int a)
{
  unsigned int b = ~a;
  unsigned int c = b ^ 0xffff;
  return c;
}

The last tree-ssa form looks like:

;; Function xornot (xornot)

xornot (a)
{
  unsigned int c;
  unsigned int b;

<bb 0>:
  b_2 = a_1 ^ 65535;
  c_3 = ~b_2;
  return c_3;

}

;; Function notxor (notxor)

notxor (a)
{
  unsigned int c;
  unsigned int b;

<bb 0>:
  b_2 = ~a_1;
  c_3 = b_2 ^ 65535;
  return c_3;

}

Confirmed.

Will be fixed by PR 15459 once fold is able to optimize ~(a<D1109>_1 ^ 65535) into ~a<D1104>_1 ^ 65535.

I should note that the RTL level does this optimization.

Actually it is better for ~(a ^ CST) to come out as a ^ ~CST. Right now we already implement ~(a ^ CST) as a ^ ~CST, so we just need to implement (~a) ^ CST as a ^ ~CST. In fact this is what simplify_rtx does.

~(a ^ CST) is handled in fold_unary, under this comment: /* Convert ~(X ^ Y) to ~X ^ Y or X ^ ~Y if ~X or ~Y simplify.  */ Currently only (~a ^ ~b) is simplified. That can be extended: for (~a ^ b), if ~b simplifies, simplify the expression; likewise for (a ^ ~b), if ~a simplifies, simplify the expression. I am going to implement this.

I am no longer going to fix the fold issue; it is too much hassle to get this fixed.

Subject: Bug 15458

Author: sayle
Date: Sun Oct 29 17:51:07 2006
New Revision: 118152

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=118152
Log:
	PR tree-optimization/15458
	* fold-const.c (fold_binary): Optimize ~X ^ C as X ^ ~C, where C
	is a constant.

	* gcc.dg/fold-xornot-1.c: New test case.

Added:
    trunk/gcc/testsuite/gcc.dg/fold-xornot-1.c
Modified:
    trunk/gcc/ChangeLog
    trunk/gcc/fold-const.c
    trunk/gcc/testsuite/ChangeLog

Fixed.