#include <stdint.h>

struct foo { uint32_t x:20; };

int bar(struct foo f) {
    if (f.x) {
        uint32_t y = (uint32_t)f.x * 4096;
        if (y < 200) return 1;
        else return 2;
    }
    return 3;
}

Here, truth of the condition f.x implies y >= 4096, but GCC does not DCE the y < 200 test and the return 1 codepath. I actually had this come up in real-world code, where I was considering use of an inline function with nontrivial small-size cases when a "page count" bitfield is zero; I expected these nontrivial cases to be optimized out based on having already tested that the page count is nonzero, but GCC was unable to do it. LLVM/clang does it.
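A hypothetical sketch of the real-world pattern described above; the names struct desc, page_count, size_of, and use are invented for illustration, not taken from the actual code:

#include <stdint.h>

struct desc { uint32_t page_count:20; };

/* Nontrivial small-size handling that should be dead code when
   page_count is already known to be nonzero. */
static inline uint32_t size_of(struct desc d) {
    if (d.page_count == 0)
        return 64; /* small-object case, expected to be optimized out below */
    return (uint32_t)d.page_count * 4096;
}

uint32_t use(struct desc d) {
    if (d.page_count) {
        /* The zero branch of size_of is provably dead here,
           but GCC fails to eliminate it. */
        return size_of(d);
    }
    return 0;
}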
Confirmed via godbolt: https://godbolt.org/z/FexRgJ
The problem is that if (f.x) gets lowered too early. This is not directly a VRP issue either. Changing the code slightly:

#include <stdint.h>

struct foo { uint32_t x:20; };

int bar(struct foo f) {
    uint32_t y = (uint32_t)f.x;
    if (y) {
        y *= 4096;
        if (y < 200) return 1;
        else return 2;
    }
    return 3;
}

GCC is able to optimize it.
Just to quote, EVRP sees

  <bb 2> :
  _1 = VIEW_CONVERT_EXPR<unsigned int>(f);
  _2 = _1 & 1048575;
  if (_2 != 0)
    goto <bb 3>; [INV]
  else
    goto <bb 6>; [INV]

  <bb 3> :
  _3 = f.x;
  _4 = (unsigned int) _3;
  y_8 = _4 * 4096;
  if (y_8 <= 199)

thus the f.x != 0 test has been folded by one of those $?%&! premature fold-const transforms to

  if ((BIT_FIELD_REF <f, 32, 0> & 1048575) != 0)

The fix is to get rid of those (and fix the "fallout").
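A C-level approximation of what the premature fold produces (an illustrative sketch, not generated code): the bit-field test is replaced by a masked load of the containing 32-bit word, so later passes see two unrelated reads (the masked word in the condition, f.x in the body) and cannot connect the nonzero test to the range of y:

#include <stdint.h>
#include <string.h>

struct foo { uint32_t x:20; };

int bar_folded(struct foo f) {
    uint32_t word;
    memcpy(&word, &f, sizeof word);        /* ~ BIT_FIELD_REF <f, 32, 0> */
    if ((word & 1048575) != 0) {           /* folded form of f.x != 0 */
        uint32_t y = (uint32_t)f.x * 4096; /* separate read of f.x: no range link */
        if (y < 200) return 1;
        else return 2;
    }
    return 3;
}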
Mine for GCC 13.
No longer working on bitfield optimization. Maybe in a few years I will be again, but not today.