When running an OpenGL regression test against a recent change in Mesa git that uses always_inline, we are getting odd results. 64-bit builds seem to work fine; the problem only occurs on 32-bit builds.
gcc 6.3.1 and gcc 7.1 have both been confirmed to have the same issue.
If I remove always_inline the problem goes away.
always_inline was added in this commit, but it wasn't until a second user of the inlined function was added that the problem began.
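For context, the shape of the code is roughly as follows. This is a minimal sketch, not the actual Mesa source; the function names and bit-packing logic are hypothetical, and it only illustrates an always_inline helper gaining a second caller.

#include <stdint.h>

/* Hypothetical helper; in Mesa the attribute is typically applied
 * via a macro, but the effect is the same. */
static inline __attribute__((always_inline)) uint32_t
pack_bits(uint32_t value, unsigned offset, unsigned width)
{
   /* Mask off 'width' low bits, then shift into place. */
   uint32_t mask = (width < 32) ? ((1u << width) - 1u) : 0xffffffffu;
   return (value & mask) << offset;
}

/* First user of the inlined function. */
uint32_t encode_a(uint32_t v) { return pack_bits(v, 0, 16); }

/* Second user; it was the addition of a second caller like this
 * that first exposed the 32-bit problem. */
uint32_t encode_b(uint32_t v) { return pack_bits(v, 16, 16); }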
I've only tested the issue on Intel GPUs so far, but it likely exists with all drivers. It should be reproducible on Sandy Bridge and later.
1. Build 32-bit versions of piglit and Mesa from git.
2. Set LD_LIBRARY_PATH to the new Mesa libs and run the following test from the piglit git directory (a full example, including the LD_LIBRARY_PATH setting, is shown below).
./bin/shader_runner tests/spec/arb_shader_bit_encoding/execution/and-clamp.shader_test -auto -fbo
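For example, assuming the 32-bit Mesa build was installed to a local prefix (the path below is hypothetical; substitute your own build directory):

export LD_LIBRARY_PATH=/path/to/mesa-32bit/lib   # hypothetical path to the 32-bit Mesa libs
./bin/shader_runner tests/spec/arb_shader_bit_encoding/execution/and-clamp.shader_test -auto -fbo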
If you have any questions about this, please ask for help in the #dri-devel or #intel-gfx freenode channels. My nick is tarceri, but there should be someone who can help if I'm not around.
Adding URL to the Mesa bug report.
This sounds like it could be undefined behavior in the code that only shows up with always_inline, and maybe only on 32-bit x86.
Can you attach the preprocessed source of the file where always_inline makes a difference?
Also, can you try with -fwrapv (which makes signed integer overflow defined as wrapping) and -fno-strict-aliasing (which makes aliasing violations defined instead of undefined)?
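To illustrate the kind of code these flags affect (a generic sketch, not taken from Mesa, though arb_shader_bit_encoding-style bit casts are a typical place where strict aliasing matters):

#include <stdint.h>
#include <string.h>

/* Violates strict aliasing: a float object is read through an
 * incompatible uint32_t pointer.  The compiler may reorder or
 * eliminate accesses here unless -fno-strict-aliasing is given. */
uint32_t float_bits_punned(float f)
{
   return *(uint32_t *)&f;
}

/* Well-defined alternative: a memcpy-based bit cast. */
uint32_t float_bits_memcpy(float f)
{
   uint32_t u;
   memcpy(&u, &f, sizeof(u));
   return u;
}

/* Signed overflow is undefined, so at -O2 GCC may assume x + 1 > x
 * and fold this to "return 1" unless -fwrapv is given. */
int overflows(int x)
{
   return x + 1 > x;
}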
No change with -fno-strict-aliasing
-fwrapv is not getting past configure, with warnings such as:
WARNING: sys/sysmacros.h: present but cannot be compiled
Seems to work correctly with newer versions of GCC.