Georg Johann Lay <avr@gjlay.de> writes:
Suppose a backend implements some unspec_volatile (UV) and has a
distinct understanding of what it should be.
If other parts of the compiler don't know exactly what to do, it's a
potential source of trouble.
- "May I schedule across the unspec_volatile (UV) or not?"
- "May I move the UV from one BB into another?"
- "May I duplicate UVs or reduce them?" (unrolling, ...)
Those kinds of questions.
If there is no clear statement/specification, we have a problem and a
déjà vu of the all too well known unspecified/undefined/implementation-
defined situations like
i = i++
but now within the compiler, for instance between the backend and the
middle end.
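For concreteness, here is a hypothetical machine-description fragment of
the kind I have in mind (the pattern name, constant, and "cli" mnemonic
are made up for illustration): a backend-specific "disable interrupts"
insn modeled as an unspec_volatile.

```
;; Hypothetical example only. An interrupt-disable insn with no
;; ordinary RTL semantics, expressed as an unspec_volatile so the
;; compiler treats it as having unknown side effects.
(define_constants
  [(UNSPECV_CLI 0)])

(define_insn "disable_interrupts"
  [(unspec_volatile [(const_int 0)] UNSPECV_CLI)]
  ""
  "cli")
```

The questions above are exactly about what the optimizers may do with
an insn like this one.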
The unspec_volatile RTL code is definitely under-documented.
In general, the rules about unspec_volatile are similar to the rules
about the volatile qualifier. In other words, an unspec_volatile may be
duplicated at compile time, e.g., if a loop is unrolled, but must be
executed precisely the specified number of times at runtime. An
unspec_volatile may move to a different basic block under the same
conditions. If the compiler can prove that an unspec_volatile can never
be executed, it can discard it.
That is clear enough. What is much less clear is the extent to which an
unspec_volatile acts as a scheduling barrier. The scheduler itself
never moves any instructions across an unspec_volatile, so in that sense
an unspec_volatile is a scheduling barrier. However, I believe that
there are cases where the combine pass will combine instructions across
an unspec_volatile, so in that sense an unspec_volatile is not a
scheduling barrier. (The combine pass will not attempt to combine the
unspec_volatile instruction itself.)
It may be that those cases in combine are bugs. However, neither the
documentation nor the implementation is clear on that point.