This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [PATCH, x86_64]: Use soft-fp for TFmode (128bit) FP.
Uros Bizjak wrote:
The attached follow-on patch to soft-fp/op-common.h implements Jakub's
suggestion for the FP_EXTEND and FP_TRUNC defines. However, IMO we should
also generate an FP_EX_DENORM exception in both cases, and
FP_EX_PRECISION in the FP_TRUNC case.
...
int main()
{
  long double a = 1.1e-4950L;
  long double b = 3.1L;
Ouch.
This needs to be:
volatile long double a = 1.1e-4950L;
to actually test extendxftf functionality.
Unfortunately, this uncovers another generic bug: for denormals,
FP_UNPACK_RAW_E increases the exponent field by one:
--cut here--
#define FP_UNPACK_RAW_E(X, val) \
do { \
union _FP_UNION_E _flo; _flo.flt = (val); \
\
X##_f0 = _flo.bits.frac; \
X##_f1 = 0; \
X##_e = _flo.bits.exp; \
X##_s = _flo.bits.sign; \
if (!X##_e && X##_f0 && !(X##_f0 & _FP_IMPLBIT_E)) \
{ \
X##_e++; \
FP_SET_EXCEPTION(FP_EX_DENORM); \
} \
} while (0)
--cut here--
When X_e equals zero (a denormal) and X_f0 holds some small value
(e.g. 0x03 in the above example), X_e (the exponent field) gets increased
by one. As a result, detection of normal numbers via the generic
_FP_EXP_NORMAL macro goes wrong.
Is this intentional? Should the _FP_EXP_NORMAL macro instead detect
extended-type denormals via the implicit bit?
Uros.