[Bug fortran/48426] [patch] Quad precision promotion

jvdelisle at frontier dot com gcc-bugzilla@gcc.gnu.org
Sun Apr 3 21:14:00 GMT 2011


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48426

--- Comment #5 from jvdelisle at frontier dot com 2011-04-03 21:14:41 UTC ---
On 04/03/2011 12:49 PM, inform at tiker dot net wrote:
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48426
>
> --- Comment #3 from Andreas Kloeckner<inform at tiker dot net>  2011-04-03 19:49:51 UTC ---
> (In reply to comment #2)
>> There is already -fdefault-real-8, -fdefault-integer-8, and
>> -fdefault-double-8.  This is already 3 too many hacks.  Adding
>> an additional 7 options is 7 too many.
>
> From a purist perspective, I absolutely agree with you. Practically, however,
> in dealing with legacy codes (*), the value of being able to do this should not
> be underestimated. If this were a useless thing to do, why would Lahey/Fujitsu
> have included these flags?
>

Let's take another perspective, different from Steve's comments. Let's assume 
no one is blindly running legacy code with changed default kinds without at 
least some understanding of the code and what it is doing.  With modern editors 
or scripts, one could modify the declarations to use a kind parameter, changing 
REAL to REAL(kind=wp), where wp means working precision.  One could also append 
_wp to the real constants.  Then add a use statement at the beginning of each 
file that sets wp.  Several different precision parameters could be set up and 
used in different places depending on the application.  One gets a finer degree 
of control.
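To make the suggestion concrete, here is a minimal sketch of that approach 
(the module and parameter names, precision_mod and wp, are illustrative, not 
from any particular code base):

```fortran
! precision_mod: one place to set the working precision for the whole code.
module precision_mod
  use, intrinsic :: iso_fortran_env, only: real64, real128
  implicit none
  ! Change this single line to switch the entire application
  ! between double precision (real64) and quad precision (real128).
  integer, parameter :: wp = real64
end module precision_mod

program demo
  use precision_mod, only: wp
  implicit none
  real(wp) :: x          ! was: REAL :: x  (default kind)
  x = 1.0_wp / 3.0_wp    ! constants carry the _wp suffix
  print *, x, precision(x)
end program demo
```

Rebuilding after editing the single parameter changes every real(wp) 
declaration and every _wp constant at once, which is exactly the control the 
command-line flags approximate, but expressed in standard-conforming source.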

Now, I have presented a simplistic example, and I am sure real code would 
offer some additional challenges. But the net result would be more flexible, 
more portable code, and the original issue becomes moot.

Someone whose knowledge of the code is greater than mine could likely see more 
opportunities for improving it with simple changes like this, especially in 
the code that is critical to the issue.  Once the changes are made, perhaps 
even incrementally over time in small parts, they never have to be changed 
back, because the result is standard conforming.

Now, we can make arguments in either direction: put the burden of resolving 
the issue on the compiler team, or put it on the users/developers.  I think 
Steve is saying that we have an obligation on the compiler side to protect 
less knowledgeable or less experienced users from doing foolish things, 
whereas someone such as yourself would probably know what to be careful of and 
could avoid it. Providing the adjustable precision on the user/developer side 
ensures that it is done carefully and correctly, specific to the application.

Now, the flip side is that a patch of less than 100 lines on the compiler side 
is far less effort than updating, say, 100,000 lines of legacy code. However, 
the compiler change affects thousands of users and thousands of applications.  
That means there is, let's say, 1000 times more risk in changing the compiler.  
(With no warranty, of course.)

I am leaning toward erring on the conservative side, and I will let others 
comment from here. We have had discussions on this topic many times before, 
and I don't think there is an easy answer.  My personal preference would be to 
update the user code (i.e., software maintenance).

Regards


