[RFC Patch]: Implement remainder() as built-in function [PR fortran/24518]

Mike Stump mrs@apple.com
Wed Oct 25 20:35:00 GMT 2006


On Oct 25, 2006, at 6:42 AM, Kaveh R. GHAZI wrote:
> I agree with Richard that transforming printf("hello") into
> fputs("hello",__get_stdout()) may not be a good option.

For -Os, that would be bad.  For -O2, I'd let the numbers speak for
themselves, though I'd also tend to think it might not be worth it.
Still, maybe an fwrite-style call would be a win when the string
being output is known, so that we don't have to examine the bytes at
run time.
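
Concretely, a hand-written sketch of what that rewrite would amount
to (this is not what the middle-end actually emits, just the shape
of the transformation):

   #include <stdio.h>

   void
   before (void)
   {
     printf ("hello");      /* no % conversions, length known  */
   }

   void
   after (void)
   {
     /* Five bytes, known at compile time, pushed with one call and
        no rescanning of the string.  Written against plain stdout
        here; how the compiler names the stream is the question
        discussed below.  */
     fwrite ("hello", 1, 5, stdout);
   }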

> Now if you could get it to inline __get_stdout() I would be much  
> more excited.

You might hate me, but autoconf can smell the system:

$ cat > t1.c
#include <stdio.h>
stdout
$ gcc -E t1.c | tail -1
__stdoutp
$ gcc -E t1.c | grep 'extern FILE \* __stdoutp;'
extern FILE * __stdoutp;

and then, if we think we understand what the system is doing, we can
just do it.  Linux is similar; there stdout maps to plain stdout.  If
we can pick up 90% of the systems we care about with one autoconf
test, why not?  The harder part is if one needs to understand the
type (I hope not); if not, one can just write

   typedef struct unknown FILE;
   extern FILE *cpped_name_for_stdout;
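
To make that concrete, with __stdoutp standing in for
cpped_name_for_stdout (the Darwin spelling discovered above), the
transformed call the mid-end would conceptually generate is:

   typedef struct unknown FILE;
   extern FILE *__stdoutp;

   int fputs (const char *, FILE *);

   /* What printf ("hello") would be rewritten into; only the
      pointer's identity matters, never FILE's layout.  */
   void
   emit_hello (void)
   {
     fputs ("hello", __stdoutp);
   }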

If one needs to know the type, well, let's just say that I think I'd
rather throw my hands up in the air.  Maybe make that depend on LTO
and have the gcc build phase capture the type via LTO and replay it
at compile time.  But one would seriously need to consider what the
optimizer could do with that information, what the system libc
people do for maintenance, and how LTO interacts with the binary
stability that a changing libc is trying to manage; all of that
could cause such type-based optimization to go wrong.  The good news
is that we will already need to think about and solve that problem.
:-)

Anyway, back to the question of why not: we'd need to acknowledge
that some systems might trip us up on this.  Imagine that -thread
turns on -D_THREADS=1, which causes stdout (or stderr) to map a
different way, the system requires that environment when compiling,
and users do compile that way, but gcc wasn't intelligent enough to
know about it while running autoconf or building libgcc.  Then gcc
gets it wrong, and that can't be undone by the user of gcc, only by
a gcc developer, after which the user would need to install a new
gcc.  The system (libc) isn't technically limited in the number of
ways such funniness can kill us, though in practice I think that for
stdout and errno we can safely work out what real systems do (and
don't do) and engineer something that works reliably within a single
release cycle.
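
A purely made-up <stdio.h> fragment shows the failure mode: the
expansion of stdout depends on a macro the user controls, so a
single configure-time probe can record the wrong name.

   /* Hypothetical system header, invented for illustration only.  */
   #ifdef _THREADS
   #  define stdout (__get_per_thread_stream (1))
   #else
   #  define stdout __stdoutp
   #endif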


Now, for the people who don't like such magical things, one can
take the old-school approach and do

   #define TARGET_STDOUT_NAME "stdout"

on those systems that are `known' to be safe to do this on in all
situations, and just have the mid-end use that knowledge; we know it
is right by fiat, with no autoconf test to go wrong (or to slow down
building the compiler).  If this simple interface covers 90% of the
systems we care about, again, that seems reasonable to me.
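
As a sketch (TARGET_STDOUT_NAME is only a proposal here, not an
existing target macro), a target header and the mid-end's view of it
might look like:

   /* In the target's header, e.g. for Darwin (sketch only).  */
   #define TARGET_STDOUT_NAME "__stdoutp"

   #ifdef TARGET_STDOUT_NAME
   /* The mid-end would build an `extern FILE *' declaration with
      this assembler name and use it as the stream argument when
      rewriting printf into fputs/fwrite; targets that don't define
      the macro keep the current behavior.  */
   #endif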

Now, a word on gcc taking unnecessary dependencies on the system's
libc.  The exposure _is_ greater for stdout, or for an inline
version of errno, than for a __get_errno-type routine, as the latter
only requires .a/.so forward portability from libc.  The worst case
is that one might have to rebuild/reinstall the compiler at OS
upgrade time, to avoid mixing what were internal details of the
previous libc into newly built software that must use only the
internal details of the new libc.  This doesn't happen in the
__set_errno case, either because the system engineered forward
portability of .a/.so files, or because we don't care.
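
For contrast, the out-of-line style would look roughly like this
(the name __get_stdout follows the quoted proposal above; where the
routine lives is an assumption).  Because this file is compiled
against the system's own <stdio.h>, only the ordinary symbol-level
ABI gets baked in, not the system's private spelling of stdout:

   #include <stdio.h>

   /* Compiled once against the system headers and shipped out of
      line (e.g. in libgcc); callers depend only on this symbol.  */
   FILE *
   __get_stdout (void)
   {
     return stdout;
   }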


