This is the mail archive of the mailing list for the GCC project.
Re: stderr vs. STDERR_FILENO
Mark Mitchell <firstname.lastname@example.org> writes:
> There are three cases:
> (1) stderr open and attached to fd 2
> (2) stderr freopen'd (and not attached to fd 2)
> (3) stderr fclose'd
It occurs to me that there's actually a fourth case:
(4) the stderr FILE object intact, but pointing to a file descriptor
that's been close'd (and possibly later reopened) behind the back of
stdio using the POSIX calls
This is actually fairly common for some network daemons that do things
like close all their open file descriptors in a loop when they start.
(Whether that's a good idea or not is outside the scope of this
discussion; people do it quite frequently.) These daemons often have no
intention of ever using stdio in the remainder of the program.
I have no idea how stdio reacts to this situation. I would naively expect
that it would just attempt to write to fd 2, whether it's closed or not,
so if the application has done this, fwrite to stderr is going to either
fail with EBADF or go somewhere completely random.
I'm not sure there's really a good way of using a standard stdio stream
behind the back of the user, as it were. There are just so many ways the
user can break things that occur commonly in practice. :/
Both using the stream and writing to the fd seem to have problems to me.
My *guess* is that using the stream is going to cause a more immediate and
obvious problem if the user has done something strange (like segfaults if
the stream has been fclose'd), but also runs more of a risk of a memory
clobber. Both approaches seem to run some risk of spewing messages in a
place they shouldn't go, although I would think that writing blindly to fd
2 runs a somewhat higher risk based on my experience.
Whichever approach is taken, the documentation clearly needs to draw this
to the user's attention.
Russ Allbery (email@example.com) <http://www.eyrie.org/~eagle/>