This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
Re: US-CERT Vulnerability Note VU#162289
Florian Weimer wrote:
Existing safe C implementations take a performance hit which is a factor
between 5 and 11 in terms of execution time. There is some new research
that seems to get away with a factor less than 2, but it's pretty recent
and I'm not sure if it's been reproduced independently. If GCC users
are actually willing to take that hit for some gain in security
(significant gains for a lot of legacy code, of course), then most of
the recent work on GCC has been wasted. I don't think this is the case.
This is wholly excessive rhetoric; it uses a common invalid device
in argument, sometimes called extenso ad absurdum. It goes like this:
You advocate A
But then to be consistent you should also advocate B,C,D,E
I will now argue against the combination A,B,C,D,E :-) :-)
These implementations that are 5-11 times slower are doing all sorts
of things that
a) I am not advocating in this discussion
b) I would not advocate in any case
Are you really saying that this particular optimization is costly to
eliminate? If so I just don't believe that allegation without data.
Keep in mind it's not the comparison that's the real problem here, it's
the subsequent buffer overflow. And plugging that hole in full
generality is either difficult to do, or involves a significant run-time
performance overhead (or both).
And there you go! I do NOT advocate "plugging that hole in full
generality", so go try this argument on someone who does (I don't
think there are any such people around here!)
To me, dubious optimizations like this at the very least should
be optional and able to be turned off.
Why is this optimization dubious? We would need to look at real-world
code to tell, and so far, we haven't heard anything about the context in
which the issue was originally encountered.
An optimization is dubious to me if
a) it produces surprising changes in behavior (note the importance of
the word surprising here)
b) it does not provide significant performance gains (note the
importance of the word significant here).
I find this optimization qualifies as meeting both criteria a) and b),
so that's why I consider it dubious.