PING [Patch][Middle-end]Add -fzero-call-used-regs=[skip|used-gpr|all-gpr|used|all]
Tue Aug 25 21:54:08 GMT 2020
> On Aug 24, 2020, at 3:20 PM, Segher Boessenkool <firstname.lastname@example.org> wrote:
> On Mon, Aug 24, 2020 at 01:02:03PM -0500, Qing Zhao wrote:
>>> On Aug 24, 2020, at 12:49 PM, Segher Boessenkool <email@example.com> wrote:
>>> On Wed, Aug 19, 2020 at 06:27:45PM -0500, Qing Zhao wrote:
>>>>> On Aug 19, 2020, at 5:57 PM, Segher Boessenkool <firstname.lastname@example.org> wrote:
>>>>> Numbers on how expensive this is (for what arch, in code size and in
>>>>> execution time) would be useful. If it is so expensive that no one will
>>>>> use it, it helps security not at all :-(
>>> Without numbers on this, no one can determine if it is a good tradeoff
>>> for them. And we (the GCC people) cannot know if it will be useful for
>>> enough users that it will be worth the effort for us. Which is why I
>>> keep hammering on this point.
>> I can collect some run-time overhead data on this; do you have a recommendation on which test suite
>> I should use for this testing? (Is CPU2017 good enough?)
> I would use something more real-life, not 12 small pieces of code.
There is some basic information about the CPU2017 benchmarks at the link below:
GCC itself is one of the CPU2017 benchmarks (502.gcc_r), and 526.blender_r is even larger than 502.gcc_r.
There are several other fairly large benchmarks as well (perlbench, xalancbmk, parest, imagick, etc.).