[PATCH] introduce --param max-object-size

Jeff Law law@redhat.com
Tue Dec 1 22:23:05 GMT 2020



On 11/30/20 3:21 PM, Martin Sebor wrote:
> On 11/30/20 1:29 PM, Jeff Law wrote:
>>
>>
>> On 11/17/20 7:09 PM, Martin Sebor wrote:
>>> On 11/16/20 4:54 PM, Jeff Law wrote:
>>>>
>>>> On 11/16/20 2:04 AM, Richard Biener via Gcc-patches wrote:
>>>>> On Sun, Nov 15, 2020 at 1:46 AM Martin Sebor via Gcc-patches
>>>>> <gcc-patches@gcc.gnu.org> wrote:
>>>>>> GCC considers PTRDIFF_MAX - 1 to be the size of the largest object
>>>>>> so that the difference between a pointer to the byte just past its
>>>>>> end and the first one is no more than PTRDIFF_MAX.  This is too
>>>>>> liberal in LP64 on most systems because the size of the address
>>>>>> space is constrained to much less than that, both by the width
>>>>>> of the address bus for physical memory and by the practical
>>>>>> limitations of disk sizes for swap files.
>>>>> Shouldn't this be a target hook like MAX_OFILE_ALIGNMENT then?
>>>>
>>>> I think one could argue either way.  Yes, the absolutes are a function
>>>> of the underlying hardware, and they can change over the lifetime of a
>>>> processor family, which likely differs from MAX_OFILE_ALIGNMENT.
>>>>
>>>>
>>>> A PARAM gives the developer a way to specify the limit, which is more
>>>> flexible.
>>>>
>>>>
>>>> What I'm really not sure of is whether it really matters in
>>>> practice for end users.
>>>
>>> I'd like to do two things with this change: 1) make it easier
>>> (and encourage users) to detect as early as possible more bugs
>>> due to excessive sizes in various function calls (malloc, memcpy,
>>> etc.), and 2) verify that GCC code uses the limit consistently
>>> and correctly.
>>>
>>> I envision the first would appeal to security-minded distros
>>> and other organizations that use GCC as their system compiler.
>>> For those, a target hook would be more convenient than a --param.
>>> But I also expect individual projects wanting to impose stricter
>>> limits than distros select.  For those, a --param is the only
>>> choice (aside from individual -Wlarger-than- options(*)).
>>>
>>> With this in mind, I think providing both a target hook and
>>> a --param has the best chance of achieving these goals.
>>>
>>> The attached patch does that.
>>>
>>> Martin
>>>
>>> [*] To enforce more realistic object size limits than PTRDIFF_MAX,
>>> GCC users today have to set no fewer than five (or six if we count
>>> -Wstack-usage) options: -Walloca-larger-than,
>>> -Walloc-size-larger-than, -Wframe-larger-than, -Wlarger-than, and
>>> -Wvla-larger-than.  The limits are all independent of one another.
>>> I expect providing a single configurable baseline value for all
>>> these options to use and refine to be helpful to these users.
>>>
>>> gcc-max-objsize.diff
>>>
>> The more I think about this, the more I think it's not really useful in
>> practice.
>>
>> I don't see distros using this flag, as there are likely no good values a
>> distro could use that would catch bogus code without needlessly
>> flagging valid code.
>
> Red Hat documents 128TB of maximum x86_64 per-process virtual address
> space:
>
>   https://access.redhat.com/articles/rhel-limits
>
> My understanding is that no x86_64 implementation exists that supports
> objects larger than 2^48 bytes.  AFAIK, other 64-bit architectures and
> operating systems have similar limits.  The Red Hat page mentions limits
> for all our supported architectures.
And are there any real-world cases where lowering from PTRDIFF_MAX to
one of these limits actually matters in our ability to detect bogus
code?  I strongly suspect the answer is no.

>
>>
>> I don't see individual projects using this code either -- for the most
>> part I would not expect a project developer to be able to accurately
>> predict the maximum size of allocations they potentially perform and
>> then bake that into their build system.  There are exceptions (kernel &
>> embedded systems come immediately to mind).
>
> They can refer to documentation like the RHEL link above to figure
> that out.  But even with 64-bit addresses, the size of the virtual
> address space is limited by the amount of physical memory and
> the size of the swap file (limited by the size of the disk), and
> for practical purposes, by the transfer rate of the disk.  So with
> some math, those who care about these things can easily come up
> with a more realistic limit for their application than PTRDIFF_MAX.
Yes, they can, but I strongly suspect they won't.

>
>>
>> And finally, I'm really not a fan of --params for end-user needs.  Those
>> feel much more like options that we as GCC developers use to help
>> ourselves rather than something we encourage others to use.
>
> I don't insist on it being a parameter.  Any other knob will
> work as well.  I just thought this is what parameters were for(*).
>
> Will you approve the patch with the parameter changed to an option?
> Say -fmax-object-size, or would I be wasting my time?
As in the other thread (I believe around loop-unrolling heuristics), a
-f<whatever>= option is just as bad as a param, IMHO.

I won't stand in the way if someone else wants to ACK this, but I'm not
going to ACK it myself without a better sense that it matters in real
world code.


Jeff


