[PINGv2][PATCH] Ignore alignment by option

Yury Gribov y.gribov@samsung.com
Thu Dec 4 14:16:00 GMT 2014


On 12/04/2014 05:04 PM, Dmitry Vyukov wrote:
> On Thu, Dec 4, 2014 at 4:48 PM, Yury Gribov <y.gribov@samsung.com> wrote:
>> On 12/04/2014 03:47 PM, Dmitry Vyukov wrote:
>>>
>>> size_in_bytes = -1 instrumentation is too slow to be the default in
>>> kernel.
>>>
>>> If we want to pursue this, I propose a different scheme.
>>> Handle 8+ byte accesses as 1/2/4 accesses. No changes to 1/2/4 access
>>> handling.
>>> Currently when we allocate, say, a 17-byte object we store 0 0 1 into
>>> shadow. An 8-byte access starting at offset 15 won't be detected,
>>> because the corresponding shadow value is 0. Instead we start storing
>>> 0 9 1 into shadow. Then the first shadow != 0 check will fail, and the
>>> precise size check will catch the OOB access.
>>> Make this scheme the default for kernel (no additional flags).
>>>
>>> This scheme has the following advantages:
>>> - load shadow only once (as opposed to the current size_in_bytes = -1
>>> check that loads shadow twice)
>>> - less code in instrumentation
>>> - accesses to beginning and middle of the object are not slowed down
>>> (shadow still contains 0, so fast-path works); only accesses to the
>>> very last bytes of the object are penalized.
>>
>>
>> Makes sense.  The scheme actually looks bullet-proof - why did the Asan
>> team prefer the current (fast but imprecise) algorithm?
>>
>> BTW I think we'll want this option in userspace as well, so we'll
>> probably need to update libasan.
>
> We've discussed this scheme, but nobody has shown that it's important enough.
> It bloats binary (we do have issues with binary sizes) and slows down
> execution a bit. And if it is non-default mode, then it adds more
> flags (which is bad) and adds more configurations to test.
>
> For this to happen somebody needs to do research on (1) binary size
> increase, (2) slowdown, (3) number of additional bugs it finds (we can
> run it over extensive code base that is currently asan-clean).

Regarding (3) - unless a codebase deliberately uses unaligned accesses 
(like the kernel), this change would be of little use: all unaligned 
accesses are then bugs and should already be detected and fixed with UBSan.

-Y



More information about the Gcc-patches mailing list