This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [PINGv2][PATCH] Ignore alignment by option
- From: Dmitry Vyukov <dvyukov@google.com>
- To: Yury Gribov <y.gribov@samsung.com>
- Cc: Marat Zakirov <m.zakirov@samsung.com>, gcc-patches@gcc.gnu.org, Jakub Jelinek <jakub@redhat.com>, Kostya Serebryany <kcc@google.com>, Andrey Ryabinin <a.ryabinin@samsung.com>, address-sanitizer <address-sanitizer@googlegroups.com>
- Date: Thu, 4 Dec 2014 21:06:36 +0400
- Subject: Re: [PINGv2][PATCH] Ignore alignment by option
- References: <546CB0CD.8090402@samsung.com> <547731CA.8080909@samsung.com> <54804DF7.9020602@samsung.com> <CACT4Y+aJN3jtwGhAX4J38+Y-zn9r=KkyhhaY-iDky2WPE_Yhpg@mail.gmail.com> <54806622.8010505@samsung.com> <CACT4Y+b_xvmoROfzRk4MbE_DKfC0P_2k23-57bVe_MVmBiLQZQ@mail.gmail.com> <54806CD5.5080308@samsung.com>
On Thu, Dec 4, 2014 at 5:16 PM, Yury Gribov <y.gribov@samsung.com> wrote:
> On 12/04/2014 05:04 PM, Dmitry Vyukov wrote:
>>
>> On Thu, Dec 4, 2014 at 4:48 PM, Yury Gribov <y.gribov@samsung.com> wrote:
>>>
>>> On 12/04/2014 03:47 PM, Dmitry Vyukov wrote:
>>>>
>>>> size_in_bytes = -1 instrumentation is too slow to be the default in the
>>>> kernel.
>>>>
>>>> If we want to pursue this, I propose a different scheme.
>>>> Handle 8+ byte accesses as 1/2/4 accesses. No changes to 1/2/4 access
>>>> handling.
>>>> Currently, when we allocate, say, a 17-byte object, we store 0 0 1 into
>>>> shadow. An 8-byte access starting at offset 15 won't be detected,
>>>> because the corresponding shadow value is 0. Instead, we start storing
>>>> 0 9 1 into shadow. Then the first shadow != 0 check will fail, and the
>>>> precise size check will catch the OOB access.
>>>> Make this scheme the default for the kernel (no additional flags).
>>>>
>>>> This scheme has the following advantages:
>>>> - load shadow only once (as opposed to the current size_in_bytes = -1
>>>> check that loads shadow twice)
>>>> - less code in instrumentation
>>>> - accesses to beginning and middle of the object are not slowed down
>>>> (shadow still contains 0, so fast-path works); only accesses to the
>>>> very last bytes of the object are penalized.
>>>
>>> Makes sense. The scheme actually looks bullet-proof; why did the ASan team
>>> prefer the current (fast but imprecise) algorithm?
>>>
>>> BTW, I think we'll want this option in userspace as well, so we'll probably
>>> need to update libasan.
>>
>> We've discussed this scheme, but nobody has shown that it's important
>> enough.
>> It bloats the binary (we do have issues with binary sizes) and slows down
>> execution a bit. And if it's a non-default mode, then it adds more
>> flags (which is bad) and more configurations to test.
>>
>> For this to happen, somebody needs to do research on (1) the binary size
>> increase, (2) the slowdown, and (3) the number of additional bugs it finds
>> (we can run it over an extensive code base that is currently asan-clean).
>
> Regarding (3): unless a codebase deliberately uses unaligned accesses (like
> the kernel), this change would be of little use; all unaligned accesses are
> then bugs and should already be detected and fixed with UBSan.
You answered your own question about user space :)