This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [patch] Fix node weight updates during ipa-cp (issue7812053)
- From: Richard Biener <richard.guenther@gmail.com>
- To: Teresa Johnson <tejohnson@google.com>
- Cc: "gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>, reply@codereview.appspotmail.com
- Date: Mon, 8 Apr 2013 10:52:24 +0200
- Subject: Re: [patch] Fix node weight updates during ipa-cp (issue7812053)
- References: <20130327172238.545EF80580@tjsboxrox.mtv.corp.google.com> <CAFiYyc1F7ncAXzN9eN3wuZzpZvk=mpKM7D4CcBt6hCKpeM1xtA@mail.gmail.com> <CAAe5K+WCmr8sqGgFm8vVqDGe5Xfe840jBn7Suf5ZGQkezO38Bg@mail.gmail.com>
On Fri, Apr 5, 2013 at 4:18 PM, Teresa Johnson <tejohnson@google.com> wrote:
> On Thu, Mar 28, 2013 at 2:27 AM, Richard Biener
> <richard.guenther@gmail.com> wrote:
>> On Wed, Mar 27, 2013 at 6:22 PM, Teresa Johnson <tejohnson@google.com> wrote:
>>> I found that the node weight updates on cloned nodes during ipa-cp were
>>> leading to incorrect/insane weights. Both the original and new node weight
>>> computations used truncating divides, leading to a loss of total node weight.
>>> I have fixed this by making both rounding integer divides.
>>>
>>> Bootstrapped and tested on x86-64-unknown-linux-gnu. Ok for trunk?
>>
>> I'm sure we can outline a rounding integer divide inline function on
>> gcov_type. To gcov-io.h, I suppose.
>>
>> Otherwise this looks ok to me.
>
> Thanks. I went ahead and worked on outlining this functionality. In
> the process of doing so, I discovered that there was already a method
> in basic-block.h to do part of this: apply_probability(), which does
> the rounding divide by REG_BR_PROB_BASE. There is a related function
> combine_probabilities() that takes 2 int probabilities instead of a
> gcov_type and an int probability. I decided to use apply_probability()
> in ipa-cp, and add a new macro GCOV_COMPUTE_SCALE to basic-block.h to
> compute the scale factor/probability via a rounding divide. So the
> ipa-cp changes I made use both GCOV_COMPUTE_SCALE and
> apply_probability.
>
> I then went through all the code to look for instances where we were
> computing scale factors/probabilities and performing scaling. I found
> a mix of existing uses of apply/combine_probabilities, uses of RDIV,
> inlined rounding divides, and truncating divides. I think it would be
> good to unify all of this. As a first step, I replaced all inline code
> sequences that were already doing rounding divides (whether to compute
> scale factors/probabilities or to apply the scaling) with the
> appropriate helper function/macro described above. For these
> locations there should be no change in behavior.
>
> There are a number of places where there are truncating divides right
> now. Since changing those may impact the resulting behavior, for this
> patch I simply added a comment as to which helper they should use. As
> soon as this patch goes in I am planning to change those to use the
> appropriate helper and test performance, and then will send that patch
> for review. So for this patch, the only place where behavior is
> changed is in ipa-cp which was my original change.
>
> New patch is attached. Bootstrapped (both bootstrap and
> profiledbootstrap) and tested on x86-64-unknown-linux-gnu. Ok for
> trunk?
Ok.
Thanks,
Richard.
> Thanks,
> Teresa
>
>>
>> Thanks,
>> Richard.
>>
>>> 2013-03-27 Teresa Johnson <tejohnson@google.com>
>>>
>>> * ipa-cp.c (update_profiling_info): Perform rounding integer
>>> division when updating weights instead of truncating.
>>> (update_specialized_profile): Ditto.
>>>
>>> Index: ipa-cp.c
>>> ===================================================================
>>> --- ipa-cp.c (revision 197118)
>>> +++ ipa-cp.c (working copy)
>>> @@ -2588,14 +2588,18 @@ update_profiling_info (struct cgraph_node *orig_no
>>>
>>> for (cs = new_node->callees; cs ; cs = cs->next_callee)
>>> if (cs->frequency)
>>> - cs->count = cs->count * (new_sum * REG_BR_PROB_BASE
>>> - / orig_node_count) / REG_BR_PROB_BASE;
>>> + cs->count = (cs->count
>>> + * ((new_sum * REG_BR_PROB_BASE + orig_node_count/2)
>>> + / orig_node_count)
>>> + + REG_BR_PROB_BASE/2) / REG_BR_PROB_BASE;
>>> else
>>> cs->count = 0;
>>>
>>> for (cs = orig_node->callees; cs ; cs = cs->next_callee)
>>> - cs->count = cs->count * (remainder * REG_BR_PROB_BASE
>>> - / orig_node_count) / REG_BR_PROB_BASE;
>>> + cs->count = (cs->count
>>> + * ((remainder * REG_BR_PROB_BASE + orig_node_count/2)
>>> + / orig_node_count)
>>> + + REG_BR_PROB_BASE/2) / REG_BR_PROB_BASE;
>>>
>>> if (dump_file)
>>> dump_profile_updates (orig_node, new_node);
>>> @@ -2627,14 +2631,19 @@ update_specialized_profile (struct cgraph_node *ne
>>>
>>> for (cs = new_node->callees; cs ; cs = cs->next_callee)
>>> if (cs->frequency)
>>> - cs->count += cs->count * redirected_sum / new_node_count;
>>> + cs->count += (cs->count
>>> + * ((redirected_sum * REG_BR_PROB_BASE
>>> + + new_node_count/2) / new_node_count)
>>> + + REG_BR_PROB_BASE/2) / REG_BR_PROB_BASE;
>>> else
>>> cs->count = 0;
>>>
>>> for (cs = orig_node->callees; cs ; cs = cs->next_callee)
>>> {
>>> - gcov_type dec = cs->count * (redirected_sum * REG_BR_PROB_BASE
>>> - / orig_node_count) / REG_BR_PROB_BASE;
>>> + gcov_type dec = (cs->count
>>> + * ((redirected_sum * REG_BR_PROB_BASE
>>> + + orig_node_count/2) / orig_node_count)
>>> + + REG_BR_PROB_BASE/2) / REG_BR_PROB_BASE;
>>> if (dec < cs->count)
>>> cs->count -= dec;
>>> else
>>>
>>> --
>>> This patch is available for review at http://codereview.appspot.com/7812053
>
>
>
> --
> Teresa Johnson | Software Engineer | tejohnson@google.com | 408-460-2413