This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Re: AVR: CC0 to CCmode conversion


Paul Schlie <schlie@comcast.net> writes:

> > From: Denis Chertykov <denisc@overta.ru>
> >> - possibly something like: ?
> >> 
> >>   (define_insn "*addhi3"
> >>     [(set (match_operand:HI 0 ...)
> >>        (plus:HI (match_operand:HI 1 ...)
> >>                 (match_operand:HI 2 ...)))
> >>      (set (reg ZCMP_FLAGS)
> >>        (compare:HI (plus:HI (match_dup 1) (match_dup 2)) (const_int 0)))
> >>      (set (reg CARRY_FLAGS)
> >>        (compare:HI (plus:HI (match_dup 1) (match_dup 2)) (const_int 0)))]
> >>     ""
> >>     "@ add %A0,%A2\;adc %B0,%B2
> >>        ..."
> >>     [(set_attr "length" "2, ...")])
> > 
> > You have presented a very good example. Do you know of any port
> > which has already used this technique?
> > As I remember, addhi3 is a special insn which is used by reload.
> > Reload will generate addhi3, and it will have a problem with the
> > two modified regs (ZCMP_FLAGS, CARRY_FLAGS), which will be a bad
> > surprise for reload. :( As I remember.
> 
> Thanks for your patience; now that I understand GCC's spill/reload
> requirements/limitations a little better, I understand your desire to
> merge compare-and-branch.

I don't want to merge compare-and-branch, because (as Richard said)
"explicit compare elimination by creating
 even larger fused operate-compare-and-branch instructions that
 could be recognized by combine.  I wouldn't actually recommend
 this though, because branch instructions with output reloads are
 EXTREMELY DIFFICULT to implement properly."
(And for AVR it is IMPOSSIBLE.)

I want to have two separate insns: compare and branch.
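
To make that concrete, here is a minimal sketch of the split, assuming a
dedicated flags hard register called REG_CC in CC mode; the register name,
constraints and lengths are invented for illustration, not taken from any
existing port.  The arithmetic insn discloses its flag side effect as a
single clobber, the compare is the only insn that sets REG_CC, and the
branch only reads it.  (Whether reload may still drop such an add between
the compare and the branch is the separate problem addressed by the
save/restore idea below.)

  ;; Arithmetic clobbers the flags instead of setting them, so there is
  ;; only one extra side effect to describe, not two.
  (define_insn "addhi3"
    [(set (match_operand:HI 0 "register_operand" "=r")
          (plus:HI (match_operand:HI 1 "register_operand" "%0")
                   (match_operand:HI 2 "register_operand" "r")))
     (clobber (reg:CC REG_CC))]
    ""
    "add %A0,%A2\;adc %B0,%B2"
    [(set_attr "length" "2")])

  ;; The compare is a separate insn and the only setter of REG_CC.
  (define_insn "*cmphi"
    [(set (reg:CC REG_CC)
          (compare:CC (match_operand:HI 0 "register_operand" "r")
                      (match_operand:HI 1 "register_operand" "r")))]
    ""
    "cp %A0,%A1\;cpc %B0,%B1"
    [(set_attr "length" "2")])

  ;; The branch is a separate insn and only reads REG_CC.
  (define_insn "*branch_eq"
    [(set (pc)
          (if_then_else (eq (reg:CC REG_CC) (const_int 0))
                        (label_ref (match_operand 0 "" ""))
                        (pc)))]
    ""
    "breq %0"
    [(set_attr "length" "1")])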

> However, as an alternative to merging compare-and-branch in order to
> overcome the fact that using a conventional add operation to
> calculate the effective spill/reload address for FP offsets >63
> bytes would corrupt the machine's cc-state that a following
> conditional skip/branch may depend on, I wonder if it may be worth
> considering simply saving the status register to a temp register and
> restoring it after computing the spill/reload address when a large
> FP offset is required (which seems infrequent relative to offsets of
> <=63 bytes, so the save/restore would typically not be needed).
> 
> If this were done, then not only could compares be split from branches,
> with all side effects fully disclosed; but all compares against 0
> resulting from any arbitrary expression calculation could be optimized
> directly, without relying on a subsequent peephole optimization.
> 
> Further, if there were a convenient way to determine whether the now
> fully exposed cc-status register is "dead" (i.e. has no dependents),
> then it should be possible to skip preserving it when calculating
> large-FP-offset spill/reload effective addresses, as it would be known
> that no subsequent conditional skip/branch operation depends on it.
> 
> With this same strategy, it may even be desirable to conditionally
> preserve the cc-status register around all corrupting effective-address
> calculations whenever it is not "dead", as that would seem potentially
> more efficient than otherwise needing to re-compute an explicit
> comparison afterward.

I think that's a better way. I will test it.
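
Purely as a sketch of that idea (the pattern shape, constraints, and the
">63" test are assumptions, not taken from the real port): the
flag-clobbering subi/sbci address computation can be bracketed by saving
and restoring SREG through __tmp_reg__, both of which the AVR back end's
assembly output already defines (as __SREG__ and __tmp_reg__).

  ;; Compute FP+offset into a pointer register when the offset is too
  ;; large for ldd, preserving SREG around the flag-clobbering subi/sbci.
  ;; (A real pattern would also keep operand 0 away from the frame
  ;; pointer pair r28/r29 itself.)
  (define_insn "*addhi3_fp_offset_keep_sreg"
    [(set (match_operand:HI 0 "register_operand" "=d")
          (plus:HI (reg:HI REG_Y)
                   (match_operand:HI 1 "const_int_operand" "n")))]
    "INTVAL (operands[1]) > 63"
    "in __tmp_reg__,__SREG__
     mov %A0,r28
     mov %B0,r29
     subi %A0,lo8(-(%1))
     sbci %B0,hi8(-(%1))
     out __SREG__,__tmp_reg__"
    [(set_attr "length" "6")])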

> (Observing that I'm basically suggesting treating the cc-status register
>  like any other hard register, whose value would need to be saved/restored
>  around any corrupting operation if its value has live dependents; what's
>  preventing GCC's register and value dependency tracking logic from being
>  able to manage its value properly, just as it can for other
>  register-allocated values?)

Why not a CCmode register?
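
For reference, here is a rough sketch of what that would look like, with
an invented register number.  Once the flags live in an ordinary fixed
hard register with mode CC, the normal dataflow machinery tracks their
liveness like any other register's (so the "is it dead?" question above
can be asked, for instance, with peep2_regno_dead_p in a define_peephole2
condition), and combine can fuse an add with a following compare against
zero into a single parallel, much like the pattern quoted at the top of
this message.

  ;; A dedicated flags register; 36 is an invented number for this sketch
  ;; (it would also need to be marked fixed in FIXED_REGISTERS).
  (define_constants
    [(REG_CC 36)])

  ;; With the flags explicit, "add that also sets the flags" is just a
  ;; parallel that combine can build from addhi3 plus a compare with 0.
  (define_insn "*addhi3_set_cc"
    [(set (reg:CC REG_CC)
          (compare:CC
            (plus:HI (match_operand:HI 1 "register_operand" "%0")
                     (match_operand:HI 2 "register_operand" "r"))
            (const_int 0)))
     (set (match_operand:HI 0 "register_operand" "=r")
          (plus:HI (match_dup 1) (match_dup 2)))]
    ""
    "add %A0,%A2\;adc %B0,%B2"
    [(set_attr "length" "2")])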

Denis.

