This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: PR optimization/9786
- From: Mike Stump <mrs at apple dot com>
- To: John David Anglin <dave at hiauly1 dot hia dot nrc dot ca>
- Cc: gcc at gcc dot gnu dot org
- Date: Wed, 17 Sep 2003 14:29:38 -0700
- Subject: Re: PR optimization/9786
On Tuesday, September 16, 2003, at 08:28 PM, John David Anglin wrote:
> Is it ok to delete a no-op insn that can trap?
The question is simple, but probably not specific enough. As it
stands, the answer is probably no. But in reading the referenced
message, it sounds like we know that the no-op doesn't trap, and
therefore we can change it, optimize it based on the knowledge that it
doesn't trap, and therefore delete it.
> What about an unnecessary PIC symbol reference accessed through the got?
What about it? If a program behaves as if it obeys the required
semantics, then we can do it, and if it doesn't, we can't. Offhand,
this case sounds like a yes, we can optimize it out. A pic/got expert
might be able to counter with a no, we can't, because of _insert really
obscure stuff here_. The type of possible rant would be: no, we can't
optimize it out, as otherwise the on-demand load of the library would
not happen, and the ctor firing would be wrong for this class of lazy
libraries... :-( Ick! I'd say, but if that is how the _target_ is
defined... Well, targets can make weird evaluations like this, and we'd
just put it in conditional on a target bit if not all targets wanted
it. In the end, you see, this can be extremely target dependent, if
knowledgeable experts tell us it is.
I'm not trying to claim I'm an expert at the _name of favorite obscure
system that would render wrong any general statement about the topic_
type of system. Those experts can speak up as code that might violate
their notion of the world is put in.
> I don't think I understand the rules for -fnon-call-exceptions. On
> the PA the rules for the trapping of floating point instructions are
> very complex. A symbol reference via the got shouldn't normally
> trap, but almost any instruction can trap if there is a system TLB miss
Isn't this exactly like saying that any instruction can trap, when we
have a system that does asynchronous process scheduling and we can
issue an event (a signal in UNIX, for example) from another process? If
so, then what you say is true in spite of, not because of, the TLB.
TLBs happen under the hood, and don't usually inject into the abstract
machine model for the compiler (or the languages the compiler targets),
so it is as if they didn't exist at all. In the case of
-fnon-call-exceptions, there are only two things that _trap_: a memory
load or store, because the data at the address hasn't been allocated to
the process, and (waving hand) floating point. No one claims a TLB
miss interacts with or does anything special with -fnon-call-exceptions.
In fact, they specifically disclaim all traps, except for a small
class of interesting traps. Presumably, you can read the Java spec and
get a very good definition of exactly what can trap, and why, and how,
and what can't; I think it was put in for them.
> or in the case of a problem with the got pointer.
If it is because the object fetched or stored isn't mapped into the
virtual address space of the process, then yes; otherwise, no.
> We emit a lot of insns in the initial rtl generation that we wouldn't
> emit if we were smarter. The current rules say an insn that loads
> a symbol reference containing an unspec in a MEM may trap, so it can't
> be deleted. On the other hand, the insn may be logically unnecessary to
> the behavior of the code and probably didn't have to be emitted in
> the first place. So, where do we draw the line in optimizing rtl?
When it doesn't behave as if it met the required semantics of the
abstract machine model of the language. You should feel exactly as if
I dodged the question; that was intentional. We, the gcc developers,
are expected to know the as-if rule and the required semantics of the
abstract machine models of all of our languages. OK, stop laughing
now... Or, at least, we have to be able to ask Kenner about the fine
points of what Ada expects out of some obscure corner of the gcc
backend semantic landscape.
The gcc abstract model was originally exactly the C model, but as other
languages come under the gcc umbrella, we have to enhance, extend, and
more fully specify exactly those semantics required by each language
standard upon the gcc constructs, and when that is at odds with
existing uses, to split or extend the constructs with the required
semantic bits.
So, for example, if there is a language that allows a user to write:
  {
    id object;
    try {
      object = 1;
    } catch (got_bits_smashed_in (object)) { ... }
  }
and this language is added to gcc, then we'd have to find a way to
express the notion of a symbol that uses got bits to the optimizer,
and then, for that language, we could not just delete the no-op that
plays with the got bits, as the language is defined to require a
transfer of control to the exception handler in that case.