This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Deprecating basic asm in a function - What now?


In the end, my problems with basic-asm-in-functions (BAIF) come down to reliably generating correct code.

Optimizations are based on the idea of "you can safely modify x if you can prove y." But since basic asm statements are opaque blocks, there is no way to prove, well, much of anything. Treating them as 'volatile', and now as clobbering memory, does help. But the very definition of "opaque block" means that you absolutely cannot know what weird-ass crap someone might be trying. And every time gcc tweaks any of its optimizers, there's a chance that this untethered blob could bubble up or down within the code. No optimizer can safely decide whether a given position violates the intent of the asm, since it has no idea what that intent is. Can even a one-line displacement of arbitrary assembler code cause race conditions or data corruption without causing a compiler error? I'll bet it can.

There is also the problem of compatibility. I am told that the mips "sync" instruction (used in multi-threaded programming) requires a memory clobber to work as intended. With that in mind, when I see this code on some blog, is it safe?

   asm("sync");

And of course the answer is that without context, there's no way to know. It might have been safe for the compiler in the blogger's environment. Or maybe he was an idiot. It's certainly not safe in any released version of gcc. But unless the blog reader knows the blogger's level of intelligence, development environment, and all the esoteric differences between that environment and their own, they could easily copy/paste that right into their own project, where it will compile without error, and "nearly always" work just fine...

With Bernd's recent change, people who have this (incorrect) code in their gcc project will suddenly have their code start working correctly, and that's a good thing. Well, they will a year or so from now. If they immediately start using v7.0. And if they prevent everyone from trying to compile their code using 6.x, 5.x, 4.x, etc., where it will never work correctly. So yes, it's a fix. But it can easily be years before this change finally solves the problem. However, changing this to use extended asm:

   asm("sync":::"memory");

means that their very next source code release will immediately work correctly on every version of gcc.

You make the point about how people changing this code could easily muck things up. And I absolutely agree, they could. On the other hand, it's possible that their code is already mucked up and they just don't know it. But even if they change this code to asm("sync":::) (forcing it to continue working the same way it has for over a decade), the programmer's intent is clear (if wrong). A knowledgeable mips programmer could look at that and say "Hey, that's not right." While making that same observation with asm("sync") is all but impossible.

BAIF is fragile. That, combined with unmet user expectations, bad user code, inconsistent implementations between compilers, changing implementations within compilers, years of bad docs, no "best practices" guide, and the sheer complexity of the area, tells me that this is territory filled with unfixable, ticking bombs. That's why I think it should be deprecated.

Other comments inline:

On 20/06/16 18:36, Michael Matz wrote:
> I see zero gain by deprecating them and only churn.  What would be the
> advantage again?

Correctness.

> As said in the various threads about basic asms, all correctness
> problems can be solved by making GCC more conservative in handling them
> (or better said: not making it less conservative).

> If you talk about cases where basic asms diddle registers expecting GCC to
> have placed e.g. local variables into specific ones (without using local
> reg vars, or extended asm) I won't believe any claims ...
>> It is very likely that many of these basic asms are not
>> robust
> ... of them being very likely without proof.

I don't have a sample of people accessing local variables, but I do have one where someone was using 'scratch' registers in BAIF assuming the compiler would just "handle it." And before you call that guy a dummy, let me point out that under some compilers, that's a perfectly valid assumption for asm. And even if not, it may have been working "just fine" in his implementation for years. Which doesn't make it right.

> They will have stopped
> working with every change in compilation options or compiler version.  In
> contrast I think those that did survive a couple years in software very
> likely _are_ correct, under the then documented (or implicit) assumptions.
> Those usually are: clobbers and uses memory, processor state and fixed
> registers.

As I was saying, a history of working doesn't make it right. It just means the ticking hasn't finished yet.

How would you know if you have correctly followed the documented rules? Don't expect the compiler to flag these violations for you.

>> in the face of compiler changes because they don't declare their
>> dependencies and therefore work only by accident.
> Then the compiler better won't change into less conservative handling of
> basic asms.

> You see, the experiment shows that there's a gazillion uses of basic asms
> out there.  Deprecating them means that each and every one of them (for us
> alone that's 540 something, including testsuite and boehm) has to be
> changed from asm("body") into asm("body" : : : "memory") (give and take
> some syntax for also clobbering flags).  Alternatively rewrite the
> body to actually make use of extended asm.  I guarantee you that a
> non-trivial percentage will be wrong _then_ while they work fine now.

Yes, that sounds bad, and you aren't wrong.

On the other hand, there is no guarantee that they are actually correct right now. Correct enough? Maybe so. But of course there's no guarantee that tomorrow's newest optimization won't bork some of them because it followed the rules and you didn't quite.

And if we're talking about made up numbers, what about the 'non-trivial' percentage of changes that end up improving things? Fixing bugs, better performance, more maintainable code, optimization failures and late night debugging sessions that never happen, etc...

> Even if it weren't so it still would be silly if GCC simply could regard
> the former as the latter internally.  It would just be change for the sake
> of it and affecting quite many users without gain.


> Ciao,
> Michael.


