Re: Algol Front end


> On the IBM mainframe there is no performance difference between packed decimal
> and binary numbers, in general - I have done tests to verify this. This is not
> true on machines without packed decimal hardware/microcode support. Strictly
> speaking the X86s do have some rudimentary packed decimal support, but the
> decimal format is different from the X/Open COBOL packed decimal standard,
> which is the same as the format supported by the IBM mainframes.

Most certainly packed decimal does not run as fast as normal binary register
operations. If your tests show otherwise, they are flawed.

> Not only do you have to write a lot of code to support packed decimal, but it
> is complex/tricky or relatively slow or probably both.

We found it pretty easy in Realia COBOL to beat the general performance of
IBM COBOL (that was a 4.77MHz PC vs a 370/148). Both have scaled up by now,
but the PC has scaled up more :-)

> Of course you can implement packed decimal in GCC via function calls; however,
> most of the optimisation does not work because GCC does not understand what is
> going on. You could try to inline the runtime, but I suspect (without proof)
> that this would lead to unacceptable code bloat.

You should be able to get perfectly reasonable performance with runtime
calls; we certainly did in Realia COBOL. The code that IBM generates is
not that good.

> I had a look at the Ada runtime for packed decimal about 12 months ago, and I
> would be amazed if it is anywhere near as fast as binary arithmetic. From
> memory it was written in Ada, so it is probably not reusable for COBOL. If
> anyone does have any good information on optimising packed decimal code (other
> than Knuth's routines for converting to and from decimal) I would be interested
> to hear about it.

The Ada runtime is not the place to look; there are no efficient routines
there. The Realia runtime would be a good place to look, but unfortunately
that is proprietary.

> Do you have any further information re your comment on doing bits in parallel?

It's a relatively straightforward algorithm. I have recreated it a couple of
times; I could do so again, I suppose. You use the same algorithm that the
IBM mainframe likely uses.
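
To give the flavor of it, here is a sketch in C of the standard word-parallel
trick, reconstructed from memory; the function name and layout are mine, not
Realia's, and it handles unsigned BCD digits only, ignoring the sign nibble
that real packed decimal carries. The idea is to add 6 to every digit so that
decimal carries propagate as ordinary binary carries, do one binary add, work
out which nibbles did not produce a carry, and subtract the 6 back out of
those:

#include <stdint.h>

/* Add two packed BCD operands, 15 digits per 64-bit word.
   The top nibble is kept free to absorb the carry out of digit 14.  */
uint64_t
bcd_add (uint64_t a, uint64_t b)
{
  uint64_t t1 = a + 0x0666666666666666ULL;   /* push every digit toward 16   */
  uint64_t t2 = t1 + b;                      /* binary add: decimal carries
                                                now ripple like binary ones  */
  uint64_t t3 = t1 ^ b;                      /* sum bits ignoring carries    */
  uint64_t t4 = t2 ^ t3;                     /* bit 4k set iff a carry came
                                                into nibble k                */
  uint64_t t5 = ~t4 & 0x1111111111111110ULL; /* nibbles that did NOT carry   */
  uint64_t t6 = (t5 >> 2) | (t5 >> 3);       /* put 6 in each such nibble    */
  return t2 - t6;                            /* undo the +6 where no carry   */
}

No branches and no loops; the top nibble of the result holds the carry out of
digit 14, so longer operands just chain one word into the next.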

> a) Warn people not to use PD if they want fast programs, contrary to their
> expectations from their mainframe work.

Not necessary to give this warning if you do a decent job on the runtime
routines. The IBM mainframe has no special secrets. What it does in hardware
you can come close to doing in software, especially on a 64-bit machine
where you can process 16 digits at a time.
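
Strictly, the software form of the trick gets 15 digits per 64-bit word
rather than 16, since the top nibble has to be kept free to absorb the carry.
A throwaway test of the sketch above, with made-up values:

#include <stdio.h>
#include <inttypes.h>

extern uint64_t bcd_add (uint64_t, uint64_t); /* the sketch given earlier */

int
main (void)
{
  /* 999 + 1 = 1000: the carry ripples across three digits.
     Prints 0000000000001000.  */
  printf ("%016" PRIx64 "\n", bcd_add (0x999ULL, 0x001ULL));

  /* 123456789012345 + 876543210987654 = 999999999999999.
     Prints 0999999999999999.  */
  printf ("%016" PRIx64 "\n",
          bcd_add (0x0123456789012345ULL, 0x0876543210987654ULL));
  return 0;
}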

> b) Try to find ways to turn packed decimal into binary, e.g. for isolated data
> items that are not aliased in any way.

That's a worthwhile optimization, but you can do just fine without it.

> Anyway, I am not complaining; it was just a side comment... I would not think
> the silicon for packed decimal support is justified. If GCC native support for
> packed decimal is justified, someone will no doubt contribute it!

If you can do a packed decimal addition of 16 digits in a few clocks (which
is certainly possible), that's good enough to get perfectly fine performance.
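
Note that the sketch earlier in this message is about ten straight-line
register operations per word, with no branches, so a few clocks per 15-digit
addition is exactly what you should expect on any current machine.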

