This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Algol Front end
- From: Tim Josling <tej at melbpc dot org dot au>
- To: Robert Dewar <dewar at gnat dot com>, GCC <gcc at gcc dot gnu dot org>
- Date: Wed, 08 May 2002 06:35:17 +1000
- Subject: Re: Algol Front end
- Organization: Melbourne PC User Group
- References: <20020507112919.61C0AF28D5@nile.gnat.com>
On the IBM mainframe there is, in general, no performance difference between packed decimal
and binary numbers - I have done tests to verify this. This is not
true on machines without packed decimal hardware/microcode support. Strictly
speaking the X86s do have some rudimentary packed decimal support, but the
decimal format is different from the X/Open COBOL packed decimal standard,
which is the same as the format supported by the IBM mainframes.
Not only do you have to write a lot of code to support packed decimal, but that
code ends up complex and tricky, or relatively slow, and probably both.
Of course you can implement packed decimal in GCC via function calls, but then
most of the optimisation does not work, because GCC does not understand what is
going on inside the calls. You could try to inline the runtime, but I suspect,
without proof, that this would lead to unacceptable code bloat.
It may be that you could define fake PD registers and instructions in the machine
descriptions and actually get GCC to optimise the PD code, but this
would be a big challenge.
I had a look at the Ada runtime for packed decimal about 12 months ago, and I
would be amazed if it is anywhere near as fast as binary arithmetic. From
memory it was written in Ada, so it is probably not reusable for COBOL. If
anyone does have any good information on optimising packed decimal code (other
than Knuth's routines for converting to and from decimal) I would be interested
to hear about it.
Do you have any further information re your comment on doing digits in parallel?
The relevance of this is that I need to:
a) Warn people, contrary to their expectations from their mainframe work, not
to use PD if they want fast programs.
b) Try to find ways to turn packed decimal into binary, e.g. for isolated data
items that are not aliased in any way.
I would draw an analogy with floating point support. On a machine without
FP support you could write emulator routines and so forth, but don't expect
your FFT to run too fast!
Anyway, I am not complaining; it was just a side comment. I would not think
the silicon for packed decimal support is justified. If GCC native support for
packed decimal is justified, someone will no doubt contribute it!
Robert Dewar wrote:
> > ... Modern CPUs and GCC have some
> > trouble with COBOLisms like packed decimal.
> It may be true that GCC has trouble with packed decimal, but it is plain
> wrong to say that modern CPUs have trouble with this. You can do packed
> decimal addition very efficiently on any modern RISC machine (the algorithms
> for doing multiple digits in parallel are non-trivial, the otherwise OT
> topic on counting bits is relevant here :-), but well known.
> In fact, going back to the original statement, it really is NOT true that
> GCC has trouble with packed decimal. No more than it has trouble in Ada
> with decimal fixed-point types. It is just that the generated code will
> have to call appropriate run-time routines. Big deal, so what?