This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: First cut on outputting gimple for LTO using DWARF3. Discussion invited!!!!
- From: Kenneth Zadeck <zadeck at naturalbridge dot com>
- To: Mark Mitchell <mark at codesourcery dot com>
- Cc: GCC <gcc at gcc dot gnu dot org>, "Berlin, Daniel" <dberlin at dberlin dot org>, "Hubicha, Jan" <jh at suse dot cz>, "Novillo, Diego" <dnovillo at redhat dot com>, Ian Lance Taylor <ian at airs dot com>, "Edelsohn, David" <dje at watson dot ibm dot com>
- Date: Thu, 31 Aug 2006 07:34:57 -0400
- Subject: Re: First cut on outputting gimple for LTO using DWARF3. Discussion invited!!!!
- References: <44F2F642.email@example.com> <44F606CD.firstname.lastname@example.org> <44F619F7.email@example.com> <44F63A36.firstname.lastname@example.org>
Mark Mitchell wrote:
> Kenneth Zadeck wrote:
>> Even if we decide that we are going to process all of the functions in
>> one file at one time, we still have to have access to the functions that
>> are going to be inlined into the function being compiled. Getting at
>> those functions that are going to be inlined is where the doubled-I/O
>> argument comes from.
> I understand -- but it's natural to expect that those functions will
> be clumped together. In a gigantic program, I expect there are going
> to be clumps of tightly connected object files, with relatively few
> connections between the clumps. So, you're likely to get good cache
> behavior for any per-object-file specific data that you need to access.
I just do not know. I assume that you are right, that there is some
clumping. But I am just not sure.
>> I have never depended on the kindness of strangers or the virtues of
>> virtual memory. I fear the size of the virtual memory when we go to
>> compile really large programs.
> I don't think we're going to blow out a 64-bit address space any time
> soon. Disks are big, but they are nowhere near *that* big, so it's
> going to be pretty hard for anyone to hand us that many .o files.
> And, there's no point manually reading/writing stuff (as opposed to
> mapping it into memory), unless we actually run out of address space.
I am not so concerned with running out of virtual address space as I
am about being able to break this up so that it can be done in
parallel, on a farm of machines. Otherwise, lto can never be part of
anyone's normal build process. The notion of having 20 or 50 compile
servers each mapping in all of the files of a large system seems like a
bad design point.
> In fact, if you're going to design your own encoding formats, I would
> consider a format with self-relative pointers (or, offsets from some
> fixed base) that you could just map into memory. It wouldn't be as
> compact as using compression, so the total number of bytes written
> when generating the object files would be bigger. But, it will be
> very quick to load it into memory.
If you look at my code, that is what I have, at least with respect to
the function itself.
There is one big difference here between lto and what a debugger needs.
I could see designing a debugger (and I have no idea if any such
debuggers exist) that simply maps in the debug information and just
uses the in-core representation as is. Dwarf seems to have been
designed to support this (but then again I could be dreaming). With an
intermediate form of a compiler, the usage is quite different: all that
we are going to do is load a function, convert it to gimple, and then
throw away the on-disk version (the notion of throwing away may not
have meaning for memory-mapped files).
The prime goal is that the format be designed so that an entity
(generally a function) can be expanded into gimple in one pass. Then
the question of the benefit of using a compressor comes down to
processor speed vs I/O speed.
With the parts that you are in charge of, namely the types and the
globals, this is not true. I can very easily see an implementation of
the types and decls that is like what I describe for the debugger: you
map it into memory and just use it from there. But since the
intermediate code for a function body is going to get heavily modified,
and our existing gimple is chock-full of pointers, it is harder to
envision ever winning the mapping game there.
> I guess my overriding concern is that we're focusing heavily on the
> data format here (DWARF? Something else? Memory-mappable? What
> compression scheme?) and we may not have enough data. I guess we just
> have to pick something and run with it. I think we should try to keep
> that code as separate as possible so that we can recover easily if
> whatever we pick turns out to be (another) bad choice. :-)
>> One of the comments that was made by a person on the dwarf committee is
>> that the abbrev tables really can be used for compression. If you have
>> information that is really common to a bunch of records, you can build
>> an abbrev entry with the common info in it.
> Yes. I was a little bit surprised that you don't seem to have seen
> much commonality. If you recorded most of the tree flags, and treated
> them as DWARF attributes, I'd expect you would see relatively many
> expressions of a fixed form. Like, there must be a lot of PLUS_EXPRs
> with TREE_USED set on them. But, I gather that you're trying to avoid
> recording some of these flags, hoping either that (a) they won't be
> needed, or (b) you can recreate them when reading the file. I think
> both (a) and (b) hold in many cases, so I think it's reasonable to
> assume we're writing out very few attributes.
I had thought about that with the flags, and decided it was too much
work for the gain. I decided to just compress the flags, so that only
the flags that are used for a given tree node are written, as a uleb128
number, and then to let zlib do the rest. The flags are the only thing
that fits the abbrev model anyway; the line numbers and types would not.
>> I had a discussion on chat today with drow and he indicated that you
>> were busily adding all of the missing stuff here.
> "All" is an overstatement. :-) Sandra is busily adding missing stuff
> and I'll be working on the new APIs you need.
>> I told him that I
>> thought this was fine as long as there is not a temporal drift in
>> information encoded for the types and decls between the time I write my
>> stuff and when the types and decls are written.
I have no idea how "stable" all the types and decls are over a
compilation. I write my info pretty early, and I assume the types and
decls are written pretty late in the compilation (otherwise you would
not have address expressions for the debugger). If there has been any
"processing" on these between when I write my stuff and when the types
and decls get written, things may not match up.
My particular fear here is the flags, which seem to be reused with
reckless abandon over the course of a compilation. I have no smoking
gun here, but I have seen and heard of ugly things being done.
> I'm not sure what this means.