This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
Re: Using of parse tree externally
- To: gcc at gcc dot gnu dot org
- Subject: Re: Using of parse tree externally
- From: Theodore Papadopoulo <Theodore dot Papadopoulo at sophia dot inria dot fr>
- Date: Thu, 12 Oct 2000 22:22:28 +0200
> On the other hand, I do personally hope that the FSF will end up
> re-considering some of these issues, because of the technical problems
> and limitations that the current stance holds. If somebody were to
> generate a good internal tree representation that the gcc backend
> could use, that would be a good thing, and would probably help gcc
> itself.
All this discussion makes me wonder:
1) What distinguishes gcc's internal parts from gcc itself?
The gcc license authorizes people to use gcc in a proprietary system
(and such systems are supported by the gcc team). Still, those
systems compete with the free systems (among which the Hurd)...
Most of those are specialized for a specific task and/or
architecture, so the competition is somewhat unfair to the free
systems that are portable (e.g., Linux), and no one finds that bad.
1') What prevents a company from taking gcc as it exists now, creating
a patch that dumps a representation of the tree, distributing this
freely, and then designing its own compiler around the output of
"their" gcc distribution?
2) Why do politics always have to interfere so much?
If skilled people believe that defining clean interfaces between the
various compiler parts would benefit the development of gcc and
other tools around it, and if this cannot be done in a
public fashion for political/legal reasons, then something is
wrong. IMHO, free software must be driven by technical
considerations first, and politics should come second.
Of course, this is an overstatement. But if we try to keep our
market share by making things more difficult for others, then
are we so different from some well-known companies?
3) It seems to me that fearing competition from companies
making specialized back ends is misplaced. gcc is
certainly the most widely used compiler (after VC++) because it's
free, open, and behaves the same way everywhere. Remember, not so
long ago there was a derived compiler, pgcc (maybe it still exists;
I have not checked for months), that was free and tuned for x86, and
I'm not quite sure it ever really hurt gcc. On the contrary, it
seems to have pushed gcc's development towards a more open process,
which is good.
4) gcc is getting bigger every day, with more languages and more
optimizations (and more to come). It already seems to be a pain
to stabilize everything to make a release. It is already difficult
for someone to get into gcc development to do something (where should
one start? The question was raised again a few days ago on the list).
Separating the concerns of the frontends and the backend
seems to be a useful step purely in terms of software management.
I'm not so sure gcc can continue much longer without such a move.
So even for gcc itself it looks to me like a useful thing...
Now, of course, I speak only for myself, and I certainly do not know
all the issues (both technical and political). Reading this list
regularly and trying (when I have free time) to understand how gcc
works has convinced me that modularizing the development would be a
good thing. Is there a real risk in doing so? Or is the fear somewhat
unjustified? After all, it does not seem to me (see 1') that the
danger is avoided currently, so why prevent good technical design
decisions from being applied to gcc?
Theo.
PGP signature