This is the mail archive of the mailing list for the GCC project.
Re: libtool (was Re: [patch] releases.html)
On Thu, Feb 15, 2001 at 05:25:39AM -0200, Alexandre Oliva wrote:
> On Feb 15, 2001, "Zack Weinberg" <email@example.com> wrote:
> > However, should we ever wish to support, say, libstdc++3 on VMS,
> > libtool will *get in our way*. We will have to do extra work on top
> > of the already-large burden of constructing VMS-suitable makefiles and
> > configuration logic, just to deal with libtool. The set of autoconf-
> > determined variables I seem to be advocating won't.
> Err... If there's no Bourne shell to run libtool, where is there one
> to run autoconf? Or are you implying we'd run something equivalent to
> autoconf, that could easily replace references to LIBTOOL with some
> other VMS-specific tool in the Makefiles (or whatever is used to build
> things on VMS)?
We would run something equivalent to autoconf. For VMS, Perl has a
configure.com written in DCL, and a bunch of helper scripts. Most of
the Makefile could be preserved intact; all the commands would have to
change, though. Replacing all the libtool-isms in a package is not as
simple as changing what LIBTOOL points to in the makefile.
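To make that concrete, a libtool-ized Makefile tends to look something like this (a sketch; the rule bodies and names are illustrative, not taken from any real package):

```make
# Libtool-isms: .lo objects, a .la pseudo-archive, and --mode flags.
# Pointing LIBTOOL at some VMS-specific tool would not help, because
# that tool would have to understand all of these conventions too.
LIBTOOL = $(SHELL) ./libtool

.c.lo:
	$(LIBTOOL) --mode=compile $(CC) $(CFLAGS) -c $<

libfoo.la: $(LTOBJS)
	$(LIBTOOL) --mode=link $(CC) -o $@ $(LTOBJS) \
	  -rpath $(libdir) -version-info 1:0:0

install: libfoo.la
	$(LIBTOOL) --mode=install $(INSTALL) libfoo.la $(libdir)/libfoo.la
```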
> > Are you saying that under some conditions libtool will RELINK THINGS
> > AT INSTALL TIME?!!!!
> Yep. But only on systems in which this is absolutely necessary.
> IIRC, that's HPsUX (you'd have guessed, wouldn't you? :-).
Oh, okay. I somehow got the impression it was done on many common
platforms. If it's only HP/UX I don't care so much.
> > Even worse, what if the path I'm installing into is not the same as
> > the path the program will run from?
> Things will break. But then, it's breaking at install time or at
> run-time. I often prefer the former.
Um, if you relink the program at install time and then it gets moved
behind your back, it *will* break at runtime; you don't run the
program during installation, so the breakage can't show up any earlier.
[-rpath and LD_LIBRARY_PATH]
> In any case, this doesn't force libtool to relink at installation
> time. It just forces libtool to build two separate copies of an
> executable, one to be installed, and one to be run in the build tree.
> By default, the latter is only created if the wrapper script created
> in place of the program is run, but there's a configure option to do
> it the other way round.
I suppose that is less horrible. I still don't like the wrapper
scripts, just because they make debugging such a nuisance. I do
"gdb ./cc1" all the time and I don't want to have to rewire my
debugging habits to cope with wrapper scripts.
Perl's makefiles seem to think that LD_LIBRARY_PATH suffices. They
might be wrong, or they might not be using -rpath at all, or they
might have some other workaround; it isn't obvious to me.
> > On systems where the dynamic linker is buggy and can't be fixed, maybe
> > we don't support shared libraries at all.
> Wow! That's certainly an easy way to support shared libraries on as
> many systems as possible.
This is another exaggeration for effect. However, I wouldn't feel
terribly sad if we decided we didn't support shared libraries on
HP/UX, for instance.
Obviously, where we can work around things without insane effort, we
should.
> > And finally, we are in a special position. All the shared libraries
> > we want to use, right now, are target libraries.
> How about libbackend? That's definitely one I'd like to see as a
> shared library. For all the others, we already have machinery in
> place.
Yes, libbackend.so might be nice just to reduce disk consumption.
(Didn't HJ have patches for this a long, long time ago?) Wouldn't
it cause arguments about opening loopholes for non-free front ends,
though? I've thought about cleaning up libcpp's interface and making
it an installable shared library; a couple of people have asked me
about using it in their code-scanning projects.
We're still in the special position of knowing we have gcc, though.
Either it's a native build and we can postpone shared library creation
until stage 2 (there's no point in building libbackend.so in stage 1
when we're only building one front end) or it's a cross build and we
already require gcc for the bootstrap compiler.
Earnest question: can libtool handle the construction of host and
target libraries in the same Makefile? 'Cos we need to do just that
in the gcc directory. Yeah, libgcc.mk is separate right now, but it
won't be forever.
> > So, does it hurt to build all the target static libraries -fPIC then
> > (assuming --enable-shared)?
> It certainly does. -fPIC significantly hurts performance on
> register-starved machines such as x86. I know of people who build
> dynamic libraries out of non-PIC code on x86 just because of that.
Good point. (What does this say about libbackend.so? The compiler is
already too damn slow.)
I'd really like to solve this inside the Makefile because then I can
extend it to profiling libraries. Hm. pmake's canned "library
construction" makefile uses funky object file extensions (.so, .po);
that's another vote for fixing collect2. With GNU make one can do
better, which might be sufficient argument for requiring GNU make.
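I don't know exactly which GNU make feature would carry the day, but extra pattern rules with distinct suffixes would be one way to get static, PIC, and profiled objects out of a single source file (a sketch, not GCC's actual rules):

```make
CC     = gcc
CFLAGS = -O2

# Plain, PIC, and profiled variants of each object, with suffixes
# that don't collide with .so the way pmake's convention does.
%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<
%.pic.o: %.c
	$(CC) $(CFLAGS) -fPIC -c -o $@ $<
%.prof.o: %.c
	$(CC) $(CFLAGS) -pg -c -o $@ $<

OBJS = foo bar
libbackend.a: $(addsuffix .o,$(OBJS))
	$(AR) rcs $@ $^
libbackend.so: $(addsuffix .pic.o,$(OBJS))
	$(CC) -shared -o $@ $^
```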
> > We have collect2, which is an entire C program specifically to work
> > around problems with the linker. We don't need to do dynamic module
> > loading. A lot of libtool's complexity is superfluous. And
> > superfluous complexity means maintenance headaches from now until
> > the wolf devours the Sun.
> Well, we could also modify the compiler so as to generate object code
> directly, instead of using an assembler. But introducing the
> assembler isolates some of the complexity of the problem of
> object-file generation into two separate modules. That's called
> abstraction. Some people like this. That's the reason why libtool
> was started.
Abstractions are all very well until they get so generalized that they
turn out to add more complexity than they remove. It's my opinion
that this is the case with libtool. You wind up doing as much, or
more, work fighting with libtool as you would have put in coming up
with custom shared-library-support code.
> I can't disagree it's currently slow, but Bruce Korb is working on
> converting libtool to a C program, pretty much like fixincludes.
> Hopefully, this will let libtool fly. It may also be possible to
> #ifdef away unneeded components. But I'd rather concentrate
> library-building knowledge into a single tool, instead of having
> incomplete work duplicated across multiple projects.
Even if it's a C program: an additional fork+exec+libc init sequence
means the time before cc1 actually starts compiling anything probably
doubles. You may not notice this because your computer is wicked
fast, but try working for a few weeks on an old Sparc 20 running
Solaris 2.5 and you will.
I'm already contemplating ways to get Make to run cc1 directly - heh,
now there's where we may want dynamic module loading, come to think of
it. The sanest approach I can think of involves making a dynamic
module for Make to pull in when it notices that the compiler supports
being invoked that way.
zw "You can tell [the lunatic] by the liberties he takes with
common sense, his flashes of inspiration, and by the fact that
sooner or later he brings up the Templars."
-- Umberto Eco, _Foucault's Pendulum_