
Re: statically linked gcc executables


On Jan 24, 2008 3:28 PM, Ted Byers <r.ted.byers@rogers.com> wrote:
> --- Angelo Leto <angleto@gmail.com> wrote:
> > I'm working on applications which are data critical, so when I
> > change a library on the system there is the risk that results may
> > be different. So I create a repository with the critical
> > libraries, and I upgrade the libraries in the repository only when
> > needed, independently of the system libraries (I do this in order
> > to upgrade the productivity tools and their related libraries
> > without touching the libraries linked by my application).
> > Obviously when I change the compiler I obtain different results in
> > my applications, so my idea is to create a "development package"
> > which includes my critical libraries and also the compiler, in
> > order to obtain the same result (always using the same
> > optimization flags) on my application even when I'm compiling on
> > different Linux installations.
>
> This would make me nervous.  If your program gives
> different results when you use different tool chains,
> that suggests to me that either your program is broken
> or the results you're obtaining are affected by bugs
> in the libraries you're using.

Maybe the problem is in my application (and/or libraries) and not in
the toolchains, but the situation is the following:
we obtain different results with non-linear algorithms (in the most
significant bits!) between gcc 4.1 and gcc 4.2 when using aggressive
optimization flags (e.g. -march=nocona), and I think that is quite
normal if the optimization algorithms change between the two versions.
So if my results are validated for gcc 4.1 with optimization flags, I
cannot be sure that with the new gcc 4.2 the results will be the same
if the optimization routines have changed. Therefore I will adopt the
new compiler only when I'm sure about the results produced by the
application compiled with it.
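
To make this concrete, here is a minimal sketch (my own illustration,
not code from this thread; the file name is arbitrary) of how the same
C source can legitimately print different numbers depending on code
generation:

/* excess.c -- illustrative only; behavior depends on the target.
 * On 32-bit x86 the x87 unit evaluates intermediates in 80-bit
 * extended precision, while SSE math rounds every step to 64-bit
 * doubles, so the two builds below can disagree:
 *   gcc -O2 -mfpmath=387 excess.c -o x87 && ./x87
 *       (typically prints 1)
 *   gcc -O2 -march=nocona -mfpmath=sse excess.c -o sse && ./sse
 *       (typically prints 0, since 1e16 + 1 rounds to 1e16 in double)
 */
#include <stdio.h>

int main(void)
{
    volatile double x = 1e16, y = 1.0; /* volatile: defeat constant folding */
    double s = (x + y) - x;            /* exact in extended precision;
                                          0 once x + y is rounded to double */
    printf("%g\n", s);
    return 0;
}

In a non-linear algorithm a one-ulp difference like this can grow into
the leading digits, which matches what we observe between compiler
versions.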

>
> You're half right.  If your program uses library X,
> and  that library has a subtle bug in the function
> you're using, then the result you get using a
> different library will be different.  The fix is not
> to ensure that you use the same library all the time,

Why not? I would not use the same library "all the time", but only
until my new results are validated.
I mean, that's not the preferred way, but it could be the only way (if
you have time constraints) to guarantee the results already validated
while you investigate the problem.
Consider the case where you have an application that gives you the
same results everywhere, in every environment; then you upgrade a set
of tools which requires some new libraries used by your application,
and your regression testing procedures tell you that there is a
difference in the results. You need the newly upgraded tools, but you
cannot stop building your application until you have solved the
problem and validated the results again.
The only effective solution to this problem I have found is to keep
the system libraries separate from the development libraries and to
upgrade them at different times, and to different versions, as needed.
I think the same holds for toolchains.
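
One cheap safeguard along these lines (my own illustration, assuming
glibc; none of this comes from the discussion above) is to record the
toolchain and library versions next to every validated result, so that
a later regression can be traced directly to an upgrade:

/* provenance.c -- illustrative sketch, assuming glibc.  Records the
 * compiler and C library versions that produced a result, so that a
 * regression can be correlated with a toolchain or library upgrade. */
#include <stdio.h>
#include <gnu/libc-version.h>   /* glibc-specific header */

int main(void)
{
    printf("built with GCC %d.%d.%d\n",
           __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
    printf("running against glibc %s\n", gnu_get_libc_version());
    return 0;
}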

> but to ensure your test suite is sufficiently well
> developed that you can detect such a bug, and use a
> different function (even if you have to write it
> yourself) that routinely gives you provably correct
> answers.

True, but meanwhile you cannot stop the whole development process.
The goal is to make the other tools "safely" upgradable, without the
risk of introducing unexpected differences into your application; or,
if you prefer, to switch to the new library only once you trust the
new output data.
Moreover, rewriting a very complex algorithm may not be feasible in
terms of time; once you have discovered the differing results, you can
still use the old library while you write the new routine yourself.
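
As an illustration of this kind of regression check (the reference
value, the fresh result, and the tolerance below are all invented
placeholders), one can compare new output against a previously
validated reference with an explicit tolerance instead of requiring
bit-exact equality:

/* regress.c -- illustrative sketch of a tolerance-based regression
 * check against a previously validated reference value. */
#include <math.h>
#include <stdio.h>

/* relative comparison against a validated reference value */
static int within_tolerance(double got, double ref, double rel_tol)
{
    return fabs(got - ref) <= rel_tol * fabs(ref);
}

int main(void)
{
    double reference = 1.4901161193847656e-08; /* validated earlier (placeholder) */
    double fresh     = 1.4901161193847658e-08; /* from the new build (placeholder) */

    if (within_tolerance(fresh, reference, 1e-12))
        puts("PASS: matches the validated result within tolerance");
    else
        puts("FAIL: re-validate before adopting the new toolchain");
    return 0;
}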

>
> To illustrate, I generally work with number crunching
> related to risk assessment.  My programs had better
> give me identical results regardless of whether I use
> gcc or MS Visual C++ or Intel's compiler, or whatever
> other tool might be tried, and on whatever platform.
> I have written code to do numeric integration, compute
> the eigenstructure of general matrices, &c.  In each
> case, there are well defined mathematical properties
> that must be true of the result, and I construct a
> test suite that, for example, will apply my
> eigensystem calculation code to tens of millions of
> random general square matrices (random values and
> random size of matrix), and test the result.  My code,
> then, is provably correct if it consistently provides
> mathematically correct results, and these results will
> be the same regardless of the platform and tool chain
> used because the mathematics of the problem do not
> depend on these things.  Even if you're dealing with
> numerically unstable systems (such as a dynamic system
> that produces chaos), it ought to give identical
> results for identical input.  Something is wrong if it

In my experience this holds only if you don't use strong optimization
flags.

> doesn't, and the fix isn't to ensure the program is
> executed always with binaries created from the same
> toolchain.  It is to figure out precisely why so you
> can fix the program.  Whether the bug is in my program
> or in a library I am using, if I do not take
> corrective action, my program remains buggy, and I
> have yet to see a situation where a program that is
> correct gives different results when compiled using
> different tools.
>
> I am sorry to say that if one has to resort to the
> practices you describe to ensure the same results by
> ensuring the same libraries are used, then I would not
> consider trusting the program at all.  Rather, use of

I partially agree with you: if you upgrade a library and the results
change, this may not be due to your code; you have already validated
your results, and they are accurate enough and fit your model. The
point is that with the new libraries you have introduced a factor of
variation. Until you demonstrate that the new results are valid, the
good results are the previous ones.

> such practices suggests QA code for the program is
> inadequate to ensure correct results.  I certainly
> would not tolerate a situation where I get different
> trajectories from a numeric integration, or a
> different eigensystem from a given matrix, simply
> because I used a different library to compile the
> program.  If such a situation arose, then one of the
> versions, if not both, is giving mathematically
> incorrect results!
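
For concreteness, the kind of property-based check described above
might look like the following sketch (the matrix, its eigenpair, and
the tolerance are invented purely for illustration): instead of
comparing against stored outputs bit for bit, it verifies a
mathematical invariant of the result.

/* property_check.c -- illustrative sketch of a property-based test:
 * verify that a computed eigenpair satisfies A*v = lambda*v within a
 * tolerance, rather than comparing against stored output.  The 2x2
 * matrix and its eigenpair are hard-coded placeholders. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* A = [[2,0],[0,3]] has the exact eigenpair lambda = 3, v = (0,1) */
    double A[2][2] = { {2.0, 0.0}, {0.0, 3.0} };
    double lambda = 3.0, v[2] = { 0.0, 1.0 };

    /* residual r = A*v - lambda*v should be ~0 for a correct eigenpair */
    double r0 = A[0][0] * v[0] + A[0][1] * v[1] - lambda * v[0];
    double r1 = A[1][0] * v[0] + A[1][1] * v[1] - lambda * v[1];
    double resid = sqrt(r0 * r0 + r1 * r1);

    printf("residual = %g -> %s\n", resid,
           resid <= 1e-12 ? "property holds" : "property violated");
    return 0;
}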

Thanks for your opinion.
Bye,
Angelo

>
> HTH
>
> Ted
>

