Re: [PATCH 01/15] Selftest framework (unittests v4)
- From: Jeff Law <law@redhat.com>
- To: Bernd Schmidt <bschmidt@redhat.com>, David Malcolm <dmalcolm@redhat.com>
- Cc: gcc-patches@gcc.gnu.org
- Date: Mon, 30 Nov 2015 16:05:33 -0700
- Subject: Re: [PATCH 01/15] Selftest framework (unittests v4)
On 11/26/2015 05:37 AM, Bernd Schmidt wrote:
> On 11/25/2015 11:47 PM, David Malcolm wrote:
>> FWIW, the reason I special-cased the linked list was to avoid any
>> dynamic memory allocation: the ctors run before main, so I wanted to
>> keep them as simple as possible.
>
> Is there any particular reason for this? C++ doesn't disallow memory
> allocation in global constructors, does it?
I'm not aware of any such restriction, but I'm not a C++ guru.
David, what's the reason for avoiding dynamic memory allocation here?
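
For concreteness, I take it the heap-free scheme looks roughly like
this sketch (made-up names, not the actual patch):

  /* Hypothetical sketch of heap-free self-registration: each
     registration object is a static global whose ctor (running
     before main) just links it into an intrusive list -- two
     pointer writes, no new/malloc.  */

  struct test_registration
  {
    test_registration (const char *name, void (*fn) ())
      : m_name (name), m_fn (fn), m_next (s_head)
    {
      s_head = this;  /* Prepend; the list ends up in reverse
			 registration order, and that order depends
			 on link order across translation units.  */
    }

    const char *m_name;
    void (*m_fn) ();
    test_registration *m_next;
    static test_registration *s_head;
  };

  test_registration *test_registration::s_head = nullptr;

  /* At file scope in a test file:
     static void test_foo () { ... }
     static test_registration foo_reg ("test_foo", test_foo);  */

Which is simple enough, but as noted the resulting ordering is at the
mercy of the linker.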
>> I do want some level of determinism over test ordering, for the sake of
>> everyone's sanity. It's probably simplest to either hardcode the order,
>> or have priority levels. I favor the former (and right now am leaning
>> towards a very explicit no-magic approach with no auto-registration,
>> given the linker issues I've been seeing with auto-registration).
> I guess that works too. Certainly explicit function calls are
> preferable over #including other C files as a workaround for such a
> problem.
My problem with priorities is that it's really just a poor man's
substitute for dependency analysis. And in my experience, it usually
fails.
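
The explicit no-magic version would presumably be this shape (the
per-file entry-point names here are made up for illustration):

  /* Hypothetical sketch of explicit registration: each test file
     exports one entry point, and a single function calls them in a
     hardcoded, deterministic order.  No ctors, no linker surprises,
     and the ordering is visible in one place.  */

  extern void bitmap_c_tests ();	/* made-up entry points */
  extern void vec_c_tests ();
  extern void wide_int_cc_tests ();

  void
  run_all_tests ()
  {
    /* Low-level utilities first, then code that builds on them.  */
    bitmap_c_tests ();
    vec_c_tests ();
    wide_int_cc_tests ();
  }

That at least makes any ordering decision an explicit, reviewable one
rather than an emergent property of priorities.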
> I still wish others would chime in on the rest of the issues we've
> discussed (run to first failure vs. providing elaborate test summaries).
> I want to make my preference clear, but I don't want to dictate it.
I favor run-all over run-to-first-failure as long as we don't have good
dependency analysis to order the tests. That in turn tends to imply
that each test ought to have a pass/fail indicator.
If we had good dependency analysis, then run-to-first-failure would be
my preference.
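
In other words, something along these lines (a sketch only, not code
from the patch):

  #include <stdio.h>

  /* Hypothetical run-all driver with a per-test pass/fail
     indicator: run every test, report each result, and summarize at
     the end rather than stopping at the first failure.  */

  struct test { const char *name; bool (*fn) (); };

  int
  run_all (const test *tests, int num_tests)
  {
    int failures = 0;
    for (int i = 0; i < num_tests; i++)
      {
	bool ok = tests[i].fn ();
	printf ("%s: %s\n", tests[i].name, ok ? "PASS" : "FAIL");
	if (!ok)
	  failures++;
      }
    printf ("%d of %d tests passed\n", num_tests - failures, num_tests);
    return failures ? 1 : 0;
  }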
Jeff