


Re: GCC's instrumentation and the target environment


On Mon, Nov 4, 2019 at 7:06 AM <David.Taylor@dell.com> wrote:

> > From: Martin Liška <mliska@suse.cz>
> > Sent: Monday, November 4, 2019 4:20 AM
> > To: taylor, david; gcc@gcc.gnu.org
> > Subject: Re: GCC's instrumentation and the target environment
>
> > On 11/1/19 7:13 PM, David Taylor wrote:
>
> > Hello.
>
> Hello.
>
> > > What I'd like is a stable API between the routines that 'collect'
> > > the data and the routines that do the i/o, with the i/o routines
> > > being non-static and in a separate file that is not #include'd.
> > >
> > > I want them to be replaceable by the application.  Depending upon
> > > circumstances I can imagine the routines doing network i/o, disk
> > > i/o, or using a serial port.
> >
> > What's the difference between i/o and disk i/o? What about using an
> > NFS file system into which you can save the data (via
> > -fprofile-dir=/mnt/mynfs/...)?
>
> I/O encompasses more than just reading and writing a file in a file
> system.  Depending on the embedded target, you might not have the
> ability to NFS mount.  You might not even have a file system
> accessible to the instrumentation.
>
> By network I/O I'm thinking sockets.  There's some code, possibly run
> at 'boot' time or possibly run during the first __gcov_open, that
> establishes a network connection with a process running on another
> system.  There's some protocol, agreed to by the application and the
> remote process, for communicating the data collected and which file
> it belongs to.
>
> By serial I/O, I'm thinking of a serial port.
>
> Hopefully that is clearer.
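
For concreteness, a minimal sketch of the kind of pluggable interface
being described, in C; the names are hypothetical, not the current
libgcov internals:

/* Sketch of a replaceable gcov I/O interface.  The collection code
   would call only these hooks; the application links in whatever
   implementation suits its target: file system, socket, serial port.
   All names here are hypothetical.  */

#include <stddef.h>

struct gcov_io_ops
{
  /* Start emitting data for one instrumented file, identified by NAME.  */
  int (*open) (const char *name);
  /* Write LEN bytes of profile data for the currently open file.  */
  int (*write) (const void *buf, size_t len);
  /* Finish the currently open file.  */
  int (*close) (void);
};

/* Provided by the application; a hosted default could wrap
   fopen/fwrite/fclose.  */
extern const struct gcov_io_ops *gcov_io;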
>
> > I can imagine dumping into stderr, for example. That could be done
> > quite easily.
>
> I don't think that the current implementation would make that easy.
> For us there are potentially over a thousand files being instrumented.
> You need to communicate which file the data belongs to.  Whether it is
> via stderr, a serial port, or a network connection, the file name needs
> to be in the stream, and there needs to be a way of determining where
> one file ends and the next one begins.
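
A trivial length-prefixed record would be enough to multiplex many
files over one stream; a sketch, with a made-up header layout:

/* Sketch of a framing record for sending many .gcda streams over one
   channel (stderr, serial port, socket).  The layout is made up for
   illustration.  */

struct gcov_record_header
{
  unsigned int magic;     /* e.g. 0x67636461 ("gcda"); marks a record.  */
  unsigned int name_len;  /* Length of the file name that follows.  */
  unsigned int data_len;  /* Length of the profile payload.  */
};

/* On the wire, each record is: header, then name_len bytes of file
   name, then data_len bytes of profile data.  A host-side tool splits
   the stream back into individual .gcda files.  */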
>
> For us, stderr and stdout, when defined, are used for communicating
> status and extraordinary events.  They are not well suited for
> transferring instrumentation data.
>

And I generally agree with that statement, but I am also on a project
evaluating a commercial tool that does coverage and includes MCDC
analysis.  It has a very flexible plugin interface for this specific
purpose: you can dump, in any format you can decode, to any output
destination.  They have many standard implementations and plenty of
examples you can tailor.

It wouldn't be terribly difficult to multiplex the console and filter it.

I would suggest considering dumping into a buffer and having an
external agent (e.g., a debugger or JTAG-based program) retrieve it.
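
A minimal sketch of what I mean, assuming an agent that can read
target memory; every name here is made up:

/* Sketch: profile data accumulates in a static RAM buffer; an
   external agent (debugger, JTAG probe) reads the buffer over its
   memory-access channel and splits it into .gcda files on the host.
   All names are hypothetical.  */

#include <stddef.h>
#include <string.h>

#define GCOV_DUMP_BUF_SIZE (64 * 1024)

static char gcov_dump_buf[GCOV_DUMP_BUF_SIZE];
static volatile size_t gcov_dump_len;  /* The agent reads this too.  */

/* Replacement write hook: append to the buffer instead of doing file
   or network I/O.  */
int
gcov_buf_write (const void *data, size_t len)
{
  if (len > sizeof gcov_dump_buf - gcov_dump_len)
    return -1;  /* Buffer full; caller decides what to drop.  */
  memcpy (gcov_dump_buf + gcov_dump_len, data, len);
  gcov_dump_len += len;
  return 0;
}

/* The agent sets a breakpoint here (or polls gcov_dump_len) and dumps
   gcov_dump_buf[0 .. gcov_dump_len) from the host side.  */
void
gcov_dump_ready (void)
{
}

Combined with framing like the above, the host side can split the
buffer back into per-file data.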

RTEMS programs generally don't exit and often have no networking.  You
have to have flexibility.  No one is forcing a single output medium --
just asking for flexibility.

<hint> I'd love to see decision and MCDC coverage support </hint>.

--joel


>
> > Martin
>
> David
>
>

