
Re: wwwdocs/htdocs/egcs-1.1/fortran.html



  In message <19990227134631.26569.qmail@deer> you write:
  > I'm still a bit concerned about any (semi-)automated process taking
  > over the job.  (I feel a *fully* automated process would necessarily
  > require complete integration of a bug-, patch-, and release-tracking
  > system, because....)  Ideally, a given release should document, in its
  > "news" and "known bugs" info, only the items that apply to that release.
I'd prefer a semi-automated job to one done totally by hand.  The more things
we have to do by hand, the more error-prone the job will be and the more time
that's taken away from other tasks.

For example, take mailing lists.  Initially almost everything was done by
hand.  That sucked in a big way, and we knew it didn't scale.

Then we beefed up the majordomo configuration to handle more stuff on its
own without human intervention.  This worked reasonably well and allowed us
to scale the project from ~20 initial folks to >2k.  Unfortunately, as the
project grew, the number of things that needed to be done by a human grew
too (particularly removal of bouncers, third-party [un]subscription approvals,
and big message approval/rejection).

Then came ezmlm, which has cut out all but a trickle of mailing list
maintenance; as a result we no longer really need someone dedicated to
taking care of the mailing lists.  I wouldn't be surprised if the system
could scale to 20k or more subscribers before we need a human involved
again.

None of these schemes is completely automated, but each one takes a step
forward by automating some significant set of tasks that a human used to take
care of.

Iterate and improve (majordomo+sendmail), know when to start over
(ezmlm+qmail).  It's not significantly different from software engineering --
know when to improve something (alias analysis) and when to throw it out and
start again (configure/make subsystem).

--

Given the volunteer, free-form nature of this project, I'd be very surprised
if we could get a completely integrated bug-, patch-, and release-tracking
system together.  That's a tough problem; I know because I have to deal with
these issues as part of my day job :-)


  > Yet, also ideally, people enquiring about the overall product,
  > especially one still in multiple-life limbo-land like g77 (where
  > there's an egcs version and an FSF version), should be able to see
  > *all* the news and known-bugs info, across all the releases (or, at
  > least, the releases for which such info is supported).
I would think that with some minor work on the texi files we could have
a generated web page which breaks out the bugs/news by release, either by
using menus or by using some kind of marker to note where information about
a particular release starts and ends.  In the former case, texi2html does
the work for us; in the latter, we need a little perl code to break the
file into suitable hunks for each release.
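
Something along these lines ought to do it (completely untested, and the
"@c release:" spelling is only an illustration -- the texi files don't
carry any such marker today):

#!/usr/bin/perl -w
# Rough sketch: split a texinfo file into per-release hunks using
# comment markers.  The "@c release: egcs-1.1.2" spelling is purely
# illustrative.
use strict;

my $file = shift || 'f/news.texi';
open(NEWS, $file) or die "cannot open $file: $!";

my %hunk;               # release name -> accumulated texinfo text
my $current = 'none';   # text before the first marker is ignored

while (<NEWS>) {
    if (/^\@c\s+release:\s*(\S+)/) {
        $current = $1;
        next;
    }
    $hunk{$current} .= $_ unless $current eq 'none';
}
close(NEWS);

# One fragment per release; texi2html (or makeinfo) can then turn
# each fragment into its own page.
foreach my $release (keys %hunk) {
    open(OUT, "> news-$release.texi")
        or die "cannot write news-$release.texi: $!";
    print OUT $hunk{$release};
    close(OUT);
}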

  > what is *not* already fixed in that release.  (Remember, we find out
  > about known bugs in releases *after* they're released.  In a very
  > small way, my fortran.html improvements embody this by putting
  > two new items in the known-bugs section.  Those bugs have already
  > been fixed in the trunk, for 1.2, but even though they're now "known"
  > for 1.1.2, they should be available to people reading up on 1.1 in
  > general.  Maybe even 1.0, though there are natural limits on how
  > far back we'd want any system -- whether automated, manual, or
  > in-between -- to try to research whether known bugs existed.)
Yup.  And this is precisely why a system which reads the news/bugs directly
out of the CVS tree is a good thing -- you update the CVS tree and the
results of those updates appear immediately on the web.  Really.  It's a lot
like how we get instant web updates when we check in changes to wwwdocs.

Pull up the URL I posted last night.  Then make a tweak to f/news.texi and
check it in.  Then reload the URL.  Poof, updated information.

Right now this is driven by the act of pulling up the URL; the other obvious
approach is to use cvswrappers to update this data only at checkin time.  I'm
not sure which is best.


  > For now, given that the automated tools we *do* have aren't capable
  > of handling this without us introducing, into the sources, way too much
  > "conditionals" cruft, I'm leaning towards preferring to use those
  > tools by hand, and then post-editing the results, for things like
  > producing the Web document that lists changes for a particular release.
I would think just an @c egcs-1.1.1 comment or some other marker would be
enough for perl to pick out the stuff we want (along the lines of the sketch
above).  Or we could use menus in texinfo to get to each release; I *think*
texi2html is supposed to DTRT for texinfo menus.

  > So, I wouldn't mind adding automation to provide .html versions of
  > those files in a suitable place.
Hmm, maybe I misunderstood something.  I thought this was what you were
arguing against.

  > Since I do try to keep the g77
  > doc sources up-to-date vis-a-vis checkins to the trunk, it'd be
  > kinda cool if the Web pages were automatically updated so people
  > could access the latest info on g77 improvements, perhaps from a
  > parent page called "Ongoing Unreleased Development Work".
Now I'm sure I misunderstood something :-)

One way to approach this would be to have those markers in the texi source,
then a page which allows the user to click on news/bugs for a particular
release (or all of them).  That cranks up a cgi script with a suitable
argument/environment to extract the info for the requested release or for
all releases.
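
Roughly like this, say (untested; the path and the marker convention are
placeholders, and a real version would run the extracted hunk through
texi2html rather than dumping raw texinfo):

#!/usr/bin/perl -w
# Rough sketch of the cgi script: ?release=egcs-1.1.2 extracts that
# release's hunk from a checked-out texi file; no parameter (or
# release=all) dumps everything.
use strict;

my $texi = '/www/gcc/checkout/gcc/f/news.texi';     # hypothetical path
my ($want) = ($ENV{QUERY_STRING} || '') =~ /release=([\w.-]+)/;
$want = 'all' unless defined $want;

print "Content-type: text/plain\n\n";

open(NEWS, $texi) or do { print "cannot read $texi\n"; exit 0 };
my $current = 'none';
while (<NEWS>) {
    if (/^\@c\s+release:\s*(\S+)/) { $current = $1; next; }
    next if $current eq 'none';
    print if $want eq 'all' or $want eq $current;
}
close(NEWS);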


  > It's also been interesting to me to observe how more and more people
  > on the egcs projects are, in my estimation, converging on some kind
  > of wholesale solution to the documentation problem (which I personally
  > don't see clearly myself, at this point) -- ranging from the compiler
  > documentation all the way to the up-to-the-minute web pages.
I think we all see it as a long-term goal.  We're just not sure how to get
there yet.

  > In light of that, it might be wise to discuss various approaches first,
  > and do implementations later, so we don't end up playing whack-a-mole
  > with regard to the growing, and wide, set of opinions about requirements
  > for the doc set (which include ease of maintenance by developers, but
  > not as the only one!).
Certainly, but we also need to do some experimentation to find out what
really works and what doesn't.  At least for the first cut I want to have
docs on the web and in distributions generated from a single source without
human intervention.

To that end, I'm open to experimentation with schemes to solve particular
issues -- i.e., when are the pages generated, how do we deal with getting
release-specific information, what additional tool work might we need to
handle this better, etc.

For example, on-the-fly pages are made more difficult than they should
be because texi2html doesn't allow output to stdout and doesn't appear to
have a good way to set titles and similar stuff.  Or I'm just missing something
simple.
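
One possible workaround, assuming "texi2html news.texi" leaves a monolithic
news.html next to its input (older versions may split the output, so take
this as a sketch rather than a recipe):

#!/usr/bin/perl -w
# Rough sketch: run texi2html in a scratch directory, patch the <TITLE>,
# and print the result on stdout.
use strict;

my $src   = shift || 'news.texi';
my $title = shift || 'G77 news';
my $tmp   = "/tmp/texi2html.$$";

mkdir($tmp, 0755) or die "mkdir $tmp: $!";
system("cp $src $tmp/news.texi") == 0 or die "cp failed";
system("cd $tmp && texi2html news.texi") == 0 or die "texi2html failed";

open(HTML, "$tmp/news.html") or die "no news.html produced: $!";
while (<HTML>) {
    s,<title>.*?</title>,<title>$title</title>,i;   # force our own title
    print;
}
close(HTML);
system("rm -rf $tmp");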

Generation of pages at checkin time is made difficult by the fragile and
hard-to-use cvswrappers code.

A nightly script to build the html files is probably the simplest scheme,
and it may even be sufficient for our needs; I'm not sure.
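
The nightly flavor could be as small as something like this, run from cron
(the paths, module names, and texi2html invocation are all placeholders for
whatever the real setup turns out to be):

#!/usr/bin/perl -w
# Rough sketch of a nightly job: update the checked-out tree, regenerate
# the html, and install it in the web area.
use strict;

my $checkout = '/home/www/egcs-checkout';      # hypothetical locations
my $webdir   = '/www/gcc/onlinedocs';

chdir($checkout) or die "chdir $checkout: $!";
system('cvs -q update -d gcc/f') == 0 or die 'cvs update failed';
system('texi2html gcc/f/news.texi') == 0 or die 'texi2html failed';
system("cp news.html $webdir/g77_news.html") == 0 or die 'install failed';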


jeff



