This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
Re: Serious problems accessing cvs
- To: pedwards at jaj dot com
- Subject: Re: Serious problems accessing cvs
- From: "Martin v. Loewis" <martin at loewis dot home dot cs dot tu-berlin dot de>
- Date: Tue, 7 Mar 2000 23:49:11 +0100
- CC: gcc at gcc dot gnu dot org
- References: <200003072239.RAA21076@jaj.com>
> > The more local changes you have, the more data that has to be moved across
> > the wire. For each file you change, assume that a copy of it has to be
> > moved across the net at least once, possibly twice (I'm not a CVS expert, I
> > can only report what I see in practice).
> This obviates a lot of the need for "cvs diff" and "cvs -n update"
> and the like. Much of this (all of it?) is already in the web pages
> as recommended practice. Always specifying a list of files/dirs to
> work on instead of defaulting to everything has a sizeable speedup
> on my CVS checkins. In the work I do, there isn't a lot of merging,
> so I can't report any experience there.
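[Archive note: the restricted-scope invocations recommended above might look like the following sketch; `gcc/tree.c` and `gcc/ChangeLog` are illustrative paths, not taken from the original messages, and the commands assume an already checked-out working tree.]

```shell
# Dry-run update: report what would change without modifying the
# working tree or transferring merged results back.
cvs -n -q update gcc

# Name the files/dirs explicitly instead of defaulting to everything,
# so less data has to cross the wire.
cvs -q update gcc/tree.c gcc/ChangeLog
cvs diff -u gcc/tree.c
```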
I think I understand all that, but I also think this is not the
problem. I'm connected through a 64k ISDN line, and I can see exactly
when and how much data is transferred (via the isdn4linux tool
xisdnload). So on an update, I see a long period of
activity (about a minute), during which my files are copied to
egcs.cygnus.com. This is no problem for me, and as I understand it, it
shouldn't be a problem for egcs.cygnus.com, either (we have enough
memory).
The problem is that there is then *no* activity after the last bit was
sent out, for several minutes; I don't get a single bit back from the
CVS server. If I retry the same thing a short time later, it works
just fine.
Personally, I have found CVS update with merging to be quite a
convenient feature, and consider it one of the strengths of CVS. I
don't see why I should give up that convenience for what
may be a simple problem.
Regards,
Martin