This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [patch] build xz (instead of bz2) compressed tarballs and diffs
- From: Jakub Jelinek <jakub at redhat dot com>
- To: Markus Trippelsdorf <markus at trippelsdorf dot de>
- Cc: Joseph Myers <joseph at codesourcery dot com>, Matthias Klose <doko at ubuntu dot com>, GCC Patches <gcc-patches at gcc dot gnu dot org>
- Date: Mon, 15 May 2017 16:24:16 +0200
- Subject: Re: [patch] build xz (instead of bz2) compressed tarballs and diffs
- Authentication-results: sourceware.org; auth=none
- References: <421aad71-31d2-ec16-9c8b-4b1eaefda201@ubuntu.com> <alpine.DEB.2.20.1705151359310.31959@digraph.polyomino.org.uk> <20170515141344.GB27845@x4>
- Reply-to: Jakub Jelinek <jakub at redhat dot com>
On Mon, May 15, 2017 at 04:13:44PM +0200, Markus Trippelsdorf wrote:
> On 2017.05.15 at 14:02 +0000, Joseph Myers wrote:
> > The xz manpage warns against blindly using -9 (for which --best is a
> > deprecated alias) because of the implications for memory requirements for
> > decompressing. If there's a reason it's considered appropriate here, I
> > think it needs an explanatory comment.
>
> I think it is unacceptable, because it would increase memory usage when
> decompressing by over 20x compared to bz2 (and by over 100x while compressing).
The memory usage during compression isn't that interesting as long as it
isn't prohibitive for sourceware or the machines RMs use.
For decompression, I guess what matters is the actual memory needed to
decompress the -9 gcc tarball, compared against the minimal memory
requirements to compile (not bootstrap) the compiler using typical system
compilers. If compiling gcc takes more memory than the decompression,
then it should be fine; why would anyone decompress gcc other than to build
it afterwards?
Jakub
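[Editorial note: the memory figures debated above come from how LZMA2 works: the decompressor must allocate roughly the dictionary size that was fixed at compression time, and the xz manpage documents a 64 MiB dictionary (about 65 MiB of decompressor memory) for -9. A minimal sketch using Python's lzma bindings to liblzma, with an explicitly chosen (and here deliberately small, 1 MiB) dictionary, to show that the compressor's choice is what the decompressor later pays for:]

```python
import lzma

# The LZMA2 dictionary size is fixed when the stream is created; whoever
# decompresses the stream needs roughly dict_size bytes of memory.
# xz -9 uses a 64 MiB dictionary; here we choose a small 1 MiB one.
filters = [{"id": lzma.FILTER_LZMA2, "dict_size": 1 << 20}]

data = b"GNU Compiler Collection\n" * 4096
blob = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)

# Round-trips correctly, and the repetitive input compresses well.
assert lzma.decompress(blob) == data
print(len(blob) < len(data))
```

A tarball compressed this way would decompress in about 1 MiB of dictionary memory regardless of which preset number its filename suggests, which is why the comparison that matters is dictionary size versus the memory a gcc build itself needs.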