PATCH: new GC implementation


This patch provides gcc with a new garbage collector implementation.
It is based on a `bag of pages' model, in which objects are allocated
from larger pages, which are in turn obtained from mmap.  Mark bits
for the objects on a page are stored off to the side, in a bitmap kept
along with the page's other bookkeeping information rather than with
the objects themselves; the mark bits are interpreted as `in-use' bits
when collection is not in progress.
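
For readers new to the layout, here is a stripped-down sketch of the
idea; the names, sizes, and helper below are illustrative only, not
the declarations the patch actually uses (those are in ggc-mmap.c
further down).

#include <stddef.h>

#define SKETCH_PAGE_SIZE  4096
#define SKETCH_MIN_OBJECT 4
#define BITS_PER_UNSIGNED (8 * sizeof (unsigned))

/* One allocation page: every object on it has the same power-of-two
   size, and the per-object bits live beside the page's bookkeeping
   rather than inside the objects.  */
struct sketch_page
{
  char *data;             /* the mmap'd storage itself */
  size_t object_size;     /* power of two, at least SKETCH_MIN_OBJECT */
  unsigned in_use[SKETCH_PAGE_SIZE / SKETCH_MIN_OBJECT / BITS_PER_UNSIGNED];
};

/* Outside a collection, bit N set means "object N is allocated";
   during a collection the same bit doubles as the mark bit.  */
static int
sketch_object_live_p (pg, obj)
     struct sketch_page *pg;
     void *obj;
{
  size_t n = (size_t) ((char *) obj - pg->data) / pg->object_size;
  return (pg->in_use[n / BITS_PER_UNSIGNED] >> (n % BITS_PER_UNSIGNED)) & 1;
}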

This collector is selected by defining GGC=ggc-mmap.o in Makefile.in.

In a test case (`cc1 insn-attrtab.i', built -O2 on i386 Linux) the
actual collection takes about 4% of total execution time, and the
penalty for using this GC's allocator instead of obstacks is about
4-5% over the no-GC version.

I believe previous patches have taken care of any regressions caused
by using this GC, except for one remaining potential problem in g++,
for which a patch will be forthcoming soon.

One important caveat: when recursively marking tree nodes, rtx, etc.,
the GC implementation assumes that all memory it encounters while
following child pointers was allocated by the ggc_alloc* functions.  In
testing the allocator I discovered several cases in gcc/g++ common
code and the i386 backend in which static variables or memory
allocated by malloc were referenced by trees or rtx.  There may be
similar cases in other backends; maintainers may wish to investigate
this.
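
To make the caveat concrete, here is a hypothetical illustration using
a made-up node type rather than real tree or rtx internals; only
ggc_alloc itself is the patch's actual interface, everything else is
invented for the example.

#include <stddef.h>

extern void *ggc_alloc (size_t);            /* the GC allocator's entry point */

struct fake_node
{
  struct fake_node *chain;                  /* the marker follows this pointer */
};

static struct fake_node not_gc_allocated;   /* static storage: unknown to the GC */

static void
link_child (parent)
     struct fake_node *parent;              /* assume this came from ggc_alloc */
{
  /* Wrong: the marker follows `chain' into static (or malloc'd) memory,
     looks the address up in its page tables, and finds nothing.  */
  parent->chain = &not_gc_allocated;

  /* Right: everything reachable from a GC'd object is itself GC'd.  */
  parent->chain = (struct fake_node *) ggc_alloc (sizeof (struct fake_node));
}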

Building ggc-mmap.c with ENABLE_CHECKING explicitly tests for this.
It also defines GGC_POISON and GGC_ALWAYS_COLLECT.  The former causes
memory that the collector decides is no longer in use to be poisoned.
The latter causes the collector to run every time ggc_collect is
called (as opposed to the usual behavior, in which collection is
actually performed only when significant memory has been allocated
since the last collection).  Running with these options enabled should
be a strong test of memory use.

Finally, the allocator obtains memory by mmap'ing from /dev/zero.
However, the allocator does not rely on the memory being zeroed.  On
Linux, the non-standard `anonymous' mmap is used instead, which hands
back memory directly, without the expense of going through /dev/zero.
Those familiar with other host systems may know of similar
optimizations.
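
For reference, the acquisition pattern in isolation; this is a sketch
of the approach used by alloc_page in the patch, with a hypothetical
function name, not a drop-in replacement.

#include <sys/mman.h>
#include <fcntl.h>
#include <stdlib.h>

static char *
sketch_get_pages (bytes)
     size_t bytes;
{
  char *p;
#if defined(__linux__)
  /* Anonymous mapping: no /dev/zero descriptor needed.  Nothing later
     assumes anything about the contents of the returned memory.  */
  p = (char *) mmap (NULL, bytes, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
#else
  /* Elsewhere, map /dev/zero; open it once and reuse the descriptor.  */
  static int fd = -1;
  if (fd == -1 && (fd = open ("/dev/zero", O_RDONLY)) == -1)
    abort ();
  p = (char *) mmap (NULL, bytes, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE, fd, 0);
#endif
  if (p == (char *) MAP_FAILED)
    abort ();
  return p;
}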

Alex Samuel
CodeSourcery, LLC



Wed Sep 22 00:28:57 1999  Alex Samuel  <samuel@codesourcery.com>

	* Makefile.in (ggc-mmap.o): New rule.
	* ggc-mmap.c: New file.



Index: Makefile.in
===================================================================
RCS file: /cvs/egcs/egcs/gcc/Makefile.in,v
retrieving revision 1.309
diff -c -r1.309 Makefile.in
*** Makefile.in	1999/09/20 09:59:33	1.309
--- Makefile.in	1999/09/22 07:29:10
***************
*** 1432,1437 ****
--- 1432,1440 ----
  ggc-simple.o: ggc-simple.c $(CONFIG_H) $(RTL_BASE_H) $(TREE_H) flags.h \
  	ggc.h varray.h hash.h
  
+ ggc-mmap.o: ggc-mmap.c $(CONFIG_H) $(RTL_BASE_H) $(TREE_H) flags.h \
+ 	ggc.h varray.h hash.h
+ 
  ggc-none.o: ggc-none.c $(CONFIG_H) $(RTL_BASE_H) ggc.h
  
  ggc-callbacks.o: ggc-callbacks.c $(CONFIG_H) $(RTL_BASE_H) $(TREE_H) ggc.h



*** /dev/null	Tue May  5 13:32:27 1998
--- ggc-mmap.c	Tue Sep 21 23:31:04 1999
***************
*** 0 ****
--- 1,902 ----
+ /* "Bag-of-pages" garbage collector using mmap for the GNU compiler.
+    Copyright (C) 1998 Free Software Foundation, Inc.
+ 
+    This file is part of GNU CC.
+ 
+    GNU CC is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2, or (at your option)
+    any later version.
+ 
+    GNU CC is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+ 
+    You should have received a copy of the GNU General Public License
+    along with GNU CC; see the file COPYING.  If not, write to
+    the Free Software Foundation, 59 Temple Place - Suite 330,
+    Boston, MA 02111-1307, USA.  */
+ 
+ #include <unistd.h>
+ #include <stdio.h>
+ #include <sys/mman.h>
+ 
+ #include "config.h"
+ #include "ggc.h"
+ #include "rtl.h"
+ #include "system.h"
+ #include "tree.h"
+ #include "varray.h"
+ #include "flags.h"
+ 
+ 
+ /* Strategy: 
+ 
+    This garbage-collecting allocator allocates objects on one of a set
+    of pages.  Each page can allocate objects of a single size only;
+    available sizes are powers of two starting at four bytes.  The size
+    of an allocation request is rounded up to the next power of two,
+    and satisfied from the appropriate page.
+ 
+    Each page is recorded in a page-entry, which also maintains an
+    in-use bitmap of object positions on the page.  This allows the
+    allocation state of a particular object to be flipped without
+    touching the page itself.
+ 
+    Page entries are arranged in an array of singly-linked lists.  The
+    array is indexed by the allocation size, in bits, of the pages on
+    it; i.e. all pages on a list allocate objects of the same size.
+    Pages are ordered on the list such that all full pages follow all
+    non-full pages.  */
+ 
+ 
+ /* Define GGC_POISON to poison memory marked unused by the collector.  */
+ #undef GGC_POISON
+ 
+ /* Define GGC_ALWAYS_COLLECT to perform collection every time
+    ggc_collect is invoked.  Otherwise, collection is performed only
+    when a significant amount of memory has been allocated since the
+    last collection.  */
+ #undef GGC_ALWAYS_COLLECT
+ 
+ /* If ENABLE_CHECKING is defined, enable GGC_POISON and
+    GGC_ALWAYS_COLLECT automatically.  */
+ #ifdef ENABLE_CHECKING
+ #define GGC_POISON
+ #define GGC_ALWAYS_COLLECT
+ #endif
+ 
+ /* Define GGC_MMAP_DEBUG_LEVEL to print debugging information.
+      0: No debugging output.
+      1: GC statistics only.
+      2: Page-entry allocations/deallocations as well.
+      3: Object allocations as well.
+      4: Object marks as well.   */
+ #define GGC_MMAP_DEBUG_LEVEL (0)
+ 
+ /* The file descriptor for debugging output.  */
+ #define GGC_MMAP_DEBUG_FILE (stdout)
+ static FILE *ggc_mmap_debug_file;
+ 
+ 
+ /* Timing information for collect execution goes into here.  */
+ extern int gc_time;
+ 
+ /* Host page size, width in bits.  */
+ #define HOST_PAGE_SIZE_BITS (12)
+ 
+ /* Host page size, in bytes.  */
+ #define HOST_PAGE_SIZE (1 << HOST_PAGE_SIZE_BITS)
+ 
+ /* A page_entry records the status of an allocation page.  This
+    structure is dynamically sized to fit the bitmap in_use_p.  */
+ typedef struct page_entry 
+ {
+   /* The address at which the memory is allocated.  */
+   char *page;
+   /* Context depth of this page.  */
+   unsigned context_depth;
+   /* The size of objects allocated from this page in bits.  */
+   size_t object_bits;
+   /* The number of bytes allocated.  (This will always be a multiple
+      of the host system page size.)  */
+   size_t bytes;
+   /* The number of free objects remaining on this page.  */
+   unsigned num_free_objects;
+   /* The next page-entry with objects of the same size, or NULL if
+      this is the last page-entry.  */
+   struct page_entry *next;
+   /* A likely candidate for the bit position of a free object for the
+      next allocation from this page.  */
+   unsigned next_bit_hint;
+   /* Saved in-use bit vector for pages that aren't in the topmost
+      context during collection.  */
+   unsigned *save_in_use_p;
+   /* Saved number of free objects for pages that aren't in the topmost
+      context during collection.  */
+   size_t save_num_free_objects;
+   /* A bit vector indicating whether or not objects are in use.  The
+      Nth bit is one if the Nth object on this page is allocated.  This
+      array is dynamically sized.  */
+   unsigned in_use_p[1];
+ } page_entry;
+ 
+ 
+ /* The page_table contains the master lists of allocation pages,
+    listed by allocation size.  */
+ typedef struct page_table
+ {
+   /* The Nth element in this array is a page with objects of size 2^N.
+      If there are any pages with free objects, they will be at the
+      head of the list.  NULL if there are no page-entries for this
+      object size.  */
+   page_entry *pages[HOST_BITS_PER_WIDE_INT];
+   /* The Nth element in this array is the last page with objects of
+      size 2^N.  NULL if there are no page-entries for this object
+      size.  */
+   page_entry *page_tails[HOST_BITS_PER_WIDE_INT];
+ } page_table;
+ 
+ static struct page_table table;
+ 
+ /* The current depth in the context stack.  */
+ static unsigned context_depth;
+ 
+ 
+ /* A two-level tree is used to look up the page-entry for a given
+    pointer.  Two chunks of the pointer's bits are extracted to index
+    the first and second levels of the tree, as follows:
+ 
+ 				   HOST_PAGE_SIZE_BITS
+ 					|      |
+        msb +----------------+----+------+------+ lsb
+ 			    |    |      |
+ 			 PAGE_L1_BITS   |
+ 				 |      |
+ 			       PAGE_L2_BITS
+ 
+    The bottommost HOST_PAGE_SIZE_BITS are ignored, since page-entry
+    pages are aligned on system page boundaries.  The next most
+    significant PAGE_L2_BITS and PAGE_L1_BITS are the second and first
+    index values in the lookup table, respectively.  
+ 
+    The topmost leftover bits, if any, are ignored.  For 32-bit
+    architectures and the settings below, there are no leftover bits.
+    For architectures with wider pointers, this scheme is currently
+    broken; the lookup tree needs to point to a list of pages, which
+    must be scanned to find the correct one.  */
+ 
+ #define PAGE_L1_BITS (8)
+ #define PAGE_L2_BITS (HOST_BITS_PER_WIDE_INT \
+ 		      - PAGE_L1_BITS - HOST_PAGE_SIZE_BITS)
+ #define PAGE_L1_SIZE (1 << PAGE_L1_BITS)
+ #define PAGE_L2_SIZE (1 << PAGE_L2_BITS)
+ 
+ /* Lookup table for associating allocation pages with object
+    addresses.  */
+ static page_entry ***lookup;
+ 
+ /* Extract the level-1 lookup index from a pointer.  */
+ #define LOOKUP_L1(__p) \
+   (((size_t) __p) >> (HOST_PAGE_SIZE_BITS + PAGE_L2_BITS) \
+    & ((1 << PAGE_L1_BITS) - 1))
+ 
+ /* Extract the level-2 lookup index from a pointer.  */
+ #define LOOKUP_L2(__p) \
+   (((size_t) __p) >> (HOST_PAGE_SIZE_BITS) & ((1 << PAGE_L2_BITS) - 1))
+ 
+ /* A function call version of the lookup operation is provided to aid
+    in debugging.  */
+ static page_entry *
+ debug_mmap_lookup (p)
+      void *p;
+ {
+   int l1, l2;
+ 
+   l1 = LOOKUP_L1 (p);
+   l2 = LOOKUP_L2 (p);
+ 
+   if (lookup == NULL || lookup[l1] == NULL)
+     return NULL;
+   else
+     return lookup[l1][l2];
+ }
+ 
+ 
+ /* Compute DIVIDEND / DIVISOR, rounded up.  */
+ #define DIV_ROUND_UP(__dividend, __divisor) \
+   ((__dividend + __divisor - 1) / __divisor)
+ 
+ /* The number of objects per allocation page, for objects of size
+    2^BITS.  */
+ #define OBJECTS_PER_PAGE(__bits) \
+   (unsigned) ((((unsigned) 1 << (__bits)) >= HOST_PAGE_SIZE) \
+               ? 1 \
+               : (HOST_PAGE_SIZE / (1 << (__bits))))
+ 
+ /* The number of host integers required to maintain a bitmap for the
+    objects on a page-entry.  */
+ #define BITMAP_SIZE(__num_objects) \
+   DIV_ROUND_UP ((__num_objects), HOST_BITS_PER_INT)
+ 
+ 
+ /* Bytes currently allocated.  */
+ static size_t allocated = 0;
+ 
+ /* Bytes currently allocated at the end of the last collection.  This
+    is 4 MB by default, so no collection will be performed until the
+    heap is somewhat larger than 4 MB.  */
+ static size_t allocated_last_gc = 4 * 1024 * 1024;
+ 
+ /* Skip garbage collection if the current allocation is not at least
+    this factor times the allocation at the end of the last collection.
+    In other words, total allocation must expand by (this factor minus
+    one) before collection is performed.  */
+ #define GGC_MMAP_MIN_EXPAND_FOR_GC (1.3f)
+ 
+ /* A file descriptor open to /dev/zero for reading.  */
+ #if !defined(__linux__)
+ static int dev_zero_fd;
+ #endif
+ 
+ char *empty_string;
+ 
+ /* Prints the page-entry for object size BITS, for debugging.  */
+ static void
+ debug_print_page_list (bits)
+      int bits;
+ {
+   page_entry *p;
+   printf ("Head=%p, Tail=%p:\n", table.pages[bits], table.page_tails[bits]);
+   p = table.pages[bits];
+   while (p != NULL)
+     {
+       printf ("%p(%1d|%3d) -> ", p, p->context_depth, p->num_free_objects);
+       p = p->next;
+     }
+   printf ("NULL\n");
+   fflush (stdout);
+ }
+ 
+ /* `Poisons' the region of memory starting at START and extending for
+    LEN bytes by writing the value 0xdead repeatedly over the region.  */
+ static void
+ poison (start, len)
+      void *start;
+      size_t len;
+ {
+   static int dead_code = (int) 0xdeaddeaddeaddeadl;
+   char *p;
+   for(p = (char *) start; p < (char *) start + len; p += 2)
+     memcpy (p, &dead_code, 2);
+ }
+ 
+ /* Allocate a new page for allocating objects of size 2^OBJECT_BITS,
+    and return an entry for it.  The entry is not added to the
+    appropriate page_table list.  */
+ static struct page_entry *
+ alloc_page (object_bits)
+      unsigned object_bits;
+ {
+   struct page_entry *entry;
+   size_t num_objects;
+   size_t bitmap_size;
+   size_t page_entry_size;
+ 
+   num_objects = OBJECTS_PER_PAGE (object_bits);
+   bitmap_size = BITMAP_SIZE (num_objects + 1);
+ 
+   /* The page_entry struct is dynamically sized to accommodate the
+      in_use_p bitmap.  One int is already included in sizeof
+      (page_entry).  */
+   page_entry_size = sizeof (page_entry) + sizeof (int) * (bitmap_size - 1);
+ 
+   entry = (struct page_entry *) xmalloc (page_entry_size);
+   memset (entry, 0, page_entry_size);
+   entry->context_depth = context_depth;
+   entry->object_bits = object_bits;
+   entry->num_free_objects = num_objects;
+   entry->bytes = entry->num_free_objects * (1 << object_bits);
+   /* Set the one-past-the-end in-use bit.  This acts as a sentinel as we
+      increment the hint.  */
+   entry->in_use_p[num_objects / HOST_BITS_PER_INT] |=
+     (1 << (num_objects % HOST_BITS_PER_INT));
+ 
+   /* Actually allocate the memory, using mmap.  On Linux, use the
+      non-standard ANONYMOUS mode, which gives us memory without the
+      expense of zeroing it out.  On other systems, mmap from
+      /dev/zero.  Optimizations for other systems that provide
+      comparable means of getting memory quickly could be added here;
+      no assumptions are subsequently made about the memory's
+      contents.  */
+ #if defined(__linux__)
+   entry->page = (char *) mmap (NULL, entry->bytes, PROT_READ | PROT_WRITE,
+ 			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ #else
+   entry->page = (char *) mmap (NULL, entry->bytes, PROT_READ | PROT_WRITE,
+ 			       MAP_PRIVATE, dev_zero_fd, 0);
+ #endif
+ 
+   if (GGC_MMAP_DEBUG_LEVEL >= 2)
+     fprintf (GGC_MMAP_DEBUG_FILE, 
+ 	     "Allocating page at %p, object size=%d, data %p-%p\n", entry,
+ 	     1 << object_bits, entry->page, entry->page + entry->bytes - 1);
+ 
+   return entry;
+ }
+ 
+ 
+ /* Free a page when it's no longer needed.  */
+ static void
+ free_page (entry)
+      struct page_entry *entry;
+ {
+   if (GGC_MMAP_DEBUG_LEVEL >= 2)
+     fprintf (GGC_MMAP_DEBUG_FILE, 
+ 	     "Deallocating page at %p, data %p-%p\n", entry,
+ 	     entry->page, entry->page + entry->bytes - 1);
+ 
+   if (munmap (entry->page, entry->bytes) == -1)
+     abort ();
+   free (entry);
+ }
+ 
+ 
+ /* This table provides a fast way to determine ceil(log_2(size)) for
+    allocation requests.  The minimum allocation size is four bytes.  */
+ static unsigned size_lookup[257] = 
+ { 
+   2, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 
+   4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 
+   5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 
+   6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 
+   6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 
+   7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 
+   7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
+   7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 
+   7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
+   8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
+   8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
+   8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
+   8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
+   8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
+   8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
+   8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
+   8
+ };
+ 
+ /* Allocate a chunk of memory of SIZE bytes.  If ZERO is non-zero, the
+    memory is zeroed; otherwise, its contents are undefined.  */
+ static inline void *
+ alloc (size, zero)
+      size_t size;
+      int zero;
+ {
+   unsigned size_bits;
+   struct page_entry *entry;
+   unsigned bitmap_word;
+   unsigned bit;
+   void *result;
+   unsigned object_offset;
+ 
+   if (size <= 256)
+     size_bits = size_lookup[size];
+   else
+     {
+       size_bits = 9;
+       while (size > ((unsigned) 1 << size_bits))
+ 	size_bits++;
+     }
+ 
+   /* If there are non-full pages for this size allocation, they are at
+      the head of the list.  */
+   entry = table.pages[size_bits];
+ 
+   /* If there is no page for this object size, or all pages in this
+      context are full, allocate a new page.  */
+   if (entry == NULL 
+       || entry->num_free_objects == 0 
+       || entry->context_depth != context_depth)
+     {
+       struct page_entry *new_entry;
+       new_entry = alloc_page (size_bits);
+       
+       /* If this is the only entry, it's also the tail.  */
+       if (entry == NULL)
+ 	table.page_tails[size_bits] = new_entry;
+      
+       /* Put new pages at the head of the page list.  */
+       new_entry->next = entry;
+       entry = new_entry;
+       table.pages[size_bits] = new_entry;
+ 
+       /* For a new page, we know the word and bit positions (in the
+ 	 in_use bitmap) of the first available object -- they're zero.  */
+       bitmap_word = 0;
+       bit = 0;
+       object_offset = 0;
+       new_entry->next_bit_hint = 1;
+     }
+   else
+     {
+       /* First try to use the hint left from the previous allocation
+ 	 to locate a clear bit in the in-use bitmap.  We've made sure
+ 	 that the one-past-the-end bit is always set, so if the hint
+ 	 has run over, this test will fail.  */
+       unsigned hint = entry->next_bit_hint;
+       bitmap_word = hint / HOST_BITS_PER_INT;
+       bit = hint % HOST_BITS_PER_INT;
+       
+       if ((entry->in_use_p[bitmap_word] & (1 << bit)) == 0)
+ 	{
+ 	  object_offset = hint;
+ 	  /* Next time, try the next bit.  */
+ 	  entry->next_bit_hint++;
+ 	}
+       /* If the hint didn't work, scan the bitmap from the beginning.  */
+       else
+ 	{
+ 	  bitmap_word = 0;
+ 	  bit = 0;
+ 	  while (entry->in_use_p[bitmap_word] == (unsigned) -1)
+ 	    ++bitmap_word;
+ 	  while ((entry->in_use_p[bitmap_word] & (1 << bit)) != 0)
+ 	    ++bit;
+ 	  object_offset = bitmap_word * HOST_BITS_PER_INT + bit;
+ 	  /* Next time, try the next bit.  */
+ 	  entry->next_bit_hint = bitmap_word * HOST_BITS_PER_INT + bit + 1;
+ 	}
+     }
+ 
+   /* Set the in-use bit.  */
+   entry->in_use_p[bitmap_word] |= (1 << bit);
+ 
+   /* Keep a running total of the number of free objects.  If this page
+      fills up, we may have to move it to the end of the list if the
+      next page isn't full.  If the next page is full, all subsequent
+      pages are full, so there's no need to move it.  */
+   if (--(entry->num_free_objects) == 0
+       && entry->next != NULL
+       && entry->next->num_free_objects > 0)
+     {
+       table.pages[size_bits] = entry->next;
+       entry->next = NULL;
+       table.page_tails[size_bits]->next = entry;
+       table.page_tails[size_bits] = entry;
+     }
+   /* Calculate the object's offset and address.  */
+   result = entry->page + (object_offset << size_bits);
+ 
+   if (zero)
+     {
+ #ifdef GGC_POISON
+       /* `Poison' the entire allocated object before zeroing the
+ 	 requested area, so that bytes beyond the end, if any, will
+ 	 not necessarily be zero.  */
+       poison (result, 1 << size_bits);
+ #endif
+       memset (result, 0, size);
+     }
+ 
+   /* Keep track of how many bytes are being allocated.  This
+      information is used in deciding when to collect.  */
+   allocated += (1 << size_bits);
+ 
+   if (GGC_MMAP_DEBUG_LEVEL >= 3)
+     fprintf (GGC_MMAP_DEBUG_FILE, 
+ 	     "Allocating object, requested size=%d, actual=%d at %p on %p\n",
+ 	     size, 1 << size_bits, result, entry);
+ 
+   return result;
+ }
+ 
+ 
+ /* If P is not marked, marks it and returns 0.  Otherwise returns 1.
+    P must have been allocated by the GC allocator; it mustn't point to
+    static objects, stack variables, or memory allocated with malloc.  */
+ static inline int
+ mark (p)
+      void *p;
+ {
+   page_entry *entry;
+   unsigned bit, word, mask;
+ 
+   /* Look up the page on which the object is alloced.  If
+      ENABLE_CHECKING is in force, check explicitly that p was
+      allocated by the GC collector.  If it wasn't and we're not
+      checking, we're just going to dereference NULL at some point.  */
+ #ifdef ENABLE_CHECKING
+   /* Check explicitly that p was allocated by the GC allocator.  */
+   if (lookup[LOOKUP_L1(p)] == NULL)
+     abort ();
+ #endif
+   entry = lookup[LOOKUP_L1(p)][LOOKUP_L2(p)];
+ #ifdef ENABLE_CHECKING
+   if (entry == NULL)
+     abort ();
+ #endif
+ 
+   /* Calculate the index of the object on the page; this is its bit
+      position in the in_use_p bitmap.  */
+   bit = (((char *) p) - entry->page) >> entry->object_bits;
+   word = bit / HOST_BITS_PER_INT;
+   mask = 1 << (bit % HOST_BITS_PER_INT);
+   
+   /* If the bit was previously set, skip it. */
+   if ((entry->in_use_p[word] & mask) > 0)
+     return 1;
+   /* Otherwise set it, and decrement the free object count.  */
+   else
+     {
+       entry->in_use_p[word] |= mask;
+       --(entry->num_free_objects);
+ 
+       allocated += (1 << entry->object_bits);
+ 
+       if (GGC_MMAP_DEBUG_LEVEL >= 4)
+ 	fprintf (GGC_MMAP_DEBUG_FILE, "Marking %p\n", p);
+ 
+       return 0;
+     }
+ }
+ 
+ 
+ /* Initialize the ggc-mmap allocator.  */
+ void
+ init_ggc ()
+ {
+ #if !defined(__linux__)
+   dev_zero_fd = open ("/dev/zero", O_RDONLY);
+   if (dev_zero_fd == -1)
+     abort ();
+ #endif
+ 
+ /* ggc_mmap_debug_file = fopen ("ggc-mmap.debug", "w"); */
+ 
+   empty_string = ggc_alloc_string ("", 0);
+   ggc_add_string_root (&empty_string, 1);
+ }
+ 
+ 
+ void
+ ggc_push_context PROTO ((void))
+ {
+   ++context_depth;
+ }
+ 
+ 
+ void
+ ggc_pop_context PROTO ((void))
+ {
+   int bits;
+ 
+   /* Any remaining pages in the popped context are lowered to the new
+      current context; i.e. objects allocated in the popped context and
+      left over are imported into the previous context.  */
+   for (bits = 0; bits < HOST_BITS_PER_WIDE_INT; bits++)
+     {
+       struct page_entry *p = table.pages[bits];
+       while (p != NULL)
+ 	{
+ 	  if (p->context_depth == context_depth)
+ 	    --(p->context_depth);
+ 	  p = p->next;
+ 	}
+     }
+   
+   --context_depth;
+ }
+ 
+ 
+ struct rtx_def *
+ ggc_alloc_rtx (nslots)
+      int nslots;
+ {
+   return (struct rtx_def *) 
+     alloc (sizeof (struct rtx_def) + ((nslots) - 1) * sizeof (rtunion), 1);
+ }
+ 
+ 
+ struct rtvec_def *
+ ggc_alloc_rtvec (nelt)
+      int nelt;
+ {
+   return (struct rtvec_def *)
+     alloc (sizeof (struct rtvec_def) + ((nelt) - 1) * sizeof (rtunion), 1);
+ }
+ 
+ 
+ union tree_node *
+ ggc_alloc_tree (length)
+      int length;
+ {
+   return (union tree_node *)
+     alloc (length, 1);
+ }
+ 
+ 
+ char *
+ ggc_alloc_string (contents, length)
+      const char *contents;
+      int length;
+ {
+   char *string;
+ 
+   if (length < 0)
+     {
+       if (contents == NULL)
+ 	return NULL;
+       length = strlen (contents);
+     }
+ 
+   string = (char *) alloc (length + 1, 0);
+   if (contents != NULL)
+     memcpy (string, contents, length);
+   string[length] = 0;
+ 
+   return string;
+ }
+ 
+ 
+ void *
+ ggc_alloc (size)
+      size_t size;
+ {
+   return alloc (size, 0);
+ }
+ 
+ 
+ void
+ ggc_collect ()
+ {
+   int bits;
+   int l1;
+   page_entry *p;
+   int time;
+ 
+   /* Avoid frequent unnecessary work by skipping collection if the
+      total allocations haven't expanded much since the last
+      collection.  */
+ #ifndef GGC_ALWAYS_COLLECT
+   if (allocated < GGC_MMAP_MIN_EXPAND_FOR_GC * allocated_last_gc)
+     return;
+ #endif
+   time = get_run_time ();
+   if (!quiet_flag)
+     fprintf (stderr, " {GC %d -> ", allocated / 1024);
+ 
+   /* Zero the total allocated bytes.  We'll reaccumulate this while
+      marking.  */
+   allocated = 0;
+   
+   /* Build a two-level tree to look up pages from object addresses.
+      Only the first level is built up front; second-level arrays are
+      added as needed.  */
+   lookup = xcalloc (1, sizeof (page_entry **) * PAGE_L1_SIZE);
+   memset (lookup, 0, sizeof (page_entry **) * PAGE_L1_SIZE);
+   for (bits = 0; bits < HOST_BITS_PER_WIDE_INT; bits++)
+     {
+       int num_objects = OBJECTS_PER_PAGE (bits);
+       /* Size, in bytes, of the in-use bit vector.  */
+       int bitmap_size = sizeof (int) * BITMAP_SIZE (num_objects);
+ 
+       p = table.pages[bits];
+       while (p != NULL)
+ 	{
+ #ifdef ENABLE_CHECKING
+ 	  /* p->page should be page-aligned.  */
+ 	  if ((((size_t) p->page) & ((1 << HOST_PAGE_SIZE_BITS) - 1)) != 0)
+ 	    abort ();
+ #endif
+ 	  l1 = LOOKUP_L1 (p->page);
+ 	  if (lookup[l1] == NULL)
+ 	    {
+ 	      lookup[l1] = xcalloc (1, sizeof (page_entry *) * PAGE_L2_SIZE);
+ 	      memset (lookup[l1], 0, sizeof (page_entry *) * PAGE_L2_SIZE);
+ 	    }
+ #ifdef ENABLE_CHECKING
+ 	  if(lookup[l1][LOOKUP_L2 (p->page)] != NULL)
+ 	    abort ();
+ #endif
+ 	  lookup[l1][LOOKUP_L2 (p->page)] = p;
+ 
+ 	  /* Pages that aren't in the topmost context are not
+ 	     collected; nevertheless, we need their in-use bit vectors
+ 	     to store GC marks.  So, back them up first.  */
+ 	  if (p->context_depth < context_depth)
+ 	    {
+ 	      p->save_in_use_p = xmalloc (bitmap_size);
+ 	      memcpy (p->save_in_use_p, p->in_use_p, bitmap_size);
+ 	      p->save_num_free_objects = p->num_free_objects;
+ 	    }
+ 
+ 	  /* Reset the number of free objects and clear the
+              in-use bits.  These will be adjusted by mark.  */
+ 	  p->num_free_objects = num_objects;
+ 	  memset (p->in_use_p, 0, bitmap_size);
+ 	  /* Make sure the one-past-the-end bit is always set.  */
+ 	  p->in_use_p[num_objects / HOST_BITS_PER_INT] 
+ 	    |= (1 << (num_objects % HOST_BITS_PER_INT));
+ 
+ 	  p = p->next;
+ 	}
+     }
+ 
+   /* All the marking happens here.  */
+   ggc_mark_roots ();
+   
+   /* Clean up the page lookup tree.  */
+   for (l1 = 0; l1 < PAGE_L1_SIZE; l1++)
+     if (lookup[l1] != NULL)
+       free (lookup[l1]);
+   free (lookup);
+   lookup = NULL;
+ 
+   /* Put pages without empty objects at the end of the list.  Free
+      each page that is unused.  */
+   for (bits = 0; bits < HOST_BITS_PER_WIDE_INT; bits++)
+     if (table.pages[bits] != NULL)
+       {
+ 	int done = 0;
+ 	unsigned num_objects = OBJECTS_PER_PAGE (bits);
+ 	/* Size, in bytes, of the in-use bit vector.  */
+ 	unsigned bitmap_size = sizeof (int) * BITMAP_SIZE (num_objects);
+ 	
+ 	/* The last page-entry to consider, regardless of entries
+ 	   placed at the end of the list.  */
+ 	const page_entry *last = table.page_tails[bits];
+ 	/* The previously examined entry in the list.  */
+ 	page_entry *previous = NULL;
+ 
+ 	p = table.pages[bits];
+ 	if (p == NULL)
+ 	  continue;
+ 
+ 	/* At the beginning and end of each iteration of the loop,
+ 	     - The list is properly formed.
+ 	     - previous points to the element before p, or is NULL if
+ 	       p is the head of the list.  */
+  	do
+ 	  {
+ 	    page_entry *next = p->next;
+ 
+ 	    /* Loop until all entries have been examined.  */
+ 	    if (p == last)
+ 	      done = 1;
+ 
+ 	    /* Only objects on pages in the topmost context should get
+ 	       collected.  If this page is not in the topmost context,
+ 	       restore its allocation state.  */
+ 	    if (p->context_depth < context_depth)
+ 	      {
+ 		memcpy (p->in_use_p, p->save_in_use_p, bitmap_size);
+ 		free (p->save_in_use_p);
+ 		p->num_free_objects = p->save_num_free_objects;
+ 	      }
+ 	    /* Remove the page if it's empty.  */
+ 	    else if (p->num_free_objects == num_objects)
+ 	      {
+ 		if (p == table.pages[bits])
+ 		  table.pages[bits] = next;
+ 		else
+ 		  previous->next = next;
+ 		/* Are we removing the last element?  */
+ 		if (p == table.page_tails[bits])
+ 		  table.page_tails[bits] = previous;
+ 		free_page (p);
+ 		p = previous;
+ 	      }
+ 	    /* If the page is full, move it to the end.  */
+ 	    else if (p->num_free_objects == 0)
+ 	      {
+ 		/* Don't move it if it's already at the end.  */
+ 		if (p != table.page_tails[bits])
+ 		  {
+ 		    /* Move p to the end of the list.  */
+ 		    p->next = NULL;
+ 		    table.page_tails[bits]->next = p;
+ 		    /* Update the tail pointer...  */
+ 		    table.page_tails[bits] = p;
+ 		    /* ... and the head pointer, if necessary.  */
+ 		    if (table.pages[bits] == p)
+ 		      table.pages[bits] = next;
+ 		    else
+ 		      previous->next = next;
+ 		    p = previous;
+ 		  }
+ 	      }
+ 	    /* If we've fallen through to here, it's a page in the
+ 	       topmost context that is neither full nor empty.  Such a
+ 	       page must precede pages at lesser context depth in the
+ 	       list, so move it to the head.  */
+ 	    else if (p != table.pages[bits])
+ 	      {
+ 		/* Don't move it if it's already at the head.  */
+ 		if (p != table.pages[bits])
+ 		  {
+ 		    previous->next = p->next;
+ 		    p->next = table.pages[bits];
+ 		    table.pages[bits] = p;
+ 		    /* Are we moving the last element?  */
+ 		    if (table.page_tails[bits] == p)
+ 		      table.page_tails[bits] = previous;
+ 		    p = previous;
+ 		  }
+ 	      }
+ 
+ 	    previous = p;
+ 	    p = next;
+ 	  } 
+ 	while (! done);
+       }
+ 
+ #ifdef GGC_POISON
+   /* If checking is enabled, poison all unallocated pages.  */
+   for (bits = 0; bits < HOST_BITS_PER_WIDE_INT; bits++)
+     {
+       p = table.pages[bits];
+       while (p != NULL)
+ 	{
+ 	  unsigned i;
+ 	  for (i = 0; i < OBJECTS_PER_PAGE (bits); i++)
+ 	    {
+ 	      int word, mask;
+ 	      word = i / HOST_BITS_PER_INT;
+ 	      mask = 1 << (i % HOST_BITS_PER_INT);
+ 	      if ((p->in_use_p[word] & mask) == 0)
+ 		  poison (p->page + i * (1 << bits), 1 << bits);
+ 	    }
+ 	  p = p->next;
+ 	}
+     }
+ #endif
+ 
+   allocated_last_gc = allocated;
+ 
+   time = get_run_time () - time;
+   gc_time += time;
+ 
+   time = (time + 500) / 1000;
+   if (!quiet_flag)
+     fprintf (stderr, "%d in %d.%03d} ", 
+ 	     allocated / 1024, time / 1000, time % 1000);
+ }
+ 
+ 
+ 
+ int
+ ggc_set_mark_rtx (r)
+      rtx r;
+ {
+   return mark (r);
+ }
+ 
+ int
+ ggc_set_mark_rtvec (v)
+      rtvec v;
+ {
+   return mark (v);
+ }
+ 
+ int
+ ggc_set_mark_tree (t)
+      tree t;
+ {
+   return mark (t);
+ }
+ 
+ void
+ ggc_mark_string (s)
+      char *s;
+ {
+   if (s)
+     mark (s);
+ }
+ 
+ void 
+ ggc_mark (p)
+      void *p;
+ {
+   mark (p);
+ }

