The Parallel GCC

This page introduces the Parallel GCC -- a research project aiming to parallelize a real-world compiler. This can be useful on many-core machines where GNU Make itself cannot provide enough parallelism, or in the future if someone wants to design a parallel compiler from scratch.

In this page, we document and discuss how to use this project, the theoretical background that motivates it, the challenges found so far, what was done and which decisions were made to fix certain problems, the architecture developed to control the parallelism, some results so far, and what is left to do.

Please keep in mind that this project is under development and still has several bugs.

Downloading and Building the Project

Clone the repository, check out the giulianob_parallel branch, create a new build directory, enter it, and then run configure. For example:

$ git clone https://gitlab.com/flusp/gcc.git
$ cd gcc
$ git checkout giulianob_parallel
$ mkdir build && cd build
$ ../configure --disable-bootstrap --enable-languages=c --disable-multilib
$ make

The only additional dependency required by this project is POSIX threads, which is available on most Unix systems.

Using the Parallel GCC

After you install GCC, use:

$ gcc --param=num-threads=4 <PARAMS>

This will make GCC spawn 4 threads to compile the code.

Theoretical Background

Inter and Intra Procedural Optimizations

GCC is an optimizing compiler, which means that it automatically optimizes your code when compiling. GCC splits optimizations into two categories, defined as follows:

  • Intra Procedural Analysis covers optimizations applied inside a single function, ignoring its caller and callee relationships. Such optimizations therefore use no information about how the function interacts with others. One example is the vectorizer.

  • Inter Procedural Analysis covers optimizations which require information about how a function interacts with others. One example is inlining.

From this definition, we can assume that Intra Procedural Optimizations on two or more different functions can be performed in parallel.

GCC's optimization phase is split into three steps:

  • Inter Procedural Analysis (IPA): Builds a callgraph and uses it to decide how to perform optimizations.

  • GIMPLE Intra Procedural Optimizations: Performs several hardware-independent optimizations inside the function.

  • RTL Intra Procedural Optimizations: Performs several hardware-dependent optimizations inside the function.

The pipeline works as follows: IPA collects information and decides how to optimize all functions; it then sends a function to the GIMPLE optimizer, which in turn sends the function to the RTL optimizer, where the final code is generated. This process repeats for every function in the code. The pseudocode below illustrates it:

void expand_all_functions () {
        graph* g = build_callgraph ();
        ipa_perform_analysis (g);
        function* cfun;
        FOR_EACH_FUNCTION (g, cfun) {
                cfun->expand_ipa ();
                cfun->expand_gimple ();
                cfun->expand_rtl ();
        }
}

You can check this part in cgraphunit.c, where expand_all_functions is implemented.

We started this project by parallelizing GIMPLE, as it is hardware-independent, and therefore its parallelization increases parallelism on all architectures supported by GCC.

Parallel Architecture

We designed the following architecture intending to increase parallelism and reduce overhead. As IPA finishes its analysis, a number of threads equal to the number of logical processors is spawned, to avoid scheduling overhead. One of those threads then inserts all analyzed functions into a threadsafe producer-consumer queue, from which all threads consume. Once a thread has finished processing one function, it queries the queue for the next available function, until it finds an EMPTY token. When that happens, the thread finishes, as there are no more functions to be processed.

This architecture is used to parallelize the per-function GIMPLE Intra Procedural Optimizations and can easily be extended to also support the RTL Intra Procedural Optimizations. It does not, however, cover the IPA passes nor the per-language Front End analysis.

The current code snippet of this architecture is illustrated below:

void expand_all_functions () {
    graph* g = build_callgraph ();
    ipa_perform_analysis (g);
    function* cfun;
    working_set ws;

    FOR_EACH_FUNCTION (g, cfun) {
        cfun->expand_ipa ();
    }

    ws.spawn_threads (expand_gimple);

    FOR_EACH_FUNCTION (g, cfun) {
        ws.insert_work (cfun);
    }

    FOR_EACH_FUNCTION (g, cfun) {
        cfun->expand_rtl ();
    }
}

Code Refactoring

Several parts of the GCC middle-end code were refactored in this project, and there are still many places where refactoring is necessary for this project to succeed.

The first changes concerned how functions move through the optimization pipeline. The original code required a single function to be optimized and output from GIMPLE to RTL with no possibility of changing which function is being compiled. Several structures in GCC were made per-thread or threadsafe: either replicated using the C11 thread-local notation, allocated on the thread stack, or simply protected by locks.

One of the most tedious parts of the job was detecting the many global variables and making them threadsafe; they were the cause of most crashes in this project. Tools for detecting data races, such as Helgrind and DRD, were useful in the beginning but showed their limitations as the project advanced: several race conditions had a small window and did not occur when the compiler ran inside these tools. There is therefore a need for better tools to help find such global variables and race conditions. Finding these variables through static analysis of the entire code base could be a good addition to the current tools.

In the subsections below, we discuss some data structures which we found not to be easily replicated.

Memory Pools

Memory pools are data structures which allocate several objects of the same type in chunks, to avoid calls to the malloc() function and to ensure that data is always aligned. This serves both as an optimization and as a way to avoid memory leaks, as one can free the entire pool at once.

Memory pools were implemented in GCC as a class whose instances all point to one singleton Memory Allocator object, which carries out the memory allocation, and this caused a serious race condition when threads tried to allocate and deallocate memory pools. One thread could release a pool to which other threads hold pointers, resulting in references to invalid memory; there were also typical race conditions on the structure's counters, which need to be incremented and decremented as chunks of objects are allocated and released.

As the data structure is required later in the compilation, which in the current state of this project is still carried out by a single thread, our first attempt was to implement a threadsafe Memory Pool allocator, which locks a mutex each time memory is allocated or released and annotates the thread ID on each chunk. When memory is released, a thread therefore only releases the chunks it currently owns. This approach made the compilation slow, and the GCC tests failed due to timeouts, so another strategy was designed.

The second approach was a distributed memory pool: each thread holds one memory pool, so there is no need for locking when allocating and releasing chunks. This also guarantees that one thread does not release the contents of another thread, as threads have no access to pools that belong to other threads. However, the data is still required by another thread later in the compilation. The solution was to implement a pool merge feature, which merges two memory pools upon request. Since memory pools are implemented as a linked list, the merge feature could be implemented in O(1), although the currently implemented algorithm requires O(n): the memory pool currently uses a single-headed linked list, and it needs to be refactored into a double-headed linked list.

All memory pools touched by the GIMPLE Intra Procedural Optimizations, except one, were refactored with this approach, and the merge feature was used only in those memory pools which required it. The only pool not refactored this way was the Euler Transversal Forest data structure (et-forest.c), simply because the compiler crashes when the strategy is employed there. The reason for this still needs to be investigated.

Garbage Collection

GCC has an internal garbage collector, which keeps reference counts of objects. Objects watched by the garbage collector are declared with the GTY(()) annotation, and we cannot simply use the C11 thread-local annotation, as it is not supported by the Garbage Collector. Currently, our approach is either to insert locks around these variables or to move them into the struct function object.

Currently, we insert a global lock in the Garbage Collector to ensure that memory allocation at this point is serialized, and we disable any memory collection while the program is running in multithreaded mode. This will not be necessary once multithreading is supported by the Garbage Collector.

Memory Address to Symbol Conversion

In tree-ssa-address.c, there is a vector for converting memory references to symbols, detecting whether an address is part of a symbol (i.e. a reference to an array element), and vice versa. This array is marked to be watched by the garbage collector, therefore we take a lock every time the array is accessed. Research is needed to evaluate how much this lock impacts performance, and whether there is a better way of handling this situation.

Integer to Tree Node Hash

In tree.c, there is a hash table used to avoid reconstructing tree nodes which represent integer constants. This hash is also marked to be watched by the Garbage Collector, and therefore we take a simple lock every time the hash is accessed. This approach may not be the best if the cost of locking and hashing becomes greater than that of recreating the tree node, so research is also needed here.

The rtl_data Structure

GCC uses a single instance of the rtl_data class, representing the current function being compiled in RTL. So far, this should not be a problem, as the RTL expansion and optimization phase is still single-threaded. However, there are GIMPLE passes which calculate instruction costs in RTL mode to decide how the function will be optimized. These access the rtl_data singleton and therefore expose a race condition that needs to be solved. To fix this issue, we have either to replicate this structure, which is necessary anyway to parallelize the Intra Procedural RTL optimizations, or to fix the GIMPLE passes so that they do not depend on instruction costs.


Results

Here we present our current performance results from parallelizing the GIMPLE Intra Procedural Optimizations. It must be highlighted that we are still facing race conditions, and there are locks which can be removed once the corresponding data structures are duplicated.

Here we compile the file gimple-match.c, which is the biggest file in the GCC project. This file has more than 100,000 lines of code, around 1700 functions, and almost no loops inside these functions. The computer used in this benchmark had an Intel(R) Core(TM) i5-8250U CPU with 8 GB of RAM: a CPU with 4 cores and Hyperthreading, resulting in 8 virtual cores. All points are the mean of 30 samples; the confidence interval for the population mean was suppressed, as the standard deviation was fairly low.

The figure below shows our results before and after Intra Procedural GIMPLE parallelization. We can observe that the time elapsed in this part dropped from 7 seconds to around 4 seconds with 2 threads and around 3 seconds with 4 threads, a speedup of 1.72x and 2.52x, respectively. We can also see that Hyperthreading did not affect the result. This result was used to estimate the improvement from RTL parallelization.


The next figure shows an estimate of the time when RTL is parallelized. In this figure, we can observe that the time required in this step could drop from more than 20s to around 10s if we maintain a speedup of 2.52x.


Combining this information with the total time required by GCC compilation, we estimate an improvement of up to 1.62x in GCC when using 4 threads, although we obtained a speedup of only 1.10x in the total compilation by parallelizing GIMPLE alone, as illustrated in the next figure.



TODO

  • Find and fix all race conditions in GIMPLE. There are still random crashes when code is compiled using the parallel option.
  • Make this GCC compile itself.
  • Make this GCC pass all tests in the testsuite.
  • Add multithreading support to the Garbage Collector.
  • Parallelize the RTL part. This will improve our current results, as indicated in the Results chapter.
  • Parallelize the IPA part. This can also improve the time of LTO compilations.
  • Refactor all occurrences of thread-local variables by allocating them as soon as threads are started, or at pass execution.

None: ParallelGcc (last edited 2019-12-16 17:36:11 by GiulianoBelinassi)