Porting to GCC 6

The GCC 6 release series differs from previous GCC releases in a number of ways. Some of these are a result of bug fixing, and some old behaviors have been intentionally changed in order to support new standards, or relaxed in standards-conforming ways to facilitate compilation or run-time performance. Some of these changes are not visible to the naked eye and will not cause problems when updating from older versions.

However, some of these changes are visible, and can cause grief to users porting to GCC 6. This document is an effort to identify major issues and provide clear solutions in a quick and easily searched manner. Additions and suggestions for improvement are welcome.

Preprocessor issues

C language issues

C++ language issues

Default standard is now GNU++14

GCC 6 defaults to -std=gnu++14 instead of -std=gnu++98: the C++14 standard, plus GNU extensions. This brings several changes that users should be aware of, some new with the C++14 standard, others that appeared with the C++11 standard. The following paragraphs describe some of these changes and suggest how to deal with them.

Some users might prefer to stay with gnu++98, in which case we suggest using the -std=gnu++98 command-line option, perhaps by adding it to CXXFLAGS or similar variables in Makefiles.

Alternatively, you might prefer to update to gnu++11, bringing in the C++11 changes but not the C++14 ones. If so, use the -std=gnu++11 command-line option.

Narrowing conversions

The C++11 standard does not allow "narrowing conversions" inside braced initialization lists, meaning conversions to a type with less precision or a smaller range, for example:

    int i = 127;
    char s[] = { i, 256 };

In the above example the value 127 would fit in char but because it's not a constant it is still a narrowing conversion. If the value 256 is larger than CHAR_MAX then that is also a narrowing conversion. Narrowing conversions can be avoided by using an explicit cast, e.g. (char)i.
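A minimal sketch of the fix, following the explicit-cast suggestion above (the cast of 256 still wraps the value, but it now does so explicitly):

```cpp
#include <cassert>

// Explicit casts mark each narrowing conversion as intentional,
// so the braced initializer compiles under -std=gnu++14.
int i = 127;
char s[] = { static_cast<char>(i), static_cast<char>(256) };
```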

Invalid literal suffixes

The C++11 "user-defined literals" feature allows custom suffixes to be added to literals, so that for example "Hello, world!"s creates a std::string object. This means that code relying on string concatenation of string literals and macros might fail to compile, for example using printf("%"PRIu64, uint64_value) is not valid in C++11, because PRIu64 is parsed as a literal suffix. To fix the code to compile in C++11 add whitespace between the string literal and the macro: printf("%" PRIu64, uint64_value).
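A self-contained sketch of the fixed usage (the helper name is illustrative):

```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <string>

// Note the whitespace between the string literal and PRIu64,
// required since C++11 so the macro is not parsed as a literal suffix.
std::string format_u64(std::uint64_t v) {
  char buf[32];
  std::snprintf(buf, sizeof buf, "%" PRIu64, v);
  return buf;
}
```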

Cannot convert 'bool' to 'T*'

The current C++ standard only allows an integer literal with the value zero (or a prvalue of type std::nullptr_t) to be used as a null pointer constant, so other constants such as false and (1 - 1) cannot be used where a null pointer is desired. Code that fails to compile with this error should be changed to use nullptr, 0, or NULL.
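A sketch of the accepted forms under the new rules:

```cpp
#include <cstddef>

// Only nullptr, a literal 0, or NULL remain valid null pointer constants.
int *p1 = nullptr;  // preferred in new code
int *p2 = 0;        // still valid
int *p3 = NULL;     // still valid
// int *p4 = false; // error: cannot convert 'bool' to 'int*'
```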

Cannot convert 'std::ostream' to 'bool'

As of C++11, iostream classes are no longer implicitly convertible to void* so it is no longer valid to do something like:

  bool valid(std::ostream& os) { return os; }

Such code must be changed to convert the iostream object to bool explicitly, e.g. return (bool)os; or return static_cast<bool>(os);

No match for 'operator!=' (operand types are 'std::ifstream' and 'int')

The change to iostream classes also affects code that tries to check for stream errors by comparing to NULL or 0. Such code should be changed to simply test the stream directly, instead of comparing it to a null pointer:

  if (file) {   // not if (file != NULL), or if (file != 0)

Lvalue required as left operand of assignment with complex numbers

Since C++11 (as per DR#387) the member functions real() and imag() of std::complex can no longer be used as lvalues, thus the following code is rejected:

  std::complex<double> f;
  f.real () = val;

To assign val to the real component of f, the following should be used instead:

  std::complex<double> f;
  f.real (val);

Destructors are noexcept by default

As of C++11, destructors have an implicit noexcept exception-specification (unless a base class or non-static member variable has a destructor that is noexcept(false)). In practice this means that the following program behaves differently in C++11 than in C++03:

  #include <stdexcept>
  struct S {
    ~S() { throw std::runtime_error ("oops"); }
  };
  int main (void)
  {
    try { S s; }
    catch (...) {
      return 42;
    }
  }

While in C++03 this program returns 42, in C++11 it terminates with a call to std::terminate. By default GCC now issues a warning for throw-expressions in noexcept functions, including destructors, that would immediately result in a call to terminate. The new warning can be disabled with -Wno-terminate. The old C++03 behavior can be restored by defining the destructor like this:

    ~S() noexcept(false) { throw std::runtime_error ("oops"); }

Header dependency changes

The <algorithm> header has been changed to reduce the number of other headers it includes in C++11 mode or above. As such, C++ programs that used components defined in <random>, <vector>, or <memory> without explicitly including the right headers will no longer compile.

Header <cmath> changes

Some C libraries declare obsolete int isinf(double) or int isnan(double) functions in the <math.h> header. These functions conflict with standard C++ functions with the same name but a different return type (the C++ functions return bool). When the obsolete functions are declared by the C library the C++ library will use them and import them into namespace std instead of defining the correct signatures.

Header <math.h> changes

The C++ library now provides its own <math.h> header that wraps the C library header of the same name. The C++ header defines additional overloads of some functions and ensures that all standard functions are defined as real functions and not as macros. Code which assumes that sin, cos, pow, isfinite etc. are macros may no longer compile.
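For example, with the wrapper header these names are real (overloaded) functions, so taking their address works, as this sketch shows:

```cpp
#include <math.h>

// sin is a function, not a macro, so its address can be taken;
// isfinite is an overloaded function rather than a type-generic macro.
double (*fp)(double) = sin;
bool finite_value(double d) { return isfinite(d); }
```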

Header <stdlib.h> changes

The C++ library now provides its own <stdlib.h> header that wraps the C library header of the same name. The C++ header defines additional overloads of some functions and ensures that all standard functions are defined as real functions and not as macros. Code which assumes that abs, malloc etc. are macros may no longer compile.

Programs which provide their own wrappers for <stdlib.h> or other standard headers are operating outside the standard and so are responsible for ensuring their headers work correctly with the headers in the C++ standard library.

Call of overloaded 'abs(unsigned int&)' is ambiguous

The additional overloads can cause the compiler to reject invalid code that was accepted before. An example of such code is shown below:

#include <stdlib.h>
int foo (unsigned x)
{
  return abs (x);
}

Since calling abs() on an unsigned value does not make sense, this code is now explicitly invalid, as per discussion in the LWG.
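Two possible fixes, sketched below with illustrative names: avoid calling abs on an unsigned value entirely, or convert explicitly so the int overload is selected.

```cpp
#include <stdlib.h>

unsigned keep(unsigned x) {
  return x;                          // abs of an unsigned value is itself
}

int as_int(unsigned x) {
  return abs(static_cast<int>(x));   // explicit conversion, unambiguous
}
```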

Optimizations remove null pointer checks for this

When optimizing, GCC now assumes the this pointer can never be null, which is guaranteed by the language rules. Invalid programs which assume it is OK to invoke a member function through a null pointer (possibly relying on checks like this != NULL) may crash or otherwise fail at run time if null pointer checks are optimized away. With the -Wnull-dereference option the compiler tries to warn when it detects such invalid code.

If the program cannot be fixed to remove the undefined behavior then the option -fno-delete-null-pointer-checks can be used to disable this optimization. That option also disables other optimizations involving pointers, not only those involving this.
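A sketch of the usual source-level fix (names are illustrative): test the pointer at the call site instead of checking `this` inside the member function, which is undefined behavior.

```cpp
struct Node {
  int value;
  int get() const { return value; }  // no `this != NULL` check
};

int safe_get(const Node* n) {
  return n ? n->get() : 0;           // null test moved to the caller
}
```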

Deprecation of std::auto_ptr

The std::auto_ptr class template was deprecated in C++11, so GCC now warns about its usage. This warning can be suppressed with the -Wno-deprecated-declarations command-line option, though we advise to port the code to use C++11's std::unique_ptr instead.
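A minimal porting sketch: std::unique_ptr replaces std::auto_ptr, and ownership transfer uses moves instead of the surprising copy semantics of auto_ptr.

```cpp
#include <memory>

std::unique_ptr<int> make_owner() {
  std::unique_ptr<int> p(new int(42));
  return p;                          // implicitly moved, not copied
}
```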

'constexpr' needed for in-class initialization of static data member

Since C++11, the constexpr keyword is needed when initializing a non-integral static data member in a class. As a GNU extension, the following program is accepted in C++03 (albeit with a -Wpedantic warning):

struct X {
  const static double i = 10;
};

The C++11 standard supports such in-class initialization, but only with constexpr, so the GNU extension is no longer accepted for C++11 or later. Programs relying on the extension are now rejected with an error. The fix is to use constexpr instead of const.
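The example then compiles in C++11 and later once constexpr replaces const:

```cpp
struct X {
  static constexpr double i = 10;
};
```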

Stricter flexible array member rules

As of this release, the C++ compiler is now more strict about flexible array member rules. As a consequence, the following code is no longer accepted:

union U {
  int i;
  char a[];
};

Furthermore, the C++ compiler now rejects structures with a flexible array member as the only member:

struct S {
  char a[];
};

Finally, the type and mangling of flexible array members has changed from previous releases. While in GCC 5 and prior the type of a flexible array member is an array of zero elements (a GCC extension), in GCC 6 it is that of an array of an unspecified bound (i.e., T[] as opposed to T[0]). This is a silent ABI change with no corresponding -fabi-version or -Wabi option to disable or warn about.
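For reference, a sketch of a form that remains accepted: a flexible array member that follows at least one other member of a struct (still a GNU extension in C++).

```cpp
struct Packet {
  int len;
  char data[];   // OK: not the sole member, and not inside a union
};
```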

More aggressive optimization of -flifetime-dse

The C++ compiler (with -flifetime-dse enabled) is more aggressive about dead-store elimination in situations where a memory store to a location precedes the construction of an object at that memory location. Such situations are commonly found in programs which zero memory in a custom new operator:

#include <stdlib.h>
#include <string.h>
#include <assert.h>

struct A
{
  A() {}

  void* operator new(size_t s)
  {
    void* ptr = malloc(s);
    memset(ptr, 0xFF, s);
    return ptr;
  }

  void operator delete(void* ptr) { free(ptr); }

  int value;
};

int main()
{
  A* a = new A;
  assert(a->value == -1); // Use of uninitialized value
  delete a;
}

An object's constructor begins the lifetime of a new object at the relevant memory location, so any stores to that memory location which happen before the constructor are considered "dead stores" and so can be optimized away. If the memory needs to be initialized to specific values then that should be done by the constructor, not by code that happens before the constructor.

If the program cannot be fixed to remove the undefined behavior then the option -flifetime-dse=1 can be used to disable this optimization.
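The conforming fix, sketched here: initialize the member in the constructor, whose stores begin the object's lifetime and therefore cannot be eliminated.

```cpp
struct A {
  int value;
  A() : value(-1) {}   // replaces the memset performed in operator new
};
```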


-Wmisleading-indentation

A new warning, -Wmisleading-indentation, was added to -Wall, warning about places where the indentation of the code might mislead a human reader about the control flow:

sslKeyExchange.c: In function 'SSLVerifySignedServerKeyExchange':
sslKeyExchange.c:629:3: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
sslKeyExchange.c:631:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the 'if'
        goto fail;

This has highlighted genuine bugs, often due to missing braces, but it sometimes reports warnings for poorly-indented files, or on projects with unusual indentation. This may cause build errors if you have -Wall -Werror in your project.

The best fix is usually to fix the indentation of the code to match the block structure, or to fix the block structure by adding missing braces. If changing the source is not practical or desirable (e.g. for autogenerated code, or to avoid churn in the source history), the warning can be disabled by adding -Wno-misleading-indentation to the build flags. Alternatively, you can disable it for just one part of a source file or function using pragmas:

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wmisleading-indentation"

/* (code for which the warning is to be disabled)  */

#pragma GCC diagnostic pop

Source files that mix tabs and spaces and do not use 8-space tabs may trigger warnings. A real-world example was a source file containing an Emacs directive to display tabs as 4 spaces wide:

  /* -*- Mode: C; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 4 -*- */

The mixture of tabs and spaces did correctly reflect the block structure when viewed in Emacs, but not in other editors, or in an HTML view of the source repository. By default, -Wmisleading-indentation assumes tabs to be 8 spaces wide. It would have been possible to avoid this warning by adding -ftabstop=4 to the build flags for this file, but given that the code was confusing when viewed in other editors, the indentation of the source was fixed instead.


-Wnonnull-compare

A new warning, -Wnonnull-compare, was added to -Wall. It warns about comparing parameters declared as nonnull with NULL. For example, the compiler now warns about the following code:

__attribute__((nonnull)) void
foo (void *p)
{
  if (p == NULL)
    abort ();
  // ...
}
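One possible fix, sketched with an illustrative function: keep the attribute and drop the redundant comparison, since the compiler may assume a nonnull parameter is never null anyway. (Alternatively, drop the attribute if null arguments must genuinely be handled.)

```cpp
// No `p == NULL` check, so no -Wnonnull-compare warning.
__attribute__((nonnull)) int
deref (int *p)
{
  return *p;
}
```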

Plugin issues

The internals of GCC have seen various improvements, and these may affect plugins. Some notes on porting GCC plugins to GCC 6 follow.

gimple became a struct, rather than a pointer

Prior to GCC 6, gimple meant a pointer to a statement. It was a typedef aliasing the type struct gimple_statement_base *:

/* Excerpt from GCC 5's coretypes.h.  */
typedef struct gimple_statement_base *gimple;
typedef const struct gimple_statement_base *const_gimple;
typedef gimple gimple_seq;

As of GCC 6, the code above became:

/* Excerpt from GCC 6's coretypes.h.  */
struct gimple;
typedef gimple *gimple_seq;

gimple is now the statement struct itself, not a pointer. The gimple struct is now the base class of the gimple statement class hierarchy, and throughout GCC every instance of gimple was changed to a gimple * (revision r227941 is the commit in question). The typedef const_gimple is no more; use const gimple * if you need to represent a pointer to an unmodifiable gimple statement.

Plugins that work with gimple will need to be updated to reflect this change. If you aim for compatibility with both GCC 6 and earlier releases, it may be cleanest to introduce a compatibility typedef in your plugin, such as:

#if (GCC_VERSION >= 6000)
typedef gimple *gimple_stmt_ptr;
#else
typedef gimple gimple_stmt_ptr;
#endif

See also: Marek Polacek, "Fedora mass rebuild 2016 on x86_64".