__intN patch 3/5: main __int128 -> __intN conversion.
Joseph S. Myers
joseph@codesourcery.com
Thu Aug 21 20:53:00 GMT 2014
On Wed, 13 Aug 2014, DJ Delorie wrote:
> +  for (i = 0; i < NUM_INT_N_ENTS; i ++)
> +    {
> +      int_n_trees[i].signed_type = make_signed_type (int_n_data[i].bitsize);
> +      int_n_trees[i].unsigned_type = make_unsigned_type (int_n_data[i].bitsize);
> +      TYPE_SIZE (int_n_trees[i].signed_type) = bitsize_int (int_n_data[i].bitsize);
> +      TYPE_SIZE (int_n_trees[i].unsigned_type) = bitsize_int (int_n_data[i].bitsize);
> +
> +      if (int_n_data[i].bitsize > LONG_LONG_TYPE_SIZE)
> +        {
> +          integer_types[itk_intN_0 + i * 2] = int_n_trees[i].signed_type;
> +          integer_types[itk_unsigned_intN_0 + i * 2] = int_n_trees[i].unsigned_type;
> +        }
> +    }
Why are types only entered in integer_types if wider than long long?
> +static bool
> +standard_type_bitsize (int bitsize)
> +{
> +  /* As a special exception, we always want __int128 enabled if possible. */
> +  if (bitsize == 128)
> +    return false;
> +  if (bitsize == CHAR_TYPE_SIZE
> +      || bitsize == SHORT_TYPE_SIZE
> +      || bitsize == INT_TYPE_SIZE
> +      || bitsize == LONG_TYPE_SIZE
> +      || (bitsize == LONG_LONG_TYPE_SIZE && LONG_LONG_TYPE_SIZE < GET_MODE_BITSIZE (TImode)))
I don't think there should be a special case depending on the size of long
long here.
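Without that special case, the predicate would simply test against the standard type sizes. A standalone sketch of that shape (the `*_TYPE_SIZE` macros here are stand-ins set to typical LP64 values, not the real target macros):

```c
#include <stdbool.h>

/* Stand-ins for the target macros (values for a typical LP64 target).  */
#define CHAR_TYPE_SIZE 8
#define SHORT_TYPE_SIZE 16
#define INT_TYPE_SIZE 32
#define LONG_TYPE_SIZE 64
#define LONG_LONG_TYPE_SIZE 64

/* Sketch of standard_type_bitsize with no special case keyed to the size
   of long long: a bit size is "standard" exactly when it matches one of
   the standard C integer types, except that 128 is never treated as
   standard so that __int128 stays available.  */
static bool
standard_type_bitsize (int bitsize)
{
  /* As a special exception, we always want __int128 enabled if possible.  */
  if (bitsize == 128)
    return false;
  return (bitsize == CHAR_TYPE_SIZE
          || bitsize == SHORT_TYPE_SIZE
          || bitsize == INT_TYPE_SIZE
          || bitsize == LONG_TYPE_SIZE
          || bitsize == LONG_LONG_TYPE_SIZE);
}
```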
> +  /* This must happen after the backend has a chance to process
> +     command line options, but before the parsers are
> +     initialized. */
> +  for (i = 0; i < NUM_INT_N_ENTS; i ++)
> +    if (targetm.scalar_mode_supported_p (int_n_data[i].m)
> +        && ! standard_type_bitsize (int_n_data[i].bitsize)
> +        && int_n_data[i].bitsize <= HOST_BITS_PER_WIDE_INT * 2)
> +      int_n_enabled_p[i] = true;
This HOST_BITS_PER_WIDE_INT * 2 check seems wrong. wide-int should allow
integer types bigger than HOST_BITS_PER_WIDE_INT * 2 to be supported; it's
the job of targetm.scalar_mode_supported_p to return the correct result
(not claiming to support such types if the target doesn't have the
required wide-int support or is missing anything else required), not for
target-independent code to override it. I don't see any definitions of
that hook that obviously get this wrong.
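The intended division of labour can be illustrated with a standalone sketch (the hook and the data table below are simplified stand-ins, not the real GCC interfaces): the generic loop consults only the target hook and the bit-size filter, with no host-width cap.

```c
#include <stdbool.h>

#define NUM_INT_N_ENTS 2

/* Simplified stand-in for int_n_data: just the bit sizes.  */
static const int int_n_bitsize[NUM_INT_N_ENTS] = { 128, 256 };

static bool int_n_enabled_p[NUM_INT_N_ENTS];

/* Stand-in for targetm.scalar_mode_supported_p: pretend this target
   supports a 128-bit scalar mode but not a 256-bit one.  The hook alone
   is responsible for knowing what the target can handle.  */
static bool
scalar_mode_supported_p (int bitsize)
{
  return bitsize <= 128;
}

static bool
standard_type_bitsize (int bitsize)
{
  return bitsize == 8 || bitsize == 16 || bitsize == 32 || bitsize == 64;
}

/* The generic enabling loop with no HOST_BITS_PER_WIDE_INT * 2 check:
   the decision rests entirely with the target hook.  */
static void
enable_int_n_types (void)
{
  for (int i = 0; i < NUM_INT_N_ENTS; i++)
    if (scalar_mode_supported_p (int_n_bitsize[i])
        && !standard_type_bitsize (int_n_bitsize[i]))
      int_n_enabled_p[i] = true;
}
```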
>  mode_for_size (unsigned int size, enum mode_class mclass, int limit)
>  {
> -  enum machine_mode mode;
> +  enum machine_mode mode = BLKmode;
> +  int i;
I don't see any need for this initialization to be added.
>    bprecision
> -    = MIN (precision + BITS_PER_UNIT_LOG + 1, MAX_FIXED_MODE_SIZE);
> +    = MIN (precision, MAX_FIXED_MODE_SIZE);
This change doesn't make sense to me - if it is needed, I think it should
be separated out rather than included in this patch, which should simply
be about providing generic __intN support in the cases where __int128 is
currently supported.
> @@ -1157,13 +1165,13 @@ enum omp_clause_map_kind
>    OMP_CLAUSE_MAP_ALLOC,
>    OMP_CLAUSE_MAP_TO,
>    OMP_CLAUSE_MAP_FROM,
>    OMP_CLAUSE_MAP_TOFROM,
>    /* The following kind is an internal only map kind, used for pointer based
>       array sections. OMP_CLAUSE_SIZE for these is not the pointer size,
> -     which is implicitly POINTER_SIZE / BITS_PER_UNIT, but the bias. */
> +     which is implicitly POINTER_SIZE_UNITS, but the bias. */
POINTER_SIZE_UNITS changes belong in another patch, not this one.
>    /* ptr_type_node can't be used here since ptr_mode is only set when
>       toplev calls backend_init which is not done with -E or pch. */
> -  psize = POINTER_SIZE / BITS_PER_UNIT;
> +  psize = POINTER_SIZE_UNITS;
Likewise.
> @@ -844,12 +846,47 @@ c_cpp_builtins (cpp_reader *pfile)
>    builtin_define_type_max ("__LONG_LONG_MAX__", long_long_integer_type_node);
>    builtin_define_type_minmax ("__WCHAR_MIN__", "__WCHAR_MAX__",
>                                underlying_wchar_type_node);
>    builtin_define_type_minmax ("__WINT_MIN__", "__WINT_MAX__", wint_type_node);
>    builtin_define_type_max ("__PTRDIFF_MAX__", ptrdiff_type_node);
>    builtin_define_type_max ("__SIZE_MAX__", size_type_node);
> +  for (i = 0; i < NUM_INT_N_ENTS; i ++)
> +    if (int_n_enabled_p[i])
> +      {
> +        char buf[35+20+20];
> +        char buf_min[15+20];
> +        char buf_max[15+20];
> +
> +        sprintf(buf_min, "__INT%d_MIN__", int_n_data[i].bitsize);
> +        sprintf(buf_max, "__INT%d_MAX__", int_n_data[i].bitsize);
All this block of code appears to be new rather than replacing any
existing code doing something similar with __int128. As such, I think
it's best considered separately from the main __intN support. Also note
missing spaces before '(' in this code.
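For reference, in GNU style those calls carry a space before the open parenthesis. A standalone illustration (the helper name and the bitsize value are purely illustrative):

```c
#include <stdio.h>
#include <string.h>

/* The same formatting calls written in GNU style, with a space
   before '(' (bitsize is just an illustrative value here).  */
static void
format_limits (char *buf_min, char *buf_max, int bitsize)
{
  sprintf (buf_min, "__INT%d_MIN__", bitsize);
  sprintf (buf_max, "__INT%d_MAX__", bitsize);
}
```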
Some of this may be needed for libstdc++, but not all. As far as I can
tell, the existing __glibcxx_min, __glibcxx_max, __glibcxx_digits,
__glibcxx_digits10, __glibcxx_max_digits10 macros can be used in most
cases and avoid any need to predefine macros for the min or max of __intN;
you only need to communicate which types exist and their sizes in bits
(that is, a single macro predefine for each N, with anything else being
considered separately if otherwise thought desirable).
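The point that min and max need not be predefined can be seen from a sketch of how such macros derive everything from just a type and its bit count (the `hyp_` names are hypothetical stand-ins, not the real `__glibcxx_*` definitions):

```c
/* Whether T is a signed type.  */
#define hyp_signed(T) ((T) -1 < (T) 0)
/* Value bits: B minus one sign bit for signed types.  */
#define hyp_digits(T, B) ((B) - hyp_signed (T))
/* Max: 2^(digits-1) - 1, doubled and incremented to avoid overflow
   for signed types; all-ones for unsigned types.  */
#define hyp_max(T, B) \
  (hyp_signed (T) \
   ? (((((T) 1 << (hyp_digits (T, B) - 1)) - 1) << 1) + 1) \
   : (T) -1)
#define hyp_min(T, B) (hyp_signed (T) ? -hyp_max (T, B) - 1 : (T) 0)
```

So a single predefine per N giving the bit size (plus the type's existence) is enough for a library header to recover the limits itself.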
> +      if (!in_system_header_at (input_location)
> +          /* As a special exception, allow a type that's used
> +             for __SIZE_TYPE__. */
> +          && int_n_data[specs->int_n_idx].bitsize != POINTER_SIZE)
>         pedwarn (loc, OPT_Wpedantic,
> -                "ISO C does not support %<__int128%> type");
> +                "ISO C does not support %<__int%d%> types",
> +                int_n_data[specs->int_n_idx].bitsize);
I don't think there should be such a special exception. Existing practice
is for testcases to do "__extension__ typedef __SIZE_TYPE__ size_t;" to
avoid -pedantic diagnostics for size_t being unsigned long long on Win64.
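That idiom in full (a complete testcase fragment; `__extension__` is what suppresses the pedantic diagnostic about the underlying type, e.g. unsigned long long on Win64):

```c
/* Standard testsuite idiom for obtaining size_t without including a
   system header: __extension__ silences -pedantic warnings about the
   target's choice of underlying type.  */
__extension__ typedef __SIZE_TYPE__ size_t;

/* size_t obtained this way matches the type of sizeof.  */
static size_t
size_of_int (void)
{
  return sizeof (int);
}
```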
> -  align = exact_log2 (POINTER_SIZE / BITS_PER_UNIT);
> +  align = ceil_log2 (POINTER_SIZE_UNITS);
Again, belongs in a different patch.
--
Joseph S. Myers
joseph@codesourcery.com