compiler: don't generate stubs for ambiguous direct interface methods
The current implementation checks whether it has to generate a stub method for a
promoted method of an embedded struct field in Type::build_stub_methods(). If
the promoted method is ambiguous, it's simply skipped. But struct types that
can fit in an interface value (e.g. structs that consist of a single pointer
field) get a second chance in Type::build_direct_iface_stub_methods().
This patch adds the same check used by Type::build_stub_methods() to
Type::build_direct_iface_stub_methods().
As recently done for std::basic_string, __gnu_cxx::__versa_string
equality comparisons can check lengths first for any character type and
traits type, not only for std::char_traits<char>.
libstdc++-v3/ChangeLog:
PR libstdc++/101482
* include/ext/vstring.h (operator==): Always check lengths
before comparing.
Nathan Sidwell [Thu, 16 Jun 2022 17:14:56 +0000 (10:14 -0700)]
c++: Elide inactive initializer fns from init array
There's no point adding no-op initializer fns (that a module might
have) to the static initializer list. Also, we can add any objc
initializer call to a partial initializer function and simplify some
control flow.
gcc/cp/
* decl2.cc (finish_objects): Add startp parameter, adjust.
(generate_ctor_or_dtor_function): Detect empty fn, and don't
generate unnecessary code. Remove objc startup here ...
(c_parse_final_cleanups): ... do it here.
Andrew MacLeod [Thu, 16 Jun 2022 16:44:33 +0000 (12:44 -0400)]
Clear invariant bit for inferred ranges.
The range of an invariant SSA (no outgoing edge range anywhere) is not tracked.
If an inferred range is registered, remove the invariant flag.
* gimple-range-cache.cc (ranger_cache::apply_inferred_ranges): If name
was invariant before, clear the invariant bit.
* gimple-range-gori.cc (gori_map::set_range_invariant): Add a flag.
* gimple-range-gori.h (gori_map::set_range_invariant): Adjust prototype.
Jakub Jelinek [Thu, 16 Jun 2022 12:37:06 +0000 (14:37 +0200)]
match.pd: Improve y == MIN || x < y optimization [PR105983]
On the following testcase, we only optimize bar, where this optimization
is performed at GENERIC folding time; on GIMPLE it doesn't trigger
anymore, as we actually don't see
(bit_and (ne @1 min_value) (ge @0 @1))
but
(bit_and (ne @1 min_value) (le @1 @0))
genmatch handles the :c modifier not just on commutative operations but
also on comparisons, in which case it swaps the comparison.
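A minimal illustration of the two equivalent spellings (function names are
made up here, not taken from the committed testcase); before this change only
the first form was matched:

  #include <limits.h>

  int with_ge (int x, int y)
  {
    /* Seen as (bit_and (ne @1 min_value) (ge @0 @1)).  */
    return y != INT_MIN && x >= y;
  }

  int with_le (int x, int y)
  {
    /* Same thing with the comparison swapped: (le @1 @0).  */
    return y != INT_MIN && y <= x;
  }

With :cs, both forms now fold to x > y - 1.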
2022-06-16 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/105983
* match.pd (y == XXX_MIN || x < y -> x <= y - 1,
y != XXX_MIN && x >= y -> x > y - 1): Use :cs instead of :s
on non-equality comparisons.
Jakub Jelinek [Thu, 16 Jun 2022 12:36:04 +0000 (14:36 +0200)]
match.pd: Fix up __builtin_mul_overflow_p signed type optimization [PR105984]
Earlier in the simplification pattern, we require that @0 has a type
compatible with the type of IMAGPART_EXPR, but for @1, which is a non-zero
constant, all we require is that the constant fits into that type.
Later the code checks if the constant is negative, because when min / max
values are divided by a negative divisor, lo will be higher than hi.
In the following testcase, @1 has unsigned char type, while @0 has
int type, so @1, which is 254, is wi::neg_p and we were swapping lo and hi
even when @1 cast to int isn't negative.
We could use tree_int_cst_sgn (@1) < 0 as the check instead and it would
work both for narrower types of @1 and even same or wider ones, but
I've noticed we probably don't want to call fold_convert (TREE_TYPE (@0), @1)
twice and when we save that result in a temporary, we can just use wi::neg_p
on that temporary.
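A sketch of the situation (hypothetical function, not the committed
testcase): the constant multiplier has unsigned char type, where 254 has its
sign bit set, yet (int) 254 is not negative, so lo and hi must not be
swapped:

  int f (int x)
  {
    /* stype is int, taken from the literal 0; @1 is 254 in
       unsigned char, which is wi::neg_p in that type's precision.  */
    return __builtin_mul_overflow_p (x, (unsigned char) 254, 0);
  }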
2022-06-16 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/105984
* match.pd (__builtin_mul_overflow_p (x, cst, (stype) 0) ->
x > stype_max / cst || x < stype_min / cst): fold_convert @1
to TREE_TYPE (@0) just once and test for negative divisor
also on that folded constant instead of on @1.
Jakub Jelinek [Thu, 16 Jun 2022 08:58:58 +0000 (10:58 +0200)]
expand: Fix up IFN_ATOMIC_{BIT*,*CMP_0} expansion [PR105951]
Both IFN_ATOMIC_BIT_TEST_AND_* and IFN_ATOMIC_*_FETCH_CMP_0 ifns
are matched if their corresponding optab is implemented for the particular
mode. The fact that those optabs are implemented doesn't guarantee
they will succeed though; they can just FAIL in their expansion.
The expansion in that case uses expand_atomic_fetch_op as a fallback, but,
as has been reported and can be reproduced on the testcases,
even those can fail and we didn't have any fallback after that.
For IFN_ATOMIC_BIT_TEST_AND_* we actually have two such calls. One is
done whenever we lost the lhs of the ifn at some point between matching
it in tree-ssa-ccp.cc and expansion. For that case, the following patch
just falls through and expands as if there were a lhs, creating a temporary
for it. For the other expand_atomic_fetch_op call in the same expander
and for the only expand_atomic_fetch_op call in the other, this falls
back the hard way, by constructing a CALL_EXPR to the call from which
the ifn has been matched and expanding that. Either it is lucky and manages
to expand inline, or it emits a libatomic API call.
So that we don't have to rediscover which builtin function to call in the
fallback, we record at tree-ssa-ccp.cc time gimple_call_fn (call) in
an extra argument to the ifn.
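For reference, a sketch of the kind of source that tree-ssa-ccp.cc matches
into IFN_ATOMIC_BIT_TEST_AND_SET (an illustrative example, not one of the
new tests):

  int f (int *p)
  {
    /* A fetch_or of a single bit followed by a test of that same bit.  */
    return (__atomic_fetch_or (p, 4, __ATOMIC_SEQ_CST) & 4) != 0;
  }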
2022-06-16 Jakub Jelinek <jakub@redhat.com>
PR middle-end/105951
* tree-ssa-ccp.cc (optimize_atomic_bit_test_and,
optimize_atomic_op_fetch_cmp_0): Remember gimple_call_fn (call)
as last argument to the internal functions.
* builtins.cc (expand_ifn_atomic_bit_test_and): Adjust for the
extra call argument to ifns. If expand_atomic_fetch_op fails for the
lhs == NULL_TREE case, fall through into the optab code with
gen_reg_rtx (mode) as target. If second expand_atomic_fetch_op
fails, construct a CALL_EXPR and expand that.
(expand_ifn_atomic_op_fetch_cmp_0): Adjust for the extra call argument
to ifns. If expand_atomic_fetch_op fails, construct a CALL_EXPR and
expand that.
* gcc.target/i386/pr105951-1.c: New test.
* gcc.target/i386/pr105951-2.c: New test.
Haochen Gui [Mon, 30 May 2022 01:12:34 +0000 (09:12 +0800)]
rs6000: add V1TI into vector comparison expand [PR103316]
This patch adds V1TI mode to a new mode iterator used in vector comparison,
shift and rotation expands. It also merges some vector comparison, shift and
rotation expands for V1TI and other vector integer modes, as they have
similar patterns. The V1TI-only expands are removed.
gcc/
PR target/103316
* config/rs6000/rs6000-builtin.cc (rs6000_gimple_fold_builtin): Enable
gimple folding for RS6000_BIF_VCMPEQUT, RS6000_BIF_VCMPNET,
RS6000_BIF_CMPGE_1TI, RS6000_BIF_CMPGE_U1TI, RS6000_BIF_VCMPGTUT,
RS6000_BIF_VCMPGTST, RS6000_BIF_CMPLE_1TI, RS6000_BIF_CMPLE_U1TI.
* config/rs6000/vector.md (VEC_IC): New mode iterator. Add support
for new Power10 V1TI instructions.
(vec_cmp<mode><mode>): Set mode iterator to VEC_IC.
(vec_cmpu<mode><mode>): Likewise.
(vector_nlt<mode>): Set mode iterator to VEC_IC.
(vector_nltv1ti): Remove.
(vector_gtu<mode>): Set mode iterator to VEC_IC.
(vector_gtuv1ti): Remove.
(vector_nltu<mode>): Set mode iterator to VEC_IC.
(vector_nltuv1ti): Remove.
(vector_geu<mode>): Set mode iterator to VEC_IC.
(vector_ngt<mode>): Likewise.
(vector_ngtv1ti): Remove.
(vector_ngtu<mode>): Set mode iterator to VEC_IC.
(vector_ngtuv1ti): Remove.
(vector_gtu_<mode>_p): Set mode iterator to VEC_IC.
(vector_gtu_v1ti_p): Remove.
(vrotl<mode>3): Set mode iterator to VEC_IC. Emit insns for V1TI.
(vrotlv1ti3): Remove.
(vashr<mode>3): Set mode iterator to VEC_IC. Emit insns for V1TI.
(vashrv1ti3): Remove.
liuhongt [Tue, 31 May 2022 09:13:21 +0000 (17:13 +0800)]
Simplify (B * v + C) * D -> BD* v + CD when B,C,D are all INTEGER_CST.
Similar for (v + B) * C + D -> C * v + BCD.
Don't simplify it when there's overflow and overflow is UB for type v.
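A minimal sketch of the two folds (made-up functions; with an unsigned type
overflow is not UB, so the simplification is always allowed):

  unsigned f (unsigned v)
  {
    return (3 * v + 5) * 7;      /* folds to 21 * v + 35 */
  }

  unsigned g (unsigned v)
  {
    return (v + 5) * 7 + 2;      /* folds to 7 * v + 37 */
  }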
gcc/ChangeLog:
PR tree-optimization/53533
* match.pd: Simplify (B * v + C) * D -> BD * v + CD and
(v + B) * C + D -> C * v + BCD when B,C,D are all INTEGER_CST,
and there's no overflow or !TYPE_OVERFLOW_UNDEFINED.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr53533-1.c: New test.
* gcc.target/i386/pr53533-2.c: New test.
* gcc.target/i386/pr53533-3.c: New test.
* gcc.target/i386/pr53533-4.c: New test.
* gcc.target/i386/pr53533-5.c: New test.
* gcc.dg/vect/slp-11a.c: Adjust testcase.
xtensa: Eliminate unwanted reg-reg moves during DFmode input reloads
When spilled DFmode registers are reloaded in, they are first loaded into a
pair of SImode regs and then copied from those regs. Such unwanted reg-reg
moves are seemingly not eliminated at the "cprop_hardreg" stage, even though
output reloads have no such problem.
Luckily it is easy to resolve such inefficiencies, with the use of a
peephole2 pattern.
gcc/ChangeLog:
* config/xtensa/predicates.md (reload_operand):
New predicate.
* config/xtensa/xtensa.md: New peephole2 pattern.
xtensa: Add some dedicated patterns that correspond to GIMPLE canonicalizations
This patch offers better RTL representations than the straightforward
derivations of some tree optimizers' canonicalized forms.
- rounding up to even, such as '(x + (x & 1))', is canonicalized to
'((x + 1) & -2)', but the former is one instruction less than the latter
in Xtensa ISA.
- signed greater than or equal to zero as a logical value, '((signed)x >= 0)',
is canonicalized to '((unsigned)(x ^ -1) >> 31)', but the equivalent
'(((signed)x >> 31) + 1)' is one instruction less.
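As an illustration, the C sources behind the two canonicalized forms above:

  unsigned round_up_to_even (unsigned x)
  {
    return x + (x & 1);    /* canonicalized to (x + 1) & -2 */
  }

  int ge_zero (int x)
  {
    return x >= 0;         /* canonicalized to ((unsigned)(x ^ -1)) >> 31 */
  }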
gcc/ChangeLog:
* config/xtensa/xtensa.md (*round_up_to_even):
New insn-and-split pattern.
(*signed_ge_zero): Ditto.
This patch introduces support for sibling call optimization, when call0
ABI is in effect.
gcc/ChangeLog:
* config/xtensa/xtensa-protos.h (xtensa_prepare_expand_call,
xtensa_emit_sibcall): New prototypes.
(xtensa_expand_epilogue): Add new argument that specifies whether
or not sibling call.
* config/xtensa/xtensa.cc (TARGET_FUNCTION_OK_FOR_SIBCALL):
New macro definition.
(xtensa_prepare_expand_call): New function in order to share
the common code.
(xtensa_emit_sibcall, xtensa_function_ok_for_sibcall):
New functions.
(xtensa_expand_epilogue): Add new argument sibcall_p and use it
for sibling call handling.
* config/xtensa/xtensa.md (call, call_value):
Use xtensa_prepare_expand_call.
(call_internal, call_value_internal):
Add the condition in order to be disabled if sibling call.
(sibcall, sibcall_value, sibcall_epilogue): New expansions.
(sibcall_internal, sibcall_value_internal): New insn patterns,
and split ones in order to take care of the indirect sibcalls.
David Malcolm [Wed, 15 Jun 2022 21:44:14 +0000 (17:44 -0400)]
analyzer: fix up paths for inlining (PR analyzer/105962)
-fanalyzer runs late compared to other code analysis tools, in that it
runs on the partially-optimized gimple-ssa representation. I chose this
point to run in the hope of easy integration with LTO.
As PR analyzer/105962 notes, this means that function inlining can occur
before the -fanalyzer "sees" the user's code. For example given:
  void foo (void *p)
  {
    __builtin_free (p);
  }

  void bar (void *q)
  {
    foo (q);
    foo (q);
  }
Below -O2, -fanalyzer shows the calls and returns, but at -O2 "foo" has
been inlined away, leading to this unhelpful output:
In function ‘foo’,
inlined from ‘bar’ at inline-1.c:9:3:
inline-1.c:3:3: warning: double-‘free’ of ‘q’ [CWE-415] [-Wanalyzer-double-free]
3 | __builtin_free (p);
| ^~~~~~~~~~~~~~~~~~
‘bar’: events 1-2
|
| 3 | __builtin_free (p);
| | ^~~~~~~~~~~~~~~~~~
| | |
| | (1) first ‘free’ here
| | (2) second ‘free’ here; first ‘free’ was at (1)
where the stack frame information in the execution path suggests that these
events are happening in "bar", in the top stack frame.
This is what the analyzer sees, but I find it hard to decipher such
output. Hence, as a workaround for the fact that -fanalyzer runs so
late, this patch attempts to reconstruct the "true" stack frame
information, and to inject events showing inline calls, based on the
inlining chain information recorded in the location_t values for the events.
Doing so leads to this output at -O2 on the above example (with
-fdiagnostics-show-path-depths):
In function ‘foo’,
inlined from ‘bar’ at inline-1.c:9:3:
inline-1.c:3:3: warning: double-‘free’ of ‘q’ [CWE-415] [-Wanalyzer-double-free]
3 | __builtin_free (p);
| ^~~~~~~~~~~~~~~~~~
‘bar’: events 1-2 (depth 1)
|
| 6 | void bar (void *q)
| | ^~~
| | |
| | (1) entry to ‘bar’
| 7 | {
| 8 | foo (q);
| | ~
| | |
| | (2) inlined call to ‘foo’ from ‘bar’
|
+--> ‘foo’: event 3 (depth 2)
|
| 3 | __builtin_free (p);
| | ^~~~~~~~~~~~~~~~~~
| | |
| | (3) first ‘free’ here
|
<------+
|
‘bar’: event 4 (depth 1)
|
| 9 | foo (q);
| | ^
| | |
| | (4) inlined call to ‘foo’ from ‘bar’
|
+--> ‘foo’: event 5 (depth 2)
|
| 3 | __builtin_free (p);
| | ^~~~~~~~~~~~~~~~~~
| | |
| | (5) second ‘free’ here; first ‘free’ was at (3)
|
reconstructing the calls and returns.
The patch also adds a new option, -fno-analyzer-undo-inlining, which can
be used to disable this reconstruction, restoring the output listed
above (this time with -fdiagnostics-show-path-depths):
In function ‘foo’,
inlined from ‘bar’ at inline-1.c:9:3:
inline-1.c:3:3: warning: double-‘free’ of ‘q’ [CWE-415] [-Wanalyzer-double-free]
3 | __builtin_free (p);
| ^~~~~~~~~~~~~~~~~~
‘bar’: events 1-2 (depth 1)
|
| 3 | __builtin_free (p);
| | ^~~~~~~~~~~~~~~~~~
| | |
| | (1) first ‘free’ here
| | (2) second ‘free’ here; first ‘free’ was at (1)
|
gcc/analyzer/ChangeLog:
PR analyzer/105962
* analyzer.opt (fanalyzer-undo-inlining): New option.
* checker-path.cc: Include "diagnostic-core.h" and
"inlining-iterator.h".
(event_kind_to_string): Handle EK_INLINED_CALL.
(class inlining_info): New class.
(checker_event::checker_event): Move here from checker-path.h.
Store original fndecl and depth, and calculate effective fndecl
and depth based on inlining information.
(checker_event::dump): Emit original depth as well as effective
depth when they differ; likewise for fndecl.
(region_creation_event::get_desc): Use m_effective_fndecl.
(inlined_call_event::get_desc): New.
(inlined_call_event::get_meaning): New.
(checker_path::inject_any_inlined_call_events): New.
* checker-path.h (enum event_kind): Add EK_INLINED_CALL.
(checker_event::checker_event): Make protected, and move
definition to checker-path.cc.
(checker_event::get_fndecl): Use effective fndecl.
(checker_event::get_stack_depth): Use effective stack depth.
(checker_event::get_logical_location): Use effective stack depth.
(checker_event::get_original_stack_depth): New.
(checker_event::m_fndecl): Rename to...
(checker_event::m_original_fndecl): ...this.
(checker_event::m_depth): Rename to...
(checker_event::m_original_depth): ...this.
(checker_event::m_effective_fndecl): New field.
(checker_event::m_effective_depth): New field.
(class inlined_call_event): New checker_event subclass.
(checker_path::inject_any_inlined_call_events): New decl.
* diagnostic-manager.cc: Include "inlining-iterator.h".
(diagnostic_manager::emit_saved_diagnostic): Call
checker_path::inject_any_inlined_call_events.
(diagnostic_manager::prune_for_sm_diagnostic): Handle
EK_INLINED_CALL.
* engine.cc (tainted_args_function_custom_event::get_desc): Use
effective fndecl.
* inlining-iterator.h: New file.
gcc/testsuite/ChangeLog:
PR analyzer/105962
* gcc.dg/analyzer/inlining-1-multiline.c: New test.
* gcc.dg/analyzer/inlining-1-no-undo.c: New test.
* gcc.dg/analyzer/inlining-1.c: New test.
* gcc.dg/analyzer/inlining-2-multiline.c: New test.
* gcc.dg/analyzer/inlining-2.c: New test.
* gcc.dg/analyzer/inlining-3-multiline.c: New test.
* gcc.dg/analyzer/inlining-3.c: New test.
* gcc.dg/analyzer/inlining-4-multiline.c: New test.
* gcc.dg/analyzer/inlining-4.c: New test.
* gcc.dg/analyzer/inlining-5-multiline.c: New test.
* gcc.dg/analyzer/inlining-5.c: New test.
* gcc.dg/analyzer/inlining-6-multiline.c: New test.
* gcc.dg/analyzer/inlining-6.c: New test.
* gcc.dg/analyzer/inlining-7-multiline.c: New test.
* gcc.dg/analyzer/inlining-7.c: New test.
gcc/ChangeLog:
PR analyzer/105962
* doc/invoke.texi: Add -fno-analyzer-undo-inlining.
* tree-diagnostic-path.cc (default_tree_diagnostic_path_printer):
Extend -fdiagnostics-path-format=separate-events so that with
-fdiagnostics-show-path-depths it prints fndecls as well as stack
depths.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Wed, 15 Jun 2022 21:40:33 +0000 (17:40 -0400)]
analyzer: show saved diagnostics as nodes in .eg.dot dumps
I've been using this tweak to the output of
-fdump-analyzer-exploded-graph in my working copies for a while;
the extra red nodes make it *much* easier to find the places where
diagnostics are being emitted (or rejected by the diagnostic_manager).
gcc/analyzer/ChangeLog:
* diagnostic-manager.cc (saved_diagnostic::dump_dot_id): New.
(saved_diagnostic::dump_as_dot_node): New.
* diagnostic-manager.h (saved_diagnostic::dump_dot_id): New decl.
(saved_diagnostic::dump_as_dot_node): New decl.
* engine.cc (exploded_node::dump_dot): Add nodes for saved
diagnostics.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Wed, 15 Jun 2022 21:39:42 +0000 (17:39 -0400)]
analyzer: add more uninit test coverage
gcc/testsuite/ChangeLog:
* gcc.dg/analyzer/uninit-1.c: Add test coverage of attempts
to jump through an uninitialized function pointer, and of attempts
to pass an uninitialized value to a function call.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
Iain Buclaw [Wed, 15 Jun 2022 20:51:52 +0000 (22:51 +0200)]
d: Add `@no_sanitize' attribute to compiler and library.
The `@no_sanitize' attribute disables a particular sanitizer for this
function, analogous to `__attribute__((no_sanitize))'. The library also
defines `@noSanitize' to be compatible with the LLVM D compiler's
`ldc.attributes'.
gcc/d/ChangeLog:
* d-attribs.cc (d_langhook_attribute_table): Add no_sanitize.
(d_handle_no_sanitize_attribute): New function.
Iain Buclaw [Wed, 15 Jun 2022 17:44:36 +0000 (19:44 +0200)]
d: Add `@visibility' and `@hidden' attributes.
The `@visibility' attribute is functionally the same as
`__attribute__((visibility))', and `@hidden' is a convenience alias for
`@visibility("hidden")' defined in the `gcc.attributes' module.
As the visibility of a symbol is also indirectly controlled by the
`export' keyword, the handling of this in the code generation pass has
been improved so that conflicts will be appropriately diagnosed.
gcc/d/ChangeLog:
* d-attribs.cc (d_langhook_attribute_table): Add visibility.
(insert_type_attribute): Use decl_attributes instead of
merge_attributes.
(insert_decl_attribute): Likewise.
(apply_user_attributes): Do nothing when no UDAs applied.
(d_handle_visibility_attribute): New function.
* d-gimplify.cc (d_gimplify_binary_expr): Adjust.
* d-tree.h (set_visibility_for_decl): Declare.
* decl.cc (get_symbol_decl): Move setting of visibility flags to...
(set_visibility_for_decl): ... here. New function.
* types.cc (TypeVisitor::visit (TypeStruct *)): Call
set_visibility_for_decl().
(TypeVisitor::visit (TypeClass *)): Likewise.
gcc/testsuite/ChangeLog:
* gdc.dg/attr_visibility1.d: New test.
* gdc.dg/attr_visibility2.d: New test.
* gdc.dg/attr_visibility3.d: New test.
The recent internal-fn “clean-ups” triggered problems on nvptx
because some of the omp_simt_* patterns had modeless operands.
I wondered about adapting expand_fn_using_insn to cope with that,
but then the problem becomes: what should the mode of operand 0
be when there is no lhs? The answer depends on the target insn.
For GOMP_SIMT_ENTER_ALLOC the answer was: use Pmode.
For GOMP_SIMT_ORDERED_PRED and others the answer was: elide the call.
(However, GOMP_SIMT_ORDERED_PRED doesn't seem to have ECF_* flags
that would normally allow it to be dropped at the gimple level.)
So these instructions seem to be special enough that they need
their own code after all. This patch reverts the second patch
and most of the first. The only part retained from the first
is splitting expand_fn_using_insn out of expand_direct_optab_fn,
since I think expand_fn_using_insn could still be useful in future.
gcc/
PR middle-end/105975
Revert everything apart from the expand_fn_using_insn and
expand_direct_optab_fn changes from:
* internal-fn.def (DEF_INTERNAL_INSN_FN): New macro.
(GOMP_SIMT_ENTER_ALLOC, GOMP_SIMT_EXIT, GOMP_SIMT_LANE)
(GOMP_SIMT_LAST_LANE, GOMP_SIMT_ORDERED_PRED, GOMP_SIMT_VOTE_ANY)
(GOMP_SIMT_XCHG_BFLY, GOMP_SIMT_XCHG_IDX): Use it.
* internal-fn.h (direct_internal_fn_info::directly_mapped): New
member variable.
(direct_internal_fn_info::vectorizable): Reduce to 1 bit.
(direct_internal_fn_p): Also return true for internal functions
that map directly to instructions defined in target-insns.def.
(direct_internal_fn): Adjust comment accordingly.
* internal-fn.cc (direct_insn, optab1, optab2, vectorizable_optab1)
(vectorizable_optab2): New local macros.
(not_direct): Initialize directly_mapped.
(mask_load_direct, load_lanes_direct, mask_load_lanes_direct)
(gather_load_direct, len_load_direct, mask_store_direct)
(store_lanes_direct, mask_store_lanes_direct, vec_cond_mask_direct)
(vec_cond_direct, scatter_store_direct, len_store_direct)
(vec_set_direct, unary_direct, binary_direct, ternary_direct)
(cond_unary_direct, cond_binary_direct, cond_ternary_direct)
(while_direct, fold_extract_direct, fold_left_direct)
(mask_fold_left_direct, check_ptrs_direct): Use the macros above.
(expand_GOMP_SIMT_ENTER_ALLOC, expand_GOMP_SIMT_EXIT): Delete.
(expand_GOMP_SIMT_LANE, expand_GOMP_SIMT_LAST_LANE): Likewise.
(expand_GOMP_SIMT_ORDERED_PRED, expand_GOMP_SIMT_VOTE_ANY): Likewise.
(expand_GOMP_SIMT_XCHG_BFLY, expand_GOMP_SIMT_XCHG_IDX): Likewise.
(direct_internal_fn_types): Handle functions that map to instructions
defined in target-insns.def.
(direct_internal_fn_types): Likewise.
(direct_internal_fn_supported_p): Likewise.
(internal_fn_expanders): Likewise.
(expand_fn_using_insn): New function,
split out and adapted from...
(expand_direct_optab_fn): ...here.
(expand_GOMP_SIMT_ENTER_ALLOC): Use it.
(expand_GOMP_SIMT_EXIT): Likewise.
(expand_GOMP_SIMT_LANE): Likewise.
(expand_GOMP_SIMT_LAST_LANE): Likewise.
(expand_GOMP_SIMT_ORDERED_PRED): Likewise.
(expand_GOMP_SIMT_VOTE_ANY): Likewise.
(expand_GOMP_SIMT_XCHG_BFLY): Likewise.
(expand_GOMP_SIMT_XCHG_IDX): Likewise.
Richard Earnshaw [Wed, 15 Jun 2022 15:07:20 +0000 (16:07 +0100)]
arm: big-endian issue in gen_cpymem_ldrd_strd [PR105981]
The code in gen_cpymem_ldrd_strd has been incorrect for big-endian
since r230663. The problem is that we use gen_lowpart, etc. to split
the 64-bit quantity, but fail to account for the fact that these
routines are really dealing with 64-bit /values/ and in big-endian the
ordering of the sub-registers changes.
To fix this, I've renamed the conceptually misnamed low_reg and hi_reg
as first_reg and second_reg, and then used different logic for
big-endian targets to initialize these values. This makes the logic
clearer than trying to think about high bits and low bits.
gcc/ChangeLog:
PR target/105981
* config/arm/arm.cc (gen_cpymem_ldrd_strd): Rename low_reg and hi_reg
to first_reg and second_reg respectively. Initialize them correctly
when generating big-endian code.
Nathan Sidwell [Fri, 10 Jun 2022 18:57:38 +0000 (11:57 -0700)]
c++: Use better module partition naming
It turns out that 'implementation partition' is not a term used in the
std, and is confusing to users. Let's use the better term 'internal
partition'. While there, adjust header unit naming.
gcc/cp/
* module.cc (module_state::write_readme): Use less confusing
importable unit names.
Richard Earnshaw [Wed, 15 Jun 2022 12:42:23 +0000 (13:42 +0100)]
arm: fix thinko in arm_bfi_1_p() [PR105974]
I clearly wasn't thinking straight when I wrote the arm_bfi_1_p
function and used XUINT rather than UINTVAL when extracting CONST_INT
values. It seemed to work in testing, but was incorrect and failed
RTL checking.
Fixed thusly:
gcc/ChangeLog:
PR target/105974
* config/arm/arm.cc (arm_bfi_1_p): Use UINTVAL instead of XUINT.
Richard Biener [Wed, 15 Jun 2022 09:27:31 +0000 (11:27 +0200)]
tree-optimization/105971 - less surprising refs_may_alias_p_2
When DSE asks whether __real a is using __imag a it gets a surprising
result when a is a FUNCTION_DECL. The following makes sure this case
is less surprising to callers but keeping the bail-out for the
non-decl case where it is true that PTA doesn't track aliases to code
correctly.
2022-06-15 Richard Biener <rguenther@suse.de>
PR tree-optimization/105971
* tree-ssa-alias.cc (refs_may_alias_p_2): Put bail-out for
FUNCTION_DECL and LABEL_DECL refs after decl-decl disambiguation
to leak less surprising alias results.
Richard Biener [Wed, 15 Jun 2022 08:54:48 +0000 (10:54 +0200)]
tree-optimization/105969 - FPE with array diagnostics
For a [0][0] array we have to be careful when dividing by the element
size, which is zero for the outermost dimension. Luckily the division
is only part of an overflow check, which is pointless for an array size of zero.
2022-06-15 Richard Biener <rguenther@suse.de>
PR tree-optimization/105969
* gimple-ssa-sprintf.cc (get_origin_and_offset_r): Avoid division
by zero in overflow check.
Iain Buclaw [Tue, 14 Jun 2022 13:56:59 +0000 (15:56 +0200)]
d: Delay completing aggregate and enum types until after attributes have been applied.
Because of forward/recursive references, the TYPE_SIZE, TYPE_ALIGN, and
TYPE_MODE of structs and enums were set before laying out their members.
This adds a new macro TYPE_FORWARD_REFERENCES for storing those forward
references against the incomplete type, laying them out after the type
has been completed. Construction of the TYPE_DECL has also been moved
earlier in the type generation pass, which will allow the possibility
of adding gdc-specific type attributes to the D front-end in the future.
gcc/d/ChangeLog:
* d-attribs.cc (apply_user_attributes): Set ATTR_FLAG_TYPE_IN_PLACE
only on incomplete types.
* d-codegen.cc (copy_aggregate_type): Set TYPE_STUB_DECL after copy.
* d-compiler.cc (Compiler::onParseModule): Adjust.
* d-tree.h (AGGREGATE_OR_ENUM_TYPE_CHECK): Define.
(TYPE_FORWARD_REFERENCES): Define.
* decl.cc (gcc_attribute_p): Update documentation.
(DeclVisitor::visit (StructDeclaration *)): Exit before building type
node if gcc.attributes symbol.
(DeclVisitor::visit (ClassDeclaration *)): Build type node and add
TYPE_NAME to current binding level before emitting anything else.
(DeclVisitor::visit (InterfaceDeclaration *)): Likewise.
(DeclVisitor::visit (EnumDeclaration *)): Likewise.
(build_type_decl): Move rest_of_decl_compilation() call to
finish_aggregate_type().
* types.cc (insert_aggregate_field): Move layout_decl() call to
finish_aggregate_type().
(insert_aggregate_bitfield): Likewise.
(layout_aggregate_members): Adjust.
(finish_incomplete_fields): New function.
(finish_aggregate_type): Handle forward referenced field types. Call
rest_of_type_compilation() after completing the aggregate.
(TypeVisitor::visit (TypeEnum *)): Don't set size and alignment until
after apply_user_attributes(). Call rest_of_type_compilation() after
completing the enumeral.
(TypeVisitor::visit (TypeStruct *)): Call build_type_decl() before
apply_user_attributes(). Don't set size, alignment, and mode until
after apply_user_attributes().
(TypeVisitor::visit (TypeClass *)): Call build_type_decl() before
apply_user_attributes().
In f2ebf2d98efe0ac2314b58cf474f44cb8ebd5244 I'd forced the
chosen unroll factor to be a factor of the VF, in order to
work around an exact_div ICE in PR105254. This was completely
bogus -- clearly I didn't look in enough detail at why we ended
up with an unrolled VF that wasn't a multiple of the UF.
Kewen has since fixed the bug properly for PR105940, so this
patch reverts my earlier attempt. Sorry for the stupidity.
gcc/
Revert:
* config/aarch64/aarch64.cc
(aarch64_vector_costs::determine_suggested_unroll_factor): Take a
loop_vec_info as argument. Restrict the unroll factor to values
that divide the VF.
(aarch64_vector_costs::finish_cost): Update call accordingly.
gcc/testsuite/
* gcc.target/aarch64/sve/cost_model_14.c: New test.
Symbolic constants are substituted during lexing and apply only to bare
symbol names, not strings, so a define_constants name used inside a string
(such as an attribute value) was not replaced.
One option would have been to extend this lexing substitution
to define_*_attribute values as well. However, that would replace
symbolic names with integer constants in the generated .cc code,
making it less readable.
This patch goes for the more localised approach of only
applying define_constants when we want their integer value.
I don't think any changes to the docs are needed. This isn't
adding a new feature, it's just making an existing one work in
the expected way.
gcc/
* read-rtl.cc (find_int): Substitute symbolic constants
before converting the string to an integer.
Jakub Jelinek [Wed, 15 Jun 2022 08:45:04 +0000 (10:45 +0200)]
openmp: Fix up get-mapped-ptr-1.{c,f90} tests
On Tue, Jun 14, 2022 at 06:41:37PM +0200, Thomas Schwinge wrote:
> In an offloading configuration, I'm seeing:
>
> PASS: libgomp.fortran/get-mapped-ptr-1.f90 -O (test for excess errors)
> [-PASS:-]{+FAIL:+} libgomp.fortran/get-mapped-ptr-1.f90 -O execution test
>
> Does that one need similar treatment?
I assume not just that but libgomp.c-c++-common/get-mapped-ptr-1.c too?
Both need the same treatment, and in the get-mapped-ptr-1.c
case there is even UB: while the Fortran version was using c_loc (q)
as the host pointer, in C/C++ it was using q, which was the value of
an uninitialized pointer.
2022-06-15 Jakub Jelinek <jakub@redhat.com>
* testsuite/libgomp.c-c++-common/get-mapped-ptr-1.c (main): Initialize
q to the address of an automatic variable. Use -5 instead of -1 in
omp_get_mapped_ptr call. Add test with omp_initial_device.
* testsuite/libgomp.fortran/get-mapped-ptr-1.f90 (main): Use -5 instead
of -1 in omp_get_mapped_ptr call. Add test with omp_initial_device.
Renumber stop arguments afterwards.
Roger Sayle [Wed, 15 Jun 2022 07:31:13 +0000 (09:31 +0200)]
Fold truncations of left shifts in match.pd
Whilst investigating PR 55278, I noticed that the tree-ssa optimizers
aren't eliminating the promotions of shifts to "int" as inserted by the
c-family front-ends, instead leaving this simplification to the RTL
optimizers. This patch allows match.pd to do this itself earlier,
narrowing (T)(X << C) to (T)X << C when the constant C is known to be
valid for the (narrower) type T.
Hence for this simple test case:

  short foo(short x) { return x << 5; }

the .optimized dump currently looks like:

  short int foo (short int x)
  {
    int _1;
    int _2;
    short int _4;

    _1 = (int) x_3(D);
    _2 = _1 << 5;
    _4 = (short int) _2;
    return _4;
  }

whereas with this patch the truncation is folded into the shift, which is
then performed directly in short int.
This is always reasonable as RTL expansion knows how to use
widening optabs if it makes sense at the RTL level to perform
this shift in a wider mode.
Of course, there's often a catch. The above simplification not only
reduces the number of statements in gimple, but also enables further
optimizations, such as the recognition of rotate idioms
and bswap16. Alas, optimizing things earlier than anticipated
requires several testsuite changes [though all these tests have
been confirmed to generate identical assembly code on x86_64].
The only significant change is that the vectorization pass wouldn't
previously lower rotations of signed integer types. Hence this
patch includes a refinement to tree-vect-patterns to allow signed
types, by using the equivalent unsigned shifts.
2022-06-15 Roger Sayle <roger@nextmovesoftware.com>
Richard Biener <rguenther@suse.de>
gcc/ChangeLog
* match.pd (convert (lshift @1 INTEGER_CST@2)): Narrow integer
left shifts by a constant when the result is truncated, and the
shift constant is well-defined.
* tree-vect-patterns.cc (vect_recog_rotate_pattern): Add
support for rotations of signed integer types, by lowering
using unsigned vector shifts.
gcc/testsuite/ChangeLog
* gcc.dg/fold-convlshift-4.c: New test case.
* gcc.dg/optimize-bswaphi-1.c: Update found bswap count.
* gcc.dg/tree-ssa/pr61839_3.c: Shift is now optimized before VRP.
* gcc.dg/vect/vect-over-widen-1-big-array.c: Remove obsolete tests.
* gcc.dg/vect/vect-over-widen-1.c: Likewise.
* gcc.dg/vect/vect-over-widen-3-big-array.c: Likewise.
* gcc.dg/vect/vect-over-widen-3.c: Likewise.
* gcc.dg/vect/vect-over-widen-4-big-array.c: Likewise.
* gcc.dg/vect/vect-over-widen-4.c: Likewise.
Jonathan Wakely [Tue, 14 Jun 2022 15:19:32 +0000 (16:19 +0100)]
libstdc++: Check lengths first in operator== for basic_string [PR62187]
As confirmed by LWG 2852, the calls to traits_type::compare do not need
to be observable, so we can make operator== compare string lengths
first and return immediately for non-equal lengths. This avoids doing a
slow string comparison for "abc...xyz" == "abc...xy". Previously we only
did this optimization for std::char_traits<char>, but we can enable it
unconditionally thanks to LWG 2852.
For comparisons with a const char* we can call traits_type::length right
away to do the same optimization. That strlen call can be folded away
for constant arguments, making it very efficient.
For the pre-C++20 operator== and operator!= overloads we can swap the
order of the arguments to take advantage of the operator== improvements.
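A sketch of the idea in C terms (not the actual libstdc++ code): compare the
lengths first, and only compare the contents when the lengths are equal:

  #include <string.h>

  struct str { const char *data; unsigned long len; };

  static int str_equal (struct str a, struct str b)
  {
    /* The length check avoids comparing long, nearly-equal buffers.  */
    return a.len == b.len && memcmp (a.data, b.data, a.len) == 0;
  }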
Jonathan Wakely [Tue, 14 Jun 2022 13:54:27 +0000 (14:54 +0100)]
libstdc++: Inline all basic_string::compare overloads [PR59048]
Defining the compare member functions inline allows calls to
traits_type::length and std::min to be inlined, taking advantage of
constant expression arguments. When not inline, the compiler prefers to
use the explicit instantiation definitions in libstdc++.so and can't
take advantage of constant arguments.
Jonathan Wakely [Tue, 14 Jun 2022 13:37:25 +0000 (14:37 +0100)]
libstdc++: Check for size overflow in constexpr allocation [PR105957]
libstdc++-v3/ChangeLog:
PR libstdc++/105957
* include/bits/allocator.h (allocator::allocate): Check for
overflow in constexpr allocation.
* testsuite/20_util/allocator/105975.cc: New test.
regrename: Fix -fcompare-debug issue in check_new_reg_p [PR105041]
In check_new_reg_p, the nregs of a du chain is computed by obtaining the
MODE of the first element in the chain, and then calling
hard_regno_nregs() with the MODE. But the first element of the chain can
be a DEBUG_INSN whose mode need not be the same as the mode of the rest
of the elements in the du chain. This was resulting in a -fcompare-debug
failure, as check_new_reg_p was returning a different result with -g for
the same candidate register. We can instead obtain nregs from the du chain
itself.
Philipp Tomsich [Wed, 11 May 2022 10:12:57 +0000 (12:12 +0200)]
RISC-V: Split slli+sh[123]add.uw opportunities to avoid zext.w
When encountering a prescaled (biased) value as a candidate for
sh[123]add.uw, the combine pass will present this as shifted by the
aggregate amount (prescale + shift-amount) with an appropriately
adjusted mask constant that has fewer than 32 bits set.
E.g., here's the failing expression seen in combine for a prescale of
1 and a shift of 2 (note how 0x3fffffff8 >> 3 is 0x7fffffff).
  Trying 7, 8 -> 10:
      7: r78:SI=r81:DI#0<<0x1
        REG_DEAD r81:DI
      8: r79:DI=zero_extend(r78:SI)
        REG_DEAD r78:SI
     10: r80:DI=r79:DI<<0x2+r82:DI
        REG_DEAD r79:DI
        REG_DEAD r82:DI
  Failed to match this instruction:
  (set (reg:DI 80 [ cD.1491 ])
      (plus:DI (and:DI (ashift:DI (reg:DI 81)
                  (const_int 3 [0x3]))
              (const_int 17179869176 [0x3fffffff8]))
          (reg:DI 82)))
To address this, we introduce a splitter handling these cases.
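A plausible source for the dump above (hypothetical testcase: prescale 1
from the doubled index, shift 2 from the int element size):

  int f (unsigned int i, int *base)
  {
    /* i << 1 in SImode, zero-extended, then << 2 and added to base.  */
    return base[i * 2];
  }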
Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Co-developed-by: Manolis Tsamis <manolis.tsamis@vrull.eu>
gcc/ChangeLog:
* config/riscv/bitmanip.md: Add split to handle opportunities
for slli + sh[123]add.uw
Richard Biener [Tue, 14 Jun 2022 09:10:13 +0000 (11:10 +0200)]
tree-optimization/105946 - avoid accessing excess args from uninit diag
The uninit diagnostics use the access attributes when diagnosing
passing via reference, but iterating over the function type's arguments
can in some cases apparently outrun the actual call arguments,
leading to ICEs. The following simply ignores arguments that are not present.
2022-06-14 Richard Biener <rguenther@suse.de>
PR tree-optimization/105946
* tree-ssa-uninit.cc (maybe_warn_pass_by_reference):
Do not look at arguments not specified in the function call.
Eric Botcazou [Tue, 14 Jun 2022 10:28:24 +0000 (12:28 +0200)]
Restore bootstrap on ARM
The -Wuse-after-free warning is explicitly disabled for destructors on ARM
because of the special ABI, and the previous change to the warning machinery
uncovered another case where the warning data would be incorrectly erased.
gcc/
* warning-control.cc (copy_warning) [generic version]: Do not erase
the warning data of the destination location when the no-warning
bit is not set on the source.
(copy_warning) [tree version]: Return early if TO is equal to FROM.
(copy_warning) [gimple version]: Likewise.
gcc/testsuite/
* g++.dg/warn/Wuse-after-free5.C: New test.
In function vect_analyze_loop_2, the current place where
suggested_unroll_factor is applied can't guarantee it's
applied in all cases. As the case shows, the vectorizer
can retry with SLP forced off; the vf is then reset from
saved_vectorization_factor, which never had
suggested_unroll_factor applied, so we can end
up with a vf that neglects suggested_unroll_factor.
I think this is an oversight; we should move the applying
of suggested_unroll_factor to after start_over.
PR tree-optimization/105940
gcc/ChangeLog:
* tree-vect-loop.cc (vect_analyze_loop_2): Move the place of
applying suggested_unroll_factor after start_over.
xtensa: Optimize bitwise AND operation with some specific forms of constants
This patch offers several insn-and-split patterns for bitwise AND of a
register with a constant that can be represented as:
i. the N least significant bits set and the others clear (17 <= N <= 31)
ii. the N most significant bits set and the others clear (12 <= N <= 31)
iii. a block of M set bits followed by N trailing clear bits, that cannot fit
into a "MOVI Ax, simm12" instruction (1 <= M <= 16, 1 <= N <= 30)
It also offers shortcuts for conditional branches on whether the result of
each of the abovementioned operations is (not) equal to zero.
gcc/ChangeLog:
* config/xtensa/predicates.md (shifted_mask_operand):
New predicate.
* config/xtensa/xtensa.md (*andsi3_const_pow2_minus_one):
New insn-and-split pattern.
(*andsi3_const_negative_pow2, *andsi3_const_shifted_mask,
*masktrue_const_pow2_minus_one, *masktrue_const_negative_pow2,
*masktrue_const_shifted_mask): Ditto.
No need to describe the "false side" conditional insn patterns anymore.
gcc/ChangeLog:
* config/xtensa/xtensa-protos.h (xtensa_emit_branch):
Remove the first argument.
(xtensa_emit_bit_branch): Remove it because it is now called only from
the output statement of the *bittrue insn pattern.
* config/xtensa/xtensa.cc (gen_int_relational): Remove the last
argument 'p_invert', and make so that the condition is reversed by
itself as needed.
(xtensa_expand_conditional_branch): Share the common path, and remove
condition inversion code.
(xtensa_emit_branch, xtensa_emit_movcc): Simplify by removing the
"false side" pattern.
(xtensa_emit_bit_branch): Remove it because of the abovementioned
reason, and move the function body to *bittrue insn pattern.
* config/xtensa/xtensa.md (*bittrue): Transplant the output
statement from removed xtensa_emit_bit_branch().
(*bfalse, *ubfalse, *bitfalse, *maskfalse): Remove the "false side"
insn patterns.
This patch introduces the use of the funnel shifter, and rearranges the
existing "per-byte shift" insn patterns.
gcc/ChangeLog:
* config/xtensa/predicates.md (logical_shift_operator,
xtensa_shift_per_byte_operator): New predicates.
* config/xtensa/xtensa-protos.h (xtensa_shlrd_which_direction):
New prototype.
* config/xtensa/xtensa.cc (xtensa_shlrd_which_direction):
New helper function for funnel shift patterns.
* config/xtensa/xtensa.md (ior_op): New code iterator.
(*ashlsi3_1): Replace with new split pattern.
(*shift_per_byte): Unify *ashlsi3_3x, *ashrsi3_3x and *lshrsi3_3x.
(*shift_per_byte_omit_AND_0, *shift_per_byte_omit_AND_1):
New insn-and-split patterns that redirect to *xtensa_shift_per_byte,
in order to omit unnecessary bitwise AND operation.
(*shlrd_reg_<code>, *shlrd_const_<code>, *shlrd_per_byte_<code>,
*shlrd_per_byte_<code>_omit_AND):
New insn patterns for funnel shifts.
Jason Merrill [Fri, 10 Jun 2022 19:26:36 +0000 (15:26 -0400)]
ubsan: -Wreturn-type and ubsan trap-on-error
I noticed that -fsanitize=undefined -fsanitize-undefined-trap-on-error was
omitting the usual -Wreturn-type warning for control flowing off the end of
a function. This was because the warning code was looking for calls either
to __builtin_unreachable or the UBSan function, but these flags produce a
call to __builtin_trap instead.
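A sketch of the affected case (illustrative only), compiled with
-fsanitize=undefined -fsanitize-undefined-trap-on-error:

  int f (int x)
  {
    if (x)
      return 1;
  }  /* warning: control reaches end of non-void function */

The instrumentation replaces the fallthrough with a __builtin_trap () call,
which pass_warn_function_return now recognizes just like
__builtin_unreachable ().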
gcc/c-family/ChangeLog:
* c-ubsan.cc (ubsan_instrument_return): Use BUILTINS_LOCATION.
gcc/ChangeLog:
* tree-cfg.cc (pass_warn_function_return::execute): Also check
BUILT_IN_TRAP.
RISC-V: Reset the length to the default of 4 for FP comparisons
The default length for floating-point compare operations is overridden
to 8, however the FEQ.fmt, FLT.fmt, FLE.fmt machine instructions and
FGE.fmt, FGT.fmt assembly idioms the relevant RTL insns produce are all
4 bytes long each. And all the floating-point compare RTL insns that
produce multiple machine instructions explicitly set their lengths.
Remove the override then, letting the default of 4 apply for the single
instruction case.
gcc/
* config/riscv/riscv.md (length): Remove the explicit setting
for "fcmp".
Mark Mentovai [Mon, 13 Jun 2022 15:40:19 +0000 (16:40 +0100)]
libstdc++: Rename __null_terminated to avoid collision with Apple SDK
The macOS 13 SDK (and equivalent-version iOS and other Apple OS SDKs)
contain this definition in <sys/cdefs.h>:
863 #define __null_terminated
This collides with the use of __null_terminated in libstdc++'s
experimental fs_path.h.
As libstdc++'s use of this token is entirely internal to fs_path.h, the
simplest workaround, renaming it, is most appropriate. Here, it's
renamed to __nul_terminated, referencing the NUL ('\0') value that is
used to terminate the strings in the context in which this tag structure
is used.
libstdc++-v3/ChangeLog:
* include/experimental/bits/fs_path.h (__detail::__null_terminated):
Rename to __nul_terminated to avoid colliding with a macro in
Apple's SDK.
Jonathan Wakely [Mon, 13 Jun 2022 15:36:14 +0000 (16:36 +0100)]
libstdc++: Use type_identity_t for non-deducible std::atomic_xxx args
This is LWG 3220 which is about to become Tentatively Ready.
libstdc++-v3/ChangeLog:
* include/std/atomic (__atomic_val_t): Use __type_identity_t
instead of atomic<T>::value_type, as per LWG 3220.
* testsuite/29_atomics/atomic/lwg3220.cc: New test.
Uros Bizjak [Mon, 13 Jun 2022 15:08:18 +0000 (17:08 +0200)]
i386: Return true for (SUBREG (MEM....)) in register_no_elim_operand [PR105927]
Under certain conditions register_operand predicate also allows
subregs of memory operands. When RTL checking is enabled, these
will fail with REGNO (op).
Allow subregs of memory operands, these are guaranteed
to be reloaded to a register.
2022-06-13 Uroš Bizjak <ubizjak@gmail.com>
gcc/ChangeLog:
PR target/105927
* config/i386/predicates.md (register_no_elim_operand):
Return true for subreg of a memory operand.
gcc/testsuite/ChangeLog:
PR target/105927
* gcc.target/i386/pr105927.c: New test.
Iain Buclaw [Sat, 11 Jun 2022 10:40:00 +0000 (12:40 +0200)]
d: Match function declarations of gcc built-ins from any module.
Declarations of recognised gcc built-in functions are now matched from
any module. Previously, only the `core.stdc' package was scanned.
In addition to matching of the symbol, any user-applied `@attributes' or
`pragma(mangle)' name will be applied to the built-in decl as well.
Because there would now be no control over where built-in declarations
are coming from, the warning option `-Wbuiltin-declaration-mismatch' has
been implemented in the D front-end too.
gcc/d/ChangeLog:
* d-builtins.cc: Include builtins.h.
(gcc_builtins_libfuncs): Remove.
(strip_type_modifiers): New function.
(matches_builtin_type): New function.
(covariant_with_builtin_type_p): New function.
(maybe_set_builtin_1): Set front-end built-in if identifier matches
gcc built-in name. Apply user-specified attributes and assembler name
overrides to the built-in. Warn about built-in declaration mismatches.
(d_builtin_function): Set IDENTIFIER_DECL_TREE of built-in functions.
* d-compiler.cc (Compiler::onParseModule): Scan all modules for any
identifiers that match built-in function names.
* lang.opt (Wbuiltin-declaration-mismatch): New option.
gcc/testsuite/ChangeLog:
* gdc.dg/Wbuiltin_declaration_mismatch.d: New test.
* gdc.dg/builtins.d: New test.
Add a general mapping from internal fns to target insns
Several existing internal functions map directly to an instruction
defined in target-insns.def. This patch makes it easier to define
more such functions in future.
This should help to reduce cut-&-paste, but more importantly, it allows
the difference between optab functions and target-insns.def functions
to be abstracted away; both are now treated as “directly-mapped”.
gcc/
* internal-fn.def (DEF_INTERNAL_INSN_FN): New macro.
(GOMP_SIMT_ENTER_ALLOC, GOMP_SIMT_EXIT, GOMP_SIMT_LANE)
(GOMP_SIMT_LAST_LANE, GOMP_SIMT_ORDERED_PRED, GOMP_SIMT_VOTE_ANY)
(GOMP_SIMT_XCHG_BFLY, GOMP_SIMT_XCHG_IDX): Use it.
* internal-fn.h (direct_internal_fn_info::directly_mapped): New
member variable.
(direct_internal_fn_info::vectorizable): Reduce to 1 bit.
(direct_internal_fn_p): Also return true for internal functions
that map directly to instructions defined in target-insns.def.
(direct_internal_fn): Adjust comment accordingly.
* internal-fn.cc (direct_insn, optab1, optab2, vectorizable_optab1)
(vectorizable_optab2): New local macros.
(not_direct): Initialize directly_mapped.
(mask_load_direct, load_lanes_direct, mask_load_lanes_direct)
(gather_load_direct, len_load_direct, mask_store_direct)
(store_lanes_direct, mask_store_lanes_direct, vec_cond_mask_direct)
(vec_cond_direct, scatter_store_direct, len_store_direct)
(vec_set_direct, unary_direct, binary_direct, ternary_direct)
(cond_unary_direct, cond_binary_direct, cond_ternary_direct)
(while_direct, fold_extract_direct, fold_left_direct)
(mask_fold_left_direct, check_ptrs_direct): Use the macros above.
(expand_GOMP_SIMT_ENTER_ALLOC, expand_GOMP_SIMT_EXIT): Delete.
(expand_GOMP_SIMT_LANE, expand_GOMP_SIMT_LAST_LANE): Likewise.
(expand_GOMP_SIMT_ORDERED_PRED, expand_GOMP_SIMT_VOTE_ANY): Likewise.
(expand_GOMP_SIMT_XCHG_BFLY, expand_GOMP_SIMT_XCHG_IDX): Likewise.
(direct_internal_fn_types): Handle functions that map to instructions
defined in target-insns.def.
(direct_internal_fn_types): Likewise.
(direct_internal_fn_supported_p): Likewise.
(internal_fn_expanders): Likewise.
internal-fn.cc has quite a few functions that simply map the result
of the call to an instruction's output operand (if any) and map
each argument to an instruction's input operand, in order.
This patch adds a single function for doing that. It's really
just a generalisation of expand_direct_optab_fn, but with the
output operand being optional.
Unfortunately, it isn't possible to do this for vcond_mask
because the internal function has a different argument order
from the optab.
gcc/
* internal-fn.cc (expand_fn_using_insn): New function,
split out and adapted from...
(expand_direct_optab_fn): ...here.
(expand_GOMP_SIMT_ENTER_ALLOC): Use it.
(expand_GOMP_SIMT_EXIT): Likewise.
(expand_GOMP_SIMT_LANE): Likewise.
(expand_GOMP_SIMT_LAST_LANE): Likewise.
(expand_GOMP_SIMT_ORDERED_PRED): Likewise.
(expand_GOMP_SIMT_VOTE_ANY): Likewise.
(expand_GOMP_SIMT_XCHG_BFLY): Likewise.
(expand_GOMP_SIMT_XCHG_IDX): Likewise.
Iain Buclaw [Mon, 13 Jun 2022 12:35:38 +0000 (14:35 +0200)]
d: Improve TypeInfo errors when compiling in -fno-rtti mode
The existing TypeInfo errors can be cryptic. This alters the diagnostic
to include which expression is requiring `object.TypeInfo'.
gcc/d/ChangeLog:
* d-tree.h (check_typeinfo_type): Add Expression* parameter.
(build_typeinfo): Likewise. Declare new override.
* expr.cc (ExprVisitor): Call build_typeinfo with Expression*.
* typeinfo.cc (check_typeinfo_type): Include expression in the
diagnostic message.
(build_typeinfo): New override.
Jakub Jelinek [Mon, 13 Jun 2022 11:42:59 +0000 (13:42 +0200)]
openmp: Conforming device numbers and omp_{initial,invalid}_device
OpenMP 5.2 changed once more what device numbers are allowed.
In 5.1, valid device numbers were [0, omp_get_num_devices()].
5.2 makes also -1 valid (calls it omp_initial_device), which is equivalent
in behavior to omp_get_num_devices() number but has the advantage that it
is a constant. And it also introduces omp_invalid_device which is
also a constant with an implementation-defined value < -1. That value should
act like a signaling NaN: any time any device construct (GOMP_target*) or
OpenMP runtime API routine is asked for such a device, the program is terminated.
And if OMP_TARGET_OFFLOAD=mandatory, all non-conforming device numbers (which
is everything outside of [-1, omp_get_num_devices()], other than
omp_invalid_device itself) must be treated like omp_invalid_device.
For device constructs, we have a compatibility problem: we've historically
used 2 magic negative values to mean something special.
GOMP_DEVICE_ICV (-1) means the device clause wasn't present; pick the
omp_get_default_device () number.
GOMP_DEVICE_FALLBACK (-2) means the host device (this is used e.g. for
#pragma omp target if (cond)
where if cond is false, we pass -2).
But 5.2 requires that omp_initial_device is -1 (there were discussions
about it; an advantage of -1 is that one can iterate over the
[-1, omp_get_num_devices()-1] range to get all devices starting with
the host/initial one).
And also, if the user passes -2, unless it is omp_invalid_device, we need to
treat it as non-conforming with OMP_TARGET_OFFLOAD=mandatory.
So, the patch does on the compiler side some number remapping,
user_device_num >= -2U ? user_device_num - 1 : user_device_num.
This remapping is done at compile time if device clause has constant
argument, otherwise at runtime, and means that for user -1 (omp_initial_device)
we pass -2 to GOMP_* in the runtime library where it treats it like host
fallback, while -2 is remapped to -3 (one of the non-conforming device
numbers; for those it doesn't matter which one is which).
omp_invalid_device is then -4.
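A small sketch of the remapping (hypothetical helper; the real logic is
emitted inline by expand_omp_target):

  static int
  remap_device_num (int user_device_num)
  {
    /* -1 (omp_initial_device) -> -2 (host fallback in the runtime),
       -2 -> -3 (a non-conforming number); everything else, including
       omp_invalid_device (-4), is passed through unchanged.  */
    if ((unsigned int) user_device_num >= -2U)
      return user_device_num - 1;
    return user_device_num;
  }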
For the OpenMP device runtime APIs, no remapping is done.
This patch doesn't deal with the initial default-device-var for
OMP_TARGET_OFFLOAD=mandatory; the spec says that the initial ICV value
for that should in that case depend on whether there are any offloading
devices or not (if not, it should be omp_invalid_device), but that means
we can't determine the number of devices lazily (and let libraries have the
possibility to register their offloading data etc.).
2022-06-13 Jakub Jelinek <jakub@redhat.com>
gcc/
* omp-expand.cc (expand_omp_target): Remap user provided
device clause arguments, -1 to -2 and -2 to -3, either
at compile time if constant, or at runtime.
include/
* gomp-constants.h (GOMP_DEVICE_INVALID): Define.
libgomp/
* omp.h.in (omp_initial_device, omp_invalid_device): New enumerators.
* omp_lib.f90.in (omp_initial_device, omp_invalid_device): New
parameters.
* omp_lib.h.in (omp_initial_device, omp_invalid_device): Likewise.
* target.c (resolve_device): Add remapped argument, handle
GOMP_DEVICE_ICV only if remapped is true (and clear remapped),
for negative values, treat GOMP_DEVICE_FALLBACK as fallback only
if remapped, otherwise treat omp_initial_device that way. For
omp_invalid_device, always emit gomp_fatal, even when
OMP_TARGET_OFFLOAD isn't mandatory.
(GOMP_target, GOMP_target_ext, GOMP_target_data, GOMP_target_data_ext,
GOMP_target_update, GOMP_target_update_ext,
GOMP_target_enter_exit_data): Pass true as remapped argument to
resolve_device.
(omp_target_alloc, omp_target_free, omp_target_is_present,
omp_target_memcpy_check, omp_target_associate_ptr,
omp_target_disassociate_ptr, omp_get_mapped_ptr,
omp_target_is_accessible): Pass false as remapped argument to
resolve_device. Treat omp_initial_device the same as
gomp_get_num_devices (). Don't bypass resolve_device calls if
device_num is negative.
(omp_pause_resource): Treat omp_initial_device the same as
gomp_get_num_devices (). Call resolve_device.
* icv-device.c (omp_set_default_device): Always set to device_num
even when it is negative.
* libgomp.texi: Document that Conforming device numbers,
omp_initial_device and omp_invalid_device is implemented.
* testsuite/libgomp.c/target-41.c (main): Add test with
omp_initial_device.
* testsuite/libgomp.c/target-45.c: New test.
* testsuite/libgomp.c/target-46.c: New test.
* testsuite/libgomp.c/target-47.c: New test.
* testsuite/libgomp.c-c++-common/target-is-accessible-1.c (main): Add
test with omp_initial_device. Use -5 instead of -1 for negative value
test.
* testsuite/libgomp.fortran/target-is-accessible-1.f90 (main):
Likewise. Reorder stop numbers.
Eric Botcazou [Mon, 13 Jun 2022 11:32:53 +0000 (13:32 +0200)]
Introduce -finstrument-functions-once
The goal is to make it possible to use it in (large) production binaries
to do function-level coverage, so the overhead must be minimal and, in
particular, there is no protection against data races, so the "once"
moniker is imprecise.
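For reference, a sketch of the user-supplied hook involved (assuming the
-once variant uses the same __cyg_profile entry points as
-finstrument-functions):

  __attribute__ ((no_instrument_function))
  void __cyg_profile_func_enter (void *this_fn, void *call_site)
  {
    /* Record this_fn as covered; with -finstrument-functions-once this
       runs at most once per function (modulo the data races noted
       above).  */
  }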
gcc/
* common.opt (finstrument-functions): Set explicit value.
(-finstrument-functions-once): New option.
* doc/invoke.texi (Program Instrumentation Options): Document it.
* gimplify.cc (build_instrumentation_call): New static function.
(gimplify_function_tree): Call it to emit the instrumentation calls
if -finstrument-functions[-once] is specified.
gcc/testsuite/
* gcc.dg/instrument-4.c: New test.
Eric Botcazou [Mon, 13 Jun 2022 08:03:36 +0000 (10:03 +0200)]
Do not erase warning data in gimple_set_location
gimple_set_location is mostly invoked on newly built GIMPLE statements, so
their location is UNKNOWN_LOCATION and setting it will clobber the warning
data of the passed location, if any.
gcc/
* dwarf2out.cc (output_one_line_info_table): Initialize prev_addr.
* gimple.h (gimple_set_location): Do not copy warning data from
the previous location when it is UNKNOWN_LOCATION.
* optabs.cc (expand_widen_pattern_expr): Always set oprnd{1,2}.
gcc/testsuite/
* c-c++-common/nonnull-1.c: Remove XFAIL for C++.
Jakub Jelinek [Mon, 13 Jun 2022 08:53:33 +0000 (10:53 +0200)]
i386: Fix up *<dwi>3_doubleword_mask [PR105911]
Another regression caused by my recent patch.
This time it's because the define_insn_and_split only requires that the
constant mask is a const_int_operand. When it was only SImode,
that wasn't a problem, and neither was HImode, but for DImode, if we need
to AND the shift count, we might run into the problem that the mask isn't
a representable signed 32-bit immediate.
But we don't really care about the upper bits of the mask, so
we can just mask the CONST_INT with the mode mask.
2022-06-13 Jakub Jelinek <jakub@redhat.com>
PR target/105911
* config/i386/i386.md (*ashl<dwi>3_doubleword_mask,
*<insn><dwi>3_doubleword_mask): Use operands[3] masked with
(<MODE_SIZE> * BITS_PER_UNIT) - 1 as AND operand instead of
operands[3] unmodified.
Simon Wright [Sun, 12 Jun 2022 16:01:22 +0000 (17:01 +0100)]
Darwin: Truncate kernel-provided version to OS major for Darwin >= 20.
In common with system tools, GCC uses a version obtained from the kernel as
the prevailing macOS target, when that is not overridden by command line or
environment versions (i.e. -mmacosx-version-min=, MACOSX_DEPLOYMENT_TARGET).
Presently, GCC assumes that if the OS version is >= 20, the value used should
include both major and minor version identifiers. However, the system tools
(for those versions) truncate the value to the major version - this leads to
link errors when combining objects built with clang and GCC for example:
ld: warning: object file (null.o) was built for newer macOS version (12.2)
than being linked (12.0)
The change here truncates the values GCC uses to the major version.
gcc/ChangeLog:
PR target/104871
* config/darwin-driver.cc (darwin_find_version_from_kernel): If the OS
version is darwin20 (macOS 11) or greater, truncate the version to the
major number.
Mark Mentovai [Fri, 10 Jun 2022 14:56:42 +0000 (15:56 +0100)]
Darwin: Future-proof -mmacosx-version-min
f18cbc1ee1f4 (2021-12-18) updated various parts of gcc to not impose a
maximum Darwin or macOS version equal to the current known release. Different
parts of gcc accept, variously, Darwin version numbers matching
darwin2*, and macOS major version numbers up to 99. The current released
version is Darwin 21 and macOS 12, with Darwin 22 and macOS 13 expected
for public release later this year. With one major OS release per year,
this strategy is expected to provide another 8 years of headroom.
However, f18cbc1ee1f4 missed config/darwin-c.c (now .cc), which
continued to impose a maximum of macOS 12 on the -mmacosx-version-min
compiler driver argument. This was last updated from 11 to 12 in 11b967577483 (2021-10-27), but kicking the can down the road one year at
a time is not a viable strategy, and is not in line with the more recent
technique from f18cbc1ee1f4.
Prior to 556ab5125912 (2020-11-06), config/darwin-c.c did not impose a
maximum that needed annual maintenance, as at that point, all macOS
releases had used a major version of 10. The stricter approach imposed
since then was valuable for a time until the particulars of the new
versioning scheme were established and understood, but now that they
are, it's prudent to restore a more permissive approach.
gcc/ChangeLog:
* config/darwin-c.cc: Make -mmacosx-version-min more future-proof.
The patch relaxes type-checking for VEC_PERM_EXPR by allowing different
vector types for lhs and rhs, provided:
(1) rhs3 is constant and has an integer element type.
(2) len(lhs) == len(rhs3) and len(rhs1) == len(rhs2)
(3) lhs and rhs have the same element type.
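As a concrete motivation, the svld1rq fold named in the ChangeLog below; a
small usage sketch assuming only the standard ACLE intrinsics:

  #include <arm_sve.h>

  /* svld1rq loads one 128-bit quadword and replicates it across the
     whole SVE vector; with this patch, the all-true-predicate case can
     be folded to a VEC_PERM_EXPR whose constant rhs3 selector repeats
     lanes 0..3, even though lhs and rhs1/rhs2 have different lengths.  */
  svint32_t
  dup_quad (const int32_t *x)
  {
    return svld1rq_s32 (svptrue_b32 (), x);
  }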
gcc/ChangeLog:
PR target/96463
* config/aarch64/aarch64-sve-builtins-base.cc: Include ssa.h.
(svld1rq_impl::fold): Define.
* config/aarch64/aarch64.cc (expand_vec_perm_d): Define new members
op_mode and op_vec_flags.
(aarch64_evpc_reencode): Initialize newd.op_mode and
newd.op_vec_flags.
(aarch64_evpc_sve_dup): New function.
(aarch64_expand_vec_perm_const_1): Gate existing calls to
aarch64_evpc_* functions under d->vmode == d->op_mode,
and call aarch64_evpc_sve_dup.
(aarch64_vectorize_vec_perm_const): Remove assert
d->vmode != d->op_mode, and initialize d.op_mode and d.op_vec_flags.
* tree-cfg.cc (verify_gimple_assign_ternary): Allow different
vector types for lhs and rhs in VEC_PERM_EXPR if rhs3 is
constant.
gcc/testsuite/ChangeLog:
PR target/96463
* gcc.target/aarch64/sve/acle/general/pr96463-1.c: New test.
* gcc.target/aarch64/sve/acle/general/pr96463-2.c: Likewise.
xtensa: Improve constant synthesis for both integer and floating-point
This patch revises the previous implementation of constant synthesis.
First, it is changed to use the define_split machine description pattern and
to run after the reload pass, in order not to interfere with optimizations
such as loop-invariant motion.
Second, not only integer but also floating-point constants are subject to
processing.
Third, several new synthesis patterns are provided for when the constant
cannot fit into a "MOVI Ax, simm12" instruction, but:
I. can be represented as a power of two minus one (e.g. 32767, 65535 or
0x7fffffffUL)
=> "MOVI(.N) Ax, -1" + "SRLI Ax, Ax, 1 ... 31" (or "EXTUI")
II. is between -34816 and 34559
=> "MOVI(.N) Ax, -2048 ... 2047" + "ADDMI Ax, Ax, -32768 ... 32512"
III. (existing case) can fit into a signed 12-bit if the trailing zero bits
are stripped
=> "MOVI(.N) Ax, -2048 ... 2047" + "SLLI Ax, Ax, 1 ... 31"
The above sequences consist of 5 or 6 bytes and have a latency of 2 clock
cycles, in contrast with "L32R Ax, <litpool>" (3 bytes and one-clock latency,
but possibly incurring an additional one-clock pipeline stall and an
implementation-specific InstRAM/ROM access penalty) plus 4 bytes of constant
value.
In addition, 3-instruction synthesis patterns (8 or 9 bytes, 3-clock latency)
are also provided when optimizing for speed and the L32R instruction has a
considerable access penalty:
IV. a 2-instruction synthesis (any of I ... III) followed by
"SLLI Ax, Ax, 1 ... 31"
V. a 2-instruction synthesis followed by either "ADDX[248] Ax, Ax, Ax"
or "SUBX8 Ax, Ax, Ax" (multiplying by 3, 5, 7 or 9)
gcc/ChangeLog:
* config/xtensa/xtensa-protos.h (xtensa_constantsynth):
New prototype.
* config/xtensa/xtensa.cc (xtensa_emit_constantsynth,
xtensa_constantsynth_2insn, xtensa_constantsynth_rtx_SLLI,
xtensa_constantsynth_rtx_ADDSUBX, xtensa_constantsynth):
New backend functions that process the abovementioned logic.
(xtensa_emit_move_sequence): Revert the previous changes.
* config/xtensa/xtensa.md: New split patterns for integer
and floating-point, as the frontend part.
xtensa: Improve instruction cost estimation and suggestion
This patch implements a new target-specific relative RTL insn cost function
because of suboptimal cost estimation by default, and fixes several "length"
insn attributes (related to the cost estimation).
It also introduces a new machine-dependent option, "-mextra-l32r-costs=",
that tells the compiler the implementation-specific InstRAM/ROM access
penalty for the L32R instruction (in clock-cycle units, 0 by default).
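A hypothetical invocation once the option takes effect (the ChangeLog below
notes it is preparatory work for now) might look like:

  xtensa-linux-gnu-gcc -O2 -mextra-l32r-costs=3 test.c

telling the cost model that each L32R load incurs three extra clock cycles
of access penalty.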
gcc/ChangeLog:
* config/xtensa/xtensa.cc (xtensa_rtx_costs): Correct wrong case
for ABS and NEG, add missing case for BSWAP and CLRSB, and
double the costs for integer divisions using libfuncs if
optimizing for speed, in order to take advantage of fast constant
division by multiplication.
(TARGET_INSN_COST): New macro definition.
(xtensa_is_insn_L32R_p, xtensa_insn_cost): New functions for
calculating the relative costs of RTL insns, for both speed and
size.
* config/xtensa/xtensa.md (return, nop, trap): Correct values of
the attribute "length" that depends on TARGET_DENSITY.
(define_asm_attributes, blockage, frame_blockage): Add missing
attributes.
* config/xtensa/xtensa.opt (-mextra-l32r-costs=): New machine-
dependent option, however, preparatory work for now.
xtensa: Consider the Loop Option when setmemsi is expanded to small loop
It now applies to aligned blocks of almost any size under such circumstances.
gcc/ChangeLog:
* config/xtensa/xtensa.cc (xtensa_expand_block_set_small_loop):
Pass through the block length / loop count conditions if
zero-overhead looping is configured and active.
umulsidi3 is faster than umuldi3 even as a library call, and is also a
prerequisite for fast constant division by multiplication.
gcc/ChangeLog:
* config/xtensa/xtensa.md (mulsidi3, umulsidi3):
Split into individual signedness, in order to use libcall
"__umulsidi3" but not the other.
(<u>mulhisi3): Merge into one by using code iterator.
(<u>mulsidi3, mulhisi3, umulhisi3): Remove.
Michael Meissner [Sat, 11 Jun 2022 04:40:16 +0000 (00:40 -0400)]
Disable generating load/store vector pairs for block copies.
Testing has found that using load and store vector pair for block copies
can result in a slowdown on power10. This patch disables using the
vector pair instructions for block copies if we are tuning for power10.
2022-06-11 Michael Meissner <meissner@linux.ibm.com>
gcc/
* config/rs6000/rs6000.cc (rs6000_option_override_internal): Do
not generate block copies with vector pair instructions if we are
tuning for power10.
Patrick Palka [Fri, 10 Jun 2022 20:10:02 +0000 (16:10 -0400)]
c++: improve TYPENAME_TYPE hashing [PR65328]
For the testcase in this PR, compilation takes very long ultimately due
to our poor hashing of TYPENAME_TYPE causing a huge number of collisions
in the spec_hasher and typename_hasher tables.
In spec_hasher, we don't hash the components of TYPENAME_TYPE, which
means most TYPENAME_TYPE arguments end up contributing the same hash.
This is the safe thing to do uniformly since structural_comptypes may
try resolving a TYPENAME_TYPE via the current instantiation. But this
behavior of structural_comptypes is suppressed from spec_hasher::equal
via the comparing_specializations flag, which means spec_hasher::hash
can assume it's disabled too. To that end, this patch makes
spec_hasher::hash set the flag, and teaches iterative_hash_template_arg
to hash the relevant components of TYPENAME_TYPE when the flag is set.
And in typename_hasher, the hash function considers TYPE_IDENTIFIER
instead of the more informative TYPENAME_TYPE_FULLNAME, which this patch
fixes accordingly.
After this patch, compile time for the testcase in the PR falls to
around 30 seconds on my machine (down from dozens of minutes).
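A minimal sketch of the spec_hasher change described above, assuming the
GCC-internal names mentioned in the ChangeLog below:

  hashval_t
  spec_hasher::hash (tree tmpl, tree args)
  {
    /* Let iterative_hash_template_arg know it may hash TYPENAME_TYPE
       components, mirroring how spec_hasher::equal suppresses their
       resolution via the same flag.  */
    ++comparing_specializations;
    hashval_t val = hash_tmpl_and_args (tmpl, args);
    --comparing_specializations;
    return val;
  }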
PR c++/65328
gcc/cp/ChangeLog:
* decl.cc (typename_hasher::hash): Add extra overloads.
Use iterative_hash_object instead of htab_hash_pointer.
Hash TYPENAME_TYPE_FULLNAME instead of TYPE_IDENTIFIER.
(build_typename_type): Use typename_hasher::hash.
* pt.cc (spec_hasher::hash): Add two-parameter overload.
Set comparing_specializations around the call to
hash_tmpl_and_args.
(iterative_hash_template_arg) <case TYPENAME_TYPE>:
When comparing_specializations, hash the TYPE_CONTEXT
and TYPENAME_TYPE_FULLNAME.
(tsubst_function_decl): Use spec_hasher::hash instead of
hash_tmpl_and_args.
(tsubst_template_decl): Likewise.
(tsubst_decl): Likewise.
Patrick Palka [Fri, 10 Jun 2022 20:09:58 +0000 (16:09 -0400)]
c++: optimize specialization of templated member functions
This applies one of the lookup_template_class optimizations from the
previous patch to instantiate_template as well.
gcc/cp/ChangeLog:
* pt.cc (instantiate_template): Don't substitute the context
of the most general template if that of the partially
instantiated template is already non-dependent.
Patrick Palka [Fri, 10 Jun 2022 20:09:48 +0000 (16:09 -0400)]
c++: optimize specialization of nested templated classes
When substituting a class template specialization, tsubst_aggr_type
substitutes the TYPE_CONTEXT before passing it to lookup_template_class.
This appears to be unnecessary, however, because the initial value
of lookup_template_class's context parameter is unused outside of the
IDENTIFIER_NODE case, and l_t_c performs its own substitution of the
context, anyway. So this patch removes the redundant substitution in
tsubst_aggr_type. Doing so causes us to ICE on template/nested5.C
because during lookup_template_class for A<T>::C::D<S> with T=E and S=S,
we substitute and complete the context A<T>::C with T=E, which in turn
registers the desired dependent specialization of D for us, which we then
end up trying to register twice. This patch fixes that by checking the
specializations table again after completion of the context.
This patch also implements a couple of other optimizations:
* In lookup_template_class, if the context of the partially
instantiated template is already non-dependent, then we could
reuse that instead of substituting the context of the most
general template.
* During tsubst_decl for the TYPE_DECL for an injected-class-name,
we can avoid substituting its TREE_TYPE. We can also avoid
template argument substitution/coercion for this TYPE_DECL, and
for class-scope non-template VAR_/TYPE_DECLs more generally.
Together these optimizations improve memory usage for the range-v3
file test/view/zip.cc by about 5%.
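A hedged sketch of the first optimization (GCC-internal names; the exact
placement inside lookup_template_class differs):

  /* If the partially instantiated template's context is already
     non-dependent, reuse it rather than substituting the context of
     the most general template.  */
  if (!uses_template_parms (DECL_CONTEXT (templ)))
    context = DECL_CONTEXT (templ);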
gcc/cp/ChangeLog:
* pt.cc (lookup_template_class): Remove dead stores to
context parameter. Don't substitute the context of the
most general template if that of the partially instantiated
template is already non-dependent. Check the specializations
table again after completing the context of a nested dependent
specialization.
(tsubst_aggr_type) <case RECORD_TYPE>: Don't substitute
TYPE_CONTEXT or pass it to lookup_template_class.
(tsubst_decl) <case TYPE_DECL, case TYPE_DECL>: Avoid substituting
the TREE_TYPE for DECL_SELF_REFERENCE_P. Avoid template argument
substitution or coercion in some cases.
Nathan Sidwell [Thu, 9 Jun 2022 15:14:31 +0000 (08:14 -0700)]
c++: Add a late-writing step for modules
To add a module initializer optimization, we need to defer finishing writing
out the module file until the end of determining the dynamic initializers.
This is achieved by passing some saved-state from the main module writing
to a new function that completes it.
This patch merely adds the skeleton of that state and moves things around,
allowing the finalization of the ELF file to be postponed. None of the
content writing is moved, nor is the init optimization added.
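In outline, the handoff looks like this (a sketch: only the function names
come from the ChangeLog below; the parameter lists, including the
hypothetical has_no_init flag, are assumptions):

  /* In c_parse_final_cleanups: obtain the opaque saved state ...  */
  void *cookie = finish_module_processing (parse_in);
  /* ... the dynamic initializers are determined in between ...  */
  /* ... and the new late step completes the module file.  */
  fini_modules (parse_in, cookie, has_no_init);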
gcc/cp/
* cp-tree.h (fini_modules): Add some parameters.
(finish_module_processing): Return an opaque pointer.
* decl2.cc (c_parse_final_cleanups): Propagate a cookie from
finish_module_processing to fini_modules.
* module.cc (struct module_processing_cookie): New.
(finish_module_processing): Return a heap-allocated cookie.
(late_finish_module): New. Finish out the module writing.
(fini_modules): Adjust.