Jan Beulich [Mon, 7 Aug 2023 09:45:20 +0000 (11:45 +0200)]
x86: "ssemuladd" adjustments
They're all VEX3- (also covering XOP) or EVEX-encoded. Express that in
the default calculation of "prefix". FMA4 insns also all have a 1-byte
immediate operand.
Where the default calculation is not sufficient / applicable, add
explicit "prefix" attributes. While there also add a "mode" attribute to
fma_<complexpairopname>_<mode>_pair.
Jan Beulich [Mon, 7 Aug 2023 09:44:37 +0000 (11:44 +0200)]
x86: "sse4arg" adjustments
Record common properties in other attributes' default calculations:
There's always a 1-byte immediate, and they're always encoded in a VEX3-
like manner (note that "prefix_extra" already evaluates to 1 in this
case). Then drop the now (or already previously) redundant explicit
attributes, adding "mode" ones where they were missing.
Furthermore use "sse4arg" consistently for all VPCOM* insns; so far
signed comparisons used it, while unsigned ones used "ssecmp". Note
that while (not counting the explicit or implicit immediate
operand) they really only have 3 operands, the operator is also counted
in those patterns. That's relevant for establishing the "memory"
attribute's value, and at the same time benign when there are only
register operands.
Note that despite also having 4 operands, multiply-add insns aren't
affected by this change, as they use "ssemuladd" for "type".
gcc/
* config/i386/i386.md (length_immediate): Handle "sse4arg".
(prefix): Likewise.
(*xop_pcmov_<mode>): Add "mode" attribute.
* config/i386/mmx.md (*xop_maskcmp<mode>3): Drop "prefix_data16",
"prefix_rep", "prefix_extra", and "length_immediate" attributes.
(*xop_maskcmp_uns<mode>3): Likewise. Switch "type" to "sse4arg".
(*xop_pcmov_<mode>): Add "mode" attribute.
* config/i386/sse.md (xop_pcmov_<mode><avxsizesuffix>): Add "mode"
attribute.
(xop_maskcmp<mode>3): Drop "prefix_data16", "prefix_rep",
"prefix_extra", and "length_immediate" attributes.
(xop_maskcmp_uns<mode>3): Likewise. Switch "type" to "sse4arg".
(xop_maskcmp_uns2<mode>3): Drop "prefix_data16", "prefix_extra",
and "length_immediate" attributes. Switch "type" to "sse4arg".
(xop_pcom_tf<mode>3): Likewise.
(xop_vpermil2<mode>3): Drop "length_immediate" attribute.
Jan Beulich [Mon, 7 Aug 2023 09:43:55 +0000 (11:43 +0200)]
x86: "prefix_extra" tidying
Drop SSE5 leftovers from both its comment and its default calculation.
A value of 2 simply cannot occur anymore. Instead extend the comment to
mention the use of the attribute in "length_vex", clarifying why
"prefix_extra" can actually be meaningful on VEX-encoded insns despite
those not having any real prefixes except possibly segment overrides.
Rainer Orth [Mon, 7 Aug 2023 09:29:02 +0000 (11:29 +0200)]
libsanitizer: Fix SPARC stacktraces
As detailed in LLVM Issue #57624
(https://github.com/llvm/llvm-project/issues/57624), a patch to
sanitizer_internal_defs.h broke SPARC stacktraces in the sanitizers.
The issue has now been fixed upstream (https://reviews.llvm.org/D156504)
and I'd like to cherry-pick that patch.
Bootstrapped without regressions on sparc-sun-solaris2.11.
Jan Hubicka [Mon, 7 Aug 2023 08:55:58 +0000 (10:55 +0200)]
Fix profile update after versioning ifconverted loop
If a loop is if-converted and later versioned by the vectorizer, the vectorizer will
reuse the scalar loop produced by ifcvt. Curiously enough it does not seem
to do so for versions produced by loop distribution, even though for loop distribution
this matters (since both ldist versions survive to final code), while
after ifcvt it does not (since we remove the non-vectorized path).
This patch fixes the associated profile update. Here it is necessary to scale both
arms of the conditional according to the runtime checks inserted. We got the loop
body partly right, but not the preheader block and the block after the exit. The
former is particularly bad since it changes the loop iteration estimates.
So we now turn 4 original loops:
loop 1: iterations by profile: 473.497707 (reliable) entry count:84821 (precise, freq 0.9979)
loop 2: iterations by profile: 100.000000 (reliable) entry count:39848881 (precise, freq 468.8104)
loop 3: iterations by profile: 100.000000 (reliable) entry count:39848881 (precise, freq 468.8104)
loop 4: iterations by profile: 100.999596 (reliable) entry count:84167 (precise, freq 0.9902)
Into following loops
iterations by profile: 5.312499 (unreliable, maybe flat) entry count:12742188 (guessed, freq 149.9081)
vectorized and split loop 1, peeled
iterations by profile: 0.009496 (unreliable, maybe flat) entry count:374798 (guessed, freq 4.4094)
split loop 1 (last iteration), peeled
iterations by profile: 100.000008 (unreliable) entry count:3945039 (guessed, freq 46.4122)
scalar version of loop 1
iterations by profile: 100.000007 (unreliable) entry count:7101070 (guessed, freq 83.5420)
redundant scalar version of loop 1 which we could eliminate if vectorizer understood ldist
iterations by profile: 100.000000 (unreliable) entry count:35505353 (guessed, freq 417.7100)
unvectorized loop 2
iterations by profile: 5.312500 (unreliable) entry count:25563855 (guessed, freq 300.7512)
vectorized loop 2, not peeled (hits max-peel-insns)
iterations by profile: 100.000007 (unreliable) entry count:7101070 (guessed, freq 83.5420)
unvectorized loop 3
iterations by profile: 5.312500 (unreliable) entry count:25563855 (guessed, freq 300.7512)
vectorized loop 3, not peeled (hits max-peel-insns)
iterations by profile: 473.497707 (reliable) entry count:84821 (precise, freq 0.9979)
loop 1
iterations by profile: 100.999596 (reliable) entry count:84167 (precise, freq 0.9902)
loop 4
With this change we are at 0 profile errors on the hmmer benchmark.
Andrew Pinski [Sat, 5 Aug 2023 16:23:26 +0000 (09:23 -0700)]
MATCH: Extend min_value/max_value to pointer types
Since we already had the infrastructure to optimize
`(x == 0) && (x > y)` to false for integer types,
this extends the same to pointer types as indirectly
requested by PR 96695.
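As a hedged illustration (not one of the new pr96695-*.c tests), the pointer
form of the condition this now folds looks like:

  /* A null pointer cannot also compare greater than another pointer,
     so the whole condition folds to false for pointer types as well.  */
  int never_true (int *x, int *y)
  {
    return x == 0 && x > y;
  }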
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
PR tree-optimization/96695
* gcc.dg/pr96695-1.c: New test.
* gcc.dg/pr96695-10.c: New test.
* gcc.dg/pr96695-11.c: New test.
* gcc.dg/pr96695-12.c: New test.
* gcc.dg/pr96695-2.c: New test.
* gcc.dg/pr96695-3.c: New test.
* gcc.dg/pr96695-4.c: New test.
* gcc.dg/pr96695-5.c: New test.
* gcc.dg/pr96695-6.c: New test.
* gcc.dg/pr96695-7.c: New test.
* gcc.dg/pr96695-8.c: New test.
* gcc.dg/pr96695-9.c: New test.
Roger Sayle [Sun, 6 Aug 2023 22:19:10 +0000 (23:19 +0100)]
[Committed] Avoid FAIL of gcc.target/i386/pr110792.c
My apologies (again), I managed to mess up the 64-bit version of the
test case for PR 110792. Unlike the 32-bit version, the 64-bit case
contains exactly the same load instructions, just in a different order
making the correct and incorrect behaviours impossible to distinguish
with a scan-assembler-not. Somewhere between checking that this test
failed in a clean tree without the patch, and getting the escaping
correct, I'd failed to notice that this also FAILs in the patched tree.
Doh! Instead of removing the test completely, I've left it as a
compilation test.
The original fix is tested by the 32-bit test case.
Committed to mainline as obvious. Sorry for the inconvenience.
2023-08-06 Roger Sayle <roger@nextmovesoftware.com>
Jan Hubicka [Sun, 6 Aug 2023 20:33:33 +0000 (22:33 +0200)]
Disable loop distribution for loops with estimated iterations 0
This prevents useless loop distribution produced in hmmer. With FDO we now
correctly work out that the loop created for the last iteration is not going to
iterate; however loop distribution still produces a versioned loop that has no
chance to survive the loop vectorizer, since we only keep distributed loops
when loop vectorization succeeds and that requires the number of (header) iterations
to exceed the vectorization factor.
gcc/ChangeLog:
* tree-loop-distribution.cc (loop_distribution::execute): Disable
distribution for loops with estimated iterations 0.
Jan Hubicka [Sun, 6 Aug 2023 19:23:31 +0000 (21:23 +0200)]
Fix profile update after peeled epilogues
Epilogue peeling expects the scalar loop to have the same number of executions as
the vector loop, which is true at the beginning of vectorization. However, if the
epilogues are vectorized, this is no longer the case. In this situation the
loop preheader is replaced by new guard code with a correct profile, but the
loop body is left unscaled. This leads to a loop that exits more often than
it is entered.
This patch adds logic to scale the frequencies down and also to fix the profile
of the original preheader where necessary.
Bootstrapped/regtested x86_64-linux, committed.
gcc/ChangeLog:
* tree-vect-loop-manip.cc (vect_do_peeling): Fix profile update of peeled epilogues.
Gaius Mulley [Sat, 5 Aug 2023 16:35:12 +0000 (17:35 +0100)]
PR modula2/110779 SysClock can not read the clock
This patch completes the implementation of the ISO module
SysClock.mod. Three new testcases are provided. wrapclock.{cc,def}
are new support files providing access to clock_settime, clock_gettime
and glibc timezone variables.
gcc/m2/ChangeLog:
PR modula2/110779
* gm2-libs-iso/SysClock.mod: Re-implement using wrapclock.
* gm2-libs-iso/wrapclock.def: New file.
libgm2/ChangeLog:
PR modula2/110779
* config.h.in: Regenerate.
* configure: Regenerate.
* configure.ac (GM2_CHECK_LIB): Check for clock_gettime
and clock_settime.
* libm2iso/Makefile.am (M2DEFS): Add wrapclock.def.
* libm2iso/Makefile.in: Regenerate.
* libm2iso/wraptime.cc: Replace HAVE_TIMEVAL with
HAVE_STRUCT_TIMEVAL.
* libm2iso/wrapclock.cc: New file.
gcc/testsuite/ChangeLog:
PR modula2/110779
* gm2/iso/run/pass/m2date.mod: New test.
* gm2/iso/run/pass/testclock.mod: New test.
* gm2/iso/run/pass/testclock2.mod: New test.
Martin Uecker [Thu, 13 Apr 2023 17:35:15 +0000 (19:35 +0200)]
c: Less warnings for parameters declared as arrays [PR98536]
To avoid false positives, tune the warnings for parameters declared
as arrays with size expressions. Do not warn when more bounds are
specified in the declaration than before.
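A hedged sketch (not from the patch) of the case that should no longer warn,
where a redeclaration specifies a bound the earlier declaration left unspecified:

  void f (int n, int a[]);   /* earlier declaration, bound unspecified */
  void f (int n, int a[n]);  /* more bounds specified: no warning */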
PR c/98536
gcc/c-family/:
* c-warn.cc (warn_parm_array_mismatch): Do not warn if more
bounds are specified.
Martin Uecker [Fri, 4 Aug 2023 05:48:21 +0000 (07:48 +0200)]
c: _Generic should not warn in non-active branches [PR68193,PR97100,PR110703]
To avoid false diagnostics, use c_inhibit_evaluation_warnings when
a generic association is known to not match during parsing. We may
still generate false positives if the default branch comes earlier than
a specific association that matches.
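A hedged illustration (not one of the committed tests) of the class of
false positive this addresses:

  /* The default association contains a constant division by zero, but when
     the controlling expression selects the int association, the inactive
     arm should no longer be diagnosed during parsing.  */
  int f (int i)
  {
    return _Generic (i, int: 1, default: 1 / 0);
  }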
PR c/68193
PR c/97100
PR c/110703
gcc/c/:
* c-parser.cc (c_parser_generic_selection): Inhibit evaluation
warnings for branches that are known not to be taken during parsing.
gcc/testsuite/ChangeLog:
* gcc.dg/pr68193.c: New test.
David Malcolm [Fri, 4 Aug 2023 20:18:40 +0000 (16:18 -0400)]
analyzer: handle function attribute "alloc_size" [PR110426]
This patch makes -fanalyzer make use of the function attribute
"alloc_size", allowing -fanalyzer to emit -Wanalyzer-allocation-size,
-Wanalyzer-out-of-bounds, and -Wanalyzer-tainted-allocation-size on
execution paths involving allocations using such functions.
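A hedged sketch of the kind of code this enables warnings for (my_alloc is
a hypothetical allocator, not something from the patch):

  /* The alloc_size attribute tells the analyzer how large the returned
     buffer is, so it can warn that one byte is too small to hold an int
     on paths through this allocation.  */
  __attribute__ ((malloc, alloc_size (1)))
  void *my_alloc (unsigned long size);

  int *make_int (void)
  {
    return (int *) my_alloc (1);
  }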
gcc/analyzer/ChangeLog:
PR analyzer/110426
* bounds-checking.cc (region_model::check_region_bounds): Handle
symbolic base regions.
* call-details.cc: Include "stringpool.h" and "attribs.h".
(call_details::lookup_function_attribute): New function.
* call-details.h (call_details::lookup_function_attribute): New
function decl.
* region-model-manager.cc
(region_model_manager::maybe_fold_binop): Add reference to
PR analyzer/110902.
* region-model-reachability.cc (reachable_regions::handle_sval):
Add symbolic regions for pointers that are conjured svalues for
the LHS of a stmt.
* region-model.cc (region_model::canonicalize): Purge dynamic
extents for regions that aren't referenced.
(get_result_size_in_bytes): New function.
(region_model::on_call_pre): Use get_result_size_in_bytes and
potentially set the dynamic extents of the region pointed to by
the return value.
(region_model::deref_rvalue): Add param "add_nonnull_constraint"
and use it to conditionalize adding the constraint.
(pending_diagnostic_subclass::dubious_allocation_size): Add "stmt"
param to both ctors and use it to initialize new "m_stmt" field.
(pending_diagnostic_subclass::operator==): Use m_stmt; don't use
m_lhs or m_rhs.
(pending_diagnostic_subclass::m_stmt): New field.
(region_model::check_region_size): Generalize to any kind of
pointer svalue by using deref_rvalue rather than checking for
region_svalue. Pass stmt to dubious_allocation_size ctor.
* region-model.h (region_model::deref_rvalue): Add param
"add_nonnull_constraint".
* svalue.cc (conjured_svalue::lhs_value_p): New function.
* svalue.h (conjured_svalue::lhs_value_p): New decl.
gcc/testsuite/ChangeLog:
PR analyzer/110426
* gcc.dg/analyzer/allocation-size-1.c: Update expected message to
reflect consolidation of size and assignment into a single event.
* gcc.dg/analyzer/allocation-size-2.c: Likewise.
* gcc.dg/analyzer/allocation-size-3.c: Likewise.
* gcc.dg/analyzer/allocation-size-4.c: Likewise.
* gcc.dg/analyzer/allocation-size-multiline-1.c: Likewise.
* gcc.dg/analyzer/allocation-size-multiline-2.c: Likewise.
* gcc.dg/analyzer/allocation-size-multiline-3.c: Likewise.
* gcc.dg/analyzer/attr-alloc_size-1.c: New test.
* gcc.dg/analyzer/attr-alloc_size-2.c: New test.
* gcc.dg/analyzer/attr-alloc_size-3.c: New test.
* gcc.dg/analyzer/explode-4.c: New test.
* gcc.dg/analyzer/taint-size-1.c: Add test coverage for
__attribute__ alloc_size.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
Yan Simonaytes [Tue, 25 Jul 2023 17:43:19 +0000 (20:43 +0300)]
i386: eliminate redundant operands of VPTERNLOG
As mentioned in PR 110202, GCC may be presented with input where the control
word of the VPTERNLOG intrinsic implies that some of its operands do not
affect the result. In that case, we can eliminate redundant operands
of the instruction by substituting any other operand in their place.
This removes false dependencies.
For instance, instead of (252 = 0xfc = _MM_TERNLOG_A | _MM_TERNLOG_B)
vpternlogq $252, %zmm2, %zmm1, %zmm0
emit
vpternlogq $252, %zmm0, %zmm1, %zmm0
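At the C level the first case corresponds roughly to the following hedged
sketch (using the AVX-512 intrinsic; not part of the new tests):

  #include <immintrin.h>

  /* With imm8 = 0xfc the result depends only on operands A and B, so any
     register (here the destination itself) may be substituted for the
     unused third operand, removing the false dependency on it.  */
  __m512i a_or_b (__m512i a, __m512i b, __m512i c)
  {
    return _mm512_ternarylogic_epi64 (a, b, c, 0xfc);
  }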
When VPTERNLOG is invariant w.r.t first and second operands, and the
third operand is memory, load memory into the output operand first, i.e.
instead of (85 = 0x55 = ~_MM_TERNLOG_C)
PR target/110202
* config/i386/i386-protos.h
(vpternlog_redundant_operand_mask): Declare.
(substitute_vpternlog_operands): Declare.
* config/i386/i386.cc
(vpternlog_redundant_operand_mask): New helper.
(substitute_vpternlog_operands): New function. Use them...
* config/i386/sse.md: ... here in new VPTERNLOG define_splits.
gcc/testsuite/ChangeLog:
PR target/110202
* gcc.target/i386/invariant-ternlog-1.c: New test.
* gcc.target/i386/invariant-ternlog-2.c: New test.
Roger Sayle [Fri, 4 Aug 2023 15:26:06 +0000 (16:26 +0100)]
Specify signed/unsigned/dontcare in calls to extract_bit_field_1.
This patch is inspired by Jakub's work on PR rtl-optimization/110717.
The bitfield example described in comment #2, looks like:
struct S { __int128 a : 69; };
unsigned type bar (struct S *p) {
  return p->a;
}
which on x86_64 with -O2 currently generates:
bar: movzbl 8(%rdi), %ecx
movq (%rdi), %rax
andl $31, %ecx
movq %rcx, %rdx
salq $59, %rdx
sarq $59, %rdx
ret
The ANDL $31 is interesting... we first extract an unsigned 69-bit bitfield
by masking/clearing the top bits of the most significant word, and then
it gets sign-extended, by left shifting and arithmetic right shifting.
Obviously, this bit-wise AND is redundant; for signed bit-fields we don't
require these bits to be cleared if we're about to set them appropriately.
This patch eliminates this redundancy in the middle-end, during RTL
expansion, by extending the extract_bit_field APIs so that the integer
UNSIGNEDP argument takes a special value; 0 indicates the field should
be sign extended, 1 (any non-zero value) indicates the field should be
zero extended, but -1 indicates a third option, that we don't care how
or whether the field is extended. By passing and checking this sentinel
value at the appropriate places we avoid the useless bit masking (on
all targets).
For the test case above, with this patch we now generate:
bar: movzbl 8(%rdi), %ecx
movq (%rdi), %rax
movq %rcx, %rdx
salq $59, %rdx
sarq $59, %rdx
ret
2023-08-04 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* expmed.cc (extract_bit_field_1): Document that an UNSIGNEDP
value of -1 is equivalent to don't care.
(extract_integral_bit_field): Indicate that we don't require
the most significant word to be zero extended, if we're about
to sign extend it.
(extract_fixed_bit_field_1): Document that an UNSIGNEDP value
of -1 is equivalent to don't care. Don't clear the most
significant bits with AND mask when UNSIGNEDP is -1.
gcc/testsuite/ChangeLog
* gcc.target/i386/pr110717-2.c: New test case.
Roger Sayle [Fri, 4 Aug 2023 15:23:38 +0000 (16:23 +0100)]
i386: Split SUBREGs of SSE vector registers into vec_select insns.
This patch is the final piece in the series to improve the ABI issues
affecting PR 88873. The previous patches tackled inserting DFmode
values into V2DFmode registers, by introducing insvti_{low,high}part
patterns. This patch improves the extraction of DFmode values from
V2DFmode registers via TImode intermediates.
I'd initially thought this would require new extvti_{low,high}part
patterns to be defined, but all that's required is to recognize that
the SUBREG idioms produced by combine are equivalent to (forms of)
vec_select patterns. The target-independent middle-end can't be sure
that the appropriate vec_select instruction exists on the target,
hence doesn't canonicalize a SUBREG of a vector mode as a vec_select,
but the backend can provide a define_split stating where and when
this is useful, for example, considering whether the operand is in
memory, or whether !TARGET_SSE_MATH and the destination is i387.
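A hedged sketch of the extraction shape this targets (not the pr88873.c
testcase itself):

  /* Reading the DFmode high and low halves of a V2DFmode value, which
     combine expresses as SUBREGs of a TImode intermediate.  */
  typedef double v2df __attribute__ ((vector_size (16)));

  double highpart (v2df x) { return x[1]; }
  double lowpart (v2df x) { return x[0]; }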
For pr88873.c, gcc -O2 -march=cascadelake currently generates:
The improvement is even more dramatic when compared to the original
29 instructions shown in comment #8. GCC 13, for example, required
12 transfers to/from memory.
2023-08-04 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/i386/sse.md (define_split): Convert highpart:DF extract
from V2DFmode register into a sse2_storehpd instruction.
(define_split): Likewise, convert lowpart:DF extract from V2DF
register into a sse2_storelpd instruction.
gcc/testsuite/ChangeLog
* gcc.target/i386/pr88873.c: Tweak to check for improved code.
Qing Zhao [Fri, 4 Aug 2023 14:24:32 +0000 (14:24 +0000)]
Add documentation for -Wflex-array-member-not-at-end.
'-Wflex-array-member-not-at-end (C and C++ only)'
Warn when a structure containing a C99 flexible array member as the
last field is not at the end of another structure. This warning
warns e.g. about
struct flex { int length; char data[]; };
struct mid_flex { int m; struct flex flex_data; int n; };
gcc/ChangeLog:
* doc/invoke.texi (-Wflex-array-member-not-at-end): Document
new option.
The insn gets the same value in r26 and r30. The culprit is clobbering
r30 and using r30 as input. In such a situation LRA wrongly assumes that
r30 does not live before the insn. The patch fixes this.
gcc/ChangeLog:
* lra-lives.cc (process_bb_lives): Check input insn pattern hard regs
against early clobber hard regs.
Tamar Christina [Fri, 4 Aug 2023 12:52:46 +0000 (13:52 +0100)]
middle-end: clean up vect testsuite using pragma novector
The support for early break vectorization breaks lots of scan vect and slp
testcases because they assume that loops with abort () in them cannot be
vectorized. Additionally it breaks the point of having a scalar loop to check
the output of the vectorizer if that loop is also vectorized.
For that reason this adds #pragma GCC novector to all tests which have a scalar
loop that would be vectorized using this patch series.
FWIW, none of these tests were failing to vectorize or run before the pragma.
The tests that did point to some issues were copied to the early break test
suite as well.
Tamar Christina [Fri, 4 Aug 2023 12:51:16 +0000 (13:51 +0100)]
frontend: Add novector C pragma
FORTRAN currently has a pragma NOVECTOR for indicating that vectorization should
not be applied to a particular loop.
ICC/ICX also has such a pragma for C and C++ called #pragma novector.
As part of this patch series I need a way to easily turn off vectorization of
particular loops, particularly for testsuite reasons.
This patch proposes a #pragma GCC novector that does the same for C
as gfortran does for FORTRAN and what ICC/ICX does for C.
I added only some basic tests here, but the next patch in the series uses this
in the testsuite in about ~800 tests.
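A minimal usage sketch (assumed test shape, not one of the new testcases):

  /* Ask GCC not to vectorize the following loop, mirroring what gfortran's
     !GCC$ novector directive does for Fortran.  */
  void scale (int *restrict a, int n)
  {
  #pragma GCC novector
    for (int i = 0; i < n; i++)
      a[i] *= 2;
  }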
gcc/c-family/ChangeLog:
* c-pragma.h (enum pragma_kind): Add PRAGMA_NOVECTOR.
* c-pragma.cc (init_pragma): Use it.
gcc/c/ChangeLog:
* c-parser.cc (c_parser_while_statement, c_parser_do_statement,
c_parser_for_statement, c_parser_statement_after_labels,
c_parse_pragma_novector, c_parser_pragma): Wire through novector and
default to false.
In GCC 11 we implemented the vectorizer optab for widening left shifts,
however this optab is only supported for uniform shift constants.
At the moment GCC still has two loop vectorization strategies (classical loop and
SLP based loop vec) and the optab is implemented as a scalar pattern.
This means that when we apply it to a non-uniform constant inside a loop we only
find out during SLP build that the constants aren't uniform. At this point it's
too late and we lose SLP entirely.
Over the years I've tried various options but none of it works well:
1. Dissolving patterns during SLP build (problematic, also dissolves them for
non-slp).
2. Optionally ignoring patterns for SLP build (problematic, ends up interfering
with relevancy detection).
3. Relaxing constraint on SLP build to allow non-constant values and dissolving
them after SLP build using an SLP pattern. (problematic, ends up breaking
shift reassociation).
As a result we've concluded that for now this pattern should just be removed
and formed during RTL.
The plan is to move this to an SLP only pattern once we remove classical loop
vectorization support from GCC, at which time we can also properly support SVE's
Top and Bottom variants.
This removes the optab and reworks the RTL to recognize both the vector variant
and the intrinsics variant. Also just simplifies all these patterns.
Tamar Christina [Fri, 4 Aug 2023 12:48:56 +0000 (13:48 +0100)]
gensupport: Don't segfault on empty attrs list
Currently we segfault when len == 0 for an attribute list.
Essentially [cons: =0, 1, 2, 3; attrs: ] segfaults but should be equivalent to
[cons: =0, 1, 2, 3] and [cons: =0, 1, 2, 3; attrs:]. This fixes it by just
returning early and leaving it to the validators whether this should error out
or not.
gcc/ChangeLog:
* gensupport.cc (conlist): Support length 0 attribute.
Tamar Christina [Fri, 4 Aug 2023 12:48:35 +0000 (13:48 +0100)]
AArch64: update costing for combining vector conditionals
Boolean comparisons have different costs depending on the mode, e.g.
for SVE, a && b doesn't require an additional instruction when a or b
is predicated by combining the predicate of the one operation into the
second one. At the moment though we only fuse compares so this update
requires one of the operands to be a comparison.
Scalars also don't require this because the non-ifcvt variant is a series of
branches where following the branch sequences is itself a natural AND.
Advanced SIMD however does require an actual AND to combine the boolean values.
As such this patch discounts Scalar and SVE boolean operation latency and
throughput.
With this patch comparison heavy code prefers SVE as it should, especially in
cases with SVE VL == Advanced SIMD VL where previously the SVE prologue costs
would tip it towards Advanced SIMD.
gcc/ChangeLog:
* config/aarch64/aarch64.cc (aarch64_bool_compound_p): New.
(aarch64_adjust_stmt_cost, aarch64_vector_costs::count_ops): Use it.
Tamar Christina [Fri, 4 Aug 2023 12:46:36 +0000 (13:46 +0100)]
AArch64: update costing for MLA by invariant
When determining issue rates we currently discount non-constant MLA accumulators
for Advanced SIMD but don't do it for the latency.
This means the costs for Advanced SIMD with a constant accumulator are wrong and
results in us costing SVE and Advanced SIMD the same. This can cause us to
vectorize with Advanced SIMD instead of SVE in some cases.
This patch adds the same discount for SVE and Scalar as we do for issue rate.
This gives a 5% improvement in fotonik3d_r in SPECCPU 2017 on large
Neoverse cores.
gcc/ChangeLog:
* config/aarch64/aarch64.cc (aarch64_multiply_add_p): Update handling
of constants.
(aarch64_adjust_stmt_cost): Use it.
(aarch64_vector_costs::count_ops): Likewise.
(aarch64_vector_costs::add_stmt_cost): Pass vinfo to
aarch64_adjust_stmt_cost.
Richard Biener [Fri, 4 Aug 2023 10:11:45 +0000 (12:11 +0200)]
tree-optimization/110838 - vectorization of widened right shifts
The following fixes a problem with my last attempt of avoiding
out-of-bound shift values for vectorized right shifts of widened
operands. Instead of truncating the shift amount with a bitwise
and we actually need to saturate it to the target precision.
The following does that and adds test coverage for the constant
and invariant but variable case that would previously have failed.
PR tree-optimization/110838
* tree-vect-patterns.cc (vect_recog_over_widening_pattern):
Fix right-shift value sanitizing. Properly emit external
def mangling in the preheader rather than in the pattern
def sequence where it will fail vectorizing.
mid-end: Use integral time intervals in timevar.cc
On some AArch64 bootstrapped builds, we were getting a flaky test
because the floating point operations in `get_time` were being fused
with the floating point operations in `timevar_accumulate`.
This meant that the rounding behaviour of our multiplication with
`ticks_to_msec` was different when used in `timer::start` and when
performed in `timer::stop`. These extra inaccuracies led to the
testcase `g++.dg/ext/timevar1.C` being flaky on some hardware.
------------------------------
Avoiding the inlining which was agreed to be undesirable. Three
alternative approaches:
1) Use `-ffp-contract=on` to avoid this particular optimisation.
2) Adjusting the code so that the "tolerance" is always of the order of
a "tick".
3) Recording times and elapsed differences in integral values.
- Could be in terms of a standard measurement (e.g. nanoseconds or
microseconds).
- Could be in terms of whatever integral value ("ticks" /
seconds & microseconds / "clock ticks") is returned from the syscall
chosen at configure time.
While `-ffp-contract=on` removes the problem that I bumped into, there
has been a similar bug on x86 that was to do with a different floating
point problem that also happens after `get_time` and
`timevar_accumulate` both being inlined into the same function. Hence
it seems worth choosing a different approach.
Of the two other solutions, recording measurements in integral values
seems the most robust against slightly "off" measurements being
presented to the user -- even though it could avoid the ICE that creates
a flaky test.
I considered storing time in whatever units our syscall returns and
normalising them at the time we print out rather than normalising them
to nanoseconds at the point we record our "current time". The logic
being that normalisation could have some rounding effect (e.g. if
TICKS_PER_SECOND is 3) that would be taken into account in calculations.
I decided against it in order to give the values recorded in
`timevar_time_def` some interpretive value so it's easier to read the
code. Compared to the small rounding that would represent a tiny amount
of time and AIUI can not trigger the same kind of ICE's as we are
attempting to fix, said interpretive value seems more valuable.
Recording time in microseconds seemed reasonable since all obvious
values for ticks and `getrusage` are at microsecond granularity or less
precise. That said, since TICKS_PER_SECOND and CLOCKS_PER_SEC are both
variables given to use by the host system I was not sure of that enough
to make this decision.
------------------------------
timer::all_zero is ignoring rows which are inconsequential to the user
and would be printed out as all zeros. Since upon printing rows we
convert to the same double value and print out the same precision as
before, we return true/false based on the same amount of time as before.
timer::print_row casts to a floating point measurement in units of
seconds as was printed out before.
timer::validate_phases -- I'm printing out nanoseconds here rather than
floating point seconds since this is an error message for when things
have "gone wrong" printing out the actual nanoseconds that have been
recorded seems like the best approach.
N.b. since we now print out nanoseconds instead of floating point value
the padding requirements are different. Originally we were padding to
24 characters and printing 18 decimal places. This looked odd with the
now visually smaller values getting printed. I judged 13 characters
(corresponding to 2 hours) to be a reasonable point at which our
alignment could start to degrade and this provides a more compact output
for the majority of cases (checked by triggering the error case via
GDB).
------------------------------
N.b. I use a literal 1000000000 for "NANOSEC_PER_SEC". I believe this
would fit in an integer on all hosts that GCC supports, but am not
certain there are not strange integer sizes we support hence am pointing
it out for special attention during review.
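A minimal sketch of the integral bookkeeping described above, reusing the
names from the ChangeLog below; the exact conversion expressions here are
only assumptions:

  #include <stdint.h>

  #define NANOSEC_PER_SEC 1000000000

  /* TICKS_PER_SECOND normally comes from the host configuration; a value
     is assumed here only to keep the sketch self-contained.  */
  #ifndef TICKS_PER_SECOND
  #define TICKS_PER_SECOND 100
  #endif

  /* Accumulate tick counts as integral nanoseconds...  */
  #define TICKS_TO_NANOSEC(T) \
    ((uint64_t) (T) * (NANOSEC_PER_SEC / TICKS_PER_SECOND))

  /* ... and only convert to floating point seconds when printing.  */
  static double
  nanosec_to_floating_sec (uint64_t ns)
  {
    return (double) ns / NANOSEC_PER_SEC;
  }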
------------------------------
No expected change in generated code.
Bootstrapped and regtested on AArch64 with no regressions.
Hope this is acceptable -- I had originally planned to use
`-ffp-contract` as agreed until I saw mention of the old x86 bug in the
same area which was not to do with floating point contraction of
operations (PR 99903).
gcc/ChangeLog:
PR middle-end/110316
PR middle-end/99903
* timevar.cc (NANOSEC_PER_SEC, TICKS_TO_NANOSEC,
CLOCKS_TO_NANOSEC, nanosec_to_floating_sec, percent_of): New.
(TICKS_TO_MSEC, CLOCKS_TO_MSEC): Remove these macros.
(timer::validate_phases): Use integral arithmetic to check
validity.
(timer::print_row, timer::print): Convert from integral
nanoseconds to floating point seconds before printing.
(timer::all_zero): Change limit to nanosec count instead of
fractional count of seconds.
(make_json_for_timevar_time_def): Convert from integral
nanoseconds to floating point seconds before recording.
* timevar.h (struct timevar_time_def): Update all measurements
to use uint64_t nanoseconds rather than seconds stored in a
double.
Richard Biener [Fri, 4 Aug 2023 09:24:49 +0000 (11:24 +0200)]
tree-optimization/110838 - less aggressively fold out-of-bound shifts
The following adjusts the shift simplification patterns to avoid
touching out-of-bound shift value arithmetic right shifts of
possibly negative values. While simplifying those to zero isn't
wrong it's violating the principle of least surprise.
PR tree-optimization/110838
* match.pd (([rl]shift @0 out-of-bounds) -> zero): Restrict
the arithmetic right-shift case to non-negative operands.
Andrew Pinski [Wed, 2 Aug 2023 21:49:00 +0000 (14:49 -0700)]
Fix PR 110874: infinite loop in gimple_bitwise_inverted_equal_p with fre
This changes gimple_bitwise_inverted_equal_p to use a 2 different match patterns
to try to match bit_not wrapped with a possible nop_convert and a comparison
also wrapped with a possible nop_convert. This is to avoid being recursive.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
PR tree-optimization/110874
* gimple-match-head.cc (gimple_bit_not_with_nop): New declaration.
(gimple_maybe_cmp): Likewise.
(gimple_bitwise_inverted_equal_p): Rewrite to use gimple_bit_not_with_nop
and gimple_maybe_cmp instead of being recursive.
* match.pd (bit_not_with_nop): New match pattern.
(maybe_cmp): Likewise.
gcc/testsuite/ChangeLog:
PR tree-optimization/110874
* gcc.c-torture/compile/pr110874-a.c: New test.
Drew Ross [Fri, 4 Aug 2023 07:08:05 +0000 (09:08 +0200)]
match.pd: Canonicalize (signed x << c) >> c [PR101955]
Canonicalizes (signed x << c) >> c into the lowest
precision(type) - c bits of x IF those bits have a mode precision or a
precision of 1. Also combines this rule with (unsigned x << c) >> c -> x &
((unsigned)-1 >> c) to prevent duplicate pattern.
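A hedged illustration of the new canonicalization:

  /* With 32-bit int and c = 24, precision(type) - c is 8, a mode
     precision, so the shift pair is canonicalized to a sign extension
     of the low 8 bits of x.  */
  int low8_sext (int x)
  {
    return (x << 24) >> 24;
  }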
PR middle-end/101955
* match.pd ((signed x << c) >> c): New canonicalization.
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(class vfnmsac_frm): New class for vfnmsac frm.
(vfnmsac_frm_obj): New declaration.
(BASE): Ditto.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-functions.def
(vfnmsac_frm): New function definition.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-single-negate-multiply-sub.c:
New test.
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(class vfmsac_frm): New class for vfmsac frm.
(vfmsac_frm_obj): New declaration.
(BASE): Ditto.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-functions.def
(vfmsac_frm): New function definition.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-single-multiply-sub.c: New test.
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(class vfnmacc_frm): New class for vfnmacc.
(vfnmacc_frm_obj): New declaration.
(BASE): Ditto.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-functions.def
(vfnmacc_frm): New function definition.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-single-negate-multiply-add.c:
New test.
Hao Liu [Fri, 4 Aug 2023 02:32:52 +0000 (10:32 +0800)]
AArch64: Avoid the ICE on empty reduction definition in info_for_reduction [PR110625]
Fix the assertion failure on an empty reduction definition in info_for_reduction.
Even if a stmt is live, it may still have an empty reduction definition. Check the
reduction definition instead of the live info before calling info_for_reduction.
gcc/ChangeLog:
PR target/110625
* config/aarch64/aarch64.cc (aarch64_force_single_cycle): Check
STMT_VINFO_REDUC_DEF to avoid failures in info_for_reduction.
So the first loop is the outer loop and the second/third loops are nested. The fourth loop is not critical.
Precise iteration counts are unknown (473 and 100 come from the profile)
Nested loop has following form:
for (k = 1; k <= M; k++) {
mc[k] = mpp[k-1] + tpmm[k-1];
if ((sc = ip[k-1] + tpim[k-1]) > mc[k]) mc[k] = sc;
if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k]) mc[k] = sc;
if ((sc = xmb + bp[k]) > mc[k]) mc[k] = sc;
mc[k] += ms[k];
if (mc[k] < -INFTY) mc[k] = -INFTY;
if (k < M) {
ic[k] = mpp[k] + tpmi[k];
if ((sc = ip[k] + tpii[k]) > ic[k]) ic[k] = sc;
ic[k] += is[k];
if (ic[k] < -INFTY) ic[k] = -INFTY;
}
We do quite some belly dancing here.
1) loop-ch slightly misupdates the profile, so the estimate of 99
does not match the profile estimate of 100.
2) loop-split splits on if (k < M) and produces two loops.
It fails to notice that the second loop never iterates.
It used to misupdate the profile a lot, which later caused the internal
loop to become cold. This is fixed now.
3) loop-dist introduces runtime aliasing checks for both loops
4) the tree vectorizer vectorizes some of the copies of the loop produced
and drops expected iteration counts
5) loop peeling peels the loops with expected low iteration counts
6) complete loop unrolling kills some loops in prologues/epilogues.
We end up with quite many loops and run out of registers:
iterations by profile: 5.312499 (unreliable, maybe flat)
this is vectorized internal loops after loop peeling
iterations by profile: 0.009495 (unreliable, maybe flat)
iterations by profile: 0.009495 (unreliable, maybe flat)
iterations by profile: 0.009495 (unreliable, maybe flat)
iterations by profile: 0.009495 (unreliable, maybe flat)
Those are all versioned/peeled and vectorized variants of the loop never looping
iterations by profile: 100.000008 (unreliable)
iterations by profile: 100.000000 (unreliable)
Those are variants with failed aliasing checks
iterations by profile: 9.662853 (unreliable, maybe flat)
iterations by profile: 4.646072 (unreliable)
iterations by profile: 100.000007 (unreliable)
iterations by profile: 5.312500 (unreliable)
iterations by profile: 473.497707 (reliable)
This is loop 1
iterations by profile: 100.999596 (reliable)
This is the loop 4.
This patch fixes loop iteration estimate update after loop split so we get:
iterations by profile: 5.312499 (unreliable, maybe flat) entry count:12742188 (guessed, freq 149.9081)
This is the remainder of the peeled vectorized loop 2. It misses an estimate, which is correct since after peeling it 6 times it is essentially
impossible to tell what the remaining loop profile is (without histograms).
iterations by profile: 0.009496 (unreliable, maybe flat) entry count:374801 (guessed, freq 4.4094)
Peeled split part of loop 2 (one that never loops). We ought to work this out
but at least w
estimate 99
iterations by profile: 9.662853 (unreliable, maybe flat) entry count:35505353 (guessed, freq 417.7100)
Profile here mismatches estimate - I will need to work out why.
estimate 5
iterations by profile: 4.646072 (unreliable) entry count:31954818 (guessed, freq 375.9390)
This is vectorized but not peeled loop 3
estimate 99
iterations by profile: 100.000007 (unreliable) entry count:7101070 (guessed, freq 83.5420)
Unvectorized variant of loop 3
estimate 5
iterations by profile: 5.312500 (unreliable) entry count:25563855 (guessed, freq 300.7512)
Another vectorized variant of loop 3
estimate 472
iterations by profile: 473.497707 (reliable) entry count:84821 (precise, freq 0.9979)
Outer loop
estimate 100
iterations by profile: 100.999596 (reliable) entry count:84167 (precise, freq 0.9902)
loop 4, not vectorized/peeled
So there is still work to do on this testcase, but with the patch we prevent 3 useless loops.
Bootstrapped/regtested x86_64-linux, plan to commit it later today.
Jan Hubicka [Thu, 3 Aug 2023 20:42:27 +0000 (22:42 +0200)]
Fix profiledbootstrap
Profiledbootstrap fails with an ICE in update_loop_exit_probability_scale_dom_bbs
called from loop unrolling.
The reason is that in relatively rare situations we may run into a case where a
loop has multiple exits and all are considered likely, but then we scale down
the profile and one of the exits becomes unlikely.
We pass around unadjusted_exit_count to scale the exit probability correctly. In this
case we may end up using an uninitialized value, and the profile-count type intentionally
bombs on that.
Instead of reading the known zero bits in IPA, read the value/mask
pair which is available.
There is a slight change of behavior here. I have removed the check
for SSA_NAME, as the ranger can calculate the range and value/mask for
INTEGER_CST. This simplifies the code a bit, since there's no special
casing when setting the jfunc bits. The default range for VR is
undefined, so I think it's safe just to check for undefined_p().
gcc/ChangeLog:
* ipa-prop.cc (ipa_compute_jump_functions_for_edge): Read global
value/mask.
gcc/testsuite/ChangeLog:
* g++.dg/ipa/pure-const-3.C: Move source to...
* g++.dg/ipa/pure-const-3.h: ...here, and adjust original test
accordingly.
* g++.dg/ipa/pure-const-3b.C: New.
* config/riscv/riscv.cc (riscv_expand_conditional_move): Recognize
various Zicond patterns.
* config/riscv/riscv.md (mov<mode>cc): Allow TARGET_ZICOND. Use
sfb_alu_operand for both arms of the conditional move.
This patch adds tests for the following builtins:
__builtin_preserve_enum_value
__builtin_btf_type_id
__builtin_preserve_type_info
gcc/testsuite/ChangeLog:
* gcc.target/bpf/core-builtin-enumvalue.c: New test.
* gcc.target/bpf/core-builtin-enumvalue-errors.c: New test.
* gcc.target/bpf/core-builtin-enumvalue-opt.c: New test.
* gcc.target/bpf/core-builtin-fieldinfo-const-elimination.c: New test.
* gcc.target/bpf/core-builtin-fieldinfo-errors-1.c: Changed.
* gcc.target/bpf/core-builtin-fieldinfo-errors-2.c: Changed.
* gcc.target/bpf/core-builtin-type-based.c: New test.
* gcc.target/bpf/core-builtin-type-id.c: New test.
* gcc.target/bpf/core-support.h: New test.
This patch updates the support for the BPF CO-RE builtins
__builtin_preserve_access_index and __builtin_preserve_field_info,
and adds support for the CO-RE builtins __builtin_btf_type_id,
__builtin_preserve_type_info and __builtin_preserve_enum_value.
These CO-RE relocations are now converted to __builtin_core_reloc which
abstracts all of the original builtins in a polymorphic relocation
specific builtin.
The builtin processing is now split in 2 stages, the first (pack) is
executed right after the front-end and the second (process) right before
the asm output.
In expand pass the __builtin_core_reloc is converted to a
unspec:UNSPEC_CORE_RELOC rtx entry.
The data required to process the builtin is now collected in the packing
stage (after front-end), not allowing the compiler to optimize any of
the relevant information required to compose the relocation when
necessary.
At expansion, that information is recovered and CTF/BTF is queried to
construct the information that will be used in the relocation.
At this point the relocation is added to specific section and the
builtin is expanded to the expected default value for the builtin.
In order to process __builtin_preserve_enum_value, it was necessary to
hook the front-end to collect the original enum value reference.
This is needed since the parser folds all the enum values to its
integer_cst representation.
More details can be found within the core-builtins.cc.
Regtested in host x86_64-linux-gnu and target bpf-unknown-none.
Andrew MacLeod [Tue, 1 Aug 2023 18:33:09 +0000 (14:33 -0400)]
Add operand ranges to op1_op2_relation API.
With additional floating point relations in the pipeline, we can no
longer tell based on the LHS what the relation of X < Y is without knowing
the type of X and Y.
* gimple-range-fold.cc (fold_using_range::range_of_range_op): Add
ranges to the call to relation_fold_and_or.
(fold_using_range::relation_fold_and_or): Add op1 and op2 ranges.
(fur_source::register_outgoing_edges): Add op1 and op2 ranges.
* gimple-range-fold.h (relation_fold_and_or): Adjust params.
* gimple-range-gori.cc (gori_compute::compute_operand_range): Add
a varying op1 and op2 to call.
* range-op-float.cc (range_operator::op1_op2_relation): New defaults.
(operator_equal::op1_op2_relation): New float version.
(operator_not_equal::op1_op2_relation): Ditto.
(operator_lt::op1_op2_relation): Ditto.
(operator_le::op1_op2_relation): Ditto.
(operator_gt::op1_op2_relation): Ditto.
(operator_ge::op1_op2_relation): Ditto.
* range-op-mixed.h (operator_equal::op1_op2_relation): New float
prototype.
(operator_not_equal::op1_op2_relation): Ditto.
(operator_lt::op1_op2_relation): Ditto.
(operator_le::op1_op2_relation): Ditto.
(operator_gt::op1_op2_relation): Ditto.
(operator_ge::op1_op2_relation): Ditto.
* range-op.cc (range_op_handler::op1_op2_relation): Dispatch new
variations.
(range_operator::op1_op2_relation): Add extra params.
(operator_equal::op1_op2_relation): Ditto.
(operator_not_equal::op1_op2_relation): Ditto.
(operator_lt::op1_op2_relation): Ditto.
(operator_le::op1_op2_relation): Ditto.
(operator_gt::op1_op2_relation): Ditto.
(operator_ge::op1_op2_relation): Ditto.
* range-op.h (range_operator): New prototypes.
(range_op_handler): Ditto.
Jeff Law [Thu, 3 Aug 2023 14:57:23 +0000 (10:57 -0400)]
[committed][RISC-V] Remove errant hunk of code
I'm using this hunk locally to more thoroughly exercise the zicond paths
due to inaccuracies elsewhere in the costing model. It was never
supposed to be part of the costing commit though. And as we've seen
it's causing problems with the vector bits.
While my testing isn't complete, this hunk was never supposed to be
pushed and it's causing problems. So I'm just ripping it out.
There's a bigger TODO in this space WRT a top-to-bottom evaluation of
the costing on RISC-V. I'm still formulating what that evaluation is
going to look like, so don't hold your breath waiting on it.
Pushed to the trunk.
gcc/
* config/riscv/riscv.cc (riscv_rtx_costs): Remove errant hunk from
recent commit.
Richard Biener [Thu, 3 Aug 2023 13:21:51 +0000 (15:21 +0200)]
[libbacktrace] fix up broken test
zstdtest has some inline data where some testcases lack the
uncompressed length field. Thus it computes that but still
ends up allocating memory for the uncompressed buffer based on
that (zero) length. Oops. Causes memory corruption if the
allocator returns non-NULL.
libbacktrace/
* zstdtest.c (test_samples): Properly compute the allocation
size for the uncompressed data.
can_div_trunc_p (a, b, &Q, &r) tries to compute a Q and r that
satisfy the usual conditions for truncating division:
(1) a = b * Q + r
(2) |b * Q| <= |a|
(3) |r| < |b|
We can compute Q using the constant component (the case when
all indeterminates are zero). Since |r| < |b| for the constant
case, the requirements for indeterminate xi with coefficients
ai (for a) and bi (for b) are:
(2') |bi * Q| <= |ai|
(3') |ai - bi * Q| <= |bi|
(See the big comment for more details, restrictions, and reasoning).
However, the function works on abstract arithmetic types, and so
it has to be careful not to introduce new overflow. The code
therefore only handled the extreme for (3'), that is:
|ai - bi * Q| = |bi|
for the case where Q is zero.
Looking at it again, the overflow issue is a bit easier to handle than
I'd originally thought (or so I hope). This patch therefore extends the
code to handle |ai - bi * Q| = |bi| for all Q, with Q = 0 no longer
being a separate case.
The net effect is to allow the function to succeed for things like:
(a0 + b1 (Q+1) x) / (b0 + b1 x)
where Q = a0 / b0, with various sign conditions. E.g. we now handle:
(7 + 8x) / (4 + 4x)
with Q = 1 and r = 3 + 4x,
gcc/
* poly-int.h (can_div_trunc_p): Succeed for more boundary conditions.
gcc/testsuite/
* gcc.dg/plugin/poly-int-tests.h (test_can_div_trunc_p_const)
(test_can_div_trunc_p_const): Add more tests.
Richard Biener [Mon, 31 Jul 2023 12:44:52 +0000 (14:44 +0200)]
tree-optimization/110838 - vectorization of widened shifts
The following makes sure to limit the shift operand when vectorizing
(short)((int)x >> 31) via (short)x >> 31 as the out of bounds shift
operand otherwise invokes undefined behavior. When we determine
whether we can demote the operand we know we at most shift in the
sign bit so we can adjust the shift amount.
Note this has the possibility of un-CSEing common shift operands
as there's no good way to share pattern stmts between patterns.
We'd have to separately pattern recognize the definition.
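A hedged illustration of the shape being pattern-matched:

  /* When the computation is demoted from int to short, the shift amount 31
     is out of range for the narrower precision and has to be saturated (to
     15 here), since at most the sign bit is shifted in anyway.  */
  void f (short *restrict out, short *restrict in, int n)
  {
    for (int i = 0; i < n; i++)
      out[i] = (short) ((int) in[i] >> 31);
  }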
PR tree-optimization/110838
* tree-vect-patterns.cc (vect_recog_over_widening_pattern):
Adjust the shift operand of RSHIFT_EXPRs.
Richard Biener [Thu, 3 Aug 2023 11:11:12 +0000 (13:11 +0200)]
tree-optimization/110702 - avoid zero-based memory references in IVOPTs
Sometimes IVOPTs chooses a weird induction variable which downstream
leads to issues. Most of the time we can fend those off during costing
by rejecting the candidate but it looks like the address description
costing synthesizes is different from what we end up generating so
the following fixes things up at code generation time. Specifically
we avoid the create_mem_ref_raw fallback which uses a literal zero
address base with the actual base in index2. For the case in question
we have the address
type = unsigned long
offset = 0
elements = {
[0] = &e * -3,
[1] = (sizetype) a.9_30 * 232,
[2] = ivtmp.28_44 * 4
}
which references the object at address zero. The patch below
recognizes the fallback after the fact and transforms the
TARGET_MEM_REF memory reference into a LEA for which this form
isn't problematic:
hereby avoiding the correctness issue. We'd later conclude the
program terminates at the null pointer dereference and make the
function pure, miscompiling the main function of the testcase.
PR tree-optimization/110702
* tree-ssa-loop-ivopts.cc (rewrite_use_address): When
we created a NULL pointer based access rewrite that to
a LEA.
ada: Rewrite Set_Image_*_Unsigned routines to remove recursion.
This rewriting removes algorithm inefficiencies due to unnecessary
recursion and copying. The new version has much smaller and statically known
stack requirements and is additionally up to 2x faster.
Eric Botcazou [Tue, 25 Jul 2023 21:03:22 +0000 (23:03 +0200)]
ada: Fix spurious error on 'Input of private type with Type_Invariant aspect
The problem is that it is necessary to break the privacy during the
expansion of the Input attribute, which may introduce a view mismatch
with the parameter of the routine checking the invariant of the type.
gcc/ada/
* exp_util.adb (Make_Invariant_Call): Convert the expression to
the type of the formal parameter if need be.
Eric Botcazou [Mon, 24 Jul 2023 13:02:25 +0000 (15:02 +0200)]
ada: Adjust again address arithmetics in System.Dwarf_Lines
Using the operator of System.Storage_Elements has introduced a range check
that may be tripped on, so this removes the intermediate conversion to the
Storage_Count subtype that is responsible for it.
gcc/ada/
* libgnat/s-dwalin.adb ("-"): New subtraction operator.
(Enable_Cache): Use it to compute the offset.
(Symbolic_Address): Likewise.
Richard Biener [Wed, 26 Jul 2023 13:23:45 +0000 (15:23 +0200)]
Improve sinking with unrelated defs
statement_sink_location for loads is currently confused about
stores that are not on the paths we are sinking across. The
following replaces the logic that tries to ensure we are not
sinking across stores: instead of walking all immediate virtual
uses and then checking whether found stores are on the paths
we sink through, we now check the live virtual operand at the
sinking location. To obtain the live virtual operand we rely
on the new virtual_operand_live class which provides an overall
cheaper and also more precise way to check the constraints.
* tree-ssa-sink.cc: Include tree-ssa-live.h.
(pass_sink_code::execute): Instantiate virtual_operand_live
and pass it down.
(sink_code_in_bb): Pass down virtual_operand_live.
(statement_sink_location): Get virtual_operand_live and
verify we are not sinking loads across stores by looking up
the live virtual operand at the sink location.
Richard Biener [Wed, 2 Aug 2023 11:33:43 +0000 (13:33 +0200)]
Add virtual operand global liveness computation class
The following adds an on-demand global liveness computation class
computing and caching the live-out virtual operand of basic blocks
and answering live-out, live-in and live-on-edge queries. The flow
is optimized for the intended use in code sinking which will query
live-in and possibly can be optimized further when the originating
query is for live-out.
The code relies on up-to-date immediate dominator information and
on an unchanging virtual operand state.
Richard Biener [Thu, 3 Aug 2023 08:59:52 +0000 (10:59 +0200)]
Swap loop splitting and final value replacement
The following swaps the loop splitting pass and the final value
replacement pass to avoid keeping the IV of the earlier loop
live when not necessary. The existing gcc.target/i386/pr87007-5.c
testcase shows that we otherwise fail to elide an empty loop
later. I don't see any good reason why loop splitting would need
final value replacement, all exit values honor the constraints
we place on loop header PHIs automatically.
* passes.def: Exchange loop splitting and final value
replacement passes.
* gcc.target/i386/pr87007-5.c: Make sure we split the loop
and eliminate both in the end.
s390: Try to emit vlbr/vstbr instead of vperm et al.
gcc/ChangeLog:
* config/s390/s390.cc (expand_perm_as_a_vlbr_vstbr_candidate):
New function which handles bswap patterns for vec_perm_const.
(vectorize_vec_perm_const_1): Call new function.
* config/s390/vector.md (*bswap<mode>): Fix operands in output
template.
(*vstbr<mode>): New insn.
gcc/testsuite/ChangeLog:
* gcc.target/s390/s390.exp: Add subdirectory vxe2.
* gcc.target/s390/vxe2/vlbr-1.c: New test.
* gcc.target/s390/vxe2/vstbr-1.c: New test.
* gcc.target/s390/vxe2/vstbr-2.c: New test.
Alexandre Oliva [Thu, 3 Aug 2023 06:34:31 +0000 (03:34 -0300)]
Introduce -msmp to select /lib_smp/ on ppc-vx6
The .spec files used for linking on ppc-vx6, when the rtp-smp runtime
is selected, add -L flags for /lib_smp/ and /lib/.
There was a problem, though: although /lib_smp/ and /lib/ were to be
searched in this order, and the specs files do that correctly, the
compiler would search /lib/ first regardless, because
STARTFILE_PREFIX_SPEC said so, and specs files cannot override that.
With this patch, we arrange for the presence of -msmp to affect
STARTFILE_PREFIX_SPEC, so that the compiler searches /lib_smp/ rather
than /lib/ for crt files. A separate patch for GNAT ensures that when
the rtp-smp runtime is selected, -msmp is passed to the compiler
driver for linking, along with the --specs flags.
for gcc/ChangeLog
* config/vxworks-smp.opt: New. Introduce -msmp.
* config.gcc: Enable it on powerpc* vxworks prior to 7r*.
* config/rs6000/vxworks.h (STARTFILE_PREFIX_SPEC): Choose
lib_smp when -msmp is present in the command line.
* doc/invoke.texi: Document it.
* config/riscv/riscv.cc (riscv_save_reg_p): Save ra for leaf
when enabling -mno-omit-leaf-frame-pointer
(riscv_option_override): Override omit-frame-pointer.
(riscv_frame_pointer_required): Save s0 for non-leaf function
(TARGET_FRAME_POINTER_REQUIRED): Override definition.
* config/riscv/riscv.opt: Add option support.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/omit-frame-pointer-1.c: New test.
* gcc.target/riscv/omit-frame-pointer-2.c: New test.
* gcc.target/riscv/omit-frame-pointer-3.c: New test.
* gcc.target/riscv/omit-frame-pointer-4.c: New test.
* gcc.target/riscv/omit-frame-pointer-test.c: New test.
Signed-off-by: Yanzhang Wang <yanzhang.wang@intel.com>
Roger Sayle [Thu, 3 Aug 2023 06:12:04 +0000 (07:12 +0100)]
PR target/110792: Early clobber issues with rot32di2_doubleword on i386.
This patch is a conservative fix for PR target/110792, a wrong-code
regression affecting doubleword rotations by BITS_PER_WORD, which
effectively swaps the highpart and lowpart words, when the source to be
rotated resides in memory. The issue is that if the register used to
hold the lowpart of the destination is mentioned in the address of
the memory operand, the current define_insn_and_split unintentionally
clobbers it before reading the highpart.
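A hedged sketch of the problematic shape (not the pr110792 testcase itself):

  /* A 128-bit rotate by 64 swaps the two 64-bit halves of a value loaded
     from memory whose address involves the register that will also hold
     the low half of the result.  */
  unsigned __int128 rot64 (unsigned __int128 *p, unsigned long i)
  {
    unsigned __int128 x = p[i];
    return (x << 64) | (x >> 64);
  }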
Hence, for the testcase, the incorrectly generated code looks like:
but unfortunately this currently generates significantly worse code,
due to a strange choice of reloads (effectively memcpy), which ends up
looking like:
salq $4, %rdi // calculate address
movdqa WHIRL_S(%rdi), %xmm0 // load the double word in SSE reg.
movaps %xmm0, -16(%rsp) // store the SSE reg back to the stack
movq -8(%rsp), %rdi // load highpart
movq -16(%rsp), %rbp // load lowpart
Note that reload's "&" doesn't distinguish between the memory being
early clobbered, vs the registers used in an addressing mode being
early clobbered.
The fix proposed in this patch is to remove the third alternative, that
allowed offsetable memory as an operand, forcing reload to place the
operand into a register before the rotation. This results in:
I believe there's a more advanced solution, by swapping the order of
the loads (if first destination register is mentioned in the address),
or inserting a lea insn (if both destination registers are mentioned
in the address), but this fix is a minimal "safe" solution, that
should hopefully be suitable for backporting.
2023-08-03 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR target/110792
* config/i386/i386.md (<any_rotate>ti3): For rotations by 64 bits
place operand in a register before gen_<insn>64ti2_doubleword.
(<any_rotate>di3): Likewise, for rotations by 32 bits, place
operand in a register before gen_<insn>32di2_doubleword.
(<any_rotate>32di2_doubleword): Constrain operand to be in register.
(<any_rotate>64ti2_doubleword): Likewise.
gcc/testsuite/ChangeLog
PR target/110792
* g++.target/i386/pr110792.C: New 32-bit C++ test case.
* gcc.target/i386/pr110792.c: New 64-bit C test case.
Andrew Pinski [Wed, 2 Aug 2023 22:54:20 +0000 (15:54 -0700)]
Fix `~X & X` and `~X | X` patterns
As Jakub noticed in https://gcc.gnu.org/pipermail/gcc-patches/2023-August/626039.html
what I did was not totally correct because it was sometimes choosing the wrong type.
So to get back to what the original code did, but keeping around the use of bitwise_inverted_equal_p,
we just need to check that the types of the two captures are the same.
Also adds a testcase for the problem Jakub found.
Committed as obvious after a bootstrap and test.
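For reference, a small sketch of the basic folds being guarded:

  int f (int x) { return ~x & x; }   /* folds to 0 */
  int g (int x) { return ~x | x; }   /* folds to -1 */

With the fix they are only applied when the types of the two captures agree.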
gcc/ChangeLog:
* match.pd (`~X & X`): Check that the types match.
(`~x | x`, `~x ^ x`): Likewise.
Eric Feng [Wed, 2 Aug 2023 20:54:55 +0000 (16:54 -0400)]
analyzer: stash values for CPython plugin [PR107646]
This patch adds a hook to the end of ana::on_finish_translation_unit
which calls relevant stashing-related callbacks registered during plugin
initialization. This feature is used to stash named types and global
variables for a CPython analyzer plugin [PR107646].
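As a hedged sketch of the general shape (hypothetical names, not the analyzer's actual interface), the hook simply invokes whatever callbacks a plugin registered earlier:
```
/* Hypothetical illustration of the registration/invocation pattern.  */
typedef void (*stash_callback) (void *user_data);

static stash_callback callbacks[16];
static void *callback_data[16];
static unsigned num_callbacks;

/* A plugin registers its stashing callback during initialization.  */
void
register_stash_callback (stash_callback cb, void *data)
{
  callbacks[num_callbacks] = cb;
  callback_data[num_callbacks++] = data;
}

/* Run at the end of the translation unit.  */
void
run_stash_callbacks (void)
{
  for (unsigned i = 0; i < num_callbacks; i++)
    callbacks[i] (callback_data[i]);
}
```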
gcc/analyzer/ChangeLog:
PR analyzer/107646
* analyzer-language.cc (run_callbacks): New function.
(on_finish_translation_unit): New function.
* analyzer-language.h (GCC_ANALYZER_LANGUAGE_H): New include.
(class translation_unit): New vfuncs.
gcc/c/ChangeLog:
PR analyzer/107646
* c-parser.cc: New functions on stashing values for the
analyzer.
gcc/testsuite/ChangeLog:
PR analyzer/107646
* gcc.dg/plugin/plugin.exp: Add new plugin and test.
* gcc.dg/plugin/analyzer_cpython_plugin.c: New plugin.
* gcc.dg/plugin/cpython-plugin-test-1.c: New test.
rtl-optimization/110867 Fix narrow comparison of memory and constant
In certain cases a constant may not fit into the mode used to perform a
comparison. This may be the case for sign-extended constants which are
used during an unsigned comparison.
Fixed by ensuring that the constant fits into the comparison mode.
Furthermore, on some targets, e.g. sparc, the constant used in a
comparison is chopped off before combine, which leads to failing test
cases (see PR 110869).  Fixed by not requiring that the source mode be
DImode, and by excluding sparc from the last two test cases entirely,
since there the constant cannot be further reduced.
gcc/ChangeLog:
PR rtl-optimization/110867
* combine.cc (simplify_compare_const): Try the optimization only
in case the constant fits into the comparison mode.
gcc/testsuite/ChangeLog:
PR rtl-optimization/110869
* gcc.dg/cmp-mem-const-1.c: Relax mode for constant.
* gcc.dg/cmp-mem-const-2.c: Relax mode for constant.
* gcc.dg/cmp-mem-const-3.c: Relax mode for constant.
* gcc.dg/cmp-mem-const-4.c: Relax mode for constant.
* gcc.dg/cmp-mem-const-5.c: Exclude sparc since here the
constant is already reduced.
* gcc.dg/cmp-mem-const-6.c: Exclude sparc since here the
constant is already reduced.
The RTL semantics of this pattern are op0 = (op1 != 0) ? op1 : op2, which
obviously doesn't match any zicond instruction, as op1 is selected
when it is not zero.
So two of the patterns are just totally bogus as they are not
implementable with zicond. They are removed. The asm template for the
.opt3 pattern is fixed to use czero.nez and its name is changed to
.opt2.
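For reference, a hedged sketch of the Zicond conditional-zero semantics assumed here (illustrative C, not taken from the patch):
```
/* czero.eqz rd, rs1, rs2  =>  rd = (rs2 == 0) ? 0 : rs1
   czero.nez rd, rs1, rs2  =>  rd = (rs2 != 0) ? 0 : rs1
   A single czero instruction only selects between a register and zero,
   never between two arbitrary registers, so op0 = (op1 != 0) ? op1 : op2
   cannot map to one instruction.  */
static long czero_eqz (long rs1, long rs2) { return rs2 == 0 ? 0 : rs1; }
static long czero_nez (long rs1, long rs2) { return rs2 != 0 ? 0 : rs1; }
```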
Richard Biener [Thu, 27 Jul 2023 13:34:12 +0000 (15:34 +0200)]
tree-optimization/92335 - Improve sinking heuristics for vectorization
The following delays sinking of loads within the same innermost
loop when the load was unconditional before.  That is a not uncommon
issue preventing vectorization when masked loads are not available.
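A hedged sketch of the situation (an illustrative example, not the PR testcase): if the unconditional load below is sunk into the conditional block, the loop can no longer be vectorized without a masked load.
```
void
f (int *a, int *b, int *c, int n)
{
  for (int i = 0; i < n; i++)
    {
      int t = a[i];   /* unconditional load in the innermost loop */
      if (c[i])
        b[i] = t;     /* its only use is conditionally executed */
    }
}
```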
PR tree-optimization/92335
* tree-ssa-sink.cc (select_best_block): Before loop
optimizations avoid sinking unconditional loads/stores
in innermost loops to conditionally executed places.
* gcc.dg/tree-ssa/ssa-sink-10.c: Disable vectorizing.
* gcc.dg/tree-ssa/predcom-9.c: Clone from ssa-sink-10.c,
expect predictive commoning to happen instead of sinking.
* gcc.dg/vect/pr65947-3.c: Adjust.
This slightly improves bitwise_inverted_equal_p
for comparisons.  Instead of just comparing the
comparisons' operands, also valueize them.
This will allow ccp and others to match the two comparisons
without an extra pass happening.
OK? Bootstrapped and tested on x86_64-linux-gnu.
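As a hedged illustration (an assumption, not the committed testcase), valueization lets the two comparisons below be recognized as inverses even though one of them goes through a copy:
```
int
g (int a, int b)
{
  int c = a;           /* copy that valueization can look through */
  _Bool x = a < b;
  _Bool y = c >= b;    /* inverse of the first comparison */
  return x & y;        /* can now fold to 0 */
}
```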
gcc/ChangeLog:
* gimple-match-head.cc (gimple_bitwise_inverted_equal_p): Valueize
the comparison operands before comparing them.
Andrew Pinski [Sat, 29 Jul 2023 20:00:04 +0000 (13:00 -0700)]
Move `~X & X` and `~X | X` over to use bitwise_inverted_equal_p
This is a simple patch to move these 2 patterns over to use
bitwise_inverted_equal_p. It also allows us to remove 2 other patterns
which were used on comparisons as they are now handled by
the original pattern.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
* match.pd (`~X & X`, `~X | X`): Move over to
use bitwise_inverted_equal_p, removing :c as bitwise_inverted_equal_p
handles that already.
Remove range test simplifications to true/false as they
are now handled by these patterns.
Andrew Pinski [Tue, 13 Jun 2023 16:17:45 +0000 (09:17 -0700)]
PHIOPT: Mark the conditional lhs and rhs as to look at to see if DCEable
In some cases (usually dealing with bools only), there could be some statements
left behind which are considered trivially dead.
An example is:
```
bool f(bool a, bool b)
{
if (!a && !b)
return 0;
if (!a && b)
return 0;
if (a && !b)
return 0;
return 1;
}
```
Where during phiopt2, the IR had:
```
_3 = ~b_7(D);
_4 = _3 & a_6(D);
_4 != 0 ? 0 : 1
```
match-and-simplify would transform that into:
```
_11 = ~a_6(D);
_12 = b_7(D) | _11;
```
But phiopt would leave around the statements defining _4 and _3.
This helps by marking the conditional's lhs and rhs to see if they are
trivially dead.
OK? Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (match_simplify_replacement): Mark the cond
statement's lhs and rhs to check if trivially dead.
Rename inserted_exprs to exprs_maybe_dce; also move it so the
bitmap is not allocated if not needed.
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(class widen_binop_frm): New class for binop frm.
(BASE): Add vfwadd_frm.
* config/riscv/riscv-vector-builtins-bases.h: New declaration.
* config/riscv/riscv-vector-builtins-functions.def
(vfwadd_frm): New function definition.
* config/riscv/riscv-vector-builtins-shapes.cc
(BASE_NAME_MAX_LEN): New macro.
(struct alu_frm_def): Leverage new base class.
(struct build_frm_base): New build base for frm.
(struct widen_alu_frm_def): New struct for widen alu frm.
(SHAPE): Add widen_alu_frm shape.
* config/riscv/riscv-vector-builtins-shapes.h: New declaration.
* config/riscv/vector.md (frm_mode): Add vfwalu type.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-widening-add.c: New test.
Jan Hubicka [Wed, 2 Aug 2023 07:44:06 +0000 (09:44 +0200)]
More profile updating cleanups
This patch commonizes the loop_count_in computation with
expected_loop_iterations_by_profile (and moves it to cfgloopanal.cc rather than
cfgloopmanip.cc) and fixes a roundoff error in scale_loop_profile.  I also
noticed that I managed to misapply the template change to gcc.dg/unroll-1.c.
Bootstrapped/regtested x86_64-linux, committed.
gcc/ChangeLog:
* cfgloop.h (loop_count_in): Declare.
* cfgloopanal.cc (expected_loop_iterations_by_profile): Use count_in.
(loop_count_in): Move here from ...
* cfgloopmanip.cc (loop_count_in): ... here.
(scale_loop_profile): Improve dumping; cast iteration bound to sreal.
Jan Hubicka [Wed, 2 Aug 2023 07:25:12 +0000 (09:25 +0200)]
Fix profile update after cancelled loop distribution
Loop distribution and ifcvt introduce versions of loops which may be removed
later if vectorization fails.  Ifcvt does this by temporarily breaking the profile
and producing a conditional that has two arms with 100% probability, because we
know one of the versions will be removed.
Loop distribution is trickier, since it introduces a test for alignment that
either survives to the final code if vectorization succeeds or is removed if it
fails.
Here we need to assign some reasonable probabilities for the case vectorization
goes well, so this code adds logic to scale profile back in case we remove the
call.
This is not perfect since we drop precise BB counts to guessed ones.  It is not a
big deal since we do not rely much on the reliability of BB counts after this point.
Another option would be to apply the scaling only if vectorization succeeds, which
however needs a bit more work on the tree-loop-distribution side and would need all
the code in this patch with the small change that fold_loop_internal_call would have
to know how to adjust the profile if the conditional stays.  I decided to go for the
easier solution for now.
The following removes the code checking whether a noop copy
is between something involved in the return sequence composed
of a SET and USE.  Instead of checking for this special case,
the following makes us only ever remove noop copies between
pseudos - which is the case that is necessary for IRA/LRA
interfacing to function according to the comment. That makes
looking for the return reg special case unnecessary, reducing
the compile-time in LRA non-specific to zero for the testcase.
PR rtl-optimization/110587
* lra-spills.cc (return_regno_p): Remove.
(regno_in_use_p): Likewise.
(lra_final_code_change): Do not remove noop moves
between hard registers.