[PATCH, aarch64] Fix target/70120
Richard Henderson
rth@redhat.com
Mon Mar 21 17:54:00 GMT 2016
On 03/21/2016 06:40 AM, Jiong Wang wrote:
> On 17/03/16 19:17, Richard Henderson wrote:
>> PR target/70120
>> * varasm.c (for_each_section): New.
>> * varasm.h (for_each_section): Declare.
>> * config/aarch64/aarch64.c (aarch64_align_code_section): New.
>> (aarch64_asm_file_end): New.
>> (TARGET_ASM_FILE_END): Redefine.
>
> Would ASM_OUTPUT_POOL_EPILOGUE be a better place to fix this issue?
> It would avoid the for_each_section traversal.
It's a good point. I hadn't noticed this old (and currently unused) hook.
This alternate patch does in fact work.
r~
-------------- next part --------------
* config/aarch64/aarch64.c (aarch64_asm_output_pool_epilogue): New.
* config/aarch64/aarch64-protos.h: Declare it.
* config/aarch64/aarch64.h (ASM_OUTPUT_POOL_EPILOGUE): New.
diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
index dced209..58c9d0d 100644
--- a/gcc/config/aarch64/aarch64-protos.h
+++ b/gcc/config/aarch64/aarch64-protos.h
@@ -429,4 +429,8 @@ bool extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset);
 bool aarch64_operands_ok_for_ldpstp (rtx *, bool, enum machine_mode);
 bool aarch64_operands_adjust_ok_for_ldpstp (rtx *, bool, enum machine_mode);
 extern bool aarch64_nopcrelative_literal_loads;
+
+extern void aarch64_asm_output_pool_epilogue (FILE *, const char *,
+					      tree, HOST_WIDE_INT);
+
 #endif /* GCC_AARCH64_PROTOS_H */
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index cf1239d..732ed70 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -5579,6 +5579,18 @@ aarch64_select_rtx_section (machine_mode mode,
   return default_elf_select_rtx_section (mode, x, align);
 }
 
+/* Implement ASM_OUTPUT_POOL_EPILOGUE.  */
+void
+aarch64_asm_output_pool_epilogue (FILE *f, const char *, tree,
+				  HOST_WIDE_INT offset)
+{
+  /* When using per-function literal pools, we must ensure that any code
+     section is aligned to the minimal instruction length, lest we get
+     errors from the assembler re "unaligned instructions".  */
+  if ((offset & 3) && aarch64_can_use_per_function_literal_pools_p ())
+    ASM_OUTPUT_ALIGN (f, 2);
+}
+
 /* Costs.  */
 
 /* Helper function for rtx cost calculation.  Strip a shift expression
diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
index ec96ce3..7750d1c 100644
--- a/gcc/config/aarch64/aarch64.h
+++ b/gcc/config/aarch64/aarch64.h
@@ -928,4 +928,6 @@ extern const char *host_detect_local_cpu (int argc, const char **argv);
 #define EXTRA_SPECS \
 { "asm_cpu_spec", ASM_CPU_SPEC }
 
+#define ASM_OUTPUT_POOL_EPILOGUE aarch64_asm_output_pool_epilogue
+
 #endif /* GCC_AARCH64_H */