* rtl.h (addr_diff_vec_flags): New typedef.
(union rtunion_def): New member rt_addr_diff_vec_flags.
(ADDR_DIFF_VEC_FLAGS): New macro.
* sh.c (output_branch): Fix offset overflow problems.
* final.c (shorten_branches): Implement CASE_VECTOR_SHORTEN_MODE.
(final_scan_insn): New argument BODY for ASM_OUTPUT_ADDR_DIFF_ELT.
* rtl.def (ADDR_DIFF_VEC): Three new fields (min, max and flags).
* stmt.c (expand_end_case): Supply new arguments to
gen_rtx_ADDR_DIFF_VEC.
* 1750a.h (ASM_OUTPUT_ADDR_DIFF_ELT): New argument BODY.
* alpha.h, arc.h, clipper.h, convex.h : Likewise.
* dsp16xx.h, elxsi.h, fx80.h, gmicro.h, h8300.h : Likewise.
* i370.h, i386.h, i860.h, i960.h, m32r.h, m68k.h, m88k.h : Likewise.
* mips.h, mn10200.h, mn10300.h, ns32k.h, pa.h, pyr.h : Likewise.
* rs6000.h, sh.h, sparc.h, spur.h, tahoe.h, v850.h : Likewise.
* vax.h, we32k.h, alpha/vms.h, arm/aof.h, arm/aout.h : Likewise.
* i386/386bsd.h, i386/freebsd-elf.h : Likewise.
* i386/freebsd.h, i386/linux.h : Likewise.
* i386/netbsd.h, i386/osfrose.h, i386/ptx4-i.h, i386/sco5.h : Likewise.
* i386/sysv4.h, m68k/3b1.h, m68k/dpx2.h, m68k/hp320.h : Likewise.
* m68k/mot3300.h, m68k/sgs.h : Likewise.
* m68k/tower-as.h, ns32k/encore.h, sparc/pbd.h : Likewise.
* sh.h (INSN_ALIGN, INSN_LENGTH_ALIGNMENT): Define.
(CASE_VECTOR_SHORTEN_MODE): Define.
(short_cbranch_p, align_length, addr_diff_vec_adjust): Don't declare.
(med_branch_p, braf_branch_p): Don't declare.
(mdep_reorg_phase, barrier_align): Declare.
(ADJUST_INSN_LENGTH): Remove alignment handling.
* sh.c (uid_align, uid_align_max): Deleted.
(max_uid_before_fixup_addr_diff_vecs, branch_offset): Deleted.
(short_cbranch_p, med_branch_p, braf_branch_p, align_length): Deleted.
(cache_align_p, fixup_aligns, addr_diff_vec_adjust): Deleted.
(output_far_jump): Don't use braf_branch_p.
(output_branchy_insn): Don't use branch_offset.
(find_barrier): Remove checks for max_uid_before_fixup_addr_diff_vecs.
Remove paired barrier stuff.
Don't use cache_align_p.
Take alignment insns into account.
(fixup_addr_diff_vecs): Reduce to only fixing up the base label of
the addr_diff_vec.
	(barrier_align, branch_dest): New functions.
(machine_dependent_reorg, split_branches): Remove infrastructure
for branch shortening that is now provided in the backend.
* sh.md (short_cbranch_p, med_branch_p, med_cbranch_p): New attributes.
(braf_branch_p, braf_cbranch_p): Likewise.
(attribute length): Use new attributes.
	(casesi_worker): Get mode and unsignedness from ADDR_DIFF_VEC.
(addr_diff_vec_adjust): Delete.
(align_2): Now a define_expand.
(align_log): Now length 0.
From-SVN: r18433
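The heart of this change is CASE_VECTOR_SHORTEN_MODE: once shorten_branches knows the minimum and maximum label offsets recorded in the ADDR_DIFF_VEC's new min/max fields, it can pick a narrower element mode for the dispatch table. A minimal sketch of that selection logic, under the assumption of signed offsets; the function name and mode constants here are illustrative stand-ins, not GCC's actual interface:

```c
/* Illustrative stand-ins for machine modes, sized in bytes.  */
enum vec_mode { QI = 1, HI = 2, SI = 4 };

/* Pick the narrowest mode that can hold every offset in
   [min_offset, max_offset], treated as signed.  This mirrors the
   idea behind CASE_VECTOR_SHORTEN_MODE; a real target definition
   also considers unsignedness and target-specific constraints
   carried in the ADDR_DIFF_VEC flags field.  */
static enum vec_mode
case_vector_shorten_mode (long min_offset, long max_offset)
{
  if (min_offset >= -128 && max_offset <= 127)
    return QI;
  if (min_offset >= -32768 && max_offset <= 32767)
    return HI;
  return SI;
}
```

This is also why ASM_OUTPUT_ADDR_DIFF_ELT gains the BODY argument throughout the target headers below: the macro can now consult the vector itself to decide how wide each emitted element must be.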
+Fri Mar 6 21:28:45 1998 J"orn Rennecke <amylaar@cygnus.co.uk>
+
+ * rtl.h (addr_diff_vec_flags): New typedef.
+ (union rtunion_def): New member rt_addr_diff_vec_flags.
+ (ADDR_DIFF_VEC_FLAGS): New macro.
+
+ * sh.c (output_branch): Fix offset overflow problems.
+
+ * final.c (shorten_branches): Implement CASE_VECTOR_SHORTEN_MODE.
+ (final_scan_insn): New argument BODY for ASM_OUTPUT_ADDR_DIFF_ELT.
+ * rtl.def (ADDR_DIFF_VEC): Three new fields (min, max and flags).
+ * stmt.c (expand_end_case): Supply new arguments to
+ gen_rtx_ADDR_DIFF_VEC.
+ * 1750a.h (ASM_OUTPUT_ADDR_DIFF_ELT): New argument BODY.
+ * alpha.h, arc.h, clipper.h, convex.h : Likewise.
+ * dsp16xx.h, elxsi.h, fx80.h, gmicro.h, h8300.h : Likewise.
+ * i370.h, i386.h, i860.h, i960.h, m32r.h, m68k.h, m88k.h : Likewise.
+ * mips.h, mn10200.h, mn10300.h, ns32k.h, pa.h, pyr.h : Likewise.
+ * rs6000.h, sh.h, sparc.h, spur.h, tahoe.h, v850.h : Likewise.
+ * vax.h, we32k.h, alpha/vms.h, arm/aof.h, arm/aout.h : Likewise.
+ * i386/386bsd.h, i386/freebsd-elf.h : Likewise.
+ * i386/freebsd.h, i386/linux.h : Likewise.
+ * i386/netbsd.h, i386/osfrose.h, i386/ptx4-i.h, i386/sco5.h : Likewise.
+ * i386/sysv4.h, m68k/3b1.h, m68k/dpx2.h, m68k/hp320.h : Likewise.
+ * m68k/mot3300.h, m68k/sgs.h : Likewise.
+ * m68k/tower-as.h, ns32k/encore.h, sparc/pbd.h : Likewise.
+ * sh.h (INSN_ALIGN, INSN_LENGTH_ALIGNMENT): Define.
+ (CASE_VECTOR_SHORTEN_MODE): Define.
+ (short_cbranch_p, align_length, addr_diff_vec_adjust): Don't declare.
+ (med_branch_p, braf_branch_p): Don't declare.
+ (mdep_reorg_phase, barrier_align): Declare.
+ (ADJUST_INSN_LENGTH): Remove alignment handling.
+ * sh.c (uid_align, uid_align_max): Deleted.
+ (max_uid_before_fixup_addr_diff_vecs, branch_offset): Deleted.
+ (short_cbranch_p, med_branch_p, braf_branch_p, align_length): Deleted.
+ (cache_align_p, fixup_aligns, addr_diff_vec_adjust): Deleted.
+ (output_far_jump): Don't use braf_branch_p.
+ (output_branchy_insn): Don't use branch_offset.
+ (find_barrier): Remove checks for max_uid_before_fixup_addr_diff_vecs.
+ Remove paired barrier stuff.
+ Don't use cache_align_p.
+ Take alignment insns into account.
+ (fixup_addr_diff_vecs): Reduce to only fixing up the base label of
+ the addr_diff_vec.
+	(barrier_align, branch_dest): New functions.
+ (machine_dependent_reorg, split_branches): Remove infrastructure
+ for branch shortening that is now provided in the backend.
+ * sh.md (short_cbranch_p, med_branch_p, med_cbranch_p): New attributes.
+ (braf_branch_p, braf_cbranch_p): Likewise.
+ (attribute length): Use new attributes.
+	(casesi_worker): Get mode and unsignedness from ADDR_DIFF_VEC.
+ (addr_diff_vec_adjust): Delete.
+ (align_2): Now a define_expand.
+ (align_log): Now length 0.
+
Fri Mar 6 14:41:33 1998 Michael Meissner <meissner@cygnus.com>
* m32r.md (right): Correctly check for length == 2, not 1.
* stmt.c (expand_end_case): Likewise.
* alpha.h (CASE_VECTOR_PC_RELATIVE): Update.
* fx80.h, gmicro.h, m68k.h, m88k.h, ns32k.h: Likewise.
- * rs6000.h, sh.h, tahoe.h, v850.h vax.h z8k.h: Likewise.
+ * rs6000.h, sh.h, tahoe.h, v850.h, vax.h: Likewise.
Tue Dec 16 15:14:09 1997 Andreas Schwab <schwab@issan.informatik.uni-dortmund.de>
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\tdata\tL%d-L%d ;addr_diff_elt\n", VALUE,REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.%s $L%d\n", TARGET_WINDOWS_NT ? "long" : "gprel32", \
(VALUE))
}
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) abort ()
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) abort ()
#undef ASM_OUTPUT_ADDR_VEC_ELT
#define ASM_OUTPUT_ADDR_VEC_ELT(FILE, VALUE) \
} while (0)
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
do { \
char label[30]; \
ASM_GENERATE_INTERNAL_LABEL (label, "L", VALUE); \
/* Definitions of target machine for GNU compiler, for Advanced RISC Machines
ARM compilation, AOF Assembler.
- Copyright (C) 1995, 1996 Free Software Foundation, Inc.
+ Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.
Contributed by Richard Earnshaw (rearnsha@armltd.co.uk)
This file is part of GNU CC.
char *aof_data_section ();
#define DATA_SECTION_ASM_OP aof_data_section ()
-#define EXTRA_SECTIONS in_zero_init, in_ctor, in_dtor
+#define EXTRA_SECTIONS in_zero_init, in_ctor, in_dtor, in_common
#define EXTRA_SECTION_FUNCTIONS \
ZERO_INIT_SECTION \
CTOR_SECTION \
-DTOR_SECTION
+DTOR_SECTION \
+COMMON_SECTION
#define ZERO_INIT_SECTION \
void \
} \
}
+/* Used by ASM_OUTPUT_COMMON (below) to tell varasm.c that we've
+ changed areas. */
+#define COMMON_SECTION \
+void \
+common_section () \
+{ \
+ static int common_count = 1; \
+ if (in_section != in_common) \
+ { \
+ in_section = in_common; \
+ } \
+}
#define CTOR_LIST_BEGIN \
asm (CTORS_SECTION_ASM_OP); \
extern func_ptr __CTOR_END__[1]; \
/* Some systems use __main in a way incompatible with its use in gcc, in these
cases use the macros NAME__MAIN to give a quoted symbol and SYMBOL__MAIN to
give the same symbol without quotes for an alternative entry point. You
- must define both, or niether. */
+ must define both, or neither. */
#define NAME__MAIN "__gccmain"
#define SYMBOL__MAIN __gccmain
/* Output of Uninitialized Variables */
#define ASM_OUTPUT_COMMON(STREAM,NAME,SIZE,ROUNDED) \
- (fprintf ((STREAM), "\tAREA "), \
+ (common_section (), \
+ fprintf ((STREAM), "\tAREA "), \
assemble_name ((STREAM), (NAME)), \
fprintf ((STREAM), ", DATA, COMMON\n\t%% %d\t%s size=%d\n", \
(ROUNDED), ASM_COMMENT_START, SIZE))
arm_main_function = 1; \
} while (0)
-#define ARM_OUTPUT_LABEL(STREAM,NAME) \
+#define ASM_OUTPUT_LABEL(STREAM,NAME) \
do { \
assemble_name (STREAM,NAME); \
fputs ("\n", STREAM); \
/* Output of Dispatch Tables */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM,VALUE,REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM,BODY,VALUE,REL) \
fprintf ((STREAM), "\tb\t|L..%d|\n", (VALUE))
#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM,VALUE) \
#define ASM_DECLARE_FUNCTION_NAME(STREAM,NAME,DECL) \
ASM_OUTPUT_LABEL(STREAM, NAME)
-#define ARM_OUTPUT_LABEL(STREAM,NAME) \
+#define ASM_OUTPUT_LABEL(STREAM,NAME) \
do { \
assemble_name (STREAM,NAME); \
fputs (":\n", STREAM); \
#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM,VALUE) \
fprintf (STREAM, "\t.word\t%sL%d\n", LOCAL_LABEL_PREFIX, VALUE)
-#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM,VALUE,REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM,BODY,VALUE,REL) \
fprintf (STREAM, "\tb\t%sL%d\n", LOCAL_LABEL_PREFIX, (VALUE))
/* Output various types of constants. For real numbers we output hex, with
assemble_name ((STREAM), (NAME)), \
fprintf(STREAM, ", %d\t%s %d\n", ROUNDED, ASM_COMMENT_START, SIZE))
-/* Output a local common block. /bin/as can't do this, so hack a `.space' into
- the bss segment. Note that this is *bad* practice. */
-#define ASM_OUTPUT_ALIGNED_LOCAL(STREAM,NAME,SIZE,ALIGN) \
- output_lcomm_directive (STREAM, NAME, SIZE, ALIGN)
+/* Output a local common block. /bin/as can't do this, so hack a
+ `.space' into the bss segment. Note that this is *bad* practice. */
+#define ASM_OUTPUT_ALIGNED_LOCAL(STREAM,NAME,SIZE,ALIGN) \
+ do { \
+ bss_section (); \
+ ASM_OUTPUT_ALIGN (STREAM, floor_log2 (ALIGN / BITS_PER_UNIT)); \
+ ASM_OUTPUT_LABEL (STREAM, NAME); \
+ fprintf (STREAM, "\t.space\t%d\n", SIZE); \
+ } while (0)
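The replacement ASM_OUTPUT_ALIGNED_LOCAL above converts the bit alignment it receives into the log2 byte alignment that ASM_OUTPUT_ALIGN expects, via floor_log2 (ALIGN / BITS_PER_UNIT). A self-contained sketch of that conversion; this floor_log2 is a local illustration of the helper GCC provides, assuming BITS_PER_UNIT is 8:

```c
/* Largest L such that (1 << L) <= x, i.e. the floor of log2(x).
   ASM_OUTPUT_ALIGNED_LOCAL passes floor_log2 (ALIGN / BITS_PER_UNIT)
   to ASM_OUTPUT_ALIGN, so e.g. a 32-bit (4-byte) alignment becomes
   an ".align 2" directive.  */
static int
floor_log2 (unsigned int x)
{
  int log = -1;
  while (x)
    {
      x >>= 1;
      log++;
    }
  return log;
}
```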
/* Output a zero-initialized block. */
#define ASM_OUTPUT_ALIGNED_BSS(STREAM,DECL,NAME,SIZE,ALIGN) \
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word .L%d-.L%d\n", VALUE, REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative.
(not used on Convex) */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\tds.w L%d-L%d\n", VALUE, REL)
/* This is how to output an assembler line
/* This macro should be provided on machines where the addresses in a dispatch
table are relative to the table's own address. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\tint L%d-L%d\n", VALUE, REL)
/* This macro should be provided on machines where the addresses in a dispatch
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.data .L%d-.L%d{32}\n", VALUE, REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word L%d-L%d\n", VALUE, REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.data.w L%d-L%d\n", VALUE, REL)
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t%s .L%d-.L%d\n", ASM_WORD_OP, VALUE, REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
mvs_check_page (FILE, 4, 0); \
fprintf (FILE, "\tDC\tA(L%d-L%d)\n", VALUE, REL)
i386.md for an explanation of the expression this outputs. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.long _GLOBAL_OFFSET_TABLE_+[.-%s%d]\n", LPREFIX, VALUE)
/* Indicate that jump tables go in the text section. This is
Copyright (C) 1996 Free Software Foundation, Inc.
Contributed by Eric Youngdale.
Modified for stabs-in-ELF by H.J. Lu.
- Adapted from Linux version by John Polstra.
+ Adapted from GNU/Linux version by John Polstra.
This file is part of GNU CC.
This is only used for PIC code. See comments by the `casesi' insn in
i386.md for an explanation of the expression this outputs. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.long _GLOBAL_OFFSET_TABLE_+[.-%s%d]\n", LPREFIX, VALUE)
/* Indicate that jump tables go in the text section. This is
i386.md for an explanation of the expression this outputs. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.long _GLOBAL_OFFSET_TABLE_+[.-%s%d]\n", LPREFIX, VALUE)
/* Indicate that jump tables go in the text section. This is
forward reference the differences.
*/
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word %s%d-%s%d\n",LPREFIX, VALUE,LPREFIX, REL)
/* Define the parentheses used to group arithmetic operations
This is only used for PIC code. See comments by the `casesi' insn in
i386.md for an explanation of the expression this outputs. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.long _GLOBAL_OFFSET_TABLE_+[.-%s%d]\n", LPREFIX, VALUE)
/* Indicate that jump tables go in the text section. This is
i386.md for an explanation of the expression this outputs. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.long _GLOBAL_OFFSET_TABLE_+[.-%s%d]\n", LPREFIX, VALUE)
/* Indicate that jump tables go in the text section. This is
i386.md for an explanation of the expression this outputs. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.long _GLOBAL_OFFSET_TABLE_+[.-%s%d]\n", LPREFIX, VALUE)
/* Output a definition */
i386.md for an explanation of the expression this outputs. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.long _GLOBAL_OFFSET_TABLE_+[.-%s%d]\n", LPREFIX, VALUE)
/* Indicate that jump tables go in the text section. This is
} while (0)
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
do { \
if (TARGET_ELF) \
fprintf (FILE, "%s _GLOBAL_OFFSET_TABLE_+[.-%s%d]\n", ASM_LONG, LPREFIX, VALUE); \
i386.md for an explanation of the expression this outputs. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.long _GLOBAL_OFFSET_TABLE_+[.-%s%d]\n", LPREFIX, VALUE)
/* Indicate that jump tables go in the text section. This is
(The i860 does not use such vectors,
but we must define this macro anyway.) */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word .L%d-.L%d\n", VALUE, REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word L%d-L%d\n", VALUE, REL)
/* This is how to output an assembler line that says to advance the
} while (0)
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
do { \
char label[30]; \
ASM_GENERATE_INTERNAL_LABEL (label, "L", VALUE); \
#define ASM_OUTPUT_ADDR_VEC_ELT(FILE, VALUE) \
fprintf (FILE, "\tlong L%%%d\n", (VALUE))
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\tshort L%%%d-L%%%d\n", (VALUE), (REL))
/* ihnp4!lmayk!lgm says that `short 0' triggers assembler bug;
/* This is how to output an element of a case-vector that is relative. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
asm_fprintf (FILE, "\tdc.w %LL%d-%LL%d\n", VALUE, REL)
/* Currently, JUMP_TABLES_IN_TEXT_SECTION must be defined in order to
#define ASM_OUTPUT_ADDR_VEC_ELT(FILE, VALUE) \
fprintf (FILE, "\tlong L%d\n", VALUE)
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\tshort L%d-L%d\n", VALUE, REL)
#define ASM_OUTPUT_ALIGN(FILE,LOG) \
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
asm_fprintf (FILE, "\t.word %LL%d-%LL%d\n", VALUE, REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
asm_fprintf (FILE, "\t%s %LL%d-%LL%d\n", ASM_SHORT, (VALUE), (REL))
#ifndef USE_GAS
/* This is how to output an element of a case-vector that is relative. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
asm_fprintf (FILE, "\t%s %LL%d-%LL%d\n", WORD_ASM_OP, VALUE, REL)
/* Currently, JUMP_TABLES_IN_TEXT_SECTION must be defined in order to
fprintf (FILE, "\tlong L%%%d\n", (VALUE))
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\tshort L%%%d-L%%%d\n", (VALUE), (REL))
#undef ASM_OUTPUT_ALIGN
Redefined in sysv4.h, and luna.h. */
#define VERSION_INFO1 "m88k, "
#ifndef VERSION_INFO2
-#define VERSION_INFO2 "$Revision: 1.4 $"
+#define VERSION_INFO2 "$Revision: 1.11 $"
#endif
#ifndef VERSION_STRING
#define VERSION_STRING version_string
#ifdef __STDC__
-#define TM_RCS_ID "@(#)" __FILE__ " $Revision: 1.4 $ " __DATE__
+#define TM_RCS_ID "@(#)" __FILE__ " $Revision: 1.11 $ " __DATE__
#else
#define TM_RCS_ID "$What: <@(#) m88k.h,v 1.1.1.2.2.2> $"
#endif /* __STDC__ */
} while (0)
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
ASM_OUTPUT_ADDR_VEC_ELT (FILE, VALUE)
/* This is how to output an assembler line
This is used for pc-relative code (e.g. when TARGET_ABICALLS or
TARGET_EMBEDDED_PIC). */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, BODY, VALUE, REL) \
do { \
if (TARGET_MIPS16) \
fprintf (STREAM, "\t.half\t%sL%d-%sL%d\n", \
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t%s .L%d-.L%d\n", ".long", VALUE, REL)
#define ASM_OUTPUT_ALIGN(FILE,LOG) \
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t%s .L%d-.L%d\n", ".long", VALUE, REL)
#define ASM_OUTPUT_ALIGN(FILE,LOG) \
sprintf (LABEL, "*.%s%d", PREFIX, NUM)
#define ASM_OUTPUT_INTERNAL_LABEL(FILE,PREFIX,NUM) \
fprintf (FILE, ".%s%d:\n", PREFIX, NUM)
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.double .L%d-.LI%d\n", VALUE, REL)
/*
/* This is how to output an element of a case-vector that is relative. */
/* ** Notice that the second element is LI format! */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.long L%d-LI%d\n", VALUE, REL)
/* This is how to output an assembler line
on the PA since ASM_OUTPUT_ADDR_VEC_ELT uses pc-relative jump instructions
rather than a table of absolute addresses. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
if (TARGET_BIG_SWITCH) \
fprintf (FILE, "\tstw %%r1,-16(%%r30)\n\tldw T'L$%04d(%%r19),%%r1\n\tbv 0(%%r1)\n\tldw -16(%%r30),%%r1\n", VALUE); \
else \
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word L%d-L%d\n", VALUE, REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
do { char buf[100]; \
fputs ((TARGET_32BIT) ? "\t.long " : "\t.llong ", FILE); \
ASM_GENERATE_INTERNAL_LABEL (buf, "L", VALUE); \
rtx sh_compare_op1;
enum machine_mode sh_addr_diff_vec_mode;
-rtx *uid_align;
-int uid_align_max;
/* Provides the class number of the smallest class containing
reg number. */
struct { rtx lab, reg, op; } this;
char *jump;
int far;
+ int offset = branch_dest (insn) - insn_addresses[INSN_UID (insn)];
this.lab = gen_label_rtx ();
- if (braf_branch_p (insn, 0))
+ if (offset >= -32764 && offset - get_attr_length (insn) <= 32766)
{
far = 0;
jump = "mov.w %O0,%1;braf %1";
rtx insn;
rtx *operands;
{
- int offset
- = (insn_addresses[INSN_UID (XEXP (XEXP (SET_SRC (PATTERN (insn)), 1), 0))]
- - insn_addresses[INSN_UID (insn)]);
-
- if (offset == 260
- && final_sequence
- && ! INSN_ANNULLED_BRANCH_P (XVECEXP (final_sequence, 0, 0)))
- {
- /* The filling of the delay slot has caused a forward branch to exceed
- its range.
- Just emit the insn from the delay slot in front of the branch. */
- /* The call to print_slot will clobber the operands. */
- rtx op0 = operands[0];
- print_slot (final_sequence);
- operands[0] = op0;
- }
- else if (offset < -252 || offset > 258)
+ switch (get_attr_length (insn))
{
- /* This can happen when other condbranches hoist delay slot insn
+ case 6:
+ /* This can happen if filling the delay slot has caused a forward
+ branch to exceed its range (we could reverse it, but only
+	 when we know we won't overextend other branches; this is
+	 best handled by relaxation).
+	 It can also happen when other condbranches hoist delay slot insns
from their destination, thus leading to code size increase.
But the branch will still be in the range -4092..+4098 bytes. */
- int label = lf++;
- /* The call to print_slot will clobber the operands. */
- rtx op0 = operands[0];
-
- /* If the instruction in the delay slot is annulled (true), then
- there is no delay slot where we can put it now. The only safe
- place for it is after the label. final will do that by default. */
-
- if (final_sequence
- && ! INSN_ANNULLED_BRANCH_P (XVECEXP (final_sequence, 0, 0)))
+ if (! TARGET_RELAX)
{
- asm_fprintf (asm_out_file, "\tb%s%ss\t%LLF%d\n", logic ? "f" : "t",
- ASSEMBLER_DIALECT ? "/" : ".", label);
- print_slot (final_sequence);
+ int label = lf++;
+ /* The call to print_slot will clobber the operands. */
+ rtx op0 = operands[0];
+
+ /* If the instruction in the delay slot is annulled (true), then
+ there is no delay slot where we can put it now. The only safe
+ place for it is after the label. final will do that by default. */
+
+ if (final_sequence
+ && ! INSN_ANNULLED_BRANCH_P (XVECEXP (final_sequence, 0, 0)))
+ {
+ asm_fprintf (asm_out_file, "\tb%s%ss\t%LLF%d\n", logic ? "f" : "t",
+ ASSEMBLER_DIALECT ? "/" : ".", label);
+ print_slot (final_sequence);
+ }
+ else
+ asm_fprintf (asm_out_file, "\tb%s\t%LLF%d\n", logic ? "f" : "t", label);
+
+ output_asm_insn ("bra\t%l0", &op0);
+ fprintf (asm_out_file, "\tnop\n");
+ ASM_OUTPUT_INTERNAL_LABEL(asm_out_file, "LF", label);
+
+ return "";
}
- else
- asm_fprintf (asm_out_file, "\tb%s\t%LLF%d\n", logic ? "f" : "t", label);
-
- output_asm_insn ("bra\t%l0", &op0);
- fprintf (asm_out_file, "\tnop\n");
- ASM_OUTPUT_INTERNAL_LABEL(asm_out_file, "LF", label);
-
- return "";
+ /* When relaxing, handle this like a short branch. The linker
+ will fix it up if it still doesn't fit after relaxation. */
+ case 2:
+ return logic ? "bt%.\t%l0" : "bf%.\t%l0";
+ default:
+ abort ();
}
- return logic ? "bt%.\t%l0" : "bf%.\t%l0";
}
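The far-jump change above replaces the deleted braf_branch_p with an inline range test: the 16-bit displacement loaded via mov.w must still reach the target after the branch insn's own length is accounted for. A hedged sketch of that predicate; the constants come from the diff, while the function name is illustrative:

```c
/* Can a mov.w/braf sequence reach a target `offset` bytes away?
   The usable range in the diff above is -32764 .. 32766, with the
   upper bound measured after subtracting the branch insn's own
   length (the braf base address sits past the insn start).  */
static int
braf_reaches_p (long offset, int insn_length)
{
  return offset >= -32764 && offset - insn_length <= 32766;
}
```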
-int branch_offset ();
-
char *
output_branchy_insn (code, template, insn, operands)
char *template;
}
else
{
- int offset = branch_offset (next_insn) + 4;
- if (offset >= -252 && offset <= 256)
+ int offset = (branch_dest (next_insn)
+ - insn_addresses[INSN_UID (next_insn)] + 4);
+ if (offset >= -252 && offset <= 258)
{
if (GET_CODE (src) == IF_THEN_ELSE)
/* branch_true */
static pool_node pool_vector[MAX_POOL_SIZE];
static int pool_size;
-static int max_uid_before_fixup_addr_diff_vecs;
-
/* ??? If we need a constant in HImode which is the truncated value of a
constant we need in SImode, we could combine the two entries thus saving
two bytes. Is this common enough to be worth the effort of implementing
return 0;
}
-int
-cache_align_p (insn)
- rtx insn;
-{
- rtx pat;
-
- if (! insn)
- return 1;
-
- if (GET_CODE (insn) != INSN)
- return 0;
-
- pat = PATTERN (insn);
- return (GET_CODE (pat) == UNSPEC_VOLATILE
- && XINT (pat, 1) == 1
- && INTVAL (XVECEXP (pat, 0, 0)) == CACHE_LOG);
-}
-
static int
mova_p (insn)
rtx insn;
int count_hi = 0;
int found_hi = 0;
int found_si = 0;
+ int hi_align = 2;
+ int si_align = 2;
int leading_mova = num_mova;
rtx barrier_before_mova, found_barrier = 0, good_barrier = 0;
int si_limit;
while (from && count_si < si_limit && count_hi < hi_limit)
{
- int inc = 0;
+ int inc = get_attr_length (from);
+ int new_align = 1;
- /* The instructions created by fixup_addr_diff_vecs have no valid length
- info yet. They should be considered to have zero at this point. */
- if (INSN_UID (from) < max_uid_before_fixup_addr_diff_vecs)
- inc = get_attr_length (from);
+ if (GET_CODE (from) == CODE_LABEL)
+ new_align = optimize ? 1 << label_to_alignment (from) : 1;
if (GET_CODE (from) == BARRIER)
{
+
found_barrier = from;
+
/* If we are at the end of the function, or in front of an alignment
instruction, we need not insert an extra alignment. We prefer
this kind of barrier. */
-
- if (cache_align_p (next_real_insn (found_barrier)))
+ if (barrier_align (from) > 2)
good_barrier = from;
}
}
else
{
+ while (si_align > 2 && found_si + si_align - 2 > count_si)
+ si_align >>= 1;
if (found_si > count_si)
count_si = found_si;
found_si += GET_MODE_SIZE (mode);
}
}
- if (GET_CODE (from) == INSN
- && GET_CODE (PATTERN (from)) == SET
- && GET_CODE (SET_SRC (PATTERN (from))) == UNSPEC
- && XINT (SET_SRC (PATTERN (from)), 1) == 1)
+ if (mova_p (from))
{
if (! num_mova++)
{
{
if (num_mova)
num_mova--;
- if (cache_align_p (NEXT_INSN (next_nonnote_insn (from))))
+ if (found_barrier == good_barrier)
{
	      /* We have just passed the barrier in front of the
ADDR_DIFF_VEC. Since the ADDR_DIFF_VEC is accessed
If we waited any longer, we could end up at a barrier in
front of code, which gives worse cache usage for separated
instruction / data caches. */
- good_barrier = found_barrier;
break;
}
}
if (found_si)
- count_si += inc;
+ {
+ if (new_align > si_align)
+ {
+ count_si = count_si + new_align - 1 & -si_align;
+ si_align = new_align;
+ }
+ else
+ count_si = count_si + new_align - 1 & -new_align;
+ count_si += inc;
+ }
if (found_hi)
- count_hi += inc;
+ {
+ if (new_align > hi_align)
+ {
+ count_hi = count_hi + new_align - 1 & -hi_align;
+ hi_align = new_align;
+ }
+ else
+ count_hi = count_hi + new_align - 1 & -new_align;
+ count_hi += inc;
+ }
from = NEXT_INSN (from);
}
if (found_barrier)
{
- /* We have before prepared barriers to come in pairs, with an
- alignment instruction in-between. We want to use the first
- barrier, so that the alignment applies to the code.
- If we are compiling for SH3 or newer, there are some exceptions
- when the second barrier and the alignment doesn't exist yet, so
- we have to add it. */
- if (good_barrier)
+ if (good_barrier && next_real_insn (found_barrier))
found_barrier = good_barrier;
- else if (! TARGET_SMALLCODE)
- {
- found_barrier
- = emit_insn_before (gen_align_log (GEN_INT (CACHE_LOG)),
- found_barrier);
- found_barrier = emit_barrier_before (found_barrier);
- }
}
else
{
LABEL_NUSES (label) = 1;
found_barrier = emit_barrier_after (from);
emit_label_after (label, found_barrier);
- if (! TARGET_SMALLCODE)
- {
- emit_barrier_after (found_barrier);
- emit_insn_after (gen_align_log (GEN_INT (CACHE_LOG)), found_barrier);
- }
}
return found_barrier;
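The new distance bookkeeping in find_barrier above repeatedly uses the expression `count + new_align - 1 & -new_align`; because `+` and `-` bind tighter than `&` in C, this parses as `(count + new_align - 1) & -new_align`, the standard round-up-to-a-power-of-two idiom for accounting for padding at alignment points. A standalone sketch:

```c
/* Round `count` up to the next multiple of `align`, which must be
   a power of two.  In two's complement, -align is a mask with the
   low log2(align) bits clear, so the AND truncates back down after
   the `align - 1` bump.  */
static long
align_up (long count, long align)
{
  return (count + align - 1) & -align;
}
```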
gen_block_redirect (jump, bp->address += 2, 2);
}
-static void
-fixup_aligns ()
-{
- rtx insn = get_last_insn ();
- rtx align_tab[MAX_BITS_PER_WORD];
- int i;
-
- for (i = CACHE_LOG; i >= 0; i--)
- align_tab[i] = insn;
- bzero ((char *) uid_align, uid_align_max * sizeof *uid_align);
- for (; insn; insn = PREV_INSN (insn))
- {
- int uid = INSN_UID (insn);
- if (uid < uid_align_max)
- uid_align[uid] = align_tab[1];
- if (GET_CODE (insn) == INSN)
- {
- rtx pat = PATTERN (insn);
- if (GET_CODE (pat) == UNSPEC_VOLATILE && XINT (pat, 1) == 1)
- {
- /* Found an alignment instruction. */
- int log = INTVAL (XVECEXP (pat, 0, 0));
- uid_align[uid] = align_tab[log];
- for (i = log - 1; i >= 0; i--)
- align_tab[i] = insn;
- }
- }
- else if (GET_CODE (insn) == JUMP_INSN
- && GET_CODE (PATTERN (insn)) == SET)
- {
- rtx dest = SET_SRC (PATTERN (insn));
- if (GET_CODE (dest) == IF_THEN_ELSE)
- dest = XEXP (dest, 1);
- if (GET_CODE (dest) == LABEL_REF)
- {
- dest = XEXP (dest, 0);
- if (! uid_align[INSN_UID (dest)])
- /* Mark backward branch. */
- uid_align[uid] = 0;
- }
- }
- }
-}
-
/* Fix up ADDR_DIFF_VECs. */
void
fixup_addr_diff_vecs (first)
rtx first;
{
rtx insn;
- int max_address;
- int need_fixup_aligns = 0;
-
- if (optimize)
- max_address = insn_addresses[INSN_UID (get_last_insn ())] + 2;
+
for (insn = first; insn; insn = NEXT_INSN (insn))
{
- rtx vec_lab, rel_lab, pat, min_lab, max_lab, adj;
- int len, i, min, max, size;
+ rtx vec_lab, pat, prev, prevpat, x;
if (GET_CODE (insn) != JUMP_INSN
|| GET_CODE (PATTERN (insn)) != ADDR_DIFF_VEC)
continue;
pat = PATTERN (insn);
- rel_lab = vec_lab = XEXP (XEXP (pat, 0), 0);
- if (TARGET_SH2)
- {
- rtx prev, prevpat, x;
+ vec_lab = XEXP (XEXP (pat, 0), 0);
- /* Search the matching casesi_jump_2. */
- for (prev = vec_lab; ; prev = PREV_INSN (prev))
- {
- if (GET_CODE (prev) != JUMP_INSN)
- continue;
- prevpat = PATTERN (prev);
- if (GET_CODE (prevpat) != PARALLEL || XVECLEN (prevpat, 0) != 2)
- continue;
- x = XVECEXP (prevpat, 0, 1);
- if (GET_CODE (x) != USE)
- continue;
- x = XEXP (x, 0);
- if (GET_CODE (x) == LABEL_REF && XEXP (x, 0) == vec_lab)
- break;
- }
- /* Fix up the ADDR_DIF_VEC to be relative
- to the reference address of the braf. */
- XEXP (XEXP (pat, 0), 0)
- = rel_lab = XEXP (XEXP (SET_SRC (XVECEXP (prevpat, 0, 0)), 1), 0);
- }
- if (! optimize)
- continue;
- len = XVECLEN (pat, 1);
- if (len <= 0)
- abort ();
- for (min = max_address, max = 0, i = len - 1; i >= 0; i--)
- {
- rtx lab = XEXP (XVECEXP (pat, 1, i), 0);
- int addr = insn_addresses[INSN_UID (lab)];
- if (addr < min)
- {
- min = addr;
- min_lab = lab;
- }
- if (addr > max)
- {
- max = addr;
- max_lab = lab;
- }
- }
- adj
- = emit_insn_before (gen_addr_diff_vec_adjust (min_lab, max_lab, rel_lab,
- GEN_INT (len)), vec_lab);
- size = (XVECLEN (pat, 1) * GET_MODE_SIZE (GET_MODE (pat))
- - addr_diff_vec_adjust (adj, 0));
- /* If this is a very small table, we want to remove the alignment after
- the table. */
- if (! TARGET_SMALLCODE && size <= 1 << (CACHE_LOG - 2))
+ /* Search the matching casesi_jump_2. */
+ for (prev = vec_lab; ; prev = PREV_INSN (prev))
{
- rtx align = NEXT_INSN (next_nonnote_insn (insn));
- PUT_CODE (align, NOTE);
- NOTE_LINE_NUMBER (align) = NOTE_INSN_DELETED;
- NOTE_SOURCE_FILE (align) = 0;
- need_fixup_aligns = 1;
+ if (GET_CODE (prev) != JUMP_INSN)
+ continue;
+ prevpat = PATTERN (prev);
+ if (GET_CODE (prevpat) != PARALLEL || XVECLEN (prevpat, 0) != 2)
+ continue;
+ x = XVECEXP (prevpat, 0, 1);
+ if (GET_CODE (x) != USE)
+ continue;
+ x = XEXP (x, 0);
+ if (GET_CODE (x) == LABEL_REF && XEXP (x, 0) == vec_lab)
+ break;
}
+      /* Fix up the ADDR_DIFF_VEC to be relative
+	 to the reference address of the braf.  */
+ XEXP (XEXP (pat, 0), 0)
+ = XEXP (XEXP (SET_SRC (XVECEXP (prevpat, 0, 0)), 1), 0);
}
- if (need_fixup_aligns)
- fixup_aligns ();
}
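The rebasing above only swaps the base label of the ADDR_DIFF_VEC; every entry still denotes the same absolute target. A minimal C sketch of that invariant (illustrative names, not part of the patch):

```c
/* Illustrative sketch, not GCC code: a dispatch-table entry holds the
   offset of a case label from a base address.  Rebasing the table from
   the vector label onto the braf reference label shifts every entry by
   the distance between the two bases, leaving the absolute target
   address unchanged.  */
static long
rebase_entry (long entry_offset, long old_base, long new_base)
{
  /* Invariant: old_base + entry_offset == new_base + result.  */
  return entry_offset + (old_base - new_base);
}
```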
-/* Say how much the ADDR_DIFF_VEC following INSN can be shortened.
- If FIRST_PASS is nonzero, all addresses and length of following
- insns are still uninitialized. */
+/* BARRIER_OR_LABEL is either a BARRIER or a CODE_LABEL immediately following
+ a barrier. Return the base 2 logarithm of the desired alignment. */
int
-addr_diff_vec_adjust (insn, first_pass)
- rtx insn;
- int first_pass;
+barrier_align (barrier_or_label)
+ rtx barrier_or_label;
{
- rtx pat = PATTERN (insn);
- rtx min_lab = XEXP (XVECEXP (pat, 0, 0), 0);
- rtx max_lab = XEXP (XVECEXP (pat, 0, 1), 0);
- rtx rel_lab = XEXP (XVECEXP (pat, 0, 2), 0);
- int len = INTVAL (XVECEXP (pat, 0, 3));
- int addr, min_addr, max_addr, saving, prev_saving = 0, offset;
- rtx align_insn = uid_align[INSN_UID (rel_lab)];
- int standard_size = TARGET_BIGTABLE ? 4 : 2;
- int last_size = GET_MODE_SIZE ( GET_MODE(pat));
- int align_fuzz = 0;
-
- if (! insn_addresses)
+ rtx next = next_real_insn (barrier_or_label), pat, prev;
+ int slot, credit;
+
+ if (! next)
return 0;
- if (first_pass)
- /* If optimizing, we may start off with an optimistic guess. */
- return optimize ? len & ~1 : 0;
- addr = insn_addresses[INSN_UID (rel_lab)];
- min_addr = insn_addresses[INSN_UID (min_lab)];
- max_addr = insn_addresses[INSN_UID (max_lab)];
- if (! last_size)
- last_size = standard_size;
- if (TARGET_SH2)
- prev_saving = ((standard_size - last_size) * len) & ~1;
- /* The savings are linear to the vector length. However, if we have an
- odd saving, we need one byte again to reinstate 16 bit alignment. */
- saving = ((standard_size - 1) * len) & ~1;
- offset = prev_saving - saving;
+ pat = PATTERN (next);
- if ((insn_addresses[INSN_UID (align_insn)] < max_addr
- || (insn_addresses[INSN_UID (align_insn)] == max_addr
- && next_real_insn (max_lab) != align_insn))
- && GET_CODE (align_insn) == INSN)
- {
- int align = 1 << INTVAL (XVECEXP (PATTERN (align_insn), 0, 0));
- int align_addr = insn_addresses[INSN_UID (align_insn)];
- if (align_addr > insn_addresses[INSN_UID (insn)])
- {
- int old_offset = offset;
- offset = (align_addr - 1 & align - 1) + offset & -align;
- align_addr += old_offset;
- }
- align_fuzz += (align_addr - 1) & (align - 2);
- align_insn = uid_align[INSN_UID (align_insn)];
- if (insn_addresses[INSN_UID (align_insn)] <= max_addr
- && GET_CODE (align_insn) == INSN)
- {
- int align2 = 1 << INTVAL (XVECEXP (PATTERN (align_insn), 0, 0));
- align_addr = insn_addresses[INSN_UID (align_insn)];
- if (align_addr > insn_addresses[INSN_UID (insn)])
- {
- int old_offset = offset;
- offset = (align_addr - 1 & align2 - 1) + offset & -align2;
- align_addr += old_offset;
- }
- align_fuzz += (align_addr - 1) & (align2 - align);
- }
- }
+ if (GET_CODE (pat) == ADDR_DIFF_VEC)
+ return 2;
+
+ if (GET_CODE (pat) == UNSPEC_VOLATILE && XINT (pat, 1) == 1)
+ /* This is a barrier in front of a constant table. */
+ return 0;
- if (min_addr >= addr
- && max_addr + offset - addr + align_fuzz <= 255)
+ prev = prev_real_insn (barrier_or_label);
+ if (GET_CODE (PATTERN (prev)) == ADDR_DIFF_VEC)
{
- PUT_MODE (pat, QImode);
- return saving;
+ pat = PATTERN (prev);
+ /* If this is a very small table, we want to keep the alignment after
+ the table to the minimum for proper code alignment. */
+ return ((TARGET_SMALLCODE
+ || (XVECLEN (pat, 1) * GET_MODE_SIZE (GET_MODE (pat))
+ <= 1 << (CACHE_LOG - 2)))
+ ? 1 : CACHE_LOG);
}
- saving = 2 * len;
-/* Since alignment might play a role in min_addr if it is smaller than addr,
- we may not use it without exact alignment compensation; a 'worst case'
- estimate is not good enough, because it won't prevent infinite oscillation
- of shorten_branches.
- ??? We should fix that eventually, but the code to deal with alignments
- should go in a new function. */
-#if 0
- if (TARGET_BIGTABLE && min_addr - ((1 << CACHE_LOG) - 2) - addr >= -32768
-#else
- if (TARGET_BIGTABLE && (min_addr >= addr || addr <= 32768)
-#endif
- && max_addr - addr <= 32767 + saving - prev_saving)
+
+ if (TARGET_SMALLCODE)
+ return 0;
+
+ if (! TARGET_SH3 || ! optimize)
+ return CACHE_LOG;
+
+ /* Check if there is an immediately preceding branch to the insn beyond
+     the barrier.  We must weigh the cost of discarding useful information
+     from the current cache line when executing this branch and there is
+     an alignment, against that of fetching unneeded insns in front of the
+ branch target when there is no alignment. */
+
+ /* PREV is presumed to be the JUMP_INSN for the barrier under
+ investigation. Skip to the insn before it. */
+ prev = prev_real_insn (prev);
+
+  for (slot = 2, credit = (1 << (CACHE_LOG - 2)) + 2;
+ credit >= 0 && prev && GET_CODE (prev) == INSN;
+ prev = prev_real_insn (prev))
{
- PUT_MODE (pat, HImode);
- return saving;
- }
- PUT_MODE (pat, TARGET_BIGTABLE ? SImode : HImode);
- return 0;
+ if (GET_CODE (PATTERN (prev)) == USE
+ || GET_CODE (PATTERN (prev)) == CLOBBER)
+ continue;
+ if (GET_CODE (PATTERN (prev)) == SEQUENCE)
+ prev = XVECEXP (PATTERN (prev), 0, 1);
+      if (slot
+	  && get_attr_in_delay_slot (prev) == IN_DELAY_SLOT_YES)
+ slot = 0;
+ credit -= get_attr_length (prev);
+ }
+ if (prev
+ && GET_CODE (prev) == JUMP_INSN
+ && JUMP_LABEL (prev)
+ && next_real_insn (JUMP_LABEL (prev)) == next_real_insn (barrier_or_label)
+ && (credit - slot >= (GET_CODE (SET_SRC (PATTERN (prev))) == PC ? 2 : 0)))
+ return 0;
+
+ return CACHE_LOG;
}
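The loop above can be pictured as spending a byte "credit" against the lengths of the insns that fall through toward the barrier. A toy model of that accounting under simplifying assumptions (no delay slots, invented names, not the real code):

```c
/* Toy model of the credit walk in barrier_align: start with a budget
   of bytes (roughly a fraction of a cache line) and subtract the
   length of each insn preceding the branch.  If the budget is
   exhausted, the fall-through code spans the cache line anyway, so
   emitting the alignment is worthwhile.  */
static int
alignment_pays_off (const int *insn_lengths, int n_insns, int budget)
{
  int credit = budget;
  int i;

  for (i = 0; i < n_insns && credit >= 0; i++)
    credit -= insn_lengths[i];
  return credit < 0;  /* 1: emit the alignment, 0: skip it.  */
}
```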
/* Exported to toplev.c.
}
}
- /* The following processing passes need length information.
- addr_diff_vec_adjust needs to know if insn_addresses is valid. */
- insn_addresses = 0;
-
- /* If not optimizing for space, we want extra alignment for code after
- a barrier, so that it starts on a word / cache line boundary.
- We used to emit the alignment for the barrier itself and associate the
- instruction length with the following instruction, but that had two
- problems:
- i) A code label that follows directly after a barrier gets too low an
- address. When there is a forward branch to it, the incorrect distance
- calculation can lead to out of range branches. That happened with
- compile/920625-2 -O -fomit-frame-pointer in copyQueryResult.
- ii) barriers before constant tables get the extra alignment too.
- That is just a waste of space.
-
- So what we do now is to insert align_* instructions after the
- barriers. By doing that before literal tables are generated, we
- don't have to care about these. */
- /* We also want alignment in front of ADDR_DIFF_VECs; this is done already
- by ASM_OUTPUT_CASE_LABEL, but when optimizing, we have to make it
- explicit in the RTL in order to correctly shorten branches. */
-
- if (optimize)
- for (insn = first; insn; insn = NEXT_INSN (insn))
- {
- rtx addr_diff_vec;
-
- if (GET_CODE (insn) == BARRIER
- && (addr_diff_vec = next_real_insn (insn)))
- if (GET_CODE (PATTERN (addr_diff_vec)) == ADDR_DIFF_VEC)
- emit_insn_before (gen_align_4 (),
- XEXP (XEXP (PATTERN (addr_diff_vec), 0), 0));
- else if (TARGET_SMALLCODE)
- continue;
- else if (TARGET_SH3)
- {
- /* We align for an entire cache line. If there is a immediately
- preceding branch to the insn beyond the barrier, it does not
- make sense to insert the align, because we are more likely
- to discard useful information from the current cache line
- when doing the align than to fetch unneeded insns when not. */
- rtx prev = prev_real_insn (prev_real_insn (insn));
- int slot, credit;
-
- for (slot = 2, credit = 1 << (CACHE_LOG - 2) + 2;
- credit >= 0 && prev && GET_CODE (prev) == INSN;
- prev = prev_real_insn (prev))
- {
- if (GET_CODE (PATTERN (prev)) == USE
- || GET_CODE (PATTERN (prev)) == CLOBBER)
- continue;
- if (slot &&
- get_attr_in_delay_slot (prev) == IN_DELAY_SLOT_YES)
- slot = 0;
- credit -= get_attr_length (prev);
- }
- if (! prev || GET_CODE (prev) != JUMP_INSN
- || (next_real_insn (JUMP_LABEL (prev))
- != next_real_insn (insn))
- || (credit - slot
- < (GET_CODE (SET_SRC (PATTERN (prev))) == PC ? 2 : 0)))
- {
- insn = emit_insn_after (gen_align_log (GEN_INT (CACHE_LOG)),
- insn);
- insn = emit_barrier_after (insn);
- }
- }
- else
- {
- insn = emit_insn_after (gen_align_4 (), insn);
- insn = emit_barrier_after (insn);
- }
- else if (TARGET_SMALLCODE)
- continue;
- else if (GET_CODE (insn) == NOTE
- && NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG)
- {
- rtx next = next_nonnote_insn (insn);
- if (next && GET_CODE (next) == CODE_LABEL)
- emit_insn_after (gen_align_4 (), insn);
- }
- }
-
- /* If TARGET_IEEE, we might have to split some branches before fixup_align.
- If optimizing, the double call to shorten_branches will split insns twice,
- unless we split now all that is to split and delete the original insn. */
- if (TARGET_IEEE || optimize)
- for (insn = NEXT_INSN (first); insn; insn = NEXT_INSN (insn))
- if (GET_RTX_CLASS (GET_CODE (insn)) == 'i' && ! INSN_DELETED_P (insn))
- {
- rtx old = insn;
- insn = try_split (PATTERN (insn), insn, 1);
- if (INSN_DELETED_P (old))
- {
- PUT_CODE (old, NOTE);
- NOTE_LINE_NUMBER (old) = NOTE_INSN_DELETED;
- NOTE_SOURCE_FILE (old) = 0;
- }
- }
-
- max_uid_before_fixup_addr_diff_vecs = get_max_uid ();
+ if (TARGET_SH2)
+ fixup_addr_diff_vecs (first);
if (optimize)
{
- uid_align_max = get_max_uid ();
- uid_align = (rtx *) alloca (uid_align_max * sizeof *uid_align);
- fixup_aligns ();
mdep_reorg_phase = SH_SHORTEN_BRANCHES0;
shorten_branches (first);
}
- fixup_addr_diff_vecs (first);
/* Scan the function looking for move instructions which have to be
changed to pc-relative loads and insert the literal tables. */
/* Some code might have been inserted between the mova and
its ADDR_DIFF_VEC. Check if the mova is still in range. */
for (scan = mova, total = 0; scan != insn; scan = NEXT_INSN (scan))
- if (INSN_UID (scan) < max_uid_before_fixup_addr_diff_vecs)
- total += get_attr_length (scan);
+ total += get_attr_length (scan);
/* range of mova is 1020, add 4 because pc counts from address of
second instruction after this one, subtract 2 in case pc is 2
int max_uid = get_max_uid ();
/* Find out which branches are out of range. */
- uid_align_max = get_max_uid ();
- uid_align = (rtx *) alloca (uid_align_max * sizeof *uid_align);
- fixup_aligns ();
shorten_branches (first);
uid_branch = (struct far_branch **) alloca (max_uid * sizeof *uid_branch);
is too far away. */
/* We can't use JUMP_LABEL here because it might be undefined
when not optimizing. */
+ /* A syntax error might cause beyond to be NULL_RTX. */
beyond
= next_active_insn (XEXP (XEXP (SET_SRC (PATTERN (insn)), 1),
0));
- if ((GET_CODE (beyond) == JUMP_INSN
- || (GET_CODE (beyond = next_active_insn (beyond))
- == JUMP_INSN))
+ if (beyond
+ && (GET_CODE (beyond) == JUMP_INSN
+ || (GET_CODE (beyond = next_active_insn (beyond))
+ == JUMP_INSN))
&& GET_CODE (PATTERN (beyond)) == SET
&& recog_memoized (beyond) == CODE_FOR_jump
&& ((insn_addresses[INSN_UID (XEXP (SET_SRC (PATTERN (beyond)), 0))]
delete_insn (far_branch_list->far_label);
far_branch_list = far_branch_list->prev;
}
- uid_align_max = get_max_uid ();
- uid_align = (rtx *) oballoc (uid_align_max * sizeof *uid_align);
- fixup_aligns ();
}
/* Dump out instruction addresses, which is useful for debugging the
return 0;
}
\f
-/* Return the offset of a branch. Offsets for backward branches are
- reported relative to the branch instruction, while offsets for forward
- branches are reported relative to the following instruction. */
+/* Return the destination address of a branch. */
int
-branch_offset (branch)
+branch_dest (branch)
rtx branch;
{
- rtx dest = SET_SRC (PATTERN (branch)), dest_next;
- int branch_uid = INSN_UID (branch);
- int dest_uid, dest_addr;
- rtx branch_align = uid_align[branch_uid];
+ rtx dest = SET_SRC (PATTERN (branch));
+ int dest_uid;
if (GET_CODE (dest) == IF_THEN_ELSE)
dest = XEXP (dest, 1);
dest = XEXP (dest, 0);
dest_uid = INSN_UID (dest);
- dest_addr = insn_addresses[dest_uid];
- if (branch_align)
- {
- /* Forward branch. */
- /* If branch is in a sequence, get the successor of the sequence. */
- rtx next = NEXT_INSN (NEXT_INSN (PREV_INSN (branch)));
- int next_addr = insn_addresses[INSN_UID (next)];
- int diff;
-
- /* If NEXT has been hoisted in a sequence further on, it address has
- been clobbered in the previous pass. However, if that is the case,
- we know that it is exactly 2 bytes long (because it fits in a delay
- slot), and that there is a following label (the destination of the
- instruction that filled its delay slot with NEXT). The address of
- this label is reliable. */
- if (NEXT_INSN (next))
- {
- int next_next_addr = insn_addresses[INSN_UID (NEXT_INSN (next))];
- if (next_addr > next_next_addr)
- next_addr = next_next_addr - 2;
- }
- diff = dest_addr - next_addr;
- /* If BRANCH_ALIGN has been the last insn, it might be a barrier or
- a note. */
- if ((insn_addresses[INSN_UID (branch_align)] < dest_addr
- || (insn_addresses[INSN_UID (branch_align)] == dest_addr
- && next_real_insn (dest) != branch_align))
- && GET_CODE (branch_align) == INSN)
- {
- int align = 1 << INTVAL (XVECEXP (PATTERN (branch_align), 0, 0));
- int align_addr = insn_addresses[INSN_UID (branch_align)];
- diff += (align_addr - 1) & (align - 2);
- branch_align = uid_align[INSN_UID (branch_align)];
- if (insn_addresses[INSN_UID (branch_align)] <= dest_addr
- && GET_CODE (branch_align) == INSN)
- {
- int align2 = 1 << INTVAL (XVECEXP (PATTERN (branch_align), 0, 0));
- align_addr = insn_addresses[INSN_UID (branch_align)];
- diff += (align_addr - 1) & (align2 - align);
- }
- }
- return diff;
- }
- else
- {
- /* Backward branch. */
- int branch_addr = insn_addresses[branch_uid];
- int diff = dest_addr - branch_addr;
- int old_align = 2;
-
- while (dest_uid >= uid_align_max || ! uid_align[dest_uid])
- {
- /* Label might be outside the insn stream, or even in a separate
- insn stream, after a syntax error. */
- if (! NEXT_INSN (dest))
- return 0;
- dest = NEXT_INSN (dest), dest_uid = INSN_UID (dest);
- }
-
- /* By searching for a known destination, we might already have
- stumbled on the alignment instruction. */
- if (GET_CODE (dest) == INSN
- && GET_CODE (PATTERN (dest)) == UNSPEC_VOLATILE
- && XINT (PATTERN (dest), 1) == 1
- && INTVAL (XVECEXP (PATTERN (dest), 0, 0)) > 1)
- branch_align = dest;
- else
- branch_align = uid_align[dest_uid];
- while (insn_addresses[INSN_UID (branch_align)] <= branch_addr
- && GET_CODE (branch_align) == INSN)
- {
- int align = 1 << INTVAL (XVECEXP (PATTERN (branch_align), 0, 0));
- int align_addr = insn_addresses[INSN_UID (branch_align)];
- diff -= (align_addr - 1) & (align - old_align);
- old_align = align;
- branch_align = uid_align[INSN_UID (branch_align)];
- }
- return diff;
- }
-}
-
-int
-short_cbranch_p (branch)
- rtx branch;
-{
- int offset;
-
- if (! insn_addresses)
- return 0;
- if (mdep_reorg_phase <= SH_FIXUP_PCLOAD)
- return 0;
- offset = branch_offset (branch);
- return (offset >= -252
- && offset <= (NEXT_INSN (PREV_INSN (branch)) == branch ? 256 : 254));
-}
-
-/* The maximum range used for SImode constant pool entrys is 1018. A final
- instruction can add 8 bytes while only being 4 bytes in size, thus we
- can have a total of 1022 bytes in the pool. Add 4 bytes for a branch
- instruction around the pool table, 2 bytes of alignment before the table,
- and 30 bytes of alignment after the table. That gives a maximum total
- pool size of 1058 bytes.
- Worst case code/pool content size ratio is 1:2 (using asms).
- Thus, in the worst case, there is one instruction in front of a maximum
- sized pool, and then there are 1052 bytes of pool for every 508 bytes of
- code. For the last n bytes of code, there are 2n + 36 bytes of pool.
- If we have a forward branch, the initial table will be put after the
- unconditional branch.
-
- ??? We could do much better by keeping track of the actual pcloads within
- the branch range and in the pcload range in front of the branch range. */
-
-int
-med_branch_p (branch, condlen)
- rtx branch;
- int condlen;
-{
- int offset;
-
- if (! insn_addresses)
- return 0;
- offset = branch_offset (branch);
- if (mdep_reorg_phase <= SH_FIXUP_PCLOAD)
- return offset - condlen >= -990 && offset <= 998;
- return offset - condlen >= -4092 && offset <= 4094;
-}
-
-int
-braf_branch_p (branch, condlen)
- rtx branch;
- int condlen;
-{
- int offset;
-
- if (! insn_addresses)
- return 0;
- if (! TARGET_SH2)
- return 0;
- offset = branch_offset (branch);
- if (mdep_reorg_phase <= SH_FIXUP_PCLOAD)
- return offset - condlen >= -10330 && offset <= 10330;
- return offset -condlen >= -32764 && offset <= 32766;
-}
-
-int
-align_length (insn)
- rtx insn;
-{
- int align = 1 << INTVAL (XVECEXP (PATTERN (insn), 0, 0));
- if (! insn_addresses)
- if (optimize
- && (mdep_reorg_phase == SH_SHORTEN_BRANCHES0
- || mdep_reorg_phase == SH_SHORTEN_BRANCHES1))
- return 0;
- else
- return align - 2;
- return align - 2 - ((insn_addresses[INSN_UID (insn)] - 2) & (align - 2));
+ return insn_addresses[dest_uid];
}
\f
/* Return non-zero if REG is not used after INSN.
/* Set this nonzero if move instructions will actually fail to work
when given unaligned data. */
#define STRICT_ALIGNMENT 1
+
+/* If LABEL_AFTER_BARRIER demands an alignment, return its base 2 logarithm. */
+#define LABEL_ALIGN_AFTER_BARRIER(LABEL_AFTER_BARRIER) \
+ barrier_align (LABEL_AFTER_BARRIER)
+
+#define LOOP_ALIGN(A_LABEL) (TARGET_SMALLCODE ? 0 : 2)
+
+#define LABEL_ALIGN(A_LABEL) \
+( \
+ (PREV_INSN (A_LABEL) \
+ && GET_CODE (PREV_INSN (A_LABEL)) == INSN \
+ && GET_CODE (PATTERN (PREV_INSN (A_LABEL))) == UNSPEC_VOLATILE \
+ && XINT (PATTERN (PREV_INSN (A_LABEL)), 1) == 1) \
+ /* explicit alignment insn in constant tables. */ \
+ ? INTVAL (XVECEXP (PATTERN (PREV_INSN (A_LABEL)), 0, 0)) \
+ : 0)
+
+/* Jump tables must be 32 bit aligned, no matter the size of the element. */
+#define ADDR_VEC_ALIGN(ADDR_VEC) 2
+
+/* The base two logarithm of the known minimum alignment of an insn length. */
+#define INSN_LENGTH_ALIGNMENT(A_INSN) \
+ (GET_CODE (A_INSN) == INSN \
+ ? 1 \
+ : GET_CODE (A_INSN) == JUMP_INSN || GET_CODE (A_INSN) == CALL_INSN \
+ ? 1 \
+ : CACHE_LOG)
\f
/* Standard register usage. */
for the index in the tablejump instruction. */
#define CASE_VECTOR_MODE (TARGET_BIGTABLE ? SImode : HImode)
+#define CASE_VECTOR_SHORTEN_MODE(MIN_OFFSET, MAX_OFFSET, BODY) \
+((MIN_OFFSET) >= 0 && (MAX_OFFSET) <= 127 \
+ ? (ADDR_DIFF_VEC_FLAGS (BODY).offset_unsigned = 0, QImode) \
+ : (MIN_OFFSET) >= 0 && (MAX_OFFSET) <= 255 \
+ ? (ADDR_DIFF_VEC_FLAGS (BODY).offset_unsigned = 1, QImode) \
+ : (MIN_OFFSET) >= -32768 && (MAX_OFFSET) <= 32767 ? HImode \
+ : SImode)
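The macro picks the narrowest element mode the offset range allows. A plain-C sketch of the size choice (hypothetical helper, not part of the patch; it assumes, as the `extu.b` in the casesi_worker template suggests, that `offset_unsigned` marks byte tables needing zero-extension):

```c
/* Sketch of the element-size selection: offsets in [0,127] survive a
   sign-extending byte load unchanged, so no flag is needed; [0,255]
   requires the load to be zero-extended; wider ranges fall back to
   2- or 4-byte entries.  */
static int
vec_elt_size (long min_offset, long max_offset, int *offset_unsigned)
{
  *offset_unsigned = 0;
  if (min_offset >= 0 && max_offset <= 127)
    return 1;			/* QImode, sign-extension harmless.  */
  if (min_offset >= 0 && max_offset <= 255)
    {
      *offset_unsigned = 1;	/* QImode, needs zero-extension.  */
      return 1;
    }
  if (min_offset >= -32768 && max_offset <= 32767)
    return 2;			/* HImode.  */
  return 4;			/* SImode.  */
}
```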
+
/* Define as C expression which evaluates to nonzero if the tablejump
instruction expects the table to contain offsets from the address of the
table.
((OUTVAR) = (char *) alloca (strlen (NAME) + 10), \
sprintf ((OUTVAR), "%s.%d", (NAME), (NUMBER)))
-/* Jump tables must be 32 bit aligned, no matter the size of the element. */
-#define ASM_OUTPUT_CASE_LABEL(STREAM,PREFIX,NUM,TABLE) \
- fprintf ((STREAM), "\t.align 2\n%s%d:\n", (PREFIX), (NUM));
-
/* Output a relative address table. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM,VALUE,REL) \
- switch (sh_addr_diff_vec_mode) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM,BODY,VALUE,REL) \
+ switch (GET_MODE (BODY)) \
{ \
case SImode: \
asm_fprintf ((STREAM), "\t.long\t%LL%d-%LL%d\n", (VALUE),(REL)); \
extern enum machine_mode sh_addr_diff_vec_mode;
-extern int optimize; /* needed for gen_casesi, and addr_diff_vec_adjust. */
+extern int optimize; /* needed for gen_casesi. */
+
+extern short *label_align;
/* Declare functions defined in sh.c and used in templates. */
SH_AFTER_MDEP_REORG
};
+extern enum mdep_reorg_phase_e mdep_reorg_phase;
+
void machine_dependent_reorg ();
-int short_cbranch_p ();
-int med_branch_p ();
-int braf_branch_p ();
-int align_length ();
-int addr_diff_vec_adjust ();
struct rtx_def *sfunc_uses_reg ();
+int barrier_align ();
#define MACHINE_DEPENDENT_REORG(X) machine_dependent_reorg(X)
#define MOVE_RATIO (TARGET_SMALLCODE ? 2 : 16)
\f
/* Instructions with unfilled delay slots take up an extra two bytes for
- the nop in the delay slot. Instructions at the start of loops, or
- after unconditional branches, may take up extra room when they are
- aligned. ??? We would get more accurate results if we did instruction
- alignment based on the value of INSN_CURRENT_ADDRESS; the approach used
- here is too conservative. */
+ the nop in the delay slot. */
#define ADJUST_INSN_LENGTH(X, LENGTH) \
if (((GET_CODE (X) == INSN \
&& GET_CODE (PATTERN (X)) != ADDR_VEC)) \
&& GET_CODE (PATTERN (NEXT_INSN (PREV_INSN (X)))) != SEQUENCE \
&& get_attr_needs_delay_slot (X) == NEEDS_DELAY_SLOT_YES) \
- (LENGTH) += 2; \
- if (GET_CODE (X) == INSN \
- && GET_CODE (PATTERN (X)) == UNSPEC_VOLATILE \
- && XINT (PATTERN (X), 1) == 7) \
- (LENGTH) -= addr_diff_vec_adjust (X, LENGTH); \
- if (GET_CODE (X) == INSN \
- && GET_CODE (PATTERN (X)) == UNSPEC_VOLATILE \
- && XINT (PATTERN (X), 1) == 1) \
- (LENGTH) = align_length (X); \
- if (GET_CODE (X) == JUMP_INSN \
- && GET_CODE (PATTERN (X)) == ADDR_DIFF_VEC) \
- { \
- /* The code before an ADDR_DIFF_VEC is even aligned, \
- thus any odd estimate is wrong. */ \
- (LENGTH) &= ~1; \
- /* If not optimizing, the alignment is implicit. */ \
- if (! optimize) \
- (LENGTH) += 2; \
- }
+ (LENGTH) += 2;
/* Enable a bug fix for the shorten_branches pass. */
#define SHORTEN_WITH_ADJUST_INSN_LENGTH
; In machine_dependent_reorg, we split all branches that are longer than
; 2 bytes.
+;; The maximum range used for SImode constant pool entries is 1018.  A final
+;; instruction can add 8 bytes while only being 4 bytes in size, thus we
+;; can have a total of 1022 bytes in the pool. Add 4 bytes for a branch
+;; instruction around the pool table, 2 bytes of alignment before the table,
+;; and 30 bytes of alignment after the table. That gives a maximum total
+;; pool size of 1058 bytes.
+;; Worst case code/pool content size ratio is 1:2 (using asms).
+;; Thus, in the worst case, there is one instruction in front of a maximum
+;; sized pool, and then there are 1052 bytes of pool for every 508 bytes of
+;; code. For the last n bytes of code, there are 2n + 36 bytes of pool.
+;; If we have a forward branch, the initial table will be put after the
+;; unconditional branch.
+;;
+;; ??? We could do much better by keeping track of the actual pcloads within
+;; the branch range and in the pcload range in front of the branch range.
+
+;; ??? This looks ugly because genattrtab won't allow if_then_else or cond
+;; inside an le.
+(define_attr "short_cbranch_p" "no,yes"
+ (cond [(ne (symbol_ref "mdep_reorg_phase <= SH_FIXUP_PCLOAD") (const_int 0))
+ (const_string "no")
+ (leu (plus (minus (match_dup 0) (pc)) (const_int 252)) (const_int 506))
+ (const_string "yes")
+ (ne (symbol_ref "NEXT_INSN (PREV_INSN (insn)) != insn") (const_int 0))
+ (const_string "no")
+ (leu (plus (minus (match_dup 0) (pc)) (const_int 252)) (const_int 508))
+ (const_string "yes")
+ ] (const_string "no")))
+
+(define_attr "med_branch_p" "no,yes"
+ (cond [(leu (plus (minus (match_dup 0) (pc)) (const_int 990))
+ (const_int 1988))
+ (const_string "yes")
+ (ne (symbol_ref "mdep_reorg_phase <= SH_FIXUP_PCLOAD") (const_int 0))
+ (const_string "no")
+ (leu (plus (minus (match_dup 0) (pc)) (const_int 4092))
+ (const_int 8186))
+ (const_string "yes")
+ ] (const_string "no")))
+
+(define_attr "med_cbranch_p" "no,yes"
+ (cond [(leu (plus (minus (match_dup 0) (pc)) (const_int 988))
+ (const_int 1986))
+ (const_string "yes")
+ (ne (symbol_ref "mdep_reorg_phase <= SH_FIXUP_PCLOAD") (const_int 0))
+ (const_string "no")
+ (leu (plus (minus (match_dup 0) (pc)) (const_int 4090))
+ (const_int 8184))
+ (const_string "yes")
+ ] (const_string "no")))
+
+(define_attr "braf_branch_p" "no,yes"
+ (cond [(ne (symbol_ref "! TARGET_SH2") (const_int 0))
+ (const_string "no")
+ (leu (plus (minus (match_dup 0) (pc)) (const_int 10330))
+ (const_int 20660))
+ (const_string "yes")
+ (ne (symbol_ref "mdep_reorg_phase <= SH_FIXUP_PCLOAD") (const_int 0))
+ (const_string "no")
+ (leu (plus (minus (match_dup 0) (pc)) (const_int 32764))
+ (const_int 65530))
+ (const_string "yes")
+ ] (const_string "no")))
+
+(define_attr "braf_cbranch_p" "no,yes"
+ (cond [(ne (symbol_ref "! TARGET_SH2") (const_int 0))
+ (const_string "no")
+ (leu (plus (minus (match_dup 0) (pc)) (const_int 10328))
+ (const_int 20658))
+ (const_string "yes")
+ (ne (symbol_ref "mdep_reorg_phase <= SH_FIXUP_PCLOAD") (const_int 0))
+ (const_string "no")
+ (leu (plus (minus (match_dup 0) (pc)) (const_int 32762))
+ (const_int 65528))
+ (const_string "yes")
+ ] (const_string "no")))
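Each `leu` test above folds a two-sided displacement range into a single unsigned comparison: `-A <= d <= B` holds exactly when `(unsigned) (d + A) <= A + B`, since negative sums wrap to large unsigned values. A standalone C check of the idiom (nothing SH-specific):

```c
/* Demonstration of the unsigned range-check idiom used by the branch
   attributes: -lo_mag <= d <= hi  iff
   (unsigned) (d + lo_mag) <= lo_mag + hi.  */
static int
in_branch_range (long d, long lo_mag, long hi)
{
  return (unsigned long) (d + lo_mag) <= (unsigned long) (lo_mag + hi);
}
```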
+
; An unconditional jump in the range -4092..4098 can be 2 bytes long.
; For wider ranges, we need a combination of a code and a data part.
; If we can get a scratch register for a long range jump, the code
; All other instructions are two bytes long by default.
+;; ??? This should use something like *branch_p (minus (match_dup 0) (pc)),
+;; but genattrtab doesn't understand this.
(define_attr "length" ""
(cond [(eq_attr "type" "cbranch")
- (cond [(ne (symbol_ref "short_cbranch_p (insn)") (const_int 0))
+ (cond [(eq_attr "short_cbranch_p" "yes")
(const_int 2)
- (ne (symbol_ref "med_branch_p (insn, 2)") (const_int 0))
+ (eq_attr "med_cbranch_p" "yes")
(const_int 6)
- (ne (symbol_ref "braf_branch_p (insn, 2)") (const_int 0))
- (const_int 10)
- (ne (pc) (pc))
+ (eq_attr "braf_cbranch_p" "yes")
(const_int 12)
+;; ??? using pc is not computed transitively.
+ (ne (match_dup 0) (match_dup 0))
+ (const_int 14)
] (const_int 16))
(eq_attr "type" "jump")
- (cond [(ne (symbol_ref "med_branch_p (insn, 0)") (const_int 0))
+ (cond [(eq_attr "med_branch_p" "yes")
(const_int 2)
(and (eq (symbol_ref "GET_CODE (PREV_INSN (insn))")
(symbol_ref "INSN"))
(eq (symbol_ref "INSN_CODE (PREV_INSN (insn))")
(symbol_ref "code_for_indirect_jump_scratch")))
- (if_then_else (ne (symbol_ref "braf_branch_p (insn, 0)")
- (const_int 0))
+ (if_then_else (eq_attr "braf_branch_p" "yes")
(const_int 6)
(const_int 10))
- (ne (symbol_ref "braf_branch_p (insn, 0)") (const_int 0))
+ (eq_attr "braf_branch_p" "yes")
(const_int 10)
- (ne (pc) (pc))
+;; ??? using pc is not computed transitively.
+ (ne (match_dup 0) (match_dup 0))
(const_int 12)
] (const_int 14))
] (const_int 2)))
""
"*
{
- enum machine_mode mode
- = optimize
- ? GET_MODE (PATTERN (prev_real_insn (operands[2])))
- : sh_addr_diff_vec_mode;
- switch (mode)
+ rtx diff_vec = PATTERN (next_real_insn (operands[2]));
+
+ if (GET_CODE (diff_vec) != ADDR_DIFF_VEC)
+ abort ();
+
+ switch (GET_MODE (diff_vec))
{
case SImode:
return \"shll2 %1\;mov.l @(r0,%1),%0\";
case HImode:
return \"add %1,%1\;mov.w @(r0,%1),%0\";
case QImode:
- {
- rtx adj = PATTERN (prev_real_insn (operands[2]));
- if ((insn_addresses[INSN_UID (XEXP ( XVECEXP (adj, 0, 1), 0))]
- - insn_addresses[INSN_UID (XEXP (XVECEXP (adj, 0, 2), 0))])
- <= 126)
- return \"mov.b @(r0,%1),%0\";
+ if (ADDR_DIFF_VEC_FLAGS (diff_vec).offset_unsigned)
return \"mov.b @(r0,%1),%0\;extu.b %0,%0\";
- }
+ return \"mov.b @(r0,%1),%0\";
default:
abort ();
}
}"
[(set_attr "length" "4")])
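The QImode arm above chooses between sign- and zero-extension of the table byte based on `offset_unsigned`; the difference only matters for entries of 128 or more. A plain-C illustration of the semantics (mirrors the intent, not the SH assembly):

```c
#include <stdint.h>

/* Why the extu.b is needed: a sign-extending byte load (like SH's
   mov.b) reads a table entry of 0x90 as -112; when the table was
   marked offset_unsigned, the entry must be interpreted as 144.  */
static long
byte_table_offset (uint8_t entry, int offset_unsigned)
{
  return offset_unsigned
	 ? (long) entry			/* zero-extend, like extu.b  */
	 : (long) (int8_t) entry;	/* sign-extend, like mov.b   */
}
```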
-;; Include ADDR_DIFF_VECS in the shorten_branches pass; we have to
-;; use a negative-length instruction to actually accomplish this.
-(define_insn "addr_diff_vec_adjust"
- [(unspec_volatile [(label_ref (match_operand 0 "" ""))
- (label_ref (match_operand 1 "" ""))
- (label_ref (match_operand 2 "" ""))
- (match_operand 3 "const_int_operand" "")] 7)]
- ""
- "*
-{
- /* ??? ASM_OUTPUT_ADDR_DIFF_ELT gets passed no context information, so
- we must use a kludge with a global variable. */
- sh_addr_diff_vec_mode = GET_MODE (PATTERN (insn));
- return \"\";
-}"
-;; Need a variable length for this to be processed in each shorten_branch pass.
-;; The actual work is done in ADJUST_INSN_LENGTH, because length attributes
-;; need to be (a choice of) constants.
-;; We use the calculated length before ADJUST_INSN_LENGTH to
-;; determine if the insn_addresses array contents are valid.
- [(set (attr "length")
- (if_then_else (eq (pc) (const_int -1))
- (const_int 2) (const_int 0)))])
-
(define_insn "return"
[(return)]
"reload_completed"
; align to a two byte boundary
-(define_insn "align_2"
+(define_expand "align_2"
[(unspec_volatile [(const_int 1)] 1)]
""
- ".align 1"
- [(set_attr "length" "0")
- (set_attr "in_delay_slot" "no")])
+ "")
; align to a four byte boundary
;; align_4 and align_log are instructions for the starts of loops, or
(define_insn "align_log"
[(unspec_volatile [(match_operand 0 "const_int_operand" "")] 1)]
""
- ".align %O0"
-;; Need a variable length for this to be processed in each shorten_branch pass.
-;; The actual work is done in ADJUST_INSN_LENGTH, because length attributes
-;; need to be (a choice of) constants.
- [(set (attr "length")
- (if_then_else (ne (pc) (pc)) (const_int 2) (const_int 0)))
+ ""
+ [(set_attr "length" "0")
(set_attr "in_delay_slot" "no")])
; emitted at the end of the literal table, used to emit the
/* This is how to output an element of a case-vector that is relative. */
#undef ASM_OUTPUT_ADDR_DIFF_ELT
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word .L%d-.L%d\n", VALUE, REL)
/* This is how to output an element of a case-vector that is absolute.
/* This is how to output an element of a case-vector that is relative.
(SPARC uses such vectors only when generating PIC.) */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
do { \
char label[30]; \
ASM_GENERATE_INTERNAL_LABEL (label, "L", VALUE); \
(SPUR does not use such vectors,
but we must define this macro anyway.) */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word L%d-L%d\n", VALUE, REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word L%d-L%d\n", VALUE, REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t%s .L%d-.L%d\n", \
(TARGET_BIG_SWITCH ? ".long" : ".short"), \
VALUE, REL)
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word L%d-L%d\n", VALUE, REL)
/* This is how to output an assembler line
/* This is how to output an element of a case-vector that is relative. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, VALUE, REL) \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
fprintf (FILE, "\t.word .L%d-.L%d\n", VALUE, REL)
/* This is how to output an assembler line
any alignment we'd encounter, so we skip the call to align_fuzz. */
return insn_current_address;
dest = JUMP_LABEL (branch);
+ /* BRANCH has no proper alignment chain set, so use SEQ. */
if (INSN_SHUID (branch) < INSN_SHUID (dest))
{
/* Forward branch. */
for (i = MAX_CODE_ALIGN; --i >= 0; )
align_tab[i] = NULL_RTX;
seq = get_last_insn ();
- for (insn_current_address = 0; seq; seq = PREV_INSN (seq))
+ for (; seq; seq = PREV_INSN (seq))
{
int uid = INSN_UID (seq);
int log;
log = (GET_CODE (seq) == CODE_LABEL ? LABEL_TO_ALIGNMENT (seq) : 0);
uid_align[uid] = align_tab[0];
- insn_addresses[uid] = --insn_current_address;
if (log)
{
/* Found an alignment label. */
for (i = log - 1; i >= 0; i--)
align_tab[i] = seq;
}
- if (GET_CODE (seq) != INSN || GET_CODE (PATTERN (seq)) != SEQUENCE)
- insn = seq;
- else
+ }
+#ifdef CASE_VECTOR_SHORTEN_MODE
+ if (optimize)
+ {
+ /* Look for ADDR_DIFF_VECs, and initialize their minimum and maximum
+ label fields. */
+
+ int min_shuid = INSN_SHUID (get_insns ()) - 1;
+ int max_shuid = INSN_SHUID (get_last_insn ()) + 1;
+ int rel;
+
+ for (insn = first; insn != 0; insn = NEXT_INSN (insn))
{
- insn = XVECEXP (PATTERN (seq), 0, 0);
- uid = INSN_UID (insn);
+ rtx min_lab = NULL_RTX, max_lab = NULL_RTX, pat;
+ int len, i, min, max, insn_shuid;
+ int min_align;
+ addr_diff_vec_flags flags;
+
+ if (GET_CODE (insn) != JUMP_INSN
+ || GET_CODE (PATTERN (insn)) != ADDR_DIFF_VEC)
+ continue;
+ pat = PATTERN (insn);
+ len = XVECLEN (pat, 1);
+ if (len <= 0)
+ abort ();
+ min_align = MAX_CODE_ALIGN;
+ for (min = max_shuid, max = min_shuid, i = len - 1; i >= 0; i--)
+ {
+ rtx lab = XEXP (XVECEXP (pat, 1, i), 0);
+ int shuid = INSN_SHUID (lab);
+ if (shuid < min)
+ {
+ min = shuid;
+ min_lab = lab;
+ }
+ if (shuid > max)
+ {
+ max = shuid;
+ max_lab = lab;
+ }
+ if (min_align > LABEL_TO_ALIGNMENT (lab))
+ min_align = LABEL_TO_ALIGNMENT (lab);
+ }
+ XEXP (pat, 2) = gen_rtx_LABEL_REF (VOIDmode, min_lab);
+ XEXP (pat, 3) = gen_rtx_LABEL_REF (VOIDmode, max_lab);
+ insn_shuid = INSN_SHUID (insn);
+ rel = INSN_SHUID (XEXP (XEXP (pat, 0), 0));
+ flags.min_align = min_align;
+ flags.base_after_vec = rel > insn_shuid;
+ flags.min_after_vec = min > insn_shuid;
+ flags.max_after_vec = max > insn_shuid;
+ flags.min_after_base = min > rel;
+ flags.max_after_base = max > rel;
+ ADDR_DIFF_VEC_FLAGS (pat) = flags;
}
}
+#endif /* CASE_VECTOR_SHORTEN_MODE */
/* Compute initial lengths, addresses, and varying flags for each insn. */
insn_last_address = insn_addresses[uid];
insn_addresses[uid] = insn_current_address;
- if (! varying_length[uid])
+ if (optimize && GET_CODE (insn) == JUMP_INSN
+ && GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC)
+ {
+#ifdef CASE_VECTOR_SHORTEN_MODE
+ rtx body = PATTERN (insn);
+ int old_length = insn_lengths[uid];
+ rtx rel_lab = XEXP (XEXP (body, 0), 0);
+ rtx min_lab = XEXP (XEXP (body, 2), 0);
+ rtx max_lab = XEXP (XEXP (body, 3), 0);
+ addr_diff_vec_flags flags = ADDR_DIFF_VEC_FLAGS (body);
+ int rel_addr = insn_addresses[INSN_UID (rel_lab)];
+ int min_addr = insn_addresses[INSN_UID (min_lab)];
+ int max_addr = insn_addresses[INSN_UID (max_lab)];
+ rtx prev;
+ int rel_align = 0;
+
+ /* Try to find a known alignment for rel_lab. */
+ for (prev = rel_lab;
+ prev
+ && ! insn_lengths[INSN_UID (prev)]
+ && ! (varying_length[INSN_UID (prev)] & 1);
+ prev = PREV_INSN (prev))
+ if (varying_length[INSN_UID (prev)] & 2)
+ {
+ rel_align = LABEL_TO_ALIGNMENT (prev);
+ break;
+ }
+
+ /* See the comment on addr_diff_vec_flags in rtl.h for the
+ meaning of the flag values.  base: REL_LAB, vec: INSN.  */
+ /* Anything after INSN still has addresses from the last
+ pass; adjust these so that they reflect our current
+ estimate for this pass. */
+ if (flags.base_after_vec)
+ rel_addr += insn_current_address - insn_last_address;
+ if (flags.min_after_vec)
+ min_addr += insn_current_address - insn_last_address;
+ if (flags.max_after_vec)
+ max_addr += insn_current_address - insn_last_address;
+ /* We want to know the worst case, i.e. lowest possible value
+ for the offset of MIN_LAB. If MIN_LAB is after REL_LAB,
+ its offset is positive, and we have to be wary of code shrink;
+ otherwise, it is negative, and we have to be wary of code
+ size increase. */
+ if (flags.min_after_base)
+ {
+ /* If INSN is between REL_LAB and MIN_LAB, the size
+ changes we are about to make can change the alignment
+ within the observed offset, therefore we have to break
+ it up into two parts that are independent. */
+ if (! flags.base_after_vec && flags.min_after_vec)
+ {
+ min_addr -= align_fuzz (rel_lab, insn, rel_align, 0);
+ min_addr -= align_fuzz (insn, min_lab, 0, 0);
+ }
+ else
+ min_addr -= align_fuzz (rel_lab, min_lab, rel_align, 0);
+ }
+ else
+ {
+ if (flags.base_after_vec && ! flags.min_after_vec)
+ {
+ min_addr -= align_fuzz (min_lab, insn, 0, ~0);
+ min_addr -= align_fuzz (insn, rel_lab, 0, ~0);
+ }
+ else
+ min_addr -= align_fuzz (min_lab, rel_lab, 0, ~0);
+ }
+ /* Likewise, determine the highest possible value
+ for the offset of MAX_LAB. */
+ if (flags.max_after_base)
+ {
+ if (! flags.base_after_vec && flags.max_after_vec)
+ {
+ max_addr += align_fuzz (rel_lab, insn, rel_align, ~0);
+ max_addr += align_fuzz (insn, max_lab, 0, ~0);
+ }
+ else
+ max_addr += align_fuzz (rel_lab, max_lab, rel_align, ~0);
+ }
+ else
+ {
+ if (flags.base_after_vec && ! flags.max_after_vec)
+ {
+ max_addr += align_fuzz (max_lab, insn, 0, 0);
+ max_addr += align_fuzz (insn, rel_lab, 0, 0);
+ }
+ else
+ max_addr += align_fuzz (max_lab, rel_lab, 0, 0);
+ }
+ PUT_MODE (body, CASE_VECTOR_SHORTEN_MODE (min_addr - rel_addr,
+ max_addr - rel_addr,
+ body));
+#if !defined(READONLY_DATA_SECTION) || defined(JUMP_TABLES_IN_TEXT_SECTION)
+ insn_lengths[uid]
+ = (XVECLEN (body, 1) * GET_MODE_SIZE (GET_MODE (body)));
+ insn_current_address += insn_lengths[uid];
+ if (insn_lengths[uid] != old_length)
+ something_changed = 1;
+#endif
+ continue;
+#endif /* CASE_VECTOR_SHORTEN_MODE */
+ }
+ else if (! (varying_length[uid]))
{
insn_current_address += insn_lengths[uid];
continue;
#ifdef ASM_OUTPUT_ADDR_DIFF_ELT
ASM_OUTPUT_ADDR_DIFF_ELT
(file,
+ body,
CODE_LABEL_NUMBER (XEXP (XVECEXP (body, 1, idx), 0)),
CODE_LABEL_NUMBER (XEXP (XEXP (body, 0), 0)));
#else
/* Vector of address differences X0 - BASE, X1 - BASE, ...
First operand is BASE; the vector contains the X's.
The machine mode of this rtx says how much space to leave
- for each difference. */
-DEF_RTL_EXPR(ADDR_DIFF_VEC, "addr_diff_vec", "eE", 'x')
+ for each difference and is adjusted by branch shortening if
+ CASE_VECTOR_SHORTEN_MODE is defined.
+ The third and fourth operands store the target labels with the
+ minimum and maximum addresses respectively.
+ The fifth operand stores flags for use by branch shortening.
+ Set at the start of shorten_branches:
+ min_align: the minimum alignment for any of the target labels.
+ base_after_vec: true iff BASE is after the ADDR_DIFF_VEC.
+ min_after_vec: true iff minimum address target label is after the ADDR_DIFF_VEC.
+ max_after_vec: true iff maximum address target label is after the ADDR_DIFF_VEC.
+ min_after_base: true iff minimum address target label is after BASE.
+ max_after_base: true iff maximum address target label is after BASE.
+ Set by the actual branch shortening process:
+ offset_unsigned: true iff offsets have to be treated as unsigned.
+ scale: scaling that is necessary to make offsets fit into the mode.
+
+ The third, fourth and fifth operands are only valid when
+ CASE_VECTOR_SHORTEN_MODE is defined, and only in optimizing
+ compilations. */
+
+DEF_RTL_EXPR(ADDR_DIFF_VEC, "addr_diff_vec", "eEeei", 'x')
/* ----------------------------------------------------------------------
At the top level of an instruction (perhaps under PARALLEL).
extern char rtx_class[];
#define GET_RTX_CLASS(CODE) (rtx_class[(int) (CODE)])
\f
+/* The flags and bitfields of an ADDR_DIFF_VEC. BASE is the base label
+ relative to which the offsets are calculated, as explained in rtl.def. */
+typedef struct
+{
+ /* Set at the start of shorten_branches - ONLY WHEN OPTIMIZING - : */
+ unsigned min_align: 8;
+ /* Flags: */
+ unsigned base_after_vec: 1; /* BASE is after the ADDR_DIFF_VEC. */
+ unsigned min_after_vec: 1; /* minimum address target label is after the ADDR_DIFF_VEC. */
+ unsigned max_after_vec: 1; /* maximum address target label is after the ADDR_DIFF_VEC. */
+ unsigned min_after_base: 1; /* minimum address target label is after BASE. */
+ unsigned max_after_base: 1; /* maximum address target label is after BASE. */
+ /* Set by the actual branch shortening process - ONLY WHEN OPTIMIZING - : */
+ unsigned offset_unsigned: 1; /* offsets have to be treated as unsigned. */
+ unsigned : 2;
+ unsigned scale : 8;
+} addr_diff_vec_flags;
+
/* Common union for an element of an rtx. */
typedef union rtunion_def
struct rtx_def *rtx;
struct rtvec_def *rtvec;
enum machine_mode rttype;
+ addr_diff_vec_flags rt_addr_diff_vec_flags;
} rtunion;
/* RTL expression ("rtx"). */
#define REG_NOTES(INSN) ((INSN)->fld[6].rtx)
+#define ADDR_DIFF_VEC_FLAGS(RTX) ((RTX)->fld[4].rt_addr_diff_vec_flags)
+
/* Don't forget to change reg_note_name in rtl.c. */
enum reg_note { REG_DEAD = 1, REG_INC = 2, REG_EQUIV = 3, REG_WAS_0 = 4,
REG_EQUAL = 5, REG_RETVAL = 6, REG_LIBCALL = 7,
@code{Pmode}.
@findex addr_diff_vec
-@item (addr_diff_vec:@var{m} @var{base} [@var{lr0} @var{lr1} @dots{}])
+@item (addr_diff_vec:@var{m} @var{base} [@var{lr0} @var{lr1} @dots{}] @var{min} @var{max} @var{flags})
Represents a table of jump addresses expressed as offsets from
@var{base}. The vector elements @var{lr0}, etc., are @code{label_ref}
expressions and so is @var{base}. The mode @var{m} specifies how much
-space is given to each address-difference.@refill
+space is given to each address-difference. @var{min} and @var{max}
+are set up by branch shortening and hold a label with a minimum and a
+maximum address, respectively. @var{flags} indicates the relative
+position of @var{base}, @var{min} and @var{max} to the containing insn
+and of @var{min} and @var{max} to @var{base}. See @file{rtl.def} for details.@refill
@end table
@node Incdec, Assembler, Side Effects, RTL
if (CASE_VECTOR_PC_RELATIVE || flag_pic)
emit_jump_insn (gen_rtx_ADDR_DIFF_VEC (CASE_VECTOR_MODE,
gen_rtx_LABEL_REF (Pmode, table_label),
- gen_rtvec_v (ncases, labelvec)));
+ gen_rtvec_v (ncases, labelvec),
+ const0_rtx, const0_rtx, 0));
else
emit_jump_insn (gen_rtx_ADDR_VEC (CASE_VECTOR_MODE,
gen_rtvec_v (ncases, labelvec)));
@table @code
@cindex dispatch table
@findex ASM_OUTPUT_ADDR_DIFF_ELT
-@item ASM_OUTPUT_ADDR_DIFF_ELT (@var{stream}, @var{value}, @var{rel})
+@item ASM_OUTPUT_ADDR_DIFF_ELT (@var{stream}, @var{body}, @var{value}, @var{rel})
A C statement to output to the stdio stream @var{stream} an assembler
pseudo-instruction to generate a difference between two labels.
@var{value} and @var{rel} are the numbers of two internal labels. The
You must provide this macro on machines where the addresses in a
dispatch table are relative to the table's own address. If defined, GNU
CC will also use this macro on all machines when producing PIC.
+@var{body} is the body of the ADDR_DIFF_VEC; it is provided so that the
+mode and flags can be read.
@findex ASM_OUTPUT_ADDR_VEC_ELT
@item ASM_OUTPUT_ADDR_VEC_ELT (@var{stream}, @var{value})
An alias for a machine mode name. This is the machine mode that
elements of a jump-table should have.
+@findex CASE_VECTOR_SHORTEN_MODE
+@item CASE_VECTOR_SHORTEN_MODE (@var{min_offset}, @var{max_offset}, @var{body})
+Optional: return the preferred mode for an @code{addr_diff_vec}
+when the minimum and maximum offset are known. If you define this,
+it enables extra code in branch shortening to deal with @code{addr_diff_vec}.
+To make this work, you also have to define @code{INSN_ALIGN} and
+make the alignment for @code{addr_diff_vec} explicit.
+The @var{body} argument is provided so that the @code{offset_unsigned}
+and @code{scale} flags can be updated.
+
@findex CASE_VECTOR_PC_RELATIVE
@item CASE_VECTOR_PC_RELATIVE
Define this macro to be a C expression to indicate when jump-tables