Merge C++ conversion into trunk (2/6 - VEC rewrite)
- From: Diego Novillo <dnovillo at google dot com>
- To: gcc-patches at gcc dot gnu dot org, Lawrence Crowl <crowl at google dot com>
- Cc: Richard Guenther <rguenther at suse dot de>
- Date: Sun, 12 Aug 2012 16:11:26 -0400
- Subject: Merge C++ conversion into trunk (2/6 - VEC rewrite)
This implements the VEC re-write.
See http://gcc.gnu.org/ml/gcc-patches/2012-08/msg00711.html for
details.
Diego.
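As a rough illustration of the shape of the new interface (hypothetical names, not the actual vec.h code): the macro layer now sits on top of a single vec_t<T> template, and element accessors such as VEC_index and VEC_last hand back a T& instead of a T*. A minimal stand-alone sketch along those lines, assuming POD elements as vec.h does:

/* Illustrative sketch only -- not the real gcc/vec.h.  It shows a
   template vector whose accessors return references, which is what
   turns "VEC_index (...)->field" into "VEC_index (...).field" in the
   diff below.  Element types are assumed to be PODs.  */

#include <cassert>
#include <cstdlib>

template<typename T>
struct toy_vec
{
  unsigned num;    /* elements in use */
  unsigned alloc;  /* allocated capacity */
  T *vec;          /* element storage */

  /* Accessors return T&, not T*.  */
  T &index (unsigned ix) { assert (ix < num); return vec[ix]; }
  T &last () { assert (num > 0); return vec[num - 1]; }

  void safe_push (const T &obj)
  {
    if (num == alloc)
      {
        alloc = alloc ? 2 * alloc : 4;
        vec = static_cast<T *> (std::realloc (vec, alloc * sizeof (T)));
      }
    vec[num++] = obj;
  }
};

int main ()
{
  toy_vec<int> v = { 0, 0, 0 };
  v.safe_push (1);
  v.safe_push (2);
  v.index (0) = 5;              /* write through the returned reference */
  assert (v.last () == 2);
  int *p = &v.index (1);        /* take the address when a pointer is needed */
  assert (*p == 2);
  std::free (v.vec);
  return 0;
}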
2012-08-12 Diego Novillo <dnovillo@google.com>
Re-implement VEC in C++.
* vec.c (vec_heap_free): Convert into a template function.
(vec_gc_o_reserve_1): Make extern.
(vec_gc_p_reserve): Remove.
(vec_gc_p_reserve_exact): Remove.
(vec_gc_o_reserve): Remove.
(vec_gc_o_reserve_exact): Remove.
(vec_heap_o_reserve_1): Make extern.
(vec_heap_p_reserve): Remove.
(vec_heap_p_reserve_exact): Remove.
(vec_heap_o_reserve): Remove.
(vec_heap_o_reserve_exact): Remove.
(vec_stack_p_reserve): Remove.
(vec_stack_p_reserve_exact): Remove.
* vec.h (VEC_CHECK_INFO, VEC_CHECK_DECL, VEC_CHECK_PASS,
VEC_ASSERT, VEC_ASSERT_FAIL, vec_assert_fail): Move earlier
in the file.
(VEC): Define to vec_t<T>.
(vec_allocation_t): Define.
(struct vec_prefix): Move earlier in the file.
(vec_t<T>): New template.
(DEF_VEC_I, DEF_VEC_ALLOC_I, DEF_VEC_P, DEF_VEC_ALLOC_P,
DEF_VEC_O, DEF_VEC_ALLOC_O,
DEF_VEC_ALLOC_P_STACK, DEF_VEC_ALLOC_O_STACK,
DEF_VEC_ALLOC_I_STACK): Expand to 'struct vec_swallow_trailing_semi'.
(DEF_VEC_A): Provide template instantiations for
GC/PCH markers that do not traverse the vector.
(vec_stack_p_reserve): Remove.
(vec_stack_p_reserve_exact): Remove.
(vec_stack_p_reserve_exact_1): Remove.
(vec_stack_o_reserve): Remove.
(vec_stack_o_reserve_exact): Remove.
(vec_stack_free): Re-write as a template function.
(vec_reserve): New template function.
(vec_reserve_exact): New template function.
(vec_heap_free): New template function if GATHER_STATISTICS is
defined. Otherwise, macro that expands to free().
(VEC_length_1): New template function.
(VEC_length): Call it.
(VEC_empty_1): New template function.
(VEC_empty): Call it.
(VEC_address_1): New template function.
(VEC_address): Call it.
(VEC_last_1): New template function.
(VEC_last): Call it. Change return type to T&.
Change all users that used VEC_Os.
(VEC_index_1): New template function.
(VEC_index): Call it. Return a T& instead of a T*.
Update all callers that were using VEC_O before.
(VEC_iterate_1): New template function.
(VEC_iterate): Call it.
(VEC_embedded_size_1): New template function.
(VEC_embedded_size): Call it.
(VEC_embedded_init_1): New template function.
(VEC_embedded_init): Call it.
(VEC_alloc_1): New template function.
(VEC_alloc): Call it. If A is 'stack', call XALLOCAVAR to
do the allocation.
(VEC_free_1): New template function.
(VEC_free): Call it.
(VEC_copy_1): New template function.
(VEC_copy): Call it.
(VEC_space_1): New template function.
(VEC_space): Call it.
(VEC_reserve_1): New template function.
(VEC_reserve): Call it.
(VEC_reserve_exact_1): New template function.
(VEC_reserve_exact): Call it.
(VEC_splice_1): New template function.
(VEC_splice): Call it.
(VEC_safe_splice_1): New template function.
(VEC_safe_splice): Call it.
(VEC_quick_push_1): New template function. Create two overloads, one
accepting T, the other accepting T *. Update all callers
where T and T * are ambiguous.
(VEC_quick_push): Call it.
(VEC_safe_push_1): New template function. Create two overloads, one
accepting T, the other accepting T *. Update all callers
where T and T * are ambiguous.
(VEC_safe_push): Call it.
(VEC_pop_1): New template function.
(VEC_pop): Call it.
(VEC_truncate_1): New template function.
(VEC_truncate): Call it.
(VEC_safe_grow_1): New template function.
(VEC_safe_grow): Call it.
(VEC_safe_grow_cleared_1): New template function.
(VEC_safe_grow_cleared): Call it.
(VEC_replace_1): New template function.
(VEC_replace): Call it. Always accept T instead of T*.
Update all callers that used VEC_Os.
(VEC_quick_insert_1): New template function.
(VEC_quick_insert): Call it.
(VEC_safe_insert_1): New template function.
(VEC_safe_insert): Call it.
(VEC_ordered_remove_1): New template function.
(VEC_ordered_remove): Call it.
(VEC_unordered_remove_1): New template function.
(VEC_unordered_remove): Call it.
(VEC_block_remove_1): New template function.
(VEC_block_remove): Call it.
(VEC_lower_bound_1): New template function.
(VEC_lower_bound): Call it.
(VEC_OP): Remove.
(DEF_VEC_FUNC_P): Remove.
(DEF_VEC_ALLOC_FUNC_P): Remove.
(DEF_VEC_NONALLOC_FUNCS_P): Remove.
(DEF_VEC_FUNC_O): Remove.
(DEF_VEC_ALLOC_FUNC_O): Remove.
(DEF_VEC_NONALLOC_FUNCS_O): Remove.
(DEF_VEC_ALLOC_FUNC_I): Remove.
(DEF_VEC_NONALLOC_FUNCS_I): Remove.
(DEF_VEC_ALLOC_FUNC_P_STACK): Remove.
(DEF_VEC_ALLOC_FUNC_O_STACK): Remove.
(DEF_VEC_ALLOC_FUNC_I_STACK): Remove.
(vec_reserve_exact): New template function.
* gengtype-lex.l (DEF_VEC_ALLOC_[IOP]/{EOID}): Remove.
* gengtype-parse.c (token_names): Remove DEF_VEC_ALLOC_[IOP].
(typedef_name): Emit vec_t<C1> instead of VEC_C1_C2.
(def_vec_alloc): Remove. Update all callers.
* gengtype.c (filter_type_name): New.
(output_mangled_typename): Call it.
(write_func_for_structure): Likewise.
(write_types): Likewise.
(write_root): Likewise.
(write_typed_alloc_def): Likewise.
(note_def_vec): Emit vec_t<TYPE_NAME> instead of VEC_TYPE_NAME_base.
(note_def_vec_alloc): Remove.
* gengtype.h (note_def_vec_alloc): Remove.
(DEFVEC_ALLOC): Remove token code.
* df-scan.c (df_bb_verify): Remove call to df_free_collection_rec
inside the insn traversal loop.
* gimplify.c (gimplify_compound_lval): Rename STACK to EXPR_STACK.
* ipa-inline.c (inline_small_functions): Rename HEAP to EDGE_HEAP.
* reg-stack.c (stack): Rename to STACK_PTR. Update all users.
* tree-vrp.c (stack): Rename to EQUIV_STACK. Update all users.
* config/bfin/bfin.c (hwloop_optimize): Update some calls to
VEC_* for vectors of non-pointers.
* config/c6x/c6x.c (try_rename_operands): Likewise.
(reshuffle_units): Likewise.
* config/mips/mips.c (mips_multi_start): Likewise.
(mips_multi_add): Likewise.
(mips_multi_copy_insn): Likewise.
(mips_multi_set_operand): Likewise.
* hw-doloop.c (discover_loop): Likewise.
(discover_loops): Likewise.
(reorg_loops): Likewise.
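The bulk of the diff below is mechanical fallout from two of the changes above: VEC_index/VEC_last now return a reference, so "->value" becomes ".value" (and call sites that genuinely want a pointer take "&VEC_index (...)"), and the push/replace entry points now come in T and T* overloads, so pushing a literal 0/NULL where T is itself a pointer needs an explicit cast to pick an overload. A minimal stand-alone reproduction of that ambiguity, with made-up names rather than the real VEC entry points:

/* Hypothetical reduction of the T vs. T* ambiguity mentioned in the
   ChangeLog; not GCC code.  */

#include <cstdio>

struct alias_set_entry_d;
typedef struct alias_set_entry_d *alias_set_entry;  /* T is a pointer type */

/* Two overloads, mirroring the VEC_safe_push_1 pair described above.  */
static void safe_push (alias_set_entry obj)
{
  (void) obj;
  std::puts ("pushed by value");
}

static void safe_push (const alias_set_entry *slot)
{
  (void) slot;
  std::puts ("pushed through pointer");
}

int main ()
{
  /* safe_push (0); would not compile: 0 is a null pointer constant and
     converts equally well to alias_set_entry and to const
     alias_set_entry *, so overload resolution is ambiguous.  */
  safe_push ((alias_set_entry) 0);  /* the cast selects the T overload,
                                       hence hunks like the alias.c one
                                       below */
  return 0;
}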
diff --git a/gcc/ada/gcc-interface/trans.c b/gcc/ada/gcc-interface/trans.c
index 1b1bca8..cd35cd1 100644
--- a/gcc/ada/gcc-interface/trans.c
+++ b/gcc/ada/gcc-interface/trans.c
@@ -2921,7 +2921,7 @@ finalize_nrv_unc_r (tree *tp, int *walk_subtrees, void *data)
= VEC_index (constructor_elt,
CONSTRUCTOR_ELTS
(TREE_OPERAND (TREE_OPERAND (ret_val, 0), 1)),
- 1)->value;
+ 1).value;
else
ret_val = TREE_OPERAND (TREE_OPERAND (ret_val, 0), 1);
}
@@ -2980,7 +2980,7 @@ finalize_nrv_unc_r (tree *tp, int *walk_subtrees, void *data)
TREE_OPERAND (alloc, 0),
VEC_index (constructor_elt,
CONSTRUCTOR_ELTS (TREE_OPERAND (alloc, 1)),
- 0)->value);
+ 0).value);
/* Build a modified CONSTRUCTOR that references NEW_VAR. */
p_array = TYPE_FIELDS (TREE_TYPE (alloc));
@@ -2990,7 +2990,7 @@ finalize_nrv_unc_r (tree *tp, int *walk_subtrees, void *data)
VEC_index (constructor_elt,
CONSTRUCTOR_ELTS
(TREE_OPERAND (alloc, 1)),
- 1)->value);
+ 1).value);
new_ret = build_constructor (TREE_TYPE (alloc), v);
}
else
diff --git a/gcc/ada/gcc-interface/utils.c b/gcc/ada/gcc-interface/utils.c
index cd91873..c9b29ad 100644
--- a/gcc/ada/gcc-interface/utils.c
+++ b/gcc/ada/gcc-interface/utils.c
@@ -4491,10 +4491,10 @@ convert (tree type, tree expr)
inner expression. */
if (TREE_CODE (expr) == CONSTRUCTOR
&& !VEC_empty (constructor_elt, CONSTRUCTOR_ELTS (expr))
- && VEC_index (constructor_elt, CONSTRUCTOR_ELTS (expr), 0)->index
+ && VEC_index (constructor_elt, CONSTRUCTOR_ELTS (expr), 0).index
== TYPE_FIELDS (etype))
unpadded
- = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (expr), 0)->value;
+ = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (expr), 0).value;
/* Otherwise, build an explicit component reference. */
else
@@ -5047,7 +5047,7 @@ remove_conversions (tree exp, bool true_address)
&& TYPE_JUSTIFIED_MODULAR_P (TREE_TYPE (exp)))
return
remove_conversions (VEC_index (constructor_elt,
- CONSTRUCTOR_ELTS (exp), 0)->value,
+ CONSTRUCTOR_ELTS (exp), 0).value,
true);
break;
diff --git a/gcc/ada/gcc-interface/utils2.c b/gcc/ada/gcc-interface/utils2.c
index a8a21a6..4578114 100644
--- a/gcc/ada/gcc-interface/utils2.c
+++ b/gcc/ada/gcc-interface/utils2.c
@@ -441,7 +441,7 @@ compare_fat_pointers (location_t loc, tree result_type, tree p1, tree p2)
/* The constant folder doesn't fold fat pointer types so we do it here. */
if (TREE_CODE (p1) == CONSTRUCTOR)
- p1_array = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (p1), 0)->value;
+ p1_array = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (p1), 0).value;
else
p1_array = build_component_ref (p1, NULL_TREE,
TYPE_FIELDS (TREE_TYPE (p1)), true);
@@ -452,7 +452,7 @@ compare_fat_pointers (location_t loc, tree result_type, tree p1, tree p2)
null_pointer_node));
if (TREE_CODE (p2) == CONSTRUCTOR)
- p2_array = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (p2), 0)->value;
+ p2_array = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (p2), 0).value;
else
p2_array = build_component_ref (p2, NULL_TREE,
TYPE_FIELDS (TREE_TYPE (p2)), true);
@@ -473,14 +473,14 @@ compare_fat_pointers (location_t loc, tree result_type, tree p1, tree p2)
= fold_build2_loc (loc, EQ_EXPR, result_type, p1_array, p2_array);
if (TREE_CODE (p1) == CONSTRUCTOR)
- p1_bounds = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (p1), 1)->value;
+ p1_bounds = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (p1), 1).value;
else
p1_bounds
= build_component_ref (p1, NULL_TREE,
DECL_CHAIN (TYPE_FIELDS (TREE_TYPE (p1))), true);
if (TREE_CODE (p2) == CONSTRUCTOR)
- p2_bounds = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (p2), 1)->value;
+ p2_bounds = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (p2), 1).value;
else
p2_bounds
= build_component_ref (p2, NULL_TREE,
@@ -1336,7 +1336,7 @@ build_unary_op (enum tree_code op_code, tree result_type, tree operand)
{
result = VEC_index (constructor_elt,
CONSTRUCTOR_ELTS (operand),
- 0)->value;
+ 0).value;
result = convert (build_pointer_type (TREE_TYPE (operand)),
build_unary_op (ADDR_EXPR, NULL_TREE, result));
break;
@@ -2676,9 +2676,9 @@ gnat_stabilize_reference (tree ref, bool force, bool *success)
&& VEC_length (constructor_elt, CONSTRUCTOR_ELTS (ref)) == 1)
{
tree index
- = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (ref), 0)->index;
+ = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (ref), 0).index;
tree value
- = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (ref), 0)->value;
+ = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (ref), 0).value;
result
= build_constructor_single (type, index,
gnat_stabilize_reference_1 (value,
diff --git a/gcc/alias.c b/gcc/alias.c
index a105004..de7640f 100644
--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -852,8 +852,8 @@ new_alias_set (void)
if (flag_strict_aliasing)
{
if (alias_sets == 0)
- VEC_safe_push (alias_set_entry, gc, alias_sets, 0);
- VEC_safe_push (alias_set_entry, gc, alias_sets, 0);
+ VEC_safe_push (alias_set_entry, gc, alias_sets, (alias_set_entry) 0);
+ VEC_safe_push (alias_set_entry, gc, alias_sets, (alias_set_entry) 0);
return VEC_length (alias_set_entry, alias_sets) - 1;
}
else
diff --git a/gcc/c-family/c-common.c b/gcc/c-family/c-common.c
index a002541..ab3eb0a 100644
--- a/gcc/c-family/c-common.c
+++ b/gcc/c-family/c-common.c
@@ -8392,7 +8392,7 @@ parse_optimize_options (tree args, bool attr_p)
/* Build up argv vector. Just in case the string is stored away, use garbage
collected strings. */
VEC_truncate (const_char_p, optimize_args, 0);
- VEC_safe_push (const_char_p, gc, optimize_args, NULL);
+ VEC_safe_push (const_char_p, gc, optimize_args, (const_char_p)NULL);
for (ap = args; ap != NULL_TREE; ap = TREE_CHAIN (ap))
{
@@ -9390,10 +9390,10 @@ complete_array_type (tree *ptype, tree initial_value, bool do_default)
constructor_elt *ce;
bool fold_p = false;
- if (VEC_index (constructor_elt, v, 0)->index)
+ if (VEC_index (constructor_elt, v, 0).index)
maxindex = fold_convert_loc (input_location, sizetype,
VEC_index (constructor_elt,
- v, 0)->index);
+ v, 0).index);
curindex = maxindex;
for (cnt = 1;
diff --git a/gcc/c-family/c-pragma.c b/gcc/c-family/c-pragma.c
index 830ca81..77ed0f0 100644
--- a/gcc/c-family/c-pragma.c
+++ b/gcc/c-family/c-pragma.c
@@ -1209,9 +1209,9 @@ c_pp_lookup_pragma (unsigned int id, const char **space, const char **name)
+ VEC_length (pragma_ns_name, registered_pp_pragmas)))
{
*space = VEC_index (pragma_ns_name, registered_pp_pragmas,
- id - PRAGMA_FIRST_EXTERNAL)->space;
+ id - PRAGMA_FIRST_EXTERNAL).space;
*name = VEC_index (pragma_ns_name, registered_pp_pragmas,
- id - PRAGMA_FIRST_EXTERNAL)->name;
+ id - PRAGMA_FIRST_EXTERNAL).name;
return;
}
@@ -1334,7 +1334,7 @@ c_invoke_pragma_handler (unsigned int id)
pragma_handler_2arg handler_2arg;
id -= PRAGMA_FIRST_EXTERNAL;
- ihandler = VEC_index (internal_pragma_handler, registered_pragmas, id);
+ ihandler = &VEC_index (internal_pragma_handler, registered_pragmas, id);
if (ihandler->extra_data)
{
handler_2arg = ihandler->handler.handler_2arg;
diff --git a/gcc/c/c-decl.c b/gcc/c/c-decl.c
index 09df65d..e5d17b7 100644
--- a/gcc/c/c-decl.c
+++ b/gcc/c/c-decl.c
@@ -3932,7 +3932,7 @@ add_flexible_array_elts_to_size (tree decl, tree init)
if (VEC_empty (constructor_elt, CONSTRUCTOR_ELTS (init)))
return;
- elt = VEC_last (constructor_elt, CONSTRUCTOR_ELTS (init))->value;
+ elt = VEC_last (constructor_elt, CONSTRUCTOR_ELTS (init)).value;
type = TREE_TYPE (elt);
if (TREE_CODE (type) == ARRAY_TYPE
&& TYPE_SIZE (type) == NULL_TREE
diff --git a/gcc/c/c-parser.c b/gcc/c/c-parser.c
index 2237749..34d5a34 100644
--- a/gcc/c/c-parser.c
+++ b/gcc/c/c-parser.c
@@ -6529,9 +6529,9 @@ c_parser_postfix_expression (c_parser *parser)
break;
}
- e1_p = VEC_index (c_expr_t, cexpr_list, 0);
- e2_p = VEC_index (c_expr_t, cexpr_list, 1);
- e3_p = VEC_index (c_expr_t, cexpr_list, 2);
+ e1_p = &VEC_index (c_expr_t, cexpr_list, 0);
+ e2_p = &VEC_index (c_expr_t, cexpr_list, 1);
+ e3_p = &VEC_index (c_expr_t, cexpr_list, 2);
c = e1_p->value;
mark_exp_read (e2_p->value);
@@ -6611,8 +6611,8 @@ c_parser_postfix_expression (c_parser *parser)
break;
}
- e1_p = VEC_index (c_expr_t, cexpr_list, 0);
- e2_p = VEC_index (c_expr_t, cexpr_list, 1);
+ e1_p = &VEC_index (c_expr_t, cexpr_list, 0);
+ e2_p = &VEC_index (c_expr_t, cexpr_list, 1);
mark_exp_read (e1_p->value);
if (TREE_CODE (e1_p->value) == EXCESS_PRECISION_EXPR)
@@ -6671,15 +6671,15 @@ c_parser_postfix_expression (c_parser *parser)
if (VEC_length (c_expr_t, cexpr_list) == 2)
expr.value =
c_build_vec_perm_expr
- (loc, VEC_index (c_expr_t, cexpr_list, 0)->value,
- NULL_TREE, VEC_index (c_expr_t, cexpr_list, 1)->value);
+ (loc, VEC_index (c_expr_t, cexpr_list, 0).value,
+ NULL_TREE, VEC_index (c_expr_t, cexpr_list, 1).value);
else if (VEC_length (c_expr_t, cexpr_list) == 3)
expr.value =
c_build_vec_perm_expr
- (loc, VEC_index (c_expr_t, cexpr_list, 0)->value,
- VEC_index (c_expr_t, cexpr_list, 1)->value,
- VEC_index (c_expr_t, cexpr_list, 2)->value);
+ (loc, VEC_index (c_expr_t, cexpr_list, 0).value,
+ VEC_index (c_expr_t, cexpr_list, 1).value,
+ VEC_index (c_expr_t, cexpr_list, 2).value);
else
{
error_at (loc, "wrong number of arguments to "
diff --git a/gcc/c/c-typeck.c b/gcc/c/c-typeck.c
index c2f713e..5b710c3 100644
--- a/gcc/c/c-typeck.c
+++ b/gcc/c/c-typeck.c
@@ -6950,7 +6950,7 @@ pop_init_level (int implicit, struct obstack * braced_init_obstack)
bool constructor_zeroinit =
(VEC_length (constructor_elt, constructor_elements) == 1
&& integer_zerop
- (VEC_index (constructor_elt, constructor_elements, 0)->value));
+ (VEC_index (constructor_elt, constructor_elements, 0).value));
/* Do not warn for flexible array members or zero-length arrays. */
while (constructor_unfilled_fields
@@ -6997,10 +6997,10 @@ pop_init_level (int implicit, struct obstack * braced_init_obstack)
else if (VEC_length (constructor_elt,constructor_elements) != 1)
{
error_init ("extra elements in scalar initializer");
- ret.value = VEC_index (constructor_elt,constructor_elements,0)->value;
+ ret.value = VEC_index (constructor_elt,constructor_elements,0).value;
}
else
- ret.value = VEC_index (constructor_elt,constructor_elements,0)->value;
+ ret.value = VEC_index (constructor_elt,constructor_elements,0).value;
}
else
{
@@ -7671,9 +7671,9 @@ find_init_member (tree field, struct obstack * braced_init_obstack)
else if (TREE_CODE (constructor_type) == UNION_TYPE)
{
if (!VEC_empty (constructor_elt, constructor_elements)
- && (VEC_last (constructor_elt, constructor_elements)->index
+ && (VEC_last (constructor_elt, constructor_elements).index
== field))
- return VEC_last (constructor_elt, constructor_elements)->value;
+ return VEC_last (constructor_elt, constructor_elements).value;
}
return 0;
}
@@ -7856,7 +7856,7 @@ output_init_element (tree value, tree origtype, bool strict_string, tree type,
if (!implicit)
{
if (TREE_SIDE_EFFECTS (VEC_last (constructor_elt,
- constructor_elements)->value))
+ constructor_elements).value))
warning_init (0,
"initialized field with side-effects overwritten");
else if (warn_override_init)
diff --git a/gcc/combine.c b/gcc/combine.c
index 2b91eb9..3c4dee8 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -1587,7 +1587,7 @@ set_nonzero_bits_and_sign_copies (rtx x, const_rtx set, void *data)
(DF_LR_IN (ENTRY_BLOCK_PTR->next_bb), REGNO (x))
&& HWI_COMPUTABLE_MODE_P (GET_MODE (x)))
{
- reg_stat_type *rsp = VEC_index (reg_stat_type, reg_stat, REGNO (x));
+ reg_stat_type *rsp = &VEC_index (reg_stat_type, reg_stat, REGNO (x));
if (set == 0 || GET_CODE (set) == CLOBBER)
{
@@ -3635,21 +3635,21 @@ try_combine (rtx i3, rtx i2, rtx i1, rtx i0, int *new_direct_jump_p,
&& ! (temp = SET_DEST (XVECEXP (newpat, 0, 1)),
(REG_P (temp)
&& VEC_index (reg_stat_type, reg_stat,
- REGNO (temp))->nonzero_bits != 0
+ REGNO (temp)).nonzero_bits != 0
&& GET_MODE_PRECISION (GET_MODE (temp)) < BITS_PER_WORD
&& GET_MODE_PRECISION (GET_MODE (temp)) < HOST_BITS_PER_INT
&& (VEC_index (reg_stat_type, reg_stat,
- REGNO (temp))->nonzero_bits
+ REGNO (temp)).nonzero_bits
!= GET_MODE_MASK (word_mode))))
&& ! (GET_CODE (SET_DEST (XVECEXP (newpat, 0, 1))) == SUBREG
&& (temp = SUBREG_REG (SET_DEST (XVECEXP (newpat, 0, 1))),
(REG_P (temp)
&& VEC_index (reg_stat_type, reg_stat,
- REGNO (temp))->nonzero_bits != 0
+ REGNO (temp)).nonzero_bits != 0
&& GET_MODE_PRECISION (GET_MODE (temp)) < BITS_PER_WORD
&& GET_MODE_PRECISION (GET_MODE (temp)) < HOST_BITS_PER_INT
&& (VEC_index (reg_stat_type, reg_stat,
- REGNO (temp))->nonzero_bits
+ REGNO (temp)).nonzero_bits
!= GET_MODE_MASK (word_mode)))))
&& ! reg_overlap_mentioned_p (SET_DEST (XVECEXP (newpat, 0, 1)),
SET_SRC (XVECEXP (newpat, 0, 1)))
@@ -9425,7 +9425,7 @@ reg_nonzero_bits_for_combine (const_rtx x, enum machine_mode mode,
value. Otherwise, use the previously-computed global nonzero bits
for this register. */
- rsp = VEC_index (reg_stat_type, reg_stat, REGNO (x));
+ rsp = &VEC_index (reg_stat_type, reg_stat, REGNO (x));
if (rsp->last_set_value != 0
&& (rsp->last_set_mode == mode
|| (GET_MODE_CLASS (rsp->last_set_mode) == MODE_INT
@@ -9494,7 +9494,7 @@ reg_num_sign_bit_copies_for_combine (const_rtx x, enum machine_mode mode,
rtx tem;
reg_stat_type *rsp;
- rsp = VEC_index (reg_stat_type, reg_stat, REGNO (x));
+ rsp = &VEC_index (reg_stat_type, reg_stat, REGNO (x));
if (rsp->last_set_value != 0
&& rsp->last_set_mode == mode
&& ((rsp->last_set_label >= label_tick_ebb_start
@@ -12033,7 +12033,7 @@ update_table_tick (rtx x)
for (r = regno; r < endregno; r++)
{
- reg_stat_type *rsp = VEC_index (reg_stat_type, reg_stat, r);
+ reg_stat_type *rsp = &VEC_index (reg_stat_type, reg_stat, r);
rsp->last_set_table_tick = label_tick;
}
@@ -12135,7 +12135,7 @@ record_value_for_reg (rtx reg, rtx insn, rtx value)
register. */
for (i = regno; i < endregno; i++)
{
- rsp = VEC_index (reg_stat_type, reg_stat, i);
+ rsp = &VEC_index (reg_stat_type, reg_stat, i);
if (insn)
rsp->last_set = insn;
@@ -12161,7 +12161,7 @@ record_value_for_reg (rtx reg, rtx insn, rtx value)
for (i = regno; i < endregno; i++)
{
- rsp = VEC_index (reg_stat_type, reg_stat, i);
+ rsp = &VEC_index (reg_stat_type, reg_stat, i);
rsp->last_set_label = label_tick;
if (!insn
|| (value && rsp->last_set_table_tick >= label_tick_ebb_start))
@@ -12173,7 +12173,7 @@ record_value_for_reg (rtx reg, rtx insn, rtx value)
/* The value being assigned might refer to X (like in "x++;"). In that
case, we must replace it with (clobber (const_int 0)) to prevent
infinite loops. */
- rsp = VEC_index (reg_stat_type, reg_stat, regno);
+ rsp = &VEC_index (reg_stat_type, reg_stat, regno);
if (value && !get_last_value_validate (&value, insn, label_tick, 0))
{
value = copy_rtx (value);
@@ -12271,7 +12271,7 @@ record_dead_and_set_regs (rtx insn)
{
reg_stat_type *rsp;
- rsp = VEC_index (reg_stat_type, reg_stat, i);
+ rsp = &VEC_index (reg_stat_type, reg_stat, i);
rsp->last_death = insn;
}
}
@@ -12286,7 +12286,7 @@ record_dead_and_set_regs (rtx insn)
{
reg_stat_type *rsp;
- rsp = VEC_index (reg_stat_type, reg_stat, i);
+ rsp = &VEC_index (reg_stat_type, reg_stat, i);
rsp->last_set_invalid = 1;
rsp->last_set = insn;
rsp->last_set_value = 0;
@@ -12344,7 +12344,7 @@ record_promoted_value (rtx insn, rtx subreg)
continue;
}
- rsp = VEC_index (reg_stat_type, reg_stat, regno);
+ rsp = &VEC_index (reg_stat_type, reg_stat, regno);
if (rsp->last_set == insn)
{
if (SUBREG_PROMOTED_UNSIGNED_P (subreg) > 0)
@@ -12369,7 +12369,7 @@ record_promoted_value (rtx insn, rtx subreg)
static bool
reg_truncated_to_mode (enum machine_mode mode, const_rtx x)
{
- reg_stat_type *rsp = VEC_index (reg_stat_type, reg_stat, REGNO (x));
+ reg_stat_type *rsp = &VEC_index (reg_stat_type, reg_stat, REGNO (x));
enum machine_mode truncated = rsp->truncated_to_mode;
if (truncated == 0
@@ -12414,7 +12414,7 @@ record_truncated_value (rtx *p, void *data ATTRIBUTE_UNUSED)
else
return 0;
- rsp = VEC_index (reg_stat_type, reg_stat, REGNO (x));
+ rsp = &VEC_index (reg_stat_type, reg_stat, REGNO (x));
if (rsp->truncated_to_mode == 0
|| rsp->truncation_label < label_tick_ebb_start
|| (GET_MODE_SIZE (truncated_mode)
@@ -12493,7 +12493,7 @@ get_last_value_validate (rtx *loc, rtx insn, int tick, int replace)
for (j = regno; j < endregno; j++)
{
- reg_stat_type *rsp = VEC_index (reg_stat_type, reg_stat, j);
+ reg_stat_type *rsp = &VEC_index (reg_stat_type, reg_stat, j);
if (rsp->last_set_invalid
/* If this is a pseudo-register that was only set once and not
live at the beginning of the function, it is always valid. */
@@ -12597,7 +12597,7 @@ get_last_value (const_rtx x)
return 0;
regno = REGNO (x);
- rsp = VEC_index (reg_stat_type, reg_stat, regno);
+ rsp = &VEC_index (reg_stat_type, reg_stat, regno);
value = rsp->last_set_value;
/* If we don't have a value, or if it isn't for this basic block and
@@ -12661,7 +12661,7 @@ use_crosses_set_p (const_rtx x, int from_luid)
#endif
for (; regno < endreg; regno++)
{
- reg_stat_type *rsp = VEC_index (reg_stat_type, reg_stat, regno);
+ reg_stat_type *rsp = &VEC_index (reg_stat_type, reg_stat, regno);
if (rsp->last_set
&& rsp->last_set_label == label_tick
&& DF_INSN_LUID (rsp->last_set) > from_luid)
@@ -12909,7 +12909,7 @@ move_deaths (rtx x, rtx maybe_kill_insn, int from_luid, rtx to_insn,
if (code == REG)
{
unsigned int regno = REGNO (x);
- rtx where_dead = VEC_index (reg_stat_type, reg_stat, regno)->last_death;
+ rtx where_dead = VEC_index (reg_stat_type, reg_stat, regno).last_death;
/* Don't move the register if it gets killed in between from and to. */
if (maybe_kill_insn && reg_set_p (x, maybe_kill_insn)
@@ -13524,7 +13524,7 @@ distribute_notes (rtx notes, rtx from_insn, rtx i3, rtx i2, rtx elim_i2,
if (place && REG_NOTE_KIND (note) == REG_DEAD)
{
unsigned int regno = REGNO (XEXP (note, 0));
- reg_stat_type *rsp = VEC_index (reg_stat_type, reg_stat, regno);
+ reg_stat_type *rsp = &VEC_index (reg_stat_type, reg_stat, regno);
if (dead_or_set_p (place, XEXP (note, 0))
|| reg_bitfield_target_p (XEXP (note, 0), PATTERN (place)))
diff --git a/gcc/dwarf2cfi.c b/gcc/dwarf2cfi.c
index 3edb6e1..1702785 100644
--- a/gcc/dwarf2cfi.c
+++ b/gcc/dwarf2cfi.c
@@ -2550,7 +2550,7 @@ create_cfi_notes (void)
gcc_checking_assert (trace_work_list == NULL);
/* Always begin at the entry trace. */
- ti = VEC_index (dw_trace_info, trace_info, 0);
+ ti = &VEC_index (dw_trace_info, trace_info, 0);
scan_trace (ti);
while (!VEC_empty (dw_trace_info_ref, trace_work_list))
@@ -2597,7 +2597,7 @@ connect_traces (void)
/* Remove all unprocessed traces from the list. */
for (i = n - 1; i > 0; --i)
{
- ti = VEC_index (dw_trace_info, trace_info, i);
+ ti = &VEC_index (dw_trace_info, trace_info, i);
if (ti->beg_row == NULL)
{
VEC_ordered_remove (dw_trace_info, trace_info, i);
@@ -2609,13 +2609,13 @@ connect_traces (void)
/* Work from the end back to the beginning. This lets us easily insert
remember/restore_state notes in the correct order wrt other notes. */
- prev_ti = VEC_index (dw_trace_info, trace_info, n - 1);
+ prev_ti = &VEC_index (dw_trace_info, trace_info, n - 1);
for (i = n - 1; i > 0; --i)
{
dw_cfi_row *old_row;
ti = prev_ti;
- prev_ti = VEC_index (dw_trace_info, trace_info, i - 1);
+ prev_ti = &VEC_index (dw_trace_info, trace_info, i - 1);
add_cfi_insn = ti->head;
@@ -2686,7 +2686,7 @@ connect_traces (void)
for (i = 0; i < n; ++i)
{
- ti = VEC_index (dw_trace_info, trace_info, i);
+ ti = &VEC_index (dw_trace_info, trace_info, i);
if (ti->switch_sections)
prev_args_size = 0;
@@ -2884,8 +2884,8 @@ create_cie_data (void)
break;
case 1:
cie_return_save = ggc_alloc_reg_saved_in_data ();
- *cie_return_save = *VEC_index (reg_saved_in_data,
- cie_trace.regs_saved_in_regs, 0);
+ *cie_return_save = VEC_index (reg_saved_in_data,
+ cie_trace.regs_saved_in_regs, 0);
VEC_free (reg_saved_in_data, heap, cie_trace.regs_saved_in_regs);
break;
default:
diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c
index 4b67d82..4bc4cc3 100644
--- a/gcc/dwarf2out.c
+++ b/gcc/dwarf2out.c
@@ -5821,7 +5821,7 @@ same_die_p (dw_die_ref die1, dw_die_ref die2, int *mark)
return 0;
FOR_EACH_VEC_ELT (dw_attr_node, die1->die_attr, ix, a1)
- if (!same_attr_p (a1, VEC_index (dw_attr_node, die2->die_attr, ix), mark))
+ if (!same_attr_p (a1, &VEC_index (dw_attr_node, die2->die_attr, ix), mark))
return 0;
c1 = die1->die_child;
@@ -7072,7 +7072,7 @@ build_abbrev_table (dw_die_ref die, htab_t extern_map)
FOR_EACH_VEC_ELT (dw_attr_node, die->die_attr, ix, die_a)
{
- abbrev_a = VEC_index (dw_attr_node, abbrev->die_attr, ix);
+ abbrev_a = &VEC_index (dw_attr_node, abbrev->die_attr, ix);
if ((abbrev_a->dw_attr != die_a->dw_attr)
|| (value_format (abbrev_a) != value_format (die_a)))
{
@@ -20532,8 +20532,8 @@ optimize_macinfo_range (unsigned int idx, VEC (macinfo_entry, gc) *files,
unsigned int i, count, encoded_filename_len, linebuf_len;
void **slot;
- first = VEC_index (macinfo_entry, macinfo_table, idx);
- second = VEC_index (macinfo_entry, macinfo_table, idx + 1);
+ first = &VEC_index (macinfo_entry, macinfo_table, idx);
+ second = &VEC_index (macinfo_entry, macinfo_table, idx + 1);
/* Optimize only if there are at least two consecutive define/undef ops,
and either all of them are before first DW_MACINFO_start_file
@@ -20573,7 +20573,7 @@ optimize_macinfo_range (unsigned int idx, VEC (macinfo_entry, gc) *files,
if (VEC_empty (macinfo_entry, files))
base = "";
else
- base = lbasename (VEC_last (macinfo_entry, files)->info);
+ base = lbasename (VEC_last (macinfo_entry, files).info);
for (encoded_filename_len = 0, i = 0; base[i]; i++)
if (ISIDNUM (base[i]) || base[i] == '.')
encoded_filename_len++;
@@ -20604,7 +20604,7 @@ optimize_macinfo_range (unsigned int idx, VEC (macinfo_entry, gc) *files,
/* Construct a macinfo_entry for DW_MACRO_GNU_transparent_include
in the empty vector entry before the first define/undef. */
- inc = VEC_index (macinfo_entry, macinfo_table, idx - 1);
+ inc = &VEC_index (macinfo_entry, macinfo_table, idx - 1);
inc->code = DW_MACRO_GNU_transparent_include;
inc->lineno = 0;
inc->info = ggc_strdup (grp_name);
@@ -20697,7 +20697,7 @@ output_macinfo (void)
&& VEC_length (macinfo_entry, files) != 1
&& i > 0
&& i + 1 < length
- && VEC_index (macinfo_entry, macinfo_table, i - 1)->code == 0)
+ && VEC_index (macinfo_entry, macinfo_table, i - 1).code == 0)
{
unsigned count = optimize_macinfo_range (i, files, &macinfo_htab);
if (count)
@@ -21307,14 +21307,14 @@ static inline void
move_linkage_attr (dw_die_ref die)
{
unsigned ix = VEC_length (dw_attr_node, die->die_attr);
- dw_attr_node linkage = *VEC_index (dw_attr_node, die->die_attr, ix - 1);
+ dw_attr_node linkage = VEC_index (dw_attr_node, die->die_attr, ix - 1);
gcc_assert (linkage.dw_attr == DW_AT_linkage_name
|| linkage.dw_attr == DW_AT_MIPS_linkage_name);
while (--ix > 0)
{
- dw_attr_node *prev = VEC_index (dw_attr_node, die->die_attr, ix - 1);
+ dw_attr_node *prev = &VEC_index (dw_attr_node, die->die_attr, ix - 1);
if (prev->dw_attr == DW_AT_decl_line || prev->dw_attr == DW_AT_name)
break;
@@ -22226,8 +22226,8 @@ dwarf2out_finish (const char *filename)
for (i = 0; i < VEC_length (deferred_locations, deferred_locations_list); i++)
{
add_location_or_const_value_attribute (
- VEC_index (deferred_locations, deferred_locations_list, i)->die,
- VEC_index (deferred_locations, deferred_locations_list, i)->variable,
+ VEC_index (deferred_locations, deferred_locations_list, i).die,
+ VEC_index (deferred_locations, deferred_locations_list, i).variable,
false,
DW_AT_location);
}
diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index ebd49b3..210cc14 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -6091,7 +6091,7 @@ locator_location (int loc)
break;
}
}
- return *VEC_index (location_t, locations_locators_vals, min);
+ return VEC_index (location_t, locations_locators_vals, min);
}
/* Return source line of the statement that produced this insn. */
diff --git a/gcc/except.c b/gcc/except.c
index 605d8d7..0174512 100644
--- a/gcc/except.c
+++ b/gcc/except.c
@@ -304,8 +304,8 @@ init_eh_for_function (void)
cfun->eh = ggc_alloc_cleared_eh_status ();
/* Make sure zero'th entries are used. */
- VEC_safe_push (eh_region, gc, cfun->eh->region_array, NULL);
- VEC_safe_push (eh_landing_pad, gc, cfun->eh->lp_array, NULL);
+ VEC_safe_push (eh_region, gc, cfun->eh->region_array, (eh_region) NULL);
+ VEC_safe_push (eh_landing_pad, gc, cfun->eh->lp_array, (eh_landing_pad) NULL);
}
/* Routines to generate the exception tree somewhat directly.
@@ -806,7 +806,7 @@ add_ehspec_entry (htab_t ehspec_hash, htab_t ttypes_hash, tree list)
if (targetm.arm_eabi_unwinder)
VEC_safe_push (tree, gc, cfun->eh->ehspec_data.arm_eabi, NULL_TREE);
else
- VEC_safe_push (uchar, gc, cfun->eh->ehspec_data.other, 0);
+ VEC_safe_push (uchar, gc, cfun->eh->ehspec_data.other, (uchar) 0);
}
return n->filter;
@@ -2395,10 +2395,10 @@ add_call_site (rtx landing_pad, int action, int section)
record->action = action;
VEC_safe_push (call_site_record, gc,
- crtl->eh.call_site_record[section], record);
+ crtl->eh.call_site_record_v[section], record);
return call_site_base + VEC_length (call_site_record,
- crtl->eh.call_site_record[section]) - 1;
+ crtl->eh.call_site_record_v[section]) - 1;
}
/* Turn REG_EH_REGION notes back into NOTE_INSN_EH_REGION notes.
@@ -2546,10 +2546,10 @@ convert_to_eh_region_ranges (void)
else if (last_action != -3)
last_landing_pad = pc_rtx;
call_site_base += VEC_length (call_site_record,
- crtl->eh.call_site_record[cur_sec]);
+ crtl->eh.call_site_record_v[cur_sec]);
cur_sec++;
- gcc_assert (crtl->eh.call_site_record[cur_sec] == NULL);
- crtl->eh.call_site_record[cur_sec]
+ gcc_assert (crtl->eh.call_site_record_v[cur_sec] == NULL);
+ crtl->eh.call_site_record_v[cur_sec]
= VEC_alloc (call_site_record, gc, 10);
}
@@ -2633,14 +2633,14 @@ push_sleb128 (VEC (uchar, gc) **data_area, int value)
static int
dw2_size_of_call_site_table (int section)
{
- int n = VEC_length (call_site_record, crtl->eh.call_site_record[section]);
+ int n = VEC_length (call_site_record, crtl->eh.call_site_record_v[section]);
int size = n * (4 + 4 + 4);
int i;
for (i = 0; i < n; ++i)
{
struct call_site_record_d *cs =
- VEC_index (call_site_record, crtl->eh.call_site_record[section], i);
+ VEC_index (call_site_record, crtl->eh.call_site_record_v[section], i);
size += size_of_uleb128 (cs->action);
}
@@ -2650,14 +2650,14 @@ dw2_size_of_call_site_table (int section)
static int
sjlj_size_of_call_site_table (void)
{
- int n = VEC_length (call_site_record, crtl->eh.call_site_record[0]);
+ int n = VEC_length (call_site_record, crtl->eh.call_site_record_v[0]);
int size = 0;
int i;
for (i = 0; i < n; ++i)
{
struct call_site_record_d *cs =
- VEC_index (call_site_record, crtl->eh.call_site_record[0], i);
+ VEC_index (call_site_record, crtl->eh.call_site_record_v[0], i);
size += size_of_uleb128 (INTVAL (cs->landing_pad));
size += size_of_uleb128 (cs->action);
}
@@ -2669,7 +2669,7 @@ sjlj_size_of_call_site_table (void)
static void
dw2_output_call_site_table (int cs_format, int section)
{
- int n = VEC_length (call_site_record, crtl->eh.call_site_record[section]);
+ int n = VEC_length (call_site_record, crtl->eh.call_site_record_v[section]);
int i;
const char *begin;
@@ -2683,7 +2683,7 @@ dw2_output_call_site_table (int cs_format, int section)
for (i = 0; i < n; ++i)
{
struct call_site_record_d *cs =
- VEC_index (call_site_record, crtl->eh.call_site_record[section], i);
+ VEC_index (call_site_record, crtl->eh.call_site_record_v[section], i);
char reg_start_lab[32];
char reg_end_lab[32];
char landing_pad_lab[32];
@@ -2731,13 +2731,13 @@ dw2_output_call_site_table (int cs_format, int section)
static void
sjlj_output_call_site_table (void)
{
- int n = VEC_length (call_site_record, crtl->eh.call_site_record[0]);
+ int n = VEC_length (call_site_record, crtl->eh.call_site_record_v[0]);
int i;
for (i = 0; i < n; ++i)
{
struct call_site_record_d *cs =
- VEC_index (call_site_record, crtl->eh.call_site_record[0], i);
+ VEC_index (call_site_record, crtl->eh.call_site_record_v[0], i);
dw2_asm_output_data_uleb128 (INTVAL (cs->landing_pad),
"region %d landing pad", i);
@@ -3051,7 +3051,7 @@ output_function_exception_table (const char *fnname)
targetm.asm_out.emit_except_table_label (asm_out_file);
output_one_function_exception_table (0);
- if (crtl->eh.call_site_record[1] != NULL)
+ if (crtl->eh.call_site_record_v[1] != NULL)
output_one_function_exception_table (1);
switch_to_section (current_function_section ());
diff --git a/gcc/fold-const.c b/gcc/fold-const.c
index 5e14125..3bfd203 100644
--- a/gcc/fold-const.c
+++ b/gcc/fold-const.c
@@ -14285,7 +14285,7 @@ fold (tree expr)
while (begin != end)
{
unsigned HOST_WIDE_INT middle = (begin + end) / 2;
- tree index = VEC_index (constructor_elt, elts, middle)->index;
+ tree index = VEC_index (constructor_elt, elts, middle).index;
if (TREE_CODE (index) == INTEGER_CST
&& tree_int_cst_lt (index, op1))
@@ -14300,7 +14300,7 @@ fold (tree expr)
&& tree_int_cst_lt (op1, TREE_OPERAND (index, 0)))
end = middle;
else
- return VEC_index (constructor_elt, elts, middle)->value;
+ return VEC_index (constructor_elt, elts, middle).value;
}
}
diff --git a/gcc/function.h b/gcc/function.h
index 3d3313f..684bbce 100644
--- a/gcc/function.h
+++ b/gcc/function.h
@@ -157,7 +157,7 @@ struct GTY(()) rtl_eh {
VEC(uchar,gc) *action_record_data;
- VEC(call_site_record,gc) *call_site_record[2];
+ VEC(call_site_record,gc) *call_site_record_v[2];
};
#define pending_stack_adjust (crtl->expr.x_pending_stack_adjust)
diff --git a/gcc/fwprop.c b/gcc/fwprop.c
index 65087ad..d1cba88 100644
--- a/gcc/fwprop.c
+++ b/gcc/fwprop.c
@@ -223,7 +223,7 @@ single_def_use_enter_block (struct dom_walk_data *walk_data ATTRIBUTE_UNUSED,
bitmap_copy (local_lr, &lr_bb_info->in);
/* Push a marker for the leave_block callback. */
- VEC_safe_push (df_ref, heap, reg_defs_stack, NULL);
+ VEC_safe_push (df_ref, heap, reg_defs_stack, (df_ref) NULL);
process_uses (df_get_artificial_uses (bb_index), DF_REF_AT_TOP);
process_defs (df_get_artificial_defs (bb_index), DF_REF_AT_TOP);
diff --git a/gcc/gcc.c b/gcc/gcc.c
index bda354a..815747e 100644
--- a/gcc/gcc.c
+++ b/gcc/gcc.c
@@ -2520,7 +2520,7 @@ execute (void)
and record info about each one.
Also search for the programs that are to be run. */
- VEC_safe_push (const_char_p, heap, argbuf, 0);
+ VEC_safe_push (const_char_p, heap, argbuf, (const_char_p)0);
commands[0].prog = VEC_index (const_char_p, argbuf, 0); /* first command. */
commands[0].argv = VEC_address (const_char_p, argbuf);
diff --git a/gcc/genautomata.c b/gcc/genautomata.c
index 9f9e066..122a4a4 100644
--- a/gcc/genautomata.c
+++ b/gcc/genautomata.c
@@ -5076,7 +5076,8 @@ store_alt_unit_usage (regexp_t regexp, regexp_t unit, int cycle,
length = (cycle + 1) * REGEXP_ONEOF (regexp)->regexps_num;
while (VEC_length (unit_usage_t, cycle_alt_unit_usages) < length)
- VEC_safe_push (unit_usage_t, heap, cycle_alt_unit_usages, 0);
+ VEC_safe_push (unit_usage_t, heap, cycle_alt_unit_usages,
+ (unit_usage_t) NULL);
index = cycle * REGEXP_ONEOF (regexp)->regexps_num + alt_num;
prev = NULL;
@@ -7673,7 +7674,8 @@ output_min_issue_delay_table (automaton_t automaton)
if (VEC_index (vect_el_t, min_issue_delay_vect, asn))
{
- VEC_replace (vect_el_t, min_issue_delay_vect, asn, 0);
+ VEC_replace (vect_el_t, min_issue_delay_vect, asn,
+ (vect_el_t) 0);
changed = 1;
}
@@ -7723,7 +7725,8 @@ output_min_issue_delay_table (automaton_t automaton)
if (automaton->max_min_delay < x)
automaton->max_min_delay = x;
if (x == -1)
- VEC_replace (vect_el_t, min_issue_delay_vect, np, 0);
+ VEC_replace (vect_el_t, min_issue_delay_vect, np,
+ (vect_el_t) 0);
}
}
@@ -7749,7 +7752,8 @@ output_min_issue_delay_table (automaton_t automaton)
= VEC_alloc (vect_el_t, heap, compressed_min_issue_delay_len);
for (i = 0; i < compressed_min_issue_delay_len; i++)
- VEC_quick_push (vect_el_t, compressed_min_issue_delay_vect, 0);
+ VEC_quick_push (vect_el_t, compressed_min_issue_delay_vect,
+ (vect_el_t) 0);
for (i = 0; i < min_issue_delay_len; i++)
{
@@ -7798,7 +7802,8 @@ output_dead_lock_vect (automaton_t automaton)
automaton->locked_states++;
}
else
- VEC_replace (vect_el_t, dead_lock_vect, s->order_state_num, 0);
+ VEC_replace (vect_el_t, dead_lock_vect, s->order_state_num,
+ (vect_el_t) 0);
}
if (automaton->locked_states == 0)
return;
@@ -7840,7 +7845,7 @@ output_reserved_units_table (automaton_t automaton)
reserved_units_table = VEC_alloc (vect_el_t, heap, reserved_units_size);
for (i = 0; i < reserved_units_size; i++)
- VEC_quick_push (vect_el_t, reserved_units_table, 0);
+ VEC_quick_push (vect_el_t, reserved_units_table, (vect_el_t) 0);
for (n = 0; n < VEC_length (state_t, output_states_vect); n++)
{
state_t s = VEC_index (state_t, output_states_vect, n);
diff --git a/gcc/genextract.c b/gcc/genextract.c
index 09e7cde..175febe 100644
--- a/gcc/genextract.c
+++ b/gcc/genextract.c
@@ -201,7 +201,7 @@ VEC_safe_set_locstr (VEC(locstr,heap) **vp, unsigned int ix, char *str)
else
{
while (ix > VEC_length (locstr, *vp))
- VEC_safe_push (locstr, heap, *vp, 0);
+ VEC_safe_push (locstr, heap, *vp, (locstr) NULL);
VEC_safe_push (locstr, heap, *vp, str);
}
}
diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
index f17d8e7..7a51e8c 100644
--- a/gcc/gimple-low.c
+++ b/gcc/gimple-low.c
@@ -121,7 +121,7 @@ lower_function_body (void)
if (gimple_seq_may_fallthru (lowered_body)
&& (VEC_empty (return_statements_t, data.return_statements)
|| gimple_return_retval (VEC_last (return_statements_t,
- data.return_statements)->stmt) != NULL))
+ data.return_statements).stmt) != NULL))
{
x = gimple_build_return (NULL);
gimple_set_location (x, cfun->function_end_locus);
@@ -137,7 +137,7 @@ lower_function_body (void)
/* Unfortunately, we can't use VEC_pop because it returns void for
objects. */
- t = *VEC_last (return_statements_t, data.return_statements);
+ t = VEC_last (return_statements_t, data.return_statements);
VEC_truncate (return_statements_t,
data.return_statements,
VEC_length (return_statements_t,
@@ -835,7 +835,7 @@ lower_gimple_return (gimple_stmt_iterator *gsi, struct lower_data *data)
for (i = VEC_length (return_statements_t, data->return_statements) - 1;
i >= 0; i--)
{
- tmp_rs = *VEC_index (return_statements_t, data->return_statements, i);
+ tmp_rs = VEC_index (return_statements_t, data->return_statements, i);
if (gimple_return_retval (stmt) == gimple_return_retval (tmp_rs.stmt))
{
diff --git a/gcc/gimplify.c b/gcc/gimplify.c
index 03f7c9e..13cd535 100644
--- a/gcc/gimplify.c
+++ b/gcc/gimplify.c
@@ -2119,7 +2119,7 @@ gimplify_compound_lval (tree *expr_p, gimple_seq *pre_p, gimple_seq *post_p,
fallback_t fallback)
{
tree *p;
- VEC(tree,heap) *stack;
+ VEC(tree,heap) *expr_stack;
enum gimplify_status ret = GS_ALL_DONE, tret;
int i;
location_t loc = EXPR_LOCATION (*expr_p);
@@ -2127,7 +2127,7 @@ gimplify_compound_lval (tree *expr_p, gimple_seq *pre_p, gimple_seq *post_p,
/* Create a stack of the subexpressions so later we can walk them in
order from inner to outer. */
- stack = VEC_alloc (tree, heap, 10);
+ expr_stack = VEC_alloc (tree, heap, 10);
/* We can handle anything that get_inner_reference can deal with. */
for (p = expr_p; ; p = &TREE_OPERAND (*p, 0))
@@ -2147,13 +2147,13 @@ gimplify_compound_lval (tree *expr_p, gimple_seq *pre_p, gimple_seq *post_p,
else
break;
- VEC_safe_push (tree, heap, stack, *p);
+ VEC_safe_push (tree, heap, expr_stack, *p);
}
- gcc_assert (VEC_length (tree, stack));
+ gcc_assert (VEC_length (tree, expr_stack));
- /* Now STACK is a stack of pointers to all the refs we've walked through
- and P points to the innermost expression.
+ /* Now EXPR_STACK is a stack of pointers to all the refs we've
+ walked through and P points to the innermost expression.
Java requires that we elaborated nodes in source order. That
means we must gimplify the inner expression followed by each of
@@ -2164,9 +2164,9 @@ gimplify_compound_lval (tree *expr_p, gimple_seq *pre_p, gimple_seq *post_p,
So we do this in three steps. First we deal with the annotations
for any variables in the components, then we gimplify the base,
then we gimplify any indices, from left to right. */
- for (i = VEC_length (tree, stack) - 1; i >= 0; i--)
+ for (i = VEC_length (tree, expr_stack) - 1; i >= 0; i--)
{
- tree t = VEC_index (tree, stack, i);
+ tree t = VEC_index (tree, expr_stack, i);
if (TREE_CODE (t) == ARRAY_REF || TREE_CODE (t) == ARRAY_RANGE_REF)
{
@@ -2259,9 +2259,9 @@ gimplify_compound_lval (tree *expr_p, gimple_seq *pre_p, gimple_seq *post_p,
/* And finally, the indices and operands of ARRAY_REF. During this
loop we also remove any useless conversions. */
- for (; VEC_length (tree, stack) > 0; )
+ for (; VEC_length (tree, expr_stack) > 0; )
{
- tree t = VEC_pop (tree, stack);
+ tree t = VEC_pop (tree, expr_stack);
if (TREE_CODE (t) == ARRAY_REF || TREE_CODE (t) == ARRAY_RANGE_REF)
{
@@ -2289,7 +2289,7 @@ gimplify_compound_lval (tree *expr_p, gimple_seq *pre_p, gimple_seq *post_p,
canonicalize_component_ref (expr_p);
}
- VEC_free (tree, heap, stack);
+ VEC_free (tree, heap, expr_stack);
gcc_assert (*expr_p == expr || ret != GS_ALL_DONE);
@@ -3848,7 +3848,7 @@ optimize_compound_literals_in_ctor (tree orig_ctor)
for (idx = 0; idx < num; idx++)
{
- tree value = VEC_index (constructor_elt, elts, idx)->value;
+ tree value = VEC_index (constructor_elt, elts, idx).value;
tree newval = value;
if (TREE_CODE (value) == CONSTRUCTOR)
newval = optimize_compound_literals_in_ctor (value);
@@ -3872,7 +3872,7 @@ optimize_compound_literals_in_ctor (tree orig_ctor)
CONSTRUCTOR_ELTS (ctor) = VEC_copy (constructor_elt, gc, elts);
elts = CONSTRUCTOR_ELTS (ctor);
}
- VEC_index (constructor_elt, elts, idx)->value = newval;
+ VEC_index (constructor_elt, elts, idx).value = newval;
}
return ctor;
}
@@ -4123,8 +4123,8 @@ gimplify_init_constructor (tree *expr_p, gimple_seq *pre_p, gimple_seq *post_p,
/* Extract the real and imaginary parts out of the ctor. */
gcc_assert (VEC_length (constructor_elt, elts) == 2);
- r = VEC_index (constructor_elt, elts, 0)->value;
- i = VEC_index (constructor_elt, elts, 1)->value;
+ r = VEC_index (constructor_elt, elts, 0).value;
+ i = VEC_index (constructor_elt, elts, 1).value;
if (r == NULL || i == NULL)
{
tree zero = build_zero_cst (TREE_TYPE (type));
diff --git a/gcc/graphite-sese-to-poly.c b/gcc/graphite-sese-to-poly.c
index c168622..a92f92c 100644
--- a/gcc/graphite-sese-to-poly.c
+++ b/gcc/graphite-sese-to-poly.c
@@ -1249,7 +1249,7 @@ build_sese_conditions_before (struct dom_walk_data *dw_data,
if (e->flags & EDGE_TRUE_VALUE)
VEC_safe_push (gimple, heap, *cases, stmt);
else
- VEC_safe_push (gimple, heap, *cases, NULL);
+ VEC_safe_push (gimple, heap, *cases, (gimple) NULL);
}
gbb = gbb_from_bb (bb);
diff --git a/gcc/ipa-inline-analysis.c b/gcc/ipa-inline-analysis.c
index 41d556a..53fedd2 100644
--- a/gcc/ipa-inline-analysis.c
+++ b/gcc/ipa-inline-analysis.c
@@ -295,9 +295,9 @@ add_clause (conditions conditions, struct predicate *p, clause_t clause)
condition *cc1;
if (!(clause & (1 << c1)))
continue;
- cc1 = VEC_index (condition,
- conditions,
- c1 - predicate_first_dynamic_condition);
+ cc1 = &VEC_index (condition,
+ conditions,
+ c1 - predicate_first_dynamic_condition);
/* We have no way to represent !CHANGED and !IS_NOT_CONSTANT
and thus there is no point for looking for them. */
if (cc1->code == CHANGED
@@ -306,12 +306,12 @@ add_clause (conditions conditions, struct predicate *p, clause_t clause)
for (c2 = c1 + 1; c2 <= NUM_CONDITIONS; c2++)
if (clause & (1 << c2))
{
- condition *cc1 = VEC_index (condition,
- conditions,
- c1 - predicate_first_dynamic_condition);
- condition *cc2 = VEC_index (condition,
- conditions,
- c2 - predicate_first_dynamic_condition);
+ condition *cc1 = &VEC_index (condition,
+ conditions,
+ c1 - predicate_first_dynamic_condition);
+ condition *cc2 = &VEC_index (condition,
+ conditions,
+ c2 - predicate_first_dynamic_condition);
if (cc1->operand_num == cc2->operand_num
&& cc1->val == cc2->val
&& cc2->code != IS_NOT_CONSTANT
@@ -476,7 +476,7 @@ predicate_probability (conditions conds,
{
if (i2 >= predicate_first_dynamic_condition)
{
- condition *c = VEC_index
+ condition *c = &VEC_index
(condition, conds,
i2 - predicate_first_dynamic_condition);
if (c->code == CHANGED
@@ -486,7 +486,7 @@ predicate_probability (conditions conds,
{
int iprob = VEC_index (inline_param_summary_t,
inline_param_summary,
- c->operand_num)->change_prob;
+ c->operand_num).change_prob;
this_prob = MAX (this_prob, iprob);
}
else
@@ -516,8 +516,8 @@ dump_condition (FILE *f, conditions conditions, int cond)
fprintf (f, "not inlined");
else
{
- c = VEC_index (condition, conditions,
- cond - predicate_first_dynamic_condition);
+ c = &VEC_index (condition, conditions,
+ cond - predicate_first_dynamic_condition);
fprintf (f, "op%i", c->operand_num);
if (c->code == IS_NOT_CONSTANT)
{
@@ -609,7 +609,7 @@ account_size_time (struct inline_summary *summary, int size, int time,
{
i = 0;
found = true;
- e = VEC_index (size_time_entry, summary->entry, 0);
+ e = &VEC_index (size_time_entry, summary->entry, 0);
gcc_assert (!e->predicate.clause[0]);
}
if (dump_file && (dump_flags & TDF_DETAILS) && (time || size))
@@ -759,7 +759,7 @@ evaluate_properties_for_edge (struct cgraph_edge *e, bool inline_p,
else if (inline_p
&& !VEC_index (inline_param_summary_t,
es->param,
- i)->change_prob)
+ i).change_prob)
VEC_replace (tree, known_vals, i, error_mark_node);
}
}
@@ -1134,7 +1134,7 @@ dump_inline_edge_summary (FILE * f, int indent, struct cgraph_node *node,
i++)
{
int prob = VEC_index (inline_param_summary_t,
- es->param, i)->change_prob;
+ es->param, i).change_prob;
if (!prob)
fprintf (f, "%*s op%i is compile time invariant\n",
@@ -1711,8 +1711,8 @@ will_be_nonconstant_predicate (struct ipa_node_params *info,
return p;
/* If we know when operand is constant,
we still can say something useful. */
- if (!true_predicate_p (VEC_index (predicate_t, nonconstant_names,
- SSA_NAME_VERSION (use))))
+ if (!true_predicate_p (&VEC_index (predicate_t, nonconstant_names,
+ SSA_NAME_VERSION (use))))
continue;
return p;
}
@@ -1734,14 +1734,14 @@ will_be_nonconstant_predicate (struct ipa_node_params *info,
ipa_get_param_decl_index (info, parm),
CHANGED, NULL);
else
- p = *VEC_index (predicate_t, nonconstant_names,
- SSA_NAME_VERSION (use));
+ p = VEC_index (predicate_t, nonconstant_names,
+ SSA_NAME_VERSION (use));
op_non_const = or_predicates (summary->conds, &p, &op_non_const);
}
if (gimple_code (stmt) == GIMPLE_ASSIGN
&& TREE_CODE (gimple_assign_lhs (stmt)) == SSA_NAME)
VEC_replace (predicate_t, nonconstant_names,
- SSA_NAME_VERSION (gimple_assign_lhs (stmt)), &op_non_const);
+ SSA_NAME_VERSION (gimple_assign_lhs (stmt)), op_non_const);
return op_non_const;
}
@@ -1956,7 +1956,7 @@ estimate_function_body_sizes (struct cgraph_node *node, bool early)
struct predicate false_p = false_predicate ();
VEC_replace (predicate_t, nonconstant_names,
SSA_NAME_VERSION (gimple_call_lhs (stmt)),
- &false_p);
+ false_p);
}
if (ipa_node_params_vector)
{
@@ -1971,7 +1971,7 @@ estimate_function_body_sizes (struct cgraph_node *node, bool early)
int prob = param_change_prob (stmt, i);
gcc_assert (prob >= 0 && prob <= REG_BR_PROB_BASE);
VEC_index (inline_param_summary_t,
- es->param, i)->change_prob = prob;
+ es->param, i).change_prob = prob;
}
}
@@ -2430,8 +2430,8 @@ remap_predicate (struct inline_summary *info,
{
struct condition *c;
- c = VEC_index (condition, callee_info->conds,
- cond - predicate_first_dynamic_condition);
+ c = &VEC_index (condition, callee_info->conds,
+ cond - predicate_first_dynamic_condition);
/* See if we can remap condition operand to caller's operand.
Otherwise give up. */
if (!operand_map
@@ -2519,10 +2519,10 @@ remap_edge_change_prob (struct cgraph_edge *inlined_edge,
{
int jf_formal_id = ipa_get_jf_pass_through_formal_id (jfunc);
int prob1 = VEC_index (inline_param_summary_t,
- es->param, i)->change_prob;
+ es->param, i).change_prob;
int prob2 = VEC_index
(inline_param_summary_t,
- inlined_es->param, jf_formal_id)->change_prob;
+ inlined_es->param, jf_formal_id).change_prob;
int prob = ((prob1 * prob2 + REG_BR_PROB_BASE / 2)
/ REG_BR_PROB_BASE);
@@ -2530,7 +2530,7 @@ remap_edge_change_prob (struct cgraph_edge *inlined_edge,
prob = 1;
VEC_index (inline_param_summary_t,
- es->param, i)->change_prob = prob;
+ es->param, i).change_prob = prob;
}
}
}
@@ -2753,12 +2753,12 @@ do_estimate_edge_time (struct cgraph_edge *edge)
<= edge->uid)
VEC_safe_grow_cleared (edge_growth_cache_entry, heap, edge_growth_cache,
cgraph_edge_max_uid);
- VEC_index (edge_growth_cache_entry, edge_growth_cache, edge->uid)->time
+ VEC_index (edge_growth_cache_entry, edge_growth_cache, edge->uid).time
= ret + (ret >= 0);
ret_size = size - es->call_stmt_size;
gcc_checking_assert (es->call_stmt_size);
- VEC_index (edge_growth_cache_entry, edge_growth_cache, edge->uid)->size
+ VEC_index (edge_growth_cache_entry, edge_growth_cache, edge->uid).size
= ret_size + (ret_size >= 0);
}
return ret;
@@ -2784,7 +2784,7 @@ do_estimate_edge_growth (struct cgraph_edge *edge)
do_estimate_edge_time (edge);
size = VEC_index (edge_growth_cache_entry,
edge_growth_cache,
- edge->uid)->size;
+ edge->uid).size;
gcc_checking_assert (size);
return size - (size > 0);
}
@@ -3019,7 +3019,7 @@ read_inline_edge_summary (struct lto_input_block *ib, struct cgraph_edge *e)
{
VEC_safe_grow_cleared (inline_param_summary_t, heap, es->param, length);
for (i = 0; i < length; i++)
- VEC_index (inline_param_summary_t, es->param, i)->change_prob
+ VEC_index (inline_param_summary_t, es->param, i).change_prob
= streamer_read_uhwi (ib);
}
}
@@ -3173,7 +3173,7 @@ write_inline_edge_summary (struct output_block *ob, struct cgraph_edge *e)
streamer_write_uhwi (ob, VEC_length (inline_param_summary_t, es->param));
for (i = 0; i < (int)VEC_length (inline_param_summary_t, es->param); i++)
streamer_write_uhwi (ob, VEC_index (inline_param_summary_t,
- es->param, i)->change_prob);
+ es->param, i).change_prob);
}
diff --git a/gcc/ipa-inline.c b/gcc/ipa-inline.c
index d8b66e6..c43ce25 100644
--- a/gcc/ipa-inline.c
+++ b/gcc/ipa-inline.c
@@ -1287,7 +1287,7 @@ inline_small_functions (void)
{
struct cgraph_node *node;
struct cgraph_edge *edge;
- fibheap_t heap = fibheap_new ();
+ fibheap_t edge_heap = fibheap_new ();
bitmap updated_nodes = BITMAP_ALLOC (NULL);
int min_size, max_size;
VEC (cgraph_edge_p, heap) *new_indirect_edges = NULL;
@@ -1344,7 +1344,7 @@ inline_small_functions (void)
&& edge->inline_failed)
{
gcc_assert (!edge->aux);
- update_edge_key (heap, edge);
+ update_edge_key (edge_heap, edge);
}
}
@@ -1352,16 +1352,16 @@ inline_small_functions (void)
|| !max_count
|| (profile_info && flag_branch_probabilities));
- while (!fibheap_empty (heap))
+ while (!fibheap_empty (edge_heap))
{
int old_size = overall_size;
struct cgraph_node *where, *callee;
- int badness = fibheap_min_key (heap);
+ int badness = fibheap_min_key (edge_heap);
int current_badness;
int cached_badness;
int growth;
- edge = (struct cgraph_edge *) fibheap_extract_min (heap);
+ edge = (struct cgraph_edge *) fibheap_extract_min (edge_heap);
gcc_assert (edge->aux);
edge->aux = NULL;
if (!edge->inline_failed)
@@ -1382,7 +1382,7 @@ inline_small_functions (void)
gcc_assert (current_badness >= badness);
if (current_badness != badness)
{
- edge->aux = fibheap_insert (heap, current_badness, edge);
+ edge->aux = fibheap_insert (edge_heap, current_badness, edge);
continue;
}
@@ -1447,8 +1447,8 @@ inline_small_functions (void)
/* Recursive inliner inlines all recursive calls of the function
at once. Consequently we need to update all callee keys. */
if (flag_indirect_inlining)
- add_new_edges_to_heap (heap, new_indirect_edges);
- update_callee_keys (heap, where, updated_nodes);
+ add_new_edges_to_heap (edge_heap, new_indirect_edges);
+ update_callee_keys (edge_heap, where, updated_nodes);
}
else
{
@@ -1482,12 +1482,12 @@ inline_small_functions (void)
gcc_checking_assert (!callee->global.inlined_to);
inline_call (edge, true, &new_indirect_edges, &overall_size, true);
if (flag_indirect_inlining)
- add_new_edges_to_heap (heap, new_indirect_edges);
+ add_new_edges_to_heap (edge_heap, new_indirect_edges);
reset_edge_caches (edge->callee);
reset_node_growth_cache (callee);
- update_callee_keys (heap, edge->callee, updated_nodes);
+ update_callee_keys (edge_heap, edge->callee, updated_nodes);
}
where = edge->caller;
if (where->global.inlined_to)
@@ -1499,7 +1499,7 @@ inline_small_functions (void)
inlined into (since it's body size changed) and for the functions
called by function we inlined (since number of it inlinable callers
might change). */
- update_caller_keys (heap, where, updated_nodes, NULL);
+ update_caller_keys (edge_heap, where, updated_nodes, NULL);
bitmap_clear (updated_nodes);
if (dump_file)
@@ -1525,7 +1525,7 @@ inline_small_functions (void)
free_growth_caches ();
if (new_indirect_edges)
VEC_free (cgraph_edge_p, heap, new_indirect_edges);
- fibheap_delete (heap);
+ fibheap_delete (edge_heap);
if (dump_file)
fprintf (dump_file,
"Unit growth for small function inlining: %i->%i (%i%%)\n",
diff --git a/gcc/ipa-inline.h b/gcc/ipa-inline.h
index fbd0b99..2d0004b 100644
--- a/gcc/ipa-inline.h
+++ b/gcc/ipa-inline.h
@@ -191,13 +191,13 @@ extern int nfunctions_inlined;
static inline struct inline_summary *
inline_summary (struct cgraph_node *node)
{
- return VEC_index (inline_summary_t, inline_summary_vec, node->uid);
+ return &VEC_index (inline_summary_t, inline_summary_vec, node->uid);
}
static inline struct inline_edge_summary *
inline_edge_summary (struct cgraph_edge *edge)
{
- return VEC_index (inline_edge_summary_t,
+ return &VEC_index (inline_edge_summary_t,
inline_edge_summary_vec, edge->uid);
}
@@ -226,7 +226,7 @@ estimate_edge_growth (struct cgraph_edge *edge)
if ((int)VEC_length (edge_growth_cache_entry, edge_growth_cache) <= edge->uid
|| !(ret = VEC_index (edge_growth_cache_entry,
edge_growth_cache,
- edge->uid)->size))
+ edge->uid).size))
return do_estimate_edge_growth (edge);
return ret - (ret > 0);
}
@@ -242,7 +242,7 @@ estimate_edge_time (struct cgraph_edge *edge)
if ((int)VEC_length (edge_growth_cache_entry, edge_growth_cache) <= edge->uid
|| !(ret = VEC_index (edge_growth_cache_entry,
edge_growth_cache,
- edge->uid)->time))
+ edge->uid).time))
return do_estimate_edge_time (edge);
return ret - (ret > 0);
}
@@ -265,6 +265,6 @@ reset_edge_growth_cache (struct cgraph_edge *edge)
if ((int)VEC_length (edge_growth_cache_entry, edge_growth_cache) > edge->uid)
{
struct edge_growth_cache_entry zero = {0, 0};
- VEC_replace (edge_growth_cache_entry, edge_growth_cache, edge->uid, &zero);
+ VEC_replace (edge_growth_cache_entry, edge_growth_cache, edge->uid, zero);
}
}
diff --git a/gcc/ipa-prop.c b/gcc/ipa-prop.c
index 7f90984..b964272 100644
--- a/gcc/ipa-prop.c
+++ b/gcc/ipa-prop.c
@@ -94,7 +94,7 @@ ipa_populate_param_decls (struct cgraph_node *node,
for (parm = fnargs; parm; parm = DECL_CHAIN (parm))
{
VEC_index (ipa_param_descriptor_t,
- info->descriptors, param_num)->decl = parm;
+ info->descriptors, param_num).decl = parm;
param_num++;
}
}
@@ -2439,10 +2439,10 @@ ipa_edge_duplication_hook (struct cgraph_edge *src, struct cgraph_edge *dst,
old_args->jump_functions);
for (i = 0; i < VEC_length (ipa_jump_func_t, old_args->jump_functions); i++)
- VEC_index (ipa_jump_func_t, new_args->jump_functions, i)->agg.items
+ VEC_index (ipa_jump_func_t, new_args->jump_functions, i).agg.items
= VEC_copy (ipa_agg_jf_item_t, gc,
VEC_index (ipa_jump_func_t,
- old_args->jump_functions, i)->agg.items);
+ old_args->jump_functions, i).agg.items);
}
/* Hook that is called by cgraph.c when a node is duplicated. */
@@ -2672,7 +2672,7 @@ ipa_modify_formal_parameters (tree fndecl, ipa_parm_adjustment_vec adjustments,
struct ipa_parm_adjustment *adj;
gcc_assert (link);
- adj = VEC_index (ipa_parm_adjustment_t, adjustments, i);
+ adj = &VEC_index (ipa_parm_adjustment_t, adjustments, i);
parm = VEC_index (tree, oparms, adj->base_index);
adj->base = parm;
@@ -2738,8 +2738,8 @@ ipa_modify_formal_parameters (tree fndecl, ipa_parm_adjustment_vec adjustments,
When we are asked to remove it, we need to build new FUNCTION_TYPE
instead. */
if (TREE_CODE (orig_type) != METHOD_TYPE
- || (VEC_index (ipa_parm_adjustment_t, adjustments, 0)->copy_param
- && VEC_index (ipa_parm_adjustment_t, adjustments, 0)->base_index == 0))
+ || (VEC_index (ipa_parm_adjustment_t, adjustments, 0).copy_param
+ && VEC_index (ipa_parm_adjustment_t, adjustments, 0).base_index == 0))
{
new_type = build_distinct_type_copy (orig_type);
TYPE_ARG_TYPES (new_type) = new_reversed;
@@ -2806,7 +2806,7 @@ ipa_modify_call_arguments (struct cgraph_edge *cs, gimple stmt,
{
struct ipa_parm_adjustment *adj;
- adj = VEC_index (ipa_parm_adjustment_t, adjustments, i);
+ adj = &VEC_index (ipa_parm_adjustment_t, adjustments, i);
if (adj->copy_param)
{
@@ -2989,7 +2989,7 @@ index_in_adjustments_multiple_times_p (int base_index,
for (i = 0; i < len; i++)
{
struct ipa_parm_adjustment *adj;
- adj = VEC_index (ipa_parm_adjustment_t, adjustments, i);
+ adj = &VEC_index (ipa_parm_adjustment_t, adjustments, i);
if (adj->base_index == base_index)
{
@@ -3020,7 +3020,7 @@ ipa_combine_adjustments (ipa_parm_adjustment_vec inner,
for (i = 0; i < inlen; i++)
{
struct ipa_parm_adjustment *n;
- n = VEC_index (ipa_parm_adjustment_t, inner, i);
+ n = &VEC_index (ipa_parm_adjustment_t, inner, i);
if (n->remove_param)
removals++;
@@ -3032,10 +3032,10 @@ ipa_combine_adjustments (ipa_parm_adjustment_vec inner,
for (i = 0; i < outlen; i++)
{
struct ipa_parm_adjustment *r;
- struct ipa_parm_adjustment *out = VEC_index (ipa_parm_adjustment_t,
- outer, i);
- struct ipa_parm_adjustment *in = VEC_index (ipa_parm_adjustment_t, tmp,
- out->base_index);
+ struct ipa_parm_adjustment *out = &VEC_index (ipa_parm_adjustment_t,
+ outer, i);
+ struct ipa_parm_adjustment *in = &VEC_index (ipa_parm_adjustment_t, tmp,
+ out->base_index);
gcc_assert (!in->remove_param);
if (out->remove_param)
@@ -3068,8 +3068,8 @@ ipa_combine_adjustments (ipa_parm_adjustment_vec inner,
for (i = 0; i < inlen; i++)
{
- struct ipa_parm_adjustment *n = VEC_index (ipa_parm_adjustment_t,
- inner, i);
+ struct ipa_parm_adjustment *n = &VEC_index (ipa_parm_adjustment_t,
+ inner, i);
if (n->remove_param)
VEC_quick_push (ipa_parm_adjustment_t, adjustments, n);
@@ -3094,7 +3094,7 @@ ipa_dump_param_adjustments (FILE *file, ipa_parm_adjustment_vec adjustments,
for (i = 0; i < len; i++)
{
struct ipa_parm_adjustment *adj;
- adj = VEC_index (ipa_parm_adjustment_t, adjustments, i);
+ adj = &VEC_index (ipa_parm_adjustment_t, adjustments, i);
if (!first)
fprintf (file, " ");
diff --git a/gcc/ipa-prop.h b/gcc/ipa-prop.h
index 489e5d8..feb8ff7 100644
--- a/gcc/ipa-prop.h
+++ b/gcc/ipa-prop.h
@@ -359,7 +359,7 @@ ipa_get_param_count (struct ipa_node_params *info)
static inline tree
ipa_get_param (struct ipa_node_params *info, int i)
{
- return VEC_index (ipa_param_descriptor_t, info->descriptors, i)->decl;
+ return VEC_index (ipa_param_descriptor_t, info->descriptors, i).decl;
}
/* Set the used flag corresponding to the Ith formal parameter of the function
@@ -368,7 +368,7 @@ ipa_get_param (struct ipa_node_params *info, int i)
static inline void
ipa_set_param_used (struct ipa_node_params *info, int i, bool val)
{
- VEC_index (ipa_param_descriptor_t, info->descriptors, i)->used = val;
+ VEC_index (ipa_param_descriptor_t, info->descriptors, i).used = val;
}
/* Return the used flag corresponding to the Ith formal parameter of the
@@ -377,7 +377,7 @@ ipa_set_param_used (struct ipa_node_params *info, int i, bool val)
static inline bool
ipa_is_param_used (struct ipa_node_params *info, int i)
{
- return VEC_index (ipa_param_descriptor_t, info->descriptors, i)->used;
+ return VEC_index (ipa_param_descriptor_t, info->descriptors, i).used;
}
/* ipa_edge_args stores information related to a callsite and particularly its
@@ -406,7 +406,7 @@ ipa_get_cs_argument_count (struct ipa_edge_args *args)
static inline struct ipa_jump_func *
ipa_get_ith_jump_func (struct ipa_edge_args *args, int i)
{
- return VEC_index (ipa_jump_func_t, args->jump_functions, i);
+ return &VEC_index (ipa_jump_func_t, args->jump_functions, i);
}
/* Vectors need to have typedefs of structures. */
@@ -425,10 +425,10 @@ extern GTY(()) VEC (ipa_edge_args_t, gc) *ipa_edge_args_vector;
/* Return the associated parameter/argument info corresponding to the given
node/edge. */
-#define IPA_NODE_REF(NODE) (VEC_index (ipa_node_params_t, \
- ipa_node_params_vector, (NODE)->uid))
-#define IPA_EDGE_REF(EDGE) (VEC_index (ipa_edge_args_t, \
- ipa_edge_args_vector, (EDGE)->uid))
+#define IPA_NODE_REF(NODE) (&VEC_index (ipa_node_params_t, \
+ ipa_node_params_vector, (NODE)->uid))
+#define IPA_EDGE_REF(EDGE) (&VEC_index (ipa_edge_args_t, \
+ ipa_edge_args_vector, (EDGE)->uid))
/* This macro checks validity of index returned by
ipa_get_param_decl_index function. */
#define IS_VALID_JUMP_FUNC_INDEX(I) ((I) != -1)
diff --git a/gcc/ipa-ref-inline.h b/gcc/ipa-ref-inline.h
index 636af14..575bba9 100644
--- a/gcc/ipa-ref-inline.h
+++ b/gcc/ipa-ref-inline.h
@@ -73,7 +73,7 @@ ipa_ref_list_first_reference (struct ipa_ref_list *list)
{
if (!VEC_length (ipa_ref_t, list->references))
return NULL;
- return VEC_index (ipa_ref_t, list->references, 0);
+ return &VEC_index (ipa_ref_t, list->references, 0);
}
/* Return first referring ref in LIST or NULL if empty. */
diff --git a/gcc/ipa-ref.c b/gcc/ipa-ref.c
index 7926eb6..21799ab 100644
--- a/gcc/ipa-ref.c
+++ b/gcc/ipa-ref.c
@@ -49,7 +49,7 @@ ipa_record_reference (symtab_node referring_node,
old_references = list->references;
VEC_safe_grow (ipa_ref_t, gc, list->references,
VEC_length (ipa_ref_t, list->references) + 1);
- ref = VEC_last (ipa_ref_t, list->references);
+ ref = &VEC_last (ipa_ref_t, list->references);
list2 = &referred_node->symbol.ref_list;
VEC_safe_push (ipa_ref_ptr, heap, list2->referring, ref);
@@ -93,7 +93,7 @@ ipa_remove_reference (struct ipa_ref *ref)
}
VEC_pop (ipa_ref_ptr, list->referring);
- last = VEC_last (ipa_ref_t, list2->references);
+ last = &VEC_last (ipa_ref_t, list2->references);
if (ref != last)
{
*ref = *last;
@@ -111,7 +111,7 @@ void
ipa_remove_all_references (struct ipa_ref_list *list)
{
while (VEC_length (ipa_ref_t, list->references))
- ipa_remove_reference (VEC_last (ipa_ref_t, list->references));
+ ipa_remove_reference (&VEC_last (ipa_ref_t, list->references));
VEC_free (ipa_ref_t, gc, list->references);
list->references = NULL;
}
diff --git a/gcc/ipa-split.c b/gcc/ipa-split.c
index 2b5bc22..ed12b5f 100644
--- a/gcc/ipa-split.c
+++ b/gcc/ipa-split.c
@@ -921,7 +921,7 @@ find_split_points (int overall_time, int overall_size)
while (!VEC_empty (stack_entry, stack))
{
- stack_entry *entry = VEC_last (stack_entry, stack);
+ stack_entry *entry = &VEC_last (stack_entry, stack);
/* We are walking an acyclic graph, so edge_num counts
succ and pred edges together. However when considering
@@ -988,9 +988,9 @@ find_split_points (int overall_time, int overall_size)
new_entry.bb = dest;
new_entry.edge_num = 0;
new_entry.overall_time
- = VEC_index (bb_info, bb_info_vec, dest->index)->time;
+ = VEC_index (bb_info, bb_info_vec, dest->index).time;
new_entry.overall_size
- = VEC_index (bb_info, bb_info_vec, dest->index)->size;
+ = VEC_index (bb_info, bb_info_vec, dest->index).size;
new_entry.earliest = INT_MAX;
new_entry.set_ssa_names = BITMAP_ALLOC (NULL);
new_entry.used_ssa_names = BITMAP_ALLOC (NULL);
@@ -1010,8 +1010,8 @@ find_split_points (int overall_time, int overall_size)
and merge stuff we accumulate during the walk. */
else if (entry->bb != ENTRY_BLOCK_PTR)
{
- stack_entry *prev = VEC_index (stack_entry, stack,
- VEC_length (stack_entry, stack) - 2);
+ stack_entry *prev = &VEC_index (stack_entry, stack,
+ VEC_length (stack_entry, stack) - 2);
entry->bb->aux = (void *)(intptr_t)-1;
prev->can_split &= entry->can_split;
@@ -1493,8 +1493,8 @@ execute_split_functions (void)
}
overall_time += time;
overall_size += size;
- VEC_index (bb_info, bb_info_vec, bb->index)->time = time;
- VEC_index (bb_info, bb_info_vec, bb->index)->size = size;
+ VEC_index (bb_info, bb_info_vec, bb->index).time = time;
+ VEC_index (bb_info, bb_info_vec, bb->index).size = size;
}
find_split_points (overall_time, overall_size);
if (best_split_point.split_bbs)
diff --git a/gcc/java/boehm.c b/gcc/java/boehm.c
index f4a9af6..07dfb61 100644
--- a/gcc/java/boehm.c
+++ b/gcc/java/boehm.c
@@ -233,6 +233,6 @@ uses_jv_markobj_p (tree dtable)
this function is only used with flag_reduced_reflection. No
point in asserting unless we hit the bad case. */
gcc_assert (!flag_reduced_reflection || TARGET_VTABLE_USES_DESCRIPTORS == 0);
- v = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (dtable), 3)->value;
+ v = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (dtable), 3).value;
return (PROCEDURE_OBJECT_DESCRIPTOR == TREE_INT_CST_LOW (v));
}
diff --git a/gcc/java/class.c b/gcc/java/class.c
index 40ff26d..f806cea 100644
--- a/gcc/java/class.c
+++ b/gcc/java/class.c
@@ -1533,7 +1533,7 @@ make_method_value (tree mdecl)
v = VEC_alloc (constructor_elt, gc, length);
VEC_safe_grow_cleared (constructor_elt, gc, v, length);
- e = VEC_index (constructor_elt, v, idx--);
+ e = &VEC_index (constructor_elt, v, idx--);
e->value = null_pointer_node;
FOR_EACH_VEC_ELT (tree, DECL_FUNCTION_THROWS (mdecl), ix, t)
@@ -1542,7 +1542,7 @@ make_method_value (tree mdecl)
tree utf8
= build_utf8_ref (unmangle_classname (IDENTIFIER_POINTER (sig),
IDENTIFIER_LENGTH (sig)));
- e = VEC_index (constructor_elt, v, idx--);
+ e = &VEC_index (constructor_elt, v, idx--);
e->value = utf8;
}
gcc_assert (idx == -1);
@@ -1621,7 +1621,7 @@ get_dispatch_table (tree type, tree this_class_addr)
arraysize += 2;
VEC_safe_grow_cleared (constructor_elt, gc, v, arraysize);
- e = VEC_index (constructor_elt, v, arraysize - 1);
+ e = &VEC_index (constructor_elt, v, arraysize - 1);
#define CONSTRUCTOR_PREPEND_VALUE(E, V) E->value = V, E--
for (i = nvirtuals; --i >= 0; )
@@ -3007,7 +3007,7 @@ emit_catch_table (tree this_class)
int n_catch_classes;
constructor_elt *e;
/* Fill in the dummy entry that make_class created. */
- e = VEC_index (constructor_elt, TYPE_CATCH_CLASSES (this_class), 0);
+ e = &VEC_index (constructor_elt, TYPE_CATCH_CLASSES (this_class), 0);
e->value = make_catch_class_record (null_pointer_node, null_pointer_node);
CONSTRUCTOR_APPEND_ELT (TYPE_CATCH_CLASSES (this_class), NULL_TREE,
make_catch_class_record (null_pointer_node,
diff --git a/gcc/java/constants.c b/gcc/java/constants.c
index 2cc911f..c709fa4 100644
--- a/gcc/java/constants.c
+++ b/gcc/java/constants.c
@@ -514,8 +514,8 @@ build_constants_constructor (void)
int c = outgoing_cpool->count;
VEC_safe_grow_cleared (constructor_elt, gc, tags, c);
VEC_safe_grow_cleared (constructor_elt, gc, data, c);
- t = VEC_index (constructor_elt, tags, c-1);
- d = VEC_index (constructor_elt, data, c-1);
+ t = &VEC_index (constructor_elt, tags, c-1);
+ d = &VEC_index (constructor_elt, data, c-1);
}
#define CONSTRUCTOR_PREPEND_VALUE(E, V) E->value = V, E--
diff --git a/gcc/java/java-tree.h b/gcc/java/java-tree.h
index 6169b6a..5167b9b 100644
--- a/gcc/java/java-tree.h
+++ b/gcc/java/java-tree.h
@@ -1430,7 +1430,7 @@ extern tree *type_map;
#define PUSH_SUPER_VALUE(V, VALUE) \
do \
{ \
- constructor_elt *_elt___ = VEC_last (constructor_elt, V); \
+ constructor_elt *_elt___ = &VEC_last (constructor_elt, V); \
tree _next___ = DECL_CHAIN (_elt___->index); \
gcc_assert (!DECL_NAME (_elt___->index)); \
_elt___->value = VALUE; \
@@ -1444,7 +1444,7 @@ extern tree *type_map;
#define PUSH_FIELD_VALUE(V, NAME, VALUE) \
do \
{ \
- constructor_elt *_elt___ = VEC_last (constructor_elt, V); \
+ constructor_elt *_elt___ = &VEC_last (constructor_elt, V); \
tree _next___ = DECL_CHAIN (_elt___->index); \
gcc_assert (strcmp (IDENTIFIER_POINTER (DECL_NAME (_elt___->index)), \
NAME) == 0); \
diff --git a/gcc/modulo-sched.c b/gcc/modulo-sched.c
index af79fb6..673055e 100644
--- a/gcc/modulo-sched.c
+++ b/gcc/modulo-sched.c
@@ -229,7 +229,7 @@ static void remove_node_from_ps (partial_schedule_ptr, ps_insn_ptr);
#define NODE_ASAP(node) ((node)->aux.count)
-#define SCHED_PARAMS(x) VEC_index (node_sched_params, node_sched_param_vec, x)
+#define SCHED_PARAMS(x) (&VEC_index (node_sched_params, node_sched_param_vec, x))
#define SCHED_TIME(x) (SCHED_PARAMS (x)->time)
#define SCHED_ROW(x) (SCHED_PARAMS (x)->row)
#define SCHED_STAGE(x) (SCHED_PARAMS (x)->stage)
@@ -305,7 +305,7 @@ static struct ps_reg_move_info *
ps_reg_move (partial_schedule_ptr ps, int id)
{
gcc_checking_assert (id >= ps->g->num_nodes);
- return VEC_index (ps_reg_move_info, ps->reg_moves, id - ps->g->num_nodes);
+ return &VEC_index (ps_reg_move_info, ps->reg_moves, id - ps->g->num_nodes);
}
/* Return the rtl instruction that is being scheduled by partial schedule
diff --git a/gcc/ree.c b/gcc/ree.c
index 697e45f..1d0f194 100644
--- a/gcc/ree.c
+++ b/gcc/ree.c
@@ -802,7 +802,7 @@ add_removable_extension (const_rtx expr, rtx insn,
different extension. FIXME: this obviously can be improved. */
for (def = defs; def; def = def->next)
if ((idx = def_map[INSN_UID(DF_REF_INSN (def->ref))])
- && (cand = VEC_index (ext_cand, *insn_list, idx - 1))
+ && (cand = &VEC_index (ext_cand, *insn_list, idx - 1))
&& (cand->code != code || cand->mode != mode))
{
if (dump_file)
diff --git a/gcc/reg-stack.c b/gcc/reg-stack.c
index d1e195d..dc7550a 100644
--- a/gcc/reg-stack.c
+++ b/gcc/reg-stack.c
@@ -201,7 +201,7 @@ typedef struct stack_def
int top; /* index to top stack element */
HARD_REG_SET reg_set; /* set of live registers */
unsigned char reg[REG_STACK_SIZE];/* register - stack mapping */
-} *stack;
+} *stack_ptr;
/* This is used to carry information about basic blocks. It is
attached to the AUX field of the standard CFG block. */
@@ -246,7 +246,7 @@ static rtx not_a_num;
/* Forward declarations */
static int stack_regs_mentioned_p (const_rtx pat);
-static void pop_stack (stack, int);
+static void pop_stack (stack_ptr, int);
static rtx *get_true_reg (rtx *);
static int check_asm_stack_operands (rtx);
@@ -254,19 +254,19 @@ static void get_asm_operands_in_out (rtx, int *, int *);
static rtx stack_result (tree);
static void replace_reg (rtx *, int);
static void remove_regno_note (rtx, enum reg_note, unsigned int);
-static int get_hard_regnum (stack, rtx);
-static rtx emit_pop_insn (rtx, stack, rtx, enum emit_where);
-static void swap_to_top(rtx, stack, rtx, rtx);
-static bool move_for_stack_reg (rtx, stack, rtx);
-static bool move_nan_for_stack_reg (rtx, stack, rtx);
+static int get_hard_regnum (stack_ptr, rtx);
+static rtx emit_pop_insn (rtx, stack_ptr, rtx, enum emit_where);
+static void swap_to_top(rtx, stack_ptr, rtx, rtx);
+static bool move_for_stack_reg (rtx, stack_ptr, rtx);
+static bool move_nan_for_stack_reg (rtx, stack_ptr, rtx);
static int swap_rtx_condition_1 (rtx);
static int swap_rtx_condition (rtx);
-static void compare_for_stack_reg (rtx, stack, rtx);
-static bool subst_stack_regs_pat (rtx, stack, rtx);
-static void subst_asm_stack_regs (rtx, stack);
-static bool subst_stack_regs (rtx, stack);
-static void change_stack (rtx, stack, stack, enum emit_where);
-static void print_stack (FILE *, stack);
+static void compare_for_stack_reg (rtx, stack_ptr, rtx);
+static bool subst_stack_regs_pat (rtx, stack_ptr, rtx);
+static void subst_asm_stack_regs (rtx, stack_ptr);
+static bool subst_stack_regs (rtx, stack_ptr);
+static void change_stack (rtx, stack_ptr, stack_ptr, enum emit_where);
+static void print_stack (FILE *, stack_ptr);
static rtx next_flags_user (rtx);
/* Return nonzero if any stack register is mentioned somewhere within PAT. */
@@ -354,7 +354,7 @@ next_flags_user (rtx insn)
/* Reorganize the stack into ascending numbers, before this insn. */
static void
-straighten_stack (rtx insn, stack regstack)
+straighten_stack (rtx insn, stack_ptr regstack)
{
struct stack_def temp_stack;
int top;
@@ -377,7 +377,7 @@ straighten_stack (rtx insn, stack regstack)
/* Pop a register from the stack. */
static void
-pop_stack (stack regstack, int regno)
+pop_stack (stack_ptr regstack, int regno)
{
int top = regstack->top;
@@ -721,7 +721,7 @@ remove_regno_note (rtx insn, enum reg_note note, unsigned int regno)
returned if the register is not found. */
static int
-get_hard_regnum (stack regstack, rtx reg)
+get_hard_regnum (stack_ptr regstack, rtx reg)
{
int i;
@@ -742,7 +742,7 @@ get_hard_regnum (stack regstack, rtx reg)
cases the movdf pattern to pop. */
static rtx
-emit_pop_insn (rtx insn, stack regstack, rtx reg, enum emit_where where)
+emit_pop_insn (rtx insn, stack_ptr regstack, rtx reg, enum emit_where where)
{
rtx pop_insn, pop_rtx;
int hard_regno;
@@ -793,7 +793,7 @@ emit_pop_insn (rtx insn, stack regstack, rtx reg, enum emit_where where)
If REG is already at the top of the stack, no insn is emitted. */
static void
-emit_swap_insn (rtx insn, stack regstack, rtx reg)
+emit_swap_insn (rtx insn, stack_ptr regstack, rtx reg)
{
int hard_regno;
rtx swap_rtx;
@@ -900,7 +900,7 @@ emit_swap_insn (rtx insn, stack regstack, rtx reg)
is emitted. */
static void
-swap_to_top (rtx insn, stack regstack, rtx src1, rtx src2)
+swap_to_top (rtx insn, stack_ptr regstack, rtx src1, rtx src2)
{
struct stack_def temp_stack;
int regno, j, k, temp;
@@ -941,7 +941,7 @@ swap_to_top (rtx insn, stack regstack, rtx src1, rtx src2)
was deleted in the process. */
static bool
-move_for_stack_reg (rtx insn, stack regstack, rtx pat)
+move_for_stack_reg (rtx insn, stack_ptr regstack, rtx pat)
{
rtx *psrc = get_true_reg (&SET_SRC (pat));
rtx *pdest = get_true_reg (&SET_DEST (pat));
@@ -1092,7 +1092,7 @@ move_for_stack_reg (rtx insn, stack regstack, rtx pat)
a NaN into DEST, then invokes move_for_stack_reg. */
static bool
-move_nan_for_stack_reg (rtx insn, stack regstack, rtx dest)
+move_nan_for_stack_reg (rtx insn, stack_ptr regstack, rtx dest)
{
rtx pat;
@@ -1231,7 +1231,7 @@ swap_rtx_condition (rtx insn)
set up. */
static void
-compare_for_stack_reg (rtx insn, stack regstack, rtx pat_src)
+compare_for_stack_reg (rtx insn, stack_ptr regstack, rtx pat_src)
{
rtx *src1, *src2;
rtx src1_note, src2_note;
@@ -1320,7 +1320,7 @@ compare_for_stack_reg (rtx insn, stack regstack, rtx pat_src)
static int
subst_stack_regs_in_debug_insn (rtx *loc, void *data)
{
- stack regstack = (stack)data;
+ stack_ptr regstack = (stack_ptr)data;
int hard_regno;
if (!STACK_REG_P (*loc))
@@ -1361,7 +1361,7 @@ subst_all_stack_regs_in_debug_insn (rtx insn, struct stack_def *regstack)
was deleted in the process. */
static bool
-subst_stack_regs_pat (rtx insn, stack regstack, rtx pat)
+subst_stack_regs_pat (rtx insn, stack_ptr regstack, rtx pat)
{
rtx *dest, *src;
bool control_flow_insn_deleted = false;
@@ -2009,7 +2009,7 @@ subst_stack_regs_pat (rtx insn, stack regstack, rtx pat)
requirements, since record_asm_stack_regs removes any problem asm. */
static void
-subst_asm_stack_regs (rtx insn, stack regstack)
+subst_asm_stack_regs (rtx insn, stack_ptr regstack)
{
rtx body = PATTERN (insn);
int alt;
@@ -2292,7 +2292,7 @@ subst_asm_stack_regs (rtx insn, stack regstack)
a control flow insn was deleted in the process. */
static bool
-subst_stack_regs (rtx insn, stack regstack)
+subst_stack_regs (rtx insn, stack_ptr regstack)
{
rtx *note_link, note;
bool control_flow_insn_deleted = false;
@@ -2404,7 +2404,7 @@ subst_stack_regs (rtx insn, stack regstack)
is no longer needed once this has executed. */
static void
-change_stack (rtx insn, stack old, stack new_stack, enum emit_where where)
+change_stack (rtx insn, stack_ptr old, stack_ptr new_stack, enum emit_where where)
{
int reg;
int update_end = 0;
@@ -2610,7 +2610,7 @@ change_stack (rtx insn, stack old, stack new_stack, enum emit_where where)
/* Print stack configuration. */
static void
-print_stack (FILE *file, stack s)
+print_stack (FILE *file, stack_ptr s)
{
if (! file)
return;
@@ -2686,7 +2686,7 @@ static void
convert_regs_exit (void)
{
int value_reg_low, value_reg_high;
- stack output_stack;
+ stack_ptr output_stack;
rtx retvalue;
retvalue = stack_result (current_function_decl);
@@ -2719,8 +2719,8 @@ convert_regs_exit (void)
static void
propagate_stack (edge e)
{
- stack src_stack = &BLOCK_INFO (e->src)->stack_out;
- stack dest_stack = &BLOCK_INFO (e->dest)->stack_in;
+ stack_ptr src_stack = &BLOCK_INFO (e->src)->stack_out;
+ stack_ptr dest_stack = &BLOCK_INFO (e->dest)->stack_in;
int reg;
/* Preserve the order of the original stack, but check whether
@@ -2746,8 +2746,8 @@ static bool
compensate_edge (edge e)
{
basic_block source = e->src, target = e->dest;
- stack target_stack = &BLOCK_INFO (target)->stack_in;
- stack source_stack = &BLOCK_INFO (source)->stack_out;
+ stack_ptr target_stack = &BLOCK_INFO (target)->stack_in;
+ stack_ptr source_stack = &BLOCK_INFO (source)->stack_out;
struct stack_def regstack;
int reg;
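The reg-stack.c hunks above rename that file's local 'stack' typedef to
'stack_ptr'.  The likely reason shows up in the vec.h diff at the end of this
patch: the allocation strategies become ordinary enumerators,
'enum vec_allocation_t { heap, gc, stack }', so a translation unit that includes
vec.h can no longer declare another file-scope entity named 'stack'.  A short
illustration of the clash (not from the patch):

  enum vec_allocation_t { heap, gc, stack };  /* now provided by vec.h */

  /* typedef struct stack_def *stack;     <- no longer compiles: 'stack'
                                             already names the enumerator.  */
  typedef struct stack_def *stack_ptr;    /* hence the rename */

The later tree-vrp.c hunk renames its static VEC(tree,heap) 'stack' to
'equiv_stack', presumably for the same reason.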
diff --git a/gcc/regrename.c b/gcc/regrename.c
index 2366fb5..d125242 100644
--- a/gcc/regrename.c
+++ b/gcc/regrename.c
@@ -728,8 +728,8 @@ regrename_analyze (bitmap bb_mask)
rtx insn;
FOR_BB_INSNS (bb1, insn)
{
- insn_rr_info *p = VEC_index (insn_rr_info, insn_rr,
- INSN_UID (insn));
+ insn_rr_info *p = &VEC_index (insn_rr_info, insn_rr,
+ INSN_UID (insn));
p->op_info = NULL;
}
}
@@ -1583,7 +1583,7 @@ build_def_use (basic_block bb)
if (insn_rr != NULL)
{
- insn_info = VEC_index (insn_rr_info, insn_rr, INSN_UID (insn));
+ insn_info = &VEC_index (insn_rr_info, insn_rr, INSN_UID (insn));
insn_info->op_info = XOBNEWVEC (&rename_obstack, operand_rr_info,
recog_data.n_operands);
memset (insn_info->op_info, 0,
diff --git a/gcc/reload.h b/gcc/reload.h
index 29d15ea..a672ddc 100644
--- a/gcc/reload.h
+++ b/gcc/reload.h
@@ -243,19 +243,19 @@ typedef struct reg_equivs
} reg_equivs_t;
#define reg_equiv_constant(ELT) \
- VEC_index (reg_equivs_t, reg_equivs, (ELT))->constant
+ VEC_index (reg_equivs_t, reg_equivs, (ELT)).constant
#define reg_equiv_invariant(ELT) \
- VEC_index (reg_equivs_t, reg_equivs, (ELT))->invariant
+ VEC_index (reg_equivs_t, reg_equivs, (ELT)).invariant
#define reg_equiv_memory_loc(ELT) \
- VEC_index (reg_equivs_t, reg_equivs, (ELT))->memory_loc
+ VEC_index (reg_equivs_t, reg_equivs, (ELT)).memory_loc
#define reg_equiv_address(ELT) \
- VEC_index (reg_equivs_t, reg_equivs, (ELT))->address
+ VEC_index (reg_equivs_t, reg_equivs, (ELT)).address
#define reg_equiv_mem(ELT) \
- VEC_index (reg_equivs_t, reg_equivs, (ELT))->mem
+ VEC_index (reg_equivs_t, reg_equivs, (ELT)).mem
#define reg_equiv_alt_mem_list(ELT) \
- VEC_index (reg_equivs_t, reg_equivs, (ELT))->alt_mem_list
+ VEC_index (reg_equivs_t, reg_equivs, (ELT)).alt_mem_list
#define reg_equiv_init(ELT) \
- VEC_index (reg_equivs_t, reg_equivs, (ELT))->init
+ VEC_index (reg_equivs_t, reg_equivs, (ELT)).init
DEF_VEC_O(reg_equivs_t);
DEF_VEC_ALLOC_O(reg_equivs_t, gc);
diff --git a/gcc/reload1.c b/gcc/reload1.c
index bf5d3d3..b866af5 100644
--- a/gcc/reload1.c
+++ b/gcc/reload1.c
@@ -664,7 +664,8 @@ grow_reg_equivs (void)
for (i = old_size; i < max_regno; i++)
{
VEC_quick_insert (reg_equivs_t, reg_equivs, i, 0);
- memset (VEC_index (reg_equivs_t, reg_equivs, i), 0, sizeof (reg_equivs_t));
+ memset (&VEC_index (reg_equivs_t, reg_equivs, i), 0,
+ sizeof (reg_equivs_t));
}
}
diff --git a/gcc/sched-int.h b/gcc/sched-int.h
index fa5fc66..2e46238 100644
--- a/gcc/sched-int.h
+++ b/gcc/sched-int.h
@@ -873,7 +873,7 @@ DEF_VEC_ALLOC_O (haifa_insn_data_def, heap);
extern VEC(haifa_insn_data_def, heap) *h_i_d;
-#define HID(INSN) (VEC_index (haifa_insn_data_def, h_i_d, INSN_UID (INSN)))
+#define HID(INSN) (&VEC_index (haifa_insn_data_def, h_i_d, INSN_UID (INSN)))
/* Accessor macros for h_i_d. There are more in haifa-sched.c and
sched-rgn.c. */
@@ -895,7 +895,7 @@ DEF_VEC_ALLOC_O (haifa_deps_insn_data_def, heap);
extern VEC(haifa_deps_insn_data_def, heap) *h_d_i_d;
-#define HDID(INSN) (VEC_index (haifa_deps_insn_data_def, h_d_i_d, \
+#define HDID(INSN) (&VEC_index (haifa_deps_insn_data_def, h_d_i_d, \
INSN_LUID (INSN)))
#define INSN_DEP_COUNT(INSN) (HDID (INSN)->dep_count)
#define HAS_INTERNAL_DEP(INSN) (HDID (INSN)->has_internal_dep)
@@ -909,7 +909,7 @@ extern VEC(haifa_deps_insn_data_def, heap) *h_d_i_d;
#define INSN_COND_DEPS(INSN) (HDID (INSN)->cond_deps)
#define CANT_MOVE(INSN) (HDID (INSN)->cant_move)
#define CANT_MOVE_BY_LUID(LUID) (VEC_index (haifa_deps_insn_data_def, h_d_i_d, \
- LUID)->cant_move)
+ LUID).cant_move)
#define INSN_PRIORITY(INSN) (HID (INSN)->priority)
diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index e7ca3f1..449efc9 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -1524,7 +1524,7 @@ insert_in_history_vect (VEC (expr_history_def, heap) **pvect,
if (res)
{
- expr_history_def *phist = VEC_index (expr_history_def, vect, ind);
+ expr_history_def *phist = &VEC_index (expr_history_def, vect, ind);
/* It is possible that speculation types of expressions that were
propagated through different paths will be different here. In this
@@ -4159,7 +4159,7 @@ finish_insns (void)
removed during the scheduling. */
for (i = 0; i < VEC_length (sel_insn_data_def, s_i_d); i++)
{
- sel_insn_data_def *sid_entry = VEC_index (sel_insn_data_def, s_i_d, i);
+ sel_insn_data_def *sid_entry = &VEC_index (sel_insn_data_def, s_i_d, i);
if (sid_entry->live)
return_regset_to_pool (sid_entry->live);
diff --git a/gcc/sel-sched-ir.h b/gcc/sel-sched-ir.h
index 2003552..ef884a5 100644
--- a/gcc/sel-sched-ir.h
+++ b/gcc/sel-sched-ir.h
@@ -765,8 +765,8 @@ DEF_VEC_ALLOC_O (sel_insn_data_def, heap);
extern VEC (sel_insn_data_def, heap) *s_i_d;
/* Accessor macros for s_i_d. */
-#define SID(INSN) (VEC_index (sel_insn_data_def, s_i_d, INSN_LUID (INSN)))
-#define SID_BY_UID(UID) (VEC_index (sel_insn_data_def, s_i_d, LUID_BY_UID (UID)))
+#define SID(INSN) (&VEC_index (sel_insn_data_def, s_i_d, INSN_LUID (INSN)))
+#define SID_BY_UID(UID) (&VEC_index (sel_insn_data_def, s_i_d, LUID_BY_UID (UID)))
extern sel_insn_data_def insn_sid (insn_t);
@@ -897,7 +897,7 @@ extern void sel_finish_global_bb_info (void);
/* Get data for BB. */
#define SEL_GLOBAL_BB_INFO(BB) \
- (VEC_index (sel_global_bb_info_def, sel_global_bb_info, (BB)->index))
+ (&VEC_index (sel_global_bb_info_def, sel_global_bb_info, (BB)->index))
/* Access macros. */
#define BB_LV_SET(BB) (SEL_GLOBAL_BB_INFO (BB)->lv_set)
@@ -927,8 +927,8 @@ DEF_VEC_ALLOC_O (sel_region_bb_info_def, heap);
extern VEC (sel_region_bb_info_def, heap) *sel_region_bb_info;
/* Get data for BB. */
-#define SEL_REGION_BB_INFO(BB) (VEC_index (sel_region_bb_info_def, \
- sel_region_bb_info, (BB)->index))
+#define SEL_REGION_BB_INFO(BB) (&VEC_index (sel_region_bb_info_def, \
+ sel_region_bb_info, (BB)->index))
/* Get BB's note_list.
A note_list is a list of various notes that was scattered across BB
diff --git a/gcc/sel-sched.c b/gcc/sel-sched.c
index f0c6eaf..b5bffa1 100644
--- a/gcc/sel-sched.c
+++ b/gcc/sel-sched.c
@@ -1938,9 +1938,9 @@ undo_transformations (av_set_t *av_ptr, rtx insn)
{
expr_history_def *phist;
- phist = VEC_index (expr_history_def,
- EXPR_HISTORY_OF_CHANGES (expr),
- index);
+ phist = &VEC_index (expr_history_def,
+ EXPR_HISTORY_OF_CHANGES (expr),
+ index);
switch (phist->type)
{
@@ -3581,7 +3581,7 @@ vinsn_vec_has_expr_p (vinsn_vec_t vinsn_vec, expr_t expr)
EXPR_HISTORY_OF_CHANGES (expr))
? VEC_index (expr_history_def,
EXPR_HISTORY_OF_CHANGES (expr),
- i++)->old_expr_vinsn
+ i++).old_expr_vinsn
: NULL))
FOR_EACH_VEC_ELT (vinsn_t, vinsn_vec, n, vinsn)
if (VINSN_SEPARABLE_P (vinsn))
diff --git a/gcc/tree-call-cdce.c b/gcc/tree-call-cdce.c
index c879548..be020da 100644
--- a/gcc/tree-call-cdce.c
+++ b/gcc/tree-call-cdce.c
@@ -374,7 +374,7 @@ gen_conditions_for_domain (tree arg, inp_domain domain,
{
/* Now push a separator. */
if (domain.has_lb)
- VEC_quick_push (gimple, conds, NULL);
+ VEC_quick_push (gimple, conds, (gimple)NULL);
gen_one_condition (arg, domain.ub,
(domain.is_ub_inclusive
@@ -496,7 +496,7 @@ gen_conditions_for_pow_int_base (tree base, tree expn,
type is integer. */
/* Push a separator. */
- VEC_quick_push (gimple, conds, NULL);
+ VEC_quick_push (gimple, conds, (gimple)NULL);
temp = create_tmp_var (int_type, "DCE_COND1");
cst0 = build_int_cst (int_type, 0);
diff --git a/gcc/tree-ssa-loop-ivopts.c b/gcc/tree-ssa-loop-ivopts.c
index c44567f..7429003 100644
--- a/gcc/tree-ssa-loop-ivopts.c
+++ b/gcc/tree-ssa-loop-ivopts.c
@@ -3120,7 +3120,7 @@ multiplier_allowed_in_address_p (HOST_WIDE_INT ratio, enum machine_mode mode,
TODO -- there must be some better way. This all is quite crude. */
-typedef struct
+typedef struct address_cost_data_s
{
HOST_WIDE_INT min_offset, max_offset;
unsigned costs[2][2][2][2];
diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index 233ecce..30c4127 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -962,7 +962,7 @@ static VEC (oecount, heap) *cvec;
static hashval_t
oecount_hash (const void *p)
{
- const oecount *c = VEC_index (oecount, cvec, (size_t)p - 42);
+ const oecount *c = &VEC_index (oecount, cvec, (size_t)p - 42);
return htab_hash_pointer (c->op) ^ (hashval_t)c->oecode;
}
@@ -971,8 +971,8 @@ oecount_hash (const void *p)
static int
oecount_eq (const void *p1, const void *p2)
{
- const oecount *c1 = VEC_index (oecount, cvec, (size_t)p1 - 42);
- const oecount *c2 = VEC_index (oecount, cvec, (size_t)p2 - 42);
+ const oecount *c1 = &VEC_index (oecount, cvec, (size_t)p1 - 42);
+ const oecount *c2 = &VEC_index (oecount, cvec, (size_t)p2 - 42);
return (c1->oecode == c2->oecode
&& c1->op == c2->op);
}
@@ -1354,7 +1354,7 @@ undistribute_ops_list (enum tree_code opcode,
else
{
VEC_pop (oecount, cvec);
- VEC_index (oecount, cvec, (size_t)*slot - 42)->cnt++;
+ VEC_index (oecount, cvec, (size_t)*slot - 42).cnt++;
}
}
}
@@ -1381,7 +1381,7 @@ undistribute_ops_list (enum tree_code opcode,
candidates2 = sbitmap_alloc (length);
while (!VEC_empty (oecount, cvec))
{
- oecount *c = VEC_last (oecount, cvec);
+ oecount *c = &VEC_last (oecount, cvec);
if (c->cnt < 2)
break;
@@ -3190,7 +3190,7 @@ attempt_builtin_powi (gimple stmt, VEC(operand_entry_t, heap) **ops)
fputs ("Multiplying by cached product ", dump_file);
for (elt = j; elt < vec_len; elt++)
{
- rf = VEC_index (repeat_factor, repeat_factor_vec, elt);
+ rf = &VEC_index (repeat_factor, repeat_factor_vec, elt);
print_generic_expr (dump_file, rf->factor, 0);
if (elt < vec_len - 1)
fputs (" * ", dump_file);
@@ -3216,7 +3216,7 @@ attempt_builtin_powi (gimple stmt, VEC(operand_entry_t, heap) **ops)
dump_file);
for (elt = j; elt < vec_len; elt++)
{
- rf = VEC_index (repeat_factor, repeat_factor_vec, elt);
+ rf = &VEC_index (repeat_factor, repeat_factor_vec, elt);
print_generic_expr (dump_file, rf->factor, 0);
if (elt < vec_len - 1)
fputs (" * ", dump_file);
@@ -3250,7 +3250,7 @@ attempt_builtin_powi (gimple stmt, VEC(operand_entry_t, heap) **ops)
fputs ("Building __builtin_pow call for (", dump_file);
for (elt = j; elt < vec_len; elt++)
{
- rf = VEC_index (repeat_factor, repeat_factor_vec, elt);
+ rf = &VEC_index (repeat_factor, repeat_factor_vec, elt);
print_generic_expr (dump_file, rf->factor, 0);
if (elt < vec_len - 1)
fputs (" * ", dump_file);
@@ -3275,8 +3275,8 @@ attempt_builtin_powi (gimple stmt, VEC(operand_entry_t, heap) **ops)
{
tree op1, op2;
- rf1 = VEC_index (repeat_factor, repeat_factor_vec, ii);
- rf2 = VEC_index (repeat_factor, repeat_factor_vec, ii + 1);
+ rf1 = &VEC_index (repeat_factor, repeat_factor_vec, ii);
+ rf2 = &VEC_index (repeat_factor, repeat_factor_vec, ii + 1);
/* Init the last factor's representative to be itself. */
if (!rf2->repr)
@@ -3300,7 +3300,7 @@ attempt_builtin_powi (gimple stmt, VEC(operand_entry_t, heap) **ops)
/* Form a call to __builtin_powi for the maximum product
just formed, raised to the power obtained earlier. */
- rf1 = VEC_index (repeat_factor, repeat_factor_vec, j);
+ rf1 = &VEC_index (repeat_factor, repeat_factor_vec, j);
iter_result = make_temp_ssa_name (type, NULL, "reassocpow");
pow_stmt = gimple_build_call (powi_fndecl, 2, rf1->repr,
build_int_cst (integer_type_node,
@@ -3333,7 +3333,7 @@ attempt_builtin_powi (gimple stmt, VEC(operand_entry_t, heap) **ops)
unsigned k = power;
unsigned n;
- rf1 = VEC_index (repeat_factor, repeat_factor_vec, i);
+ rf1 = &VEC_index (repeat_factor, repeat_factor_vec, i);
rf1->count -= power;
FOR_EACH_VEC_ELT_REVERSE (operand_entry_t, *ops, n, oe)
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index 842c392..55798d1 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -775,7 +775,7 @@ ao_ref_init_from_vn_reference (ao_ref *ref,
alias_set_type base_alias_set = -1;
/* First get the final access size from just the outermost expression. */
- op = VEC_index (vn_reference_op_s, ops, 0);
+ op = &VEC_index (vn_reference_op_s, ops, 0);
if (op->opcode == COMPONENT_REF)
size_tree = DECL_SIZE (op->op0);
else if (op->opcode == BIT_FIELD_REF)
@@ -815,7 +815,7 @@ ao_ref_init_from_vn_reference (ao_ref *ref,
&& op->op0
&& DECL_P (TREE_OPERAND (op->op0, 0)))
{
- vn_reference_op_t pop = VEC_index (vn_reference_op_s, ops, i-1);
+ vn_reference_op_t pop = &VEC_index (vn_reference_op_s, ops, i-1);
base = TREE_OPERAND (op->op0, 0);
if (pop->off == -1)
{
@@ -1004,8 +1004,8 @@ vn_reference_fold_indirect (VEC (vn_reference_op_s, heap) **ops,
unsigned int *i_p)
{
unsigned int i = *i_p;
- vn_reference_op_t op = VEC_index (vn_reference_op_s, *ops, i);
- vn_reference_op_t mem_op = VEC_index (vn_reference_op_s, *ops, i - 1);
+ vn_reference_op_t op = &VEC_index (vn_reference_op_s, *ops, i);
+ vn_reference_op_t mem_op = &VEC_index (vn_reference_op_s, *ops, i - 1);
tree addr_base;
HOST_WIDE_INT addr_offset;
@@ -1036,8 +1036,8 @@ vn_reference_maybe_forwprop_address (VEC (vn_reference_op_s, heap) **ops,
unsigned int *i_p)
{
unsigned int i = *i_p;
- vn_reference_op_t op = VEC_index (vn_reference_op_s, *ops, i);
- vn_reference_op_t mem_op = VEC_index (vn_reference_op_s, *ops, i - 1);
+ vn_reference_op_t op = &VEC_index (vn_reference_op_s, *ops, i);
+ vn_reference_op_t mem_op = &VEC_index (vn_reference_op_s, *ops, i - 1);
gimple def_stmt;
enum tree_code code;
double_int off;
@@ -1114,7 +1114,7 @@ fully_constant_vn_reference_p (vn_reference_t ref)
/* Try to simplify the translated expression if it is
a call to a builtin function with at most two arguments. */
- op = VEC_index (vn_reference_op_s, operands, 0);
+ op = &VEC_index (vn_reference_op_s, operands, 0);
if (op->opcode == CALL_EXPR
&& TREE_CODE (op->op0) == ADDR_EXPR
&& TREE_CODE (TREE_OPERAND (op->op0, 0)) == FUNCTION_DECL
@@ -1124,9 +1124,9 @@ fully_constant_vn_reference_p (vn_reference_t ref)
{
vn_reference_op_t arg0, arg1 = NULL;
bool anyconst = false;
- arg0 = VEC_index (vn_reference_op_s, operands, 1);
+ arg0 = &VEC_index (vn_reference_op_s, operands, 1);
if (VEC_length (vn_reference_op_s, operands) > 2)
- arg1 = VEC_index (vn_reference_op_s, operands, 2);
+ arg1 = &VEC_index (vn_reference_op_s, operands, 2);
if (TREE_CODE_CLASS (arg0->opcode) == tcc_constant
|| (arg0->opcode == ADDR_EXPR
&& is_gimple_min_invariant (arg0->op0)))
@@ -1158,7 +1158,7 @@ fully_constant_vn_reference_p (vn_reference_t ref)
&& VEC_length (vn_reference_op_s, operands) == 2)
{
vn_reference_op_t arg0;
- arg0 = VEC_index (vn_reference_op_s, operands, 1);
+ arg0 = &VEC_index (vn_reference_op_s, operands, 1);
if (arg0->opcode == STRING_CST
&& (TYPE_MODE (op->type)
== TYPE_MODE (TREE_TYPE (TREE_TYPE (arg0->op0))))
@@ -1226,12 +1226,12 @@ valueize_refs_1 (VEC (vn_reference_op_s, heap) *orig, bool *valueized_anything)
&& vro->op0
&& TREE_CODE (vro->op0) == ADDR_EXPR
&& VEC_index (vn_reference_op_s,
- orig, i - 1)->opcode == MEM_REF)
+ orig, i - 1).opcode == MEM_REF)
vn_reference_fold_indirect (&orig, &i);
else if (i > 0
&& vro->opcode == SSA_NAME
&& VEC_index (vn_reference_op_s,
- orig, i - 1)->opcode == MEM_REF)
+ orig, i - 1).opcode == MEM_REF)
vn_reference_maybe_forwprop_address (&orig, &i);
/* If it transforms a non-constant ARRAY_REF into a constant
one, adjust the constant offset. */
@@ -1624,9 +1624,9 @@ vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *vr_)
i = VEC_length (vn_reference_op_s, vr->operands) - 1;
j = VEC_length (vn_reference_op_s, lhs_ops) - 1;
while (j >= 0 && i >= 0
- && vn_reference_op_eq (VEC_index (vn_reference_op_s,
- vr->operands, i),
- VEC_index (vn_reference_op_s, lhs_ops, j)))
+ && vn_reference_op_eq (&VEC_index (vn_reference_op_s,
+ vr->operands, i),
+ &VEC_index (vn_reference_op_s, lhs_ops, j)))
{
i--;
j--;
@@ -1639,10 +1639,10 @@ vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *vr_)
don't care here - further lookups with the rewritten operands
will simply fail if we messed up types too badly. */
if (j == 0 && i >= 0
- && VEC_index (vn_reference_op_s, lhs_ops, 0)->opcode == MEM_REF
- && VEC_index (vn_reference_op_s, lhs_ops, 0)->off != -1
- && (VEC_index (vn_reference_op_s, lhs_ops, 0)->off
- == VEC_index (vn_reference_op_s, vr->operands, i)->off))
+ && VEC_index (vn_reference_op_s, lhs_ops, 0).opcode == MEM_REF
+ && VEC_index (vn_reference_op_s, lhs_ops, 0).off != -1
+ && (VEC_index (vn_reference_op_s, lhs_ops, 0).off
+ == VEC_index (vn_reference_op_s, vr->operands, i).off))
i--, j--;
/* i now points to the first additional op.
@@ -1669,7 +1669,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *vr_)
VEC_truncate (vn_reference_op_s, vr->operands,
i + 1 + VEC_length (vn_reference_op_s, rhs));
FOR_EACH_VEC_ELT (vn_reference_op_s, rhs, j, vro)
- VEC_replace (vn_reference_op_s, vr->operands, i + 1 + j, vro);
+ VEC_replace (vn_reference_op_s, vr->operands, i + 1 + j, *vro);
VEC_free (vn_reference_op_s, heap, rhs);
vr->operands = valueize_refs (vr->operands);
vr->hashcode = vn_reference_compute_hash (vr);
@@ -1807,12 +1807,12 @@ vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *vr_)
op.opcode = MEM_REF;
op.op0 = build_int_cst (ptr_type_node, at - rhs_offset);
op.off = at - lhs_offset + rhs_offset;
- VEC_replace (vn_reference_op_s, vr->operands, 0, &op);
+ VEC_replace (vn_reference_op_s, vr->operands, 0, op);
op.type = TREE_TYPE (rhs);
op.opcode = TREE_CODE (rhs);
op.op0 = rhs;
op.off = -1;
- VEC_replace (vn_reference_op_s, vr->operands, 1, &op);
+ VEC_replace (vn_reference_op_s, vr->operands, 1, op);
vr->hashcode = vn_reference_compute_hash (vr);
/* Adjust *ref from the new operands. */
@@ -3746,7 +3746,7 @@ start_over:
/* Restore the last use walker and continue walking there. */
use = name;
name = VEC_pop (tree, namevec);
- memcpy (&iter, VEC_last (ssa_op_iter, itervec),
+ memcpy (&iter, &VEC_last (ssa_op_iter, itervec),
sizeof (ssa_op_iter));
VEC_pop (ssa_op_iter, itervec);
goto continue_walking;
diff --git a/gcc/tree-ssa-structalias.c b/gcc/tree-ssa-structalias.c
index 5b185b6..c108f76 100644
--- a/gcc/tree-ssa-structalias.c
+++ b/gcc/tree-ssa-structalias.c
@@ -2927,7 +2927,7 @@ get_constraint_for_ptr_offset (tree ptr, tree offset,
for (j = 0; j < n; j++)
{
varinfo_t curr;
- c = *VEC_index (ce_s, *results, j);
+ c = VEC_index (ce_s, *results, j);
curr = get_varinfo (c.var);
if (c.type == ADDRESSOF
@@ -2989,7 +2989,7 @@ get_constraint_for_ptr_offset (tree ptr, tree offset,
else
c.offset = rhsoffset;
- VEC_replace (ce_s, *results, j, &c);
+ VEC_replace (ce_s, *results, j, c);
}
}
@@ -3058,7 +3058,7 @@ get_constraint_for_component_ref (tree t, VEC(ce_s, heap) **results,
adding the required subset of sub-fields below. */
get_constraint_for_1 (t, results, true, lhs_p);
gcc_assert (VEC_length (ce_s, *results) == 1);
- result = VEC_last (ce_s, *results);
+ result = &VEC_last (ce_s, *results);
if (result->type == SCALAR
&& get_varinfo (result->var)->is_full_var)
@@ -3284,13 +3284,13 @@ get_constraint_for_1 (tree t, VEC (ce_s, heap) **results, bool address_p,
if (address_p)
return;
- cs = *VEC_last (ce_s, *results);
+ cs = VEC_last (ce_s, *results);
if (cs.type == DEREF
&& type_can_have_subvars (TREE_TYPE (t)))
{
/* For dereferences this means we have to defer it
to solving time. */
- VEC_last (ce_s, *results)->offset = UNKNOWN_OFFSET;
+ VEC_last (ce_s, *results).offset = UNKNOWN_OFFSET;
return;
}
if (cs.type != SCALAR)
@@ -3451,8 +3451,8 @@ do_structure_copy (tree lhsop, tree rhsop)
get_constraint_for (lhsop, &lhsc);
get_constraint_for_rhs (rhsop, &rhsc);
- lhsp = VEC_index (ce_s, lhsc, 0);
- rhsp = VEC_index (ce_s, rhsc, 0);
+ lhsp = &VEC_index (ce_s, lhsc, 0);
+ rhsp = &VEC_index (ce_s, rhsc, 0);
if (lhsp->type == DEREF
|| (lhsp->type == ADDRESSOF && lhsp->var == anything_id)
|| rhsp->type == DEREF)
@@ -3481,7 +3481,7 @@ do_structure_copy (tree lhsop, tree rhsop)
for (j = 0; VEC_iterate (ce_s, lhsc, j, lhsp);)
{
varinfo_t lhsv, rhsv;
- rhsp = VEC_index (ce_s, rhsc, k);
+ rhsp = &VEC_index (ce_s, rhsc, k);
lhsv = get_varinfo (lhsp->var);
rhsv = get_varinfo (rhsp->var);
if (lhsv->may_have_pointers
@@ -4377,7 +4377,7 @@ find_func_aliases_for_call (gimple t)
lhs = get_function_part_constraint (fi, fi_parm_base + j);
while (VEC_length (ce_s, rhsc) != 0)
{
- rhsp = VEC_last (ce_s, rhsc);
+ rhsp = &VEC_last (ce_s, rhsc);
process_constraint (new_constraint (lhs, *rhsp));
VEC_pop (ce_s, rhsc);
}
@@ -4399,7 +4399,7 @@ find_func_aliases_for_call (gimple t)
VEC(ce_s, heap) *tem = NULL;
VEC_safe_push (ce_s, heap, tem, &rhs);
do_deref (&tem);
- rhs = *VEC_index (ce_s, tem, 0);
+ rhs = VEC_index (ce_s, tem, 0);
VEC_free(ce_s, heap, tem);
}
FOR_EACH_VEC_ELT (ce_s, lhsc, j, lhsp)
@@ -4471,7 +4471,7 @@ find_func_aliases (gimple origt)
struct constraint_expr *c2;
while (VEC_length (ce_s, rhsc) > 0)
{
- c2 = VEC_last (ce_s, rhsc);
+ c2 = &VEC_last (ce_s, rhsc);
process_constraint (new_constraint (*c, *c2));
VEC_pop (ce_s, rhsc);
}
@@ -5158,7 +5158,7 @@ push_fields_onto_fieldstack (tree type, VEC(fieldoff_s,heap) **fieldstack,
bool must_have_pointers_p;
if (!VEC_empty (fieldoff_s, *fieldstack))
- pair = VEC_last (fieldoff_s, *fieldstack);
+ pair = &VEC_last (fieldoff_s, *fieldstack);
/* If there isn't anything at offset zero, create sth. */
if (!pair
diff --git a/gcc/tree-vect-loop-manip.c b/gcc/tree-vect-loop-manip.c
index e0b68c7..4e25159 100644
--- a/gcc/tree-vect-loop-manip.c
+++ b/gcc/tree-vect-loop-manip.c
@@ -188,7 +188,7 @@ adjust_vec_debug_stmts (void)
while (!VEC_empty (adjust_info, adjust_vec))
{
- adjust_debug_stmts_now (VEC_last (adjust_info, adjust_vec));
+ adjust_debug_stmts_now (&VEC_last (adjust_info, adjust_vec));
VEC_pop (adjust_info, adjust_vec);
}
@@ -2550,4 +2550,3 @@ vect_loop_versioning (loop_vec_info loop_vinfo,
GSI_SAME_STMT);
}
}
-
diff --git a/gcc/tree-vect-slp.c b/gcc/tree-vect-slp.c
index 782172f..a4c7483 100644
--- a/gcc/tree-vect-slp.c
+++ b/gcc/tree-vect-slp.c
@@ -1098,7 +1098,7 @@ vect_slp_rearrange_stmts (slp_tree node, unsigned int group_size,
tmp_stmts = VEC_alloc (gimple, heap, group_size);
for (i = 0; i < group_size; i++)
- VEC_safe_push (gimple, heap, tmp_stmts, NULL);
+ VEC_safe_push (gimple, heap, tmp_stmts, (gimple)NULL);
FOR_EACH_VEC_ELT (gimple, SLP_TREE_SCALAR_STMTS (node), i, stmt)
{
@@ -2653,7 +2653,7 @@ vect_create_mask_and_perm (gimple stmt, gimple next_scalar_stmt,
stmts later. */
for (i = VEC_length (gimple, SLP_TREE_VEC_STMTS (node));
i < (int) SLP_TREE_NUMBER_OF_VEC_STMTS (node); i++)
- VEC_quick_push (gimple, SLP_TREE_VEC_STMTS (node), NULL);
+ VEC_quick_push (gimple, SLP_TREE_VEC_STMTS (node), (gimple)NULL);
perm_dest = vect_create_destination_var (gimple_assign_lhs (stmt), vectype);
for (i = 0; i < ncopies; i++)
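The '(gimple)NULL' casts in the tree-call-cdce.c and tree-vect-slp.c hunks above
are a consequence of the push operations becoming overloaded: as the ChangeLog
notes, VEC_quick_push and VEC_safe_push now have one overload taking T and one
taking T *.  Because 'gimple' is itself a pointer typedef, a bare NULL converts
equally well to both parameter types and the call is ambiguous; the cast selects
the by-value overload.  A contrived sketch of the ambiguity with stand-in
declarations (none of these names come from the patch):

  #include <stddef.h>

  /* 'stmt_p' stands in for GCC's 'gimple', which is itself a pointer typedef.  */
  typedef struct stmt_d *stmt_p;

  /* Shape of the two push overloads instantiated for a pointer element type.  */
  static void push (stmt_p obj) { (void) obj; }     /* element by value    */
  static void push (stmt_p *slot) { (void) slot; }  /* element by address  */

  static void
  example (void)
  {
    /* push (NULL);        <- ambiguous: NULL converts to stmt_p and stmt_p *.  */
    push ((stmt_p) NULL);  /* the cast picks the by-value overload.  */
  }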
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index eb6402b..e3f46ec 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -8862,7 +8862,7 @@ vrp_fold_stmt (gimple_stmt_iterator *si)
A NULL entry is used to mark the end of pairs which need to be
restored. */
-static VEC(tree,heap) *stack;
+static VEC(tree,heap) *equiv_stack;
/* A trivial wrapper so that we can present the generic jump threading
code with a simple API for simplifying statements. STMT is the
@@ -8930,7 +8930,7 @@ identify_jump_threads (void)
/* Allocate our unwinder stack to unwind any temporary equivalences
that might be recorded. */
- stack = VEC_alloc (tree, heap, 20);
+ equiv_stack = VEC_alloc (tree, heap, 20);
/* To avoid lots of silly node creation, we create a single
conditional and just modify it in-place when attempting to
@@ -8984,7 +8984,7 @@ identify_jump_threads (void)
if (e->flags & (EDGE_DFS_BACK | EDGE_COMPLEX))
continue;
- thread_across_edge (dummy, e, true, &stack,
+ thread_across_edge (dummy, e, true, &equiv_stack,
simplify_stmt_for_jump_threading);
}
}
@@ -9005,7 +9005,7 @@ static void
finalize_jump_threads (void)
{
thread_through_all_blocks (false);
- VEC_free (tree, heap, stack);
+ VEC_free (tree, heap, equiv_stack);
}
diff --git a/gcc/tree.c b/gcc/tree.c
index 6e864c3..e0bd2b5 100644
--- a/gcc/tree.c
+++ b/gcc/tree.c
@@ -6704,8 +6704,8 @@ simple_cst_equal (const_tree t1, const_tree t2)
for (idx = 0; idx < VEC_length (constructor_elt, v1); ++idx)
/* ??? Should we handle also fields here? */
- if (!simple_cst_equal (VEC_index (constructor_elt, v1, idx)->value,
- VEC_index (constructor_elt, v2, idx)->value))
+ if (!simple_cst_equal (VEC_index (constructor_elt, v1, idx).value,
+ VEC_index (constructor_elt, v2, idx).value))
return false;
return true;
}
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index a79872f..818fb24 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -7821,7 +7821,7 @@ loc_exp_dep_clear (variable var)
{
while (!VEC_empty (loc_exp_dep, VAR_LOC_DEP_VEC (var)))
{
- loc_exp_dep *led = VEC_last (loc_exp_dep, VAR_LOC_DEP_VEC (var));
+ loc_exp_dep *led = &VEC_last (loc_exp_dep, VAR_LOC_DEP_VEC (var));
if (led->next)
led->next->pprev = led->pprev;
if (led->pprev)
@@ -7865,7 +7865,7 @@ loc_exp_insert_dep (variable var, rtx x, htab_t vars)
else
{
VEC_quick_push (loc_exp_dep, VAR_LOC_DEP_VEC (var), NULL);
- led = VEC_last (loc_exp_dep, VAR_LOC_DEP_VEC (var));
+ led = &VEC_last (loc_exp_dep, VAR_LOC_DEP_VEC (var));
}
led->dv = var->dv;
led->value = x;
diff --git a/gcc/varasm.c b/gcc/varasm.c
index a1f0a23..b380a47 100644
--- a/gcc/varasm.c
+++ b/gcc/varasm.c
@@ -2871,8 +2871,8 @@ compare_constant (const tree t1, const tree t2)
for (idx = 0; idx < VEC_length (constructor_elt, v1); ++idx)
{
- constructor_elt *c1 = VEC_index (constructor_elt, v1, idx);
- constructor_elt *c2 = VEC_index (constructor_elt, v2, idx);
+ constructor_elt *c1 = &VEC_index (constructor_elt, v1, idx);
+ constructor_elt *c2 = &VEC_index (constructor_elt, v2, idx);
/* Check that each value is the same... */
if (!compare_constant (c1->value, c2->value))
diff --git a/gcc/vec.c b/gcc/vec.c
index 85274c4..51a55d9 100644
--- a/gcc/vec.c
+++ b/gcc/vec.c
@@ -1,7 +1,8 @@
/* Vector API for GNU compiler.
- Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010
+ Copyright (C) 2004, 2005, 2006, 2007, 2008, 2010, 2011, 2012
Free Software Foundation, Inc.
Contributed by Nathan Sidwell <nathan@codesourcery.com>
+ Re-implemented in C++ by Diego Novillo <dnovillo@google.com>
This file is part of GCC.
@@ -213,7 +214,7 @@ calculate_allocation (const struct vec_prefix *pfx, int reserve, bool exact)
trailing array is at VEC_OFFSET offset and consists of ELT_SIZE
sized elements. */
-static void *
+void *
vec_gc_o_reserve_1 (void *vec, int reserve, size_t vec_offset, size_t elt_size,
bool exact MEM_STAT_DECL)
{
@@ -246,61 +247,10 @@ vec_gc_o_reserve_1 (void *vec, int reserve, size_t vec_offset, size_t elt_size,
return vec;
}
-/* Ensure there are at least RESERVE free slots in VEC, growing
- exponentially. If RESERVE < 0 grow exactly, else grow
- exponentially. As a special case, if VEC is NULL, and RESERVE is
- 0, no vector will be created. */
-
-void *
-vec_gc_p_reserve (void *vec, int reserve MEM_STAT_DECL)
-{
- return vec_gc_o_reserve_1 (vec, reserve,
- sizeof (struct vec_prefix),
- sizeof (void *), false
- PASS_MEM_STAT);
-}
-
-/* Ensure there are at least RESERVE free slots in VEC, growing
- exactly. If RESERVE < 0 grow exactly, else grow exponentially. As
- a special case, if VEC is NULL, and RESERVE is 0, no vector will be
- created. */
-
-void *
-vec_gc_p_reserve_exact (void *vec, int reserve MEM_STAT_DECL)
-{
- return vec_gc_o_reserve_1 (vec, reserve,
- sizeof (struct vec_prefix),
- sizeof (void *), true
- PASS_MEM_STAT);
-}
-
-/* As for vec_gc_p_reserve, but for object vectors. The vector's
- trailing array is at VEC_OFFSET offset and consists of ELT_SIZE
- sized elements. */
-
-void *
-vec_gc_o_reserve (void *vec, int reserve, size_t vec_offset, size_t elt_size
- MEM_STAT_DECL)
-{
- return vec_gc_o_reserve_1 (vec, reserve, vec_offset, elt_size, false
- PASS_MEM_STAT);
-}
-
-/* As for vec_gc_p_reserve_exact, but for object vectors. The
- vector's trailing array is at VEC_OFFSET offset and consists of
- ELT_SIZE sized elements. */
-
-void *
-vec_gc_o_reserve_exact (void *vec, int reserve, size_t vec_offset,
- size_t elt_size MEM_STAT_DECL)
-{
- return vec_gc_o_reserve_1 (vec, reserve, vec_offset, elt_size, true
- PASS_MEM_STAT);
-}
/* As for vec_gc_o_reserve_1, but for heap allocated vectors. */
-static void *
+void *
vec_heap_o_reserve_1 (void *vec, int reserve, size_t vec_offset,
size_t elt_size, bool exact MEM_STAT_DECL)
{
@@ -328,47 +278,6 @@ vec_heap_o_reserve_1 (void *vec, int reserve, size_t vec_offset,
return vec;
}
-/* As for vec_gc_p_reserve, but for heap allocated vectors. */
-
-void *
-vec_heap_p_reserve (void *vec, int reserve MEM_STAT_DECL)
-{
- return vec_heap_o_reserve_1 (vec, reserve,
- sizeof (struct vec_prefix),
- sizeof (void *), false
- PASS_MEM_STAT);
-}
-
-/* As for vec_gc_p_reserve_exact, but for heap allocated vectors. */
-
-void *
-vec_heap_p_reserve_exact (void *vec, int reserve MEM_STAT_DECL)
-{
- return vec_heap_o_reserve_1 (vec, reserve,
- sizeof (struct vec_prefix),
- sizeof (void *), true
- PASS_MEM_STAT);
-}
-
-/* As for vec_gc_o_reserve, but for heap allocated vectors. */
-
-void *
-vec_heap_o_reserve (void *vec, int reserve, size_t vec_offset, size_t elt_size
- MEM_STAT_DECL)
-{
- return vec_heap_o_reserve_1 (vec, reserve, vec_offset, elt_size, false
- PASS_MEM_STAT);
-}
-
-/* As for vec_gc_o_reserve_exact, but for heap allocated vectors. */
-
-void *
-vec_heap_o_reserve_exact (void *vec, int reserve, size_t vec_offset,
- size_t elt_size MEM_STAT_DECL)
-{
- return vec_heap_o_reserve_1 (vec, reserve, vec_offset, elt_size, true
- PASS_MEM_STAT);
-}
/* Stack vectors are a little different. VEC_alloc turns into a call
to vec_stack_p_reserve_exact1 and passes in space allocated via a
@@ -450,28 +359,6 @@ vec_stack_o_reserve_1 (void *vec, int reserve, size_t vec_offset,
/* Grow a vector allocated on the stack. */
void *
-vec_stack_p_reserve (void *vec, int reserve MEM_STAT_DECL)
-{
- return vec_stack_o_reserve_1 (vec, reserve,
- sizeof (struct vec_prefix),
- sizeof (void *), false
- PASS_MEM_STAT);
-}
-
-/* Exact version of vec_stack_p_reserve. */
-
-void *
-vec_stack_p_reserve_exact (void *vec, int reserve MEM_STAT_DECL)
-{
- return vec_stack_o_reserve_1 (vec, reserve,
- sizeof (struct vec_prefix),
- sizeof (void *), true
- PASS_MEM_STAT);
-}
-
-/* Like vec_stack_p_reserve, but for objects. */
-
-void *
vec_stack_o_reserve (void *vec, int reserve, size_t vec_offset,
size_t elt_size MEM_STAT_DECL)
{
@@ -479,7 +366,7 @@ vec_stack_o_reserve (void *vec, int reserve, size_t vec_offset,
PASS_MEM_STAT);
}
-/* Like vec_stack_p_reserve_exact, but for objects. */
+/* Exact version of vec_stack_o_reserve. */
void *
vec_stack_o_reserve_exact (void *vec, int reserve, size_t vec_offset,
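The vec.c hunks above remove all of the per-flavor reserve wrappers and keep
only the two low-level workers, vec_gc_o_reserve_1 and vec_heap_o_reserve_1,
which are now extern; the typed wrappers that used to call them are replaced by
the vec_reserve and vec_reserve_exact templates declared in the vec.h diff that
follows.  The template bodies fall outside this excerpt; given those
declarations they presumably look roughly like this (a sketch for orientation,
not the patch's actual code):

  template<typename T, enum vec_allocation_t A>
  vec_t<T> *
  vec_reserve (vec_t<T> *vec_, int reserve MEM_STAT_DECL)
  {
    /* Dispatch on the allocation strategy selected at instantiation time.  */
    if (A == gc)
      return (vec_t<T> *) vec_gc_o_reserve_1 (vec_, reserve,
                                              offsetof (vec_t<T>, vec),
                                              sizeof (T), false
                                              PASS_MEM_STAT);
    else if (A == heap)
      return (vec_t<T> *) vec_heap_o_reserve_1 (vec_, reserve,
                                                offsetof (vec_t<T>, vec),
                                                sizeof (T), false
                                                PASS_MEM_STAT);
    else
      return (vec_t<T> *) vec_stack_o_reserve (vec_, reserve,
                                               offsetof (vec_t<T>, vec),
                                               sizeof (T) PASS_MEM_STAT);
  }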
diff --git a/gcc/vec.h b/gcc/vec.h
index cb87112..5fdb859 100644
--- a/gcc/vec.h
+++ b/gcc/vec.h
@@ -1,7 +1,8 @@
/* Vector API for GNU compiler.
- Copyright (C) 2004, 2005, 2007, 2008, 2009, 2010
+ Copyright (C) 2004, 2005, 2007, 2008, 2009, 2010, 2011, 2012
Free Software Foundation, Inc.
Contributed by Nathan Sidwell <nathan@codesourcery.com>
+ Re-implemented in C++ by Diego Novillo <dnovillo@google.com>
This file is part of GCC.
@@ -134,6 +135,153 @@ along with GCC; see the file COPYING3. If not see
*/
+#if ENABLE_CHECKING
+#define VEC_CHECK_INFO ,__FILE__,__LINE__,__FUNCTION__
+#define VEC_CHECK_DECL ,const char *file_,unsigned line_,const char *function_
+#define VEC_CHECK_PASS ,file_,line_,function_
+
+#define VEC_ASSERT(EXPR,OP,T,A) \
+ (void)((EXPR) ? 0 : (VEC_ASSERT_FAIL(OP,VEC(T,A)), 0))
+
+extern void vec_assert_fail (const char *, const char * VEC_CHECK_DECL)
+ ATTRIBUTE_NORETURN;
+#define VEC_ASSERT_FAIL(OP,VEC) vec_assert_fail (OP,#VEC VEC_CHECK_PASS)
+#else
+#define VEC_CHECK_INFO
+#define VEC_CHECK_DECL
+#define VEC_CHECK_PASS
+#define VEC_ASSERT(EXPR,OP,T,A) (void)(EXPR)
+#endif
+
+#define VEC(T,A) vec_t<T>
+
+enum vec_allocation_t { heap, gc, stack };
+
+struct vec_prefix
+{
+ unsigned num;
+ unsigned alloc;
+};
+
+/* Vector type, user visible. */
+template<typename T>
+struct GTY(()) vec_t
+{
+ vec_prefix prefix;
+ T vec[1];
+};
+
+/* Garbage collection support for vec_t. */
+
+template<typename T>
+void
+gt_ggc_mx (vec_t<T> *v)
+{
+ extern void gt_ggc_mx (T&);
+ for (unsigned i = 0; i < v->prefix.num; i++)
+ gt_ggc_mx (v->vec[i]);
+}
+
+
+/* PCH support for vec_t. */
+
+template<typename T>
+void
+gt_pch_nx (vec_t<T> *v)
+{
+ extern void gt_pch_nx (T&);
+ for (unsigned i = 0; i < v->prefix.num; i++)
+ gt_pch_nx (v->vec[i]);
+}
+
+template<typename T>
+void
+gt_pch_nx (vec_t<T *> *v, gt_pointer_operator op, void *cookie)
+{
+ for (unsigned i = 0; i < v->prefix.num; i++)
+ op (&(v->vec[i]), cookie);
+}
+
+template<typename T>
+void
+gt_pch_nx (vec_t<T> *v, gt_pointer_operator op, void *cookie)
+{
+ extern void gt_pch_nx (T *, gt_pointer_operator, void *);
+ for (unsigned i = 0; i < v->prefix.num; i++)
+ gt_pch_nx (&(v->vec[i]), op, cookie);
+}
+
+
+/* FIXME cxx-conversion. Remove these definitions and update all
+ calling sites. */
+/* Vector of integer-like object. */
+#define DEF_VEC_I(T) struct vec_swallow_trailing_semi
+#define DEF_VEC_ALLOC_I(T,A) struct vec_swallow_trailing_semi
+
+/* Vector of pointer to object. */
+#define DEF_VEC_P(T) struct vec_swallow_trailing_semi
+#define DEF_VEC_ALLOC_P(T,A) struct vec_swallow_trailing_semi
+
+/* Vector of object. */
+#define DEF_VEC_O(T) struct vec_swallow_trailing_semi
+#define DEF_VEC_ALLOC_O(T,A) struct vec_swallow_trailing_semi
+
+/* Vectors on the stack. */
+#define DEF_VEC_ALLOC_P_STACK(T) struct vec_swallow_trailing_semi
+#define DEF_VEC_ALLOC_O_STACK(T) struct vec_swallow_trailing_semi
+#define DEF_VEC_ALLOC_I_STACK(T) struct vec_swallow_trailing_semi
+
+/* Vectors of atomic types.  Atomic types do not need to have their
+ elements marked for GC and PCH. To avoid unnecessary traversals,
+ we provide template instantiations for the GC/PCH functions that
+ do not traverse the vector.
+
+ FIXME cxx-conversion - Once vec_t users are converted this can
+ be provided in some other way (e.g., adding an additional template
+ parameter to the vec_t class). */
+#define DEF_VEC_A(TYPE) \
+template<typename T> \
+void \
+gt_ggc_mx (vec_t<TYPE> *v ATTRIBUTE_UNUSED) \
+{ \
+} \
+ \
+template<typename T> \
+void \
+gt_pch_nx (vec_t<TYPE> *v ATTRIBUTE_UNUSED) \
+{ \
+} \
+ \
+template<typename T> \
+void \
+gt_pch_nx (vec_t<TYPE> *v ATTRIBUTE_UNUSED, \
+ gt_pointer_operator op ATTRIBUTE_UNUSED, \
+ void *cookie ATTRIBUTE_UNUSED) \
+{ \
+} \
+struct vec_swallow_trailing_semi
+
+#define DEF_VEC_ALLOC_A(T,A) struct vec_swallow_trailing_semi
+
+/* Support functions for stack vectors. */
+extern void *vec_stack_p_reserve_exact_1 (int, void *);
+extern void *vec_stack_o_reserve (void *, int, size_t, size_t MEM_STAT_DECL);
+extern void *vec_stack_o_reserve_exact (void *, int, size_t, size_t
+ MEM_STAT_DECL);
+extern void vec_stack_free (void *);
+
+/* Reallocate an array of elements with prefix. */
+template<typename T, enum vec_allocation_t A>
+extern vec_t<T> *vec_reserve (vec_t<T> *, int MEM_STAT_DECL);
+
+template<typename T, enum vec_allocation_t A>
+extern vec_t<T> *vec_reserve_exact (vec_t<T> *, int MEM_STAT_DECL);
+
+extern void dump_vec_loc_statistics (void);
+extern void ggc_free (void *);
+extern void vec_heap_free (void *);
+
+
/* Macros to invoke API calls. A single macro works for both pointer
and object vectors, but the argument and return types might well be
different. In each macro, T is the typedef of the vector elements,
@@ -148,7 +296,14 @@ along with GCC; see the file COPYING3. If not see
Return the number of active elements in V. V can be NULL, in which
case zero is returned. */
-#define VEC_length(T,V) (VEC_OP(T,base,length)(VEC_BASE(V)))
+#define VEC_length(T,V) (VEC_length_1<T> (V))
+
+template<typename T>
+static inline unsigned
+VEC_length_1 (const vec_t<T> *vec_)
+{
+ return vec_ ? vec_->prefix.num : 0;
+}
/* Check if vector is empty
@@ -156,7 +311,30 @@ along with GCC; see the file COPYING3. If not see
Return nonzero if V is an empty vector (or V is NULL), zero otherwise. */
-#define VEC_empty(T,V) (VEC_length (T,V) == 0)
+#define VEC_empty(T,V) (VEC_empty_1<T> (V))
+
+template<typename T>
+static inline bool
+VEC_empty_1 (const vec_t<T> *vec_)
+{
+ return VEC_length (T, vec_) == 0;
+}
+
+
+/* Get the address of the array of elements
+ T *VEC_T_address (VEC(T) v)
+
+ If you need to directly manipulate the array (for instance, you
+ want to feed it to qsort), use this accessor. */
+
+#define VEC_address(T,V) (VEC_address_1<T> (V))
+
+template<typename T>
+static inline T *
+VEC_address_1 (vec_t<T> *vec_)
+{
+ return vec_ ? vec_->vec : 0;
+}
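
  As an illustrative sketch (names hypothetical, assuming the usual GCC
  headers are already included), this is the qsort idiom the comment
  above refers to; the VEC_qsort convenience macro defined further down
  wraps exactly this call.

  static int
  cmp_int (const void *pa, const void *pb)
  {
    int a = *(const int *) pa;
    int b = *(const int *) pb;
    return a < b ? -1 : a > b;
  }

  static void
  sort_int_vec (VEC(int,heap) *v)
  {
    /* Feed the underlying array directly to qsort.  */
    qsort (VEC_address (int, v), VEC_length (int, v), sizeof (int), cmp_int);
  }
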
/* Get the final element of the vector.
@@ -166,16 +344,42 @@ along with GCC; see the file COPYING3. If not see
Return the final element. V must not be empty. */
-#define VEC_last(T,V) (VEC_OP(T,base,last)(VEC_BASE(V) VEC_CHECK_INFO))
+#define VEC_last(T,V) (VEC_last_1<T> (V VEC_CHECK_INFO))
+
+template<typename T>
+static inline T&
+VEC_last_1 (vec_t<T> *vec_ VEC_CHECK_DECL)
+{
+ VEC_ASSERT (vec_ && vec_->prefix.num, "last", T, base);
+ return vec_->vec[vec_->prefix.num - 1];
+}
+
/* Index into vector
T VEC_T_index(VEC(T) *v, unsigned ix); // Integer
T VEC_T_index(VEC(T) *v, unsigned ix); // Pointer
T *VEC_T_index(VEC(T) *v, unsigned ix); // Object
- Return the IX'th element. If IX must be in the domain of V. */
+ Return the IX'th element. IX must be in the domain of V. */
+
+#define VEC_index(T,V,I) (VEC_index_1<T> (V, I VEC_CHECK_INFO))
+
+template<typename T>
+static inline T&
+VEC_index_1 (vec_t<T> *vec_, unsigned ix_ VEC_CHECK_DECL)
+{
+ VEC_ASSERT (vec_ && ix_ < vec_->prefix.num, "index", T, base);
+ return vec_->vec[ix_];
+}
+
+template<typename T>
+static inline const T&
+VEC_index_1 (const vec_t<T> *vec_, unsigned ix_ VEC_CHECK_DECL)
+{
+ VEC_ASSERT (vec_ && ix_ < vec_->prefix.num, "index", T, base);
+ return vec_->vec[ix_];
+}
-#define VEC_index(T,V,I) (VEC_OP(T,base,index)(VEC_BASE(V),I VEC_CHECK_INFO))
/* Iterate over vector
int VEC_T_iterate(VEC(T) *v, unsigned ix, T &ptr); // Integer
@@ -189,7 +393,39 @@ along with GCC; see the file COPYING3. If not see
for (ix = 0; VEC_iterate(T,v,ix,ptr); ix++)
continue; */
-#define VEC_iterate(T,V,I,P) (VEC_OP(T,base,iterate)(VEC_BASE(V),I,&(P)))
+#define VEC_iterate(T,V,I,P) (VEC_iterate_1<T> (V, I, &(P)))
+
+template<typename T>
+static inline bool
+VEC_iterate_1 (const vec_t<T> *vec_, unsigned ix_, T *ptr)
+{
+ if (vec_ && ix_ < vec_->prefix.num)
+ {
+ *ptr = vec_->vec[ix_];
+ return true;
+ }
+ else
+ {
+ *ptr = 0;
+ return false;
+ }
+}
+
+template<typename T>
+static inline bool
+VEC_iterate_1 (vec_t<T> *vec_, unsigned ix_, T **ptr)
+{
+ if (vec_ && ix_ < vec_->prefix.num)
+ {
+ *ptr = &vec_->vec[ix_];
+ return true;
+ }
+ else
+ {
+ *ptr = 0;
+ return false;
+ }
+}
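
  A minimal iteration sketch following the loop shape quoted in the
  comment above; the vector of trees and the call to debug_tree are
  only placeholders for whatever per-element work a real pass does.

  static void
  walk_trees (VEC(tree,gc) *v)
  {
    unsigned ix;
    tree t;

    /* VEC_iterate returns false once IX runs past the last element,
       so this also handles V == NULL.  */
    for (ix = 0; VEC_iterate (tree, v, ix, t); ix++)
      debug_tree (t);
  }
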
/* Convenience macro for forward iteration. */
@@ -208,31 +444,99 @@ along with GCC; see the file COPYING3. If not see
VEC_iterate (T, (V), (I), (P)); \
(I)--)
+
+/* Use these to determine the required size and initialization of a
+ vector embedded within another structure (as the final member).
+
+ size_t VEC_T_embedded_size(int reserve);
+ void VEC_T_embedded_init(VEC(T) *v, int reserve);
+
+ These allow the caller to perform the memory allocation. */
+
+#define VEC_embedded_size(T,N) (VEC_embedded_size_1<T> (N))
+
+template<typename T>
+static inline size_t
+VEC_embedded_size_1 (int alloc_)
+{
+ return offsetof (vec_t<T>, vec) + alloc_ * sizeof (T);
+}
+
+#define VEC_embedded_init(T,O,N) (VEC_embedded_init_1<T> (O, N))
+
+template<typename T>
+static inline void
+VEC_embedded_init_1 (vec_t<T> *vec_, int alloc_)
+{
+ vec_->prefix.num = 0;
+ vec_->prefix.alloc = alloc_;
+}
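
  A sketch of the embedded-vector pattern described above, with a
  hypothetical wrapper struct: the caller sizes and performs the
  allocation itself (here with XNEWVAR, i.e. xmalloc) and then
  initializes the trailing vector in place.

  struct int_set
  {
    unsigned id;
    vec_t<int> elts;   /* Must be the final member.  */
  };

  static struct int_set *
  make_int_set (unsigned id, int reserve)
  {
    size_t sz = offsetof (struct int_set, elts)
                + VEC_embedded_size (int, reserve);
    struct int_set *s = XNEWVAR (struct int_set, sz);
    s->id = id;
    VEC_embedded_init (int, &s->elts, reserve);
    return s;
  }
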
+
+
/* Allocate new vector.
VEC(T,A) *VEC_T_A_alloc(int reserve);
Allocate a new vector with space for RESERVE objects. If RESERVE
- is zero, NO vector is created. */
+ is zero, NO vector is created.
+
+ We support a vector which starts out with space on the stack and
+ switches to heap space when forced to reallocate. This works a
+ little differently from the other allocation strategies: for stack
+ vectors, VEC_alloc expands to a call to VEC_alloc_1 that uses
+ XALLOCAVAR, and therefore alloca, to obtain the initial space.
+ Since alloca cannot be usefully called in an inline function,
+ VEC_alloc must always be a macro.
+
+ Only the initial allocation will be made using alloca, so pass a
+ reasonable estimate that doesn't use too much stack space; don't
+ pass zero. Don't return a VEC(TYPE,stack) vector from the function
+ which allocated it. */
+
+#define VEC_alloc(T,A,N) \
+ ((A == stack) \
+ ? VEC_alloc_1 (N, \
+ XALLOCAVAR (vec_t<T>, \
+ VEC_embedded_size_1<T> (N))) \
+ : VEC_alloc_1<T, A> (N MEM_STAT_INFO))
+
+template<typename T, enum vec_allocation_t A>
+static inline vec_t<T> *
+VEC_alloc_1 (int alloc_ MEM_STAT_DECL)
+{
+ return vec_reserve_exact<T, A> (NULL, alloc_ PASS_MEM_STAT);
+}
+
+template<typename T>
+static inline vec_t<T> *
+VEC_alloc_1 (int alloc_, vec_t<T> *space)
+{
+ return (vec_t<T> *) vec_stack_p_reserve_exact_1 (alloc_, space);
+}
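
  A sketch of the stack-vector usage this comment describes (names
  hypothetical): the initial space comes from alloca at the VEC_alloc
  call site, later growth falls back to the heap, and the vector is
  freed before the function returns.

  static unsigned
  count_nonnull (tree *elts, unsigned n)
  {
    VEC(tree,stack) *scratch = VEC_alloc (tree, stack, 16);
    unsigned i, result;

    for (i = 0; i < n; i++)
      if (elts[i])
        VEC_safe_push (tree, stack, scratch, elts[i]);

    result = VEC_length (tree, scratch);
    VEC_free (tree, stack, scratch);   /* Never return a stack vector.  */
    return result;
  }
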
-#define VEC_alloc(T,A,N) (VEC_OP(T,A,alloc)(N MEM_STAT_INFO))
/* Free a vector.
void VEC_T_A_free(VEC(T,A) *&);
Free a vector and set it to NULL. */
-#define VEC_free(T,A,V) (VEC_OP(T,A,free)(&V))
+#define VEC_free(T,A,V) (VEC_free_1<T, A> (&V))
-/* Use these to determine the required size and initialization of a
- vector embedded within another structure (as the final member).
-
- size_t VEC_T_embedded_size(int reserve);
- void VEC_T_embedded_init(VEC(T) *v, int reserve);
-
- These allow the caller to perform the memory allocation. */
+template<typename T, enum vec_allocation_t A>
+static inline void
+VEC_free_1 (vec_t<T> **vec_)
+{
+ if (*vec_)
+ {
+ if (A == heap)
+ vec_heap_free (*vec_);
+ else if (A == gc)
+ ggc_free (*vec_);
+ else if (A == stack)
+ vec_stack_free (*vec_);
+ }
+ *vec_ = NULL;
+}
-#define VEC_embedded_size(T,N) (VEC_OP(T,base,embedded_size)(N))
-#define VEC_embedded_init(T,O,N) (VEC_OP(T,base,embedded_init)(VEC_BASE(O),N))
/* Copy a vector.
VEC(T,A) *VEC_T_A_copy(VEC(T) *);
@@ -240,7 +544,24 @@ along with GCC; see the file COPYING3. If not see
Copy the live elements of a vector into a new vector. The new and
old vectors need not be allocated by the same mechanism. */
-#define VEC_copy(T,A,V) (VEC_OP(T,A,copy)(VEC_BASE(V) MEM_STAT_INFO))
+#define VEC_copy(T,A,V) (VEC_copy_1<T, A> (V MEM_STAT_INFO))
+
+template<typename T, enum vec_allocation_t A>
+static inline vec_t<T> *
+VEC_copy_1 (vec_t<T> *vec_ MEM_STAT_DECL)
+{
+ size_t len_ = vec_ ? vec_->prefix.num : 0;
+ vec_t<T> *new_vec_ = NULL;
+
+ if (len_)
+ {
+ new_vec_ = vec_reserve_exact<T, A> (NULL, len_ PASS_MEM_STAT);
+ new_vec_->prefix.num = len_;
+ memcpy (new_vec_->vec, vec_->vec, sizeof (T) * len_);
+ }
+ return new_vec_;
+}
+
/* Determine if a vector has additional capacity.
@@ -252,8 +573,18 @@ along with GCC; see the file COPYING3. If not see
nonzero in exactly the same circumstances that VEC_T_reserve
will. */
-#define VEC_space(T,V,R) \
- (VEC_OP(T,base,space)(VEC_BASE(V),R VEC_CHECK_INFO))
+#define VEC_space(T,V,R) (VEC_space_1<T> (V, R VEC_CHECK_INFO))
+
+template<typename T>
+static inline int
+VEC_space_1 (vec_t<T> *vec_, int alloc_ VEC_CHECK_DECL)
+{
+ VEC_ASSERT (alloc_ >= 0, "space", T, base);
+ return vec_
+ ? vec_->prefix.alloc - vec_->prefix.num >= (unsigned)alloc_
+ : !alloc_;
+}
+
/* Reserve space.
int VEC_T_A_reserve(VEC(T,A) *&v, int reserve);
@@ -264,7 +595,20 @@ along with GCC; see the file COPYING3. If not see
occurred. */
#define VEC_reserve(T,A,V,R) \
- (VEC_OP(T,A,reserve)(&(V),R VEC_CHECK_INFO MEM_STAT_INFO))
+ (VEC_reserve_1<T, A> (&(V), (int)(R) VEC_CHECK_INFO MEM_STAT_INFO))
+
+template<typename T, enum vec_allocation_t A>
+static inline int
+VEC_reserve_1 (vec_t<T> **vec_, int alloc_ VEC_CHECK_DECL MEM_STAT_DECL)
+{
+ int extend = !VEC_space_1 (*vec_, alloc_ VEC_CHECK_PASS);
+
+ if (extend)
+ *vec_ = vec_reserve<T, A> (*vec_, alloc_ PASS_MEM_STAT);
+
+ return extend;
+}
+
/* Reserve space exactly.
int VEC_T_A_reserve_exact(VEC(T,A) *&v, int reserve);
@@ -275,7 +619,20 @@ along with GCC; see the file COPYING3. If not see
occurred. */
#define VEC_reserve_exact(T,A,V,R) \
- (VEC_OP(T,A,reserve_exact)(&(V),R VEC_CHECK_INFO MEM_STAT_INFO))
+ (VEC_reserve_exact_1<T, A> (&(V), R VEC_CHECK_INFO MEM_STAT_INFO))
+
+template<typename T, enum vec_allocation_t A>
+static inline int
+VEC_reserve_exact_1 (vec_t<T> **vec_, int alloc_ VEC_CHECK_DECL MEM_STAT_DECL)
+{
+ int extend = !VEC_space_1 (*vec_, alloc_ VEC_CHECK_PASS);
+
+ if (extend)
+ *vec_ = vec_reserve_exact<T, A> (*vec_, alloc_ PASS_MEM_STAT);
+
+ return extend;
+}
+
/* Copy elements with no reallocation
void VEC_T_splice (VEC(T) *dst, VEC(T) *src); // Integer
@@ -287,8 +644,23 @@ along with GCC; see the file COPYING3. If not see
often will be. DST is assumed to have sufficient headroom
available. */
-#define VEC_splice(T,DST,SRC) \
- (VEC_OP(T,base,splice)(VEC_BASE(DST), VEC_BASE(SRC) VEC_CHECK_INFO))
+#define VEC_splice(T,DST,SRC) (VEC_splice_1<T> (DST, SRC VEC_CHECK_INFO))
+
+template<typename T>
+static inline void
+VEC_splice_1 (vec_t<T> *dst_, vec_t<T> *src_ VEC_CHECK_DECL)
+{
+ if (src_)
+ {
+ unsigned len_ = src_->prefix.num;
+ VEC_ASSERT (dst_->prefix.num + len_ <= dst_->prefix.alloc, "splice",
+ T, base);
+
+ memcpy (&dst_->vec[dst_->prefix.num], &src_->vec[0], len_ * sizeof (T));
+ dst_->prefix.num += len_;
+ }
+}
+
/* Copy elements with reallocation
void VEC_T_safe_splice (VEC(T,A) *&dst, VEC(T) *src); // Integer
@@ -301,7 +673,21 @@ along with GCC; see the file COPYING3. If not see
reallocated if needed. */
#define VEC_safe_splice(T,A,DST,SRC) \
- (VEC_OP(T,A,safe_splice)(&(DST), VEC_BASE(SRC) VEC_CHECK_INFO MEM_STAT_INFO))
+ (VEC_safe_splice_1<T, A> (&(DST), SRC VEC_CHECK_INFO MEM_STAT_INFO))
+
+template<typename T, enum vec_allocation_t A>
+static inline void
+VEC_safe_splice_1 (vec_t<T> **dst_, vec_t<T> *src_ VEC_CHECK_DECL MEM_STAT_DECL)
+{
+ if (src_)
+ {
+ VEC_reserve_exact_1<T, A> (dst_, src_->prefix.num
+ VEC_CHECK_PASS MEM_STAT_INFO);
+
+ VEC_splice_1 (*dst_, src_ VEC_CHECK_PASS);
+ }
+}
+
/* Push object with no reallocation
T *VEC_T_quick_push (VEC(T) *v, T obj); // Integer
@@ -313,8 +699,31 @@ along with GCC; see the file COPYING3. If not see
case NO initialization is performed. There must
be sufficient space in the vector. */
-#define VEC_quick_push(T,V,O) \
- (VEC_OP(T,base,quick_push)(VEC_BASE(V),O VEC_CHECK_INFO))
+#define VEC_quick_push(T,V,O) (VEC_quick_push_1<T> (V, O VEC_CHECK_INFO))
+
+template<typename T>
+static inline T &
+VEC_quick_push_1 (vec_t<T> *vec_, T obj_ VEC_CHECK_DECL)
+{
+ VEC_ASSERT (vec_->prefix.num < vec_->prefix.alloc, "push", T, base);
+ vec_->vec[vec_->prefix.num] = obj_;
+ T &val_ = vec_->vec[vec_->prefix.num];
+ vec_->prefix.num++;
+ return val_;
+}
+
+template<typename T>
+static inline T *
+VEC_quick_push_1 (vec_t<T> *vec_, const T *ptr_ VEC_CHECK_DECL)
+{
+ T *slot_;
+ VEC_ASSERT (vec_->prefix.num < vec_->prefix.alloc, "push", T, base);
+ slot_ = &vec_->vec[vec_->prefix.num++];
+ if (ptr_)
+ *slot_ = *ptr_;
+ return slot_;
+}
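
  Usage sketch (argument names hypothetical): when the final size is
  known up front, reserving once and then using quick_push avoids a
  capacity check per element.

  static VEC(tree,heap) *
  copy_operands (tree *ops, unsigned n)
  {
    VEC(tree,heap) *v = NULL;
    unsigned i;

    /* One allocation of exactly N slots, then unchecked pushes.  */
    VEC_reserve_exact (tree, heap, v, (int) n);
    for (i = 0; i < n; i++)
      VEC_quick_push (tree, v, ops[i]);
    return v;
  }
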
+
/* Push object with reallocation
T *VEC_T_A_safe_push (VEC(T,A) *&v, T obj); // Integer
@@ -326,7 +735,24 @@ along with GCC; see the file COPYING3. If not see
case NO initialization is performed. Reallocates V, if needed. */
#define VEC_safe_push(T,A,V,O) \
- (VEC_OP(T,A,safe_push)(&(V),O VEC_CHECK_INFO MEM_STAT_INFO))
+ (VEC_safe_push_1<T, A> (&(V), O VEC_CHECK_INFO MEM_STAT_INFO))
+
+template<typename T, enum vec_allocation_t A>
+static inline T &
+VEC_safe_push_1 (vec_t<T> **vec_, T obj_ VEC_CHECK_DECL MEM_STAT_DECL)
+{
+ VEC_reserve_1<T, A> (vec_, 1 VEC_CHECK_PASS PASS_MEM_STAT);
+ return VEC_quick_push_1 (*vec_, obj_ VEC_CHECK_PASS);
+}
+
+template<typename T, enum vec_allocation_t A>
+static inline T *
+VEC_safe_push_1 (vec_t<T> **vec_, const T *ptr_ VEC_CHECK_DECL MEM_STAT_DECL)
+{
+ VEC_reserve_1<T, A> (vec_, 1 VEC_CHECK_PASS PASS_MEM_STAT);
+ return VEC_quick_push_1 (*vec_, ptr_ VEC_CHECK_PASS);
+}
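
  A sketch of the two push overloads for object vectors (the element
  type here is hypothetical): pushing through a pointer copies the
  pointed-to object, and a NULL pointer merely reserves an
  uninitialized slot.

  typedef struct { int src, dst; } edge_pair;
  DEF_VEC_O (edge_pair);
  DEF_VEC_ALLOC_O (edge_pair, heap);

  static void
  push_pair (VEC(edge_pair,heap) **v, int s, int d)
  {
    edge_pair p;
    p.src = s;
    p.dst = d;
    VEC_safe_push (edge_pair, heap, *v, &p);   /* Pointer overload.  */
  }
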
+
/* Pop element off end
T VEC_T_pop (VEC(T) *v); // Integer
@@ -336,7 +762,16 @@ along with GCC; see the file COPYING3. If not see
Pop the last element off the end. Returns the element popped, for
pointer vectors. */
-#define VEC_pop(T,V) (VEC_OP(T,base,pop)(VEC_BASE(V) VEC_CHECK_INFO))
+#define VEC_pop(T,V) (VEC_pop_1<T> (V VEC_CHECK_INFO))
+
+template<typename T>
+static inline T&
+VEC_pop_1 (vec_t<T> *vec_ VEC_CHECK_DECL)
+{
+ VEC_ASSERT (vec_->prefix.num, "pop", T, base);
+ return vec_->vec[--vec_->prefix.num];
+}
+
/* Truncate to specific length
void VEC_T_truncate (VEC(T) *v, unsigned len);
@@ -344,8 +779,18 @@ along with GCC; see the file COPYING3. If not see
Set the length as specified. The new length must be less than or
equal to the current length. This is an O(1) operation. */
-#define VEC_truncate(T,V,I) \
- (VEC_OP(T,base,truncate)(VEC_BASE(V),I VEC_CHECK_INFO))
+#define VEC_truncate(T,V,I) \
+ (VEC_truncate_1<T> (V, (unsigned)(I) VEC_CHECK_INFO))
+
+template<typename T>
+static inline void
+VEC_truncate_1 (vec_t<T> *vec_, unsigned size_ VEC_CHECK_DECL)
+{
+ VEC_ASSERT (vec_ ? vec_->prefix.num >= size_ : !size_, "truncate", T, base);
+ if (vec_)
+ vec_->prefix.num = size_;
+}
+
/* Grow to a specific length.
void VEC_T_A_safe_grow (VEC(T,A) *&v, int len);
@@ -355,7 +800,20 @@ along with GCC; see the file COPYING3. If not see
uninitialized. */
#define VEC_safe_grow(T,A,V,I) \
- (VEC_OP(T,A,safe_grow)(&(V),I VEC_CHECK_INFO MEM_STAT_INFO))
+ (VEC_safe_grow_1<T, A> (&(V), (int)(I) VEC_CHECK_INFO MEM_STAT_INFO))
+
+template<typename T, enum vec_allocation_t A>
+static inline void
+VEC_safe_grow_1 (vec_t<T> **vec_, int size_ VEC_CHECK_DECL MEM_STAT_DECL)
+{
+ VEC_ASSERT (size_ >= 0 && VEC_length (T, *vec_) <= (unsigned)size_,
+ "grow", T, A);
+ VEC_reserve_exact_1<T, A> (vec_,
+ size_ - (int)(*vec_ ? (*vec_)->prefix.num : 0)
+ VEC_CHECK_PASS PASS_MEM_STAT);
+ (*vec_)->prefix.num = size_;
+}
+
/* Grow to a specific length.
void VEC_T_A_safe_grow_cleared (VEC(T,A) *&v, int len);
@@ -364,8 +822,21 @@ along with GCC; see the file COPYING3. If not see
long or longer than the current length. The new elements are
initialized to zero. */
-#define VEC_safe_grow_cleared(T,A,V,I) \
- (VEC_OP(T,A,safe_grow_cleared)(&(V),I VEC_CHECK_INFO MEM_STAT_INFO))
+#define VEC_safe_grow_cleared(T,A,V,I) \
+ (VEC_safe_grow_cleared_1<T,A> (&(V), (int)(I) \
+ VEC_CHECK_INFO MEM_STAT_INFO))
+
+template<typename T, enum vec_allocation_t A>
+static inline void
+VEC_safe_grow_cleared_1 (vec_t<T> **vec_, int size_ VEC_CHECK_DECL
+ MEM_STAT_DECL)
+{
+ int oldsize = VEC_length (T, *vec_);
+ VEC_safe_grow_1<T, A> (vec_, size_ VEC_CHECK_PASS PASS_MEM_STAT);
+ memset (&(VEC_address (T, *vec_)[oldsize]), 0,
+ sizeof (T) * (size_ - oldsize));
+}
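
  Usage sketch (the uid-keyed map is hypothetical): the common pattern
  of a vector indexed by some id, grown and zero-filled on demand
  before the slot is written.

  static void
  record_decl (VEC(tree,gc) **map, unsigned uid, tree decl)
  {
    if (VEC_length (tree, *map) <= uid)
      VEC_safe_grow_cleared (tree, gc, *map, uid + 1);
    VEC_replace (tree, *map, uid, decl);
  }
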
+
/* Replace element
T VEC_T_replace (VEC(T) *v, unsigned ix, T val); // Integer
@@ -379,20 +850,57 @@ along with GCC; see the file COPYING3. If not see
performed. */
#define VEC_replace(T,V,I,O) \
- (VEC_OP(T,base,replace)(VEC_BASE(V),I,O VEC_CHECK_INFO))
+ (VEC_replace_1<T> (V, (unsigned)(I), O VEC_CHECK_INFO))
+
+template<typename T>
+static inline T&
+VEC_replace_1 (vec_t<T> *vec_, unsigned ix_, T obj_ VEC_CHECK_DECL)
+{
+ VEC_ASSERT (ix_ < vec_->prefix.num, "replace", T, base);
+ vec_->vec[ix_] = obj_;
+ return vec_->vec[ix_];
+}
+
/* Insert object with no reallocation
- T *VEC_T_quick_insert (VEC(T) *v, unsigned ix, T val); // Integer
- T *VEC_T_quick_insert (VEC(T) *v, unsigned ix, T val); // Pointer
- T *VEC_T_quick_insert (VEC(T) *v, unsigned ix, T *val); // Object
+ void VEC_T_quick_insert (VEC(T) *v, unsigned ix, T val); // Integer
+ void VEC_T_quick_insert (VEC(T) *v, unsigned ix, T val); // Pointer
+ void VEC_T_quick_insert (VEC(T) *v, unsigned ix, T *val); // Object
- Insert an element, VAL, at the IXth position of V. Return a pointer
- to the slot created. For vectors of object, the new value can be
- NULL, in which case no initialization of the inserted slot takes
- place. There must be sufficient space. */
+ Insert an element, VAL, at the IXth position of V. For vectors of
+ object, the new value can be NULL, in which case no initialization
+ of the inserted slot takes place. There must be sufficient space. */
#define VEC_quick_insert(T,V,I,O) \
- (VEC_OP(T,base,quick_insert)(VEC_BASE(V),I,O VEC_CHECK_INFO))
+ (VEC_quick_insert_1<T> (V,I,O VEC_CHECK_INFO))
+
+template<typename T>
+static inline void
+VEC_quick_insert_1 (vec_t<T> *vec_, unsigned ix_, T obj_ VEC_CHECK_DECL)
+{
+ T *slot_;
+
+ VEC_ASSERT (vec_->prefix.num < vec_->prefix.alloc, "insert", T, base);
+ VEC_ASSERT (ix_ <= vec_->prefix.num, "insert", T, base);
+ slot_ = &vec_->vec[ix_];
+ memmove (slot_ + 1, slot_, (vec_->prefix.num++ - ix_) * sizeof (T));
+ *slot_ = obj_;
+}
+
+template<typename T>
+static inline void
+VEC_quick_insert_1 (vec_t<T> *vec_, unsigned ix_, const T *ptr_ VEC_CHECK_DECL)
+{
+ T *slot_;
+
+ VEC_ASSERT (vec_->prefix.num < vec_->prefix.alloc, "insert", T, base);
+ VEC_ASSERT (ix_ <= vec_->prefix.num, "insert", T, base);
+ slot_ = &vec_->vec[ix_];
+ memmove (slot_ + 1, slot_, (vec_->prefix.num++ - ix_) * sizeof (T));
+ if (ptr_)
+ *slot_ = *ptr_;
+}
+
/* Insert object with reallocation
T *VEC_T_A_safe_insert (VEC(T,A) *&v, unsigned ix, T val); // Integer
@@ -405,31 +913,70 @@ along with GCC; see the file COPYING3. If not see
place. Reallocate V, if necessary. */
#define VEC_safe_insert(T,A,V,I,O) \
- (VEC_OP(T,A,safe_insert)(&(V),I,O VEC_CHECK_INFO MEM_STAT_INFO))
+ (VEC_safe_insert_1<T, A> (&(V),I,O VEC_CHECK_INFO MEM_STAT_INFO))
+
+template<typename T, enum vec_allocation_t A>
+static inline void
+VEC_safe_insert_1 (vec_t<T> **vec_, unsigned ix_, T obj_
+ VEC_CHECK_DECL MEM_STAT_DECL)
+{
+ VEC_reserve_1<T, A> (vec_, 1 VEC_CHECK_PASS PASS_MEM_STAT);
+ VEC_quick_insert_1 (*vec_, ix_, obj_ VEC_CHECK_PASS);
+}
+
+template<typename T, enum vec_allocation_t A>
+static inline void
+VEC_safe_insert_1 (vec_t<T> **vec_, unsigned ix_, T *ptr_
+ VEC_CHECK_DECL MEM_STAT_DECL)
+{
+ VEC_reserve_1<T, A> (vec_, 1 VEC_CHECK_PASS PASS_MEM_STAT);
+ VEC_quick_insert_1 (*vec_, ix_, ptr_ VEC_CHECK_PASS);
+}
+
+
/* Remove element retaining order
- T VEC_T_ordered_remove (VEC(T) *v, unsigned ix); // Integer
- T VEC_T_ordered_remove (VEC(T) *v, unsigned ix); // Pointer
+ void VEC_T_ordered_remove (VEC(T) *v, unsigned ix); // Integer
+ void VEC_T_ordered_remove (VEC(T) *v, unsigned ix); // Pointer
void VEC_T_ordered_remove (VEC(T) *v, unsigned ix); // Object
Remove an element from the IXth position of V. Ordering of
- remaining elements is preserved. For pointer vectors returns the
- removed object. This is an O(N) operation due to a memmove. */
+ remaining elements is preserved. This is an O(N) operation due to
+ a memmove. */
#define VEC_ordered_remove(T,V,I) \
- (VEC_OP(T,base,ordered_remove)(VEC_BASE(V),I VEC_CHECK_INFO))
+ (VEC_ordered_remove_1<T> (V,I VEC_CHECK_INFO))
+
+template<typename T>
+static inline void
+VEC_ordered_remove_1 (vec_t<T> *vec_, unsigned ix_ VEC_CHECK_DECL)
+{
+ T *slot_;
+ VEC_ASSERT (ix_ < vec_->prefix.num, "remove", T, base);
+ slot_ = &vec_->vec[ix_];
+ memmove (slot_, slot_ + 1, (--vec_->prefix.num - ix_) * sizeof (T));
+}
+
/* Remove element destroying order
- T VEC_T_unordered_remove (VEC(T) *v, unsigned ix); // Integer
- T VEC_T_unordered_remove (VEC(T) *v, unsigned ix); // Pointer
+ void VEC_T_unordered_remove (VEC(T) *v, unsigned ix); // Integer
+ void VEC_T_unordered_remove (VEC(T) *v, unsigned ix); // Pointer
void VEC_T_unordered_remove (VEC(T) *v, unsigned ix); // Object
- Remove an element from the IXth position of V. Ordering of
- remaining elements is destroyed. For pointer vectors returns the
- removed object. This is an O(1) operation. */
+ Remove an element from the IXth position of V. Ordering of
+ remaining elements is destroyed. This is an O(1) operation. */
#define VEC_unordered_remove(T,V,I) \
- (VEC_OP(T,base,unordered_remove)(VEC_BASE(V),I VEC_CHECK_INFO))
+ (VEC_unordered_remove_1<T> (V,I VEC_CHECK_INFO))
+
+template<typename T>
+static inline void
+VEC_unordered_remove_1 (vec_t<T> *vec_, unsigned ix_ VEC_CHECK_DECL)
+{
+ VEC_ASSERT (ix_ < vec_->prefix.num, "remove", T, base);
+ vec_->vec[ix_] = vec_->vec[--vec_->prefix.num];
+}
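
  Sketch of in-place filtering with the O(1) unordered remove; keep_p
  is a hypothetical predicate.  Note that IX is only advanced when the
  element is kept, since removal shrinks the vector.

  static void
  filter_vec (VEC(tree,heap) *v, bool (*keep_p) (tree))
  {
    unsigned ix = 0;

    while (ix < VEC_length (tree, v))
      {
        if (keep_p (VEC_index (tree, v, ix)))
          ix++;
        else
          VEC_unordered_remove (tree, v, ix);
      }
  }
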
+
/* Remove a block of elements
void VEC_T_block_remove (VEC(T) *v, unsigned ix, unsigned len);
@@ -438,22 +985,27 @@ along with GCC; see the file COPYING3. If not see
This is an O(N) operation due to memmove. */
#define VEC_block_remove(T,V,I,L) \
- (VEC_OP(T,base,block_remove)(VEC_BASE(V),I,L VEC_CHECK_INFO))
+ (VEC_block_remove_1<T> (V, I, L VEC_CHECK_INFO))
-/* Get the address of the array of elements
- T *VEC_T_address (VEC(T) v)
-
- If you need to directly manipulate the array (for instance, you
- want to feed it to qsort), use this accessor. */
+template<typename T>
+static inline void
+VEC_block_remove_1 (vec_t<T> *vec_, unsigned ix_, unsigned len_ VEC_CHECK_DECL)
+{
+ T *slot_;
+ VEC_ASSERT (ix_ + len_ <= vec_->prefix.num, "block_remove", T, base);
+ slot_ = &vec_->vec[ix_];
+ vec_->prefix.num -= len_;
+ memmove (slot_, slot_ + len_, (vec_->prefix.num - ix_) * sizeof (T));
+}
-#define VEC_address(T,V) (VEC_OP(T,base,address)(VEC_BASE(V)))
/* Conveniently sort the contents of the vector with qsort.
void VEC_qsort (VEC(T) *v, int (*cmp_func)(const void *, const void *)) */
-#define VEC_qsort(T,V,CMP) qsort(VEC_address (T,V), VEC_length(T,V), \
+#define VEC_qsort(T,V,CMP) qsort(VEC_address (T, V), VEC_length (T, V), \
sizeof (T), CMP)
+
/* Find the first index in the vector not less than the object.
unsigned VEC_T_lower_bound (VEC(T) *v, const T val,
bool (*lessthan) (const T, const T)); // Integer
@@ -466,955 +1018,140 @@ along with GCC; see the file COPYING3. If not see
changing the ordering of V. LESSTHAN is a function that returns
true if the first argument is strictly less than the second. */
-#define VEC_lower_bound(T,V,O,LT) \
- (VEC_OP(T,base,lower_bound)(VEC_BASE(V),O,LT VEC_CHECK_INFO))
+#define VEC_lower_bound(T,V,O,LT) \
+ (VEC_lower_bound_1<T> (V, O, LT VEC_CHECK_INFO))
-/* Reallocate an array of elements with prefix. */
-extern void *vec_gc_p_reserve (void *, int MEM_STAT_DECL);
-extern void *vec_gc_p_reserve_exact (void *, int MEM_STAT_DECL);
-extern void *vec_gc_o_reserve (void *, int, size_t, size_t MEM_STAT_DECL);
-extern void *vec_gc_o_reserve_exact (void *, int, size_t, size_t
- MEM_STAT_DECL);
-extern void ggc_free (void *);
-#define vec_gc_free(V) ggc_free (V)
-extern void *vec_heap_p_reserve (void *, int MEM_STAT_DECL);
-extern void *vec_heap_p_reserve_exact (void *, int MEM_STAT_DECL);
-extern void *vec_heap_o_reserve (void *, int, size_t, size_t MEM_STAT_DECL);
-extern void *vec_heap_o_reserve_exact (void *, int, size_t, size_t
- MEM_STAT_DECL);
-extern void dump_vec_loc_statistics (void);
-extern void vec_heap_free (void *);
-
-#if ENABLE_CHECKING
-#define VEC_CHECK_INFO ,__FILE__,__LINE__,__FUNCTION__
-#define VEC_CHECK_DECL ,const char *file_,unsigned line_,const char *function_
-#define VEC_CHECK_PASS ,file_,line_,function_
-
-#define VEC_ASSERT(EXPR,OP,T,A) \
- (void)((EXPR) ? 0 : (VEC_ASSERT_FAIL(OP,VEC(T,A)), 0))
-
-extern void vec_assert_fail (const char *, const char * VEC_CHECK_DECL)
- ATTRIBUTE_NORETURN;
-#define VEC_ASSERT_FAIL(OP,VEC) vec_assert_fail (OP,#VEC VEC_CHECK_PASS)
-#else
-#define VEC_CHECK_INFO
-#define VEC_CHECK_DECL
-#define VEC_CHECK_PASS
-#define VEC_ASSERT(EXPR,OP,T,A) (void)(EXPR)
-#endif
-
-/* Note: gengtype has hardwired knowledge of the expansions of the
- VEC, DEF_VEC_*, and DEF_VEC_ALLOC_* macros. If you change the
- expansions of these macros you may need to change gengtype too. */
-
-typedef struct GTY(()) vec_prefix
+template<typename T>
+static inline unsigned
+VEC_lower_bound_1 (vec_t<T> *vec_, T obj_,
+ bool (*lessthan_)(T, T) VEC_CHECK_DECL)
{
- unsigned num;
- unsigned alloc;
-} vec_prefix;
-
-#define VEC(T,A) VEC_##T##_##A
-#define VEC_OP(T,A,OP) VEC_##T##_##A##_##OP
-
-/* Base of vector type, not user visible. */
-#define VEC_T(T,B) \
-typedef struct VEC(T,B) \
-{ \
- struct vec_prefix prefix; \
- T vec[1]; \
-} VEC(T,B)
-
-#define VEC_T_GTY(T,B) \
-typedef struct GTY(()) VEC(T,B) \
-{ \
- struct vec_prefix prefix; \
- T GTY ((length ("%h.prefix.num"))) vec[1]; \
-} VEC(T,B)
-
-#define VEC_T_GTY_ATOMIC(T,B) \
-typedef struct GTY(()) VEC(T,B) \
-{ \
- struct vec_prefix prefix; \
- T GTY ((atomic)) vec[1]; \
-} VEC(T,B)
-
-/* Derived vector type, user visible. */
-#define VEC_TA_GTY(T,B,A,GTY) \
-typedef struct GTY VEC(T,A) \
-{ \
- VEC(T,B) base; \
-} VEC(T,A)
-
-#define VEC_TA(T,B,A) \
-typedef struct VEC(T,A) \
-{ \
- VEC(T,B) base; \
-} VEC(T,A)
-
-/* Convert to base type. */
-#if GCC_VERSION >= 4000
-#define VEC_BASE(P) \
- ((offsetof (__typeof (*P), base) == 0 || (P)) ? &(P)->base : 0)
-#else
-#define VEC_BASE(P) ((P) ? &(P)->base : 0)
-#endif
-
-/* Vector of integer-like object. */
-#define DEF_VEC_I(T) \
-static inline void VEC_OP (T,must_be,integral_type) (void) \
-{ \
- (void)~(T)0; \
-} \
- \
-VEC_T(T,base); \
-VEC_TA(T,base,none); \
-DEF_VEC_FUNC_P(T) \
-struct vec_swallow_trailing_semi
-#define DEF_VEC_ALLOC_I(T,A) \
-VEC_TA(T,base,A); \
-DEF_VEC_ALLOC_FUNC_I(T,A) \
-DEF_VEC_NONALLOC_FUNCS_I(T,A) \
-struct vec_swallow_trailing_semi
-
-/* Vector of pointer to object. */
-#define DEF_VEC_P(T) \
-static inline void VEC_OP (T,must_be,pointer_type) (void) \
-{ \
- (void)((T)1 == (void *)1); \
-} \
- \
-VEC_T_GTY(T,base); \
-VEC_TA(T,base,none); \
-DEF_VEC_FUNC_P(T) \
-struct vec_swallow_trailing_semi
-#define DEF_VEC_ALLOC_P(T,A) \
-VEC_TA(T,base,A); \
-DEF_VEC_ALLOC_FUNC_P(T,A) \
-DEF_VEC_NONALLOC_FUNCS_P(T,A) \
-struct vec_swallow_trailing_semi
-
-#define DEF_VEC_FUNC_P(T) \
-static inline unsigned VEC_OP (T,base,length) (const VEC(T,base) *vec_) \
-{ \
- return vec_ ? vec_->prefix.num : 0; \
-} \
- \
-static inline T VEC_OP (T,base,last) \
- (const VEC(T,base) *vec_ VEC_CHECK_DECL) \
-{ \
- VEC_ASSERT (vec_ && vec_->prefix.num, "last", T, base); \
- \
- return vec_->vec[vec_->prefix.num - 1]; \
-} \
- \
-static inline T VEC_OP (T,base,index) \
- (const VEC(T,base) *vec_, unsigned ix_ VEC_CHECK_DECL) \
-{ \
- VEC_ASSERT (vec_ && ix_ < vec_->prefix.num, "index", T, base); \
- \
- return vec_->vec[ix_]; \
-} \
- \
-static inline int VEC_OP (T,base,iterate) \
- (const VEC(T,base) *vec_, unsigned ix_, T *ptr) \
-{ \
- if (vec_ && ix_ < vec_->prefix.num) \
- { \
- *ptr = vec_->vec[ix_]; \
- return 1; \
- } \
- else \
- { \
- *ptr = (T) 0; \
- return 0; \
- } \
-} \
- \
-static inline size_t VEC_OP (T,base,embedded_size) \
- (int alloc_) \
-{ \
- return offsetof (VEC(T,base),vec) + alloc_ * sizeof(T); \
-} \
- \
-static inline void VEC_OP (T,base,embedded_init) \
- (VEC(T,base) *vec_, int alloc_) \
-{ \
- vec_->prefix.num = 0; \
- vec_->prefix.alloc = alloc_; \
-} \
- \
-static inline int VEC_OP (T,base,space) \
- (VEC(T,base) *vec_, int alloc_ VEC_CHECK_DECL) \
-{ \
- VEC_ASSERT (alloc_ >= 0, "space", T, base); \
- return vec_ ? vec_->prefix.alloc - vec_->prefix.num >= (unsigned)alloc_ : !alloc_; \
-} \
- \
-static inline void VEC_OP(T,base,splice) \
- (VEC(T,base) *dst_, VEC(T,base) *src_ VEC_CHECK_DECL) \
-{ \
- if (src_) \
- { \
- unsigned len_ = src_->prefix.num; \
- VEC_ASSERT (dst_->prefix.num + len_ <= dst_->prefix.alloc, "splice", T, base); \
- \
- memcpy (&dst_->vec[dst_->prefix.num], &src_->vec[0], len_ * sizeof (T)); \
- dst_->prefix.num += len_; \
- } \
-} \
- \
-static inline T *VEC_OP (T,base,quick_push) \
- (VEC(T,base) *vec_, T obj_ VEC_CHECK_DECL) \
-{ \
- T *slot_; \
- \
- VEC_ASSERT (vec_->prefix.num < vec_->prefix.alloc, "push", T, base); \
- slot_ = &vec_->vec[vec_->prefix.num++]; \
- *slot_ = obj_; \
- \
- return slot_; \
-} \
- \
-static inline T VEC_OP (T,base,pop) (VEC(T,base) *vec_ VEC_CHECK_DECL) \
-{ \
- T obj_; \
- \
- VEC_ASSERT (vec_->prefix.num, "pop", T, base); \
- obj_ = vec_->vec[--vec_->prefix.num]; \
- \
- return obj_; \
-} \
- \
-static inline void VEC_OP (T,base,truncate) \
- (VEC(T,base) *vec_, unsigned size_ VEC_CHECK_DECL) \
-{ \
- VEC_ASSERT (vec_ ? vec_->prefix.num >= size_ : !size_, "truncate", T, base); \
- if (vec_) \
- vec_->prefix.num = size_; \
-} \
- \
-static inline T VEC_OP (T,base,replace) \
- (VEC(T,base) *vec_, unsigned ix_, T obj_ VEC_CHECK_DECL) \
-{ \
- T old_obj_; \
- \
- VEC_ASSERT (ix_ < vec_->prefix.num, "replace", T, base); \
- old_obj_ = vec_->vec[ix_]; \
- vec_->vec[ix_] = obj_; \
- \
- return old_obj_; \
-} \
- \
-static inline T *VEC_OP (T,base,quick_insert) \
- (VEC(T,base) *vec_, unsigned ix_, T obj_ VEC_CHECK_DECL) \
-{ \
- T *slot_; \
- \
- VEC_ASSERT (vec_->prefix.num < vec_->prefix.alloc, "insert", T, base); \
- VEC_ASSERT (ix_ <= vec_->prefix.num, "insert", T, base); \
- slot_ = &vec_->vec[ix_]; \
- memmove (slot_ + 1, slot_, (vec_->prefix.num++ - ix_) * sizeof (T)); \
- *slot_ = obj_; \
- \
- return slot_; \
-} \
- \
-static inline T VEC_OP (T,base,ordered_remove) \
- (VEC(T,base) *vec_, unsigned ix_ VEC_CHECK_DECL) \
-{ \
- T *slot_; \
- T obj_; \
- \
- VEC_ASSERT (ix_ < vec_->prefix.num, "remove", T, base); \
- slot_ = &vec_->vec[ix_]; \
- obj_ = *slot_; \
- memmove (slot_, slot_ + 1, (--vec_->prefix.num - ix_) * sizeof (T)); \
- \
- return obj_; \
-} \
- \
-static inline T VEC_OP (T,base,unordered_remove) \
- (VEC(T,base) *vec_, unsigned ix_ VEC_CHECK_DECL) \
-{ \
- T *slot_; \
- T obj_; \
- \
- VEC_ASSERT (ix_ < vec_->prefix.num, "remove", T, base); \
- slot_ = &vec_->vec[ix_]; \
- obj_ = *slot_; \
- *slot_ = vec_->vec[--vec_->prefix.num]; \
- \
- return obj_; \
-} \
- \
-static inline void VEC_OP (T,base,block_remove) \
- (VEC(T,base) *vec_, unsigned ix_, unsigned len_ VEC_CHECK_DECL) \
-{ \
- T *slot_; \
- \
- VEC_ASSERT (ix_ + len_ <= vec_->prefix.num, "block_remove", T, base); \
- slot_ = &vec_->vec[ix_]; \
- vec_->prefix.num -= len_; \
- memmove (slot_, slot_ + len_, (vec_->prefix.num - ix_) * sizeof (T)); \
-} \
- \
-static inline T *VEC_OP (T,base,address) \
- (VEC(T,base) *vec_) \
-{ \
- return vec_ ? vec_->vec : 0; \
-} \
- \
-static inline unsigned VEC_OP (T,base,lower_bound) \
- (VEC(T,base) *vec_, const T obj_, \
- bool (*lessthan_)(const T, const T) VEC_CHECK_DECL) \
-{ \
- unsigned int len_ = VEC_OP (T,base, length) (vec_); \
- unsigned int half_, middle_; \
- unsigned int first_ = 0; \
- while (len_ > 0) \
- { \
- T middle_elem_; \
- half_ = len_ >> 1; \
- middle_ = first_; \
- middle_ += half_; \
- middle_elem_ = VEC_OP (T,base,index) (vec_, middle_ VEC_CHECK_PASS); \
- if (lessthan_ (middle_elem_, obj_)) \
- { \
- first_ = middle_; \
- ++first_; \
- len_ = len_ - half_ - 1; \
- } \
- else \
- len_ = half_; \
- } \
- return first_; \
+ unsigned int len_ = VEC_length (T, vec_);
+ unsigned int half_, middle_;
+ unsigned int first_ = 0;
+ while (len_ > 0)
+ {
+ T middle_elem_;
+ half_ = len_ >> 1;
+ middle_ = first_;
+ middle_ += half_;
+ middle_elem_ = VEC_index_1 (vec_, middle_ VEC_CHECK_PASS);
+ if (lessthan_ (middle_elem_, obj_))
+ {
+ first_ = middle_;
+ ++first_;
+ len_ = len_ - half_ - 1;
+ }
+ else
+ len_ = half_;
+ }
+ return first_;
}
-#define DEF_VEC_ALLOC_FUNC_P(T,A) \
-static inline VEC(T,A) *VEC_OP (T,A,alloc) \
- (int alloc_ MEM_STAT_DECL) \
-{ \
- return (VEC(T,A) *) vec_##A##_p_reserve_exact (NULL, alloc_ \
- PASS_MEM_STAT); \
-}
-
-
-#define DEF_VEC_NONALLOC_FUNCS_P(T,A) \
-static inline void VEC_OP (T,A,free) \
- (VEC(T,A) **vec_) \
-{ \
- if (*vec_) \
- vec_##A##_free (*vec_); \
- *vec_ = NULL; \
-} \
- \
-static inline VEC(T,A) *VEC_OP (T,A,copy) (VEC(T,base) *vec_ MEM_STAT_DECL) \
-{ \
- size_t len_ = vec_ ? vec_->prefix.num : 0; \
- VEC (T,A) *new_vec_ = NULL; \
- \
- if (len_) \
- { \
- new_vec_ = (VEC (T,A) *)(vec_##A##_p_reserve_exact \
- (NULL, len_ PASS_MEM_STAT)); \
- \
- new_vec_->base.prefix.num = len_; \
- memcpy (new_vec_->base.vec, vec_->vec, sizeof (T) * len_); \
- } \
- return new_vec_; \
-} \
- \
-static inline int VEC_OP (T,A,reserve) \
- (VEC(T,A) **vec_, int alloc_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- int extend = !VEC_OP (T,base,space) (VEC_BASE(*vec_), alloc_ \
- VEC_CHECK_PASS); \
- \
- if (extend) \
- *vec_ = (VEC(T,A) *) vec_##A##_p_reserve (*vec_, alloc_ PASS_MEM_STAT); \
- \
- return extend; \
-} \
- \
-static inline int VEC_OP (T,A,reserve_exact) \
- (VEC(T,A) **vec_, int alloc_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- int extend = !VEC_OP (T,base,space) (VEC_BASE(*vec_), alloc_ \
- VEC_CHECK_PASS); \
- \
- if (extend) \
- *vec_ = (VEC(T,A) *) vec_##A##_p_reserve_exact (*vec_, alloc_ \
- PASS_MEM_STAT); \
- \
- return extend; \
-} \
- \
-static inline void VEC_OP (T,A,safe_grow) \
- (VEC(T,A) **vec_, int size_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- VEC_ASSERT (size_ >= 0 \
- && VEC_OP(T,base,length) VEC_BASE(*vec_) <= (unsigned)size_, \
- "grow", T, A); \
- VEC_OP (T,A,reserve_exact) (vec_, \
- size_ - (int)(*vec_ ? VEC_BASE(*vec_)->prefix.num : 0) \
- VEC_CHECK_PASS PASS_MEM_STAT); \
- VEC_BASE (*vec_)->prefix.num = size_; \
-} \
- \
-static inline void VEC_OP (T,A,safe_grow_cleared) \
- (VEC(T,A) **vec_, int size_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- int oldsize = VEC_OP(T,base,length) VEC_BASE(*vec_); \
- VEC_OP (T,A,safe_grow) (vec_, size_ VEC_CHECK_PASS PASS_MEM_STAT); \
- memset (&(VEC_OP (T,base,address) VEC_BASE(*vec_))[oldsize], 0, \
- sizeof (T) * (size_ - oldsize)); \
-} \
- \
-static inline void VEC_OP(T,A,safe_splice) \
- (VEC(T,A) **dst_, VEC(T,base) *src_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- if (src_) \
- { \
- VEC_OP (T,A,reserve_exact) (dst_, src_->prefix.num \
- VEC_CHECK_PASS MEM_STAT_INFO); \
- \
- VEC_OP (T,base,splice) (VEC_BASE (*dst_), src_ \
- VEC_CHECK_PASS); \
- } \
-} \
- \
-static inline T *VEC_OP (T,A,safe_push) \
- (VEC(T,A) **vec_, T obj_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- VEC_OP (T,A,reserve) (vec_, 1 VEC_CHECK_PASS PASS_MEM_STAT); \
- \
- return VEC_OP (T,base,quick_push) (VEC_BASE(*vec_), obj_ VEC_CHECK_PASS); \
-} \
- \
-static inline T *VEC_OP (T,A,safe_insert) \
- (VEC(T,A) **vec_, unsigned ix_, T obj_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- VEC_OP (T,A,reserve) (vec_, 1 VEC_CHECK_PASS PASS_MEM_STAT); \
- \
- return VEC_OP (T,base,quick_insert) (VEC_BASE(*vec_), ix_, obj_ \
- VEC_CHECK_PASS); \
-}
-
-/* Vector of object. */
-#define DEF_VEC_O(T) \
-VEC_T_GTY(T,base); \
-VEC_TA(T,base,none); \
-DEF_VEC_FUNC_O(T) \
-struct vec_swallow_trailing_semi
-#define DEF_VEC_ALLOC_O(T,A) \
-VEC_TA(T,base,A); \
-DEF_VEC_ALLOC_FUNC_O(T,A) \
-DEF_VEC_NONALLOC_FUNCS_O(T,A) \
-struct vec_swallow_trailing_semi
-
-/* Vector of atomic object. */
-#define DEF_VEC_A(T) \
-VEC_T_GTY_ATOMIC(T,base); \
-VEC_TA(T,base,none); \
-DEF_VEC_FUNC_O(T) \
-struct vec_swallow_trailing_semi
-#define DEF_VEC_ALLOC_A(T,A) DEF_VEC_ALLOC_O(T,A)
-
-#define DEF_VEC_FUNC_O(T) \
-static inline unsigned VEC_OP (T,base,length) (const VEC(T,base) *vec_) \
-{ \
- return vec_ ? vec_->prefix.num : 0; \
-} \
- \
-static inline T *VEC_OP (T,base,last) (VEC(T,base) *vec_ VEC_CHECK_DECL) \
-{ \
- VEC_ASSERT (vec_ && vec_->prefix.num, "last", T, base); \
- \
- return &vec_->vec[vec_->prefix.num - 1]; \
-} \
- \
-static inline T *VEC_OP (T,base,index) \
- (VEC(T,base) *vec_, unsigned ix_ VEC_CHECK_DECL) \
-{ \
- VEC_ASSERT (vec_ && ix_ < vec_->prefix.num, "index", T, base); \
- \
- return &vec_->vec[ix_]; \
-} \
- \
-static inline int VEC_OP (T,base,iterate) \
- (VEC(T,base) *vec_, unsigned ix_, T **ptr) \
-{ \
- if (vec_ && ix_ < vec_->prefix.num) \
- { \
- *ptr = &vec_->vec[ix_]; \
- return 1; \
- } \
- else \
- { \
- *ptr = 0; \
- return 0; \
- } \
-} \
- \
-static inline size_t VEC_OP (T,base,embedded_size) \
- (int alloc_) \
-{ \
- return offsetof (VEC(T,base),vec) + alloc_ * sizeof(T); \
-} \
- \
-static inline void VEC_OP (T,base,embedded_init) \
- (VEC(T,base) *vec_, int alloc_) \
-{ \
- vec_->prefix.num = 0; \
- vec_->prefix.alloc = alloc_; \
-} \
- \
-static inline int VEC_OP (T,base,space) \
- (VEC(T,base) *vec_, int alloc_ VEC_CHECK_DECL) \
-{ \
- VEC_ASSERT (alloc_ >= 0, "space", T, base); \
- return vec_ ? vec_->prefix.alloc - vec_->prefix.num >= (unsigned)alloc_ : !alloc_; \
-} \
- \
-static inline void VEC_OP(T,base,splice) \
- (VEC(T,base) *dst_, VEC(T,base) *src_ VEC_CHECK_DECL) \
-{ \
- if (src_) \
- { \
- unsigned len_ = src_->prefix.num; \
- VEC_ASSERT (dst_->prefix.num + len_ <= dst_->prefix.alloc, "splice", T, base); \
- \
- memcpy (&dst_->vec[dst_->prefix.num], &src_->vec[0], len_ * sizeof (T)); \
- dst_->prefix.num += len_; \
- } \
-} \
- \
-static inline T *VEC_OP (T,base,quick_push) \
- (VEC(T,base) *vec_, const T *obj_ VEC_CHECK_DECL) \
-{ \
- T *slot_; \
- \
- VEC_ASSERT (vec_->prefix.num < vec_->prefix.alloc, "push", T, base); \
- slot_ = &vec_->vec[vec_->prefix.num++]; \
- if (obj_) \
- *slot_ = *obj_; \
- \
- return slot_; \
-} \
- \
-static inline void VEC_OP (T,base,pop) (VEC(T,base) *vec_ VEC_CHECK_DECL) \
-{ \
- VEC_ASSERT (vec_->prefix.num, "pop", T, base); \
- --vec_->prefix.num; \
-} \
- \
-static inline void VEC_OP (T,base,truncate) \
- (VEC(T,base) *vec_, unsigned size_ VEC_CHECK_DECL) \
-{ \
- VEC_ASSERT (vec_ ? vec_->prefix.num >= size_ : !size_, "truncate", T, base); \
- if (vec_) \
- vec_->prefix.num = size_; \
-} \
- \
-static inline T *VEC_OP (T,base,replace) \
- (VEC(T,base) *vec_, unsigned ix_, const T *obj_ VEC_CHECK_DECL) \
-{ \
- T *slot_; \
- \
- VEC_ASSERT (ix_ < vec_->prefix.num, "replace", T, base); \
- slot_ = &vec_->vec[ix_]; \
- if (obj_) \
- *slot_ = *obj_; \
- \
- return slot_; \
-} \
- \
-static inline T *VEC_OP (T,base,quick_insert) \
- (VEC(T,base) *vec_, unsigned ix_, const T *obj_ VEC_CHECK_DECL) \
-{ \
- T *slot_; \
- \
- VEC_ASSERT (vec_->prefix.num < vec_->prefix.alloc, "insert", T, base); \
- VEC_ASSERT (ix_ <= vec_->prefix.num, "insert", T, base); \
- slot_ = &vec_->vec[ix_]; \
- memmove (slot_ + 1, slot_, (vec_->prefix.num++ - ix_) * sizeof (T)); \
- if (obj_) \
- *slot_ = *obj_; \
- \
- return slot_; \
-} \
- \
-static inline void VEC_OP (T,base,ordered_remove) \
- (VEC(T,base) *vec_, unsigned ix_ VEC_CHECK_DECL) \
-{ \
- T *slot_; \
- \
- VEC_ASSERT (ix_ < vec_->prefix.num, "remove", T, base); \
- slot_ = &vec_->vec[ix_]; \
- memmove (slot_, slot_ + 1, (--vec_->prefix.num - ix_) * sizeof (T)); \
-} \
- \
-static inline void VEC_OP (T,base,unordered_remove) \
- (VEC(T,base) *vec_, unsigned ix_ VEC_CHECK_DECL) \
-{ \
- VEC_ASSERT (ix_ < vec_->prefix.num, "remove", T, base); \
- vec_->vec[ix_] = vec_->vec[--vec_->prefix.num]; \
-} \
- \
-static inline void VEC_OP (T,base,block_remove) \
- (VEC(T,base) *vec_, unsigned ix_, unsigned len_ VEC_CHECK_DECL) \
-{ \
- T *slot_; \
- \
- VEC_ASSERT (ix_ + len_ <= vec_->prefix.num, "block_remove", T, base); \
- slot_ = &vec_->vec[ix_]; \
- vec_->prefix.num -= len_; \
- memmove (slot_, slot_ + len_, (vec_->prefix.num - ix_) * sizeof (T)); \
-} \
- \
-static inline T *VEC_OP (T,base,address) \
- (VEC(T,base) *vec_) \
-{ \
- return vec_ ? vec_->vec : 0; \
-} \
- \
-static inline unsigned VEC_OP (T,base,lower_bound) \
- (VEC(T,base) *vec_, const T *obj_, \
- bool (*lessthan_)(const T *, const T *) VEC_CHECK_DECL) \
-{ \
- unsigned int len_ = VEC_OP (T, base, length) (vec_); \
- unsigned int half_, middle_; \
- unsigned int first_ = 0; \
- while (len_ > 0) \
- { \
- T *middle_elem_; \
- half_ = len_ >> 1; \
- middle_ = first_; \
- middle_ += half_; \
- middle_elem_ = VEC_OP (T,base,index) (vec_, middle_ VEC_CHECK_PASS); \
- if (lessthan_ (middle_elem_, obj_)) \
- { \
- first_ = middle_; \
- ++first_; \
- len_ = len_ - half_ - 1; \
- } \
- else \
- len_ = half_; \
- } \
- return first_; \
-}
-
-#define DEF_VEC_ALLOC_FUNC_O(T,A) \
-static inline VEC(T,A) *VEC_OP (T,A,alloc) \
- (int alloc_ MEM_STAT_DECL) \
-{ \
- return (VEC(T,A) *) vec_##A##_o_reserve_exact (NULL, alloc_, \
- offsetof (VEC(T,A),base.vec), \
- sizeof (T) \
- PASS_MEM_STAT); \
-}
-
-#define DEF_VEC_NONALLOC_FUNCS_O(T,A) \
-static inline VEC(T,A) *VEC_OP (T,A,copy) (VEC(T,base) *vec_ MEM_STAT_DECL) \
-{ \
- size_t len_ = vec_ ? vec_->prefix.num : 0; \
- VEC (T,A) *new_vec_ = NULL; \
- \
- if (len_) \
- { \
- new_vec_ = (VEC (T,A) *)(vec_##A##_o_reserve_exact \
- (NULL, len_, \
- offsetof (VEC(T,A),base.vec), sizeof (T) \
- PASS_MEM_STAT)); \
- \
- new_vec_->base.prefix.num = len_; \
- memcpy (new_vec_->base.vec, vec_->vec, sizeof (T) * len_); \
- } \
- return new_vec_; \
-} \
- \
-static inline void VEC_OP (T,A,free) \
- (VEC(T,A) **vec_) \
-{ \
- if (*vec_) \
- vec_##A##_free (*vec_); \
- *vec_ = NULL; \
-} \
- \
-static inline int VEC_OP (T,A,reserve) \
- (VEC(T,A) **vec_, int alloc_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- int extend = !VEC_OP (T,base,space) (VEC_BASE(*vec_), alloc_ \
- VEC_CHECK_PASS); \
- \
- if (extend) \
- *vec_ = (VEC(T,A) *) vec_##A##_o_reserve (*vec_, alloc_, \
- offsetof (VEC(T,A),base.vec),\
- sizeof (T) \
- PASS_MEM_STAT); \
- \
- return extend; \
-} \
- \
-static inline int VEC_OP (T,A,reserve_exact) \
- (VEC(T,A) **vec_, int alloc_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- int extend = !VEC_OP (T,base,space) (VEC_BASE(*vec_), alloc_ \
- VEC_CHECK_PASS); \
- \
- if (extend) \
- *vec_ = (VEC(T,A) *) vec_##A##_o_reserve_exact \
- (*vec_, alloc_, \
- offsetof (VEC(T,A),base.vec), \
- sizeof (T) PASS_MEM_STAT); \
- \
- return extend; \
-} \
- \
-static inline void VEC_OP (T,A,safe_grow) \
- (VEC(T,A) **vec_, int size_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- VEC_ASSERT (size_ >= 0 \
- && VEC_OP(T,base,length) VEC_BASE(*vec_) <= (unsigned)size_, \
- "grow", T, A); \
- VEC_OP (T,A,reserve_exact) (vec_, \
- size_ - (int)(*vec_ ? VEC_BASE(*vec_)->prefix.num : 0) \
- VEC_CHECK_PASS PASS_MEM_STAT); \
- VEC_BASE (*vec_)->prefix.num = size_; \
-} \
- \
-static inline void VEC_OP (T,A,safe_grow_cleared) \
- (VEC(T,A) **vec_, int size_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- int oldsize = VEC_OP(T,base,length) VEC_BASE(*vec_); \
- VEC_OP (T,A,safe_grow) (vec_, size_ VEC_CHECK_PASS PASS_MEM_STAT); \
- memset (&(VEC_OP (T,base,address) VEC_BASE(*vec_))[oldsize], 0, \
- sizeof (T) * (size_ - oldsize)); \
-} \
- \
-static inline void VEC_OP(T,A,safe_splice) \
- (VEC(T,A) **dst_, VEC(T,base) *src_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- if (src_) \
- { \
- VEC_OP (T,A,reserve_exact) (dst_, src_->prefix.num \
- VEC_CHECK_PASS MEM_STAT_INFO); \
- \
- VEC_OP (T,base,splice) (VEC_BASE (*dst_), src_ \
- VEC_CHECK_PASS); \
- } \
-} \
- \
-static inline T *VEC_OP (T,A,safe_push) \
- (VEC(T,A) **vec_, const T *obj_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- VEC_OP (T,A,reserve) (vec_, 1 VEC_CHECK_PASS PASS_MEM_STAT); \
- \
- return VEC_OP (T,base,quick_push) (VEC_BASE(*vec_), obj_ VEC_CHECK_PASS); \
-} \
- \
-static inline T *VEC_OP (T,A,safe_insert) \
- (VEC(T,A) **vec_, unsigned ix_, const T *obj_ \
- VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- VEC_OP (T,A,reserve) (vec_, 1 VEC_CHECK_PASS PASS_MEM_STAT); \
- \
- return VEC_OP (T,base,quick_insert) (VEC_BASE(*vec_), ix_, obj_ \
- VEC_CHECK_PASS); \
-}
-
-#define DEF_VEC_ALLOC_FUNC_I(T,A) \
-static inline VEC(T,A) *VEC_OP (T,A,alloc) \
- (int alloc_ MEM_STAT_DECL) \
-{ \
- return (VEC(T,A) *) vec_##A##_o_reserve_exact \
- (NULL, alloc_, offsetof (VEC(T,A),base.vec), \
- sizeof (T) PASS_MEM_STAT); \
-}
-
-#define DEF_VEC_NONALLOC_FUNCS_I(T,A) \
-static inline VEC(T,A) *VEC_OP (T,A,copy) (VEC(T,base) *vec_ MEM_STAT_DECL) \
-{ \
- size_t len_ = vec_ ? vec_->prefix.num : 0; \
- VEC (T,A) *new_vec_ = NULL; \
- \
- if (len_) \
- { \
- new_vec_ = (VEC (T,A) *)(vec_##A##_o_reserve_exact \
- (NULL, len_, \
- offsetof (VEC(T,A),base.vec), sizeof (T) \
- PASS_MEM_STAT)); \
- \
- new_vec_->base.prefix.num = len_; \
- memcpy (new_vec_->base.vec, vec_->vec, sizeof (T) * len_); \
- } \
- return new_vec_; \
-} \
- \
-static inline void VEC_OP (T,A,free) \
- (VEC(T,A) **vec_) \
-{ \
- if (*vec_) \
- vec_##A##_free (*vec_); \
- *vec_ = NULL; \
-} \
- \
-static inline int VEC_OP (T,A,reserve) \
- (VEC(T,A) **vec_, int alloc_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- int extend = !VEC_OP (T,base,space) (VEC_BASE(*vec_), alloc_ \
- VEC_CHECK_PASS); \
- \
- if (extend) \
- *vec_ = (VEC(T,A) *) vec_##A##_o_reserve (*vec_, alloc_, \
- offsetof (VEC(T,A),base.vec),\
- sizeof (T) \
- PASS_MEM_STAT); \
- \
- return extend; \
-} \
- \
-static inline int VEC_OP (T,A,reserve_exact) \
- (VEC(T,A) **vec_, int alloc_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- int extend = !VEC_OP (T,base,space) (VEC_BASE(*vec_), alloc_ \
- VEC_CHECK_PASS); \
- \
- if (extend) \
- *vec_ = (VEC(T,A) *) vec_##A##_o_reserve_exact \
- (*vec_, alloc_, offsetof (VEC(T,A),base.vec), \
- sizeof (T) PASS_MEM_STAT); \
- \
- return extend; \
-} \
- \
-static inline void VEC_OP (T,A,safe_grow) \
- (VEC(T,A) **vec_, int size_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- VEC_ASSERT (size_ >= 0 \
- && VEC_OP(T,base,length) VEC_BASE(*vec_) <= (unsigned)size_, \
- "grow", T, A); \
- VEC_OP (T,A,reserve_exact) (vec_, \
- size_ - (int)(*vec_ ? VEC_BASE(*vec_)->prefix.num : 0) \
- VEC_CHECK_PASS PASS_MEM_STAT); \
- VEC_BASE (*vec_)->prefix.num = size_; \
-} \
- \
-static inline void VEC_OP (T,A,safe_grow_cleared) \
- (VEC(T,A) **vec_, int size_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- int oldsize = VEC_OP(T,base,length) VEC_BASE(*vec_); \
- VEC_OP (T,A,safe_grow) (vec_, size_ VEC_CHECK_PASS PASS_MEM_STAT); \
- memset (&(VEC_OP (T,base,address) VEC_BASE(*vec_))[oldsize], 0, \
- sizeof (T) * (size_ - oldsize)); \
-} \
- \
-static inline void VEC_OP(T,A,safe_splice) \
- (VEC(T,A) **dst_, VEC(T,base) *src_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- if (src_) \
- { \
- VEC_OP (T,A,reserve_exact) (dst_, src_->prefix.num \
- VEC_CHECK_PASS MEM_STAT_INFO); \
- \
- VEC_OP (T,base,splice) (VEC_BASE (*dst_), src_ \
- VEC_CHECK_PASS); \
- } \
-} \
- \
-static inline T *VEC_OP (T,A,safe_push) \
- (VEC(T,A) **vec_, const T obj_ VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- VEC_OP (T,A,reserve) (vec_, 1 VEC_CHECK_PASS PASS_MEM_STAT); \
- \
- return VEC_OP (T,base,quick_push) (VEC_BASE(*vec_), obj_ VEC_CHECK_PASS); \
-} \
- \
-static inline T *VEC_OP (T,A,safe_insert) \
- (VEC(T,A) **vec_, unsigned ix_, const T obj_ \
- VEC_CHECK_DECL MEM_STAT_DECL) \
-{ \
- VEC_OP (T,A,reserve) (vec_, 1 VEC_CHECK_PASS PASS_MEM_STAT); \
- \
- return VEC_OP (T,base,quick_insert) (VEC_BASE(*vec_), ix_, obj_ \
- VEC_CHECK_PASS); \
+template<typename T>
+static inline unsigned
+VEC_lower_bound_1 (vec_t<T> *vec_, const T *ptr_,
+ bool (*lessthan_)(const T*, const T*) VEC_CHECK_DECL)
+{
+ unsigned int len_ = VEC_length (T, vec_);
+ unsigned int half_, middle_;
+ unsigned int first_ = 0;
+ while (len_ > 0)
+ {
+ T *middle_elem_;
+ half_ = len_ >> 1;
+ middle_ = first_;
+ middle_ += half_;
+ middle_elem_ = &VEC_index_1 (vec_, middle_ VEC_CHECK_PASS);
+ if (lessthan_ (middle_elem_, ptr_))
+ {
+ first_ = middle_;
+ ++first_;
+ len_ = len_ - half_ - 1;
+ }
+ else
+ len_ = half_;
+ }
+ return first_;
}
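
  A sketch of how lower_bound combines with safe_insert to keep a
  vector sorted (the comparison function is hypothetical).

  static bool
  int_lt (int a, int b)
  {
    return a < b;
  }

  static void
  insert_sorted (VEC(int,heap) **v, int x)
  {
    unsigned pos = VEC_lower_bound (int, *v, x, int_lt);
    VEC_safe_insert (int, heap, *v, pos, x);
  }
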
-/* We support a vector which starts out with space on the stack and
- switches to heap space when forced to reallocate. This works a
- little differently. Instead of DEF_VEC_ALLOC_P(TYPE, heap|gc), use
- DEF_VEC_ALLOC_P_STACK(TYPE). This uses alloca to get the initial
- space; because alloca can not be usefully called in an inline
- function, and because a macro can not define a macro, you must then
- write a #define for each type:
- #define VEC_{TYPE}_stack_alloc(alloc) \
- VEC_stack_alloc({TYPE}, alloc)
+void *vec_heap_o_reserve_1 (void *, int, size_t, size_t, bool MEM_STAT_DECL);
+void *vec_gc_o_reserve_1 (void *, int, size_t, size_t, bool MEM_STAT_DECL);
- This is really a hack and perhaps can be made better. Note that
- this macro will wind up evaluating the ALLOC parameter twice.
+/* Ensure there are at least RESERVE free slots in VEC_. If
+ RESERVE < 0 grow exactly, else grow exponentially. As a special
+ case, if VEC_ is NULL and RESERVE is 0, no vector will be
+ created. */
- Only the initial allocation will be made using alloca, so pass a
- reasonable estimate that doesn't use too much stack space; don't
- pass zero. Don't return a VEC(TYPE,stack) vector from the function
- which allocated it. */
-
-extern void *vec_stack_p_reserve (void *, int MEM_STAT_DECL);
-extern void *vec_stack_p_reserve_exact (void *, int MEM_STAT_DECL);
-extern void *vec_stack_p_reserve_exact_1 (int, void *);
-extern void *vec_stack_o_reserve (void *, int, size_t, size_t MEM_STAT_DECL);
-extern void *vec_stack_o_reserve_exact (void *, int, size_t, size_t
- MEM_STAT_DECL);
-extern void vec_stack_free (void *);
-
-/* Unfortunately, we cannot use MEM_STAT_DECL here. */
-#if GATHER_STATISTICS
-#define VEC_stack_alloc(T,alloc,name,line,function) \
- (VEC_OP (T,stack,alloc1) \
- (alloc, XALLOCAVAR (VEC(T,stack), VEC_embedded_size (T, alloc))))
-#else
-#define VEC_stack_alloc(T,alloc) \
- (VEC_OP (T,stack,alloc1) \
- (alloc, XALLOCAVAR (VEC(T,stack), VEC_embedded_size (T, alloc))))
-#endif
-
-#define DEF_VEC_ALLOC_P_STACK(T) \
-VEC_TA(T,base,stack); \
-DEF_VEC_ALLOC_FUNC_P_STACK(T) \
-DEF_VEC_NONALLOC_FUNCS_P(T,stack) \
-struct vec_swallow_trailing_semi
-
-#define DEF_VEC_ALLOC_FUNC_P_STACK(T) \
-static inline VEC(T,stack) *VEC_OP (T,stack,alloc1) \
- (int alloc_, VEC(T,stack)* space) \
-{ \
- return (VEC(T,stack) *) vec_stack_p_reserve_exact_1 (alloc_, space); \
+template<typename T, enum vec_allocation_t A>
+vec_t<T> *
+vec_reserve (vec_t<T> *vec_, int reserve MEM_STAT_DECL)
+{
+ if (A == gc)
+ return (vec_t<T> *) vec_gc_o_reserve_1 (vec_, reserve,
+ offsetof (vec_t<T>, vec),
+ sizeof (T), false
+ PASS_MEM_STAT);
+ else if (A == heap)
+ return (vec_t<T> *) vec_heap_o_reserve_1 (vec_, reserve,
+ offsetof (vec_t<T>, vec),
+ sizeof (T), false
+ PASS_MEM_STAT);
+ else
+ {
+ /* Only allow stack vectors when re-growing them. The initial
+ allocation of stack vectors must be done with the
+ VEC_stack_alloc macro, because it uses alloca() for the
+ allocation. */
+ if (vec_ == NULL)
+ {
+ fprintf (stderr, "Stack vectors must be initially allocated "
+ "with VEC_stack_alloc.\n");
+ gcc_unreachable ();
+ }
+ return (vec_t<T> *) vec_stack_o_reserve (vec_, reserve,
+ offsetof (vec_t<T>, vec),
+ sizeof (T) PASS_MEM_STAT);
+ }
}
-#define DEF_VEC_ALLOC_O_STACK(T) \
-VEC_TA(T,base,stack); \
-DEF_VEC_ALLOC_FUNC_O_STACK(T) \
-DEF_VEC_NONALLOC_FUNCS_O(T,stack) \
-struct vec_swallow_trailing_semi
-#define DEF_VEC_ALLOC_FUNC_O_STACK(T) \
-static inline VEC(T,stack) *VEC_OP (T,stack,alloc1) \
- (int alloc_, VEC(T,stack)* space) \
-{ \
- return (VEC(T,stack) *) vec_stack_p_reserve_exact_1 (alloc_, space); \
-}
-
-#define DEF_VEC_ALLOC_I_STACK(T) \
-VEC_TA(T,base,stack); \
-DEF_VEC_ALLOC_FUNC_I_STACK(T) \
-DEF_VEC_NONALLOC_FUNCS_I(T,stack) \
-struct vec_swallow_trailing_semi
+/* Ensure there are at least RESERVE free slots in VEC_, growing
+ exactly. As a special case, if VEC_ is NULL and RESERVE is 0, no
+ vector will be created. */
-#define DEF_VEC_ALLOC_FUNC_I_STACK(T) \
-static inline VEC(T,stack) *VEC_OP (T,stack,alloc1) \
- (int alloc_, VEC(T,stack)* space) \
-{ \
- return (VEC(T,stack) *) vec_stack_p_reserve_exact_1 (alloc_, space); \
+template<typename T, enum vec_allocation_t A>
+vec_t<T> *
+vec_reserve_exact (vec_t<T> *vec_, int reserve MEM_STAT_DECL)
+{
+ if (A == gc)
+ return (vec_t<T> *) vec_gc_o_reserve_1 (vec_, reserve,
+ sizeof (struct vec_prefix),
+ sizeof (T), true
+ PASS_MEM_STAT);
+ else if (A == heap)
+ return (vec_t<T> *) vec_heap_o_reserve_1 (vec_, reserve,
+ sizeof (struct vec_prefix),
+ sizeof (T), true
+ PASS_MEM_STAT);
+ else if (A == stack)
+ {
+ /* Only allow stack vectors when re-growing them. The initial
+ allocation of stack vectors must be done with VEC_alloc,
+ because it uses alloca() for the allocation. */
+ if (vec_ == NULL)
+ {
+ fprintf (stderr, "Stack vectors must be initially allocated "
+ "with VEC_stack_alloc.\n");
+ gcc_unreachable ();
+ }
+ return (vec_t<T> *) vec_stack_o_reserve_exact (vec_, reserve,
+ sizeof (struct vec_prefix),
+ sizeof (T)
+ PASS_MEM_STAT);
+ }
}
#endif /* GCC_VEC_H */
diff --git a/gcc/cp/call.c b/gcc/cp/call.c
index 5345f2b..7a72666 100644
--- a/gcc/cp/call.c
+++ b/gcc/cp/call.c
@@ -1924,7 +1924,8 @@ add_function_candidate (struct z_candidate **candidates,
for (i = 0; i < len; ++i)
{
- tree arg, argtype, to_type;
+ tree argtype, to_type;
+ tree arg;
conversion *t;
int is_this;
@@ -1934,8 +1935,9 @@ add_function_candidate (struct z_candidate **candidates,
if (i == 0 && first_arg != NULL_TREE)
arg = first_arg;
else
- arg = VEC_index (tree, args,
- i + skip - (first_arg != NULL_TREE ? 1 : 0));
+ arg = CONST_CAST_TREE (
+ VEC_index (tree, args,
+ i + skip - (first_arg != NULL_TREE ? 1 : 0)));
argtype = lvalue_type (arg);
is_this = (i == 0 && DECL_NONSTATIC_MEMBER_FUNCTION_P (fn)
diff --git a/gcc/cp/class.c b/gcc/cp/class.c
index 2f377c8..dfa2b52 100644
--- a/gcc/cp/class.c
+++ b/gcc/cp/class.c
@@ -8403,12 +8403,12 @@ build_vtbl_initializer (tree binfo,
int new_position = (TARGET_VTABLE_DATA_ENTRY_DISTANCE * ix
+ (TARGET_VTABLE_DATA_ENTRY_DISTANCE - 1));
- VEC_replace (constructor_elt, vid.inits, new_position, e);
+ VEC_replace (constructor_elt, vid.inits, new_position, *e);
for (j = 1; j < TARGET_VTABLE_DATA_ENTRY_DISTANCE; ++j)
{
- constructor_elt *f = VEC_index (constructor_elt, vid.inits,
- new_position - j);
+ constructor_elt *f = &VEC_index (constructor_elt, vid.inits,
+ new_position - j);
f->index = NULL_TREE;
f->value = build1 (NOP_EXPR, vtable_entry_type,
null_pointer_node);
@@ -8429,7 +8429,7 @@ build_vtbl_initializer (tree binfo,
for (ix = VEC_length (constructor_elt, vid.inits) - 1;
VEC_iterate (constructor_elt, vid.inits, ix, e);
ix--, jx++)
- VEC_replace (constructor_elt, *inits, jx, e);
+ VEC_replace (constructor_elt, *inits, jx, *e);
/* Go through all the ordinary virtual functions, building up
initializers. */
diff --git a/gcc/cp/decl.c b/gcc/cp/decl.c
index b637643..5908996 100644
--- a/gcc/cp/decl.c
+++ b/gcc/cp/decl.c
@@ -5282,7 +5282,7 @@ reshape_init_r (tree type, reshape_iter *d, bool first_initializer_p,
&& VEC_length (constructor_elt, CONSTRUCTOR_ELTS (str_init)) == 1)
{
str_init = VEC_index (constructor_elt,
- CONSTRUCTOR_ELTS (str_init), 0)->value;
+ CONSTRUCTOR_ELTS (str_init), 0).value;
}
/* If it's a string literal, then it's the initializer for the array
@@ -5372,7 +5372,7 @@ reshape_init (tree type, tree init, tsubst_flags_t complain)
return init;
/* Recurse on this CONSTRUCTOR. */
- d.cur = VEC_index (constructor_elt, v, 0);
+ d.cur = &VEC_index (constructor_elt, v, 0);
d.end = d.cur + VEC_length (constructor_elt, v);
new_init = reshape_init_r (type, &d, true, complain);
@@ -5917,7 +5917,7 @@ type_dependent_init_p (tree init)
nelts = VEC_length (constructor_elt, elts);
for (i = 0; i < nelts; ++i)
if (type_dependent_init_p (VEC_index (constructor_elt,
- elts, i)->value))
+ elts, i).value))
return true;
}
else
@@ -5947,7 +5947,7 @@ value_dependent_init_p (tree init)
nelts = VEC_length (constructor_elt, elts);
for (i = 0; i < nelts; ++i)
if (value_dependent_init_p (VEC_index (constructor_elt,
- elts, i)->value))
+ elts, i).value))
return true;
}
else
@@ -6896,7 +6896,7 @@ cp_complete_array_type (tree *ptype, tree initial_value, bool do_default)
&& !VEC_empty (constructor_elt, CONSTRUCTOR_ELTS (initial_value)))
{
VEC(constructor_elt,gc) *v = CONSTRUCTOR_ELTS (initial_value);
- tree value = VEC_index (constructor_elt, v, 0)->value;
+ tree value = VEC_index (constructor_elt, v, 0).value;
if (TREE_CODE (value) == STRING_CST
&& VEC_length (constructor_elt, v) == 1)
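
The mechanical changes in the C++ front end hunks above all follow
from VEC_index and VEC_last now returning a reference rather than a
pointer for object vectors. A hedged before/after sketch at a
hypothetical call site:

  static tree
  first_ctor_value (VEC(constructor_elt,gc) *elts)
  {
    /* Formerly: VEC_index (constructor_elt, elts, 0)->value.  */
    return VEC_index (constructor_elt, elts, 0).value;
  }

  static constructor_elt *
  first_ctor_slot (VEC(constructor_elt,gc) *elts)
  {
    /* When a pointer to the slot is still needed, take the address of
       the returned reference, as the hunks above do.  */
    return &VEC_index (constructor_elt, elts, 0);
  }
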
diff --git a/gcc/cp/parser.c b/gcc/cp/parser.c
index d8c3305..a7f12ba 100644
--- a/gcc/cp/parser.c
+++ b/gcc/cp/parser.c
@@ -273,7 +273,7 @@ cp_lexer_dump_tokens (FILE *file, VEC(cp_token,gc) *buffer,
if (start_token > VEC_address (cp_token, buffer))
{
- cp_lexer_print_token (file, VEC_index (cp_token, buffer, 0));
+ cp_lexer_print_token (file, &VEC_index (cp_token, buffer, 0));
fprintf (file, " ... ");
}
@@ -313,8 +313,7 @@ cp_lexer_dump_tokens (FILE *file, VEC(cp_token,gc) *buffer,
if (i == num && i < VEC_length (cp_token, buffer))
{
fprintf (file, " ... ");
- cp_lexer_print_token (file, VEC_index (cp_token, buffer,
- VEC_length (cp_token, buffer) - 1));
+ cp_lexer_print_token (file, &VEC_last (cp_token, buffer));
}
fprintf (file, "\n");
@@ -1723,11 +1722,11 @@ cp_parser_context_new (cp_parser_context* next)
/* Managing the unparsed function queues. */
#define unparsed_funs_with_default_args \
- VEC_last (cp_unparsed_functions_entry, parser->unparsed_queues)->funs_with_default_args
+ VEC_last (cp_unparsed_functions_entry, parser->unparsed_queues).funs_with_default_args
#define unparsed_funs_with_definitions \
- VEC_last (cp_unparsed_functions_entry, parser->unparsed_queues)->funs_with_definitions
+ VEC_last (cp_unparsed_functions_entry, parser->unparsed_queues).funs_with_definitions
#define unparsed_nsdmis \
- VEC_last (cp_unparsed_functions_entry, parser->unparsed_queues)->nsdmis
+ VEC_last (cp_unparsed_functions_entry, parser->unparsed_queues).nsdmis
static void
push_unparsed_function_queues (cp_parser *parser)
@@ -8048,7 +8047,7 @@ record_lambda_scope (tree lambda)
static void
finish_lambda_scope (void)
{
- tree_int *p = VEC_last (tree_int, lambda_scope_stack);
+ tree_int *p = &VEC_last (tree_int, lambda_scope_stack);
if (lambda_scope != p->t)
{
lambda_scope = p->t;
diff --git a/gcc/cp/rtti.c b/gcc/cp/rtti.c
index a19a893..51cb5ee 100644
--- a/gcc/cp/rtti.c
+++ b/gcc/cp/rtti.c
@@ -295,7 +295,7 @@ typeid_ok_p (void)
}
pseudo_type_info
- = VEC_index (tinfo_s, tinfo_descs, TK_TYPE_INFO_TYPE)->type;
+ = VEC_index (tinfo_s, tinfo_descs, TK_TYPE_INFO_TYPE).type;
type_info_type = TYPE_MAIN_VARIANT (const_type_info_type_node);
/* Make sure abi::__type_info_pseudo has the same alias set
@@ -422,7 +422,7 @@ get_tinfo_decl (tree type)
if (!d)
{
int ix = get_pseudo_ti_index (type);
- tinfo_s *ti = VEC_index (tinfo_s, tinfo_descs, ix);
+ tinfo_s *ti = &VEC_index (tinfo_s, tinfo_descs, ix);
d = build_lang_decl (VAR_DECL, name, ti->type);
SET_DECL_ASSEMBLER_NAME (d, name);
@@ -1079,7 +1079,7 @@ typeinfo_in_lib_p (tree type)
static tree
get_pseudo_ti_init (tree type, unsigned tk_index)
{
- tinfo_s *ti = VEC_index (tinfo_s, tinfo_descs, tk_index);
+ tinfo_s *ti = &VEC_index (tinfo_s, tinfo_descs, tk_index);
gcc_assert (at_eof);
switch (tk_index)
@@ -1105,7 +1105,7 @@ get_pseudo_ti_init (tree type, unsigned tk_index)
tree tinfo = get_tinfo_ptr (BINFO_TYPE (base_binfo));
/* get_tinfo_ptr might have reallocated the tinfo_descs vector. */
- ti = VEC_index (tinfo_s, tinfo_descs, tk_index);
+ ti = &VEC_index (tinfo_s, tinfo_descs, tk_index);
return class_initializer (ti, type, 1, tinfo);
}
@@ -1160,14 +1160,14 @@ get_pseudo_ti_init (tree type, unsigned tk_index)
CONSTRUCTOR_APPEND_ELT (v, NULL_TREE, tinfo);
CONSTRUCTOR_APPEND_ELT (v, NULL_TREE, offset);
base_init = build_constructor (init_list_type_node, v);
- e = VEC_index (constructor_elt, init_vec, ix);
+ e = &VEC_index (constructor_elt, init_vec, ix);
e->index = NULL_TREE;
e->value = base_init;
}
base_inits = build_constructor (init_list_type_node, init_vec);
/* get_tinfo_ptr might have reallocated the tinfo_descs vector. */
- ti = VEC_index (tinfo_s, tinfo_descs, tk_index);
+ ti = &VEC_index (tinfo_s, tinfo_descs, tk_index);
return class_initializer (ti, type, 3,
build_int_cst (NULL_TREE, hint),
build_int_cst (NULL_TREE, nbases),
@@ -1214,7 +1214,7 @@ create_pseudo_type_info (int tk, const char *real_name, ...)
fields = build_decl (input_location,
FIELD_DECL, NULL_TREE,
VEC_index (tinfo_s, tinfo_descs,
- TK_TYPE_INFO_TYPE)->type);
+ TK_TYPE_INFO_TYPE).type);
/* Now add the derived fields. */
while ((field_decl = va_arg (ap, tree)))
@@ -1228,7 +1228,7 @@ create_pseudo_type_info (int tk, const char *real_name, ...)
finish_builtin_struct (pseudo_type, pseudo_name, fields, NULL_TREE);
CLASSTYPE_AS_BASE (pseudo_type) = pseudo_type;
- ti = VEC_index (tinfo_s, tinfo_descs, tk);
+ ti = &VEC_index (tinfo_s, tinfo_descs, tk);
ti->type = cp_build_qualified_type (pseudo_type, TYPE_QUAL_CONST);
ti->name = get_identifier (real_name);
ti->vtable = NULL_TREE;
@@ -1321,7 +1321,7 @@ get_pseudo_ti_index (tree type)
while (VEC_iterate (tinfo_s, tinfo_descs, len++, ti))
ti->type = ti->vtable = ti->name = NULL_TREE;
}
- else if (VEC_index (tinfo_s, tinfo_descs, ix)->type)
+ else if (VEC_index (tinfo_s, tinfo_descs, ix).type)
/* already created. */
break;
@@ -1335,7 +1335,7 @@ get_pseudo_ti_index (tree type)
array_domain = build_index_type (size_int (num_bases));
base_array =
build_array_type (VEC_index (tinfo_s, tinfo_descs,
- TK_BASE_TYPE)->type,
+ TK_BASE_TYPE).type,
array_domain);
push_abi_namespace ();
@@ -1387,7 +1387,7 @@ create_tinfo_types (void)
DECL_CHAIN (field) = fields;
fields = field;
- ti = VEC_index (tinfo_s, tinfo_descs, TK_TYPE_INFO_TYPE);
+ ti = &VEC_index (tinfo_s, tinfo_descs, TK_TYPE_INFO_TYPE);
ti->type = make_class_type (RECORD_TYPE);
ti->vtable = NULL_TREE;
ti->name = NULL_TREE;
@@ -1427,7 +1427,7 @@ create_tinfo_types (void)
DECL_CHAIN (field) = fields;
fields = field;
- ti = VEC_index (tinfo_s, tinfo_descs, TK_BASE_TYPE);
+ ti = &VEC_index (tinfo_s, tinfo_descs, TK_BASE_TYPE);
ti->type = make_class_type (RECORD_TYPE);
ti->vtable = NULL_TREE;
diff --git a/gcc/cp/semantics.c b/gcc/cp/semantics.c
index 230e967..ebac960 100644
--- a/gcc/cp/semantics.c
+++ b/gcc/cp/semantics.c
@@ -161,7 +161,7 @@ resume_deferring_access_checks (void)
{
if (!deferred_access_no_check)
VEC_last (deferred_access, deferred_access_stack)
- ->deferring_access_checks_kind = dk_deferred;
+ .deferring_access_checks_kind = dk_deferred;
}
/* Stop deferring access checks. */
@@ -171,7 +171,7 @@ stop_deferring_access_checks (void)
{
if (!deferred_access_no_check)
VEC_last (deferred_access, deferred_access_stack)
- ->deferring_access_checks_kind = dk_no_deferred;
+ .deferring_access_checks_kind = dk_no_deferred;
}
/* Discard the current deferred access checks and restore the
@@ -198,7 +198,7 @@ get_deferred_access_checks (void)
return NULL;
else
return (VEC_last (deferred_access, deferred_access_stack)
- ->deferred_access_checks);
+ .deferred_access_checks);
}
/* Take current deferred checks and combine with the
@@ -216,10 +216,10 @@ pop_to_parent_deferring_access_checks (void)
deferred_access *ptr;
checks = (VEC_last (deferred_access, deferred_access_stack)
- ->deferred_access_checks);
+ .deferred_access_checks);
VEC_pop (deferred_access, deferred_access_stack);
- ptr = VEC_last (deferred_access, deferred_access_stack);
+ ptr = &VEC_last (deferred_access, deferred_access_stack);
if (ptr->deferring_access_checks_kind == dk_no_deferred)
{
/* Check access. */
@@ -321,7 +321,7 @@ perform_or_defer_access_check (tree binfo, tree decl, tree diag_decl,
gcc_assert (TREE_CODE (binfo) == TREE_BINFO);
- ptr = VEC_last (deferred_access, deferred_access_stack);
+ ptr = &VEC_last (deferred_access, deferred_access_stack);
/* If we are not supposed to defer access checks, just check now. */
if (ptr->deferring_access_checks_kind == dk_no_deferred)
@@ -5948,7 +5948,7 @@ build_constexpr_constructor_member_initializers (tree type, tree body)
if (VEC_length (constructor_elt, vec) > 0)
{
/* In a delegating constructor, return the target. */
- constructor_elt *ce = VEC_index (constructor_elt, vec, 0);
+ constructor_elt *ce = &VEC_index (constructor_elt, vec, 0);
if (ce->index == current_class_ptr)
{
body = ce->value;
@@ -6820,7 +6820,7 @@ cxx_eval_array_reference (const constexpr_call *call, tree t,
}
i = tree_low_cst (index, 0);
if (TREE_CODE (ary) == CONSTRUCTOR)
- return VEC_index (constructor_elt, CONSTRUCTOR_ELTS (ary), i)->value;
+ return VEC_index (constructor_elt, CONSTRUCTOR_ELTS (ary), i).value;
else if (elem_nchars == 1)
return build_int_cst (cv_unqualified (TREE_TYPE (TREE_TYPE (ary))),
TREE_STRING_POINTER (ary)[i]);
diff --git a/gcc/cp/tree.c b/gcc/cp/tree.c
index bfbde04..7cc2457 100644
--- a/gcc/cp/tree.c
+++ b/gcc/cp/tree.c
@@ -1337,7 +1337,7 @@ strip_typedefs_expr (tree t)
type = strip_typedefs (TREE_TYPE (t));
for (i = 0; i < n; ++i)
{
- constructor_elt *e = VEC_index (constructor_elt, vec, i);
+ constructor_elt *e = &VEC_index (constructor_elt, vec, i);
tree op = strip_typedefs_expr (e->value);
if (op != e->value)
{
diff --git a/gcc/cp/typeck2.c b/gcc/cp/typeck2.c
index 326f602..2180535 100644
--- a/gcc/cp/typeck2.c
+++ b/gcc/cp/typeck2.c
@@ -1178,7 +1178,7 @@ process_init_constructor_record (tree type, tree init,
if (idx < VEC_length (constructor_elt, CONSTRUCTOR_ELTS (init)))
{
- constructor_elt *ce = VEC_index (constructor_elt,
+ constructor_elt *ce = &VEC_index (constructor_elt,
CONSTRUCTOR_ELTS (init), idx);
if (ce->index)
{
@@ -1305,7 +1305,7 @@ process_init_constructor_union (tree type, tree init,
VEC_block_remove (constructor_elt, CONSTRUCTOR_ELTS (init), 1, len-1);
}
- ce = VEC_index (constructor_elt, CONSTRUCTOR_ELTS (init), 0);
+ ce = &VEC_index (constructor_elt, CONSTRUCTOR_ELTS (init), 0);
/* If this element specifies a field, initialize via that field. */
if (ce->index)
diff --git a/gcc/df-scan.c b/gcc/df-scan.c
index df90365..55492fa 100644
--- a/gcc/df-scan.c
+++ b/gcc/df-scan.c
@@ -4448,7 +4448,6 @@ df_bb_verify (basic_block bb)
if (!INSN_P (insn))
continue;
df_insn_refs_verify (&collection_rec, bb, insn, true);
- df_free_collection_rec (&collection_rec);
}
/* Do the artificial defs and uses. */
diff --git a/gcc/tree-sra.c b/gcc/tree-sra.c
index 3c94b79..504f9a8 100644
--- a/gcc/tree-sra.c
+++ b/gcc/tree-sra.c
@@ -3999,7 +3999,7 @@ splice_all_param_accesses (VEC (access_p, heap) **representatives)
result = UNUSED_PARAMS;
}
else
- VEC_quick_push (access_p, *representatives, NULL);
+ VEC_quick_push (access_p, *representatives, (access_p) NULL);
}
if (result == NO_GOOD_ACCESS)
@@ -4208,7 +4208,7 @@ get_adjustment_for_base (ipa_parm_adjustment_vec adjustments, tree base)
{
struct ipa_parm_adjustment *adj;
- adj = VEC_index (ipa_parm_adjustment_t, adjustments, i);
+ adj = &VEC_index (ipa_parm_adjustment_t, adjustments, i);
if (!adj->copy_param && adj->base == base)
return adj;
}
@@ -4315,7 +4315,7 @@ sra_ipa_modify_expr (tree *expr, bool convert,
for (i = 0; i < len; i++)
{
- adj = VEC_index (ipa_parm_adjustment_t, adjustments, i);
+ adj = &VEC_index (ipa_parm_adjustment_t, adjustments, i);
if (adj->base == base &&
(adj->offset == offset || adj->remove_param))
@@ -4522,7 +4522,7 @@ sra_ipa_reset_debug_stmts (ipa_parm_adjustment_vec adjustments)
tree name, vexpr, copy = NULL_TREE;
use_operand_p use_p;
- adj = VEC_index (ipa_parm_adjustment_t, adjustments, i);
+ adj = &VEC_index (ipa_parm_adjustment_t, adjustments, i);
if (adj->copy_param || !is_gimple_reg (adj->base))
continue;
name = ssa_default_def (cfun, adj->base);
diff --git a/gcc/tree-ssa-address.c b/gcc/tree-ssa-address.c
index ddab7d8..57a590d 100644
--- a/gcc/tree-ssa-address.c
+++ b/gcc/tree-ssa-address.c
@@ -215,7 +215,8 @@ addr_for_mem_ref (struct mem_address *addr, addr_space_t as,
templ_index + 1);
/* Reuse the templates for addresses, so that we do not waste memory. */
- templ = VEC_index (mem_addr_template, mem_addr_template_list, templ_index);
+ templ = &VEC_index (mem_addr_template, mem_addr_template_list,
+ templ_index);
if (!templ->ref)
{
sym = (addr->symbol ?
diff --git a/gcc/tree-ssa-dom.c b/gcc/tree-ssa-dom.c
index d2a4128..a7b644f 100644
--- a/gcc/tree-ssa-dom.c
+++ b/gcc/tree-ssa-dom.c
@@ -1734,7 +1734,8 @@ dom_opt_enter_block (struct dom_walk_data *walk_data ATTRIBUTE_UNUSED,
/* Push a marker on the stacks of local information so that we know how
far to unwind when we finalize this block. */
- VEC_safe_push (expr_hash_elt_t, heap, avail_exprs_stack, NULL);
+ VEC_safe_push (expr_hash_elt_t, heap, avail_exprs_stack,
+ (expr_hash_elt_t)NULL);
VEC_safe_push (tree, heap, const_and_copies_stack, NULL_TREE);
record_equivalences_from_incoming_edge (bb);
@@ -1745,7 +1746,8 @@ dom_opt_enter_block (struct dom_walk_data *walk_data ATTRIBUTE_UNUSED,
/* Create equivalences from redundant PHIs. PHIs are only truly
redundant when they exist in the same block, so push another
marker and unwind right afterwards. */
- VEC_safe_push (expr_hash_elt_t, heap, avail_exprs_stack, NULL);
+ VEC_safe_push (expr_hash_elt_t, heap, avail_exprs_stack,
+ (expr_hash_elt_t)NULL);
for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
eliminate_redundant_computations (&gsi);
remove_local_expressions_from_table ();
@@ -1800,7 +1802,8 @@ dom_opt_leave_block (struct dom_walk_data *walk_data, basic_block bb)
/* Push a marker onto the available expression stack so that we
unwind any expressions related to the TRUE arm before processing
the false arm below. */
- VEC_safe_push (expr_hash_elt_t, heap, avail_exprs_stack, NULL);
+ VEC_safe_push (expr_hash_elt_t, heap, avail_exprs_stack,
+ (expr_hash_elt_t)NULL);
VEC_safe_push (tree, heap, const_and_copies_stack, NULL_TREE);
edge_info = (struct edge_info *) true_edge->aux;
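
The casts added in the tree-sra.c and tree-ssa-dom.c hunks above
((access_p) NULL, (expr_hash_elt_t) NULL) deal with the push operations now
coming in two overloads, one taking T and one taking T *.  When the element
type is itself a pointer, a bare NULL converts equally well to either
parameter type and the call is ambiguous, so the call site has to spell out
the type.  A minimal sketch of the ambiguity, using a hypothetical "toy_vec"
with illustrative names rather than GCC's vec.h:

  #include <cstddef>

  struct expr_hash_elt;
  typedef struct expr_hash_elt *expr_hash_elt_t;

  template <typename T>
  struct toy_vec
  {
    /* By-value overload; storage is omitted, this only shows resolution.  */
    void safe_push (T obj) { (void) obj; }
    /* Pointer overload.  */
    void safe_push (const T *ptr) { (void) ptr; }
  };

  int
  main (void)
  {
    toy_vec<expr_hash_elt_t> avail_exprs;

    /* avail_exprs.safe_push (NULL);  -- ambiguous: NULL converts both to
       expr_hash_elt_t and to expr_hash_elt_t *.  */
    avail_exprs.safe_push ((expr_hash_elt_t) NULL);  /* Resolves to the
                                                        by-value overload.  */
    return 0;
  }
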
diff --git a/gcc/hw-doloop.c b/gcc/hw-doloop.c
index 925039f..cd9b3f2 100644
--- a/gcc/hw-doloop.c
+++ b/gcc/hw-doloop.c
@@ -345,12 +345,12 @@ discover_loop (hwloop_info loop, basic_block tail_bb, rtx tail_insn, rtx reg)
}
/* Analyze the structure of the loops in the current function. Use
- STACK for bitmap allocations. Returns all the valid candidates for
+ LOOP_STACK for bitmap allocations. Returns all the valid candidates for
hardware loops found in this function. HOOKS is the argument
passed to reorg_loops, used here to find the iteration registers
from a loop_end pattern. */
static hwloop_info
-discover_loops (bitmap_obstack *stack, struct hw_doloop_hooks *hooks)
+discover_loops (bitmap_obstack *loop_stack, struct hw_doloop_hooks *hooks)
{
hwloop_info loops = NULL;
hwloop_info loop;
@@ -406,7 +406,7 @@ discover_loops (bitmap_obstack *stack, struct hw_doloop_hooks *hooks)
loops = loop;
loop->loop_no = nloops++;
loop->blocks = VEC_alloc (basic_block, heap, 20);
- loop->block_bitmap = BITMAP_ALLOC (stack);
+ loop->block_bitmap = BITMAP_ALLOC (loop_stack);
if (dump_file)
{
@@ -626,18 +626,18 @@ reorg_loops (bool do_reorder, struct hw_doloop_hooks *hooks)
{
hwloop_info loops = NULL;
hwloop_info loop;
- bitmap_obstack stack;
+ bitmap_obstack loop_stack;
df_live_add_problem ();
df_live_set_all_dirty ();
df_analyze ();
- bitmap_obstack_initialize (&stack);
+ bitmap_obstack_initialize (&loop_stack);
if (dump_file)
fprintf (dump_file, ";; Find loops, first pass\n\n");
- loops = discover_loops (&stack, hooks);
+ loops = discover_loops (&loop_stack, hooks);
if (do_reorder)
{
@@ -647,7 +647,7 @@ reorg_loops (bool do_reorder, struct hw_doloop_hooks *hooks)
if (dump_file)
fprintf (dump_file, ";; Find loops, second pass\n\n");
- loops = discover_loops (&stack, hooks);
+ loops = discover_loops (&loop_stack, hooks);
}
for (loop = loops; loop; loop = loop->next)
diff --git a/gcc/config/bfin/bfin.c b/gcc/config/bfin/bfin.c
index 3a4b8af..0a0d702 100644
--- a/gcc/config/bfin/bfin.c
+++ b/gcc/config/bfin/bfin.c
@@ -3478,7 +3478,7 @@ hwloop_optimize (hwloop_info loop)
/* If we have to insert the LSETUP before a jump, count that jump in the
length. */
if (VEC_length (edge, loop->incoming) > 1
- || !(VEC_last (edge, loop->incoming)->flags & EDGE_FALLTHRU))
+ || !(VEC_last (edge, loop->incoming).flags & EDGE_FALLTHRU))
{
gcc_assert (JUMP_P (insn));
insn = PREV_INSN (insn);
@@ -3747,7 +3747,7 @@ hwloop_optimize (hwloop_info loop)
{
rtx prev = BB_END (loop->incoming_src);
if (VEC_length (edge, loop->incoming) > 1
- || !(VEC_last (edge, loop->incoming)->flags & EDGE_FALLTHRU))
+ || !(VEC_last (edge, loop->incoming).flags & EDGE_FALLTHRU))
{
gcc_assert (JUMP_P (prev));
prev = PREV_INSN (prev);
diff --git a/gcc/config/c6x/c6x.c b/gcc/config/c6x/c6x.c
index a189a1d..1905504 100644
--- a/gcc/config/c6x/c6x.c
+++ b/gcc/config/c6x/c6x.c
@@ -126,7 +126,7 @@ DEF_VEC_ALLOC_O(c6x_sched_insn_info, heap);
static VEC(c6x_sched_insn_info, heap) *insn_info;
#define INSN_INFO_LENGTH (VEC_length (c6x_sched_insn_info, insn_info))
-#define INSN_INFO_ENTRY(N) (*VEC_index (c6x_sched_insn_info, insn_info, (N)))
+#define INSN_INFO_ENTRY(N) (VEC_index (c6x_sched_insn_info, insn_info, (N)))
static bool done_cfi_sections;
@@ -3448,8 +3448,8 @@ try_rename_operands (rtx head, rtx tail, unit_req_table reqs, rtx insn,
{
unsigned int mask1, mask2, mask_changed;
int count, side1, side2, req1, req2;
- insn_rr_info *this_rr = VEC_index (insn_rr_info, insn_rr,
- INSN_UID (chain->insn));
+ insn_rr_info *this_rr = &VEC_index (insn_rr_info, insn_rr,
+ INSN_UID (chain->insn));
count = get_unit_reqs (chain->insn, &req1, &side1, &req2, &side2);
@@ -3555,7 +3555,7 @@ reshuffle_units (basic_block loop)
if (!get_unit_operand_masks (insn, &mask1, &mask2))
continue;
- info = VEC_index (insn_rr_info, insn_rr, INSN_UID (insn));
+ info = &VEC_index (insn_rr_info, insn_rr, INSN_UID (insn));
if (info->op_info == NULL)
continue;
diff --git a/gcc/config/mips/mips.c b/gcc/config/mips/mips.c
index f36f65b..3688136 100644
--- a/gcc/config/mips/mips.c
+++ b/gcc/config/mips/mips.c
@@ -3971,7 +3971,8 @@ mips_multi_start (void)
static struct mips_multi_member *
mips_multi_add (void)
{
- return VEC_safe_push (mips_multi_member, heap, mips_multi_members, 0);
+ return VEC_safe_push (mips_multi_member, heap, mips_multi_members,
+ (struct mips_multi_member *) 0);
}
/* Add a normal insn with the given asm format to the current multi-insn
@@ -4026,7 +4027,7 @@ mips_multi_copy_insn (unsigned int i)
struct mips_multi_member *member;
member = mips_multi_add ();
- memcpy (member, VEC_index (mips_multi_member, mips_multi_members, i),
+ memcpy (member, &VEC_index (mips_multi_member, mips_multi_members, i),
sizeof (*member));
gcc_assert (!member->is_label_p);
}
@@ -4038,7 +4039,7 @@ mips_multi_copy_insn (unsigned int i)
static void
mips_multi_set_operand (unsigned int i, unsigned int op, rtx x)
{
- VEC_index (mips_multi_member, mips_multi_members, i)->operands[op] = x;
+ VEC_index (mips_multi_member, mips_multi_members, i).operands[op] = x;
}
/* Write out the asm code for the current multi-insn sequence. */