This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug bootstrap/61320] [4.10 regression] ICE in jcf-parse.c:1622 (parse_class_file)
- From: "rguenth at gcc dot gnu.org" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Tue, 03 Jun 2014 11:20:21 +0000
- Subject: [Bug bootstrap/61320] [4.10 regression] ICE in jcf-parse.c:1622 (parse_class_file)
- Auto-submitted: auto-generated
- References: <bug-61320-4 at http dot gcc dot gnu dot org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61320
--- Comment #12 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to Eric Botcazou from comment #11)
> > So I am testing the patch right now and should be able to send it tomorrow.
> > However, the code should already mark the load with the actual
> > alignment the access is being done with. Therefore it seems to me that
> > something in the backend fails to split the unaligned load into several
> > aligned loads.
>
> But what would be the point of this round trip exactly?
I'd say
Index: tree-ssa-math-opts.c
===================================================================
--- tree-ssa-math-opts.c (revision 211170)
+++ tree-ssa-math-opts.c (working copy)
@@ -2149,7 +2149,8 @@ bswap_replace (gimple stmt, gimple_stmt_
unsigned align;
align = get_object_alignment (src);
- if (bswap && SLOW_UNALIGNED_ACCESS (TYPE_MODE (load_type), align))
+ if (align < GET_MODE_ALIGNMENT (TYPE_MODE (load_type))
+ && SLOW_UNALIGNED_ACCESS (TYPE_MODE (load_type), align))
return false;
/* Compute address to load from and cast according to the size
is obvious (and pre-approved).
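
The patch changes the bail-out condition: instead of rejecting every load on a target where unaligned accesses are slow, it only rejects when the source is actually under-aligned for the load mode. As a minimal sketch of that predicate (load_is_too_costly, mode_alignment, and slow_unaligned are hypothetical stand-ins for the GCC internals GET_MODE_ALIGNMENT and SLOW_UNALIGNED_ACCESS, not real GCC API):

```c
#include <stdbool.h>

/* Sketch of the gating logic from the patch above: give up on the
   bswap replacement only when the object's known alignment is below
   what the load mode requires AND the target penalizes unaligned
   accesses.  A sufficiently aligned load is fine either way.  */
static bool
load_is_too_costly (unsigned align, unsigned mode_alignment,
                    bool slow_unaligned)
{
  return align < mode_alignment && slow_unaligned;
}
```

For example, a 32-bit load from a byte-aligned source on a strict-alignment target would be rejected, while the same load from a 32-bit-aligned source would be accepted even on that target.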