This is the mail archive of the gcc-help@gcc.gnu.org mailing list for the GCC project.



Re: Help working out where GCC decides the size of 0xFFFFFFFF


Hi,

On Mon, Apr 20, 2015 at 3:05 AM, Oleg Endo <oleg.endo@t-online.de> wrote:

> Somehow, my initial guess was that your host system is 64 bit ...
> Probably this is the problem.  When something picks up the wrong env
> settings during some sub-configure step, it's probably not sufficient to
> fix it up in the generated Makefile only.  There have been some issues
> w.r.t. CFLAGS / CXXFLAGS and *FLAGS_FOR_TARGET etc usage in the
> configury stuff, maybe this is yet another one.  In this case you might
> want to open a PR for that.

Yeah, I was looking into the FLAGS_FOR_TARGET stuff and had trouble
getting it to work. Maybe I should revisit that and see if I can get it
working.

> Maybe the problem is also because your native SH compiler is built for
> SH2E, which always truncates doubles to floats.  However, this is just a
> wild guess.  You could confirm or refute this by building the native SH
> compiler for SH2 (-m2), which will use software FP for float and double.
> Note though that SH2 (-m2) and SH2E (-m2e) are not binary compatible
> due to FPU presence/absence.

I think I am slowly narrowing down where the bad decision is being
made. It seems an incorrect result comes from a call to:

bool wi::fits_to_tree_p(const T &x, const_tree type)

and more specifically from the call to eq_p() after it does a zext.

I am not sure yet if the issue is in that call, or if the tree data is
incorrectly formed going in, but at least for the time being I have
some leads to follow.
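
As a rough standalone sketch of what I think that check amounts to
(not the actual wide-int code -- the helper names and the 64-bit
container are my own simplification), it is basically "extend to the
type's precision and compare with the original value":

  #include <cstdint>
  #include <cstdio>

  // Made-up stand-ins for zext/sext/eq_p, modelling the value as a
  // 64-bit container plus a precision.
  static uint64_t zext64 (uint64_t x, unsigned prec)
  {
    return prec >= 64 ? x : x & ((uint64_t{1} << prec) - 1);
  }

  static uint64_t sext64 (uint64_t x, unsigned prec)
  {
    if (prec >= 64)
      return x;
    uint64_t m = uint64_t{1} << (prec - 1);
    return (zext64 (x, prec) ^ m) - m;
  }

  // "Does x fit in a type of this precision and signedness?"
  static bool fits_p (uint64_t x, unsigned prec, bool unsigned_p)
  {
    return unsigned_p ? x == zext64 (x, prec)
                      : x == sext64 (x, prec);
  }

  int main ()
  {
    // 0xFFFFFFFF held as-is fits a 32-bit unsigned type ...
    printf ("%d\n", fits_p (0xFFFFFFFFull, 32, true));          // 1
    // ... but if something upstream sign-extended it to 64 bits,
    // the same check fails and a wider type would get picked.
    printf ("%d\n", fits_p (0xFFFFFFFFFFFFFFFFull, 32, true));  // 0
    return 0;
  }

If the value reaching fits_to_tree_p on the native SH build already
looks like the second case, that would explain why 0xFFFFFFFF ends up
being given a 64-bit type.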

If I don't get anywhere with that I will try the -m2 option and see if
that sheds any light.

Cheers

Alex

