This is the mail archive of the java@gcc.gnu.org mailing list for the Java project.



Re: [Patch] Testcase for PR26858


Andrew Haley wrote:
David Daney writes:
 > This is the test case for PR26858 and another PR to be named in the future.

This one is going to be quite interesting to fix.  I think we should
start some sort of conversation with kernel engineers and Hans Boehm
to see what can be done.

One thing that immediately occurs to me is to return to using sbrk()
to get memory pages for the heap, but that fails when creating
trampolines because the memory doesn't have execute permission set.
However, we could use sbrk() and then alter the page permissions with
mprotect().  I see nothing that says we're allowed to do this, but I'm
fairly sure it will work.


The problem with using sbrk() is that there is no guarantee that code outside our control will not mmap things in low memory. If libgcj is embedded in a web browser, for example, the browser might map things there. Since switching back to sbrk() will not fully solve the problem, I think we should try to solve it in a different manner.


We really need the kernel/glibc, by default, to not map things in low memory, where the definition of 'low' is admittedly a little murky. However, there is a reason the kernel started mapping things in low memory (to reduce address-space fragmentation), so I foresee a heated discussion with the kernel hackers if we ask them to change it.

I have not run the testcase on a 64-bit system. In a 64-bit address space there is no reason a very large block of low memory cannot be left unmapped. For a 32-bit system, I don't think leaving something like 16 or 64 64K pages unmapped would be the end of the world.

We need to have a hard limit on the area that we require; otherwise I think it would be difficult to get buy-in from others to fix things. Saying that we have a pathological case where we require more than the lowest 4K to be unmapped is fine, but we need a hard upper limit. It would be hard to argue for raising the limit if it were still possible to generate code that would break under any possible limit.

Thus my two ideas (as stated on IRC yesterday):

1) Have the compiler generate checks for field accesses in large classes.

2) If #1 breaks the BC ABI or makes it too messy, refuse to execute code for the pathological cases. It should be possible for the runtime to determine the size of the unmapped low region and throw an Error at class-link time if a field access via a null pointer would fall outside that area. For things compiled with the C++ ABI, ignore the problem, as you have to be trusted to execute such code; if you have C++ ABI code that fails in this manner, you deserve what happens.

Well, that is a quick brain dump. I don't really know what the best approach is. From a purely practical point of view I don't really care, because with my current code base I will never be affected by this problem.

David Daney

