gcj's IO performance vs blackdown JDK

Mohan Embar gnustuff@thisiscool.com
Wed Dec 24 14:44:00 GMT 2003


Hi Bryce,

>Interesting. I actually already rewrote much of BufferedReader (based 
>partly on the BufferedInputStream code) and got a small speed 
>improvement (along with a big reduction in complexity), before 
>realizing that the real performance problem was in InputStreamReader 
>where the character conversion occurs. What was happening is that 
>InputStreamReader was converting characters into a small internal 
>buffer (only 100 chars IIRC), so when BufferedReader called read() on 
>the InputStreamReader it only got a fraction of the amount of data 
>needed to fill up its buffer. This also meant that there was an extra 
>unnecessary layer of copying going on. Solution: fix InputStreamReader 
>to decode characters directly into the array given to it in the read() 
>method. This brought us up to about equal or slightly faster than the 
>JRE 1.4.2 on Chris's test.

Nice catch.
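For anyone following along, here is roughly what decoding straight into the
caller's array looks like. This is my own illustrative sketch built on
java.nio's CharsetDecoder, not the actual libgcj patch; the class name,
buffer size, and error handling (REPLACE, no final flush) are all my
assumptions:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

// Decodes bytes from the stream directly into the caller's char[],
// avoiding the small intermediate char buffer (and the extra copy)
// described above. Error handling and the final decoder flush are
// simplified for the sake of the sketch.
class DirectDecodingReader {
    private final InputStream in;
    private final CharsetDecoder decoder;
    private final ByteBuffer bytes = ByteBuffer.allocate(8192);
    private boolean eof = false;

    DirectDecodingReader(InputStream in, Charset cs) {
        this.in = in;
        this.decoder = cs.newDecoder()
                .onMalformedInput(CodingErrorAction.REPLACE)
                .onUnmappableCharacter(CodingErrorAction.REPLACE);
        bytes.limit(0); // start with an empty byte buffer
    }

    // Refill the byte buffer, preserving any partial multibyte sequence.
    private void fill() throws IOException {
        bytes.compact();
        int n = in.read(bytes.array(), bytes.position(), bytes.remaining());
        if (n < 0) eof = true;
        else bytes.position(bytes.position() + n);
        bytes.flip();
    }

    // Decode characters straight into buf[off .. off+len).
    int read(char[] buf, int off, int len) throws IOException {
        CharBuffer out = CharBuffer.wrap(buf, off, len);
        while (out.position() == off && !(eof && !bytes.hasRemaining())) {
            if (!bytes.hasRemaining()) fill();
            decoder.decode(bytes, out, eof);
            if (out.position() == off && bytes.hasRemaining() && !eof)
                fill(); // stuck on a partial sequence: need more bytes
        }
        int n = out.position() - off;
        return (n == 0) ? -1 : n;
    }

    public static void main(String[] args) throws IOException {
        byte[] utf8 = "héllo".getBytes(StandardCharsets.UTF_8);
        DirectDecodingReader r = new DirectDecodingReader(
                new ByteArrayInputStream(utf8), StandardCharsets.UTF_8);
        char[] buf = new char[16];
        int n = r.read(buf, 0, buf.length);
        System.out.println(new String(buf, 0, n)); // prints "héllo"
    }
}
```

The point is that BufferedReader's read() hands over its full 8192-char
buffer and the decoder writes into it directly, so one call can fill it
in one pass with no intermediate copy.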

> From memory, my readLine() implementation only creates a StringBuffer 
>in the event that the line crosses a boundary between buffers (ie the 
>StringBuffer is used to store the start of a line when the buffer needs 
>to be refilled), so given a full 8192 char buffer, the need to create a 
>StringBuffer should be rare at least for input with "normal" sized 
>lines.

I agree. I was focusing on Chris's test case and I thought keeping
the StringBuffer around was relatively innocuous. I'm less convinced
about the extra effort needed to eliminate synchronization.
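The boundary-crossing readLine() strategy you describe might look
something like the sketch below. This is my own reconstruction, not your
actual code; the class name and buffer handling are assumptions, and '\r'
handling is omitted to keep it short:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// A line reader that builds each line directly from its internal buffer;
// a StringBuilder is allocated only when a line crosses a refill boundary.
class LazyLineReader {
    private final Reader in;
    private final char[] cbuf = new char[8192];
    private int pos = 0, limit = 0;

    LazyLineReader(Reader in) { this.in = in; }

    private boolean fill() throws IOException {
        int n = in.read(cbuf, 0, cbuf.length);
        if (n <= 0) return false;
        pos = 0;
        limit = n;
        return true;
    }

    String readLine() throws IOException {
        StringBuilder partial = null; // created lazily, and rarely
        while (true) {
            if (pos >= limit && !fill())
                return partial == null ? null : partial.toString();
            int i = pos;
            while (i < limit && cbuf[i] != '\n') i++;
            if (i < limit) { // newline found inside the buffer
                String line = (partial == null)
                        ? new String(cbuf, pos, i - pos)
                        : partial.append(cbuf, pos, i - pos).toString();
                pos = i + 1;
                return line;
            }
            // Line spans a buffer boundary: stash the fragment and refill.
            if (partial == null) partial = new StringBuilder();
            partial.append(cbuf, pos, limit - pos);
            pos = limit;
        }
    }

    public static void main(String[] args) throws IOException {
        LazyLineReader r = new LazyLineReader(
                new StringReader("one\ntwo\nthree"));
        String line;
        while ((line = r.readLine()) != null)
            System.out.println(line); // prints one, two, three
    }
}
```

With an 8192-char buffer, "normal" sized lines almost always fall
entirely inside one fill, so the common path allocates only the returned
String.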

While I've got you here, can you answer one of my testing-related questions?
When you "certify" a patch, do you run the libjava and Mauve tests against
both the pre-patch and post-patch versions and then diff the regression test
result output?

-- Mohan
http://www.thisiscool.com/
http://www.animalsong.org/




