xom-interest - Re: [XOM-interest] Text.copy
  • From: Steve Loughran <steve.loughran AT gmail.com>
  • Cc: xom-interest AT lists.ibiblio.org
  • Subject: Re: [XOM-interest] Text.copy
  • Date: Thu, 25 Nov 2004 23:46:55 +0000

On Thu, 25 Nov 2004 15:16:27 -0800, Wolfgang Hoschek <whoschek AT lbl.gov> wrote:
> I've seen that presentation some years ago, Steve.
>
> There are many cases where cost/benefit and price/performance do not
> pay off, no doubt. Gains tend to be high initially, then level off
> beyond some point, with little ROI. But there are no universal truths.
> Fruits often hang low, they accumulate and, quite possibly, multiply.
> Try running a throughput app with JDK 1.3 client VM and a normal
> parser, then try running with 1.5 server VM and a binary codec. With a
> little luck, there may well be 2 orders of magnitude speedup here.

If you look at modern CPUs, the key problems are:
- branch misprediction: get it wrong and you lose all your speculative work.
- cache misses: a fetch from main memory costs hundreds of cycles while the pipeline stalls.
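
To make the branch problem concrete, here is a small, self-contained
Java demo (mine, not from any XOM code): the same loop over the same
numbers runs noticeably faster once the data is sorted, purely because
the branch becomes predictable.

    import java.util.Arrays;
    import java.util.Random;

    // Identical work in both passes; only branch predictability differs.
    public class BranchDemo {
        public static void main(String[] args) {
            int[] data = new int[1 << 20];
            Random rnd = new Random(42);
            for (int i = 0; i < data.length; i++) {
                data[i] = rnd.nextInt(256);
            }
            time("unsorted", data);   // branch outcome ~random: mispredicts
            Arrays.sort(data);
            time("sorted", data);     // branch outcome predictable
        }

        static void time(String label, int[] data) {
            long start = System.nanoTime();
            long sum = 0;
            for (int pass = 0; pass < 100; pass++) {
                for (int v : data) {
                    if (v >= 128) {   // the data-dependent branch under test
                        sum += v;
                    }
                }
            }
            long ms = (System.nanoTime() - start) / 1000000;
            System.out.println(label + ": " + ms + " ms (sum=" + sum + ")");
        }
    }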

In x86-land you can deal with this with a bit of selective inline
assembler: http://www.iseran.com/Win32/CodeForSpeed/
For example, from the P6 on up there are the conditional move opcodes,
and later cores added prefetch operations to pull data into the L1, L2
or L3 caches. Yet there is no way to issue those opcodes from Java
except through JNI, and a JNI call costs you around 300 cycles on a
PII, which makes it worthless for a single-instruction hint.
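
You can't ask for a cmov from Java; about all you can do is write the
code in a shape the JIT is able to turn into one. A sketch (whether
HotSpot actually emits cmov here depends entirely on the JIT version
and the target CPU):

    // Two ways to pick a max. The ternary is a natural candidate for a
    // conditional move; the if/else may well compile to a real branch.
    // Neither is guaranteed: the JIT decides.
    class MaxDemo {
        static int branchyMax(int a, int b) {
            if (a > b) {
                return a;
            }
            return b;
        }

        static int branchlessCandidateMax(int a, int b) {
            return (a > b) ? a : b;   // shape the JIT can lower to cmov
        }
    }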

Imagine if every graph traversal hinted that both children of a binary
tree node should be prefetched, or the next link in a list. Or if an
XML parser hinted at where it would be looking for data next. Now,
most people wouldn't want to or need to bother with that junk, but we
would only need the implementors of things like StringBuffer,
LinkedList, Hash{table,Map} and a few of the IO streams to use it, and
life would be much better. The nice thing about a hint that an object
was about to be dereferenced is that the JVM could simply discard it
on cores with no prefetch support, like those little ARM CPUs in the
mobile phones everywhere.
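
Lacking a real prefetch hint, about the only thing a library author
can do in pure Java is touch data a few links ahead of where the
traversal is working, so the cache miss overlaps with useful work. An
untested sketch (Node and traverse are made-up names, and any payoff
is entirely CPU- and JVM-dependent):

    // Hand-rolled "prefetch": run a look-ahead pointer a few links in
    // front of the cursor and read a field through it, so the future
    // node's cache line is (hopefully) in flight while we work.
    class PrefetchDemo {
        static final class Node {
            int value;
            Node next;
        }

        static long prefetchSink;   // accumulating here stops the JIT
                                    // treating the look-ahead read as dead

        static long traverse(Node head) {
            Node ahead = head;
            for (int i = 0; i < 4 && ahead != null; i++) {
                ahead = ahead.next;   // push the look-ahead 4 links out
            }
            long sum = 0;
            for (Node n = head; n != null; n = n.next) {
                if (ahead != null) {
                    prefetchSink += ahead.value;   // the "prefetch" touch
                    ahead = ahead.next;
                }
                sum += n.value;                    // the real work
            }
            return sum;
        }
    }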

Also, you don't even know how the JVM is going to lay out your data,
so you can never be sure that your object's shared-read data is on a
different cache line from its shared-write data, which hurts
multiprocessor synchronisation no end. And since cache line sizes vary
so much across CPUs, that is a hard one; the only real solution is not
to keep shared-read and shared-write data in the same object.
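
Short of splitting the object, the folk remedy is manual padding, on
the understanding that the JVM is free to reorder fields and may
defeat it. A made-up sketch of the idea:

    // Keep read-mostly and write-hot fields apart, with padding meant to
    // push the hot counter onto its own cache line. The JVM spec doesn't
    // guarantee field order, so this is a best-effort hack, not a contract.
    class Counters {
        final int[] lookupTable;   // shared-read: every thread reads this

        long p0, p1, p2, p3, p4, p5, p6, p7;   // ~64 bytes of padding

        volatile long hits;        // shared-write: ping-pongs between CPUs

        Counters(int[] lookupTable) {
            this.lookupTable = lookupTable;
        }
    }

The cleaner fix, as above, is to hive the write-hot counter off into
its own object entirely.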

-Steve



