xom-interest - Re: [XOM-interest] indexOf O(1) patch?

  • From: Wolfgang Hoschek <whoschek AT yahoo.com>
  • To: mike AT saxonica.com
  • Cc: elharo AT metalab.unc.edu, xom-interest AT lists.ibiblio.org
  • Subject: Re: [XOM-interest] indexOf O(1) patch?
  • Date: Tue, 1 Feb 2005 12:06:10 -0800 (PST)

I'm in no position to judge Saxon's TinyTree, but the cost of the
patch for XOM may intuitively be overestimated, as actual real-world
measurements show. For example, using new Builder().build(...) I
parsed a typical 25 MB file containing publication data on books and
papers (dblp.xml - publicly available) and measured various
configurations with java -verbose:gc. If anyone wants the file, let
me know.
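
In case anyone wants to reproduce the numbers, a harness along these
lines is all it takes (the file path and class name are just
placeholders, not the exact code I ran):

  import java.io.File;
  import nu.xom.Builder;
  import nu.xom.Document;

  // Parse the file, keep the tree alive, and read the heap figures
  // printed by -verbose:gc after a forced collection.
  public class ParseMemoryTest {
      public static void main(String[] args) throws Exception {
          Document doc = new Builder().build(new File("dblp.xml"));
          System.gc();
          System.out.println(doc.getRootElement().getLocalName());
      }
  }

Run it with: java -verbose:gc ParseMemoryTest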

A: xom-1.0 as shipped by Elliotte
B: A + 4 bytes per node for the sibling position
C: B + the stringified Text patch (for performance)
D: C + a patch for XOMHandler and NonVerifyingHandler that boosts
   performance AND reduces memory for the Text patch by
   document-level interning of whitespace text and CDATA (a large
   fraction of indented real-world documents). If there's interest I
   can send it; a rough sketch of the interning idea follows below.
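
To give an idea of what I mean by interning in D (this is only a
sketch, not the actual XOMHandler/NonVerifyingHandler patch):
whitespace-only text values are pooled per document, so the
thousands of identical indentation strings share a single instance.

  import java.util.HashMap;
  import java.util.Map;

  // Sketch of document-level interning of whitespace-only text values.
  final class WhitespacePool {
      private final Map pool = new HashMap(); // String -> String

      String intern(String text) {
          if (!isWhitespaceOnly(text)) return text; // pool whitespace only
          String cached = (String) pool.get(text);
          if (cached == null) {
              pool.put(text, text);
              cached = text;
          }
          return cached; // identical indentation strings share one instance
      }

      private static boolean isWhitespaceOnly(String s) {
          for (int i = 0; i < s.length(); i++) {
              char c = s.charAt(i);
              if (c != ' ' && c != '\t' && c != '\n' && c != '\r') return false;
          }
          return true;
      }
  }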

A: xom-1.0:                            [GC 152745K->136301K(223040K), 0.1535169 secs]
B: xom-1.0 + sibling:                  [GC 168007K->152434K(224928K), 0.1681365 secs]
C: xom-1.0 + sibling + SText - intern: [GC 191192K->181195K(233784K), 0.3170597 secs]
D: xom-1.0 + sibling + SText + intern: [GC 162883K->156848K(234400K), 0.4279120 secs]

B / A = 168007 / 152745 = 1.10 --> 10% difference
D / A = 162883 / 152745 = 1.07 -->  7% difference

A 7% difference in memory is minimal. XOM is already quite good at
memory consumption: 162883 KB / 25 MB is roughly 6.5 heap bytes per
file byte.

Having said that, there are plenty of unexplored opportunities in
XOM to reduce memory *without* compromising performance. ArrayLists
could be replaced with plain arrays, saving some 4 + 4 bytes per
Element and per Attribute list (a sketch follows below). Additional
namespace declarations consume bizarre amounts of memory, which is a
problem for artificially generated XML. QNames with prefixes are not
interned, so the substrings consume lots of memory; the baseURI of
each Node is almost always the same, so those 4 bytes could be
saved; the type of an attribute could be reduced from 4 bytes to 1
byte; and there are probably more potential improvements, with as
yet unquantified effects, that could be done but haven't been.
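
For instance, the ArrayList replacement could look roughly like this
(a sketch only, not the actual ParentNode internals):

  import nu.xom.Node;

  // Bare-bones child list backed directly by an array, avoiding the
  // ArrayList wrapper object and its bookkeeping fields.
  final class ChildList {
      private Node[] children; // lazily allocated
      private int size;

      void add(Node child) {
          if (children == null) {
              children = new Node[1];
          } else if (size == children.length) {
              Node[] bigger = new Node[size * 2];
              System.arraycopy(children, 0, bigger, 0, size);
              children = bigger;
          }
          children[size++] = child;
      }

      Node get(int index) { return children[index]; }

      int size() { return size; }
  }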

But more importantly, always keep in mind that working with very
large XML document trees is expensive, produces huge *tenured* heaps
that put enormous pressure on the VM allocator and collector, and,
above all, is inherently fragile: application data size and disk
storage capacity grow at a *much* higher rate (more than Moore's
law) than main memory size, and infinitely faster than any minor XOM
memory optimization tweak. Thus, if an application's files can still
just barely fit into memory today, they most likely won't fit
anymore six months down the road, and the app will break. There are
hard limits with main-memory trees, and no amount of tweaking will
make them go away in any significant manner.

Consequently, applications working on huge XML volumes fundamentally
need to be designed to reflect that fact: either by splitting
documents into manageable parts (almost always easy to do), or by
using a real streaming architecture, possibly in combination with a
DBMS. Folks trying to run an XSL transform or other app over a 1 GB
file might want to consider that...
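
XOM's NodeFactory already supports that kind of streaming, by the
way: subclass it and throw each record away as soon as it has been
processed, so the tree never holds more than one record at a time.
Roughly (the element name "article" is just an example):

  import java.io.File;
  import nu.xom.Builder;
  import nu.xom.Element;
  import nu.xom.NodeFactory;
  import nu.xom.Nodes;

  // Stream over a huge file: each completed record element is
  // handled and then dropped instead of being attached to the tree.
  public class StreamingExample extends NodeFactory {

      private final Nodes empty = new Nodes();

      public Nodes finishMakingElement(Element element) {
          if ("article".equals(element.getLocalName())) {
              process(element); // handle one record
              return empty;     // discard it; the tree stays tiny
          }
          return super.finishMakingElement(element);
      }

      private void process(Element article) {
          // application-specific work goes here
      }

      public static void main(String[] args) throws Exception {
          new Builder(new StreamingExample()).build(new File("dblp.xml"));
      }
  }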

Elliotte, this mail is sent from an account that isn't subscribed to
the mailing list, because I'm in some strange internet cafe, so
perhaps you can forward it on my behalf - thanks.

Wolfgang.





Mike wrote:

I agree that 4 bytes per node is a significant cost. I've been
putting off adding parent pointers to Saxon's TinyTree for that
reason - it would help significantly for wide trees (the typical
RDBMS table dump of 100K records) but that's not worth the increase
from the current 29 bytes.

Michael Kay
http://www.saxonica.com/

-----Original Message-----
From: xom-interest-bounces AT lists.ibiblio.org
[mailto:xom-interest-bounces AT lists.ibiblio.org] On
Behalf Of
Elliotte Harold
Sent: 01 February 2005 01:06
To: Wolfgang Hoschek
Cc: xom-interest AT lists.ibiblio.org
Subject: Re: [XOM-interest] indexOf O(1) patch?


The additional memory (4 bytes) required per Node doesn't hurt much
either, considering how much memory is consumed by all sorts of
other info in node, element, text, etc. Perhaps Elliotte can profile
memory by parsing documents with java -verbose:gc and look at the
final max memory consumption.


I find it very hard to stomach four extra bytes per node. In my
experience memory consumption is a huge problem for XML object
models, and prevents people from using XML far more often than speed
does. I've done extensive profiling on memory usage, and worked very
hard to reduce each class down to the bare minimum. I'm still not
quite there. There are a couple of indirections I can pull out,
especially in ParentNode and Element, but mostly XOM is pretty
small.

To convince me to spend four more bytes per node, the speed increase
is going to have to be phenomenal, like a factor of ten on common
operations. I'm not going to be convinced by 30% on one method. And
even then, I might push this off into a separate Ant target.









