xom-interest - Re: [XOM-interest] indexOf O(1) patch?

  • From: Wolfgang Hoschek <whoschek AT lbl.gov>
  • To: Elliotte Harold <elharo AT metalab.unc.edu>
  • Cc: Wolfgang Hoschek <whoschek AT yahoo.com>, xom-interest AT lists.ibiblio.org
  • Subject: Re: [XOM-interest] indexOf O(1) patch?
  • Date: Tue, 1 Feb 2005 16:06:47 -0800

It seems to me that we can move the bar of where main memory size becomes an issue, though. In particular, anything that's added to each and every node really does make a noticeable difference in memory size. The more fat I can trim from XOM, the more documents it will be able to process.


As I said, this misses the point. Trying to tweak a little memory to enable very large documents is a losing battle. It's a fight one cannot win, only lose. Technology trends govern this, not XOM. Use your energy for more important things!

6% difference in memory is minimal. XOM is already quite good at memory consumption: 162883k / 25 MB = 6.5 memory bytes per file byte.

Ouch. That's worse than I thought. I was thinking XOM was in the 3-4 bytes per file byte range, maybe 4.5 at the outside.

It depends on the character of the file, of course.

As an aside, if I remember correctly, the BinaryXMLCodec deserializes in a more compact manner, somewhere around 4 memory bytes per file byte.



Having said that, there are plenty of unexplored opportunities in XOM to reduce memory *without* compromising performance. ArrayLists could be replaced with arrays, saving some 4 + 4 bytes per Element and per Attribute list.

That's on the TODO list.
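To make that concrete, here is a minimal sketch of what replacing the child ArrayList with a lazily allocated plain array might look like. It is illustrative only; the class and field names below are not XOM's actual internals:

    import nu.xom.Node;

    // Illustrative sketch only -- not XOM's real internal representation.
    // Instead of "private List children = new ArrayList();", the children are
    // kept directly in a plain array that is allocated lazily and grown on
    // demand, avoiding the separate ArrayList object and its bookkeeping fields.
    class ParentNodeSketch {

        private Node[] children;   // null until the first child is appended
        private int childCount;

        void appendChild(Node child) {
            if (children == null) {
                children = new Node[2];
            }
            else if (childCount == children.length) {
                Node[] grown = new Node[children.length * 2];
                System.arraycopy(children, 0, grown, 0, childCount);
                children = grown;
            }
            children[childCount++] = child;
        }

        Node getChild(int index) {
            if (index < 0 || index >= childCount) {
                throw new IndexOutOfBoundsException(String.valueOf(index));
            }
            return children[index];
        }

        int getChildCount() {
            return childCount;
        }
    }

The saving comes from dropping the ArrayList wrapper object (its header plus its internal bookkeeping fields) and one level of indirection, at the cost of writing the growth logic by hand.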

Additional namespace declarations consume bizarre amounts of memory, which is a problem for artificially generated XML.

I don't know that these come up so often, though. Are there really cases where every element is going to have these? Typically, it's just the elements at the root that pick these up.

That might not be the case when the document is produced by a tool that generates and then essentially concatenates many small fragments, each of which carries the same (mostly redundant) namespace declarations. This can happen with XSLT, XQuery, or other tools.
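To make the scenario concrete, here is a hedged illustration using XOM's public API; the element names, prefixes, and URIs are invented. Each generated fragment redeclares the same prefixes, so every element ends up carrying its own namespace-declaration storage even though a single declaration on the root would have sufficed for serialization:

    import nu.xom.Document;
    import nu.xom.Element;

    // Hypothetical generator that concatenates many small fragments, each of
    // which redundantly redeclares the same prefixes on its own root element.
    public class RedundantNamespaces {

        public static void main(String[] args) {
            Element root = new Element("records", "http://example.com/data");
            for (int i = 0; i < 100000; i++) {
                Element fragment = new Element("record", "http://example.com/data");
                // The same extra declarations on every fragment, e.g. because an
                // XSLT or XQuery tool emitted them once per fragment.
                fragment.addNamespaceDeclaration("xs",
                        "http://www.w3.org/2001/XMLSchema");
                fragment.addNamespaceDeclaration("fn",
                        "http://www.w3.org/2005/xpath-functions");
                root.appendChild(fragment);
            }
            Document doc = new Document(root);
            System.out.println(doc.getRootElement().getChildCount()
                    + " fragments, each carrying its own declarations");
        }
    }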


> Qnames with prefixes are not interned, hence the substrings consume lots of memory,

Good catch. I hadn't thought of that, and it's easy to fix.

Except that String.intern() is not the way to go, as outlined in a mail some months ago (it's very slow, and can cause memory leaks if you're unlucky). Better to have a string pool per document, or something similar. So the CVS update from a few minutes ago is highly problematic, IMHO.
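Roughly what such a per-document pool might look like (a sketch, not XOM code; the class name and its placement on the document or builder are assumptions): duplicate strings are canonicalized through a small map that is garbage collected together with the document, so nothing accumulates in a VM-wide intern table.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a per-document string pool as an alternative to String.intern().
    // Duplicate strings (prefixes, qualified names, namespace URIs) map to one
    // shared instance; because the pool is owned by a single document, it is
    // collected along with that document instead of lingering in a global table.
    final class StringPool {

        private final Map<String, String> pool = new HashMap<String, String>();

        String canonical(String s) {
            if (s == null) return null;
            String existing = pool.get(s);
            if (existing != null) return existing;
            pool.put(s, s);
            return s;
        }
    }

The builder would then run each qualified name through canonical() before storing it on an element or attribute, so repeated names share one String per document rather than one per node.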


> the baseURI of each Node is almost always the same, so those 4 bytes could be reduced,

This might need more elaboration. What would change? I don't see how we can support different base URIs on different elements without carrying around this field.

See Mike's mail.


the type of an attribute could be reduced from 4 bytes to 1 byte,

I'm not sure about this one. We could make it a one-byte type, sure, but would the VM actually store that value in a byte, or would it use 4 bytes anyway?

Depends on the VM. By the way, whether the 4 sibling bytes actually make ANY difference also depends on the VM. On a 64-bit VM, the object fields might be aligned so that the memory consumption remains identical either way. Or perhaps not - it depends.
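The change itself is trivial; whether it helps is exactly the alignment question above, since some VMs pack byte fields while others pad them back to word size. A sketch (the field and constant names here are illustrative; XOM itself models attribute types with the Attribute.Type constants):

    // Illustrative only: keep an attribute's type as a one-byte code instead of
    // a 4-byte int field or a reference to a type object.
    class AttributeSketch {

        static final byte CDATA = 0;
        static final byte ID = 1;
        static final byte IDREF = 2;
        static final byte NMTOKEN = 3;

        private byte typeCode = CDATA;   // declared as 1 byte instead of 4

        void setTypeCode(byte code) {
            this.typeCode = code;
        }

        int getTypeCode() {
            return typeCode;   // widened back to int on access
        }
    }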


and there are probably more potential improvements, with as yet unquantified effects, that could be done but haven't been done.

If you find any more, holler.

But more importantly, always keep in mind that working with very large XML document trees is always expensive, produces huge *tenured* heaps that put huge pressure on the VM allocator and collector, and, most importantly, is inherently fragile: application data size and disk storage capacity grow at a *much* higher rate (more than Moore's law) than main memory size, and infinitely faster than any minor XOM code memory optimization tweaking. Thus, if today an application's files can still just fit into memory, they most likely won't fit anymore 6 months down the road, and the app will break. There are hard limits with main memory trees, and no amount of tweaking will make them go away in any significant manner.



-----------------------------------------------------------------------
Wolfgang Hoschek | email: whoschek AT lbl.gov
Distributed Systems Department | phone: (415)-533-7610
Berkeley Laboratory | http://dsd.lbl.gov/~hoschek/
-----------------------------------------------------------------------




