  • From: Wolfgang Hoschek <whoschek AT lbl.gov>
  • To: Elliotte Harold <elharo AT metalab.unc.edu>
  • Cc: XOM-interest AT lists.ibiblio.org, Nils_Kilden-Pedersen AT Countrywide.Com
  • Subject: Re: [XOM-interest] indexOf O(1) patch?
  • Date: Wed, 2 Feb 2005 12:34:57 -0800

On Feb 2, 2005, at 9:25 AM, Elliotte Harold wrote:

> Nils_Kilden-Pedersen AT Countrywide.Com wrote:
>
>> But how will you know in advance whether the document will fit in
>> memory? I think Wolfgang's got a good point here. You don't want to
>> find out via an OutOfMemoryError, so if you're going to process
>> "large" documents, you need a different strategy anyway.
>> The scenarios I've seen are either static "small" documents or
>> dynamic (growing) "large" documents.
>
> Experiment. The fact is I constantly see people having trouble
> processing XML due to the size of their documents. I see this far
> more often than I see people having problems with speed.

How about replying to the core point Nils makes, rather than cutting it away in your response as if it were never there?
It's just difficult to have an interesting, continuing conversation if one side has a tendency

- to only selectively listen
- to misconstrue or misrepresent what has been said before
- to be unwilling to think about things from alternative angles

How can a main-memory tree (and the binary codec is no exception) cope with documents growing faster than Moore's law? Because that's the technology drift happening in the "very large document" space. A very large document an app is processing today won't be the same size in six months. Can one trim XOM's memory footprint faster than that rate in a sustainable manner? Certainly not. Can one trim it to zero, which is what it would need to be to handle arbitrarily large documents? Certainly not.

At best, one can delay defeat for a little longer, with a little more tweaking here and there. That's a desperate short-term survival strategy, not a meaningful long-term strategy.
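For the record, the sustainable strategy is streaming, and XOM's own NodeFactory hook already supports it: a subclass can process and then discard each fully built subtree, so heap usage stays roughly flat no matter how large the input grows. A minimal sketch, assuming a flat document whose repeating children are named "record" (the element name and the file argument are made up for illustration):

    import java.io.File;

    import nu.xom.Builder;
    import nu.xom.Element;
    import nu.xom.NodeFactory;
    import nu.xom.Nodes;

    public class RecordCounter extends NodeFactory {

        private long count;

        // Called once each element's end-tag has been parsed.
        public Nodes finishMakingElement(Element element) {
            if ("record".equals(element.getLocalName())) { // hypothetical name
                count++;            // the subtree is fully built here; process it
                return new Nodes(); // then discard it so it can be garbage collected
            }
            return new Nodes(element); // keep everything else, including the root
        }

        public static void main(String[] args) throws Exception {
            RecordCounter counter = new RecordCounter();
            new Builder(counter).build(new File(args[0]));
            System.out.println(counter.count + " records");
        }
    }

Because each discarded record is never attached to the tree, the input can be arbitrarily large while the resident tree never holds more than one record at a time.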

It goes without saying that consuming exorbitant amounts of memory for no good reason is always a bad idea. So trim memory fat where it improves or doesn't hurt performance, and abstain from trimming where it would degrade performance.
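And where the trade-off isn't obvious, measure rather than guess. A rough sketch of such an experiment (System.gc() is only advisory, so treat the numbers as ballpark; the input file is whatever you pass on the command line):

    import java.io.File;

    import nu.xom.Builder;
    import nu.xom.Document;

    public class HeapFootprint {

        // Approximate heap in use, after politely asking for a collection.
        private static long usedHeap() {
            Runtime rt = Runtime.getRuntime();
            System.gc(); // advisory only; good enough for a rough estimate
            return rt.totalMemory() - rt.freeMemory();
        }

        public static void main(String[] args) throws Exception {
            long before = usedHeap();
            Document doc = new Builder().build(new File(args[0]));
            long after = usedHeap();
            // Reference doc after measuring so it can't be collected early.
            System.out.println("tree for <" + doc.getRootElement().getQualifiedName()
                    + ">: ~" + ((after - before) >> 20) + " MB");
        }
    }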




