
xom-interest - Re: [XOM-interest] profiler

  • From: Wolfgang Hoschek <whoschek AT lbl.gov>
  • To: Elliotte Harold <elharo AT metalab.unc.edu>
  • Cc: xom-interest AT lists.ibiblio.org
  • Subject: Re: [XOM-interest] profiler
  • Date: Tue, 23 Nov 2004 18:55:00 -0800

On Nov 23, 2004, at 6:02 PM, Elliotte Harold wrote:

Wolfgang Hoschek wrote:
Allow me to speculate that the XOM source snippet below might have to do with the just-mentioned profiler and/or client VM problems, or similar causes:

This comment goes back to when I was developing on Linux, not the Mac. If you could demonstrate conclusively that this was a problem, I'd consider revisiting it. However, these sorts of low-level optimizations really do run the risk of hurting more than they help. Even if they work today on one particular VM with one batch of test documents, that can turn completely around on another VM processing documents with different characteristics.

Based on your own comments in the code, it seems you can't see a reason for the anomalous and counter-intuitive observation yourself, right? So I suggest that the funny hack in there is a low-level optimization that runs the risk of hurting more than it helps (if it ever helped and you were not tricked by the profiler), not the other way round. Think about it for a moment :-)

Anyway, I just ran a benchmark with wurlf.xml (which has quite a few attributes), and when doing the obvious thing (caching the index in the Attribute constructor) bnux runs a couple of percent faster (in the usual benchmark environment). Not dramatic, but certainly not slower.
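For concreteness, here is a minimal sketch of the kind of change being benchmarked, assuming a hypothetical Attribute-like class (the names are illustrative, not XOM's actual internals):

    // A hedged sketch, not XOM's actual code: a hypothetical
    // Attribute-like class that computes the prefix/local-name split
    // index once in the constructor and caches it, instead of
    // rescanning the qualified name on every accessor call.
    final class CachedAttribute {
        private final String qualifiedName;
        private final int colonIndex; // cached at construction time

        CachedAttribute(String qualifiedName) {
            this.qualifiedName = qualifiedName;
            this.colonIndex = qualifiedName.indexOf(':'); // scanned once
        }

        String getPrefix() {
            return colonIndex < 0 ? "" : qualifiedName.substring(0, colonIndex);
        }

        String getLocalName() {
            return colonIndex < 0 ? qualifiedName : qualifiedName.substring(colonIndex + 1);
        }
    }

The point is simply that the colon scan happens once per attribute rather than once per accessor call.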


Algorithm-level optimizations like caching the namespace URIs and short-circuiting the scheme checking for http seem a lot more reliable and effective. I'm not sure how many more of these there are to be found though. Possibly we could cache element names as well, but that would need a much bigger cache and a lot more search time than namespace URIs, so it might well not help. Possibly I could cache namespace prefixes separately. Those tend to repeat, and not to be excessively numerous so there shouldn't be too much cache thrashing. That might help documents that use namespace prefixes. I'll put that on my TODO list.

Any good ideas are welcome. If one can figure out how to effectively bound the cache while maintaining a high hit ratio, that would be interesting.
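A minimal sketch of one way to bound such a cache while keeping the hit ratio high for heavily repeated values like namespace URIs and prefixes, assuming an LRU policy via java.util.LinkedHashMap (the capacity of 64 is an illustrative guess, not a tuned value):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // A hedged sketch: a small LRU intern cache for strings that repeat
    // heavily, such as namespace URIs and prefixes.
    final class LruInternCache {
        private static final int MAX_ENTRIES = 64; // assumed size, not measured

        private final Map<String, String> cache =
            new LinkedHashMap<String, String>(MAX_ENTRIES, 0.75f, true) {
                protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                    return size() > MAX_ENTRIES; // evict the least recently used entry
                }
            };

        String intern(String s) {
            String canonical = cache.get(s);   // a hit also refreshes LRU order
            if (canonical != null) return canonical;
            cache.put(s, s);                   // remember this instance on a miss
            return s;
        }
    }

Because namespace URIs tend to be few and highly repetitive, even a tiny bounded map like this should stay close to a 100% hit ratio in practice.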


Another reason I don't want to be too aggressive here is simply that this is not where most applications spend most of their time. Almost every real-world application I've seen is limited by one or more of three things:

1. Input, including parsing.
2. Output, including serialization.
3. Non-XML operations like encryption and decryption.

Depends on the use cases, as always...

Encryption and decryption can be done at rates around 50 MB/s in normal software, or more with dedicated hardware.


NUX is a little unusual here because it does its own parsing and serialization; but I still suspect in practice a lot of time will get spent in raw I/O. There's a good reason most XML benchmarks precache documents in memory to avoid I/O costs. If they didn't, I/O would swamp everything they're trying to measure. Even if we succeeded in reducing in-memory operations to zero, I'm afraid it wouldn't have much effect on most applications. :-(
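As an illustration of the precaching pattern, a sketch that reads the document into memory once and times only the in-memory parse, assuming the standard javax.xml.parsers SAX API rather than any particular benchmark's harness:

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.helpers.DefaultHandler;

    // A hedged sketch of the precaching pattern: read the file into
    // memory once, then time only the in-memory parse so disk I/O
    // doesn't swamp what the benchmark is trying to measure.
    public final class PrecachedParseBench {
        public static void main(String[] args) throws Exception {
            byte[] doc = Files.readAllBytes(Paths.get(args[0])); // precache the document
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();

            long start = System.nanoTime();
            try (InputStream in = new ByteArrayInputStream(doc)) {
                parser.parse(in, new DefaultHandler()); // parses from RAM only
            }
            long elapsed = System.nanoTime() - start;
            System.out.printf("parsed %d bytes in %.2f ms%n", doc.length, elapsed / 1e6);
        }
    }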

We're quite happy with the way Nux works out.

Disk I/O at 50 MB/s per disk doesn't strike me as unusual anymore. Network I/O at the full capacity of links, say GigE (100 MB/s) or 10 GigE, doesn't strike me as unusual anymore either. One can quite easily fill a GigE pipe with a single commodity CPU using our Java SEA library (NIO based). I used to work at CERN, where our apps would fill any and all hardware we could possibly get our hands on, acquiring data from a particle physics accelerator at 1 GB/s in a 24x7x365 manner, filtering, storing, sieving and analyzing petabytes of data, fanning tasks out to Grid data centers around the world. Moore's Law doesn't change anything in such a setting. Any fantastic advances in hardware and software are immediately neutralized by the desire of science apps to take and process as much data as one conceivably can. Anyway, enough of that for now...

Wolfgang.




