xom-interest - Re: [XOM-interest] More Serializer performance patches

  • From: Wolfgang Hoschek <whoschek AT lbl.gov>
  • To: Elliotte Harold <elharo AT metalab.unc.edu>
  • Cc: xom-interest AT lists.ibiblio.org
  • Subject: Re: [XOM-interest] More Serializer performance patches
  • Date: Wed, 12 Oct 2005 10:17:36 -0700

On Oct 11, 2005, at 11:54 AM, Elliotte Harold wrote:

Wolfgang Hoschek wrote:

Here are some more Serializer performance patches against xom-1.1-CVS:
1. improved UnicodeWriter.{writePCData, writeMarkup, writeAttributeValue} for strings that contain both portions that need escaping and portions that do not. Example: foo bar hello world foo bar
2. replace BufferedWriter used by Serializer with an unsynchronized custom version, enabling much better compiler inlining
3. revert Serializer.writeNamespaceDeclarations to previous impl (the recently changed CVS impl shows a 25% degradation)
Results: 1.5 - 2x faster for a wide range of documents
All tests pass after applying the patches.


By any chance did you get any numbers on the effects of just replacing BufferedWriter with an unsynchronized version, independent of the other changes? I'm considering whether to mention this possibility in the next edition of Java I/O.


It's been a while and I may not remember the numbers quite right, but I think the unsynchronized version alone accounted for some 30-50% of the overall speedup. It eliminates the synchronization, and it also shortens the execution path for getting data into the buffer, which in turn helps inlining and other synergistic compiler optimizations.

The technique applies when writing a character (or byte) at a time in a high-frequency loop (e.g. the XOM Serializer). Here you want the loop to be as tight as possible, with execution paths as short as possible. Buffering helps keep critical paths short. Acquiring and releasing a lock isn't terribly expensive, but it adds up when done for each character, particularly on multiprocessors.
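
Roughly, the idea looks like this (a minimal sketch of such a writer, not XOM's actual class; the name and details are illustrative). The hot path for write(int) is one compare and an array store, with no lock anywhere:

    import java.io.IOException;
    import java.io.Writer;

    // Sketch of an unsynchronized buffered writer. Unlike
    // java.io.BufferedWriter, no method takes a lock, so the common
    // "append one char" path is short and easy for the JIT to inline.
    final class UnsynchronizedBufferedWriter extends Writer {
        private final Writer out;
        private final char[] buf;
        private int count; // chars currently buffered

        UnsynchronizedBufferedWriter(Writer out, int size) {
            this.out = out;
            this.buf = new char[size];
        }

        public void write(int c) throws IOException {
            if (count == buf.length) flushBuffer(); // rare slow path
            buf[count++] = (char) c;                // hot path
        }

        public void write(char[] cbuf, int off, int len) throws IOException {
            if (len >= buf.length) { // large writes bypass the buffer
                flushBuffer();
                out.write(cbuf, off, len);
                return;
            }
            if (len > buf.length - count) flushBuffer();
            System.arraycopy(cbuf, off, buf, count, len);
            count += len;
        }

        private void flushBuffer() throws IOException {
            if (count > 0) {
                out.write(buf, 0, count);
                count = 0;
            }
        }

        public void flush() throws IOException {
            flushBuffer();
            out.flush();
        }

        public void close() throws IOException {
            flush();
            out.close();
        }
    }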

Whenever possible it's faster to write and read in pages rather than a char or byte at a time. All I/O systems follow this theme of sequential bulk data transfers, from the app to the OS, the file system, the block device, down to the physical disk drive, as well as the virtual memory system, the L1 and L2 caches, etc. Random access and piecewise access are unnatural to I/O systems, incurring dramatic penalties. Some of the other optimizations we (or just me?) put into the Serializer follow that theme: writing the whole string in one call, or at least parts of it.
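
Patch 1 above follows this pattern: instead of writing character by character, scan for the next character that needs escaping and emit the whole unescaped run in between with one bulk call. A rough sketch (the method name and the set of escaped characters are illustrative, not the exact XOM code):

    import java.io.IOException;
    import java.io.Writer;

    final class EscapingSketch {
        // Writes s with markup characters escaped, emitting each maximal
        // run of ordinary characters via a single bulk write instead of
        // one write per char.
        static void writeEscaped(String s, Writer out) throws IOException {
            int runStart = 0;
            for (int i = 0; i < s.length(); i++) {
                char c = s.charAt(i);
                if (c == '&' || c == '<' || c == '>') {
                    out.write(s, runStart, i - runStart); // unescaped run
                    if (c == '&') out.write("&amp;");
                    else if (c == '<') out.write("&lt;");
                    else out.write("&gt;");
                    runStart = i + 1;
                }
            }
            out.write(s, runStart, s.length() - runStart); // trailing run
        }
    }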

The character encoding/decoding conversions available via the standard Java libraries also work much better with reasonably large chunks, i.e. with buffering, for the same reasons. Batch processing follows the same rationale.
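
Simply buffering on the char side is enough to get most of this benefit, because the encoder then sees large chunks instead of being invoked once per character. For instance (file name and buffer size illustrative):

    import java.io.BufferedWriter;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class BufferedEncodingDemo {
        public static void main(String[] args) throws IOException {
            // The BufferedWriter hands the UTF-8 encoder 8K chunks,
            // rather than invoking the conversion per character.
            Writer out = new BufferedWriter(
                    new OutputStreamWriter(new FileOutputStream("doc.xml"), "UTF-8"),
                    8192);
            for (int i = 0; i < 100000; i++) {
                out.write('x'); // cheap: lands in the char buffer
            }
            out.close();
        }
    }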

Wolfgang.



