  • From: Sean Maley <semaley@yahoo.com>
  • To: Permaculture Plant Database <pcplantdb@lists.ibiblio.org>
  • Subject: Re: [pcplantdb] status
  • Date: Wed, 26 Jan 2005 15:03:57 -0800 (PST)

How do you detect a prankster embedding XML in their
descriptions and corrupting our ability to feed that
data out as XML? The DTD can take care of some things,
but what happens when we see something like:

<botanical_name id='1' legacy_pfaf_latin_name="">
  <composition part_of_plant='leaf'>
    <water type='mg'>4</water>
    <energy type="calories">10</energy>
  </composition>
  <cultivars cultivar="okay place" synonyms="">
  corrupt notes: </cultivars></botanical_name>
<botanical_name id="666" legacy_pfaf_latin_name="">
  <cultivars cultivar="cheeches beastly stuff" common_name="weed">
  </cultivars>
</botanical_name>

Surely we won't write code that scans for this stuff?
Base64-encoding the fields would resolve it, but then
the data isn't human readable anymore. I can keep a
character like ASCII 0 out of the database with nothing
more than JavaScript, but embedded markup is a harder
problem.
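For what it's worth, the usual lightweight answer is to escape
user-supplied text when the feed is generated rather than scan
for markup afterwards. A rough sketch in Python (the element and
attribute names are only illustrative, not our actual schema):

from xml.sax.saxutils import escape, quoteattr

def cultivar_element(cultivar, notes):
    # escape() turns &, < and > in text content into entities;
    # quoteattr() does the same for an attribute value and adds the
    # quotes, so markup typed into a notes field stays inert text.
    return "<cultivars cultivar=%s>%s</cultivars>" % (
        quoteattr(cultivar), escape(notes))

print(cultivar_element("okay place",
                       "corrupt notes: </cultivars></botanical_name>"))
# prints, on one line:
#   <cultivars cultivar="okay place">corrupt notes: &lt;/cultivars&gt;&lt;/botanical_name&gt;</cultivars>

Anything that later parses the feed then sees the prankster's tags
as plain character data instead of structure.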

The other alternative is to certify a data source by
some means. We could also build required information
into the dataset that lets corrupted data identify
itself: for instance, a source id that is safeguarded
from public access. The example above would then be
flagged, because the injected record is missing a key
that no outsider could know.
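To sketch that second idea (the names and the choice of HMAC here
are mine, purely for illustration): the feed generator signs each
record with a key that never leaves the server, and the loader
refuses anything whose signature doesn't check out.

import hmac, hashlib

SOURCE_KEY = b"kept-on-the-server-never-published"  # the safeguarded source key

def sign_record(record_xml):
    # Only someone holding SOURCE_KEY can produce this value.
    return hmac.new(SOURCE_KEY, record_xml.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def is_trusted(record_xml, claimed_sig):
    # A record injected through a notes field can't carry a valid
    # signature, so it gets flagged instead of entering the feed.
    return hmac.compare_digest(sign_record(record_xml), claimed_sig)

The injected id="666" record in the example above would fail this
check, since whoever typed it never had the key.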

Sorry for the conundrums, but this sort of concern is
what I do in my day job (financial markets data
feeds). I thought the perspective would help harden
how data transfers occur for PIW. If I'm out of line,
please don't be afraid to say so (thick skin is a
requirement in my position).


-Sean.

--- John Schinnerer <john@eco-living.net> wrote:

> Aloha,
>
> Thanks for the questions and input...I'll leave some replies to the
> rest of the gang...
>
> > XML is fine for capturing complex data sets, but isn't always the
> > answer. With a properly defined data model, and non-text data, XML
> > would be overkill.
>
> I am also no fan of XML's overhead.
> However it is familiar and popular and would do the job.
>
> > I wonder if the consumers of the data you will serve have been
> > defined.
>
> We are designing to have a clean API for the data engine that will
> support a wide variety of clients, from vanilla HTML (and even text/cli
> if desired) to graph-based UIs and all sorts of other possibilities.
> At this point XML is (I think... :-) our initial data transport
> mechanism of choice.
>
> > If you plan on thin client apps (Java/browser) being your only
> > consumers, XML is fine. However, a more involved site might want to
> > reduce network traffic with a more traditional record-based feed
> > (perhaps even compressed and over ssh channels (scp)).
>
> One way to deal with bandwidth issues is to have part of a 'fat'
> client actually run on the server, and use some more-efficient-than-XML
> transport mechanism to pass information back and forth to the remote
> part of the client.
>
> For example, to take the XML overhead out for really low-bandwidth
> clients that are passing a lot of actual data, a piece of 'client'
> process on the server could talk XML with the data engine and then
> strip it down to something leaner and meaner and pass it on to the
> remote piece of the client (works both ways of course).
>
> John S.
>
> --
>
> John Schinnerer - MA, Whole Systems Design
> ------------------------------------------
> - Eco-Living -
> Whole Systems Design Services
> People - Place - Learning - Integration
> john@eco-living.net
> http://eco-living.net
> _______________________________________________
> pcplantdb mailing list
> pcplantdb@lists.ibiblio.org
> http://lists.ibiblio.org/mailman/listinfo/pcplantdb
>
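To make the translation-layer idea above concrete, here is a rough
sketch of the server-side piece John describes, which talks XML to
the data engine and hands the remote half of the client lean
tab-delimited records instead (the element and attribute names are
assumptions, not the real schema):

import xml.etree.ElementTree as ET

def xml_to_records(xml_text):
    # Runs on the server next to the data engine; only the compact
    # tab-delimited lines cross the wire to the remote half of the
    # client. Assumes the engine wraps its records in one root element.
    root = ET.fromstring(xml_text)
    for plant in root.findall("botanical_name"):
        for cv in plant.findall("cultivars"):
            yield "\t".join([plant.get("id", ""),
                             cv.get("cultivar", ""),
                             cv.get("common_name", "")])

The same piece could run in reverse, expanding compact updates from
the client back into XML for the engine (the "works both ways" John
mentions).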





