pcplantdb - Re: [pcplantdb] PIW Relationships Modelling

  • From: Sean Maley <semaley@yahoo.com>
  • To: pcplantdb <pcplantdb@lists.ibiblio.org>
  • Subject: Re: [pcplantdb] PIW Relationships Modelling
  • Date: Mon, 18 Jul 2005 08:54:38 -0700 (PDT)



--- Chad Knepp <pyg@galatea.org> wrote:

> Responding to Marco and Sean... two for the price of
> one!
>
>> Where searches represent the fundamental activity
>> with the dataset (OLAP), we may consider keeping the
>> schema denormalized; creature_type: A, M, V
>> (animal, mineral, vegetable) rather than get into
>> hierarchical data sets.
>>
>> --------------------
>> |creature_dimension|
>> --------------------
>> |creature_key      | : hemp, bean, etc.
>> |creature_type_key | : A, M, V
>> |ground_depth      |
>> |height            |
>> |....              |
>> --------------------
>
> Bear has also suggested a denormalized schema, and
> on a theoretical technical level I tend to agree
> that this is a good direction to go. Unfortunately,
> we lose most of the advantages of using an RDBMS
> when doing so. Our initial direction toward this
> was using an object DB, but that was thwarted, IMO,
> by the fact that ZOPE2 just plain blows, although I
> think ZODB is awesome.

The rules of normalization address the issues
encountered in data transactions (insert, update,
delete). The idea is to reduce lock contention by
shrinking the footprint of any given entity that
requires modification. This works great for data
entry. Unfortunately, analyzing that data can become
an IO burden.
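
To sketch the idea in SQL (the table and column names
here are purely illustrative, not anything from our
actual schema), a normalized layout keeps each
updatable fact in its own narrow row, so a transaction
locks very little:

  -- Illustrative normalized layout: each measurement lives in
  -- its own narrow table, so an update locks one small row.
  CREATE TABLE creature (
      creature_id INTEGER PRIMARY KEY,
      name        VARCHAR(64) NOT NULL
  );

  CREATE TABLE creature_height (
      creature_id INTEGER PRIMARY KEY
                  REFERENCES creature (creature_id),
      height_feet NUMERIC(5,2)
  );

  -- Data entry touches only the narrow row, not a wide record
  -- shared by every concurrent writer.
  UPDATE creature_height SET height_feet = 20
  WHERE creature_id = 34;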

The problem stems from a concept known as
snowflaking. Starting from table A, our query may
have a selectivity of, say, ten records. Joining in
table B with ten more records means 100 points of
comparison for the RDBMS engine. That much is
expected. The snowflake occurs when you bring one
more table into the join: ten records there means
1,000 points of comparison, and each level deeper in
a snowflake multiplies the IO again. Data analysis
will be most efficient when we can start with one
base table and never join more than one table
relationship away.
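
To make the shape concrete, here is a hypothetical
snowflaked join (table and column names are
illustrative); note the second hop away from the base
table, and then the one-hop star version of the same
question:

  -- Snowflaked chain: fact -> creature -> creature_type.
  -- With ~10 qualifying rows per level, the engine weighs on
  -- the order of 10 x 10 x 10 candidate pairings.
  SELECT f.output_qty, c.name, t.description
  FROM relationship_fact f
  JOIN creature c      ON c.creature_id = f.creature_id
  JOIN creature_type t ON t.type_id = c.type_id  -- the extra hop
  WHERE f.output_qty > 0;

  -- Star version: fold the type into the creature dimension so
  -- every join is exactly one hop from the base table.
  SELECT f.output_qty, c.creature_name, c.creature_type_code
  FROM relationship_fact f
  JOIN creature_dimension c ON c.creature_key = f.creature_key
  WHERE f.output_qty > 0;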

If we denormalize in a fashion that targets the
elimination of snowflakes from potential joins, we
end up with a star schema (google Ralph Kimball).
The central table for analysis, carrying all of your
measures such as outputs, is called the fact table.
All of the tables joined to the fact for analysis
are called dimensions. Each dimension gets a unique,
artificial integer key.
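
In SQL, a minimal sketch of that shape (using the
dimension names from my diagrams; the column lists
are only illustrative):

  -- Star schema sketch: an artificial integer key on each
  -- dimension, and a fact table that joins one hop to each.
  CREATE TABLE creature_dimension (
      creature_key       INTEGER PRIMARY KEY,  -- artificial key
      creature_name      VARCHAR(64),          -- hemp, bean, etc.
      creature_type_code CHAR(1),              -- A, M, V
      ground_depth       NUMERIC(6,2),
      height             NUMERIC(6,2)
  );

  CREATE TABLE product_dimension (
      product_key  INTEGER PRIMARY KEY,
      product_name VARCHAR(64)
  );

  CREATE TABLE relationship_fact (
      creature_key INTEGER
                   REFERENCES creature_dimension (creature_key),
      product_key  INTEGER
                   REFERENCES product_dimension (product_key),
      output_qty   NUMERIC(10,2)  -- a measure; more piled in below
  );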

There is nothing wrong with using the RDBMS tool.
Data modeling issues often end with developers
demonizing the RDBMS and the DBA cursing the data
architect. ZODB looks interesting as a tool for RAD,
but not as a replacement for an RDBMS with a properly
modeled database. In fact, it looks interesting
because it offers some figurative glue between the
RDBMS universe and the HTML view that will eventually
be needed. On the other hand, my own development
isn't hindered by my database interaction (scripting
tool inadequacies aside).

> Also, your example is not fully denormalized, in
> that every schema element should apply to every
> item. Do all animals, minerals, and vegetables have
> a ground_depth and/or height that makes sense?

NULL
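
That is, attributes that don't apply simply go unset.
In the wide dimension sketched above, a mineral row
would just carry NULLs for the plant-only columns:

  -- Illustrative row: ground_depth and height make no sense
  -- here, so they are NULL rather than forced into a hierarchy.
  INSERT INTO creature_dimension
      (creature_key, creature_name, creature_type_code,
       ground_depth, height)
  VALUES (7, 'granite', 'M', NULL, NULL);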

> IIRC, Bear suggested a two/three column schema with
> a primary key, then an attribute, followed by the
> data of the attribute. Something like:
>
> id | attribute   | data
> ------------------------
> 34 | height/feet | 20
>
> As I said earlier, this doesn't scale well in an
> RDBMS.

This represents a design optimized for data entry.
It scales very well with RDBMS tools, but not if that
isn't how you use the data. This is a schema
discussion, not a discussion about the tool used to
implement it.
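
For illustration, assuming a table built on Bear's
three-column layout (the name item_attribute is
mine), entry is cheap but analysis pays one self-join
per attribute:

  -- Data entry is trivial: one narrow insert per attribute.
  INSERT INTO item_attribute (id, attribute, data)
  VALUES (34, 'height/feet', '20');

  -- Analysis is not: every attribute pulled into one row
  -- costs another self-join.
  SELECT h.id,
         h.data AS height_feet,
         d.data AS ground_depth
  FROM item_attribute h
  JOIN item_attribute d ON d.id = h.id
  WHERE h.attribute = 'height/feet'
    AND d.attribute = 'ground_depth';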

>> Most inputs have a corresponding output. So it
>> isn't enough to say your input/output is some
>> quantity, but rather that some input yields some
>> output. Additionally, most relationships are
>> complex, so there are numerous inputs that yield
>> numerous outputs.
>
> I'm rather in agreement with Sean here and not sure
> that this is the best (or even an adequate) division
> of all the possible relationships entities have with
> each other. When I try these categories I find
> Inputs to be too abstract... as in I get bogged down
> when trying to list all of the things a plant could
> need. Listing Outputs is much more obvious to my
> mind.

No system is complete. Would completeness really
matter? Chasing completeness weighs heavily on return
on investment, particularly if you actually try to
achieve it.

>> -------------------
>> |product_dimension|
>> -------------------
>>          |
>> -------------------
>> |relationship_fact|
>> -------------------
>>          |
>> --------------------
>> |creature_dimension|
>> --------------------

Never stray far from your facts.

>> The sign accounts for when an input causes a decay
>> for a given output; over watering, nutrient
>> burning, etc.
>
> OTOH, this seems overly complex to me.

I can't imagine end users updating these tables
directly. That is why a range was specified for both
the input side and the output side. I threw the
production time in as an example of other measures
that might be piled into the fact table. Think of it
as one big index table connecting the relationships
to the more simplistic plant database.
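
Expanding the earlier relationship_fact sketch with
hypothetical columns for those ranges, the sign, and
the production time:

  -- Illustrative fleshed-out fact: ranges on both sides, a
  -- signed effect for decay cases (over watering, nutrient
  -- burning), and production time as one more measure.
  CREATE TABLE relationship_fact (
      creature_key    INTEGER NOT NULL,  -- -> creature_dimension
      product_key     INTEGER NOT NULL,  -- -> product_dimension
      input_qty_low   NUMERIC(10,2),
      input_qty_high  NUMERIC(10,2),
      output_qty_low  NUMERIC(10,2),
      output_qty_high NUMERIC(10,2),
      effect_sign     SMALLINT,          -- +1 yield, -1 decay
      production_days INTEGER
  );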





