
pcplantdb - Re: [pcplantdb] PIW Relationships Modelling

  • From: Richard Morris <webmaster@pfaf.org>
  • To: pcplantdb <pcplantdb@lists.ibiblio.org>
  • Subject: Re: [pcplantdb] PIW Relationships Modelling
  • Date: Wed, 20 Jul 2005 00:21:32 +0100

Chad Knepp wrote:
Sean Maley writes:
> --- Chad Knepp <pyg@galatea.org> wrote:
>
> <snip>
>
> > Bear has also suggested a denormalized schema and on a
> > theoretical technical level I tend to agree that this is a good
> > direction to go. Unfortunately we lose most of the
> > advantages of using an RDBMS when doing so. Our initial
> > direction toward this was using an object DB, but was thwarted
> > IMO, by the fact that ZOPE2 just plain blows, although I think
> > ZODB is awesome.

> The rules of normalization address the issues encountered for data
> transactions (insert, update, delete). The idea was to reduce live
> lock contention by reducing the footprint of any given entity
> requiring modification. This works great for data entry.
> Unfortunately, the analysis of this data can become an IO burden.
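
[A minimal sketch of the footprint point in Python/SQLite, assuming a
made-up plant/habitat schema: in the normalized design, changing one
fact means touching one row.]

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Normalized: the habitat name lives in one row, referenced by key.
    cur.execute("CREATE TABLE habitat (id INTEGER PRIMARY KEY, name TEXT)")
    cur.execute("CREATE TABLE plant (id INTEGER PRIMARY KEY, name TEXT,"
                " habitat_id INTEGER REFERENCES habitat(id))")
    cur.execute("INSERT INTO habitat VALUES (1, 'hedgerow')")
    cur.executemany("INSERT INTO plant VALUES (?, ?, 1)",
                    [(i, "plant%d" % i) for i in range(100)])

    # Renaming the habitat is a one-row write...
    cur.execute("UPDATE habitat SET name = 'hedgerows' WHERE id = 1")
    print(cur.rowcount)  # 1

    # ...whereas a denormalized table repeating the habitat name in
    # every plant row would have to rewrite all 100 copies of it.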

> The problem stems from a concept known as snowflaking. Starting
> from table A, we may have a selectivity for our query of, say, ten
> records. Joining in table B with ten more records means 100
> points of comparison occur for the RDBMS engine. This is expected.
> The snowflake occurs when you have one more table to bring
> into the join; 10 records there causes 1000 points of
> comparison for the join; each level deep in a snowflake grows the
> IO exponentially. Data analysis will be most efficient when we can
> start with one base table and never join more than one table
> relationship away.
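
[A rough Python/SQLite illustration of that fan-out, using three
throwaway 10-row tables; in the worst case, with no selective
predicate, the candidate comparisons multiply with each table joined
in, matching the 10, 100, 1000 progression above.]

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()
    for t in ("a", "b", "c"):
        cur.execute("CREATE TABLE %s (k INTEGER)" % t)
        cur.executemany("INSERT INTO %s VALUES (?)" % t,
                        [(i,) for i in range(10)])

    # Each extra table multiplies the combinations to consider.
    cur.execute("SELECT count(*) FROM a, b")
    print(cur.fetchone()[0])  # 100
    cur.execute("SELECT count(*) FROM a, b, c")
    print(cur.fetchone()[0])  # 1000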

I pretty much grok what happens with joins.

> If we denormalize in a fashion targeting the elimination of
> snowflakes for potential joins, we end up with a star schema (google
> Ralph Kimball). The central table for analysis carrying all of
> your measures like outputs, is called the fact table. All of the
> tables joined into the fact for analysis are called dimensions.
> Each dimension has a unique integer, artificial key.
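
[A sketch of what such a star could look like for this project;
fact_yield, dim_plant and dim_site are invented names for
illustration, not anything from the actual PIW schema. Every analysis
query is the fact table joined one hop out to its dimensions.]

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Dimensions, each with an artificial integer key.
    cur.execute("CREATE TABLE dim_plant (plant_key INTEGER PRIMARY KEY,"
                " name TEXT)")
    cur.execute("CREATE TABLE dim_site (site_key INTEGER PRIMARY KEY,"
                " name TEXT)")

    # The fact table holds the measures plus one key per dimension, so
    # analysis never joins more than one relationship away from it.
    cur.execute("CREATE TABLE fact_yield ("
                " plant_key INTEGER REFERENCES dim_plant(plant_key),"
                " site_key INTEGER REFERENCES dim_site(site_key),"
                " output_kg REAL)")

    cur.execute("SELECT p.name, s.name, sum(f.output_kg)"
                " FROM fact_yield f"
                " JOIN dim_plant p ON p.plant_key = f.plant_key"
                " JOIN dim_site s ON s.site_key = f.site_key"
                " GROUP BY p.name, s.name")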
>
> There is nothing wrong with using the RDBMS tool. Data modeling
> issues often result in developers demonizing the RDBMS and the DBA
> cursing the data architect. ZODB looks interesting as a tool for
> RAD, but not to use in place of an RDBMS with a properly modeled
> database. In fact, it looks interesting because it offers some
> figurative glue between the RDBMS universe and the HTML view that
> will eventually be needed. On the other hand, my own development
> isn't hindered by my database interaction (scripting tool
> inadequacies aside).
>
> > Also your example is not fully denormalized in that
> > every schema element should apply to every item. Do
> > all animals, minerals, and vegetables have a
> > ground_depth and/or height that makes sense?

> NULL

My use of denormalization was me talking over my own head. I didn't
even know someone had created theories behind it. Looks like I have
some reading to do.

Indeed there is. I ended up having to teach a class on this stuff a
while back without knowing the theory. Apparently there are 1st, 2nd
and 3rd Normal Forms for data, with varying degrees of normalisation.

Have a look at
http://en.wikipedia.org/wiki/Database_normalization

which describes it all.
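
[A tiny sketch of the 3rd Normal Form idea, using an invented
plant/genus example: the transitive dependency name -> genus -> family
is split out so each genus-to-family fact is stated exactly once.]

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Not in 3rd Normal Form: family depends on genus rather than on
    # the key, so the same genus/family fact repeats in every matching
    # row.
    cur.execute("CREATE TABLE plant_flat (name TEXT PRIMARY KEY,"
                " genus TEXT, family TEXT)")

    # A 3NF decomposition states that dependency exactly once.
    cur.execute("CREATE TABLE genus (genus TEXT PRIMARY KEY, family TEXT)")
    cur.execute("CREATE TABLE plant (name TEXT PRIMARY KEY,"
                " genus TEXT REFERENCES genus(genus))")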

Rich



