
pcplantdb - Re: [pcplantdb] PIW Relationships Modelling

  • From: Chad Knepp <pyg@galatea.org>
  • To: pcplantdb <pcplantdb@lists.ibiblio.org>
  • Subject: Re: [pcplantdb] PIW Relationships Modelling
  • Date: Tue, 19 Jul 2005 17:43:20 -0500

Sean Maley writes:
> --- Chad Knepp <pyg@galatea.org> wrote:
>
> <snip>
>
> > Bear has also suggested a denormalized schema, and on a
> > theoretical, technical level I tend to agree that this is a good
> > direction to go. Unfortunately, we lose most of the advantages
> > of using an RDBMS when doing so. Our initial direction toward
> > this was using an object DB, but that was thwarted, IMO, by the
> > fact that ZOPE2 just plain blows, although I think ZODB is
> > awesome.
>
> The rules of normalization address the issues encountered in data
> transactions (insert, update, delete). The idea was to reduce
> lock contention by shrinking the footprint of any given entity
> requiring modification. This works great for data entry.
> Unfortunately, the analysis of this data can become an IO burden.
>
> The problem stems from a concept known as snowflaking. Starting
> from table A, we may have a query selectivity of, say, ten
> records. Join in table B with ten more records and 100 points of
> comparison occur for the RDBMS engine. This is expected. The
> snowflake occurs when you bring one more table into the join: ten
> records there means 1000 points of comparison, and each level
> deeper in a snowflake grows the IO exponentially. Data analysis
> will be most efficient when we can start with one base table and
> never join more than one table relationship away.

I pretty much grok what happens with joins.
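
For anyone else following along, the blow-up Sean describes is easy
to see with a toy back-of-the-envelope (the numbers below are just
the ones from his example, nothing to do with our actual schema):

    # Each table joined in contributes ~10 candidate rows.
    rows_per_table = 10
    for depth in range(1, 5):
        comparisons = rows_per_table ** depth
        print("%d table(s) in the join -> up to %d comparisons"
              % (depth, comparisons))

So every extra table in the join is another factor of ten in work,
which is the exponential IO growth he's talking about.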

> If we denormalize in a fashion targeting the elimination of
> snowflakes for potential joins, we end up with a star schema
> (google Ralph Kimball). The central table for analysis, carrying
> all of your measures (like outputs), is called the fact table.
> All of the tables joined into the fact for analysis are called
> dimensions. Each dimension has a unique, artificial integer key.
>
> There is nothing wrong with using the RDBMS tool. Data modeling
> issues often result in developers demonizing the RDBMS and the DBA
> cursing the data architect. ZODB looks interesting as a tool for
> RAD, but not as a replacement for an RDBMS with a properly modeled
> database. In fact, it looks interesting because it offers some
> figurative glue between the RDBMS universe and the HTML view that
> will eventually be needed. On the other hand, my own development
> isn't hindered by my database interaction (scripting tool
> inadequacies aside).
>
> > Also, your example is not fully denormalized in that
> > every schema element should apply to every item. Do
> > all animals, minerals, and vegetables have a
> > ground_depth and/or height that makes sense?
>
> NULL

My use of "denormalization" was me talking over my head. I didn't
even know someone had developed theories behind it. Looks like I have
some reading to do.
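
In the meantime, here's my rough (possibly wrong) picture of what a
star schema might look like for us, as a quick sqlite sketch; every
table and column name below is invented for illustration only:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Dimensions: each gets an artificial integer key.
    cur.execute("CREATE TABLE dim_plant (plant_id INTEGER PRIMARY KEY, name TEXT)")
    cur.execute("CREATE TABLE dim_site  (site_id  INTEGER PRIMARY KEY, soil TEXT)")

    # Fact table: carries the measures plus one key per dimension,
    # so analysis never joins more than one relationship away.
    cur.execute("""CREATE TABLE fact_yield (
                       plant_id  INTEGER,
                       site_id   INTEGER,
                       height_ft REAL,
                       yield_lbs REAL)""")

    cur.execute("INSERT INTO dim_plant VALUES (1, 'apple')")
    cur.execute("INSERT INTO dim_site  VALUES (1, 'loam')")
    cur.execute("INSERT INTO fact_yield VALUES (1, 1, 20.0, 150.0)")

    # Every analytic query is fact -> dimension, one hop, no snowflake.
    cur.execute("""SELECT p.name, s.soil, f.height_ft, f.yield_lbs
                   FROM fact_yield f
                   JOIN dim_plant p ON p.plant_id = f.plant_id
                   JOIN dim_site  s ON s.site_id  = f.site_id""")
    print(cur.fetchall())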


> > IIRC, Bear suggested a two/three column schema with a primary key
> > and then an attribute followed by the data for that attribute.
> > Something like:
> >
> > id | attribute   | data
> > ------------------------
> > 34 | height/feet | 20
> >
> > As I said earlier, this doesn't scale well in an RDBMS.
>
> This represents data-entry-optimized design. It scales very well
> with RDBMS tools, but not if you don't use the data that way.
> This is a schema discussion, not one about the tool used to
> implement it.

Not sure I follow this. Bear was suggesting this because we were
having trouble deciding which [plant] attributes were important and
which weren't (a schema discussion, correct?). His suggestion would
dissolve most of the need to figure out what was important ahead of
time and essentially make a row out of each column of several tables.
I love the idea from an ease-of-implementation standpoint, but I'm not
sold on the extra select to reassemble the plant row or the lack of
constraints (anything could be NULL). OTOH, this is really close to
the loose tagging style of data organization I'm currently in love
with.
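
To make concrete what I mean by the extra select, here's roughly how
Bear's id/attribute/data layout gets put back together (a sketch only;
the attribute names are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # One row per (plant, attribute) pair; any attribute can simply be absent.
    cur.execute("CREATE TABLE plant_attr (id INTEGER, attribute TEXT, data TEXT)")
    cur.executemany("INSERT INTO plant_attr VALUES (?, ?, ?)",
                    [(34, 'common_name', 'apple'),
                     (34, 'height/feet', '20'),
                     (34, 'habit',       'tree')])

    # The extra step: pivoting many attribute rows back into one "plant row".
    cur.execute("""SELECT id,
                          MAX(CASE WHEN attribute = 'common_name' THEN data END),
                          MAX(CASE WHEN attribute = 'height/feet' THEN data END),
                          MAX(CASE WHEN attribute = 'habit'       THEN data END)
                   FROM plant_attr
                   WHERE id = 34
                   GROUP BY id""")
    print(cur.fetchone())   # -> (34, 'apple', '20', 'tree')

Note there's nothing stopping a plant from having no height at all,
which is the lack-of-constraints part that bugs me.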

Waddya think? I would really love to be convinced that this is a good
idea. I'm going to read some ralphkimball.com stuff -n- relax.

> <snip>

Cheers,
Chad

--
Chad Knepp
python -c 'import base64;print base64.decodestring("cHlnQGdhbGF0ZWEub3Jn")'



