pcplantdb - Re: [pcplantdb] PIW Relationships Modelling

  • From: Sean Maley <semaley@yahoo.com>
  • To: pcplantdb <pcplantdb@lists.ibiblio.org>
  • Subject: Re: [pcplantdb] PIW Relationships Modelling
  • Date: Wed, 20 Jul 2005 07:16:18 -0700 (PDT)



--- Richard Morris <webmaster@pfaf.org> wrote:

> Chad Knepp wrote:
> > Sean Maley writes:
> > > --- Chad Knepp <pyg@galatea.org> wrote:
> > >
> > > <snip>
> > >
> > My use of denormalization was talking over my head. I didn't even
> > know someone had created theories behind it. Looks like I have
> > some reading to do.
> >
> Indeed there is. I ended up having to teach a class on this stuff a
> while back not knowing the theory. Apparently there are 1st, 2nd
> and 3rd Normal Forms for data with varying degrees of
> normalisation.

There are also 4th and 5th Normal Forms, and more with their own
names (Boyce-Codd, for example). However, greater normalization
mainly helps transactional systems, and I believe we are discussing
an analytical system. Think about what has to be done when joining
many tables, which is exactly what normalization encourages: every
match in table 1 needs to be paired up with every match in the next
inner table. Some recent improvements in RDBMS packages perform a
sort merge, but the fundamentals remain: each record from the first
table is scanned against the SARG (search argument) for the next
table, and sort merges are only possible with favorable index
statistics. Ten hits per table in a four-way join is the same work
as scanning a 10,000 record table. You can imagine what happens at
100 => 100,000,000 and 1,000 => 1,000,000,000,000.
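
To make the arithmetic concrete, here is a rough sketch of the kind
of four-way join I mean; the table and column names are invented for
illustration, not taken from any current schema:

    -- Hypothetical four-way join: with a nested-loop plan, the work
    -- grows roughly as the product of the matching rows per table.
    SELECT p.latin_name, u.use_name, h.zone, r.other_plant
      FROM plants p
      JOIN plant_uses u ON u.plant_id = p.id
      JOIN habitats   h ON h.plant_id = p.id
      JOIN relations  r ON r.plant_id = p.id
     WHERE p.genus = 'Malus';

    --   10 matches per table ->   10^4 =            10,000 pairings
    --  100 matches per table ->  100^4 =       100,000,000 pairings
    -- 1000 matches per table -> 1000^4 = 1,000,000,000,000 pairings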

A tiny little database can still peg a CPU. If all you wanted was to
have millions of records entered, then normalization would be the
key to success. In the case of a search engine for plant data, the
many tables may as well be one big monolithic table, rather than
doing the work of joining them back together on every query anyway.
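
By the monolithic alternative I mean, very roughly, something like
the following (all names invented): one wide, read-optimized table
that a search only ever scans or index-seeks, with no joins at query
time.

    CREATE TABLE plant_search (
        plant_id     INT,
        latin_name   VARCHAR(100),
        common_name  VARCHAR(100),
        habitat      VARCHAR(100),
        hardiness    VARCHAR(10),
        edible_uses  TEXT,
        medicinal    TEXT
    );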

It is true that updates will be more precarious, but this can be
resolved by batching them (plant updates would not be "real time",
but scheduled during inactive time blocks).
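
As a sketch of the batching idea, reusing the made-up names from
above, the flat search table would simply be rebuilt from the
normalized source tables during an inactive window (say, nightly
from cron):

    -- Nightly rebuild of the flattened search table; run during an
    -- inactive time block.
    TRUNCATE TABLE plant_search;
    INSERT INTO plant_search
        (plant_id, latin_name, common_name, habitat, hardiness,
         edible_uses, medicinal)
    SELECT p.id, p.latin_name, p.common_name, h.habitat, h.zone,
           u.edible, u.medicinal
      FROM plants p
      LEFT JOIN habitats   h ON h.plant_id = p.id
      LEFT JOIN plant_uses u ON u.plant_id = p.id;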

It is also true that this system doesn't have the current usage
statistics for any of this to be an issue. However, my understanding
is that this system would attempt to be a permaculture
infrastructure centerpiece: multiple educational organizations would
forego their current databases to come to this one. We don't know
the full extent of those statistics, since some of the organizations
may not exist yet, and we also don't know what commercial
applications might benefit the permaculture community by using it as
well.

Therefore, knowing the primary usage to be mining for information,
the design decision needs to tilt towards the OLAP end of the
spectrum. The rules of normalization (OLTP) will only help you
properly denormalize to the OLAP equivalent. However, you can also
use the Kimball methods to design a star schema: identify measures
(fact tables) and dimensions (attribute tables). The fact tables
hold the relationships you can quantify, like how much of something
is produced over some amount of time.
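
Roughly, a Kimball-style star schema would look like the following;
the particular measures and dimensions are placeholders just to show
the shape, not a proposal for the actual model:

    -- Dimensions: the descriptive attributes you slice by.
    CREATE TABLE dim_plant  (plant_key  INT PRIMARY KEY,
                             latin_name VARCHAR(100),
                             family     VARCHAR(50));
    CREATE TABLE dim_site   (site_key   INT PRIMARY KEY,
                             climate    VARCHAR(50),
                             soil       VARCHAR(50));
    CREATE TABLE dim_period (period_key INT PRIMARY KEY,
                             year       INT,
                             season     VARCHAR(20));

    -- Fact table: the quantifiable relationship, e.g. how much a
    -- plant yields at a site over a period.
    CREATE TABLE fact_yield (
        plant_key   INT REFERENCES dim_plant(plant_key),
        site_key    INT REFERENCES dim_site(site_key),
        period_key  INT REFERENCES dim_period(period_key),
        kg_produced DECIMAL(10,2)
    );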

Which direction you come into your schema should also be decided:
either you denormalize based on your analytical ideals, or you start
talking measures and dimensions from the outset. The star schema is
easier to design, but it is pure OLAP; I can help you write the
batch jobs, but mysql may present problems as the scale of the
batching collides with the analytical processing.

> Have a look at
> http://en.wikipedia.org/wiki/Database_normalization
>
> which describes all.

With slight technical errors, and tilted more towards theory than
practice. Not that I'm any expert, but the disk still seeks, and
program code always finds its way to the CPU instruction set before
it executes. Every optimal design has an equally optimal use.






