
cc-bizcom AT lists.ibiblio.org

Subject: A discussion of hybrid open source and proprietary licensing models.

  • From: Marshall W Van Alstyne <marshall AT MIT.EDU>
  • To: cc-bizcom AT lists.ibiblio.org
  • Subject: Re: [Cc-bizcom] Greetings list!
  • Date: Mon, 27 Sep 2004 16:30:19 -0400

Apologies to the newsgroup for the delay. I've been traveling through Michigan and Missouri. A few coarse thoughts on modeling below...

At 02:48 PM 9/21/2004, Ryan S. Dancey wrote:
On Fri, 2004-09-17 at 15:45 -0400, Marshall Van Alstyne wrote:

> >One of the stated aims of this list is to come up with an economic model
> >for this sort of thing (I think, I'm not an economist). Do you think it's
> >possible to quantify the effects of the OGL on WotC's revenue stream, or
> >is that the wrong way of looking at the equation?
...

> It would be possible to conduct such modeling, if Wizards would release
> quantifiable information about the unit sales volume of the core D&D
> books, especially if that data could be combined with unit sales volume
> data from the half-dozen major distributors to correlate with 3rd party
> D20 product sales. Unfortunately, neither set of data points is, or is
> ever likely to be, available. :(

Let me take a stab at distinguishing between two different types of models.

In one, the type above, we'd need lots of data to give predictive measures of interesting effects. So, for example, we might use data (if it were available) to predict sales volumes based on the numbers and sizes of distributors. This would be a regression or econometric model. We'd then fit parameters to point clouds; essentially, a set of axes would be constructed to span the space of points.

With this, we could answer questions about the sensitivity and confidence level of changes in output parameters based on changes in input parameters.
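
To make the distinction concrete, here is a toy sketch (in Python) of this first kind of model. Every number in it is invented, since the real sales data isn't available; the point is only the mechanics of fitting parameters to a point cloud and then reading sensitivities off the fitted coefficients.

    # A toy econometric-style model: ordinary least squares on made-up data.
    # All figures below are invented for illustration -- we have no actual
    # sales data from Wizards or the distributors.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Hypothetical inputs: number of distributors and average distributor size.
    num_distributors = rng.integers(1, 7, size=n)
    avg_distributor_size = rng.uniform(10, 100, size=n)

    # A hypothetical "true" relationship plus noise, standing in for unit sales.
    unit_sales = (500
                  + 300 * num_distributors
                  + 12 * avg_distributor_size
                  + rng.normal(0, 150, size=n))

    # Design matrix with an intercept column.
    X = np.column_stack([np.ones(n), num_distributors, avg_distributor_size])

    # Fit parameters to the point cloud.
    beta, residuals, rank, _ = np.linalg.lstsq(X, unit_sales, rcond=None)

    # Sensitivity: each coefficient estimates how much predicted sales change
    # per unit change in that input, holding the others fixed.
    print("intercept, per-distributor effect, per-size effect:", beta)

    # Rough standard errors, from which confidence intervals would follow.
    dof = n - rank
    sigma2 = residuals[0] / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    print("standard errors:", np.sqrt(np.diag(cov)))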

The second type of model is different; it's analytic. Like a color palette, we'd use rough systems of equations to sketch spaces where we think various properties hold. Then, functions can sometimes capture our intuitions about how assumptions interact. So, for example, we might specify an equation that says developer output rises with the level of open content but falls as tasks become less prestigious. Another equation might specify the ability to reuse open content (software, for example, tends to be more "reusable" than paintings). Another might specify how network effects boost adoption. Juxtaposing these equations carves the space up into distinct regions.

With this, we can ask whether certain properties exist in any region at all. We can ask how big one region is relative to another. And at what point does one property become false as we move between regions?
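
And a companion sketch of the second kind. The functional forms below are pulled out of thin air purely to show the mechanics: write down equations encoding the intuitions, then ask which regions of the assumption space satisfy which properties, and how large those regions are.

    # A toy analytic-style model: invented functional forms standing in for
    # the intuitions above, used to carve a parameter space into regions
    # rather than to predict any real quantity.

    import numpy as np

    def developer_output(openness, prestige):
        # Assumption: output rises with the level of open content and falls
        # as the share of unprestigious tasks grows.
        return openness - 0.5 * (1 - prestige)

    def adoption(openness, reusability):
        # Assumption: network effects boost adoption, more so when content
        # is easier to reuse (software more than paintings, say).
        return openness * (1 + reusability)

    # Sweep a grid of assumptions rather than fitting to data.
    openness = np.linspace(0, 1, 201)
    prestige = np.linspace(0, 1, 201)
    O, P = np.meshgrid(openness, prestige)

    reusability = 0.8  # software-like content; try 0.2 for paintings

    # Region where both properties hold: positive developer output AND
    # adoption above some threshold. The threshold is arbitrary; what
    # matters is the shape and size of the region, not the numbers.
    region = (developer_output(O, P) > 0) & (adoption(O, reusability) > 0.5)

    # Questions the analytic model can answer:
    print("region is non-empty:", region.any())
    print("fraction of the space it covers:", region.mean())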

The first model is actually terrible for making predictions at the boundaries. It can also never be used to explore phenomena with no recorded data. And it says very little about the ratios of multiple spaces. Rather, it combines vectors in varying proportions to hit points in space with interpretable levels of accuracy.

The second model is terrible for such sensitivity analysis. But that is also its strength. You can warp the entire collection of spaces and the ratios will stay fairly constant. The locations of points will have moved (so predictive value is lousy), but claims about the relative proportions tend to remain true. These models are great for "seeing" into the dark spaces where you've been unable to collect data.

Further, the former empirical-style models are frequently used to confirm or reject claims of the latter theoretical-style models, which are themselves used to guide data gathering for the empirical style. In a sense, each needs the other.

So, I'm optimistic that even if we can't get data immediately, we can still use models of one type or another to boldly go where no one has gone before :)

MVA


