  • From: Terry Hancock <hancock AT anansispaceworks.com>
  • To: cc-community AT lists.ibiblio.org
  • Cc: Discussion on the Creative Commons license drafts <cc-licenses AT lists.ibiblio.org>
  • Subject: [cc-licenses] Proving the success of open methods -- was Re: Restricting Derivative Works
  • Date: Sat, 24 Jun 2006 14:15:56 -0500

(I just redirected this to the community list, since we've left the subject of licenses)

Rob Myers wrote:
> The FSF take copyright assignment but they license their assignments
> under a copyleft license.
>
> They put the case for copyleft as an
> ethical one, but if we believe Eric Raymond then the case has been
> proven in terms of efficiency.

You say. I agree. But where's the data?

> The obvious counterexamples are H2G2 vs Wikipedia and 'shared
> source' or 'community source' and free-licensed open-source
> software.

> Within Free Software if we compare Linux and Copland, or IIS and
> Apache, again we see ethical and efficiency wins.

You say. I agree. But where's the data?

> However, the conclusions there are subjective.

> I would say that the numbers speak for themselves. But it's important
> not to mistake popularity for success...

You haven't mentioned any numbers, Rob.

Where. Are. The. Numbers?

Am I making my point? ;-)

I'm completely persuaded of the success of such projects,
but only through my subjective impression of their 'success'. I have
a definite 'scientific intuition' that if one collected the data
on measurable, objective 'success metrics', the point
could be driven home with a sledgehammer.

But that's not the same as actually having the statistics to back
up the argument. Has no one collected that kind of data?

Supposing, for the sake of argument, that no one has
bothered to try proving this scientifically, here's what you'd
need:

1) Decide on a measurable index of success (or several of
them). It must be:

- easy to collect for both free and proprietary datasets

- easy for third parties to verify

- exactly comparable for all datasets

- objectively measurable (i.e. a number)

- immune to marketing biases (e.g. number of websites
is a no-go, because that's a function of PR money as
well as deployment success)

2) Choose representative datasets. These could be:

- comprehensive -- just measure everything ($$?)

- variant-based -- free/non-free, copyleft/non-copyleft,
assignment/no-assignment, etc

- matched 'comparables' (this is what we've
done subjectively in this thread)

3) Actually collect the data

- hopefully this is a web-spider job (a toy sketch of the
downstream analysis follows this list)

4) Visualize the data (make charts)

5) Analyze it in light of the existing theories of the social
dynamics of collaboration (no need for equations,
this would be an empirical study, not a theoretical model)

6) Draw a conclusion
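
To make steps 2 through 6 concrete, here's a toy sketch in Python
of what the downstream comparison could look like once the data
exists. Everything in it is a placeholder assumption: the
'contributor count' metric, the license categories, and all the
numbers are invented for illustration, not real measurements. A
real study would feed in whatever the spider from step 3 actually
collected.

#!/usr/bin/env python
# Sketch of steps 2-6: compare a hypothetical 'success metric'
# (contributor count) across license categories. Every project
# name and number here is a placeholder, NOT a real measurement.

from statistics import median

# Steps 2-3: stand-in for data a web spider would collect:
# (project, license category, metric value)
data = [
    ("project-a", "copyleft",    412),
    ("project-b", "copyleft",    198),
    ("project-c", "permissive",  150),
    ("project-d", "permissive",   87),
    ("project-e", "proprietary",  40),
    ("project-f", "proprietary",  22),
]

# Steps 4-5: group the metric values by license category
groups = {}
for name, category, metric in data:
    groups.setdefault(category, []).append(metric)

# ...and summarize each group (a chart would do this visually)
for category, metrics in sorted(groups.items()):
    print(f"{category:12s} n={len(metrics)}  median={median(metrics)}")

# Step 6: the conclusion comes from comparing these summaries --
# a real study would need many more projects, several metrics,
# and a significance test before claiming anything.

The point being: once honest numbers exist, the analysis itself
is nearly trivial. All the hard work (and all the cost) lives in
steps 1 through 3.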

Hopefully, the conclusion is more or less what you and I
believe to be the case from our unscientific study of the
community (otherwise our subjective impressions are
wrong, and we should be prepared to eat some crow).

But a scientific analysis like this is hard to refute -- it
would sway people who lack our shared experiences.
The truth is, I'm not excited about the idea of taking on
such a study (read: 'If nominated I will not run, if elected
I will not serve' -- at least not unless someone wants to give me
a grant ;-)), but I'd also be very surprised if it hasn't
already been done by *somebody*.

Cheers,
Terry

--
Terry Hancock (hancock AT AnansiSpaceworks.com)
Anansi Spaceworks http://www.AnansiSpaceworks.com




