  • From: Ted Anderson <ota AT transarc.com>
  • To: bluesky AT franklin.oit.unc.edu
  • Subject: Re: Root and Branch Naming
  • Date: Tue, 27 Mar 2001 20:41:14 -0500


-----BEGIN PGP SIGNED MESSAGE-----

On 19 Mar 2001 11:23:36 -0800 hal AT finney.org wrote:
> I think you are assuming a relatively deep hierarchical model.

I am indeed assuming a deep hierarchical file system model. My
reference point on this is something like AFS, which basically assumes a
Unix command line reference model. On the other hand even a stand-alone
Windows box has many deep pathnames. If you have 10^20 objects to keep
track of, you're going to need a lot of hierarchy.

> Let me first toss out the challenge that we may not need a naming
> hierarchy at all. Is it really necessary that when I insert a
> document into the network it has a name like
> /technical/papers/computerscience/peertopeer/bluesky/problemswithnames.txt?
> Is someone going to type this in? If not, why not something like Mark
> Miller's pet-name system [1],
> <key>7fa89sdf0a7sf098asf0978asf<s/>problemswithnames.txt</key>, which
> shows up in your client as "Hal's problemwithnames.txt"? I'd like to
> see more justification of the need for a hierarchical naming system
> that fits every single piece of information in the world into a single
> hierarchy rooted at /.

Well, my experience is that one very rarely actually types in a long
absolute pathname. And when I do, I rely on file name completion to
make it easier and to catch typos as they arise, not after I've got a
50-character path typed in. Generally, one starts from a working
directory or a collection of path abbreviations, usually implemented as
symlinks from an easy-to-reach starting point. For example, I keep my
development sandboxes and many source trees I reference frequently in
~/de, so I can say ~/de/sb1, ~/de/afs3.6, or ~/de/linux-2.4-test7 and
quickly get to these far-flung parts of the hierarchy. Clearly, this is
a little personal namespace, and my browser bookmarks work pretty much
the same way. So these are pet names of an ad hoc sort. Even the links
in a web page are really page-local names for remote pages.
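
In that sense a pet-name table is just a per-user map from short names
to global ones. A trivial sketch in Python, with every entry made up
for illustration:

    # A per-user pet-name table: short local names resolving to long
    # global names.  All entries here are hypothetical.
    pet_names = {
        "de/sb1":    "/afs/transarc.com/dev/sandboxes/sb1",
        "de/afs3.6": "/afs/transarc.com/src/afs3.6",
        "hal-paper": "<key>...content-hash...</key>/problemswithnames.txt",
    }

    def resolve(name: str) -> str:
        # Fall back to treating the name as an ordinary global path.
        return pet_names.get(name, name)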

However, absolute paths are invaluable for storing in configuration
files and scripts and the like so that these things can be shared and
the names are the same for everyone. The more collaboration and sharing
there is the more important uniform global names are.

So I think we need both more and better ways to give objects short,
easy-to-remember names, as well as long, descriptive, uniform,
permanent, global names.

> > For reasons of stability and replication I think the upper parts
> > .. of the naming hierarchy are best implemented by ... [CHKs].
>
> I see two problems with this. One is the assumption that they are
> slowly changing. Certainly in the DNS that is not the case. However
> the DNS is a very shallow hierarchy and you are probably assuming a
> deep one. Still I suspect that directories near the top are going to
> be relatively dynamic, because everyone is going to want a short name
> just like today.

This is an issue. First, I think a little more depth would be a good
idea. As long as the top-level names are kept short, this wouldn't
hurt conciseness too badly. Second, while I assume new directory
hashes would be produced more or less daily, I don't think that means
everyone needs to fetch the latest copy every day. If the process is
automated, I don't see that it would be a big problem to do this once a
week or so. Currently, individual nodes don't typically cache these
upper-level directories anyway; they contact a nearby name server to
request lookups and just cache individual results with a small
time-to-live.

So the slowly-changing assumption mostly applies not to the directories
as a whole but to the individual entries. Thus, name mappings are
cacheable with a simple coherence mechanism.
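
A rough sketch of that per-entry caching in Python, where
lookup_at_server stands in for whatever protocol contacts the nearby
name server:

    import time

    CACHE_TTL = 24 * 60 * 60       # refresh cached entries daily, say
    _cache = {}                    # name -> (expiry time, CHK)

    def lookup(name, lookup_at_server):
        """Resolve one name, caching the result with a small TTL."""
        now = time.time()
        hit = _cache.get(name)
        if hit is not None and hit[0] > now:
            return hit[1]                      # still fresh
        chk = lookup_at_server(name)           # ask a nearby name server
        _cache[name] = (now + CACHE_TTL, chk)
        return chk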

> Second, it's not clear how a directory informs its parents that an
> update is needed, in an authenticated way. If
> /technical/papers/computerscience adds a new subheading, it needs to
> tell /technical/papers, which then needs to tell /technical, which
> then needs to tell the root name servers. This notification
> methodology is outside the scope of your proposal, but it very likely
> will involve public-key cryptography. This is exactly the technology
> you propose to deal with the more dynamic directory levels below. Why
> not just use PKC at all levels? See below for how Freenet proposes to
> do it.

This is a good point. I haven't thought very hard about the protocol
these guys would use to communicate among themselves. When I do, I
imagine some kind of signed update records that are collected by the
parent until it is time to generate the next update. This stream of
signed updates would work pretty much like a log, but without the usual
serialization guarantees that a transaction system provides. One could
expose some of this mechanism in the directory format itself, which
might be more general. In particular, the names in a directory are
nearly independent mappings and could be distributed individually or in
various groups if that were convenient.
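
To make that concrete, here is a minimal sketch in Python; the sign
function is only a hash-based stand-in for a real public-key signature:

    import hashlib, json, time

    def sign(blob: bytes, key: bytes) -> str:
        # Placeholder: a real system would use a public-key signature.
        return hashlib.sha256(key + blob).hexdigest()

    def make_update(name, new_chk, owner_key):
        record = {"name": name, "chk": new_chk, "time": time.time()}
        blob = json.dumps(record, sort_keys=True).encode()
        return {"record": record, "sig": sign(blob, owner_key)}

    class ParentDirectory:
        def __init__(self):
            self.entries = {}    # name -> CHK
            self.pending = []    # log of signed updates since last snapshot

        def submit(self, update):
            # Signature checking omitted; a real parent would verify here.
            self.pending.append(update)

        def publish(self):
            # Fold the log into the directory and emit the new CHK.
            for u in self.pending:
                self.entries[u["record"]["name"]] = u["record"]["chk"]
            self.pending = []
            blob = json.dumps(self.entries, sort_keys=True).encode()
            return hashlib.sha256(blob).hexdigest()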

For example, a very large directory such as the DNS ".com" might
actually be divided physically into 26 sub-directories based on the
first letter of the name. Logically, the directory is wide and flat,
but physically, it is stored as a two-level hierarchy. As Ross pointed
out, smaller components allow easier manipulation and verification than
huge monolithic objects do.
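
A sketch of that physical split, assuming a directory is just a mapping
of names to CHKs:

    def bucket(name: str) -> str:
        first = name[0].lower()
        return first if first.isalpha() else "other"

    def shard(entries):
        """Split one wide, flat directory (name -> CHK) into smaller
        sub-directories by first letter, each of which can be hashed,
        verified, and updated independently."""
        shards = {}
        for name, chk in entries.items():
            shards.setdefault(bucket(name), {})[name] = chk
        return shards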

> > ... lower levels [use] a server / client architecture.

> Second, this mechanism would be useful for more than just updating
> directories. What you have described would seem to handle file
> updates as well. It makes sense to treat directories similarly to
> ordinary files. Whatever mechanisms we have to deal with changes and
> updates for directories can be done with files, too.

Yes, sort of. What the owner can do that no one else is trusted to do
is decide which updates to accept; in other words, to exercise access
control. In the upper levels this is done by the root name authorities
(RNAs), who periodically issue CHKs as detached signatures. While there
is public key cryptography (PKC) involved in distributing these CHKs,
in the signatures in the directories, and in the protocol for collecting
updates from the children, it is only part of the story. The other part
is that the RNA applies access control and reputation to the creation
of these directories. The same is also true at the lower directory
levels and even for files. Someone must exercise access control, and
that someone has to be the key holder.

Another aspect of lower-level directories and some files is that they
are accessed like databases. That is to say, they are queried and
updated (read and written) by multiple parties. Each party expects
updates to be atomic, consistent, and durable (basically ACID-like) and
lookups to be globally serialized with respect to all updates. Some
types of files need these properties as well but call the accesses
"reads" and "writes". Add a locking protocol to provide isolation in
addition to atomicity, and you have ACID semantics. In these cases, a
single agent manages access to the object to provide a coherent view.

A single agent is needed to provide both access control and coherence to
mutable objects.
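
As a minimal sketch of that single-agent idea, assuming the agent both
holds the key and orders all access:

    import threading

    class ObjectAgent:
        """One agent serializing all access to a mutable object.
        Access control is reduced to a bare key comparison here."""

        def __init__(self, owner_key, initial_value):
            self._lock = threading.Lock()
            self._owner_key = owner_key
            self._value = initial_value
            self._version = 0

        def read(self):
            with self._lock:     # lookups serialized against updates
                return self._version, self._value

        def write(self, key, new_value):
            if key != self._owner_key:   # only the key holder may update
                raise PermissionError("not the key holder")
            with self._lock:             # atomic, globally ordered update
                self._version += 1
                self._value = new_value
                return self._version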

Most files and some directories, however, are rarely shared or rarely
modified and so have little need of database-like semantics. In these
cases, using CHKs has two major benefits. CHKs allow nodes the
independence of being able to create globally unique names for objects
without contacting a central authority (such as a file server). They
also permit easy caching without concern for the source of the object.
Ideally, the mechanisms needed for directories and database-like files
extend gracefully and efficiently to handle unshared or static files.
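
Concretely (using SHA-256 here just as an example hash):

    import hashlib

    def chk(content: bytes) -> str:
        # The hash of the content *is* its globally unique name; any
        # node can mint one without contacting a central authority.
        return hashlib.sha256(content).hexdigest()

    def verify(content: bytes, key: str) -> bool:
        # Self-certifying: a copy from any cache can be checked against
        # its name, so the source of the object doesn't matter.
        return chk(content) == key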

Nearer the top of the directory hierarchy, very high levels of sharing
and relatively static content (in the sense mentioned above) make
database-like mechanisms inappropriate. Instead, periodic snapshots
provide approximate, time-bounded consistency that works well enough
most of the time.

> Third, it's not clear that the auxiliary information of server and
> snapshot are necessary. Freenet and OceanStore use the SVKs as direct
> indexes into the network. In Freenet they are "first-class"
> alternatives to CHKs; in OceanStore they are the primary addresses of
> documents. Given that you have a network that can find data, you
> don't need to store server addresses. For performance it may be
> desirable to cache data, but that might be true of leaf-node data as
> well as directory data.

The problem with using the data-location network for key hashes is that
there is no one-to-one mapping between the key and the data. With CHKs
the data is self-certifying. With SVKs one can verify the signature to
tell when a datum matches the key, but there is no limit to the amount
of data that goes with a single key. This makes the lookup process
ambiguous and ill-defined. It makes more sense to put someone in charge
of organizing the set of all data signed by a key and to provide a way
to find that someone. I suggested a list of server IP addresses and a
snapshot to use if all the servers are down, but there may be better
approaches.
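
One hypothetical shape for such a record, with contact_server and
fetch_chk standing in for the real lookup machinery:

    # A hypothetical record that a signed key could resolve to: rather
    # than mapping the key to an unbounded set of data, map it to one
    # signed pointer naming who is in charge, plus a fallback snapshot.
    pointer_record = {
        "servers": ["198.51.100.7", "203.0.113.21"],    # example addresses
        "snapshot_chk": "<CHK of the last published snapshot>",
        "sig": "<key holder's signature over the fields above>",
    }

    def fetch(record, contact_server, fetch_chk):
        for addr in record["servers"]:
            try:
                return contact_server(addr)       # live, coherent view
            except ConnectionError:
                continue
        return fetch_chk(record["snapshot_chk"])  # stale but available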

> One of the advantages of your explicit-server model is that it allows
> the key holder more control over updates. Otherwise you do have
> problems with cache staleness and distribution of updates. Freenet
> does not have a solution to this yet; OceanStore has a very elaborate
> mechanism that is supposed to chase down all copies of a document and
> update it.

As I mentioned above, access control is a crucial aspect of mutable
data. Using PK signatures is a decentralized way to implement access
control. If the key holder, or his delegate, is online, he can also
provide synchronization services (coherence).

I am very concerned about the complexity of the OceanStore system. I
wish someone who understands it well could comment on the trade-offs
made in its design.

> > I still think that [SPKI] is best for handling root names. ...

> As I understand this, the root name authorities (RNAs) would not
> manage the name spaces per se, but would be responsible solely for
> mapping top level names into top level directories. In the example
> above, they would map /technical into a CHK which would point at the
> directory for everything under /technical. Typically the RNAs would
> support multiple such mappings, and each such mapping would be
> supported by multiple RNAs. Hopefully there would be substantial
> agreement among the RNAs about which directory each top level name
> gets mapped to. This agreement would be essentially 100% among the
> conservative RNAs, while the liberal RNAs would handle a wider set of
> names, including those for which there was competition and no
> consensus yet on which directory was the "real" one for that domain.

This isn't exactly what I was thinking. I was assuming that each user
would select a supplier for each top-level name in his private
namespace. For example, Verisign might provide me with DNS names, which
I would call /dns/com/microsoft and /dns/edu/mit and /dns/org/eff. I
might have some other RNA provide other parts of my root namespace, for
example, IEEE might provide a compendium of technical research papers
available as /research, Project Gutenberg might provide online books
categorized by Library of Congress subject headings under /books, RIAA
might provide a similar service for /music, SourceForge could provide
access to a global source tree under /src, and so forth. In addition, I
have my own names for things under /private/mail, /temp,
/local/projects, or whatever.

Without much trouble, I could allow multiple providers of some names by
redirecting misses in the primary to a secondary or tertiary provider.
Indeed, I could offload this merging behavior to a meta-namespace
provider, like today's web meta search engines.
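
The redirect-on-miss idea is just a fallback chain; a sketch with
hypothetical provider objects:

    def resolve(name, providers):
        """Try each namespace provider in order; the first hit wins.
        For /dns, `providers` might be [verisign, gnudns, ...]."""
        for provider in providers:
            result = provider.lookup(name)   # hypothetical interface
            if result is not None:
                return result
        return None   # missed in primary, secondary, and tertiary alike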

I assume that root naming conventions (i.e. /dns, /books, /src) would
spring up to ensure that most of the widely shared names would look the
same from every node. If someone wanted to organize things
differently, however, the only consequence is that global names wouldn't
work for him without a translation step.

> What is to prevent the owner of /com or /sex from overcharging for
> their name registrations, if they have a de facto monopoly conferred
> by the conservative RNAs? The only thing that can limit them is fear
> that others will challenge them by either stealing /com or by starting
> competing domains. But for these challenges to be credible it must be
> relatively easy to get new names into widespread use. This implies
> that even conservative RNAs must be relatively flexible, or they can
> handle only a small percentage of widely used names.

The providers of popular names don't get a monopoly quite this easily.
If Verisign charges too much for "com/microsoft", then Microsoft can pay
GnuDNS instead. As a user, I can choose either Verisign or GnuDNS to
provide my /dns; if GnuDNS has more names in it, then I'm more likely to
use them than Verisign. As a consequence, Verisign may choose to
include "com/microsoft" just to attract users. It might be a bit like
competing yellow-pages companies.

> The experience we are getting from the DNS world is that top level
> names are extremely valuable. The main registry for .com, .org and
> .net was valued at 21 billion dollars! Given the huge amounts of
> money to be made here, I don't think it is realistic to assume that
> the market will be at all stable or orderly. DNS has been limited
> until now by institutional monopolies. Under competition, with no
> trademark law to provide property rights in names, it's going to be a
> wild ride.

I'm certainly not sure how it would all work out. The dynamics of a
global namespace are sure to be complex and unpredictable. But assuming
RNAs need to attract users (subscribers), they will have to compete on
the basis of completeness, stability, reliability, etc. They will also
have to work out some means of agreeing among themselves to provide
consistent mappings for most names.

> > In summary, I would suggest that there are three levels of the
> > namespace which require different implementation mechanisms:
> > 1 - root names, depending on political and social issues
> > 2 - hash tree namespaces, providing high performance and stability
> > 3 - online directories, supporting updates and synchronization

I'd expand on this list a bit:
1 - root names, defined by convention; social, political, and economic
issues.
2 - hash tree namespaces, providing high performance and stability with
weak, time-based consistency.
3 - online directories and database-like files, access control and
strong consistency.
4 - private and static data, CHKs provide independence and sharing.

> I think there are a lot of good ideas here. However I still don't see
> the root name problem as being solved by letting the market screw in
> the lightbulb. Institutions like trademarks and courts have evolved
> worldwide because of exactly the problems that exist when names can be
> used by anyone. Our admiration for markets should not blind us to the
> fact that they can't solve all problems.

I agree. It may be that providing the level of service their customers
demand will require competing RNAs to cooperate to a degree that won't
look much like competition in the usual commercial sense. The key is
giving users the ability to vote with their feet.

Ted

-----BEGIN PGP SIGNATURE-----
Version: PGPfreeware 7.0.3 for non-commercial use <http://www.pgp.com>

iQCVAwUBOsDF+QGojC9e/wyBAQEGzAQA1DxcFjZ1U5CZqFpVH3nUUrq6U1P2WAkZ
NIoh4sxCyWYWzwtGXNPMBnejQvEA/amWal2xZGPReEtoWcFUAPiArKexJmX+alWo
Gm9tzV/uE5AGhP3k1WKEC4tFgtruhIoNUjt60dgveF103iAiGH6JfSRYRs0yIwqi
yFPHTgvYa3s=
=ezHN
-----END PGP SIGNATURE-----





