sm-users - Re: [SM-Discuss] Re: [SM-Grimoire] Re: [SM-Users] tinderboxes

  • From: Andrew <afrayedknot AT thefrayedknot.armory.com>
  • To: Source Mage - Users <sm-users AT lists.ibiblio.org>, sm-discuss AT lists.ibiblio.org, Source Mage - Grimoire <sm-grimoire AT lists.ibiblio.org>
  • Subject: Re: [SM-Discuss] Re: [SM-Grimoire] Re: [SM-Users] tinderboxes
  • Date: Wed, 16 Jul 2003 11:18:22 -0700

On Wed, Jul 16, 2003 at 08:33:36AM -0400, Sergey A. Lipnevich wrote:
> Guys, since I'm not very knowledgeable in this, feel free to correct,
> but wouldn't user-mode Linux support in 2.6 make it easier than chroot?
> Can you maybe test the 2.6 kernel with more spells and start using its
> features? I can only see an obstacle of NPTL in the way, but I think we
> have to switch to NPTL anyway if we want to remain current.
>

Well, from a functional standpoint they are essentially the same thing,
except that you don't get to try a new kernel in a chroot. However, the
linux spell is tested very heavily anyway, so it's a bit irrelevant (and
difficult) to have the tinderbox test it. Is there any other bonus to
using a UML kernel other than that it's elegant?

If we look at things from an efficiency standpoint, I'd put my money on
a chroot. A chroot simply remaps some of the values in the chroot'ed
process's tables (and thus those of all its children as well), so it
incurs essentially no time penalty and runs just as efficiently as a
non-chrooted process. A UML implementation, to my knowledge, runs a
Linux kernel as a userland process. This means that any process running
under this user-mode kernel will eventually make system calls (that's
what a kernel provides); those system calls traverse the user-mode
kernel, go down through some idealized hardware, and come out the other
end. And where is the other end? System calls on your native kernel,
which then have to run through a whole mass of stuff (again) before
they reach real hardware. So you are essentially incurring unnecessary
overhead and throwing CPU cycles out the window (IMO).
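To make that concrete, here is a minimal sketch in C (assumed paths and
commands, not the actual tinderbox code) of what the chroot approach
amounts to: one chroot(2) call remaps the process's root, and everything
run afterwards still talks straight to the native kernel.

/*
 * Minimal sketch only: enter a pre-built chroot jail and run a build
 * command inside it. "/var/tinderbox/root" and the cast invocation are
 * placeholder names. Needs root; error handling is kept to a minimum.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *jail = "/var/tinderbox/root";  /* hypothetical build root */

    if (chroot(jail) != 0) {    /* remap this process's (and its children's) root */
        perror("chroot");
        return EXIT_FAILURE;
    }
    if (chdir("/") != 0) {      /* don't keep a working directory outside the jail */
        perror("chdir");
        return EXIT_FAILURE;
    }

    /*
     * Everything exec'd from here on resolves paths inside the jail but
     * still makes ordinary system calls directly to the native kernel,
     * so there is essentially no runtime penalty.
     */
    execl("/bin/sh", "sh", "-c", "cast some-spell", (char *)NULL);
    perror("execl");            /* only reached if exec itself fails */
    return EXIT_FAILURE;
}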

So with UML we have something that is functionally very similar but less
efficient. If someone has benchmarks to correct me, I'd be willing to
change my opinion, of course.
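In case anyone wants to gather those numbers, a crude micro-benchmark
along the following lines would be enough to see the difference: time a
big batch of trivial system calls and run the same binary natively,
inside a chroot, and under UML. The iteration count and the choice of
getppid() are just illustrative assumptions, not anything I've measured.

/*
 * Rough micro-benchmark sketch (illustrative only, no real numbers):
 * time a large batch of trivial system calls to gauge round-trip cost.
 * getppid() is used because it is a cheap syscall with no library caching.
 */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define ITERATIONS 1000000

int main(void)
{
    struct timeval start, end;
    double elapsed;
    long i;

    gettimeofday(&start, NULL);
    for (i = 0; i < ITERATIONS; i++)
        getppid();                      /* measure kernel entry/exit cost */
    gettimeofday(&end, NULL);

    elapsed = (end.tv_sec - start.tv_sec)
            + (end.tv_usec - start.tv_usec) / 1e6;
    printf("%d calls in %.3f s (%.2f us per call)\n",
           ITERATIONS, elapsed, elapsed * 1e6 / ITERATIONS);
    return 0;
}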

That said, I don't think we should rule out UML; we are about choice, of
course. I'm just trying to point out the relative merits of one approach
over the other, and let's face it, we want to be using every spare cycle
as best we can!

On a side note, I've made significant headway on a beta version of a
tinderbox. I'd like to collaborate with some people on error reporting
and related details, though.


Andrew



