
sm-discuss - Re: [SM-Discuss] Teh Future

  • From: David Kowis <dkowis AT shlrm.org>
  • To: sm-discuss AT lists.ibiblio.org
  • Subject: Re: [SM-Discuss] Teh Future
  • Date: Sun, 27 Jan 2013 15:06:54 -0600

I'll add a bit more in response to this mail to try to be more
descriptive about the things I said and why I said them.

On 1/26/2013 10:03 AM, flux wrote:
>> Single spell file:
>> ------------------
>> Opening 6 files is difficult. Opening one file is easy. I can maintain a
>> much better context regarding what my variable names are, and what I'm
>> going to do with them. There's no reason we *must* split things up into
>> many files, and I don't think there's any benefit to it. I believe this
>> is a change we must implement.
> </snip>
>
> I believe you were considering the developer's perspective, but not the
> computation perspective. There is a very clear computational reason to
> have separate files, especially with the way sorcery is currently set
> up. The various stages of casting are truly separate stages, and can
> actually be run separately (see delve). This is, IMHO, actually a good
> thing. If everything for a spell is in a single file, it means that
> single file will be loaded every time sorcery enters a different stage
> of processing. Now, that may not matter much if each file is very small
> and doesn't do much. However, it will mean that sorcery *must* load and
> parse the *entire* file for every stage. Currently, if a file doesn't
> exist at all, sorcery performs the default action for that stage. A test
> if the file exists is much cheaper than parsing a file to find out we're
> just going to run the default anyway. Also, if the single file is large
> enough (unlikely, but possible for monster spells like the linux spell),
> it would consume more RAM/time/etc. to load the file, and you'll be
> loading it multiple times so it stacks. This second argument might be
> moot on most machines where resources are nowadays far more ample than
> they used to be, but there it is.

Computer time is less valuable than people time. I'm not concerned about
the minuscule amount of time, even cumulatively, that will be lost by
doing more computational work. Making people time more efficient is the
whole reason computers exist, and it is the greater priority.
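
For concreteness, the per-stage behaviour flux describes boils down to
something like the sketch below; the function names are made up for
illustration and are not sorcery's actual internals.

    # Rough sketch of per-stage dispatch; helper names are illustrative only.
    run_stage() {
        local spell_dir=$1 stage=$2       # e.g. stage = BUILD, INSTALL, ...
        if [ -f "$spell_dir/$stage" ]; then
            # Stage file exists: source and run just that stage's code.
            . "$spell_dir/$stage"
        else
            # No stage file: the cheap existence test failed, so run the
            # default action without parsing anything.
            "default_$stage"
        fi
    }

With a single monolithic spell file, that existence test turns into
sourcing the whole file at every stage, which is the cost flux is
pointing at; my position is simply that the cost is small enough not to
matter.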


> <snip>
>> Updated init system:
>> --------------------
>> Simpleinit-msb works, but it's old and crappy. I'm a fan of systemd,
>> because it makes things amazingly easy to do, and is ridiculously fast.
>> It's also been adopted by several linux distros, and large projects.
>> However, that's not set in stone, we just need something newer, and
>> preferably something that other distros are using, so we can take
>> advantage of the work that others are doing as well.
> </snip>
>
> Old? Yes. Unmaintained? Yes. Crappy? That depends on how you define
> "crappy". All the systems that use simpleinit with SMGL boot (to my
> knowledge), so it clearly works. Although there has been some work to
> get an agnostic init setup, and a select few have done custom setups to
> run alternative inits, there hasn't been a widespread or large push to
> get a different init system. That implies that for most SMGL users
> simpleinit more than "works", it "works well enough". This, I think, is
> like unix compared to plan9: it works well enough, and the newer
> alternative doesn't bring *significantly* more to the table, so people
> are sticking with the old and tried-and-true.
>
> I'm not advocating that simpleinit is great and we should stay with it.
> I'm not advocating that systemd isn't great. I'm simply stating that
> you're making a subjective but baseless claim that simpleinit is "bad"
> without saying/showing in what way(s) it's bad.

It's *far* easier to build initscripts for systemd than it is for
simpleinit-msb or any other init system I've used. I'm generally
inclined to agree that systemd doesn't follow the traditional UNIX
philosophy, but I really like how easy systemd makes it to build
complicated and powerful init setups. Simpleinit-msb doesn't do any of
those things, and we as a distro would have to reinvent all of that
ourselves. Primarily, however, I would like to move to something that's
more common and used by other distros, so we can leverage the work of
other distros and their users.
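
To make the "far easier" claim concrete: a complete service definition
for systemd is a handful of declarative lines (the service name and
binary below are made up):

    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    ExecStart=/usr/sbin/exampled
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Dependency ordering, process supervision, and restart-on-failure come
with the init system itself, whereas a simpleinit-msb script has to
implement its own start/stop/status handling in shell.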

>
> <snip>
>> Chroot based build process:
>> ---------------------------
>> For building binary packages, I want to take advantage of a chroot and
>> unionfs (or rsync and hardlinks or something.) Inspiration from this
>> page:
>> https://wiki.archlinux.org/index.php/DeveloperWiki:Building_in_a_Clean_Chroot
>> It's probably the sanest way to produce a package that we can ensure
>> isn't melding in dependencies we don't want and such. By ensuring that
>> we build things into binary packages as well, we can catch leaky
>> installs, or missing dependencies when we're building the chain of
>> packages. Additionally, it'll give the system itself protection from a
>> stupid installer doing bad things, or a partially failed install.
> </snip>
>
> I have actually already been working on something like this for a while
> in the newer version of cauldron. Better binary support in sorcery would
> certainly be a plus for this, but at least we can get the basics by
> having bare "defaults" for everything. If you only use the defaults from
> the spells, then you can verify/repeat the binaries by getting the state
> of the grimoire that matched what a given chroot/ISO was built against
> (via git) and running the same spells with the defaults.

Until the spells are updated, at which point you have to go re-verify
all the defaults, with no way of knowing which defaults are new or
changed short of manually going through the whole process. This goes
back to people time being more important than computer time: we should
make the computer do this work for us.
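
As a rough sketch of the clean-chroot idea from that Arch page, using
the rsync-and-hardlinks variant (the paths and the wrapper function are
hypothetical, not an existing sorcery or cauldron tool):

    # Hypothetical clean-chroot build wrapper; nothing here exists in
    # sorcery today.
    build_in_chroot() {
        local spell=$1
        local base=/var/cache/smgl/base-chroot    # pristine base system
        local work
        work=$(mktemp -d /var/tmp/build-XXXXXX)

        # Throwaway copy of the base, hardlinked to keep it cheap
        # (unionfs would do the same job).
        rsync -a --link-dest="$base" "$base"/ "$work"/

        # Build and install only inside the copy, so a leaky or broken
        # install can't touch the host system.
        chroot "$work" cast "$spell"
    }

Whatever the spell actually installed is then the difference between the
copy and the pristine base, which is what should go into the binary
tarball, and anything it needed but didn't declare shows up as a build
failure instead of silently leaking in from the host.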

>
> Note that chrooting won't give perfect security against stupid
> installers, unless your chroots are *very* tight, and even then it's
> possible an intentionally malicious install might have a trick to
> circumvent something. This is always true as a security issue in general
> though, and adding another layer is a good idea.
>
> However, there is an issue with doing *all* installs via chroot: you will
> be casting spells over and again even when they are already installed,
> unless you first graft them in from the host system. This can get
> complicated, but it is possible to do it, except you will only be able
> to do so when the version in the host system matches the options
> requested by the chrooted spell cast. You'll also need a smarter way to
> handle conflicts/merges/updates between versions in the chroot vs.
> versions in the host system. I.e., if you have gcc without g++ in the
> host system, and cast a spell in a chroot that forces/requests gcc with
> g++, you'll likely need the g++ enabled version in the host system (for
> libstdc++ at runtime). That means updating the host gcc. In this case,
> there's probably no issue and you can just do it, but in some other
> cases it might cause existing spells in the host system to break due to
> library changes (especially if a spell forces a dependency without a
> feature that's enabled in the host). This can be done smartly, but will
> need to be planned out and accounted for.

My plan is that storing the spell config in the binary tarball will
indicate when you need something specific installed on the host.
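
Purely as a hypothetical example of that metadata (not a format sorcery
produces today): a small file inside the tarball recording how the
spell was built and what it expects on the host, e.g.

    # Hypothetical build metadata shipped inside a binary tarball.
    SPELL=gcc
    OPTION_CXX=y             # built with g++, so libstdc++ is expected at runtime
    DEPENDS="glibc gmp mpfr mpc"

A host whose gcc was built without that option would then know up front
that it has to update before it can use the binary, which covers the
libstdc++ scenario described above.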

>
> <snip>
>> Declarative spell config:
>> -------------------------
>> Spell configuration needs to not be procedural. I should be able to say
>> "cast kde" and get a menuconfig style interface where I can toggle
>> things off and on and know what the effects of my selections are going
>> to be without having to restart the entire process again. I should also
>> be able to store a config to a file "Dave's KDE Desktop Config" and load
>> that in, and be notified of new options somehow. This is critical not
>> only to making it easier for people to construct systems, but to have
>> repeatable builds. When someone complains that their package doesn't
>> build, we can ask for their config, throw it in a chroot, and duplicate
>> the problem, either finding a patch, or finding out that their config is
>> simply broken. Finally, having stored configs allows us to package those
>> up with a binary package, and should you already have a binary package
>> with the proper config, you can just extract that rather than rebuild it
>> again.
> </snip>
>
> You don't need a declarative config to make this happen. You just need
> to have a system that can handle dynamically-updated menus. And yes, I
> think a menu system would be a smart thing to do for sorcery, at least
> as an option over the current presentation, and has been generally
> requested by others in the past.

I believe that making it strictly declarative will make it easier to
deal with configurations. Having logic in the config makes it much more
difficult for a computer to compare differences between configs, since
we cannot know what a value is without executing that code path.
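
A toy illustration of the difference (the option name and the test in
the procedural half are made up):

    # Declarative: a saved config is just data; diff(1) can show exactly
    # what changed between "Dave's KDE config" and the new defaults.
    diff daves-kde.config new-defaults.config

    # Procedural: the effective value only exists once this code runs, so
    # two configs can't be compared without executing them.
    if some_runtime_test; then
        OPTS="$OPTS --with-semantic-desktop"
    fi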

>
> Whether we really want to have declarative spell configs depends on how
> it's actually implemented. There are trade-offs that need to be properly
> weighed. Non-declarative gives flexibility and power, but declarative
> gives robustness and (much easier) repeatability.

I think robustness and repeatability are more important right now.
Perhaps that's just because we're currently being badly burned by the
lack of them.

>
> I think there's another issue hiding behind this one though: the
> metadata we collect/store for spell configuration. Regardless of how the
> options are presented to the user, ultimately the repeatability question
> comes down to how the spell is finally configured, after getting user
> input. How we store user configurations is an area where I think there
> is much more room for improvement, even over how spells are written by
> the developers. It'd be nice to be able to write a user configuration
> file *by hand* and propagate it to different machines in order to cast a
> spell with the same options on different machines. That doesn't require
> the spell files themselves to be declarative. If enough information (and
> of the right kind(s)) is stored in the final configuration file, then
> sorcery could even outright bypass the spell files and use the metadata
> config file instead to build a spell. IMHO this is a better direction to
> go down first, and the issue of declarative vs. procedural spell files
> can wait.

I disagree, since figuring out how to load in that config is critical
to the procedural vs. declarative question. It's much easier to load a
config from somewhere else if it's declarative, and much harder if it's
procedural, especially if we need to compare things, like which options
are new.

>
> In any event, from both David's and my arguments, it seems to me that
> what's most needed is (not in order of importance):
>
> * core grimoire spells that are tightly controlled/tested and also
> offered as official binary caches (and possibly as a fully separate
> core grimoire)
>
> * improvements to sorcery to handle binary caches better and with more
> repeatability/testability
>
> * improvements to sorcery to (better) recreate spell builds from a
> given configuration, which are (more) repeatable/testable

I would reverse the order of these, because the last will contribute
greatly to the first, and the second will as well. Given those bits of
infrastructure, we can much more easily produce tightly
controlled/tested binaries.


Thanks for your thoughts and for composing the mail,
David





