Many have noticed the failure of ASDF-Install to make installing Lisp packages universally easy. The situation is such that most serious Lispers don't bother with it, and many casual Lispers run into a blocking problem, such as a dependency that won't install. The latter generally do one of the following:
- Ask the failing package's mailing list for help. This generally elicits either a new package being posted or an exhortation to use the VCS, as releases are worthless for this particular package.
- Ask on IRC. This also generally leads to a response of “use the VCS”.
- Just use the VCS themselves, or explode the tarball and configure it themselves.
- Give up. Well, that's that.
First, there is the symptomatic drawback. Everyone familiar with source releases has noticed that you have to download the entire package again to get the changes, which could otherwise be represented in a much smaller compressed diff. Some GNU packages even distribute an xdelta, a kind of binary diff, along with release tarballs. The problem with this is that the number of diffs or xdeltas needed for maximum download efficiency grows quadratically: to let a user jump from any release straight to any later one, you need a delta for every pair of releases, on the order of n² where n is the number of releases over the course of the project. Setting that aside, now that broadband is broadly available, many believe the tarball-upgrade problem has been solved.
Some tarballs have real problems
However, I believe we've only treated the symptom, not the actual problem. We should have taken the inefficiency of making users download whole source trees over and over just to get little changes as a sign that there was a more serious problem with tarballs, demonstrated by further symptoms.
With tarballs, there's no automatic way to be sure you have the latest version. So you report a bug and get the reply “it's fixed in the new release; upgrade.”
Then, there's no automatic way to upgrade to said version. Even when managed by a binary distribution system like APT, you'll still encounter cases where some feature you want or need has been implemented upstream, possibly even released, but you just have to sit on your hands until it trickles down.
I've encountered this over and over with binary distributions: had to install my own git to get the pull --rebase option; cdrkit to get a bugfix for my DVD burner; hg to get the new rebase plugin; Darcs to see whether Darcs2 is better at managing trees; Mono to get .NET 2.0 features; etc. Now I'm trying to build my own Evolution to be able to connect to MS Exchange 2007, without much luck so far.
As it is, there's no sense in binary-installing software I'm actually serious about using, such as Bazaar-NG or Lisp implementations; it's easier to just track and build new releases myself.
Most importantly, it places a serious barrier between using code and modifying code. It's one thing to make a change, build, and install, but the change isn't automatically persistent. If all you have is the binary, you have to download the source again. If you didn't plan on making the change when you exploded the tarball, you have to rename the tree and explode the tarball a second time just to make a diff and store it somewhere; those who plan ahead may use cp -al, taking care to break the hardlinks before editing. Then, every time there's a new release, you have to reapply the diff; if there are conflicts, you have to fix them and then replace the diff (possibly keeping the old one separately, in case you have to reinstall an older release). Then, if you're serious about getting your change into the upstream, you have to get the source once more, via a VCS, and apply it there as well.
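To make the tedium concrete, here is that tarball-patching dance sketched in shell; the package name and file paths are invented for illustration.

```
# Keep a pristine tree around for diffing; cp -al makes a cheap hard-linked "copy".
tar xzf foo-1.2.3.tar.gz
cp -al foo-1.2.3 foo-1.2.3.orig
cd foo-1.2.3
# Break the hard link before editing, or the "pristine" copy changes with you.
cp src/server.lisp src/server.lisp.new && mv src/server.lisp.new src/server.lisp
$EDITOR src/server.lisp
cd ..
diff -ru foo-1.2.3.orig foo-1.2.3 > my-fix.patch   # now stash this somewhere safe
# ...and repeat for every new release:
tar xzf foo-1.2.4.tar.gz
cd foo-1.2.4 && patch -p1 < ../my-fix.patch        # resolve conflicts, refresh the diff
```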
I have a local patch for Hunchentoot 0.15.7 (downloaded as a tarball) that lets you start a server in SBCL while inside a compilation unit (which otherwise deadlocks). After spending a while on this, I was asked to reapply it to the SVN version mysteriously hosted at BKNR. Of course, the BKNR version had already rewritten the function in question, so my patch no longer applied. The disconnect between developing for Hunchentoot (against an unadvertised SVN) and using it (see e.g. Weblocks, which will not build against SVN Hunchentoot because it targets 0.15.7, the latest advertised version of Hunchentoot) has become so large that you might not even call them the same software anymore. I can't drop in the SVN Hunchentoot because it would break Weblocks, and I can't fix Weblocks (well, I'll probably do a dev branch sometime) because it would break it for everyone who hasn't found the SVN at BKNR and therefore assumes 0.15.7 is the latest and greatest.
If I sound frustrated with this, imagine if it were the first time I had ever tried to contribute to a Lisp project.
DVCS solves all these problems
My answer to all of the above is one that Lisp users should be familiar with: “use the VCS”. Let's go over the problems again, and see how DVCS solves them:
Downloading the same thing over and over. With a DVCS, you get the entire history in compact format, so you can fast-forward and rewind to any release very quickly, and the tool will reconstruct history for you. Deltas are combined on-the-fly, so there is no quadratic explosion of deltas. To get new changes, you only download the parts of history you don't have.
When history gets too big, systems like Bazaar feature horizons and overlays, so you need only download history back to a certain point.
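As a concrete sketch of both points, in Bazaar terms (the URL is hypothetical; git, hg, and darcs have equivalents):

```
bzr branch http://example.org/foo/trunk foo   # first fetch: the whole history, compressed
cd foo
bzr pull                                      # later: download only the revisions you lack
bzr revert -r -10                             # rewind the tree ten revisions, no network needed
# for very large histories, skip most of the past entirely:
bzr checkout --lightweight http://example.org/foo/trunk foo-light
```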
Be sure you have the latest version, possibly upgrading to it. All DVCSes have convenient commands to upgrade to the latest version. Most also have graphical tools to browse and wind around in history, if you don't like the new version.
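For example, answering “am I current?” is a single command in Bazaar (other tools have equivalents, such as git fetch followed by git log HEAD..origin/master):

```
bzr missing          # revisions present upstream but not locally, and vice versa
bzr update           # bring a bound checkout up to the branch tip
bzr log --limit 10   # or browse recent history; graphical viewers exist as plugins
```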
Transitioning from user to developer. You may not agree that this is an important goal for software someone is merely using, but I will not delve into that here; I'll let the OLPC project speak for me:
Our commitment to software freedom gives children the opportunity to use their laptops on their own terms. While we do not expect every child to become a programmer, we do not want any ceiling imposed on those children who choose to modify their machines. We are using open-document formats for much the same reason: transparency is empowering. The children—and their teachers—will have the freedom to reshape, reinvent, and reapply their software, hardware, and content.
(On a side note, reliance on release .xo files containing activities makes figuring out which activities will work with your XO a nightmare. To reach the full potential of Develop and related activities, I think that OLPC will be forced to adopt a VCS-driven, per-XO branching distribution framework.)
With a local DVCS checkout, all you need to do is make your changes and commit them. For sending changes upstream, all the tools include commands, or have plugins, to send one or a series of changes to an upstream mailing list with a single command. Even private or rejected changes are safe: you can merge new upstream changes onto your branch (for which new conflict resolutions are automatically managed and fully rewindable), or use rebase to move your changes up to the tip of the upstream changes. If you make many changes and are given a branch on the upstream's server, it's yet another single command to push them to the new remote location.
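A sketch of that user-to-contributor path, again in Bazaar terms (file names, messages, and URLs are invented):

```
cd foo
$EDITOR src/server.lisp
bzr commit -m "Avoid deadlock when starting inside a compilation unit"
bzr send -o my-change.bundle    # bundle the change as a merge directive to mail upstream
# keep your branch current while upstream moves on:
bzr merge                       # merge from the parent branch; resolve conflicts, then commit
bzr commit -m "Merge upstream"
# and if you're handed a branch on the upstream server:
bzr push sftp://example.org/srv/bzr/foo-fixes
```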
But DVCS…
Let's start with the obvious objection: DVCS gives you an unstable developer's version rather than a stable version. This is a straw man, considering that modern DVCSes support powerful branching and merging. If your mainline frequently destabilizes, you can point everyone to a “stable” DVCS branch URL that receives regular merges from the unstable branch as it stabilizes. Pushing revisions across the network is so easy, compared to making a new release tarball, that this is likely to get far more frequent updates.
I can imagine a report for a bug needing only minor changes in this environment:
- User: $ bzr merge --pull
- User: test test test
- User: hey, there's a bug in the HEAD of the stable branch
- Maint: what with
- User: xxx yyy zzz
- Maint: okay, just a sec
- Maint: work work work on stable (or cherry fix from unstable)
- Maint: merge onto unstable
- Maint: okay, fixed in r4242
- User: $ bzr merge --pull
- User: thanks, you're the awesomest Maint
In a culture of small, incremental changes and widespread tracking of DVCS content, such as that in the Common Lisp community, the “stable” branch might even be the same as the “development” branch, where destabilizing changes are done in separate “topic” branches before being merged into the mainline. In addition, the effort to make sure that the heads of all the DVCS mainlines are compatible keeps these from driving users into version incompatibility hell.
Even if that's not enough, branching and merging allow us infinite granularity in the stability of the release process. If “stable” changes too often for some users, you can have a “reallystable” branch that merges revisions from “stable” once they have reached really-stability, and so on. This could be used to simulate any release cadence from the “stable” branch that maintainers might like to effect, in only a few shell commands.
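A sketch of such a cadence, assuming “stable” and “reallystable” are sibling Bazaar branches (the revision number is invented):

```
bzr branch stable reallystable            # one-time setup
cd reallystable
bzr merge -r 4242 ../stable               # take stable only up to a revision known to be good
bzr commit -m "Promote stable r4242 to reallystable"
```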
History is too big. Well, first of all, not really. We've already used the bandwidth argument to dismiss the symptom of redownloading for tarballs; it applies equally here, and you're also getting the benefit of having every version ever. Even so, for large histories, lightweight checkouts and history horizons let you keep only a subset of history locally. Bazaar is really a pioneer in this area, but I expect the other DVCSes to catch up in the usual manner of competition among the available tools.
Building and rebuilding all the time takes time. While I can't speak for other environments, the Common Lisp community has admirably solved this problem. For all of Kenny's reported faults with ASDF, it's still better than anything any other environment has. As it stands, I can run a single find command when downloading a new package by DVCS, thereby installing the package, and Lisp will rebuild changed files and packages on demand when loading them. Even without on-demand recompilation, this isn't much of an issue for Lisp: on a shared Lisp environment I manage, I use a command to wipe out compiled code and rebuild from scratch, and even given SBCL's relatively slow compiler and FASL loader, it only takes about 90 seconds to rebuild all 35 or so packages from scratch and write a new image for everyone's immediate use.
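The “single find command” and the wipe-and-rebuild step amount to something like this sketch (the registry path and the umbrella system name here are illustrative, not the exact setup):

```
# 1. "Install" a freshly fetched branch by registering its system definitions:
find ~/lisp/new-branch -name '*.asd' -exec ln -sf {} ~/.sbcl/systems/ \;
# 2. Periodically wipe compiled code and rebuild a shared image from scratch:
find ~/lisp -name '*.fasl' -delete
sbcl --non-interactive \
     --eval '(asdf:load-system :our-shared-systems)' \
     --eval '(sb-ext:save-lisp-and-die "lisp.core")'
```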
To be honest, this is a real blocker for systems with slow rebuilds and early binding semantics. It wouldn't work well for C or GHC Haskell, for example. However, I'm sure that Lisp, systems with on-the-fly interpretation and compilation like CPython and Ruby, and systems with simple, standardized and fast full-compilation processes like Mono would be served well by DVCS-based distribution.
Probably the most serious objection is really about dependencies: that you have to have all the DVCS tools used by the systems you use. First, I think existing practice shows that we already have no objection to this kind of requirement; we aren't bothered about requiring everyone to use APT rather than a website with downloads, because we know by comparison that the installation process with APT is orders of magnitude better for users and developers than the traditional Windows “download an installer and run it” method. (The fact that a Debian APT source does a Fedora yum user no good is all about the system-specific hacks typically bundled with these packages, and has no effect on pristine upstream distribution, which is after all our topic of discussion.)
Great, so who's going to implement all this?
The most comprehensive effort I've seen at trying to integrate VCS with automated installation is CL-Librarian, which knows how to use a few of the available VCS tools to download and update libraries. Librarian is mostly a personal effort, and isn't widely ported or adaptive yet, but it's a step in the right direction. While the above may sound like a very long advertisement for Librarian, I would surely embrace any Common Lisp library manager that takes the new DVCS order to heart and helps us to banish the release tarball.
I'm currently using a few scripts collectively called lispdir that drop into the myriad branches in my Lisp tree and update them using the appropriate VCSes. When I add a new branch, I simply run an asdfify shell function to add the .asd files therein to my central registry. It also serves as a list of canonical VCS locations for many projects. You can get that as a Bazaar branch; acquire.sh downloads all systems, and update.sh updates them.
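The update half of lispdir just needs to walk the tree and ask each branch's own tool for new revisions; the loop below is a guess at that general shape, not the published script:

```
# Update every branch under ~/lisp with whichever VCS it came from.
for branch in ~/lisp/*/; do
    ( cd "$branch"
      if   [ -d .bzr ];   then bzr merge --pull
      elif [ -d .git ];   then git pull
      elif [ -d .svn ];   then svn update
      elif [ -d _darcs ]; then darcs pull -a
      fi )
done
```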
The need for O(n^2) delta files can be avoided by storing O(n) delta files and merging them dynamically. I (the author of Xdelta) have code for this and am slowly integrating it into xdelta-3.x.
First, that is pretty neat.
However, for users to take advantage of that, they will need to figure out which deltas to download to get from their current tarball to the latest release. This and more can be automated by a tool, which is the core of my proposal/rant.
Maybe I didn't read your post carefully enough, but how about clbuild?
No, I didn't mention that, and it would be listed along with cl-librarian. Thanks for the link; I've been reading about it.