Tuesday, September 16, 2008

What is cascading rebase?

The problem of cascading rebase is a serious one, and should not be taken lightly. Though the idea of cleaning history may be appealing, it is almost always inappropriate.

This problem is what makes all documenters of rebase functionality implore you to avoid rebasing public repositories. However, the nature of the problem is difficult to explain to those without a good understanding of the DAG-based branching and merging employed by most modern distributed VCSes, including Bazaar, Mercurial, and Git.

What starts simple…



Imagine a larger version of this very abbreviated history. Bob has started a project; he commits new revisions to the mainline and collects contributions by merging from others' branches.

Here we see Bob's mainline in red.



Topic branching



Alice has branched mainline-3 to create a feature, and made a few commits. The idea is that Bob will merge the feature branch, in green, back into mainline. Work on mainline can continue, as seen by additional mainline revisions.



Uh oh…



Bob realizes that mainline-2 was done wrong. He decides the problem is serious enough that it has to be redone.

This is not the only kind of rebasing; rebasing is a subsequence replacement operation, so it can be used to insert revisions into history, delete revisions from history, or replace any number of revisions at any point. In this case, we're replacing mainline-2 with mainline-2'.
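
For instance, translated into git terms, the replacement Bob performs might look roughly like this interactive rebase; the ref name mainline-1 is hypothetical, standing for the revision just before the bad one:
  $ git rebase -i mainline-1      # open the list of revisions that follow mainline-1
  # in the editor, mark the bad revision "edit" instead of "pick", then:
  # (fix the files)
  $ git commit --amend            # this amended commit becomes mainline-2'
  $ git rebase --continue         # git rebuilds mainline-3, 4, ... on top of it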

Here is the completed rebase operation. The dead revisions are shown in light red.



Rebasing is really branching



Every revision in a proper DAG history contains an immutable reference to its parents. So mainline-3 has a reference to mainline-2. But we rewrote mainline-2, so we must rebuild mainline-3 so that it refers to the new mainline-2' instead, then rebuild mainline-4 to refer to the new mainline-3', mainline-5 to refer to mainline-4', and so on.
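
You can see these parent references directly in git, for example: the commit object literally records its parents' IDs, and since a revision's ID is a hash over that content, changing a parent forces every descendant to get a new ID as well:
  $ git cat-file -p HEAD                    # prints "tree ...", "parent <id>", author, committer, message
  $ git log --pretty=format:'%h %p %s'      # each revision alongside its (abbreviated) parent IDs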

As such, rebasing doesn't really "rewrite history"; it finds the latest revision that doesn't have to be rewritten, branches off it, builds the new history onto that branch, and finally replaces the current branch with the new branch. You can see this clearly in an alternate, but equivalent visualization of the previous graph.



Why is the feature branch still connected to the dead revisions?



The rules don't just apply to mainline; feature-4 also has a reference to mainline-3. This is an important integrity feature—as a brancher working on a private feature, Alice wouldn't want Bob to be able to spontaneously rewrite her branch's source by altering revisions made before she branched. Therefore, those "dead" revisions still live as part of the feature branch's history.

I draw them above the feature branch here to show how they have become part of Alice's history. They will also be shared with any other branches that were made off mainline after mainline-2 and before the rebase. Please note that the history of Alice's branch remains exactly the same as it was before the rebase.



What happens with a merge?



Feature branches live to die; they ought to be merged back into the mainline eventually, when the feature is ready to be part of it. So Alice would like Bob to merge her feature branch into mainline.

Except the idea of mainline has changed due to the rebase. Here is a symmetric merge diagram to illustrate.



One step in the cascade occurs



Merging Alice's branch by necessity includes the broken changes from the old mainline-2 and the mostly duplicate mainline-3. The DAG sees these as separate from mainline-2' and mainline-3', as indeed they are. So the merge is wrong.

To fix this, Alice must produce a new branch and rewrite her changes onto it. We can do this with rebase, but it requires Alice to know which revisions are duplicates. Here, Alice must know that mainline-3' is the new basis. This seems simple, but imagine if mainline-2 had been simply deleted, or some new revisions had been inserted. Then the revisions would be numbered differently; in that case she would have to rebase from mainline-3 to mainline-2'.
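
In git terms, Alice's step might look roughly like this, with hypothetical ref names matching the diagrams (a real ref can't contain an apostrophe, so read mainline-3-prime as mainline-3'):
  $ git rebase --onto mainline-3-prime mainline-3 feature
  # "take the commits on feature that come after mainline-3 and replay them onto mainline-3'"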

Here is the result of her rebase, and the correct merge.



Now the cascade happens: if any topic branches were made from the "dead" feature-4, feature-5, and feature-6, they must also be rebased onto the new feature-4', 5', or 6', as appropriate. And so on. And so on. Hence my name for this issue, one I don't believe anyone else has adopted but which appropriately conveys its seriousness: cascading rebase.

Cascading rebase cannot be solved automatically; all rebase tools recognize this and have some interactive conflict resolution features. Furthermore, there is no guarantee that when a rebase goes smoothly on the original branch, it will also go so smoothly on the cascade. It also gets wildly complicated to calculate the cascade when you include behavior like synchronization by merging and especially cross-merging.

Before you consider rebasing, consider other tools for fixing problems. If you made a mistake in an old revision, and it is serious enough that all branches since should receive the fix as well, a good alternative is to branch off the broken revision, write the fix, commit to the new branch, merge it into mainline, publish the revision, and ask others to merge it in. Even if they don't, the common history will mean that merging won't see a parallel "fixed" revision, making the merge cleaner and less likely to conflict.

You don't even have to create a new URL in many cases. In both Bazaar and Mercurial, the commit to the side branch will receive a revision number in your mainline once it is merged. By merging the mainline at that revision number, other branches receive the fix without being forced to merge the head of mainline.
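
Here is a sketch of that workflow in Bazaar, with hypothetical paths and revision numbers; the same shape works in Mercurial or git:
  $ bzr branch -r 2 mainline fix-rev-2        # branch off the broken revision
  $ cd fix-rev-2
  # (write the fix)
  $ bzr commit -m "Fix the mistake made in revision 2"
  $ cd ../mainline
  $ bzr merge ../fix-rev-2
  $ bzr commit -m "Merge fix for revision 2"  # suppose this lands as mainline revision 10
  # other branches can then take the fix by merging mainline at that revision:
  $ bzr merge -r 10 ../mainline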

Monday, September 8, 2008

Blame release tarballs for the installation problem

…plus love for distributed version control.

Many have noticed the failure of ASDF-Install to make installing Lisp packages universally easy. The situation is such that most serious Lispers don't bother with it, and many casual Lispers encounter a blocking problem such as the uninstallability of some dependency. The latter generally do one of these:
  1. Ask the mailing list for the failing package for help. This generally elicits either a new package post or an exhortation to use the VCS, as releases are worthless for this particular package.
  2. Ask on IRC. These also generally lead to a response of “use the VCS”.
  3. Just use the VCS themselves, or explode the tarball and configure it themselves.
  4. Give up. Well, that's that.
I'll spoil the ending and say that I think reliance on release tarballs is the main failing of ASDF-Install. Furthermore, I think it's a major mistake to assume that the tarball even qualifies as an appropriate modern distribution medium for software as easily rebuildable as most Lisp packages.

First, there is the symptomatic drawback. Everyone familiar with source releases has noticed that you have to download the entire package again to get the changes, which could otherwise be represented in a much smaller compressed diff. Some GNU packages even distribute an xdelta, a kind of binary diff, along with release tarballs. The problem with this is that the number of diffs or xdeltas needed for maximum download efficiency grows quadratically: one per pair of releases, or n(n−1)/2 where n is the number of releases over the course of the project. Setting that aside, now that broadband is broadly available, many believe the tarball-upgrade problem has been solved.

Some tarballs have real problems



However, I believe we've only treated the symptom, not the actual problem. We should have taken the inefficiency of making users download whole source trees over and over just to get little changes as a sign that there was a more serious problem with tarballs, demonstrated by further symptoms.

With tarballs, there's no automatic way to be sure you have the latest version. So you report a bug, get a reply “it's fixed in the new release; upgrade.”

Then, there's no automatic way to upgrade to said version. Even when managed by a binary distribution system like APT, you'll still encounter cases where some feature you want or need has been implemented upstream, possibly even released, but you just have to sit on your hands until it trickles down.

I've encountered this over and over with binary distributions: had to install my own git to get the pull --rebase option; cdrkit to get a bugfix for my DVD burner; hg to get the new rebase plugin; Darcs to see whether Darcs2 is better at managing trees; Mono to get .NET 2.0 features; etc. Now I'm trying to build my own Evolution to be able to connect to MS Exchange 2007, without much luck so far.

As it is, there's no sense in binary-installing software I'm actually serious about using, such as Bazaar-NG or Lisp implementations; it's easier to just track and build new releases myself.

Most importantly, it places a serious barrier between using code and modifying code. It's one thing to make a change, build, and install. But this isn't automatically persistent. If all you have is the binary, then you have to download the source again. If you didn't plan to make the change when you exploded the tarball, you have to rename the tree and explode the tarball again just to make a diff and store it somewhere; those who plan ahead may use cp -al, taking care to break the hard links as they work. Then, every time there's a new release, you have to reapply the diff; if there are conflicts, you have to fix them and regenerate the diff (possibly keeping the old one around, in case you have to reinstall an older release). Then, if you're serious about getting your change into the upstream, you have to get the source once more, via a VCS, and apply it there as well.
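
To make that concrete, the juggling described above looks roughly like this (package names and versions are illustrative):
  $ tar xzf foo-1.2.tar.gz
  $ cp -al foo-1.2 foo-1.2.orig                 # hard-linked pristine copy for diffing
  # edit foo-1.2, breaking the hard link on each file you touch
  $ diff -urN foo-1.2.orig foo-1.2 > my-fix.patch
  # ...and again when foo-1.3 comes out:
  $ tar xzf foo-1.3.tar.gz
  $ cd foo-1.3 && patch -p1 < ../my-fix.patch   # and hope it still applies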

I have a patch for Hunchentoot 0.15.7 (downloaded as a tarball) locally that lets you start a server in SBCL from within a compilation unit (which otherwise deadlocks). After spending a while on this, I was asked to reapply it to the SVN version mysteriously hosted at BKNR. Of course, the version at BKNR had already rewritten the function in question, so my patch was inapplicable. The disconnect between the idea of developing for Hunchentoot (on an unadvertised SVN) and using it (see e.g. Weblocks, which will not build against SVN Hunchentoot, because it targets 0.15.7, the latest advertised version) has become so large that you might not even call them the same software anymore. I can't drop in the SVN Hunchentoot because it would break Weblocks, and I can't fix Weblocks (well, dev; I'll probably do a branch sometime) because it would break it for everyone who hasn't found the SVN at BKNR and therefore assumes 0.15.7 is the latest and greatest.

If I sound frustrated with this, imagine if it was the first time I had ever tried to contribute to a Lisp project.

DVCS solves all these problems



My answer to all of the above is one that Lisp users should be familiar with: “use the VCS”. Let's go over the problems again, and see how DVCS solves them:

Downloading the same thing over and over. With a DVCS, you get the entire history in compact format, so you can fast-forward and rewind to any release very quickly, and the tool will reconstruct history for you. Deltas are combined on-the-fly, so there is no quadratic explosion of deltas. To get new changes, you only download the parts of history you don't have.
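
For example, with git (URLs and tag names here are illustrative):
  $ git clone git://example.org/project.git   # the whole history, once, in compact form
  $ cd project
  $ git checkout v1.2                         # rewind to any release instantly
  $ git checkout master
  $ git pull                                  # later: download only the history you don't have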

When history gets too big, systems like Bazaar feature horizons and overlays, so you need only download history back to a certain point.

Be sure you have the latest version, possibly upgrading to it. All DVCSes have convenient commands to upgrade to the latest version. Most also have graphical tools to browse and wind around in history, if you don't like the new version.
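
Bazaar, for instance, can tell you exactly which revisions you are missing relative to an upstream branch (the URL is illustrative):
  $ bzr missing http://example.org/project/trunk   # shows revisions upstream has that you don't, and vice versa
  $ bzr pull                                        # bring yourself up to date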

Transitioning from user to developer. You may not agree that this is an important goal for software someone is merely using, but rather than delve into it I will let the OLPC project speak for me:

Our commitment to software freedom gives children the opportunity to use their laptops on their own terms. While we do not expect every child to become a programmer, we do not want any ceiling imposed on those children who choose to modify their machines. We are using open-document formats for much the same reason: transparency is empowering. The children—and their teachers—will have the freedom to reshape, reinvent, and reapply their software, hardware, and content.


(On a side note, reliance on release .xo files containing activities makes figuring out which activities will work with your XO a nightmare. To reach the full potential of Develop and related activities, I think that OLPC will be forced to adopt a VCS-driven, per-XO branching distribution framework.)

With a local DVCS checkout, all you need to do is make your changes and commit them. For sending upstream, all the tools include commands, or have plugins, that send one change or a whole series of changes to an upstream mailing list in a single step. Even private or rejected changes are safe: you can merge new upstream changes onto your branch (for which new conflict resolutions are automatically managed and fully rewindable), or use rebase to move your changes up to the tip of the upstream changes. If you make many changes and are given a branch on the upstream's server, it's yet another single command to push them to the new remote location.
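
For example, assuming a hypothetical project mailing list address, the Bazaar and git versions of "send my changes upstream" are each a command or two:
  $ bzr send -o my-feature.bundle             # package your new revisions as a merge directive
  # or, with git:
  $ git format-patch origin/master            # one patch file per commit not yet upstream
  $ git send-email --to=project-devel@example.org *.patch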

But DVCS…



Let's start with the obvious objection: DVCS gives you an unstable developer's version rather than a stable one. This is a straw man, considering that modern DVCSes support powerful branching and merging. If your mainline frequently destabilizes, you can point everyone to a “stable” DVCS branch URL that receives regular merges from the unstable branch whenever it stabilizes. Pushing revisions across the network is so much easier than rolling a new release tarball that such a branch is likely to get far more frequent updates.
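
Maintaining such a “stable” branch is itself only a few commands per update; here is a sketch with illustrative branch names and URL:
  $ cd stable
  $ bzr merge ../unstable                     # pull in the work that has settled down
  $ bzr commit -m "Merge stabilized changes from the unstable branch"
  $ bzr push bzr+ssh://example.org/project/stable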

I can imagine a report for a bug needing only minor changes in this environment:
  1. User: $ bzr merge --pull
  2. User: test test test
  3. User: hey, there's a bug in the HEAD of the stable branch
  4. Maint: what with
  5. User: xxx yyy zzz
  6. Maint: okay, just a sec
  7. Maint: work work work on stable (or cherry fix from unstable)
  8. Maint: merge onto unstable
  9. Maint: okay, fixed in r4242
  10. User: $ bzr merge --pull
  11. User: thanks, you're the awesomest Maint
Super-cheap topic branching means that this process can expand as needed, depending on the size of the changes required. Furthermore, this easy incremental release process means that the maintainers need no longer weigh the cost of rolling a release against the cost of duplicate bug reports for unreleased fixes; the release is “prerolled”, as it were.

In a culture of small, incremental changes and widespread tracking of DVCS content, such as that in the Common Lisp community, the “stable” branch might even be the same as the “development” branch, where destabilizing changes are done in separate “topic” branches before being merged into the mainline. In addition, the effort to make sure that the heads of all the DVCS mainlines are compatible keeps these from driving users into version incompatibility hell.

Even if that's not enough, branching and merging allows us infinite granularity in the stability of the release process. If “stable” changes too often for some users, you can have a “reallystable” branch that merges revisions from “stable” that have reached really-stability, and so on. This could be used to simulate any kind of release cadence from the “stable” branch that maintainers might like to effect, in only a few shell commands.

History is too big. Well, first of all, not really. We've already used the bandwidth argument to dismiss the symptom of redownloading for tarballs; it applies equally here, and you're also getting the benefit of having every version ever. Even so, for large histories, lightweight checkouts and history horizons let you keep only a subset of history locally. Bazaar is really a pioneer in this area, but I expect the other DVCSes to catch up in the usual manner of competition among the available tools.
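
If even that is too heavy, Bazaar and git, for example, already offer ways to take less than the full history (URLs illustrative):
  $ bzr checkout --lightweight http://example.org/project/trunk   # working tree only; history stays on the server
  $ git clone --depth 1 git://example.org/project.git             # a "shallow" clone with truncated history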

Building and rebuilding all the time takes time. While I can't speak for other environments, the Common Lisp community has admirably solved this problem. For all of Kenny's reported faults with ASDF, it's still better than anything any other environment has. As it stands, when I download a new package by DVCS, a single find command is enough to install it, and Lisp will rebuild changed files and systems on demand when loading them. Even without on-demand recompilation, this isn't much of an issue for Lisp: I use a command to wipe out compiled code and rebuild from scratch on a shared Lisp environment I manage, where, even given SBCL's relatively slow compiler and FASL loader, it only takes about 90 seconds to rebuild all 35 or so packages from scratch and write a new image for everyone's immediate use.
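
That find command looks roughly like this, assuming ~/.sbcl/systems/ is on your ASDF central registry (adjust the paths for your own setup):
  $ find ~/lisp/new-package -name '*.asd' -exec ln -sf {} ~/.sbcl/systems/ \;
  # ASDF then recompiles whatever changed the next time the system is loaded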

To be honest, this is a real blocker for systems with slow rebuilds and early binding semantics. It wouldn't work well for C or GHC Haskell, for example. However, I'm sure that Lisp, systems with on-the-fly interpretation and compilation like CPython and Ruby, and systems with simple, standardized and fast full-compilation processes like Mono would be served well by DVCS-based distribution.

Probably the most serious objection is really about dependencies: you have to have all the DVCS tools used by the systems you use. First, I think existing practice shows that we already have no objection to this; we aren't bothered about requiring everyone to use APT rather than a website with downloads, because we know by comparison that the installation process with APT is orders of magnitude better for users and developers than the traditional Windows “download an installer and run it” method. (The fact that a Debian APT source does a Fedora yum user no good is all about the system-specific hacks typically bundled into those packages, and has no effect on pristine upstream distribution, which is after all our topic of discussion.)

Great, so who's going to implement all this?



The most comprehensive effort I've seen at trying to integrate VCS with automated installation is CL-Librarian, which knows how to use a few of the available VCS tools to download and update libraries. Librarian is mostly a personal effort, and isn't widely ported or adaptive yet, but it's a step in the right direction. While the above may sound like a very long advertisement for Librarian, I would surely embrace any Common Lisp library manager that takes the new DVCS order to heart and helps us to banish the release tarball.

I'm currently using a few scripts collectively called lispdir that drop into the myriad branches in my Lisp tree and update them using the appropriate VCSes. When I add a new branch, I simply run an asdfify shell function to add the .asd files therein to my central registry. It also serves as a list of canonical VCS locations for many projects. You can get that as a Bazaar branch; acquire.sh downloads all systems, and update.sh updates them.

Tuesday, September 2, 2008

Fixing weblocks test failures, try 1

Here we commemorate the demise of the first test-fixing branch of Weblocks, c6dc18-test-fixes. At its peak, it got the number of failures in the ~750-test suite down to a whopping 9. Shortly after its merge back into dev, the count was back up to over 50.

As I write this, the count on my new fixes branch is 84. With dev now four patches ahead, a trial merge of it into the fixes branch raises that to 89. (To be entirely fair, I was the committer of the revisions that caused the jump.)

There is a lesson here about test discipline. Once you get out of the mindset of making every mainline rev pass all tests, it's hard to get back in. For quite a while my fix strategy has been a little lax:
  1. If it's small, and I don't think it does much, don't bother testing.
  2. For large changes, make sure the failure count doesn't jump above 100.
  3. Whatever, just make sure my development site (which doesn't even use continuations yet) works.
Symmetric merging is little help here, as all the revs in a topic branch end up as first-class revs in the mainline. Not that I blame Mercurial at all; still, I hope to build future features in topic branches posted on Bitbucket, preferably with optimized archival storage (backporting hardlinks and such) so dormant branches, such as c6dc18-test-fixes, can stay around forever cheaply.

I think the failure with c6dc18-test-fixes was in not providing public feedback about how the failure counts were diverging between branches. Now I have scripts that merge, test, and compare test results on two branches, so I can generate all the useless statistics I like.
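
A minimal sketch of that kind of comparison, with the actual test invocation left as a placeholder (run-tests.sh stands in for loading Weblocks and printing the failure count):
  $ hg clone fixes trial && cd trial
  $ hg pull ../dev && hg merge && hg commit -m "Trial merge of dev into fixes"
  $ cd ..
  $ ./run-tests.sh fixes > fixes.failures
  $ ./run-tests.sh trial > trial.failures
  $ diff fixes.failures trial.failures        # see how the counts diverge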