Failed Experiments.

a Mr. Fleming wishes to study bugs in smelly cheese; a Polish woman wishes to sift through tons of Central African ore to find minute quantities of a substance she says will glow in the dark; a Mr. Kepler wants to hear the songs the planets sing.

Saturday, March 1, 2014

Encountering the people of Free Software

Over my time as a programmer, I have grown in the practice mostly by way of contact with the free software community. However, for the first 8 years of this time, that contact was entirely mediated by Internet communication, for my hometown did not feature a free software community to speak of.

So, instead, I learned what these people, mostly distributed among the other continents, were like by way of their mailing list messages, IRC chats, wiki edits, and committed patches. This is a fine way to become acquainted with the hats people wear for the benefit of the projects they're involved with, but it isn't much of a way to observe what they are actually like.

About face

Then, a few years ago, I moved to Boston. Well-known for being steeped in history, Boston is also the geographic heart of free software, as the home of the Free Software Foundation. Here, too, is the FSF's annual LibrePlanet conference, a policy conference accompanied by a strong lineup of technical content.

I first attended LibrePlanet in 2012. There, after a decade of forming an idea in my head of what these people were like, I could finally test that idea against real-life examples.


Richard Stallman (rms), the founder and leader, both in spirit and in practice, of free software, has long since excised non-free software from his life. If he cannot use a website without using non-free software, he will not use that website. If he cannot open a document you send him without using a non-free program, he will ask you to send it in a different format, or else simply not read it. If he cannot use a newer computer with only free software on it, he will use an older computer instead. Because people keep asking him to do these things, this is an ongoing effort. This is well known about him.

So here was the surprise: my fellow attendees had all followed rms's example, with varying success. They traded tips on the freedom-respecting aspects of this or that hardware, yet admitted those areas where they hadn't yet been able to cut out non-free software.

Little things

There was no grand philosophical reason for this, no essential disagreement with rms's philosophy in play. It was just life. Perhaps they had a spouse who simply would not do without this non-free video streaming service. Perhaps they had friends for whom contact over that non-free messaging service was the foundation of the community. Perhaps they would like to be able to work from home in case they get snowed in, albeit over some non-free corporate networking tool.

Or maybe they simply haven't found the time. Maybe they tried once, failed, and haven't had the opportunity to try again. There are many demands on people; they deal with them as best as they can.

I should have realized this, and I should have known it from what rms himself had said.
"I hesitate to exaggerate the importance of this little puddle of freedom," he says. "Because the more well-known and conventional areas of working for freedom and a better society are tremendously important. I wouldn't say that free software is as important as they are. It's the responsibility I undertook, because it dropped in my lap and I saw a way I could do something about it…"


For all of these compromises, though, there was still the sense that they are not the end of the story. Maybe free software isn't (practically) better sometimes. Maybe there are compromises that could ease up as the situation changes. Or maybe some inconvenience will be worth the trouble in the long run; after all, that practically inferior software probably won't get better without users.

People are perfectly capable, on their own, of following Milton Friedman's method of entering gain or loss of freedom on the appropriate side of the pros-and-cons list when making such choices: when you have little, the loss of a little means a lot. Why, then, look to the example of rms, or to the small ranks of others who have also cut all non-free software out of their lives?

The people of free software don't necessarily believe that rms's goal is reachable within our lifetimes. I think that what people respond to is his clear, clearly stated, and continually adapting ideas of how the world could be better, never mind the occasional bout of eerie prescience. Maybe we will never get there. Does that mean people shouldn't set a lofty goal for making a better world, and spend a bit of time pushing the real one towards it?

Thursday, August 1, 2013

Rebasing makes collaboration harder

Thanks to certain version control systems making these operations all too attractive, rewriting the history of published revisions (e.g. by rebase or squashed merge) is currently quite popular in free software projects.  What does the git-rebase manpage, otherwise an advocate of the practice, have to say about that?

Rebasing (or any other form of rewriting) a branch that others have based work on is a bad idea: anyone downstream of it is forced to manually fix their history.

The manpage goes on to describe, essentially, cascading rebase.  I will not discuss further here why it is a bad idea.

So, let us suppose you wish to follow git-rebase's advice, and you wish to alter history you have made available to others, perhaps in a branch in a public repository.  The qualifying question becomes: "has anyone based work on this history I am rewriting?"

There are four ways in which you might answer this question.

  1. Someone has based work on your commits; rewriting history is a bad idea.
  2. Someone may have or might yet base work on your commits; rewriting history is a bad idea.
  3. It's unlikely that someone has based work on your commits so you can dismiss the possibility; the manpage's advice does not apply.
  4. It is not possible that someone has based or will yet base work on your commits; the manpage's advice does not apply.

If you have truly met the requirement above and made the revisions available to others, you can only choose #4 if you have some kind of logging of revision fetches, and check this logging beforehand; this almost never applies, so it is not interesting here.  Note: it is not enough to check other public repositories; someone might be writing commits locally to be pushed later as you consider this question.  Perhaps someone is shy about sharing experiments until they're a little further along.

Now that we must accept it is possible that someone has based changes on yours, even if you have dismissed it as unlikely, let's look at this from the perspective of another developer who wishes to build further revisions on yours.  The relevant question here is "should I base changes on my fellow developer's work?", for which these are reasonable answers:

  1. You know someone has built changes on your history and will therefore not rewrite history, wanting to follow the manpage's advice.  It is safe for me to build on it.
  2. You assume someone might build changes on your history, and will not rewrite it for the same reason as with #1.  It is safe for me to build on it.
  3. You've dismissed the possibility of someone like me building on your history, and might rebase or squash, so it is not safe for me to build on it.

I have defined these answers to align with the earlier set, and wish to specifically address #3.  By answering #3 to the prior question, you have reinforced the very circumstances you might think you are only predicting.  In other words, by assuming no one will wish to collaborate on your change, you have created the circumstances by which no one can safely collaborate on your change.  It is a self-fulfilling prophecy that reinforces the tendency to keep collaboration unsafe on your next feature branch.

It becomes very hard to break this cycle where each feature branch is "owned" by one person.  I believe this is strongly contrary to the spirit of distributed version control, of free software, and of public development methodology.

In circumstances with no history rewriting, the very interesting possibility of ad hoc cross-synchronizing via merges between two or more developers on a single feature branch arises.  You work on your parts, others work on other parts, you merge from each other when ready.  Given the above, it is not surprising to me that so many developers have not experienced this very satisfying way of working together, even as our modern tools with sophisticated merge systems enable it.

Sunday, June 23, 2013

Fake Theorems for Free

This article documents an element of Scalaz design that I practice, because I believe it to be an element of Scalaz design principles, and quite a good one at that. It explains why Functor[Set] was removed yet Foldable[Set] remains. More broadly, it explains why a functor may be considered invalid even though “it doesn't break any laws”. It is a useful discipline to apply to your own Scala code.
  1. Do not use runtime type information in an unconstrained way.
  2. Corollary: do not use Object#equals or Object#hashCode in unconstrained contexts, because that would count as #1.
The simplest way to state and remember it is “for all means for all”. Another, if you prefer, might be “if I don't know anything about it, I can't look at it”.

We accept this constraint for the same reason that we accept the constraint of referential transparency: it gives us powerful reasoning tools about our code. Specifically, it gives us our free theorems back.


Let's consider a basic signature.

     def const[A](a: A, a2: A): A

With the principle intact, there are only two total, referentially transparent functions that we can write with this signature.

     def const[A](a: A, a2: A): A = a
     def const2[A](a: A, a2: A): A = a2

That is, we can return one or the other argument. We can't “look at” either A, so we can't do tests on them or combine them in some way.

Much of Scalaz is minimally documented because it is easy enough to apply this approach to more complex functions once you have a bit of practice. Many Scalaz functions are the only function you could write with such a signature.
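To make that counting argument concrete with another signature, here is a sketch in Haskell notation (which the paper this discipline descends from also uses); flip' is my own example name, and under the principle it has exactly one total, referentially transparent implementation:

```haskell
-- flip' swaps the argument order of a binary function.  Parametricity
-- leaves no other choice: we cannot inspect a, b, or c, so all we can
-- do is route the two arguments into f in the one well-typed order.
flip' :: (a -> b -> c) -> b -> a -> c
flip' f b a = f a b
```

Any other definition would have to conjure a c from nowhere, or inspect the type parameters, which is exactly what the principle forbids.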

Now, let us imagine that we permit the unconstrained use of runtime type information. Here are functions that are referentially transparent, which you will find insane anyway.

     def const3[A](a: A, a2: A): A = (a, a2) match {
       case (s: Int, s2: Int) => if (s < s2) a else a2
       case _ => a
     }

     def const4[A](a: A, a2: A): A =
       if (a.## < a2.##) a else a2

Now, look at what we have lost! If the lowly const can be driven mad this way, imagine what could happen with fmap. One of our most powerful tools for reasoning about generic code has been lost. No, this kind of thing is not meant for the realm of Scalaz.
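For contrast, the closest rendering of const3 in Haskell cannot even be written without announcing the escape hatch in the signature; a sketch using Data.Typeable's cast:

```haskell
import Data.Typeable (Typeable, cast)

-- The Typeable constraint gives the game away: this function is no
-- longer parametric in a, and its type says so.  cast succeeds only
-- when the runtime type matches the requested one (Int here).
const3 :: Typeable a => a -> a -> a
const3 a a2 = case (cast a, cast a2) of
  (Just s, Just s2) -> if (s :: Int) < s2 then a else a2
  _                 -> a
```

A caller can see from the constraint that const3 demands more than "for all"; the quote from the paper further below calls this taming.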

Missing theorems

For completeness's sake, let us see the list of theorems from Theorems for free!, figure 1 on page 3, for which I can think of a counterexample, still meeting the stated function's signature, if we violate the principle explained above.

In each case, I have assumed all other functions but the one in question have their standard definitions as explained on the previous page of the paper. I recommend having the paper open to page 3 to follow along. They are restated in fake-Scala because you might like that. Let lift(f) be (_ map f), or f* as written in the paper.

head[X]: List[X] => X
a compose head = head compose lift(a)

tail[X]: List[X] => List[X]
lift(a) compose tail = tail compose lift(a)

++[X]: (List[X], List[X]) => List[X]
lift(a)(xs ++ ys) = lift(a)(xs) ++ lift(a)(ys)

zip[X, Y]: ((List[X], List[Y])) => List[(X, Y)]
lift(a product b) compose zip = zip compose (lift(a) product lift(b))

filter[X]: (X => Boolean) => List[X] => List[X]
lift(a) compose filter(p compose a) = filter(p) compose lift(a)

sort[X]: ((X, X) => Boolean) => List[X] => List[X]
wherever for all x, y in A, (x < y) = (a(x) <' a(y)), also lift(a) compose sort(<) = sort(<') compose lift(a)

fold[X, Y]: ((X, Y) => Y, Y) => List[X] => Y
wherever for all x in A, y in B, b(x + y) = a(x) * b(y) and b(u) = u', also b compose fold(+, u) = fold(*, u') compose lift(a)

Object#equals and Object#hashCode are sufficient to break all these free theorems, though many creative obliterations via type tests of the const3 kind also exist.

By contrast, here are the ones which I think are preserved. I hesitate to positively state that they are, just because there are so many possibilities opened up by runtime type information.

fst[X, Y]: ((X, Y)) => X
a compose fst = fst compose (a product b)

snd[X, Y]: ((X, Y)) => Y
b compose snd = snd compose (a product b)

I[X]: X => X
a compose I = I compose a

K[X, Y]: (X, Y) => X
a(K(x, y)) = K(a(x), b(y))

Here is a useful excerpt from the paper itself, section 3.4 “Polymorphic equality”, of which you may consider this entire article a mere expansion.

... polymorphic equality cannot be defined in the pure polymorphic lambda calculus. Polymorphic equality can be added as a constant, but then parametricity will not hold (for terms containing the constant).

This suggests that we need some way to tame the power of the polymorphic equality operator. Exactly such taming is provided by the eqtype variables of Standard ML [Mil87], or more generally by the type classes of Haskell [HW88, WB89].


Scalaz has some tools to help deal with things here. The Equal typeclass contains the equalIsNatural method as runtime evidence that Object#equals is expected to work; this evidence is used by other parts of Scalaz, and available to you.

Scalaz also provides

     implicit def setMonoid[A]: Monoid[Set[A]]

Relative to Functor, this is more or less harmless, because Monoid isn't so powerful; once you have Monoid evidence in hand, it doesn't “carry” any parametric polymorphism the way most Scalaz typeclasses do. It provides no means to actually fill sets, and the semigroup operation is commutative, so it seems unlikely that there is a way to write Monoid-generic code that can use this definition to break things.

More typical are definitions like

     implicit def setOrder[A: Order]: Order[Set[A]]

which may use Object#equals, but are constrained in a way that ensures it is safe to do so, just as implied in the quote above.
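This is the same taming the quoted passage attributes to Haskell's type classes; for comparison, the containers library makes even the Set monoid carry Ord evidence up front (unionAll is my own example name):

```haskell
import qualified Data.Set as S

-- In containers, the Monoid instance for Set a requires Ord a, so the
-- comparison used inside union is always one the constraint proves safe.
unionAll :: Ord a => [S.Set a] -> S.Set a
unionAll = foldr S.union S.empty
```

For example, unionAll [S.fromList [1, 2], S.fromList [2, 3]] is fromList [1,2,3].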

Insofar as “compromise” characterizes the above choices, I think Scalaz's position in the space of possibilities is quite good. However, I would be loath to see any further relaxing of the principles I have described here, and I hope you would be too.

Mistakes are part of history

And sometimes, later, they turn out not to be mistakes at all.

Has this never happened to you?  For my part, sometimes I am mistaken, and sometimes I am even mistaken about what I am mistaken about.  So it is worthwhile to keep records of failed experiments.

You can always delete information later, as a log-viewing tool might, but you can never get it back if you just deleted it in the first place.

Please consider this, git lovers, before performing your next rebase or squashed merge.

(My favorite VC quote courtesy ddaa of GNU Arch land, of all places)

Tuesday, April 3, 2012

Some type systems are much better than others.

As a shortcut, I previously complained about C and Java's type systems leaving bad impressions. Let's broaden that to the realm of type system research, avoiding useless discussion of the relative merits of this or that programming language as a whole. Some type systems let you express more of the set of valid programs, or different subsets. Further from the theoretical realm, and because we are human, different inference engines are better at figuring out types in different situations.

There is a nice graphic in Gödel, Escher, Bach that illustrates this issue: it displays the universe split into the true and the false, and elaborate structures covered with appendages illustrating “proof”. Of all valid programs¹, many are “typeable”, meaning that we can prove that they are sound, type-wise, with a particular type system; conversely, most invalid programs are not typeable.

However, you might imagine that the fingers stretch into the invalid space on occasion, and they don't cover the valid space entirely either; for any given type system, there are valid programs that are rejected, and there are invalid programs that are accepted.

Not so simple math

The goal of type system research is to both accept more valid programs and reject more invalid programs. Let's consider these three programs in three different languages, which implement one step of a divide-and-conquer list reduction strategy.
     def pm(x, y, z):                # Python
         return x + y - z

     pm x y z = x + y - z            -- Haskell
     -- inferred as Num a ⇒ a → a → a → a

     int pm(int x, int y, int z) {   /* C */
       return x + y - z;
     }
First, the expressive problem in C is easy to spot: you can only add and subtract ints. You have to reimplement this for each number type, even if the code stays the same.

The Haskell expressive problem is a little more obscure, but more obvious given the inferred type: all the numbers must be of the same type. You can see this with some partial application:
     pm (1::Int)      -- Int → Int → Int
     pm (1::Integer)  -- Integer → Integer → Integer
     pm (1::Float)    -- Float → Float → Float
The Python version works on a numeric tower: once you introduce a float, it sticks. This may be good for you, or it may be bad. If you are reducing some data with pm and a float sneaks in there, you won't see it until you get the final output. So with dynamic promotion, pm works with everything, even some things you probably don't want.

There are adjustments you can make to get the Haskell version to be more general, but this all depends on the kind of generalization you mean. This is a matter of continued innovation, even in the standard library; many commonly used libraries provide alternative, more generic versions of built-in Prelude functions, such as fmap for functors, one of many, many generalizations that work with lists, in this case replacing map.
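To give one concrete instance of that generalization: fmap is the replacement for map, and the same one-line definition then works on any functor, not just lists (incAll is my own example name):

```haskell
-- fmap generalizes map: this one definition works for lists, Maybe,
-- Either e, IO, and every other Functor instance.
incAll :: (Functor f, Num a) => f a -> f a
incAll = fmap (+ 1)
```

So incAll [1, 2, 3] yields [2, 3, 4], while incAll (Just 41) yields Just 42; map could only have provided the first.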


[1] Use whatever meaning you like for “valid”, but if you want something more formal, perhaps “it terminates” will suffice.

Sunday, April 1, 2012

“Inference” is “proof assistance”.

As a Lisper, I'm used to just writing out some code that more or less expresses what I mean, then trying it out. I don't want to mess around with proving that what I'm writing makes sense. Moreover, when I first started learning Haskell, I didn't know nearly enough to begin proving what I meant.

Fortunately, Haskell can figure out all the argument types and return types of very complex functions.¹ You know how to write a function that applies a function to each element of a list, and then combines all the resulting lists into one, so just write it:

    concatMap _ [] = []
    concatMap f (x:xs) = f x ++ concatMap f xs
    -- inferred type of concatMap is (t → [a]) → [t] → [a]

That's pretty nice; I didn't have to specify a single type, and Haskell figured out not only the types of the arguments and results, one of which was itself a function type, but figured out the precise level of polymorphism appropriate. A frequent mistake when trying to guess this type is writing (a → [a]) → [a] → [a], which is not as general as the inferred version above. It will compile, but unnecessarily (and often fatally) restrict users of the concatMap function.²
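To see the restriction bite, here is the same definition pinned to the less general guessed type (the primed name is my own):

```haskell
-- Over-restricted: the element types of input and output are forced
-- to be equal, unlike the inferred (t → [a]) → [t] → [a].
concatMap' :: (a -> [a]) -> [a] -> [a]
concatMap' _ []     = []
concatMap' f (x:xs) = f x ++ concatMap' f xs

-- concatMap' (\n -> [show n]) [1, 2, 3]  -- rejected: show changes the
-- element type.  With the inferred type, the same call is accepted.
```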

So inference helps you prove things, often avoiding or explaining generalizations you didn't think of. It is a “proof assistant”. It greatly aids in refactoring, if you continue to rely on it, as you have to fix your proof in fewer places when you change the rules. It's an absolutely vital tool for entry into the typeful world, when you frequently know how to write a function, but not how to express its true, maximally polymorphic type.

Unfortunately, the “proof assistant” can't figure out absolutely everything. Moreover, the semantics of the language and type system affect how much the assistant can prove.³


[1] Haskell is really figuring out “the types of very complex expressions”, but that doesn't sound quite so good, despite being quite a bit better.

[2] As it happens, we've restricted ourselves to lists, where concatMap actually makes sense for all monads, but that's a result of our using the list-only operations for our implementation, not a failure of the type inference. In Haskell terms, concatMapM f xs = join (liftM f xs), which is inferred as Monad m ⇒ (t → m a) → m t → m a. Other generalizations are possible, and you can accidentally lock them down in exactly the same way as concatMap, including to our original inferred type.

[3] To be overly general, features like mutability and looking up functions by name on a receiver type make inference harder. These are the rule in Simula-inspired object-oriented environments, making inference harder in OOP, and conversely, easier in functional environments. For example, in the Java expression x.getBlah(y), you can't infer anything about getBlah until you know the type of x. But in Haskell, getBlah has one known type, albeit perhaps polymorphic or constrained by typeclasses, which can be used to infer things about x and y without necessarily knowing anything else about them.

Thursday, March 29, 2012

With a type system, whether you can write a program depends on whether you can prove its correctness.

The trouble with bad type systems is that you have to use “escape hatches” pretty frequently. In C and many of its derivatives, these take the form of casts, which, in type terms, are like saying “I can't prove this even a little bit, but trust me, it's true. Try it a few times and you'll see.” The traditional cast system is very broad, as it must be to account for the shortcomings of lesser type systems. No one wants to reimplement the collection classes for every element type they might like to use.

After being so impressed by the power of Haskell's inference, many people next discover that they can't put values of different types into lists.¹ Well, that's not quite true, you can always chain 2-tuples together, but that's not what you really mean. Well, what did you mean?

Oh, well, I meant that sometimes the elements of my list are integers, and sometimes they're strings.

Okay, no problem. Put the integer type and string type together as alternatives in a single type:
     data IntOrString = AnInt Int | AString String
     [AnInt 42, AString "hi"] -- has type [IntOrString] 
No, I meant alternating integers and strings, one of each paired with the other.

Well why didn't you say so!
     data IntAndString = IntAndString Int String
     [IntAndString 42 "hi"] -- has type [IntAndString] 
You can't just stick integers and strings together in a list without proving something about what you mean. To write any program typefully, you have to prove that it sort of makes sense. In the former example, you really meant that each element could be either one, and you have to prove that it's one of those, and not, say, a map, before you can put it in the list. In the latter example, you have to prove that you have exactly one string for each integer that you put into the list.
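The proof travels with the data, too: every consumer of the list must in turn handle exactly the alternatives that were proven, no more and no fewer. A small sketch (describe is my own illustration):

```haskell
data IntOrString = AnInt Int | AString String

-- The compiler checks that both proven alternatives are handled, and
-- that no impossible case (say, a map) is claimed.
describe :: IntOrString -> String
describe (AnInt n)   = "int: " ++ show n
describe (AString s) = "string: " ++ s
```

So map describe [AnInt 42, AString "hi"] gives ["int: 42", "string: hi"].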

This permits a more analytical approach to programming than can occur in latent-typed systems. Let's say you had the [IntOrString], and you realized it was wrong and changed it to [IntAndString] in one module. You have two other modules that are trying to use the lists, and now they don't work, because you didn't prove that you had one string for each integer in those modules. Now nothing loads, and you have to jump around for a bit fixing your proofs until you can test again. This separates the task into two phases: one where you're only thinking about and testing the proofs, and the other where you're thinking about and testing the behavior.

I don't think this is an unqualified improvement over the latent-typed situation. On one hand, breaking tasks down into little bits is the foundation of human software development. Moreover, this example clearly helped us to clarify what we meant about the structure we were building. On the other hand, sometimes I prefer to focus on getting one module right on both type and runtime levels before moving on to the next. This is harder to do with most typeful programming languages, as type errors naturally cascade, and types both described and inferred usually influence runtime behavior.

[1] Haskell also has escape hatches, but using them is viewed rather like gratuitous use of eval is by Lispers. Whereas most C and Java programs use casts, very few Haskell programs use Data.Dynamic, just as very few OCaml programs use Obj.magic.

About Me


I am S11001001, s11 for short.  Programmer and Free Software enthusiast.

Search for my name to see more stuff about me; no one shares my real name, and no one shares my username, though I can't understand why.