# Failed Experiments.

A Mr. Fleming wishes to study bugs in smelly cheese; a Polish woman wishes to sift through tons of Central African ore to find minute quantities of a substance she says will glow in the dark; a Mr. Kepler wants to hear the songs the planets sing.

## Tuesday, April 3, 2012

### Some type systems are much better than others.

As a shortcut, I previously complained about C's and Java's type systems leaving bad impressions. Let's broaden that to type system research as a whole, setting aside useless arguments about the relative merits of this or that programming language. Some type systems let you express more of the set of valid programs than others, or different subsets of it. And, stepping out of the theoretical realm: because we are human, different inference engines are better at figuring out types in different situations.

There is a nice graphic in Gödel, Escher, Bach that illustrates this issue: it displays the universe split into the true and the false, and elaborate structures covered with appendages illustrating “proof”. Of all valid programs¹, many are “typeable”, meaning that we can prove that they are sound, type-wise, with a particular type system; conversely, most invalid programs are not typeable.

However, you might imagine that the fingers stretch into the invalid space on occasion, and they don't cover the valid space entirely either; for any given type system, there are valid programs that are rejected, and there are invalid programs that are accepted.

#### Not so simple math

The goal of type system research is to both accept more valid programs and reject more invalid programs. Let's consider these three programs in three different languages, which implement one step of a divide-and-conquer list reduction strategy.
```
def pm(x, y, z):                # Python
    return x + y - z

pm x y z = x + y - z            -- Haskell
-- inferred as Num a ⇒ a → a → a → a

int pm(int x, int y, int z) {   /* C */
    return x + y - z;
}
```
First, the expressive problem in C is easy to spot: you can only add and subtract `int`s. You have to reimplement this for each number type, even if the code stays the same.

The Haskell expressive problem is a little more obscure, though the inferred type gives it away: all the numbers must be of the same type. You can see this with some partial application:
```
pm (1::Int)      -- Int → Int → Int
pm (1::Integer)  -- Integer → Integer → Integer
pm (1::Float)    -- Float → Float → Float
```
The Python version works on a numeric tower: once you introduce a float, it sticks. This may be good for you, or it may be bad. If you are reducing some data with `pm` and a float sneaks in there, you won't see it until you get the final output. So with dynamic promotion, `pm` works with everything, even some things you probably don't want.

There are adjustments you can make to generalize the Haskell version, but which adjustment is right depends on the kind of generalization you mean. This is a matter of continued innovation, even in the standard library; many commonly used libraries provide alternative, more generic versions of built-in `Prelude` functions, such as `fmap`, which generalizes the list-only `map` to arbitrary functors.
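As a small sketch of that kind of generalization (the names `doubleAll`, `listResult`, and `maybeResult` are my own examples, not from any library): `map` only works on lists, while `fmap` works on any `Functor`, so one definition serves both lists and `Maybe`:

```haskell
-- One polymorphic definition via fmap, instead of one per container.
doubleAll :: Functor f => f Int -> f Int
doubleAll = fmap (* 2)

listResult :: [Int]
listResult = doubleAll [1, 2, 3]     -- [2,4,6]

maybeResult :: Maybe Int
maybeResult = doubleAll (Just 10)    -- Just 20
```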

#### Footnotes

[1] Use whatever meaning you like for “valid”, but if you want something more formal, perhaps “it terminates” will suffice.

## Sunday, April 1, 2012

### “Inference” is “proof assistance”.

As a Lisper, I'm used to just writing out some code that more or less expresses what I mean, then trying it out. I don't want to mess around with proving that what I'm writing makes sense. Moreover, when I first started learning Haskell, I didn't know nearly enough to begin proving what I meant.

Fortunately, Haskell can figure out all the argument types and return types of very complex functions.¹ You know how to write a function that applies a function to each element of a list, and then combines all the resulting lists into one, so just write it:

```
concatMap _ [] = []
concatMap f (x:xs) = f x ++ concatMap f xs
-- inferred type of concatMap is (t → [a]) → [t] → [a]
```

That's pretty nice; I didn't have to specify a single type, and Haskell figured out not only the types of the arguments and result, one of which was itself a function type, but also the precise level of polymorphism appropriate. A frequent mistake when trying to guess this type is writing `(a → [a]) → [a] → [a]`, which is less general than the inferred version above. It will compile, but it unnecessarily (and often fatally) restricts users of the `concatMap` function.²
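A sketch of why that restriction bites (`myConcatMap` is the same definition under a name that doesn't clash with the `Prelude`; `digits` is my example): with the inferred type, the input and output element types may differ, as when mapping `show` over numbers. The narrower annotation `(a → [a]) → [a] → [a]` would reject this call, since `Int` is not `Char`.

```haskell
-- The inferred, fully general type: t and a are independent.
myConcatMap :: (t -> [a]) -> [t] -> [a]
myConcatMap _ []     = []
myConcatMap f (x:xs) = f x ++ myConcatMap f xs

-- Here t = Int and a = Char; the restricted type can't express this.
digits :: String
digits = myConcatMap show [1, 2, 3 :: Int]   -- "123"
```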

So inference helps you prove things, often avoiding or explaining generalizations you didn't think of. It is a “proof assistant”. It greatly aids in refactoring, if you continue to rely on it, as you have to fix your proof in fewer places when you change the rules. It's an absolutely vital tool for entry into the typeful world, when you frequently know how to write a function, but not how to express its true, maximally polymorphic type.

Unfortunately, the “proof assistant” can't figure out absolutely everything. Moreover, the semantics of the language and type system affect how much the assistant can prove.³

#### Footnotes

[1] Haskell is really figuring out “the types of very complex expressions”, but that doesn't sound quite so good, despite being quite a bit better.

[2] As it happens, we've restricted ourselves to lists, where `concatMap` actually makes sense for all monads, but that's a result of our using list-only operations in our implementation, not a failure of the type inference. In Haskell terms, `concatMapM f xs = join (liftM f xs)`, which is inferred as `Monad m ⇒ (t → m a) → m t → m a`. Other generalizations are possible, and you can accidentally lock them down in exactly the same way as `concatMap`, including back down to our original inferred type.
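Spelled out as a loadable sketch (only `join` and `liftM` come from `Control.Monad`; the demo values are my own): the same one-line definition works for lists and for `Maybe`, because the inferred type constrains only `Monad m`.

```haskell
import Control.Monad (join, liftM)

-- The footnote's monadic generalization, written out.
concatMapM :: Monad m => (t -> m a) -> m t -> m a
concatMapM f xs = join (liftM f xs)

-- On lists it behaves like concatMap:
listDemo :: [Int]
listDemo = concatMapM (\x -> [x, x]) [1, 2]           -- [1,1,2,2]

-- On Maybe it chains optional computations:
maybeDemo :: Maybe Int
maybeDemo = concatMapM (\x -> Just (x + 1)) (Just 2)  -- Just 3
```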

[3] To be overly general, features like mutability and looking up functions by name on a receiver type make inference harder. These are the rule in Simula-inspired object-oriented environments, making inference harder in OOP, and conversely, easier in functional environments. For example, in the Java expression `x.getBlah(y)`, you can't infer anything about `getBlah` until you know the type of `x`. But in Haskell, `getBlah` has one known type, albeit perhaps polymorphic or constrained by typeclasses, which can be used to infer things about `x` and `y` without necessarily knowing anything else about them.
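A minimal sketch of that direction of inference (the function `bang` and its body are hypothetical examples of mine): because `replicate :: Int -> a -> [a]` has a single known type, using it pins down the argument types with no annotations at all, the opposite of the `x.getBlah(y)` situation.

```haskell
-- replicate's known type, plus the String literal, forces
-- n :: Int and c :: Char; nothing about bang is annotated.
bang n c = replicate n c ++ "!"
-- inferred: bang :: Int -> Char -> String
```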