12 September 2014

Artificial Expertise

Artificial expertise is a term that I tend to use in real-life discussions, and I would like to document it here to make people aware of the concept. It refers to the specific kind of detailed knowledge you get from working with complex systems that demand a great deal of skill to operate, when those skills are not transferable to other systems. The expertise is not inherent to the problem, but incidental to the solution, and thus artificial.

As far as computing science goes, I can give you two very good examples of systems that generate artificial expertise. What’s wonderful is that they sit at opposite ends of the spectrum of delight.

C++ is a cult of artificial expertise because it encourages—often necessitates—the constant consideration of details typically irrelevant to the problem at hand. The choice of passing parameters and storing objects by value, reference, pointer, or smart pointer needs to be considered for every declaration. All this takes place in the exciting context of a complex interplay between implicit conversions, exceptions, mutable memory, and silently undefined behaviour.
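
To make this concrete, here is an illustrative snippet (the function names are invented for the example) showing how many spellings a single “take a string” operation can have, each with its own rules about copying, aliasing, nullability, and ownership:

#include <memory>
#include <string>

// Four declarations of the same operation, each forcing a different decision.
void log_by_value(std::string message);                  // copies (or moves) the argument
void log_by_ref(const std::string& message);             // borrows; the caller's object must outlive the call
void log_by_ptr(const std::string* message);             // may be null; no ownership implied
void log_by_smart(std::shared_ptr<std::string> message); // shared ownership; reference-count traffic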

When I use C++, or help my coworkers to, it makes me angry and tired.

Perl is a cult of artificial expertise because it has many “tricks” for producing more compact and expressive code. You could think of a trick as anything non-obvious. I’m loath to use the term because I regularly see programmers labelling as “tricks” many perfectly ordinary language features and programming techniques, such as Python’s list comprehensions or Haskell’s lazy evaluation. In Perl, many of these take the form of sensible defaults that can be left implicit, or special variables used to control the behaviour of common operations.

When I use Perl, I feel empowered and clever. Like C++, Perl gives you many opportunities to improve your code or write it in the manner that you deem best suited to the problem, and in the hands of an expert it’s a powerful tool. Here’s the critical difference: Perl code that I have seen written by a beginner tends to simply not use the “expert” features; C++ code tends to accidentally use them, and often thereby run into undefined behaviour through their miscombination.

Now, knowing either of these languages puts you in imminent danger of getting yourself employed. And I believe this is no coincidence. Creating inefficiency is an excellent way to create jobs—it’s creating efficiency that creates careers.

14 August 2014

Adjective Valence and Linguistic Relativity

There is a concept in linguistics that I could have sworn I’ve read about, but now cannot seem to find mention of anywhere. I call it adjective valence, and examining it can give you insight into how you conceptualise the world around you, how language influences and is influenced by that, and where language is lacking.

The valence of an adjective is its directionality—whether the word is perceived as the essential quality of a trait (positive valence) or its antonym (negative valence), and in what direction the axis goes in spatial metaphors concerning that trait.

For example, hot and cold are antonyms. We can observe that hot has positive valence in English by noting that when something gets hotter, its temperature is said to increase; cold has negative valence because when something gets colder, its temperature decreases. It’s fine for a thing to become more hot, but our language makes it slightly unusual to say less cold for the same concept. This has a sound physical basis: a higher temperature implies a higher average internal kinetic energy.

Not all languages have the same valence for all adjectives. For example, in English, we think of ourselves as moving forward in time, that the future is before us and the past is behind us. This valence leads to many idioms such as “I’m looking forward to it” and “put it behind you”. But in the Aymara language, the future is behind you—after all, you can’t see it—and the past is in front. This was also true in Ancient Greek (ὄπισθεν = behind = in the future, πρόσθεν = in front = in the past) and modern Chinese shows some evidence of a switch from front-past to front-future.

We can use this information to discover linguistic assumptions we make about the world. By inverting the valence of an adjective, we can challenge those assumptions and perhaps achieve insights about whether our intuitions correspond to real physical phenomena. For example, let’s replace temperature with its reciprocal, such that “absolute zero” is considered infinitely high on the scale of coldness. If this were our basic assumption, we would have no trouble accepting that such a temperature is unattainable in any physical system. And it turns out that that quantity, the thermodynamic beta, is a useful one, rigorously defined in terms of the energy and entropy of a system.
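
For reference, the conventional definition is

β = 1/(k_B T) = (1/k_B) · ∂S/∂E

so greater coldness (larger β) really does mean lower temperature, and absolute zero lies unattainably far up the scale, at β → ∞.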

This is an argument in favour of the weak form of linguistic relativity, the notion that language influences thought. It’s not difficult to consider the implications of inverting the valence of an adjective, but we rarely think to do so, and talking about the implications is often hilariously difficult. This metal ball has a high mass, and a low—what? Take notice of these things in conversation and try inverting them to peek behind the curtain of your own thoughts.

08 May 2014

Constification in C++11

Here’s a neat little pattern. Suppose you want to initialise some object using mutating operations, and thereafter make the object const, without any overhead. In C++11 you can do just that with a lambda:

#include <vector>
using std::vector;

const auto v = []{
  vector<int> v;                 // mutable while we build it
  for (auto i = 0; i < 10; ++i)
    v.push_back(i * 10);
  return v;                      // moved (or elided) into the const outer v
}();

This “inline builder pattern” turns out to be a nice way to confine mutation to a smaller scope, making your code a bit easier to reason about. Of course, the drawback is that the objects you would initialise this way are exactly the sorts of objects you might prefer to move rather than copy, and const interferes with that; but that is a problem in C++11 generally.
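
A minimal illustration of that drawback (the names here are just for the example): moving from a const object silently degrades to a copy.

#include <utility>
#include <vector>

std::vector<int> build();   // some hypothetical builder function

void example() {
  const auto v = build();   // initialised once, then immutable
  auto w = std::move(v);    // compiles, but copies: you cannot move from a const object
}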

It goes to show that lambdas make working with const things a bit nicer all around. This pattern also lets you initialise const members in an object:

struct X {
  X()
    : x([]{
      auto s = 0;
      auto i = 10;
      while (i) s += --i;   // s = 9 + 8 + … + 0 = 45
      return s;
    }())
    {}
  const int x;
};

Ugly, but it gets the job done. That’s C++ for you.

21 August 2013

Lexical Closures without Garbage Collection

The problem and the solution

When designing my Kitten programming language, I wanted to support lexical closures without the need for a tracing garbage collector. The problem of closure is simple: anonymous functions often use local variables from their enclosing scope, and in a memory-safe language those variables must continue to exist after the outer function exits, if the inner function escapes. Take this classic example of function currying in JavaScript:

// curriedAdd :: Number -> (Number -> Number)
function curriedAdd(x) {
  return function(y) {
    return x + y;
  };
}

var inc = curriedAdd(1);

Here, x must exist after curriedAdd has exited, in order for us to safely and meaningfully call the resulting anonymous function. Since Kitten has no mutable variables, there is no observable difference between capturing a variable by reference and capturing it by value. So the Kitten compiler takes a definition like this:

def curriedAdd (Int -> (Int -> Int)):
  ->x
  { x + }

def inc (Int -> Int):
  1 curriedAdd @

And rewrites it to copy the value of x into the closure of the anonymous function. In low-level pseudocode, this is what’s happening under the hood:

def curriedAdd:
  setlocal0
  $(local0){ closure0 __add_int }

Now there are no references to the local variable within the inner function—only the value of x is captured when the function object is constructed on the stack. The runtime cost is minimal, and no garbage is generated.
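
For comparison, here is a rough C++ analogue of the same strategy (illustrative only, not the Kitten compiler’s actual output): the lambda captures x by value, so the closure carries its own copy and nothing from the enclosing frame needs to outlive the call.

#include <iostream>

auto curriedAdd(int x) {
  return [x](int y) { return x + y; };  // x is copied into the closure object
}

int main() {
  auto inc = curriedAdd(1);
  std::cout << inc(5) << "\n";  // 6
}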

The rest of the story

Saying “by value” and “no garbage” is accurate, but not the complete picture. It would be inefficient always to deep-copy every closed-over value, particularly one larger than a machine word.

So while integers, booleans, and floating-point numbers are all copied, heap-allocated objects such as vectors can instead be shared using reference counting. Coming from such languages as C++, I like that this preserves deterministic memory behaviour. And in the absence of mutation and lazy evaluation, cycles are impossible, so reference counting alone suffices to eagerly reclaim all garbage.
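
As a hedged sketch of that policy in C++ (with shared_ptr standing in for Kitten’s reference counting, and the names invented for the example): the scalar is copied into the closure, while the vector is shared by bumping a count rather than deep-copied.

#include <memory>
#include <numeric>
#include <utility>
#include <vector>

auto makeAdder(int x, std::vector<int> xs) {
  auto shared = std::make_shared<const std::vector<int>>(std::move(xs));  // shared, not copied
  return [x, shared](int y) {                                             // x copied by value
    return x + y + std::accumulate(shared->begin(), shared->end(), 0);
  };
}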

This transformation is equally valid in other languages, as long as we ignore mutation. You can easily rewrite the JavaScript example above to use explicit closure:

// curriedAdd :: Number -> (Number -> Number)
function curriedAdd(x1) {
  return function(x2, y) {
    return x2 + y;
  }.bind(this, x1);
}

It also works when we add mutation. Here’s a function encapsulating an impure counter from a starting value:

// counter :: Number -> (() -> IO Number)
function counter(x) {
  return function() {
    return x++;
  };
}

var c = counter(0);
console.log(c());    // 0
console.log(c());    // 1
console.log(c());    // 2

We can fake pointers using Array to make this work without lexical closure:

// counter :: Number -> (() -> IO Number)
function counter(x) {
  return function(px) {
    return px[0]++;      // dereference
  }.bind(this, [x]);     // reference
}

However, reference cycles are now possible, so a reference-counting collector would need an auxiliary cycle detector to reclaim all garbage.
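
Here is the hazard in C++ terms (illustrative, with shared_ptr as the reference-counted box): two counted objects that point at each other never see their counts reach zero.

#include <memory>

struct Node { std::shared_ptr<Node> other; };

int main() {
  auto a = std::make_shared<Node>();
  auto b = std::make_shared<Node>();
  a->other = b;
  b->other = a;
  // When a and b go out of scope, each node's count only drops to 1,
  // so plain reference counting leaks the cycle unless a cycle detector steps in.
}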

Optimisations and future work

Production tracing collectors outperform naïve reference-counting collectors by a wide margin. We can make a reference-counting collector that closes that gap through simple optimisations:

  • Reducing the number of bits used in the reference count—most objects have a small number of references.
  • Creating objects with a zero reference count, and only retaining when the object escapes—the vast majority of objects die in the scope they were born.
  • Eliminating redundant reference manipulation, considering only the net reference change in a basic block—most reference changes do not result in deallocation (see the sketch after this list).
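
To make that third optimisation concrete, here is a toy sketch (not Kitten’s actual runtime; the names are invented) of eliding a retain/release pair whose net effect within a basic block is zero:

#include <atomic>

// A minimal reference-counted header, just to make the example self-contained.
struct Object { std::atomic<int> refs{1}; };
inline void retain(Object* o)  { o->refs.fetch_add(1); }
inline void release(Object* o) { if (o->refs.fetch_sub(1) == 1) delete o; }

int use(Object*) { return 42; }

// Naive code generation: every borrowed use bumps and drops the count.
int naive(Object* o) {
  retain(o);
  int result = use(o);
  release(o);
  return result;
}

// After elision: the caller already holds a reference and the net change
// in this block is zero, so both calls can simply be removed.
int optimised(Object* o) {
  return use(o);
}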

I intend to implement such optimisations in the future, as Kitten evolves. My bet is that as a statically typed, high-level language with low-level semantics, Kitten will someday be suitable for games and simulations. There, deterministic memory behaviour is a big win, since low latency is often more important than high throughput. Still, the ultimate direction of the project remains to be seen—I’m just discovering the language as I go. :)

01 July 2013

Static Typing in a Concatenative Language

As you may know, I have for some time been working on a pet language project, Kitten. The name is a pun on concatenative programming, a somewhat overlooked paradigm of stack-based functional programming languages. My February 2012 blarticle on concatenative programming is an okay introduction for those interested. The language is very much in flux, but at the time of this writing, Kitten looks like this:
/* C-style comments */

// Top-level code
99 bottles_of_beer

decl bottles_of_beer (Int ->)  // Separate type declarations
def bottles_of_beer:           // Optional indentation-based syntax
  -> x                         // Local variables
  x verse                      // Postfix expressions
  if x 1 > then:
    x-- bottles_of_beer        // Recursion instead of iteration
  else {}

decl verse (Int ->)
def verse:
  -> x
  x wall newline
  x beer newline
  take newline
  x-- wall newline
  newline

decl beer (Int ->)
def beer:
  bottles " of beer" print

decl wall (Int ->)
def wall:
  beer " on the wall" print

decl take (->)
def take:
  "take one down, pass it around" print

decl bottles (Int ->)
def bottles:
  -> x
  if x 0 = then:
    "no more bottles"
  else if x 1 = then:
    "one bottle"
  else:
    x showi " bottles" cat
  print

Motivation


The premier concatenative programming language, Factor, is dynamically typed, as is its cousin Joy, and Forth is untyped. There was a statically typed concatenative language, Cat, but that project is no longer maintained.

So it seems there is room in the world for a statically typed functional stack language. Kitten was originally dynamically typed, primarily because I am lazy, but the intent was always to move to static typing. It seems to me that the main reason that dynamic languages are so widely used—often in situations where they are not appropriate—is that they are so easy to implement. That can give implementors a shorter iteration time on new and useful features, as well as more time to work on documentation and tooling.

Dynamic languages are deceptive. You can absolutely write large software in a dynamic language, but at the point when dynamic types become a liability and static types become really valuable, it’s already too late! In the long run, static types ease maintainability, optimisation, and static reasoning for computers and humans alike. And that’s not to mention the great opportunities for refactoring, analysis, and visualisation tools.

But static types, without type inference, are just a bad joke. My main goal when designing Kitten’s type system was inferability. Having used Haskell for a few years, I’ve grown accustomed to the benefits of type inference, at least of the local variety. The meat of a program simply shouldn’t need type annotations for the compiler to tell you not only whether it is type-correct, but also what the type is of anything you care to ask about. Even simple type deduction such as C++’s auto or C#’s var is much better than nothing.

Concatenative languages pose some unique problems for type inference, and I’d like to share what I’ve learned while implementing static types in Kitten.

Differences with Dynamic Languages


One of the typical features of dynamically typed concatenative languages is homoiconicity, the property that code and data have a common representation. In concatenative languages, this is the quotation, an analogue of Lisp’s list. This is a really powerful feature, and a valuable tool in a dynamic language. There’s a reason that, alongside Lisp, concatenative languages are among the most heavily metaprogrammed in existence. But homoiconicity and static types are basically incompatible kinds of awesome. A function constructed dynamically could have a dynamic effect on the stack, which a static type system can’t hope to make sense of.

Kitten therefore has to differentiate quotations into two categories: functions and vectors. There are separate primitives for function and vector operations, such as composition and concatenation—even though internally these can be implemented the same way.

The Type System


Kitten’s type system is fairly simple. The base types include Booleans (Bool), characters (Char), integers (Int), floating-point numbers (Float), homogeneous vectors ([a]), and functions (a -> b).

All functions operate on stacks. They consume some number of parameters from their input stack and produce some number of return values, potentially with side effects such as I/O. The juxtaposition of functions denotes their composition—sending the output of one to the input of the next. 2 + is a function of type Int -> Int which adds two to its argument; it consists of the composition of two functions 2 and +, which have types -> Int and Int Int -> Int respectively.

However, functions that actually quantify over stacks pose significant challenges to inference. Functions may contain arbitrary code—of which you want to know the type, because they can be dynamically composed and applied, and those operations should be type-safe. So the basic concatenative compose combinator needs a type like the following:
∀r s t u. r (s → t) (t → u) → r (s → u)
That is, for any stack r with two functions on top, one of type (s → t) and one of type (t → u), it gives you back the same stack r with a new function on top, of type (s → u). All of the variables in this type are quantifying over multiple types on the stack. This is the type that Cat uses for its compose combinator. Already this is strictly less powerful than the dynamic equivalent:
  1. You can no longer dynamically compose functions with interesting stack effects: {1} and {+} are composable, but {drop} and {drop} are not.
  2. The associativity of dynamic composition is broken as a result: {a b} {c} compose does not necessarily equal {a} {b c} compose.
#1 is not an issue in practice, because as fun as arbitrary dynamic effects can be, a static type system is designed to prevent total lunacy, and there’s going to be some collateral damage. So by construction, the type of a dynamic composition must always be known statically.

#2 is more significant, in that the type checker is going to reject some dynamic compositions you might expect to be valid: {drop drop} {} compose is allowed, but {drop} {drop} compose is not.

The problem with quantifying over stacks, however, is that every ordinary function type is inherently polymorphic with respect to the part of the stack that it doesn’t care about. + in this system would have a type like ∀r. r Int Int → r Int. All higher-order functions become higher-rank, and all recursion becomes polymorphic recursion. This makes typechecking significantly more complicated, and type inference undecidable in general.

In light of this, I opted for a simpler approach: model functions as multiple-input, multiple-output, and get rid of such combinators as compose and apply that need to talk about stack types. These can be replaced with a handful of fixed-arity equivalents for applying unary functions, binary functions, and so on.

The process for composing two function types (a → b) and (c → d) is simple; a rough code sketch follows the list:
  1. Pair the values produced by b and consumed by c, and ensure they match.
  2. Let x be those values consumed by c, not produced by b.
  3. Let y be those values produced by b, not consumed by c.
  4. The type of the composed function is x + a → y + d.
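
Here is a rough sketch of that procedure in C++ (illustrative only, not the Kitten compiler’s implementation), representing a stack type as a vector of type names with the deepest value first:

#include <algorithm>
#include <stdexcept>
#include <string>
#include <vector>

using StackType = std::vector<std::string>;   // deepest value first, top of stack last

struct FunType { StackType inputs, outputs; };

// Compose (a → b) with (c → d) following the steps above.
FunType compose(const FunType& f, const FunType& g) {
  const StackType& b = f.outputs;
  const StackType& c = g.inputs;
  const auto k = std::min(b.size(), c.size());

  // 1. Pair the values produced by b with those consumed by c and check them.
  for (std::size_t i = 0; i < k; ++i)
    if (b[b.size() - k + i] != c[c.size() - k + i])
      throw std::runtime_error("type mismatch");

  // 2. x: consumed by c but not produced by b (taken from deeper in the stack).
  StackType x(c.begin(), c.end() - k);
  // 3. y: produced by b but not consumed by c (left on the stack).
  StackType y(b.begin(), b.end() - k);

  // 4. The composed type is x + a → y + d.
  FunType result;
  result.inputs = x;
  result.inputs.insert(result.inputs.end(), f.inputs.begin(), f.inputs.end());
  result.outputs = y;
  result.outputs.insert(result.outputs.end(), g.outputs.begin(), g.outputs.end());
  return result;
}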

Next Time


In a future post, I’ll talk about some of the more low-level details of Kitten, particularly the design of the upcoming VM. As the language takes shape, I’ll also begin offering tutorials and explanations of how everything comes together to write real-world software. Until then!