03 September 2016

Arithmetic Types in Kitten

In order to motivate myself to work on my programming language, Kitten, I’ve decided to publish my notes on various aspects of the design as I figure them out. This is partly to solicit comments and drum up interest in the project, but mostly it’s an exercise in ensuring my ideas are solid enough to explain clearly.

In these articles, “Kitten” will refer to the “ideal” language and the new work-in-progress compiler that approximates it.

Explicitness

In non-generic definitions, it should always be evident from the code what type of data you’re using. As such, the type of integer and floating-point literals can be specified with a type suffix: i8, i16, …, u8, …, f32, f64; the unadorned integer literal 1 is equivalent to 1i32, and the float literal 1.0 is equivalent to 1.0f64. I think signed 32-bit integers and 64-bit floats are reasonable defaults for common use cases.

In the future, we could relax this by allowing the suffix to be inferred, or by allowing the default suffix to be locally changed.

// Currently fails to typecheck; 3 could be deduced to mean 3i64.
(2i64 + 3)

// Currently not valid syntax; 2 and 3 could be defaulted to Int64.
assume (Int64)
(2 + 3)
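
For comparison, Haskell’s default declaration already does something much like the proposed assume: within a module, an ambiguous numeric literal defaults to the first suitable type in the declared list. A small Haskell example:

import Data.Int (Int64)

-- Ambiguous integer literals in this module now default to Int64,
-- and ambiguous fractional literals to Double.
default (Int64, Double)

main :: IO ()
main = print (2 + 3)  -- both literals default to Int64; prints 5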

Uniformity

Because overloading interacts rather poorly with type inference, it would be easiest for me to provide different operators for different types, such as + for integer addition and +. for floating-point addition, as in OCaml. I did this in the old compiler, and it wasn’t great for usability. The new compiler solves this with traits: for each operator there is a single generic definition, with instances provided for all the common arithmetic types.

trait + <T> (T, T -> T)

instance + (Int32, Int32 -> Int32):
  _::kitten::add_int32

instance + (Float64, Float64 -> Float64):
  _::kitten::add_float64

Traits work like template specialisations in C++: if + is inferred to have the type Int32, Int32 -> Int32 at a given call site, then the compiler will emit a call to that particular instance; no generic code makes it into the final executable. And if no such instance exists, you’ll get a straightforward compilation error.

These traits will generally need to be inlined for performance reasons. Luckily, Kitten’s metadata system allows us to specify this easily enough:

about +:
  docs: """
    Adds two values of some type, producing a result of the same type.
    """
  inline: always

The about notation is a dumping ground for all metadata: documentation, examples, tests, operator precedence information, and optimization hints. The idea is that it’s better to have all of this information in a single structured format than to use some combination of magical comments, pragmas, attributes, and otherwise special syntax.

Safety

We want arithmetic operations to be safe, meaning that failure modes such as implicit overflow and trapping should be opt-in.

Therefore, we should provide an arbitrary-precision integer type (Int) as a sane default, as well as an array of signed and unsigned fixed-precision integer types in common sizes: Int8, Int16, Int32, Int64, and their UIntN counterparts.

Kitten has a permission system for reasoning about the permitted side effects of functions. Operations on fixed-precision integers (both signed and unsigned) are checked by default, requiring the +Fail permission to allow them to raise assertion failures in case of overflow.

instance + (Int32, Int32 -> Int32 +Fail) { … }
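
In Haskell terms, the default checked behaviour is roughly an addition that detects overflow before it happens and fails. This is only a sketch of the semantics: Haskell’s error is untracked, whereas Kitten would record the possibility of failure in the type as +Fail.

import Data.Int (Int32)

-- Checked addition: fail loudly on overflow instead of wrapping.
addInt32 :: Int32 -> Int32 -> Int32
addInt32 x y
  | y > 0 && x > maxBound - y = error "Int32 overflow in +"
  | y < 0 && x < minBound - y = error "Int32 underflow in +"
  | otherwise                 = x + y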

However, if most arithmetic operations can fail, then +Fail will proliferate through all the type signatures of calling code, which is a real drag, and might encourage people to use unchecked types more liberally than they should. We can solve this in two ways:

  • Make +Fail implicit in type signatures, and introduce a notation -Fail for removing it. This speaks to a more general notion of implicit permissions, which I think would be really valuable: it would let us introduce new implicit permissions in a backward-compatible way, allowing new code to opt out. For example, if we implicitly grant +Alloc to all functions, then they’re permitted to allocate memory on the heap—lists, closures, bignums, and so on. But for performance-critical code, you may want a static guarantee that your code performs no heap allocations, so you can opt out by specifying -Alloc.
  • Introduce a wrapper type, Checked<T>, which provides arithmetic operations that don’t require +Fail, but instead return an Optional<T> as the result. This would mean making the signatures of the arithmetic traits more general, or inlining the Optional into the representation of Checked.

Neither is unequivocally better, and they have different use cases, so I think it makes sense to provide both.
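
To make the Checked approach concrete, here is a minimal Haskell sketch; Checked and addChecked are names I’ve made up for illustration, with Maybe standing in for Optional:

import Data.Int (Int32)

-- A wrapper whose arithmetic reports overflow in the result type
-- instead of failing.
newtype Checked a = Checked a
  deriving Show

addChecked :: Checked Int32 -> Checked Int32 -> Maybe (Checked Int32)
addChecked (Checked x) (Checked y)
  | inRange s = Just (Checked (fromInteger s))
  | otherwise = Nothing
  where
    s = toInteger x + toInteger y  -- widen, then check the range
    inRange n = n >= toInteger (minBound :: Int32)
             && n <= toInteger (maxBound :: Int32)

For example, addChecked (Checked 1) (Checked maxBound) is Nothing, and the caller decides how to recover, with no +Fail in sight.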

Modifiers

In addition to Checked<T>, Kitten provides wrapper types for different arithmetic modes:

  • Wrapped<T> is a T where overflow is defined to wrap modulo the range of T.
  • Unchecked<T> is a T where overflow is implementation-defined. I’m tempted to make this type require the +Unsafe permission, but that’s intended for operations that can violate memory safety, which Unchecked<T> cannot.
  • Saturating<T> is a T where overflow results in clamping to the range of T; this is useful in some signal-processing applications.
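
As a rough illustration, the saturating mode can be modelled in Haskell like this (my own naming, with Int8 chosen only to make the clamping easy to see):

import Data.Int (Int8)

-- Saturating addition: clamp to the representable range on overflow.
newtype Saturating a = Saturating a
  deriving Show

addSaturating :: Saturating Int8 -> Saturating Int8 -> Saturating Int8
addSaturating (Saturating x) (Saturating y)
  | y > 0 && x > maxBound - y = Saturating maxBound
  | y < 0 && x < minBound - y = Saturating minBound
  | otherwise                 = Saturating (x + y)

Here addSaturating (Saturating 100) (Saturating 100) gives Saturating 127, the top of the Int8 range; Wrapped<T> corresponds to what GHC’s fixed-width types already do by default.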

Platform Independence

All the basic arithmetic types should be provided on all platforms. That may require emulation in some cases, such as 64-bit arithmetic on a 32-bit processor. Implementations can provide additional platform-specific types such as Int128 or Float80.

SIMD operations are implemented on container-like types such as Vector4<Float32>; naïve-looking code such as:

vec { (* 2) } map

should be lowered to a single instruction; multiplication by two can be strength-reduced to adding the vector to itself:

addps %xmm0, %xmm0

This is subtle, and I haven’t worked out all the details yet, but SIMD operations are important enough for performance-critical code that it seems worthwhile to consider them early on.

Licensing Problems

For arbitrary-precision types, I’m in a bit of a bind. I’m not equipped to write a native Kitten bignum library with performance on par with GMP, but because GMP is licensed under the LGPL, it can’t be used freely: statically linking it into generated executables would impose the LGPL’s relinking requirements on every Kitten binary, so in practice we’d have to link it dynamically, which complicates distribution and costs some performance.

TL;DR

  • Kitten has all the usual machine integer and float types that systems programmers love.
  • Literal types are explicit, with some sane defaults: signed Int32 and Float64.
  • Overflow is checked by default, with various type-system features for more control.
  • The standard library provides arbitrary-precision and SIMD operations.

25 January 2016

“Weird” is intellectually lazy

Saying “X is weird” is equivalent to saying “I don’t understand X, and I blame X for it.”

I often have to make this point in discussions of programming languages. I enjoy writing in Haskell, and I’ve taught it to a number of developers. People who are accustomed to imperative programming languages are quick to term Haskell “weird”. When I ask what they mean by this, it usually boils down to the fact that it’s different from what they’re accustomed to, and they don’t understand why the differences are necessary or useful.

A “weird” language feature is one that seems gratuitous because, almost by definition, you don’t yet understand why the language authors would’ve designed it that way. Once you understand the reasoning behind it, the weirdness evaporates. Of course, you might still reasonably hate the design, but it’s impossible to give a fair critique without a proper understanding.

For instance, criticising Go for its lack of generics misses the point of what Go is for—to be a simple, boring language, with minor improvements on the status quo, that admits good tooling. And Go does this admirably. It’s not going to advance the state of the art in the slightest. It’s not going to shape the future of programming. It is going to be a language that people use for writing regular old code right now.

To shamelessly misappropriate a quote from Tim Minchin: “Throughout history, every mystery ever solved turned out to be not magic.” If you find yourself thinking that something is weird or magical, that’s not an excuse to dismiss it out of hand—rather, it’s an excellent opportunity to dig in and learn something enlightening.

14 January 2016

Thoughts on using fractional types to model mutable borrowing of substructures

In type theory, a pair is denoted by a product type, with a constructor pair and two projections fst and snd:

$mathjax[ \textrm{pair}(a, b) : a \times b \\ \textrm{fst} : a \times b \to a \\ \textrm{snd} : a \times b \to b ]$

These projections are destructive: the pair is consumed, and one of the values is discarded. However, when we have a large structure, often it’s cheaper to modify parts of it in place, rather than duplicating and reconstructing it. So we’d like to come up with a system that lets us lend a field out for modification temporarily, and repossess the modified field afterward. That would let us model mutation in terms of pure functions, with the same performance as direct mutation.

Obviously our projections can’t destroy either of the fields, because we want to be able to put them back together into a real pair again. So the functions must be linear—the result has to contain both $mathjax(a)$ and $mathjax(b)$. What result can we return other than $mathjax(a \times b)$? I dunno, how about something isomorphic?

$mathjax[ \textrm{lend-fst} : a \times b \to a \times ((a \times b) / a) \\ \textrm{lend-snd} : a \times b \to b \times ((a \times b) / b) ]$

So this gives us a semantics for quotient types: a quotient $mathjax(a / b)$ denotes a value of type $mathjax(a)$ less a “hole” that we can fill in with a value of type $mathjax(b)$. We can implement this by drawing another isomorphism between $mathjax((a \times b) / a)$ and something like $mathjax(\textrm{ref}(a) \times (\textrm{hole}(\textrm{sizeof}(a)) \times b))$—the “hole” is an appropriately sized slot that we can fill in with a value of type $mathjax(a)$, and the quotient denotes a write-once mutable reference to that slot.

The inverses of the lending functions are the repossession functions, which repossess the field and reconstitute the pair by filling in the reference:

$mathjax[ \textrm{repo-fst} : a \times ((a \times b) / a) \to a \times b \\ \textrm{repo-snd} : b \times ((a \times b) / b) \to a \times b ]$

When we fill in the slot, the reference and quotient are destroyed, and we get back a complete data structure again. The references are relative offsets, like C++ pointers to members, so we could use this to implement, say, swapping fields between two structures.
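
Ignoring linearity and the in-place-update story for a moment, the lending and repossession functions can be prototyped in Haskell by representing the quotient (a × b) / a as a function that fills the hole. This captures the semantics, not the performance:

-- Lend out a field; the second component is the pair-with-a-hole.
lendFst :: (a, b) -> (a, a -> (a, b))
lendFst (x, y) = (x, \x' -> (x', y))

lendSnd :: (a, b) -> (b, b -> (a, b))
lendSnd (x, y) = (y, \y' -> (x, y'))

-- Repossess a field by filling the hole back in.
repoFst :: (a, a -> (a, b)) -> (a, b)
repoFst (x, hole) = hole x

repoSnd :: (b, b -> (a, b)) -> (a, b)
repoSnd (y, hole) = hole y

For example, let (x, hole) = lendFst (1, "a") in repoFst (x + 1, hole) rebuilds the pair as (2, "a"): the field was modified “through the hole” and then repossessed.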

I’m pretty sure that if you made these types non-duplicable, it’d guarantee that the lifetime of a substructure reference would be a sub-lifetime of the lifetime of the structure. So, notably, these lending functions wouldn’t need to actually do anything at runtime; they’d simply give us a type-level way to ensure that only one mutable reference to a particular field of a value could exist at a time. The same property is enforced in Rust with its borrow system, but in this system, we don’t need any concept of references or lifetimes. Rather than enforce a static approximation of stack/scope depth, as in Rust, we can enforce a static approximation of structure depth.

We probably need a subtyping rule, which states that a quotient type is a subtype of its numerator, as long as the denominator is not void:

$mathjax[ a / b \le a, b \ne 0 ]$

You can’t borrow a void field from a structure anyway, because you can’t construct a void value, so the side condition should never come up in practice.

In order for this to be useful in a real language, we probably need an additional “swap” rule, which states that it doesn’t matter in which order we lend or repossess substructures:

$mathjax[ a / b / c = a / c / b ]$

And clearly $mathjax(a / 1)$ is isomorphic to $mathjax(a)$, because every structure is perfectly happy to lend out useless values, and repossessing them does nothing.

We can use negative types to denote borrowing of fields from discriminated unions. If the type $mathjax(a + b)$ denotes a discriminated union like Haskell’s Either a b (which has a value of Left x where x is of type a, or Right y where y is of type b) then we also have two projections—but they’re weird:

$mathjax[ \textrm{lend-left} : a + b \to a + ((a + b) - a) \\ \textrm{lend-right} : a + b \to b + ((a + b) - b) ]$

Here, when we try to lend out a field, it might not be present. So we get either the lent field, or an instance of a smaller sum type. This can be used to implement pattern-matching: try to lend each field in turn, reducing the space of possible choices by one field at a time. If you get all the way down to $mathjax(0)$, you have a pattern-match failure. This automatically rules out redundant patterns: you can’t try to match on a field you’ve already tried, because it’s no longer present in the type.
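
Again ignoring linearity, here is a Haskell sketch of this matching-by-subtraction idea; the helper names are mine:

import Data.Void (Void, absurd)

-- Lending the Left field of a + b yields either the field or the
-- residue (a + b) - a, which is isomorphic to b. At the term level
-- this is just the identity; the content is all in the type.
lendLeft :: Either a b -> Either a b
lendLeft = id

-- Match a three-way sum by lending one field at a time; the residue
-- shrinks at each step, so no case can be retried.
match3 :: (a -> r) -> (b -> r) -> (c -> r) -> Either a (Either b c) -> r
match3 f g h s = case lendLeft s of
  Left x   -> f x                  -- got the a field
  Right s' -> case lendLeft s' of  -- residue: b + c
    Left y  -> g y                 -- got the b field
    Right z -> h z                 -- residue: just c

-- Reaching the empty sum 0 (Void) means every case has been ruled
-- out, so a match failure is unrepresentable.
matchEmpty :: Void -> r
matchEmpty = absurd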

That’s about as far as I’ve gotten with this line of thinking. If anyone has ideas for how to extend it or put it into practice, feel free to comment.

07 June 2015

Give.

When I walk down the streets of San Francisco, people sometimes ask me for money. And if I have the time, as I almost always do, I stop and talk with them about what brought them to where they are. And if I have the cash, as I almost always do, I give it to them.

People tell me that I’m foolish for giving money to people for no reason. My donations aren’t tax-deductible! They don’t go on the record anywhere! These people might just be using my hard-earned money to buy alcohol and drugs!

So what? I use my money to buy alcohol and drugs. Because like an awful lot of people in the modern west, I’m lonely. My vices are a distraction to tide me over until I meet another lonely someone whose flaws are compatible enough with my own that we can rub our woes together and feel whole again.

So if you can give, give. What the fuck does someone with a steady job need with another few bucks? Let it go. It might make the difference between life and death for someone who was dealt a shitty hand.

27 May 2015

Don’t say what you mean.

In conversation, I used to think it was good to say what you meant. I thought doing so would help me be understood. But I learned very quickly when tutoring computer science students in college that this strategy does not work in general.

The real challenge is to say what you think will make you understood. This turns out to be hard, because you need to develop a mental model of another person’s understanding. It means looking at what you’re trying to say in a new way. This has happy side effects, though: it makes you more empathetic toward others, and gives you a better understanding of the subject.

The old joke “Those who can’t do, teach” is a stupid statement—but an awesome prescription! If you can’t do it, teach it! By explaining things you’ve learned to others, you can develop your understanding and skill. When you can explain something to anyone, it means you have a fully developed model of it in your mind, one that you can look at from anyone’s point of view.

Similarly, when we’re trying to understand others, we should not be listening to what they say but trying to understand what they mean. When someone says something offensive or upsetting, often they didn’t intend for it to sound that way, and the intent matters. Being misunderstood feels bad; think about how often you’ve probably made someone feel that way because you didn’t take a moment to think.

As listeners, we should ask for clarification and try to understand why something was said. Sometimes it’s because a person has a belief we disagree with, or a bias we think is unfair. Sometimes it’s because we were hurtful to them, and they were acting in self-defense. In these cases we can use the moment as an opportunity to learn and teach, rather than getting offended and putting everyone involved on the defensive.

The vast majority of the time, people are basically well-intentioned and just trip a little on the delivery. Giving them the benefit of the doubt is important to develop mutual understanding.