I’ve been programming for a long time, mostly because I’m addicted to solving puzzles. I’m very much into programming languages, partly because as a programmer I’m surrounded by them, but also because language is an important part of what makes us human.
So while programming is a craft I enjoy, and I love having the power to create awesome stuff practically ex nihilo, I too am disappointed in what we have to work with. Here’s why.
Process-Oriented versus Result-Oriented Programming
Concatenative, functional, object-oriented, logic—whatever paradigm you choose, they all share the same issue: they require that you describe the how rather than the what. In any language you can name, a program is a description of the process by which you compute what you want, when it could be just a description of what you want. Sure, some languages are more declarative than others, but that is not the distinction I mean: a language that expresses a process declaratively is still process-oriented, not result-oriented.
That’s the real issue. As programmers, we solve problems using code; even a fun problem like “a game like this should exist, but doesn’t” counts. So we should be able to express our code in terms of its intended result, not the process by which that result is attained.
I propose a style of programming based on constraint satisfaction. From a set of constraints, it’s possible to derive an algorithm that satisfies them. Of course, we naturally worry about the properties of the algorithm the compiler will derive; is it Θ(n!) when it could be Θ(n log n)? But while that is a reasonable worry, it is not—strictly speaking—relevant.
See, there are many equivalent sets of constraints that define a particular algorithm, and by selecting intelligently from those, we can be confident that the compiler will derive what we intend. Our job should be to intelligently express requirements as formal constraints for the compiler to satisfy.
I know that sounds boring, and I was initially skeptical that useful programs could be made this way. But think about it: every non-trivial program already is implemented this way, just in a slow, error-prone, manual fashion. A programmer takes some requirements, formalises them, then painstakingly translates them into a suitable process. We can avoid one of those steps.
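To make the what/how split concrete, here is a deliberately naive sketch in Python. The `solve` helper and the example constraints are mine, not part of any real system: the "program" is just a list of properties the answer must have, and a brute-force search stands in for the cleverer derivation a real compiler would do.

```python
from itertools import product

def solve(domain, constraints):
    """Brute-force constraint satisfaction: return the first pair
    in the search space that meets every constraint."""
    for candidate in product(domain, repeat=2):
        if all(c(*candidate) for c in constraints):
            return candidate
    return None

# The "what": properties of the answer, not steps to compute it.
constraints = [
    lambda x, y: x + y == 10,
    lambda x, y: x * y == 21,
    lambda x, y: x <= y,
]

print(solve(range(11), constraints))  # → (3, 7)
```

The point is not the (terrible) search strategy but the division of labour: the programmer states requirements, and some solver, however implemented, is responsible for the process.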
A Realistic Example
Say you’re writing a game. The first thing on your to-do list is to be able to move a character around the screen with the arrow keys. In just about any existing language, you have to construct a bunch of unrelated infrastructure to make this happen:
- Transforming continuous physics into a discrete representation
- Dealing with issues of frame rates, rendering, and collision
- Managing the state and resources of living and dead objects
Often, much of this work has already been done for you by a library author, but the fact remains that someone had to do the busywork. Functional reactive programming, in contrast, gets this right. You can say precisely what you mean:
- The character consists of:
  - Acceleration of ( ax, ay )
  - Velocity of ( vx, vy )
  - Position at ( x, y )
  - Sprite of size ( w, h )
- When an arrow key is pressed, change the acceleration:
  - Left: ( −1, 0 )
  - Right: ( +1, 0 )
  - Up: ( 0, −1 )
  - Down: ( 0, +1 )
- Integrate over time:
  - Velocity from acceleration
  - Position from velocity
- The colour of a pixel at ( px, py ) is:
  - If ( px, py ) is within the sprite rectangle ( x, y, x + w, y + h ):
    - The colour of the pixel at ( px − x, py − y ) in the sprite
  - Otherwise, the background colour
This is a complete, fully declarative, fully result-oriented description of the program. It contains only those details that are relevant to the problem at hand, and every detail is expressed as a constraint that can be plugged directly into an FRP system—which is really a simple constraint solver—to derive the runtime behaviour.
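For contrast, here is roughly the process a runtime would have to derive from that description, hand-written as a Python sketch. The dictionary representation and the `step`/`pixel` helpers are illustrative assumptions of mine, not any particular FRP system’s API:

```python
# Arrow keys map to accelerations, as in the description above.
ACCEL = {"left": (-1, 0), "right": (+1, 0), "up": (0, -1), "down": (0, +1)}

def step(char, dt, key=None):
    """Advance the character by one frame of duration dt."""
    # When an arrow key is pressed, change the acceleration.
    if key in ACCEL:
        char["ax"], char["ay"] = ACCEL[key]
    # Integrate over time: velocity from acceleration,
    # position from velocity.
    char["vx"] += char["ax"] * dt
    char["vy"] += char["ay"] * dt
    char["x"] += char["vx"] * dt
    char["y"] += char["vy"] * dt

def pixel(char, sprite, px, py, background):
    """Colour of the screen pixel at (px, py)."""
    x, y, w, h = char["x"], char["y"], char["w"], char["h"]
    # Within the sprite rectangle, sample the sprite;
    # otherwise, the background colour.
    if x <= px < x + w and y <= py < y + h:
        return sprite[int(py - y)][int(px - x)]
    return background
```

Every line of bookkeeping here—the mutable state, the explicit integration, the bounds check—is exactly the infrastructure that the constraint description leaves to the solver.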
And Whither Hence?
Prolog serves as an example of constraint-based programming, and Haskell FRP libraries are nice, but these are basically just toys compared to what we can do.
There is no reason we cannot make a better, more result-oriented language, or even just make existing process-oriented languages look more result-oriented, to build some momentum for the paradigm. Take a look at Swym (“Say What You Mean”), where the `etc` keyword lets you do things like this:

```
List.byPairs: [[.1st, .2nd], [.3rd, .4th], etc];
byPairs[1..10] == [[1,2],[3,4],[5,6],[7,8],[9,10]];
```
Which even does something sane when you give it lists of other lengths:
```
[1,2,3].byPairs == [[1,2]];
[1,2].byPairs == [[1,2]];
[1].byPairs == [];
```
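In a conventional language you would have to spell out the “how” yourself. A rough Python equivalent—my approximation of the behaviour shown above, not Swym’s actual `etc` mechanism, which is far more general—might be:

```python
def by_pairs(xs):
    """Group a list into consecutive pairs, dropping any
    leftover element, matching the examples above."""
    return [[xs[i], xs[i + 1]] for i in range(0, len(xs) - 1, 2)]

by_pairs(list(range(1, 11)))  # → [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
by_pairs([1, 2, 3])           # → [[1, 2]]
by_pairs([1])                 # → []
```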
This is really exciting stuff, but it shouldn’t have to be, and we can do even better. The implementation of a few “magic” operators like `each` could make even an imperative language look much more result-oriented.
For a while, I was working on a language called Prog. It was to be an imperative language for generic programming, with dependent typing. Much of the magic was implemented with “radioactive types”: unstable types that could exist only during evaluation, and which decayed into ordinary types thereafter. These were used, for instance, to implement various query operators on iterable structures:
```
which (1..10) > 5 == (6, 7, 8, 9, 10);
every (1..5) ** 2 == (1, 4, 9, 16, 25);
every (1..3) * every (1..3) == ((1, 2, 3), (2, 4, 6), (3, 6, 9));
```
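Without radioactive types, a conventional language needs explicit machinery for the same queries. The `which`/`every` helpers below are my Python reading of the examples—the names come from Prog, but the implementations are guesses, and the automatic distribution over operators is lost:

```python
# Hypothetical analogues of Prog's query operators, with the
# implicit operator lifting replaced by explicit functions.
def which(xs, pred):
    """Select the elements satisfying a predicate."""
    return tuple(x for x in xs if pred(x))

def every(xs, f):
    """Apply a function to every element."""
    return tuple(f(x) for x in xs)

which(range(1, 11), lambda x: x > 5)  # → (6, 7, 8, 9, 10)
every(range(1, 6), lambda x: x ** 2)  # → (1, 4, 9, 16, 25)
# Nesting reproduces the multiplication-table example:
every(range(1, 4), lambda a: every(range(1, 4), lambda b: a * b))
# → ((1, 2, 3), (2, 4, 6), (3, 6, 9))
```

The difference is that Prog made the iteration invisible at the use site; here the lambdas put the “how” right back in your face.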
After reading that article and the comments, I’m inspired to work on Prog again, if I can find the time. Expect to see more blarticles about it. A project in the same vein that’s consuming a lot of my energy right now is a tool to help non-programmers make professional-quality interactive media. If you don’t know how to program, you have limited means of creating games and other media applications. All the authoring software that’s out there falls into two categories:
- Tragically underpowered
- Needlessly complicated