In type theory, a pair is denoted by a *product type*, with a constructor *pair* and two projections *fst* and *snd*:

$mathjax[ \textrm{pair}(a, b) : a \times b \\ \textrm{fst} : a \times b \to a \\ \textrm{snd} : a \times b \to b ]$

These projections are *destructive*: the pair is consumed, and one of the values is discarded. However, when we have a large structure, often it’s cheaper to modify parts of it in place, rather than duplicating and reconstructing it. So we’d like to come up with a system that lets us *lend* a field out for modification temporarily, and *repossess* the modified field afterward. That would let us model mutation in terms of pure functions, with the same performance as direct mutation.
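The pure alternative, consuming the structure and rebuilding it, can be sketched in a couple of lines of Rust (`set_fst` is a hypothetical name, not from the text):

```rust
// Pure field update: consume the pair and rebuild it with a new first
// component. For a two-tuple this is cheap, but in general rebuilding
// duplicates the spine of the structure -- exactly the copying we'd
// like to avoid for large structures.
pub fn set_fst<A, B, A2>(p: (A, B), x: A2) -> (A2, B) {
    (x, p.1)
}
```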

Obviously our projections can’t destroy either of the fields, because we want to be able to put them back together into a real pair again. So the functions must be linear—the result has to contain both $mathjax(a)$ and $mathjax(b)$. What result can we return other than $mathjax(a \times b)$? I dunno, how about something isomorphic?

$mathjax[ \textrm{lend-fst} : a \times b \to a \times ((a \times b) / a) \\ \textrm{lend-snd} : a \times b \to b \times ((a \times b) / b) ]$

So this gives us a semantics for quotient types: a quotient $mathjax(a / b)$ denotes a value of type $mathjax(a)$ less a “hole” that we can fill in with a value of type $mathjax(b)$. We can implement this by drawing another isomorphism between $mathjax((a \times b) / a)$ and something like $mathjax(\textrm{ref}(a) \times (\textrm{hole}(\textrm{sizeof}(a)) \times b))$—the “hole” is an appropriately sized slot that we can fill in with a value of type $mathjax(a)$, and the quotient denotes a write-once mutable reference to that slot.
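A minimal Rust sketch of this reading, with the hole tracked only at the type level via `PhantomData` (all names here are hypothetical, and ordinary Rust moves stand in for linearity):

```rust
use std::marker::PhantomData;

// (a × b) / a: the structure minus its first field -- the leftover
// field plus a phantom "hole" recording which type must refill it.
pub struct QuotFst<A, B> {
    rest: B,
    _hole: PhantomData<A>, // the write-once slot awaiting an A
}

// (a × b) / b, symmetrically.
pub struct QuotSnd<A, B> {
    rest: A,
    _hole: PhantomData<B>,
}

// lend-fst : a × b → a × ((a × b) / a)
pub fn lend_fst<A, B>((x, y): (A, B)) -> (A, QuotFst<A, B>) {
    (x, QuotFst { rest: y, _hole: PhantomData })
}

// lend-snd : a × b → b × ((a × b) / b)
pub fn lend_snd<A, B>((x, y): (A, B)) -> (B, QuotSnd<A, B>) {
    (y, QuotSnd { rest: x, _hole: PhantomData })
}
```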

The inverses of the lending functions are the *repossession* functions, which repossess the field and reconstitute the pair by filling in the reference:

$mathjax[ \textrm{repo-fst} : a \times ((a \times b) / a) \to a \times b \\ \textrm{repo-snd} : b \times ((a \times b) / b) \to a \times b ]$

When we fill in the slot, the reference and the quotient are destroyed, and we get back a complete data structure again. The references are relative, like C++ pointers to members, so we could use this to implement, say, swapping fields between two structures.
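The full round trip, including the field swap, can be sketched in Rust (names are hypothetical; moves approximate the consume-on-fill behavior, so the quotient really can't be filled twice):

```rust
use std::marker::PhantomData;

// (a × b) / a: the leftover field plus a type-level hole for an A.
pub struct QuotFst<A, B> {
    rest: B,
    _hole: PhantomData<A>,
}

// lend-fst : a × b → a × ((a × b) / a)
pub fn lend_fst<A, B>((x, y): (A, B)) -> (A, QuotFst<A, B>) {
    (x, QuotFst { rest: y, _hole: PhantomData })
}

// repo-fst : a × ((a × b) / a) → a × b
// Filling the hole consumes the quotient by move, so the "reference"
// cannot be used again afterward.
pub fn repo_fst<A, B>(x: A, q: QuotFst<A, B>) -> (A, B) {
    (x, q.rest)
}

// Swapping the first fields of two structures via lend/repo.
pub fn swap_fsts<A, B, C>(p: (A, B), q: (A, C)) -> ((A, B), (A, C)) {
    let (x, hole_p) = lend_fst(p);
    let (y, hole_q) = lend_fst(q);
    (repo_fst(y, hole_p), repo_fst(x, hole_q))
}
```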

I’m pretty sure that if you made these types non-duplicable, it’d guarantee that the lifetime of a substructure reference would be a sub-lifetime of the lifetime of the structure. So, notably, these lending functions wouldn’t need to actually *do* anything at runtime; they’d simply give us a type-level way to ensure that only one mutable reference to a particular field of a value could exist at a time. The same property is enforced in Rust with its borrow system, but in this system, we don’t need any concept of references or lifetimes. Rather than enforce a static approximation of stack/scope depth, as in Rust, we can enforce a static approximation of structure depth.
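For comparison, Rust's borrow checker already enforces the one-exclusive-reference-per-field property, tracked per field of the structure; a small illustration:

```rust
// Mutating one field through an exclusive borrow; the checker tracks
// borrows per field, much like lending one field at a time.
pub fn bump_first(pair: &mut (i32, String)) {
    let first = &mut pair.0;    // "lend" the first field
    let second = &mut pair.1;   // disjoint field: allowed, like lend-snd
    // let again = &mut pair.0; // rejected: the field is already lent
    *first += 1;
    second.push('!');
    // both borrows end here; the pair is "repossessed" automatically
}
```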

We probably need a subtyping rule, which states that a quotient type is a subtype of its numerator, as long as the denominator is not void:

$mathjax[ a / b \le a, b \ne 0 ]$

You can’t borrow a void field from a structure anyway, because you can’t construct a void value, so the side condition should never come up in practice.

In order for this to be useful in a real language, we probably need an additional “swap” rule, which states that it doesn’t matter in which order we lend or repossess substructures:

$mathjax[ a / b / c = a / c / b ]$

And clearly $mathjax(a / 1)$ is isomorphic to $mathjax(a)$, because every structure is perfectly happy to lend out useless values, and repossessing them does nothing.

We can use negative types to denote borrowing of fields from discriminated unions. If the type $mathjax(a + b)$ denotes a discriminated union like Haskell’s `Either a b` (which holds either a value `Left x` where `x` is of type `a`, or `Right y` where `y` is of type `b`), then we also have two projections—but they’re weird:

$mathjax[ \textrm{lend-left} : a + b \to a + ((a + b) - a) \\ \textrm{lend-right} : a + b \to b + ((a + b) - b) ]$

Here, when we try to lend out a field, it might not be present. So we get either the lent field, or an instance of a smaller sum type. This can be used to implement pattern-matching: try to lend each field in turn, reducing the space of possible choices by one field at a time. If you get all the way down to $mathjax(0)$, you have a pattern-match failure. This automatically rules out redundant patterns: you can’t try to match on a field you’ve already tried, because it’s no longer present in the type.
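A sketch of this peeling in Rust, with `Result` standing in for sums and `Infallible` for the empty type (runtime behavior is the identity; only the types narrow; all names are hypothetical):

```rust
use std::convert::Infallible;

// lend-left : a + b → a + ((a + b) − a). Once Left is ruled out,
// (a + b) − a is just b, so at runtime this is the identity; the work
// happens entirely at the type level.
pub fn lend_left<A, B>(s: Result<A, B>) -> Result<A, B> {
    s
}

// Matching a three-way sum by peeling one alternative at a time; the
// remainder type shrinks each step, and reaching Infallible (the
// empty type) means no cases remain -- a statically impossible match
// failure.
pub fn describe(s: Result<i32, Result<bool, Infallible>>) -> String {
    match lend_left(s) {
        Ok(n) => format!("int: {}", n),
        Err(rest) => match lend_left(rest) {
            Ok(b) => format!("bool: {}", b),
            Err(v) => match v {}, // zero alternatives left: unreachable
        },
    }
}
```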

That’s about as far as I’ve gotten with this line of thinking. If anyone has ideas for how to extend it or put it into practice, feel free to comment.
