17 December 2014

Caffeine vs. Nicotine

I had been experiencing some pretty severe anxiety off and on for a few months, and I was aware that my love of caffeine was exacerbating it, so I decided to look for a stimulant that would help me focus without making the anxiety worse.

I eventually settled on nicotine, for a few reasons:

  • It is a stimulant that is known to relieve anxiety.
  • By itself, it is not known to be a carcinogen.
  • It is only significantly addictive in conjunction with MAOIs (monoamine oxidase inhibitors), which are present in cigarettes but not in e-cigarettes.

Experiment

Take ~1mg of nicotine per day by vapouriser, equivalent to 1 or 2 cigarettes, for three weeks, and document the results.

I chose an e-liquid solution of 6mg/mL nicotine suspended in vegetable glycerin (VG) with flavour. Most e-cigarette liquids contain propylene glycol (PG), but I am mildly allergic to this, so I excluded it.

Observations

  • Nicotine is an effective stimulant. It increases my motivation and focus, and makes me more productive by decreasing my propensity for distraction. It is comparable to caffeine in this regard.
  • Caffeine makes me feel anxious. Nicotine makes me feel relaxed.
  • Caffeine interferes with my ability to think creatively. Nicotine does not.
  • Caffeine interferes with my ability to sleep. Nicotine does not.
  • Nicotine reduces my desire to drink alcohol. Caffeine does not.
  • I suffer mild withdrawal symptoms (headaches, irritability) when ceasing caffeine. I also suffer mild withdrawal symptoms (agitation) when ceasing nicotine.
  • Nicotine increases my blood pressure slightly more (4±1mmHg) than caffeine does.

Conclusions

Nicotine is an effective choice for my use case, and I am happy with this choice. The stigma against nicotine appears to be due to its association with cigarettes—which I must emphasise are disgusting, dangerous, and outdated. Nicotine should be reconsidered and accepted for its own merits.

12 September 2014

Artificial Expertise

Artificial expertise is a term I tend to use in real-life discussions, and I would like to document it here to make people aware of the concept. It refers to the specific kind of detailed knowledge you get from working with complex systems that require a great deal of skill to operate—when those skills are not transferable to other systems. The expertise is not inherent to the problem, but incidental to the solution, and thus artificial.

As far as computing science goes, I can give you two very good examples of systems that generate artificial expertise. What’s wonderful is that they’re at absolute opposite sides of the spectrum of delight.

C++ is a cult of artificial expertise because it encourages—often necessitates—the constant consideration of details typically irrelevant to the problem at hand. The choice of passing parameters and storing objects by value, reference, pointer, or smart pointer needs to be considered for every declaration. All this takes place in the exciting context of a complex interplay between implicit conversions, exceptions, mutable memory, and silently undefined behaviour.
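
A single declaration already forces several of these decisions. Here is a contrived sketch (the names and types are mine, purely for illustration) of the options available for one parameter:

#include <memory>
#include <string>

struct Widget { std::string name; };

// From the caller's perspective these all "take a Widget", yet copying,
// aliasing, lifetime, and ownership differ in every case.
void byValue(Widget w) {}                      // copies; the callee owns its copy
void byReference(Widget& w) {}                 // aliases; the callee may mutate
void byConstReference(const Widget& w) {}      // aliases; read-only view
void byPointer(Widget* w) {}                   // may be null; ownership unclear
void byUniquePtr(std::unique_ptr<Widget> w) {} // transfers ownership to the callee
void bySharedPtr(std::shared_ptr<Widget> w) {} // shares ownership via a reference count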

When I use C++, or help my coworkers to, it makes me angry and tired.

Perl is a cult of artificial expertise because it has many “tricks” for producing more compact and expressive code. You could think of a trick as anything non-obvious. I’m loath to use the term because I regularly see programmers labelling as “tricks” many perfectly ordinary language features and programming techniques, such as Python’s list comprehensions or Haskell’s lazy evaluation. In Perl, many of these take the form of sensible defaults that can be left implicit, or special variables used to control the behaviour of common operations.

When I use Perl, I feel empowered and clever. Like C++, Perl gives you many opportunities to improve your code or write it in the manner that you deem best suited to the problem, and in the hands of an expert it’s a powerful tool. Here’s the critical difference: Perl code that I have seen written by a beginner tends to simply not use the “expert” features; C++ code tends to accidentally use them, and often thereby run into undefined behaviour through their miscombination.

Now, knowing either of these languages puts you in imminent danger of getting yourself employed. And I believe this is no coincidence. Creating inefficiency is an excellent way to create jobs—it’s creating efficiency that creates careers.

14 August 2014

Adjective Valence and Linguistic Relativity

There is a concept in linguistics that I could have sworn I’ve read about, but now cannot seem to find mention of anywhere. I call it adjective valence, and examining it can give you insight into how you conceptualise the world around you, how language influences and is influenced by that, and where language is lacking.

The valence of an adjective is its directionality—whether the word is perceived as the essential quality of a trait (positive valence) or its antonym (negative valence), and in what direction the axis goes in spatial metaphors concerning that trait.

For example, hot and cold are antonyms. We can observe that hot has positive valence in English by noting that when something gets hotter, its temperature is said to increase; cold has negative valence because when something gets colder, its temperature decreases. It’s fine for a thing to become more hot, but our language makes it slightly unusual to say less cold for the same concept. This has a sound physical basis: a higher temperature implies a higher average internal kinetic energy.

Not all languages have the same valence for all adjectives. For example, in English, we think of ourselves as moving forward in time, that the future is before us and the past is behind us. This valence leads to many idioms such as “I’m looking forward to it” and “put it behind you”. But in the Aymara language, the future is behind you—after all, you can’t see it—and the past is in front. This was also true in Ancient Greek (ὄπισθεν = behind = in the future, πρόσθεν = in front = in the past) and modern Chinese shows some evidence of a switch from front-past to front-future.

We can use this information to discover linguistic assumptions we make about the world. By inverting the valence of an adjective, we can challenge those assumptions and perhaps achieve insights about whether our intuitions correspond to real physical phenomena. For example, let’s replace temperature with its reciprocal, such that “absolute zero” is considered infinitely high on the scale of coldness. If this were our basic assumption, we would have no trouble accepting that such a temperature is unattainable in any physical system. And it turns out that that quantity, the thermodynamic beta, is a useful one, rigorously defined in terms of the energy and entropy of a system.
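
For concreteness, beta is defined as β = 1/(kT) = (1/k)·∂S/∂E, where T is the absolute temperature, k is Boltzmann's constant, S is the entropy, and E is the energy of the system. As T approaches absolute zero from above, β grows without bound, which is exactly the “infinitely high coldness” the inverted scale suggests.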

This is an argument in favour of the weak form of linguistic relativity, the notion that language influences thought. It’s not difficult to consider the implications of inverting the valence of an adjective, but we rarely think to do so, and talking about the implications is often hilariously difficult. This metal ball has a high mass, and a low—what? Take notice of these things in conversation and try inverting them to peek behind the curtain of your own thoughts.

08 May 2014

Constification in C++11

Here’s a neat little pattern. Suppose you want to initialise some object using mutating operations, and thereafter make the object const, without any overhead. In C++11 you can do just that with a lambda:

const auto v = []{
  vector<int> v;
  for (auto i = 0; i < 10; ++i)
    v.push_back(i * 10);
  return v;
}();

This “inline builder pattern” turns out to be a nice way to confine mutation to a smaller scope, thereby making your code a bit easier to reason about. Of course, the drawback is that objects you would initialise this way are the sorts of objects you might prefer to move rather than copy, and const interferes with that; but that is a problem in C++11 generally.
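
A minimal sketch of that drawback, assuming a vector like the one above: std::move applied to a const object silently falls back to a copy.

#include <utility>
#include <vector>

int main() {
  const std::vector<int> v(10, 42);  // stand-in for a vector built with the pattern above
  // Moving from a const object would have to mutate it, so std::move here
  // quietly selects the copy constructor instead of the move constructor.
  auto w = std::move(v);
  return static_cast<int>(w.size()) - 10;  // 0
}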

It goes to show that lambdas make working with const things a bit nicer all around. This pattern also lets you initialise const members in an object:

struct X {
  X()
    : x([]{
      auto s = 0;
      auto i = 10;
      while (i) s += --i;
      return s;
    }())
    {}
  const int x;
};

Ugly, but it gets the job done. That’s C++ for you.

21 August 2013

Lexical Closures without Garbage Collection

The problem and the solution

When designing my Kitten programming language, I wanted to support lexical closures without the need for a tracing garbage collector. The problem of closure is simple: anonymous functions often use local variables from their enclosing scope; in a memory-safe language, if the inner function escapes, those variables should continue to exist after the outer function exits. Take this classic example of function currying in JavaScript:

// curriedAdd :: Number -> (Number -> Number)
function curriedAdd(x) {
  return function(y) {
    return x + y;
  };
}

var inc = curriedAdd(1);

Here, x must exist after curriedAdd has exited, in order for us to safely and meaningfully call the resulting anonymous function. Since Kitten has no mutable variables, there is no observable difference between capturing a variable by reference and capturing it by value. So the Kitten compiler takes a definition like this:

def curriedAdd (Int -> (Int -> Int)):
  ->x
  { x + }

def inc (Int -> Int):
  1 curriedAdd @

And rewrites it to copy the value of x into the closure of the anonymous function. In low-level pseudocode, this is what’s happening under the hood:

def curriedAdd:
  setlocal0
  $(local0){ closure0 __add_int }

Now there are no references to the local variable within the inner function—only the value of x is captured when the function object is constructed on the stack. The runtime cost is minimal, and no garbage is generated.
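
The same by-value capture can be written explicitly in C++, which is roughly the behaviour the Kitten compiler arranges automatically (a sketch of the idea, not the compiler's actual output):

#include <functional>
#include <iostream>

// The closure stores its own copy of x, so that copy remains valid after
// curriedAdd returns; nothing refers back to the caller's stack frame.
std::function<int(int)> curriedAdd(int x) {
  return [x](int y) { return x + y; };
}

int main() {
  auto inc = curriedAdd(1);
  std::cout << inc(41) << "\n";  // prints 42
}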

The rest of the story

Saying “by value” and “no garbage” is accurate, but not a complete picture. It would be inefficient to always deep-copy all closed values, particularly if they were to exceed the size of a machine word.

So while integers, booleans, and floating-point numbers are all copied, heap-allocated objects such as vectors can instead be shared using reference counting. Coming from such languages as C++, I like that this preserves deterministic memory behaviour. And in the absence of mutation and lazy evaluation, cycles are impossible, so reference counting alone suffices to eagerly reclaim all garbage.
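
In C++ terms, this is roughly the behaviour you get by capturing a std::shared_ptr by value (a sketch of the idea, not Kitten's implementation):

#include <cstddef>
#include <functional>
#include <iostream>
#include <memory>
#include <vector>

// The vector is shared rather than deep-copied: the closure holds one more
// reference to the same immutable data, which is reclaimed deterministically
// when the last reference is dropped.
std::function<int(std::size_t)> makeIndexer(std::shared_ptr<const std::vector<int>> data) {
  return [data](std::size_t i) { return (*data)[i]; };
}

int main() {
  auto data = std::make_shared<const std::vector<int>>(std::vector<int>{10, 20, 30});
  auto at = makeIndexer(data);
  std::cout << at(2) << "\n";  // prints 30
}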

This transformation is equally valid in other languages, as long as we ignore mutation. You can easily rewrite the JavaScript example above to use explicit closure:

// curriedAdd :: Number -> (Number -> Number)
function curriedAdd(x1) {
  return function(x2, y) {
    return x2 + y;
  }.bind(this, x1);
}

It also works when we add mutation. Here’s a function encapsulating an impure counter from a starting value:

// counter :: Number -> (() -> IO Number)
function counter(x) {
  return function() {
    return x++;
  };
}

var c = counter(0);
console.log(c());    // 0
console.log(c());    // 1
console.log(c());    // 2

We can fake pointers using Array to make this work without lexical closure:

// counter :: Number -> (() -> IO Number)
function counter(x) {
  return function(px) {
    return px[0]++;      // dereference
  }.bind(this, [x]);     // reference
}

However, reference cycles are now possible, so a reference-counting collector would need an auxiliary cycle detector to reclaim all garbage.

Optimisations and future work

Production tracing collectors outperform naïve reference-counting collectors by a wide margin. We can make a reference-counting collector that closes that gap through simple optimisations:

  • Reducing the number of bits used in the reference count—most objects have a small number of references.
  • Creating objects with a zero reference count, and only retaining when the object escapes—the vast majority of objects die in the scope they were born.
  • Eliminating redundant reference manipulation, considering only the net reference change in a basic block—most reference changes do not result in deallocation (see the sketch below).
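
To illustrate the last point, here is a contrived C++ sketch, with a hypothetical retain/release API rather than Kitten's actual runtime, of why balanced reference traffic within a block can be dropped:

// Hypothetical reference-counting helpers, for illustration only.
struct Obj { int refs; int payload; };
inline void retain(Obj* o)  { ++o->refs; }
inline void release(Obj* o) { if (--o->refs == 0) delete o; }

// Naive emission would bracket each use of o with a retain/release pair:
//   retain(o); int a = o->payload; release(o);
//   retain(o); int b = o->payload; release(o);
// The net reference change across the block is zero and o is live on entry,
// so the optimised code drops all four calls:
int useTwice(Obj* o) {
  int a = o->payload;
  int b = o->payload;
  return a + b;
}

int main() {
  Obj* o = new Obj{1, 21};
  int result = useTwice(o);  // 42
  release(o);                // the count reaches zero and o is deleted
  return result == 42 ? 0 : 1;
}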

I intend to implement such optimisations in the future, as Kitten evolves. My bet is that as a statically typed, high-level language with low-level semantics, Kitten will someday be suitable for games and simulations. There, deterministic memory behaviour is a big win, since low latency is often more important than high throughput. Still, the ultimate direction of the project remains to be seen—I’m just discovering the language as I go. :)