Programming, philosophy, pedaling.

An rwx Theory of Programming Languages

Feb 13, 2016

Tags: programming

A great deal of effort has been expended on figuring out what makes a programming language "good" (or, more accurately, popular).

What I'm about to propose is really just conjecture, but I think the conclusions drawable from it are amusingly accurate.

The Theory

The popularity of a given programming language can be described accurately as if the language were a file on a Unix filesystem, with read, write, and execute bits. A language with all three "bits" set is more likely to be popular than a language with two or fewer.

That alone is fairly meaningless, so I'll detail exactly what is meant by "readability", "writability", and "executability".

Readability
What does it mean for a programming language to be readable? That's an inflammatory question.

At the risk of being overly simplistic and broad, I'll say that a language's readability is largely a function of 3 characteristics:

Syntactic familiarity.

To a certain extent, this characteristic can be simplified to a single question:

"How similar is it to C?"

For the last 50 years, the vast majority of programming has been done in procedural fashion, in languages with syntaxes directly derived from or heavily inspired by ALGOL and C. Even languages that aren't directly procedural (read: anything remotely object-oriented) regularly borrow constructions from their ALGOL-derived cousins.

This doesn't mean that ALGOL-like syntaxes are objectively good, just that their universal familiarity has a tangible effect on how we approach new languages.

For example, how do you feel about these function calling syntaxes?

(1): foo(bar, baz, quux)
(2): foo bar baz quux
(3): foo bar withBaz: baz withQuux: quux
(4): [quux;baz;bar | foo]

If you're like me, (1) is the most immediately understandable - it's standard function-with-arguments-separated-by-commas-in-parentheses-style. (2) might also be fairly recognizable, if you're used to working in a shell - it's utility-with-stringy-arguments-separated-by-whitespace-style. (3) will feel familiar to Smalltalk programmers - the receiver-object-taking-a-message-and-keyword-arguments-style. Finally, (4) should feel relatively foreign to everybody - the argument list is reversed to reflect the structure of the stack.

Without getting into which one of these styles is objectively best (I wouldn't be able to tell you), it's apparent to most people that (1) is the most readable simply by virtue of being so common. If I see foo(bar, baz, quux) in a language I've never used before, I can be relatively confident that it will behave like the other languages I've seen it in. The principle would be the same, even if we lived in a world where Smalltalk or Lisp styles were overwhelmingly popular.
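As it happens, Ruby sits between styles (1) and (2): the parentheses around arguments are optional, though the commas remain. A minimal sketch, using a made-up foo:

```ruby
# A hypothetical three-argument function.
def foo(bar, baz, quux)
  [bar, baz, quux].join("-")
end

foo(1, 2, 3)  # style (1): parentheses and commas => "1-2-3"
foo 1, 2, 3   # closer to style (2): parentheses dropped, commas kept
```

Both calls are identical to the parser; which one reads as "normal" depends almost entirely on which style you saw first.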

Apparent meaning.

If you had (or have) never written a line of code in your life, which one of these lines is most apparent in meaning? It's okay if neither is apparent:

(1): foobar[0]
(2): foobar.first

Although (1) is what I learned first and is significantly more common, I find (2) much more apparent. Compared to (1), which requires that I know that arrays are (usually) zero-based and accessed with [], (2) only requires that I recognize what I want (getting the first element of foobar) and the general pattern for "doing things" from the line's surroundings (calling a method on an object with .).
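Conveniently, Ruby supports both spellings, so the comparison can be run side by side; a quick sketch:

```ruby
foobar = ["a", "b", "c"]

foobar[0]     # (1): requires knowing about zero-based [] indexing
foobar.first  # (2): states the intent directly
```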

What about this pair?

(1): foo = bar * baz + quux
(2): = foo + * bar baz quux

(2) is conceptually cleaner (no memorization of PEMDAS required), but it's also significantly less apparent to someone who's already gone through basic arithmetic with infix notation.
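To make the precedence point concrete, here's a small Ruby sketch (the variable values are made up): the infix form silently groups multiplication first, and regrouping changes the result.

```ruby
bar, baz, quux = 2, 3, 4

foo = bar * baz + quux          # parsed as (bar * baz) + quux => 10
regrouped = bar * (baz + quux)  # explicit grouping, different answer => 14
```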

These are simple examples, but I think that they demonstrate a truth in language design that we're often not willing to admit, namely that precedent and apparency matter more than conceptual purity. It's nice to think of radically new languages that reimagine common operations in cleaner or more consistent ways, but that novelty carries a real cost in apparent meaning.

Complexity of mental representation.

This ties closely into apparent meaning, although it's slightly different.

Consider each of the following:

(1):

for (int i = 0; i < foobar.length; i++) {
    baz(foobar[i]);
}

(2):

foobar.each(&:baz)

In both examples, I call baz on every element in the foobar array.

However, in the first one, I have to keep track of an index variable (i), a loop condition and iteration (i < foobar.length; i++), and a new scope ({}).

In the second one, I take advantage of a little bit of magic in the form of an each method and a :baz symbol to reduce N operations to a single line. I don't have to worry about my index variable, the correctness of my condition, or my scope.

Although (1) is probably more immediately apparent and much more similar to currently popular styles, it also requires me to maintain a mental representation of details irrelevant to what I'm actually trying to accomplish. I don't want (or need) to know how each element is accessed or that there are foobar.length elements, I just want to apply baz.


Writability

A programming language's writability is closely related to its readability: readability concerns how easily anyone can follow idiomatic programs written by others, while writability concerns how easily an individual can produce idiomatic programs.

My thoughts on writability fall into a few already well-defined categories:


POLA

POLA stands for the Principle of Least Astonishment.

It's hard to think of a language that is uniformly unastonishing (in a good way!), but I often point to Ruby as one that comes close.

Ruby's Array class is a good example of this. Compare the following code snippets:

Testing an array for emptiness


Python:

foobar = []

not foobar # => True
# OR
len(foobar) == 0 # => True


Java:

int[] foobar = {};

foobar.length == 0 // => true


Ruby:

foobar = []

foobar.empty? # => true

Testing an array for element inclusion


Python:

foobar = [1, 2, 3, 4, 5]

6 in foobar # => False


Java:

int[] foobar = {1, 2, 3, 4, 5};
boolean found = false;

for (int i : foobar) {
    if (i == 6) {
        found = true;
    }
}


Ruby:

foobar = [1, 2, 3, 4, 5]

foobar.include?(6) # => false

In both examples, the Ruby solution is fairly unastonishing. Predicates are trailed by "?", and are single English words corresponding to their behavior.
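The same convention carries over cleanly to user-defined classes; a minimal sketch with a made-up Playlist class:

```ruby
class Playlist
  def initialize(songs)
    @songs = songs
  end

  # By convention, predicates are single words ending in "?"
  # and return a boolean.
  def empty?
    @songs.empty?
  end
end

Playlist.new([]).empty?       # => true
Playlist.new(["song"]).empty? # => false
```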

The Python examples tend to be equally short, but not as unastonishing. not foobar relies on empty arrays being considered falsey (just be careful with tuples!). Checking the length is not particularly astonishing (aside from len() being called from the global namespace), but "the size of foobar is zero" is an awfully roundabout way of saying "foobar is empty".

The first Java example is about as short as (and no more astonishing than) its Python counterpart. The second one, on the other hand, is absolutely insane. It could be shortened with a helper like Arrays.asList(foobar).contains(6), except that this is invalid (primitives and Java generics don't mix) and also completely astonishing (an Arrays class? asList?).


TMTOWTDI

"There's more than one way to do it" is a longstanding Perl principle. People don't often think of Perl when it comes to readability, but I would argue that it is one of the most writable languages (perhaps even write-only).

Some examples:

A read-loop that capitalizes input and spits it back

In C, this would involve buffering stdin in a loop, converting each character to its uppercase equivalent (don't forget Unicode!), and printing them back out. The process in Java would be similar.

In Perl:

$ perl -ne 'print uc'

a little more explicitly (and off of the command line):

while (<>) {
    print uc;
}

even more explicit:

while (my $line = <STDIN>) {
    print uc($line);
}

and so on:

while (my $line = readline(*STDIN)) {
    print uc($line);
}

Although this may seem like a shortcut to terse and unreadable code, it is a legitimate reason to like a language (and enjoy writing in it). Perl provides an astounding number of primitives and shortcuts for common operations (look at all of these operators!), making quick scripting painless and straightforward.

(By the way, the Ruby version: ruby -pe '$_.upcase!').
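Unrolled off of the command line, that one-liner expands much like the Perl versions above. A sketch using explicit IO objects (the shout name is mine, not part of any one-liner) so the behavior is easy to check:

```ruby
require "stringio"

# Read each line from input, uppercase it, and write it back out.
def shout(input, output)
  input.each_line { |line| output.write(line.upcase) }
end

shout(StringIO.new("hello\nworld\n"), $stdout)  # prints HELLO, then WORLD
```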


Executability

This is probably the murkiest of the three "bits".

Let's clarify it:

Ease of setup

Let's say I found a cool program written in "Etaoin". How am I going to run it?

If Etaoin is compiled and the developer was kind enough to provide a package or installer, I'd probably just download their package and let my system take care of the little details. This is the best-case scenario.

But what if Etaoin is interpreted? Well, the developer might bundle the interpreter into a package, but that would be a bit of overkill. It would probably be better to install Etaoin separately, a process that involves either a package or manual compilation, depending on Etaoin's popularity.

Once I've got the interpreter, what about the program's dependencies? Does Etaoin have a package manager? Are the dependencies available through it? Are they compatible with my release of the language? Am I going to need other languages and tools to build them?

Whenever I see a new language or an interesting project written in a new language, these are the very first questions I ask. It's a lot harder to justify installing a relatively simple program if it requires me to manually build an interpreter and dependencies, solely by virtue of being written in an uncommon language.

Of course, this is not a completely fair or accurate characterization. It's a little ridiculous to place the onus of "executability" solely upon package maintainers.

Integration with the system

Now that I have Etaoin all set up, it's time to run this cool program.

But wait, Etaoin programs run in a virtual machine that needs to be invoked:

$ etaoin cool_program # runs cool_program.et

This isolation between my system and the actual program is a little annoying, but it's not terrible. An alias or script hides it away.

But what if the etaoin executor hides my system environment away? What if, for the sake of security or misguided "cleanness", I'm not allowed to create non-Etaoin subprocesses? What if the runtime doesn't respect the basic semantics of the underlying system (signals, pipes, sockets, virtual filesystems)?

A gilded cage is still a cage. From the user's perspective, this doesn't matter all that much (so long as the program actually works). However, from the programmer's perspective, a language simply isn't very useful (or perhaps convenient) if it exists inside a bubble.

(A slightly more real-world example: Smalltalk is an incredibly powerful language that, in my opinion, is both readable and writable. However, the common practice of implementing Smalltalk dialects as whole self-contained "worlds" makes it difficult to run many Smalltalk programs without entire graphical suites).

Concluding Thoughts

Is this theory accurate? Maybe.

These characteristics are in chorus and in conflict with each other, often simultaneously. In any given language, the ones that triumph (and fail) are fairly arbitrary.

Even if this model doesn't correctly predict the popularity of future programming languages, I'd wager that it's amusingly accurate for languages that we consider currently popular. Even more amusingly, disagreements about whether a given language is popular can probably be accurately contextualized as disagreements over which "bits" are most important.

- William