I designed a programming language and it looks like this.

For reasons I talk about here, I’m going to try to create a programming language. So far I’ve got basically a first-draft design.

There is a specific idea I decided I want to explore, when I do this: Programming languages duplicate too much. Programming languages often have multiple syntaxes that do very similar things, or multiple underlying concepts that do very similar things. It is sometimes possible to collapse these similar things into one thing, and when we do, I usually like the results better.

For example, many languages (Perl, Python, PHP) have both a dictionary type and an object type, but the two are used in effectively the same way; on the other hand Lua collapses dictionaries and objects into one type (tables), and makes an object field lookup identical to a dictionary string lookup. Or most object-oriented languages distinguish objects and classes, but prototype-based languages show that you can get by with just objects; if objects can inherit from other objects, then a “class” is just a pattern for a particular kind of object. When you collapse ideas together like this, or build language features on top of existing features rather than adding new primitives, you reduce both the amount of mental overhead in thinking about code implementation and the amount of redundant syntax. There’s usually a lot of redundant syntax. C++ uses . to access a field from a reference, and -> to access a field from a pointer. Most languages use (x) to indicate an argument to a function, and [x] to indicate an index to an array. Why? If pointers and references were just special cases of one underlying concept, or if arrays and functions were, you could use one syntax for each pair; you wouldn’t have to mentally track what each variable is, and you wouldn’t have to do all the obnoxious manual refactoring when you suddenly decide to replace a reference with a pointer somewhere or vice versa.

In the language I’ve been thinking about, I started with Lua’s “Table” idea– what if I built objects out of dictionaries?– and decided to take it one step further, and build both objects and dictionaries out of functions. In this language, there’s one underlying data structure that functions, objects, dictionaries, and some other stuff besides are just special cases of– design patterns of.

Taking a cue from Smalltalk, I’m going to call this underlying structure “blocks”.

Blocks are just functions

A block, for purposes of this blog post, is a unary function. It takes exactly one argument, and it returns a value. Anywhere in this blog post I say “blocks”, I could have just written “functions”. I’m going to mostly use the “block” jargon instead of saying “functions” because some blocks will be “used like” functions and some very much will not be.

In my language, you’ll define a function just with “=”:

    addOne ^x = x + 1

The ^x is an argument binding. The ^ signals to the language that the right side of the = needs to be a function body (a closure). If on the next line you just said

    y = addOne 3

That would just assign 4 to the variable “y”, it would not create a function.

Blocks are pattern-matched functions

A big part of this project is going to be that I really like the ideas in functional languages like ML or Haskell, but I don’t actually enjoy *writing* in those languages. I like OO. I want a language that gives me the freedom and expressiveness of FP, but comfortably lets me code in the OO style I use in Python or Lua or C++. So I’m going to steal as many ideas from FP languages as I can. Three really important ideas I’m going to steal are closures, currying, and pattern matching.

In case you don’t know those languages, let me stop and explain pattern matching real quick. You know how C++ lets you function overload?

    // In C++
    void addOneHour(int &k) { k = (k + 1) % 12; }
    void addOneHour(float &k) { k = fmod(k + 1.0, 12); }

Well, pattern matching is as if you could switch not just on type, but also on value:

    // In hypothetical-C++
    void addOneAbsolute(int &k where k > 0) { k = k + 1; }
    void addOneAbsolute(int &k where k < 0) { k = k - 1; }
    void addOneAbsolute(0) { } // Do nothing

That last line– the one demonstrating we could write a function whose pattern matches only *one single value*– is going to be important to this language. Why?
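
For a version you can actually run, Python’s structural pattern matching (3.10 and up) can express roughly the same idea, including the match-one-single-value case. This is just an analogy in an existing language, not a sketch of the language being designed:

    # Rough Python analogy: dispatch on a guard, or on one single value.
    def add_one_absolute(k):
        match k:
            case 0:                    # matches only the single value 0
                return 0
            case int() if k > 0:
                return k + 1
            case int() if k < 0:
                return k - 1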

Blocks are dictionaries

In my language, if I want to assign more than one pattern to a single block, I just use = multiple times:

    factorial ^x = x * factorial (x - 1)
    factorial 0 = 1

“Factorial” is a block. The way I’m looking at it, a block is just a data structure which maps patterns to closures. It’s like a dictionary, but some of the keys (the ones with bound variables) match multiple values.
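
To make that concrete, here is a minimal sketch in Python of that data structure: an ordered list of (pattern, closure) pairs, where a pattern is just a predicate. The class and method names are invented for illustration, not the real machinery; note that the single-value pattern is registered first here so it wins:

    # A "block" as an ordered list of (pattern, closure) pairs.
    # Calling the block tries each pattern in the order it was assigned.
    class Block:
        def __init__(self):
            self.pairs = []
        def assign(self, pattern, closure):
            self.pairs.append((pattern, closure))
        def __call__(self, arg):
            for pattern, closure in self.pairs:
                if pattern(arg):
                    return closure(arg)
            raise KeyError(arg)

    factorial = Block()
    factorial.assign(lambda x: x == 0, lambda x: 1)                   # factorial 0 = 1
    factorial.assign(lambda x: True, lambda x: x * factorial(x - 1))  # factorial ^x = ...
    assert factorial(5) == 120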

However, we could decide not to assign any bound-variable patterns at all, and then we’d just have a dictionary or an array:

    nameOfMonth 1 = "January"
    nameOfMonth 2 = "February"
    nameOfMonth 3 = "March"
    ...

Blocks are objects

Here I want to introduce a data type called an “atom”. This is an idea stolen from Erlang (and possibly Ruby?). Technically an atom is an “interned string”. It’s something that the programmer sees as a string, but the compiler sees as an integer (or a pointer, or something which has a constant-time comparison). You get at the atom by putting a . before a symbol; the symbol is the name of the atom:

    x = .atomname

It’s cheaper to compare atoms than strings (.atomname == .atomname is cheaper than “atomname” == “atomname”) and cheaper to use them as dictionary lookup keys. This means atoms work well as keys for fields of an object. Objective-C for example actually uses atoms as the lookup keys for its method names, although it calls them “selectors”. In my language, this looks like:
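
Python interns strings too, which gives a rough feel for what “interned” buys you; this is just an illustration of the concept, not how atoms would be spelled:

    # Interning sketch: two interned copies of the same text are literally
    # the same object, so comparing them is a constant-time identity check.
    import sys
    a = sys.intern("atomname")
    b = sys.intern("".join(["atom", "name"]))   # built separately, then interned
    assert a is b                               # identity check, not char-by-char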

    constants.pi = 3.14
    constants.e = 2.71
    constants.phi = 1.61

Notice this looks like normal object syntax from any number of languages. But formally, what we’re doing is adding matching patterns to a function. What’s cool about that is it means we’ll eventually be able to use machinery designed for functions, on objects. Like to skip ahead a bit, eventually we’ll be able to do something like

    map constants [.pi, .e, .phi]

and this will evaluate to an array [3.14, 2.71, 1.61].
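
Here is the spirit of that in a couple of lines of Python, pretending the “object” is literally a function from atom-like keys to values (the names are placeholders):

    # If the "object" is just a function from keys to values, ordinary
    # function machinery such as map applies to it directly.
    constants = {".pi": 3.14, ".e": 2.71, ".phi": 1.61}.get
    print(list(map(constants, [".pi", ".e", ".phi"])))   # [3.14, 2.71, 1.61]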

What’s up with the square brackets? Oh, right. Well, I think all this “constants.” nonsense is gonna get kinda tiresome. So let’s say there’s a syntax like:

    constants = [ pi = 3.14, e = 2.71, phi = 1.61 ]

Notice I say “pi” and not “.pi”– on the left side of an =, the initial “.” is implicit. More on that in a moment.

One other thing. Inside of the [ ], there exists an implicit “this” variable, corresponding to the object the [ ] creates. So if you say

    counter = [
        count = 0
        increment ^x = { this.count = this.count + x }
        decrement ^x = { this.count = this.count - x }
    ]
    counter.increment 1
    counter.increment 3

Then at the end of this string of code “counter.count” is equal to four.

Blocks are prototypes

What if we want more than one counter object? Well, you’ll notice an interesting consequence of our pattern matching concept. Let’s say I said:

    counter = [
        init ^x = { this.count = x }
        increment ^x = { this.count = this.count + x }
        decrement ^x = { this.count = this.count - x }
    ]

    counter_instance ^x = counter x
    counter_instance.init 3
    counter_instance.increment 5

When we say “counter_instance.whatever”, the language interprets this as calling the block counter_instance with the argument .whatever. So if counter_instance is defined to just re-call “counter”, then on the next line saying “counter_instance.init 3” will fetch the block stored in counter.init, and then that block gets called with the argument 3. The way the “this” binding works is special, such that counter_instance.init gets invoked “on” counter_instance– “this” is equal to counter_instance, not counter.

The syntax we used to make counter_instance “inherit” is pretty ugly, so let’s come up with a better one:

    counter_instance.ditch = counter

I haven’t explained much about how = works, but when we say “counter_instance ^x = “, what we’re really doing is taking a closure with an argument binding and adding it to counter_instance’s implementation-internal key-value store, with the key being a pattern object that matches “anything”. “.ditch” is a shortcut for that one match-anything key slot. In other words, by setting counter_instance.ditch to counter, we are saying that counter is counter_instance’s “prototype”.
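
Here is a small standalone Python sketch of that lookup mechanism, just to illustrate the ditch idea and the “this” binding; all of the names are made up:

    # Prototype lookup via the "ditch": a block holds its own key/value slots
    # plus a fallback block (the ditch). Method calls bind "this" to the
    # receiver, not to the block that happened to own the key.
    class Block:
        def __init__(self, ditch=None):
            self.slots = {}
            self.ditch = ditch
        def lookup(self, key):
            if key in self.slots:
                return self.slots[key]
            if self.ditch is not None:
                return self.ditch.lookup(key)
            raise KeyError(key)
        def call(self, key, arg):
            return self.lookup(key)(self, arg)   # "this" is the receiver

    counter = Block()
    counter.slots["init"] = lambda this, x: this.slots.update(count=x)
    counter.slots["increment"] = lambda this, x: this.slots.update(count=this.lookup("count") + x)

    counter_instance = Block(ditch=counter)      # counter_instance.ditch = counter
    counter_instance.call("init", 3)
    counter_instance.call("increment", 5)
    assert counter_instance.lookup("count") == 8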

Something to make clear here: the lines inside a [ ] aren’t “magic”, like C++ inside a struct declaration or anything. They’re just normal lines of code, like you’d find inside a { }. The difference is the insides of [ ] are using a specially prepared scope with access to a “this” and a “super”, and at the end of the [ ] the [ ] expression returns the scope into which all these values are being assigned (“this”). The upshot is you could easily have the first line of your [ ] be something like an “inherit counter;” call that sets the ditch and does some various other fix-up to make this prototype system act more like some other kind of object system, like a class system (I like classes). This sort of thing is possible because

Blocks are scopes

Like most languages, this one has a chain of scopes. You’ll notice above I offhandedly use both ( ) and { } ; these are the same thing, in that they’re a series of statements which evaluate to the value of the final statement:

    x = ( 1; 2; 3 )

…sets x equal to 3. The 1; and 2; are noops. (Semicolon is equivalent, in the examples I’ve given here, to a line ending. There’s also a comma, which is effectively a semicolon but subtly different; the difference is not worth explaining right now.)

The one difference between { } and ( ) is that { } places its values into a new scope. What is a scope? A scope is a block. When you say

    a = 4

The unbound variable a is atom-ized, and fed into the current scope block. In other words “a” by itself translates to “scope.a”. When you create a new inner scope, say by using { }, a new scope block is created, and its ditch is set to the block for the enclosing scope. The scope hierarchy literally uses the same mechanism as the prototype chain.
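
The same delegation, applied to scopes, as a tiny standalone sketch with plain dicts standing in for blocks (names invented):

    # A scope is a key/value store plus a "ditch" pointing at the enclosing
    # scope; name lookup falls through the ditch, just like the prototype chain.
    def make_scope(ditch=None):
        return {"__ditch__": ditch}

    def lookup(scope, name):
        while scope is not None:
            if name in scope:
                return scope[name]
            scope = scope["__ditch__"]
        raise NameError(name)

    outer = make_scope()
    outer["a"] = 4
    inner = make_scope(ditch=outer)     # entering { } creates a child scope
    inner["b"] = 5
    assert lookup(inner, "a") == 4      # falls through to the enclosing scope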

Block constituents are properties (or: blocks are assignment statements)

Non-language geeks may want to skip this section.

I’ve been pretty vague about what = does, and that’s because it has to do several layers of things (matching items that already exist, binding variables, wrapping closures, and actually performing assignment). However, ultimately = must write [pattern, closure] pairs into one or more blocks. = cannot, however, actually write anything by itself. Ultimately, when = decides it needs to assign something, it is calling a “set” method.

    a = 4

Is ultimately equivalent to

    scope.set .a 4

That = is sugar for .set is a small detail, but it has some neat consequences. For one thing, since everything that happens in this language is curryable, it means you can trivially make a function:

    a_mutator = set.a

…which when called will reassign the “a” variable within this current scope (remember, “set” by itself will just be “scope.set”). For another thing, this means you can create a “property” for a particular variable:

    set.a ^x = ( b = x + 1 )
    a = 3

After this code runs, “b” will be equal to 4 (and “a” will still be equal to a function that mutates “b”).
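
Python properties are the nearest familiar analogue; here is a rough sketch of assignment being routed through an overridable per-name setter. The class and method names are invented, not the proposed semantics:

    # Assignment routed through an overridable setter, roughly what
    # "set.a ^x = ..." would mean.
    class Scope:
        def __init__(self):
            object.__setattr__(self, "_setters", {})
            object.__setattr__(self, "_values", {})
        def define_setter(self, name, fn):
            self._setters[name] = fn
        def __setattr__(self, name, value):
            if name in self._setters:          # a custom "set.a" exists
                self._setters[name](value)
            else:
                self._values[name] = value     # default .set behaviour
        def __getattr__(self, name):
            return self._values[name]

    s = Scope()
    s.define_setter("a", lambda x: setattr(s, "b", x + 1))
    s.a = 3
    assert s.b == 4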

The existence of .set will also have some interesting effects once we have types and therefore a type checker. I’ve been kinda vague about whether = has “set” or “let” semantics– that is, if you assign to a variable does it auto-instantiate or must you predeclare it; if there is a variable by the assigned name in the ditch, does assignment shadow in the assigned block or reassign in the parent block; etc. And the answer is it doesn’t much matter for purposes of this post, because any of the possible things that happen when you set a field (“not declared” error thrown, assigned to top-level block, assigned to a parent block) could just be all things that could and do happen in different blocks, depending on what that block’s .set is set to. For example, it would probably make sense for object blocks and scope blocks to have a different last-ditch “.set” behavior, or to allow different source files to have different “.set”s for their file-level scopes (“use strict”).

On that note, let’s talk about types. There’s a lot of very exciting stuff happening in the study of types in programming languages right now, both types as used in languages and types as used in extra-lingual static analysis tools. I don’t understand a lot of this research yet (and I want to learn) but I think I understand enough to have an idea of what’s possible with types right now, and that means I know how I want types in this language to work.

Blocks are types

Let’s say we have a syntax variable:type that we can use to constrain the arguments of a function.

    factorial ^x : int = x - 1

When this function is called, there will be a runtime check: if “x” is not an int, it will be a runtime failure. Let’s say we can use the a:b construct inside expressions too:

    square ^x = ( x * x:float ) :: stateless

Let’s say that :: instead of : indicates that the type is being applied not to the value returned by that parenthesis, but to the implicit “function” defined by the parenthesis itself. “stateless” is a type that applies to functions; if we assert a function is “stateless” we assert that it has no side-effects, and its resulting value depends only on its inputs. (In other words, it is what in another language might be called “pure”.)

There’s some kind of inferred typing system in place. There’s a compile-time type checker, and when it looks at that “square” function it can tell that since “x” is a float in one place inside the expression, the “x” passed into square must itself be a float. It can also tell that since the only code executed in “square ^x” is stateless, the function “square ^x” is also stateless. Actually the “stateless” annotation is, from the checker’s perspective, unnecessary: if the checker has enough information about x to know the * in (x * x) is a stateless operation– which, if it knows x is a float, it does– then square ^x would be stateless anyway.

There’s some kind of a gradual typing system in place. There is a compile-time step which, everywhere square ^x is called, tries to do some kind of a type-proving step and determine if the argument to square is a float. If it can prove the argument is a float, it actually omits the runtime check to save performance. If it *can’t* prove the argument is a float, or it can prove the argument *isn’t* a float, it adds the check and maybe prints some kind of compile-time warning. (To stress: some of these properties, like “stateless”, might be in many cases *impossible* to prove, in which case the checker is conservative and treats “can’t prove” as a failure.) Besides omitting safety checks, there are some other important kinds of optimizations that the type checker might be able to enable. Critically, and this will become important in a moment, if a function is stateless then it can potentially be executed at compile time.

So what are types? Well, they’re just functions. “int” and “stateless” are language-builtin functions that return true if their argument is an int, or a provably stateless function, respectively. (For purposes of a type function, if the type *doesn’t* match, then either a runtime failure or returning false is okay.) Types are values, so you can construct new ones by combining them. Let’s say that this language has the || and && short-circuit boolean operators familiar from other languages, but it also has & and | which are “function booleans”– higher-order functions, essentially, such that a | b returns a function f(x) which is true if either a(x) or b(x) is true. So if “stateless” and “nogc” are two of the builtin type functions, then we can say:

    inlineable = stateless | nogc

And if we want to define a totally unique type? Well, you just define a function:

    positive ^x = x > 0
    sqrt ^x : positive = x / x    # Note: There might be a bug here

Obviously you can’t use just any function here– there would have to be some specific type condition (probably something like the “inlineable” I describe above) that any function used as a type in a pattern would be required to conform to. This condition would begin and end with “whatever the type checker can efficiently prove to apply or not at compile-time”.
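
As a sketch of the idea in Python, with “types” as plain predicate functions and | modeled as an explicit combinator (every name here is a placeholder):

    # A "type" is just a predicate; combinators build new ones out of old ones.
    # The runtime check below is what the compiler would try to prove away.
    def is_int(x):
        return isinstance(x, int)

    def positive(x):
        return x > 0

    def type_or(a, b):                    # the "|" combinator
        return lambda x: a(x) or b(x)

    def checked(pred, fn):                # attach a type predicate to a parameter
        def wrapper(x):
            if not pred(x):
                raise TypeError(f"{x!r} failed the type predicate")
            return fn(x)
        return wrapper

    sqrt = checked(positive, lambda x: x ** 0.5)
    assert sqrt(9.0) == 3.0
    int_or_positive = type_or(is_int, positive)
    assert int_or_positive(-5)            # an int, so it passes despite being negative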

Let’s finally say there’s some sugar for letting you define these “type condition” functions at the same time you define the function to whose parameters they apply; we could reduce that last block down to

    sqrt (^x >= 0) = x / 2    # Square root implementation, WIP 2

One other bit of sugar that having a type system makes easy:

Blocks are argument lists

So everything so far has been a unary function, right? There’s only so much we can do with those. This language is set up for currying– that’s how method lookup works, after all– and I would like to offer explicit sugar for curry:

    curryadd ^x ^y = x + y
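
In Python terms the explicit-curry form is just nested unary closures, a trivial sketch:

    # Explicit currying: each ^ introduces one more unary closure.
    def curryadd(x):
        return lambda y: x + y

    assert curryadd(3)(4) == 7
    add_three = curryadd(3)        # partial application falls out for free
    assert add_three(10) == 13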

But ehh, I don’t actually like using currying for everything. I like argument lists. And I really, *really* like named arguments, like Python uses. Let’s say we have this syntax:

    divide [^numerator, ^denominator = 1] = numerator / denominator

The “parameters” block there? Is totally just a block. But there’s some kind of block wiring such that:

    divide [4, 2]           # Evaluates to 2
    divide [4]              # Evaluates to 4-- "denominator" has a default argument
    divide [9, denominator=3]                       # Evaluates to 3
    divide [denominator = 4, numerator = 16]        # Evaluates to 4
    divide [ ]       # Compile-time error -- assignment for "numerator" not matched

There’s some sort of block “matching” mechanism such that if the argument block can be wired to the parameter block, it will be. I don’t have an exact description handy of how the wiring works, but as long as blocks remember the order in which their (key, value) pairs are assigned, and as long as they can store (key, value) pairs where exactly one of key and value is (no value), then such a matching mechanism is at least possible.
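
Here is one way such a wiring could plausibly work, sketched in Python: the parameter block is modeled as an ordered list of (name, default) pairs, and the argument block supplies positional and/or named entries. All the names are illustrative:

    # Toy argument-block "wiring".
    _MISSING = object()

    def wire(params, positional=(), named=None):
        named = dict(named or {})
        bound = {}
        pos = list(positional)
        for name, default in params:
            if name in named:
                bound[name] = named.pop(name)
            elif pos:
                bound[name] = pos.pop(0)
            elif default is not _MISSING:
                bound[name] = default
            else:
                raise TypeError(f"assignment for {name!r} not matched")
        return bound

    divide = [("numerator", _MISSING), ("denominator", 1)]
    assert wire(divide, positional=[4, 2]) == {"numerator": 4, "denominator": 2}
    assert wire(divide, positional=[4]) == {"numerator": 4, "denominator": 1}
    assert wire(divide, positional=[9], named={"denominator": 3}) == {"numerator": 9, "denominator": 3}
    assert wire(divide, named={"denominator": 4, "numerator": 16}) == {"numerator": 16, "denominator": 4}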

My expectation is that almost all functions in this language will use the argument blocks for their parameters, and almost all invocations will have an argument block attached.

Blocks are macros

I wanna go back here and look at something closer: We’ve defined that there’s some subset of this language which can be run at compile time, and that the type checker can identify which functions are in that category. I think this is a pretty powerful concept, because it means the language can use *itself* as its macro language.

So far in this post, you’ve seen three main kinds of syntax in the code samples: Unary function application (again, a field lookup like a.b.c is really just a bunch of currying), “=”, and little extra operators like “+”. What I’m going to assert is that the extra operators– and also maybe =, and maybe even [ ]– are actually just rewrite rules. So for the line:

    3 + square 4

Before actually being executed, this line is transformed into

    3 .plus ( scope .square 4 )

“3”, like anything else, is a block. Like in Io or Self, adding three to four is just invoking a particular method on the 3 object. In this language “+”, the symbol, is just a shortcut for .plus, with parser rules to control grouping and precedence. (If we actually just wrote “3 .plus square 4”, then the currying would try to interpret this as “(3 .plus square) 4”, which is not what we want.)

There’s some kind of a syntax for defining line-rewrite rules, something like:

    op [ symbol = "!", precedence = 6, replace = .not, insert = .unary_postfix, group = .right_inclusive ]
    op [ symbol = "*", precedence = 5, replace = .times, insert = .infix, group = .both ]
    op [ symbol = "+", precedence = 4, replace = .plus, insert = .infix, group = .both ]
    op [ symbol = "==", precedence = 3, replace = .eq, insert = .infix, group = .both ]
    op [ symbol = "&&", precedence = 2, replace = .and, insert = .infix, group = .both ]
    op [ symbol = "||", precedence = 1, replace = .or, insert = .infix, group = .both ]

Which means for something like

    result = 12 * 2 + 9 == 3 + 8 * 4
    result = !parser.valid 34 && result

Ultimately what’s actually being executed is:

    scope .set .result ( ( ( 12 .times 2 ) .plus 9 ) .eq ( 3 .plus ( 8 .times 4 ) ) )
    scope .set .result ( ( ( scope .parser .valid 34 ) .not ) .and ( scope .result ) )
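
To show the flavor, here is a toy token rewriter in Python driven by a little op table like the one above. It just folds the highest-precedence operator first, which happens to be enough for this example; it is not the real parser:

    # Toy precedence-driven rewrite into .method form.
    OPS = {"*": (".times", 5), "+": (".plus", 4), "==": (".eq", 3)}

    def rewrite(tokens):
        tokens = list(tokens)
        while any(t in OPS for t in tokens if isinstance(t, str)):
            # pick the leftmost operator of highest precedence and fold it
            i = max((j for j, t in enumerate(tokens)
                     if isinstance(t, str) and t in OPS),
                    key=lambda j: OPS[tokens[j]][1])
            method = OPS[tokens[i]][0]
            tokens[i - 1:i + 2] = [[tokens[i - 1], method, tokens[i + 1]]]
        return tokens[0] if len(tokens) == 1 else tokens

    print(rewrite(["12", "*", "2", "+", "9", "==", "3", "+", "8", "*", "4"]))
    # [[['12', '.times', '2'], '.plus', '9'], '.eq', ['3', '.plus', ['8', '.times', '4']]]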

So this is fine for symbols like + and - which operate on two clearly-defined values, but what about something more complicated like “=”? Well, there ought to be some kind of way to pass “op” a .custom function, which takes in a list of lexed tokens representing a line and returns a transformed list of tokens. At that point you can do pretty much anything. “=” might be the *one* thing that you can’t implement this way because = does special things involving adding bindings. But short of that, custom “op”s would be sufficient even for things like, I don’t know, flow control:

    if ( a == 4 ) { k.x = 3 } else { k.x = 4 }

I may be getting into the language-geek weeds again here, but I’m gonna walk through this: Let’s say I have a higher-order function “if ^pred ^exec” which takes functions “pred” and “exec” and executes “pred” (pred is probably nullary… which I haven’t decided what that means in this language yet). If the result is true, it executes “exec” and returns the void combinator (v ^x = v); if the result is false, it returns a function which expects as argument either .elsif (in which case it returns if) or .else (in which case it returns a function that takes a nullary function as argument and evaluates it). We’ve now defined the familiar if…elsif…else construct entirely in terms of higher-order functions, but actually *using* this construct would be pretty irritating, because the “pred” and “exec” blocks couldn’t just be ( ) or { } as people expect from other languages; they’d have to be function-ized (which means annoying extra typing: toss some ^s in, or however lambdas are made in this language). But we can declare “if”, “else” and “elsif” rewrite ops: “if” ^-izes the next two tokens and then replaces itself with just “if” again; “else” and “elsif” ^-ize the next one token each and then replace themselves with .else or .elsif. If we do this, then the familiar if… else syntax above just *works*.
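
A compressed Python sketch of that chain, with lambdas standing in for the nullary blocks and selector strings standing in for .elsif and .else (all the names are invented):

    # if/elsif/else as plain higher-order functions.
    def void(_):
        return void                        # the void combinator: eats later branches

    def if_(pred, body):
        if pred():
            body()
            return void
        def awaiting(selector):
            if selector == "elsif":
                return if_
            if selector == "else":
                return lambda body2: body2()
            raise ValueError(selector)
        return awaiting

    k = {}
    if_(lambda: 4 == 4, lambda: k.update(x=3))("else")(lambda: k.update(x=4))
    assert k == {"x": 3}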

…why am I going into all this, about “if” “else”? Well, because I want to stress that it means *flow control constructs can be implemented in the language itself*, and they will be truly first-class equals with builtins like “if” or “while”. In my eventual vision of this language, the *only* language-level syntactical elements are

    .
    ^
    ( )
    [ ]
    { }
    ;

And *everything* else, including comment indicators and the end-of-line statement-terminator, is just rewrite rules, ideally rewrite rules written in the language itself. Which implies if you don’t like the language’s syntax much, you could just unload the builtin “stdops” module that contains things like “+” and “if”, and substitute your own. “op” rules are local to scopes, so syntax could vary hugely file to file. Which… well, shouldn’t it? I know people who avoid entire languages because they don’t like one or two things about the syntax. Say, people who go “well, Objective-C has a neat object model, but I can’t get used to all those square brackets”. Or, in my last blog post, I specifically said that although they both have lots of features I like, I personally won’t use LISP because I can’t make visual sense of S-expressions, and that I won’t use Javascript because of its casting rules. None of this makes any sense! Languages should be about *features*. They should be models of computation, and we should be evaluating them based on how expressive that model is, based on the features of the underlying flow control or object model or type system or whatever. Syntax shouldn’t have to be part of the language selection process, and if languages let us put the sugar on ourselves instead of pre-sugaring everything then it wouldn’t have to be. I’m probably getting carried away here. What was I talking about? Did I say something just now about casting rules? Let’s talk about casting rules.

Blocks are language machinery

Some syntactical elements, like [ ] and =, might be too complex for the programmer to plausibly implement themselves. The programmer should still have a fair amount of control over how these builtins work. One way to do this would be to have things like [ ] and = implicitly call functions that exist in the current scope. For example, instead of calling .set, = might call a function “assign” that exists in current scope; this would allow individual scopes to make policy decisions such as the variable auto-instantiation rules I mentioned earlier. [ ], at the moment it instantiates the new block, might call a function “setup” that exists in the current scope, allowing the programmer to do things like change the default ditch (base class) or the exact meaning of “inherit”. There might be a function that defines the default type constraints for numbers, or strings, or lines of code. Maybe somewhere there’s a Haskell fan who wants to be able to have every ( ) wrapped up to be ^-ized and every line wrapped in ( ) :: stateless, so that any code *they* write winds up being effectively lazy-evaluated and side-effect-free and they can only communicate with the rest of the language using unsafe monads. They should be able to do that.

One thing I definitely want in is for there to be something like a “fallback” function which, if a particular block is called with an argument whose type doesn’t fit any pattern the block has defined, attempts to map the argument to one of the patterns the block *can* handle. In other words, questions about whether different but interconvertible types like ints and floats can be converted without an explicit cast would be a decision made on a per-project or per-file basis. Or for example if there’s a function

    square ^x:int = x*x

and one of the patterns on the fallback block is

    fallback ^fn : function( ^type, _ ) [^x : type] = fn x    # Follow all that?

(Let’s assume “function” is a higher-order type function such that function(a,b) is the type of a function a -> b, and let’s assume _ has the magic property of “match anything, but don’t capture it” when used in a pattern.)

…then even though the function is only defined for (square x) we could totally get away with calling square[ x ], because the fallback function could match [ x ] to x.

Uh, incidentally, I’m not totally sure this thing with the fallback function is actually in general possible or possible to make performant. But as with most of the stuff in this language, I think it would be fun to try!

Blocks are C++ or Javascript objects in disguise, potentially

There’s one last thing I want to talk about here, although it’s one of the most important features from my perspective. The model we have here– where formally speaking all field accesses execute functions, all field assignments execute functions, and there’s some kind of type checker at work capable of tracking fine detail about what kinds of operations get performed on individual blocks– means that the underlying language-level implementation of “a block” could differ from block to block.

The model I’ve described here for blocks is extremely dynamic and flexible– *too* flexible, such that it would be very difficult to make code using all these dynamic features performant. Except not every block will be using all of the features blocks have. Some blocks will only contain “value” keys (i.e. never a ^var:type pattern), and the type inferrer will be able to prove this the case. The compiler/interpreter could represent this one block internally as a plain hashtable, rather than taking the overhead to enable executing arbitrary code on every access. Some blocks, despite being mutable, will have a fixed known set of keys; the language could maybe represent these in memory as plain structs, and translate atoms to fixed memory offsets at compile time.

And some blocks, at the programmer’s direction, might be doing something else altogether. It’s easy to imagine a “proxy object” where each invocation of an atom and an argument on the block is actually copying the atom and argument and shipping them into another thread or across a network, and the type checker ensures the contract is followed and objects are actually copyable; you could build an Erlang style messaging system this way.

Of particular interest to me, some blocks might actually be guests from some totally other system, say a different language with its own object model. An FFI for some other language could make wrapper blocks for that language’s objects, and put in place type guarantees that the programmer does not interact with those blocks in any way the guest language does not support. The two languages I’d personally really like to be able to interface with this way are C++ and Javascript, because these languages have valuable platform and library support, but also are languages I do not actually want to *write*.

C++ in particular interests me, because I’m not aware of any languages which are “higher level” in the sense that interests me but which can currently adequately interface with C++. C++ is actually pretty tricky to interface with– the big problem here, to my mind, being that method calling conventions (name mangling) vary from compiler to compiler. Actually, on some platforms (by which I mean “Windows”) it’s the case that shared libraries (DLLs) can’t be shared between compilers even if you are writing in C++ yourself. It would probably be necessary, if making a C++ FFI, to target one particular compiler (I’d vote Clang, because it’s extensible and has good platform support). Choosing to target one particular compiler would have a neat side effect: With some knowledge of the compiler’s implementation details, it *ought* to be possible to make blocks that inherit from C++ classes, and have those blocks actually construct fake vtables at runtime that jump into the compiled code for (or interpreter for) my language. Since in my language “classes” and “objects” get constructed by calling functions whose execution could be potentially deferred to runtime, it would be essentially invisible to the programmer when they say [ inherit QObject; objectName = “Block” ] whether a normal block or a pseudo-C++ class is being constructed.

Okay?

Anyway, here’s what I think I’ve got here. I started with one single idea (pattern-matched unary functions that remember the order in which their patterns were assigned), and asked the question “how much of what a language normally does could I collapse into this one concept?”. The answer turns out to be “very nearly EVERYTHING”, including stuff (like type specifications, macros and FFIs) that most languages would wind up inventing effectively an entire sub-language just to support (templates… ugh). I actually *do* want a new programming language, mostly because of that thing I mentioned with not liking any existing language’s C++ interop, and I actually do intend to at least attempt this project. I’m basically just gonna download the Clang source at some point and see how far I get. One thing in my favor is that since this language is based on a small number of simple things that interact in complex ways, I could probably get a minimal implementation going without too much difficulty (especially if I don’t go for types on the first pass).

Oh, one very final thing: I never said what all of this is called. In my head I’m planning to name this language either Emily, because despite being fundamentally OO it strikes me as a fairly “ML-y” language; or Emmy, after Emmy Noether. I’ll decide which later.

That’s all.

Note to commenters: Harsh criticisms are very much welcomed. Criticisms based on demographics or assumptions about my background are not. Thanks.

9 Responses to “I designed a programming language and it looks like this.”

  1. Brian Mock Says:

    I’m definitely intrigued by unifying similar language constructs, but I wonder what effect such a drastic unification would have. In JS, you have a whole swath of module and OOP patterns because the language isn’t prescriptive enough to make one obviously preferred way. Of course, it seems like your intentions are to make a simple language where everything is a pattern, rather than a language construct.

    It’s not clear to me why the `^` is needed in `f ^x = x * f (x - 1)`. Would removing it make the grammar ambiguous? Would that express something else entirely? Could just be the Haskell talking.

    Symbols are cool, though they appear to be a constant point of confusion in the Ruby community. I wonder what the right way to explain them is. Perhaps something like “symbols are names, strings are lists of characters” would help make the point.

    Not really following how the line `counter_instance ^x = counter x` works.

    The example…
    set.a ^x = ( b = x + 1 )
    a = 3
    …is neat. Would it be possible to make `a` somehow pull double duty as a getter and setter for a “private” property `_a` or something?

    I’m a little confused as to whether functions like `stateless` are executed at compile-time or runtime based on your description in the “Blocks are types” section.

    The “Blocks are argument lists” I think unfortunately deviates from your concept of unification. It seems to create multiple calling conventions for unary functions possible, like `f x` and `f [x]`.

    The seamless C++ integration you describe would definitely be super cool, but I wonder if it’s feasible. I tried out JRuby once and found the integration to be shockingly seamless, but I’ve not heard of anything so nice for C++, probably in part due to name mangling and varied calling convention, like you mentioned.

    I think Emily has a nice ring to it. Would fit in nicely with Julia (http://julialang.org/).

    It might be cool to see more examples of how you think the various features you detail could be useful, especially in conjunction with each other.

    Overall, it sounds like a simple but powerful language worthy of being implemented.

  2. Peter Says:

    Overall, the language sounds mildly interesting. It’s probably not going to work, but you will probably learn a lot.

    Now, specific criticisms:

    What you call a “type system”, well, isn’t. The purpose and value of a type system is making illegal states irrepresentable. What you have seems to be a kind of guaranteed optimizations (an optimizer is an automated theorem prover, similarly to a type checker) baked into the language. That is not to say such feature is useless, it’s just an entirely different one.

    Another problem with a type system in a language where everything can be mutated or extended at runtime (can it? your description of a dictionary seems to suggest so) is that it is very hard to conclusively prove anything. The formal semantics of types being created, parameterized with other types, at runtime, is something that has been tried (e.g. OCaml’s first-class modules), but it’s such an awful, user-hostile feature that it is almost useless in practice.

    The problem with blocks-as-macros, pervasive rewrite rules, is that they make sensible error messages almost impossible to write. Your hypothetical users would necessarily be required to read and correlate the preprocessed source with what they write. It’s not very convenient.

    The problem with stealing pattern matching but not anything else from FP is that pattern matching works well when matching syntax reflects construction syntax. I do not understand your language well enough (yet?) to say whether this will work or not, but you may want to keep this in mind. Additionally, if it’s possible to mutate an object while it is being matched, bad things will happen.

    Another problem with pattern matching is that one of the most valuable features of pattern matching is exhaustiveness checks–they’re the reason why pattern matching is much more powerful than ladders of `if’s. But, once again, in a language where everything can be extended, exhaustiveness checks cannot exist.

    To conclude, the criticisms I list are based on spending two years on a language with some very similar features. While I eventually became convinced that the requirements I had were mutually exclusive and a language like this cannot exist, I’m still interested in exploring this design space. (Anyone interested is also welcome to email me at whitequark@whitequark.org.)

  3. Kathryn Long Says:

    This is something that’s a much more fleshed out version of things I have long thought about in terms of unifying language features into underlying pattern. I think it’s really great and would definitely love to see it and use it and see how it works out.

    I kept thinking of specific use cases you hadn’t addressed and was going to point them out, but then kept realizing how easy it would be to apply all this to solve them without any additional changes. So a very good job at providing a simple yet solid paradigm.

    One minor thing: I’m not exactly sure on the need for the “function booleans” you mentioned to be separate from normal logical operators given that the functions were using boolean values to begin with?

    I think the key will be providing a robust but simple standard library. As you mentioned, it could easily be swapped out, but having a good base to work with that has standard language constructs (like number types, boolean values, etc.) will make everything a lot more clear and make refining the language easier (like figuring out making = defined in language like you said)

  4. Getty Says:

    There are some interesting similarities (although also major differences) between Em(il)y and the Io language, which is a very simple SmallTalk-ish prototype-based object-oriented language. Io also uses objects (which are some manner of hash table) that serve major duty as e.g. environments in the language. Assignment is sugar, like your system, so that a := 5 is syntactic sugar for setSlot("a", 5) to assign the value of a key in the context of the scope object in which it’s being called. It doesn’t use a symbol type (which, incidentally, traces its lineage back to Lisps, where it is written as 'foo), but in many of the other areas, there is a fair amount of overlap.

    The way Io approaches functions is very different, though. Your blocks are by necessity “lazy”, because f ^x = x + 1 can’t be evaluated until it knows what x is, so for conceptual symmetry something like f 1 = { print "Hello!"; 1 } would presumably print “Hello!” each time you invoked f 1. Io is in contrast eager, so functions are instead a special case of sequences—they are stored, even at runtime, as a (possibly modifiable) syntax tree. Invoking a function will re-traverse that tree and execute each expression. That means in practice that, despite their similarity, Io and Emily programs would probably work and look very different. I don’t know exactly how, but I’m certainly interested in finding out.

    Finally, I want to pose a specific question: what if I wanted to have multiple block literals nested within each other? Would the inner this entirely shadow the outer one? There could of course be ways around this, perhaps like

    obj = [
        __outer_this = this
        a = 5
        b = [
            a = 8
            c = { this.a + __outer_this.a }
        ]
    ]

    but it might be worth thinking about whether there are alternate ways of handling this case, e.g. by having a built-in notion of an environment graph (like d = { this.a + this.parent.a }) or by having explicit naming for this (like obj = [this: a = 5; b = [self: a = 8 c = { this.a + self.a } ] ]) or something similar. Again, I don’t know what the ramifications would be, but they’d be interesting to consider!

  5. mcc Says:

    Hi, thanks everyone for reading and commenting. Some responses to the comments here:

    Brian: I do take your point about things getting sloppy in JS because the language doesn’t push you toward one particular approach. C++ sort of has a similar issue. However, I think that part of the problem there is JS doesn’t make any single one of the paradigms it enables feel particularly natural. I think if you give people a “preferred” path which is legitimately nice to use and then the ability to step off it, they’ll be more likely to stay on the easy path than if you just try to force people to program a certain way. It also *might* be the case that unifying a lot of concepts into one (as I’m trying to do with Emily), rather than just plain OFFERING a lot of different paradigms (as C++ and Javascript do), makes it safer for different people to adopt different paradigms, because the different paradigms will be more likely to work politely together or at least more likely to be possible to stitch together.

    “It’s not clear to me why the `^` is needed in `f ^x = x * f (x - 1)`. Would removing it make the grammar ambiguous?”

    I need to be able to express the difference between “The pattern of f followed by the value of the current local variable x” versus “The pattern of f with a variable, which should be bound to x”.

    I’m also trying to preserve the rule that “if a closure was just created, a ^ appears in the statement”. Otherwise I think it might get confusing when an expression is evaluated immediately and when it is deferred. I *do* worry though that the ^s everywhere might get obnoxious or offputting to newcomers.

    “Not really following how the line `counter_instance ^x = counter x` works.”

    “Any argument passed to ‘counter_instance’, instead just turn around and pass it off to ‘counter'”

    “set.a ^x = ( b = x + 1 ) …is neat. Would it be possible to make `a` somehow pull double duty as a getter and setter for a “private” property `_a` or something?”

    That would look something like:

    _a = 0
    a ^= _a
    set.a ^x = ( _a = x )

    (^= is not clearly explained in the blog post because I’m still trying to decide if I like it as a syntax. When interpreting it, remember: What you’re actually doing is setting the function executed when scope(.a) is called.)

    “I’m a little confused as to whether functions like `stateless` are executed at compile-time or runtime based on your description in the “Blocks are types” section.”

    In the general case, executed at runtime. However at compile time, an *attempt* will be made to determine if “stateless” is provably always true. In this case it will be executed at compile time, and the runtime check will be omitted.

    “The “Blocks are argument lists” I think unfortunately deviates from your concept of unification. It seems to create multiple calling conventions for unary functions possible, like `f x` and `f [x]`.”

    This is true, and I really would prefer if the [] were always passed. I’m not sure such a thing can sensibly be required though. Atoms kinda work on the assumption no [] is present, as will some of the important builtin functions.

    “The seamless C++ integration you describe would definitely be super cool, but I wonder if it’s feasible.”

    No idea! I’ll find out!

    Peter: “What you call a “type system”, well, isn’t. The purpose and value of a type system is making illegal states irrepresentable. What you have seems to be a kind of guaranteed optimizations”

    So if it wasn’t clear, when type checking fails at runtime, it is a fatal error. A : by itself, if it fails, halts program flow, and a : on a function argument which fails will potentially result in a “no pattern matches”, which also halts program flow. Meanwhile, at compile time failure to demonstrate type safety will be a compile warning or error (idk when exactly it’s appropriate to emit an outright error).

    “Another problem with a type system in a language where everything can be mutated or extended at runtime… is that it is very hard to conclusively prove anything”

    This is definitely something that worries me– the idea of the compiler/optimizer proving various nice properties at compile time, which then some kind of runtime mucking breaks all the assumptions of, which possibly means the compiler/optimizer can’t ever say it proved those properties to start because it has zero way to prove some future runtime mucking will break its assumptions… I think my vision of how this works is that most Emily programs will have a locked-down core with enough restrictive typing that nobody can runtime-muck them, and then the parts where dynamism reigns will just be relatively inefficient.

    “Additionally, if it’s possible to mutate an object while it is being matched, bad things will happen.”

    I’m definitely not planning to allow any kind of memory-sharing parallelism right now.

    “Another problem with pattern matching is that one of the most valuable features of pattern matching is exhaustiveness checks”

    Yeah, I think this is something I am not planning to attempt (and possibly, given some of the things I’ve said blocks and/or the types can do, would as you say not be possible).

    “To conclude, the criticisms I list are based on spending two years on a language with some very similar features”

    Was this language released, is it on your website? What is its name? I’d be curious to see how things turned out.

    Kathryn: “One minor thing: I’m not exactly sure on the need for the “function booleans” you mentioned to be separate from normal logical operators given that the functions were using boolean values to begin with?”

    So here’s an implementation of what I mean by “functional or”, I guess in Python– does this make its use clearer?:

    def function_or(f1, f2):
        return lambda v: f1(v) or f2(v)

    Getty: Io looks really interesting to me! I would like to learn it.

    “what if I wanted to have multiple block literals nested within each other? Would the inner this entirely shadow the outer one?”

    I think the currently described syntax would just make that pretty awkward, yeah. Which is unfortunate, since it’s a common use case.

    “explicit naming for this (like obj = [this: a = 5; b = [self: a = 8”

    I like this, especially because if a syntax like this exists it provides a hint at how I might later offer named breaks (a feature from Perl and Java I *really* like). However, I’m already using the colon for something! ^_^;


    (Note: Based on comments from C Willmore and Z Sparks, I updated the blog post text to fix a couple of typos that made the text confusing in places.)

  6. ᙇᓐ M Edward Borasky (@znmeb) Says:

    “Most languages use (x) to indicate an argument to a function, and [x] to indicate an index to an array. Why?” History. FORTRAN was built on a six-bit character set, which had parentheses ‘(‘ and ‘)’ but not brackets ‘[‘ and ‘]’. So FORTRAN used F(X) and A(I) to mean either function F with argument X or array A indexed by I. Functions and arrays had to be declared, so the compiler could tell which was which.

    Along came ALGOL and its designers decided they should use ‘(‘ and ‘)’ for functions and ‘[‘ and ‘]’ for arrays in their publication language. Implementers with the same crappy 6-bit character set had to use digraphs like ‘(*’ for a left ‘[‘ and ‘*)’ for ‘]’. But the 7-bit ASCII character set came along, with upper and lower case letters, less-than and greater-than signs, parentheses and brackets *and* curly braces! So languages in the ALGOL family, which is just about all of them, could distinguish between function applications and array indexing. They still had to declare functions and arrays, so it’s not clear why they thought the two different syntaxes were a good idea.

  7. ᙇᓐ M Edward Borasky (@znmeb) Says:

    P.S.: The history of programming languages is filled with elegant ideas (APL, LISP 1.5, Scheme, FORTH, Lua, Smalltalk and regular expressions) and some godawful kludges (PL/I, Perl 5, RPG (Report Program Generator), S and its descendent R, C++, JavaScript and just about everything that’s been jammed down the throat of the Java Virtual Machine). Emily or Emmy looks to be a member of the elegant class. I think you’re onto something!

  8. Bach Says:

    This may be too dynamic for my taste but I like the idea of unifying everything under function. This paper http://www.cs.swarthmore.edu/~zpalmer/publications/batsl.pdf talks about a structure called onion and it’s used in combination with pattern matching to build a type checked scripting language. It has a similar idea: use functions to build everything. It might be useful for your type system.

