
I designed a programming language and it looks like this.

Tuesday, April 22nd, 2014

For reasons I talk about here, I’m going to try to create a programming language. So far I’ve got basically a first-draft design.

There is a specific idea I decided I want to explore when I do this: Programming languages duplicate too much. Programming languages often have multiple syntaxes that do very similar things, or multiple underlying concepts that do very similar things. It is sometimes possible to collapse these similar things into one thing, and when we do, I usually like the results better.

For example, many languages (Perl, Python, PHP) have both a dictionary type and an object type, but the two are used in effectively the same way; on the other hand Lua collapses dictionaries and objects into one type (tables), and makes an object field lookup identical to a dictionary string lookup. Or most object oriented languages distinguish objects and classes, but prototype based languages show that you can get by with just objects; if objects can inherit from other objects, then a “class” is just a pattern for a particular kind of object. When you collapse ideas together like this, or build language features on top of existing features rather than adding new primitives, you reduce both the amount of mental overhead in thinking about code implementation and also the amount of redundant syntax. There’s usually a lot of redundant syntax. C++ uses . to access a field from a reference, and -> to access a field from a pointer. Most languages use (x) to indicate an argument to a function, and [x] to indicate an index to an array. Why? If pointers and references were just special cases of one underlying concept, or if arrays and functions were, you could use one syntax for each pair and you wouldn’t have to mentally track what each variable is, you wouldn’t have to do all the obnoxious manual refactoring when you suddenly decide to replace a reference with a pointer somewhere or vice versa.

In the language I’ve been thinking about, I started with Lua’s “Table” idea– what if I built objects out of dictionaries?– and decided to take it one step further, and build both objects and dictionaries out of functions. In this language, there’s one underlying data structure that functions, objects, dictionaries, and some other stuff besides are just special cases of– design patterns of.
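
Just to make that collapse concrete before going further, here is a rough sketch in Python (Python standing in for nothing in particular; all the names are mine) of the idea that a dictionary, and therefore an object, can be modeled as nothing more than a unary function:

    def make_record(pairs):
        # Represent a "dictionary" as a unary function from key to value.
        def lookup(key):
            for k, v in pairs:
                if k == key:
                    return v
            raise KeyError(key)
        return lookup

    constants = make_record([("pi", 3.14), ("e", 2.71)])
    print(constants("pi"))   # 3.14 -- "field access" is just function application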

Taking a cue from Smalltalk, I’m going to call this underlying structure “blocks”.

Blocks are just functions

A block, for purposes of this blog post, is a unary function. It takes exactly one argument, and it returns a value. Anywhere in this blog post I say “blocks”, I could have just written “functions”. I’m going to mostly use the “block” jargon instead of saying “functions” because some blocks will be “used like” functions and some very much will not be.

In my language, you’ll define a function just with “=”:

    addOne ^x = x + 1

The ^x is an argument binding. The ^ signals to the language that the right side of the = needs to be a function body (a closure). If on the next line you just said

    y = addOne 3

That would just assign 4 to the variable “y”; it would not create a function.

Blocks are pattern-matched functions

A big part of this project is going to be that I really like the ideas in functional languages like ML or Haskell, but I don’t actually enjoy *writing* in those languages. I like OO. I want a language that gives me the freedom and expressiveness of FP, but comfortably lets me code in the OO style I use in Python or Lua or C++. So I’m going to steal as many ideas from FP languages as I can. Three really important ideas I’m going to steal are closures, currying, and pattern matching.

In case you don’t know those languages, let me stop and explain pattern matching real quick. You know how C++ lets you function overload?

    // In C++
    void addOneHour(int &k) { k = (k + 1) % 12; }
    void addOneHour(float &k) { k = fmod(k + 1.0, 12); }

Well, pattern matching is as if you could switch not just on type, but also on value:

    // In hypothetical-C++
    void addOneAbsolute(int &k where k > 0) { k = k + 1; }
    void addOneAbsolute(int &k where k < 0) { k = k - 1; }
    void addOneAbsolute(0) { } // Do nothing

That last line– the one demonstrating we could write a function whose pattern matches only *one single value*– is going to be important to this language. Why?
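
(Incidentally, dispatching on a value rather than just a type is not purely hypothetical; Python 3.10’s match statement can express roughly the same overloads, shown here only for comparison:)

    def add_one_absolute(k):
        # Rough Python 3.10+ equivalent of the hypothetical C++ overloads above.
        match k:
            case 0:
                return 0              # the pattern that matches one single value
            case int() if k > 0:
                return k + 1
            case int() if k < 0:
                return k - 1

    print(add_one_absolute(-3), add_one_absolute(0), add_one_absolute(3))   # -4 0 4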

Blocks are dictionaries

In my language, if I want to assign more than one pattern to a single block, I just use = multiple times:

    factorial ^x = x * factorial (x - 1)
    factorial 0 = 1

“Factorial” is a block. The way I’m looking at it, a block is just a data structure which maps patterns to closures. It’s like a dictionary, but some of the keys (the ones with bound variables) match multiple values.

However, we could also not bother assigning any bound-variable patterns at all, and then we’d just have a dictionary or an array:

    nameOfMonth 1 = "January"
    nameOfMonth 2 = "February"
    nameOfMonth 3 = "March"
    ...
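
Here is how I picture that structure, sketched in Python under my own assumptions (an ordered list of (pattern, value) pairs checked in order; the class and method names are mine, not part of the language):

    class Block:
        def __init__(self):
            self.pairs = []                      # remembers assignment order

        def define(self, pattern, value):
            self.pairs.append((pattern, value))

        def __call__(self, arg):
            for pattern, value in self.pairs:
                # A literal key matches one value; a predicate key matches many.
                if pattern == arg or (callable(pattern) and pattern(arg)):
                    return value(arg) if callable(value) else value
            raise KeyError(arg)

    factorial = Block()
    factorial.define(0, 1)     # checked in order, so the single-value entry goes first in this sketch
    factorial.define(lambda x: True, lambda x: x * factorial(x - 1))

    name_of_month = Block()
    name_of_month.define(1, "January")
    name_of_month.define(2, "February")

    print(factorial(5), name_of_month(2))        # 120 February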

Blocks are objects

Here I want to introduce a data type called an “atom”. This is an idea stolen from Erlang (and possibly Ruby?). Technically an atom is an “interned string”. It’s something that the programmer sees as a string, but the compiler sees as an integer (or a pointer, or something which has a constant-time comparison). You get at the atom by putting a . before a symbol; the symbol is the name of the atom:

    x = .atomname

It’s cheaper to compare atoms than strings (.atomname == .atomname is cheaper than "atomname" == "atomname") and cheaper to use them as dictionary lookup keys. This means atoms work well as keys for fields of an object. Objective-C for example actually uses atoms as the lookup keys for its method names, although it calls them “selectors”. In my language, this looks like:

    constants.pi = 3.14
    constants.e = 2.71
    constants.phi = 1.61

Notice this looks like normal object syntax from any number of languages. But formally, what we’re doing is adding matching patterns to a function. What’s cool about that is it means we’ll eventually be able to use machinery designed for functions, on objects. Like to skip ahead a bit, eventually we’ll be able to do something like

    map constants [.pi, .e, .phi]

and this will evaluate to an array [3.14, 2.71, 1.61].
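
(As an aside, plain Python can already fake this particular trick if you treat the lookup itself as a unary function; the point is just that machinery built for functions works unchanged on the “object”:)

    constants = {"pi": 3.14, "e": 2.71, "phi": 1.61}.get     # lookup as a unary function
    print(list(map(constants, ["pi", "e", "phi"])))          # [3.14, 2.71, 1.61]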

What’s up with the square brackets? Oh, right. Well, I think all this “constants.” nonsense is gonna get kinda tiresome. So let’s say there’s a syntax like:

    constants = [ pi = 3.14, e = 2.71, phi = 1.61 ]

Notice I say “pi” and not “.pi”– on the left side of an =, the initial “.” is implicit. More on that in a moment.

One other thing. Inside of the [ ], there exists an implicit “this” variable, corresponding to the object the [ ] creates. So if you say

    counter = [
        count = 0
        increment ^x = { this.count = this.count + x }
        decrement ^x = { this.count = this.count - x }
    ]
    counter.increment 1
    counter.increment 3

Then at the end of this string of code “counter.count” is equal to four.

Blocks are prototypes

What if we want more than one counter object? Well, you’ll notice an interesting consequence of our pattern matching concept. Let’s say I said:

    counter = [
        init ^x = { this.count = x }
        increment ^x = { this.count = this.count + x }
        decrement ^x = { this.count = this.count - x }
    ]

    counter_instance ^x = counter x
    counter_instance.init 3
    counter_instance.increment 5

When we say “counter_instance.whatever”, the language interprets this as calling the block counter_instance with the argument .whatever. So if counter_instance is defined to just re-call “counter”, then on the next line saying “counter_instance.init 3” will fetch the block stored in counter.init, and then that block gets called with the argument 3. The way the “this” binding works is special, such that counter_instance.init gets invoked “on” counter_instance– “this” is equal to counter_instance, not counter.

The syntax we used to make counter_instance “inherit” is pretty ugly, so let’s come up with a better one:

    counter_instance.ditch = counter

I haven’t explained much about how = works, but when we say “counter_instance ^x =”, what we’re really doing is taking a closure with an argument binding and adding it to counter_instance’s implementation-internal key-value store, with the key being a pattern object that matches “anything”. “.ditch” is a shortcut for that one match-anything key slot. In other words, by setting counter_instance.ditch to counter, we are saying that counter is counter_instance’s “prototype”.
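
Sketched once more in Python, under my own assumptions about the mechanics (a “ditch” slot that lookups fall through to, and a receiver that stays fixed while the chain is walked; none of these names are the language’s):

    class ProtoBlock:
        def __init__(self, ditch=None):
            self.slots = {}
            self.ditch = ditch                   # the match-anything fallback slot

        def lookup(self, key):
            if key in self.slots:
                return self.slots[key]
            if self.ditch is not None:
                return self.ditch.lookup(key)    # fall through to the prototype
            raise KeyError(key)

    def increment(this, x):
        # "this" is the receiver the lookup started from, not the block holding the method.
        this.slots["count"] = this.slots.get("count", 0) + x

    counter = ProtoBlock()
    counter.slots["increment"] = increment

    counter_instance = ProtoBlock(ditch=counter)
    counter_instance.lookup("increment")(counter_instance, 3)
    counter_instance.lookup("increment")(counter_instance, 5)
    print(counter_instance.slots["count"])       # 8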

Something to make clear here: the lines inside a [ ] aren’t “magic”, like C++ inside a struct declaration or anything. They’re just normal lines of code, like you’d find inside a { }. The difference is the insides of [ ] are using a specially prepared scope with access to a “this” and a “super”, and at the end of the [ ] the [ ] expression returns the scope into which all these values are being assigned (“this”). The upshot is you could easily have the first line of your [ ] be something like an “inherit counter;” call that sets the ditch and does some various other fix-up to make this prototype system act more like some other kind of object system, like a class system (I like classes). This sort of thing is possible because

Blocks are scopes

Like most languages, this one has a chain of scopes. You’ll notice above I offhandedly use both ( ) and { } ; these are the same thing, in that they’re a series of statements which evaluate to the value of the final statement:

    x = ( 1; 2; 3 )

…sets x equal to 3. The 1; and 2; are noops. (Semicolon is equivalent, in the examples I’ve given here, to line-ending. There’s also a comma which is effectively a semicolon but different and the difference is not worth explaining right now.)

The one difference between { } and ( ) is that { } places its values into a new scope. What is a scope? A scope is a block. When you say

    a = 4

The unbound variable a is atom-ized, and fed into the current scope block. In other words “a” by itself translates to “scope.a”. When you create a new inner scope, say by using { }, a new scope block is created, and its ditch is set to the block for the enclosing scope. The scope hierarchy literally uses the same mechanism as the prototype chain.
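
Python’s collections.ChainMap gives a rough feel for what this buys you (an analogy only, not the proposed mechanism): an inner scope is just a mapping whose lookups fall through to the enclosing one, which is exactly the shape of the ditch.

    from collections import ChainMap

    outer = {"a": 4}
    inner = ChainMap({"b": 5}, outer)   # entering { }: a new scope whose "ditch" is the old one
    print(inner["a"], inner["b"])       # 4 5 -- "a" is found by falling through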

Block constituents are properties (or: blocks are assignment statements)

Non-language geeks may want to skip this section.

I’ve been pretty vague about what = does, and that’s because it has to do several layers of things (matching items that already exist, binding variables, wrapping closures, and actually performing assignment). However, ultimately = must write [pattern, closure] pairs into one or more blocks. = cannot, however, actually write anything by itself. Ultimately, when = decides it needs to assign something, it is calling a “set” method.

    a = 4

Is ultimately equivalent to

    scope.set .a 4

That = is sugar for .set is a small detail, but it has some neat consequences. For one thing, since everything that happens in this language is curryable, it means you can trivially make a function:

    a_mutator = set.a

…which when called will reassign the “a” variable within this current scope (remember, “set” by itself will just be “scope.set”). For another thing, this means you can create a “property” for a particular variable:

    set.a ^x = ( b = x + 1 )
    a = 3

After this code runs, “b” will be equal to 4 (and “a” will still be equal to a function that mutates “b”).
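
A loose Python analogue, using __setattr__ as the nearest familiar hook (this is not the proposed .set machinery, just an illustration of assignment being sugar for a method call that can do arbitrary work):

    class Scope:
        def __setattr__(self, name, value):
            # The "property" installed for a: assigning a actually computes and writes b.
            if name == "a":
                object.__setattr__(self, "b", value + 1)
            else:
                object.__setattr__(self, name, value)

    scope = Scope()
    scope.a = 3
    print(scope.b)    # 4 -- the assignment to "a" ran code instead of storing 3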

The existence of .set will also have some interesting effects once we have types and therefore a type checker. I’ve been kinda vague about whether = has “set” or “let” semantics– that is, if you assign to a variable does it auto-instantiate or must you predeclare it; if there is a variable by the assigned name in the ditch does assignment shadow in the assigned block or reassign in the parent block, etc. And the answer is it doesn’t much matter for purposes of this post, because any of the possible things that happen when you set a field (“not declared” error thrown, assigned to top-level block, assigned to a parent block) could just be all things that could and do happen in different blocks, depending on what that block’s .set is set to. For example, it would probably make sense for object blocks and scope blocks to have a different last-ditch “.set” behavior, or to allow different source files to have different “.set”s for their file-level scopes (“use strict”).

On that note, let’s talk about types. There’s a lot of very exciting stuff happening in the study of types in programming languages right now, both types as used in languages and types as used in extra-lingual static analysis tools. I don’t understand a lot of this research yet (and I want to learn) but I think I understand enough to have an idea of what’s possible with types right now, and that means I know how I want types in this language to work.

Blocks are types

Let’s say we have a syntax variable:type that we can use to constrain the arguments of a function.

    factorial ^x : int = x - 1

When this function is called, there will be a runtime check; if “x” is not an int, it will be a runtime failure. Let’s say we can use the a:b construct inside expressions too:

    square ^x = ( x * x:float ) :: stateless

Let’s say that :: instead of : indicates that the type is being applied not to the value returned by that parenthesis, but to the implicit “function” defined by the parenthesis itself. “stateless” is a type that applies to functions; if we assert a function is “stateless” we assert that it has no side-effects, and its resulting value depends only on its inputs. (In other words, it is what in another language might be called “pure”.)

There’s some kind of inferred typing system in place. There’s a compile-time type checker, and when it looks at that “square” function it can tell that since “x” is used as a float in one place inside the expression, the “x” passed into square must itself be a float. It can also tell that since the only code executed in “square ^x” is stateless, the function “square ^x” is also stateless. Actually the “stateless” is, from the checker’s perspective, unnecessary, since if the checker has enough information about x to know the * in (x * x) is a stateless operation– which, if it knows x is a float, it does know– then square ^x would be stateless anyway.

There’s some kind of gradual typing system in place. There is a compile-time step which, everywhere square ^x is called, tries to do some kind of type-proving step and determine if the argument to square is a float. If it can prove the argument is a float, it actually omits the runtime check to save performance. If it *can’t* prove the argument is a float, or it can prove the argument *isn’t* a float, it adds the check and maybe prints some kind of compile-time warning. (To stress: some of these properties, like “stateless”, might be in many cases *impossible* to prove, in which case the checker is conservative and treats “can’t prove” as a failure.) Besides omitting safety checks, there are some other important kinds of optimizations that the type checker might be able to enable. Critically, and this will become important in a moment, if a function is stateless then it can potentially be executed at compile time.

So what are types? Well, they’re just functions. “int” and “stateless” are language-builtin functions that return true if their arguments are an int, or a provably stateless function, respectively. (For purposes of a type function, if the type *doesn’t* match, then either a runtime failure or a return of false is okay.) Types are values, so you can construct new ones by combining them. Let’s say that this language has the || and && short-circuit boolean operators familiar from other languages, but it also has & and | which are “function booleans”– higher-order functions, essentially, such that a | b returns a function f(x) which is true if either a(x) or b(x) is true. So if “stateless” and “nogc” are two of the builtin type functions, then we can say:

    inlineable = stateless | nogc

And if we want to define a totally unique type? Well, you just define a function:

    positive ^x = x > 0
    sqrt ^x : positive = x / x    # Note: There might be a bug here

Obviously you can’t use just any function here– there would have to be some specific type condition (probably something like the “inlineable” I describe above) that any function used as a type in a pattern would be required to conform to. This condition would begin and end with “whatever the type checker can efficiently prove to apply or not at compile-time”.
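
Here is the “types are just predicate functions” idea sketched in Python; the combinator names and the check-wrapping are my own guesses at the wiring, not the language’s builtins:

    def t_or(a, b):
        return lambda x: a(x) or b(x)       # the | "function boolean"

    def t_and(a, b):
        return lambda x: a(x) and b(x)      # the & "function boolean"

    def is_int(x):
        return isinstance(x, int)

    def positive(x):
        return x > 0

    def checked(type_fn, fn):
        # Runtime check that a clever enough checker could prove away and omit.
        def wrapper(x):
            if not type_fn(x):
                raise TypeError("argument failed its type predicate")
            return fn(x)
        return wrapper

    number = t_or(is_int, lambda x: isinstance(x, float))    # an | combination, like stateless | nogc
    positive_number = t_and(number, positive)
    half = checked(positive_number, lambda x: x / 2)         # stands in for the WIP sqrt above
    print(half(16))                                          # 8.0
    # half(-16) would raise TypeError at runtime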

Let’s finally say there’s some sugar for letting you define these “type condition” functions at the same time you define the function to whose parameters they apply; we could reduce that last block down to

    sqrt (^x >= 0) = x / 2    # Square root implementation, WIP 2

One other bit of sugar that having a type system makes easy:

Blocks are argument lists

So everything so far has been a unary function, right? There’s only so much we can do with those. This language is set up for currying– that’s how method lookup works, after all– and I would like to offer explicit sugar for curry:

    curryadd ^x ^y = x + y

But ehh, I don’t actually like using currying for everything. I like argument lists. And I really, *really* like named arguments, like Python uses. Let’s say we have this syntax:

    divide [^numerator, ^denominator = 1] = numerator / denominator

The “parameters” block there? Is totally just a block. But there’s some kind of block wiring such that:

    divide [4, 2]           # Evaluates to 2
    divide [4]              # Evaluates to 4-- "denominator" has a default argument
    divide [9, denominator=3]                       # Evaluates to 3
    divide [denominator = 4, numerator = 16]        # Evaluates to 4
    divide [ ]       # Compile-time error -- assignment for "numerator" not matched

There’s some sort of block “matching” mechanism such that if the argument block can be wired to the parameter block, it will be. I don’t have an exact description handy of how the wiring works, but as long as blocks remember the order in which their (key, value) pairs are assigned, and as long as they can store (key, value) pairs where exactly one of key and value is (no value), then such a matching mechanism is at least possible.
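
Here is one toy version of that wiring in Python, just to show such a matching mechanism is at least possible; the specific rules below (positional values fill unmatched names in order, defaults fill the rest) are my guesses, not a spec:

    MISSING = object()

    def match_args(params, positional, named):
        # params: ordered list of (name, default); default may be MISSING (i.e. required)
        bound = dict(named)
        unmatched = [name for name, _ in params if name not in bound]
        for value, name in zip(positional, unmatched):
            bound[name] = value                  # positional values fill names in order
        for name, default in params:
            if name not in bound:
                if default is MISSING:
                    raise TypeError("assignment for %r not matched" % name)
                bound[name] = default
        return bound

    divide = [("numerator", MISSING), ("denominator", 1)]
    print(match_args(divide, [4, 2], {}))                    # numerator 4, denominator 2
    print(match_args(divide, [4], {}))                       # numerator 4, denominator 1
    print(match_args(divide, [9], {"denominator": 3}))       # numerator 9, denominator 3
    # match_args(divide, [], {}) raises: assignment for 'numerator' not matched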

My expectation is that almost all functions in this language will use the argument blocks for their parameters, and almost all invocations will have an argument block attached.

Blocks are macros

I wanna go back here and look at something closer: We’ve defined that there’s some subset of this language which can be run at compile time, and that the type checker can identify which functions are in that category. I think this is a pretty powerful concept, because it means the language can use *itself* as its macro language.

So far in this post, you’ve seen three main kinds of syntax in the code samples: Unary function application (again, a field lookup like a.b.c is really just a bunch of currying), “=”, and little extra operators like “+”. What I’m going to assert is that the extra operators– and also maybe =, and maybe even [ ]– are actually just rewrite rules. So for the line:

    3 + square 4

Before actually being executed, this line is transformed into

    3 .plus ( scope .square 4 )

“3”, like anything else, is a block. Like in Io or Self, adding three to four is just invoking a particular method on the 3 object. In this language “+”, the symbol, is just a shortcut for .plus, with parser rules to control grouping and precedence. (If we actually just wrote “3 .plus square 4”, then the currying would try to interpret this as “(3 .plus square) 4”, which is not what we want.)

There’s some kind of a syntax for defining line-rewrite rules, something like:

    op [ symbol = "!", precedence = 6, replace = .not, insert = .unary_postfix, group = .right_inclusive ]
    op [ symbol = "*", precedence = 5, replace = .times, insert = .infix, group = .both ]
    op [ symbol = "+", precedence = 4, replace = .plus, insert = .infix, group = .both ]
    op [ symbol = "==", precedence = 3, replace = .eq, insert = .infix, group = .both ]
    op [ symbol = "&&", precedence = 2, replace = .and, insert = .infix, group = .both ]
    op [ symbol = "||", precedence = 1, replace = .or, insert = .infix, group = .both ]

Which means for something like

    result = 12 * 2 + 9 == 3 + 8 * 4
    result = !parser.valid 34 && result

Ultimately what’s actually being executed is:

    scope .set .result ( ( ( 12 .times 2 ) .plus 9 ) .eq ( 3 .plus ( 8 .times 4 ) ) )
    scope .set .result ( ( ( scope .parser .valid 34 ) .not ) .and ( scope .result ) )
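
Just to convince myself that this rewrite step is as small as it sounds, here is a toy precedence-climbing rewriter in Python; the table mirrors the “op” declarations above, and everything else about it is made up for illustration:

    OPS = {"*": (".times", 5), "+": (".plus", 4), "==": (".eq", 3)}

    def rewrite(tokens, min_prec=0):
        # tokens: a flat list like ["12", "*", "2"], consumed left to right.
        left = tokens.pop(0)
        while tokens and tokens[0] in OPS and OPS[tokens[0]][1] >= min_prec:
            name, prec = OPS[tokens.pop(0)]
            right = rewrite(tokens, prec + 1)    # tighter-binding ops grab the right-hand side
            left = "( %s %s %s )" % (left, name, right)
        return left

    print(rewrite("12 * 2 + 9 == 3 + 8 * 4".split()))
    # ( ( ( 12 .times 2 ) .plus 9 ) .eq ( 3 .plus ( 8 .times 4 ) ) )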

So this is fine for symbols like + and - which operate on two clearly-defined values, but what about something more complicated like “=”? Well, there ought to be some kind of way to pass “op” a .custom function, which takes in a list of lexed tokens representing a line and returns a transformed list of tokens. At that point you can do pretty much anything. “=” might be the *one* thing that you can’t implement this way because = does special things involving adding bindings. But short of that, custom “op”s would be sufficient even for things like, I don’t know, flow control:

    if ( a == 4 ) { k.x = 3 } else { k.x = 4 }

I may be getting into the language-geek weeds again here but I’m gonna walk through this: Let’s say I have a higher order function “if ^pred ^exec” which takes functions “pred” and “exec”, executes “pred” (pred is probably nullary… which I haven’t decided what that means in this language yet), if the result is true it executes “exec” and returns the void combinator (v ^x = v), if the result is false it returns a function which expects as argument either .elsif (in which case it returns if) or .else (in which case it returns a function that takes a nullary function as argument and evaluates it). We’ve now defined the familiar if…elsif…else construct entirely in terms of higher order functions, but actually *using* this construct would be pretty irritating, because the “pred” and “exec” blocks couldn’t just be ( ) or { } as people expect from other languages, they’d have to be function-ized (which means annoying extra typing, toss some ^s in or however lambdas are made in this language). But, we can declare “if”, “else” and “elsif” rewrite ops: “if” ^-izes the next two tokens and then replaces itself with just “if” again; “else” and “elsif” ^-ize the next one token each and then replace themselves with .else or .elsif. If we do this, then the familiar if… else syntax above just *works*.
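
Here is roughly what that machinery looks like in Python, with thunks standing in for the ^-ized blocks; the shape of the chaining is my approximation of the description above, not a design commitment:

    def void(_):
        return void                              # the void combinator: v ^x = v

    def if_(pred, then):
        if pred():
            then()
            return void                          # swallow any trailing .elsif / .else
        def awaiting(selector):
            if selector == "elsif":
                return if_                       # start the whole dance over
            if selector == "else":
                return lambda alt: alt()         # run the else-block
            raise ValueError(selector)
        return awaiting

    result = []
    if_(lambda: 2 + 2 == 5, lambda: result.append("then"))("else")(lambda: result.append("else"))
    print(result)                                # ['else']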

…why am I going into all this, about “if” “else”? Well, because I want to stress that it means *flow control constructs can be implemented in the language itself*, and they will be truly first-class equals with builtins like “if” or “while”. In my eventual vision of this language, the *only* language-level syntactical elements are

    .
    ^
    ( )
    [ ]
    { }
    ;

And *everything* else, including comment indicators and the end-of-line statement-terminator, is just rewrite rules, ideally rewrite rules written in the language itself. Which implies if you don’t like the language’s syntax much, you could just unload the builtin “stdops” module that contains things like “+” and “if”, and substitute your own. “op” rules are local to scopes, so syntax could vary hugely file to file. Which… well, shouldn’t it? I know people who avoid entire languages because they don’t like one or two things about the syntax. Say, people who go “well, Objective-C has a neat object model, but I can’t get used to all those square brackets”. Or I in my last blog post specifically said that although they both have lots of features I like, I personally won’t use LISP because I can’t make visual sense of S-expressions, I won’t use Javascript because of the casting rules. None of this makes any sense! Languages should be about *features*. They should be models of computation, and we should be evaluating them based on how expressive that model is, based on the features of the underlying flow control or object model or type system or whatever. Syntax shouldn’t have to be part of the language selection process, and if languages let us put the sugar on ourselves instead of pre-sugaring everything then it wouldn’t have to be. I’m probably getting carried away here. What was I talking about? Did I say something just now about casting rules? Let’s talk about casting rules.

Blocks are language machinery

Some syntactical elements, like [ ] and =, might be too complex for the programmer to plausibly implement themselves. The programmer should still have a fair amount of control over how these builtins work. One way to do this would be to have things like [ ] and = implicitly call functions that exist in the current scope. For example, instead of calling .set, = might call a function “assign” that exists in current scope; this would allow individual scopes to make policy decisions such as the variable auto-instantiation rules I mentioned earlier. [ ], at the moment it instantiates the new block, might call a function “setup” that exists in the current scope, allowing the programmer to do things like change the default ditch (base class) or the exact meaning of “inherit”. There might be a function that defines the default type constraints for numbers, or strings, or lines of code. Maybe somewhere there’s a Haskell fan who wants to be able to have every ( ) wrapped up to be ^-ized and every line wrapped in ( ) :: stateless, so that any code *they* write winds up being effectively lazy-evaluated and side-effect-free and they can only communicate with the rest of the language using unsafe monads. They should be able to do that.

One thing I definitely want in is for there to be something like a “fallback” function which, if a particular block is called with an argument whose type doesn’t fit any pattern the block has defined, attempts to map the argument to one of the patterns the block *can* handle. In other words, questions about whether different but interconvertible types like ints and floats can be converted without a cast would be a decision made on a per-project or per-file basis. Or for example if there’s a function

    square ^x:int = x*x

and one of the patterns on the fallback block is

    fallback ^fn : function( ^type, _ ) [^x : type] = fn x    # Follow all that?

(Let’s assume “function” is a higher-order type function such that function(a,b) is the type of a function a -> b, and let’s assume _ has the magic property of “match anything, but don’t capture it” when used in a pattern.)

…then even though the function is only defined for (square x) we could totally get away with calling square[ x ], because the fallback function could match [ x ] to x.
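
A toy Python version of the fallback idea (the mechanics here are entirely my guess) at least shows the simplest case working:

    def make_block(clauses, fallback):
        # clauses: ordered (predicate, body) pairs; fallback gets the block itself plus the bad argument.
        def call(arg):
            for pred, body in clauses:
                if pred(arg):
                    return body(arg)
            return fallback(call, arg)
        return call

    square = make_block(
        [(lambda x: isinstance(x, int), lambda x: x * x)],
        fallback=lambda fn, arg: fn(arg[0]) if isinstance(arg, list) and len(arg) == 1 else None,
    )

    print(square(4), square([4]))    # 16 16 -- the fallback matched [ x ] to x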

Uh, incidentally, I’m not totally sure this thing with the fallback function is actually in general possible or possible to make performant. But as with most of the stuff in this language, I think it would be fun to try!

Blocks are C++ or Javascript objects in disguise, potentially

There’s one last thing I want to talk about here, although it’s one of the most important features from my perspective. The model we have here– where formally speaking all field accesses execute functions, all field assignments execute functions, and there’s some kind of type checker at work capable of tracking fine detail about what kinds of operations get performed on individual blocks– means that the underlying language-level implementation of “a block” could differ from block to block.

The model I’ve described here for blocks is extremely dynamic and flexible– *too* flexible, such that it would be very difficult to make code using all these dynamic features performant. Except not every block will be using all of the features blocks have. Some blocks will only contain “value” keys (i.e. never a ^var:type pattern), and the type inferrer will be able to prove this the case. The compiler/interpreter could represent this one block internally as a plain hashtable, rather than taking the overhead to enable executing arbitrary code on every access. Some blocks, despite being mutable, will have a fixed known set of keys; the language could maybe represent these in memory as plain structs, and translate atoms to fixed memory offsets at compile time.

And some blocks, at the programmer’s direction, might be doing something else altogether. It’s easy to imagine a “proxy object” where each invocation of an atom and an argument on the block is actually copying the atom and argument and shipping them into another thread or across a network, and the type checker ensures the contract is followed and objects are actually copyable; you could build an Erlang style messaging system this way.

Of particular interest to me, some blocks might actually be guests from some totally other system, say a different language with its own object model. An FFI for some other language could make wrapper blocks for that language’s objects, and put in place type guarantees that the programmer does not interact with those blocks in any way the guest language does not support. The two languages I’d personally really like to be able to interface with this way are C++ and Javascript, because these languages have valuable platform and library support, but also are languages I do not actually want to *write*.

C++ in particular interests me, because I’m not aware of any languages which are “higher level” in the sense that interests me but which can currently adequately interface with C++. C++ is actually pretty tricky to interface with– the big problem here, to my mind, being that method calling conventions (name mangling) vary from compiler to compiler. Actually, on some platforms (by which I mean “Windows”) it’s the case that shared libraries (DLLs) can’t be shared between compilers even if you are writing in C++ yourself. It would probably be necessary, if making a C++ FFI, to target one particular compiler (I’d vote Clang, because it’s extensible and has good platform support). Choosing to target one particular compiler would have a neat side effect: With some knowledge of the compiler’s implementation details, it *ought* to be possible to make blocks that inherit from C++ classes, and have those blocks actually construct fake vtables at runtime that jump into the compiled code for (or interpreter for) my language. Since in my language “classes” and “objects” get constructed by calling functions whose execution could be potentially deferred to runtime, it would be essentially invisible to the programmer when they say [ inherit QObject; objectName = "Block" ] whether a normal block or a pseudo-C++ class is being constructed.

Okay?

Anyway, here’s what I think I’ve got here. I started with one single idea (pattern-matched unary functions that remember the order in which their patterns were assigned), and asked the question “how much of what a language normally does could I collapse into this one concept?”. The answer turns out to be “very nearly EVERYTHING”, including stuff (like type specifications, macros and FFIs) that most languages would wind up inventing effectively an entire sub-language just to support (templates… ugh). I actually *do* want a new programming language, mostly because of that thing I mentioned with not liking any existing language’s C++ interop, and I actually do intend to at least attempt this project. I’m basically just gonna download the Clang source at some point and see how far I get. One thing in my favor is that since this language is based on a small number of simple things that interact in complex ways, I could probably get a minimal implementation going without too much difficulty (especially if I don’t go for types on the first pass).

Oh, one very final thing: I never said what all of this is called. In my head I’m planning to name this language either Emily, because despite being fundamentally OO it strikes me as a fairly “ML-y” language; or Emmy, after Emmy Noether. I’ll decide which later.

That’s all.

Note to commenters: Harsh criticisms are very much welcomed. Criticisms based on demographics or assumptions about my background are not. Thanks.

First Post

Sunday, April 23rd, 2006

I hate blogs.

I really do. I’ve been a very, very long-time follower of internet discussion groups of various kinds, and I’ve watched over the last three or so years as any kind of site even remotely related to technology or politics has gradually had the life sucked out of it as, one by one, every user of any worth has been gradually assimilated by the Blogosphere. And I, not wanting to follow, have gradually found myself with nowhere left to go. The discussion sites of yore, once vibrant communities, have become either ghost towns or wastelands, with no one left but trolls and people who spend all their time talking about their blogs. Only Slashdot still lives, and Slashdot… kind of sucks.

It’s still not clear to me exactly what “Blogs” are supposed to be, or the “Blogosphere” for that matter. “Blogs”, despite sounding like they ought to be onomatopoeia for some kind of disgusting biological function, seem to just be websites. It’s not like there weren’t websites with frequently updated content, or people who put journals or diaries or news commentary or ideas on the internet, until four or five years ago when blogs appeared. Suck.com was perfectly mimicking the structure and nature of a blog as far back as 1995, and survived just long enough to comment on the word “Blog” as it first appeared. Supposedly, blogs are different. The two most common explanations of what makes blogs different are the appearance of “blog software”, and the idea that blogs “democratize” the internet by erasing the line between content consumers and providers. But there really isn’t anything to the “blog software”– Blog software is so incredibly easy to write I actually once wrote a blog engine by accident, and almost anything that in the 90s would have been called a “news script” qualifies as a blog engine by today’s standards. (When they’re used right, blog packages do offer a lot of innovative and frankly neat features that the high-end hardcore set gets a lot of use out of, like RSS, but not everyone uses those or is even clearly aware they exist.) The “democratizing content” thing seems a bit odd as well, since that’s the exact same sales pitch that was originally used to describe the internet itself when the media first started to notice it.

So maybe the idea is that the internet promised a global two-way street of content (where everyone participated and everyone was equal) from the very beginning, but it wasn’t really until the Blog Revolution™ that this potential was actually fulfilled. Before blogs came along, let’s say, that promise that anyone could be a content provider was largely an illusion; this magical place where you could beam your thoughts to the entire world existed, but it was reserved for techheads, walled off by a thin but hard-to-scale barrier made of HTML and FTP sites. You had to master all these difficult acronyms and know the Secret SGML Handshake to get into the Cool Kids Club of people that were allowed to run websites in the Old New Media. But now we have “blogs”, and that’s all changed. Blog software removed that techhead barrier to entry, laying down a royal road to internet presence that anyone can master, and now that original promise that Everyone Can Have Their Own Website is fulfilled. We’re going to start the internet over, Web 2.0, and this time we’re going to get it right.

I don’t really buy that either. Probably it’s true that getting and maintaining one of those very first free websites on Geocities was reasonably beyond the technical abilities of most people. But for a rather long time there were sites where you could set up a personal page without understanding anything at all about what you were doing– mostly using free software similar or even identical to the “blog” sites of today. And the barriers to content creation weren’t a problem of the internet in general, just the web– many of the things people use blogs for today would have been provided, at one time, by mailing lists or USENET groups, and those are as easy to use as opening Outlook Express. Easy to use alternatives that effectively provided the functionality of today’s blog software existed all the way back to the 80s, and toward the end of the 90s the Geocities style sites even started to adopt simple, web-based personal website creation scripts that the proverbial grandmother could use; but people didn’t use them, or didn’t see the point, or weren’t interested, until they got to start calling them “blogs”. Even the web diary concept that one would expect is the central Blog innovation isn’t a particularly novel one. Livejournal was around for a pretty long time before there was ever a “blogosphere” to label it as part of. Until the “blogosphere”, Livejournal was just another internet community, just like the ones that had existed before that on web forums, and IRC, and USENET– and it wasn’t really all that different from any one of a dozen sites that people were using to communicate diary-like material at that point.

So how is “blog software” something new, exactly, if it isn’t the technical functionality, or the ease of use, or the format, or the communities? Because something happened there; there’s something about the difference between blog software and all the software before that that created a social movement where before there had just been an internet. All these people are running around now insisting blogs are the next big thing. Something apparently happened that inspired change in the transition between Fark and Movable Type; what?

As I see it, the one thing that actually makes blogs different is that it lets people do the same things they were doing with the Internet all along, except now they get to pretend it is important. Before, there were both important and unimportant things on the internet; but looking important was something nontrivial, a luxury reserved for people with budgets and people who know Perl. You could easily get stuff on the internet where people could see it, but since the HTML barrier meant you had to find someone else to administrate the site, you were always in effect working for someone else– you could write the most important, intelligent thing in the world, but it would never be your post, it would always be your post on Livejournal, and anyone could see that just from looking at the top of the page. Who wants to spend all that time writing something when you don’t get the credit, or the link from the front page, and somebody else’s big tacky logo is marring your work? It just doesn’t feel as special somehow.

But now, everyone from some guy with a cat to Chrysler can project the same illusion of respectability. The new blog software packages project the polish and flourish of something like Slash or Metadot or a big professionally-made news site, without the techhead difficulty and configure-these-Perl-files learning curve; and once you’re done, those big blog edifices look totally professional even if the URL is just a subdomain somewhere. That feels good. You’ve made it, you’re your own Commander Taco, your postings are beholden to nobody else’s “ban” button. You may not have any more readers in this state than if you’d just posted your thoughts periodically on some big web forum, of course. But by golly, you can look at your front page with the fancy list of prior postings, and the default WordPress theme, and the trackback where some other guy nobody’s heard of either says “so I saw the American Kumquat talking today about…” just like you were the New York Times or something, and dammit, that’s your website.

The “blogosphere” is something a bit less superficial and maybe more interesting, and it’s what happens when you take the army of identical I Am Unique And Independent websites these software packages necessarily create, and turn them on all at the same time. The name “blogosphere” seems to imply some kind of giant floating hive mind from a bad science fiction movie, and at first glance, that really is what it seems to be. Despite all the effort that’s been put into making clear site divisions between all the different bloggers, once they start writing all the blog sites kind of start to blur together; everybody links each other, everybody responds to each other, everybody builds on what everybody else has already written. When this works, this kind of gradually forms into a large cross-site exercise in collaborative journalism; where one writer would not be able to make something really stunning and well-researched by themselves (or at least would have to put a lot of work into it), when a whole bunch of writers try to attack a problem at the same time, each one contributing one little tiny piece, the final result becomes, through the magic of interferometry, something really quite impressive. The spirit of a funny cross between a capitalist fight for eyeballs and a big friendly circle jerk that the blogosphere encourages makes this possible, by ensuring that the individual contributors to the mass simultaneously compete and collaborate.

That’s when it works, mind you.

It doesn’t really wind up working all that often.

Before the blogs really arose, the tendency in internet communities was toward forming ever bigger and more monolithic single sites. To an extent you could gauge these sites by how much different *stuff* was going on within their walls, how many of their cliques had developed into full-blown subcultures. (We still get sites like this, a la Facebook and Xanga, appearing all the fricking time. But they all either are or claim to be “blog” sites, so for purposes of this particular discussion they don’t count.) These sites had intricate and multifaceted communities, and could readily create fads or community responses which spilled over into other sites or the “real world”. But they were all always focused inward. Information came in from the outside, but the focus was never quite on the information itself, the focus was always on the community’s response to the information. That was what you came there for. You could read just the start of every post to see if somebody dug up an interesting link, but you probably didn’t. There are probably more people on Slashdot who read the comments but not the article than the other way around.

Blogs, though, at least individually, are all always outward-focused. You come to a blog to see what the blogger said, but it’s almost invariably what the blogger has to say about something they saw somewhere else, what the blogger has to say about this link, what the blogger has to say about what this other blogger said, in a best-case scenario what the blogger saw this morning on the subway. This tendency means that proper communities almost never form on single blog sites; they usually form across several sites, with several blogs clumping together to form a little clique where everybody knows everyone else. Unlike the big cathedral monasteries that the old web communities formed, the blogosphere is like a million tiny islands, each linked to the two or three islands near it by little bridges. Big walled blog cities like DailyKos notwithstanding, everything is open, everyone knows their neighbors the next island or two away, and if you just keep following those bridges you can hop from blog to blog until you’ve crossed the entire internet from one side to the other.

In theory, what the blogosphere ought to do is basically merge the entire internet into one gigantic community, one big site with a thousand faces. We don’t need the cathedrals anymore to provide a common ground for a community; the internet itself is the common ground. We can build communities just as strong, and just as complex, by just stitching all these islands together and realizing that even though they may all be on different hosting providers, in the aggregate they’re all the same thing.

That’s, again, when it works.

This is where, I think, things start to go wrong:

Let’s look at the cathedral model of internet communities again. The big Slashdot style sites. What advantages does this model offer? Well, none, really, except better organization and ease of finding things– and that can be supplied even in the island model by the use of things like Google or Feedster. The cathedral model does have some disadvantages, though. These turn out to be a bit more important.

The most important disadvantage of the cathedral model is this: Everyone inside the cathedral is basically locked inside the cathedral with every other user, which means if there’s another user on the site you dislike, you actually have to learn to get along with them. Blogs don’t have this disadvantage, and this turns out to be the chief reason why blogs don’t work. Nobody is forcing you to spend time near anyone you don’t like. If you don’t like something about a user somewhere, you can just go somewhere else. There are many islands in the sea, and you don’t have to hang out at any one of them if you don’t get along with the people there, or you don’t get along as well as you used to.

At first glance this is liberating. Every little clump of blogs, every mini-community can find its optimal userbase. Internet rivalries are something you enter into cheerfully and voluntarily, rather than being an unavoidable fact of life, and if you get tired of the rivalry you can just start banning people until it’s over. Even better, having to uproot and rearrange your internet allegiances on a whim is not a huge production in a world full of blogs, the way it is in a world full of webforums or even usenet groups. Because communities are encoded in the connections and relationships between different sites, instead of being encoded in the user database of one big site, these communities are much easier to rearrange. Switching from one web forum community to another involves signing up for accounts, setting up new profiles, starting over with a postcount of 0 and no karma. Switching from one blog community to another involves just changing what the links are on your “blogs i read!!” sidebar. Since your own blog retains everything, even your post history comes with you when you move.

After a while this becomes addictive. The liberating realization of “hey, I don’t have to put up with these assholes, I can just go over here for awhile!” gradually melts into a realization that beyond just what kinds of communities you invest your personal attention and posts into, you don’t actually have to ever read anything at all that involves people you don’t like. Or that makes you in some way uncomfortable. Or that you don’t agree with.

And this is, though probably not the only reason, at least part of why blog communities are so cliquey as to make every single other thing that has ever happened on the internet pale by comparison. Cliquey, in fact, to the point that it affects the very content of the sites themselves.

Spend some time reading blogs and you’ll quickly realize that what conceptually seems to be a big, amorphous, continuous expanse of blogosphere is actually highly and rigidly structured. Communities span across sites, and sites bleed into one another, but the communities are highly insular, and the sites tend to only bleed into sites that the author likes. And the authors of blogs, oddly enough, tend to only like blogs that are exactly like theirs. The sense that you could just follow links from one blog to another until you’ve crossed the entire internet turns out to largely be an illusion. If you actually try this– start off at some random politics blog and start randomly clicking links on the omnipresent “blogs i like” sidebars, and just see where they take you– you’re more likely to find yourself trapped in a circular pool of sites that all link to one another, but pretty much no one else. That pool might be in some cases quite large, hitting tens or maybe even hundreds of sites. But a large fishbowl is still a fishbowl. And blog communities have generally developed a remarkable ability to form cliques that, despite there not being anything holding them together but social forces, are entirely hermetically sealed (everyone in the circle has links to lots of blogs but they’re all blogs inside the circle, everyone in the circle is always referring to things happening on other blogs but they’re all blogs inside the circle, and everything outside the circle is totally ignored except to the extent it offers something to rebut) in a way that utterly breaks the small-world effect that governs real-world social networks. This does things to the people who write and read these blogs. This is where the blog model starts to fall apart.

That part of the blogosphere which aspires to be journalism or at least op-ed is in many ways a giant experiment in how people can come to focus on the people they know until they start to mistake them for the entire world. When everything you read comes from the same pool, and everyone who reads what you write comes from the same pool, and you do this in an environment which (thanks to the illusion of being in this big open interconnected ocean) provides the illusion of incorporating the entire spectrum of discourse on the Internet, you gradually start to feel that the circle of people you talk to are the only people that matter– or even the only people that exist. The thought “well, it must be a good idea; everyone who reads my blog liked it” is just way too easy to fall into, and the more reassuring and comforting your blog circle becomes, the easier it becomes to just assume the rest of the world just kind of feels the same way. A blogger can be bounded in a nutshell and crown himself king of infinite space, so long as he has yes-men.

This kind of intellectual tribalism leads to a habit of feeling that anyone who feels the same way you do must automatically be right– a habit that makes bloggers startlingly easy to manipulate. Case in point: Andrew Sullivan. Andrew Sullivan is “the gay conservative blogger”. These four words are all that most people bother to explore of Andrew Sullivan’s identity. Sullivan is one of those few bloggers whose sites have gotten so huge and widely-read that he’s pretty much a brand name; Andrew Sullivan means “the gay conservative blogger” the way Coke means “soda”. Sullivan’s writings and opinions are sometimes excellently-composed and insightful, and sometimes dishonest and stupid; for the moment, though, let’s not worry about what his writings actually say, and concentrate on the site itself. The site is one of the most heavily-trafficked and widely-linked sites in the entire blogosphere. It frequently comes up in arguments when people start to talk about the democratizing power of blogs. Sullivan has been showing up a lot in major media sources as a kind of spokesperson for the blog movement; the September 2004 issue of Time printed a blogging mini-manifesto by him, in which he is identified as “a member of the blogging class” and tries to write some kind of “creed” for bloggers who blog. Here’s the thing about this:

There is this idea that there is an “Old Media” and a “New Media”, and “Old Media” is an outmoded and repressive institution in which a small crowd of elites hold a monopoly on expression, and “new media” (now embodied by blogs) is a new and democratic movement in which voice is extended to all who wish to speak, demolishing the power of Old Media. If we are to accept this idea, Andrew Sullivan is as old media as one can get. Sullivan was the editor of The New Republic for six years in the early 90s; later on, right up until around the time his blog started taking off, he was a regular contributor to The New Yorker and The New York Times. The latter two are impressive on any resume, but the New Republic is probably a bigger deal. The New Republic is one of a small number of “policy rags” who not all that many people have heard of, and who not all that many people read (its circulation is about equal to the number of hits that, as of early 2005, Sullivan’s current blog would get in a single day), but whose influence is enormously disproportionate to its readership. It is kind of like how Brian Eno once claimed that the Velvet Underground only sold a couple of thousand records, but every single person who bought one started a band; the policy rags only distribute in the thousands, but almost everyone who holds actual power reads them. Not to say they’re all reading The New Republic in specific of course; but the people who actually run this country are all reading The New Republic or something like it, and anything which can get into a magazine like that has a direct private channel right into the ear of the most powerful people on earth, with the potential to make an impact that ten thousand letters written to congressmen or news corporations (and thrown away by interns) never could. In the runup to the Iraq War, when The New Republic held the distinction of being the voice of the neoconservative movement (whatever that means), The New Republic’s foreign policy writings very briefly were so in lockstep with that of the Presidential inner circle that you could practically predict what the administration was going to do next by reading The New Republic that week.

So here we have Andrew Sullivan, who for six years in the 90s ran one of these elite-of-the-elite shadow-media mouthpieces; who has connections in major media institutions most writers could only dream of; who, if we are to accept this idea that there’s a small cadre of entrenched old-media opinion monopolists who are the only ones in America given access to the national dialogue, is right down in the core of that privileged group, to the point that at one stage around 2000 he was actually publicly claiming he was being censored because the New York Times was no longer offering to print his columns. This is the person who is being selected or allowed to speak for the movement by which blogs are supposedly democratizing media as a whole, one of the chief evangelists who is bringing the gospel of the changing power of blogs to the masses. Does this not strike you as a bit odd? Andrew Sullivan is supposedly one of the blog movement’s great success stories. But from what I can see, it looks like all that’s happened in the case of Mr. Sullivan is that a media powerbroker took a break from his main career of doing op-ed writing for gigantic news publications to go slumming for a few years with this “blog” thing, and now a short few years later he’s back to writing in Time, doing essentially the same things he was doing before, except now he gets to slap the “blogger” badge on his byline (though, coincidentally, the blog itself is as I write this on an apparently temporary sabbatical). This wasn’t an elevation; it was a rebranding.

Which makes this, of course, an excellent and impressive success story for Mr. Sullivan, who is clearly doing quite well at his job as a media commentator. It is not a success story for the blog movement. The blog movement here is simply being used; it was a tool by which an established writer kept himself relevant. Perhaps this says something about what a useful tool blogs can be in the hands of someone looking for success in the fields of media or PR. It doesn’t say anything at all about the utility of blogs as an agent for change either inside or outside the media. The bloggers aren’t bothered by this; indeed, they don’t even seem to have noticed. What matters to the bloggers about Andrew Sullivan is (a) he’s a blogger (b) he’s successful and most importantly, (c) he agrees with the other bloggers. After all, if you agree that bloggers judge content based on whether it reinforces the things they already agree with, well, the different cliques of bloggers may disagree with one another about everything under the sun, but the one thing they all agree on is that blogs are important. Bloggers may reject some subset of Sullivan’s politics, but they can be counted on to rally behind the most consistent message Sullivan can be heard pushing these days, which is that blogs are changing everything and revolutionizing the media and empowering the previously voiceless. Bloggers like this message, and cannot be bothered to concern themselves with petty details such as the context of this message, or whether it makes any sense whatsoever for Sullivan (himself a living disproof of the message) to be the one promoting it. Context is something that exists outside the blogosphere. In the mindset of blogging (where the writing is always focused on things happening elsewhere in the world, but the world is always equal to your clique), the wider media and social context is something bloggers triumph over; it isn’t something bloggers exist within. The denizens of the blogosphere, apparently, don’t know or need to know what’s happening outside.

As it happens, some rather big things are happening outside of the blogosphere.

Something in the American spirit has become terribly sick. The entire mechanism of American media and society is going through a massive, weird period that looks suspiciously like some kind of transition. What this is a transition to, or from, is anyone's guess. However, what is clear is that right now, the American national dialogue does not value honesty. American media, at this point, doesn't so much report as it does repeat; various people have for various reasons felt for some time that the news media isn't very good at seeking or exposing the truth, but in the last few years it has gotten to the point that the news media doesn't even try to seek or expose the truth, doesn't even act as if it feels that truth is something the news media can or should concern itself with.

After all, truth is hard. You have to do a whole lot of work to get to it, and sometimes doing all that work to get the truth makes enemies of various kinds. Even worse, truth is risky. If you get the truth even a little bit wrong, you get in big trouble and people yell at you and stuff. The news, these days, would rather concern itself with opinions. Opinions can't be wrong. Opinions are easier, and safer, and cheaper. All you have to do to report an opinion is find someone with an opinion, and then tell people what it was. This makes all sorts of things easier. For example, you don't have to worry about tricky things like journalistic integrity; instead, all you have to do is be balanced, which means that anytime you report an opinion, you also have to report an opposing opinion. Except it isn't even really that hard; all you really have to do is report two opinions, and just assume that these two opinions you picked, whatever they were, are both opposing and represent the entire spectrum of thought on the issue. Except it isn't even really that hard; it appears a lot of the time that you don't really need to present the opposing viewpoint at all, so long as the opposing viewpoint is "liberal". (Those opinions, say, some loony communist opinion like "maybe a supreme court candidate isn't really 'mainstream' if his appointment would make abortion potentially illegal where it was previously legal", can be saved for the Letters To The Editor page or something.)

So news outlets have a pretty simple job now. As simple as the world has become for the news outlets, though, this kind of makes things difficult for some of the rest of us. Some of us think that there are these things called "basic facts" (like "the second law of thermodynamics is not meant to be applied to an open system such as the planet Earth" or "on October 7, 2002 the President gave a speech which said in part that military action against Iraq was needed to prevent a nuclear strike on the United States of America"), and that these things can be demonstrated to be either true or false and then used to make decisions. Now we find out that these things are, in fact, no longer facts at all, but just "opinions", and moreover that the news media is obligated to give them the same credence as the alternative opinions (such as "the second law of thermodynamics means that life could not have evolved on Earth" or "the President never claimed Iraq had nuclear weapons") without trying to examine or judge which alternative might be correct. This makes it kind of hard to form or argue opinions that one can consider informed, since facts apparently no longer exist to be informed about.

Worse, now that facts no longer exist, there is no longer such a thing as honesty– since, after all, there's no longer such a thing as truth to be honest about. This is not just a silly joke about wordplay. America in general seems to have lost the capacity to even talk about or understand what honesty means. For example, let's say the President of the United States of America gets up on television and tells the world he has evidence Iraq has weapons of mass destruction. Now let's say a few years pass and it turns out that not only did Iraq not have weapons of mass destruction, but the evidence the President said he had before was so flimsy, so full of obvious holes, and so highly contradicted by other evidence the U.S. had at the time, that it can't really be said to have ever existed in the first place. A truly amazing number of people will look at a situation like this and conclude that the President wasn't lying about the evidence he said he had, for the reason that the President didn't know there were problems with the evidence he said he had. The President could, of course, have found out about these problems if he had just looked at the evidence, or asked someone else to check the evidence for problems, or just asked the simple question "now, is there any evidence which might say that Saddam Hussein doesn't have weapons of mass destruction after all?" But he didn't. And it was a wise choice, it turns out; so long as the President chooses never to concern himself with what the truth might be, he's protected from any allegations of lying about anything, ever. He can repeat this trick any number of times: becoming willfully ignorant about the exact consequences of his decision that America doesn't need to follow the Geneva Conventions sometimes, and then saying Americans are not committing torture; becoming willfully ignorant about exactly what his staff did after he gave them permission to leak classified information, and then saying his administration didn't leak the name of a CIA operative which, it turned out later, they did. In all cases, most of a nation will claim that even though he said things that were totally untrue, if the President didn't believe he was lying, he wasn't lying– and the amazing thing is, they really mean it. These people aren't just doing this to defend the President. They are doing it because they seriously feel that this is what honesty means. And what choice do they have? Most of these people are themselves generally uninterested in, or unenthusiastic about, knowing the truth about things; how could they blame the President for something they themselves are doing?

This situation does not look to be changing anytime soon. Everyone who is in a position to convince America it really needs to start caring about truth is too busy taking advantage of the current situation for their own ends. America's honesty problem does not exist in a vacuum; there are a lot of groups that are, literally or figuratively, profiting from it, and this means they have a vested interest in sustaining it. If you are a group which holds power, it's a lot easier to wield that power if nobody really cares whether you tell the truth about what you're doing. And so most groups which hold power are doing exactly that. Most of the important things in America right now are controlled by organizations which value loyalty over ethics, such as most corporations, religions, labor unions and political parties. And organizations or movements which value ethics over loyalty, such as one would normally expect the media to be, are not arising to take their place.

Which brings us to blogs. Blogs have the opportunity to change all of this; blogs are in a position to make truth and journalism matter again (as opposed to just opinions and media). So far this is not happening, and it does not look like it is actually going to. Blogs are more useful for comfortingly reinforcing opinions you already have than they are for gathering the basis for an informed opinion, and bloggers themselves are almost universally uninterested in truth (unless it's a truth that's uncomfortable for the other guy; those truths they care about deeply). America has a problem: it no longer cares about honesty. And the tribalism that the blog movement makes so easy means that blogs are not part of the solution; blogs are not even part of the problem. Blogs are just another symptom of the problem.

We (and by "we" I mean everybody within earshot of the blog movement) are constantly being told of late that blogs are a watershed change in how media works, that blogs will bring accountability to the press, that even if most people aren't getting their news from blogs, blogs force the media to stay honest by applying pressure on it to address the issues it normally ignores and to own up to its mistakes. The idea is that normally the media would just breeze past uncomfortable issues or errors, but the collaborative fact-checking layer the blogosphere represents will refuse to drop these things until the media, due to the public attention the blogs bring to them, is forced to revisit them. There is a problem here. This is not change. This is, in fact, exactly how right-wing talk radio worked in the 90s, to the comma. What the blogosphere is doing is not a new concept at all; it's replicating something that's 10 years old, except now it's on the internet instead of the airwaves, and we now have a rabid left-wing talk-radio-style blogosphere to match the rabid right-wing talk-radio-style blogosphere that already existed. Instead of just Rush Limbaugh, we now have rushlimbaugh.com and DailyKos. This is not progress.

The problem is that, both in the talk radio of yore and in the blogosphere of today, the things that the blogosphere forces the media to address are in no way obligated to be facts. It helps, of course, but the entire theory is that an idea or allegation gets picked up by the media due to blog pressure because it's being widely repeated; and it's not particularly easier to get a true idea widely repeated than a false one. If the blogosphere were a system that valued facts, or truth, or honesty, or one where things that are true are more likely to gain wide currency than things that aren't, then blog pressure would indeed have a positive effect on the media outside. But blogs are themselves governed by the same principle that rules the normal media: repeat, don't report. If you pay attention, you'll find that the blogosphere exists in a fascinating kind of hierarchy where the people who write the little blogs read the big blogs in the morning, and simply take the opinion they decide they're going to have that day from whichever big blog suits them most. If you pay very careful attention to the internet some days, you can actually track the spread of a single bad idea (say, a catchphrase, or an argument, or a… "meme") as it trickles down through this hierarchy, showing up on the big megablogs at the beginning of the day, moving on to the moderate-sized blogs as the morning progresses, thriving on all of the small flyspeck blogs run by the people who read the moderate blogs as afternoon comes, and from there disseminating to external sources like internet discussion boards, where suddenly one finds that there are a bunch of people who believe the exact same idea that appeared on the megablogs at the beginning of the day. The flow of ideas here is not, of course, strictly one way; sometimes the idea the megablog latched onto came from a smaller blog, or started at a flyspeck blog and had been slowly climbing its way up the hierarchy for a little while, garnering supporters among blogs at different levels. But a surprising amount of the time, the day's talking point basically originated from one of the same sources that set the tone of coverage for the normal media that day– something like Fox News. This means that what could be a marketplace of ideas that collaboratively uncovers the truth of things turns instead into a big daily game to see which partisan blogger army can outshout the other that day. Blogs are supposed to correct and cancel out the assumptions and groupthink that have pervaded the general media; in practice, bloggers only just barely manage to correct and cancel out the assumptions and groupthink that the rival bloggers generate.

The media has become addicted to dealing in opinions instead of news or facts, and the blogosphere in its current state is not going to do anything to help this situation– "opinions as news" is, after all, the core principle of the blogging movement, and so blogs are just as bad about this as any TV news station. The cliquey homogeneity one finds within any one of the various partisan wings of bloggers goes beyond a simple matter of everyone echoing one another; it's a matter of these different wings of bloggers actually living in entirely different realities. The "opinions" that have replaced what used to be basic facts split one way or another along partisan lines (Did the President ever suggest Iraq had nuclear weapons, or not? Was Iraq collaborating with Al-Qaeda? Do "liberals" "hate" "America"?), and the disconnect on these facts becomes just another factor separating the different cliques from one another. The fluid nature of the blogosphere has turned out to produce not an open field of borderless bridges, but a series of xenophobic little island fiefdoms that clump together into fishbowls and shut the rest of the world out. The freedom to find or form exactly the community that's right for you turns out, when you actually start looking at blogs, to be an opportunity to choose which groupthink you want to conform to.

But blogs are all that are left. The idioms of communication on the internet flow in phases, and right now, if you are interested in discussion and news on the internet, blogs are the form that the current phase takes. Reading the above, you may have noticed, just a teensy bit, that I'm not terribly cheerful about this. Personally, all I ever really wanted out of the internet was to have a nice conversation with someone, and good conversations are surprisingly hard to find. I have no interest in either inserting myself into a circle where everyone agrees with me on everything, or inserting myself into a hostile blog clique (ruled by "the other side") where I become the local effigial punching bag; and while I know at some level that I must be overgeneralizing (there must be some corner of the blogosphere that isn't like that which I just haven't found yet), it seems like these two extremes are the main options in the blog world. Nor (whether I'm right about the way blogs are or not) do I really want to sit and watch as the internet mediums I've known in the past gradually fade away entirely into ghost towns, as everyone filters away to settle in the island paradises of the blogosphere. So, I'm giving up. I've got a blog now. It's got its own little subdomain and a little default WordPress layout and everything.

Here I am.

What happens now?