Archive for the 'I Made This' Category

Jumpman

Thursday, February 19th, 2009

I made a video game.

DOWNLOAD (Version 1.0.2)

GAMEPLAY VIDEO

FEATURES

  • Old-school puzzle platforming with some twists
  • Low-definition graphics
  • Gamepad support
  • Full level editor

PLOT

  • Guide Jumpman to the exit.

 
————————————————————————————————————————
 
(OTHER STUFF)

  • There is a collection of user-created levels for Jumpman here.
  • In 2010 I released an iPhone/iPad version of Jumpman.
  • You can find the source code for this game on Bitbucket.

A Game of the Year 2008 Poll: Results

Friday, January 9th, 2009

CLICK HERE TO JUMP TO THE PRETTY COLOR-CODED FULL RESULTS

I’m just gonna copy and paste the explanation I gave last year:

For the last few years I’ve been hosting this Game of the Year poll for the users of some forums I read. There are a lot of GOTY polls out there, but this one I think is kind of special. Most polls, you’re given a list of four or five options and you’re asked to pick the one you liked best. This poll, people are given a list of a couple of hundred options, consisting of every new game released in the previous year– and asked to rate their top ten or twenty.

This does a few interesting things. First off, we get to see all the information about what people’s second, third etc choices are. Second off, because the second, third etc choices count, people are more likely to vote for the game they want to win, rather than the game they think is likely to win– they’re less likely to engage in “strategic voting”. Finally, because we have all this information, we’re actually able to provide somewhat reasonable rankings for something like the top hundred or so games of last year.

The full results– showing the exact number of voters who ranked each game first, second, third place etc– can be found here. In the meantime, the final results were:

  1. Fallout 3 (8780) *** GAME OF THE YEAR ***
  2. Left 4 Dead (6626)
  3. Grand Theft Auto 4 (5032)
  4. Super Smash Bros. Brawl (4321)
  5. Rock Band 2 (3290)
  6. Dead Space (3151)
  7. Gears of War 2 (2942)
  8. Fable 2 (2751)
  9. Braid (2729)
  10. Metal Gear Solid 4 (2666)
  11. Little Big Planet (2520)
  12. No More Heroes (2241)
  13. Audiosurf (2152)
  14. Castle Crashers (2083)
  15. Valkyria Chronicles (2027)
  16. Mario Kart Wii (2014)
  17. The World Ends with You (2000)
  18. World of Warcraft: Wrath of the Lich King (1914)
  19. Penny Arcade Adventures: On The Rain-Slick Precipice of Darkness Ep. 1 (1910)
  20. Sins of a Solar Empire (1850)

The numbers in parentheses are the final scores each game got under the poll’s ranking system. (The scores in general were a lot closer than last year–basically all the rankings 14-18 are within a couple votes of each other!) Thanks if you voted, and some more elaborate analysis of the results (plus an explanation of the scores) can be found below.

NOTEWORTHY WINNERS

  • GOTY 2008:

    #1, Fallout 3

  • Top-ranked Wii Exclusive:

    #4, Super Smash Bros. Brawl

  • Top-ranked 360 Exclusive:

    #7, Gears of War 2

  • Top-ranked PS3 Exclusive:

    #10, Metal Gear Solid 4

  • Top-ranked PC Exclusive:

    #13, Audiosurf

  • Top-ranked DS Exclusive:

    #17, The World Ends With You

  • Top-ranked PSP Exclusive:

    #39, Crisis Core: Final Fantasy VII

  • Best FPS:

    #2, Left 4 Dead

  • Best RPG:

    #1, Fallout 3

  • Best Sports Game:

    #27, Burnout Paradise

  • Best Game Only Available Through A Console Download Service:

    #9, Braid

  • Special “Cult” Award (see below):

    #26, Persona 4 & #15, Valkyria Chronicles (Tie)

NOTEWORTHY LOSERS

  • Best game of 2008 which somehow nobody considered to be their #1 pick: #33, Spore
  • Worst game of 2008 that at least one person considered their #1 pick: #179, Midnight Club: Los Angeles (Only two people voted for this)
  • Worst game of 2008: #203, Mystery Case Files: MillionHEIR (Only one person voted for this; it was their #20 pick)

There were also ten games which were listed, but which no one voted for at all.

ALTERNATE SCORING METHODS

The rankings listed above are based on what was intended to be an approximation of Condorcet voting, but which I’m told is actually closer to the Borda count. In my Borda-ish voting method, each vote cast for a game gives that game a certain number of points. If someone ranks a game #1, that game gets 20 points. If they rank it #2, the game gets 19 points. If they rank it #3 the game gets 18 points… and so on. I have a script that checks a couple of alternate ways of ranking the same data, though.
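
In other words, it’s a positional count. A minimal sketch of the scoring rule as described (an illustration, not my actual script) looks like this in Python:

    from collections import defaultdict

    MAX_RANK = 20  # ballots rank up to twenty games

    def borda_scores(ballots):
        """ballots: a list of ranked lists, best game first.
        A #1 ranking is worth 20 points, #2 is worth 19, and so on."""
        scores = defaultdict(int)
        for ballot in ballots:
            for rank, game in enumerate(ballot, start=1):
                scores[game] += MAX_RANK - rank + 1
        return sorted(scores.items(), key=lambda kv: -kv[1])

    # borda_scores([["Fallout 3", "Left 4 Dead"], ["Left 4 Dead"]])
    # -> [('Left 4 Dead', 39), ('Fallout 3', 20)]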

For example, if we rank games only by the number of first-place votes they got, we get a wildly different list:

First Past the Post

  1. Fallout 3 (182 first-place votes)
  2. Left 4 Dead (109)
  3. Super Smash Bros. Brawl (42)
  4. Metal Gear Solid 4 (42)
  5. Valkyria Chronicles (39)
  6. Persona 4 (39)
  7. Grand Theft Auto 4 (35)
  8. Dead Space (31)
  9. Rock Band 2 (30)
  10. The World Ends with You (30)
  11. World of Warcraft: Wrath of the Lich King (30)
  12. Gears of War 2 (23)
  13. Little Big Planet (21)
  14. No More Heroes (19)
  15. Braid (15)
  16. World of Goo (14)
  17. Spelunky (12)
  18. Sins of a Solar Empire (11)
  19. Fable 2 (10)
  20. Prince of Persia (10)

Every year when I do this there’s some game which scores horribly low in the objective rankings but gets a really startling proportion of first-place votes; last year the standout game in the “cult” department was Persona 3; this year the standout was, interestingly enough, Persona 4, which only got 87 votes at all, placing it at #26 in the overall rankings– but nearly half of those votes, a full 39, ranked it in first place, putting it in sixth place in the First Past the Post ranking above. Tying Persona 4 in the First Past the Post ranking is Valkyria Chronicles, which did a little better in terms of how many people voted for it (117 votes) but which still gets a pretty great cult ranking since one in three of those voters considered it their #1 game. (Honorable mention in the cult category should probably go to “Spelunky”, a wildly obscure but kind of awesome freeware pixel art game released in the last two weeks of December, which came in way down at 51st place in the overall rankings but managed to come in 17th in first-place votes– with, again, nearly one-third of the people who voted for Spelunky at all rating it #1.)

I also did two more ways of sorting the rankings: an “approval” vote, where nothing is counted except the number of votes a game received (i.e. a first-place and a twentieth-place ranking count the same– all that matters is whether the game was on someone’s list); and an instant runoff vote. Most years I’ve done this the Instant Runoff and pseudo-Borda rankings have been almost the same, but this time there were some interesting differences (with the biggest one being, for some reason I don’t understand, World of Goo somehow jumping a good seven spots in the rankings?!). Your eyes are probably starting to glaze over at this point, so I bolded the places where these two votes differ from the normal rankings (and for the curious, there’s a sketch of how the instant-runoff count works right after the two lists):

Approval

  1. Fallout 3 (488)
  2. Left 4 Dead (388)
  3. Grand Theft Auto 4 (325)
  4. Super Smash Bros. Brawl (266)
  5. Rock Band 2 (205)
  6. Dead Space (204)
  7. Braid (187)
  8. Gears of War 2 (185)
  9. Fable 2 (184)
  10. Audiosurf (161)
  11. Little Big Planet (161)
  12. Metal Gear Solid 4 (161)
  13. Castle Crashers (160)
  14. Penny Arcade Adventures ep.1 (176)
  15. No More Heroes (155)
  16. Mario Kart Wii (148)
  17. The World Ends with You (129)
  18. Professor Layton and the Curious Village (128)
  19. Mega Man 9 (124)
  20. Sins of a Solar Empire (122)

IRV

  1. Fallout 3
  2. Left 4 Dead
  3. Grand Theft Auto 4
  4. Super Smash Bros. Brawl
  5. Dead Space
  6. Rock Band 2
  7. Gears of War 2
  8. Fable 2
  9. Braid
  10. Metal Gear Solid 4
  11. Little Big Planet
  12. Castle Crashers
  13. Mario Kart Wii
  14. No More Heroes
  15. World of Goo
  16. AudioSurf
  17. The World Ends with You
  18. Valkyria Chronicles
  19. Penny Arcade Adventures ep.1
  20. Sins of a Solar Empire
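
As promised, here’s a bare-bones sketch of the instant-runoff procedure (again, an illustration rather than my actual script). This version finds a single winner; a full top twenty like the list above can be produced by finding the winner, deleting that game from every ballot, and running it again.

    from collections import Counter

    def irv_winner(ballots):
        """Instant runoff: repeatedly eliminate the game with the fewest
        first-choice votes, letting those ballots transfer to their next
        surviving choice, until some game holds a majority."""
        ballots = [list(b) for b in ballots if b]
        while True:
            tallies = Counter(ballot[0] for ballot in ballots)
            leader, votes = tallies.most_common(1)[0]
            if votes * 2 > len(ballots) or len(tallies) == 1:
                return leader
            loser = min(tallies, key=tallies.get)
            ballots = [[g for g in b if g != loser] for b in ballots]
            ballots = [b for b in ballots if b]  # drop exhausted ballots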

FINALLY: PER-FORUM BREAKDOWNS

As mentioned before, this poll mostly exists for a handful of video game forums where some people I know post. Since last year when I started posting the results on this blog, I’ve tried to actually run some extra results, in each case counting only those voters who– as far as one could tell from looking at the logs– had come to the poll from one particular forum or other.

So, here you have it– these numbers aren’t totally accurate because my logging method is not entirely trustworthy, but here’s an approximate by-forum breakdown of these results. Links go to color-coded full listings.

Penny Arcade Forums (806 voters)

  1. Fallout 3
  2. Left 4 Dead
  3. Grand Theft Auto 4
  4. Super Smash Bros Brawl
  5. Rock Band 2
  6. Dead Space
  7. Braid
  8. Gears of War 2
  9. Fable 2
  10. Metal Gear Solid 4
  11. Little Big Planet
  12. AudioSurf
  13. Mario Kart Wii
  14. Castle Crashers
  15. No More Heroes
  16. Valkyria Chronicles
  17. World of Warcraft: Wrath of the Lich King
  18. Penny Arcade Adventures ep.1
  19. The World Ends with You
  20. Sins of a Solar Empire

Platformers.net (42 voters)

  1. Super Smash Bros. Brawl
  2. Fallout 3
  3. Left 4 Dead
  4. Apollo Justice: Ace Attorney
  5. No More Heroes
  6. Persona 4
  7. Mega Man 9
  8. Professor Layton and the Curious Village
  9. The World Ends with You
  10. AudioSurf
  11. Grand Theft Auto 4
  12. World of Goo
  13. Dead Space
  14. Castlevania: Order of Ecclesia
  15. Metal Gear Solid 4
  16. Advance Wars: Days of Ruin
  17. Little Big Planet
  18. Rock Band 2
  19. Tales of Vesperia
  20. Braid

360Arcadians.net (37 voters)

  1. Fallout 3
  2. Grand Theft Auto 4
  3. Left 4 Dead
  4. Gears of War 2
  5. Rock Band 2
  6. Metal Gear Solid 4
  7. Fable 2
  8. Geometry Wars: Retro Evolved 2
  9. Little Big Planet
  10. Dead Space
  11. Saints Row 2
  12. Sins of a Solar Empire
  13. Burnout Paradise
  14. Prince of Persia
  15. Valkyria Chronicles
  16. Castle Crashers
  17. NHL 09
  18. Lost Odyssey
  19. Penny Arcade Adventures ep.1
  20. Civilization Revolution

Mechanically Separated Meat (6 voters)

  1. Super Smash Bros Brawl
  2. Professor Layton and the Curious Village
  3. Super Street Fighter 2 HD Remix
  4. Mega Man 9
  5. World of Goo
  6. Trauma Center: Under the Knife
  7. The World Ends with You
  8. Iji
  9. Barkley, Shut Up and Jam: Gaiden Hourglass
  10. Fallout 3

Super Mario World vs. the Many-Worlds Interpretation of Quantum Physics

Sunday, February 3rd, 2008

Short version: Just watch this video.

Okay, now what was that?

So a few months back some of my friends were passing around these videos of something called “Kaizo Mario World”, which I was told, at first, translated to “Asshole Mario World”. This turned out to have actually been a misunderstanding of something in the youtube posting of the original creator’s videos:

[Asshole Mario] is not the real name for this series of videos, but it is my personal name for it.
The literal translated name for 自作の改造マリオ(スーパーマリオワールド)を友人にプレイさせる is “Making my friend play through my own Mario(Super Mario World) hack”, hence Kaizo(hack) Mario to the USA.

…but, the name is pretty appropriate. Kaizo Mario World is one of a series of ROM hacks people create in special level editors that let you take Super Mario World and rearrange all the blocks; the point of Kaizo appears to have been to create the most evil Super Mario World hack ever.

I started watching these videos, but after seeing how the player got past the first gap I stopped, went “wait, this actually doesn’t look so bad”, and started playing it instead. It’s actually not that bad! I was expecting it to be like Super Mario Frustration, Kaizo Mario World’s equivalent in Super Mario Bros. 1 hacks– all ridiculous jumps that require pixel-perfect timing, memorizing the location of a bunch of hidden blocks that exist only to foil a jump and, occasionally, actually exploiting glitches in the game engine.

Kaizo Mario World, though, actually turns out to be more like a puzzle game– giving you a series of seemingly impossible situations and then leaving you to figure out how to get past them. It only uses the sadistic-invisible-block trick sparingly (and, hey, even SMB2JP did that a couple times). And it actually turns out to be kind of fun.

It’s still sadistically hard, though, so if you want to play it you have to use what are called “save states”. Most emulators let you do this kind of save-and-rewind thing, where if you screw up you can back up just a few seconds to the last time you were standing in a safe place. So if you’re playing Kaizo Mario World you find yourself playing the same four-second section over and over and over until you get the jump just right, listening to the same two seconds of the soundtrack looping Steve Reich style.

Anyway, the idea for the video up top was inspired by an offhanded comment in the “original” Kaizo Mario World youtube post I linked above:

The original videos were in god awful codecs that were a bitch to convert, so unfortunately the Tool Assisted Speedruns came first to most youtube watchers.
This is rather unfortunate, as I feel you lose a lot of the “appeal” by watching those.

This refers to the way that most emulators, if you are recording a video of yourself playing a game and you do the save-state rewind thing, will rewind the video too, such that the video shows only your final attempt, not any of your messups. People use this to make “speedruns” showing a best-of-all-possible-worlds recording of them playing through some game or other, with all the errors scrubbed out. The guy’s point was that watching Kaizo Mario World this way kind of ruins it, since most of what makes Kaizo great is watching someone fail over and over and over again until they finally get it right.

On the other hand, Kaizo Mario World involves SO much failing that this means all the “real” videos are, like, twenty minutes long just to get through what in a tool-assisted run would have been a two-minute level. So I was thinking: what if you had a special tool that, instead of erasing all the screwups, saved all of them and made a video of all the screwups plus the one successful path superimposed? I kept thinking about this and eventually I just sat down and hacked SNES9X to work exactly like that. The result was the video up top, showing the 134 attempts it took me to successfully get through level 1 of Kaizo Mario World.
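
Conceptually the change is small: where a normal re-recording emulator throws away the input log past the point you rewound to, the hack archives the doomed branch and starts a new one, and at export time every branch gets rendered and layered into a single clip. Just to illustrate the layering half, here is a hypothetical sketch in Python (not the actual SNES9X patch, which lives in movie.cpp), assuming each attempt has already been rendered into a NumPy array of frames:

    import numpy as np

    def composite(runs):
        """Overlay several rendered attempts into one clip.

        runs: a list of uint8 arrays shaped (frames, height, width, 3),
        one per attempt; shorter attempts are ones where Mario died early."""
        longest = max(r.shape[0] for r in runs)
        height, width = runs[0].shape[1:3]
        total = np.zeros((longest, height, width, 3), dtype=np.float32)
        active = np.zeros(longest, dtype=np.float32)
        for r in runs:
            n = r.shape[0]
            total[:n] += r   # sum the pixels of every attempt...
            active[:n] += 1  # ...tracking how many attempts are still alive
        total /= active[:, None, None, None]  # average them, so dead branches
        return total.astype(np.uint8)         # show up as translucent ghosts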

I think I’m going to make some more videos in this style of different Kaizo Mario World levels and post them back here, but in the meanwhile, if you want to make your own many-worlds speedrun videos, here’s my custom version of SNES9X 1.43 with the multi-record function:

  1. For the Mac OS X version, click here.
  2. For a Windows version, click here. (Many thanks to Syndalis of 360Arcadians for compiling this for me.)
  3. If you want a Linux version, you’ll have to compile that yourself, but you can do this by finding a copy of the 1.43 source and replacing movie.cpp with this.
  4. And for the full Mac OS X source, click here.


[Update 2/9/08: The Mac version now correctly processes movies recorded in the Windows version.]
[Update 2/10/08: Mac version updated to fix a problem where certain kinds of corrupt recording files could cause the program to loop endlessly; window titlebar now contains status information.]

Note that this is a quickly-tossed-together hack all done to make a single video, and I make NO promises as to the quality, ease-of-use, correctness, or safety of these links. Also, I think the video feature should work with any SNES game, but I’ve only tested it with Kaizo. If you attempt to try this yourself, I’d be curious to hear about your results.

To make a video: First, use SNES9X’s “record movie” function to record yourself playing some game; while the game is running, use the save and restore feature at least once. When you’re done, you’ll find that SNES9X has created a yournamehere.smv file and also a series of files with names like yournamehere.smv.1, yournamehere.smv.2, etc. These .number files are all the different “mistake” playthroughs, so keep all these files together in one directory.
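
(If you want to script around these recordings, gathering up a run and its numbered branch files might look something like the following. This is a hypothetical helper, purely for illustration; the emulator itself just needs the files sitting together in one directory.)

    import glob
    import re

    def branch_files(base_smv):
        """Collect a recording plus its numbered branch files in order,
        e.g. kaizo.smv, kaizo.smv.1, kaizo.smv.2, and so on."""
        branches = []
        for name in glob.glob(base_smv + ".*"):
            tail = name.rsplit(".", 1)[1]
            if re.fullmatch(r"\d+", tail):  # keep only the .1, .2, ... files
                branches.append((int(tail), name))
        return [base_smv] + [name for _, name in sorted(branches)]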

To turn this into an actual movie you can watch, you will need to use the OS X version of the emulator. Unfortunately, the Windows and Linux versions can only record multiple-run SMVs– they can’t do the export-to-quicktime thing. The quicktime-export code is based on alterations to the mac-specific parts of 1.43 (although considering that I hear the Quicktime API is mostly identical between Mac and Windows, it might be pretty easy to port that code to Windows at least…).

Anyway, in the OS X version, open up the appropriate ROM and choose “Export to Quicktime Movie” from the Option menu. Before leaving the export dialogue, make sure to click the “Compression…” button. You *MUST* choose either the “None” or “Planar RGB” codecs, and under the “Compressor” pane you *MUST* choose a depth of “Millions of Colors+”. The “+” is important. Once you’ve saved the movie location, go to “Play Movie” in the Option menu and choose the .smv you want to play. The emulator will play through each of the playbacks one by one; when it’s done (you’ll know because the background turns back on) your movie will appear in the location you chose.

Note that there’s one more step! You won’t be able to actually play this movie, at least not very well, because the export feature works by creating a different movie track for each playthrough and the file will be huge and bloated. Open your video in Quicktime Player, then choose “export” and export to some video codec with actual compression (like H.264). This will flatten all the different layers of the movie into one. Okay, NOW you’re done.

…So what’s this about quantum physics? Oh, right. Well, I kind of identify the branching-paths effect in the video with the Everett-Wheeler “Many Worlds Interpretation” of quantum physics. Quantum physics does this weird thing where instead of things being in one knowable place or one knowable state, something that is quantum (like, say, an electron) exists in sort of this cloud of potentials, where there’s this mathematical object called a wavefunction that describes the probabilities of the places the electron might be at a given moment. Quantum physics is really all about the way this wavefunction behaves. There’s this thing that happens though where when a quantum thing interacts with something else, the wavefunction “collapses” to a single state vector and the (say) electron suddenly goes from being this potential cloud to being one single thing in a single place, with that one single thing randomly selected from the different probabilities in the wavefunction. Then the wavefunction takes back over and the cloud of potentials starts spreading out again from that randomly selected point.

A lot of scientists really don’t like this “collapse” thing, because they’re uncomfortable with the idea of nature doing something “at random”. Physics was used to dealing with randomness before quantum physics came along– the physics of gases is all about the statistics of randomly moving gas particles, for example– but those kinds of randomness aren’t assumed to be actually random, just “effectively random” because the interactions of air molecules are so chaotic and complicated that they’re too unpredictable for humans to track. Think about what happens when you roll a die: the number that comes up when the die lands isn’t strictly speaking “random”, it’s absolutely determined by the physics of motion and the velocity at which you let go of the die and so forth. The “randomness” of a die roll isn’t about actual indeterminacy, but rather just a way of talking about your ignorance of how the deterministic processes that control the die operate. Quantum physics, on the other hand, has things that as far as anyone can tell are really, objectively random, with no mechanism producing that randomness and nowhere apparent to stick one.

Since this makes some physicists uncomfortable, they came up with a sort of a philosophical trick: they interpret quantum physics in such a way that they say when there’s more than one possible random outcome of some quantum process, then the different possibilities all happen, in alternate universes. They can’t prove or disprove that this idea is true– from the perspective of someone inside one of these universes, everything behaves exactly the same as if the “wavefunction collapse” really was just picking a random option. But it’s one way of looking at the equations of quantum mechanics, and as far as the mathematics cares it’s as valid as any other. Looking at things this way, if there’s a 3/4 chance of a quantum process doing one thing and a 1/4 chance of it doing the other, then we get three universes where the one thing happens and one universe where the other one does. This does mean that there’s some universe where two seconds ago all of the atoms in your heart spontaneously decided to quantum-tunnel two feet to the left, but in almost every universe this doesn’t happen so we don’t worry about that.

Science fiction authors love this. There’s a bunch of stories out there exploring this idea of a multiverse of infinite possibilities all occurring side by side (the best of these I’ve ever read being Robert Anton Wilson’s Schrödinger’s Cat). Most of these stories get things totally wrong. Science fiction authors like to look at many-worlds like, this morning you could either take the bus to work or walk, so the universe splits in two and there’s one universe where you decided to walk and one universe where you decided to take the bus. This is great for purposes of telling a story, but it doesn’t really work like that. The many-worlds interpretation is all about the behavior of quantum things– like, when does this atom decay, or what angle is this photon emitted at. Whereas human brains are big wet sloppy macroscopic things whose behavior is mostly governed by lots of non-quantum processes like neurotransmitters releasing chemicals.

This said, tiny quantum events can create ripples that have big effects on non-quantum systems. One good example of this is the Quantum Suicide “experiment” that some proponents of the Many-Worlds Interpretation claim (I think jokingly) could actually be used to test the MWI. The way it works is, you basically run the Schrödinger’s Cat thought experiment on yourself– you set up an apparatus whereby an atom has a 50% chance of decaying each second, and there’s a detector which waits for the atom to decay. When the detector goes off, it triggers a gun, which shoots you in the head and kills you. So all you have to do is set up this experiment, and sit in front of it for awhile. If after sixty seconds you find you are still alive, then the many-worlds interpretation is true, because there is only about a one in 10^18 chance of surviving in front of the Quantum Suicide machine for a full minute (fifty-fifty odds sixty times in a row works out to (1/2)^60, or about 8.7 × 10^-19), so the only plausible explanation for your survival is that the MWI is true and you just happen to be in the one universe where the atom’s 50% chance of decay turned up “no” sixty times in a row. Now, granted, in order to do this, you had to create about 10^18 universes where the Quantum Suicide machine did kill you, or copies of you, and your one surviving consciousness doesn’t have any way of telling the people in the other 10^18 universes that you survived and MWI is true. This is, of course, roughly as silly as the thing about there being a universe where all the atoms in your heart randomly decided to tunnel out of your body.

But, we can kind of think of the multi-playthrough Kaizo Mario World video as a silly, sci-fi style demonstration of the Quantum Suicide experiment. At each moment of the playthrough there’s a lot of different things Mario could have done, and almost all of them lead to horrible death. The anthropic principle, in the form of the emulator’s save/restore feature, postselects for the possibilities where Mario actually survives and ensures that although a lot of possible paths have to get discarded, the camera remains fixed on the one path where after one minute and fifty-six seconds some observer still exists.

Note: Please do not use the comments section of this post to discuss ROMs or where to get them. IPSes are okay. Thanks.

A Game of the Year 2007 Poll: Results

Wednesday, January 9th, 2008

CLICK HERE TO JUMP TO THE PRETTY COLOR-CODED FULL RESULTS

So for the last few years I’ve been hosting this Game of the Year poll for the users of some forums I read. There are a lot of GOTY polls out there, but this one I think is kind of special. Most polls, you’re given a list of four or five options and you’re asked to pick the one you liked best. This poll, people are given a list of a couple of hundred options, consisting of every new game released in the previous year– and asked to rate their top ten or twenty.

This does a few interesting things. First off, we get to see all the information about what people’s second, third etc choices are. Second off, because the second, third etc choices count, people are more likely to vote for the game they want to win, rather than the game they think is likely to win– they’re less likely to engage in “strategic voting”. Finally, because we have this information, we’re actually able to provide somewhat reasonable rankings for something like the top hundred or so games of last year.

The full results– showing the exact number of voters who ranked each game first, second, third place etc– can be found here. In the meantime, the final results were:

  1. Portal (9532) *** GAME OF THE YEAR ***
  2. Bioshock (8004)
  3. Super Mario Galaxy (7968)
  4. Mass Effect (5874)
  5. Team Fortress 2 (5256)
  6. Call of Duty 4 (5051)
  7. Halo 3 (4848)
  8. Half Life 2: Episode 2 (4660)
  9. Rock Band (3788)
  10. Guitar Hero 3 (2968)
  11. Metroid Prime 3: Corruption (2948)
  12. Assassin’s Creed (2937)
  13. Legend of Zelda: Phantom Hourglass (2605)
  14. World of Warcraft: Burning Crusade (2324)
  15. Super Paper Mario (2215)
  16. Pokemon Diamond/Pearl (2083)
  17. Crackdown (1978)
  18. Puzzle Quest: Challenge of the Warlords (1773)
  19. Phoenix Wright: Ace Attorney – Justice for All (1621)
  20. Zack and Wiki (1453)

The numbers in parentheses are the final scores each game got under the poll’s ranking system. (Bioshock and Galaxy were close! As were Guitar Hero, Metroid, and Assassin’s Creed.) Thanks if you voted, and some more elaborate analysis of the results (plus an explanation of the scores) can be found below.

NOTEWORTHY WINNERS

  • GOTY 2007:

    #1, Portal

  • Top-ranked 360 Exclusive:

    #4, Mass Effect

  • Top-ranked Wii Exclusive:

    #3, Super Mario Galaxy

  • Top-ranked PS3 Exclusive:

    #34, Uncharted: Drake’s Fortune

  • Top-ranked PC Exclusive:

    #14, World of Warcraft: Burning Crusade

  • Top-ranked DS Exclusive:

    #13, Legend of Zelda: Phantom Hourglass

  • Top-ranked PSP Exclusive:

    #56, Castlevania: Dracula X Chronicles

  • Top-ranked GBA Exclusive:

    #100, Legend of Spyro: Eternal Night

  • Best FPS:

    #1, Portal

  • Best RPG:

    #4, Mass Effect

  • Best Sports Game:

    #27, Forza Motorsport 2

  • Best Game Only Available Through A Console Download Service:

    #49, Sin and Punishment

  • Special “Cult” Award (see below):

    #25, Persona 3

NOTEWORTHY LOSERS

  • Best game of 2007 which somehow nobody considered to be their #1 pick: #19, Phoenix Wright: Ace Attorney – Justice for All
  • Worst game of 2007 that at least one person considered their #1 pick: #187, Hammerfall (Only two people voted for this)
  • Worst game of 2007: #208, SNK vs Capcom Cardfighters DS (Only one person voted for this; it was their #19 pick)

There were also five games which were listed, but which no one voted for at all.

ALTERNATE SCORING METHODS

The rankings listed above are based on an approximation of Condorcet voting. In my pseudo-Condorcet approximation, each vote cast for a game gives that game a certain number of points. If someone ranks a game #1, that game gets 20 points. If they rank it #2, the game gets 19 points. If they rank it #3 the game gets 18 points… and so on. I have a script that checks a couple of alternate ways of ranking the same data, though.

For example, if we rank games only by the number of first-place votes they got, we get a slightly different list:

First Past the Post

  1. Portal (176 first-place votes)
  2. Super Mario Galaxy (170)
  3. Mass Effect (105)
  4. Bioshock (88)
  5. Rock Band (48)
  6. Team Fortress 2 (44)
  7. Call of Duty 4 (39)
  8. Halo 3 (27)
  9. Persona 3 (20)
  10. World of Warcraft: Burning Crusade (14)
  11. Metroid Prime 3: Corruption (12)
  12. S.T.A.L.K.E.R.: Shadow of Chernobyl (10)
  13. Half Life 2: Episode 2 (10)
  14. Zack and Wiki (9)
  15. Uncharted: Drake’s Fortune (9)
  16. Phoenix Wright: Ace Attorney – Trials and Tribulations (8)
  17. Forza Motorsport 2 (7)
  18. Legend of Zelda: Phantom Hourglass (6)
  19. Pokemon Diamond/Pearl (6)
  20. skate. (5)

Every year when we do this there’s some game which scores horribly low in the objective rankings but gets a really startling proportion of first-place votes; this year the standout game in the “cult” department was Persona 3, which only got 78 votes at all, placing it at #25 in the overall rankings– but 20 of those votes ranked it in first place, putting it in ninth place above.

I also did two more ways of sorting the rankings: an “approval” vote, where nothing is counted except the number of votes a game received (i.e. a first-place and a twentieth-place ranking count the same– all that matters is whether the game was on someone’s list); and an instant runoff vote. Almost every time I’ve ever done this the Instant Runoff and pseudo-Condorcet rankings have been almost the same, but this time they were actually kind of different. Your eyes are probably starting to glaze over at this point, so I bolded the places where these two votes differ from the normal rankings:

Approval

  1. Portal (537)
  2. Bioshock (473)
  3. Super Mario Galaxy (445)
  4. Mass Effect (336)
  5. Team Fortress 2 (322)
  6. Halo 3 (321)
  7. Call of Duty 4 (314)
  8. Half Life 2: Episode 2 (307)
  9. Rock Band (228)
  10. Guitar Hero 3 (211)
  11. Assassin’s Creed (202)
  12. Metroid Prime 3: Corruption (200)
  13. Legend of Zelda: Phantom Hourglass (182)
  14. Super Paper Mario (176)
  15. World of Warcraft: Burning Crusade (159)
  16. Crackdown (153)
  17. Pokemon Diamond/Pearl (150)
  18. Puzzle Quest: Challenge of the Warlords (139)
  19. Phoenix Wright: Ace Attorney – Justice for All (122)
  20. S.T.A.L.K.E.R.: Shadow of Chernobyl (102)

IRV

  1. Portal
  2. Super Mario Galaxy
  3. Bioshock
  4. Mass Effect
  5. Team Fortress 2
  6. Call of Duty 4
  7. Halo 3
  8. Half Life 2: Episode 2
  9. Rock Band
  10. Metroid Prime 3: Corruption
  11. Assassin’s Creed
  12. Guitar Hero 3
  13. Legend of Zelda: Phantom Hourglass
  14. Super Paper Mario
  15. World of Warcraft: Burning Crusade
  16. Crackdown
  17. Pokemon Diamond/Pearl
  18. Puzzle Quest: Challenge of the Warlords
  19. Zack and Wiki
  20. S.T.A.L.K.E.R.: Shadow of Chernobyl

FINALLY: PER-FORUM BREAKDOWNS

As mentioned before, this poll mostly exists for a handful of video game forums where some people I know post. This year, I decided to actually run some extra results, in each case counting only those voters who– as far as one could tell from looking at the logs– had come to the poll from one particular forum or other. Meanwhile, as coincidence would have it, a few days into the vote one of the posts from my blog– where I had also posted about the poll– got linked by Digg, and as far as I can tell from the logs a group of the Digg users actually clicked over to the next post and voted in the poll.

So, here you have it– these numbers aren’t totally accurate because my logging method is not entirely trustworthy, but here’s an approximate by-forum breakdown of these results. Links go to color-coded full listings.

Penny Arcade Forums (678 voters)

  1. Portal
  2. Bioshock
  3. Super Mario Galaxy
  4. Mass Effect
  5. Team Fortress 2
  6. Call of Duty 4
  7. Half Life 2: Episode 2
  8. Halo 3
  9. Rock Band
  10. Guitar Hero 3
  11. Metroid Prime 3: Corruption
  12. Assassin’s Creed
  13. World of Warcraft: Burning Crusade
  14. Legend of Zelda: Phantom Hourglass
  15. Super Paper Mario
  16. Pokemon Diamond/Pearl
  17. Crackdown
  18. Puzzle Quest: Challenge of the Warlords
  19. S.T.A.L.K.E.R.: Shadow of Chernobyl
  20. Phoenix Wright: Ace Attorney – Justice for All


Platformers.net (73 voters)

  1. Super Mario Galaxy
  2. Portal
  3. Bioshock
  4. Legend of Zelda: Phantom Hourglass
  5. Metroid Prime 3: Corruption
  6. Super Paper Mario
  7. Half Life 2: Episode 2
  8. Phoenix Wright: Ace Attorney – Justice for All
  9. Phoenix Wright: Ace Attorney – Trials and Tribulations
  10. Team Fortress 2
  11. Pokemon Diamond/Pearl
  12. Guitar Hero 3
  13. Halo 3
  14. Mass Effect
  15. Zack and Wiki
  16. Crackdown
  17. Hotel Dusk: Room 215
  18. Wario Ware: Smooth Moves
  19. Sin and Punishment
  20. Call of Duty 4

360Arcadians.net (53 voters)

  1. Bioshock
  2. Portal
  3. Mass Effect
  4. Call of Duty 4
  5. Halo 3
  6. Assassin’s Creed
  7. Rock Band
  8. Team Fortress 2
  9. Super Mario Galaxy
  10. Crackdown
  11. Forza Motorsport 2
  12. Half Life 2: Episode 2
  13. Puzzle Quest: Challenge of the Warlords
  14. Ace Combat 6: Fires of Liberation
  15. skate.
  16. World of Warcraft: Burning Crusade
  17. Overlord
  18. Uncharted: Drake’s Fortune
  19. God of War 2
  20. Pac-Man Championship Edition


Digg?!? (16 voters)

  1. Super Mario Galaxy
  2. Portal
  3. Bioshock
  4. Guitar Hero 3
  5. Super Paper Mario
  6. Half Life 2: Episode 2
  7. Metroid Prime 3: Corruption
  8. Rock Band
  9. Legend of Zelda: Phantom Hourglass
  10. Assassin’s Creed

(Incidentally, the 360Arcadians guys’ 21st-place pick was Ratchet and Clank for the PS3, and their 22nd-place pick was Picross DS.)

Datafall.org, take two

Tuesday, November 20th, 2007

SHORT VERSION

So I’ve made this website for building blog communities, and it’s called datafall.org. Datafall lets you start these things called “blogcircles”. If there’s some group of people– maybe the people from your web forum, or your fish club, or just your circle of friends– that you know have blogs, you can go start a blogcircle for those people, and then send them a link to it. Then all they have to do is hit the “click here to join this blogcircle” link at the top of the page and enter the URL of their blog, and from then on, whenever they post something at their blog it will appear at your circle at Datafall, too, with a link back to the blog that posted it. (Datafall doesn’t host any blogs on the website itself– if that’s what you want, there are lots of great free websites for that. What Datafall does is bring blogs together.) Finally (as you can see in the sidebar on the right of my own blog’s main page), once you’ve got everyone hooked up to the blogcircle, Datafall gives you several ways of embedding the stream of links from your blogcircles right into your blog, so that your own blog can show in realtime a list of the new posts by all your friends without you having to do anything.

In other words, Datafall is an RSS Aggregator, except that normal RSS Aggregators are controlled by just one person and read by just one person, and the blogcircles at Datafall are open to the world.
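
To give a flavor of what that means mechanically (this is an illustrative sketch, not Datafall’s actual Django code), merging the newest posts from every member feed of a circle might look something like this in Python, using the feedparser library:

    import time

    import feedparser  # a widely used RSS/Atom parsing library

    def circle_posts(feed_urls, limit=20):
        """Fetch every member feed and merge their entries, newest first."""
        posts = []
        for url in feed_urls:
            for entry in feedparser.parse(url).entries:
                stamp = entry.get("published_parsed") or entry.get("updated_parsed")
                if stamp:
                    posts.append((time.mktime(stamp), entry.title, entry.link))
        posts.sort(reverse=True)  # newest first
        return posts[:limit]

    # Hypothetical member blogs; a real circle would pull its URL list
    # from the site's database.
    for _, title, link in circle_posts([
        "http://example.com/alice/feed.rss",
        "http://example.com/bob/feed.rss",
    ]):
        print(title, "->", link)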

If you want to see how this works in practice, take a look at the Platformers community blogcircle, which is the first (and as of this writing only) blogcircle on the site; I set it up for the people I know at Platformers.net, a website that has a small gaming forum I belong to. I hope you’ll consider setting up a blogcircle for the people you know, too.

LONG VERSION

This next part mostly has to do with the history of the website at datafall.org right now, and this may or may not be of any interest to you. But, here it is anyhow.

I actually started Datafall something like a year and a half ago, but until the last couple of weeks it was actually a completely different site. The original Datafall was kind of modeled to be an RSS-based implementation of Scoop, which is the engine that DailyKos and Kuro5hin use. Scoop sites are kind of like little Slashdots, with a stream of special “official” blog posts on the front page approved in some way by either the site operators or the community of users, and then a stream of “diaries” freely posted on the side pages by normal users. My thought was that I’d used a couple of Scoop-based sites and liked them a lot; but the problem was it was hard for Scoop communities to get started, and they died really easily, since being a user on a Scoop site is kind of an all-or-nothing proposition. Becoming a user on a Scoop site effectively meant either starting a blog there or moving your existing blog, and once your blog was set up there it was likely no one would be reading it except the other site users– so you kind of had a lot invested in the site. Scoop sites which have a real critical mass to them, like DailyKos, could make this argument easily and thrived; others couldn’t and struggled.

I looked at this situation and thought: Scoop sites have great community integration features, but they usually aren’t very good blog sites. So, why not make something Scoop-like which has all of Scoop’s great community features– but which doesn’t try to be a blog site at all? The blog posts could be posted elsewhere, and the Scoop site could just slurp them from RSS, and categorize and link to them. So I installed Ruby on Rails and did some tinkering, and this is basically what the first Datafall was. It looked exactly like a Scoop site, but if you clicked on any of the posts you’d find yourself on an external blog.

A year passed, and I eventually realized two things: One, I hated Ruby On Rails; and two, nobody was using my Scoop-style Datafall site, nor did it seem likely anyone was going to start doing so in future. Why would they? The site didn’t really give you any reason to use it; the site had all these community features, but these features only made sense if the site already had a big community, and this site didn’t. The only people who were using Datafall were the people who knew me from this Platformers forum I visit; Datafall was basically being used as a blog tracker for the Platformers forum users. Which was actually kind of neat, but it meant most of the Scoop-style features were useless.

So, okay then, I thought, if these Scoop-style features aren’t any use for the way people are using the site, then why keep those features? So I deleted the whole site, installed Django to replace the Ruby On Rails stuff that wasn’t working, and started over. The result is the site you see there now. The database contents and the CSS file moved over from the old site to the new one; everything else is new.

So, what’s the point of the new site?

The idea for the new site can actually be seen in one of my “to do” bullet points for the old site. If you’ve ever used LiveJournal, you’ve probably seen these “groups” they have. LiveJournal groups are like little group blogs, where anyone with a LiveJournal account– when they write up a blog post– can choose to drop the post to the LiveJournal group instead of the normal blog. Which is really neat, but of course it has the problem that you have to have a LiveJournal account in order to use it. This doesn’t make much of a difference since you can of course start a LiveJournal account just to post in the groups, but in a certain sense this still is a little bit like the “all or nothing” problem I mention with Scoop– you can take advantage of this neat blog community feature on this one site, but unless you just happen to host your blog on the same site then a lot of the community integration is lost. I thought– as long as the point of Datafall is to offer blog community-building features, based around using RSS to paste different sites together– that replicating this groups feature on Datafall would someday make sense too. But at first I assumed that this was something to put off until the site had grown a bit– since after all, who would join these groups if the site doesn’t have any users yet?

On the other hand, Datafall in the pseudo-Scoop era was, if you think about it, basically like one little LiveJournal group unto itself– the Platformers community LiveJournal group, say. No one there wanted to use the Scoop-style features, but there was this group of people in this existing, external community who had blogs and were using the Datafall site for LiveJournal-group-style features. Looking at this I figured, well, if the people on Platformers are using Datafall for this purpose, might there also be other small net communities who might be interested in doing so too, as long as the site supported it?

So, that’s basically what blogcircles are: blogcircles are kind of like the “groups” on LiveJournal, but posts can go there from any blog, myspace page, any website at all, not just the blogs on LiveJournal. And hopefully this gives the reason why people would want to use Datafall in its new form: because there are people who are already in some little community whose members just happen to have blogs, and Datafall gives that community some way to organize itself.

OKAY, SO HOW DO I USE THIS THING?

If you go to Datafall, you’ll see a handful of links on the front page. Feel free to browse the feeds and blogcircles already on the site, but probably what you want to do is either add a new RSS feed to Datafall, or add a new blogcircle. (Don’t worry about making an account; this will be done automatically once you start doing things.)

Let’s say you want to add a new blogcircle. Hit the “Add a blogcircle” link, and fill out the form– all you really need to give it is a name, but if you want you can also add a description and a URL of your choice. (If you’re not already logged in, the form will also have you create a new account.) Once you’ve created your blogcircle, when you look at that blogcircle logged in there will be some extra options visible to you– as the owner of the blogcircle– which don’t appear for anyone else. Specifically you’ll be able to edit the blogcircle’s information, or “attach a feed”. You might actually want to do the second one of these– what this means is that you can add a special item to the Datafall sidebar, visible whenever anyone looks at the blogcircle, containing the contents of an RSS feed of your choice. (For example, remember me mentioning the blogcircle for the “Platformers” video game forum? Well, on the Platformers blogcircle, the “attached feed” shows the recent front-page posts on Platformers itself.)

Alternately, let’s say you want to add your blog to Datafall. Datafall calls the blogs it’s keeping track of “feeds” (since, after all, they might not be blogs exactly). You’ll probably want to do this in the context of adding your blog to a blogcircle– if you want, you can just add your feed now and join a blogcircle later once more blogcircles have started, but I don’t have the site to the point yet where you can get a lot of use out of it without being in some blogcircle! Maybe later. So what you’ll probably want to do for now is go to the page for a blogcircle you want to join; if you look, you’ll see at the top a link that says “Click here to join this blogcircle”. Hit that, and a form will appear asking for the URL of the website you want to add (and creating an account if you aren’t signed in already). That’s it! Your most recent post will appear on the blogcircle immediately, and when you make posts in future, Datafall will notice and add those to the blogcircle, too.

Note that once you’ve logged in, the front page will appear as a summary of all the different blogcircles you belong to; the links that are normally in the front-page directory move to the sidebar on the right.

One last thing you might want to do, if you’ve found or started a blogcircle you really like, is embed the blogcircle into your own blog in such a way that anyone visiting your blog can see what’s been posted in your blogcircle lately without having to go all the way to Datafall. I am trying to set up Datafall so as to make it simple to embed a “faucet” from Datafall into absolutely any web page, anywhere– although how you do it may be different depending on where your site is hosted. Maybe I’m biased because I’m trying to sell you on this Datafall thing I made here, but this is actually a feature of a kind I’ve been wishing blogs had for a long time. Most blogs have a little “blogroll” bar on the right side of the page, linking an occasionally huge number of different blogs that the blogger likes; but you usually don’t have any idea which, if any, of these different blogs actually have new content. I think it would be neat if instead of forcing viewers to check each item on your blogroll manually, you could just show them an up-to-the-minute listing of all the newest posts by people on your blogroll. The Datafall embedding feature tries to be a step toward that.

If you want to try to do this, what you should do is go to Datafall and look on the right-hand sidebar, underneath where the login box normally would be. On some pages on Datafall, particularly blogcircles, there will be a little box here labeled “Embed”. This box will contain a link, which will take you to a page containing instructions on how to embed the live listing from that specific page somewhere else. The instructions page in question will have different instructions for different kinds of websites and blogs– blogspot accounts, WordPress blogs, plain html sites, etc– and most of the instructions will consist of a large block of HTML which you’re supposed to paste somewhere or other. Part of the reason why I have different instructions for each different kind of blog is that I’m trying to provide some way of embedding that can blend into your site completely seamlessly– I’m trying to set things up so that if you embed a blogcircle in a webpage it looks like it was designed to be there. Note, though, that I only have a few kinds of sites listed there now. If you have a blog or website that isn’t covered by the instructions on that page, then please do post in the comments below, tell me what kind of blog it is and why it is that the existing instructions don’t work, and I’ll see if I can add a section for your blog type.

IS THAT IT?

So this is basically what Datafall is right now. I’m still actively working on it, and since the new site is a lot easier to make changes on than the old one I hopefully should be able to do them at a potentially fast clip. I’m happy to take any suggestions for improvements, and I’ve got a list of improvements I’m going to try to add as soon as I can. Here are some of the things I want to work on with Datafall in the future:

  • Right now you can’t post to any blogcircle except one you’ve specifically joined– and once you’ve joined, you can’t not post to it. Every post you make on the blog will appear on all of your blogcircles, period, and you can’t remove them. This needs to be fixed stat. You should be able to add yourself to a blogcircle “conditionally”, such that your posts are displayed on the blogcircle only when you assign them to be rather than automatically; you should be able to withdraw a post from a blogcircle or from Datafall if you want; and ultimately I think it would be neat if there were “open” blogcircles that anyone could post to, whether they’ve joined the blogcircle or not. (So for example there may be like a “Science” blogcircle, and any individual post from any feed on Datafall could be assigned to appear on the Science blogcircle so long as it had science content.) This kind of hands-on way of using Datafall probably isn’t the way most people would want to use it– better to just use it the normal way and have the site do everything for you automatically– but it should at least be an option.
  • Right now there really aren’t any limits on who can join what blogcircle. This could potentially be kind of bad; in many cases it would make sense for some kinds of blogcircles to be able to control their membership and content. There needs to be the ability for the operator of a blogcircle to remove feeds and posts that aren’t appropriate to that blogcircle, and it needs to be possible to set a blogcircle such that when people click “join blogcircle” they aren’t instantly added, but have to be approved first. (Conversely, if there’s a feed on Datafall you like or think is appropriate to a particular blogcircle, maybe it should be possible to invite people.)
  • Right now the only person who can do any kind of maintenance on a blogcircle is the person who created it. The blogcircle owner should be able to delegate authority. This doesn’t make much difference right now, when there’s very little that even the blogcircle owner is able to do, but once the blogcircle owner gains the ability to delete posts, approve feeds, etc., the blogcircle owner should be able to also give select members of the blogcircle the ability to do the same.
  • Combining the above three ideas together, blogcircles should eventually have a full set of membership and moderation tools.
  • AJAX. If you don’t know what AJAX is, then don’t worry about it too much, but in my book this is a biggie. The old Datafall had some great AJAXy features– this was the one thing Ruby On Rails was good at– but the new Datafall has none, mostly because I haven’t found a good replacement for the Rails AJAX helpers yet. Incidentally, if anyone can recommend a Python library for AJAX generation hopefully analogous to RJS for Ruby, please let me know.

Okay, that’s it.

Pretty pictures

Wednesday, April 11th, 2007

So last week I made a little custom variation on an old computer program called the “Game of Life”, with the hopes that my version could be used to demonstrate neat things about quantum physics. As chronicled in my last blog post, that basically didn’t work at all. However, despite not working the way I’d hoped it did, the program did incidentally happen to burp out some very pretty looking animations. This week I figured I’d just say screw the quantum physics stuff, and instead just explore what kinds of interesting pictures I could tease out of the program.

Before I go on, a couple of things to note:

  1. There was a mistake in my description of Life last week; any time I made a reference to the number of “neighbors” a cell had, I was counting the cell itself toward the “neighbor” total. The program behaves the same way however you do the counting, but this means some of the numbers from my description last week aren’t the same as the ones this week.
  2. Because some of the images in this week’s post are kind of large, I posted all the images as the non-animated first frame, and you have to click the image to see the animated version. I assure you that scanning through here and clicking the linked pictures is probably more worth it than actually reading the text.

So, what’s this about quantum physics?

I’m still trying to figure this out, but this is what I’ve got so far: Quantum physics is a special very weird kind of way of looking at the universe where things aren’t ever one particular way, but are probably one of a bunch of different kinds of ways. If a particle is heading down a path and reaches some kind of fork where it could equally likely go either one way or the other, it possibly goes both ways at once, so we say it’s 50% at the end of one of the paths and 50% at the end of the other. It stays in this noncommittal state until some kind of interaction with the environment happens in a way that requires it to actually be in either one place or the other, at which point the particle randomly picks either one or the other and materializes 100% at the end of one of the paths as if it had been heading that way all along. As soon as this “wavefunction collapse” style interaction is over and the environment is looking the other way, the particle smears back out into possibilities again and starts again taking all the different paths it could at once, but now only those paths that start from the point at which its probabilities last collapsed. The splits in the paths it takes usually aren’t as simple as 50% one way, 50% the other– usually instead it goes in absolutely all directions at once, and its position is described by something called a “distribution function” that describes all the different things that particle could be doing at some given moment, and how likely each possibility is compared to the others. Because of something I don’t understand called the “Schrödinger equation”, the way these particles probabilistically spread out in every direction at once means they act like waves, which is where the “wave/particle duality” thing you hear about sometimes comes from– particles spread out as waves of probability until they encounter something that needs them to be a particle, at which point they switch to being particles, and after that they start spreading out as probability waves again. (We as humans don’t ever experience things that have this probably-there quantum behavior, like cats that are simultaneously alive or dead– the things that have quantum behavior are all very small, and things like cats or pennies or humans are very large, so by the time you start getting to interactions big enough that they can comprise our everyday experience all the little quantum things have been overwhelmingly spooked into acting like particles as far as we can tell.)

From a math person’s perspective, this whole thing about distribution functions that collapse into certainties upon measurement all just looks like a very contrived way of describing a system where you can’t make exact predictions about some of the things that happen and instead have to describe the probabilities. There’s a weird twist, though, because it’s more than that. When we talk about a particle being 50% in one place and 50% in another, we don’t just mean that there’s a 50% probability, if somebody barged in and collapsed the wavefunction, that the particle would turn out to have been there. The particle actually is there, in a 50% sort of way, and so all the different probably-there versions of the particle engage in something called “self-interference”, which means the different possible particles can interact with each other. If the particle goes 50% down one path and 50% down another, and then the two paths suddenly converge back again, the two 50% particles can actually smash into each other (destructively interfering with each other the same way waves do) and destroy each other at that point. All this weird behavior is kind of odd and interesting, and seems like it would be kind of fun to just play around and experiment with. Playing around with quantum mechanical objects, of course, requires either enormous and expensive test equipment that doesn’t do what you want anyway because of the Heisenberg Uncertainty Principle, or simulating everything with really really difficult math. The second of these sounds more attractive, and the fact the math is really hard probably isn’t such a problem; maybe one could make some kind of simulation program where the computer does all the math and you just mess around with stuff on the screen acting quantumy.
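
That self-interference business is easy to show numerically, at least in toy form. Here is a tiny, textbook-style Python illustration (my own example, not anything from the program below) of two equally likely paths recombining: classically the two 50% chances just add up, but quantum mechanically what adds is the amplitudes, and those can cancel.

    import numpy as np

    # Amplitudes for two paths that recombine at the same point.
    # Probability = |amplitude|^2, so each path alone is a 50% chance.
    a_path1 = 1 / np.sqrt(2)
    a_path2 = -1 / np.sqrt(2)  # same magnitude, opposite phase

    print("either path alone:", abs(a_path1) ** 2)                      # 0.5
    print("classical sum:    ", abs(a_path1) ** 2 + abs(a_path2) ** 2)  # 1.0
    print("quantum sum:      ", abs(a_path1 + a_path2) ** 2)            # 0.0
    # The two 50% particles destructively interfere, and the particle
    # never arrives at all.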

If you were going to do that, though, the obvious, particle-based stuff that quantum physicists spend most of their time working with doesn’t seem like the best place to start. Actual quantum physicists are mostly interested in hopelessly dry and practical things, like the behavior of electrons. There’s a good reason for that, but this sort of thing probably wouldn’t make for a very interesting or attention-capturing experience for someone just wanting to play around with something on a computer. I for one cannot say that electrons have much applicability or relevance to my daily life.

Given this, I was curious what else a Quantum Something simulation on a computer could be based around besides just particles. And I was thinking a good place to start would be cellular automata, like the Game of Life, because at least in the classical version they’re simple enough to be played with by clicking around at random but have enough hidden depth to make little machines out of; and also because they’re readily adaptable to acting like little nodes on a funny-shaped grid, which is incidentally the way that this “quantized geometry” thing that some quantum gravity people are moving toward lately looks from the perspective of a computer. A quantum Game of Life seemed like an interesting idea, so for my blog post last week I made a little animated GIF generator that ran the Game of Life, with the addition of “probable” cells which had a certain probability of either being there or not being there (with higher or lower probabilities represented by darker or lighter shades of gray) in hope of simulating or at least mimicking quantum behavior. As I said before, this didn’t really work.
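
If you want a concrete picture of what a rule like that involves, here’s one natural way to implement a single generation, treating every cell as an independent weighted coin. (This is a guess-level sketch of the idea in Python, not the actual GIF-generator code, and as the first note above says, it doesn’t matter whether you count the cell itself among its “neighbors”; this version uses standard eight-cell counting.)

    import numpy as np

    def step(p):
        """One generation of 'gray' Life, where p[y, x] is the probability
        that cell (y, x) is alive and cells are treated as independent.
        (Real quantum amplitudes would need joint states, so this is only
        a probabilistic mimic.)"""
        h, w = p.shape
        nxt = np.zeros_like(p)
        for y in range(h):
            for x in range(w):
                # dist[k] = probability that exactly k of the 8 neighbors
                # are alive, built up one neighbor at a time
                dist = np.zeros(9)
                dist[0] = 1.0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if dy == 0 and dx == 0:
                            continue
                        q = p[(y + dy) % h, (x + dx) % w]  # wraparound edges
                        shifted = np.concatenate(([0.0], dist[:-1]))
                        dist = dist * (1 - q) + shifted * q
                # Conway's rules in expectation: born on exactly 3 live
                # neighbors, survives on 2 or 3
                nxt[y, x] = dist[3] + p[y, x] * dist[2]
        return nxt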

Why not?

I’m not sure what I was expecting to see when I turned the thing on, but I think I kind of hoped that the patterns would act kind of like the maybe-there maybe-not particles in quantum physics– like, I perhaps expected that one way or another you would be able to make little partly-there patterns, like maybe a little 50% glider or something, and it would wiggle off in a 50% way until it collided with another glider, at which point depending on what hit what and how probable it all was maybe they’d destroy each other or merge into one 100%-there object or maybe even spin off several different overlaid collision patterns at the same time representing different probable results that the collision could have produced. In retrospect, it’s pretty obvious why it was completely implausible to expect behavior like this.

Patterns in “quantum Life” did in fact turn out to exhibit something like self-interaction. Unfortunately, they exhibited much too much of it. It isn’t just that patterns in Life interact with each other; every single cell in Life interacts with every single cell neighboring it, and one neighbor more or less tends to make the difference between life and death in the next generation– the survival conditions in Life are extremely narrow. These properties– every cell interacts with every other, and you only get what you want under narrow conditions– are in fact what makes classical Life patterns capable of such complexity and thus interesting to study in the first place. In the quantum version of Life, though, these properties mean that if you introduce just one “gray” (probable rather than certain) pixel into the quantum Life board, then in the next generation every cell that pixel was touching will also be some shade of gray. And when you start getting a few grayish pixels next to each other, everything just kind of falls apart– fill an area in this conception of quantum Life with gray, and on every turn you’re basically trying every single combination of living and dead cells possible in that space, with some combinations maybe being more likely than others. Since only a narrow range of configurations allows life to survive in Life, this means that each gray pixel becomes less and less likely to survive with each frame, and gray areas very quickly fade to apparent white:

Interestingly, though, they don’t fade directly to white– they first try to stabilize at a specific gray value around a probability of 35%, with the 35% gray areas spreading to swallow up all the black pixels. If this 35% gray color manages to spread to cover the entire board, with each pixel being about as likely as any other, it just stays that way forever. If any areas at (or stabilized near) 0%– that is, white– exist on the board at all, however, the white eats away at any gray areas (or at least those that aren’t actively eating up new 100% black pixels) until nothing appears to be left. In practice this looks to me kind of like a zombie invasion spreading to eat all the Life and then dying out for lack of food:

Which is kind of amusing, but makes it really hard to do anything with the gray cells. In normal Life, where things are always either dead or alive instead of just probabilistic, you can build little Life machines by taking advantage of structure in the regular ways that the cells interact with each other. In the quantum Life game, though, any structure is immediately destroyed as soon as it comes into contact with a gray pixel. Pattern-building in Life is based around individual pixels interacting with each other; but in quantum Life individual pixels lose their identity, and instead just smear out into rapidly disappearing gray blobs. This is about as far as I got by the end of my last blog post about this.
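(For anybody who wants to reproduce this at home: the update boils down to treating each cell’s probability as independent, and taking the expected value of the normal Life rule over every possible live/dead combination of a cell’s neighbors. Here’s a rough Perl sketch of one generation of that– a distillation for this post rather than a verbatim excerpt from my module, using the usual count-the-eight-neighbors statement of Conway’s rule:)

#!/usr/bin/perl
use strict; use warnings;

# one generation of probabilistic Life: each cell holds a probability of
# being alive (0 = white, 1 = black), and cells are treated as independent
sub step_probabilistic_life {
    my ($board) = @_;               # reference to a 2D array of probabilities
    my $h = @$board;
    my $w = @{ $board->[0] };
    my @next;
    for my $y (0 .. $h - 1) {
        for my $x (0 .. $w - 1) {
            # gather the eight neighbor probabilities, wrapping at the edges
            my @n;
            for my $dy (-1, 0, 1) {
                for my $dx (-1, 0, 1) {
                    next if !$dy && !$dx;
                    push @n, $board->[($y + $dy) % $h][($x + $dx) % $w];
                }
            }
            my $p_self = $board->[$y][$x];
            my $p_new  = 0;
            # try every one of the 256 live/dead combinations of the neighbors
            for my $mask (0 .. 255) {
                my ($p_combo, $live) = (1, 0);
                for my $i (0 .. 7) {
                    if ($mask & (1 << $i)) { $p_combo *= $n[$i]; $live++ }
                    else                   { $p_combo *= 1 - $n[$i] }
                }
                # Conway's rule, weighted by how likely this combination is:
                # survive on 2 or 3 live neighbors, spawn on exactly 3
                $p_new += $p_combo * $p_self       if $live == 2 || $live == 3;
                $p_new += $p_combo * (1 - $p_self) if $live == 3;
            }
            $next[$y][$x] = $p_new;
        }
    }
    return \@next;
}

(Note that a board of pure 0s and 1s never produces a fractional probability under this rule, which is why ordinary Life falls out as a special case– the gray only ever comes from gray you put in yourself.)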

This week, my thought was: Before I give up on this completely, maybe I can at least get the blobs to do something interesting. If the tendency for everything to turn into blobs means I can’t build structures out of individual pixels, maybe I can at least build structures out of the blobs. Before I could do that, though, I first had to do something about the whole fading-to-white problem, since few if any of my patterns survived long enough to analyze them in an interesting way. I came up with two ways of getting around this problem. The first was just kind of silly, but I was curious what it would do so I tried it. The second thing I tried actually worked quite well.

As far as the first thing I tried goes, here’s the thing: Everything fades to white in these animated GIFs, but that’s partly just a result of the way they’re rendered. After the gray fades away, there’s still stuff going on in the apparently blank areas; it’s just that anything less than 0.0015% probability winds up being rounded down to 0% (white) when colors are picked, so you can’t see them in the GIFs. Likewise, even when everything appears to have stabilized at 35% gray, some areas will be closer to that stable gray point than others. I kinda wanted to see what this all looks like, so I changed the program so that it “normalized” each frame before it drew it– that is, instead of black being 100% and white being 0%, it redefined black and white as just the highest and lowest probability values visible on that frame. This lets us watch exactly what’s going on while various things are fading out:

If you’re wondering what happens at the end of those animations there, that’s what happens when the probabilities at each pixel get so low that Perl can no longer accurately represent them. Perl only uses 64 bits to store each floating point number; this is normally way more than you need, but when you start trying to fit really, really low numbers into that space, like around 2^-1022, you start losing precision and eventually lose the ability to tell the difference between any given number and zero entirely. With the kind of math the quantum Life program does, you get into this area pretty quickly, and with the normalization turned on the rounding errors that result become very clearly visible. Oddly, though, the rounding errors turn out to look a lot more interesting than the quantum Life program does when it’s operating normally. I may try to experiment with that more later, to see if I can replicate that behavior on purpose (hopefully this time by just rounding things instead of actually breaking the Perl interpreter).
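(The normalization itself is only a few lines– something like this, where a board is a 2D array of probabilities; again a distilled sketch rather than the exact code from my module:)

# rescale a frame so the largest probability on it maps to black (1)
# and the smallest maps to white (0)
sub normalize_frame {
    my ($board) = @_;
    my ($min, $max);
    for my $row (@$board) {
        for my $p (@$row) {
            $min = $p if !defined $min || $p < $min;
            $max = $p if !defined $max || $p > $max;
        }
    }
    my $span = ($max - $min) || 1;   # a perfectly flat board just maps to all white
    return [ map { [ map { ($_ - $min) / $span } @$_ ] } @$board ];
}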

After moving on from the normalization thing, I had a bit more luck with the next thing I tried. That was this: okay, so Life tends to turn into disappearing gray blobs when you add probability to it. But Life isn’t the only standard cellular automaton. Why not just try another one?

The set of rules Conway’s Life uses is really a bit arbitrary, and there are a lot of variations on those rules out there. Life, because it’s the best known and because it behaves in a way that exhibits a relatively nice balance, is the variation you generally hear about, but there’s nothing particularly special about it; to someone exploring the alternate rules, Conway’s Life becomes just the rule with serial number 23/3 (because life survives if it has two or three neighbors, and spawns if it has exactly three neighbors). Beyond this there are in fact 262,144 different 2D cellular automata rules of the same type (that is, where the rules are unchanged besides the survive/spawn numbers), and they behave drastically differently. Some of them are sane models of computation, like Conway’s Life, where you can build things out of patterns of pixels. Some of them inevitably descend into seething masses of chaos. Some of them just result in the screen endlessly blinking, or everything you draw exploding, or even stranger things happening. If you want to be really surprised, try loading up this Flash version of Life, entering rule 1234/3 (which you’ll note is not all that different from the Conway’s Life rule), drawing a little pool of pixels, and hitting “run”.
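(These rulestrings are trivial to support if you’re writing your own Life program, by the way: split on the slash and use the two digit strings as lookup tables of neighbor counts. A sketch, with made-up names:)

#!/usr/bin/perl
use strict; use warnings;

# turn a survive/spawn rulestring like "23/3" or "1234/3" into
# two lookup tables keyed by neighbor count
sub parse_rule {
    my ($rulestring) = @_;
    my ($survive, $spawn) = split m{/}, $rulestring;
    my %rule;
    $rule{survive}{$_} = 1 for split //, $survive;
    $rule{spawn}{$_}   = 1 for split //, $spawn;
    return \%rule;
}

# the only rule-dependent line in the whole update then looks like:
#   $next_cell = $cell ? $rule->{survive}{$live_neighbors}
#                      : $rule->{spawn}{$live_neighbors};
my $conway = parse_rule("23/3");
my $weird  = parse_rule("1234/3");
print "black cell, 4 black neighbors, 23/3:   ",
    ($conway->{survive}{4} ? "lives" : "dies"), "\n";   # dies
print "black cell, 4 black neighbors, 1234/3: ",
    ($weird->{survive}{4}  ? "lives" : "dies"), "\n";   # lives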

And 262,144 is of course just the number of variations you get by varying the survive/spawn numbers; you can get even more elaborate behaviors by introducing more complicated rules, for example by introducing more states besides just “alive” and “dead”, or by allowing survival to be based on pixels further away than immediate neighbors. There’s one famous CA variant called Wireworld that is a lot like Life except for having four colors instead of two, and which can actually be used to build fairly realistic-looking simulations of electrical circuits.

(If you’re bored and want a quick little tour of a more complicated type of ruleset, go here, download the program or launch the Java app, and do this: Choose “weighted life” in the first menu; choose “Conway–” from the second menu; hit “start”, and let the game run until the number of dots remaining on the screen is fairly small, then hit “stop”; choose “Border” from the second menu; hit “start”, and let the game run until the screen looks like television static; hit “stop” and choose “Bricks” from the second menu; hit “start” and let the game run until you get bored of what you’re seeing; hit “stop” and choose “Career” from the second menu; then hit “start”… each of these different options in the second menu is a relatively simple Life-like rule, with the only twist being that in these rules the direction your neighbors are located in makes a difference when counting. Even slight differences in what you do after you’ve counted these neighbors result in drastically different behavior.)

So, with all these cellular automata variations out there, is there an existing ruleset that fixes my blob problem? It turns out, yes. There’s a rule called “Day & Night”, a vanilla Life-alike with the rule 34678/3678 (you can try that in either the Java or Flash Life implementations above).

This is what happens when you try to feed a Conway’s Life glider into the Day & Night ruleset.

This rule has a lot of similarities to Life from a pattern-designer’s perspective; it has simple gliders and spaceships and oscillators and such, although they look nothing like the ones in Life. However, Day & Night also has one very odd and interesting feature, which is that under the Day & Night rule, white patterns on a black background and black patterns on a white background behave exactly the same. You can invert the entire board at any moment, and it will not change the way the patterns act one bit. More interestingly, you can actually have entirely separate black and white areas on the board, with little patterns swimming around inside each:

This rule seemed almost tailor-made for my problem: all my quantum Life patterns kept eventually stabilizing into invisible, boring white, but in Day & Night both white and black are stable. So what happens when I try to run a quantum Day & Night?

Well, this:

Okay, so this is a bit more interesting than Life was. Instead of the probabilistic parts just fading out of existence, the dark regions stabilize to black, the light regions stabilize to white, and the stuff in between stabilizes to some kind of midlevel gray. Things don’t stop there, but what they do next is kind of interesting to watch: The gray areas (although they never seem to go away after any amount of time) shrink until they become just a thin membrane between day and night, and the borders twist and curve according to some internal logic I haven’t quite figured out yet– I think the borders seek whatever position minimizes surface area, so to speak. (It’s kind of interesting to see the little bubble in the second image above slowly seeking air…)

Like in quantum Life, any single gray pixel introduced into an image spreads like a cancer over the entire board, but instead of this being an effectively destructive process, the “zombie” regions form into interesting patterns along the outlines of the black and white regions they follow, and the zombified regions continue evolving, just not according to exactly the same set of rules. The following board has one gray pixel buried in the upper left corner, and this is what happens when Day & Night runs on it:

Something interesting you might notice about that one is that since the zombification process mostly acts on the borders of the day and night regions while leaving the interiors mostly solid, you can have little bitty Day & Night patterns embedded inside of the big gray blobs that just float there indefinitely doing their own thing (they’re still visible in frame 1000). In fact, it’s possible for there to be blocks which are zombified on one side but continue to follow normal Day & Night rules on the other. Look at this one carefully:

Now that just looks cool. That last animation is seriously my favorite thing out of anything that’s come out of this silly little project so far. (Incidentally, if you want to see that a little more clearly, it may be worth it to try the YTMND version. Something not quite obvious in the images the way I’m posting them here is that since my implementation of Life wraps around the board at the edges, all of these images tile fantastically well.)

The fact that Day & Night supports stable areas of both colors means that I can do more elaborate things when setting up quantum Day & Night patterns. For example, something I wanted to do with quantum Life, but couldn’t really because everything always just disappeared, was to set up patterns made from photos. Yes, both of these are starting patterns for animated GIFs:

Whee! Just for reference, here’s what happens if I round those last two off to solid black and white, so the quantum-ness goes away:

Finally, a couple more odd-looking results I got more or less by accident while playing around:

The one on the right does what it does because of a bug.

So, at this point I’m feeling a lot better about the possibility of actually doing something interesting with this probabilistic/quantum Life idea, now that I’ve seen it’s possible to do anything at all. The behavior of the white vs black blobs in this one still way favors entropy over structure– once the borders between white and black have been staked out they seem to do some kind of funny seeking-minimum-surface-area thing, and once they’ve done that it doesn’t seem to be possible to move them back (at least insofar as experimenting with firing spaceships at them seems to indicate). You can of course still have normal Day & Night patterns happening inside the blob, but you could do that anyway; the quantum-ness doesn’t really add anything. Still, the Day & Night stuff at least works well enough that it hints you could modify the rules so that the blobs were induced to interact with each other in some more interesting way. After all, I’ve still not even toyed with most of the different kinds of rule variations for cellular automata, and there are even a couple of interesting options that may be worth exploring that only exist because of the probability thing. Meanwhile, there’s still a lot I could do here to work in math that’s actively taken from quantum physics, rather than just mimicking quantum-ness crudely. As things are, I’m not sure my conception of self-interaction is quite like that of “real” quantum physics, and I’m wondering if there’s some way I could work in a more explicit notion of quantum superposition (the way I do things now, basically each pixel is its own basis state). To be honest, I’m not really sure this deserves the title “quantum Life” rather than just “probabilistic Life”.

I’m not sure to what degree I’m going to try to explore all that, though. In the meantime, I’m more curious about going back and starting to explore the original idea that had me thinking about quantum cellular automata in the first place: specifically, the idea of making a cellular automaton that runs on, instead of a grid of pixels, the nodes of some kind of graph. And the reason for doing this would be that the graph would be not just a graph, but actually an instance of some kind of quantized geometry, like the spin networks in loop quantum gravity. What would that mean? Well, I don’t have any idea what it would mean. That’s why I want to do it and find out.
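(Mechanically, at least, that change is small: the only thing a cell’s update ever consults is a list of neighbors, so the square grid can be swapped for an arbitrary adjacency structure. A sketch of what I mean– though on an irregular graph, where different nodes have different numbers of neighbors, the survive/spawn numbers themselves would presumably need rethinking:)

# a Life-like step over an arbitrary graph: %$adjacency maps each node id
# to a list of neighboring node ids, %$state maps node ids to 0 or 1
sub step_graph {
    my ($state, $adjacency, $rule) = @_;   # $rule as from parse_rule() above
    my %next;
    for my $node (keys %$adjacency) {
        my $live = 0;
        $live += $state->{$_} for @{ $adjacency->{$node} };
        $next{$node} = $state->{$node} ? ($rule->{survive}{$live} ? 1 : 0)
                                       : ($rule->{spawn}{$live}   ? 1 : 0);
    }
    return \%next;
}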

If I’m going to do that, though, I first want to stop and deal with something else, which is the way I’ve been making these little animations. Everything above was generated by a little Perl module; I load the module up with starting states, and it spits out animated GIFs. This has more or less worked fine so far. There isn’t really any better way of posting Life animations than a GIF, since every pixel is crucial and so this stuff doesn’t compress very well; the chief downside to doing things this way is that I generally have to craft the patterns by sticking them into a program and running it to see what comes out, rather than doing anything interactive. But that doesn’t matter so much, since with the quantum behavior turned on the program runs much too slowly to be interactive anyway (although that may only be because I wrote it inefficiently– I haven’t even looked into optimizing it).

If I go further with this series, though, the next batch of stuff I do is going to be all weird diagrams of jumbled lines, like those old photos of spiderwebs spun by spiders on drugs. This kind of stuff compresses very badly as GIFs, compresses well as other things, and will benefit from some level of interactivity. I’m basically only telling you this so that I can link the last thing I discovered this weekend and two more pointless little animations:

OMG HAX

So it turns out that at some point while I wasn’t paying attention, these people made a free ActionScript compiler called MTASC, and these other people made a free SWF linker kind of thing named swfmill. You can use these to make Macromedia Flash movies without actually having to own or use Macromedia Flash. Or you can just say shove both of these, and use HAXE, an odd but elegant little ECMAScript derivative with strong type inference and other pleasant features that take away the dirty feeling that writing Javascript normally leaves you with. Haxe can be “compiled” either to vanilla Javascript, or directly into SWF files. The reason all of this matters is that one can use Haxe or MTASC to make Flash movies without having to draw a damn thing. Both of the following movies are totally generative; thanks to Haxe, I was able to just write a little half-page program that draws some lines and boxes and things, and the program runs in the Flash interpreter:

This is not particularly complicated or interesting stuff– this barely qualifies as anything better than “hello world”– but it works, the resulting flash files are tiny (the freezeframe GIF of the colored boxes animation above is actually larger than the animation itself), and the Haxe language seems expressive enough to scale to much more elaborate Flash programs.

I’ll see what I can do with all this in a future post.

By the way, the source code used to generate this week’s images is available here and here, same usage instructions as last time.

A Quantum Game of Life

Tuesday, April 3rd, 2007

I’ve had this blog for a while, but barely posted in it. I’m tired of this, and it’s strange, because the reason I haven’t posted isn’t a lack of things to say; the problem is just plain not getting around to writing anything. So, an experiment: From now on, I’m going to start making a post here every Monday. Even if it’s not particularly coherent or long, or I don’t have a specific topic and have to just rant out whatever is in my head, that’s fine. If the resulting posts turn out to be terrible, I’ll just stop doing them or bury them in a box deep beneath the earth or something. In the meantime, I’m just going to make an effort to post something here every week from here on out. So if you know me outside of this site, and a Monday comes and you realize I didn’t post anything here, find me and yell at me.

Sound good? All right, here goes.

So I’ve been reading all this stuff about physics lately. I haven’t been doing this for any particular reason. At least, not any good reason; the reason I originally started was that I’d been reading a bunch of stuff about evolutionary biology so that I could argue with creationists, but then arguing with creationists started getting kind of boring, so I started looking around for something else to read.

Once I got started, though, it turned out that this is actually a really interesting time to be following physics.

Fundamental physics has been kind of in a holding pattern for about the last thirty years or so. Somewhere in the mid-70s, every single force, particle and effect in physics except gravity was carefully fit together into a single unified theory called the Standard Model, and past that, physicists kind of got stuck. This isn’t to say nothing’s been done in the last thirty years, mind you– experimentally it has really been a productive time– but all the work that has been done has just served to validate the Standard Model, theoretical physics’ last great achievement, as correct. Nobody’s been able to move beyond or supersede the Standard Model. And physicists really, really want to supersede the Standard Model. Even to the physicists that developed it, the Standard Model has always seemed like kind of a Rube Goldberg contraption; it has all these unexplained fiddly bits and extra pieces that don’t seem to do anything, and it’s not clear why any of the parts fit together the way they do. Scientists have a very specific and clear idea of how this big Rube Goldberg machine works; but they don’t know why it works, or rather, they don’t know why the machine is put together the way it is.

Theoretical physicists are convinced there’s some kind of underlying order here that we just haven’t figured out yet, and that all the funny complexities of the Standard Model are just emergent side-effects of something much more fundamental and simple going on below the surface. Theoretical physicists are also annoyed that they haven’t been able to figure out how gravity fits into all of this– in fact, they can’t figure out how to make gravity work together with any theory of quantum physics. So for the last thirty years theoretical physicists have been going to enormous lengths experimenting with grand unified theories and supersymmetric models and such, trying to come up with a better theory that explains the same things the Standard Model explains (and hopefully also gravity), but in a way that makes more sense. None of these attempts have so far worked. They’ve all turned out either to not quite be able to predict the actual universe, or else to predict the universe but with some weird quirk or other that doesn’t exist in reality– things like proton decay.

The longest-standing of these attempts at unification is something called String Theory, which has been worked on steadily for about twenty or thirty years now. String Theory isn’t really a theory– it’s more like a set of hypothetical theories that share certain characteristics. String Theory is really promising in theory and has gotten a lot of attention because it has the ability to describe a universe which has lots of particles and forces (which maybe include the kinds of particles and forces that the Standard Model contains) and which also has gravity, all based on nothing but simple vibrating stringlike objects following simple rules. But there are some problems, foremost among which is that in practice, nobody’s yet figured out how to use this to describe our universe. Every time people try to put together a string theory that might describe the things we see in our universe, it also describes lots of other things, things like lots of extra dimensions that have never been detected and enormous quantities of new kinds of particles that have never been seen. This isn’t exactly fatal, because string theories are very flexible, and so every time somebody finds a reason String Theory might not work, they can just modify the theory– adding ways of hiding the extra dimensions and particles, and then hiding the weird complexities they used to hide the extra dimensions, such that even though String Theory still predicts most of these things, it predicts that they’d be hiding where we couldn’t find them. This has happened enough times by now that String Theory, which started with just some simple rules, has gotten very complicated. It’s not clear whether this is a good or a bad thing. It might just mean String Theory is maturing, naturally picking up complexity as it moves toward becoming an actual descriptive and usable theory. Or it might mean we’re on the wrong track entirely with String Theory, and this burgeoning complexity means we’re taking an originally good idea and torturing it into increasingly unrecognizable shapes, adding more and more epicycles every time something doesn’t quite work right, in an attempt to twist the idea into fitting a universe it fundamentally does not describe. Either way, we’re still no more sure whether String Theory can ever actually work than we were two decades ago, String Theory is still more a toolbox of promising mathematical ideas than an actual working theory, and although String Theory gets most of the attention these days, it’s still the case that nobody knows how to move past the Standard Model.

Anyway.

This has all been the state of things up until very recently, but right now, in early 2007, some interesting new developments are starting to crop up, and it’s starting to look like this holding pattern may break sometime soon. These developments make physics very interesting to follow at the moment, since they mean we might see some dramatic and unexpected changes in the field, maybe even in the next year or two.

The first of these developments, and probably the most directly exciting because it involves things blowing up, is the Large Hadron Collider, a 17-mile concrete tunnel buried underneath Switzerland where, starting at the end of this year or the beginning of the next, physicists will be smashing hydrogen atoms together to break them apart so they can look at the pieces. Experimental physics has kind of been in a pattern for the last fifty years where most of the progress is made by building a big machine that fires two particle beams at each other, recording the little tiny explosions that happen when the beams collide, and checking to see whether you know enough about physics to explain why those explosions looked the way they did. Once you’re confident that, yes, you can explain the little explosions from your big machine, you build a bigger machine that does the same thing, and repeat. The reason this repetitive mode of experimentation has come about is that the most fundamental physical phenomena are only really detectable at very high energies; in a simplified sense, the more “electron-volts” you have, the more interesting things happen. Every time a new particle accelerator is built that can hit a higher electron-volt threshold, new kinds of particles become visible. This is again a bit of a simplification, but particle physics has in a lot of ways been playing this game of particle leapfrog for the last half-century, where the experimentalists will build a bigger particle accelerator and discover some new particles nobody’s seen, and then the theoreticians will have to come up with a theory to explain what those particles are, but then the new theory will predict some new particles nobody’s ever seen, and the experimentalists will have to build a bigger accelerator to look for them. There’s a bit of a problem with this system, though, which is that this pattern is strictly dependent on the regular production of these ever-more-expensive accelerators, each of which takes decades to plan and build and hundreds or thousands of people working together to design and operate. This is kind of a lot of eggs to put in one basket, so when the last big particle accelerator that was supposed to have been built– the Superconducting Supercollider, which was supposed to have been built in the 90s– got cancelled amidst massive mismanagement and governmental budget crunches, it kind of screwed the pattern up. (The cancellation of the supercollider has been kind of a factor in why theoretical physics has been in holding pattern mode lately, and also represented a real identity crisis for particle physicists, who for a long time during the Cold War basically had the ability to get as much money as they wanted from the government, while some other branches of science just begged for scraps, because during the Cold War it was considered a national priority that our giant concrete particle beam shafts be bigger and longer than the ones the Russians had.) In the meantime we’ve been stuck with the Tevatron, finished in the mid-80s, which scientists have just continued to use for new experiments. This means we’ve gotten some good chances to explore everything in the energy range up to the Tevatron’s limit of one tera-electron-volt, but nobody’s been able to look past that.

But now the Large Hadron Collider, which can go up to about 14 tera-electron-volts, is going online, with the first operational tests scheduled (link goes to a big pretty graph!) for October of this year. In the short term, this mostly just means a bunch of science stories in the news when it all starts up toward the end of the year. But once a year or few has passed and the real science starts, physics is going to actually start changing. There are a lot of possibilities for what could happen– finding supersymmetric partners or tiny black holes or the other exotic things that String Theory and other unification theories predict. Even aside from getting lucky and finding something big like that, though, the big expected purpose of the LHC is to find a particle called the Higgs Boson, which is the only thing in the universe predicted by the Standard Model that has never actually been seen. Finding the Higgs Boson would be a big deal, among other things because part of what the Higgs does is cause things to have mass, so once we’ve found one and nailed down its properties, this might be a push in the right direction for the people trying to figure out how to make the Standard Model play nice with gravity. The other, unbelievably frustrating but actually probably even more promising possibility is that the LHC won’t find the Higgs Boson. This would be a big deal because it’s starting to look like if the Higgs Boson can’t be found at 14 tera-electron-volts, then it very probably doesn’t exist at all, meaning theoretical physicists would have to go back to the drawing board in a potentially big way.

Aside from all this, a second interesting ongoing development in physics is that, while all these problems with accelerators have been going on, a whole bunch of rich and occasionally just plain crazy experimental results (and pretty pictures) have been showing up in astronomy, thanks to advances in space-based telescopes. Astronomy has been undergoing a quiet renaissance in the last ten years, after the Hubble Space Telescope finally started working, turning up all kinds of stuff that– although physicists haven’t really managed to find any clues forward in it yet– provides some big juicy questions that the next breakthrough in physics might be able to make a go at tackling. The biggest of these is the discovery of “dark energy”, the idea behind which can be summarized as “the universe’s expansion is speeding up over time, and we have no idea why“. It’s not clear whether this recent period of rapid advancement in astronomy will be able to continue much longer, since between the upcoming scuttling of Hubble and the gradual shifting of science research out of NASA’s budget, it’s now questionable whether even some of the really promising experiments planned to follow up on the Hubble discoveries will actually get carried out. But even still, we should expect at least a couple of upcoming surprises from those experiments that do manage to get launched.

A third, less concrete and more sociological development in physics lately has been the new backlash against String Theory, typified by (or maybe just consisting of) two new and “controversial” books by Lee Smolin and Peter Woit. String Theory has always had detractors, but lately– partially in response to some documentaries in the late 90s where String Theory physicists took their ideas to the public, partially as a result of the increasingly long period of time String Theory has now gone without predicting anything concrete, and partially in reaction to the recent embrace by many string theorists of a really, really bad idea called the “landscape”– a handful of these detractors have started making their case against String Theory in a very public and organized way, and a lot of average members of the public (like, well, me) are starting to take notice. So far this String Theory Counterrevolution doesn’t really seem to have had any real effect on the state of science itself; its chief byproduct seems to have been a series of blog wars between physicists. (Though, if the reason I’m mentioning all of this is to list some of the things that have made physics interesting to follow lately, it’s definitely the case that watching Ph.D.s with decades of experience flame each other on their blogs is entertaining, in a pro-wrestling sort of way.) Still, the way this backlash has played out does seem to hint we may indirectly see some interesting advancements in the near future; many or all of the people speaking out against string theory lately have been arguing not that String Theory is outright wrong or should be dropped, but simply that additional approaches to fundamental physics should be explored as well– many of the string theory backlashers are themselves incidentally proponents of still-embryonic alternate approaches, usually Loop Quantum Gravity in specific. Considering that these arguments are being made in the full hearing of the public and the other people responsible for the actual funding of physics, there may soon be a bit more oxygen for these alternate approaches to get the attention and funding needed for serious progress, so we may see some interesting developments coming from them before long.

The last kinda-interesting development in physics of late is the one that actually inspired this blog post and long tangent, which is the emergence of quantum computing as an active research field which may sometime soon actually produce useful machines. Basically, while particle physicists have been busting their asses trying to convince the government to buy bigger magnets and theoretical physicists have been gothing it out about their problems with strings, a lot of applied physicists, people in areas like quantum optics, have just been quietly doing their jobs and, although maybe not working on as large a scale as some of those other groups of physicists, actually getting shit done. One of these busy areas has been quantum computers, which over the last few years have undergone some really dramatic advances: from the first working programs running on a quantum computer in 1998, to it being a big deal a couple of years later when quantum computers with five or seven qubits were able to factor the number 15, to people this year getting all the way up to realistic levels like 16 qubits, with sudden advances happening in things like memory transfer and gates for quantum computers. This is all a really intense rate of progress– it was not long ago at all that quantum computers only existed as buzzwordy black boxes used by science fiction authors, who assigned them improbable capabilities– and there’s not really any telling what exactly happens next.

Quantum computers work by storing information in the quantum states of various physical things. This is useful because of the weird ways quantum states act. Information in a traditional computer is stored using physical things themselves, like holes in a punchcard, or tiny pits on a DVD surface, or electricity in a capacitor in a RAM chip. With each of these ways of physically encoding data, when you store a “bit” of information, it’s either a 0 or a 1. Either there’s a pit in the DVD surface, or there’s not. Quantum states, on the other hand, are not necessarily one single thing at a time; when you store data as a quantum state, what you instead store is a probability that that quantum bit (qubit) is 0 or 1. This means that when you store a number in a qubit register, you don’t have to store just one number in there; you can store lots of different numbers, as superpositions of probabilities. This leads to lots of crazy possibilities, since it means a program running on a quantum computer can, in a sense, do more than one thing at the same time. In another sense, this is just fakery; you’re not actually doing more than one thing at once, you’re just probabilistically choosing one of a certain number of possible paths the program could have taken. This leads to another weird property, that quantum computers don’t always return the right answer– at best they just have a certain known probability of returning the right answer. But probabilistic algorithms actually are a real thing in computer science, and can be fundamentally faster than traditional ones, so this is actually okay: you can run a quantum algorithm over and over and sift out the right answer from the results, and you’ll still get the right answer way faster than you would have with a traditional computer.
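(Here’s roughly the smallest Perl illustration I can manage of what “storing lots of numbers at once” means mechanically. This is just toy bookkeeping– a list of 2^n amplitudes– and nothing remotely like how a real quantum computer is physically built; also, amplitudes are complex numbers in general, though plain reals are enough for this demo:)

#!/usr/bin/perl
use strict; use warnings;

# a 3-qubit register is 2**3 = 8 amplitudes, one per basis state; the
# chance of reading out state |i> is amplitude i squared
my $n = 3;
my @amp = (1, (0) x (2**$n - 1));   # start in |000> with certainty

# a Hadamard gate on one qubit mixes every pair of basis states that
# differ only in that qubit's bit
sub hadamard {
    my ($qubit) = @_;
    my $bit = 1 << $qubit;
    for my $i (0 .. $#amp) {
        next if $i & $bit;          # visit each pair once, from its low member
        my $j = $i | $bit;
        ($amp[$i], $amp[$j]) = (($amp[$i] + $amp[$j]) / sqrt(2),
                                ($amp[$i] - $amp[$j]) / sqrt(2));
    }
}

# one Hadamard per qubit leaves all 8 basis states equally probable,
# which is the sense in which the register now "holds" 8 numbers at once
hadamard($_) for 0 .. $n - 1;

# "measuring" picks a single state at random, weighted by amplitude
# squared; the superposition is gone afterward
my $r = rand();
for my $i (0 .. $#amp) {
    if (($r -= $amp[$i] ** 2) <= 0) {
        printf "measured |%0${n}b>\n", $i;
        last;
    }
}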

The thing is, though, that working with quantum states the way quantum computers do is actually really, really hard. Quantum superposition is usually described to people using a thought experiment with a cat in a box and lots of big macroscopic objects like geiger counters and vials of poison, but this is mostly just a metaphor, and it’s never anywhere near that easy to set up. There’s this “observation” in the Schrödinger’s cat thought experiment that collapses the wavefunction and forces the cat to be either alive or dead; this is usually described as “opening the box”, but actually any interaction whatsoever between the systems inside and outside the box, including stray heat propagation or vibrations in the air (say, from the cat yelling to be let out of the box) while the box is still closed, is enough to entangle the systems and collapse the wavefunction. Theoretical physicists doing thought experiments may get to gloss over these little details, but when doing quantum computing, these little details become incredibly practical and immediately relevant– because they mean that if the qubits in a quantum computer interact with their environment, ever, in any way, then the quantum state is destroyed and your computation is erased. This means that whatever things hold the qubits in your computer have to be effectively totally isolated from outside interference of any kind, and have to stay that way while doing gobs of things that normally no one would want, need, or try to do with a quantum state, like transporting or copying it. Figuring out how to do this, and all the other things that quantum computing entails, means really stretching the edges of what we think we know about quantum physics, and means that quantum computing has actually become something of a hotspot for theoretical physics researchers at the same time that “normal” theoretical physics has been struggling to progress. I’m only able to perceive all of this from a far distance, of course, but still, it’s been interesting to watch.

It’s been particularly interesting to me, though, because at the same time theoretical physics is getting a workout in the applied problem of building quantum computers, something else is getting a workout in the related applied problem of what to do with the quantum computers after they’re built: theoretical computer science. Theoretical computer science and computability theory are something I actually personally know a little about (unlike, it is worth noting, quantum physics), and they’re incidentally topics which get very little attention. Computer science professors often do their best to convince the students passing through that things like Turing machines and lambda calculus actually matter, but people don’t get computer science degrees to do computer science theory, they get computer science degrees to get a job programming– and in a day-to-day practical sense, aside from an occasional vague awareness of the existence of “big O” notation, the vast bulk of people doing programming have little use for or awareness of computation theory. There are specific domains where this stuff is still directly useful, and there were times in the past when this stuff was directly practical and relevant, but on average, outside of pure academia, the underlying theoretical basis of computer science these days practically seems to mostly exist for the construction of intellectual toys for geeks like me who apparently don’t have anything else to do. Quantum computing, however, means that all this foundational computer science stuff has not only become suddenly very relevant and useful, but also suddenly has a bunch of new and relatively accessible open questions: as far as I’m aware, no quantum algorithms even existed before 1993, and there are still a lot of important questions to be answered on the subject of, once quantum computers “work”, what the exact extent of their capabilities might be and what we should do with them.

So to me personally, quantum computing is kind of interesting on two levels. One, if I’m going to treat the science of physics as a spectator sport, then from a spectator’s perspective quantum computers are one of the more interesting places things are going on right now; two, because of the connection to pure computer science, I or other people familiar with computers or programming actually have a chance of directly understanding what’s happening in quantum computers, whereas with, say, the Higgs Boson (which really? I’ve seen no fewer than five distinct attempted explanations of the Higgs Boson now, and I still have no clue what the darned thing really is), we’re left mostly only able to read and repeat the vague summaries and metaphors of the people who’ve worked with the underlying math. And then there’s a third thing I can’t help but wondering about in the interaction between these two things: I’m wondering if someone like me who already understands computers but doesn’t understand physics might be able to use quantum computing as a way of learning something about quantum physics. Real quantum physics takes years of really hard work and really hard math to learn, and most colleges don’t even let you at this math until you’ve already dropped a significant amount of time investment into the low-level, more accessible physics courses; on the other hand, if you’re outside of a college and trying to learn this stuff on your own, it’s very hard to find sources that really tell you directly what’s going on instead of coddling you in layer after layer of increasingly elaborate metaphors. Metaphors of course are and should be a vital tool in learning physics, but when you come right down to it physics is math, and glancing at most materials on quantum physics, I tend to get the feeling I’m learning not quantum physics, but just learning an elaborate system of metaphors, which interact with all the other metaphors as archetypical symbols but are connected to nothing else. On the other hand, stuff on quantum computing has to actually work with the real-world nuts and bolts and math of quantum physics (unlike popular books on the subject), but because so much of it has to be worked with by computer scientists, non-physicists, at least some of it is surely going to be necessarily aimed at a lower level of background knowledge (unlike learning this stuff at a college would be). One would have to be careful not to fool oneself into thinking this could actually be a substitute for a full physics education, of course, but from the perspective of an interested spectator I can’t help but wonder if this might be a good way of sneaking into the world of higher-level physics through the back door.

This is all a roundabout way of leading up to saying that I’ve been looking into some stuff on quantum computing, and I’m going to try to use what little I’ve worked out so far to make a little toy model computer simulation thing, which I will then post about here, on the internet, along with some hopefully psychedelic animated GIFs the simulation generates. If I’ve kind of got things right, then putting this post together in a form other people can hopefully understand will help me get my thoughts in order and test what I’ve got so far (I’m allowing myself the presumption of assuming anybody’s still reading at this point); if I’m just totally confused and wrong, meanwhile, then hopefully somebody who understands this stuff will eventually find this and point my mistakes out. A first little attempt at this is below.

Specifically, I’m curious whether one could define a quantum version of Conway’s Game of Life.

Conway’s Game of Life

“Life” is this thing this guy named John Conway came up with in the 70s. It’s referred to as a “game” because originally that’s more or less what it was; Conway originally published it as a brainteaser in Scientific American, intended to be played by hand on graph paper, with the original challenge being a $50 reward to the first person who could devise a Life pattern that continued growing endlessly no matter how long the game went on. Conway at first expected this was impossible; he was horribly, incredibly wrong. (An example of such a pattern is at right.)

Despite these fairly simple origins, Life turned out to be sort of the tip of the iceberg of an entire category of mathematical systems called “cellular automata”, which are now a recognized tool in computation theory and which have by now received a fair amount of attention; Stephen Wolfram, the guy who created Mathematica, spent ten years writing a very long book all about cellular automata (a book which, although I’ve seen a lot of doubt as to whether anything in it is actually useful to any domain except computation theory, is universally agreed to contain lots of pretty pictures). Life itself has remained the most famous of these; today there are entire websites devoted to categorizing Life patterns of every kind you could think of, and most people with an education in programming have been forced to implement a Game of Life program at one point or another.

- - - - -        1 2 3 2 1
- O O O -        2 3 4 2 1
- O - - -   ->   2 4 5 3 1
- - O - -        1 2 2 1 0
- - - - -        0 1 1 1 0
On the left is a grid of cells containing a glider; on the right is a grid containing, for each cell on the left, the count of living cells in its 3×3 neighborhood– note that this count includes the cell itself. I’ve colored light red the dead spaces which in the next generation will be new births, dark red the live spaces which in the next generation will survive, and light blue the live spaces which in the next generation will die. Let this keep running a few more generations and you get the glider:

The term “cellular automata” just refers to systems that can be described by patterns of colored dots in “cells” (i.e., grid squares or some other shape on graph paper) and which change over “time” (i.e., redrawing the system over and over on pages of graph paper) according to some simple rule– in all variations of cellular automata I’ve ever seen, how a cell is colored in one frame depends on how its neighbors were colored in the previous frame. In the original Conway’s Life, the rule was that a cell is colored black in a new frame if

  • in the previous frame it was colored black and its neighborhood– the cell itself plus the cells touching it, which is how the counts in the diagram above are tallied– contained three or four black cells, or
  • the cell was colored white in the previous frame but was touching exactly three cells colored black.

(Stated the more common way, counting only the eight neighbors and not the cell itself: a black cell stays black if it has two or three black neighbors, and a white cell turns black if it has exactly three.)

Conway intended this as a sort of population model (of “life”) where black dots represented living things; black dots disappearing when they have too few or too many neighbors represent that lifeform dying of either loneliness or starvation, black dots appearing when they have a certain number of neighbors represent reproduction.
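(Stated as code, using the same count-the-whole-neighborhood convention as the diagram above:)

# is a cell black in the next frame? $count is the number of black cells
# in its 3x3 neighborhood, *including the cell itself*, as in the diagram
sub next_is_black {
    my ($was_black, $count) = @_;
    return $was_black ? ($count == 3 || $count == 4)
                      : ($count == 3);
}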

These rules favor non-life over life, so if you just fill a board up with randomly placed live and dead squares, within a couple of generations most of the life will have died off or settled into tiny static patterns, like little solid blocks or little spinners eternally looping through a two-frame animation. But you’ll also have little areas that wind up filled with sloshing waves of chaos, breaking against and devouring the static bits. If you watch this carefully, you’ll notice that, chaotic as these waves look, the way Life patterns act when they collide with one another is actually very specific and structured. The systems in Life are chaotic enough that predicting what a random Life pattern you come across is going to do is just about always impossible unless you sit down and actually run it. But if you sit down ahead of time and work out a catalog of patterns whose behaviors and interactions you know, and then stick to placing those patterns in very carefully controlled ways and places, you can actually use these smaller parts to design larger Life patterns that have basically whatever behaviors you want.

You can build machines.

Life is an example of something we call a “model of computation”, which is a name we give to systems– machines, language systems, board games, whatever– that can be one way or another tricked into running some kind of “program” that’s dummied up for it, such that after a while its changes in state have effectively computed some sort of result. The kinds of computational models we normally care about are the ones that we say are “Turing complete”. This is a reference to the first ever proven universal model of computation, the Turing Machine devised by Alan Turing in 1936. The Turing Machine is a hypothetical computer, described as a sort of little stamping machine on wheels that slides back and forth on an infinite tape reading and rewriting symbols according to a little set of rules it’s been wired up with. The Turing Machine was not meant to ever actually be built; it was designed just to be easy to reason about and demonstrate interesting things in mathematical proofs. The most interesting thing about the Turing Machine is that it is in fact universal; there’s a generally accepted but technically unproven principle called the Church-Turing Thesis that any programming language, computer, whatever, that you can think of, can be emulated on a Turing Machine. This makes the Turing Machine suddenly really important as a sort of least common denominator of computers, since if you have a machine that can emulate a Turing Machine– in other words, a machine which is Turing complete– then that machine can inevitably emulate anything a Turing Machine can, including every other Turing complete machine, as well. Turing Complete models of computation are as common as dirt– they include every computer and programming language you can think of, as well as some really bizarre and deceptively useless-looking mathematical systems, a couple of which Conway invented— and they’re all approximately the same, because they can all do exactly those things the other Turing complete systems can.
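(Turing machines sound exotic, but simulating one is almost embarrassingly easy– a complete machine, with a made-up three-rule program that just flips bits until it hits a blank, fits in a dozen lines of Perl:)

#!/usr/bin/perl
use strict; use warnings;

# rules map "state,symbol" to [symbol to write, head movement, next state]
my %rules = (
    'run,0' => ['1', +1, 'run'],
    'run,1' => ['0', +1, 'run'],
    'run,_' => ['_',  0, 'halt'],   # '_' is the blank symbol
);
my @tape = split //, '10110_';
my ($state, $pos) = ('run', 0);
while ($state ne 'halt') {
    my ($write, $move, $next) = @{ $rules{"$state,$tape[$pos]"} };
    $tape[$pos] = $write;
    $pos   += $move;
    $state  = $next;
}
print @tape, "\n";   # prints 01001_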

The Game of Life is, as it happens, Turing complete, as was quite dramatically demonstrated a few years back when some guy actually built a Turing Machine that runs inside of a Life system. The Life Turing Machine is a machine in a hopelessly literal sense; viewed zoomed out, it looks like some kind of aerial photograph of some steam-powered contraption that a man from the 1800s with a handlebar moustache would demonstrate, after dramatically pulling off the canvas that covered it. The thing isn’t particularly efficient; if you for some reason took the time to make an emulator for a 386 Intel PC out of Turing machine rules, and then ran this emulator on the Life Turing machine, I imagine the heat death of the universe would probably come before you managed to get Minesweeper loaded. But efficiency isn’t the important thing here: it works, and that’s what matters.

Something worth noting is that, although I’ve already claimed that turing machines are approximately the same as any computer you can think of, this isn’t quite true– quantum computers actually aren’t included in this generalization. Normal computers can emulate quantum computers, but only with unacceptable amounts of slowdown (insofar as certain kinds of proofs are concerned), and there are things quantum computers can do but normal Turing-complete machines outright can’t, specifically generate truly random numbers. This isn’t a problem for the Church-Turing thesis, exactly: We can just get around this by saying that the Church-Turing thesis is only supposed to describe deterministic machines, which quantum computers aren’t. Despite this, we still have to have some kind of least common denominator to reason about quantum computers with, so we have this thing called a Quantum Turing Machine that vaguely resembles a Turing Machine but is able to provide a universal model of computation for quantum computers.

So, all this in mind, here’s what I wonder: The Turing Machine is a universal model of conventional computation, and you can make minor modifications to the Turing machine and get something that’s also a universal model of quantum computation. Life is also a universal model of conventional computation; you can make a Turing machine in it. Is there some simple bunch of modifications that can be made to Life to make it a universal model of quantum computation?

I originally started wondering about this in the context of Loop Quantum Gravity, something I mentioned earlier as an “alternative”, non-String theory of physics. Loop Quantum Gravity is supposed to be a theory of gravity that incidentally follows the rules of quantum physics. This is something that is understood to be either extremely difficult or impossible, for a couple of reasons, one of which is that gravity has to play nice with the Theory of Relativity. The Theory of Relativity plays hell with quantum theories because it says the geometry of the universe is bending and warping all the time like silly putty, which makes the quantum theories scream “I CAN’T WORK UNDER THESE CONDITIONS!” and storm out of the building. Loop Quantum Gravity, at least as I understand it, gets around this in part by quantizing geometry itself– that is, it makes the very geometry the universe exists inside subject to a form of quantum behavior. One of the consequences of this, as I understand it, is that the geometry of the universe becomes discrete, which kind of means that it can be represented as if it were a big interlocking bunch of (albeit really weirdly shaped) cells on graph paper. In other words, you could, if you really wanted to, treat the geometry described by Loop Quantum Gravity as a board to run Life on. This seems like an interesting idea to me because Loop Quantum Gravity is really hard to find any remotely accessible information on, and all the information that is available is super high-level math, so something like Life or some other cellular automaton running on the nodes of an LQG spin network would be a fun little visual demonstration of how the thing is supposed to work. But before you did that, I assume, you’d have to have some kind of conception of how a cellular automaton in a quantum universe would work at all.

A Quantum Game of Life

Conveniently, creating cellular automata with quantum behavior isn’t at all a unique idea; looking around, I’ve found that people have been working with QCA models almost as long as they’ve been working on quantum computers, and several conceptions of cellular automata with quantum behavior are already around. I’m going to list them here, since I used some of them in formulating what comes after.

  • First, there’s the universal quantum Turing machine itself. Mark Chu-Carroll has an informal but very clear explanation here of how the quantum Turing machine conceptually works; you can also find the original paper that first defined the QTM online under the title Quantum theory, the Church-Turing principle and the universal quantum computer. It’s a bit of a heavier read, but the introductory parts are plain, readable, everyday English, and interesting for their own reasons.
  • On top of this, there’s actually real ongoing research into what are called “quantum cellular automata” or “quantum dot cellular automata”. If you look into these, you’ll find they seem to mostly consist of systems of little objects that look like dice arranged next to each other like dominoes, such that the painted dots in any individual die change color over time depending on how the dots in the closest adjacent die are colored. I’m not exactly sure how these things work, but I do notice two things. One, this version of cellular automata looks very serious and formal and appears to have real practical application in actually simulating quantum systems– whereas what I’m hoping to make here is more a silly little toy program that makes pretty pictures. Two, doing anything in this version of QCAs seems highly dependent on how you arrange the tiling of the dominoes against each other, whereas I’m really hoping for something where the behavior is dictated by the way the “lifeforms” are positioned, not by the way the external geometry is preconfigured.
  • Some guy named Wim van Dam in 1996 actually wrote an entire Master’s thesis on the subject, under the simple title of “Quantum Cellular Automata“. Van Dam’s thesis is well worth reading if you’re finding any of this interesting so far; it explains the motivations and workings of quantum computers, cellular automata, and hypothetical quantum cellular automata all way better than I do here– probably to be expected, since it’s an entire master’s thesis instead of just a “this is how I wasted my Saturday” blog post. Potentially extremely conveniently for my purposes, van Dam describes specifically how to formulate a one-dimensional cellular automaton of the same general kind the Game of Life uses. He does not, however, bother generalizing this to two dimensions, which is what would allow one to implement the Game of Life specifically– the one-dimensional version is, of course, sufficient for his goal of proving that QCAs work.
  • Google incidentally turns up a short and concise 2002 paper titled “A semi-quantum version of the Game of Life“. Looking it over, however, it doesn’t appear to be exactly the thing I’m looking for (i.e. a reformulation of Life with quantum-style rules). Instead, the paper suggests that such a thing is probably not a good idea, noting “A full quantum Life would be problematic given the known difficulties of quantum cellular automata” (uh oh), and takes the opposite tack of trying to design a quantum system that can simulate Life: specifically, the authors describe a way of creating systems of quantum oscillators which interfere with one another in a way which “in the classical limit” exactly follows the rules of Life, but which otherwise exhibits various interesting quantum mechanical properties as a result of the oscillators it’s made out of. Which is all interesting, but it isn’t quite what I was looking for (at least I don’t think so), and it also requires one to understand the quantum harmonic oscillator (which I don’t).

None of this exactly provides what I was looking for, since all of the attempts above, as it happens, are by people doing actual physics research. They’re thus doing their work under the constraints of a set of fairly serious goals, goals like applicability in proving things about quantum physics or even implementability as a real-world device. When I think about deriving a version of quantum Life, though, I’m just thinking of something that would be easy for me to implement and useful and fun for demonstrating things. This means that if I’m going to try to construct my own version of quantum Life, I’m going to be working off a somewhat more modest set of goals:

  1. Just as in the original Game of Life, whatever the basic rules are, they should be simple to understand and implement, even for someone who doesn’t necessarily understand quantum physics and Turing machines and such.
  2. Also like the original Game of Life, running a simulation of quantum Life should produce pretty pictures.
  3. And also like the original Game of Life, the way patterns evolve should exhibit some sort of structure, which could potentially be used by someone with lots of time on their hands to design starting patterns with arbitrarily complex behaviors.
  4. The game should incorporate or exhibit some kind of behavior of quantum mechanical systems. Hopefully the quantum behavior should incidentally impose some kind of interesting limitations or provide some kind of interesting capabilities to someone trying to design starting patterns that wouldn’t be the case in standard Life. In the best of all possible worlds it would be possible to demonstrate some interesting feature[s] of quantum mechanics by setting up quantum Life patterns and letting them run.
  5. Ideally, hopefully, this simple quantum Life should turn out to be a model of quantum computation– that is, it should be possible to simulate a quantum Turing machine, and thus emulate any quantum computer, using this quantum Life. Since it is possible to simulate a quantum Turing machine on a universal quantum cellular automaton and vice versa (van Dam’s thesis proves this), it should be sufficient to show that any of the “real” QCAs described above can be implemented or simulated as patterns in quantum Life.

My first attempt at this follows; to test it, I did a quick and dirty implementation in Perl. Note that although I’m pretty confident of everything I’ve written in this blog post up to this point, after this point I’m in uncharted territory, and for all I know everything after this paragraph is riddled with errors and based on gross misunderstandings of quantum physics. In fact, as I type these words I haven’t actually started writing any of the Perl yet, so I have no idea if this quantum Life thing is even going to work. (Though even if it doesn’t, I’m posting this damn thing anyway.)

(Update, Feb 2009: So, I wrote this post a couple years ago when I was first trying to learn about quantum physics and quantum computing, and I actually did make a couple of mistakes. The one really huge mistake I made occurs a few paragraphs back, and it propagates forward into the part of the post that follows. The mistake is this: I claim in this post that a qubit in a quantum computer can be described by the probability that that bit is 0 or 1. This is close, but not right. A better way to put it is that a qubit can be described by a quantum probability that the bit is 0 or 1. What is the difference? Well, a normal probability is just a number between 0 and 1– like, 30%. A quantum probability, however, is a complex number– a number with an imaginary part– of absolute value less than one. This unusual notion of a “probability” turns out not only to make a big difference in the behavior of quantum systems, it’s the entire thing that makes quantum computers potentially more powerful than normal computers in the first place! Unfortunately I didn’t clearly realize all this when I wrote this blog post. So in the final part of this post, I try to define a “quantum” cellular automaton– but since at the time I misunderstood what “quantum” meant, I wind up instead defining a cellular automaton model which is in fact only probabilistic. Now, given, I think what follows is still kinda interesting, since the probabilistic cellular automaton turns out to be interesting unto itself. And it’s definitely a first step toward a quantum cellular automaton. But, just a warning, it’s not itself quantum. I did later take some first steps toward implementing an actual quantum cellular automaton, but they’re not done yet. So hopefully someday I can revisit this and write another post in this series that gets it right this time.

If you want a clearer explanation of this whole “quantum probabilities” thing, I suggest you read this, which is about the clearest explanation of what the word “quantum” means I’ve ever found.)

The most logical way to construct quantum Life, at least as far as I’m concerned, is to take the trick used to create the QTM and apply it to the Life grid. In a Post machine [a 2-state Turing machine], each tape cell is equal to either state “0” or state “1”; in the QTM, the tape cell is essentially equal to a probability– the probability that the tape cell contains state “1”. This simple conceptual change, from states to probabilities of states, provides basically all of the QTM’s functionality. In Life, meanwhile, as in the Post machine, each grid cell is equal to either state “0” or state “1”; a naive quantum Life, then, could be created by setting each grid cell essentially equal to a probability– the probability that the grid cell contains state “1” (“alive”). This is the tack I’m going to follow. I’m going to speak of cells below as being “equal” to a number between 0 and 1; when I do this, the given number is the probability, at some exact moment in time, that the cell in question is “alive”.

The introduction of probabilities like this, even just by itself, offers a lot of opportunity for the creation of new and elaborate cellular automata rules– say, rules where each number of neighbors offers a different probability of survival into the next generation; I imagine the people who’ve already done so much elaborate stuff with just the addition of new discrete colors to Life could have a field day with that.

I’m going to ignore those possibilities, though, and just maintain the rules from “Real Life” entirely unchanged– that is, a cell is created iff it has three neighbors, and a cell survives iff it has three or four neighbors. (Note that I’m counting a cell as one of its own “neighbors” here, so the count runs over the full block of nine cells; in conventional terms these are just Conway’s standard rules– birth on exactly three live neighbors, survival on two or three.) These rules still work perfectly well even though now some cells are only “probably” there, and conveniently, this selection of rules means that this kind of quantum Life is completely identical to normal Life so long as all the cells are equal to exactly 0 or exactly 1. The simulation only behaves differently if at least one cell, from the beginning, is programmed to be “indeterminate”– that is, to have some probability of being alive between 0 and 1. When this happens, we can no longer calculate an exact number of neighbors for the cells in the vicinity of the indeterminate cell. We deal with this situation simply: instead of calculating the number of neighbors, we calculate the probability that the number of neighbors is “right” for purposes of the cell being alive in the next generation. That probability then becomes the cell’s “value” in the next generation. For example, let’s say an empty cell has two neighbors equal to 1 and one neighbor equal to 0.5. This empty cell has a 50% probability of having 2 neighbors, and a 50% probability of having 3 neighbors. Since the cell needs exactly three neighbors to reproduce, the probability of the cell being alive in the next generation will be 0.5.

The questions that immediately come to mind about this way of doing things are:

  • Does the addition of “indeterminate” states actually have any interesting or useful effect?
  • Is the addition of these indeterminate states to Life sufficient to allow the emulation of a QTM or UQCA?
  • Does this actually have anything to do with quantum physics?
  • Is this different in practice from the “semi-quantum life” linked above?
  • Though one of my goals is for the rules to be simple, and they seem simple to me, I took three long paragraphs to describe these rules. Is the way I described the rules at all clear, and maybe is there some way I could have just summarized them in a couple of sentences?

(As far as the second question here goes, it occurs to me that what I’ve designed here may well not really qualify as “quantum Life”, and maybe “probabilistic Life” would be a better way of describing it. In fact, looking on Google for the phrase “probabilistic game of life”, I now find a couple of people who’ve attempted rules along these lines: there’s a little writeup on a math site here, which appears to be describing a version of probabilistic Life with my exact rules. That author doesn’t provide an implementation, though, so I shall press on– although the site does provide some interesting proofs about the behavior of a probabilistic Life that I’m not going to bother trying to understand just now. On an unrelated note, Google also turned up this odd little thing, though what it specifically has to do with the Game of Life is not obvious from the link.)

Anyway, let’s look at an actual implementation of this kind of quantum Life and see whether the answers to my questions are at all obvious.

I happened to have a simple implementation of Life sitting around that I’d made a while back, as a Perl module that takes in grids containing Life starting states and spits out animated GIFs. This particular implementation was written such that on each frame, the module would make a table listing the number of neighbors each cell in the grid has, then use this number-of-neighbors table to generate the alive and dead states in the next frame. Modifying this existing module to work with quantum Life turned out to be incredibly simple: rather than each entry in the number-of-neighbors table being a simple number, I switched it to being a probability distribution– an array of ten values representing the probabilities that the cell had zero through nine neighbors respectively. I build up this array by starting out with a distribution giving a 100% probability of zero neighbors and a 0% probability of any other number, and then factoring in the changes to the probability distribution caused by each of the nine cells in the block, one at a time:

    start:                   (1,  0,  0,    0,    0   )
    + 100% alive neighbor:   (0,  1,  0,    0,    0   )
    + 100% alive neighbor:   (0,  0,  1,    0,    0   )
    + 50% alive neighbor:    (0,  0,  0.5,  0.5,  0   )
    + 50% alive neighbor:    (0,  0,  0.25, 0.5,  0.25)

(Each list gives the probabilities of 0, 1, 2, 3 and 4 neighbors so far, in that order; I’ve truncated the distributions at four neighbors to keep the table readable.)
(If you squint real hard, you can probably see the binomial distribution in there.)

The probability of the cell being alive in the next generation is then calculated by:

(probability of cell having 3 neighbors this generation) + (probability of cell being alive this generation) * (probability of cell having 4 neighbors this generation)
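In code form, one generation of this looks something like the following– a minimal sketch rather than the actual qli.pm (in particular, I’m assuming here a plain 2D arrayref of probabilities, and a board that wraps around at the edges):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # One generation of probabilistic Life. $grid is a 2D arrayref of
    # probabilities-of-being-alive (0 through 1); edges wrap around.
    sub next_generation {
        my ($grid) = @_;
        my $rows = @$grid;
        my $cols = @{ $grid->[0] };
        my @next;
        for my $y (0 .. $rows - 1) {
            for my $x (0 .. $cols - 1) {
                # Distribution over how many of the nine cells in this
                # cell's 3x3 block (itself included) are alive; start
                # with a 100% chance of zero.
                my @dist = (1, (0) x 9);
                for my $dy (-1 .. 1) {
                    for my $dx (-1 .. 1) {
                        my $p = $grid->[($y + $dy) % $rows][($x + $dx) % $cols];
                        # Factor this cell in: each count either gains a
                        # live cell (probability $p) or doesn't (1 - $p).
                        my @new = (0) x 10;
                        for my $k (0 .. 8) {
                            $new[$k]     += $dist[$k] * (1 - $p);
                            $new[$k + 1] += $dist[$k] * $p;
                        }
                        @dist = @new;
                    }
                }
                # Alive next generation iff the block holds exactly three
                # live cells, or exactly four where this cell is one of them.
                $next[$y][$x] = $dist[3] + $grid->[$y][$x] * $dist[4];
            }
        }
        return \@next;
    }

(Notice that if every cell is exactly 0 or 1, each distribution collapses to a single certain count, and this reduces to ordinary Life– which is the “completely identical to normal Life” property claimed above.)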

What happens when we run this?

Well, at first glance at least, it doesn’t appear to work all that well. Here’s what happened when I took a simple glider and placed it on a quantum Life board, in each case varying the probability that the glider was “there” (and, in the last couple of cases, varying the probability that each of the “empty” cells was actually filled). In this table I show first the starting condition, and then an animated GIF showing what happened when the Life simulation ran. In each frame, the darker the pixel, the more probable it is that there was a living cell there at that exact moment.

[Table of images– starting states and the resulting animations– labeled 100%, 50%, 75%, 90%, 100%, 0.001%.]

The 100% alive glider, as expected, behaves just as it would in normal Life. The other gliders… well… just kind of instantly explode. The 90% glider manages on average to complete a couple of halting steps before it falls apart entirely, the poor thing, but it too smears out of apparent existence within a few more frames. In retrospect this isn’t actually surprising. When the gliders “disappear”, they’re not outright gone; instead, the blank area where they once were (and a gradually spreading portion of the rest of the blank area as well) is equal to a bunch of incredibly low probabilities, so low they’re indistinguishable from pure white. This happens because when we run a game of Life containing indeterminate states, we’re basically simulating every single possible board configuration that could have existed, all at once, with some just being more likely than others. The Life rules favor non-life over life, so the vast majority of these possible games rapidly spin out into nothingness– and in the possible games that do keep some living cells, the life doesn’t settle in any one area more prominently than another, so the blankness dominates essentially without opposition.

So this version of Life doesn’t, at first glance, turn out to be very useful for creating a usable or playable game. We could maybe fix the “everything dies instantly” problem by fiddling with the rules in the next version, though, so maybe things aren’t hopeless. In the meantime, does what we have so far at least demonstrate quantum behavior? Well, it does seem pretty clear that the Life cells participate in self-interference, at least from informal observation:

[Images: the starting state, a run where the center line is 100% there, and a run where the center line is 99.99% there.]

At least, self-interference is how I interpret the sort of gradienty behavior the version on the right engages in before it fades out entirely. The probabilities don’t do that whole square-of-complex-numbers thing, and there isn’t anything resembling observation or wavefunction collapse. (I could change the GIF-making function to basically assume a wavefunction collapse happens on every frame– by randomly populating the gray areas with life instead of displaying the probability distributions– but that would probably just make things visually confusing without conveying anything interesting.) Then again, although I could be wrong about this, the quantum Turing machine doesn’t, as far as I can gather, really have any of these properties either, so we could maybe just shrug all that off.

Even so, there really doesn’t seem to be any way you could make a Turing machine out of parts that keep fading out of existence, so after encountering this everything-fades-away problem I was just about to give up and declare the whole thing a wash. Still, Turing machines aside, I did manage to find at least one interesting thing about this particular quantum Life implementation. We see the first glimpse of this when we try populating a Life board at random:

Okay, now here there seems to be something going on. You’ll notice that most of these areas actually don’t fade right to white, but instead stabilize at what looks like a sort of 50% gray of probability. (While they look like it, the stable half-gray spots aren’t actually 50/50– each of those cells is about 34.7583176922672% there. I don’t know why it’s that particular number, but I assume it has something to do with the tension between the “survivable” thresholds of three and four neighbors; see the sketch a little further down for a guess.) Apparently there’s some kind of sweet spot, where Life boards filled to a certain density of living cells tend to keep that density more or less consistently on average. A few little bits of area stabilize toward zero, though, and the areas of zero gradually eat away at the areas of half-life until there’s nothing left. So that’s kind of interesting. But the one thing that suddenly and out of nowhere justified this entire silly little project to me is what happened when I randomly populated a board and then “rounded” most of it– that is, I rounded everything on the randomly generated board to exactly 1 or 0, except for a thin little strip of indeterminacy running vertically down the right side. And I got this:

…WTF?

What happens here is that the “probably there” areas converge on the mysterious 34.7% half-life value and then just kind of spill outward, swallowing up all the actually-living cells as they go. Then, once all the living matter has been devoured and no food is left, the white starts eating in and the areas of half-life slowly die out. In other words…

Holy crap! It’s a zombie apocalypse simulator!
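About that mysterious 34.758% number, here’s a guess: if you pretend an entire region sits at one uniform probability p, and treat cells as independent (which is what the module effectively does anyway), then the number of live cells in a 3x3 block is binomially distributed over nine cells, and one generation sends p to 84p^3(1-p)^6 + 126p^5(1-p)^5. A few lines of throwaway Perl suggest this map settles at just about the observed value, though I haven’t verified any of this rigorously:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Mean-field guess at the stable "half-life" density: on a uniform,
    # independent board at probability $p, one generation maps p to
    # P(exactly 3 of the 9 block cells alive) + p * P(exactly 4 alive).
    my $p = 0.5;    # start from a half-gray board
    for (1 .. 1000) {
        my $q = 1 - $p;
        $p = 84 * $p**3 * $q**6 + 126 * $p**5 * $q**5;
    }
    printf "%.13f\n", $p;    # appears to settle right around 0.34758...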

So, I’m not sure this entire exercise was altogether productive in any way, though I’m going to keep experimenting with all of this– maybe trying some alternate Life rules besides just Conway’s 23/3– to see if I can coax anything useful out of this quantum Life concept. Check again next week to see if I come up with anything. In the meantime, though, at least, I think that the final image makes the whole thing satisfying to me in two ways:

  1. I at least got my psychedelic animated GIFs
  2. I tried to test quantum physics, and instead accidentally created zombies. Does that mean I qualify as a mad scientist now?

If you want to run any of this stuff yourself, you can find the messy little quantum Life module here and the script that generated all the images on this page here. You’ll need to rename those files to qli.pm and gliders.pl, and you’ll need Perl and GD::Image::AnimatedGif to run them; to do your own tests you’ll have to modify gliders.pl. If you need any help with that, just ask below.

Pixels and Politics

Saturday, December 16th, 2006

So last month, just before the elections, I was thinking about electoral shifts. With everyone pretty much convinced that the Democrats were about to take over Congress, or at least the House, I saw a lot of people making comparisons to the 1994 Republican takeover of Congress, and I saw one person make the claim that the 1994 Republican takeover wasn’t really that big of a shift compared to previous Congressional swings earlier in American history.

This made me curious. I started to wonder how one might go about judging such a thing, and started to realize that although detailed histories of the U.S. presidency abound, there really is not very much well-organized information out there about the historical makeup of the Congress.

I decided to solve this the way I solve most problems in my life: by making animated GIFs. I downloaded the rosters of the first through 109th Congresses from the Congressional Biographical Directory, and then a month later, when things had settled down a bit, added the information from Wikipedia’s tentative listing of the newly elected 110th Congress. Then I wrote some Perl to convert the Congressional rosters to graphs, with one colored pixel marking the party which held each seat in each of the 110 elected Congresses. You can find the results below.
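For the curious, the heart of the graph-drawing code is tiny. What follows isn’t the actual script– the data layout and the exact colors here are stand-ins I’m guessing at– but the approach with the GD module looks something like this:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use GD;

    # Hypothetical input: one array per seat, each holding 110 one-letter
    # party codes, one per Congress ('D', 'R', 'W' for Whig, 'F' for
    # Federalist, '-' for no party, ' ' for seat absent/vacant).
    my %rgb = (
        'D' => [0,   0,   255],    # Democrats: blue
        'R' => [255, 0,   0],      # Republicans: red
        'W' => [0,   160, 0],      # Whigs: green
        'F' => [160, 0,   160],    # Federalists: purple
        '-' => [128, 128, 128],    # no party: gray
        ' ' => [0,   0,   0],      # absent: black (e.g. the South, 1861-65)
    );

    sub roster_graph {
        my @rows = @_;    # each row is an arrayref of party codes
        my $img  = GD::Image->new(scalar @{ $rows[0] }, scalar @rows);
        my %color;        # palette indices, allocated on demand
        for my $y (0 .. $#rows) {
            for my $x (0 .. $#{ $rows[$y] }) {
                my $party = $rows[$y][$x];
                my $rgb   = $rgb{$party} || [0, 0, 0];    # unknown: black
                $color{$party} //= $img->colorAllocate(@$rgb);
                $img->setPixel($x, $y, $color{$party});
            }
        }
        return $img;      # the caller writes $img->png out to a file
    }

Time runs left to right, one column per Congress, and seats run top to bottom– which is why an event like the Civil War shows up as a vertical streak.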

For starters, here’s just one big graph of everything, sorted from top to bottom by state, with senate and house seats separated. As with any of the images in this post, if you want to see it closer, you can click to zoom in:

Although this graph is to some extent cryptic, since it doesn’t tell us exactly why any of these pixels change colors with time, if you look closely you can actually see many of the important events of American history reflected quite visibly in it. For the most obvious example, the rise and fall of the Federalist and Whig parties are clearly visible in the early part of the graph as big blobs of purple and green, accompanied by a wave of gray pixels around the 1820s, just before the Whigs appeared, marking the collapse of the Democratic-Republicans before the party was reborn under Andrew Jackson. A solid line of gray pixels is also visible at the beginning of the graph, marking those heady few early years of American politics before any political parties existed at all. The Civil War is clearly visible as a long vertical black streak cutting through the graph around 1860, marking the period when the southern states simply didn’t participate in the Congress. After the Civil War, most of the southern states turn very briefly red, then blue again, as blacks suddenly gained the right to vote, then lost it with the rise of Jim Crow and the Ku Klux Klan. After this point the “solid south” phenomenon becomes incredibly marked: the northern states in the graph are a patchwork of red and blue pixels, but the southern states are a solid sea of blue for a hundred years as a result of post-Reconstruction animosity toward the Republicans. In the decades after the 1950s, the great northern/southern swap, as the Democratic and Republican parties in many ways reversed themselves, is visible as a great gradual blur of colors swapping, followed by a solid wall of change around 1994– Massachusetts and Texas are almost mirrors of one another in this period, with Massachusetts slowly turning from nearly solid red to solid blue, and Texas doing the same in reverse.

When we look at the graph by states this way, of course, shifts in Congressional control– which is more of a numbers game– are not so clear. Some of the big shifts are visible– for example, stripes of red and blue are clearly visible around the beginning of the 1900s, as first the Republicans sweep Congress during the Spanish-American War, then the Democrats sweep Congress during the Great Depression and WWII. The 1994 Republican Revolution is visible in some states but not others– and where it does occur, it seems less like a sudden switch than an acceleration of the steady progression from blue to red in many parts of the country that followed Nixon’s “southern strategy”, and of the steady emergence of the “red state” phenomenon. The 2006 elections– the last column of pixels on the right– are barely visible at all.

The shifts become a little more clearly visible if we choose not to sort by state:

In the graph on the left here, sorting still occurs by state, but rather than being separated neatly, the states are all just mashed together. This graph is a little hard to make sense of. More clear is the graph on the right, where pixels are instead sorted by party. Here the shifts in Congressional control are quite blatant; very brief swings in power, like the Democratic power grabs following the Mexican-American War and the Watergate scandal, become easier to see, and it’s easier to see which numeric swings were lasting and which weren’t. The “Republican Revolution” is a lot more visible on this graph than on any other, and at the very end of the graph, someone familiar with the politics of the last decade can almost chart the rise and fall of the Republican Congressional majority pixel by pixel: Republican control spikes like crazy in 1994; then drops off just a little as voters become disillusioned with the Republicans in the aftermath of the impeachment circus; voters then warm toward the Republicans again in one final one-pixel spike, representing the halo effect of Bush’s 2004 campaign; then suddenly the numbers swing toward the Democrats again in that final rightmost pixel.

One thing that stands out to me in this particular graph is that though the swing toward the Democrats in 2006 is quite pronounced, it’s certainly not nearly as pronounced as the swing that put the Republicans in power in 1994. Although the Democrats now hold a decent majority– about on par with where they were at the beginning of the Reagan revolution– they don’t hold nearly as much power as most of the historical Democratic majorities since FDR have. There are reasons besides pure numbers to think that in this particular election the voters meant to send a message: although it’s not really visible in any of the graphs above, one of the interesting facts about the 2006 elections is that no congressional seats or governorships held by the Democrats went to the Republicans in 2006, only the other way around. But in terms of pure numbers, the 2006 elections were not really that big of a shift, and the Democrats are only about halfway to replicating the feat that the Republicans pulled off in the 90s. If nothing else, this means the Democrats are going to have to govern carefully to keep control of the situation with their relatively thin majority– and will have to convince the voters they’re doing something worthwhile with that majority from day one, because it will not take much to lose it all in 2008.

These graphs aren’t perfect. The chief problem with them is that they aren’t exactly sorted by seat. The data that I’m working off of here doesn’t show who serves in which district, only who served in what state. This means that if someone holds a particular congressional seat for 20 years, they’ll show up on the graph as a solid line of 10 horizontal pixels– but their replacement for that same seat won’t necessarily be in the exact same horizontal position as they were. Also, I don’t have records of who won the elections– Congress’s listings only showed who served during each two-year period, so if more than one Congressperson occupied the same seat during some period (for example, because one of them died and was replaced with the second), both show up in the graph. (This is why, although each state only has two Senators, many of the “Senate” lines in the by-state graph at the top are occasionally taller than two pixels.)

What I’d be curious about doing with these graphs in the future is getting hold of more specific data concerning who served in exactly which Congressional district, so the graphs can more accurately reflect the shifts within states– for example, so that if most of a state votes one way, but one specific city or region consistently votes another, it would be clearly visible. It also might be interesting, with that information in hand, to try to rework some of these graphs as colored maps, although I’ve never found a good way of making maps in software. Another interesting possibility might be implementing some sort of mouseover feature, so that by moving the cursor over any particular pixel you can see the name of the person that pixel represents.
The other thing I’d like to try to fix about these graphs, though I’m less sure how, is that they’re kind of a lot to take in all at once– they’re too tall to fit on a computer monitor, and without zooming in, a lot of the features are hard to make out. This is helped a little by the graph that I think is my favorite, since it serves very well as a kind of “summary”– the graph where House reps are ignored and only the Senate is displayed. On this graph we get a good general idea of how people are voting, but the graph is still small enough to take in at a glance, so the nature of the big party shifts by region and event is most “obvious”:

If anyone has any other suggestions for ways that these graphs could possibly be improved, I’d be curious to hear them.

As one final bonus, here’s an animated graph, with columns of pixels from left to right representing states:

I have made a website

Monday, May 8th, 2006

So: I have made this website. You can find it at http://datafall.org/.

The idea of datafall.org is that if you have a website that is something like a blog– like, a copy of WordPress or Movable Type, or a LiveJournal, or a Blogspot account, or a MySpace page, or basically anything that uses RSS– you can add it to Datafall, and after that everything you post to your site will automatically appear at Datafall also. It’s kind of like Slashdot, except that instead of being a group blog for CmdrTaco and Zonk and the three other people who can post at Slashdot, it’s a group blog for the entire internet.

Or, if you don’t have a blog or know what I’m talking about: Datafall is an open site that (hopefully) collects the best bits of other sites, and puts them in one place for your reading pleasure.

Why I did this, and why you might care

Lately a lot of the good content and discussion on the internet has been posted in what are called “blogs”. This is a word that is supposed to be short for “weblogs” but basically just means a site where people frequently post things they wrote.

A problem with blogs, at least in my opinion, is that they aren’t very good at forming communities. Almost all blogs have comment sections, so there’s usually a little community there; but these communities usually aren’t very large, and they can sometimes be very insular. Also, most blogs have links to blogs they like and those blogs usually link back, so you sometimes get little rings of blogs that all tie together; but these usually aren’t communities so much as they are cliques. Sometimes you see “group blogs” where a couple different blogs band together, like the excellent Panda’s Thumb; but this is not common, and the tools for setting this sort of thing up don’t seem to be very good.

To me, a good internet community should be something where a whole bunch of people come together to some kind of common ground that no single person exactly controls, the way most web forums work and by-invite blog cliques don’t. When communities are open like this, you get a much wider and more interesting range of opinions, and people are encouraged to respond to things they don’t agree with instead of just shutting them out. Another nice thing about big “common ground” sort of sites is that finding the good stuff is easier– content comes to you, instead of you having to go to it. Good blogs, in my opinion, are after all kind of hard to find. On the other hand, look at something like Slashdot– it’s not very good, but it’s consistent, and that makes it easy. The links on Slashdot are usually just whatever the rest of the blogosphere was talking about three days ago, so you could get the same links by just reading a bunch of different blogs– but the links do get to Slashdot eventually, and personally, I’d just rather read something like Slashdot because it’s easier.

The problem is, though, that while some kind of big centralized site like a web forum may be what I’d prefer as a reader, the people who are actually writing good, interesting stuff prefer to do it in their blogs rather than on something like a web forum. And this makes sense. Who wants to pour their heart and soul into writing something really good if it’s just going to get a Score:5 stamp in a Slashdot story and then disappear forever? If you save your best writing for a blog, not only do you get more attention, it’s safer– the blogger has control over their own site, so they never have to worry about somebody else screwing it all up for them. (I’ve seen at least two collaborative writing sites fall apart, partly because the people running them couldn’t consistently keep the hardware up and running.) It’s easy enough to get people to collaborate and submit good stuff when your site is nothing but links, like the front pages of Slashdot or Fark or Digg. But what if you want actual writing– things like news analysis, or political commentary, or interesting stories? Well, that’s what blogs are for.

I wish there was some way that you could blend the best advantages of blogs with the best advantages of something like Slashdot or a big web forum.

So I decided to try to create one.

How this works

One of the common features all blogs share is what’s called an “RSS Feed”. RSS is a way of displaying posts on a website without displaying the website itself. A lot of people use these programs called “RSS Aggregators” to read blogs. RSS Aggregators (Firefox and Safari each have one built in) keep bookmarks of all your favorite sites, and when you open the aggregator it shows you all the new posts from all of your favorite sites, all mixed together in one place.

Datafall is kind of like an RSS aggregator that’s shared by the entire internet; anyone can add their site as a bookmark at Datafall by going here. (All you have to do is give Datafall a link to your site– Datafall figures out the rest from there.) Once a site is bookmarked on Datafall, Datafall will automatically notice when the site updates, and add an excerpt of the new post, with a “Read more” link that leads to the full post on the blog where it was posted.
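If you’re wondering what “notice” means mechanically: it’s just periodic polling of each bookmarked feed. Datafall itself is written in Ruby on Rails, but the core of the idea fits in a few lines of Perl– a toy sketch, with a hash standing in for the real database of already-seen posts:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use XML::RSS;

    my %seen;    # stand-in for the database: links we've already collected

    sub check_feed {
        my ($url) = @_;
        my $xml = get($url) or return;           # fetch the raw feed
        my $rss = XML::RSS->new;
        eval { $rss->parse($xml); 1 } or return; # skip malformed feeds
        for my $item (@{ $rss->{items} }) {
            next if $seen{ $item->{link} }++;    # only genuinely new posts
            my $excerpt = substr($item->{description} // '', 0, 200);
            print "NEW POST: $item->{title}\n";
            print "$excerpt...\nRead more: $item->{link}\n\n";
        }
    }

    # Poll every bookmarked site on a timer; example.org is a placeholder.
    check_feed('http://example.org/index.rdf');

A real version would want to be politer about how often it fetches and smarter about duplicate detection, but that’s the entire trick: the feed tells you what’s new, and you keep everything you haven’t seen before.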

There are a few different ways to find posts on Datafall. Every post on Datafall has a post “type” (is it news, an op-ed, a diary?) and a post “topic” (is it about politics, computers, culture..?). By default, everything on Datafall gets posted in “Diaries” (which is basically the “anything goes” section) and doesn’t have a topic. You can move one of your posts to a different type or topic by clicking the “Edit or Moderate” link that appears under every post.

Aside from this, there is also a “Front Page” section, which is supposed to be the best of the best from all story types and topics. Like on Kuro5hin or Digg, the users vote on which stories are the best ones and worthy of going to the front page (again, by clicking the “Edit or Moderate” link under the post).

Regardless of type or topic, you can always see the newest posts on Datafall by looking at the sidebar on the right side of every page.

The hope is that Datafall will eventually work like a big collaborative RSS filter, with a bunch of feeds coming in and the very best stuff coming out on the front page, with the will of the users deciding what goes where. (Of course, since there are no users yet, all it takes to get something to the front page right now is for a single person to click on the “nominate” button.)

In principle, there are several sites that work kind of like Datafall already– sites like Feedster or Blogsearch.google.com, which take in many RSS feeds and help you find things within them. However, these are not communities. They are search engines. They do not bring different blogs together any more than Google brings different forums together, and they’re all pull, no push– you can’t get anything out of Feedster unless you already know what you’re looking for.

Datafall can be different.

Site principles

Datafall is far from finished (the section after this one describes some of the things that need to be done), and along with the work that isn’t done yet, there are also going to be a number of decisions that need to be made about how the site should work and how the community should look. As [if] the site gains momentum, these are the principles I am going to try to shape everything that happens around:

  1. The site should be interesting and readable. All other goals must kneel before this one. If looking at the Datafall front page doesn’t immediately produce something worthwhile to read, then what’s the point?
  2. The site should be controlled by the users. Group moderation should be used everywhere. Datafall isn’t “my” site. If I just wanted to run a blog, I’d just do that. Actually, I’m doing it already, now. Datafall, on the other hand, should be a site that exists for, and is controlled by, the people who post there. I am only one of those. Whenever it is possible for a decision about the site– about what kinds of features get implemented, about what does and doesn’t get moderated well, about how (if at all) the site is policed– to be in some way deferred to the userbase at large, it should be.
  3. Filter, don’t exclude. Of course, there’s a big problem with the above idea: not all of the users are going to agree on everything. Different users might have different ideas about what makes good content, or a good feature. Whenever possible, the users on the losing side of the decisionmaking process should be given some way to split away and continue on as they like. The entire point of Datafall is about bridging gaps and bringing different sites together, but it’s important to realize that this isn’t always possible, and you need to have a plan for what to do when it isn’t. If it’s decided that content doesn’t belong on Datafall (short of it actually being spam), it should be hidden, not deleted. If it reaches the point where a subset of the users wind up with a vision of what Datafall should be which is entirely opposed to that of the rest of the userbase, and it turns out there really is no way to reconcile this, the minority should be given some way to split off and carry on without the rest of us (see “groups” and “open source” below). There are two reasons for this. First off, collaborative processes can succumb to groupthink. Whether content is good or bad doesn’t have much to do with whether it is popular or unpopular– but democratic processes, like voting on which stories are the best, are better at picking out the popular thing to say than the right thing to say. This means content eventually gets excluded which does not deserve to be. The best way to avoid this is to try not to exclude content, at least not all the way. Second off, and more importantly, excluding people never works. Sad as it is to say, every site winds up accumulating people who really shouldn’t be there; but ironically enough, the ones who most deserve to be thrown off the boat invariably turn out to be the ones who are best at keeping themselves from being thrown off the boat. In the best-case scenario they manage this by manipulating the emotions of the people responsible for policing the site; in the worst case, by cheating and evading bans. The best way to deal with this, I think, is to just go ahead and give these people their soapbox, and then give everyone else the tools to avoid having to listen to it.
  4. Never stop experimenting. The Internet never stops changing; you can’t survive on the internet unless you do the same. I have seen (and used) enough small sites that failed miserably to know this. Datafall should always be a work in progress, and the site should always be incorporating new ideas, even if they’re bad ones. If they turn out to be bad ideas we can just take them out again.
  5. AJAX. This is a technical issue, but it’s an important one. AJAX is this new fancypants internet technology that lets webpages update without reloading. Like most things on the internet, AJAX has the potential to allow a lot of cool and interesting things, and also the potential to allow a lot of abuse. AJAX is used on Datafall in the following ways:
    • AJAX should always be used for controls. Everything on the site, like reporting a bad post or voting on a good one, is controlled by AJAX. You should never have to suffer a pageload just to change the state of something, and so far, on Datafall, you don’t– the only forms that trigger pageloads are logging in and signing up for an account, and I may even be able to remove those eventually.
    • AJAX should never be used to navigate. That’s what pageloads are for. The “back” button is sacred and it should always do exactly what you expect.
    • The site should always work exactly the same with Javascript turned off as it does with Javascript turned on.

Future plans

Things about Datafall that should change in the near term:

  1. Voting is not as robust as it should be. Right now, anyone can move any article to any section, and anyone can nominate something to the front page. I have features in place that would do this better, but they are not turned on– again, because there aren’t any users on the site yet, so right now they’d just make things needlessly complicated. Eventually the site will have something like “I liked this / I didn’t like this” counters on every story. If a lot of people like a story, it will get shown on the front page. If a lot of people dislike a story, it will get cast back down into the diary section.
  2. Hilariously, although the entire site is made up of RSS feeds, Datafall itself doesn’t offer an RSS feed yet.
  3. This is an important one– pinging. Blog engines offer ways to automatically notify sites like Feedster or Datafall when they have updated. I don’t actually even know how this works exactly. I need to find out. Right now Datafall doesn’t immediately know that one of its bookmarked sites has updated– it just checks for changes periodically. This is bad.
  4. More story types– we need a “Links” section eventually, and I’m considering a “podcasts” section.
  5. Deletion. Right now, if you make a post on Datafall, you can’t remove it. Nobody can delete posts but me. This is probably bad and stuff.

Things about Datafall that should change in the long term:

  1. Groups. Right now, the only ways to sort things on Datafall are the type and topic sections linked at the top of every page. There should be ways for users to create new types, new topics, or entirely different ways of categorizing things. In principle, this should work like “Groups” on LiveJournal– LiveJournal lets you make specialized group blogs that act kind of like message boards, and that you post to as if you were making a post in a LiveJournal. But you can only post to a LiveJournal group by making a post specifically to it on LiveJournal.com; Datafall groups should be able to take in posts from anywhere. Eventually this can hopefully even work such that it’s possible for users to create their own totally autonomous subsites within Datafall, with their own moderation rules and everything.
  2. Ripping off Feedster and Digg. Right now, posts only enter Datafall if the person who owns the RSS feed wills it. It doesn’t have to work this way. If Datafall ever gets ridiculously large, we could add a separate “best of the internet” section that works kind of like Fark. The outputs would be voted on the same way any other Datafall post is, but the inputs would be the entire blogosphere instead of just Datafall’s diaries– for example, maybe Datafall users could nominate articles they liked but didn’t write. Now, given, I really don’t think this is a good idea. It doesn’t fit with any of the site’s goals, and it also introduces various difficulties (both legal and technical). However, it’s something worth considering.
  3. Comment and account tracking. This, on the other hand, is something I really do want to try: Datafall bridges the gaps between sites by putting articles in a central place. However, comments on different Datafall blogs may as well be in different universes. I am curious what can be done about this. Think back to Slashdot: if you post in six Slashdot threads in one day, you can come back to Slashdot later, go to your user page, and have nice convenient links to all your posts, along with how many replies each one got. If you post in six different threads in the blogosphere in one day, on the other hand, the only way to see what happened to them later is to go back and track down your posts in each of those six threads. There must be a better way to do this. Right now, a Datafall account isn’t really used for anything except creating feeds. It would be interesting to try to make it so that the posts you make in the comments section of a blog that uses Datafall are automatically recognized as being part of your Datafall account. (Right now there are a couple of “shared account” services which let you access many blogs with a single signin. But as far as I know, none of them are very open or, for that matter, open source.) In addition to, or maybe instead of, this, Datafall could track comments made on Datafall blogs (some, but not all, blog engines offer RSS syndication for comments) and provide a “comments I have made on any Datafall blog” page. I think this entire concept could be extremely useful, maybe even more useful than the part of Datafall I’ve implemented so far. However, it would not be trivial. Each blog would have to individually support the comments features; not only is there the problem that not everyone would want to participate, but also (by the very nature of Datafall) every blog linked from Datafall is running different software. But, of course, this leads me to:
  4. Blog plugins. Blog engines like WordPress or Movable Type all support plugins. I would like to look into making plugins for these blog engines that make posting on Datafall easier. A simple version of this plugin might do nothing more than add “type” and “topic” menus whenever you post a story, so you don’t have to go through the silly step of, every time you make a post, fishing it off Datafall and rescuing it from the Diary section. I don’t think this would be very hard (though, on the other hand, I don’t think I really want to do this unless people are actually interested).
  5. Open source. One last thing: I want to release the code that runs Datafall as a Ruby on Rails plugin. I have not actually figured out how to do this yet. Once I have it worked out, however, I intend to release Datafall’s software under the GNU LGPL.

That’s about it. I hope you find Datafall useful or at least interesting. If you have any thoughts on this experiment, please leave them as a comment below.