Archive for the 'Math/Sci/Tech' Category

Super Mario World vs. the Many-Worlds Interpretation of Quantum Physics

Sunday, February 3rd, 2008

Short version: Just watch this video.

Okay, now what was that?

So a few months back some of my friends were passing around these videos of something called “Kaizo Mario World”, which I was told, at first, translated to “Asshole Mario World”. This turned out to have actually been a misunderstanding of something in the youtube posting of the original creator’s videos:

[Asshole Mario] is not the real name for this series of videos, but it is my personal name for it.
The literal translated name for 自作の改造マリオ(スーパーマリオワールド)を友人にプレイさせる is “Making my friend play through my own Mario(Super Mario World) hack”, hence Kaizo(hack) Mario to the USA.

…but, the name is pretty appropriate. Kaizo Mario World is one of a series of rom hacks people create in special level editors that let you take Super Mario World and rearrange all the blocks; the point of Kaizo appears to have been to create the most evil Super Mario World hack ever.

I started watching these videos, but after seeing how the player got past the first gap I stopped, went wait, this actually doesn’t look so bad, and started playing it instead. It’s actually not that bad! I was expecting it to be like Super Mario Frustration, Kaizo Mario World’s equivalent in Super Mario Bros. 1 hacks– all ridiculous jumps that require pixel-perfect timing, memorizing the location of a bunch of hidden blocks that exist only to foil a jump and, occasionally, actually exploiting glitches in the game engine.

Kaizo Mario World, though, really turns out to be more like a puzzle game– giving you a series of seemingly impossible situations and then leaving you to figure out how to get past them. It only uses the sadistic-invisible-block trick sparingly (and, hey, even SMB2JP did that a couple times). And it winds up being kind of fun.

It’s still sadistically hard, though, so if you want to play it you have to use what are called “save states”. Most emulators let you do this kind of save-and-rewind thing, where if you screw up you can back up just a few seconds to the last time you were standing in a safe place. So if you’re playing Kaizo Mario World you find yourself playing the same four-second section over and over and over until you get the jump just right, listening to the same two seconds of the soundtrack looping Steve Reich style.

Anyway, the idea for the video up top was inspired by an offhanded comment in the “original” Kaizo Mario World youtube post I linked above:

The original videos were in god awful codecs that were a bitch to convert, so unfortunately the Tool Assisted Speedruns came first to most youtube watchers.
This is rather unfortunate, as I feel you lose a lot of the “appeal” by watching those.

This refers to the way that most emulators, if you’re recording a video of yourself playing a game and you do the save-state rewind thing, will rewind the video too, so that the video shows only your final attempt, not any of your messups. People use this to make “speedruns” showing a best-of-all-possible-worlds recording of them playing through some game or other, with all the errors scrubbed out. The guy’s point was that watching Kaizo Mario World this way kind of ruins it, since most of what makes Kaizo great is watching someone fail over and over and over again until they finally get it right.

On the other hand, Kaizo Mario World involves SO much failing that this means all the “real” videos are, like, twenty minutes long just to get through what in a tool-assisted run would have been a two-minute level. So I was thinking, what if you had a special tool that instead of erasing all the screwups, it saved all of them and made a video of all the screwups plus the one successful path superimposed? I kept thinking about this and eventually I just sat down and hacked SNES9X to work exactly like that. The result was the video up top, showing the 134 attempts it took me to successfully get through level 1 of Kaizo Mario World.

I think I’m going to make some more videos in this style of different Kaizo Mario World levels and post them back here, but in the meanwhile, if you want to make your own many-worlds speedrun videos, here’s my custom version of SNES9X 1.43 with the multi-record function:

  1. For the Mac OS X version, click here.

  2. For a Windows version, click here. (Many thanks to Syndalis of 360Arcadians for compiling this for me.)
  3. If you want a Linux version, you’ll have to compile that yourself, but you can do this by finding a copy of the 1.43 source and replacing movie.cpp with this.
  4. And for the full Mac OS X source, click here.


[Update 2/9/08: The Mac version now correctly processes movies recorded in the Windows version.]
[Update 2/10/08: Mac version updated to fix a problem where certain kinds of corrupt recording files could cause the program to loop endlessly; window titlebar now contains status information.]

Note that this is a quickly-tossed-together hack all done to make a single video, and I make NO promises as to the quality, ease-of-use, correctness, or safety of these links. Also, I think the video feature should work with any SNES game, but I’ve only tested it with Kaizo. If anyone attempts this themselves, I’d be curious to hear about the results.

To make a video: First, use SNES9X’s “record movie” function to record yourself playing some game; while the game is running, use the save and restore feature at least once. When you’re done, you’ll find that SNES9X has created a yournamehere.smv file and also a series of files with names like yournamehere.smv.1, yournamehere.smv.2, etc. These .number files are all the different “mistake” playthroughs, so keep all these files together in one directory.

To turn this into an actual movie you can watch, you will need to use the OS X version of the emulator. Unfortunately, the Windows and Linux versions can only record multiple-run SMVs– they can’t do the export-to-quicktime thing. The quicktime-export code is based on alterations to the mac-specific parts of 1.43 (although considering that I hear the Quicktime API is mostly identical between Mac and Windows, it might be pretty easy to port that code to Windows at least…).

Anyway, in the OS X version, open up the appropriate ROM and choose “Export to Quicktime Movie” from the Option menu. Before leaving the export dialogue, make sure to click the “Compression…” button. You *MUST* choose either the “None” or “Planar RGB” codecs, and under the “Compressor” pane you *MUST* choose a depth of “Millions of Colors+”. The “+” is important. Once you’ve saved the movie location, go to “Play Movie” in the Option menu and choose the .smv you want to play. The emulator will play through each of the playbacks one by one; when it’s done (you’ll know because the background turns back on) your movie will appear in the location you chose. Note that there’s one more step! You won’t be able to actually play this movie, at least not very well, because the export feature works by creating a different movie track for each playthrough and the file will be huge and bloated. Open your video in Quicktime Player, then choose “export” and export to some video codec with actual compression (like H.264). This will flatten all the different layers of the movie into one. Okay, NOW you’re done.

…So what’s this about quantum physics? Oh, right. Well, I kind of identify the branching-paths effect in the video with the Everett-Wheeler “Many Worlds Interpretation” of quantum physics. Quantum physics does this weird thing where instead of things being in one knowable place or one knowable state, something that is quantum (like, say, an electron) exists in sort of this cloud of potentials, where there’s this mathematical object called a wavefunction that describes the probabilities of the places the electron might be at a given moment. Quantum physics is really all about the way this wavefunction behaves. There’s this thing that happens though where when a quantum thing interacts with something else, the wavefunction “collapses” to a single state vector and the (say) electron suddenly goes from being this potential cloud to being one single thing in a single place, with that one single thing randomly selected from the different probabilities in the wavefunction. Then the wavefunction takes back over and the cloud of potentials starts spreading out again from that randomly selected point.

A lot of scientists really don’t like this “collapse” thing, because they’re uncomfortable with the idea of nature doing something “at random”. Physics was used to dealing with randomness before quantum physics came along– the physics of gases is all about the statistics of randomly moving gas particles, for example– but those kinds of randomness aren’t assumed to be actually random, just “effectively random” because the interactions of air molecules are so chaotic and complicated that they’re too unpredictable for humans to track. Think about what happens when you roll a die: the number that comes up when the die lands isn’t strictly speaking “random”, it’s absolutely determined by the physics of motion and the velocity at which you let go of the die and so forth. The “randomness” of a die roll isn’t about actual indeterminacy, but rather just a way of talking about your ignorance of how the deterministic processes that control the die operate. Quantum physics, on the other hand, has things that as far as anyone can tell are really, objectively random, with no mechanism producing that randomness and nowhere apparent to stick one.

Since this makes some physicists uncomfortable, they came up with a sort of a philosophical trick: they interpret quantum physics in such a way that they say when there’s more than one possible random outcome of some quantum process, then the different possibilities all happen, in alternate universes. They can’t prove or disprove that this idea is true– from the perspective of someone inside one of these universes, everything behaves exactly the same as if the “wavefunction collapse” really was just picking a random option. But it’s one way of looking at the equations of quantum mechanics, and as far as the mathematics cares it’s as valid as any other. Looking at things this way, if there’s a 3/4 chance of a quantum process doing one thing and a 1/4 chance of it doing the other, then we get three universes where the one thing happens and one universe where the other one does. This does mean that there’s some universe where two seconds ago all of the atoms in your heart spontaneously decided to quantum-tunnel two feet to the left, but in almost every universe this doesn’t happen so we don’t worry about that.

Science fiction authors love this. There’s a bunch of stories out there exploring this idea of a multiverse of infinite possibilities all occurring side by side (the best of these I’ve ever read being Robert Anton Wilson’s Schrödinger’s Cat). Most of these stories get things totally wrong. Science fiction authors like to look at many-worlds like, this morning you could either take the bus to work or walk, so the universe splits in two and there’s one universe where you decided to walk and one universe where you decided to take the bus. This is great for purposes of telling a story, but it doesn’t really work like that. The many-worlds interpretation is all about the behavior of quantum things– like, when does this atom decay, or what angle is this photon emitted at. Whereas human brains are big wet sloppy macroscopic things whose behavior is mostly governed by lots of non-quantum processes like neurons releasing neurotransmitter chemicals.

This said, tiny quantum events can create ripples that have big effects on non-quantum systems. One good example of this is the Quantum Suicide “experiment” that some proponents of the Many-Worlds Interpretation claim (I think jokingly) could actually be used to test the MWI. The way it works is, you basically run the Schrödinger’s Cat thought experiment on yourself– you set up an apparatus whereby an atom has a 50% chance of decaying each second, and there’s a detector which waits for the atom to decay. When the detector goes off, it triggers a gun, which shoots you in the head and kills you. So all you have to do is set up this experiment, and sit in front of it for awhile. If after sixty seconds you find you are still alive, then the many-worlds interpretation is true, because there is only about a one in 10^18 chance of surviving in front of the Quantum Suicide machine for a full minute, so the only plausible explanation for your survival is that the MWI is true and you just happen to be the one universe where the atom’s 50% chance of decay turned up “no” sixty times in a row. Now, given, in order to do this, you had to create about 10^18 universes where the Quantum Suicide machine did kill you, or copies of you, and your one surviving consciousness doesn’t have any way of telling the people in the other 10^18 universes that you survived and MWI is true. This is, of course, roughly as silly as the thing about there being a universe where all the atoms in your heart randomly decided to tunnel out of your body.
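The arithmetic behind that one-in-10^18 figure is just sixty coin flips in a row, which you can sanity-check in a couple of lines of Python:

```python
# Chance of the atom's 50% decay turning up "no" sixty seconds running.
p_survive = 0.5 ** 60

print(p_survive)      # roughly 8.7e-19
print(1 / p_survive)  # 2^60, roughly 1.15e18 -- "about one in 10^18"
```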

But, we can kind of think of the multi-playthrough Kaizo Mario World video as a silly, sci-fi style demonstration of the Quantum Suicide experiment. At each moment of the playthrough there’s a lot of different things Mario could have done, and almost all of them lead to horrible death. The anthropic principle, in the form of the emulator’s save/restore feature, postselects for the possibilities where Mario actually survives and ensures that although a lot of possible paths have to get discarded, the camera remains fixed on the one path where after one minute and fifty-six seconds some observer still exists.

Note: Please do not use the comments section of this post to discuss ROMs or where to get them. IPSes are okay. Thanks.

The Physics of Super Mario Galaxy

Sunday, December 2nd, 2007

For the last week I’ve been playing this game called “Super Mario Galaxy”, and if it isn’t the best game ever made, then it’s in the top three. It has fanservice by the buckets, and a soundtrack that soars as if the game itself is just happy it is being played, and little blue tractor beam stars that sing you sad little theremin songs, and evil hats. Things happen for no reason, like they used to in old NES video games, and you don’t care why, you just love it. I don’t even know the words to express how much fun I’ve been having with this game.

So instead of trying, I’m going to instead write about the physics problems the game poses.

The gimmick in Super Mario Galaxy is that where previous Mario games had you jumping between platforms floating in the air in the Mushroom Kingdom, the new one has you jumping between little bitty planetoids floating out in space. It works like this:

You get the idea pretty quickly. So here’s what I wonder:

Most of Mario Galaxy is spent running around on the surfaces of those little asteroids, like you see on the video. The asteroids vary in size and shape, although most are roughly spherical and on average each asteroid is about the size of a Starbucks franchise. Every single one of them, regardless of size or shape, has completely normal earth gravity. And it’s hard not to think, when you see Mario taking an especially long jump on a small asteroid and landing about halfway around the planet, that he kinda looks like he’s having a lot of fun. So what I want to know is, is any of this possible in real life?

Would it be physically possible– assuming that you have more or less unlimited resources and futuristic engineering, but are still bound by the laws of known physics– to actually build a bunch of little asteroids like this, with compact volume but earth gravity? Say, if you’re a ringworld engineer or a giant turtle with magic powers. Could you build the Mario Galaxy universe?

Here’s what I worked out:

So first off, clearly we’re not going to be able to do most of the things in the Mario Galaxy universe, since SMG is of course a game and obviously they weren’t trying to be realistic or imitate anything like real physics. I’m going to just ignore these things– for example, I’m ignoring anywhere where there’s just a vanilla gravity field pointing in some direction, as if generated by a machine. As far as we know, that’s just not possible. In the physics we know, the only way to create a gravity field is to put a bunch of mass in the place that you want the gravity to point toward.

This isn’t so bad, since for a lot of the planetoids in Galaxy this might well be what they are– just a big lump of mass creating a gravity well. In fact, some of the planets– for example the one at the start of the video above– are actually shown to just be thin hollow shells with a black hole at the center. So, in the post that follows, I show what you’d have to do to build one of these planets. I’m going to consider just a single example planet, of about the size of the one from the video above; you’d have to use different masses and densities for each planet in the game, since each has a different size but the exact same amount of gravity at the surface.

ANALYSIS

The first question to ask here is, how much mass do we need? Well, the planet in the video, eyeballing it, looks like it’s about as wide as a 747 is long. Wikipedia says a 747 is 230 feet long.

Newton’s formula for gravity is m_1*a = G * m_1 * m_2 / r^2, where I take m_1 as the mass of Mario and m_2 is the mass of the planetoid. Solving for m_2, I get:

m_2 = a * r^2 / G

r is 115 feet (half a 747), and a is earth gravity, 9.8 m/s/s.

Plugging in to google calculator, I get:

((9.8 m (s^(-2))) * ((115 ft)^2)) / G = 1.8043906 × 10^14 kilograms

So, if you want to get earth-like gravity on the surface of a 230-foot-diameter sphere, you’re going to need about 1.8 * 10^14 kg of mass. This is not that much! Checking Wikipedia I find even the smallest moons and dwarf planets in the solar system get up to about 10^20 kg. In fact, 2 * 10^14 kg is just about exactly the mass of Halley’s Comet. This sounds attainable; other characters are shown hijacking comets for various purposes elsewhere in Mario Galaxy, so no one will notice a few missing.
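If you’d rather not trust Google Calculator, the same arithmetic is easy to redo in Python (the constants here are rounded, so don’t trust the last few digits):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
a = 9.8            # desired surface gravity, m/s^2
r = 115 * 0.3048   # 115 feet, converted to meters

# Mass needed so that G * m_2 / r^2 = a at the surface
m_2 = a * r**2 / G
print(f"{m_2:.3e} kg")   # about 1.8e+14 kg
```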

Meanwhile, Wikipedia’s formula for escape velocity for a planet ( sqrt( 2GM / r ) ) tells us that escape velocity from this planetoid will only be about:

sqrt((2 * G * (1.8043906 × (10^14) kilograms)) / (115 feet)) = 26.2110511 m / s

Which is about 60 MPH. So the cannon stars Mario uses in the video to move from planet to planet should work just fine. (Although platforms might not work so well, at least not tall ones; since the planetoid is so small, you’d only have to get about 45 or 50 feet up off the ground before the pull of gravity is halved. Actually gravity will fall off so quickly on the planetoid that there would be a noticeable gravity differential between your feet and your head: a six-foot-tall person standing on it would experience about 1 m/s^2 more acceleration on their feet than their head. So expect some mild discomfort.)
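All of those figures follow from plain Newtonian gravity, so they can be checked the same way, reusing the mass computed above:

```python
import math

G = 6.674e-11
m = 1.8043906e14   # planetoid mass from the earlier calculation, kg
r = 115 * 0.3048   # surface radius, m

# Escape velocity: sqrt(2GM/r), about 26 m/s (roughly 60 MPH)
v_esc = math.sqrt(2 * G * m / r)

# Gravity goes as 1/d^2, so it halves at distance r*sqrt(2) from the center;
# height above the surface, in feet (lands in the 45-50 ft range)
h_half = (r * math.sqrt(2) - r) / 0.3048

# Head-to-foot differential for a 6-foot person: dg ~ 2*g*h/r, about 1 m/s^2
dg = 2 * 9.8 * (6 * 0.3048) / r

print(v_esc, h_half, dg)
```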

At this point the only question is: If one were to attempt to fit Halley’s Comet into a 230-foot-diameter sphere, would this turn out to be impossible for any reason? In answering, I’m going to consider two cases: One where the mass is in a black hole at the center of the sphere; and one where the mass is distributed through the sphere evenly. In both I’ll try to see if anything really bad happens.

So, first, the black hole case. Looking here I’m told that the event horizon of a maximally spinning black hole should be equal to G * M / c^2, and Wikipedia tells me that the Schwarzschild radius (the radius you have to pack a given mass into before the black hole starts to form on its own, if it isn’t spinning) is about twice that. So plugging this in, I find that the radius of this black hole at the center of the sphere is going to be:

(G * (1.8043906 × (10^14) kilograms)) / (c^2) = 1.33970838 × 10^-13 meters

… uh oh! Now we seem to be running into trouble. If someone wanted to construct a black hole with the mass of Halley’s Comet, they’d somehow have to pack the mass of the whole thing into… well, google claims the diameter of a gold atom is 0.288 nanometers, so… about one-hundredth of the diameter of a gold atom?! That doesn’t sound very feasible.

On the other hand, considering the case where the sphere is solid, one finds that the required density is going to be about:

(1.8043906 × (10^14) kilograms) / ((4 / 3) * pi * ((115 feet)^3)) = 1.00023843 × 10^9 kg / m^3

As remarkable coincidence would have it, 10^9 kg/m^3 is, according to Wikipedia, exactly the density of a white dwarf star, or the crust of a neutron star! So this is sounding WAY easier than the black hole plan: All you have to do is go in and chip off 180,000 cubic meters of the crust of a neutron star, and you’ve got your planetoid right there.
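Checking the density figure, and the volume of the chunk you’d need to chip off, in Python:

```python
import math

m = 1.8043906e14   # planetoid mass, kg
r = 115 * 0.3048   # radius, m

volume = (4 / 3) * math.pi * r**3   # about 1.8e5 cubic meters
density = m / volume                # about 1.0e9 kg/m^3: white dwarf territory
print(volume, density)
```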

Of course, there are still a couple of complications. First off, I’m not sure what the temperature of a block of white dwarf matter that size would be, but you might not want to actually walk on it. For another thing, the reason why a white dwarf or the crust of a neutron star has that much density in the first place is that all that matter is being held in place by the incredible gravitational weight of the star the matter is attached to– your average white dwarf is about the size of Earth, which by star standards is tiny, but which compared to our little Mario Galaxy planet is rather large. So if you tore off a 747-sized chunk of one of these stars and just dumped it in space, it would almost certainly explode or expand enormously or something, because the chunk’s gravitational pull on itself would probably not be sufficient to hold it together at the white dwarf density. So we’re going to need some kind of a shell, to hold the white dwarf matter in place and (one assumes) trap things like heat and radiation inside.

When you get to the density a white dwarf is at, the main thing you have to worry about is what’s called degeneracy pressure. Usually, when you try to compress matter, the resistance you meet is due to some force or other– the force holding atoms in a solid apart, for example, or the accumulated force of marauding gas molecules striking the edge of their container. When you get to anything as dense as a white dwarf, though, the particles are all basically touching (“degenerate”), and the main thing preventing you from compressing any further is literally just the physical law that prevents any two particles from being in the same place at the same time, the Pauli Exclusion Principle. If you try to compress past that point, a loophole in the exclusion principle comes into play: it’s actually possible for two particles to share the same chunk of space as long as they’re in some way “different”, for example if one of them is in a higher energy state than the other (say, it’s moving faster). So in order to compress two particles “on top” of each other, you have to apply enough force that you’re basically pushing one of the particles up to a higher energy. For large numbers of particles, this can get hard.

Different kinds of matter degeneracy happen at different pressures. At the center of a neutron star, the pressure is so high that neutrons become degenerate and start stacking up on top of each other. The matter in white dwarfs and neutron star crusts, on the other hand, exhibits only electron degeneracy. Whew! So, how bad is this going to be? Well, looking here, we find the formula for electron degeneracy pressure to be:

P = ( (pi^2*((planck’s constant)/(2*pi))^2)/(5*(mass of an electron)*(mass of a proton)^(5/3)) ) * (3/pi)^(2/3) * (density/(ratio of electrons to protons))^(5/3)

(I’m assuming that our little white-dwarf-chunk will not be so warm that the particles will be moving at relativistic speeds; if they are, we have to switch to a different formula, found here.)

So, we know the density; the ratio of electrons to protons has to be 1 (Otherwise the white dwarf would have an electric charge. Of course, if you can somehow find a positively charged white dwarf somewhere, you can reduce your required pressure noticeably!); and everything else here is a constant. Plugging this in we get:

P = ( (pi^2*((6.626068 * 10^-34 m^2 kg / s)/(2*pi))^2)/(5*(9.10938188 * 10^-31 kilograms)*(1.67262158 * 10^-27 kilograms)^(5/3)) ) * (3/pi)^(2/3) * (1.00023843 * 10^9 kg/m^3)^(5/3) = 9.91935718 × 10^21 kg m^-1 s^-2

Or in other words, if we neglect the assistance that the white dwarf material will be providing in holding itself together in terms of gravitational pull, the shell for our planetoid would need to be able to withstand 9.91935718 × 10^21 Pascals of pressure in order to keep all of that degenerate matter in. That’s a bit of a problem. Actually, it’s more than a bit of a problem. It’s most likely impossible. The strongest currently known material in the entire universe is the carbon nanotube, and it has a theoretical maximum tensile strength (I think tensile strength is what we want to be looking at here) of more like 10^11 Pascals. So you’d have to find a material that could do better than that by a factor of like 10^11. But, hey, that’s just an engineering problem, right?
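Plugging the same numbers into Python, with the electron/proton ratio set to 1 as above (this is only the non-relativistic formula, and the constants are rounded):

```python
import math

hbar = 6.626068e-34 / (2 * math.pi)   # reduced Planck constant, J s
m_e = 9.10938188e-31                  # electron mass, kg
m_p = 1.67262158e-27                  # proton mass, kg
rho = 1.00023843e9                    # planetoid density, kg/m^3
ratio = 1.0                           # electrons per proton

# Electron degeneracy pressure, per the formula above
P = (math.pi**2 * hbar**2 / (5 * m_e * m_p**(5 / 3))) \
    * (3 / math.pi)**(2 / 3) * (rho / ratio)**(5 / 3)
print(f"{P:.3e} Pa")   # about 9.9e+21 Pa
```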

CONCLUSION

So what the above basically tells us is this. As long as you can do one of the following things:

  1. Hijack Halley’s Comet and collapse all of its mass down into a volume with diameter 5.35 milliangstroms, to create a small black hole

  2. Build a pressure vessel capable of withstanding 10^22 Pascals of stress, then trap inside a big chunk torn out of a white dwarf

Then you, too, can have a Super Mario Galaxy style planetoid in your very own space station! Now, given, these things aren’t easy, and possibly not even possible. But isn’t it worth it?

DISCLAIMERS

The above analysis has some limitations which should be kept in mind.

  • In the case of the black hole strategy, you’d have to somehow stabilize the position of the black hole relative to the shell, so that the surface of the shell always stayed exactly 115 feet away from the black hole– otherwise random drift would cause one side or other of the shell to gradually drift toward the black hole and eventually cause the whole thing to fall in. Which probably would be kind of fun to watch, but isn’t what you want if you’re standing on it at the time.
  • In the case of the solid-body/white dwarf strategy, I am treating the final planetoid like a point mass. For a uniform sphere this is actually fine– Newton’s shell theorem says the external gravity of a spherically symmetric body is exactly that of a point mass at its center– but the theorem stops applying the moment the planetoid isn’t a sphere.
  • And most of the planets in Mario Galaxy are not spheres! There’s also planetoids shaped like barbells, and avocados, and cubes; for an irregular body like those you’d have to actually integrate the gravitational contributions of all the little bits of matter across the whole volume. Most interestingly, there are a handful of planetoids in certain places in Mario Galaxy shaped like toruses (doughnuts). I’m curious, but not sure how to figure out, if you actually were to construct such a thing in real life, what would the gravity when walking on it be like? What would happen if you tried to walk onto the inside rim?
  • None of these planetoids would be able to maintain an atmosphere. The escape velocity is just too stupidly low. (Puzzlingly, this does not seem to matter in Mario Galaxy, since Mario seems to be able to breathe even when floating out in deep space. One would be tempted to simply conclude that Mario, being a cartoon character, does not need air, but no, there are sections in the game with water and Mario suffocates if he stays underwater too long. Apparently the Great Galaxy is permeated with some kind of breathable aether?)
  • Even with the gravity and breathable aether aside, many of the elements of Super Mario Galaxy do not seem to be possible to replicate under normal physics. For one thing, no matter where Mario walks on any structure anywhere in the game, his shadow is always cast “down” underneath his feet, as if light in Mario’s universe always falls directly toward the nearest source of gravity, or as if the shadows weren’t cast by light at all but were simply visual markers in a video game allowing the player to tell where they are about to land.
  • I am not a licensed structural engineer, space architect or astrophysicist. The data above is provided on an “as is” basis without any representations or warranties and should not be used in the construction of any actual space vessel or celestial object. John Carmack, this means you.
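For what it’s worth, the torus question can at least be attacked numerically: chop the body into little chunks, treat each chunk as a point mass, and sum up the pulls. Here’s a rough sketch of that idea in Python– the torus dimensions and grid resolution are invented purely for illustration:

```python
import math

G = 6.674e-11
R, a = 30.0, 10.0   # torus: ring radius R, tube radius a (meters, made up)
M = 1.8e14          # total mass, kg (borrowed from the sphere calculation)

# Discretize the solid torus in (ring angle u, tube angle v, tube radius s).
# Each chunk's share of the volume is proportional to s * (R + s*cos v).
chunks, total_w = [], 0.0
for i in range(96):
    u = 2 * math.pi * (i + 0.5) / 96
    for j in range(24):
        v = 2 * math.pi * (j + 0.5) / 24
        for k in range(6):
            s = a * (k + 0.5) / 6
            w = s * (R + s * math.cos(v))
            x = (R + s * math.cos(v)) * math.cos(u)
            y = (R + s * math.cos(v)) * math.sin(u)
            z = s * math.sin(v)
            chunks.append((x, y, z, w))
            total_w += w

def gravity(px, py, pz):
    """Net gravitational acceleration at a point, summed over all chunks."""
    gx = gy = gz = 0.0
    for x, y, z, w in chunks:
        dm = M * w / total_w
        dx, dy, dz = x - px, y - py, z - pz
        d2 = dx * dx + dy * dy + dz * dz
        f = G * dm / (d2 * math.sqrt(d2))
        gx += f * dx; gy += f * dy; gz += f * dz
    return gx, gy, gz

print(gravity(R + a, 0, 0))   # on the outer equator: pulled back toward the tube
print(gravity(R - a, 0, 0))   # on the inner rim: pulled outward, toward the tube
```

The qualitative answer this gives is that everywhere on the surface, including the inside rim, you’re pulled toward the nearby tube of matter rather than toward the hole in the middle, so walking onto the inside rim should work.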

Special thanks to pervect at Physicsforums for help with the degenerate matter stuff, and Treedub for pointing out what the electron/proton ratio of a white dwarf would be.

The Ballad of the Lambda Calculus

Saturday, September 29th, 2007

Computer Science can be said to predate even the idea of a computer itself by at least two millennia, in the sense that even before Babbage started designing calculating engines (for simplicity, I will here ignore mechanical calculation devices which predate Babbage), mathematicians had for a long time been analyzing “algorithms”, mathematical procedures which behave like functions and are calculated by following a specific set of completely prescribed steps. In fact, although the name was not given to them until the 17th or 18th century, algorithms can be said to go at least as far back as Euclid in 300 BC. Since algorithms, which are the fundamental basis of computer programming, are basically just computer programs without the syntax, it at first seems surprising that they would predate machines capable of running them by so much; but this really isn’t so unusual, since algorithms in principle can be worked by a human with a sheet of paper– all that is necessary for all the properties of an algorithm to hold universally is that something, mechanical or human, follows the steps without room for inserting will of one’s own.

Work on understanding algorithms before the 20th century, however, was stymied by a lack of rigor in what we understood an algorithm to be. Though it did occur to mathematicians hundreds of years before Babbage that algorithms are not just something to design, but objects in their own right whose properties can be meaningfully analyzed, all such analysis ultimately was doomed by the intuitive (rather than rigorous) nature of what an algorithm was understood at the time to be. For example, I say above that an algorithm consists of a series of “steps”. But what, exactly, is a “step” in an algorithm? For a machine, such as a computer, a “step” can be easily understood to be a machine instruction. Instructions for humans, on the other hand, do not come in discrete units– if step one in an algorithm is “write down a number”, could we actually call this a “step” or should we break things down further into “pick up a pencil”, “pick a number”, “position hand over paper” etc? This sounds nitpicky, but it’s crucially important, since without knowing what a step is it is of course impossible to ask any of the questions we would like to ask about algorithms which concern steps– questions like “if this algorithm is executed, how many steps will it take?”. What was not understood in early attempts to understand algorithms was that algorithms could not really be meaningfully studied in the absence of some specific idea of a “model of computation”– a formal system by which algorithms can be specified according to some exact scheme and then deterministically executed.

The path that led to this realization was arguably first set off by David Hilbert. His tenth problem of 1900 asked, in all but name, for an algorithm, and in 1928 he and Wilhelm Ackermann posed the closely related Entscheidungsproblem:

The Entscheidungsproblem [decision problem for first-order logic] is solved when we know a procedure that allows for any given logical expression to decide by finitely many operations its validity or satisfiability … The Entscheidungsproblem must be considered the main problem of mathematical logic.

This question was eventually answered in 1936, before the first general-purpose computer had ever been built, in a famous paper written by one Alan Turing; in this paper the “Turing machines” that now bear his name were first defined. The answer was not the one Hilbert would have hoped for. Turing defined a simple, abstract machine which in principle could be built, but which in practice would be hopelessly impractical to use; the point of this machine was simply to provide a sort of least common denominator for computing devices. Studying this machine provided a number of surprises. The surprise which would have disappointed Hilbert was that his question cannot be answered in the way he wanted: no procedure exists which can decide, in finitely many operations, the validity or satisfiability of every logical expression.

The surprise which would be more interesting to us, however, was that the machine turns out to be universal. The Turing machine, despite its simplicity, has all the capabilities of any finite-time deterministic machine which can be built or even imagined. A related surprise was that any machine capable of emulating a Turing machine is, itself, exactly as powerful as a Turing machine. This gives rise to the idea of a “universal model of computation”. There are countless such models, and the Turing machine was simply the first to be recognized as one. Any such universal model provides the rigor needed to analyze algorithms; whichever one you choose, or whichever one you mechanically implement, things will behave in certain important ways the same.
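To make this less abstract, here is a toy Turing machine simulator in Python. This is a sketch of the general idea rather than Turing’s exact formalization, and the INCREMENT table at the bottom is a made-up example machine, not anything from his paper:

```python
def run_tm(tape, transitions, state, head, blank='_'):
    # A Turing machine: a tape of symbols, a read/write head, and a
    # table mapping (state, symbol) -> (symbol to write, head move,
    # next state).  Run until the machine enters the 'halt' state.
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    while state != 'halt':
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += {'L': -1, 'R': 1, 'S': 0}[move]
    return ''.join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: add one to a binary number, starting with the head
# on the rightmost digit and a carry "in hand".
INCREMENT = {
    ('carry', '1'): ('0', 'L', 'carry'),  # 1 + carry = 0, carry moves left
    ('carry', '0'): ('1', 'S', 'halt'),   # 0 + carry = 1, done
    ('carry', '_'): ('1', 'S', 'halt'),   # ran off the left edge: new digit
}

print(run_tm('1011', INCREMENT, 'carry', head=3))  # 1011 + 1 = 1100
```

Everything a real computer does can, slowly and painfully, be phrased as a transition table like this; that is the sense in which the machine is a least common denominator.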

Interestingly, however, though we tend to think of it as sort of the platonic form of a computational model, the Turing machine was not the first universal model of computation ever invented. The first was in fact invented without anyone having fully realized its significance, by Alan Turing’s doctoral advisor, Alonzo Church. Church had, years before Turing’s paper, defined the “lambda calculus”, a sort of way of defining trees of functions that call other functions, equipped with a rule for simplifying these trees as if the function’s value were being evaluated. (The function’s value turns out to be another tree of functions that call other functions.) The lambda calculus is if anything even simpler than the Turing machine, giving it a valid claim to being the more fundamental of the two; and it has a certain strange beauty, in that “programs” written in it build all the normal accouterments of programming out of nothing but functions. (Numbers, for example, are encoded by chaining functions together in a way that they “count” themselves off when evaluated.) Unfortunately the lambda calculus is completely alien to anything encountered in human experience, and is infamously difficult to understand; so alien, in fact, that the proof that the lambda calculus is exactly as powerful as the Turing machine was supplied not by Church but by Turing himself, in a follow-up paper in 1937. Because it failed to gain historical precedence and because it is neither as easy to grasp nor as close in resemblance to a normal computer as the Turing machine, the lambda calculus is today largely unknown, although in an odd twist a rephrasing of the lambda calculus– combinatory logic– is still used today deep in the guts of the virtual machines of certain so-called “functional programming” languages, as a sort of abstract “machine code” the virtual machine evaluates.
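To give a taste of the numbers-out-of-nothing-but-functions trick: the standard “Church numeral” encoding can be transcribed directly into any language with first-class functions. A sketch in Python (Python only supplies the syntax here; the definitions themselves are plain lambda calculus):

```python
# A Church numeral n is a function that takes f and x and applies f to
# x exactly n times; the number is encoded purely in the repetition.
ZERO  = lambda f: lambda x: x
SUCC  = lambda n: lambda f: lambda x: f(n(f)(x))         # apply f once more
PLUS  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
TIMES = lambda m: lambda n: lambda f: m(n(f))            # m repetitions of n

def to_int(n):
    # Recover an ordinary integer by having the numeral "count itself
    # off": apply the increment function to 0, n times.
    return n(lambda k: k + 1)(0)

THREE = SUCC(SUCC(SUCC(ZERO)))
print(to_int(TIMES(THREE)(THREE)))  # 9
```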

The Lambda calculus looks like this:

((λ (a b c d e f g h i j k l) ((λ (m n o p q r) ((λ (s t u) ((λ (v w x) (λ (y) (λ (z) (i (v z) z (n z (f c) (λ (aa ab) (j (k aa) ab)) l (t q (x (w y z) (s q (((f c) l) z))))))))) (λ (v) (k ((o l) (s b ((o (e l)) v))))) (λ (v w) (u (g p e) (s e v) (s e ((c (c (d (f l)))) w)))) ((λ (v w) (λ (x y) (u (g p q) (w y) (v x)))) ((λ (v w x y z) (λ (aa) (z (g p q) (y (g p q) (x v p aa)) (x w d (((c o) l) aa))))) (λ (v) (r (((c l) v) a) ((((o c) l) v) a))) (λ (v) ((λ (w) (r (w a) ((λ (x) (r (x a) (r ((x b) a) (((o l) x) a)))) ((p l) w)))) ((o (d l)) v))) ((λ (v w) (λ (x y z) (m (g q p) ((q (p (w x))) (v y z)) b))) (λ (v w) (m v (m v w b) (j b ((v l) w)))) (λ (v) (λ (w) (j (v w) w)))) (λ (v w) (n w v (λ (x y) (j ((k x) a) y)) l b)) ((λ (v) (λ (w x y) (n (j a (j x y)) w (λ (z aa) (j (((k z) k) (((k (k (l z))) k) (((k (l (l z))) k) a))) aa)) (λ (z) (j ((i (k z) g v) (k (k (l z))) (k (l (l z)))) (j (l (k (l z))) (l (l (l z)))))) b))) (λ (v w) ((v a) w)))) ((λ (v w) (λ (x) (v q (w q (v q x))))) (λ (v w) (n w v (λ (x x) ((λ (y z aa) (y aa aa x (y aa z (l (l (l (l x)))) x))) (λ (y z aa aa) (j (k aa) (j (k (l aa)) (j (r (k (l (l aa))) (y (k aa) (k (l aa)))) (j (r (k (l (l (l aa)))) (z (k aa) (k (l aa)))) aa))))) (λ (y z) (((y a) z) a)) (λ (y z) (y (z a))))) (p l) b)) (λ (v w) (n w v (λ (x x) ((λ (y) (j ((((x b) b) b) a) (j (((y b) b) a) (j ((y b) a) (j ((x b) a) (j (((x b) b) a) (j ((((y b) b) b) a) (j (x a) (j (y a) x))))))))) ((((x b) b) b) b))) (p l) b)))))) ((λ (s) (λ (t u) (n u t (λ (v w) (s (k v) w)) l b))) ((λ (s) (λ (t u) (n (s t) p (λ (v w) (j (k v) w)) (λ (v) (s (l v))) u))) (λ (s) ((s (λ (t) (j ((t a) a) (h (t b) (t a))))) (λ (t) a))))) ((λ (s) (λ (t u) (s t (c o) (s (g c t) o (s (g o t) c u))))) (λ (s t u) (n u s (λ (v w) (j (h (k v) (g t (k (l v)))) w)) (c l) b))) (λ (s t u) (n (j t u) s (λ (v w) (j (r (k (k v)) (k (l v))) w)) (λ (v) (j (l (k v)) (l (l v)))) b)))) (λ (m n o) ((λ (p) (((m p) (j n o)) b)) (λ (p) (j ((p a) b) (j ((p a) a) (p b)))))) (λ (m n o p q) ((k ((n 
(λ (r) (j (λ (s) ((k r) (o (l r) s))) (p (l r))))) (j (λ (r) r) m))) q)) (c c) (d c) (g (f c) (g d e)) (λ (m n) ((m n) ((n m) a))))) (λ (a) (λ (b) b)) (λ (a) a) (λ (a) (λ (b) (a (a b)))) (λ (a) (λ (b) (a (a (a b))))) (λ (a) (λ (b) (a (a (a (a (a b))))))) (λ (a) (λ (b) (a (a (a (a (a (a (a b))))))))) (λ (a b) (λ (c) (a (b c)))) (λ (a b) (λ (c) (λ (d) ((b c) ((a c) d))))) (λ (a b c) ((a (λ (d) c)) b)) (λ (a b) (λ (c) ((c (λ (d) b)) a))) (λ (a) (a (λ (b) (λ (c) c)))) (λ (a) (a (λ (b) b))))

Dear Congress: Beyond Einstein, Wtf

Monday, April 23rd, 2007

This is a copy of a letter I wrote to various congressthings. I reproduce it here because I didn’t write a blog post this weekend, and because you probably won’t be able to tell the difference between this and a blog post anyway. Enjoy.

* * * * *

Dear [Whoever],

I am writing about the recent effective dismantling of NASA’s Beyond Einstein program, and in general the massive slashing of science research within NASA’s budget in the last few years. This is an issue which has received almost no popular attention, and which I suspect Congress has not had the chance to seriously consider. However, I feel that preserving these science programs, and Beyond Einstein in particular, is in the long term of great importance to America.

Beyond Einstein[1] is an umbrella program by which NASA is performing a series of flight experiments to gather information about four cosmological phenomena at the limit of human understanding: black holes, gravitational waves, dark energy, and cosmic inflation. All four of these things are an accepted part of modern science, but their exact workings are very poorly understood. Data about their operation would be invaluable to physicists, who must develop theories to explain these things even though they are astronomical phenomena and cannot be observed in a lab.

Unfortunately, though, NASA has in the last few years been de-emphasizing science in its funding, and one of the many worthy programs that may not survive this shift in priorities is Beyond Einstein. The following is a quote from Steinn Sigurðsson, a physicist who provides a stark account[2] of an NRC meeting where the future direction of the Beyond Einstein missions was discussed:

At the request of the DoE, the NRC is doing a priority ranking, a funding wedge is opening in 2009, one mission can get a startup, the others, not so much.

Realistically, a second mission will probably be named as a ranked priority, then the rest will get bounced to the decadal survey that will start in a couple of years, and we start all over again. If there is any funding for new starts again (we’re looking at maybe 2022-2025 at current budget profiles).

No one is going to win this, only lose. It should never have come to this.

The stakes are high; literally thousands of scientists are looking at the core science activity they have chosen to work in being annihilated for 10-20 years, a lot of junior people could be dumped from science, a lot of senior people could look at having the field they worked to build being shut down.

It is an indescribabl[e] waste, for what is a surprisingly small amount of money on the scale of the US economy. The funding gap that is squeezing the Beyond Einstein Program out of existence is about 2-300 million dollars per year – that, over ~ 15 years is what it would take to do 3-5 of the missions in quick overlapping succession.

I would like to call attention to something that may not be obvious in this quote. Because physics experiments, like a NASA probe or a particle collider, are such concrete things, it is easy to lose sight of exactly how much accumulated effort goes into them. Canceling or going forward with an experiment like this seems like a simple decision: either you build it and you have one, or you don’t. It is easy to forget that the experiment is not just a manufactured physical object, but an entire section of the scientific community who are working on the experiment and dependent on it going forward. Beyond the people designing the probe, each of these experiments inevitably leads to years of papers and advancements just from analyzing the collected results; canceling the experiment means shutting all of that down.

Meanwhile, going forward with just one of the projects is not much of a compromise, since Beyond Einstein is a survey, not a single experiment; some of the Beyond Einstein experiments are on drastically different subjects, and so deciding to perform only one of the experiments means telling physical science it will be given the opportunity to move forward on some subjects, but not others.

It is easy to shrug off the sciences, especially edge science like Beyond Einstein represents, as optional or inessential. However, foundational scientific research like this is, in the long term, what drives real scientific and technological progress. Since Beyond Einstein covers the least well-understood aspects of physics, it has a real opportunity to spark serious advancements in scientific theory. By limiting its scope, we risk missing that opportunity.

It is my hope that when the new budget comes up for consideration over the next few months, the Congress will act to ensure the Beyond Einstein program receives adequate funding to complete its mission. The current NASA budget request as I understand it does not serve this goal, instead choosing to focus its funding efforts largely on new manned spaceflight programs. While expanding manned space exploration is a worthy goal for NASA, this goal should not be pursued at the expense of NASA’s ability to do science.

Thank you for your efforts,

[mcc]

1. http://scienceandreason.blogspot.com/2006/11/beyond-einstein.html
2. http://scienceblogs.com/catdynamics/2007/04/beyond_einstein_iv_showdown_in.php

Pretty pictures

Wednesday, April 11th, 2007

So last week I made a little custom variation on an old computer program called the “Game of Life”, with the hope that my version could be used to demonstrate neat things about quantum physics. As chronicled in my last blog post, that basically didn’t work at all. However, despite not working the way I’d hoped, the program did incidentally happen to burp out some very pretty looking animations. This week I figured I’d say screw the quantum physics stuff, and instead just explore what kinds of interesting pictures I could tease out of the program.

Before I go on, a couple of things to note:

  1. There was a mistake in my description of Life last week; any time I made a reference to the number of “neighbors” a cell had, I was counting the cell itself toward the “neighbor” total. The program behaves the same way however you do the counting, but this means some of the numbers from my description last week aren’t the same as the ones this week.
  2. Because some of the images in this week’s post are kind of large, I posted all the images as the non-animated first frame, and you have to click an image to see the animated version. I assure you that scanning through here and clicking through to the linked pictures is probably more worthwhile than actually reading the text.

So, what’s this about quantum physics?

I’m still trying to figure this out, but this is what I’ve got so far: Quantum physics is a special, very weird way of looking at the universe where things aren’t ever one particular way, but are probably one of a bunch of different kinds of ways. If a particle is heading down a path and reaches some kind of fork where it could equally likely go either one way or the other, it possibly goes both ways at once, so we say it’s 50% at the end of one of the paths and 50% at the end of the other. It stays in this noncommittal state until some kind of interaction with the environment happens in a way that requires it to actually be in either one place or the other, at which point the particle randomly picks one or the other and materializes 100% at the end of that path as if it had been heading that way all along. As soon as this “wavefunction collapse” style interaction is over and the environment is looking the other way, the particle smears back out into possibilities again and starts again taking all the different paths it could at once, but now only those paths that start from the point at which its probabilities last collapsed. The splits in the paths it takes usually aren’t as simple as 50% one way, 50% the other– usually it goes in absolutely all directions at once, and its position is described by something called a “distribution function” that describes all the different things that particle could be doing at some given moment, and how likely each possibility is compared to the others.
Because of something I don’t understand called the “Schrödinger equation”, the way these particles probabilistically spread out in every direction at once means they act like waves, which is where the “wave/particle duality” thing you hear about sometimes comes from– particles spread out as waves of probability until they encounter something that needs them to be a particle, at which point they switch to being particles, and after that they start spreading out as probability waves again. (We as humans don’t ever experience things that have this probably-there quantum behavior, like cats that are simultaneously alive and dead– the things that have quantum behavior are all very small, and things like cats or pennies or humans are very large, so by the time you start getting to interactions big enough that they can comprise our everyday experience, all the little quantum things have been overwhelmingly spooked into acting like particles as far as we can tell.)

From a math person’s perspective, this whole thing about distribution functions that collapse into certainties upon measurement all just looks like a very contrived way of describing a system where you can’t make exact predictions about some of the things that happen and instead have to describe the probabilities. There’s a weird twist, though, because it’s more than that. When we talk about a particle being 50% in one place and 50% in another, we don’t just mean that there’s a 50% probability, if somebody barged in and collapsed the wavefunction, that the particle would turn out to have been there. The particle actually is there, in a 50% sort of way, and so all the different probably-there versions of the particle engage in something called “self-interference”, which means the different possible particles can interact with each other. If the particle goes 50% down one path and 50% down another, and then the two paths suddenly converge back again, the two 50% particles can actually smash into each other (destructively interfering with each other the same way waves do) and destroy each other at that point. All this weird behavior is kind of odd and interesting, and seems like it would be kind of fun to just play around and experiment with. Playing around with quantum mechanical objects, of course, requires either enormous and expensive test equipment that doesn’t do what you want anyway because of the Heisenberg Uncertainty Principle, or simulating everything with really really difficult math. The second of these sounds more attractive, and the fact the math is really hard probably isn’t such a problem; maybe one could make some kind of simulation program where the computer does all the math and you just mess around with stuff on the screen acting quantumy.
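The self-interference trick can be sketched numerically: what adds when two paths reconverge isn’t the probabilities but complex “amplitudes” whose squared magnitudes are the probabilities, so two 50% paths can sum to zero. Here is a toy calculation, loosely modeled on a beam-splitter arrangement rather than on any specific real experiment:

```python
import math

# A beam splitter sends amplitude 1/sqrt(2) down each arm; by
# convention a reflection also multiplies the amplitude by i (a 90
# degree phase shift) while a transmission leaves the phase alone.
r = 1j / math.sqrt(2)   # reflect
t = 1  / math.sqrt(2)   # transmit

# Recombine the two arms at a second beam splitter.  Each output port
# sums the amplitudes of the two paths that reach it.
amp_a = t * t + r * r   # transmit-transmit plus reflect-reflect
amp_b = t * r + r * t   # transmit-reflect plus reflect-transmit

# Probabilities are squared magnitudes: the two half-amplitude paths
# cancel exactly at port A and reinforce at port B.
print(abs(amp_a) ** 2, abs(amp_b) ** 2)
```

Each individual path has probability 1/2 of being taken, yet the particle never comes out of port A: the two 50% versions of the particle destroy each other there, exactly the wave-style destructive interference described above.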

If you were going to do that, though, the obvious, particle-based stuff that quantum physicists spend most of their time working with doesn’t seem like the best place to start. Actual quantum physicists are mostly interested in hopelessly dry and practical things, like the behavior of electrons. There’s a good reason for that, but this sort of thing probably wouldn’t make for a very interesting or attention-capturing experience for someone just wanting to play around with something on a computer. I for one cannot say that electrons have much applicability or relevance to my daily life.

Given this, I was curious what else a Quantum Something simulation on a computer could be based around besides just particles. And I was thinking a good place to start would be cellular automata, like the Game of Life, because at least in the classical version they’re simple enough to be played with by clicking around at random but have enough hidden depth to make little machines out of; and also because they’re readily adaptable to acting like little nodes on a funny-shaped grid, which is incidentally the way that this “quantized geometry” thing that some quantum gravity people are moving toward lately looks from the perspective of a computer. A quantum Game of Life seemed like an interesting idea, so for my blog post last week I made a little animated GIF generator that ran the Game of Life, with the addition of “probable” cells which had a certain probability of either being there or not being there (with higher or lower probabilities represented by darker or lighter shades of gray), in hopes of simulating or at least mimicking quantum behavior. As I said before, this didn’t really work.
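The actual program was a pile of Perl, but the update rule it implements can be sketched in Python. The key assumption (which is also the crude part) is that each cell’s probability is tracked independently, so the next-generation value is the total probability of all neighborhood configurations in which the 23/3 rule leaves the cell alive:

```python
from itertools import product

def prob_life_step(grid):
    """One generation of "probabilistic Life".  grid[y][x] is the
    probability that cell is alive; cells are treated as independent,
    and we sum the probability of every one of the 256 neighborhood
    configurations in which Conway's 23/3 rule keeps the cell alive.
    The board wraps around at the edges."""
    h, w = len(grid), len(grid[0])
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ps = [grid[(y + dy) % h][(x + dx) % w] for dy, dx in offsets]
            alive = 0.0
            for combo in product((0, 1), repeat=8):
                weight = 1.0                    # probability of this combo
                for bit, p in zip(combo, ps):
                    weight *= p if bit else (1 - p)
                n = sum(combo)
                # survive with 2 or 3 live neighbors, spawn with exactly 3
                alive += weight * (grid[y][x] * (n in (2, 3))
                                   + (1 - grid[y][x]) * (n == 3))
            out[y][x] = alive
    return out
```

Feed it a board of 0.0/1.0 cells and it reproduces ordinary Life exactly; introduce a single 0.5 cell and its whole neighborhood goes gray in one step.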

Why not?

I’m not sure what I was expecting to see when I turned the thing on, but I think I kind of hoped that the patterns would act kind of like the maybe-there maybe-not particles in quantum physics– like, I perhaps expected that one way or another you would be able to make little partly-there patterns, like maybe a little 50% glider or something, and it would wiggle off in a 50% way until it collided with another glider, at which point depending on what hit what and how probable it all was maybe they’d destroy each other or merge into one 100%-there object or maybe even spin off several different overlaid collision patterns at the same time representing different probable results that the collision could have produced. In retrospect, it’s pretty obvious why it was completely implausible to expect behavior like this.

Patterns in “quantum Life” did in fact turn out to exhibit something like self-interaction. Unfortunately, they exhibited much too much of it. It isn’t just that patterns in Life interact with each other; every single cell in Life interacts with every single cell neighboring it, and one cell more or less worth of neighbors tends to make the difference between life and death in the next generation– the survival conditions in Life are extremely narrow. These properties– every cell interacts with every other, and you only get what you want under narrow conditions– are in fact what make classical Life patterns capable of such complexity and thus interesting to study in the first place. In the quantum version of Life, though, these properties mean that if you introduce just one “gray” (probable rather than certain) pixel into the quantum Life board, then in the next generation every cell that pixel was touching will also be some shade of gray. And when you start getting a few grayish pixels next to each other, everything just kind of falls apart– fill an area in this conception of quantum Life with gray, and on every turn you’re basically trying every single combination of living and dead cells possible in that space, with some combinations maybe being more likely than others. Since only a narrow range of configurations allows life to survive in Life, this means that each gray pixel tends to become less and less likely to survive with each frame, and gray areas very quickly fade to apparent white:

Interestingly, though, they don’t fade directly to white– they first try to stabilize at a specific gray value around a probability of 35%, with the 35% gray areas spreading to swallow up all the black pixels. If this 35% gray color manages to spread to cover the entire board, with each pixel being about as likely as any other, it just stays that way forever. If any areas that are at (or have stabilized near) 0% white exist on the board at all, however, the white eats away at any gray areas (or at least those that aren’t actively eating up new 100% black pixels) until nothing appears to be left. In practice this looks to me kind of like a zombie invasion spreading to eat all the Life and then dying out for lack of food:

Which is kind of amusing, but makes it really hard to do anything with the gray cells. In normal Life, where things are always either dead or alive instead of just probabilistic, you can build little Life machines by taking advantage of structure in the regular ways the cells interact with each other. In the quantum Life game, though, any structure is immediately destroyed as soon as it comes into contact with a gray pixel. Pattern-building in Life is based around individual pixels interacting with each other; but in quantum Life individual pixels lose their identity, and instead just smear out into rapidly disappearing gray blobs. This is about as far as I got by the end of my last blog post about this.

This week, my thought was: Before I give up on this completely, maybe I can at least get the blobs to do something interesting. If the tendency for everything to turn into blobs means I can’t build structures out of individual pixels, maybe I can at least build structures out of the blobs. Before I could do that, though, I first had to do something about the whole fading-to-white problem, since few if any of my patterns survived long enough to analyze them in an interesting way. I came up with two ways of getting around this problem. The first was just kind of silly, but I was curious what it would do so I tried it. The second thing I tried actually worked quite well.

As far as the first thing I tried goes, here’s the thing: Everything fades to white in these animated GIFs, but that’s partly just a result of the way they’re rendered. After the gray fades away, there’s still stuff going on in the apparently blank areas; it’s just that anything less than 0.0015% probability winds up being rounded down to 0% (white) when colors are picked, so you can’t see them in the GIFs. Likewise, even when everything appears to have stabilized at 35% gray, some areas will be closer to that stable gray point than others. I kinda wanted to see what this all looks like, so I changed the program so that it “normalized” each frame before it drew it– that is, instead of black being 100% and white being 0%, it redefined black and white as just the highest and lowest probability values visible on that frame. This lets us watch exactly what’s going on while various things are fading out:
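The normalization itself is trivial; here is a sketch in Python of what the Perl version does to each frame before drawing it:

```python
def normalize(frame):
    """Stretch a frame's probabilities so the smallest value present
    maps to 0.0 (white) and the largest to 1.0 (black), making
    otherwise invisibly faint variation visible."""
    lo = min(min(row) for row in frame)
    hi = max(max(row) for row in frame)
    span = (hi - lo) or 1.0   # avoid dividing by zero on a flat frame
    return [[(v - lo) / span for v in row] for row in frame]
```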

If you’re wondering what happens at the end of those animations there, that’s what happens when the probabilities at each pixel get so low that Perl can no longer accurately represent them. Perl only uses 64 bits to store each floating point number; this is normally way more than you need, but when you start trying to fit really, really low numbers into that space, like around 2^-1022, you start losing precision and eventually lose the ability to tell the difference between any given number and zero entirely. With the kind of math the quantum Life program does, you get into this area pretty quickly, and with the normalization turned on the rounding errors that result become very clearly visible. Oddly, though, the rounding errors turn out to look a lot more interesting than the quantum Life program does when it’s operating normally. I may try to experiment with that more later, to see if I can replicate that behavior on purpose (hopefully this time by just rounding things instead of actually breaking the Perl interpreter).
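Perl’s floats are ordinary 64-bit IEEE doubles, so the precision cliff is easy to poke at from any language; a quick demonstration in Python:

```python
import sys

# 64-bit doubles keep full (53-bit) precision down to about 2**-1022.
# Below that they enter the "subnormal" range and gradually shed
# precision, until around 2**-1074 everything rounds to zero.
tiny = 2.0 ** -1022
assert tiny == sys.float_info.min     # the threshold where trouble starts
assert tiny * 0.5 != 0.0              # subnormal, but still nonzero...
assert (2.0 ** -1074) * 0.5 == 0.0    # ...and then it all rounds to zero
```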

After moving on from the normalization thing, I had a bit more luck with the next thing I tried. That was this: okay, so Life tends to turn into disappearing gray blobs when you add probability to it. Life isn’t the only standard cellular automaton, though. Why not just try another one?

The set of rules Conway’s Life uses is really a bit arbitrary, and there’s a lot of variations on those rules out there. Life, because it’s the best known and because it behaves in a way that exhibits a relatively nice balance, is the variation you generally hear about, but there’s nothing particularly special about it; to someone exploring the alternate rules, Conway’s Life becomes just the rule with serial number 23/3 (because life survives if it has two or three neighbors, and spawns if it has exactly three neighbors). Beyond this there are in fact 262,144 different 2D cellular automata rules of the same type (that is, where the rules are unchanged besides the survive/spawn numbers; each of the nine possible neighbor counts can independently be in or out of each of the two lists, for 2^18 combinations), and they all act drastically differently. Some of them are sane models of computation, like Conway’s Life, where you can build things out of patterns of pixels. Some of them inevitably descend into seething masses of chaos. Some of them just result in the screen endlessly blinking, or everything you draw exploding, or even stranger things happening. If you want to be really surprised, try loading up this Flash version of Life, entering rule 1234/3 (which you’ll note is not all that different from the Conway’s Life rule), drawing a little pool of pixels, and hitting “run”.
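All of these survive/spawn variants are instances of one parameterized step function; a minimal sketch in Python (binary states, wrapping at the edges the way my implementation does):

```python
def ca_step(grid, survive, spawn):
    """One generation of a Life-family rule: a live cell survives if
    its live-neighbor count is in `survive`, and a dead cell comes
    alive if the count is in `spawn`."""
    h, w = len(grid), len(grid[0])

    def neighbors(y, x):
        # Count live cells in the eight surrounding positions, with
        # the board wrapping around at the edges.
        return sum(grid[(y + dy) % h][(x + dx) % w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))

    return [[1 if neighbors(y, x) in (survive if grid[y][x] else spawn)
             else 0
             for x in range(w)]
            for y in range(h)]
```

Conway’s Life is then `ca_step(grid, {2, 3}, {3})`, and rule 1234/3 is `ca_step(grid, {1, 2, 3, 4}, {3})`.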

And 262,144 is of course just the number of variations you get by varying the survive/spawn numbers; you can get even more elaborate behaviors by introducing more complicated rules, for example by introducing more states besides just “alive” and “dead” or by allowing survival to be based on pixels further away than immediate neighbors. There’s one famous CA variant called Wireworld that is a lot like Life except for having four colors instead of two, and which can actually be used to present fairly realistic-looking simulations of electrical circuits.

(If you’re bored and want a quick little tour of a more complicated type of ruleset, go here, download the program or launch the java app, and do this: Choose “weighted life” in the first menu; choose “Conway–” from the second menu; hit “start”, and let the game run until the number of dots remaining on the screen is fairly small, then hit “stop”; choose “Border” from the second menu; hit “start”, and let the game run until the screen looks like television static; hit “stop” and choose “Bricks” from the second menu; hit “start” and let the game run until you get bored of what you’re seeing; hit “stop” and choose “Career” from the second menu; then hit “start”… each of these different options in the second menu is a relatively simple Life-like rule, with the only twist being that in these rules the direction your neighbors are located in makes a difference when counting. Even slight differences in what you do after you’ve counted these neighbors results in drastically different behavior.)

So, with all these cellular automata variations out there, is there an existing ruleset that fixes my blob problem? It turns out, yes. There’s a rule called “Day & Night”, a vanilla Life-alike with the rule 34678/3678 (you can try that in either the Java or Flash Life implementations above).

This is what happens when you try to feed a Conway’s Life glider into the Day & Night ruleset.

This rule has a lot of similarities to Life from a pattern-designer’s perspective; it has simple gliders and spaceships and oscillators and such, although they look nothing like the ones in Life. However, Day & Night also has one very odd and interesting feature, which is that under the Day & Night rule, white patterns on a black background and black patterns on a white background behave exactly the same. You can randomly invert the entire board, and it will not change the way the patterns act one bit. More interestingly, you can actually have entirely separate black and white areas to the board, with little Life patterns swimming around inside:
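That black/white symmetry is easy to verify mechanically: stepping the inverse of a board under 34678/3678 gives exactly the inverse of stepping the board itself. A quick sanity check, using a generic survive/spawn stepper and a random board:

```python
import random

def ca_step(grid, survive, spawn):
    # Generic Life-family step with wraparound edges.
    h, w = len(grid), len(grid[0])
    def n(y, x):
        return sum(grid[(y + dy) % h][(x + dx) % w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))
    return [[1 if n(y, x) in (survive if grid[y][x] else spawn) else 0
             for x in range(w)] for y in range(h)]

def invert(grid):
    return [[1 - c for c in row] for row in grid]

# Day & Night: survive on 3,4,6,7,8 neighbors; spawn on 3,6,7,8.
S, B = {3, 4, 6, 7, 8}, {3, 6, 7, 8}

random.seed(1)
board = [[random.randint(0, 1) for _ in range(12)] for _ in range(12)]
assert ca_step(invert(board), S, B) == invert(ca_step(board, S, B))
```

The assertion holds for any board at all, not just this one: inverting the board turns a cell with k live neighbors into one with 8 - k, and the survive and spawn lists are arranged so the rule commutes with that swap.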

This pattern seemed almost tailor-made to my problem: all my quantum Life patterns kept eventually stabilizing into invisible, boring white, but in Day & Night both white and black are stable. So what happens when I try to run a quantum Day & Night?

Well, this:

Okay, so this is a bit more interesting than Life was. Instead of the probabilistic parts just fading out of existence, the dark regions stabilize to black, the light regions stabilize to white, and the stuff in between stabilizes to some kind of midlevel gray. Things don’t stop there, but what they do next is kind of interesting to watch: The gray areas (although they never seem to go away after any amount of time) shrink until they become just a thin membrane between day and night, and the borders twist and curve according to some internal logic I haven’t quite figured out yet– I think the borders seek whatever position minimizes surface area, so to speak. (It’s kind of interesting to see the little bubble in the second image above slowly seeking air…)

Like in quantum Life, any single gray pixel introduced into an image spreads like a cancer over the entire board, but instead of this being an effectively destructive process, the “zombie” regions form into interesting patterns along the outlines of the black and white regions they follow, and the zombified regions continue evolving, just not according to exactly the same set of rules. The following board has one gray pixel buried in the upper left corner, and when Day & Night runs on it:

Something interesting you might notice about that one is that since the zombification process mostly acts on the borders of the day and night regions while leaving the interiors mostly solid, you can have little bitty Day & Night patterns embedded inside of the big gray blobs that just float there indefinitely doing their own thing (they’re still visible in frame 1000). In fact, it’s possible for there to be blocks which are zombie-ized on one side but continue to follow normal Day & Night rules on the other. Look at this one carefully:

Now that just looks cool. That last animation is seriously my favorite thing out of anything that’s come out of this silly little project so far. (Incidentally, if you want to see that a little more clearly, it may be worth it to try the YTMND version. Something not quite obvious in the images the way I’m posting them here is that since my implementation of Life wraps around the board at the edges, all of these images tile fantastically well.)

The fact that Day & Night supports stable areas of both colors means that I can do more elaborate things when setting up quantum Day & Night patterns. For example, something I wanted to do with quantum Life, but couldn’t really because everything always just disappeared, was set up patterns made from photos. Yes, both of these are starting patterns for animated GIFs:
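(Setting one of these up is just a matter of mapping pixel brightness onto probability-of-being-alive. A sketch, assuming 8-bit grayscale input, not necessarily the exact scaling my code uses:)

```python
def photo_to_probabilities(pixels):
    """Turn a grid of 8-bit grayscale values (0 = black, 255 = white)
    into a grid of per-cell probabilities of being alive (white)."""
    return [[value / 255.0 for value in row] for row in pixels]
```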

Whee! Just for reference, here’s what happens if I round those last two off to solid black and white, so the quantum-ness goes away:

Finally, a couple more odd-looking results I got more or less by accident while playing around:

The one on the right does what it does because of a bug.

So, at this point I’m feeling a lot better about the possibility of actually doing something interesting with this probabilistic/quantum Life idea, now that I’ve seen it’s possible to do anything. The behavior of the white vs black blobs in this one still way favors entropy over structure– once the borders between white and black have been staked out they seem to do some kind of funny seeking-minimum-surface-area thing, and once they’ve done that (at least insofar as experimenting with firing spaceships at them seems to indicate) it doesn’t seem to be possible to move them back. You can of course still have normal Day & Night patterns happening inside the blob, but you could do that anyway; the quantum-ness doesn’t really add anything. Still, the Day & Night stuff at least works well enough that it hints you could modify the rules so that the blobs were induced to interact with each other in some more interesting way. After all, I’ve still not even toyed with most of the different kinds of rule variations for cellular automata, and there are even a couple of interesting options that may be worth exploring that only exist because of the probability thing. Meanwhile, there’s still a lot I could do here to work in math that’s actively taken from quantum physics, rather than just mimicking quantum-ness crudely. As things are, I’m not sure my conception of self-interaction is quite like that of “real” quantum physics, and I’m wondering if there’s some way I could work in a more explicit notion of quantum superposition (the way I do things now, basically each pixel is its own basis state). To be honest, I’m not really sure that deserves the title “quantum Life” rather than just “probabilistic Life”.
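(If you’re wondering what the probabilistic rule means mechanically: the cleanest way I know to describe it is that each cell holds a probability of being alive, cells are treated as independent, and the update computes the exact probability each cell is alive next generation by summing over every possible configuration of its eight neighbors. This sketch is that idea in Python– one natural way to define it, not necessarily exactly what my perl code does:)

```python
from itertools import product

BIRTH = {3, 6, 7, 8}       # Day & Night; use {3} and {2, 3} for ordinary Life
SURVIVE = {3, 4, 6, 7, 8}

def prob_step(grid):
    """One generation of the probabilistic rule. grid is a list of lists
    of P(alive) values in [0, 1]; the board wraps at the edges."""
    h, w = len(grid), len(grid[0])
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)]
    nxt = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            p_self = grid[y][x]
            p_nbrs = [grid[(y + dy) % h][(x + dx) % w] for dy, dx in offsets]
            p_next = 0.0
            # Sum over all 2^8 possible live/dead neighbor configurations,
            # weighting each by its probability under independence.
            for bits in product((0, 1), repeat=8):
                weight = 1.0
                for bit, p in zip(bits, p_nbrs):
                    weight *= p if bit else (1.0 - p)
                n = sum(bits)
                if n in SURVIVE:
                    p_next += weight * p_self          # alive now, survives
                if n in BIRTH:
                    p_next += weight * (1.0 - p_self)  # dead now, born
            nxt[y][x] = p_next
    return nxt
```

(On a board of pure 0.0s and 1.0s this reduces exactly to ordinary Day & Night; the gray only shows up once some cell is strictly between 0 and 1, and the sum makes it obvious why a single gray cell contaminates everything around it– it gives every adjacent cell nonzero weight on more than one configuration.)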

I’m not sure to what degree I’m going to try to explore all that, though. In the meantime, I’m more curious about going back and starting to explore the original idea that had me thinking about quantum cellular automata in the first place: specifically, the idea of making a cellular automaton that runs on, instead of a grid of pixels, the nodes of some kind of graph. And the reason for doing this would be that the graph would be not just a graph, but actually an instance of some kind of quantized geometry, like the spin networks in loop quantum gravity. What would that mean… well, I don’t have any idea what it would mean. That’s why I want to do it and find out.
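(I have no idea yet what the right rule on a graph would even be, but just to make the shape of the idea concrete: a cellular automaton on a graph only needs an adjacency structure, not coordinates. A trivial, purely hypothetical sketch, using ordinary Life’s birth/survival counts on whatever neighbors a node happens to have:)

```python
def graph_step(adjacency, state):
    """One generation of a Life-like rule on an arbitrary graph.
    adjacency maps each node to a list of neighboring nodes;
    state maps each node to 0 (dead) or 1 (alive)."""
    nxt = {}
    for node, neighbors in adjacency.items():
        n = sum(state[nb] for nb in neighbors)
        if state[node]:
            nxt[node] = 1 if n in (2, 3) else 0   # survival on 2 or 3
        else:
            nxt[node] = 1 if n == 3 else 0        # birth on 3
    return nxt
```

(A square grid is then just the special case where every node has exactly eight neighbors; on an irregular graph the counts would presumably want rescaling to each node’s degree, which is exactly the kind of question I’d be experimenting with.)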

If I’m going to do that, though, I first want to stop and deal with something else, which is the way I’ve been making these little animations. Everything above was generated by a little perl module; I load the module up with starting states, and it spits out animated GIFs. This has more or less worked fine so far. There isn’t really any better way of posting Life animations than a GIF, since every pixel is crucial and so this stuff doesn’t compress very well; the chief downside to making them this way is that I generally have to craft the patterns by sticking them into a program and running it to see what comes out rather than doing anything interactive, but that doesn’t matter so much since the program with the quantum behavior turned on runs much too slowly to be interactive anyway (although that may only be because I wrote it inefficiently– I haven’t even looked into optimizing it).

If I go further with this series, though, the next batch of stuff I do is going to be all weird diagrams of jumbled lines, like those old photos of spiderwebs spun by spiders on drugs. This kind of stuff compresses very badly as GIFs, compresses well as other things, and will benefit from some level of interactivity. I’m basically only telling you this so that I can link the last thing I discovered this weekend and two more pointless little animations:

OMG HAX

So it turns out that at some point while I wasn’t paying attention, these people made a free actionscript compiler called MTASC, and these other people made a free swf linker kind of thing named swfmill. You can use these to make Macromedia Flash movies without actually having to own or use Macromedia Flash. Or you can just say shove both of these, and use HAXE, an odd but elegant little ECMAScript derivative with strong type inference and other pleasant features that take away the dirty feeling that writing Javascript normally leaves you with. Haxe can be “compiled” either to vanilla Javascript, or directly into SWF files. The reason all of this matters is that one can use Haxe or MTASC to make Flash movies without having to draw a damn thing. Both of the following movies are totally generative; thanks to Haxe, I was able to just write a little half-page program that draws some lines and boxes and things, and the program runs in the Flash interpreter:

This is not particularly complicated or interesting stuff– this barely qualifies as anything better than “hello world”– but it works, the resulting flash files are tiny (the freezeframe GIF of the colored boxes animation above is actually larger than the animation itself), and the Haxe language seems expressive enough to scale to much more elaborate Flash programs.

I’ll see what I can do with all this in a future post.

By the way, the source code used to generate this week’s images is available here and here, same usage instructions as last time.

A Quantum Game of Life

Tuesday, April 3rd, 2007

I’ve had this blog for a while, but barely posted in it. I’m tired of this, and it’s strange because the reason I haven’t posted isn’t a lack of things to say; the problem is just plain not getting around to writing anything. So, an experiment: From now on, I’m going to start making a post here every Monday. Even if it’s not particularly coherent or long or I don’t have a specific topic and have to just rant out whatever is in my head, that’s fine. If the resulting posts turn out to be terrible, I’ll just stop doing them or bury them in a box deep beneath the earth or something. In the meantime, I’m just going to make an effort to post something here every week from here on out. So if you know me outside of this site, and a Monday comes and you realize I didn’t post anything here, find me and yell at me.

Sound good? All right, here goes.

So I’ve been reading all this stuff about physics lately. I haven’t been doing this for any particular reason. At least, not any good reason; the reason I originally started was that I’d been reading a bunch of stuff about evolutionary biology so that I could argue with creationists, but then arguing with creationists started getting kind of boring, so I started looking around for something else to read.

Once I got started, though, it turned out that this is actually a really interesting time to be following physics.

Fundamental physics has been kind of in a holding pattern for about the last thirty years or so. Somewhere in the mid-70s, every single force, particle and effect in physics except gravity was carefully fit together into a single unified theory called the Standard Model, and past that, physicists kind of got stuck. This isn’t to say nothing’s been done in the last thirty years, mind you– experimentally it has really been a productive time– but all the work that has been done has just served to validate the Standard Model, theoretical physics’ last great achievement, as correct. Nobody’s been able to move beyond or supersede the Standard Model. And physicists really, really want to supersede the Standard Model. To even the physicists that developed it, the Standard Model has always seemed like kind of a Rube Goldberg contraption; it has all these unexplained fiddly bits and extra pieces that don’t seem to do anything, and it’s not clear why any of the parts fit together the way they do. Scientists have a very specific and clear idea of how this big Rube Goldberg machine works; but they don’t know why it works, or rather, they don’t know why the machine is fit together the way it is.

Theoretical physicists are convinced there’s some kind of underlying order to this that we haven’t just figured out yet, and that all the funny complexities of the Standard Model are just emergent side-effects of something much more fundamental and simple going on below the surface. Theoretical physicists are also annoyed that they haven’t been able to figure out how gravity fits into all of this– in fact, they can’t figure out how to make gravity work together with any theory of quantum physics. So for the last thirty years theoretical physicists have been going to enormous lengths experimenting with grand unified theories and supersymmetric models and such, trying to come up with a better theory that explains the same things the Standard Model explains (and hopefully also gravity), but in a way that makes more sense. None of these attempts have so far worked. They’ve all turned out to not quite be able to predict the actual universe, or else have predicted the universe but with some weird quirk or other that doesn’t exist in reality, things like proton decay.

The longest-standing of these attempts at unification is something called String Theory, which has been worked on steadily for about twenty or thirty years now. String Theory isn’t really a theory– it’s more like a set of hypothetical theories that share certain characteristics. String Theory is really promising in theory and has gotten a lot of attention because it has the ability to describe a universe which has lots of particles and forces (which maybe include the kinds of particles and forces that the Standard Model contains) and which also has gravity, all based on nothing but simple vibrating stringlike objects following simple rules. But there are some problems, foremost among which is that in practice, nobody’s yet figured out how to use this to describe our universe. Every time people try to put together a string theory that might describe the things we see in our universe, it also describes lots of other things, things like lots of extra dimensions that have never been detected and enormous quantities of new kinds of particles that have never been seen. This isn’t exactly a fatal problem, because string theories are very flexible and so every time somebody finds a reason String Theory might not work, they can just modify the theory– adding ways of hiding the extra dimensions and particles, and then hiding the weird complexities they used to hide the extra dimensions, such that even though String Theory still predicts most of these things it predicts that they’d be hiding where we couldn’t find them. This has happened enough times by now that String Theory, which started with just some simple rules, has now gotten very complicated. It’s not clear whether this is a good or a bad thing. This might just mean String Theory is maturing, and so naturally picking up complexity as it moves toward completing an actual descriptive and usable theory.
Or it might mean we’re on the wrong track entirely with String Theory, and this burgeoning complexity means we’re taking an originally good idea and torturing it into increasingly unrecognizable shapes, adding more and more epicycles every time something doesn’t quite work right, in an attempt to twist the idea into fitting a universe it fundamentally does not describe. Either way, we’re still no more sure whether String Theory can ever actually work than we were two decades ago, String Theory is still more a toolbox of promising mathematical ideas than an actual working theory, and although String Theory gets most of the attention these days it’s still the case that nobody knows how to move past the Standard Model.

Anyway.

This has all been the state of things up until very recently, but right now, in early 2007, some interesting new developments are starting to crop up, and it’s starting to look like this holding pattern may be breaking sometime soon. These new developments are making physics very interesting to follow right now, since they mean we might see some dramatic and unexpected changes in physics maybe even in the next year or two.

The first of these developments, and probably the most directly exciting because it involves things blowing up, is the Large Hadron Collider, a 17-mile circular concrete tunnel buried underneath the border between France and Switzerland where, starting at the end of this year or beginning of the next, physicists will be smashing hydrogen atoms together to break them apart so they can look at the pieces. Experimental physics has kind of been in a pattern for the last fifty years where most of the progress is done by building a big machine that fires two particle beams at each other, then recording the little tiny explosions that happen when the beams collide and checking to see whether you know enough about physics to explain why those explosions looked the way they did. Once you’re confident, yes, you can explain the little explosions from your big machine, you build a bigger machine that does the same thing and repeat. The reason this repetitive mode of experimentation has come about is that the most fundamental physical phenomena are only really detectable at very high energies; in a simplified sense, the more “electron-volts” you have, the more interesting things happen. Every time a new particle accelerator is built that can hit a higher electron-volt threshold, new kinds of particles become visible. This is again a bit of a simplification, but particle physics has been in a lot of ways doing this kind of game of particle leapfrog for the last half-century, where the experimentalists will build a bigger particle accelerator and discover some new particles nobody’s seen, and then the theoreticians will have to come up with a theory to explain what those particles are, but then the new theory will predict some new particles nobody’s ever seen and the experimentalists will have to build a bigger accelerator to look for them.
There’s a bit of a problem with this system, though, which is that this pattern is strictly dependent on the regular production of these ever-more-expensive accelerators, each of which takes decades to plan and build and hundreds or thousands of people working together to design and operate. This is kind of a lot of eggs to put in one basket, so when the last big particle accelerator that was supposed to have been built– the Superconducting Supercollider, which was supposed to have been built in the 90s– got cancelled amidst massive mismanagement and governmental budget crunches, it kind of screwed the pattern up. (The cancellation of the supercollider has been kind of a factor in why theoretical physics has been in holding pattern mode lately, and also represented a real identity crisis for particle physicists, who for a long time during the Cold War basically had the ability to get as much money as they wanted from the government, while some other branches of science just begged for scraps, because during the Cold War it was considered a national priority that our giant concrete particle beam shafts be bigger and longer than the ones the Russians had.) In the meantime we’ve been stuck with the Tevatron, finished in the mid-80s, which scientists have just continued to use for new experiments. This means we’ve gotten some good chances to explore everything in the energy range up to the Tevatron’s limit of one tera-electron-volt, but nobody’s been able to look past that.

But now the Large Hadron Collider, which can go up to about 14 tera-electron-volts, is going online, with the first operational tests scheduled (link goes to a big pretty graph!) for October of this year. In the short term, this mostly just means probably a bunch of science stories in the news when this all starts up toward the end of the year. But then once a year or two has passed and the real science starts, physics is going to actually start changing. There are a lot of possibilities of things that could happen– finding supersymmetric partners or tiny black holes or the other exotic things that String Theory and other unification theories predict. Even aside from getting lucky and finding something big like that, though, the big expected purpose of the LHC is going to be to find a particle called the Higgs Boson, which is the only thing in the universe which is predicted by the Standard Model but which has never been actually seen. Finding the Higgs Boson would be a big deal, among other things because part of what the Higgs does is cause things to have mass, so once we’ve found one and nailed down its properties this might be a push in the right direction for the people trying to figure out how to make the Standard Model play nice with gravity. The other, unbelievably frustrating but actually probably even more promising possibility is that the LHC won’t find the Higgs Boson. This would be a big deal because it’s starting to look like if the Higgs Boson can’t be found at 14 tera-electron-volts, then it very probably doesn’t even exist, meaning theoretical physicists would have to go back to the drawing board in a potentially big way.

Aside from all this, a second interesting ongoing development in physics is that, while all these problems with accelerators have been going on, a whole bunch of rich and occasionally just plain crazy experimental results (and pretty pictures) have been showing up in astronomy thanks to advances in space-based telescopes. Astronomers have been undergoing a quiet renaissance in the last ten years after the Hubble Space Telescope finally started working, turning up all kinds of stuff that– although physicists haven’t really managed to find any clues forward in it yet– provide some big juicy questions that the next breakthrough in physics might be able to make a go at tackling. The biggest of these is the discovery of “dark energy”, the idea behind which can be summarized as “the universe’s expansion is speeding up over time, and we have no idea why“. It’s not clear whether this recent period of rapid advancement in astronomy will be able to continue much longer, since between the upcoming scuttling of Hubble and the gradual shifting of science research out of NASA’s budget it’s now questionable whether some of even the really promising experiments planned to follow up on the Hubble discoveries will be able to actually get carried out. But even still, we should expect at least a couple upcoming surprises from those experiments that do manage to get launched.

A third, less concrete and more sociological development in physics lately has been the new backlash against String Theory, typified by (or maybe just consisting of) two new and “controversial” books by Lee Smolin and Peter Woit. String Theory has always had detractors, but lately– partially in response to some documentaries in the late 90s where String Theory physicists took their ideas to the public, partially as a result of the increasingly long period of time String Theory has gone now without predicting anything concrete, and partially in reaction to the recent embrace by many string theorists of a really, really bad idea called the “landscape”– a handful of these detractors have started making their case against String Theory in a very public and organized way, and a lot of average members of the public (like, well, me) are starting to take notice. So far this String Theory Counterrevolution doesn’t really seem to have had any real effect on the state of science itself; its chief byproduct seems to have been a series of blog wars between physicists. (Though, if the reason I’m mentioning all of this is to list some of the things that have made physics interesting to follow lately, it’s definitely the case that watching Ph.D.s with decades of experience flame each other on their blogs is entertaining, in a pro-wrestling sort of way.) Still, the way this backlash against string theory has played out does seem to hint we may indirectly see some interesting advancements in the near future; many or all of the people speaking out against string theory lately have been doing so under the auspices of arguing not that String Theory is outright wrong or should be dropped, but simply with the intent of trying to argue that additional approaches to fundamental physics should be explored as well– many of the string theory backlashers are themselves incidentally proponents of still-embryonic alternate approaches, usually Loop Quantum Gravity in specific.
Considering that these arguments are being made in the full hearing of the public and other people who are responsible for the actual funding of physics, there may soon be a bit more oxygen for these alternate approaches to get the attention and funding needed for serious progress, so we may see some interesting developments coming from them before long.

The last kinda-interesting development in physics of late is the one that actually inspired this blog post and long tangent, which is the emergence of quantum computing as an active research field which may sometime soon actually produce useful machines. Basically, while particle physicists have been busting their asses trying to convince the government to buy bigger magnets and theoretical physicists have been gothing it out about their problems with strings, a lot of applied physicists, people in areas like quantum optics, have just been quietly doing their jobs and, although maybe not working on as large a scale as some of those other groups of physicists, actually getting shit done. One of these busy areas has been quantum computers, which over the last few years has undergone some really dramatic advances, from the first working programs running on a quantum computer in only 1998, to two years later it being a big deal when quantum computers with five or seven qubits were able to factor numbers up to 15, to by this year people getting all the way up to realistic levels like 16 qubits and sudden advances happening in things like memory transfer or gates for quantum computers. This is all a really intense rate of progress– it was not long ago at all that quantum computers only existed as buzzwordy black boxes used by science fiction authors, who assigned them improbable capabilities– and there’s not really any telling what exactly happens next.

Quantum computers work by storing information in the quantum states of various physical things. This is useful because of the weird ways quantum states act. Information in a traditional computer is stored using physical things themselves, like holes in a punchcard, or tiny pits on a DVD surface, or electricity in a capacitor in a RAM chip. With each of these different ways of physically encoding data, when you store a “bit” of information, it’s either a 0 or a 1. Either there’s a pit in the DVD surface, or there’s not. Quantum states, on the other hand, are not necessarily one single thing at a time; when you store data as a quantum state, what you instead store is a probability that that quantum bit (qubit) is 0 or 1. This means that when you store a number in a qubit register, you don’t have to store just one number in there; you can store lots of different numbers, as superpositions of probabilities. This leads to lots of crazy possibilities, since it means a program running on a quantum computer can, in a sense, do more than one thing at the same time. In another sense, this is just fakery; you’re not actually doing more than one thing at once, you’re just probabilistically choosing one of a certain number of possible paths the program could have taken. This leads to another weird property, that quantum computers don’t always return the right answer– they at best just have a certain known probability of returning the right answer. But probabilistic algorithms actually are a real thing in computer science, and can be fundamentally faster than traditional ones, so this is actually okay: you can run a quantum algorithm over and over and take the most common result, and you’ll still get the right answer way faster than you would have with a traditional computer.
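(Strictly speaking, what a register stores is a bit richer than bare probabilities: an n-qubit register is described by 2^n complex amplitudes, and the probability of reading out any particular bit-string is the squared magnitude of its amplitude. A toy sketch of just the readout step, ignoring everything about how the amplitudes got there:)

```python
import math
import random

def measure(amplitudes):
    """Collapse a qubit register, given as a list of 2**n complex
    amplitudes whose squared magnitudes sum to 1, into a single basis
    state, chosen with probability |amplitude|**2."""
    r = random.random()
    acc = 0.0
    for state, amp in enumerate(amplitudes):
        acc += abs(amp) ** 2
        if r < acc:
            return state
    return len(amplitudes) - 1  # guard against float rounding

# A two-qubit register in an equal superposition of 00 and 11 (a Bell
# state): measuring gives 0 (binary 00) or 3 (binary 11), each about
# half the time, and never 1 or 2.
bell = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]
```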

The thing is though that working with quantum states as quantum computers do is actually really, really hard. Quantum superposition is usually described to people using a thought experiment with a cat in a box and lots of big macroscopic objects like geiger counters and vials of poison, but this is mostly just a metaphor and it’s never anywhere near that easy to set up. There’s this “observation” in the Schrödinger’s cat thought experiment that collapses the waveform and forces the cat to be either alive or dead; this is usually described as “opening the box”, but actually any interaction whatsoever between the systems inside and outside the box, including stray heat propagation or vibrations in the air (say, from the cat yelling to be let out of the box) while the box is still closed, is enough to entangle the systems and collapse the waveform. Theoretical physicists doing thought experiments may get to gloss over these little details, but when doing quantum computing, these little details become incredibly practical and immediately relevant– because they mean that if the qubits in a quantum computer interact with their environment, ever, in any way, then the quantum state is destroyed and your computation is erased. This means that whatever things hold the qubits in your computer have to be effectively totally isolated from quantum interference of any kind, and continue to be so while doing gobs of things that normally no one would want, need, or try to do with a quantum state, like transport or copy it. Figuring out how to do this, and all the other things that quantum computing entails, means really stretching the edges of what we think we know about quantum physics, and means that quantum computing has actually become something of a hotspot for theoretical physics researchers at the same time that “normal” theoretical physics has been struggling to progress. 
I’m only able to perceive all of this from a far distance, of course, but still it’s been interesting to watch.

It’s been particularly interesting to me, though, because at the same time theoretical physics is getting a workout in the applied problem of building quantum computers, something else is getting a workout in the related applied problem of what to do with the quantum computers after they’re built: theoretical computer science. Theoretical computer science and computability theory is something I actually personally know something about (unlike, it is worth noting, quantum physics), and it’s incidentally a topic which gets very little attention. Computer science professors often do their best to convince the students passing through that things like Turing machines and lambda calculus actually matter, but people don’t get computer science degrees to do computer science theory, they get computer science degrees to get a job programming– and in a day to day practical sense, aside from an occasional vague awareness of the existence of “big O” notation, the vast bulk of people doing programming have little use for or awareness of computation theory. There are specific domains where this stuff is still directly useful, and there were times in the past where this stuff was still directly practical and relevant, but on average, outside of pure academia, the underlying theoretical basis of computer science these days practically seems to mostly only exist for the construction of intellectual toys for geeks like me who apparently don’t have anything else to do. Quantum computing, however, means that all this foundational computer science stuff not only has become suddenly very relevant and useful, but also suddenly has a bunch of new and relatively accessible open questions: As far as I’m aware no quantum algorithms even existed before 1993, and there’s still a lot of important questions to be answered on the subject of, once quantum computers “work”, what the exact extent of their capabilities might be and what we should do with them.

So to me personally, quantum computing is kind of interesting on two levels. One, if I’m going to treat the science of physics as a spectator sport, then from a spectator’s perspective quantum computers are one of the more interesting places things are going on right now; two, because of the connection to pure computer science, I or other people familiar with computers or programming actually have a chance of directly understanding what’s happening in quantum computers, whereas with, say, the Higgs Boson (which really? I’ve seen no fewer than five distinct attempted explanations of the Higgs Boson now, and I still have no clue what the darned thing really is), we’re left mostly only able to read and repeat the vague summaries and metaphors of the people who’ve worked with the underlying math. And then there’s a third thing I can’t help but wondering about in the interaction between these two things: I’m wondering if someone like me who already understands computers but doesn’t understand physics might be able to use quantum computing as a way of learning something about quantum physics. Real quantum physics takes years of really hard work and really hard math to learn, and most colleges don’t even let you at this math until you’ve already dropped a significant amount of time investment into the low-level, more accessible physics courses; on the other hand, if you’re outside of a college and trying to learn this stuff on your own, it’s very hard to find sources that really tell you directly what’s going on instead of coddling you in layer after layer of increasingly elaborate metaphors. 
Metaphors of course are and should be a vital tool in learning physics, but when you come right down to it physics is math, and glancing at most materials on quantum physics, I tend to get the feeling I’m learning not quantum physics, but just learning an elaborate system of metaphors, which interact with all the other metaphors as archetypical symbols but are connected to nothing else. On the other hand, stuff on quantum computing has to actually work with the real-world nuts and bolts and math of quantum physics (unlike popular books on the subject), but because so much of it has to be worked with by computer scientists, non-physicists, at least some of it is surely going to be necessarily aimed at a lower level of background knowledge (unlike learning this stuff at a college would be). One would have to be careful not to fool oneself into thinking this could actually be a substitute for a full physics education, of course, but from the perspective of an interested spectator I can’t help but wonder if this might be a good way of sneaking into the world of higher-level physics through the back door.

This is all a roundabout way of leading up to saying that I’ve been looking into some stuff on quantum computing, and I’m going to try to use what little I’ve worked out so far to try to make a little toy model computer simulation thing, which I will then post about here, on the internet, along with some hopefully psychedelic animated GIFs the simulation generated. If I’ve kind of got things right, then putting this post together in a form other people can hopefully understand will help me get my thoughts in order and test what I’ve got so far (I’m allowing myself the presumption of assuming anybody’s still reading at this point); if I’m just totally confused and wrong, meanwhile, then hopefully somebody who understands this stuff will eventually find this and point my mistakes out. A first little attempt at this is below.

Specifically, I’m curious whether one could define a quantum version of Conway’s Game of Life.

Conway’s Game of Life

“Life” is this thing this guy named John Conway came up with in the 70s. It’s referred to as a “game” because originally that’s more or less what it was; Conway originally published it as a brainteaser in Scientific American, intended to be played by hand on graph paper, with the original challenge being a $50 reward to the first person who could devise a Life pattern that continued growing endlessly no matter how long the game went on. Conway at first expected this was impossible; he was horribly, incredibly wrong. (An example of such a pattern is at right.)

Despite these fairly simple origins, Life turned out to be just the tip of the iceberg of an entire category of mathematical systems called “cellular automata”, which are now a recognized tool in computation theory and have by now received a fair amount of attention; Stephen Wolfram, the guy who created Mathematica, spent ten years writing a very long book all about cellular automata (a book which, although I’ve seen a lot of doubt as to whether anything in it is actually useful to any domain except computation theory, is universally agreed to contain lots of pretty pictures). Life itself has remained the most famous of these; today there are entire websites devoted to categorizing Life patterns of every kind you could think of, and most people with an education in programming have been forced to implement a Game of Life program at one point or another.

- - - - - 1 2 3 2 1
- O O O - 2 3 4 2 1
- O - - - -> 2 4 5 3 1
- - O - - 1 2 2 1 0
- - - - - 0 1 1 1 0
On the left is a grid of cells containing a glider; on the right is a grid containing, for each cell on the left, the count of living cells in the 3×3 block centered on it. (Note that this count includes the cell itself; I’ll use that convention throughout this post, so the numbers run one higher than the usual “number of neighbors” for living cells.) I’ve colored light red the dead spaces which in the next generation will be new births, dark red the live spaces which in the next generation will survive, and light blue the live spaces which in the next generation will die. Let this keep running a few more generations and the glider crawls diagonally across the board:

The term “cellular automata” just refers to systems that can be described by patterns of colored dots in “cells” (i.e., grid squares or some other shape on graph paper) and which change over “time” (i.e., redrawing the system over and over on pages of graph paper) according to some simple rule– in all variations of cellular automata I’ve ever seen, how a cell is colored in one frame depends on how its neighbors were colored in the previous frame. In the original Conway’s Life, the rule was that a cell is colored black in a new frame if

  • in the previous frame it was colored black and its 3×3 block contained three or four black cells (counting the cell itself– that is, it was touching two or three black neighbors), or
  • the cell was colored white in the previous frame but was touching exactly three cells colored black.

Conway intended this as a sort of population model (of “life”) where black dots represent living things; black dots disappearing when they have too few or too many neighbors represents that lifeform dying of either loneliness or starvation, and black dots appearing when they have a certain number of neighbors represents reproduction.
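
Since nearly everyone ends up writing one of these at some point anyway, here’s a minimal sketch of the rules above in Python (my own illustration for this post, not the Perl module I mention later; the counts include the cell itself, matching the neighbor-count grid above):

```python
# A minimal Game of Life step.  "Count" here means the number of live cells
# in a cell's 3x3 block, INCLUDING the cell itself -- the same convention as
# the neighbor-count grid above.  Birth: dead cell with count 3; survival:
# live cell with count 3 or 4.  (These are the ordinary Conway rules, just
# counted self-inclusively.)

def life_step(live):
    """live: set of (row, col) tuples of living cells; returns next generation."""
    counts = {}
    for (r, c) in live:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                key = (r + dr, c + dc)
                counts[key] = counts.get(key, 0) + 1
    nxt = set()
    for cell, n in counts.items():
        if cell in live and n in (3, 4):    # survival
            nxt.add(cell)
        elif cell not in live and n == 3:   # birth
            nxt.add(cell)
    return nxt

def normalize(cells):
    """Shift a pattern so its top-left corner is at the origin, for comparison."""
    r0 = min(r for r, c in cells)
    c0 = min(c for r, c in cells)
    return sorted((r - r0, c - c0) for r, c in cells)

# The glider from the grid above; after four generations it reappears,
# translated one cell diagonally.
glider = {(1, 1), (1, 2), (1, 3), (2, 1), (3, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
```

After four steps `g` is the same shape as `glider`, shifted one square diagonally– which is exactly what makes it a glider.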

These rules favor non-life over life, so if you just fill a board up with randomly placed live and dead squares, within a couple of generations most of the life will have died off or settled into tiny static patterns, like little solid blocks or little spinners eternally looping through a two-frame animation. But you’ll also have little areas that wind up filled with sloshing waves of chaos, breaking against and devouring the static bits. If you watch this carefully, you’ll notice that chaotic as these waves look, the way Life patterns act when they collide with one another is actually very specific and structured. The systems in Life are chaotic enough that predicting what a random Life pattern you come across is going to do is just about always impossible unless you sit down and actually run it. But if you sit down ahead of time and work out a catalog of patterns whose behavior and interactions with other patterns you know, and then stick to placing those patterns in very carefully controlled ways and places, you can actually use these smaller parts to design larger Life patterns that have basically whatever behaviors you want.

You can build machines.

Life is an example of something we call a “model of computation”, which is a name we give to systems– machines, language systems, board games, whatever– that can be one way or another tricked into running some kind of “program” that’s dummied up for it, such that after a while its changes in state have effectively computed some sort of result. The kinds of computational models we normally care about are the ones that we say are “Turing complete”. This is a reference to the first ever proven universal model of computation, the Turing Machine devised by Alan Turing in 1936. The Turing Machine is a hypothetical computer, described as a sort of little stamping machine on wheels that slides back and forth on an infinite tape reading and rewriting symbols according to a little set of rules it’s been wired up with. The Turing Machine was not meant to ever actually be built; it was designed just to be easy to reason about and demonstrate interesting things in mathematical proofs. The most interesting thing about the Turing Machine is that it is in fact universal; there’s a generally accepted but technically unproven principle called the Church-Turing Thesis that any programming language, computer, whatever, that you can think of, can be emulated on a Turing Machine. This makes the Turing Machine suddenly really important as a sort of least common denominator of computers, since if you have a machine that can emulate a Turing Machine– in other words, a machine which is Turing complete– then that machine can inevitably emulate anything a Turing Machine can, including every other Turing complete machine, as well. 
Turing Complete models of computation are as common as dirt– they include every computer and programming language you can think of, as well as some really bizarre and deceptively useless-looking mathematical systems, a couple of which Conway invented– and they’re all approximately the same, because they can all do exactly those things the other Turing complete systems can.

The Game of Life is, as it happens, Turing complete, as was quite dramatically demonstrated a few years back when some guy actually built a Turing Machine that runs inside of a Life system. The Life Turing Machine is a machine in a hopelessly literal sense; viewed zoomed out, it looks like some kind of aerial photograph of some steam-powered contraption that a man from the 1800s with a handlebar moustache would demonstrate, after dramatically pulling off the canvas that covered it. The thing isn’t particularly efficient; if you for some reason took the time to make an emulator for a 386 Intel PC out of Turing machine rules, and then ran this emulator on the Life Turing Machine, I imagine the heat death of the universe would probably come before you managed to get Minesweeper loaded. But efficiency isn’t the important thing here: it works, and that’s what matters.

Something worth noting is that, although I’ve already claimed that Turing machines are approximately the same as any computer you can think of, this isn’t quite true– quantum computers actually aren’t included in this generalization. Normal computers can emulate quantum computers, but only with unacceptable amounts of slowdown (insofar as certain kinds of proofs are concerned), and there are things quantum computers can do that normal Turing-complete machines outright can’t– specifically, generate truly random numbers. This isn’t a problem for the Church-Turing thesis, exactly: we can get around it by saying that the thesis is only supposed to describe deterministic machines, which quantum computers aren’t. Despite this, we still need some kind of least common denominator to reason about quantum computers with, so we have this thing called a Quantum Turing Machine, which vaguely resembles a Turing Machine but provides a universal model of computation for quantum computers.

So, all this in mind, here’s what I wonder: The Turing Machine is a universal model of conventional computation, and you can make minor modifications to the Turing Machine and get something that’s also a universal model of quantum computation. Life is also a universal model of conventional computation; you can make a Turing machine in it. Is there some simple bunch of modifications that can be made to Life to turn it into a universal model of quantum computation?

I originally started wondering about this in the context of Loop Quantum Gravity, something I mentioned earlier as an “alternative”, non-String theory of Physics. Loop Quantum Gravity is supposed to be a theory of Gravity that incidentally follows the rules of Quantum Physics. This is something that is understood to be either extremely difficult or impossible, for a couple reasons, one of which is that gravity has to play nice with the Theory of Relativity. The Theory of Relativity plays hell with quantum theories because it says the geometry of the universe is bending and warping all the time like silly putty, which makes the quantum theories scream “I CAN’T WORK UNDER THESE CONDITIONS!” and storm out of the building. Loop Quantum Gravity, at least as I understand it, gets around this in part by quantizing geometry itself– that is, it makes the very geometry the universe exists inside subject to a form of quantum behavior. One of the consequences of this, as I understand it, is that the geometry of the universe becomes discrete, which means it can be represented as if it were a big interlocking bunch of (albeit really weirdly shaped) cells on graph paper. In other words, you could, if you really wanted to, treat the geometry described by Loop Quantum Gravity as a board to run Life on. If that’s right, it seems like an interesting idea to me, because Loop Quantum Gravity is really hard to find any remotely accessible information on, and all the information that is available is super-high-level math, so something like Life or some other cellular automaton running on the nodes of an LQG spin network would be a fun little visual demonstration of how the thing is supposed to work. But before you did that, I assume, you’d have to have some kind of conception of how a cellular automaton in a quantum universe would work at all.

A Quantum Game of Life

Conveniently, creating cellular automata with quantum behavior isn’t at all a unique idea; looking around I’ve found that people have been working with QCA models almost as long as they’ve been working on Quantum computers, and several conceptions of cellular automata with quantum behavior are already around. I’m going to list them here, since I used some of them in formulating what comes after.

  • First, there’s the universal quantum turing machine itself. Mark Chu-Carroll has an informal but very clear explanation here of how the quantum turing machine conceptually works; you can also find the original paper that first defined the QTM online under the title Quantum theory, the Church-Turing principle and the universal quantum computer. It’s a bit of a heavier read, but the introductory parts are plain readable everyday English and interesting for their own reasons.
  • On top of this, there’s actually real ongoing research into what are called “quantum cellular automata” or “quantum dot cellular automata”. If you look into these, you’ll find they seem to mostly consist of systems of little objects that look like dice arranged next to each other like dominoes, such that the painted dots in any individual die change color over time depending on how the dots in the closest adjacent die are colored. I’m not exactly sure how these things work, but I do notice two things. One, this version of cellular automata looks very serious and formal and appears to have real practical application in actually simulating quantum systems, whereas what I’m hoping to make here is more a silly little toy program that makes pretty pictures. Two, doing anything in this version of QCAs seems highly dependent on how you arrange the tiling of the dominoes against each other, whereas I’m really hoping for something where the behavior is dictated by the way the “lifeforms” are positioned and not by the way the external geometry is preconfigured.
  • Some guy named Wim van Dam in 1996 actually wrote an entire Master’s thesis on the subject of quantum cellular automata, under the simple title of “Quantum Cellular Automata“. Van Dam’s thesis is actually well worth reading if you’re finding any of this interesting so far, and explains the motivations and workings of quantum computers, cellular automata, and hypothetical quantum cellular automata all way better than I do here– probably to be expected, since it’s an entire master’s thesis, instead of just a “this is how I wasted my Saturday” blog post. Potentially extremely conveniently for my purposes, Van Dam describes specifically how to formulate a one-dimensional cellular automaton of the exact kind that the Game of Life uses. He does not, however, bother generalizing this to two dimensions, which is what would allow one to implement the Game of Life specifically, since the one-dimensional version is of course sufficient for his goal of proving that QCAs work.
  • Google incidentally turns up a short and concise 2002 paper titled “A semi-quantum version of the Game of Life“. Looking over this, however, it doesn’t appear to be exactly the thing I’m looking for (i.e. a reformulation of Life with quantum style rules). Instead, the paper suggests that such a thing is probably not a good idea, noting “A full quantum Life would be problematic given the known difficulties of quantum cellular automata” (uh oh), and instead takes the opposite tack, of trying to design a quantum system that can simulate life: specifically, they design a way of creating systems of quantum oscillators which interfere with one another in a way which “in the classical limit” exactly follows the rules of life, but which otherwise exhibits various interesting quantum mechanical properties as a result of the oscillators it’s made out of. Which is all interesting, but it isn’t quite what I was looking for (at least I don’t think so), and it also requires one to understand the quantum harmonic oscillator (which I don’t).

None of this exactly provides what I was looking for, since all of the attempts above, as it happens, are by people doing actual physics research. They’re thus doing their work under the constraints of a set of fairly serious goals, goals like applicability in proving things about quantum physics or even implementability as a real-world device. When I think about deriving a version of quantum Life, though, I’m just thinking of something that would be easy for me to implement and useful and fun for demonstrating things. This means that if I’m going to try to construct my own version of quantum Life, I’m going to be working off a somewhat more modest set of goals:

  1. Just as in the original game of Life, whatever the basic rules are, they should be simple to understand and implement even to someone who doesn’t necessarily understand quantum physics and turing machines and such.
  2. Also like the original game of life, running a simulation of quantum life should produce pretty pictures.
  3. And also like the original game of life, the way patterns evolve should exhibit some sort of structure which could be potentially used by someone with lots of time on their hands to design starting patterns with arbitrary complex behaviors.
  4. The game should incorporate or exhibit some kind of behavior of quantum mechanical systems. Hopefully the quantum behavior should incidentally impose some kind of interesting limitations or provide some kind of interesting capabilities to someone trying to design starting patterns that wouldn’t be the case in standard Life. In the best of all possible worlds it would be possible to demonstrate some interesting feature[s] of quantum mechanics by setting up quantum Life patterns and letting them run.
  5. Ideally, hopefully, this simple quantum life should turn out to be a model of quantum computation– that is, it should be possible to simulate a quantum turing machine and thus emulate any quantum computer using this quantum life. Since it is possible to simulate a quantum turing machine on a universal quantum cellular automaton and vice versa (the Wim van Dam thesis proves this), it should be sufficient to show that any of the “real” QCAs described above can be implemented or simulated as patterns in quantum life.

My first attempt at this follows; to test it, I did a quick and dirty implementation in Perl. Note that although I’m pretty confident of everything I’ve written in this blog post up to this point, after this point I’m in uncharted territory and for all I know everything after this paragraph is riddled with errors and based in gross misunderstandings of quantum physics. In fact, since as I type these words I haven’t actually started writing any of the Perl yet, I have no idea if this quantum Life thing is even going to work. (Though even if it doesn’t, I’m posting this damn thing anyway.)

(Update, Feb 2009: So, I wrote this post a couple years ago when I was first trying to learn about quantum physics and quantum computing, and I actually did make a couple mistakes. The one really huge mistake I made occurs a few paragraphs back, and it propagates forward into the part of the post that follows. The mistake is this: I claim in this post that a qubit in a quantum computer can be described by the probability that that bit is 0 or 1. This is close, but not right. A better way to put this is that a qubit can be described by a quantum probability that the bit is 0 or 1. What is the difference? Well, a normal probability is just a number between 0 and 1– like, 30%. A quantum probability, however, is a complex number– a number with an imaginary part– of absolute value no greater than one. This unusual notion of a “probability” turns out not only to make a big difference in the behavior of quantum systems, it’s the entire thing that makes quantum computers potentially more powerful than normal computers in the first place! Unfortunately I didn’t clearly realize all this when I wrote this blog post. So in the final part of this post, I try to define a “quantum” cellular automaton– but since at the time I misunderstood what “quantum” meant, I wind up instead defining a cellular automaton model which is in fact only probabilistic. Now, granted, I think what follows is still kinda interesting, since the probabilistic cellular automaton turns out to be a curious little thing unto itself. And it’s definitely a first step toward quantum cellular automata. But, just a warning, it’s not itself quantum. I did later take some first steps toward implementing an actual quantum cellular automaton, but they’re not done yet. So hopefully someday I can revisit this and write another post in this series that gets it right this time.

If you want a clearer explanation of this whole “quantum probabilities” thing, I suggest you read this, which is about the clearest explanation of what the word “quantum” means I’ve ever found.)
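
To make the update’s distinction concrete, here’s a tiny toy example of my own (not from anything above) of why complex “probabilities” matter: a classical 50/50 randomizer applied twice leaves you exactly as uncertain as before, but the quantum analogue– the Hadamard operation on complex amplitudes– applied twice cancels itself out and returns the system to a definite state.

```python
import math

# Classically, a 50/50 randomizer applied twice still leaves a 50/50 mixture,
# but the quantum analogue -- the Hadamard operation on complex amplitudes --
# applied twice returns the system to its starting state, because the
# amplitudes can cancel each other out.

def randomize(dist):
    """Classical coin: whatever the input distribution over {0, 1}, the
    output is 50/50.  Once you're uncertain, more randomizing never helps."""
    return (0.5, 0.5)

def hadamard(amps):
    """Quantum 'coin': rotates the (possibly complex) amplitudes of |0>, |1>."""
    a0, a1 = amps
    s = 1.0 / math.sqrt(2.0)
    return (s * (a0 + a1), s * (a0 - a1))

dist = randomize(randomize((1.0, 0.0)))   # classical: still (0.5, 0.5)

amps = hadamard(hadamard((1.0, 0.0)))     # quantum: back to |0> exactly
probs = (abs(amps[0]) ** 2, abs(amps[1]) ** 2)
```

No classical update on plain probabilities can undo randomness like that second Hadamard does– which is the capability the model below is missing.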

The most logical way to construct quantum life, at least as far as I’m concerned, is to just take the trick used to create the QTM and apply it to the Life grid. In a Post machine [2-state Turing machine], each tape cell is equal to either state “0″ or state “1″; in the QTM, the tape cell is essentially equal to a probability, the probability that the tape cell contains state “1″. This simple conceptual change, from states to probabilities of states, provides basically all of the QTM’s functionality. In Life, meanwhile, as in the Post machine, each grid cell is equal to either state “0″ or state “1″; a naive quantum Life, then, could be created by just setting each grid cell essentially equal to a probability, the probability that the grid cell contains state “1″ (“alive”). This is the tack I’m going to follow. I’m going to speak of cells below as being “equal” to a number between 0 and 1; when I do this, the given number is the probability, at some exact moment in time, that the cell in question is “alive”.

The introduction of probabilities like this, even just by itself, offers a lot of opportunity for the creation of new and elaborate cellular automata rules– say, rules where each number of neighbors offers a different probability of survival into the next generation; I imagine the people who’ve already done so much elaborate stuff with just the addition of new discrete colors to Life could have a field day with that.

I’m going to ignore those possibilities, though, and just maintain the rules from “Real Life” entirely unchanged– that is, a cell is created iff it has three neighbors, and a cell survives iff it has three or four neighbors (still counting the cell itself, as before). These rules still work perfectly well even though now some cells are only “probably” there, and conveniently, this selection of rules means that this kind of quantum Life is in fact completely identical to normal Life so long as all the cells are equal to exactly 0 or exactly 1. The simulation only behaves differently if at least one cell, from the beginning, is programmed to be “indeterminate”– that is, to have some probability of being alive between 0 and 1. When this happens, we can no longer calculate an exact number of neighbors for the cells in the vicinity of the indeterminate cell. We deal with this situation simply: instead of calculating the number of neighbors, we calculate the probability that the number of neighbors is “right” for purposes of the cell being alive in the next generation. That probability then becomes the cell’s “value” in the next generation. For example, let’s say an empty cell has two neighbors equal to 1 and one neighbor equal to 0.5. This empty cell has a 50% probability of having 2 neighbors and a 50% probability of having 3 neighbors. Since the cell needs exactly three neighbors to reproduce, the probability of the cell being alive in the next generation will be 0.5.
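
That worked example can be double-checked by brute force– enumerate every way the uncertain neighbors could turn out, weight each outcome by its probability, and total up the outcomes with exactly three live neighbors. A quick sanity-check sketch of mine (not the Perl implementation discussed later):

```python
from itertools import product

# Brute-force check of the worked example above.  An empty cell has three
# relevant neighbors, alive with probabilities 1, 1, and 0.5 (all the other
# surrounding cells are certainly empty, and the cell itself is empty, so
# neither contributes to the count).  Enumerate every alive/dead outcome of
# the neighbors and total the probability that exactly three are alive --
# the birth condition.

def birth_probability(neighbor_probs):
    total = 0.0
    for outcome in product((0, 1), repeat=len(neighbor_probs)):
        weight = 1.0
        for bit, q in zip(outcome, neighbor_probs):
            weight *= q if bit else (1.0 - q)
        if sum(outcome) == 3:
            total += weight
    return total

p = birth_probability([1.0, 1.0, 0.5])  # the example from the text
```

Only the all-three-alive outcome has three neighbors, and its weight is 1 × 1 × 0.5, so `p` comes out to 0.5 as claimed.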

The questions that immediately come to mind about this way of doing things are:

  • Does the addition of “indeterminate” states actually have any interesting or useful effect?
  • Is the addition of these indeterminate states to Life sufficient to allow the emulation of a QTM or UQCA?
  • Does this actually have anything to do with quantum physics?
  • Is this different in practice from the “semi-quantum life” linked above?
  • Though one of my goals is for the rules to be simple, and they seem simple to me, I took three long paragraphs to describe these rules. Is the way I described the rules at all clear, and maybe is there some way I could have just summarized them in a couple of sentences?

(As far as the second question here goes, it occurs to me that what I’ve designed here may well not really qualify as “quantum life”, and maybe “probabilistic life” would be a better way of describing it. In fact, looking on Google for the phrase “probabilistic game of life”, I now find that a couple of people have attempted rules along these lines: There’s this little writeup on a math site here, which appears to be describing a version of probabilistic Life with my exact rules. That author doesn’t provide an implementation, though, so I shall press on, although the site does provide some interesting proofs about the behavior of a probabilistic Life that I’m not going to bother trying to understand just now. On an unrelated note, Google also turned up this odd little thing, though what it actually has to do with the Game of Life is not obvious from the link.)

Anyway, let’s look at an actual implementation of this kind of quantum life and see whether the answers to my questions are at all obvious.

I happened to have this simple implementation of Life sitting around I’d made awhile back, as a Perl module that takes in grids containing Life starting states and spits out animated GIFs. This particular implementation was written such that on each frame, the module would make a table listing the number of neighbors each cell in the grid has, then use this number-of-neighbors table to generate the alive and dead states in the next frame. Modifying this existing module to work with quantum life turned out to be incredibly simple: rather than each entry in the number-of-neighbors table being a simple number, I just switched it to being a probability distribution– an array of ten values representing the probabilities that the cell has zero through nine neighbors, respectively. I build up this array by starting out with a distribution giving 100% probability of zero neighbors and 0% probability of any other number, and then factoring in, one at a time, the changes to the probability distribution caused by each of the nine cells in the 3×3 block (the cell itself included, per the counting convention above):

                start    +100% alive    +100% alive    +50% alive    +50% alive
                         neighbor       neighbor       neighbor      neighbor
0 neighbors:      1          0              0              0             0
1 neighbor:       0          1              0              0             0
2 neighbors:      0          0              1              0.5           0.25
3 neighbors:      0          0              0              0.5           0.5
4 neighbors:      0          0              0              0             0.25
(If you squint real hard, you can probably see the binomial distribution in there.)
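
In case the table is easier to read as code, here’s the same fold-one-neighbor-in-at-a-time bookkeeping in Python (the actual module is Perl; this is my own rendering of the procedure just described):

```python
# Build the neighbor-count distribution by starting from "certainly zero
# counted so far" and folding in each cell's alive-probability one at a
# time.  Each fold is a convolution with a biased coin flip.

def fold_in(dist, p_alive):
    """dist[k] = probability that k of the cells folded in so far are alive."""
    out = [0.0] * (len(dist) + 1)
    for k, q in enumerate(dist):
        out[k] += q * (1.0 - p_alive)   # this cell turned out dead
        out[k + 1] += q * p_alive       # this cell turned out alive
    return out

dist = [1.0]                      # 100% probability of zero so far
for p in (1.0, 1.0, 0.5, 0.5):    # the sequence worked through in the table
    dist = fold_in(dist, p)
# dist is now [0, 0, 0.25, 0.5, 0.25] -- the final column of the table
```

Folding in all nine cells of the 3×3 block this way yields the ten-entry array the module keeps per cell.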

The probability of the cell being alive in the next generation is then calculated by

(probability of cell having 3 neighbors this generation) + (probability of cell being alive this generation)*(probability of cell having 4 neighbors this generation)
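
Putting the distribution-building and that formula together, a full update step looks something like this (again my own Python sketch of the procedure described, not the actual Perl module). With every probability at exactly 0 or 1 it reduces to ordinary Life, which the 100% glider below demonstrates:

```python
# One generation of the probabilistic Life described above.  A board is a
# dict mapping (row, col) -> probability that the cell is alive; cells not
# in the dict have probability 0.

def fold_in(dist, p_alive):
    """Extend a count distribution by one cell alive with probability p_alive."""
    out = [0.0] * (len(dist) + 1)
    for k, q in enumerate(dist):
        out[k] += q * (1.0 - p_alive)
        out[k + 1] += q * p_alive
    return out

def step(board):
    # any cell in or adjacent to a nonzero cell might change
    candidates = {(r + dr, c + dc)
                  for (r, c) in board
                  for dr in (-1, 0, 1)
                  for dc in (-1, 0, 1)}
    new = {}
    for (r, c) in candidates:
        dist = [1.0]  # distribution over the 3x3 count, self included
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                dist = fold_in(dist, board.get((r + dr, c + dc), 0.0))
        # P(alive next) = P(count 3) + P(alive now) * P(count 4),
        # the formula from the text
        p_next = dist[3] + board.get((r, c), 0.0) * dist[4]
        if p_next > 0.0:
            new[(r, c)] = p_next
    return new

def normalize(cells):
    """Shift a pattern so its top-left corner is at the origin."""
    r0 = min(r for r, c in cells)
    c0 = min(c for r, c in cells)
    return sorted((r - r0, c - c0) for r, c in cells)

# Sanity check: a 100%-there glider behaves exactly as in normal Life.
glider = {(1, 1): 1.0, (1, 2): 1.0, (1, 3): 1.0, (2, 1): 1.0, (3, 2): 1.0}
board = dict(glider)
for _ in range(4):
    board = step(board)
```

Seed `board` with values strictly between 0 and 1 instead, and the same `step` produces the smearing behavior discussed next.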

What happens when we run this?

Well, at first glance at least it doesn’t appear to work all that well. Here’s what happened when I took a simple glider and placed it on a quantum Life board, in each case varying the probability that the glider was “there” (and, in the last couple cases, varying the probability that each of the “empty” cells were actually filled). In this table I show first the starting condition, and then an animated GIF showing what happened when the Life simulation ran. In each frame, the darker the pixel, the more probable it is that there was a living cell there at that exact moment.

100% 50% 75% 90% 100% 0.001%

The 100% alive glider, as expected, behaves just as it would in normal Life. The other gliders… well… just kind of instantly explode. The 90% glider manages on average to complete a couple of halting steps before it falls apart entirely, the poor thing, but it too smears out of apparent existence within a few more frames. This in retrospect isn’t actually surprising. When the gliders “disappear”, they’re not outright gone; instead, the blank area where they once were (and a gradually spreading portion of the rest of the blank area, as well) is equal to a bunch of incredibly low probabilities, so low they’re indistinguishable from pure white. This happens because when we run a game of Life containing indeterminate states, we’re in effect trying to simulate every single possible board configuration that could have existed at once, with some just being more likely than others (only “in effect”, mind you, because each cell’s probability is tracked independently, so any correlations between cells get thrown away each generation). The Life rules favor non-life over life, so the vast majority of these possible games wind up just rapidly spinning out into nothingness– or, at least, the possible games that do wind up with some living cells left don’t have the life settling in any one area more prominently than others, so the blankness is able to dominate essentially without opposition.

So this version of Life doesn’t at first glance seem to turn out to be very useful in terms of creating a usable or playable game. We could maybe fix the “everything dies instantly” problem by fiddling with the rules some in the next version, though, so maybe things aren’t hopeless. In the meantime, does what we have so far at least demonstrate quantum behavior? Well, it does seem pretty clear that the life cells participate in self-interference, at least from informal observation:

Starting state · Center line is 100% there · Center line is 99.99% there

At least, self-interference is how I interpret the sort of gradienty behavior the version on the right engages in before it fades out entirely. The probabilities don’t do that whole square-of-complex-numbers thing, and there isn’t anything resembling observation or wavefunction collapse (I could change the GIF-making function so as to basically assume wavefunction collapse happens on every frame, by randomly populating the gray areas with life instead of displaying the probability distributions, but that would probably just make things visually confusing without conveying anything interesting)– but (although I could be wrong about this) the quantum turing machine doesn’t, as far as I can gather, really have any of these properties either, so we could maybe just shrug that off.

Even so, there really doesn’t seem to be any way you could make a turing machine out of parts that keep fading out of existence, so after encountering this everything-fades-away problem I was just about to give up and declare the whole thing a wash. Still, turing machines aside, I did manage to at least find one interesting thing about this particular quantum life implementation. We see the first glimpse of this when we try populating a Life board at random:

Okay, now here there seems to be something going on. You’ll notice that most of these areas actually don’t fade right to white, but instead stabilize at what looks like a sort of 50% gray probability (although, while they look like it, the stable half-gray spots aren’t actually 50/50 gray– each of those cells is about 34.7583176922672% there. I don’t know why it’s that particular number, but I assume it has something to do with the tension between the “survivable” thresholds of three and four neighbors); apparently there’s some kind of sweet spot where Life boards filled to a certain density of living cells tend to keep that density more or less consistently on average. A few little bits of area stabilize toward zero, though, and the areas of zero gradually eat away at the areas of half-life until there’s nothing left. So that’s kind of interesting. But the one thing that suddenly and out of nowhere justified this entire silly little project to me is what happened when I randomly populated a board, and then “rounded” most of it– that is, I rounded everything on the randomly generated board to exactly 1 or 0, except for a thin little strip of indeterminacy running vertically down the right side. And I got this:
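
About that mysterious 34.7583176922672%: I think it can actually be pinned down (my own back-of-the-envelope reasoning, so take it with salt). In a large uniform gray region, every cell has the same value p, so each cell’s 3×3 count is binomially distributed over nine cells, and the update formula collapses into a one-variable map: p → 84p³(1−p)⁶ + 126p⁵(1−p)⁵, where 84 and 126 are 9-choose-3 and 9-choose-4. Iterating that map appears to converge right to the observed half-life value:

```python
from math import comb

# Hunting for the ~34.76% "half-life" value (my own conjecture, not from the
# post): assume a large uniform region where every cell is alive with the
# same probability p.  Each cell's 3x3 count (self included) is then
# Binomial(9, p), and the update rule becomes a one-variable map.

def uniform_update(p):
    q = 1.0 - p
    p_count3 = comb(9, 3) * p**3 * q**6   # P(3x3 count == 3)
    p_count4 = comb(9, 4) * p**4 * q**5   # P(3x3 count == 4)
    return p_count3 + p * p_count4        # the formula from the text

p = 0.5
for _ in range(200):
    p = uniform_update(p)
# p settles at the map's attracting fixed point, ~0.3475831769...
```

If this reasoning holds, the half-gray plateaus are just boards sitting at the attracting fixed point of that map, and the tension between the three- and four-count thresholds is baked into those two binomial terms.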

…WTF?

What happens here is that the “probably there” areas converge on the mysterious 34.7% half-life value and then just kind of spill outward, swallowing up all the actually-living cells as they go. Then, once all the living matter has been devoured and no food is left, the white starts eating in and the areas of half-life slowly die out. In other words…

Holy crap! It’s a zombie apocalypse simulator!

So, I’m not sure this entire exercise was altogether productive in any way, though I’m going to keep experimenting with all of this– maybe trying some alternate Life rules besides just Conway’s 23/3– to see if I can coax anything useful out of this quantum Life concept. Check again next week to see if I come up with anything. In the meantime, though, at least, I think that the final image makes the whole thing satisfying to me in two ways:

  1. I at least got my psychedelic animated GIFs
  2. I tried to test quantum physics, and instead accidentally created zombies. Does that mean I qualify as a mad scientist now?

If you want to run any of this stuff yourself, you can find the messy little quantum life module here and the script that generated all the images on this page here. You’ll need to rename those files to qli.pm and gliders.pl, and you’ll need Perl and GD::Image::AnimatedGif to run them; and to do your own tests you’ll have to modify gliders.pl. If you need any help with that, just ask below.