Friday, September 28, 2007

Enigma 1462

Here is how Mathematica can be used to solve New Scientist Enigma number 1462 in the 29 September 2007 issue.

List of all 3-tuples of side lengths of red, blue, and yellow cubes. The upper limit of each of these is constrained by the known number of digits in the cubes of these side lengths in the red and yellow cases, and by the known number of digits in the square of the side length in the blue case.
(sizelist = Flatten[Table[{r,b,y}, {r,0,9}, {b,0,9}, {y,0,21}], 2]) // Length

Select the cases where red volume is a 3-digit number, and the blue area is a 2-digit number, and the yellow volume is a 4-digit number, and the first digit of the red volume is the same as the first digit of the blue area, and the last digit of the blue area is the same as the last digit of the yellow volume.
sizelist2 = Select[sizelist, 100<=#[[1]]^3<=999 && 10<=#[[2]]^2<=99 && 1000<=#[[3]]^3<=9999 && IntegerDigits[#[[1]]^3][[1]] == IntegerDigits[#[[2]]^2][[1]] && IntegerDigits[#[[2]]^2][[-1]] == IntegerDigits[#[[3]]^3][[-1]]&]
{{5,4,16}, {6,5,15}, {7,6,16}}

Check the digits corresponding to the letters in "nil", "no", and "zero". Only the first case is admissible, because it is the only one that gives distinct digits for all of these letters.
nilozer = Map[(
   {volumered, areablue, volumeyellow} = {#[[1]]^3, #[[2]]^2, #[[3]]^3};
   nilnozero = Flatten[Map[IntegerDigits, {volumered, areablue, volumeyellow}]];
   Drop[Drop[nilnozero, {-1}], {4}])&, sizelist2]
{{1,2,5,6,4,0,9}, {2,1,6,5,3,3,7}, {3,4,3,6,4,0,9}}

1. Choose the first case above which thus fixes the values of {n, i, l, o, z, e, r}.
2. Compute the volumes of the 3 coloured cubes.
3. Generate a list of permutations of these volumes, because which pile is which is unknown.
4. List all the possible alternative cases of {t, h, g}, i.e. permutations of one base case.
5. For each of these cases of {t, h, g} compute the total volume "nothing".
6. Generate a list of the numbers of cubes in each of the 3 piles, where the 3rd pile is the one with the unknown number.
{n, i, l, o, z, e, r} = %[[1]]
{volumered, volumeblue, volumeyellow} = {#[[1]]^3, #[[2]]^3, #[[3]]^3}&[sizelist2[[1]]]
volumelist = Permutations[{volumered, volumeblue, volumeyellow}]
thglist = Permutations[Complement[Range[0,9], {n, i, l, o, z, e, r}]]
volumetotallist = Map[FromDigits[{n, o, #[[1]], #[[2]], i, n, #[[3]]}]&, thglist]
numberlist=Append[Map[FromDigits, {{n, o}, {n, o, n, e}}], x]

{{125,64,4096}, {125,4096,64}, {64,125,4096}, {64,4096,125}, {4096,125,64}, {4096,64,125}}
{{3,7,8}, {3,8,7}, {7,3,8}, {7,8,3}, {8,3,7}, {8,7,3}}
{1637218, 1638217, 1673218, 1678213, 1683217, 1687213}

For each member of the list of volumes (see step 3 above) and each possible total volume (see step 5 above) generate an equation for the number of cubes in the unknown pile, and label each equation with the colour of the unknown pile.
eqns = Flatten[Outer[{Switch[#1[[3]], 125, "r", 64, "b", 4096, "y"], #1.numberlist==#2}&, volumelist, volumetotallist, 1], 1]
{{y,105040+4096 x==1637218}, {y,105040+4096 x==1638217}, {y,105040+4096 x==1673218}, {y,105040+4096 x==1678213}, {y,105040+4096 x==1683217}, {y,105040+4096 x==1687213}, {b,6596560+64 x==1637218}, {b,6596560+64 x==1638217}, {b,6596560+64 x==1673218}, {b,6596560+64 x==1678213}, {b,6596560+64 x==1683217}, {b,6596560+64 x==1687213}, {y,202274+4096 x==1637218}, {y,202274+4096 x==1638217}, {y,202274+4096 x==1673218}, {y,202274+4096 x==1678213}, {y,202274+4096 x==1683217}, {y,202274+4096 x==1687213}, {r,6595584+125 x==1637218}, {r,6595584+125 x==1638217}, {r,6595584+125 x==1673218}, {r,6595584+125 x==1678213}, {r,6595584+125 x==1683217}, {r,6595584+125 x==1687213}, {b,266786+64 x==1637218}, {b,266786+64 x==1638217}, {b,266786+64 x==1673218}, {b,266786+64 x==1678213}, {b,266786+64 x==1683217}, {b,266786+64 x==1687213}, {r,168576+125 x==1637218}, {r,168576+125 x==1638217}, {r,168576+125 x==1673218}, {r,168576+125 x==1678213}, {r,168576+125 x==1683217}, {r,168576+125 x==1687213}}

How many cubes are in the remaining pile, and what is their colour? This is the only integer solution that can be found to any of the above equations.
Select[Map[{#[[1]], Reduce[#[[2]], x, Integers]}&, eqns], (#[[2]]=!=False&)]
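The whole chain of deductions above can be cross-checked outside Mathematica. Here is a hedged Python sketch of the same search (variable names are my own; the constraints are exactly those stated above):

```python
from itertools import permutations

# Side-length candidates: red volume r^3 has 3 digits, blue face area b^2 has
# 2 digits, yellow volume y^3 has 4 digits, with the stated shared digits.
cands = [(r, b, y)
         for r in range(1, 10) for b in range(1, 10) for y in range(1, 22)
         if 100 <= r**3 <= 999 and 10 <= b**2 <= 99 and 1000 <= y**3 <= 9999
         and str(r**3)[0] == str(b**2)[0] and str(b**2)[-1] == str(y**3)[-1]]

# Only (5, 4, 16) gives distinct digits for the letters of NIL, NO and ZERO:
n, i, l = 1, 2, 5    # NIL  = 125  (red volume)
o = 6                # NO   = 16   (blue face area)
z, e, r = 4, 0, 9    # ZERO = 4096 (yellow volume)

vols = {"red": 5**3, "blue": 4**3, "yellow": 16**3}
no, none = 16, 1610  # the two known pile sizes, "NO" and "NONE"

sols = []
for t, h, g in permutations([3, 7, 8]):      # remaining digits for T, H, G
    total = int("".join(map(str, [n, o, t, h, i, n, g])))   # volume "NOTHING"
    for p1, p2, p3 in permutations(vols):    # which pile is which colour
        rem = total - no * vols[p1] - none * vols[p2]
        if rem > 0 and rem % vols[p3] == 0:
            sols.append((p3, rem // vols[p3]))
```

Under these assumptions the search reports a single consistent case: the unknown pile is the blue one, and it contains 21413 cubes.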

Ising model simulation

This posting shows how to use Mathematica (version 6) to create an interactive simulation of a 2-dimensional Ising model, which is based on some work on Boltzmann machines that I did back in the mid 1980s. This nicely illustrates the use of the Manipulate function in Mathematica, which is by far the easiest way of creating interactive graphical demonstrations that I have ever used.

Define a function for randomly initialising the state of the 2-dimensional Ising model.

init[narray_] := array = RandomInteger[{0,1}, {narray,narray}];

Define a function for doing a single Monte Carlo update of the 2-dimensional Ising model:

update[{pe_,pne_,pn_,pnw_}, narray_] := Module[
 {probpatch = {{pnw,pn,pne}, {pe,0.5,pe}, {pne,pn,pnw}}, i, j, di, dj, arraypatch, freq1, freq0},
 {i,j} = RandomInteger[{1,narray}, {2}];
 arraypatch = Table[array[[Mod[i+di,narray,1],Mod[j+dj,narray,1]]], {di,-1,1}, {dj,-1,1}];
 freq1 = Apply[Times, probpatch arraypatch+(1-probpatch)(1-arraypatch), {0,1}];
 freq0 = Apply[Times, (1-probpatch)arraypatch+probpatch(1-arraypatch), {0,1}];
 array[[i,j]] = If[Random[] < freq1/(freq0+freq1), 1, 0]];

Evaluate the Manipulate expression that implements the interactive graphical demonstration.

Manipulate[
 If[narray != Length[array], init[narray]];
 If[u, Do[update[{p2en[[1]], p2nenw[[1]], p2en[[2]], p2nenw[[2]]}, narray], {nupdate}]];
 ArrayPlot[array], (* display the current spin configuration *)
 {{p2en, {0.5,0.5}, "probability(\[RightArrow],\[UpArrow])"}, {e, e}, {1-e, 1-e}, ControlPlacement->Left},
 {{p2nenw, {0.5,0.5}, "probability(\[UpperRightArrow],\[UpperLeftArrow])"}, {e, e}, {1-e, 1-e}, ControlPlacement->Left},
 {{u, False, "update"}, {True, False}, ControlPlacement->Top},
 {{narray, 32, "size"}, 3, 64, 1, Appearance->"Labeled", ControlPlacement->Top},
 FrameLabel->"Ising Model",
 Initialization->(e=0.001; array={}; nupdate=50)]

When Manipulate is evaluated the initial output typically looks like this:

Use the check box at the top of the output window to switch the simulation on and off. The number of Monte Carlo updates that is applied between the display of each "frame" of the simulation is hardwired to be 50.

The 2-dimensional array of "spins" is initially 32 by 32; its size is controlled by the 1D slider at the top of the output window. This slider can be moved at any time, even whilst a simulation is being run.

Each Monte Carlo update is implemented by selecting a spin at random, and calculating 8 probability factors due to its interaction with each of its 8 surrounding spins. Each such factor specifies a multiplicative contribution to the relative likelihood that the central spin has the same/different value compared to its neighbouring spin (this is not the most general interaction that is possible). There are 4 independent probability factors corresponding to the following directions from the central spin: east, north, north-east, and north-west. These probability factors are controlled by the two 2D sliders on the left of the output window. The top 2D slider controls the east (left/right slider movement) and north (up/down slider movement) factors, and analogously the bottom 2D slider controls the north-east and north-west factors.

Denote the probability factor for a pair of spins being the same as p, and for being different as 1-p. The central position of each 2D slider corresponds to a probability factor p=1/2 (i.e. the same contribution whether the spins are the same or different), the left and bottom edges of each 2D slider correspond to p=0 (i.e. force the spins to be different), and the right and top edges correspond to p=1 (i.e. force the spins to be the same). These sliders can be moved at any time, even whilst a simulation is being run.

So, using a 1-dimensional example, the spin configuration 0?0 gives a product of probability factors (1-p)^2 for 010 and p^2 for 000, whereas 1?0 gives p(1-p) for 110 and (1-p)p for 100 (i.e. the same in each case). A Monte Carlo update of the central spin selects ? as being 1 or 0 with a relative probability given by the ratio of the corresponding products of probability factors. This ratio is ((1-p)/p)^2 and 1 in the two examples above, respectively. The 2-dimensional case is a straightforward generalisation of the 1-dimensional case.
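The single-spin update just described can also be sketched in Python, as a hedged translation of the Mathematica `update` function above (the function names and the `rng` hook are my own):

```python
import random

def freqs(patch, probs):
    """Relative likelihoods (freq0, freq1) that the centre spin of a 3x3
    patch should be 0 or 1. probs = (pe, pne, pn, pnw) are the per-direction
    'same-spin' probability factors; the centre factor 0.5 contributes
    equally to both outcomes, as in the Mathematica probpatch."""
    pe, pne, pn, pnw = probs
    prob_patch = [[pnw, pn, pne],
                  [pe, 0.5, pe],
                  [pne, pn, pnw]]
    freq1 = freq0 = 1.0
    for prow, srow in zip(prob_patch, patch):
        for p, s in zip(prow, srow):
            freq1 *= p * s + (1 - p) * (1 - s)   # factor if centre spin = 1
            freq0 *= (1 - p) * s + p * (1 - s)   # factor if centre spin = 0
    return freq0, freq1

def update(array, probs, rng=random.random):
    """One heat-bath Monte Carlo update of a periodic 2-D spin array."""
    n = len(array)
    i, j = random.randrange(n), random.randrange(n)
    patch = [[array[(i + di) % n][(j + dj) % n] for dj in (-1, 0, 1)]
             for di in (-1, 0, 1)]
    f0, f1 = freqs(patch, probs)
    array[i][j] = 1 if rng() < f1 / (f0 + f1) else 0
```

With all four factors at 0.9 and all eight neighbours up, the ratio freq1/freq0 is (0.9/0.1)^8, so the centre spin is almost certain to be set to 1; at p = 1/2 the two likelihoods are equal, matching the 1-dimensional example above.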

Each snapshot of the output below shows the typical result of running a randomly initialised simulation for a few seconds with the 2D slider settings as shown in the snapshot.

Note that extreme values are used for the probability factors in each of these simulations, and because the simulations each have a short duration they do not actually reach equilibrium. For instance, the first simulation enforces such strong positive correlations between adjacent spins that it should eventually degenerate to all the spins having the same colour.

Enjoy playing with this Ising model simulation. It is quite interesting to move the 2D sliders to vary the probability factors as the simulation is running, because the speed of the simulation is sufficiently fast that you get an almost real-time response as the Ising model dynamically adjusts its equilibrium state.

High risk, high payoff research

There is an interesting blog posting at Advanced Nanotechnology on The struggle over high risk, high payoff research, which pours scorn on the current low-risk approach to research funding. I have extracted some of the more interesting snippets below.

The blog posting itself makes the following comments:

...the United States science and technology research community has seen a return to a culture which is less likely to pursue high risk/high payoff technology research.
There is a struggle between those who want more High risk, high payoff scientific and technological research and development and those who want only timid, incremental goals who also ridicule even the description of a high payoff technological possibility.

Farber [who] sits on a computer science advisory board at the NSF [says]:

... he has been urging the agency to "take a much more aggressive role in high-risk research." He explains, "Right now, the mechanisms guarantee that low-risk research gets funded. It's always, 'How do you know you can do that when you haven't done it?' A program manager is going to tell you, 'Look, a year from now, I have to write a report that says what this contributed to the country. I can't take a chance that it's not going to contribute to the country.'"
Charles Herzfeld, a former head of ARPA, says (see here):
...the people that you have to persuade are too busy, don't know enough about the subject and are highly risk-averse ... If the system does not fund thinking about big problems, you think about small problems.

It's all pretty damning stuff. Only slightly tongue-in-cheek, I blame it all on the rise of the use of the spreadsheet as a management and accountancy tool, which makes it far too easy for unimaginative people to become so engrossed with the numbers in their spreadsheets that they overlook the big picture.

Friday, September 21, 2007

Enigma 1461

Here is how Mathematica can be used to solve New Scientist Enigma number 1461 in the 22 September 2007 issue. The solution is optimised for clarity rather than brevity.

Generate a list of all the 24-hour clock squares.
squares = Select[Flatten[Outer[100#1+#2&, Range[0,23], Range[0,59]]], IntegerQ[Sqrt[#]]&]
{0, 1, 4, 9, 16, 25, 36, 49, 100, 121, 144, 225, 256, 324, 400, 441, 529, 625, 729, 841, 900, 1024, 1156, 1225, 1444, 1521, 1600, 1849, 1936, 2025, 2116, 2209, 2304}

Generate all 3-tuples from the list of squares. The convention is that each 3-tuple corresponds to {Tom, Dick, Harry}.
(tuples = Tuples[squares, 3]) // Length

Week 1: filter the list of 3-tuples so that T < D < H and also H = T + D.
(triples1a = Select[tuples, #[[1]]<#[[2]]<#[[3]]&]) // Length
(triples1b = Select[triples1a, #[[3]]==#[[1]]+#[[2]]&]) // Length

Use the cyclical relationship between the weeks to obtain the corresponding results for weeks 2 and 3.
triples2b=Map[RotateLeft, triples1b];
triples3b=Map[RotateRight, triples1b];

Extract the case where Harry's square is the same for all weeks, keeping only the value of Harry's square.
harrysquare = Intersection[triples1b, triples2b, triples3b, SameTest->(#1[[3]]==#2[[3]]&)][[1,3]]

Extract the 3-tuple of squares {Tom, Dick, Harry} for each week that contain the above value of Harry's square.
tdhsquares = Map[Cases[#, {_,_,harrysquare}][[1]]&, {triples1b, triples2b, triples3b}]
{{144,256,400}, {441,841,400}, {625,225,400}}

Extract Tom's squares.
tdhsquares[[All,1]]
{144, 441, 625}
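As a cross-check, the same chain of deductions can be sketched in Python (a hedged translation of the Mathematica above; names are my own):

```python
from math import isqrt

# 24-hour clock squares: times HHMM (0<=HH<=23, 0<=MM<=59) whose value
# 100*HH+MM is a perfect square.
squares = [100 * hh + mm for hh in range(24) for mm in range(60)
           if isqrt(100 * hh + mm) ** 2 == 100 * hh + mm]

# Week 1: T < D < H and H = T + D.
week1 = [(t, d, h) for t in squares for d in squares for h in squares
         if t < d < h and h == t + d]

# Weeks 2 and 3 follow by rotating each triple cyclically.
week2 = [(d, h, t) for (t, d, h) in week1]
week3 = [(h, t, d) for (t, d, h) in week1]

# Harry's square is the one common to the last slot in every week.
harry = ({x[2] for x in week1} &
         {x[2] for x in week2} &
         {x[2] for x in week3}).pop()
toms = [next(w for w in wk if w[2] == harry)[0]
        for wk in (week1, week2, week3)]
```

This reproduces Harry's square of 400 and Tom's squares 144, 441 and 625.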

Thursday, September 20, 2007

Drawing on air

I learn from a report (see here) about a new virtual reality system called "Drawing on Air" that allows artists to create 3-dimensional objects. They say

By putting on a virtual reality mask, holding a stylus in one hand and a tracking device in the other, an artist can draw 3D objects in the air with unprecedented precision. This new system is called “Drawing on Air,” and researchers have designed the interface to be intuitive and provide the necessary control for artists to illustrate complicated artistic, scientific, and medical subjects.


... artists can stylize their curves while drawing by dynamically adjusting line thickness and color. Haptic effects enable artists to intuitively adjust line thickness by applying pressure against an imaginary 3D surface, making drawing in the air feel similar to pushing a paintbrush against paper ...


... once you have this ability to sketch in the air, there are so many different artistic directions you can go with it ...

I'm quite excited by this development, because it gets us closer to having an artistic medium with the richness of the visualisation "studio" that we each have inside our head. The system is still too expensive for everyday use, so I will stick with a programmatic approach to 3-dimensional art for the time being (e.g. see the sculpture here). However, I already have plans in the pipeline to add a simple "Drawing on Air" style interface that gives the artist more intuitive control over the software that creates their artwork; for instance, a standard gamepad could be used to manipulate curved surfaces to form 3-dimensional sculptures, and lots more besides.

Friday, September 14, 2007

Enigma 1460

Here is how Mathematica can be used to solve New Scientist Enigma number 1460 in the 15 September 2007 issue.

This problem is designed to make brute force attack difficult because the set of candidate solutions starts off with (9!)^2=131681894400 elements, i.e. it is the Cartesian product of 2 sets of 9! elements. However, a methodical approach can be used to prune this set down until there is only one case left, i.e. the solution. The main trick is to prune each of the 9! element sets as much as possible before forming the Cartesian product of the residual sets, and keeping track of how many candidate solutions remain at each stage.

Create a list of digits 1,...,9 and a list of all permutations of this.


Create a list of the 9 colours and a list of all permutations of this.


Without loss of generality, assume that "grey" is at circular position 1, and select all digit permutations where each of circular positions 2, ... , 9 (i.e. not including the "grey" position) has one odd and one even digit on its two neighbours.

Select the colour permutations where "hazel", "indigo", "jade" and "khaki" are in consecutive clockwise circular positions. There is no need to look for wrapped-round cases because "grey" is locked in circular position 1.

Build a Cartesian product list of all remaining digit permutations and colour permutations.


Select the cases where the "hazel", "indigo" and "jade" digits add up to give the "khaki" digit.

Select the colour permutations where "lemon", "mauve" and "navy" are in consecutive clockwise circular positions. There is no need to look for wrapped-round cases because "grey" is locked in circular position 1.

Select the cases where the "lemon" and "mauve" digits add up to give 2 times the "navy" digit.

Select the cases where the "hazel" digit is the same as the number of times you can find a digit equal to the sum of the digits on its neighbours. This leaves just one possibility which is the required solution.
{{1,grey}, {3,hazel}, {2,indigo}, {4,jade}, {9,khaki}, {5,orange}, {8,lemon}, {6,mauve}, {7,navy}}
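The Mathematica code blocks for this post have not survived, so here is a hedged Python reconstruction of the pruning strategy described above (all names are my own; the constraints are exactly those listed in the text):

```python
from itertools import permutations

COLOURS = ["hazel", "indigo", "jade", "khaki", "lemon", "mauve", "navy", "orange"]

# Digit permutations: positions 0..8 clockwise, "grey" locked at position 0;
# every other position must have one odd and one even digit as neighbours.
def parity_ok(d):
    return all((d[(i - 1) % 9] + d[(i + 1) % 9]) % 2 == 1 for i in range(1, 9))

digit_perms = [d for d in permutations(range(1, 10)) if parity_ok(d)]

# Colour permutations with "grey" at position 0 and the two runs of colours
# in consecutive clockwise positions (no wrap-around is possible past grey).
def runs_ok(c):
    s = "|".join(c)
    return "hazel|indigo|jade|khaki" in s and "lemon|mauve|navy" in s

colour_perms = [("grey",) + c for c in permutations(COLOURS)
                if runs_ok(("grey",) + c)]

solutions = []
for c in colour_perms:
    for d in digit_perms:
        digit = dict(zip(c, d))
        if digit["hazel"] + digit["indigo"] + digit["jade"] != digit["khaki"]:
            continue
        if digit["lemon"] + digit["mauve"] != 2 * digit["navy"]:
            continue
        # hazel's digit counts the positions equal to the sum of their neighbours
        hits = sum(d[i] == d[(i - 1) % 9] + d[(i + 1) % 9] for i in range(9))
        if hits == digit["hazel"]:
            solutions.append(list(zip(d, c)))
```

Filtering the digit and colour permutations separately before combining them keeps the search at a few tens of thousands of cases instead of the full (9!)^2, and the only survivor is the solution shown above.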

Wednesday, September 12, 2007

Hunting for the fundamental laws of physics

Stephen Wolfram has posted here some interesting news about one of his hobbies - hunting for the fundamental laws of physics, where he outlines how he is both developing and using Mathematica to search for fundamental laws of physics that generate simulated universes which have properties that resemble our own real universe.

The power of his approach lies not only in his use of Mathematica, but more fundamentally in his use of very few axioms to define what the fundamental laws of physics are in the first place. His basic object is a network (i.e. nodes and links-between-nodes), and his basic operation is the mutation of a piece of the network (via the application of a set of rules), which thus allows the network structure to have a dynamical behaviour. In this approach the fundamental laws of physics are determined by the choice of the set of network update rules.

It turns out that various simple consistency criteria cause this approach to give rise to both special relativity and general relativity. That is impressive, starting from a rule-based approach!

One of the challenges is to determine the consequences of a particular choice of network update rules, and to ascertain if they correspond to the behaviour of our known real universe. In general, the network behaviour in response to its update rules can be extremely complicated, and working out what is going on can thus be very difficult and time consuming. The development of Mathematica itself is partly driven by the need to create tools for addressing problems such as this.

Wolfram says that he has not yet found a viable candidate for the fundamental laws of physics using this approach, but that he is hard at work both developing and using Mathematica to achieve this goal. As he says

I certainly think it'll be an interesting - almost metaphysical - moment if we finally have a simple rule which we can tell is our universe. And we'll be able to know that our particular universe is number such-and-such in the enumeration of all possible universes.

I wish him luck in this venture. It would be very impressive if he found that a 3-line Mathematica program was all that was needed to generate the behaviour of our known real universe. Even if he is destined not to discover the fundamental laws of physics using this approach, he will nevertheless have created along the way a very useful toolbox for doing lots of other things, i.e. Mathematica.

Tuesday, September 11, 2007

SciVee - a YouTube for science

I learn from ars technica here about SciVee which will

...provide a form of scientific communication that's intermediate between abstracts (which take a few minutes to read) and a full reading of a paper (which can take hours). The primary type of video presentation that SciVee intends to host could be called a "pubcast," in which a researcher provides a short video description of their work that's synchronized to the display of text from the paper.

and also about the process of creating a SciVee presentation

What's the incentive for researchers to put the effort into creating a pubcast? "There's actually a low barrier to entry," Bourne said. "All you need is a webcam and iMovie or Movie Maker." The SciVee site has tutorials for recording and editing video content on both Mac and Windows platforms. Once the resulting video is uploaded, the site's software walks users through synchronizing it with the text of the paper.

I think SciVee will catch on very quickly indeed. There is a large gulf between reading just a paper's abstract and reading the entire paper, a gulf which SciVee fills well; and because it is video it is (potentially) a very engaging medium with which to attract people's attention. No doubt I will try out SciVee before long, and I will write about my impressions of it here.

Sunday, September 09, 2007

The Spline - a movie

Here is a little doodle that I created whilst waiting for Sunday lunch to cook:

Apologies to Channel 4 for my blatant plagiarism of their animated logo idea.

Saturday, September 08, 2007

Boltzmann brains

In the 18 August 2007 issue of New Scientist there is an article Spooks in space by Mason Inman about so-called Boltzmann brains. Quoting from the introduction to the article

Boltzmann posed the question of whether the universe could have arisen from a thermal fluctuation; his work presaged the idea that a fluctuation could also give rise to a conscious entity that sees the universe. In this regard Boltzmann brains are not necessarily actual brains, but rather are a metaphor for observers of the universe that might appear spontaneously.

Thus a Boltzmann brain is a conscious entity that instantaneously pops into existence as a spontaneous fluctuation of matter into a highly ordered form, rather than gradually coming into existence like us through the slow process of evolution gradually rearranging matter into a highly ordered form. The probability of a whole brain suddenly popping into existence in this way is extremely small, because the internal structure of the Boltzmann brain has to be exactly right so that it works as a brain, so you have to wait a very long time for there to be a significant likelihood of a Boltzmann brain coming into existence. This likelihood problem is very deftly avoided by evolution, because it breaks the overall problem of making a brain into very small steps, each of which is much more likely to occur than the whole big jump from start to finish; of course, evolution doesn't know in advance that this is what is going on.

More generally, you could imagine a whole spectrum of processes ranging from the instantaneous rearrangement of matter at one extreme (e.g. Boltzmann brain) all the way through to the gradual rearrangement of matter at the other extreme (e.g. evolution).

In a sufficiently large and long-lived universe Boltzmann brains will come into existence, because then the extremely small likelihood of one coming into existence at any given place and time is offset by the large number of alternative places and times that are available in the whole universe (i.e. space-time). Our universe looks exactly like the sort of place where these conditions are (or will be in the far future) satisfied, so Boltzmann brains will eventually come into existence here.

The eventual existence of Boltzmann brains worries cosmologists, because the laws of physics that are deduced by a Boltzmann brain (which is a conscious observer) would be different from the laws that are deduced by us. The reason for this difference is that Boltzmann brains would typically come into existence a very long time in our future, when the universe is much larger, emptier and colder than now. The typical observations made by a Boltzmann brain would therefore be very different from the typical observations made by us, so a Boltzmann brain would deduce very different laws of physics from ours.

This fact worries cosmologists so much that they would very much like to find a way to "banish" Boltzmann brains from existence, for instance by finding some property of the currently known laws of physics that prevents favourable conditions for Boltzmann brains from ever arising.

I am not as worried as cosmologists are about the potential existence of Boltzmann brains who would deduce different laws of physics from us. My reasoning is as follows:

  1. Firstly, we are not special in the grand scheme of things, because we are basically the same type of information processor as Boltzmann brains. Our brains and Boltzmann brains are two extreme examples of the outcome of a spectrum of processes that have rearranged matter in the universe. Our brains have come into existence via the gradual process of evolution, which stores its intermediate results in the form of DNA, and then uses this as the starting point for the next step in the rearrangement of matter, whereas Boltzmann brains pop into existence without going via any of these intermediate steps. There is also a whole spectrum of intermediate cases (e.g. partly DNA and partly random chance) that one could imagine; the only way that DNA can evolve is to live a little way into this intermediate regime where there is a small element of random chance. Our brains are not particularly special in this spectrum of possibilities, other than because evolution using DNA (or something analogous) is the only process that has a significant likelihood of making something as complex as our brains in a universe that is as (relatively) young as ours. The other possible processes for making brains, which lie nearer the Boltzmann brain end of this spectrum, will need much longer to have a significant likelihood of happening.
  2. Secondly, the laws of physics that we deduce are (at least partly) environmentally determined, so it doesn't matter too much if observers living in very different universes (or at very different times in the same universe) deduce different laws of physics. We set up our experiments and make observations, then we discover a low-complexity "explanation" of all of these observations and call it the "standard model" (or whatever). The scientific method is driven by whatever experimental observations we make, and we have little choice in this other than the freedom to choose which particular experiments we conduct. We grandly give the name "laws of physics" to our explanation of all of our observations, but the way in which this explanation is constructed leaves a lot of room for doubt about its uniqueness or inevitability. It may yet turn out that the laws of physics are somehow unique/inevitable, but that is far from being obvious right now. It is much safer to keep an open mind, and to assume that the laws of physics are simply what we observe-then-guess them to be, and to be thankful that mathematical beauty and elegance has taken us as far as it has in constraining the precise form of the laws of physics. It may yet turn out that there is a lot more mileage to be had in the mathematical beauty/elegance approach, but we should not assume that this is inevitable.

So we are not special in the grand scheme of things, and the laws of physics that we deduce are (at least partly) environmentally determined. Together, these two points mean that I am not as worried as cosmologists about the potential existence of Boltzmann brains.

Wouldn't it be nice to write a computer simulation of a synthetic universe whose properties gradually changed as the simulation proceeded, in which different types of information processing entity (brains, if you wish) emerged at different times during the simulation, and which interacted with their simulated environment to deduce what they called the "laws of physics" that governed their existence? None of these information processing entities would be "special" in any way (although they might believe themselves to be special!), and the "laws of physics" that they deduced would be environmental.

Enigma 1459

Here is how Mathematica can be used to solve New Scientist Enigma number 1459 in the 8 September 2007 issue.

This is a straightforward digit manipulation problem which can easily and quickly be solved by a brute force search. In this problem there is no point in trying to do any clever programming.

Initialise the counter.
n = 1;

Step the counter until the condition on its digits is satisfied.
While[Not[MemberQ[Map[FromDigits, Permutations[IntegerDigits[n]]], 12n/11]], n++]

Display the counter value that satisfies the condition.
n
1683
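For completeness, here is a hedged Python sketch of the same brute-force search, where a sorted-digit comparison replaces Mathematica's MemberQ over FromDigits of all permutations (the two conditions are equivalent here, since 12n/11 > n can never match a permutation that drops a leading zero):

```python
def is_digit_permutation(a, b):
    """True when b's digits are a rearrangement of a's digits."""
    return sorted(str(a)) == sorted(str(b))

n = 1
# 12n/11 must be an integer (so n is a multiple of 11) and a
# rearrangement of n's digits, mirroring the Mathematica condition.
while not (12 * n % 11 == 0 and is_digit_permutation(n, 12 * n // 11)):
    n += 1
```

The counter stops at 1683, whose rearrangement 1836 is exactly 12/11 times it.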

Thursday, September 06, 2007

West Malvern sunset

This is my favourite photo of sunset as seen from my house high up on the west side of the Malvern Hills. I have never seen so many different types of cloud layer in the same small area of the sky.

Unfortunately, this year's summer weather has been useless so there were no opportunities to take good sunset photos. The photo above is one from my archive which I took on 26 July 2001.

Caleb Clarke's favourite painting

This one is for Caleb Clarke. He painted this remarkable picture, and then later sold it to me. I think he regrets having sold it, so I am posting a photo of it here for him and others to enjoy.

It was difficult to take this photo because I had mounted the painting in a glass-fronted frame, so I took the photo from slightly to one side in order to avoid glare from reflection of the camera flash in the glass, and then after that I had to do a bit of image processing to massage the photo back to the correct rectangular shape.

Enjoy, Caleb!

Wednesday, September 05, 2007

The Spline - a musical theme

To play this music click here.

Singularity Summit 2007

The Singularity Summit 2007 is about to start. I wish I could be there, but I will have to content myself with being an observer on the sidelines.

They say in the introduction on the website:

The Singularity Institute for Artificial Intelligence presents the Singularity Summit 2007, a major two-day event bringing together 18 leading thinkers to address and debate a historical moment in humanity's history – a window of opportunity to shape how we develop advanced artificial intelligence.

The introduction goes on to explain what "The Singularity" is:

For over a hundred thousand years, the evolved human brain has held a privileged place in the expanse of cognition. Within this century, science may move humanity beyond its boundary of intelligence. This possibility, the singularity, may be a critical event in history, and deserves thoughtful consideration.

That sounds like hype, but it is not.

Artificial intelligence is a very active area of academic research, and useful software continuously spins off from this research. All such software applications are very specific in their scope, but their very specificity means that each such application can excel in its own chosen area. Currently, there is no general-purpose artificial intelligence software that can, as a human can, excel at a wide range of activities. The idea of The Singularity is that sometime during the 21st century we will have advanced artificial intelligence software (and hardware) technology to the point where human-level performance (and beyond) becomes not only possible but highly likely.

Each generation of artificial intelligence will assist in the development of the next generation (this is what happens even now), and this development process will accelerate as the need for external human assistance in the design process becomes less with each advancing generation of AI, until AI can do the development all by itself. Once AI can develop its own next generation without human assistance the pace of progress can become very rapid indeed, and if external resources such as materials and energy were unlimited (which they are not, of course!) then the pace of progress would become so large that it would seem to be infinite (although it is not, of course!).

Hence the use of the phrase "The Singularity", because it marks a fairly sharp transition between the human intelligence that dominates now and the advanced artificial intelligence that will exist (I hesitate to say "dominate"!) in the future. The AI technology that emerges from The Singularity will be thinking thoughts that we will not be able to accommodate within our limited-ability biological brains, so making predictions about the post-singularity era is fraught with difficulties. There is a more detailed discussion of the term "The Singularity" given here.

What sort of things would have to happen in order to make The Singularity (or a smoothed-out version thereof) possible?

  1. We would use an advanced form of nanotechnology to evolve & grow massively parallel fine-grain computer architectures, rather than use the current approach where we design & build every small detail of the computer architecture ourselves. This type of evolution would use an artificial form of DNA to record the long-term state of the evolutionary process, and this type of growth would make use of molecular self-organisation to assemble the computer. This is essentially a fine-grain form of artificial life.
  2. We would use external training to teach the computer what its observed behaviour should be, rather than internal programming to dictate to the computer what its internal workings should be. This training process would involve interaction of the AI with its environment via sensors (e.g. inputs such as eyes and ears) and effectors (e.g. outputs such as touch and speech), and one possible training environment might be Second Life (or something similar).
  3. We would largely remove the artificial distinction that exists between software and hardware, so that each particular behaviour of electrons/molecules/etc in the computer has a unified existence rather than being split into hardware+software. Currently, the use of programmable architectures has some of the spirit of this unified approach. A useful behaviour learnt by one generation could optionally be hard-wired into the next generation (i.e. Lamarckism), but it is not clear that this would be easy to do in a fine-grain architecture that arises through evolution & growth.

An advantage of using our own technology (rather than the outcome of biological evolution) to implement an artificial intelligence is that we can optionally hard-wire some of its behaviour. This could be achieved by "steering" (e.g. selective breeding) the process of evolution & growth that gives rise to the computer architecture in the first place, which could be used to influence the behaviour of the AI in many ways. This offers the possibility of using our human influence to create "nice" advanced AI, but unfortunately the same technique could also be used to create "nasty" advanced AI.
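The "steering" idea can be illustrated by a minimal selective-breeding loop (my own hypothetical Python sketch, with bit strings standing in for evolved architectures; none of this comes from the actual nanotechnology proposal above):

```python
import random

def evolve(target, pop_size=50, mutation=0.02, max_gens=10000, seed=0):
    """Breed a population of bit-string "genomes" towards `target`.

    Returns the number of generations needed. The selection step is the
    "steering": we decide which individuals survive, and thereby which
    behaviours the next generation inherits.
    """
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda g: sum(a == b for a, b in zip(g, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    gens = 0
    while gens < max_gens and max(fitness(g) for g in pop) < n:
        # selective breeding: keep the fitter half unchanged,
        # refill the population with mutated copies of survivors
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [
            [(1 - bit) if rng.random() < mutation else bit
             for bit in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
        gens += 1
    return gens
```

The uncomfortable point in the paragraph above is visible right in the code: the `target` is whatever the breeder chooses, "nice" or "nasty" alike.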

One thing that always comes up in conversation about this sort of advanced artificial intelligence is "do we need it?", or "do we want it?", and so forth. I'll cut straight to the bottom line. The applications of this technology are so wide-ranging, the potential for enhancing the quality of our lives is so great, and the potential for defending ourselves against an aggressor who might deploy this technology against our better interests is so automatically inbuilt (yes, the "arms race" argument!), that I see no way of suppressing it. We have to learn to live with it. If this sort of advanced artificial intelligence is technically possible (and there is no obvious reason why it is not), then it will eventually come into existence no matter how much we try to stall the process.

So I will be watching what happens at the Singularity Summit 2007 with great interest, and I think you should too.

Update: There is now a report and discussion on the presentations at the Singularity Summit at Reason magazine here entitled "Will Super Smart Artificial Intelligences Keep Humans Around As Pets?". I could find only one mention of "nanotechnology", and then only in the context of optimising resource usage (i.e. making things smaller to get more computing done). There was no mention of the clever use of nanotechnology, such as using synthetic DNA to evolve & grow massively parallel fine-grain computer architectures, as I discussed above. I find this omission very odd indeed.

Update: Here are links to some live-blogging on the Singularity Summit 2007 to be found on David Orban's blog, which I heard about here on Tommaso Dorigo's blog:

Singularity on the front page (of The San Francisco Chronicle)
Liveblogging the Singularity Summit 2007?
Liveblogging the Singularity Summit 2007 - Day One - morning
Liveblogging the Singularity Summit 2007 - Day One - afternoon
Liveblogging the Singularity Summit 2007 - Day Two - morning
Liveblogging the Singularity Summit 2007 - Day Two - afternoon

Well, I said "So I will be watching what happens at the Singularity Summit 2007 with great interest, and I think you should too." If what I read in the above live-blogs is even a vaguely accurate report of the sort of discussion that went on at the Singularity Summit, then I am very disappointed by the airing of so many apparently superficial opinions there. Maybe the purpose of the event was for the speakers to strut their stuff before the adoring eyes of the press, and the meat of the arguments was hidden away behind the scenes. What a pity.

Update: The presentations given at The Singularity Summit 2007 are now online at

Second Life grid

Linden Lab has announced (see here) the launch of the Second Life Grid. This is an exciting development that moves the popular Second Life virtual world towards being

a resource for businesses, organizations and educators for creating a successful virtual presence on the Second Life Grid platform

Linden Lab also says

the Second Life Grid will enable these organizations to understand and create meaningful 3D immersive experiences ... The platform, tools and programs available on will provide the foundation needed to create a successful virtual world experience.

From the users' point of view the Second Life Grid offers all the advantages of virtual worlds in general, with none of the disadvantages of the somewhat anarchic goings-on in Second Life itself. This is a very exciting development because finally we have the promise of a programmable computing environment which we actually inhabit, rather than being limited to using software that we interact with whilst remaining firmly outside of it (as it were).

Ultimately, environments such as this will be so realistic that it will be easy to forget where the real you is, and then the meaning of "real you" becomes rather ambiguous. Have a look at Counterfeit World (a.k.a. Simulacron-3), a book-length exploration of this problem; I read it when I was a teenager and was hooked from that point onwards.

One of my interests in programmable virtual worlds such as this is that I can use them to create simulations that mimic what goes on inside my head. I am a highly visual thinker so I understand a concept by expressing it visually as a 3-dimensional simulation in my mind's eye. If I can formulate a concept visually then I can make fluid use of it (think "bird's eye view"), and if I can't then I have to use it in a plodding one-step-at-a-time sort of way (think "worm's eye view").

Prior to the advent of virtual world technology my visualisations have spilt over into the real world in the form of diagrams that I draw to illustrate selected freeze frames of the visualisation, but to other people these diagrams can appear out of context so their full meaning is elusive. However, the promise of virtual world technology is that I can now recreate a more faithful representation of what goes on inside my head when I am visualising.

It is hard work to create a rich virtual world using the current primitive programming tools in Second Life, i.e. the Linden Scripting Language (LSL). I find that currently the best approach is to do as much of the programming work as possible outside Second Life (using Mathematica, in my case), and then to upload the results to Second Life. Even when I use the best programming tools available to me, the overall process of generating virtual world simulations is hard work, but at least it is now feasible, whereas before the existence of Second Life it was impossible.

It will be interesting to see how virtual world programming tools develop over the next few years. In the meantime, to be as productive as possible using virtual world technology, it is best to tailor your ambitions to the sorts of things that are relatively easy to do using current programming tools, which means that you have to experiment continuously with the tools to see what they can do for you. For me that means building up my "flying time" in Second Life, and concentrating on doing work there rather than mucking about ... yes, there are distractions!

Tuesday, September 04, 2007

Enigma 1458

Here is how Mathematica can be used to solve New Scientist Enigma number 1458 in the 1 September 2007 issue.

Define the digits used in the 7-digit number.
u = {n6, n5, n4, n3, n2, n1, n0};

Define the reordered digits used in the 6-digit number.
v = {n0, n1, n2, n3, n5, n4};

Define the 7-digit number.
a = FromDigits[u] // Expand
n0 + 10 n1 + 100 n2 + 1000 n3 + 10000 n4 + 100000 n5 + 1000000 n6

Define the 6-digit number.
b = FromDigits[v] // Expand
100000 n0 + 10000 n1 + 1000 n2 + 100 n3 + n4 + 10 n5

Define some inequality constraints on the digits.
c = Apply[And, Map[0 <= # <= 9 &, u]]
0 <= n0 <= 9 && 0 <= n1 <= 9 && 0 <= n2 <= 9 && 0 <= n3 <= 9 && 0 <= n4 <= 9 && 0 <= n5 <= 9 && 0 <= n6 <= 9

Find solutions for the digits in the domain of integers. Searching for 3 solutions and finding only 2 demonstrates that there are only 2 solutions, one of which is the trivial solution with all digits zero.
s = FindInstance[a == 3 b && c, u, Integers, 3]
{{n6->0, n5->0, n4->0, n3->0, n2->0, n1->0, n0->0}, {n6->2, n5->9, n4->3, n3->9, n2->9, n1->7, n0->9}}

Verify the 2 solutions.
a == 3 b /. s
{True, True}
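For readers without Mathematica, the same search can be cross-checked by brute force. Here is a Python translation of mine (not part of the original solution), which simply tries every candidate 6-digit number b and tests whether a = 3b has the required digit reordering:

```python
# Brute-force cross-check of the FindInstance result.
# da holds the digits n6 n5 n4 n3 n2 n1 n0 of the 7-digit number a;
# b must read n0 n1 n2 n3 n5 n4, the reordering used in the puzzle.

def solve():
    solutions = []
    for b in range(10**6):      # all 6-digit candidates, leading zeros allowed
        a = 3 * b               # the 7-digit number is exactly 3 times b
        da = f"{a:07d}"         # digits n6 n5 n4 n3 n2 n1 n0
        n6, n5, n4, n3, n2, n1, n0 = da
        if f"{b:06d}" == n0 + n1 + n2 + n3 + n5 + n4:
            solutions.append((a, b))
    return solutions
```

This finds the same two solutions as FindInstance: the trivial all-zero case, and a = 2939979 = 3 × 979993.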

Sunday, September 02, 2007

Computer assisted sculpture

I am going to show you a way of creating sculptures using Mathematica. For the time being, I will show you just one finished example that I prepared earlier, but I will return to this theme in the future to show more generally how you can do computer-assisted sculpture, and lots more besides.

Here is the finished object (i.e. the letter "X") displayed using Mathematica's tasteful 3D rendering:

The basic trick to creating this sort of sculpture is to start with a sheet of elastic "paper", and to then stretch and fold it to the required shape. The allowed moves are basically the same as in origami, except for the fact that the "paper" used here is elastic. Also, in the example shown here the sheet starts off curled round into a cylinder.

The simplest way to see how the cylinder is deformed into the final letter "X" is to see a video of the whole process:

The various steps shown in the above video are:
  1. Start with a cylinder.

  2. Pinch the top and bottom of the cylinder to bring the front and back sheets of its surface together. The aim of this is to create two separate tubes that will eventually become the left and right halves of the "X". At this point in the video there is an artefact where the front and back sheets of the surface pass through each other; this is a side effect of the interpolation method that I used to create intermediate frames in the video.

  3. Fill out the waist of the above surface to compensate for the fact that the pinching operation (2) has made the front and back sheets of the surface touch all of the way from top to bottom. The aim of this is to recreate a 3D volume contained between the front and back sheets of the surface.

  4. Vertically stretch the left and right tubes of the surface. The aim of this is to begin to make these tubes look a bit more like what they need to be to make an "X".

  5. Bend the top and bottom ends of the left and right tubes outwards. The aim of this is to make these tubes look even more like what they need to be to make an "X".

  6. Vertically constrict the middle of the surface, and stretch the tubes vertically. The aim of this is to accentuate the left and right tubes of the surface, which makes them look like the required "X".

  7. Sharpen the edges of the surface. The aim of this is to make the final shape of the "X" cleaner and crisper.

Each of the steps above is a simple deformation of the surface, and the sequence of steps is carefully arranged so that the surface gets gradually deformed towards the required shape, i.e. the letter "X". More generally, in addition to the basic deformation operations used above, a different starting shape for the surface could be used (e.g. a plain 2D sheet), or more complex operations could be used such as cutting/gluing the surface to create any sculpture that you want.
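To make the idea concrete, here is a minimal sketch of my own (in Python rather than Mathematica; the actual deformations behind the "X" are more elaborate and not reproduced here) of a parametric cylinder and a simple "pinch" of the kind used in step 2:

```python
import math

def cylinder_point(theta, z, radius=1.0):
    """Point on the starting cylinder, parameterised by angle theta and height z in [-1, 1]."""
    return (radius * math.cos(theta), radius * math.sin(theta), z)

def pinch(point, strength=0.9):
    """Pull the front and back sheets together near the top and bottom.

    The y (front-back) coordinate is scaled down as |z| approaches 1,
    leaving the waist (z = 0) untouched.
    """
    x, y, z = point
    factor = 1.0 - strength * z * z   # 1 at the waist, (1 - strength) at the ends
    return (x, y * factor, z)

# Sample a coarse grid of the pinched surface: 24 angular steps x 13 height steps.
surface = [pinch(cylinder_point(2 * math.pi * i / 24, -1 + 2 * j / 12))
           for i in range(24) for j in range(13)]
```

Composing a handful of such small, smooth deformations, applied in a carefully chosen order, is all that the full sculpture process amounts to.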

In the future I will return to this theme to describe it in more detail. I will also post a link here to a more detailed description of the steps, including complete Mathematica code.