2008년 5월 23일 금요일

Programming a board game AI

This semester, the final project of the problem solving class in the computer science department was to program an AI for a board game called Halma, a variant of which is often known as Chinese Checkers.

A typical board game AI consists of the following parts: a move generator, an evaluation function, and a tree search. As the name implies, the move generator generates possible moves; it encodes the rules of the game, judging which moves are legal and which are not. If the move generator encodes the rules, the evaluation function encodes the "intelligence"; it is a function from the state of the game to some number. Usually, a larger number means that the game is favorable to the player. But the evaluation function doesn't need to be perfect; that's where tree search comes in. Tree search can "amplify" the minimal intelligence of the evaluation function.

Because of such amplification effects, programming a board game AI is an eerie experience. When you program an unfamiliar game, it is routine for your program to outperform you. It is even more surprising that the dumbest evaluation function can beat you. Often an evaluation function as simple as "subtract the number of moves the opponent can make from the number of moves I can make" works surprisingly well. Notice that this function is not even game-specific.

In tree search, the program considers all moves both players can make to a certain "depth", and passes the resulting positions to the evaluation function. The program assumes that the opponent tries to minimize the value of the evaluation function, and it tries to maximize the value after its own move. The secret is that the program can examine a lot of positions quickly. In my case, the program searched to a depth of 5 moves and examined millions of positions for a single move. This sheer quantity of positions compensates for the poor quality of the evaluation function.
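The three parts can be sketched in a few lines. The following is a minimal illustration, not the Halma program itself: the toy game here is a subtraction game (take 1 or 2 stones from a pile; the player who cannot move loses), and the names generate_moves, evaluate, and minimax are my own, for illustration only.

```python
def generate_moves(pile):
    """Move generator: encodes the rules by listing the legal moves."""
    return [take for take in (1, 2) if take <= pile]

def evaluate(pile):
    """Evaluation function: the 'mobility' heuristic from the text.
    In this toy game both players see the same moves, so we simply
    count the side to move's options."""
    return len(generate_moves(pile))

def minimax(pile, depth, maximizing):
    """Tree search: assume the opponent minimizes our score while
    we maximize it, down to a fixed depth."""
    moves = generate_moves(pile)
    if not moves:                   # the side to move is stuck and loses
        return -1000 if maximizing else 1000
    if depth == 0:                  # depth limit: fall back on the heuristic
        return evaluate(pile) if maximizing else -evaluate(pile)
    if maximizing:
        return max(minimax(pile - m, depth - 1, False) for m in moves)
    return min(minimax(pile - m, depth - 1, True) for m in moves)
```

Searching from a pile of 3 shows the side to move losing with perfect play (score -1000), while a pile of 4 is a win (1000); in this game the multiples of 3 are the losing positions.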

There is a rich literature on programming board game AIs. After all, people programmed computers to play chess as early as 1956. At that dawn of programming, valiant programmers at the Los Alamos laboratory programmed their huge cranking machine, which was actually less capable than the cheap graphing calculators of today, to play a simplified version of chess on a 6 by 6 board (see Wikipedia, Los Alamos chess). It won a game against a human who had just learned the game. We have come a long way since then.

2008년 5월 13일 화요일


Shen Kuo (沈括) was a Chinese scientist of the 11th century. His work, Meng Xi Bi Tan (夢溪筆談), contains his scientific writings.

In one particularly stunning passage, he describes a theory of land formation by sedimentation, and evidence for coastline movement. The following is the Chinese text and my translation. I obtained the Chinese text of Meng Xi Bi Tan from Project Gutenberg.

余奉使河北, 邊太行而北, 山崖之間, 往往銜螺蚌殼及石子如鳥卵者, 橫亙石壁如帶. 此乃昔之海濱, 今東距海已近千里. 所謂大陸者, 皆濁泥所湮耳. (卷二十四)

When I was sent to Hebei, I traveled north along the Taihang Mountains; between the cliffs, there were often seashells and stones like bird's eggs, crossing the stone walls like a belt. Once this was the seashore, though the sea is now almost a thousand li to the east. So-called continents are all submerged mud. (Volume 24)

This insight is comparable to that of James Hutton, a Scottish geologist of the 18th century, who is considered the father of modern geology. In Concerning the System of the Earth, its Duration and Stability, printed in 1785, he wrote:

Hence we are led to conclude, that the greater part of our land, if not the whole had been produced by operations natural to this globe; but that in order to make this land a permanent body, resisting the operations of the waters, two things had been required; 1st, The consolidation of masses formed by collections of loose or incoherent materials; 2dly, The elevation of those consolidated masses from the bottom of the sea, the place where they were collected, to the stations in which they now remain above the level of the ocean.

James Hutton is often unappreciated, but he must be considered an equal of Isaac Newton and Charles Darwin. He published one of the most profound ideas in the history of science: that the Earth is old. Like Darwin's theory, his theory challenged Christian doctrine, since it was incompatible with the account in Genesis. And it was this idea that the Earth is old which, a half century later, let Darwin think there had been enough time for all life to evolve.

2008년 5월 4일 일요일

Witch on a journey

Some think that it is futile to translate poems, or lyrics for that matter. Many people try anyway. Here is my take on translating one of my favorite Korean pop songs.

Witch on a journey
included in "Welcome To My Beach", Kona (1997).

Now listen to my words
at least once a day
clean your crystal balls
get up early in the morning
drink a cup of white milk
and three lemon candies a day
don't forget to brush your teeth
before you sleep

A new wind is blowing for your journey
now raise your little broomstick
and fly to the blue sky
don't worry
and remember

Our beginning more mysterious than magic
the night and the shining promise
I will believe and wait
for the day you will come back to me
and we will start again
new days
in the wonderful world

Black cat's name's Pippi
and black dress is ironed
have you forgotten anything?
check again!


Risk assessment

I know I am behind schedule in writing blog posts. Out of desperation, here is a quick translation of a short piece I wrote in Korean.

Risk assessment

In assessing risks, it is meaningless to assert that the risk of A is high because some B has a high probability of causing A. The high probability of B causing A should be considered together with the probability of B itself.

Here is an example of the fallacy:

Mad cow disease has a high probability of causing death. Therefore, one is highly likely to die of mad cow disease.

Compare with:

Mad cow disease has a high probability of causing death. However, mad cow disease is rare. Therefore, one is unlikely to die of mad cow disease.
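In symbols, the point is that P(death from B) equals P(B) times P(death given B), not P(death given B) alone. A toy calculation makes this concrete; the numbers below are made up for illustration, not real epidemiological figures.

```python
# Made-up, illustrative numbers -- not real epidemiological data.
p_disease = 1e-6              # probability of contracting the disease: rare
p_death_given_disease = 0.9   # "high probability of causing death"

# The quantity relevant to risk assessment is the joint probability:
p_death_from_disease = p_disease * p_death_given_disease
print(p_death_from_disease)   # a very small number, despite the 0.9
```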

2008년 4월 21일 월요일

Newcomb's paradox

Newcomb's paradox is an interesting thought experiment. Some philosophers think it has to do with the problem of free will and causality. Some think it is silly. There are many slightly differing formulations, and some think the details of the formulation matter a lot. I don't take the paradox too seriously and I don't think the details matter; nevertheless, it is very interesting to observe people's reactions when they hear this paradox for the first time.

So here is my version of the paradox.

You are presented with two boxes, A and B. Box A is transparent, and inside it is $1,000. Box B is opaque. You are presented with two choices: either you can take both boxes, or you can take box B only. And then there is the predictor. With high accuracy, the predictor predicts your choice. If it predicts you will take both boxes, it puts nothing in box B. If it predicts you will take box B only, it puts $1,000,000 in box B. What is your choice?

Argument 1. Box B has either $1,000,000 or nothing. The predictor has already predicted my choice and decided the content of box B, so my choice doesn't affect the content of box B in any way. Taking both boxes is always superior to taking box B only, by the amount of $1,000. Therefore I take both boxes.

Argument 2. Since the predictor is highly accurate, if I choose both boxes it is highly likely that I will find box B to be empty, and my income will be $1,000. Likewise, if I choose box B only, it is highly likely that I will find box B to contain $1,000,000, and my income will be $1,000,000. Therefore I take box B only.

Note that the first argument is independent of the predictor's accuracy. Also note that the mechanism of the predictor's prediction is completely unspecified, and you have no way to assess its accuracy other than that it is "highly accurate".

Let's change that. Suppose that you are an observer, and you observed 10 (100, 1000, whatever) people choosing either both boxes or one box. Suppose that you observed the predictor's prediction to be always correct. Would it change your choice? Does the number of observations matter? Note that if you support the first argument, you have no reason to change your choice.

Also note that unless the mechanism of the prediction is specified, you can argue that past performance is no proof of future performance, and that to think otherwise is to succumb to the gambler's fallacy. The gambler may think that if a coin toss has turned up heads ten times in a row, the next toss is likely to turn up heads (or tails). Of course, if the coin is fair, each toss is independent and heads and tails are equally likely, no matter what.

Suppose that you observed the predictor's prediction to be 90% correct. It is easy to calculate that the expected value of taking both boxes is $101,000 and of taking one box is $900,000. Would the fact that the predictor is not perfect change your choice? Does the exact probability matter? Does the ratio of the amounts of money matter? If your choice is based on the expected value, they probably do.
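The expected values quoted above can be checked directly. A quick sketch of the arithmetic, using the 90% accuracy and the dollar amounts from the text:

```python
accuracy = 0.9  # the predictor is right 90% of the time

# Taking both boxes: if the predictor was right, box B is empty ($1,000 total);
# if it was wrong, box B holds $1,000,000 ($1,001,000 total).
ev_both = accuracy * 1_000 + (1 - accuracy) * 1_001_000

# Taking box B only: if the predictor was right, box B holds $1,000,000;
# if it was wrong, box B is empty.
ev_one = accuracy * 1_000_000 + (1 - accuracy) * 0

print(ev_both, ev_one)  # approximately 101000 and 900000
```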

Some argue that the predictor as presented in the paradox cannot exist. Suppose that the people in the paradox are replaced with AI programs, and the predictor has a much faster computer to run the programs on. Then it is obvious to me that the predictor can indeed predict their behavior. Since I accept the Turing thesis (which states, in short, that human-like AI is possible), I also accept that such a predictor can exist.

Also note that the predictor in the paradox does not need to be perfect. Public opinion polls routinely make predictions such as that females are more likely to vote for candidate X, that people who live in Y are more likely to vote for candidate Z, and so on. How can you argue that it is impossible to predict human behavior?

(P.S. So what is my choice? I choose the box B. My reason shall remain private though.)

2008년 4월 5일 토요일


There is an increasing danger of practicing tetrapyloctomy, as discussion gets longer. This post is another attempt at discussing learning, research, and related matters. Readers should judge whether I succeeded in avoiding the danger.

Let me first cite my sources. The first part of the argument closely follows the article The Two Cultures of Mathematics by Timothy Gowers. The link contains the complete article. The second part of the argument was influenced by the chapter titled "Pinball" from the book The Soul of a New Machine by Tracy Kidder. I can't link to the full text, but the link contains a good summary of pinball analogy.

Now, here is a short recap of the discussion so far:

1. You are scientists so you should love learning.
2. Scientists learn just enough so that they can do research.
3. This is just a semantic matter; scientists must indeed love learning, else why would they bother to do the research?
4. Consider an analogy to writers. Reading is not what they are paid for; writing is. Similarly, scientists do not learn science; scientists do science.
5. I wrote that scientists must be motivated by a love of learning. What motivates scientists to do research? Certainly not money. I think it is outcome. Perhaps you are imagining a sort of performance-art scientist, for whom the very process of research provides its own reward, regardless of outcome?

Certainly, I am not making ridiculous claims such as that scientists love research but not learning, or that writers love writing but not reading. I am talking about priority. Consider the following two statements:

1. The point of scientific research is to understand nature better.
2. The point of understanding nature is to do better scientific research.

At first, the second statement may seem odd. Some variations may help in understanding it better: 1. The point of understanding nature is to apply that understanding; one application is as a basis for an effort to achieve deeper understanding. 2. The point of learning existing knowledge about nature is to do research (as opposed to the intrinsic value of such knowledge).

There is truth in both statements, but one does not necessarily agree with both to the same degree. I claim that average scientists incline toward the second rather than the first. You could argue that I am imagining the "performance-art scientist"; actually I am claiming that this is not imaginary, but probably describes a majority. I will argue two points: that the outcome is often not the most important thing, and that research is often seen primarily as a vehicle for further research.

It seems to me that scientists are often not too conscious of the outcome of their research. The case of the physicists who participated in the Manhattan Project is a classic one.

As a student in the computer science department, I want to give another example. Computers are often said to be general purpose, which means that, unlike calculators, televisions, or MP3 players, they can do many tasks (like calculation, playing video, and playing audio) given appropriate software and peripherals. It is also common knowledge that general purpose computer chips are used heavily in military applications. If one wants to avoid any association with tanks or missiles because one is against wars, there is no way other than to stop working on electronic chip design altogether. In reality, however, it would not be too much of an exaggeration to assert that most electronic chip designers are utterly uninterested in most applications of their work -- applications that range from executing financial strategies in the derivatives market to producing virtual child pornography.

It also seems that research is often judged in terms of its potential for further research, rather than in isolation. One often talks about a "profitable line of research". The naive interpretation is that this refers to research that brings profit. It actually means that the research is likely to lead to further research.

The same is true of the game of pinball: you win one game, you get to play another. For writers, you finish one novel, you get to write another. For mathematicians, you solve one difficult problem, you get to solve another. For engineers, be they civil engineers, computer hardware designers, or software developers, you complete one engineering project, you get to join another. For scientists, you research one subject, you get to research it further.

I am not saying that one particular round of pinball, or one theorem, or one bridge, or one paper is unimportant; but as important, or more so, is the process of creation, and the hope of a better future. Whatever you learn from others is, by definition, a thing of the past. Whatever you learned yourself becomes a thing of the past the moment you learned it. Whatever you are yet to learn, especially if nobody has ever learned it, is the most interesting.

Speaking from my experience, I often longed for the beginning of a new software project when the current one was almost at an end. The endgame is not as interesting. However, the endgame, or the polish, absolutely matters in commercial software development. I have often thought that this is why most research software, as opposed to commercial software, lacks polish -- there is less pressure for a polished product.

If Rowling does not want to write another Harry Potter book, I can understand.

2008년 3월 25일 화요일

Creationist joke

I confess that I enjoy mocking creationists and their claims. I probably shouldn't, and pity them instead. But how can I resist laughing when I read jokes like the following on talk.origins?

"The presence of shells on mountain tops shows that global flood (as written in the Bible) was real," says a creationist.

Q. Where did the water go after the flood?
A. It was turned into wine.
Q. It still has to go somewhere!
A. Noah drank it. Don't you read your bible?

2008년 3월 21일 금요일


English anaphora are so inadequate. Consider the sentence, "Although the motorcycle hit the tree, it was not damaged." Is the damaged thing the motorcycle or the tree?

The teacher says that I am supposed to rewrite the sentence as either "the motorcycle was not damaged" or "the tree was not damaged". Really, it is English that is damaged. Anaphoric expressions are useless if one is forced to repeat nouns whenever one wants to be unambiguous.

I would like to use "the subject of the last verb was not damaged" or "the object of the last verb was not damaged", but these are too verbose. Case marking can make them succinct, as in "itos was not damaged" or "itom was not damaged". Haha, they sound better to me.

Note that I marked case according to the role the word played in the previous sentence, not in the current one. In both cases "it" is the subject of the predicate "was damaged". This is different from the case systems of other languages. I wonder whether there is any natural language that marks case on anaphora the way I conceived here.

2008년 3월 3일 월요일

Replies to comments

My cunning mind told me that I could easily fill another blog entry just by replying to comments I received. And what fun is blogging, this modern form of exhibitionism, if I don't interact with my readers? I could keep a journal inside the locked drawer of my desk instead of broadcasting my writings instantly to the world at large.

Re: Learning and Research, James wrote: "I think... this is just a semantic matter; that is, scientists must indeed love learning, else why would they bother to do the research?"

I would like to draw an analogy to the field of writing. I think most professional writers love reading. But reading is not what they are paid for; writing is. So it's safe to say that SF (science fiction) writers read less SF than avid SF fans do; they are busy writing!

If one loves reading more than writing, one would be a critic, an editor, or a reader, but not a writer. If one loves learning science, one would subscribe to Nature, Science, or Scientific American; one would read all the popular science books by Carl Sagan or Stephen Hawking; one might enjoy reading science fiction as a hobby; but one would not be a scientist. Scientists do not learn science; scientists do science.

Re: Second person, James wrote: "Where in the world did you learn so much about linguistics?"

In regard to linguistics, I'm mostly an autodidact. There are good books and internet resources you can learn from. I guess the real question is why, not where. Why on earth would one read grammar books as a hobby?

The short answer is that I am a conlanger, which means that I practice conlanging. To conlang is to construct languages -- often, to construct languages as an art form, like novels, paintings, and musical compositions. It means playing with the phonology, morphology, syntax, and inflection of your own fictional language; devising roots and deriving words for it; constantly tweaking the language to make it beautiful; and wasting lots of time while having great fun. I know that sounds weird. I hope I can elaborate on conlanging in the future.

2008년 2월 29일 금요일

Second person

"I sit here," said the teacher, "and y'all sit there." He did not forget to add, "But don't write y'all in your writing!"

The case of the second person plural pronoun in English is most fascinating. As you know, Modern English uses one word, "you", for both the singular and plural forms of the second person pronoun. I believe I used the plural form in the last sentence. And "y'all" shows a desire to make this distinction.

It wasn't always like this. Middle English used two separate words for the singular and plural forms of the second person pronoun. Guess what they were? The singular form was "thou", and the plural form "ye". Compare Modern German, which uses "du" and "ihr" respectively. And the distinction had been present for many thousands of years, since these pronouns ultimately originate from the Proto-Indo-European roots "tu" and "yus"!

Then how did English come to lose this distinction, which it had kept for such a long time? It is still a mystery to me, but one theory is that "ye" came to be used as a "polite" form, analogous to the French "tu" versus "vous". So for the singular second person pronoun, "thou" became the "plain" form, while "ye" became the "polite" form. Later, "thou" started to connote contempt, and its use declined. Well, that is the theory. I don't buy it.

It is rather ironic that the remaining use of "thou" in Modern English is to address God. "For thine is the kingdom, and the power, and the glory, for ever." Does this mean that we address God in a plain, familiar, informal, even contemptuous manner? No, it just means that liturgical language simply refuses to change, and that this English prayer was translated from the Latin when the second person singular pronoun was alive and well.

After all, irregular plural forms are so European. We Koreans simply attach the plural suffix to pronouns. But we do have separate forms for plain and polite usage.

2008년 2월 25일 월요일

Learning and Research

Confucius said: "At fifteen my heart was set on learning" (Analects 2:4). Quoting this, Professor Park asked, "I presume you all are older than 15. Is your heart set on learning?" Silence ensued. Finally, he said, "You are scientists; you should love learning."

Then I said, "Learning is not what scientists do. Scientists learn just enough so that they can do research." The professor thought about it for a while; then he said, "I think such reply is only possible from one who is into the discipline. Thank you."

Is this a difference between scientists and philosophers? Is it a good thing that most scientists don't bother to learn outside of their specialties? Was my reply that unexpected? Science, after all, is about producing knowledge, not consuming it.

2008년 2월 22일 금요일


An innocent-looking exercise in an English writing textbook asks, "Does the universe have an outer edge?" "Such a question is meaningless," I instinctively reply. "You are being philosophical" -- the instructor seems surprised.

I wasn't being philosophical. By definition, the universe includes all things that exist. There is nothing outside it. Since there is nothing outside, there can't be an outer edge or an inner edge that exists between the inside and the outside.

This seemingly vacuous wordplay actually poses a serious problem in physics. To see why, we need to discuss the interpretations of quantum mechanics.

Digression. The statement "there is nothing outside the universe" reminds me of the church doctrine "extra ecclesiam nulla salus", which means "outside the church there is no salvation". As they say, whatever is said in Latin sounds profound. I wonder what the Latin translation of the statement would be?

In the book Three Roads to Quantum Gravity, the physicist Lee Smolin includes the statement as one of his four "points of departure". This 2001 book was finally translated into Korean in 2007. The book is accessible yet rich in deep insights; I recommend it. The following discussion draws heavily from the book. End digression.

Quantum mechanics is an extremely successful theory of physics. The theory lets us calculate the probabilities of measurements. For example, one can calculate the probability of finding an electron at a certain distance from a nucleus. But we can't be sure whether we will find an electron or not.

However, when we actually look for the electron -- in other words, when we perform a measurement -- we either find it or we don't. And after the measurement, any further measurements are consistent with the electron being at the place where it was found, not with the probability we calculated for finding it. It seems clear that the measurement changed "something", but what that something is is not clear. This is called the measurement problem.

The Copenhagen interpretation (well, one of the Copenhagen interpretations -- there are many variants) says that the probability distribution represents an observer's knowledge about the system. The wave function, the mathematical tool which gives the probability distribution, describes the system. The wave function is not "real"; when the measurement is performed, the wave function "collapses", and the observer's knowledge is updated.

Now, according to the big bang theory, the universe was once small. There must have been a time when the universe was so small that quantum effects were significant. The study of such effects is called quantum cosmology.

If we are to accept the Copenhagen interpretation of quantum mechanics, we need to think about the "observer" of this early universe. But since there is nothing outside the universe, this observer must be part of the universe, not external to it. Doesn't this sound strange and difficult to you?

If it does, you are not alone. Hugh Everett proposed that it's the wave function that is real, and that there is no collapse after all. The wave function of the entire universe, which is all there is, evolves in a completely deterministic manner according to the laws of quantum mechanics. So when a scientist observes an electron that had an 80% chance of being there, it's not that the electron changed to 100% being there. Rather, the wave function describing the scientist evolved into a wave function describing an 80% chance of the scientist-who-found-the-electron and a 20% chance of the scientist-who-did-not. Bryce DeWitt later named this the "many-worlds interpretation", and the name stuck.

The many-worlds interpretation implies that all possible universes "exist", since all possible universes have non-zero probability and the universal wave function describes them all. Thus in some universes I had already written this article yesterday, and in other universes this article is never written.

Carried to the extreme, consider a person in front of a gun which fires when a radioactive atom decays. A radioactive atom has a 50-50 chance of decaying within its half-life. After one half-life elapses, the person is alive in half of the universes. But no matter how long he sits in front of the gun, there is a non-zero probability that the atom has not decayed yet. So in some universes he never dies.
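The "never dies" claim rests on simple arithmetic: the survival probability halves with every half-life but never reaches zero. A quick sketch (the function name is mine, for illustration):

```python
def p_undecayed(elapsed, half_life):
    """Probability that the atom has not decayed after `elapsed` time."""
    return 0.5 ** (elapsed / half_life)

# After one half-life, a 50-50 chance; after a hundred, astronomically
# small -- but still strictly positive, hence "in some universes he never dies".
print(p_undecayed(1, 1))    # 0.5
print(p_undecayed(100, 1))  # tiny, but > 0
```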

Let's wrap up with a little anecdote. Hugh Everett wrote his Ph.D. thesis, The Theory of the Universal Wave Function. For more than a decade, few people paid attention. Disappointed, he directed his talent to the applied mathematics of operations research, and earned a fortune. Still, he believed in quantum immortality -- that he would live forever in some universes; but he died in this universe. His daughter committed suicide, saying beforehand that she was going to a parallel universe to be with her father.

Reading what I wrote so far, I guess I was being philosophical after all. But it's hard to avoid being philosophical when you discuss the universe.

2008년 2월 15일 금요일

Axiomatic method

This semester, I decided to take a course on Advanced English Writing (this class) and Introduction to Oriental Philosophy. The philosophy class is taught by Professor Woosuk Park. I enjoyed reading his book "Philosophy of Baduk" in the past.

In the Wednesday class, Professor Park raised the issue that the axiomatic method is apparently lacking in the East. He argued that the axiomatic method, exemplified in the Elements by Euclid, formed the basis of the scientific revolution in the West, and that the lack of the axiomatic method is why the West could overtake the East in science. Then he wanted to discuss what role philosophy played in creating such a difference.

I pointed out that the rigorous axiomatic treatment of mathematics is a modern development. He replied that Grundlagen der Geometrie by David Hilbert, motivated by non-Euclidean geometry, may be considered the beginning of rigorous axiomatic mathematics, but that the germ of the axiomatic idea was already present long before. I wasn't satisfied, but I didn't pursue the argument any further at the time.

So here are some of my thoughts. First, Euclid was not rigorous, at least not in the modern sense. Let's look at his first proposition: to construct an equilateral triangle on a given finite straight line. Given the segment AB, two circles are constructed, one with center A and radius AB, and the other with center B and radius BA. Let C be a point at which the two circles intersect. Then that ABC is equilateral follows from the definition of the circle. Now, why should the point C exist? The two circles might not intersect at all, and the existence of the point C is nowhere justified. In general, the Elements is not very careful about between-ness and inside-ness.
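With modern coordinates, of course, the point C can simply be computed, which supplies exactly the existence argument that Euclid's text omits. A small numerical check (the function name and the coordinate approach are mine, for illustration only):

```python
import math

def equilateral_apex(ax, ay, bx, by):
    """One of the two points where the circles of radius |AB| around A
    and around B intersect: the midpoint of AB plus the triangle's
    height along the perpendicular direction."""
    mx, my = (ax + bx) / 2, (ay + by) / 2   # midpoint of AB
    dx, dy = bx - ax, by - ay
    h = math.sqrt(3) / 2                    # height of an equilateral triangle per unit side
    return mx - h * dy, my + h * dx

# Check that A, B, C really form an equilateral triangle.
a, b = (0.0, 0.0), (1.0, 0.0)
c = equilateral_apex(*a, *b)
side = math.dist(a, b)
assert abs(math.dist(a, c) - side) < 1e-12
assert abs(math.dist(b, c) - side) < 1e-12
```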

Second, it was analysis, not geometry, which enabled the development of Newtonian physics and the other sciences. And early analysis was anything but axiomatic. Newton and Leibniz made heavy use of infinitesimals, treating an infinitesimal as non-zero in one part of the "proof" (where they divide by it), and as zero in another part (where it is eliminated). The need to prove convergence went unrecognized for a long time. It was only through the arithmetization of analysis, the 19th century effort to found analysis on arithmetic, typified by Weierstrass's epsilon-delta definition of the limit, that analysis became rigorous.
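For reference, Weierstrass's definition replaces infinitesimals with statements about ordinary finite numbers:

```latex
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

Nothing in this definition is infinitely small; every epsilon and delta is an ordinary positive number.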

Therefore it seems clear to me that the axiomatic method is not what made the scientific revolution possible. At the time, the axiomatic method was not applied to the mathematics actually used by science, which was analysis.