1997, chess; 2016, Go

A computer program has won a match against Lee Sedol, one of the world's very best Go players. This represents a much greater achievement than when a machine beat Garry Kasparov at chess in 1997; Go is a deep, subtle game that's much harder to write programs for than chess. Nobody--neither in the artificial intelligence community nor in the Go community--expected to see a machine beat a top player this soon. Millions of people (primarily in Asia, where Go is popular) watched the match live.

Experts say that Lee Sedol played well. And although he seemed hurt by his losses, he upheld a high standard of dignity and sportsmanship at all times. He bowed to his opponent before each game despite the program's inability to appreciate the gesture.

I'm not a Go player but I followed this match with interest. The type of engineering that went into this Go program potentially has broad areas of application.
 
That used to be a holy grail for AI. Huge news indeed; it shows how machine learning has progressed - there are indeed a lot of business cases that can make good use of this kind of AI now. What's interesting here is the grid-computing approach - more flexible in use than a big supercomputer.
 
The type of engineering that went into this Go program potentially has broad areas of application.
I'm not an AI expert, but it appears to me that, while impressive, AlphaGo is narrowly focused on a specific application (Go) and uses its cluster of GPUs and CPUs to recognize historical patterns and calculate the best winning strategy within the rules of the game. So it may be a bit of a stretch to assume that a singularly focused AI program would have 'broad areas of application'. I suppose you might be able to apply it to something repetitive with lots of prior history you could train it with (like weather forecasting), but I'm not sure weather has 'rules' like a board game.

If you look at another recent AI 'success' .. IBM's Watson .. (also a machine learning program), the 'rules' of Jeopardy are nowhere near as complicated as Go, but the learned topics were much broader. In fact, IBM later worked with Memorial Sloan Kettering to 'teach' Watson about medicine, and it's my understanding that Watson went on to pass the New York medical exam .. which was required by the medical community before they'd consider Watson capable of dispensing medical advice to physicians. I'm not sure what other applications IBM is using Watson for, but they did buy the assets of The Weather Company, so maybe it's going to be doing weather forecasting soon.

My point is that AI learning programs may not all have wide applicability. Things like Watson .. more so; AlphaGo .. less so.
 
I'm not an AI expert, but it appears to me that, while impressive, AlphaGo is narrowly focused on a specific application (Go) and uses its cluster of GPUs and CPUs to recognize historical patterns and calculate the best winning strategy within the rules of the game. So it may be a bit of a stretch to assume that a singularly focused AI program would have 'broad areas of application'.
Yes, AlphaGo is designed for Go, but the people who developed it have also demonstrated programs that aren't tailored to any game in particular. One of their programs can learn a video game from scratch just by playing it and looking at the results on the screen. I mean without being told the object of the game; it figures it out by playing.

I was careful to say it's the type of engineering that went into AlphaGo that has broad application. Neural networks are key to AlphaGo (but aren't all that it uses). Developing this program wasn't just about playing Go; it was also an exercise in further development and study of neural networks.
 
The kind of AI behind AlphaGo and Watson is basically the same, and it is multipurpose. Recently there was a demo of such an AI learning how to play video games (using controllers etc. like a human being would), and the key is that it adapts quickly to new conditions for "winning" (whatever that means in the context). The most delicate part is actually quantifying the feedback; that's the most critical element of tuning this kind of AI. In the game of Go, for example, immediate feedback is difficult at best, impossible at worst (the game has simple, open rules, which makes situations unpredictable). The breakthrough here is how they managed to have the AI self-tune the feedback loop by studying former games, exactly like a human being would when learning to play properly (and losing matches at first..).
This self-tuning system has been applied successfully to a lot of different problems, and the results are really impressive. "Cracking" Go was the final proof that it can work in a situation where your immediate actions (or even 8-10 moves ahead) have no real quantifiable result (such as a better score), which shows the capacity to adapt to ever-moving situations.
(disclaimer: I am an AI engineer)
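The "only the final result is a usable signal" idea can be sketched very roughly. Below is a toy Monte Carlo-style value update in Python - far simpler than anything in AlphaGo, with invented position names and games - showing how position values can be learned when the only feedback is who won at the end:

```python
from collections import defaultdict

def learn_values(games, alpha=0.1):
    """Estimate a value for every position seen, using only the
    final result of each game as the training signal (no per-move
    feedback, which is the hard part in Go)."""
    value = defaultdict(float)
    for positions, result in games:  # result: +1 for a win, -1 for a loss
        for pos in positions:
            # nudge the stored value toward the game's final outcome
            value[pos] += alpha * (result - value[pos])
    return value

# Toy "recorded games": each is (sequence of position ids, final result).
games = [
    (["a", "b", "c"], +1),
    (["a", "b", "d"], +1),
    (["a", "e", "f"], -1),
]
v = learn_values(games)
# Position "b" only ever appeared in wins, "e" only in a loss,
# so their learned values pull in opposite directions.
print(v["b"] > 0, v["e"] < 0)  # True True
```

Positions that tend to appear in won games drift toward +1 and the rest toward -1, which is one crude way a system can grade its own moves without any per-move feedback.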
 
Bah. There's no AI involved here.

Enter 30 million recorded games into a system with enough memory and CPU power to see what move turns out well, and of course you're going to win.
Real AI would involve some kind of intuition or intelligence, not just knowledge.
(BTW, I was an instructor at the university that produced the world champion checkers program)
 
Bah. There's no AI involved here.

Enter 30 million recorded games into a system with enough memory and CPU power to see what move turns out well, and of course you're going to win.
Real AI would involve some kind of intuition or intelligence, not just knowledge.
(BTW, I was an instructor at the university that produced the world champion checkers program)

I think you're wrong. 30 million would be a tiny drop in the ocean compared to the choices in Go.
 
[MENTION=14507]cmthomson[/MENTION]: that works for checkers because a computer can develop the full game tree and just pick the winning path. It also works for chess because we can develop a relevant subtree based on previously played games (and apply the same principle). It doesn't work for Go because of the sheer number of possible moves, especially opening moves... we're talking several orders of magnitude here (*). The only way to win a game of Go is to display intuition or intelligence; there is no 'shortcut', as the classic approach of walking the move tree is simply impossible. Really, please have a detailed look at what a game of Go entails... have a look here for starters: http://www.britgo.org/learners/chessgo.html

(*) Go is an EXPTIME-complete complexity-class game, with a lot of freedom in moves and a 19x19 board. Even a sub-part of the board near the endgame is *still* an EXPTIME-category problem! Let's do a wiki quote here:
It is commonly said that no game has ever been played twice. This may be true: On a 19×19 board, there are about 3^361×0.012 = 2.1×10^170 possible positions, most of which are the end result of about (120!)^2 = 4.5×10^397 different (no-capture) games, for a total of about 9.3×10^567 games. Allowing captures gives as many as 10^(7.49×10^48) possible games, most of which last for over 1.6×10^49 moves! (By contrast, the number of legal positions in chess is estimated to be between 10^43 and 10^50, and physicists estimate that there are not more than 10^90 protons in the entire universe.)
30 million is 3×10^7. Add 560 zeroes behind your 30,000,000 to get an idea of the possible game space for Go.. yep. Better to go the smart way with pattern recognition and machine learning :)
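The arithmetic above is easy to check directly, since Python handles the big integers natively. A quick sanity check of the position count and of the "560 zeroes" gap between 30 million games and the quoted ~10^567 game space:

```python
import math

# Each of the 361 points is empty, black, or white; only about 1.2%
# of those 3^361 configurations are legal Go positions (wiki figure).
positions = 3 ** 361 * 0.012
print(f"legal positions ~ 10^{math.log10(positions):.0f}")  # ~10^170

# Compare the quoted ~10^567 possible games with 30 million recorded games.
recorded = 30_000_000
gap = 567 - round(math.log10(recorded))
print(f"30 million games fall short by about {gap} orders of magnitude")  # 560
```

So exhaustive lookup over recorded games cannot even scratch the space; only something that generalizes across positions has a chance.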
 
I do know a little bit about Go. I was a 4-kyu player in college.

The 30 million I referenced was not the number of moves, it was the number of games. Big difference.

Yes, tic-tac-toe was brute forced (I did it myself in the late '60s), then checkers, then chess - each time with wildly faster CPUs and ever more gigantic memories.
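Brute-forcing tic-tac-toe is small enough to reproduce in a few lines today. This is a minimal minimax sketch (my own illustration, not anyone's historical program) that searches the full game tree and confirms the game is a draw under perfect play:

```python
def minimax(board, player):
    """Exhaustive tic-tac-toe search: +1 if X wins, -1 if O wins, 0 for a draw."""
    lines = [(0,1,2), (3,4,5), (6,7,8),   # rows
             (0,3,6), (1,4,7), (2,5,8),   # columns
             (0,4,8), (2,4,6)]            # diagonals
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return 1 if board[a] == "X" else -1
    moves = [i for i, s in enumerate(board) if s == "."]
    if not moves:
        return 0  # full board, no winner: draw
    scores = []
    for i in moves:
        board[i] = player                 # try the move
        scores.append(minimax(board, "O" if player == "X" else "X"))
        board[i] = "."                    # undo it
    return max(scores) if player == "X" else min(scores)

print(minimax(["."] * 9, "X"))  # 0: perfect play is a draw
```

The full tree here is only a few hundred thousand nodes, which is exactly why this approach stops scaling somewhere between chess and Go.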

Was there intuition and imagination in these Go matches? Of course there was! By the human player.

The computer program responded with the results of a huge database search for the likely best response. Read the article in last year's Spectrum by Schaeffer (a colleague of mine at U of A, and leader of the checkers project) about how a really fast computer could compete with a human champion at chess more or less randomly.

BTW, anyone who has played much Go knows the game is won or lost in the corners. The 19 gazillion possible moves never materialize.
 
Was there intuition and imagination in these Go matches? Of course there was! By the human player.

The computer program responded with the results of a huge database search for the likely best response.

It's not accurate to describe AlphaGo as doing a database search.

Michael Redmond, the 9-dan commentator for this match, referred to AlphaGo's "database" in his commentary early on but later corrected himself after the AlphaGo team explained to him that AlphaGo isn't using a database. Neural network weights trained from a large set of games are not the same thing as a database of games to search through. The point is that neural networks find ways to generalize.
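A toy way to see the difference (not AlphaGo's actual architecture): a database can only answer queries it has stored, while a fitted model produces an answer for inputs it has never seen. Here a least-squares line stands in for the trained network's weights, and the (feature, evaluation) pairs are invented:

```python
# A "database" of (position feature, evaluation) pairs can only be
# looked up exactly; a fitted model interpolates to unseen inputs.
data = {0.0: 0.0, 1.0: 2.1, 2.0: 3.9, 3.0: 6.0}

def db_lookup(x):
    return data[x]  # raises KeyError for any position not stored

# Fit evaluation ~ w * x by least squares: a stand-in for trained weights.
xs, ys = list(data), list(data.values())
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

unseen = 1.5  # never appeared in the training data
print(f"model answer for {unseen}: {w * unseen:.2f}")  # ~3.0
try:
    db_lookup(unseen)
except KeyError:
    print("database has no answer for", unseen)
```

The model compresses the examples into a single weight and so can score positions it never saw; the lookup table simply fails. That, scaled up enormously, is the distinction Redmond was correcting.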
 
I think you guys are placing way too much importance on "human" intelligence:tongue:
 
Yeah, probably. Tune back in to Fox/MSNBC/CNN for the latest.

Exactly..... I think we all do better on our own. BTW, the magic of the human mind is not so much in its complexity..... it is rather simple in design and quite symmetric from an anatomical perspective. The magic of our novel/inventive/illogical thought is based on the plasticity of the neuronal connectivity, especially in the frontal/temporal regions. How will we know when our supercomputers are truly autonomous?
 
yesssssss I so much want a phased plasma rifle in the 40 watt range...
 
The 30 million I referenced was not the number of moves, it was the number of games. Big difference.
Please read my post again. First, 30 million games is absolutely nothing compared to the problem space of Go; second, the AI involved here has *no* database. It's a neural network, trained to be good just like a human player would be (by playing games and learning from past games). It has been demonstrated (I think the articles can be found on Ars Technica, for example) that this same kind of AI can adapt to a lot of different challenges, even surprising its creators (when playing some Nintendo games it found ways to cheat...). You could argue that a trained neural network is akin to a database, but so is your own biological brain...

It's not accurate to describe AlphaGo as doing a database search.

Michael Redmond, the 9-dan commentator for this match, referred to AlphaGo's "database" in his commentary early on but later corrected himself after the AlphaGo team explained to him that AlphaGo isn't using a database. Neural network weights trained from a large set of games are not the same thing as a database of games to search through. The point is that neural networks find ways to generalize.
The strength of neural networks is pattern recognition. In this case (the game of Go) it's kind of visual, with the pattern being the game layout itself, since Go is a purely positional game (stones cannot move). I have seen neural networks find patterns in social networks, financial trades, etc. These are all valid and strong cases where this AI works wonderfully. I've evaluated a CRM product that scans the internet for customer data; the relevance of results is upwards of 85% (which is amazing given the size of the data space in this case, with natural language processing etc.).
Where docjohn hits right on target is about plasticity: changing the neural paths dynamically is the most difficult part to implement in artificial neural networks.
 