First it was checkers, then chess, and now the ancient game of Go. AI continues to evolve: AlphaGo, from Google's DeepMind, is another example of leveraging neural-network methods.
" .... This paper published in Nature on 28th January 2016, describes a new approach to computer Go that combines Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play. This is the first time ever that a computer program has defeated a human professional player.
The game of Go is widely viewed as an unsolved “grand challenge” for artificial intelligence. Despite decades of work, the strongest computer Go programs still only play at the level of human amateurs. In this paper we describe our Go program, AlphaGo. This program was based on general-purpose AI methods, using deep neural networks to mimic expert players, and was further improved by learning from games played against itself. AlphaGo won over 99% of games against the strongest other Go programs. It also defeated the human European champion by 5–0 in tournament games, a feat previously believed to be at least a decade away.
In March 2016, AlphaGo will face its ultimate challenge: a 5-game challenge match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade. ... "
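To unpack how the search and the networks fit together: below is a minimal, hypothetical sketch of PUCT-style Monte Carlo tree search guided by a policy prior and a value estimate, in the spirit of what the paper describes. The `policy_prior` and `value_estimate` functions are stand-ins for the trained networks, and the toy counting game at the bottom replaces a real Go engine; none of these names come from the paper.

```python
import math
import random

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a): prior from the policy network
        self.visits = 0           # N(s, a): visit count
        self.value_sum = 0.0      # W(s, a): total backed-up value
        self.children = {}        # action -> Node

    def q(self):
        # Mean action value Q(s, a).
        return self.value_sum / self.visits if self.visits else 0.0

def policy_prior(state, actions):
    # Hypothetical stand-in for the trained policy network: uniform prior.
    p = 1.0 / len(actions)
    return {a: p for a in actions}

def value_estimate(state):
    # Hypothetical stand-in for the value network: random score in [-1, 1].
    return random.uniform(-1.0, 1.0)

def select_child(node, c_puct=1.5):
    # PUCT selection: exploit Q plus a prior-weighted exploration bonus U.
    sqrt_total = math.sqrt(node.visits + 1)
    def score(item):
        _, child = item
        u = c_puct * child.prior * sqrt_total / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=score)

def mcts(root_state, legal_actions, apply_action, n_simulations=200):
    root = Node(prior=1.0)
    for _ in range(n_simulations):
        node, state, path = root, root_state, [root]
        # Selection: descend with PUCT until reaching a leaf.
        while node.children:
            action, node = select_child(node)
            state = apply_action(state, action)
            path.append(node)
        # Expansion: create children weighted by the policy prior.
        actions = legal_actions(state)
        for a, p in policy_prior(state, actions).items() if actions else []:
            node.children[a] = Node(prior=p)
        # Evaluation: a learned value estimate replaces a full random rollout.
        v = value_estimate(state)
        # Backup: propagate the value to the root, flipping sign per ply.
        for n in reversed(path):
            n.visits += 1
            n.value_sum += v
            v = -v
    # Play the most-visited root move.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

if __name__ == "__main__":
    # Toy usage: subtract 1 or 2 from a counter; the game ends at 0.
    move = mcts(
        root_state=10,
        legal_actions=lambda s: [1, 2] if s > 0 else [],
        apply_action=lambda s, a: s - a,
    )
    print("chosen move:", move)
```

The design choice the paper emphasizes is visible even in this miniature: instead of exhaustive lookahead or purely random rollouts, the policy prior biases the search toward promising moves and the value estimate scores leaf positions directly, keeping the tree narrow and selective.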
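The "learning from games played against itself" part can likewise be sketched in miniature. The snippet below runs a REINFORCE-style policy-gradient update over self-play episodes of a toy subtraction game (take 1 or 2 stones; whoever takes the last stone wins). The tabular softmax policy is a hypothetical simplification standing in for a deep policy network.

```python
import math
import random
from collections import defaultdict

weights = defaultdict(float)  # (state, action) -> logit of the softmax policy

def probs(state, actions):
    # Softmax over per-(state, action) logits.
    logits = [weights[(state, a)] for a in actions]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def play_episode(start=7):
    # Self-play: the same policy plays both sides.
    state, player, history = start, 0, []
    while state > 0:
        actions = [a for a in (1, 2) if a <= state]
        a = random.choices(actions, weights=probs(state, actions))[0]
        history.append((state, a, player))
        state -= a
        player ^= 1
    winner = history[-1][2]  # the player who took the last stone wins
    return history, winner

def reinforce(episodes=5000, lr=0.1):
    for _ in range(episodes):
        history, winner = play_episode()
        for state, action, player in history:
            reward = 1.0 if player == winner else -1.0
            actions = [a for a in (1, 2) if a <= state]
            p = dict(zip(actions, probs(state, actions)))
            # Policy-gradient step for a softmax policy:
            # d log pi(a|s) / d logit(a') = 1[a' == a] - pi(a'|s).
            for a2 in actions:
                grad = (1.0 if a2 == action else 0.0) - p[a2]
                weights[(state, a2)] += lr * reward * grad

if __name__ == "__main__":
    reinforce()
    # From 7 stones the winning move is to take 1 (leaving a multiple of 3),
    # so the trained policy should put most of its mass on action 1.
    print(dict(zip([1, 2], probs(7, [1, 2]))))
```

Wins reinforce the moves that produced them and losses suppress them; AlphaGo applies the same principle to its deep policy network over Go positions, which is what lifts it beyond pure imitation of human experts.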