A Korean grandmaster of the game Go finally beat a computer designed by Google.
After losing three times in a row, Lee Sedol finally found weaknesses in the computer that beat him.
Lee had said earlier in the series, which began last week, that he was unable to beat AlphaGo because he could not find any weaknesses in the software’s strategy.
But after Sunday’s match, the 33-year-old South Korean Go grandmaster, who has won 18 international championships, said he found two weaknesses in the artificial intelligence program.
Lee said that when he made an unexpected move, AlphaGo responded with a move as if the program had a bug, indicating that the machine lacked the ability to deal with surprises.
AlphaGo also had more difficulty when it played with a black stone, according to Lee. In Go, two players take turns putting black or white stones on a 19-by-19-line grid, with a goal of putting more territory under one’s control. A player with a black stone plays first and a white-stone player gets extra points to compensate.
Lee played with a white stone on Sunday. For the final match of the series, scheduled for Tuesday, Lee has offered to play with a black stone, saying it would make a victory more meaningful.
South Korean commentators could not hide their excitement three hours into Sunday’s match, when it became clear that Lee would finally notch a win. AlphaGo narrowed the gap with Lee, but could not overtake him, resigning nearly five hours into the game.
I am for the human. Let the machines cheer for the machine.

“Let the machines cheer for the machine”
Evidence that Bill Gates and too many others in Silicon Valley think like machines.
Machines are only as good as the people who make them and put information into them.
This we know in working with children also. THEY are NOT machines into which we inject “facts”. Critical thinking, humanitarian feelings, etc., are vital if we are to remain “human”.
In this vein, may I once again draw your attention to booktv.org and Jonathan Kozol. Superb educator.
Also, there were several other programs on booktv today that I found enlightening and fascinating and could not turn off.
So much history I had no idea existed: one program, on the Irish, featured a man I had never heard of but who led one of the most fascinating lives that has ever come to my attention.
Education never ends.
“Education never ends.”
Death begins when learning ends.
“Lee said that when he made an unexpected move, AlphaGo responded with a move as if the program had a bug, indicating that the machine lacked the ability to deal with surprises.”
And yet people think computers can drive cars. Here in Chicago that’s what driving is – dealing with a never-ending string of surprises.
Having now driven in the Windy City (for last spring’s NPE conference) I concur.
Computers can drive cars, and soon they will be driving them much better than humans. Just think about all the accidents and deaths caused by human drivers every year around the world. Much of that will be eliminated with computers driving the cars. We could also simply get rid of drunk driving and all the needless death and legal problems that result from it. So many of the students at my community college get in trouble for drunk driving. That will disappear with computer-driven cars.
Isn’t this the problem being encountered by driverless cars? Encountering fresh problems stumps the car. As long as the computer is coded for every known problem one might encounter, it can manage. Give it a unique problem and the car crashes. Trouble is, problems are infinite.
With robo-cars, accidents will go down by 98%. The problem lies with the remaining 2% of accidents: a human driver could potentially have prevented those, and who will be liable?
For the best drivers, robo-cars may very well pose an increased risk of death or injury. Almost everyone else would benefit if the liability issue were solved.
Correct me if I’m wrong, but that’s how I see it so far.
(The statistic is a rough guesstimation for illustrative purposes.)
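To make that trade-off concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (the baseline fatalities, the driver count, the “100x safer” best driver, and the commenter’s 98% reduction) is an assumption chosen only for illustration, not a real statistic.

# All figures below are hypothetical, roughly US-scale, and used only to
# illustrate the argument above; none of them come from the original post.
baseline_fatalities = 40_000        # assumed annual road deaths with human drivers
drivers = 200_000_000               # assumed number of drivers
robo_reduction = 0.98               # the commenter's guesstimated 98% reduction

robo_fatalities = baseline_fatalities * (1 - robo_reduction)
average_human_risk = baseline_fatalities / drivers   # average per-driver risk today
average_robo_risk = robo_fatalities / drivers        # per-driver risk if robo-cars spread risk evenly

# A hypothetical "best driver" who is, say, 100x safer than average could face
# a higher risk under the uniform robo-car rate, which is where the liability
# question bites; nearly everyone else comes out ahead.
best_driver_risk = average_human_risk / 100
print(best_driver_risk < average_robo_risk)          # True under these made-up numbers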
Interesting post.
Chess programs are already much stronger than the strongest humans. It has been that way for over a decade. Garry Kasparov vs Deep Blue in 1997 was the beginning of the end for humans vs computers in chess: https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov
No top chess grandmaster would dare challenge a computer and bet to win. To paraphrase a top grandmaster, it is like “fighting a really stupid brick wall.”
Until now, due to the nature of the game “Go,” humans have been considered stronger than the best AI… even though Go programs can draw on the same computational power as chess software. This is said to be due to Go’s deep complexity, which favors the human mind: Go is a more “organic” game rather than a formulaic/binary one (like chess). So the argument goes, in Go we are at our best, drawing on that almost magical, intuitive-wisdom part of our minds. For a very rough analogy, it is comparable to GPS programs that sometimes give us bad directions, even though they have more information than we do.
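A rough sense of the scale behind that “deep complexity” claim can be sketched in Python using commonly cited ballpark figures (about 35 legal moves per position over roughly 80 half-moves for chess, versus about 250 moves over roughly 150 moves for Go). These are loose averages, not exact values.

# Back-of-the-envelope game-tree estimates; branching factors and game lengths
# are rough, commonly cited averages, not exact figures.
chess_branching, chess_plies = 35, 80
go_branching, go_plies = 250, 150

chess_tree = chess_branching ** chess_plies
go_tree = go_branching ** go_plies

print(f"chess game tree ~ 10^{len(str(chess_tree)) - 1}")   # roughly 10^123
print(f"go game tree    ~ 10^{len(str(go_tree)) - 1}")      # roughly 10^359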
Recently, AI defeated a top human in Go, several times in a row. This was significant. It is like the battle of human vs computer intelligence… all over again. Maybe this is the turning point where we “lose” the battle?
But now it looks like the human player has finally managed to play to human strengths and AI weaknesses. Will human players continue to be able to outplay AI? The answer will speak to the nature of the game and, to some extent, to human nature and the nature of AI.
It will be interesting to watch this develop. If AI becomes consistently better at Go than humans, as in chess, we will have to find another game of “intelligence” that illustrates our unique value!
“Maybe this is the turning point where we “lose” the battle?”
We “lose the battle” when we don’t recognize that AI is literally controlling us. Look at how the idiot box can control the behavior of millions without their knowing, and that is quite low-tech and human-intensive.
Agreed… I am personally on board with us redefining intelligence and our relationship with technology, as well as calming our obsession with competition and comparison — but I’m not holding my breath for popular culture to see it that way anytime soon.
As a computer scientist and interested bystander:
If you look at a Tic-Tac-Toe, Checkers or Chess board and then look at a Go board, you see that they are essentially the same – all information is knowable, all possible moves are knowable, and the consequences of every move are predictable to the last digit.
Now Go has many more possible moves and board situations, so humans must use a number of mental shortcuts and learned intuitions to play well. However, to a computer it is just another solvable problem if you throw enough hardware at it. So don’t be fooled by so-called “deep learning” and “neural networks,” which are neither deep nor neural. The Artificial Intelligence community is expert at branding its toys in ways that hide the truth (for example, a neural network does not “work like the human brain” – we have next to no idea how the human brain works!)
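To make the “just another solvable problem” point concrete, here is a minimal, illustrative Python sketch of exhaustive minimax search for Tic-Tac-Toe. This is emphatically not how AlphaGo works; it only shows that when every position and every consequence is knowable, a program can in principle enumerate them all, and the obstacle in Go is purely the astronomical number of positions.

# Exhaustive minimax for Tic-Tac-Toe: feasible only because the game tree is tiny.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    # Return 'X' or 'O' if that player has three in a row, else None.
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Value of `board` with `player` to move: +1 if X wins, -1 if O wins, 0 for a draw.
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0
    values = []
    for i, cell in enumerate(board):
        if cell == ' ':
            nxt = 'O' if player == 'X' else 'X'
            values.append(minimax(board[:i] + player + board[i+1:], nxt))
    return max(values) if player == 'X' else min(values)

# With best play on both sides Tic-Tac-Toe is a draw, so this prints 0.
print(minimax(' ' * 9, 'X'))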
Of course, my hat is off to the clever programmers who came up with the ingenious programming shortcuts required to make this spectacle possible. But that just goes to show that HUMAN ingenuity was required to get a machine to even be able to do the math required to solve this game.
“. . . we have next to no idea how the human brain works!”
and probably never will, given the inherent limitations on experimentation with human subjects (limitations that should, indeed, remain in place).
Scott the computer scientist says that computers win certain games because “all information is knowable, all possible moves are knowable and the consequences of every move is predictable to the last digit.”
Some games, certainly, but then there are the not-fully-predictable mysteries of learning, where the answers are not known to the learner, and indeed the questions may not be known to anyone, because the problems are not yet known or not yet formulated in a form that is easily answered, not even by game theorists.
Enter the Educational Neuropsychology Laboratory, a lab at Washington State University. It seems that the university has gained permission to wire kids from the Pullman, WA, school district in order to capture real-time data on learning, in what is called a “natural classroom.”
Well, not exactly, because devices are capturing the eye movements, heart rate, blood pressure, and other real-time responses of kids.
This is the headline: Neuroscientists Study Real-Time Learning in Classroom Lab.
Please go to the website to see what “natural” means. Please look at the photo and judge how natural everything looks.
http://www.edweek.org/ew/articles/2016/03/09/neuroscientists-study-real-time-learning-in-classroom-lab.html
And do not miss this image of a child who is not exactly in a natural learning environment. https://www.facebook.com/edweek/posts/10153855887303796
Both photos come from the Educational Neuropsychology Laboratory at the Washington State University.
This lab is hardly a natural environment for learning. Let’s hope it never is.
Washington State University says the lab “provides essential educational and psychological testing. Equipment housed in the laboratory consists of neuroimaging instruments including a Functional Near Infrared Spectrometer (fNIR); and Electroencephalographs (EEG). Additional equipment includes an Eye Tracking Apparatus and of physiological measures to include continuous blood pressure, pulse, electroderm activity, electrocardiography, skin temperature, and respiration monitoring. In addition, the laboratory is set up to administer both traditional and computer- based psychological and educational tests. The laboratory also has the capacity to act as a laboratory classroom to make use of new teaching and learning technology.”
That description of the lab comes from the 2015 TECH-Ed Conference supported by a grant from the Education Research Conferences Program of the American Educational Research Association (AERA). AERA was the official sponsor. Most of the presentations were about the use of tech in higher education. Participants could also visit the lab.
The headline writers have given new meaning to “natural” and “real-time learning in a classroom.” These words are likely to become a branding tool for products and services that are being tested by the kids who are serving as guinea pigs.
“physiological measures to include continuous blood pressure, pulse, electroderm activity, electrocardiography, skin temperature, and respiration monitoring.”
The classic “Lie detector”.
These “scientists” (basically little more than crackpots) repackage old ideas, throw in a few new elements and then pawn them off as “science”.
Quite a difference between the Science Research Laboratory of the ’60s and this classroom. I wonder whether there will be any significant difference in the results on reading scores.
Let the super ego win. End the compulsion to play the machines.