I suppose it did not surprise people that a computer could be programmed to beat a human in chess. Chess has a finite set of pieces on a fixed board with strict rules of movement. There are only so many positions. There are a hell of a lot of positions, but still a finite number of them. Give a computer enough computing power as well as a database of chess games between grandmasters and it can figure out the optimum move for every situation. Ultimately even world chess champions succumbed to the machine.
The next challenge was more complex: the ancient Chinese game of Go. Go has more possibilities. It has a larger board, more pieces, and more alternative moves per turn than chess, even though its rules are simpler. The game is primarily played in Asia and does not have much of a North American presence. Nonetheless computer programmers took up the challenge and, of course, created a program that could beat the world’s best players. That program was called AlphaGo. AlphaGo started with the rules of Go and 100,000 actual games from expert players and “learned” to imitate the tactics needed from that information. It also “learned” by playing against itself and developed its own schemes from those outcomes.
The next step was to see if the machine could learn all by itself. AlphaGo Zero was the next iteration of the project. This time the programmers gave it the rules of Go and nothing else. No database of games to “learn” from. AlphaGo Zero, using only the rules, played against itself and discovered, by trial and error, the optimal strategies for winning. The idea was that if a machine only learned by imitating humans, then it would be limited to concepts humans had already discovered. The computer would not come up with anything new; it would just be better at the game because it could master all the moves tirelessly and faultlessly.
But AlphaGo Zero “learned” new things and discovered new ways to win; in fact it easily routed the earlier versions of AlphaGo that had beaten the world’s top-rated human players. AlphaGo Zero works by using a tree search to find the best move (specifically, a Monte Carlo tree search guided by a neural network). It doesn’t play out every possible outcome; instead it prunes the branches by selecting only the most promising ones. It “learned” which branches are promising from all the previous games it played against itself. It then “remembers” the outcomes of all those pruned tree searches and can use that information again to make optimal “decisions” for the next set of moves.
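Just to make the picture concrete, here is a tiny, hypothetical sketch of those two tricks in Python. It is nothing like DeepMind’s actual code: it plays tic-tac-toe instead of Go, its “memory” is a simple value table instead of a neural network, and every name in it is my own invention for illustration. But it does start from nothing but the rules, learn by playing against itself, and prune its search down to the handful of moves its memory currently rates as most promising.

```python
import random

# The eight ways to make three in a row on a tic-tac-toe board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def play(board, move, player):
    return board[:move] + player + board[move + 1:]

# The "memory": an estimated value for every position seen so far, from X's point of view.
values = {}

def choose_move(board, player, explore=0.0, keep=3):
    """Pick a move, pruning the search to the `keep` moves the memory rates highest."""
    candidates = legal_moves(board)
    if random.random() < explore:                  # occasional random trial and error
        return random.choice(candidates)
    sign = 1 if player == "X" else -1              # O prefers positions that are bad for X
    ranked = sorted(candidates,
                    key=lambda m: sign * values.get(play(board, m, player), 0.0),
                    reverse=True)
    return random.choice(ranked[:keep])            # only the most promising branches survive

def self_play_game(explore=0.3):
    """Play one game against itself, then fold the outcome back into the memory."""
    board, player, history = "." * 9, "X", []
    while True:
        board = play(board, choose_move(board, player, explore), player)
        history.append(board)
        if winner(board) or not legal_moves(board):
            break
        player = "O" if player == "X" else "X"
    result = {"X": 1.0, "O": -1.0}.get(winner(board), 0.0)
    for state in history:                          # nudge every visited position toward the result
        values[state] = values.get(state, 0.0) + 0.1 * (result - values.get(state, 0.0))

# Start with nothing but the rules and learn by trial and error.
for _ in range(20000):
    self_play_game()

print("positions remembered:", len(values))
```

The real system replaces that little value table with a deep neural network and uses a much more careful search, but the loop is the same: search, play, remember, repeat.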
When I want to draw a straight line I use a ruler. It’s a tool to help me complete a task. When I need to calculate something complex, I use a calculator. AlphaGo Zero is another such tool. Computers are better at data mining than people. They don’t get tired or make mistakes, and the mountains of data available today are beyond the scope of any purely human effort. Now it’s looking more and more like computers are also better than people at making the best decision when many, many decisions are possible.
This isn’t scary. It’s exciting. Think about musical notation. A master musician can look at a sheet of music and hear the whole thing in his or her head at a glance. The notation actually frees the mind to see the larger pattern. It’s the same with algebra. The symbolism is powerful: it reduces complicated procedures to almost effortless manipulations. You don’t have to “understand” each step, and that saves time and energy. So computers and thinking machines—what we call artificial intelligence or AI—can save time and energy and free humans to work on the things that humans are best at.
So what are humans best at? What can humans do that our technology can’t touch? I don’t know. I imagine most folks would say things like feel and express emotions. A computer could be programmed to simulate human emotional responses, and in fact I suspect that some existing AI systems could pass the Turing Test and fool users into believing they were interacting with a real person. But that’s not the same thing.
People live in a subjective reality. We experience the world in our own particular way, and since no two people are perfectly alike there are a hell of a lot of realities out there. Computers don’t have that problem. If they use the same algorithms to solve the same problems they ought to get the same results. But we aren’t wired like that. Our internal algorithms are fuzzy and inconsistent. We are easily confused, self-contradicting, and frequently irrational. Artificial intelligence is an obvious boon to humankind: it can take on tasks so big and so important (air traffic control, for example) that we mere mortals would eventually screw them up. I say we get these machines into as many places as possible and free ourselves from things we don’t need to do anymore (I can’t wait for self-driving cars!).
Then we can spend our time being silly, chaotic, and creative. We can love and laugh and goof off. The sooner this happens, the better, in my mind. I realize that Go and chess are mere games and thus not fully representative of the messy complexity of nature. But machines can do a lot to help us with the mess and I say let’s put ’em to work.
Just finished taking a class on Tom Stoppard’s “Arcadia.” Your little matho/sermon here would have made another good speech by one of the characters in Arcadia, which touched upon computer algorithms, fractals, population dynamics, determinism, and a bunch of other topics far beyond my ‘silly and chaotic’ mind.
Keep writing. I bet I’m not the only enjoying this.
only one
I saw that show at OSF many summers ago, and I remember really enjoying it. And thanks, I will keep writing as long as my fingers work, and when they fail I’ll get some app that will type for me.