Twenty-five years ago, Deep Blue achieved an iconic victory over Kasparov

Twenty-five years ago, in February 1996, a computer defeated the reigning world chess champion for the first time. On February 10, Garry Kasparov lost the opening game to IBM's computer, Deep Blue. Although he went on to win the match over the following days (4–2), a year later he lost the rematch (3½–2½) to an improved version of the machine. A machine could now be regarded as world champion.
The match had a huge media impact. In a sense, it was a symbolic defeat of all humankind in a contest between natural intelligence and its creation, artificial intelligence.

IBM's Deep Blue. A computer similar to this one defeated world chess champion Garry Kasparov. Wikimedia Commons / James, CC BY-SA
The path that Galileo (with Kepler and Copernicus) began by expelling us from the center of the universe, and that Darwin continued by removing us from the center of creation, delivered with this defeat a new blow to the self-esteem of the human species. Nor can we any longer base our sense of uniqueness on our wonderful brain. With the tens of billions of synapses in the encephalon, we consider it the most complex system in the universe. It is the source of adaptive, complex and original behaviors, capable of generating wonderful feelings, play and creativity. And yet, in an activity like chess, a paradigm of those very capabilities, a machine beats us.
Maybe we can save face if we look at how Deep Blue worked. It can be argued that this was not intelligence but brute force: immense computational capacity and a huge database of games. Indeed, IBM's strategy in that system was to assess the suitability of millions of possible moves and choose the best one. To build the mathematical function used to evaluate that suitability, databases of thousands of games were used in a process of weighting and adjustment guided by several chess grandmasters, human ones, of course.
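The idea described above, searching many moves ahead and scoring positions with a hand-weighted evaluation function, can be sketched in a few lines. This is an illustrative toy, not Deep Blue's actual code: the game tree, features and weights here are all invented for the example.

```python
# Illustrative sketch (NOT Deep Blue's real implementation): minimax search
# over a toy game tree, scoring leaf positions with a weighted sum of
# hand-picked features, the kind of function tuned against master games.

def evaluate(position, weights):
    """Score a position as a weighted sum of its features."""
    return sum(w * f for w, f in zip(weights, position["features"]))

def minimax(position, depth, maximizing, weights):
    """Explore moves to a fixed depth and return the best reachable score."""
    if depth == 0 or not position["moves"]:
        return evaluate(position, weights)
    scores = [minimax(m, depth - 1, not maximizing, weights)
              for m in position["moves"]]
    return max(scores) if maximizing else min(scores)

# Toy tree: two candidate moves, each with two possible replies.
leaf = lambda *feats: {"features": feats, "moves": []}
root = {"features": (0, 0), "moves": [
    {"features": (0, 0), "moves": [leaf(1, 2), leaf(3, 0)]},
    {"features": (0, 0), "moves": [leaf(0, 1), leaf(2, 2)]},
]}

weights = (1.0, 0.5)  # feature weights, adjusted from example games
print(minimax(root, 2, True, weights))  # → 2.0
```

Deep Blue's real search examined on the order of hundreds of millions of positions per second; the principle, however, is the same depth-limited search guided by a tuned evaluation.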
Defeated in our favorite games
The years since have shown that hope to be vain. Computer systems have been defeating us in a multitude of activities we would consider genuinely human. In 2011, Watson (another IBM creation) was able to interpret natural language and access information in real time to win a television quiz show, namely Jeopardy!, the American analogue of the Spanish Saber y Ganar (Know and Win).
Go, a game far harder to compute than chess, fell to the machines in 2016 thanks to Google's AlphaGo. Here one can no longer speak of mere brute force: AlphaGo incorporates artificial neural networks, systems that learn from examples largely autonomously, without requiring detailed hand-tuning by specialists.
In poker, too, a game of asymmetric information, there is an artificial intelligence (DeepStack) that has defeated all the professional players it has faced.
All these victories stand as symbols of the enormous development the field of artificial intelligence has undergone in these 25 years. That development is not limited to media events; it has been producing a multitude of products that slip into our daily lives. We could say that some artificial intelligences know us better than our own mothers do.
Spotify, Netflix and Amazon can recommend music, movies and books to us, and they get our tastes right spectacularly often. We just have to live with those intelligences for a while so that they end up getting to know us, the same thing a human being would need. Behind the generic name "artificial intelligence" lies a set of machine learning algorithms that work by adjusting their parameters from examples, something we could translate as "learning from experience".
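"Adjusting parameters from examples" can be made concrete with a minimal sketch. The data and learning rate below are invented for illustration; the point is only that a single parameter, initially ignorant, converges toward the rule hidden in the examples through repeated small corrections.

```python
# Minimal sketch of "learning from experience": a one-parameter model
# adjusts its weight from example (input, output) pairs via gradient descent.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hidden rule: y = 2x

w = 0.0               # the parameter, initially knowing nothing
lr = 0.05             # learning rate: size of each correction
for _ in range(200):  # repeated exposure to the examples
    for x, y in examples:
        pred = w * x
        w -= lr * (pred - y) * x  # nudge w to reduce the error

print(round(w, 3))  # ≈ 2.0: the pattern was extracted from the data
```

Real recommender systems adjust millions of parameters in the same spirit, which is why a period of "living with" the system (accumulating examples of our choices) is what makes its recommendations accurate.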
This way of learning extracts characteristics from the data it learns from. That is also why, if you set an AI to learn from uncurated human messages (from social media, for example), it is likely to end up sexist or racist. The algorithm does not add anything that was not already there; it simply extracts that knowledge from its dealings with that community and reproduces its biases.
Things can be even worse if the selection of training data introduces additional biases. That is why it is especially important for developers to be aware of and avoid unwanted biases, or, going further, to include active avoidance strategies, for example in health care algorithms.
Marking the quarter century since that victory helps us realize that it did indeed point to an unstoppable path on which we are now fully immersed. It is a path full of successes and opportunities, but not without risks that need to be addressed.
Joaquín Sevilla, Director of the UPNA-Laboral Kutxa Chair of Scientific Culture and Professor of Electronic Technology, Public University of Navarra
This article was originally published in The Conversation. Read the original.
