Where Computers Defeat Humans,and Where They Can’t 计算机的过人与不足之处
2018-01-06
文/安德鲁·麦卡菲 埃里克·布林约尔松 译/蒋威
By Andrew McAfee1 & Erik Brynjolfsson2
AlphaGo, the artificial intelligence system built by the Google subsidiary DeepMind, has defeated the human champion, Lee Se-dol, four games to one in the tournament of the strategy game of Go. Why does this matter? After all, computers surpassed humans in chess in 1997, when IBM’s Deep Blue beat Garry Kasparov. So why is AlphaGo’s victory significant?
由谷歌旗下DeepMind公司创建的人工智能系统AlphaGo在一场围棋比赛中以4比1的成绩击败人类围棋冠军李世石。此事为何意义重大?毕竟,早在1997年IBM公司的“深蓝”击败加里·卡斯帕罗夫之后,计算机在国际象棋领域就已经超越了人类。那么,为何AlphaGo的胜利还不容小觑呢?
[2] Like chess, Go is a hugely complex strategy game in which chance and luck play no role. Two players take turns placing white or black stones on a 19-by-19 grid; when stones are surrounded on all four sides by those of the other color, they are removed from the board, and the player with more surrounded territory and captured stones at the game’s end wins.
[2]围棋和国际象棋一样,也是一种极其复杂的讲究策略的游戏,不可能靠巧合和运气取胜。对弈双方轮番将白色或黑色棋子落于纵横各19道线的网格棋盘上;若棋子四面被另一色棋子围住,则需将其从棋盘上提走,最终棋盘上围占地盘较大、吃子较多的一方获胜。
[3] Unlike the case with chess, however, no human can explain how to play Go at the highest levels. The top players, it turns out, can’t fully access their own knowledge about how they’re able to perform so well. This self-ignorance is common to many human abilities, from driving a car in traffic to recognizing a face. This strange state of affairs was beautifully summarized by the philosopher and scientist Michael Polanyi, who said, “We know more than we can tell.” It’s a phenomenon that has come to be known as “Polanyi’s Paradox.”
[4] Polanyi’s Paradox hasn’t prevented us from using computers to accomplish complicated tasks, like processing payrolls, optimizing flight schedules, routing telephone calls and calculating taxes. But as anyone who’s written a traditional computer program can tell you, automating these activities has required painstaking precision to explain exactly what the computer is supposed to do.
[3]然而,与国际象棋不同的是,没人能解释水平最高的围棋要怎么下。事实上,连顶级棋手本人也不完全清楚为何自己下得如此之好。人类对自身的很多能力都同样缺乏认知,从在车流中驾驶汽车到识别人脸都是如此。哲学家和科学家迈克尔·波兰尼曾对这一怪象进行了精彩总结:“我们知道的比能够言说的要多。”这种现象后来被称为“波兰尼悖论”。
[4]“波兰尼悖论”并没有阻挡我们利用计算机来完成复杂任务,比如处理工资单、优化航班安排、传递电话信号和计算税金。然而,任何一个写过传统计算机程序的人都知道,要实现这些事务的自动化,必须极度精确地指示计算机每一步该怎么做。
[5] This approach to programming computers is severely limited; it can’t be used in many domains, like Go, where we know more than we can tell, or other tasks like recognizing common objects in photos, translating between human languages and diagnosing diseases—all tasks where the rules-based approach to programming has failed badly over the years.
[6] Deep Blue achieved its superhuman performance almost by sheer computing power: It sifted through millions of possible chess moves to determine the optimal move. The problem is that there are many more possible Go games than there are atoms in the universe, so even the fastest computers can’t simulate a meaningful fraction of them. To make matters worse, it’s usually far from clear which possible moves to even start exploring.
[7] What changed? The AlphaGo victories vividly illustrate the power of a new approach in which instead of trying to program smart strategies into a computer, we instead build systems that can learn winning strategies almost entirely on their own, by seeing examples of successes and failures.
[8] Since these systems don’t rely on human knowledge about the task at hand, they’re not limited by the fact that we know more than we can tell.
[5]这种编程方法具有严重的局限性,在很多领域都不适用,比如“我们知道但难以言说”的围棋,或者对照片中常见物体的识别、人类语言间的转译以及疾病的诊断等——多年以来,基于规则的编程方法在这些任务中都惨遭失败。
[6]“深蓝”的超人表现几乎完全是凭借计算能力来实现的:它通过筛选数百万种可能走法来确定最佳招数。但问题是,围棋的走法比宇宙中的原子数还要多,即使是速度最快的电脑,也无法模拟其冰山之一角。更糟糕的是,我们往往连从何处入手都不清楚。
[7] AlphaGo有何不同?在AlphaGo中,我们没有试图将巧妙的策略编入计算机程序中,而是创建了一系列系统,使它们能够在近乎完全自主的情况下,通过观察胜负实例来学习制胜策略。AlphaGo接二连三的胜利便生动地展现了这一新方法的威力。
[8]由于这些系统并不依赖人类对围棋的已有知识,因此并不会受到“波兰尼悖论”的局限。
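The brute-force search described in paragraph [6] is, at its core, minimax: enumerate every legal move, recurse on the opponent’s replies, and keep the move with the best guaranteed outcome. Below is a minimal sketch in Python. It uses a toy subtraction game (players alternately take 1 or 2 stones; whoever takes the last stone wins) rather than chess, since the technique, not the game, is the point; the game and function names are invented for illustration.

```python
def minimax(pile, maximizing):
    """Score a position by exhaustively searching the game tree.

    Toy game: players alternately remove 1 or 2 stones from a pile;
    whoever takes the last stone wins. Returns +1 if the maximizing
    player can force a win from this position, -1 otherwise.
    """
    if pile == 0:
        # The previous player just took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    # Each side assumes the opponent also plays perfectly.
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Pick the move with the best guaranteed outcome, in the spirit of
    Deep Blue's exhaustive search (which added pruning and a hand-tuned
    evaluation function on top)."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, False))
```

From a pile of 4 the winning move is to take 1, leaving the opponent a lost pile of 3; any pile that is a multiple of 3 is lost for the player to move. Chess engines layer depth limits, evaluation functions and alpha-beta pruning onto exactly this recursion; Go’s vastly larger branching factor is what makes the same recipe hopeless there.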
[9] AlphaGo does use simulations and traditional search algorithms to help it decide on some moves, but its real breakthrough is its ability to overcome Polanyi’s Paradox. It did this by figuring out winning strategies for itself, both by example and from experience. The examples came from huge libraries of Go matches between top players amassed over the game’s 2,500-year history. To understand the strategies that led to victory in these games, the system made use of an approach known as deep learning, which has demonstrated remarkable abilities to tease out patterns and understand what’s important in large pools of information.
[9]在某些走法中,AlphaGo的确会使用模拟和传统搜索算法来帮助决策,但其真正的突破在于有能力克服“波兰尼悖论”。AlphaGo通过以往案例和自身经验自行得出制胜策略。这些实例来自2500年围棋史上高手对决的丰富资源。为理解这些对决中使用的制胜策略,系统采用了一种叫作“深度学习”的方法。这种方法在梳理规律、从海量信息中识别要点方面已展现出惊人能力。
[10] Learning in our brains is a process of forming and strengthening connections among neurons. Deep learning systems take an analogous approach, so much so that they used to be called “neural nets.” They set up billions of nodes and connections in software, use “training sets” of examples to strengthen connections among stimuli (a Go game in progress) and responses (the next move), then expose the system to a new stimulus and see what its response is. AlphaGo also played millions of games against itself, using another technique called reinforcement learning to remember the moves and strategies that worked well.
[10]人类大脑的学习是一个在神经元间形成和巩固联结的过程。深度学习系统采用的方法与此极其相似,以至于这种系统一度被称为“神经网络”。系统在软件中设置了数十亿个节点和联结,利用实例组成的“训练集”来强化刺激(正在进行的围棋比赛)与反应(下一步棋)之间的联结,然后让系统接收新的刺激,看其会作出何种反应。AlphaGo还和自己进行了数百万场对决,利用一种叫作“强化学习”的技术来记住管用的招数策略。
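The stimulus-response training loop described in paragraph [10] can be sketched with a single artificial neuron. The example below is a classic perceptron, vastly smaller than any deep network: the weights play the role of connections, and each training example strengthens or weakens them. The task (learning logical OR) and all numbers here are invented for illustration.

```python
# One artificial neuron with two input connections plus a bias.
# Weights act as connection strengths; training nudges them after
# every example, in the stimulus-response spirit of the text.
weights = [0.0, 0.0, 0.0]

def respond(x1, x2):
    """The net's response to a stimulus: fire (1) or stay quiet (0)."""
    signal = weights[0] * x1 + weights[1] * x2 + weights[2]
    return 1 if signal > 0 else 0

# A "training set" of (stimulus, desired response) pairs: logical OR.
training_set = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

for _ in range(20):  # repeated exposure to the examples
    for (x1, x2), target in training_set:
        error = target - respond(x1, x2)
        # Perceptron rule: strengthen connections that should have
        # fired, weaken connections that fired wrongly.
        weights[0] += 0.1 * error * x1
        weights[1] += 0.1 * error * x2
        weights[2] += 0.1 * error
```

After training, `respond` gives the correct answer for all four stimuli. Deep learning stacks billions of such units into many layers and replaces this simple rule with backpropagation and gradient descent, but the shape of the loop (expose, compare, adjust connections) is the same.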
[11] Deep learning and reinforcement learning have both been around for a while, but until recently it was not at all clear how powerful they were, and how far they could be extended. In fact, it’s still not, but applications are improving at a gallop, with no end in sight. And the applications are broad, including speech recognition, credit card fraud detection, and radiology and pathology. Machines can now recognize faces and drive cars, two of the examples that Polanyi himself noted as areas where we know more than we can tell.
[12] We still have a long way to go, but the implications are profound. As when James Watt introduced his steam engine 240 years ago, technology-fueled changes will ripple throughout our economy in the years ahead, but there is no guarantee that everyone will benefit equally. Understanding and addressing the societal challenges brought on by rapid technological progress remain tasks that no machine can do for us.
[11]深度学习和强化学习并非新鲜事物,但直到最近人们才意识到它们的威力以及发展潜能。事实上,人们对其认识依然不充分,但这些技术的应用正在取得飞速进步,而且没有尽头。它们的应用范围非常广泛,包括语音识别、信用卡欺诈侦测,以及放射学和病理学领域的应用。机器如今可以识别人脸和驾驶汽车——这两项技术都曾被波兰尼本人归为“我们知道但难以言说”的领域。
[12]未来的路还很长,但其意义深远。正如240年前詹姆斯·瓦特推出蒸汽机一样,未来由技术推动的变革将会影响整个人类经济,但并不能保证每个人都能从中获得同等的好处。快速的技术进步带来的社会挑战,依然需要人类自己去理解和应对,这方面没有任何机器能为我们代劳。
1麻省理工学院理学学士与硕士,哈佛商学院博士,麻省理工学院数字经济项目负责人,同时任职于哈佛商学院和哈佛大学伯克曼互联网与社会研究中心。与埃里克·布林约尔松合著有《与机器赛跑》及纽约时报畅销书《第二次机器革命》等,同时著有《企业2.0》。
2麻省理工学院数字经济项目负责人,同时任职于麻省理工大学斯隆商学院,美国国家经济研究局研究助理,与安德鲁·麦卡菲合著有《与机器赛跑》及畅销书《第二次机器革命》等。
(译者曾获第五届“《英语世界》杯”翻译大赛二等奖)