How Will AI Change the World?
AI 是怎么改变世界的?
2023-10-03
TED-Ed
自从2017年人工智能商业化爆发,尤其是ChatGPT面世以来,行业内外出现了许多讨论的声音。下面这篇文章节选自TED演讲,可能会让我们用更加理性的态度看待人工智能。
In the coming years, artificial intelligence (AI) is probably going to change your life, and likely the entire world. But people have a hard time agreeing on exactly how.
There’s a big difference between asking a human to do something and giving that as the 1)objective to an AI system. When you ask a human to get you a cup of coffee, you don’t mean this should be their life’s mission, and nothing else in the universe matters.
And the problem with the way we build AI systems now is that we give them a fixed objective. The algorithms require us to 2)specify everything in the objective. And if you say, “Can we fix the acidification of the oceans?” the AI system may answer, “Yeah, you could have a catalytic reaction that does that extremely efficiently, but it consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours.”
So, how do we avoid this problem? You might say, okay, well, just be more careful about specifying the objective: don’t forget the atmospheric oxygen. And then, of course, some side effect of the reaction in the ocean poisons all the fish. Okay, well, I mean, don’t kill the fish either. And then, well, what about the seaweed? Don’t do anything that’s going to cause all the seaweed to die. And on and on and on.
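To make that failure mode concrete, here is a minimal sketch in Python. It is my own toy illustration, not anything from the talk: the plans, field names, and numbers are all invented. The point is only that a side effect the objective never mentions is invisible to the optimizer.

# A toy optimizer with a fixed objective. Only ocean pH appears in the
# objective, so the cost in atmospheric oxygen cannot influence the choice.
plans = [
    {"name": "catalytic deacidification", "ph_restored": 1.0, "oxygen_consumed": 0.25},
    {"name": "gradual emissions cuts",    "ph_restored": 0.6, "oxygen_consumed": 0.00},
]

def fixed_objective(plan):
    # Everything we "specified": fix the acidification. Nothing else matters.
    return plan["ph_restored"]

best = max(plans, key=fixed_objective)
print(best["name"])  # -> catalytic deacidification, despite using 25% of the oxygen

Adding an oxygen term to the objective just moves the problem to the next unstated side effect, which is exactly the “on and on and on” above.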
And the reason that we don’t have to do that with humans is that humans often know that they don’t know all the things that we care about. For example, if you ask a human to get you a cup of coffee, and you happen to be in the Hotel George Sand in Paris, where the coffee is 13 euros a cup, it’s entirely 3)reasonable to come back and say, “Well, it’s 13 euros, are you sure you want it? Or I could go next door and get one?” And it’s a perfectly normal thing for a person to do. Another example is to ask, “I’m going to repaint your house; is it okay if I take off the drainpipes and then put them back?” We don’t think of this as a terribly sophisticated capability, but AI systems don’t have it because, the way we build them now, they have to know the full objective. If we build systems that know that they don’t know what the objective is, then they start to exhibit these behaviors, like asking permission before getting rid of all the oxygen in the atmosphere.
In all these senses, control over the AI system comes from the machine’s uncertainty about what the true objective is. It’s when you build machines that believe with certainty that they have the objective that you get this sort of psychopathic behavior. And I think we see the same thing in humans.
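A minimal sketch of that idea, again my own toy model loosely in the spirit of Russell’s point rather than his actual algorithm: the machine holds competing hypotheses about how much the human values oxygen, and when a plausible hypothesis says its best plan is worse than doing nothing, asking becomes the rational move. All weights and probabilities below are invented.

# Objective uncertainty as (weight, probability) hypotheses about how
# strongly the human penalizes oxygen loss.
plans = [
    {"name": "do nothing",                "ph_restored": 0.0, "oxygen_consumed": 0.00},
    {"name": "catalytic deacidification", "ph_restored": 1.0, "oxygen_consumed": 0.25},
]
hypotheses = [(0.5, 0.5), (5.0, 0.5)]  # "oxygen barely matters" vs "oxygen matters a lot"

def utility(plan, weight):
    return plan["ph_restored"] - weight * plan["oxygen_consumed"]

def decide(plans, hypotheses):
    # Best plan by expected utility under the machine's uncertainty.
    best = max(plans, key=lambda p: sum(prob * utility(p, w) for w, prob in hypotheses))
    # If some plausible objective ranks this plan below doing nothing, a cheap
    # question resolves the uncertainty before anything irreversible happens.
    if any(utility(best, w) < utility(plans[0], w) for w, _ in hypotheses):
        return "ask permission: " + best["name"]
    return "act: " + best["name"]

print(decide(plans, hypotheses))  # -> ask permission: catalytic deacidification

Collapse the hypotheses to a single certain weight and the ask branch never fires: certainty about the objective is precisely what removes the deference.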
There’s an interesting story that E.M. Forster wrote, “The Machine Stops,” where everyone is entirely machine-dependent. The story is really about the fact that if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it. You can see “WALL-E” actually as a modern version, where everyone is enfeebled and infantilized by the machine, and that hasn’t been possible up to now.
We put a lot of our civilization into books, but the books can’t run it for us. And so we always have to teach the next generation. If you work it out, it’s about a trillion person years of teaching and learning and an unbroken chain that goes back tens of thousands of generations. What happens if that chain breaks?
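The “trillion person years” is a back-of-envelope figure; one way to reproduce it, under my assumptions of the common demographic estimate that roughly 100 billion people have ever been born and a rough figure of ten years of concentrated teaching and learning per person:

people_ever_lived = 1e11  # common demographic estimate: ~100 billion ever born (assumption)
years_teaching_and_learning = 10  # assumed rough figure per person
print(people_ever_lived * years_teaching_and_learning)  # ~1e12, about a trillion person-years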
I think that’s something we have to understand as AI moves forward. You’re not going to be able to 4)pinpoint the actual date of arrival of general purpose AI; it isn’t a single day. It’s also not the case that it’s all or nothing. The impact is going to be increasing. So with every advance in AI, it significantly expands the range of tasks that machines can take on.
So in that sense, I think most experts say that by the end of the century, we’re very, very likely to have general purpose AI. The median is something around 2045. I’m a little more on the conservative side. I think the problem is harder than we think.
I like what John McCarthy, who was one of the founders of AI, said when he was asked this question: somewhere between 5 and 500 years. And we’re going to need, I think, several Einsteins to make it happen.
1) objective n. 目标 2) specify v. 明确规定
3) reasonable adj. 明智的 4) pinpoint v. 明确指出
词组加油站
side effect 副作用
care about 关心
ask permission 取得许可
在将来的岁月里,人工智能极有可能会改变你的生活,甚至有可能改变全世界。但人们对于这种改变的呈现方式结论不一。
要求一个人做某件事与将其作为目标交给人工智能系统是有很大区别的。当你拜托一个人帮你拿杯咖啡时,你并不是在让这个人奉它为人生使命,以致宇宙间再也没有更重要的事了。
而我们现在构建人工智能系统的问题在于,我们给了它们一个固定目标。算法要求我们明确规定目标里的一切。如果你说:“我们能解决海洋的酸化问题吗?”人工智能系统可能会回答:“没问题,可以用一种非常高效的催化反应来实现,但它会消耗大气层里四分之一的氧气,这显然会导致我们所有人在几个小时的时间里缓慢而痛苦地死去。”
那,我们该如何避免这种问题呢?你可能会说,好吧,那就把目标规定得更仔细一些:别忘了大气层里的氧气。然后,当然,海洋里那种反应的某个副作用又会毒死所有的鱼。好吧,那也别毒死鱼。那么,海藻呢?也别做任何会导致海藻全部死亡的事。如此没完没了。
而我们对人类不必这样做,是因为人类通常知道,自己并不了解我们在意的所有事情。
例如,有人拜托你帮忙买杯咖啡,而你刚好在巴黎的乔治·桑酒店,那里一杯咖啡要13欧元,你完全有理由回去问一句:“喂,这里的咖啡要13欧元,你确定还要吗?要不我去隔壁帮你买一杯?”这对人来讲是再正常不过的事。又如,问一句:“我要重新粉刷你的房子了,我可以先把排水管拆下来,之后再装回去吗?”
我们并不觉得这是一种特别复杂的能力,但人工智能系统没有这种能力,因为按照我们现在的构建方法,它们必须知道完整的目标。如果我们构建的系统明白自己并不知道目标是什么,它们就会开始表现出这样的行为,比如在除掉大气层里的氧气之前先征求许可。
从这些意义上说,对人工智能系统的控制,来自机器对真正目标的不确定。而当你构建的机器确信自己已经掌握了目标时,才会出现这种精神错乱式的行为。我认为在人类身上,我们也能看到同样的情况。
E.M. 福斯特写过一个引人深思的故事(“The Machine Stops”),故事里的人们都完全依赖机器。其中的寓意是,如果你把文明的管理权交给了机器,你就会失去自己去理解文明、并教会下一代去理解文明的动力。我们可以将《机器人总动员》视为它的现代版:人们被机器弄得衰弱而幼儿化。而这在以前一直是不可能发生的。
我们把大量文明写入书籍,但书籍无法替我们管理文明。所以我们必须一直教导下一代。算下来,这大约是一万亿“人年”的教与学,是一条回溯数万个世代、绵延不绝的链条。这条链如果断了,会发生什么?
随着人工智能的发展,我认为这是我们必须了解的事情。通用型人工智能真正到来的日期无法精准确定,它不会是某一天突然出现,也不是非有即无的两个极端。它的影响将是与日俱增的。所以人工智能每前进一步,都会显著扩展它所能完成的任务范围。
这样看来,我觉得大多数专家都认为,到本世纪末我们极有可能拥有通用型人工智能。预测的中位数在2045年左右。我则稍微保守一些,我认为这个问题比我们想象的还要难。
我喜欢人工智能创始人之一约翰·麦卡锡对这个问题的回答:他说,大概在5到500年之间。而且我觉得,这还需要几位爱因斯坦才能实现。