Tiejun HUANG

About the author
Dr. Tiejun Huang is a professor at the School of Computer Science, Peking University, and the Dean of the Beijing Academy of Artificial Intelligence (BAAI). He is a recipient of the National Science Fund for Distinguished Young Scholars, the Changjiang Distinguished Professorship, and the National Talent Program for Technological Innovation Leadership.

His research focuses on visual information processing and brain-inspired intelligence. He has made important contributions to efficient video coding standards and visual big data analysis and processing frameworks. He proposed the spike vision model and developed ultra-high-speed spike vision chips and systems.

He has received one Second Prize of the National Award for Technological Invention and two Second Prizes of the National Award for Progress in Science and Technology. He has published over 300 academic papers and, as primary drafter, participated in formulating over 20 national, international, and Institute of Electrical and Electronics Engineers (IEEE) standards. He holds over 100 granted Chinese and international invention patents.

He is a fellow of the China Computer Federation, Chinese Association for Artificial Intelligence, and China Society of Image and Graphics.


The following excerpt is a translation of Tiejun Huang’s 2023 BAAI Conference AI Safety and Alignment Forum Closing Keynote (黄铁军:如何构建安全AI,我们知之甚少,讨论无法闭幕). 

▶ Cite This Work Huang, Tiejun. "Closing Keynote." AI Safety and Alignment Forum, BAAI Conference, Beijing, 2023. Translated by Concordia AI, Aug 2023. 


This morning, Mr. Sam Altman, the young CEO of OpenAI, kicked off the most closely watched forum of this BAAI Conference, "AI Safety and Alignment." This fascinating forum concluded with a speech by Professor Geoffrey Hinton, known as the "Godfather of Deep Learning." Hinton, now approaching eighty, and Sam, in his thirties, both showed us a future without definite answers.


In general, AI is becoming more and more powerful, and the risks are evident and growing. That is our reality today. How do we build safe AI? We still know very little about this. We can draw on historical experience in managing drugs and nuclear weapons. Academician Andrew Yao discussed quantum computing: even that completely unknowable world can be regulated to some extent. Highly complex AI systems, however, produce outcomes that are hard to predict. Are traditional risk-testing methods effective for explaining their mechanisms or for understanding their generalization abilities? These explorations have only just begun. We face entirely new challenges, and existing experience and methods may not be able to solve these new problems. In particular, Professor Stuart Russell and Professor Hinton both raised the question of whether an AI with its own goals would serve those goals or serve humanity. This is an open question that requires careful thought.

Today, many people believe that "general artificial intelligence" refers to AI of ever-broader capability, and we are working excitedly to create it. In the field of AI, however, the accurate term for this is Artificial General Intelligence (AGI), not General Artificial Intelligence (GAI).

AGI refers to an AI that matches human-level performance in every aspect of human intelligence, adaptively responds to challenges from the external environment, and can accomplish any task a human can. In short, it is a "superhuman": only intelligence that surpasses human capabilities truly deserves the name AGI. Terms such as autonomous intelligence, superhuman intelligence, and strong AI all refer to this kind of intelligence that comprehensively surpasses humans. Can we create such an AI? As early as 2015, I believed the answer was yes. Hinton pointed out in this forum that we do not necessarily have to use digital methods and could even use analog devices to achieve this goal. In a popular science article published on January 7, 2015, I proposed that, with new analog device materials, we could create such artificial intelligence around 2045. Almost simultaneously, at an AGI conference held January 2 to 5 in Puerto Rico and organized by Professor Max Tegmark, experts made predictions about the timeline for achieving AGI, and their opinions varied widely. Half of the attending experts believed AGI could be achieved before 2045; the other half believed it would come after 2045, and some thought it would never be achieved. Previously, many people considered this goal "too sci-fi." With the emergence of GPT-4, however, views have changed.

Should we create this comprehensively "superhuman" AGI? And what would be the consequences if we did? In fact, Ashby's famous Law of Requisite Variety in cybernetics gave us a conclusion six or seven decades ago: any effective control system must be as complex as the system it controls. Professor Hinton likewise pointed out that a simple system cannot control a system more complex than itself. As he put it, if frogs invented humans, could they control humans? If humans invent an AGI more powerful than themselves, then in theory it is simply impossible for humans to control it. Whatever is more powerful than us will become the controller of this world. At present, our enthusiasm for developing AGI is running high, driven by an investment boom. But if our goal is truly to develop an AGI that is more powerful than us and that would control us completely, should we proceed? "To be or not to be?" Tegmark laid out the various possibilities in his book Life 3.0. His most important point is this: it will be a more powerful AGI, not us, that determines the fate of the world. Should we create such artificial intelligence?

At the moment, we are in an uncertain phase, which I call "Near AGI." Anything certain can be managed; what we must fear is uncertainty. Yet today we are in a state of uncertainty. Several years ago, the famous AlphaGo displayed better decision-making than any human. Go is an excellent medium for demonstrating decision-making ability, and in complex situations AlphaGo's decision-making is stronger than that of 9-dan professionals, meaning it is far better than almost all of us. I invented a spike vision chip called "Electric Eye" that perceives 1,000 times faster than a human; to robots, humans move as slowly as crawling insects. Humans would find it very hard to stand a chance against such agents. GPT-4 knows orders of magnitude more than any human. How many books can a person read in a lifetime? It is often said no more than 10,000. By contrast, the data GPT-4 holds is nearly complete, and if it is not complete now, it will be within three years. Although we do not consider GPT-4 a true AGI, its knowledge base and its ability to integrate that knowledge are already formidable. Is such a "Near AGI" better than us? Is it more intelligent than we are? None of the guests at today's forum gave a definite answer in their talks. None explicitly said "no," "rest assured," or "today's AI systems are not as powerful as humans." That is the problem: we do not know for sure whether AI has already overtaken us, we do not know when it will, and the problem is in a completely uncontrollable state. If we could confront risk with the same enthusiasm we bring to investment, we would at least have some chance of shaping the future.


But, do you believe humans can do it? I don't know.


Chinese Perspectives on Existential Risks from AI
