Ya-Qin ZHANG
张亚勤



About the author: Dr. Ya-Qin Zhang is an academician of the Chinese Academy of Engineering, Chair Professor of Intelligent Science at Tsinghua University, director of the Tsinghua Institute for AI Industry Research, fellow of the American Academy of Arts and Sciences, foreign fellow of the Australian Academy of Technology Sciences and Engineering, fellow of the U.S. National Academy of Inventors, fellow of the International Eurasian Academy of Sciences, IEEE Fellow, and CAAI Fellow. He previously served as President of Baidu, Corporate Senior Vice President of Microsoft, Chairman of the Microsoft China R&D Group, Managing Director of Microsoft Research Asia, and Chairman of Microsoft (China) Co., Ltd. As a world-class scientist and entrepreneur in digital video and artificial intelligence, he holds over 60 U.S. patents, has published over 500 academic papers, and has authored 11 books. His inventions in image and video compression and transmission technologies have been adopted by international standards bodies and are widely used in HDTV, Internet video, multimedia retrieval, mobile video, and image databases.


关于作者张亚勤,中国工程院院士、清华大学“智能科学”讲席教授、智能产业研究院院长、美国艺术与科学院院士、澳大利亚国家工程院外籍院士、美国国家发明家科学院院士、国际欧亚科学院院士、IEEE会士、CAAI会士。曾任百度公司总裁,微软公司全球资深副总裁、微软中国研发集团主席、微软亚洲研究院院长及微软(中国)有限公司董事长。作为数字视频和人工智能领域的世界级科学家和企业家,拥有60多项美国专利,发表500多篇学术论文,并出版11本专著。发明的多项图像视频压缩和传输技术被国际标准采用,广泛应用于高清电视、互联网视频、多媒体检索、移动视频和图像数据库领域。







The following excerpts are translated from an interview with Ya-Qin Zhang titled "Embracing AI by Prioritizing Values over Technology" (张亚勤:将价值观放在技术之上拥抱AI), originally published in Internet Weekly, Issue 15, 2023.

▶ Cite Our Translation

Concordia AI. “Ya-qin Zhang — Chinese Perspectives on AI Safety.” Chineseperspectives.ai, 29 Mar. 2024, chineseperspectives.ai/Ya-qin-Zhang.

▶ Cite This Work

张亚勤(2023). “将价值观放在技术之上拥抱AI”. 互联网周刊, 2023(15).





Selected Excerpts

On AI Alignment:


I started talking about this issue 20 years ago. Whenever AI makes progress, the issue resurfaces. This is natural: the greater a technology's capability becomes, the greater its risks. In the early days, not many people believed in the capability of AI, and many thought it was hot air. Now the AI scenarios and content of the science fiction movies of that era seem closer and closer to reality, becoming real step by step.
原文

关于AI对齐:


这一问题我20年前就开始谈了。每当AI有进展,这个问题就会回来。这也自然,任何技术能力越来越强时,它的风险也会越来越大。早期人们不太相信AI的能力,好多人觉得像是吹牛。现在看起来,当年科幻电影里的AI场景和内容,现在已越来越近了,正一步步成为现实。


Humans have two kinds of wisdom: inventing technologies and controlling the direction those technologies take. The two need to be balanced, but currently the latter lags slightly behind the former. To solve the problem of aligning AI with human values, developers first need to build technology and research on a foundation of alignment with those values. They must enable machines to understand human values and adhere to them. The word "alignment" is apt. This is not just an ethical issue, but also a matter of practical implementation. Developers and researchers must devote themselves to fulfilling the alignment imperative, not just pursuing capabilities while neglecting alignment. This is critical. There is now a new discipline called AI Safety Research,¹ analogous to Rocket Safety Engineering in aerospace: just as rocketry requires specialized safety engineers, AI needs experts devoted to studying its safety issues.
人类拥有两种智慧:发明技术和控制技术走向,二者要均衡,目前后者稍微落后了些。要解决AI和人类价值观对齐问题,第一,做技术的人要把技术和研究放到对齐上面,先要让机器理解人的价值,追随人的价值。对齐(alignment)这个词很好,其实这不仅仅是伦理的问题,还有如何实现的问题。做技术和研究的人要致力于实现对齐任务,不能只开发能力,不着力对齐问题,这是相当重要的问题。现在有门新学科AI Security Research,即AI安全研究,就像航天有门学科Rocket Safety Engineering,专门研究火箭安全工程。AI也需要有人专门研究安全问题。


Secondly, some basic principles need to be formulated and adhered to. In the 1950s, the American science fiction writer Asimov defined three laws governing robots and humans. In 2017, a group of scientists likewise formulated the "Asilomar AI Principles," which I regard as the basic principles for humans and machines. Machines are always subordinates, and humans are the masters. Whether it is a machine, software, or a robot, it is subordinate, and its master can be a person, a company, or another of our existing entities.
第二,要制定和坚持一些基本原则。20世纪50年代美国科幻作家阿西莫夫定义了机器人和人类的三原则。2017年一批科学家又制定了《阿西洛马人工智能23条原则》(Asilomar AI Principles),我认为这是人和机器的基本原则。机器永远是从属体,人类是主体。不管机器、软件,或是机器人也好,它是从属的,其主体可以是人,也可以是公司,或者我们目前的实体。



Thirdly, AI cannot have its own independent ethical and value system. It serves the human system: its values are human values, and its ethical system is the human ethical system. We want AI to obey this system and realize it. In addition, AI needs to be trustworthy, secure, and controllable, which is also very important.
第三,AI不能有自己独立的伦理和价值系统。它服务人的系统,它的价值就是人的价值,它的伦理体系就是人的伦理体系。我们要让它服从这一体系,实现这一体系。此外AI要可信任,具备安全性和可控性,这点也非常重要。



These issues involve technical, ethical, moral, and legal aspects, and the work being done to address them is still limited at present. Europe has just signed a new AI regulation, the Cyberspace Administration of China has drafted the "Regulations on the Management of Generative Artificial Intelligence Services (Draft for Comments)," and the Ministry of Industry and Information Technology has introduced the "Measures for the Administration of Data Security in the Field of Industry and Information Technology (Trial)." Only when AI technology R&D advances hand in hand with ethics, morality, and legislative supervision can AI develop in a truly healthy way.
这类问题涉及技术、伦理道德和法律层面。当前人们在这方面所做工作还不多。最近欧洲刚签署一个新规则,中国网信办起草了《生成式人工智能服务管理办法(征求意见稿)》,工信部出台《工业和信息化领域数据安全管理办法(试行)》。技术研发、道德伦理、立法监管等合力并进,才能让AI发展更健康。


We welcome government regulation and oversight, as regulation can help steer AI in the right direction. Even if progress may be slower, regulation is still needed to ensure we are moving in the right direction. Sometimes government policies and regulations are introduced relatively slowly, just as with the internet: in the beginning there was wild growth, and only after development reached a certain level could regulations be introduced to standardize things. Hence, as I said before, the two kinds of wisdom must be balanced - technology runs ahead, and regulation brings order. In the information society technology develops quickly, while people's mindsets and policy and legal systems still move at the rhythm of the industrial age, so they naturally lag behind.
我们欢迎政府立法监管,监管才能使AI方向正确。哪怕走得慢一点,也需要监管,以确保方向正确。有时政府政策法规出台相对慢些,就像互联网一样,刚开始野蛮生长,发展到一定程度,才能出台法规加以规范。因此,正如我之前所说的两种智慧要平衡,技术往前跑,监管来规范。信息社会技术发展快,人的意识形态、政策法律体系仍然按工业时代的节奏,自然会滞后些。



On being vigilant about the risks technology might bring:


At Tsinghua University's Institute for Artificial Intelligence Industry Research (AIR), I emphasize that human values, value systems, and responsibility must be prioritized above technology in research and development. Aligning with this philosophy, AIR has chosen three key directions where AI will have huge influence in the next 5-10 years, and all research projects (including corporate partnerships) relate to these focus areas:
  1. Smart Internet of Things (IoT), focused on areas such as green computing and deploying small models at the edge for energy conservation and emissions reduction, helping achieve the dual carbon goals (carbon peaking and carbon neutrality). IoT has many potential applications, but AIR chooses to center its work on the dual carbon goals.
  2. Smart transportation, robots, and self-driving, with safety as the top priority. Self-driving can increase safety by more than a factor of 10, since 90% of traffic accidents currently involve human error. AI driving can eliminate human mistakes and greatly improve safety, while enabling low carbon emissions and seamless integration of various applications at high efficiency.
  3. Smart healthcare, using AI for new drug R&D and biotechnology in the service of human life and health.

关于警惕科技所带来的风险:


我在清华大学智能产业研究院(AIR)强调,做研究或者做技术,一定要把人的价值、价值观和责任放在技术之上。因此,AIR选择了在未来五年十年AI具有巨大影响力的三个方向,研究课题(包括与公司合作项目)都与这一理念相关。一是智慧物联,面向双碳(碳达峰碳中和)的绿色计算、小模型部署到端等,节能减排。物联网应用广泛,可以做许多东西,但AIR选择围绕双碳做文章。二是智慧交通,机器人和无人驾驶,安全第一。无人驾驶安全性增加10倍以上,现在90%交通事故都是人为事故。AI驾驶可以排除人工驾驶中的失误,大大增加安全性。同时,低碳节能减排,各种应用无缝衔接,效率高。三是智慧医疗,AI新药研发、生物技术,服务人的生命健康。



Every technology is like nuclear technology: if humankind had a choice, perhaps it would be best not to have discovered radioactive materials like uranium and radium. Nuclear magnetic resonance benefits humanity through medical applications, while nuclear weapons can destroy it. Similarly, early chemical and biological advances enabled biological and chemical weapons before the major world powers reached a consensus to ban them by law. After nuclear weapons emerged, corresponding legislation banning them soon followed. Now, with gene editing technology, countries worldwide have enacted clear legislation prohibiting its use to alter species, especially humans.
每项技术就如核技术,如果人类有选择,也许最好不去找像铀或镭这些放射性物质。核磁共振医学应用造福人类,而核武器却可以毁灭人类。像化学和生物,早先有生物战和化学武器。后来因为世界大国之间达成共识,立法禁止使用生物化学武器。核武器出现后,也就相应立法禁核。现在的基因编辑技术,世界各国也有明确的立法,不能用于改变物种,尤其是针对人类。


There needs to be some clear, basic rules to regulate and constrain technological development and application. AI, especially now at the level of large language models, has generative capabilities and unpredictability - people cannot completely foresee what it can generate.
需要有一些清晰的基本规则,来规范和约束技术的发展与应用。AI,尤其现在到大语言模型程度之后,由于具备生成式能力,具有不可预测性,它能生成什么人们并不能完全预知。



In the past, AI mainly helped people with analysis, decision-making, and prediction, but now it is entirely capable of creating new things. Without control, this may not be a good thing. Caution should be exercised when applying AI, especially in banking, finance, or systems with critical missions. For some AI capabilities, humanity still does not understand the causal relationships: for example, problems of bounded intelligence, how AI achieves intelligence, the black-box problem, and the transparency problem.
过去AI工作主要帮助人做分析、决策和预测,但现在它完全可能创造出新的东西,不加控制,未必是好事。尤其是银行金融或者具有关键使命的系统,应用AI时,小心保守些为好。AI有些能力的因果关系目前人类还不清楚,比如说智能有限问题、AI怎么达到智能、黑盒子问题或者透明性问题,人们并不明晰它的因果关系。


Sometimes we understand the "what" but not the "why" - we may be able to achieve things grasping only 30-40% of the mechanics involved, observing phenomena without comprehending the underlying reasons. In fact, understanding the "why" is critical. Since the inner workings of AI are still opaque, we must exercise abundant caution when applying AI to physical systems or mission-critical applications.
我们有时了解what而不太清楚why,可能了解what百分之三四十就可以做,知其然不知其所以然。其实Why很重要,我们现在不清楚AI Why的情况下,所以在应用到物理系统或关键使命体系时,更得小心保守。




On the importance and urgency of keeping risks from AI under control: 


I feel the importance of these issues growing, with increasing discussion of such topics. Take the Davos Forum as an example: when I first participated 10+ years ago, discussions already centered on AI's profound changes to, and far-reaching impact on, the fourth industrial revolution and social transformation. But starting in 2018, the Davos agenda began focusing mainly on the risks posed by AI and how to govern them. At the recently concluded Tianjin Summer Davos Forum, 80% of the agenda addressed controlling risks. In earlier Davos years, explorations of AI's capabilities dominated; people were unconvinced that AI had any special functions, believing it was just software, able to do only a limited set of things and unlikely to profoundly disrupt industry.

关于控制技术风险的重要性和迫切性: 


现在越来越感受到问题的重要性,越来越多地讨论这些类问题。以达沃斯论坛为例,我十多年前参加达沃斯论坛的时候就在讨论AI对第四次工业革命或者社会变革的深刻变化和深远影响。但从2018年开始,达沃斯论坛的议题开始主要谈论AI发展所带来的风险及其管控,在刚刚闭幕的天津夏季达沃斯论坛上,80%议题都谈风险控制。早年我参加达沃斯论坛时,大家多在探讨AI的能力,对它具有什么特别功能并不确信,认为人工智能就是个软件,所能做的事情有限,不可能对产业有什么深远影响。



Later, AI-related topics multiplied. First came data, with worries that control of big data would create data-monopoly giants. In 2019, I attended Davos as Baidu's president, when the CEOs of some 20-30 major companies discussed the theme that big companies carry big responsibilities. Especially after the Facebook data-leak and election-manipulation scandal, people felt that the more data a company controlled and the stronger its AI capabilities, the more responsibility it needed to take on. In recent years, risk awareness has grown; previously, Chinese media rarely spotlighted AI risks. People like you rarely asked me such questions, usually asking instead how AI transforms industries, where the investment opportunities lie, how Chinese and American companies compete, and so on.
之后人们越来越多地谈论AI相关话题。先是谈数据,担心掌握大数据会形成数据垄断大公司。我2019年以百度总裁的身份参加达沃斯论坛。当时有二三十家大公司CEO都在讨论大公司大责任大担当的话题。特别是当时脸书(Facebook)数据泄露和操纵选举事件发生后,人们觉得公司掌握数据越多,AI能力越强,需要承担的责任也更大。这几年大家的风险意识增强。之前国内媒体很少谈论AI风险,像你这样问我的人很少,一般都是关注AI怎么改变产业,投资机会在哪里,中国和美国企业如何竞争等等。



We embrace AI and hope it develops well, but its possible risks now attract increasing attention and concern. Recently, two famous public statements have addressed AI risks. One was initiated by Tesla founder and SpaceX CEO Elon Musk together with Max Tegmark, founder of the Future of Life Institute and author of Life 3.0, calling for a pause of at least six months on training AI models more powerful than GPT-4.
我们拥抱AI,希望它走得更好。但目前对于AI可能的风险问题,已经引起越来越多的人关注和担心。最近关于AI风险有两份著名的公开声明,一份由特斯拉创始人、SpaceX公司CEO埃隆马斯克(Elon Musk)和美国未来生命研究所(Future of Life Institute)创始人、《生命3.0》作者迈克斯泰格马克(Max Tegmark)发起的,呼吁暂停训练比GPT4更强大的AI模型至少6个月。



The other statement was initiated by Cambridge University assistant professor David Krueger and signed by over 350 top researchers, engineers, and CEOs. Signatories included University of Toronto professor Geoffrey Hinton; deep learning pioneer, Turing Award winner, and University of Montreal professor Yoshua Bengio; Bill Gates; OpenAI CEO Sam Altman; Google DeepMind CEO Demis Hassabis; and other experts in related fields. Their 22-word declaration stated: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." ...
另一份由剑桥大学助理教授戴维·克鲁格(David Krueger)发起,包括多伦多大学教授杰弗里·辛顿(Geoffrey Hinton)和深度学习之父、图灵奖得主、蒙特利尔大学教授约书亚·本吉奥(Yoshua Bengio),比尔·盖茨、OpenAI CEO(主导ChatGPT)山姆·奥特曼(Sam Altman)、谷歌 DeepMind 首席执行官戴米斯·哈萨比斯(Demis Hassabis)等多位顶级研究人员、工程师和CEO,就AI对人类可能构成的威胁发出最新警告,超过350位相关领域人员共同签署了这份只有22个单词的声明:“减轻AI带来的灭绝风险,应该与流行病和核战争等其他社会规模的风险,一起成为全球优先事项。”...


I did not sign the first statement, because I believe research is difficult to halt entirely. One company may pause R&D, but others may continue; or one country may pause research but cannot prevent another from progressing. I signed the second statement, because without risk awareness, AI researchers may not pay sufficient attention. Uncontrolled AI research could lead to disastrous risks. With risk awareness, government, companies, research institutes, and all parties in society will be vigilant at all times and strengthen supervision, just like for nuclear weapons and COVID. This will put technology on the right path, to achieve a balance between development and risk mitigation.
第一份声明我没签名,因为我觉得科研很难停得下来。一个企业可以暂停研发,其他企业未必会暂停;或者一个国家可以暂停相关研究,但并不能阻止另一国家继续。第二份声明我签名了,我认为做人工智能研究要是没有这样的风险意识,就不会重视,如果AI研究一旦失控就会带来灾难性的风险。有了风险意识之后,政府、企业、研究院校、社会各方就会像对待核武器、新冠疫情一样,时刻警惕,强化监管,使技术走在正确的道路上,从而达到发展和风险的平衡。





The following excerpt is from Ya-Qin Zhang’s commencement speech at Tsinghua University School of Economics and Management's 2023 graduation ceremony, titled “Leading the AI Era” (张亚勤:引领AI时代丨清华经管学院2023毕业典礼演讲全文).

▶ Cite Our Translation

Concordia AI. “Ya-qin Zhang — Chinese Perspectives on AI Safety.” Chineseperspectives.ai, 29 Mar. 2024, chineseperspectives.ai/Ya-qin-Zhang.

▶ Cite This Work

张亚勤(2023-6-26). “引领AI时代”. 在清华大学经济管理学院2023年毕业典礼上的演讲. https://air.tsinghua.edu.cn/info/1007/2057.htm




Over two millennia ago, the eminent Greek philosopher Socrates regarded morality as the soul of the pursuit of truth. At around the same time, our great Chinese philosopher Confucius regarded the benevolence and righteousness inherent in human nature as the cornerstone for building the structure of society. That these two great thinkers from vastly different cultures saw alike is no coincidence. Such wisdom proves even more vital today, as we navigate multiplying choices, confusion, and temptation.
两千多年前,伟大的希腊思想家苏格拉底将道德作为追求真理的灵魂。大约同一时期,我们伟大的中国哲学家孔子把人性的“仁义”作为构建社会结构的基石。在截然不同的文化下,两位伟大思想家所见略同,并非巧合。今天,当我们面临更多选择、迷茫和诱惑时,这一点就变得更加重要。


Technology is neutral, but innovators have a mission. Technology is a tool, and it should serve humanity. With artificial intelligence capabilities rapidly advancing, the potential risks are growing as well. This compels us to examine the social, cultural, and ethical implications of AI, including the responsibilities it confers. We must reassess the relationship between humans and machines, as well as human nature and values themselves. Adequate preparation is essential to handle the uncertainty and complexity inherent in artificial intelligence.

技术是中立的,但创新者有使命。技术是工具,要为人类服务。随着人工智能能力的飞速发展,它所带来的潜在风险也在不断增加。这迫使我们思考人工智能技术对社会、文化、伦理等方面的影响和责任。我们要重新审视人类与机器的关系,以及人类自身的本质和价值。对于人工智能技术的不确定性和复杂性,我们必须做好充分准备和应对。

Last month, I personally signed the "Statement on AI Risk" launched by an AI safety organization. It calls for making the mitigation of the risk of extinction from AI a global priority, alongside other large-scale societal risks like pandemics and nuclear war. The Institute for AI Industry Research (AIR) at Tsinghua University, where I work, is committed to responsible AI development. The first all-staff email I sent at AIR laid out the 3R principles of AI that I formulated: Resilient, Responsive, and Responsible. When studying theories, algorithms, and applications, we must consider the meaning and potential impacts of the technology, placing ethical concerns and human values above technological capabilities.
上个月,我个人签署了由一个AI安全机构发起的《AI风险声明》,呼吁减轻被人工智能灭绝的风险,应该与流行病和核战争等其他大规模社会性风险一样,成为全球优先解决的事项。我所在的清华大学智能产业研究院AIR致力于做负责任的AI。我在AIR送出的第一个全员邮件就是我制定的AI的3R原则,Resilient, Responsive, Responsible,在研究理论、算法和应用模型时,必须考虑技术的意义和可能带来的结果,并将伦理问题和价值观置于技术之上。


As an optimist, I believe humans possess two vital types of wisdom: inventing technology and guiding its trajectory. I firmly trust in our ability to strike this balance. But we must maintain a sense of crisis and act now.
作为一个乐观主义者,我相信人类拥有两种智慧:发明技术的智慧和把握技术发展方向的智慧。我坚信我们有能力找到这种平衡,但我们必须保持危机意识,并立即采取行动。




Translator’s notes 


1. Although Zhang uses the English AI Security Research, AI alignment research is more typically described as AI Safety Research in English-language discourse.



