About the author

Dr. Ya-Qin Zhang is an academician of the Chinese Academy of Engineering, Chair Professor of Intelligent Science at Tsinghua University, director of the Tsinghua Institute for AI Industry Research, fellow of the American Academy of Arts and Sciences, foreign fellow of the Australian Academy of Technological Sciences and Engineering, fellow of the U.S. National Academy of Inventors, fellow of the International Eurasian Academy of Sciences, IEEE Fellow, and CAAI Fellow. He previously served as President of Baidu, Corporate Senior Vice President of Microsoft, Chairman of Microsoft Research Asia, Managing Director of Microsoft Research Asia, and Chairman of Microsoft China R&D Group. As a world-class scientist and entrepreneur in digital video and artificial intelligence, he holds over 60 U.S. patents, has published over 500 academic papers, and has authored 11 books. His inventions in image and video compression and transmission technologies have been adopted by international standards bodies and are widely used in HDTV, Internet video, multimedia retrieval, mobile video, and image databases.


The following excerpts are translated from an interview with Ya-Qin Zhang, titled “Embracing AI by Prioritizing Values over Technology” (张亚勤:将价值观放在技术之上拥抱AI). It was originally published in Internet Weekly, Issue 15, 2023.

▶ Cite Our Translation

Concordia AI. “Ya-Qin Zhang — Chinese Perspectives on AI Safety.” Chineseperspectives.ai, 29 Mar. 2024, chineseperspectives.ai/Ya-qin-Zhang.

▶ Cite This Work

张亚勤(2023). “将价值观放在技术之上拥抱AI”. 互联网周刊, 2023(15).

Selected Excerpts

On AI Alignment:

I started talking about this issue 20 years ago. Whenever AI makes progress, this issue comes back. This is natural - the greater the technological capability, the greater the risks. In the early days, not many people believed in AI's capabilities, and many dismissed it as hot air. Now the AI scenarios and content of those science fiction films are drawing ever closer to reality, becoming real step by step.



Humans have two kinds of wisdom: inventing technologies and controlling the direction those technologies take. The two need to be balanced, but currently the latter lags slightly behind the former. To solve the problem of aligning AI with human values, developers first need to build technology and research on a foundation of alignment with those values. They must enable machines to understand human values and adhere to them. The word "alignment" is apt. This is not just an ethical issue, but also a matter of practical implementation. Developers and researchers must devote themselves to fulfilling the alignment imperative, not just pursuing capabilities without addressing alignment. This is critical. There is now a new discipline called AI Safety Research1, analogous to Rocket Safety Engineering for aerospace. Just as rocketry requires specialized safety engineers, AI needs experts devoted to studying its safety issues. 
人类拥有两种智慧:发明技术和控制技术走向,二者要均衡,目前后者稍微落后了些。要解决AI和人类价值观对齐问题,第一,做技术的人要把技术和研究放到对齐上面,先要让机器理解人的价值,追随人的价值。对齐(alignment)这个词很好,其实这不仅仅是伦理的问题,还有如何实现的问题。做技术和研究的人要致力于实现对齐任务,不能只开发能力,不着力对齐问题,这是相当重要的问题。现在有门新学科AI Security Research,即AI安全研究,就像航天有门学科Rocket Safety Engineering,专门研究火箭安全工程。AI也需要有人专门研究安全问题。

Secondly, some basic principles need to be formulated and adhered to. In the 1950s, the American science fiction writer Isaac Asimov defined three laws governing robots and humans. In 2017, a group of scientists formulated the "Asilomar AI Principles", which I consider the basic principles for humans and machines. Machines are always subordinate, and humans are the masters. Whether it is a machine, software, or a robot, it is subordinate, and its master can be a person, a company, or another of our existing entities.
第二,要制定和坚持一些基本原则。20世纪50年代美国科幻作家阿西莫夫定义了机器人和人类的三原则。2017年一批科学家又制定了《阿西洛马人工智能23条原则》(Asilomar AI Principles),我认为这是人和机器的基本原则。机器永远是从属体,人类是主体。不管机器、软件,或是机器人也好,它是从属的,其主体可以是人,也可以是公司,或者我们目前的实体。

Thirdly, AI cannot have its own independent ethical and value system. It serves the human system: its values are human values, and its ethical system is the human ethical system. We want it to obey and realize this system. In addition, AI needs to be trustworthy, secure and controllable, which is also very important.

These issues involve technical, ethical, moral and legal aspects. The work being done to address them is still limited at present. Europe has just signed a new AI regulation, the Cyberspace Administration of China has drafted the "Regulations on the Management of Generative Artificial Intelligence Services (Draft for Comments)," and the Ministry of Industry and Information Technology has introduced the "Measures for the Administration of Data Security in the Field of Industry and Information Technology (Trial)." Only when AI technology R&D advances hand-in-hand with ethics, morals, and legislative supervision, can AI develop in a truly healthy way.

We welcome government regulation and oversight, as regulation can help steer AI in the right direction. Even if progress may be slower, regulation is still needed to ensure we are moving in the right direction. Sometimes government policies and regulations are introduced relatively slowly, just as with the internet - in the beginning there was wild growth, and it was only after reaching a certain level of development that regulations could be introduced to standardize things.

On being vigilant about the risks technology might bring:

At Tsinghua University's Institute for Artificial Intelligence Industry Research (AIR), I emphasize that human values, value systems, and responsibility must be prioritized above technology in research and development. Aligning with this philosophy, AIR has chosen three key directions where AI will have huge influence in the next 5-10 years, and all research projects (including corporate partnerships) relate to these focus areas:
  1. Smart Internet of Things (IoT), focused on areas such as green computing and deploying small models at the edge for energy conservation and emissions reduction, helping achieve the dual carbon goals (carbon peaking and carbon neutrality). IoT has many potential applications, but AIR concentrates its work on the dual carbon goals.
  2. Smart transportation, robotics, and self-driving, with safety as the top priority. Self-driving can improve safety by more than a factor of 10, since 90% of traffic accidents today involve human error. AI driving can eliminate human mistakes and greatly improve safety, while also enabling low-carbon operation and the efficient, seamless integration of various applications.
  3. Smart healthcare: AI-powered drug R&D and biotechnology, in service of human life and health.



Every technology is like nuclear technology - if humankind had had a choice, perhaps it would have been best not to discover dangerous materials like uranium and radium. Nuclear magnetic resonance provides medical benefits, while nuclear weapons can destroy humanity. Similarly, early chemical and biological advances enabled weapons before the major world powers reached consensus to ban biological and chemical weapons. After nuclear weapons emerged, bans on their use soon followed. Now, with gene-editing technology, countries worldwide have enacted clear legislation prohibiting its use to alter species, especially humans.

There needs to be some clear, basic rules to regulate and constrain technological development and application. AI, especially now at the level of large language models, has generative capabilities and unpredictability - people cannot completely foresee what it can generate.

In the past, AI mainly helped people with analysis, decision-making and prediction, but now it is entirely capable of creating new things. Without control, this may not be a good thing. Caution is especially warranted when applying AI to banking, finance, or other systems with critical missions. For some AI capabilities, humanity still does not understand the causal relationships - for example, bounded rationality, how AI achieves intelligence, and the black-box or transparency problem.

Sometimes we understand the "what" but not the "why" - we may be able to achieve things grasping only 30-40% of the mechanics involved, observing phenomena without comprehending the underlying reasons. In fact, understanding the "why" is critical. Since the inner workings of AI are still opaque, we must exercise abundant caution when applying AI to physical systems or mission-critical applications.
我们有时了解what而不太清楚why,可能了解what百分之三四十就可以做,知其然不知其所以然。其实Why很重要,我们现在不清楚AI Why的情况下,所以在应用到物理系统或关键使命体系时,更得小心保守。

On the importance and urgency of keeping risks from AI under control: 

I feel the importance of these issues growing, with increasing discussion of such topics. Take the Davos Forum as an example - when I first participated 10+ years ago, discussions centered on the profound changes and far-reaching impacts AI would bring to the fourth industrial revolution and to social transformation. But starting in 2018, the Davos agenda began focusing mainly on the risks posed by AI and their governance. At the recent Tianjin Summer Davos Forum, 80% of the agenda addressed controlling risks. In earlier Davos years, explorations of AI capabilities dominated, and people remained unconvinced that AI had any special capability. They believed AI was just software, able to do only a limited set of things and unlikely to profoundly disrupt industry.



Later, AI-related topics multiplied, beginning with worries that big data would enable monopoly giants. In 2019, I attended Davos as Baidu's president, when some 20-30 major-company CEOs discussed corporate social responsibility. Especially after the Facebook data-leak and election-manipulation scandal, people felt that the more data a company controlled and the stronger its AI capabilities, the more responsibility it needed to take on. In recent years, risk awareness has grown - previously, Chinese media rarely spotlighted AI risks, and people like you rarely asked me such questions, asking instead about how AI transforms industries, investment opportunities, China-US competition, and so on.

We embrace AI and hope it develops well, but its potential risks are now attracting growing attention and concern. Two famous recent statements have addressed AI risks. One, initiated by Tesla founder and SpaceX CEO Elon Musk together with Future of Life Institute founder and Life 3.0 author Max Tegmark, called for pausing the training of AI models more powerful than GPT-4 for at least 6 months.
我们拥抱AI,希望它走得更好。但目前对于AI可能的风险问题,已经引起越来越多的人关注和担心。最近关于AI风险有两份著名的公开声明,一份由特斯拉创始人、SpaceX公司CEO埃隆·马斯克(Elon Musk)和美国未来生命研究所(Future of Life Institute)创始人、《生命3.0》作者迈克斯·泰格马克(Max Tegmark)发起的,呼吁暂停训练比GPT4更强大的AI模型至少6个月。

Another statement was initiated by Cambridge University assistant professor David Krueger and signed by over 350 top researchers, engineers, and CEOs. Signatories included University of Toronto professor Geoffrey Hinton, deep learning pioneer and Turing Award winner Yoshua Bengio of the University of Montreal, Bill Gates, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and other experts in related fields. Their 22-word declaration stated: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” ...
另一份由剑桥大学助理教授戴维·克鲁格(David Krueger)发起,包括多伦多大学教授杰弗里·辛顿(Geoffrey Hinton)和深度学习之父、图灵奖得主、蒙特利尔大学教授约书亚·本吉奥(Yoshua Bengio),比尔·盖茨、OpenAI CEO(主导ChatGPT)山姆·奥特曼、谷歌 DeepMind 首席执行官戴米斯·哈萨比斯(Demis Hassabis)等多位顶级研究人员、工程师和CEO,就AI对人类可能构成的威胁发出最新警告,超过350位相关领域人员共同签署了这份只有22个单词的声明:“减轻AI带来的灭绝风险,应该与流行病和核战争等其他社会规模的风险,一起成为全球优先事项。”...

I did not sign the first statement, because I believe research is difficult to halt entirely. One company may pause R&D, but others may continue; or one country may pause research but cannot prevent another from progressing. I signed the second statement, because without risk awareness, AI researchers may not pay sufficient attention. Uncontrolled AI research could lead to disastrous risks. With risk awareness, government, companies, research institutes, and all parties in society will be vigilant at all times and strengthen supervision, just like for nuclear weapons and COVID. This will put technology on the right path, to achieve a balance between development and risk mitigation.

The following excerpt is from Ya-Qin Zhang’s commencement speech at Tsinghua University School of Economics and Management's 2023 graduation ceremony, titled “Leading the AI Era” (张亚勤:引领AI时代丨清华经管学院2023毕业典礼演讲全文).

▶ Cite Our Translation

Concordia AI. “Ya-Qin Zhang — Chinese Perspectives on AI Safety.” Chineseperspectives.ai, 29 Mar. 2024, chineseperspectives.ai/Ya-qin-Zhang.

▶ Cite This Work

张亚勤(2023-6-26). “引领AI时代”. 在清华大学经济管理学院2023年毕业典礼上的演讲. https://air.tsinghua.edu.cn/info/1007/2057.htm

Over two millennia ago, the eminent Greek philosopher Socrates considered morality the soul's pursuit of truth. At around the same time, our great Chinese philosopher Confucius regarded the benevolence and righteousness inherent to human nature as the cornerstones for building the structure of society. These concordant insights from disparate cultures are no coincidence. Such wisdom proves even more vital today, as we navigate multiplying choices, confusion, and temptation.

Technology is neutral, but innovators have a mission. As artificial intelligence capabilities rapidly advance, so too do the potential risks. This compels us to examine the social, cultural, and ethical implications of AI, including the responsibilities it confers. We must reassess the relationship between humans and machines, as well as human nature and values themselves. Adequate preparation is essential to handle the uncertainty and complexity inherent in artificial intelligence.


Last month, I personally signed the “Statement on AI Risk” launched by an AI safety organization. It calls for making the mitigation of existential AI risks a global priority, alongside other large-scale threats like pandemics and nuclear war. The Institute for AI Industry Research at Tsinghua University, where I work, known as AIR, is committed to responsible AI development. The first email I sent to all AIR staff laid out the 3R principles of AI that I formulated: Resilient, Responsive and Responsible. When studying theories, algorithms and applications, we must consider the meaning and potential impacts of the technology, placing ethical concerns and human values above technological capabilities.
上个月,我个人签署了由一个AI安全机构发起的《AI风险声明》,呼吁减轻被人工智能灭绝的风险,应该与流行病和核战争等其他大规模社会性风险一样,成为全球优先解决的事项。我所在的清华大学智能产业研究院AIR致力于做负责任的AI。我在AIR送出的第一个全员邮件就是我制定的AI的3R原则,Resilient, Responsive, Responsible,在研究理论、算法和应用模型时,必须考虑技术的意义和可能带来的结果,并将伦理问题和价值观置于技术之上。

As an optimist, I believe humans possess two vital types of wisdom: inventing technology and guiding its trajectory. I firmly trust in our ability to strike this balance. But we must maintain a sense of crisis and act now.

Translator’s notes 

1. Although Zhang uses the English AI Security Research, AI alignment research is more typically described as AI Safety Research in English-language discourse.
