About the author Dr. XUE Lan is a Cheung Kong Chair Distinguished Professor at Tsinghua University, where he serves as the Dean of the Institute for AI International Governance (I-AIIG), Dean of Schwarzman College, Director of the China Institute for Science and Technology Policy, and Co-Director of the Global Institute for SDGs. His teaching and research interests include global governance, crisis management, and science, technology, and innovation policy. From 2000 to 2018, he served as Associate Dean, Executive Associate Dean, and Dean of the School of Public Policy and Management at Tsinghua University. Currently, he also serves as a Counsellor of the State Council, Chair of China's National Expert Committee on Next Generation AI Governance, and a member of the Standing Committee of the Chinese Association of Science and Technology. Dr. Xue is a recipient of the Distinguished Young Scholar Award from the National Natural Science Foundation of China, the Distinguished Contribution Award from the Chinese Association for Science of Science and S&T Policy, and the Second National Award for Excellence in Innovation in China. He holds a PhD in Engineering and Public Policy from Carnegie Mellon University.


Professor Xue has received the Distinguished Young Scholar Award from the National Natural Science Foundation of China, been appointed a Changjiang Scholar Distinguished Professor by the Ministry of Education, and received the Distinguished Contribution Award from the Chinese Association for Science of Science and S&T Policy, among other honors. He received his PhD in Engineering and Public Policy from Carnegie Mellon University in 1991.

The following excerpt is a translation of Lan Xue’s interview with The Future Forum in 2021, titled “The Theory and Practice of AI for Good” (未来论坛AI伦理与治理系列01期:AI向善的理论与实践).

Cite Our Translation
Concordia AI. “Lan Xue — Chinese Perspectives on AI Safety.” 29 Mar. 2024,

Cite This Work
Xue Lan (2021). “The Theory and Practice of AI for Good” (AI向善的理论与实践). Remarks at the Future Forum AI Ethics and Governance Series, Session 01.


With China's continued development, we have seen that in many new technological fields, China has already begun to take a leading position and started experimenting with the initial applications of those technologies. In previous Industrial Revolutions, the technologies we employed had already been used for many years in many Western countries. As a result, their various drawbacks and problems had been thoroughly researched, and they were relatively mature when applied in China. However, in the Fourth Industrial Revolution, we are also at the forefront, witnessing the emergence of many new technologies accompanied by many uncertainties. For example, technologies such as artificial intelligence (AI) and gene editing have a similar character: they have the potential to bring about substantial benefits and improve public welfare, but they could also generate potential problems and significant risks.



Therefore, seven or eight years ago, I began to pay attention to these issues. During the process of technological development, we really need to focus on the relevant governance issues. That way, we can avoid problems that, if neglected in the early stages, would become major issues later on, provoking greater market resistance to the technology's applications and perhaps even hindering technological development. Professor Guo is also an expert in this field, and we often collaborate, including on the release of the Guidelines for Risk Governance of Artificial Intelligence Technology in China by the National Information Security Standardization Technical Committee (TC260) earlier this year. In June 2019, China's AI Governance Expert Committee released its AI governance principles, aiming to provide guidance to relevant companies and researchers. Naturally, extensive discussion is still required on how to actually implement these principles and address these issues.

I believe there are several aspects to consider. First, even when technological applications bring many benefits, there may be potential negative impacts, and these require attention and should be understood clearly. Second, ethical issues may arise during the application of technology. We need to carefully consider whether such applications are appropriate, whether they might bring negative impacts to society or individuals, and how we can balance the pros and cons. Third, technological applications may bring about many unknown risks. The other speakers are all technical experts, and we are all discussing whether true artificial general intelligence (AGI), once developed, will become our master; this potential, unknown risk is fairly concerning for everyone. Finally, the implementation of AI governance faces significant challenges. Governance of other technologies can be achieved by controlling specific technical indicators: for instance, a speed limiter constrains how fast a car can be driven, striking a balance for control. For AI governance, however, we fundamentally do not know how AI makes decisions, which presents a major challenge. These issues require a high level of attention.

Moderator: AGI is a very interesting topic. I would like to take this opportunity to ask the three experts: assuming that humans achieve AGI, AI may be able to do more while becoming more capable and powerful. However, when AGI develops to a certain stage, it may pose a level of threat to humanity. In such a situation, should we really develop this technology? In most discussions, everyone tends to say that scientific and technological research should be limitless and unbounded, and we should go wherever the research leads us. But, if we anticipate certain problems to occur, should we hit the pause button? Regarding development of AGI, should there be limits, and if so, where should they be set? I would like to ask the three experts to help us answer these questions.

Xue Lan: This question is very important. Although the United States currently views China as a major competitor in the field of AI, precisely due to the existence of such risks, the United States, China, and many other countries should closely cooperate to prevent potential attacks by malicious actors on society and the human community. This is an important foundation for our cooperation.

The following excerpt is a translation of Lan Xue’s essay, titled “Promoting Agile Progress in AI Governance through Mutual Trust and Interaction” (薛澜:推动人工智能治理在互信互动中敏捷前行).

Cite Our Translation
Concordia AI. “Lan Xue — Chinese Perspectives on AI Safety.” 29 Mar. 2024,

Cite This Work
Xue Lan (26 Oct. 2022). “Promoting Agile Progress in AI Governance through Mutual Trust and Interaction” (推动人工智能治理在互信互动中敏捷前行). Chinese Social Sciences Net (中国社会科学网).


Currently, a "prototype" of agile governance oriented towards coordinating innovative development with governance is taking shape, involving government-led rulemaking and collaborative interaction among diverse entities, especially as agile thinking emerges in government and agile behavior emerges in business. However, we must also be aware that even as the concept and guiding principles of agile governance gain recognition from various sectors, the concrete implementation of governance mechanisms will still face multiple challenges, especially in some key dimensions guiding the formation of governance mechanisms.


There is a gap between societal expectations and the agile actions of businesses, and industry lacks agreed-upon actions. Under heavy-handed regulation, companies' agile behavior remains stuck in technical thinking, and cultivating governance thinking will take time. Under the sword of oversight, most companies act blindly, following only the direction set by regulators, and aim to manage risks by relying on technical fixes that reduce algorithmic and product vulnerabilities. However, the risks of AI cannot be effectively anticipated or resolved unless companies take broader, more forward-looking action from the perspective of multi-stakeholder governance. In an ideal agile governance model, companies would proactively engage in dialogue with regulators, contribute critical knowledge and information, engage with industry peers, and make clear that collaboration is emphasized over competition. Companies should not only build industry consensus around a set of best practices, use it to develop key industry standards, and promote mechanisms for common learning and peer feedback; leading companies should also contribute their governance practices and experience to the industry more actively, lead the dissemination of science and technology ethics concepts, and contribute regulatory wisdom to the entire industry. The next step for businesses, therefore, is to extend agile norms from within individual companies to connect with industry norms and the government's governance system, mobilizing the entire industry for agile collaboration through practical wisdom.


In negotiations over oversight, the government should strike a balance between hard and soft approaches and control the rhythm. The formulation and implementation of government regulatory rules are transitioning from a "one-size-fits-all" model to a "negotiation" stage. This also means that both sides need rapid interaction and timely feedback if negotiation is not to reduce the efficiency of innovation and development; otherwise, the market will be afraid to innovate for lack of guidance, and the government may revert to simpler, cruder styles of regulation. At present, mastering the pace of governance remains a challenge for regulatory agencies: the government must flexibly choose goals and tools in order to continuously adjust the direction of regulation. The first challenge is how to translate abstract governance principles into concrete laws and regulations. For example, efficiency, fairness, safety, and freedom can all be incorporated into governance goals in the abstract, but their prioritization differs across implementation scenarios; the ranking of, and trade-offs among, diverse goals need to rest on rapid multi-party consensus. The second challenge is how to strike a balance between heavy-handed and light-touch approaches in selecting and using governance tools. For example, high-risk, clear-cut violations require laws and regulations that can be applied unambiguously, while commercial activities that remain unclear but show signs of risk should be addressed with flexible methods. Developing government oversight that is agile and capable of learning requires cultivating the government's ongoing ability to understand and track industry developments, strengthening communication platforms and the full range of supporting research institutions and think tanks, and emphasizing rapid follow-up in policy evaluation under an empirical paradigm.

In a situation of "common ignorance," building trusting relationships is the key to agile governance. The core of creating agile governance is constructing multi-party communication and trust while facing the challenge of "common ignorance" among companies, governments, and academia that arises from the high uncertainty of technological development. In simple applications of AI technology, businesses have a clearer understanding of the technology's risk boundaries and can continuously supply rules for business models within the regulatory framework. However, with the rapid updating and iteration of data and algorithms, a state of "common ignorance" will, in a certain sense, involve both businesses and regulators. Given the enormous uncertainties of innovation and risk, building trusted channels among multiple parties and maintaining smooth communication mechanisms are crucial. At the same time, the actual implementation of a diversified, common governance system cannot ignore participation and guidance from the societal dimension. For example, balancing the relationship between societal opinion and technological development, preparing social risk contingency plans, and fostering emergency management thinking are all critical. In addition, in discussions of AI ethics and governance, it is essential both to recognize the critical role of scientific research institutions as trusted intermediaries and to be aware that such discussions are not the "privilege" of experts and scholars alone: all of society may hold different views on AI governance issues and advocate diverse governance ideas, and indeed it has a responsibility to do so.

In the field of AI, we really hope to achieve an agile governance model oriented towards multi-stakeholder collaboration. Through industry coordination, government learning, and trust-building efforts, we can form a robust, trusted governance relationship, shifting from "common ignorance" to jointly addressing complex governance demands, from "negotiation" to advancing hand-in-hand with mutual trust and interaction.


Translator’s notes

1. Professor GUO Rui (郭锐) at Renmin University of China was another guest at this event. Note that the name of the TC260 document referenced by Professor Xue appears to be AI Ethics and Safety Risks Prevention Guide (人工智能伦理安全风险防范指引).
