Qiqi GAO


About the author
Qiqi Gao is currently a professor at East China University of Political Science and Law and director of its Institute of Political Science. His main research interests include comparative party politics, comparative political science theory, comparative regional politics, and comparative ethnic politics. He has published more than 40 papers in core journals such as Political Science Research, Ethnic Studies, and World Economy and Politics. He was selected as one of the "2016 Most Influential Young Scholars in Humanities and Social Sciences in China." He is also the executive director of the Shanghai International Relations Society, a director of the Chinese Political Science Association, and a director of the National Association for the Study of International Politics in Universities.


The following excerpt is a translation of Gao Qiqi’s article “Artificial Intelligence, the Fourth Industrial Revolution, and the International Political Economy Landscape” (高奇琦:人工智能、四次工业革命与国际政治经济格局).

▶ Cite Our Translation
Concordia AI. “Qiqi Gao — Chinese Perspectives on AI Safety.” Chineseperspectives.ai, 29 Mar. 2024, chineseperspectives.ai/Qiqi-Gao.

▶ Cite This Work  
高奇琦(2019). 人工智能、四次工业革命与国际政治经济格局. 当代世界与社会主义(06),12-19.


In the era of intelligence, we must consider the future direction of artificial intelligence (AI) holistically and from a global perspective, for the following four reasons.


First, AI could accelerate the advent of a risk-prone society. For instance, AI might be used in various illicit industries. In a sense, the speed at which AI is applied in illicit industries could surpass the speed at which it is applied in legitimate ones. Tempted by significant economic benefits, AI could be used more recklessly in non-compliant areas, posing severe risks.

Second, military applications of AI might escalate military competition among countries. AI development in the United States was initially driven by the military. The US military hopes to utilize AI on the battlefield, and there are already such deployments, such as extensive use of drones. However, this not only increases the gap between the US military and the militaries of other countries, but also poses significant ethical concerns. Decision-making by machines in military operations could become a convenient scapegoat for the US military to sidestep accountability. For example, in the event of a military drone attacking civilians, the US military might shift the blame onto the machine to avoid responsibility.

Third, as a disruptive technology, the impact of AI on human society will spill over to other countries. For example, the unemployment risks triggered by AI development are likely to have a global impact. If the risk of widespread unemployment spreads across the world, it will lead to serious societal problems. At the same time, the unemployment risks will also exacerbate the wave of anti-immigration sentiment, a trend that has already manifested in Europe and America. Therefore, it's critical for sovereign nations to collaborate in thinking about these challenges, creating a cohesive plan for AI development.

Fourth, countries should reach a consensus regarding the question of artificial general intelligence (AGI) development. At present, most Western countries encourage further development of AGI, as the Asilomar Principles are not opposed to AGI development. However, the ultimate development of AGI poses significant challenges to the meaning of human existence. If the development of AGI were to cause humanity to lose its meaning, such an outcome would be difficult for humanity to accept. Therefore, there is a pressing need for all countries to come together to reach a basic consensus on the direction of AGI development.

The following is from a speech Professor Gao gave in English at the International AI Cooperation and Governance Forum 2023. We have edited the speech for clarity, but the content remains the same. We have additionally translated this speech into Chinese.

▶ Cite Our Translation
Concordia AI. “Qiqi Gao — Chinese Perspectives on AI Safety.” Chineseperspectives.ai, 29 Mar. 2024, chineseperspectives.ai/Qiqi-Gao.

▶ Cite This Work
Qiqi Gao (2023-12-09). “Consensus Governance of LLMs from the Perspective of Intersubjectivity.” Speech at the International AI Cooperation and Governance Forum 2023. https://aisafetychina.substack.com/p/concordia-ai-at-the-international


Thank you. I'm Gao Qiqi, and I'm from East China University of Political Science and Law. In this talk, I will use the concept of consensus governance of large models.


Part 0 Introduction

Clearly, we need large language model governance. But how should we govern LLMs? There is a lot of technical discussion here, and I think the academic community focuses mostly on these technical issues: there are many discussions of topics like specification gaming and reward hacking. I think this is a very important topic.


For example, there is a lot of discussion about mechanistic interpretability and about the advances being made there. You can see this in the literature, such as in work that probes models with linear classifiers.

I don’t have enough time to go into all the details, but you can see all these advances. Roughly, the work divides into global interpretability and local interpretability.

How do we deal with this? There is also a lot of discussion about scalable oversight and its different directions, such as least-to-most prompting, AI safety via debate, and separating out the different tasks of AI safety. Michael [Sellitto] has already mentioned imitating the teacher, behavior cloning, and the three Hs [helpful, honest, and harmless], and there has been a lot of other discussion here.

Paul Christiano: AI alignment landscape

Paul Christiano has also talked about this landscape; I like his decomposition of the AI alignment problem.

What I'm trying to say is that alignment is very important, but if we equate governance with alignment, it might lead to misunderstanding. I think alignment is an internal solution for the enterprise; I call it the placebo patch for LLM applications. Without a patch, the scientists or the enterprise will not be at ease. Many researchers have pointed out that alignment is an impossible mission: a process, not a result. So my question is, how do we understand LLM governance more deeply?

I will separate my presentation into several questions. First, I will give you my understanding of the overall governance picture. Second, I will talk about corporate governance, then national governance, and then global governance. Finally, I will discuss consensus governance.

Part 1 The Current State of LLM Governance Based on Subjectivity

Let me first describe the current picture. I think the discussion about LLM governance is mostly fragmented: different stakeholders discuss it in different directions and with different interests. But the larger governance issue is that LLMs are an implosion, not an explosion. We often compare the effect of large language models to a nuclear explosion, but I think it is more like an implosion in Baudrillard's sense: you do not fear it because nothing changes in appearance, yet things have already changed fundamentally.



Part 2 Corporate Governance: Power Struggle and Governance Structure

I see this power struggle as part of corporate governance. The first aspect is internal governance. We have already seen a major power struggle inside OpenAI, which I read as a conflict between its non-profit and for-profit structures. I think Ilya [Sutskever] was resisting as a representative of the non-profit side, but the company has returned to the rule of effective accelerationism. You can feel the penetration of capital into the non-profit structure.



A big thing that we have already talked about is Q* [Q-star]. What is Q*? There is a lot of discussion of “Tree of Thoughts,” of “process supervision,” and of synthetic data, such as Orca, which is close to Microsoft. We are not sure, but I think it might be a big window onto the future of AGI.

There is also corporate competition as inter-corporate governance. When we talk about open-source versus closed-source approaches, I think the advantage of open source is democratization and the equalization of power: it gives everyone an opportunity to participate. It is a weapon of the weak. The disadvantage is security issues, and there is a dual contradiction between insufficient competition and disorderly competition. So it is very hard to define.

For corporate governance, I think we should have measures both before and after deployment. Before deployment, there have been several discussions of gradual scaling, staged deployment, appointing a Chief Risk Officer, internal auditing, red-teaming, and so on. After deployment, there must be risk management, reporting of safety incidents, and no unsafe open-sourcing.

So I think there must be three elements in corporate governance:

1. Introducing big-model governance into the corporate governance architecture. This should be a substantive architecture rather than a merely formal one.

2. Ensuring that a corresponding proportion of computing power and talent resources is devoted to governance.

3. Developing corresponding best practices, as Brian’s team [Concordia AI] has done, and reinforcing governance rules through mutually supporting workflows in open industry associations or enterprise alliances.





Part 3 National Governance: The Relationship between Government and Enterprises under Development Pressure

The third topic is why national governance is so important. I think the nation-state is the best unit of governance and the primary responsible actor. But the state's problem is that if it wants to win the productivity race, it has to follow the lead of enterprises.



The result is an asymmetric government-enterprise relationship. We talk a lot about regulation, but in practice regulations are often suspended. Still, I think that even at this early stage of AI governance, as social conflicts continue to escalate, national power may need some adjustment.

The greater difficulty in national governance lies in its own insufficient capacity and relatively slow response to changes. Therefore, I believe that national governance should also include other players, such as scientists, social scientists, and active citizens.

Furthermore, I believe that more corresponding government positions should be created for the governance of LLMs, and that comprehensive third-party audits of models should be established. Audits are very important, as is tracking the weights of LLMs, at least for the largest models, and requiring reports of their computing power.

Part 4 Global Governance: Regulatory Deficiencies in Global Anarchy

On global governance, there has been a lot of talk, and I think it is very important, especially for small states. I propose the concepts of LLM access and LLM sovereignty; this is why middle powers have to build their own LLMs. But there is also a gap for developing countries.



And finally, I think international consensus is crucial. We need a clear definition of AGI. I believe Ilya Sutskever has already had his "Oppenheimer moment," and perhaps in the next few years we may witness a "Hiroshima moment" and see the real power of AGI. So there are two key questions. First, should we use existing mechanisms or create new ones? Second, which approach should we take? Today we have discussed the IAEA approach versus the IPCC approach, as well as bilateral versus multilateral mechanisms.

As I already mentioned, I am very honored to have participated in the article "Managing AI Risks in an Era of Rapid Progress.”

Part 5 How to Form a Consensus Governance Architecture for LLMs?

And finally, why is consensus governance crucial? Right now we are fragmented because we are working from a subjective perspective, with individual viewpoints and individual interests. Instead, we need an intersubjective perspective. That is where consensus governance really comes from.



We talk about alignment, and I think the true goal of alignment should be human alignment. We ask machines to align with human values, but the problem is that even the OpenAI team is not aligned among themselves. The internal order of human society is itself far from aligned.

So why do we need machine alignment? There has been recent movement on RLAIF [reinforcement learning from AI feedback], which seems like an option for us, but actually I am a little worried about it, since we may delegate too much to the machine.

I think there are two types of alignment: machine-to-human alignment and human-to-machine alignment. I do not have enough time to discuss the concept of a human negative alignment tax. In the end, I think we should return to human subjectivity.


Translator’s Notes

1. The original Chinese text says “three points,” but Professor Gao articulated four points in his article.
