Linghan ZHANG



About the author: Zhang Linghan, Ph.D., is a Professor and doctoral supervisor at the Institute of Data Law, China University of Political Science and Law, and a visiting scholar at Cornell University.

She has published extensively on cyberspace law and algorithmic governance. She is the author of “Research on Tort Liability of Online Virtual Property” and “Governing Power: Regulation of Algorithms in the Age of Artificial Intelligence.” She is a member of the UN High-Level Advisory Body on AI, an expert member of the Information and Communication Science and Technology Committee of the Ministry of Industry and Information Technology, and an expert member of the Cybersecurity Legal Advisory Committee of the Ministry of Public Security. For years she has participated in legislative advisory work in China on laws related to algorithm regulation, platform governance, data security, and artificial intelligence governance.



The following excerpt is a translation of the paper titled “From Traditional Governance to Agile Governance: Paradigm Innovation in the Governance of Generative Artificial Intelligence” (张凌寒,于琳丨从传统治理到敏捷治理:生成式人工智能的治理范式革新), co-authored by Linghan Zhang and Lin Yu. 

▶ Cite Our Translation
Concordia AI. “Linghan Zhang — Chinese Perspectives on AI Safety.”, 29 Mar. 2024,

▶ Cite This Work
张凌寒 & 于琳(2023). 从传统治理到敏捷治理:生成式人工智能的治理范式革新. 电子政务(09),2-13. 


(II) Evolution of Risks: Uncertainty of Risks Extends from External to Internal

With the advent of generative AI (artificial intelligence), represented by ChatGPT, the uncertainty of risks has extended from external to internal. Depending on whether risks are endogenous, AI risks can be divided into external risks and internal risks. Generally, the potential societal risks of emerging technologies, i.e., external risks, are highly uncertain. This uncertainty stems partly from the development pattern of technology, which always moves from inception through improvement to maturity, with external risks manifesting only gradually along the way. It also stems from AI’s strong human-machine interaction and unexplainable reasoning capabilities. The strong human-machine interaction of generative AI makes its external risks unpredictable and therefore highly uncertain, while its reasoning capabilities extend this uncertainty from external to internal.



Due to the strong human-machine interaction of generative AI, its external risks are difficult to predict. Previously, whether in the era of professionally-produced content or user-generated content, platforms had strong control over the creation, publication, and dissemination of information content, and the main responsibility for information content security was borne by platforms. Article 47 of the Cybersecurity Law was the first legal provision to clarify the responsibility of network operators for managing information published by users, and subsequent regulations, including the Provisions on Ecological Governance of Network Information Content and the Opinions on Further Consolidating the Responsibility of Website Platforms as the Information Content Management Entity, further detailed platforms’ responsibility as the main entity for information content security.

However, in the era of AI-generated content, generative AI service providers lack the ability to control the input end. Unlike traditional AI products or services that are delivered in one direction, from provider to user, generative AI products or services are provided through interaction with users. Concretely, users input their demands, and the model generates corresponding results based on the user's input. What content is generated depends largely on the specific demands of the user. The "user input + machine output" content generation pattern means that even if service providers fulfill their compliance obligations in research and development at the front end, users can still breach compliance at the input end. Although service providers usually take preventive measures to restrict user input behavior, the actual results are not always satisfactory. For example, although OpenAI explicitly prohibits the generation of malicious software in its usage policy, researchers found that users can deceive ChatGPT into writing code for malicious software through misleading prompts. The platforms’ difficulty in controlling the input end means that the content the users input, the results the model generates, and the impact of the generated results are all unknown.

Because of the reasoning capability of generative AI, its internal risks are difficult to predict. Starting from ChatGPT, the GPT series began to possess reasoning capabilities, and even the ChatGPT R&D team cannot explain the reasons behind the emergence of such capabilities. This has, to some extent, changed the nature of the algorithmic "black box" problem. The algorithmic "black box" in its traditional sense is essentially a technological "black box": the technical principles are known to certain people but unknown to others, namely regulatory authorities and the public. Generative AI, represented by ChatGPT, has to some extent upended society's traditional recognition and understanding of the algorithmic "black box." The essence of the problem has shifted from "information asymmetry between people" to "humanity’s collective ignorance in the face of strong artificial intelligence."

(III) Adaptive Governance Mechanism: Distinguishing Known and Unknown Risks, Emphasizing Prevention and Response of Risks from Generative AI

That some generative AI risks are unknown does not mean that none of its risks can be predicted. External risks will gradually become apparent as the technology is popularized and applied. At the current stage, people have already gained knowledge of some application risks through frequent use of generative AI. The DeepMind team has identified 21 existing risks of large models, categorizing them into six risk areas: discrimination, exclusion, and toxicity; information hazards; misinformation harms; malicious uses; human-machine interaction harms; and environmental and socioeconomic harms. However, unknown risks remain difficult to predict because of the unexplainable reasoning capabilities of generative AI and the unpredictability of technological and industrial development trends. Generative AI governance should differentiate known risks from unknown risks according to their predictability, formulate targeted governance plans accordingly, and establish an adaptive governance mechanism for risks.



For unknown risks, a comprehensive mid-incident and post-incident response mechanism should be established. For risks occurring at the foundation model layer and specialized model layer, technical developers should be required to take emergency remedial measures, such as taking the model offline for repairs or suspending the model, to prevent greater damage. They should also promptly fulfill obligations to notify users (including companies and individuals) and report to regulatory authorities. In addition, since foundation models not only provide model application services to end users but also offer pre-trained large model products to downstream companies in the industry, when major security incidents occur, the foundation model service provider should be required to immediately cease providing products to downstream companies. Given the general and empowering nature of foundation models, suspending a foundation model would have a ripple effect across the system; the foundation model should therefore be promptly restored to normal operation after repairs.

For risk incidents that occur at the service application layer, a basic assessment should be made of the source of the risk. When the risk incident originates from the user side, the service provider should not only fulfill the aforementioned emergency remedial and notification obligations but also impose corresponding restrictions and punitive measures on the user. For instance, if a risk incident arises from a user conducting "data poisoning" of the model, the user should be held accountable afterwards. If the risk incident does not originate from the user side, it should be traced back upstream to determine whether the risk comes from the foundation model layer or the specialized model layer. This helps identify the entity responsible for post-incident response and for bearing liability.

The following excerpt is a translation of Linghan Zhang’s paper titled “Legal Positioning and Tiered Governance of Generative Artificial Intelligence” (张凌寒:生成式人工智能的法律定位与分层治理).

▶ Cite Our Translation
Concordia AI. “Linghan Zhang — Chinese Perspectives on AI Safety.”, 29 Mar. 2024,

▶ Cite This Work
张凌寒(2023). 生成式人工智能的法律定位与分层治理. 现代法学(04),126-141.


Generative artificial intelligence (AI) governance should adapt to the changes that technological development brings to society and reconsider how to construct a more effective governance framework now that the underlying technological logic of AI governance has changed. Generative AI governance should change China’s existing regulatory system of "technology supporter—service provider—content producer" to tiered regulation of "foundation model—specialized model—service application." Each layer should be matched with different regulatory approaches and tools.


One reason for tiered regulation is that the distinction between “service provider” and “content producer” is meaningful only at the service application layer of generative AI. At the foundation model and specialized model layers, this distinction bears little relation to the regulatory objective behind the “content producer” concept. This is because the concept of “content producer” stems from information content security regulations, which aim to ensure that service providers meet bottom-line obligations for the security of information published to the public. However, the foundation model and specialized model layers of generative AI either are trained and operated solely within companies or provide interfaces to empower vertically segmented businesses (B-side), and do not directly interact with users (C-side). To achieve the regulatory goal for "content producers," it is sufficient to impose relevant requirements at the service application layer and implement effective filtering and review there. Moreover, the functionalities of generative AI extend far beyond "content generation"; it has become a new type of digital infrastructure. Imposing requirements on the entire industrial chain that treat AI as only a "content producer," based on its ability to generate text, video, and audio, therefore does not comport with its functional model.


A second reason for tiered regulation is to reduce the duty of care at the technical end, i.e., the foundation model and specialized model layers, in order to promote industry development. In the present governance framework, technology providers bear the lowest obligations regarding information content security, while content producers bear the highest. In the early stages of internet development, service providers enjoyed years of liability exemptions, which are considered to have been crucial to the rapid development of the internet industry. Generative AI can continue to apply the previous regulatory regime for information content security at the service application layer, thereby achieving the governance goals for information content security. If the foundation model and specialized model layers were also subjected to the requirements for "content producers," it would become a burden on technological innovation.

A third reason for tiered regulation is to encourage companies to assume risk prevention obligations and legal responsibilities corresponding to the layers at which they operate. Tiered governance also encourages companies to develop the foundation model layer and the service application layer separately, differentiating between business-to-business (B2B) and business-to-consumer (B2C) models. This approach can free the foundation model layer from the responsibilities of content producers and encourages companies to confine responsibility for the dissemination of information content to the services provided to users. Even without splitting up business entities and services, companies and employees can assume corresponding responsibilities based on the layer at which a risk occurs.

In tiered regulation, each layer has its own governance philosophy and regulatory focus. The foundation model layer should be development-oriented, focusing on technology ethics, training data, and model parameters. The specialized model layer should adopt a philosophy of prudent inclusion, focusing on critical domains and scenarios, the sources and security of training data, and personal information protection; a tiered and categorized system can be introduced at this layer. The service application layer should focus on information content security, orderly market competition, and protection of user rights. It should utilize existing regulatory tools while incorporating emerging tools and refining compliance and exemption systems in a timely manner, allowing room for trial and error in the development of emerging technologies. This approach would transform China's governance from more one-dimensional, scenario-based algorithm regulation into integrated, systematic governance for generative AI that adapts to different governance objectives.
