Huaihong HE

About the author: Huaihong He is a professor in the Department of Philosophy and director of the Ethics Teaching and Research Section at Peking University. His research spans ethics, philosophy of life, social history, and other fields, and he is one of the most influential ethicists in China. His publications include the essay collection Social Ethics in a Changing China: Moral Decay or Ethical Awakening?


The following excerpts are translated from He’s Does Humanity Still Have a Future? (人类还有未来吗) (2020). We precede each excerpt with a bolded summary of its key point.

▶Cite Our Translation
Concordia AI. “Huaihong He — Chinese Perspectives on AI Safety.” 29 Mar. 2024.

▶Cite This Work
何怀宏 (2020). 人类还有未来吗 [Does Humanity Still Have a Future?]. 广西师范大学出版社 [Guangxi Normal University Press].

Selected excerpts

The huge ability of human beings to control matter may cause catastrophe:

“The thousands of nuclear bombs hanging over human heads may cause catastrophe to humanity at any time, not to mention superintelligent machines and genetically modified species. Not only are there ready-made nuclear weapons; some developed countries that are currently non-nuclear also have the ability to quickly develop the capacity to manufacture nuclear weapons, and the poisonous gas, biological, and chemical weapons that have been gathering dust for many years may also be unleashed. In today’s technological world, in which humans have such a huge ability to control matter, it is all too easy for human beings to inflict serious blows on themselves. And we cannot rule out that there will always be desperate acts by madmen or misjudgments by normal people.” (pp.258-259)



Risks that come from people controlling objects are distinct from those coming from the objects themselves:

“If we consider the dangers that man-made objects will bring to people, we may say that what we can see at present is the dangers brought about by people’s control and abuse of objects. One is the danger of some people using man-made weapons of mass murder such as nuclear bombs, biological and chemical weapons to massacre and even exterminate human beings. Another is the danger of humans using AI, genetic engineering, etc. to change and even possibly exterminate the human species. But these dangers still come from people rather than from the objects. Dangers that come from the objects themselves should come from their abilities and consciousness, for example, artificial objects (superintelligence) that have gained self-awareness and the identity of a machine agent ultimately rebelling against humans, eliminating humans, and replacing humans.” (pp.44-45)



War and preparation for it continue to stimulate the development of new technologies:

“Although nuclear disarmament has occurred, there are still thousands of nuclear missiles hanging above our heads at almost zero distance, enough to exterminate humanity many times over. To reduce the threat of nuclear weapons, we should not only reduce hostility, but also reduce misjudgment and proliferation. War and preparation for it continue to stimulate the development of new technologies; many new technologies were invented on an accelerated timeline because of war and only later entered civilian use. Technologies invented in times of peace are also continually entering military use, such as weapons employing AI: drones, the Killer Bee1, and the space warfare that may occur in the future. Therefore, it seems that in the future all that we can rely on will increasingly be the spirit and ethics of human self-restraint.” (pp.250-251)



“Humanity coordination politics” is needed to deal with the serious problem of human-machine relations:

“We must also consider issues within the existing national and international systems; it is impossible for us to fundamentally change these systems. But I do want to propose a concept distinct from “domestic politics” and “international politics,” that of “humanity politics,” or more specifically, “humanity coordination politics.” Because the problems we will face cannot be solved by any country alone, and humanity has truly become a community with a shared future in the face of such an existential crisis2. In other words, humanity needs a coordinated politics to deal with the urgent and serious problem facing all of us, that is how to deal with the relationship between humans and intelligent machines, especially between humans and possible superintelligent machines in the future.” (pp.102-103)



In the face of common dangers, love for one’s country should give way to love of humanity:

“People must truly realize that humanity is a community with a shared future. But this awareness might only be able to be formed in the face of imminent disaster; only in the face of common dangers can human beings truly unite. If this disaster does approach, both international political relations and domestic politics may become less important than they used to be. Patriotism and “one’s country first” should give way to humanity-ism (人类主义) or “humanity first.”” (pp.104-105)



We should first focus on preventing the worst outcomes from AI:

“I basically uphold a kind of “bottom line thinking” [with respect to AI], that is to say, first consider preventing the worst from happening at the level of the bottom line (this is first and foremost survival). Then think about striving for the best, or rather the least bad case… Secondly, this bottom line also refers to basic moral and even legal constraints and norms, which are the most basic principles for preserving life. The main consideration is how to establish rules for people regarding machine ethics, rather than how to cultivate machines with noble values. Moreover, the worst-case scenario for AI is likely to occur precisely when people think or demand the best, that is to say, at the boundary when this kind of intelligence becomes a general-purpose superintelligence that exceeds human intelligence. After this boundary or singularity, the most intelligent beings in the world will no longer be human beings, but some kind of superintelligent beings that are unknown to human beings now and will still be unknown in the future.” (pp.138-139)



The risks of superintelligent AI may be harder to guard against than nuclear energy:

“Nuclear energy aroused great fear and vigilance when it was first unveiled; people have a relatively full understanding of its destructiveness, and are therefore working hard to guard against it, restrict it, and regulate it. Not so with AI, which may tamely obey the will of humans right up until it suddenly develops a will of its own and works according to that will. To give an extreme example that Bostrom mentioned in Superintelligence: Paths, Dangers, Strategies: there is a machine that is initially set to produce paper clips with maximum efficiency. Once it has attained an almost omnipotent ability that exceeds the intelligence of humans, it is possible that it will ignore human will and use all “materials” it can gather as resources for making paper clips.” (pp.98-99)



“Stuart Russell believes that creating a machine that is smarter than one’s own species is not necessarily a good thing. He suggested that in order to prevent robots from “taking over” from humans, humans must create robots with “selfless heart-minds.”3 Therefore, when programming a robot, you should program in altruism, humility, prudence, and other correct human values.” (pp.140-141)

Civilization needs to consider the ten-year plan, the hundred-year plan, and the thousand-year plan, and prepare for future emergencies:

“Some scientists believe that people today might be overestimating the development speed of AI, and that there will be no possibility of superintelligent machines developing or a big crisis emerging in the next twenty or thirty years. This would be great, and I even hope that this timeframe can be extended to fifty or sixty years, or over a hundred years. However, civilization should consider not only the ten-year plan and the hundred-year plan, but also the thousand-year plan. Considering the rapid growth of science and technology in the past few decades, the possibility of various accidental discoveries, and a basic contradiction of modern human civilization that will be discussed later, human beings have to prepare for future emergencies.” (pp.236-237)



The “critical minority” are needed for the wellbeing of the majority and of all of humanity:

“It is the “critical minority,” and not necessarily the majority, that can currently be relied upon or appealed to when it comes to taking farsighted decisions and initiatives to prevent worst-case scenarios for AI. For the wellbeing of the majority and all mankind, a “critical minority” is needed; “let some people understand first.”4 This “critical minority” should include at least four types of people… front-line [AI] researchers… [AI] development companies’ owners, managers, and investors… political leaders… and conceptual people, such as artists, humanities and social science scholars, media people, and so on.” (pp.149-151)



Those with an information and wealth advantage could maintain power forever:

“Current “homo sapiens” will split into a minority of high-level people5 who have been transformed into material objects and a majority of low-level people seen as useless… Everyone may still be able to live a rich material life, but there will still be a power inequality between those who understand programs and those who do not, and between those who have the money to keep extending their lives and those who do not. People who can master and control information will be incomparably superior, in their ability to control things, to those who do not control information and do not understand algorithms. But if those rulers do their utmost to maintain their own immortality, and we factor in the huge asymmetric advantages of the technological means they control, it becomes more likely that rule by power-centralizer(s) will emerge.6 And this time the ruled cannot hope that the natural law of life and death will take effect and interrupt this centralization of power…”



“Of course, someone centralizing power could also be a “benevolent” centralizer; he could provide the people with abundant “bread” and “games that make them happy.” He could gradually lessen violence and coercion. He has the conditions to give most people a prosperous material life. Most of the “useless” people will not become as impoverished as the proles in Orwell’s 1984.” (pp.40-41)

It is hard to consider what humans should do about human-machine relations when we cannot predict machines’ future attitude towards humans:

“The ethics of relations between humans and non-human things is mainly about considering: from a position of strength over the weak, how should we treat animals and other non-human things kindly? Meanwhile, the ethics of human-machine relations is mainly about considering: although we are still in the stronger position vis-a-vis machines, our respective positions may switch in the future. On the basis of anticipating how they will treat us, we have to consider what we should do now. What can we do to them? But a big dilemma is this: although our current attitude towards them depends on their future attitude towards us, it is precisely this latter point that we are very unclear on or even unable to predict.” (pp.72-73)



There is something special about humans and their achievements:

“There is also this opinion: “Even if human beings are replaced by another species, it may not be bad, and that new species, such as silicon-based organisms, may even be a more advanced species.” This is a very optimistic opinion. I can almost say with certainty that I cannot persuade anyone who is such a believer in progress, or, to put it another way, so unafraid of any change, as to hold this opinion. I admit that I still have my own stubbornly held opinion: although the history of human beings is not very long and the history of civilization is only 10,000+ years, and although human beings have a certain weakness as carbon-based organisms, it may be because of these factors that they have achieved plentiful spiritual and cultural achievements. I still cherish these achievements, even beyond all else, and I also cherish our daily lives as humans and all kinds of human feelings.” (pp.258-259)



Translator’s notes 

1. A kind of unmanned aerial vehicle.

2. This draws on a slogan commonly used in Chinese politics and diplomacy, 人类命运共同体, which is often translated as ‘community with a shared future for mankind’.

3. For more on the translation of “心” (heart-mind), see here. Here the author is likely referring to Russell’s first principle for beneficial AI: the machine’s purpose is to maximize the realization of human values, and in particular it has no purpose of its own and no innate desire to protect itself.

4. The structure of this statement mirrors Deng Xiaoping’s call for letting some people get rich first (to reach common prosperity faster).

5. The author uses the term "物化(人)", literally "materialization/materialized person" to denote the technologically mediated process of replacing one's flesh with synthetic materials. 

6. “集权者的统治” could also be translated as “totalitarian rule”, which may have negative connotations. We use a more literal, neutral-sounding translation to reflect the author’s later suggestion that centralized power could also be used in a benevolent manner.
