Huaihong HE
何怀宏




About the author
Huaihong He is a professor in the Department of Philosophy and director of the Ethics Teaching and Research Section at Peking University. His research spans ethics, the philosophy of life, social history, and other fields, and he is one of the most influential ethicists in China. His publications include the collection of essays Social Ethics in a Changing China: Moral Decay or Ethical Awakening?



关于作者
北京大学哲学系教授,伦理学教研室主任,主要从事伦理学、人生哲学、社会史等领域的研究。学术专著包括《人类还有未来吗》、《文明的两端》、《良心论──传统良知的社会转化》和《底线伦理》等。翻译著作包括《伦理学体系》、《正义论》等。







The following excerpts are translated from He’s Does Humanity Still Have a Future? (人类还有未来吗) (2020). We precede each excerpt with a bolded summary of its key point.


▶Cite Our Translation
Concordia AI. “Huaihong He — Chinese Perspectives on AI Safety.” Chineseperspectives.ai, 29 Mar. 2024, chineseperspectives.ai/Huaihong-He.

▶Cite This Work
何怀宏(2020). 人类还有未来吗. 广西师范大学出版社.





Selected excerpts

The huge ability of human beings to control matter may cause catastrophe:


“The thousands of nuclear bombs hanging over human heads may cause catastrophe to humanity at any time, to say nothing of superintelligent machines and genetically modified species. Not only are there ready-made nuclear weapons; some currently non-nuclear developed countries could also quickly acquire the capacity to manufacture them, and the poison gas, biological, and chemical weapons that have been gathering dust for many years may yet be unsealed. In today’s technological world, in which humans wield such a huge ability to control matter, it is almost effortless for human beings to inflict serious blows on themselves. And we cannot rule out the desperate acts of madmen or the misjudgments of normal people.” (pp.258-259)
精选原文

人类巨大控物能力可能造成灭顶之灾:


“且不说超级智能机器和转基因物种,就是悬在人类头顶的数千颗核弹也随时可能对人类造成灭顶之灾。不仅有现成的核武,一些目前无核的发达国家也拥有可以很迅速地进入制造核武的能力,尘封多年的毒气、细菌和生化武器也还有可能启封。在今天人类掌握如此巨大控物能力的技术世界里,人类要给自己造成严重打击几乎是轻而易举。而我们不能排除总是有狂人的亡命之举或正常人的误判。”



Risks that come from people controlling objects are distinct from those coming from the objects themselves:


“If we consider the dangers that man-made objects will bring to people, we may say that what we can see at present are dangers brought about by people’s control and abuse of objects. One is the danger of some people using man-made weapons of mass murder, such as nuclear bombs and biological and chemical weapons, to massacre and even exterminate human beings. Another is the danger of humans using AI, genetic engineering, and the like to change and even possibly exterminate the human species. But these dangers still come from people rather than from the objects. Dangers that truly come from the objects themselves would stem from their abilities and consciousness: for example, artificial objects (superintelligences) that have gained self-awareness and identify as machine agents ultimately rebelling against humans, eliminating humans, and replacing humans.” (pp.44-45)

由人控制物和真正来自物的风险:


“如果考虑人造物将给人带来的危险,我们或许可以说,目前我们能够看到的还是由人控制物、滥用物所带来的危险:一是某些人使用人造的大规模杀人武器如核弹、生化武器来杀戮乃至毁灭人类;一是人通过人工智能、基因工程等改变乃至最后可能灭绝人类这一物种。但这些都还是来自人而非来自物。真正来自物本身的危险,是应该出自它的能力和意识,比如获得了自我意识和机器主体的认同的人造物(超级智能)最后反叛人类,消灭人类,取人类而代之。”



War and preparation for it continue to stimulate the development of new technologies:


“Although nuclear disarmament has occurred, there are still thousands of nuclear missiles hanging above our heads at almost zero distance — enough to exterminate humanity many times over. To reduce the threat of nuclear weapons, we should not only reduce hostility, but also reduce misjudgment and proliferation. War and preparation for it continue to stimulate the development of new technologies; many new technologies were invented on an accelerated timeline because of war and only later converted to civilian use. Technologies invented in peacetime are also continually entering military use, such as weapons employing AI — drones, the Killer Bee1, and the space warfare that may occur in the future. Therefore, it seems that in the future all we will be able to rely on is, increasingly, a spirit and ethics of human self-restraint.” (pp.250-251)

战争和备战不断刺激新技术的发展:


“人类还能指望什么呢?虽然经过了核裁军,但今天也依然还有数千枚核导弹几乎可以说是零距离地悬在我们的头顶,足可以多次毁灭人类。要减少核武器的威胁,我们除了应该减少敌意,还要减少误判和扩散。战争和备战不断刺激新技术的发展,许多新技术正是因为战争的原因而加速发明出来,随后才转为民用,而和平年代发明的技术也在不断进入军事的应用,比如应用了人工智能的武器——无人机、杀人蜂,还有日后可能发生的太空战等。所以,我们未来所能依靠的,看来也就只能越来越多地是一种人类自我克制的精神和伦理了。”



“Humanity coordination politics” is needed to deal with the serious problem of human-machine relations:


“We must also consider issues within the existing national and international systems; it is impossible for us to fundamentally change these systems. But I do want to propose a concept distinct from “domestic politics” and “international politics”: that of “humanity politics,” or more specifically, “humanity coordination politics.” The problems we will face cannot be solved by any country alone; in the face of such an existential crisis, humanity has truly become a community with a shared future2. In other words, humanity needs a coordinated politics to deal with the urgent and serious problem facing all of us, namely how to handle the relationship between humans and intelligent machines, especially between humans and possible future superintelligent machines.” (pp.102-103)

“人类协同政治”的概念:


“我们还必须在现有的国家体制和国际体系中来考虑问题,我们也不可能根本地改变这一体制和体系。但我的确想提出一个概念,一个有别于“国内政治”“国际政治”的“人类政治”的概念,或者更具体地来说,是“人类协同政治”的概念。因为我们将面临的问题不是任何一个国家能够单独解决的,人类面对这样一种生存危机真正成了一个人类命运的共同体。也就是说,人类需要一种协同的政治来应对这一全人类共同面对的迫切和严重的问题,那就是如何处理人与智能机器,尤其是人与未来可能的超级智能机器的关系。”



In the face of common dangers, love for one’s country should give way to love of humanity:


“People must truly realize that humanity is a community with a shared future. But this awareness might only be able to form in the face of imminent disaster; only in the face of common dangers can human beings truly unite. If this disaster does approach, both international political relations and domestic politics may become less important than they used to be. Patriotism and “one’s country first” should give way to humanity-ism (人类主义) or “humanity first.”” (pp.104-105)

面对共同的危险爱国主义应让位于爱人类主义:


“人们要真正意识到人类是一个命运共同体。但这种意识也许需要大难当头才能形成,面对共同的危险,人类才能真正团结起来。如果这一大难真的临近,国际政治关系与国家内部的政治都可能变得不像过去那么重要了。爱国主义、“本国优先”应让位于爱人类主义或“人类优先”。”


We should first focus on preventing the worst outcomes from AI:


“I basically uphold a kind of “bottom-line thinking” [with respect to AI]; that is to say, first consider, at the level of the bottom line (which is first and foremost survival), how to prevent the worst from happening. Then think about striving for the best, or rather the least bad, case… Secondly, this bottom line also refers to basic moral and even legal constraints and norms, which are the most basic principles for preserving life. The main consideration is how to establish rules for people regarding machine ethics, rather than how to cultivate machines with noble values. Moreover, the worst-case scenario for AI is likely to occur precisely when people think, or demand, the best: that is, at the critical point when this kind of intelligence becomes a general-purpose superintelligence exceeding human intelligence. After this critical point, or singularity, the most intelligent beings in the world will no longer be human beings, but some kind of superintelligent beings that are unknown to human beings now and will remain unknown in the future.” (pp.138-139)

人工智能的底线思维:


“对人工智能的思考也是如此。我基本上是秉持一种“底线思维”,也就是说首先在底线(这首先是生存)的层次上,考虑防止最坏的情况发生。然后,再考虑去争取最好,或者毋宁说是最不坏的一种情况。我对“最好”的理解可能和许多人不同,也就是指“还不坏”。其次,这一底线也是指基本的道德乃至法律的约束和规范,这是保存生命的最基本的原则。主要考虑的是如何确立人对机器伦理的规则,而不是考虑如何培养机器具有高尚的价值观。而且,人工智能的最坏情况很可能恰恰发生在人们认为最好或者说要求最好的时候,也就是说发生在这种智能变成通用的、超过了人类智能的超级智能时的临界点。过了这个临界点或者说奇点,世界上最聪明的存在就不是人类了,而是人类现在尚且不知、以后依然不知其究竟的某种超级智能存在。”



The risks of superintelligent AI may be harder to guard against than nuclear energy:


“Nuclear energy aroused great fear and vigilance when it was first unveiled; people have a fairly full understanding of its destructiveness, and so even as they develop it they work hard to guard against it, restrict it, and regulate it. Not so with AI, which may tamely obey human will right up until the day it suddenly has a will of its own and works according to that will. To give an extreme example that Bostrom mentioned in Superintelligence: Paths, Dangers, Strategies: a machine is initially set to produce paper clips with maximum efficiency. Once it has attained an almost omnipotent ability that exceeds human intelligence, it may ignore human will and use all the “materials” it can gather as resources for making paper clips.” (pp.98-99)

超级人工智能的风险可能比核能更难防范:


“核能一开始的亮相就引起了人们极大的恐惧和警惕,人们对它的毁灭性有相当充分的认识,于是人们即便在发展它的同时也在努力防范它,制约它,规范它。但人工智能却不然,它可能一直驯服地顺从人类的意志,直到它可能突然有一天有了自己的意志,将按照自己的意愿工作。举一个波斯特姆在《超级智能——路线图、危险性与应对策略》中举过的极端的例子:有一台被初始设定了最大效率地生产曲别针的机器,它一旦获得了超过了人的智能的、几乎无所不能的能力,它就有可能无视人类的意志,将一切可以到手的“材料”都用作资源来制造曲别针。”



“Stuart Russell believes that creating a machine smarter than one’s own species is not necessarily a good thing. He suggests that in order to prevent robots from “taking over” from humans, humans must create robots with “selfless heart-minds.”3 Therefore, when programming a robot, one should program in altruism, humility, prudence, and other correct human values.” (pp.140-141)
“斯图尔特·拉塞尔(Stuart Russell)认为,制造出一种比自身物种更为聪明的机器并不一定是好事。他建议,为了防止机器人“接管”人类,人类必须制造出具有“无私心”的机器人。因此,在为机器人编程时,就应当将利他主义、谦虚谨慎的品质及其他一些人类的正确价值观编写进去。”



Civilization needs to consider the ten-year plan, the hundred-year plan, and the thousand-year plan, and prepare for future emergencies:


“Some scientists believe that people today might be overestimating the speed of AI development, and that there is no possibility of superintelligent machines developing, or of a big crisis emerging, in the next twenty or thirty years. This would be great, and I even hope that this timeframe can be extended to fifty or sixty years, or over a hundred. However, civilization should consider not only the ten-year plan and the hundred-year plan, but also the thousand-year plan. Considering the rapid growth of science and technology in the past few decades, the possibility of various accidental discoveries, and a basic contradiction of modern human civilization that will be discussed later, human beings have no choice but to prepare for future emergencies.” (pp.236-237)

文明要考虑十年大计、百年大计,还有千年大计并未雨绸缪:


“还有的科学家认为,现在的人们可能高估了人工智能的发展速度,未来的二三十年都不会有发展出超级智能机器的可能或者说出现大的危机,这也很好,我甚至希望这个时间再能延长到五六十年、上百年。但是,文明不仅要考虑十年大计、百年大计,还有千年大计。考虑到这几十年科技的飞速增长,以及各种意外发现的可能,以及下面将要谈到的人类现代文明的一个基本矛盾,人类就不能不未雨绸缪。”


A “critical minority” is needed for the wellbeing of the majority and of all of humanity:


“It is the “critical minority,” and not necessarily the majority, that can currently be relied upon or appealed to when it comes to taking farsighted decisions and initiatives to prevent worst-case scenarios for AI. For the wellbeing of the majority and all mankind, a “critical minority” is needed; “let some people understand first.”4 This “critical minority” should include at least four types of people… front-line [AI] researchers… [AI] development companies’ owners, managers and investors… political leaders… and conceptual people, such as artists, humanities and social science scholars, media people and so on.” (pp.149-151)

为了多数人和全人类的福祉需要“关键的少数”:


“在采取有关预防人工智能发生最坏情况、具有深谋远虑的决策和举措方面,目前能够依靠或诉诸的是“关键的少数”,而不一定是多数。为了多数人和全人类的福祉,需要“关键的少数”,“让一部分人先明白起来”。这个“关键的少数”至少应该包括四种人,即目前掌握科技、政治和舆论资源的四种人。”



Those with an information and wealth advantage could maintain power forever:


“Current “homo sapiens” will split into a minority of high-level people5 who have been transformed into material objects and a majority of low-level people seen as useless… Everyone may still be able to live a materially rich life, but there will still be a power inequality between those who understand programs and those who do not, between those who have the money to keep extending their lives and those who do not. Those who can master and control information will be incomparably superior, in their ability to control things, to those who do not control information and do not understand algorithms. And if those rulers can maintain their own immortality, then, adding in the huge asymmetric advantages of the technological means they control, rule by power-centralizer(s) becomes all the more likely to emerge.6 This time, moreover, the ruled cannot hope that the natural law of life and death will take effect and interrupt this centralization of power…”

集权者长生不老统治的风险:


“现有的“智人”也要分裂,即一是从“智人”的“物化”变成的少数“高端人”“物化人”,二是还有被他们视作无用的多数原先的“智人”“低端人”,两者之间有一种极大的不平等。多数人将可能由于缺少能力和金钱而不可能追求,还有一些人可能是由于缺乏兴趣乃至强烈抵制而拒绝将自己“物化”。能够掌握和控制信息的人物在控制能力上将无比地优越于没掌握信息和不懂得算法的人。如果那些统治者能够尽量维持自己的长生不老,加上他们能够掌握的技术手段的极大不对称的优势,就更有可能出现集权者的统治了。而且这一次被统治者还无法希望自然生死规律发生作用而打断他们的集权了。”


“Of course, someone centralizing power could also be a “benevolent” centralizer: he could provide the people with abundant “bread” and “games that make them happy,” and he could gradually lessen violence and coercion. He has the means to give most people a prosperous material life. Most of the “useless” people will not become as impoverished as the proles in Orwell’s 1984.” (pp.40-41)
“集权者当然还可以做一个“慈善”的集权者,他可以给大众提供丰富的“面包”和“快乐的游戏”,他可以渐渐淡化暴力与强制,他有条件给大多数人富足的物质生活。大多数“无用者”将不会像奥威尔《1984》中的“无产者”那样贫困。”


It is hard to consider what humans should do about human-machine relations when we cannot predict machines’ future attitude towards humans:


“The ethics of relations between humans and non-human things mainly considers: from a position of strength over the weak, how should we treat animals and other non-human things kindly? The ethics of human-machine relations, meanwhile, mainly considers: although we are still in a strong-versus-weak relationship vis-a-vis machines, the strong and weak positions may be reversed in the future. On the basis of anticipating how they will treat us, we have to consider what we should do now. What can we do to them? But a big dilemma is this: although our current attitude towards them depends on their future attitude towards us, it is precisely this latter point that we are very unclear on, or even unable to predict.” (pp.72-73)

人物关系和人机关系的伦理:


“人物关系的伦理主要是考虑:在一种强对弱的地位上,我们应该怎样善待动物等其他外物?而人机关系的伦理则是主要考虑:虽然目前我们对它们还是处在强对弱的地位,但未来有可能强弱易位。在一种预期它们将会怎样对待我们的基础上,我们要考虑现在应该怎么办?我们可以对它们做些什么?但一个很大的困境是:虽然目前我们对它们的态度有赖于未来它们对我们的态度,但恰恰是这后一点我们很不清楚甚至可以说无法预期。”



There is something special about humans and their achievements:


“There is also this opinion: “Even if human beings are replaced by another species, that may not be bad; the new species, such as silicon-based organisms, may even be a more advanced species.” This is a very open-minded, or rather optimistic, opinion. I can say almost with certainty that I cannot persuade those who hold it, people who so firmly believe in progress or, to put it another way, are so unafraid of any change. I admit that I still have my own stubbornly held opinion: although the history of human beings is not very long and the history of civilization spans only some 10,000 years, and although human beings have a certain weakness as carbon-based organisms, it may be precisely because of these factors that they have achieved such plentiful spiritual and cultural achievements. I still cherish these achievements, even beyond all else, and I also cherish our daily lives as humans and all the feelings that belong to humans.” (pp.258-259)

对于人类物种的个人执念:


“也有这样一种意见:“即便人类被另一种物种代替了,那么,可能也是不赖,甚至那新的物种——比如说硅基生物,还可能是一种更先进的物种呢。”这是一种非常达观或者说乐观的意见。我几乎可以肯定地说,我说服不了持这种一心相信进步或者说不畏惧任何变化的意见的人。我承认我还是有一点个人的执念:人类的历史尽管并不很长,文明史也就只有一万余年,人类尽管有一种作为碳基生物的软弱,但可能正是因此取得了丰硕的精神文化成果。我还是珍惜,甚至无比地珍惜这些成果,也珍惜我们人类的日常生活和各种属人的感情。”






Translator’s notes 


1. A kind of Unmanned Aerial Vehicle, see: https://www.swiftengineering.com/r-and-d/killer-bee-uas/

2. This draws on a slogan commonly used in Chinese politics and diplomacy, 人类命运共同体, which is often translated as ‘community with a shared future for mankind’.

3. For more on the translation of “心” see here. Here the author is likely referring to Russell’s first principle for beneficial AI: the machine’s purpose is to maximize the realization of human values; in particular, it has no purpose of its own and no innate desire to protect itself.

4. The structure of this statement mirrors Deng Xiaoping’s call for letting some people get rich first (to reach common prosperity faster).

5. The author uses the term "物化(人)", literally "materialization/materialized person" to denote the technologically mediated process of replacing one's flesh with synthetic materials. 

6. “集权者的统治” could also be translated as “totalitarian rule”, which may have negative connotations. We use a more literal, neutral-sounding translation to reflect the author’s later suggestion that centralized power could also be used in a benevolent manner.







