Bo ZHANG
张钹




About the author
Bo Zhang is the honorary dean of Tsinghua University's Institute for Artificial Intelligence and a professor in the university's Department of Computer Science. He is an academician of the Chinese Academy of Sciences, chief scientist at the AI safety and security start-up RealAI, and a technical consultant to Microsoft Research Asia. He is one of the founding figures of China's AI field.



关于作者
张钹,清华大学计算机系教授,中科院院士。现任微软亚洲研究院技术顾问。他参与人工智能、人工神经网络、机器学习等理论研究,以及这些理论应用于模式识别、知识工程与机器人等技术研究。






This is an English translation of a piece titled “Academician Zhang Bo: Make Responsible AI” (张钹院士:做负责任的人工智能). The piece is a transcription of a speech delivered online by Zhang to the World Internet Conference Wuzhen Summit’s “AI and Digital Ethics Sub-Forum” on November 10, 2022. 

▶ Cite Our Translation
Concordia AI. “Bo Zhang — Chinese Perspectives on AI Safety.” Chineseperspectives.ai, 29 Mar. 2024, chineseperspectives.ai/Bo-Zhang.

▶ Cite This Work
张钹 (2022-11-10). “做负责任的人工智能”. 在世界互联网大会乌镇峰会人工智能与数字伦理分论坛上的演讲. https://mp.weixin.qq.com/s/OgGoqoy6dCzJiEopMpnNXw





Translation

Make Responsible AI



The first person to suggest that the development of robots might carry ethical risks was Asimov, the American science fiction writer, in his short story “Runaround.” The year was 1942, long before the birth of AI. He also proposed a method to avoid these risks: the well-known “Three Laws of Robotics.” Raising these issues at that time showed real foresight. Later, the physicist Hawking and others continued to issue similar warnings, but these warnings were not taken seriously by everyone, least of all by the AI community.
原文

做负责任的人工智能


最早提出机器人发展中可能存在伦理风险的是,美国科幻小说家阿西莫夫在他的小说“环舞”(Runaround)中提出的,时间是1942年,早在人工智能诞生之前,为此他还提出规避风险的方法,即大家熟知的“机器人三定律”,应该说这些问题的提出具有前瞻性。后来物理学家霍金等也不断地提出类似的警告,但这些警告并没有引起大家特别是人工智能界的重视。



The reason was that the foundations of the argument were not sufficient. Those sounding the alarm believed that the continuous progress and development of machines would one day lead to their intelligence exceeding that of humans, and that especially once machines gained subjective consciousness, that is, once so-called "superintelligence" emerged, humans would lose control over machines, with catastrophic consequences.[1] This "technical logic" was not convincing to most AI researchers, because everyone knew that AI research was still at an exploratory stage, progress was slow, and many difficult problems remained unsolved. Building a "superhuman" robot is no easy matter. Moreover, whether the goal of "superintelligence" can be reached through so-called "artificial general intelligence" has always been controversial. We therefore believed that these risks were only concerns for the distant future and were in no hurry to consider them.
原因在于他们的立论依据不够充分,他们认为机器的不断进步和发展,有朝一日当它的智力超过人类,特别是机器具有主观意识时,即出现所谓的“超级智能”时,人类将会失去对机器的控制,从而带来灾难性的后果。这种“技术逻辑”对于大多数人工智能研究者来讲并不具有说服力。因为大家清楚地知道,人工智能研究工作目前还处于探索的阶段,进展缓慢、还受到很多问题的困扰,难以解决,制造“超人类”的机器人谈何容易。而且能不能通过所谓“通用人工智能”达到“超智能”的目标,也一直存在着争议。因此我们认为这些风险只不过是未来的“远虑”而已,不急于考虑。


However, after the rise of deep learning based on big data at the beginning of this century, people's understanding changed a great deal. They felt keenly that the ethical risks of AI were right in front of us and that governance was a pressing problem. Why? As you know, deep learning based on big data has been widely applied in many fields to carry out tasks such as decision-making, prediction and recommendation, with significant impacts on human society. However, people soon found that deep learning algorithms based on big data suffer from opacity, uncontrollability and unreliability, making it very easy to unintentionally misuse AI technology,[2] which could bring serious consequences for human society.
可是,当本世纪初基于大数据的深度学习在人工智能中崛起之后,人们的认识有了很大的变化,深切地感到人工智能的伦理风险就在眼前,治理迫在眉睫!这是为什么?大家知道,本世纪初基于大数据的深度学习被广泛地应用于各个领域,用来完成决策、预测和推荐等任务,给人类社会带来很大的影响。但是,人们很快发现基于大数据的深度学习算法具有不透明、不可控和不可靠等缺陷,导致AI技术很容易被无意误用,可能给人类社会带来严重的后果。



As you know, with current AI technology we can use a generative neural network to produce high-quality text and images that meet a user's requirements. However, the same neural network can also generate text and images that are riddled with bias (racial, gender and so on), unfairness, and errors, and that do not conform to the user's requirements. This happens entirely outside the user's control. It is easy to imagine that making decisions or predictions on the basis of such erroneous generated text could have serious consequences that undermine fairness and impartiality.
大家知道,根据目前人工智能的技术,我们可以通过生成式神经网络根据使用者的要求生成符合要求且质量良好的文本和图像。但同样的神经网络也可以违背用户的要求生成充满(种族、性别等)偏见、不公正和错误百出的文本与图像,完全不受使用者的控制。可以设想,如果根据这些生成的错误文本做决策或预测,就可能带来破坏公平性与公正性的严重后果。



We previously thought that only when the intelligence of a robot approached or exceeded that of human beings would we lose control of it. Unexpectedly, despite machine intelligence still being so rudimentary, we have already lost control of it, much faster than anticipated. This is the very serious reality facing us.

我们原以为,只有当机器人的智能接近或超过人类之后,我们才会失去对它的控制。没有想到的是,在机器的智能还是如此低下的时候,我们已经失去对它的控制,时间居然来得这么快,这是摆在我们面前很严峻的现实。



Asimov put forward a plan to avoid this ethical crisis[3] in the "Three Laws of Robotics": "First, a robot must not harm a human being, or through inaction allow a human being to come to harm; second, a robot must obey human commands, unless those commands conflict with the first law; third, a robot must protect its own existence, as long as such protection does not violate the first or second law." In a word, human beings should maintain firm control over machines: let machines be slaves to humans! Can this approach resolve the ethical crisis posed by machines? The answer is obviously no!

阿西莫夫在《机器人三定律》中曾经提出规避伦理危机的方案,内容是“一,机器人不得伤害人类,或因不作为而让人类受到伤害;二,机器人必须服从人类的命令,除非这些命令与第一定律相冲突;三,机器人必须保护自己的存在,只要这种保护不违反第一或第二定律”。总之一句话,人类应该牢牢把握机器的控制权。让机器做人类的奴隶!这种办法能否解决机器的伦理危机?答案显然是否定的!



In fact, making "machines completely obey human commands" is exactly what we did with early "non-intelligent" machines. But if we want machines to develop in the direction of intelligence, we cannot leave them entirely at the mercy of human beings; we need to give them a certain degree of freedom and initiative. It is on this principle that generative neural networks use the mathematical tool of "probability" to let machines generate rich and diverse texts and images. But for the same reason, there is necessarily some probability (possibility) of generating substandard or harmful text and images. This is the price we must pay for giving machines intelligence, and it is difficult to avoid.
实际上,让“机器完全听从人类的指挥”,在早期“无智能”的机器中我们就是这样做的。但是如果我们想让机器向智能化的方向发展,就不能让机器完全听候人类的“摆布”,需要赋予它一定的自由度和主动权。生成式神经网络就是根据这个原理,利用“概率”这一数学工具,使机器能够生成丰富多样的文本和图像。但也因为这个原因,就一定存在生成不合格和有害文本与图像的概率(可能性)。这是我们在赋予机器智能的时候所必须付出的代价,难以避免。
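To make the probabilistic point above concrete, here is a minimal Python sketch (not part of the speech; the vocabulary and probabilities are purely hypothetical). It shows that sampling from a probability distribution, which is what gives generated text its richness and diversity, also leaves a small but nonzero chance of producing an undesired output.

import random

# Hypothetical "vocabulary" of possible model outputs and their probabilities.
# Even the undesired output retains a small probability.
vocab = ["helpful answer", "neutral filler", "biased or harmful text"]
probs = [0.90, 0.09, 0.01]

random.seed(0)
samples = random.choices(vocab, weights=probs, k=10_000)  # sample 10,000 outputs
rate = samples.count("biased or harmful text") / len(samples)
print(f"Undesired outputs in 10,000 samples: {rate:.2%}")  # roughly 1%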



So could we limit a machine's incorrect behavior by setting strict ethical principles for it? In practice, this too is very difficult, not only because "ethical" principles are hard to state precisely, but also because such principles, even if they could be defined, would be hard to enforce. Take a simple example: a self-driving vehicle travelling on an ordinary road. If we stipulate that the vehicle must strictly obey traffic rules, this "principle" seems clear enough. But if the road also carries human-driven vehicles and pedestrians that "intentionally or unintentionally violate traffic rules," the self-driving vehicle may be unable to proceed and complete its own task. For example, it might need to merge left in order to turn left, yet be unable to do so because vehicles in the left lane are not keeping the prescribed distance between them. This shows that a self-driving vehicle required to strictly obey traffic rules while also reaching its destination cannot easily satisfy both goals in an uncertain traffic environment. We can see that the development of AI will inevitably disrupt ethics and traditional norms.
那么我们有没有可能通过给机器规定严格的伦理准则来限制它的错误行为?实际上,这也很困难!不仅因为“伦理”的准则很难准确描述,但即便可以定义,也很难执行。举一个简单的例子,比如自动驾驶车(或无人车)行驶在普通的马路上,如果我们规定自动驾驶车必须严格遵守交通规则,这个“准则”应该是很明确的。但如果路上同时还有“有意或无意违反交通规则”的有人车和行人,自动驾驶车则无法行驶去完成自身的任务。比如,自动驾驶车需要向左并线以便左拐,由于左路车道上的车辆之间没有保持规定的车距,自动驾驶车就无法实现向左并线。这恰恰说明,自动驾驶车一方面要严格遵守交通规则,另一方面要完成达到目的地的任务,在不确定的交通环境下,这两项目标是难以兼顾的。可见,人工智能的发展必然带来对伦理和传统规范的冲击。


The insecurity, untrustworthiness and brittleness of deep learning algorithms also create opportunities for intentional abuse. People can maliciously exploit the brittleness (lack of robustness) of an algorithm to attack it, causing the AI system built on that algorithm to fail, or even to take actively destructive actions. Deep learning can also be used for forgery, so-called "deepfakes." With AI "deepfakes," large volumes of realistic fake news (fake video), fake speeches (fake audio) and the like can be produced, disturbing social order and framing innocent people.

深度学习算法的不安全、不可信与不鲁棒,同时给有意的滥用带来机会。人们可以恶意利用算法的脆弱性(不鲁棒)对算法进行攻击,导致基于该算法的人工智能系统失效,甚至做出相反的破坏行为。深度学习还可以用来造假-即所谓“深度造假”,通过AI的“深度造假”,可以制造出大量逼真的假新闻(假视频)、假演说(假音频)等,扰乱社会的秩序、诬陷无辜的人。


Both the intentional abuse and the unintentional misuse of AI need governance, but the nature of that governance is completely different in the two cases. The former requires legal constraints and oversight by public opinion, a compulsory form of governance. The latter is different: it requires formulating corresponding evaluation standards and rules, conducting rigorous scientific evaluation and end-to-end supervision of the research, development and use of AI, and preparing remedial measures for when problems occur, so as to help people avoid misusing AI.

人工智能无论是被有意的滥用还是被无意的误用都需要治理,不过对这两者的治理性质上完全不同。前者要靠法律的约束和社会舆论的监督,是带有强制性的治理。后一种则不同,需要通过制定相应的评估标准和规则,对人工智能的研究、开发和使用过程进行严格的科学评估和全程监管,以及问题出现之后可能采取的补救措施等,帮助大家避免AI被误用。



Fundamentally speaking, the research, development and application of AI need to be people-oriented; we need to make responsible AI, starting from ethical principles of impartiality and fairness. To this end, we need to work hard to establish interpretable and robust AI theory. Only on this basis can we develop safe, trustworthy, controllable, reliable and scalable AI technology, and ultimately promote the fair, impartial and universally beneficial application and industrial development of AI. This is the line of thinking we advocate for in developing the third generation of AI.

从根本上来讲,人工智能的研究、开发与应用都需要以人为本,从公正、公平的伦理原则出发,做负责任的人工智能。为此,我们需要努力去建立可解释、鲁棒的人工智能理论,在此基础上,才能开发出安全、可信、可控、可靠和可扩展的人工智能技术,最终推动人工智能的公平、公正和有益于全人类的应用和产业发展。这就是我们提倡的发展第三代人工智能的思路。


AI research and governance need the participation and cooperation of people from different fields all over the world. Beyond those engaged in AI R&D and use, they also need the participation of people from fields such as law, morality and ethics. We need to clarify the standards of ethics and morality, and what counts as "moral" and "ethical"; different countries, ethnic groups, organizations and individuals understand these differently. We therefore need global cooperation to jointly develop a set of standards that serve the interests of all mankind. Humanity is a community of shared destiny.[4] We believe that through joint efforts we will find standards that meet the common interests of mankind. Only when the researchers, developers and users of AI all abide by jointly formulated principles can AI develop healthily and benefit all mankind.
人工智能研究和治理都需要全世界不同领域人员的参与与合作,除从事人工智能的研发和使用人员之外,还需要法律、道德、伦理等不同领域人员的参与。我们需要明晰伦理、道德的标准,什么是符合“道德”和“伦理”的,不同的国家、民族、团体和个人都有不尽相同的认识,因此需要全球范围的合作,共同制定出一套符合全人类利益的标准。人类是命运的共同体,我们相信通过共同的努力,一定会找到符合人类共同利益的标准。只有人工智能的研究、开发和使用人员,人人都遵守共同制定的原则,才能让人工智能健康地发展并造福于全人类。




Translator’s notes 


1. It is not clear whether ‘subjective consciousness’ (主观意识), here presented as a necessary condition for superintelligence, entails the ability to feel, an awareness of one’s situation in the world, or something else.

2. Misuse (误用) can also be translated as ‘use incorrectly,’ and is distinct from abuse (滥用). 

3. Zhang uses the term ‘ethical crisis’ to refer to a problem that AI safety researchers would typically describe as a control/safety problem.

4. This draws on a slogan commonly used in Chinese politics and diplomacy.






