
100-year-old Kissinger talks about artificial intelligence: Don't wait until a crisis comes to start paying attention

By 王林
Published 2023-06-05 22:23:35

In 2023, Henry Kissinger turned 100 years old, yet his mind remains sharp. He still takes part in discussions of international issues as he always has, and continues to make admirable predictions.

The Economist held an eight-hour conversation with Kissinger at the end of April. In it, Kissinger expressed concern about the increasingly fierce competition between China and the United States for technological and economic leadership, and worried that artificial intelligence would greatly intensify the confrontation between the two countries. Kissinger believes that artificial intelligence will become a key factor in the security field within five years, with disruptive potential comparable to that of movable-type printing.

“We live in a world of unprecedented destructiveness,” Kissinger warned. Although humans will in principle be involved in machine learning feedback loops, AI could become a fully autonomous, unstoppable weapon.

Kissinger has long been deeply concerned with the development of artificial intelligence. He once said, "Technical people care about applications, but I care about impact." Recently, Kissinger, former Google CEO Eric Schmidt, and MIT Schwarzman College of Computing Dean Daniel Huttenlocher co-authored the book The Age of AI: And Our Human Future. In it, Kissinger argues that artificial intelligence will reshape global security and the world order, and reflects on what its development means for individuals and for human self-identity.


Since the beginning of recorded human history, security has been the minimum goal of every organized society. In every era, societies seeking security have tried to turn technological advances into ever more effective means of monitoring threats, training and preparing for war, exerting influence beyond their borders, and strengthening their militaries to prevail when war comes. For the earliest organized societies, advances in metallurgy, fortification, horse breeding, and shipbuilding were often decisive. By the early modern period, innovations in firearms, naval vessels, navigation tools, and techniques played a similar role.

As their power grows, major powers weigh one another to assess which side would prevail in a conflict, what risks and losses such a victory would entail, what justifications exist for going to war, and what impact the military involvement of other great powers would have on the outcome. The combat capabilities, goals, and strategies of different countries are thereby set, at least in theory, into an equilibrium: a balance of power.

Cyber War in the Artificial Intelligence Era

Over the past century, means and ends have fallen out of strategic alignment. Technologies used in the pursuit of security keep emerging and growing more disruptive, while strategies for applying them to achieve stated goals become increasingly elusive. In our own era, the emergence of cyberspace and artificial intelligence has added extraordinary complexity and abstraction to these strategic calculations.

Today, in the period since the end of the Cold War, major powers and other countries have used cyber capabilities to enhance their national arsenals. The effectiveness of these capabilities stems mainly from their opacity and deniability, and in some cases from their use on the blurred frontier between disinformation, intelligence gathering, sabotage, and traditional conflict. This has produced a variety of strategies for which there is no accepted theoretical doctrine. At the same time, every advance reveals new vulnerabilities.

The era of artificial intelligence may further complicate the mysteries of modern strategy, in ways humans never intended and may never fully understand. Even if countries do not widely deploy so-called lethal autonomous weapons—automatic or semi-autonomous AI weapons trained and authorized to select targets and attack without further human authorization—AI can still augment conventional weapons, nuclear weapons, and cyber capabilities, making security relationships between adversaries harder to predict and maintain, and conflicts harder to limit.

No major country can ignore the security dimension of artificial intelligence. A race for strategic advantage in artificial intelligence has begun, especially between the United States and China, and of course Russia. As awareness or suspicion spreads that other countries are acquiring certain AI capabilities, more countries will seek to acquire these capabilities. Once introduced, this ability will spread rapidly. While creating a sophisticated AI requires a lot of computing power, propagating it or using it usually does not.

The solution to these complex problems is neither despair nor surrender. Nuclear technology, cyber technology and artificial intelligence technology already exist, and each of these technologies will inevitably play a role in strategy. There is no way we can go back to a time when these technologies were “uninvented.” If the United States and its allies shrink from the impact these capabilities could have, the result will not be a more peaceful world. Instead, it would be a less balanced world in which countries compete to develop and use the most powerful strategic capabilities without regard for democratic responsibilities and international balance.

In the coming decades, we will need to achieve a balance of power that takes into account both intangible factors such as cyber conflict and the spread of large-scale disinformation, as well as the unique properties of artificial intelligence-assisted warfare. The cruel reality forces people to realize that even in competition with each other, opponents in the field of artificial intelligence should be committed to limiting the development and use of extremely destructive, unstable and unpredictable artificial intelligence capabilities. Sober efforts at AI arms control are not incompatible with national security; they are an attempt to ensure that security is sought within the framework of humanity’s future.

The more digitally capable a society is, the more vulnerable it becomes

Throughout history, a country's political influence has tended to be roughly matched by its military power and strategic capabilities, an ability to wreak havoc on other societies even if primarily by exerting implicit threats. However, a balance of power based on this trade-off of forces is not static or self-sustaining. Rather, it relies first on the parties agreeing on what constitutes this power and the legal limits of its use. Second, maintaining a balance of power requires that all members of the system, especially adversaries, make consistent assessments of each state's relative capabilities, intentions, and consequences of aggression. Finally, maintaining a balance of power requires an actual, recognized balance. When one party in the system increases its power in a way that is disproportionate to other members, the system will try to adjust by organizing counterforces or by adapting to the new reality. The risk of conflict caused by miscalculation is greatest when the balance of power becomes uncertain, or when countries weigh their relative strengths in completely different ways.

In this day and age, these trade-offs have become even more abstract. One reason for the shift is the so-called cyber weapon, a class of weapons with both military and civilian uses whose status as a weapon is therefore ambiguous. In some cases, the effectiveness of cyber weapons in exercising and enhancing military power stems precisely from their users not disclosing their existence or acknowledging their full capabilities. Traditionally, it has not been difficult for parties to a conflict to recognize that fighting has occurred, or who the belligerents are. Adversaries calculate each other's strength and evaluate how quickly they can deploy their weapons. But these axioms of the traditional battlefield cannot be applied directly to the cyber realm.

Conventional and nuclear weapons exist in physical space where their deployment can be detected and their capabilities can be at least roughly calculated. By contrast, much of the effectiveness of cyberweapons comes from their opacity; their power is naturally diminished if they are made public. These weapons exploit previously undisclosed software vulnerabilities to penetrate networks or systems without the permission or knowledge of authorized users. In the event of a "distributed denial of service" (DDoS) attack, such as an attack on a communications system, an attacker may overwhelm a system with a barrage of seemingly valid requests for information, rendering it unusable. In this case, the true source of the attack may be obscured, making it difficult or impossible (at least at the time) to identify the attacker. Even one of the most famous incidents of cyber-industry sabotage, when the Stuxnet virus compromised manufacturing control computers in Iran's nuclear program, was not officially acknowledged by any government.

Conventional and nuclear weapons can be aimed at targets relatively precisely, and ethics and law demand that they target only military forces and facilities. Cyber weapons, by contrast, can affect computing and communication systems broadly, often striking civilian systems with particular force. They can also be absorbed, modified, and redeployed by other actors for other purposes. In some respects this makes cyber weapons similar to biological and chemical weapons, whose effects can spread in unintended and unknown ways. In many cases, cyber weapons affect society at large, not just specific targets on the battlefield.

These characteristics make cyber arms control difficult to conceptualize or implement. Nuclear arms control negotiators could publicly disclose or describe a class of nuclear warheads without negating the weapon's capabilities. Cyber arms control negotiators (who do not yet exist) would have to resolve a paradox: discussing a cyber weapon's power may forfeit that power (by allowing adversaries to patch vulnerabilities) or spread it (by allowing adversaries to copy the code or intrusion method).

One of the central paradoxes of the digital age is that the more digitally capable a society becomes, the more vulnerable it becomes. Computers, communications systems, financial markets, universities, hospitals, airlines, public transportation systems, and even the mechanisms of democratic politics are all vulnerable, to varying degrees, to cyber manipulation or attack. As advanced economies integrate digital command and control into power plants and grids, move government operations onto large servers and cloud systems, and transfer records to electronic ledgers, their exposure to cyberattack multiplies. These moves offer a richer set of targets, so that a single successful attack can do substantial damage. By contrast, low-tech states, terrorist groups, and even individual attackers may reckon that a digital breach would cost them relatively little.

Artificial intelligence will bring new variables to war

Countries are quietly, sometimes tentatively, but unmistakably developing and deploying artificial intelligence across a wide range of military capabilities that facilitate strategic operations, with potentially revolutionary implications for security policy.

War has always been a field full of uncertainty and contingency, but the entry of artificial intelligence into this field will bring new variables to it.

Artificial intelligence and machine learning will transform actors' strategic and tactical choices by expanding the strike capabilities of existing weapon classes. AI could not only make conventional weapons more accurate but also allow them to be targeted in new and unconventional ways, such as (at least in theory) at a specific person or object rather than a location. By studying large amounts of information, AI cyber weapons can learn how to penetrate defenses without humans helping them discover exploitable software vulnerabilities. Likewise, AI can be used defensively to locate and fix vulnerabilities before they are exploited. But because attackers can choose their targets while defenders must protect everything, AI may give the offense an advantage, if not invincibility.

If a country faces an adversary that has trained artificial intelligence to fly aircraft, make independent targeting decisions, and open fire, how would adopting this technology change its tactics, its strategy, and its willingness to escalate the scale of war (even to nuclear war)?

Artificial intelligence opens new horizons in the information space, including in the field of disinformation. Generative AI can create massive amounts of plausible false information. AI-fueled information warfare and psychological warfare, including the use of forged personas, pictures, videos, and speeches, expose disturbing new vulnerabilities at every turn, especially in free societies. Widely circulated demonstrations have already produced seemingly authentic images and videos of public figures saying things they never actually said. In theory, AI could also determine the most effective way to deliver such synthetic content to each person, tailored to their biases and expectations. If a synthetic image of a country's leader is manipulated by adversaries to sow discord or issue misleading directives, will the public (or even other governments and officials) see through the deception in time?

Act before disaster actually happens

Every major technologically advanced country needs to understand that it stands on the threshold of a strategic transformation as consequential as the advent of nuclear weapons, but with effects that will be more diverse, diffuse, and unpredictable. Every society expanding the frontiers of artificial intelligence should work to establish a national-level body to consider AI defense and security and to build bridges among the sectors that shape AI's creation and deployment. This body should be entrusted with two functions: maintaining the country's competitiveness relative to the rest of the world, and coordinating research on how to prevent, or at least limit, unwanted escalation of conflicts and crises. On this basis, some form of negotiation with allies and adversaries will be crucial.

If this direction is to be explored, then the world's two major artificial intelligence powers—the United States and China—must accept this reality. The two countries may conclude that, whatever forms of rivalry emerge in this new phase of competition, they should still seek consensus that they will not fight a cutting-edge technology war against each other. Both governments could delegate oversight to a team or senior official who reports directly to leaders on potential dangers and how to avoid them.

In the era of artificial intelligence, we should adjust our long-standing strategic logic. We need to overcome, or at least restrain, the drive toward automation before disaster actually strikes. We must prevent artificial intelligence that operates faster than human decision-makers from taking irreversible actions with strategic consequences. Defense forces must be automated without surrendering the basic premise of human control.

Contemporary leaders can pursue six major arms-control tasks by broadly and dynamically combining conventional, nuclear, cyber, and artificial intelligence capabilities.

First, leaders of rival and hostile nations must be prepared to engage in regular dialogue with each other about the forms of war they all wish to avoid, just as their predecessors did during the Cold War. To that end, the United States and its allies should organize around the interests and values they believe to be common, inherent, and inviolable, including those shaped by the generations that grew up at the end of the Cold War and in its aftermath.

Second, we must pay fresh attention to the unsolved problems of nuclear strategy and recognize that they are among the great strategic, technological, and moral challenges facing mankind. For decades, the memory of Hiroshima and Nagasaki reduced to scorched earth by atomic bombs forced people to recognize the unusual gravity of the nuclear problem. As former U.S. Secretary of State George Shultz told Congress in 2018: "I fear people have lost that sense of dread." Leaders of nuclear-armed states must recognize that they have a responsibility to work together to prevent disaster.

Third, leading powers in cyber and artificial intelligence technology should work to define their theories and limitations (even if all aspects of them are not made public) and identify points of connection between their own theories and those of competing powers. If our intention is deterrence rather than use, peace rather than conflict, limited conflict rather than universal conflict, these terms need to be re-understood and redefined in terms that reflect the unique dimensions of cyber and artificial intelligence.

Fourth, nuclear-armed countries should commit to internal inspections of their command-and-control and early-warning systems. Such fail-safe reviews should identify steps that strengthen protection against cyber threats and against unauthorized, inadvertent, or accidental use of weapons of mass destruction. They should also include options for foreclosing cyberattacks on facilities connected to nuclear command-and-control or early-warning systems.

Fifth, countries around the world, especially technologically powerful countries, should develop strong and acceptable methods to extend decision-making time as much as possible under heightened tensions and extreme circumstances. This should be a common conceptual goal, especially among competitors, that can link the steps (both immediate and long-term) needed to control instability and build mutual security. In a crisis, humanity must bear ultimate responsibility for whether or not to use advanced weapons. In particular, competitors should strive to agree on a mechanism to ensure that potentially irrevocable decisions are made in a way that helps humans think about them and is conducive to human survival.

Sixth, major artificial intelligence powers should consider how to limit the continued proliferation of militarized artificial intelligence, or undertake systematic nonproliferation work backed by diplomacy and the threat of force. Which technology acquirers are ambitious enough to use it for unacceptable, destructive purposes? Which specific AI weapons deserve special attention? Who will ensure that these red lines are not crossed? The established nuclear powers explored such a nonproliferation concept with mixed success. If a disruptive and potentially destructive new technology arms the militaries of the world's most hostile or morally unconstrained governments, strategic balance may prove elusive and conflict unmanageable.

Since most artificial intelligence technologies are dual-use, it is our responsibility to stay ahead in this research-and-development race. But that also obliges us to understand the technology's limits. Waiting until a crisis strikes to begin discussing these issues is too late. Once used in a military conflict, AI can respond so quickly that it will almost certainly produce results faster than diplomacy can. Great powers must discuss cyber and AI weapons, if only to develop a common vocabulary of strategic concepts and a sense of each other's red lines.

To achieve mutual restraint in the most destructive capabilities, we must not wait until a tragedy occurs before trying to make up for it. As humanity begins to compete in creating new, evolving, intelligent weapons, history will not forgive any failure to set limits on this. In the era of artificial intelligence, the long-lasting pursuit of national advantage must still be premised on safeguarding human ethics.


- end -



Source: sohu.com