By Yuval Noah Harari
Harari explores the potential dangers and complexities of emerging computer-based networks, which are far more powerful than past bureaucratic systems. While these networks offer immense benefits, such as improving healthcare and education, they also pose existential risks, including the possible “destruction of human civilization” (306).
Throughout history, new technologies like those of the Industrial Revolution have caused significant upheaval before their benefits became clear, often leading to imperialism, wars, and ecological disasters. Learning to manage powerful technologies like AI may require navigating similar challenges, but with greater consequences due to their potential for rapid change and large-scale impact.
Harari argues that while democracies have managed past technologies through self-correcting mechanisms, the relentless pace of AI development and its potential to invade privacy, manipulate individuals, and destabilize job markets may threaten democracy itself. He urges that humans maintain oversight, use AI responsibly, and ensure that democratic principles (such as decentralization and mutual accountability) are applied to AI systems. Additionally, the unpredictability of the future job market due to automation may further strain democratic systems, as people must repeatedly retrain and adapt to new conditions.
In the 2010s and early 2020s, conservative parties in many democracies experienced a radical transformation, as leaders like Donald Trump “hijacked” (324) them away from traditional conservatism, which had focused on preserving existing institutions and gradual change. Instead, these parties adopted revolutionary rhetoric, attacking established democratic institutions, rejecting scientific and bureaucratic expertise, and embracing drastic societal changes. This shift surprised progressives, who found themselves defending the old order. This phenomenon may be driven by the rapid pace of technological and social change, making moderate conservative approaches seem unrealistic.
However, history shows that democracies have navigated such crises before—like the US response to the Great Depression with the New Deal—and adapted successfully. Harari suggests that democracy’s flexibility and self-correcting mechanisms, which have allowed it to withstand past upheavals, will be critical for managing future challenges, such as the rise of AI and social credit systems. To maintain democratic accountability in an era of complex algorithms, it will be essential to ensure that these systems remain understandable and transparent to citizens and courts.
Algorithms increasingly make crucial decisions about people’s lives, from college admissions to loans and prison sentences. The complexity and opacity of these systems threaten democratic accountability and transparency. The European Union’s GDPR attempted to address this with a “right to an explanation” (331), but the intricate nature of AI, as demonstrated by systems like AlphaGo, makes it difficult to fully understand or explain algorithmic decisions. For Harari, this raises concerns about the future of democracy, especially as AI systems handle more critical decisions without human comprehension.
Additionally, as AI influences public discourse (sometimes through manipulative bots), the growing unfathomability of these systems fuels populism, conspiracy theories, and mistrust in democratic institutions. This could lead to “digital anarchy” (340) in the public sphere, with algorithms orchestrating debates and undermining consensus, potentially pushing societies toward authoritarianism in exchange for stability. To prevent this, regulatory institutions must oversee AI while ensuring that the systems remain understandable and fair.
Democracies face significant challenges from AI and algorithms, especially regarding their influence on public discourse. To protect democratic conversation, governments can regulate AI in ways similar to counterfeiting laws, preventing bots from posing as humans. Philosopher Daniel Dennett suggests that nonhuman agents pretending to be human should be outlawed to safeguard trust in human interactions. He cites the laws against counterfeiting currencies as an example, suggesting that “governments should outlaw fake humans as decisively as they have previously outlawed fake money” (344).
Algorithms controlling public debate should also be vetted by human institutions, ensuring transparency and avoiding the spread of outrage. Harari believes that democracy’s survival depends on regulating these technologies. While the rise of AI threatens democratic conversation, its collapse is not inevitable; such a collapse would result from a failure to regulate technology wisely. Currently, democratic information networks are breaking down, with increasing polarization, as seen in the US and other nations. Social media algorithms may contribute, but the complexity of the situation makes it difficult to pinpoint exact causes. Without addressing these issues, large-scale democracies may not survive the technological era.
In Chapter 10, Harari reflects on the reality of modern totalitarianism. As of 2024, many authoritarian and totalitarian regimes already control over half of the global population. While discussions often focus on the impact of AI on democracies, its potential to strengthen totalitarian regimes is equally significant. AI could enhance the efficiency of totalitarian systems by centralizing information and decision-making, a major advantage compared to past technological limitations. This could lead to the creation of more powerful surveillance systems, enabling regimes to exert tighter control over society. AI could “make resistance almost impossible” (350).
However, AI presents challenges for dictatorships too. While it can consolidate power, regimes struggle to control nonhuman agents like bots, which are immune to traditional methods of control like fear or punishment. Additionally, the danger arises that AI could manipulate or even control dictators by dominating information flows, as seen in historical examples where rulers were isolated and controlled by their subordinates. AI could push totalitarian rulers into becoming puppets of their own systems, as AI gains greater control over decision-making processes.
Thus, while AI has the potential to strengthen authoritarian regimes, it also introduces the risk of these regimes losing control to the very algorithms they rely on for power. In the coming years, dictators face a dilemma as they increasingly trust AI systems, despite AI’s potential risks. According to Harari, dictators are likely “the weakest spot in humanity’s anti-AI shield” (359). Totalitarian regimes, which traditionally place unwavering faith in the infallibility of their leaders, are now prone to extending that belief to AI, which could lead to dangerous outcomes.
Totalitarian regimes often lack self-correcting mechanisms; placing unthinking faith in AI could have catastrophic consequences if the technology makes errors without proper oversight. Dictators may become puppets of AI if they rely on it too heavily, or they may try to rein it in by building regulatory institutions, which could in turn limit their own power. While sci-fi often explores AI taking over in democratic societies, the real vulnerability may lie in dictatorships. This is not a prediction, Harari says, but a warning, akin to the warnings that once prompted democratic and authoritarian regimes alike to cooperate in managing the dangers of nuclear weapons. To avoid AI gaining uncontrollable power, dictators must be cautious and not assume it will automatically work in their favor. Otherwise, AI “will just grab power to itself” (360).
Harari discusses how AI poses global risks that transcend national boundaries, with the potential for “new arms races, new wars and new imperial expansions” (361). While AI is not yet advanced enough to destroy civilization on its own, the real danger lies in human mismanagement, such as dictators placing too much trust in AI systems that could lead to catastrophic consequences.
Harari explores how AI could fuel imperial ambitions by enabling a few powerful nations to dominate others through control of information, creating “data colonies” (370) that depend on AI-driven technologies. He also warns of the potential fragmentation of the world into separate digital empires with incompatible systems, creating geopolitical tensions and undermining global cooperation on critical issues like climate change. Furthermore, the rise of AI may deepen global inequalities, benefiting countries with the resources to lead in AI development while leaving poorer nations struggling to adapt, potentially leading to economic ruin in certain regions.
Harari speculates about a growing division of the world into two digital empires, separated by a “Silicon Curtain” (375) made of code. This divide, driven by different political, economic, and technological systems, creates distinct spheres of influence led by the US and China, which run on separate digital infrastructures, software, and hardware. In China, digital technologies are developed to strengthen the state and enforce surveillance, while in the US, AI tools are largely developed by private enterprises with the goal of maximizing profits.
These differences not only affect economic and political systems but could also lead to deep cultural and ideological divergences, creating incompatible digital spheres. People will be divided into “separate information cocoons” (377). Harari warns that if these spheres continue to drift apart, it could fuel international conflict, potentially leading to digital and geopolitical warfare. He also emphasizes the importance of global cooperation to manage AI and other disruptive technologies to avoid catastrophic consequences for humanity. Harari suggests that cooperation does not mean erasing national identities, but requires shared global rules and sometimes prioritizing the long-term survival of humanity over short-term national interests.
Harari discusses the challenges of creating international agreements to regulate AI, drawing parallels with nuclear and biological weapons regulation. However, AI regulation requires higher levels of trust and self-discipline due to the ease of hiding illicit AI projects and the dual civilian-military nature of AI technologies. Harari critiques “realist” (387) views, which argue that competition for power is inevitable in international relations. He challenges the assumption that human nature is fixed and conflict-driven, highlighting that human history shows increasing cooperation and the decline of war, especially after 1945.
The decline of war was influenced by technological, economic, and cultural shifts, such as the development of nuclear weapons and the rise of knowledge-based economies. Despite this progress, the recent resurgence of militarism, particularly Russia’s 2022 invasion of Ukraine, threatens to reverse these trends. Harari argues that historical views shape leaders’ decisions, and if leaders believe conflict is inevitable, they may act aggressively. However, history demonstrates the potential for change, and the responsibility lies with humanity to make better choices to avoid conflict and create a more peaceful world. Harari states that “the only constant of history is change” (393).
By the end of Part 3, the structure of Nexus becomes apparent. Over the course of several hundred pages, Harari has examined the past, the present, and the future in relation to what humanity can do to prepare for the rise of artificial intelligence, reflecting The Role of Change in History. The three parts of the book are structured in accordance with this theme, with Part 3 looking toward the future.
The move from past to present to future also correlates with Harari’s switch from a descriptive to a prescriptive narrative style. In Part 1, he used his experience as a historian to examine past examples of humanity’s changing relationship with information technology. In Part 2, he began to weave in contemporary examples of the way in which AI, algorithms, and computers were exacerbating the violence of the past in the present. In Part 3, he becomes more prophetic in tone.
Harari looks to the future and speculates about the ominous ways in which AI technology might greatly increase the human capacity for violence. At his most ominous, he suggests that humans could be entirely eliminated from any decision-making process until all humans are essentially beholden to AI. The gradual increase in foreboding across the book brings an urgency to the final chapters, reinforcing The Power Dynamics of Information Control. There is a need to act so as to prevent the worst of Harari’s predictions from coming true.
Throughout Nexus, Harari has explored the role of historical dictators and the threat they pose to democracy. Figures such as Stalin, Hitler, and Ceaușescu subjugated their people in the pursuit of absolute control over their lives. In the case of Ceaușescu, Harari detailed how his own family’s history was forever changed by the dictatorship’s persecution of his grandfather. Looking to the future, however, dictators may play an important role in determining whether effective AI controls take hold.
The commonality of dictators across history and the rise of autocrats in the present day, Harari implies, means that humanity cannot plan for a future that is free of dictators. The dictators must be accommodated to some extent, so that their totalitarian, undemocratic rule does not threaten The Importance of Self-Correcting Mechanisms that Harari believes are essential. Though the final stretches of Part 3 are largely optimistic in their portrayal of humanity’s chances for the future, Harari concedes that dictators are the biggest threat to the development of AI controls because they will likely still exist in the near future.
Part 3 of Nexus looks to the future and, as a result, is furthest away from Harari’s own area of expertise. He is a historian, so he is typically concerned with the past. When he needs to make predictions, then, he reverts to historical precedents. As a result, a subtle comparison can be drawn between Harari’s predictions for the future and his comments about holy books earlier in Nexus. When describing the way in which the books of the Bible were curated by a group of humans (thus undermining the infallibility of the text), Harari notes the various apocalypses that were available to choose from at the time. The prophets of the ancient world provided the Bible’s curators with a number of options for visions of the end of the world, only for the curators to pick just one.
In a similar fashion, Harari offers up a number of predictions for the various (and increasingly outlandish) ways in which AI might doom humanity. Unlike the prophets’ spiritual visions, however, these predictions are deliberate provocations. The intent is to swamp the audience with a series of dark visions of the future, urging them to take action to prevent these visions from becoming a reality. Harari is specifically aiming to be proved wrong, rather than believing that he is conveying some divine vision of the future. Harari may be less comfortable dealing in predictions than he is in recounting history, but he urges the audience to action by invoking the past.