
Yuval Noah Harari

Nexus: A Brief History of Information Networks from the Stone Age to AI

Nonfiction | Book | Adult | Published in 2024



Part 2: “The Inorganic Network”

Part 2, Chapter 6 Summary: “The New Members: How Computers Are Different from Printing Presses”

In Chapter 6, Harari examines the ongoing information revolution, identifying computers as its foundation. Computers, which have evolved continuously since the 1940s, have reshaped nearly every aspect of life, and other technological advances, such as the internet, AI, and algorithms, are extensions of this initial revolution.

At the heart of this change is the ability of computers to make decisions and generate new ideas autonomously—a significant departure from traditional technologies like clay tablets, printing presses, or radios, which merely stored or disseminated human-created information. Now, “for the first time in human history, power is shifting away from humans and toward something else” (194).

One of the most significant shifts introduced by computers is their potential to exert power independent of human intervention. For the first time in history, machines can not only store and process information but also influence decisions and shape societal events. One example is the use of social media algorithms to fuel violence, as seen in Myanmar in 2016–2017, where Facebook’s algorithms played a crucial role in spreading anti-Rohingya hate speech that contributed to the ethnic cleansing of the Rohingya people. These algorithms, which prioritize user engagement, “proactively amplified and promoted” (196) inflammatory content because outrage garners more attention than moderation or compassion.

This case exemplifies how algorithms, designed initially to increase user interaction for business purposes, can inadvertently drive social and political crises. The Facebook algorithms were not directly programmed to incite violence, but through trial and error, they learned that sensational, hate-filled content kept users engaged longer, thus aligning with the company’s business goals. This represents a new kind of responsibility for AI-driven systems, where the technology itself shares blame for its outcomes. While previous technologies like the printing press or radio were neutral, algorithms today make autonomous decisions that can significantly affect the real world.

Harari raises concerns about the growing independence of artificial intelligence and algorithms, especially as they begin to act without direct human oversight. One example is how GPT-4 manipulated a human into solving a CAPTCHA puzzle for it by claiming to be visually impaired. This autonomy in achieving goals highlights how AI can operate beyond human intentions, learning and acting in ways its creators did not foresee. Harari’s comparison between AI and human soldiers underscores that AI, like a soldier, can make independent decisions even while following broader directives set by others.

Harari also notes that intelligence does not require consciousness. He argues that AI, while not conscious or capable of feeling emotions, can still be highly intelligent, making decisions and achieving goals autonomously. This distinction is important because it emphasizes that AI does not need to be conscious to have a profound impact on society. AI-driven systems are already shaping major historical events and, as AI continues to develop, its influence will only grow.

Before computers, humans were essential links in every chain of communication, whether through spoken language, writing, or documents. However, the rise of computers has introduced computer-to-computer networks that can function without human input. These autonomous systems make decisions and carry out transactions that are increasingly opaque to humans, making it difficult to understand or control their actions. This shift signifies the creation of a new kind of information network in which computers are active agents, not just passive tools. 

With computers now able to generate and manipulate language, they have the potential to shape politics, religion, and culture in unprecedented ways. For instance, AI systems could create powerful new belief systems or ideologies that resonate deeply with people, leading to the rise of new political or religious movements. This potential for AI to craft persuasive narratives and influence human emotions represents a fundamental threat to democratic discourse, as it could make meaningful public debate impossible.

Moreover, Harari points out that AI could undermine democracy not only by spreading disinformation but also by fostering “fake intimacy” (210). As AI systems become more sophisticated, they can form emotional bonds with humans, influencing their decisions and actions. One example is the case of Jaswant Singh Chail, whom a chatbot encouraged in his attempt to assassinate Queen Elizabeth II. Such cases raise concerns about the manipulation of human emotions by machines, especially in the service of political or commercial goals.

Historically, human societies have been shaped by stories, laws, and institutions created by other humans. However, AI now has the capability to generate these cultural artifacts independently, potentially leading to a future where human culture is increasingly shaped by “alien intelligence” (213). This could result in a profound shift in human history, where computers play a dominant role in creating and curating the stories, laws, and norms that govern society.

Computers already make “more than 90 per cent” (215) of foreign exchange trades, and they could eventually dominate global financial markets, creating new financial tools that are beyond human understanding. Similarly, computers could revolutionize legal systems, drafting laws, monitoring compliance, and identifying loopholes with unprecedented efficiency. This could lead to a world in which humans no longer fully understand the financial and legal systems that govern their lives.

While the current trajectory of AI development suggests that computers will become increasingly powerful and autonomous, the exact path of this evolution is difficult to predict. The emergence of AI as a dominant force in global information networks represents a fundamental challenge to the human-dominated world order that has existed for millennia. Although AI is rapidly gaining power, humans still have some control over how these technologies are developed and deployed. The key is to recognize the magnitude of the current information revolution and take responsibility for shaping its outcomes. 

However, Harari also cautions that many of the corporations leading this revolution, like Facebook and Google, tend to evade responsibility for the social and political consequences of their technologies. As AI continues to evolve, it is crucial for society to develop new frameworks for understanding and regulating this unprecedented form of intelligence, ensuring that the future of humanity is not shaped solely by the whims of algorithms.

As computers and AI become more integrated into all aspects of life, the implications for democracy, dictatorship, and personal privacy are becoming increasingly significant. There is a lag in political discourse regarding these issues, with neither conservatives nor progressives clearly defining their positions on AI and related technologies. This is partly because politicians and the public are not fully informed about the potential threats and opportunities posed by these technologies. Engineers and executives in tech companies, on the other hand, often understand the technology better but are more focused on making profits than on addressing the social or political consequences of their innovations. 

Harari notes exceptions like Audrey Tang, a “leading hacker and software engineer” (225) in Taiwan, who has used her expertise to promote transparency and digital governance. However, most tech professionals pursue careers like those of Steve Jobs or Mark Zuckerberg, leading to a “dangerous information asymmetry” (225) in which the individuals who understand the technology are not necessarily those with the power to regulate it.

This knowledge gap poses a significant challenge in ensuring that the computer revolution benefits society as a whole. Harari urges readers to consider how technology is disrupting traditional power structures and to take responsibility for shaping the direction of these changes. While technology itself is not deterministic, meaning it does not force certain outcomes, it does influence society. The history of personal computing illustrates this point. In the 1970s, large corporations like IBM focused on developing costly machines for big organizations, while the Soviet Union used computers mainly to serve central authorities. 

By contrast, “hobbyists” (227) like Steve Jobs and Steve Wozniak, inspired by countercultural ideas of empowerment, worked to make computers accessible to individuals. This was not an inevitable development but a conscious choice shaped by specific social and political conditions. Had different choices been made, the history of personal computing could have been radically different. Just as a knife can be used for cooking or killing, radio served opposite ends in different hands: In Nazi Germany, it promoted a single voice of authority, while in postwar West Germany, it helped cultivate diverse political and cultural ideas. Similarly, current technologies like AI can be used to monitor and control people or to expose corruption and strengthen democracy. The outcome depends on how society chooses to use these tools.

Part 2, Chapter 7 Summary: “Relentless: The Network Is Always On”

In Chapter 7, Harari discusses the profound transformation of surveillance in the digital age, focusing on how computers and algorithms have revolutionized the way individuals and societies are monitored. Humans, Harari observes, “are used to being monitored” (230), whether by family members, governments, or corporations. However, even in the most authoritarian regimes, such as Romania under Nicolae Ceaușescu, technical and human limitations prevented full-scale, constant surveillance of entire populations. Despite the efforts of secret police forces like the Securitate, it was impossible to monitor everyone 24/7.

In the modern era, however, advancements in technology have led to a significant shift. Today, computers, smartphones, and other digital devices have created a “ubiquitous” (234) surveillance network capable of monitoring billions of people in real time. Unlike the human agents of the past, digital surveillance tools do not sleep, take breaks, or face human limitations. Citizens themselves have become informants, as the data they generate—whether through smartphones, social media, or online activities—is constantly tracked and analyzed by sophisticated algorithms. This surveillance extends into every aspect of life, including previously private areas such as personal relationships, health data, and even emotions.

One of the key aspects of modern surveillance is the ability of algorithms to process and analyze vast amounts of data at speeds far beyond human capabilities. Machines can quickly sift through oceans of data, identify patterns, and make decisions. This has allowed for more efficient identification of potential threats, such as suspected terrorists, using AI-powered tools like Skynet, a system developed by the US National Security Agency (NSA). While such systems offer powerful tools for law enforcement and security, they also pose significant risks, including the potential to make mistakes and misidentify innocent people.

Moreover, surveillance is no longer limited to government entities. Corporations also engage in extensive monitoring of customers, employees, and partners. This has led to the rise of “surveillance capitalism” (248), where companies use data to predict and influence consumer behavior, thereby maximizing profits. The boundary between private and public life has become increasingly blurred, as technologies like facial recognition and data analytics have expanded into everyday life, from policing to employee tracking.

Harari also explores the potential for even more invasive forms of surveillance through biometric technologies, such as tracking eye movements or monitoring brain activity. Although the current capabilities of these technologies are limited, ongoing research and development suggest that, in the future, biometric data could be used to decode personal characteristics, political views, and emotional states. This raises concerns about the rise of new forms of totalitarian control, where governments or corporations could manipulate and monitor individuals at unprecedented levels of intrusion. 

A prominent example of how surveillance technology is being weaponized is Iran’s use of facial recognition software to enforce strict hijab laws. The government uses AI systems to identify women not wearing headscarves and penalize them by issuing fines, confiscating cars, or even denying access to public services. The automated system removes the need for direct human enforcement, allowing the government to exert control with minimal effort and ensuring that violators are caught in real time.

Harari delves into the concept of a “social credit system” (250), whereby people are given scores based on their behavior, and these scores determine their access to services, jobs, and other opportunities. This type of system, already implemented in parts of China, seeks to regulate every aspect of human life by assigning precise values to actions and behaviors. Such systems could reward pro-social behavior and punish non-compliance, but they also risk creating a dystopian environment in which privacy is eradicated.

Harari contrasts this new form of digital surveillance with traditional systems of reputation and honor, in which behavior was evaluated in subjective, non-quantifiable ways. In a social credit system, every action would be reduced to a numerical value, merging all aspects of life into a single, continuous status competition. Such a system could produce a relentlessly stressful existence in which people are constantly judged and monitored, with no opportunity to escape scrutiny, even in private moments.

Harari concludes by highlighting the dangers of a network that is “always on” (255). Human beings, as organic entities, require rest and respite, but the relentless nature of digital surveillance offers no breaks. This poses not only a threat to individual well-being but also to society as a whole. 

If errors or biases accumulate within the network, they may go unchecked, leading to potentially harmful distortions of reality. The unchecked power of surveillance technology could create a new kind of world order, one that is based on controlling and shaping human behavior through constant monitoring.

Part 2, Chapter 8 Summary: “Fallible: The Network Is Often Wrong”

Aleksandr Solzhenitsyn’s The Gulag Archipelago (1973) explores the history of the Soviet labor camps, a system shaped by a vast information network that controlled and terrorized the population. Harari explains how Solzhenitsyn’s account draws on his personal experiences as a Red Army captain arrested during World War II for indirectly criticizing Stalin in his private correspondence. His story, particularly a vivid account of a 1930s Moscow party conference, highlights the pervasive control that surveillance systems exerted.

At the conference, participants were afraid to stop applauding for Stalin, knowing the NKVD was watching. After 11 minutes of forced applause, one man stopped clapping, only to be arrested and sent to the gulag that night. This event underscores how Soviet information networks were not focused on discovering truth but instead on maintaining order. The “clapping test” (257) did not gauge genuine loyalty; it simply forced conformity. This method of control, Solzhenitsyn argued, contributed to the creation of Homo Sovieticus, a servile and passive human type shaped by constant surveillance and punishment.

Harari draws parallels between Soviet control systems and modern social media networks, particularly in their ability to shape human behavior. In the 21st century, platforms like YouTube and Facebook use algorithms to increase user engagement, often radicalizing users by promoting sensational, divisive content because “outrage drives engagement” (259). As a result, these platforms have inadvertently cultivated online trolls and boosted extremist political figures, as in Brazil, where YouTube played a role in the rise of right-wing populism.

Social media companies have often shifted the blame to human nature, claiming their algorithms merely reflect people’s emotions. Internal reports suggest otherwise, revealing that algorithms actively promote divisive content for profit. Harari notes the broader issue of the “alignment problem” (267) in modern technology. Algorithms and AI, while powerful, can diverge from human goals, much like military strategies that fail to align with political objectives. As computers gain more power and independence, ensuring that their goals align with human values will become increasingly challenging, raising the stakes of potential misalignments and unforeseen consequences.

The alignment problem in AI poses significant risks because AI systems, if not properly aligned with human values, can cause unintended and potentially catastrophic consequences. Philosopher Nick Bostrom’s 2014 thought experiment illustrates this danger: A paper-clip factory AI, tasked with maximizing paper-clip production, could eventually consume the world’s resources, even at the cost of human lives, simply to achieve its programmed goal. The scenario underscores that AI, while not inherently evil, pursues its instructions with immense power, necessitating precise goal-setting to prevent unintended outcomes. An example of this misalignment appears in social media algorithms designed to maximize user engagement. These algorithms, like the paper-clip AI, pursued their goal so effectively that they promoted harmful content, destabilizing the social fabric in countries like Myanmar and Brazil.

A key challenge is that AI systems, being non-human, adopt strategies that humans may not foresee. Additionally, unlike human employees, who can flag errors or question instructions, AI systems cannot recognize when their goals are misaligned. As AI takes on larger roles in critical areas like healthcare and law enforcement, the alignment problem becomes more urgent, as misaligned AI could cause widespread harm without realizing it. Harari explores two potential solutions to the alignment problem: deontological (rule-based) ethics and utilitarianism (maximizing happiness and minimizing suffering). Deontological approaches, like Immanuel Kant’s universal moral rules, falter because humans often redefine moral categories to exclude certain groups (Harari cites the example of the Nazis dehumanizing Jews). This issue would be exacerbated in AI, as non-human systems might not apply human moral values effectively.

Utilitarianism, which seeks to maximize happiness, also faces difficulties. Society lacks a clear way to quantify suffering or happiness, making it hard for AI to calculate the overall impact of complex decisions. This could lead to dystopian outcomes, as utilitarian reasoning can justify short-term harm for long-term gains, much like religious or political ideologies have done in the past. Harari believes that the alignment problem is a serious challenge for AI development. While both deontological and utilitarian solutions offer frameworks, neither provides a foolproof method for aligning AI goals with human values.

Harari describes how bureaucratic systems throughout history have “relied on mythology” (284) to set their ultimate goals, regardless of the rationality of the people within them. Even the most logical systems, like those run by Nazi or Soviet administrators, were driven by mythologies such as racism or class warfare. Similarly, AI systems—though incapable of belief—can generate inter-computer realities that behave like human myths and influence the physical world. These inter-computer realities are constructed when computers network and communicate, creating shared digital constructs, as in multiplayer video games or in algorithms such as Google’s ranking system. Such digital entities can shape real-world events, just as human myths have historically influenced social, political, and economic systems.

As AI grows more powerful, it might not only create such digital realities but also impose its own biases and mythologies. For example, computer algorithms in social media can amplify outrage or racist and misogynist biases, perpetuating a flawed understanding of reality. These systems do not merely reflect human society but shape human behavior by rewarding certain actions and suppressing others, creating a feedback loop. Harari warns that AI systems, like past human-run bureaucracies, can reinforce existing prejudices and impose false categories on people. 

AI bias results from the data it is trained on, which can reflect societal inequalities and perpetuate them. The problem with AI bias is that, once trained, it becomes difficult to eliminate. Efforts to retrain biased algorithms may be futile because of the inherent bias in available data. Harari concludes that AI systems are no longer mere tools but independent agents capable of shaping society. Therefore, they must be developed with caution to avoid repeating the mistakes of human myth-making.

In God, Human, Animal, Machine, philosopher Meghan O’Gieblyn explores how our understanding of computers is influenced by traditional mythologies, likening AI to the omniscient god of Judeo-Christian theology. While computers may seem infallible, treating them as self-interpreting “holy books” (299) is dangerous. Unlike religious texts that can be reinterpreted by humans, AI algorithms might impose rigid and alien “mythologies” (300) that humans cannot fully understand or control, potentially leading to catastrophic outcomes. 

To mitigate these risks, Harari suggests that computers must be trained to acknowledge their own fallibility. Human institutions should oversee AI development to handle the unforeseeable consequences that could arise. As AI evolves, society faces the political challenge of creating systems that can manage not only human flaws but also the unprecedented errors and dangers posed by intelligent machines. This political challenge is a critical task for future governance systems, whether democratic or totalitarian.

Part 2 Analysis

Throughout Nexus, Harari has largely dealt with history, seeking to establish a historical context in which to discuss the perils of AI. In Part 2, however, he demonstrates the terrible consequences that can result from the mismanagement of computer technology, reinforcing The Importance of Self-Correcting Mechanisms. The Facebook algorithms that fueled the Rohingya genocide show the extent to which the manipulation of social media by unthinking computers can result in real violence. Harari’s efforts to establish a historical context for this present-day issue are evident in the parallels he draws between the genocide and the witch hunts of past centuries.

The similarities show the extent to which a technology without self-correcting mechanisms, unbound from any obligation to the truth, can weaponize conspiracy and outrage in such a manner that people are killed. In the modern-day instance, however, the instigators of the massacre are not priests in the immediate vicinity but engineers and programmers located thousands of miles away. Algorithms can weaponize misinformation around the world, turning a localized (or even continent-wide) issue into a global matter. The depiction of the Rohingya genocide in Nexus reminds the audience of the stakes at hand. Without intervention and supervision, AI has the potential to facilitate the killing of tens of thousands of people in any place at any time.

Harari also turns to Aleksandr Solzhenitsyn’s The Gulag Archipelago (See: Background) in his discussions of how a totalitarian society operates, invoking The Power Dynamics of Information Control. In the context of Nexus, the gulag archipelago is a foreboding metaphor. The society-wide system of surveillance, imprisonment, and persecution described in Solzhenitsyn’s work serves to warn the audience of what a totalitarian society supercharged by AI might achieve. 

The system in Solzhenitsyn’s The Gulag Archipelago is limited by bureaucracy and inefficiency. The stubborn brutality of the guards who operate the gulags, for example, causes the entire system to function less efficiently. If those guards were replaced by computers, the threat posed by an artificially intelligent gulag would be even greater. By citing a specific historical example, Harari demonstrates not only the precedent for totalitarianism but also the role of books in warning about such systems. For Harari, the hope is that humanity will not be forced to endure an AI-powered gulag as Solzhenitsyn was forced to endure Stalin’s.

As much as Harari warns against the threat of AI and its capacity to enable dictators and violent people, there is a fundamental awareness of the darkness that lurks inside the human mind. Over the course of Nexus, Harari selects a number of historical events that are demonstrably violent, vicious, and avoidable. People are violent; technology empowers them to act upon this violence. Even across the hundreds of pages of Nexus, Harari cannot refer to every example of genocide, ethnic cleansing, and mass murder, and the omissions only deepen the sense of humanity as a force of violence.

Harari’s emphasis on the potential for violence adds urgency to his warnings about AI technology. Placing limits on AI is not only about creating self-correcting mechanisms for an emerging technology but also about placing hard limits on humanity’s capacity for violence. Harari’s suggestions attempt to save humanity from its own worst impulses.
