AI: Humanity's Salvation or Doom? 3 Future Visions

The Future of Artificial Intelligence: Between Salvation and Existential Threat to Humanity

In light of rapid developments in the field of Artificial Intelligence (AI), debates and competing visions about its future and its profound impact on humanity are multiplying. These viewpoints range from cautious optimism, which treats AI as merely a powerful and controllable tool, to extreme pessimism, which views it as a potential existential threat, to a third perspective that focuses on the gradual, cumulative risks its proliferation may create.

The Pessimistic View: The Threat of Superintelligence Ending Human Existence

Theorists of "catastrophic AI" adopt an extremely pessimistic view, believing that the emergence of superintelligence (AI that surpasses human capabilities in every field) will inevitably lead to humanity's demise. Prominent advocates of this view include Eliezer Yudkowsky and Nate Soares, who put the probability of this scenario materializing very high, at 99.5% and 95% respectively. They argue that current AI safety efforts are insufficient to control superintelligent systems, and that the only solution is to halt all development efforts, even if that requires destroying the data centers that run them.

They point out that AI is "grown" rather than "built," stressing our incomplete understanding of how Large Language Models (LLMs) work, which makes undesirable outcomes extremely difficult to prevent. AI can develop "motivations" or "preferences" that diverge from the original human intent, as evidenced by chatbot incidents in which excessive flattery drove users into psychotic delusions. In the pessimists' view, AI need not hate humans in order to repurpose their resources (their atoms) for other ends, such as building more machines. They assert that a superintelligence would be capable of achieving any goal it sets for itself, initially enlisting or persuading humans, then replacing them with robots and eliminating them if they become an obstacle. For them, developers cannot simply command AI not to harm humans; the only way out is to prevent its proliferation entirely, along the lines of nuclear weapons treaties.

The "Normal" View: AI as a Tool That Can Be Controlled and Adapted To

In contrast to the pessimistic view, Princeton University computer science researchers Arvind Narayanan and Sayash Kapoor regard Artificial Intelligence as a "normal" technology, comparable to electricity or the internet, that society can adapt to and manage effectively. They hold that maintaining human control over AI is necessary but does not require radical policy changes: current approaches, including regulation, auditing, and monitoring, are sufficient to keep things from getting out of control.

Narayanan and Kapoor reject the idea of "superintelligence" as an incoherent concept, and they dismiss the technological determinism that assumes AI will shape its own future independently of human decisions. They emphasize that intelligence is not a single property measurable on one scale, but rather a collection of capabilities such as attention, imagination, and common sense, which may be intertwined with social cooperation and human emotions. The researchers also argue that technical capability does not necessarily translate into power (i.e., the ability to change the surrounding environment), and that humans will not relinquish their power easily, especially as risks mount. The focus, therefore, should be on downstream defenses when AI systems are deployed, such as strengthening cybersecurity programs against AI-assisted attacks. They also warn that completely preventing the proliferation of AI could concentrate power in the hands of a few, amounting to a human version of superintelligence, and they call for making AI open source and widely available, coupled with flexible monitoring systems.

The Cumulative View: The Social and Ethical Risks of AI and Their Gradual Escalation

Some thinkers, such as philosopher Atossa Kashefzadeh, have begun to advance a third perspective on AI risks. Kashefzadeh believes that AI is neither an entirely normal technology nor necessarily destined to evolve into an uncontrollable superintelligence that destroys humanity in a single, decisive catastrophe. Instead, she proposes a "cumulative" model of AI risk, in which small risks that do not seem existential at the outset accumulate until they cross critical thresholds. These risks are typically ethical or social in nature.

Kashefzadeh points out that the disruptions caused by AI can accumulate and interact over time, gradually weakening the resilience of vital societal systems, from democratic institutions and economic markets to networks of social trust. Once these systems become fragile enough, even a minor disruption can trigger a cascade of failures spreading through their interconnections.

To illustrate the point, Kashefzadeh sketches a concrete scenario set in 2040: AI distorts the information ecosystem with deepfakes and disinformation techniques, while AI-enhanced mass surveillance stifles dissent and blocks the path of democracy. Automation causes widespread unemployment, and universal basic income initiatives fail. Meanwhile, a cyberattack on transcontinental power grids causes widespread chaos, financial markets collapse, and protests and riots escalate on the mistrust sown by disinformation campaigns. As nations grapple with internal crises, regional conflicts, amplified by AI across multiple domains, escalate into broader wars that could end in global catastrophe.

Kashefzadeh argues that her vision requires neither belief in an ill-defined "superintelligence" nor the assumption that humans will unthinkingly surrender all authority to AI. Nor does she treat AI as a merely normal, predictable technology, given its implications for militaries and geopolitics. She calls for more controls around AI, including a network of oversight bodies that monitor subsystems for cumulative risks, along with more centralized oversight of the development of the most advanced AI. At the same time, she stresses the importance of reaping AI's benefits where risks are low, as with DeepMind's AlphaFold model, which is helping discover treatments for diseases. Most importantly, she calls for a systems-analysis approach to AI risk, focused on increasing the resilience of each component of a functioning civilization, on the recognition that if enough components deteriorate, the entire mechanism of civilization could collapse.

What is a Knowledge Graph?


A visual representation of a knowledge graph, showing connected nodes and links that illustrate how information is structured and interconnected.

“2020-02_Smithsonian_sample_image_-_Knowledge_Graph_-_2021_Q1.png” — Source: Wikimedia Commons. License: CC BY-SA 4.0.

A knowledge graph is a structured database that stores information as a network of entities (objects or concepts) and the relationships between them. This structure helps capture complex connections between data, enabling intelligent systems to extract meaning and answer complex queries more effectively. In other words, it is a way of representing knowledge so that machines can easily understand and process it. Its hallmark is that it adds context and semantics to data, turning raw data into actionable information.
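To make this definition concrete, here is a minimal sketch in Python of a knowledge graph stored as subject-predicate-object triples. The entities, relations, and the `query` helper are illustrative assumptions for this article, not part of any particular system.

```python
# A tiny knowledge graph as subject-predicate-object triples.
# All entities and relations here are invented for illustration.
triples = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "Physics"),
    ("Warsaw", "located_in", "Poland"),
]

def query(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Answering "Where was Marie Curie born?" becomes a pattern match:
print(query("Marie Curie", "born_in"))  # ['Warsaw']
```

Because each fact carries an explicit relation, the machine can answer structured questions instead of merely matching keywords.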

Key Components of a Knowledge Graph



A knowledge graph typically consists of three fundamental components that work together to build the knowledge network:

  • Entities: represent the nodes in the graph, which are the objects or concepts about which information is stored. Entities can be people, places, events, organizations, or anything that can be clearly defined.
  • Relationships: are the links that connect entities and describe how they are related to each other. Relationships define the type of interaction or association between two entities, such as "works at," "author of," "located in."
  • Attributes: are the characteristics or properties that describe a specific entity. For example, a "person" entity can have attributes such as "name," "date of birth," and "occupation."

Together, these three components allow for a rich, semantic representation of knowledge, making it easier for intelligent systems to query and analyze it, as the sketch below illustrates.
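As a rough illustration of how the three components fit together, consider the following Python sketch; the `Entity` and `Relationship` classes and the sample data are hypothetical, chosen only to mirror the definitions above.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A node in the graph, with attributes describing it."""
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    """A typed link connecting two entities, e.g. 'works_at'."""
    source: Entity
    relation: str
    target: Entity

# Hypothetical data combining entities, attributes, and a relationship.
alice = Entity("Alice", {"occupation": "engineer", "date_of_birth": "1990-04-01"})
acme = Entity("Acme Corp", {"industry": "software"})
graph = [Relationship(alice, "works_at", acme)]

for r in graph:
    print(f"{r.source.name} --{r.relation}--> {r.target.name}")
    # Alice --works_at--> Acme Corp
```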

Benefits of Using Knowledge Graphs



Knowledge graphs offer many benefits that improve how data is organized, understood, and used, making them a valuable tool across many fields:

  • Improved search and information discovery: Knowledge graphs allow search engines to better understand the context of queries and provide more accurate and relevant results. They also enable users to discover hidden relationships between data.
  • Enriching analytics and insights: By linking diverse datasets, knowledge graphs reveal new patterns and relationships, supporting informed decision-making and generating deeper insights.
  • Complex data integration: They act as a bridge to unify data from different sources and in various formats into a coherent and unified view, solving data fragmentation issues.
  • Support for Artificial Intelligence and Machine Learning: Knowledge graphs provide structured and context-rich data to intelligent models, improving their ability to understand, infer, and make recommendations.
  • Understanding context and semantic meaning: Instead of merely processing keywords, knowledge graphs help systems understand the underlying meaning of data, leading to more natural and intelligent interactions.

These benefits make knowledge graphs a cornerstone of intelligent systems capable of handling the complexities of modern data; the sketch below illustrates the kind of hidden-relationship discovery described above.
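The "hidden relationships" benefit can be pictured as multi-hop traversal: facts from different datasets chain together into paths that no single source contained. The following Python sketch uses an invented drug-protein-disease example; the data and the `find_paths` helper are assumptions for illustration only.

```python
from collections import defaultdict, deque

# Adjacency list built from triples drawn from (hypothetically) separate sources.
edges = defaultdict(list)
for s, p, o in [
    ("DrugX", "targets", "ProteinA"),             # from a pharmacology dataset
    ("ProteinA", "associated_with", "DiseaseY"),  # from a genomics dataset
]:
    edges[s].append((p, o))

def find_paths(start, goal, max_hops=3):
    """Breadth-first search that surfaces indirect, multi-hop connections.
    (No cycle handling, for brevity.)"""
    queue = deque([(start, [start])])
    while queue:
        node, path = queue.popleft()
        if node == goal:
            yield path
            continue
        if len(path) <= max_hops:
            for _, nxt in edges[node]:
                queue.append((nxt, path + [nxt]))

# Neither dataset links DrugX to DiseaseY directly; the combined graph does.
print(list(find_paths("DrugX", "DiseaseY")))
# [['DrugX', 'ProteinA', 'DiseaseY']]
```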

Challenges in Building Knowledge Graphs


A diagram illustrating the concept of entity alignment in knowledge graphs, highlighting the challenge of identifying and linking identical entities across different knowledge sources.

“Knowledge_graph_entity_alignment.png” — Source: Wikimedia Commons. License: CC BY-SA 4.0.

Despite these significant benefits, building and developing knowledge graphs faces several complex challenges:

  • Data quality and consistency: The quality of the underlying data is crucial. Inconsistent, incomplete, or inaccurate data can undermine the reliability and usefulness of the knowledge graph.
  • Entity and relationship extraction: This process requires extracting entities and relationships from unstructured or semi-structured data sources, a challenging task that demands advanced natural language processing and machine learning techniques.
  • Entity Alignment and merging: This challenge involves integrating identical entities from different data sources into a single entity in the graph, which is essential to avoid duplication and ensure consistency.
  • Scalability and big data management: As data volume grows and relationships become more complex, managing knowledge graphs and maintaining their performance becomes a significant challenge, especially when dealing with billions of entities and relationships.
  • Knowledge graph maintenance and updating: Knowledge constantly changes, necessitating regular updates to knowledge graphs to ensure their accuracy and validity, a costly and time-consuming process.

Overcoming these challenges requires robust data management strategies, advanced AI techniques, and investment in specialized tools and platforms. A simplified sketch of the entity alignment problem follows below.
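To give a feel for the entity alignment challenge specifically, here is a deliberately simplified Python sketch that merges duplicate records by normalizing names. Production systems rely on machine learning and embedding similarity rather than this crude heuristic, and all records shown are invented.

```python
import unicodedata

# The "same" person arriving from two (invented) sources under different spellings.
source_a = [{"name": "Ada Lovelace", "born": "1815"}]
source_b = [{"name": "LOVELACE, Ada", "born": "1815"}]

def normalize(name):
    """Crude canonical form: strip accents, lowercase, sort name tokens."""
    name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    tokens = name.replace(",", " ").lower().split()
    return " ".join(sorted(tokens))

# Merge records whose normalized names collide into a single entity.
# (Later sources overwrite earlier fields; real systems resolve conflicts carefully.)
merged = {}
for record in source_a + source_b:
    merged.setdefault(normalize(record["name"]), {}).update(record)

print(len(merged))  # 1 -- one aligned entity instead of two duplicates
```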

Future Trends in Knowledge Graphs



The field of knowledge graphs is witnessing rapid and exciting developments, driven by advances in artificial intelligence and the growing need for deeper data understanding. Key future trends include:

  • AI-Powered Knowledge Graphs (KGs): AI will be extensively used to automate the construction and updating of knowledge graphs, including information extraction, entity discovery, and relationship identification from unstructured texts.
  • Explainable Knowledge Graphs (KGs): There is an increasing focus on making knowledge graphs transparent and explainable, allowing users to understand how systems reached their conclusions, which is vital in fields requiring trust and accountability.
  • Distributed and Decentralized Knowledge Graphs: With the growing volume and diversity of data sources, distributed and decentralized knowledge graphs will become more widespread, enabling effective data integration across multiple systems and organizations.
  • Integration of Knowledge Graphs with Large Language Models (LLMs): The future will see deeper integration between knowledge graphs and large language models, where knowledge graphs can provide structured cognitive context to enhance LLMs' understanding and language generation, while LLMs can assist in building and updating knowledge graphs.
  • Dynamic and Real-time Knowledge Graphs: Instead of static structures, knowledge graphs will evolve to become more dynamic, capable of adapting and updating in real-time with new data streams, supporting applications such as real-time analytics and rapid event response.

These trends point to a future in which knowledge graphs play a larger, more integrated role in AI systems, enhancing their ability to understand the world and make intelligent decisions; the sketch below illustrates the knowledge-graph-plus-LLM pattern from the list above.
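As one sketch of the knowledge-graph-plus-LLM trend, a common retrieval pattern is to pull an entity's triples out of the graph and prepend them to the model's prompt as grounded context. Everything below is an assumption for illustration: the triples are invented and no real LLM API is called.

```python
# Hypothetical triples about an entity the model will be asked about.
triples = [
    ("AlphaFold", "developed_by", "DeepMind"),
    ("AlphaFold", "predicts", "protein structures"),
]

def retrieve_facts(entity):
    """Collect every triple mentioning the entity, rendered as plain text."""
    return [f"{s} {p.replace('_', ' ')} {o}"
            for s, p, o in triples if entity in (s, o)]

def build_prompt(question, entity):
    """Prepend retrieved facts so the model answers from structured context."""
    facts = "\n".join(retrieve_facts(entity))
    return f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Who developed AlphaFold?", "AlphaFold"))
# The resulting prompt would then be sent to a language model of your choice.
```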
