Symbolic AI vs Machine Learning in Natural Language Processing

Bridging the Gap Between Symbolic and Subsymbolic AI


The benchmark dataset contained 100,000 computer-generated images of simple 3-D shapes (spheres, cubes, cylinders and so on), and the challenge for any AI is to analyze these images and answer questions that require reasoning. In a self-driving context, a Neuro-Symbolic AI system would use a neural network to learn to recognize objects from data (images from the car’s cameras) and a symbolic system to reason about these objects and make decisions according to traffic rules. This combination allows the self-driving car to interact with the world in a more human-like way, understanding the context and making reasoned decisions. Symbolic AI was the dominant paradigm from the mid-1950s until the mid-1990s, and it is characterized by the explicit embedding of human knowledge and behavior rules into computer programs.

Symbolic Artificial Intelligence (Symbolic AI) is a foundational approach to AI that focuses on the manipulation of symbols and the application of logical rules to simulate intelligent behavior. Unlike statistical approaches such as machine learning and neural networks, Symbolic AI is deeply rooted in formal logic and aims to model human reasoning through structured representations and inference processes. Neuro-symbolic AI blends traditional AI with neural networks, making it adept at handling complex scenarios.


It is also usually the case that the data needed to train a machine learning model either doesn’t exist or is insufficient. In those cases, rules derived from domain knowledge can help generate training data. Symbolic Artificial Intelligence (AI) has been a fascinating field of research and application for decades. Unlike its counterpart, sub-symbolic AI (such as neural networks), which focuses on pattern recognition and statistical inference, symbolic AI deals with the representation and manipulation of explicit knowledge using symbols and rules. In this blog post, we will delve into specific examples of symbolic AI in practice, shedding light on its technical intricacies. The neural component of Neuro-Symbolic AI focuses on perception and intuition, using data-driven approaches to learn from vast amounts of unstructured data.

Resources for Deep Learning and Symbolic Reasoning

The deep nets eventually learned to ask good questions on their own, but were rarely creative. The researchers also used another form of training called reinforcement learning, in which the neural network is rewarded each time it asks a question that actually helps find the ships. Again, the deep nets eventually learned to ask the right questions, which were both informative and creative. And unlike symbolic AI, neural networks have no notion of symbols and hierarchical representation of knowledge.

The symbolic representations are manipulated using rules to make inferences, solve problems, and understand complex concepts. Thanks to natural language processing (NLP) we can successfully analyze language-based data and effectively communicate with virtual assistant machines. But these achievements often come at a high cost and require significant amounts of data, time and processing resources when driven by machine learning. Symbolic AI encodes knowledge through a detailed process of symbol manipulation, where each symbol correlates with real-world entities or ideas.


It is one form of assumption, and a strong one, while deep neural architectures contain other assumptions, usually about how they should learn, rather than what conclusion they should reach. The ideal, obviously, is to choose assumptions that allow a system to learn flexibly and produce accurate decisions about their inputs. In this scenario, the symbolic AI system utilizes rules to determine the appropriate action based on the current state and desired goals.

We can do this because our minds take real-world objects and abstract concepts and decompose them into several rules and logic. These rules encapsulate knowledge of the target object, which we inherently learn. Symbolic AI, GOFAI, or Rule-Based AI (RBAI), is a sub-field of AI concerned with learning the internal symbolic representations of the world around it. The main objective of Symbolic AI is the explicit embedding of human knowledge, behavior, and “thinking rules” into a computer or machine. Through Symbolic AI, we can translate some form of implicit human knowledge into a more formalized and declarative form based on rules and logic.

With each new encounter, your mind created logical rules and informative relationships about the objects and concepts around you. The first time you came to an intersection, you learned to look both ways before crossing, establishing an associative relationship between cars and danger. Now researchers and enterprises are looking for ways to bring neural networks and symbolic AI techniques together.

Step 1 – defining our knowledge base

Neuro-Symbolic AI offers many advantages, including improved data efficiency, and its typical architecture comprises components such as an integration layer, a knowledge base, and an explanation generator. Artificial Intelligence (AI) includes a wide range of approaches, with Neural Networks and Symbolic AI being the two significant ones. In contrast to Chomsky’s view that a human is born with Universal Grammar, a kind of innate knowledge, John Locke (1632–1704) postulated that the mind is a blank slate, or tabula rasa. The universe is written in the language of mathematics and its characters are triangles, circles, and other geometric objects.


The other two modules process the question and apply it to the generated knowledge base. The team’s solution was about 88 percent accurate in answering descriptive questions, about 83 percent for predictive questions and about 74 percent for counterfactual queries, by one measure of accuracy. Such causal and counterfactual reasoning about things that are changing with time is extremely difficult for today’s deep neural networks, which mainly excel at discovering static patterns in data, Kohli says. The researchers broke the problem into smaller chunks familiar from symbolic AI.

As we look into the future, it becomes evident that Neuro-Symbolic AI harbors the potential to propel the AI field forward significantly. This methodology, by bridging the divide between neural networks and symbolic AI, holds the key to unlocking peak levels of capability and adaptability within AI systems. In this method, symbols denote concepts and logic analyzes them, a process akin to how humans utilize language and structured cognition to comprehend the environment. Symbolic AI excels in activities demanding comprehension of rules, logic, or structured information, such as puzzle-solving or navigating intricate problems through reasoning. Symbolic AI plays a significant role in natural language processing tasks, such as parsing, semantic analysis, and text understanding.

"Deep learning in its present state cannot learn logical rules, since its strength comes from analyzing correlations in the data," he said. We perceive Neuro-symbolic AI as a route to attain artificial general intelligence. Through enhancing and merging the advantages of statistical AI, such as machine learning, with the prowess of human-like symbolic knowledge and reasoning, our goal is to spark a revolution in AI, rather than a mere evolution. Symbolic AI systems use predefined logical rules to manipulate symbols and derive new knowledge.

RAAPID’s retrospective and prospective solution is powered by Neuro-symbolic AI to revolutionize chart coding, reviewing, auditing, and clinical decision support. Our Neuro-Symbolic AI solutions are meticulously curated from over 10 million charts, encompassing over 4 million clinical entities and over 50 million relationships. By integrating these capabilities, Neuro-Symbolic AI has the potential to unleash unprecedented levels of comprehension, proficiency, and adaptability within AI frameworks. The “Vehicle” class is the superclass, with “Car,” “Truck,” and “Motorcycle” as its subclasses.
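To make the hierarchy above concrete, here is a minimal sketch in Python of how such an ontology fragment could be encoded as classes, with subclass membership standing in for the is-a relation. The wheel counts and the describe() helper are illustrative assumptions, not something from the original article.

```python
# A minimal sketch of the "Vehicle" hierarchy described above.
# Wheel counts and describe() are illustrative assumptions.

class Vehicle:
    """Superclass for all vehicles in the toy ontology."""
    wheels = None

class Car(Vehicle):
    wheels = 4

class Truck(Vehicle):
    wheels = 6

class Motorcycle(Vehicle):
    wheels = 2

def describe(v: Vehicle) -> str:
    # Subclass membership lets us infer the more general category.
    return f"{type(v).__name__} is-a Vehicle with {v.wheels} wheels"

print(describe(Car()))         # Car is-a Vehicle with 4 wheels
print(describe(Motorcycle()))  # Motorcycle is-a Vehicle with 2 wheels
```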

As one might also expect, common sense differs from person to person, making the process more tedious. Symbolic AI, which involves the use of logic and explicit symbolic representations to solve problems and make decisions, has historically been a significant area within AI research. In recent discussions, it’s been highlighted that while deep learning and other machine learning techniques have seen significant advancements, symbolic AI still holds potential, especially when integrated with other methods. In this line of effort, deep learning systems are trained to solve problems such as term rewriting, planning, elementary algebra, logical deduction or abduction or rule learning.

One challenge is the knowledge acquisition bottleneck, as manually encoding all the rules and facts can be time-consuming and labor-intensive. Additionally, symbolic AI systems may struggle with handling uncertainty and reasoning about incomplete or ambiguous information, which are areas where sub-symbolic AI techniques like probabilistic models and neural networks excel. Neuro-symbolic AI uses deep learning neural network topologies and blends them with symbolic reasoning techniques, making it a more capable kind of AI model than its traditional counterparts. We have been utilizing neural networks, for instance, to determine an item’s type of shape or color. However, this can be advanced further by using symbolic reasoning to reveal more fascinating aspects of the item, such as its area or volume. Symbolic processes are also at the heart of use cases such as solving math problems, improving data integration and reasoning about a set of facts.

It encompasses reasoning about causality, spatial relationships, and general domain knowledge. Knowledge representation is a crucial aspect of Symbolic AI, as it determines how domain knowledge is structured and organized for efficient reasoning and problem-solving. As the author of this article, I invite you to interact with “AskMe,” a feature powered by the data in the knowledge graph integrated into this blog. This development represents an initial stride toward empowering authors by placing them at the center of the creative process while maintaining complete control. Using LLMs to extract and organize knowledge from unstructured data, we can enrich the data in a knowledge graph and bring additional insights to our SEO’s automated workflows. As noted by the brilliant Tony Seale, as GPT models are trained on a vast amount of structured data, they can be used to analyze content and turn it into structured data.

For example, we can use the symbol M to represent a movie and P to describe people. A newborn does not know what a car is, what a tree is, or what happens if you freeze water. The newborn does not understand the meaning of the colors in a traffic light system or that a red heart is the symbol of love. A newborn starts only with sensory abilities, the ability to see, smell, taste, touch, and hear. These sensory abilities are instrumental to the development of the child and brain function.

Artificial Intelligence: Shaping The Future of Deep Learning, ML

Additionally, you will cultivate the essential abilities to conceptualize, design, and execute neuro-symbolic AI solutions. Neuro-symbolic AI is an emerging approach that aims to combine the strengths of Symbolic AI and neural networks. It seeks to integrate the structured representations and reasoning capabilities of Symbolic AI with the learning and adaptability of neural networks. By leveraging the complementary strengths of both paradigms, neuro-symbolic AI has the potential to create more robust, interpretable, and flexible AI systems.

  • Like Inbenta’s, “our technology is frugal in energy and data, it learns autonomously, and can explain its decisions”, affirms AnotherBrain on its website.
  • We typically use predicate logic to define these symbols and relations formally – more on this in the A quick tangent on Boolean logic section later in this chapter.
  • These are just a few examples, and the potential applications of neuro-symbolic AI are constantly expanding as the field of AI continues to evolve.
  • „There have been many attempts to extend logic to deal with this which have not been successful,“ Chatterjee said.
  • He also has full transparency on how to fine-tune the engine when it doesn’t work properly as he’s been able to understand why a specific decision has been made and has the tools to fix it.

For example, if an AI is trying to decide if a given statement is true, a symbolic algorithm needs to consider whether thousands of combinations of facts are relevant. This is important because all AI systems in the real world deal with messy data. "This is a prime reason why language is not wholly solved by current deep learning systems," Seddiqi said. So, to verify Elvis Presley’s birthplace, specifically whether he was born in England (refer to the diagram above), the system initially converts the question into a generic logical form by translating it into an Abstract Meaning Representation (AMR). Each AMR encapsulates the meaning of the question using terminology independent of the knowledge graph, a crucial feature enabling the technology’s application across various tasks and knowledge bases.

First, symbolic AI algorithms are designed to deal with problems that require human-like reasoning. This means that they are able to understand and manipulate symbols in ways that other AI algorithms cannot. Second, symbolic AI algorithms are often much slower than other AI algorithms. This is because they have to deal with the complexities of human reasoning. Finally, symbolic AI is often used in conjunction with other AI approaches, such as neural networks and evolutionary algorithms.

It defines a common understanding of the domain and allows for the integration of knowledge from different sources. The historical context of Symbolic AI reveals a rich tapestry of ideas, achievements, and challenges. From its early beginnings at the Dartmouth Conference to its current state, Symbolic AI has played a crucial role in shaping our understanding of intelligence and pushing the boundaries of what machines can accomplish.

These models are adept at tasks that require deep understanding and reasoning, such as natural language processing, complex decision-making, and problem-solving. Symbolic AI is still relevant and beneficial for environments with explicit rules and for tasks that require human-like reasoning, such as planning, natural language processing, and knowledge representation. It is also being explored in combination with other AI techniques to address more challenging reasoning tasks and to create more sophisticated AI systems. New deep learning approaches based on Transformer models have now eclipsed these earlier symbolic AI approaches and attained state-of-the-art performance in natural language processing.

Thomas Hobbes, often called the grandfather of AI, said that thinking is the manipulation of symbols and reasoning is computation. Symbolic artificial intelligence, also known as Good Old-Fashioned AI (GOFAI), was the dominant paradigm in the AI community from the post-War era until the late 1980s. You can notice that each condition on the left-hand side of a rule, as well as its action, is essentially an object-attribute-value (OAV) triplet. Working memory contains the set of OAV triplets that correspond to the problem currently being solved. A rules engine looks for rules for which a condition is satisfied and applies them, adding another triplet to the working memory.
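To make the OAV-and-working-memory loop concrete, here is a minimal forward-chaining sketch in Python. The bird-classification rules and the fixed object name are hypothetical illustrations; a production rules engine would also match variables against any object in working memory rather than naming one object explicitly.

```python
# A minimal forward-chaining sketch of the rules-engine loop described above.
# Facts and rule conditions are object-attribute-value (OAV) triplets; the
# rules about classifying one object ("tweety") are hypothetical illustrations.

working_memory = {("tweety", "has", "feathers"), ("tweety", "does", "fly")}

# Each rule: (list of condition triplets, triplet to add when all conditions hold)
rules = [
    ([("tweety", "has", "feathers")], ("tweety", "is-a", "bird")),
    ([("tweety", "is-a", "bird"), ("tweety", "does", "fly")],
     ("tweety", "can-be", "released outdoors")),
]

changed = True
while changed:                      # keep firing rules until nothing new is added
    changed = False
    for conditions, conclusion in rules:
        if all(c in working_memory for c in conditions) and conclusion not in working_memory:
            working_memory.add(conclusion)   # the rule "fires", extending working memory
            changed = True

print(sorted(working_memory))
```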

The development of neuro-symbolic AI is still in its early stages, and much work must be done to realize its potential fully. However, the progress made so far and the promising results of current research make it clear that neuro-symbolic AI has the potential to play a major role in shaping the future of AI. As such, Golem.ai applies linguistics and neurolinguistics to a given problem, rather than statistics. Their algorithm includes almost every known language, enabling the company to analyze large amounts of text. Generative AI (GAI) has been the talk of the town since ChatGPT exploded onto the scene in late 2022. Notably, unlike GAI, which consumes considerable amounts of energy during its training stage, symbolic AI doesn’t need to be trained.

The final puzzle is to develop a way to feed this information to a machine to reason and perform logical computation. We previously discussed how computer systems essentially operate using symbols. It’s flexible, easy to implement (with the right IDE) and provides a high level of accuracy.

Deep learning fails to extract compositional and causal structures from data, even though it excels in large-scale pattern recognition. While symbolic models aim for complicated connections, they are good at capturing compositional and causal structures. Using symbolic AI, everything is visible, understandable and explainable, leading to what is called a ‘transparent box’ as opposed to the ‘black box’ created by machine learning.

Integrating both approaches, known as neuro-symbolic AI, can provide the best of both worlds, combining the strengths of symbolic AI and Neural Networks to form a hybrid architecture capable of performing a wider range of tasks. The research community is still in the early phase of combining neural networks and symbolic AI techniques. Much of the current work considers these two approaches as separate processes with well-defined boundaries, such as using one to label data for the other. The next wave of innovation will involve combining both techniques more granularly. Both symbolic and neural network approaches date back to the earliest days of AI in the 1950s. On the symbolic side, the Logic Theorist program in 1956 helped solve simple theorems.

It was at this conference that the term “Artificial Intelligence” was coined by McCarthy, and the key ideas and goals of AI were articulated. Throughout the paper, we strive to present the concepts in an accessible manner, using clear explanations and analogies to make the content engaging and understandable to readers with varying levels of expertise in AI. I usually take time to look at our roadmap as the end of the year approaches; AI is accelerating everything, including my schedule, and right after New York, I have started to review our way forward. SEO in 2023 is something different, and it is tremendously exciting to create the future of it (or at least contribute to it). No one has ever arrived at the prompt that will be used in the final application (or content) on the first attempt; we need a process and a strong understanding of the data behind it.

In other words, I also expect compliance with the upcoming regulations, less dependence on external APIs, and stronger support for open-source technologies. This basically means that organizations with a semantic representation of their data will have stronger foundations to develop their generative AI strategy and to comply with the upcoming regulations. The emergence of relatively small models opens a new opportunity for enterprises to lower the cost of fine-tuning and inference in production. It helps create a broader and safer AI ecosystem as we become less dependent on OpenAI and other prominent tech players.

However, Transformer models are opaque and do not yet produce human-interpretable semantic representations for sentences and documents. Instead, they produce task-specific vectors where the meaning of the vector components is opaque. Defining the knowledge base requires skills in the real world, and the result is often a complex and deeply nested set of logical expressions connected via several logical connectives.


Neurosymbolic AI is also demonstrating the ability to ask questions, an important aspect of human learning. Crucially, these hybrids need far less training data than standard deep nets and use logic that’s easier to understand, making it possible for humans to track how the AI makes its decisions. What the ducklings do so effortlessly turns out to be very hard for artificial intelligence. This is especially true of a branch of AI known as deep learning or deep neural networks, the technology powering the AI that defeated the world’s Go champion Lee Sedol in 2016. Such deep nets can struggle to figure out simple abstract relations between objects and reason about them unless they study tens or even hundreds of thousands of examples.

The work started in AI by projects like the General Problem Solver and other rule-based reasoning systems like the Logic Theorist became the foundation for almost 40 years of research. Symbolic AI (or Classical AI) is the branch of artificial intelligence research that concerns itself with attempting to explicitly represent human knowledge in a declarative form (i.e. facts and rules). If such an approach is to be successful in producing human-like intelligence, then it is necessary to translate often implicit or procedural knowledge possessed by humans into an explicit form using symbols and rules for their manipulation. Artificial systems mimicking human expertise, such as Expert Systems, are emerging in a variety of fields that constitute narrow but deep knowledge domains.

Furthermore, the paper explores the applications of Symbolic AI in various domains, such as expert systems, natural language processing, and automated reasoning. We discuss real-world use cases and case studies to demonstrate the practical impact of Symbolic AI. Symbolic AI algorithms are used in a variety of AI applications, including knowledge representation, planning, and natural language processing.

For example, a neural network for optical character recognition (OCR) translates images into numbers for processing with symbolic approaches. Generative AI apps similarly start with a symbolic text prompt and then process it with neural nets to deliver text or code. For the first method, called supervised learning, the team showed the deep nets numerous examples of board positions and the corresponding “good” questions (collected from human players).

Early deep learning systems focused on simple classification tasks like recognizing cats in videos or categorizing animals in images. Now, researchers are looking at how to integrate these two approaches at a more granular level for discovering proteins, discerning business processes and reasoning. Seddiqi expects many advancements to come from natural language processing. Language is a type of data that relies on statistical pattern matching at the lowest levels but quickly requires logical reasoning at higher levels.

The inference mechanism in Symbolic AI involves applying logical rules to the knowledge base to derive new information or make decisions. Symbolic AI’s strength lies in its knowledge representation and reasoning through logic, making it more akin to Kahneman’s „System 2“ mode of thinking, which is slow, takes work and demands attention. That is because it is based on relatively simple underlying logic that relies on things being true, and on rules providing a means of inferring new things from things already known to be true.

Throughout the rest of this book, we will explore how we can leverage symbolic and sub-symbolic techniques in a hybrid approach to build a robust yet explainable model. You can also train your linguistic model using symbolic for one data set and machine learning for the other, then bring them together in a pipeline format to deliver higher accuracy and greater computational bandwidth. Likewise, this makes valuable NLP tasks such as categorization and data mining simple yet powerful by using symbolic to automatically tag documents that can then be inputted into your machine learning algorithm. „We are finding that neural networks can get you to the symbolic domain and then you can use a wealth of ideas from symbolic AI to understand the world,“ Cox said.

This article aims to demystify Symbolic AI, a branch of artificial intelligence that promises not just advancements in technology but strides towards transparency and trust in AI systems. The weakness of symbolic reasoning is that it does not tolerate ambiguity as seen in the real world. One false assumption can make everything true, effectively rendering the system meaningless. "Neuro-symbolic [AI] models will allow us to build AI systems that capture compositionality, causality, and complex correlations," Lake said. The second of these domains is the structured reasoning and interpretive capability characteristic of symbolic AI. However, traditional symbolic AI struggles when presented with uncertain or ambiguous information.

„With symbolic AI there was always a question mark about how to get the symbols,“ IBM’s Cox said. The world is presented to applications that use symbolic AI as images, video and natural language, which is not the same as symbols. David Farrugia is a seasoned data scientist and a Ph.D. candidate in AI at the University of Malta. David Farrugia has worked in diverse industries, including gaming, manufacturing, customer relationship management, affiliate marketing, and anti-fraud.

As you can easily imagine, this is a very heavy and time-consuming job, as there are many different ways of asking or formulating the same question. And if you take into account that a knowledge base usually holds on average 300 intents, you now see how repetitive maintaining a knowledge base can be when using machine learning. Constraint solvers perform a more limited kind of inference than first-order logic. They can simplify sets of spatiotemporal constraints, such as those for RCC or Temporal Algebra, along with solving other kinds of puzzle problems, such as Wordle, Sudoku, cryptarithmetic problems, and so on.
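As a concrete illustration of constraint satisfaction, here is a brute-force sketch of the classic SEND + MORE = MONEY cryptarithmetic puzzle in Python. A real constraint solver would prune the search with constraint propagation; this exhaustive version only shows what it means to satisfy every constraint at once, and it may take several seconds to run.

```python
# Brute-force search over the SEND + MORE = MONEY cryptarithmetic puzzle.
# Each letter must map to a distinct digit, with no leading zeros.

from itertools import permutations

letters = "SENDMORY"                      # the eight distinct letters in the puzzle

def to_number(word, assignment):
    return int("".join(str(assignment[ch]) for ch in word))

for digits in permutations(range(10), len(letters)):
    assignment = dict(zip(letters, digits))
    if assignment["S"] == 0 or assignment["M"] == 0:   # no leading zeros
        continue
    if to_number("SEND", assignment) + to_number("MORE", assignment) == to_number("MONEY", assignment):
        # {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}
        print(assignment)
        break
```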

Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains. It also empowers applications including visual question answering and bidirectional image-text retrieval. In conclusion, neuro-symbolic AI is a promising field that aims to integrate the strengths of both neural networks and symbolic reasoning to form a hybrid architecture capable of performing a wider range of tasks than either component alone.


This representation method allows Symbolic AI systems to perform reasoning tasks by applying logical rules to these symbols. Symbolic AI, a fascinating subfield of artificial intelligence, stands out by focusing on the manipulation and processing of symbols and concepts rather than numerical data. This unique approach allows for the representation of objects and ideas in a way that’s remarkably similar to human thought processes. In this context, a Neuro-Symbolic AI system would employ a neural network to learn object recognition from data, such as images captured by the car’s cameras. Additionally, it would utilize a symbolic system to reason about these recognized objects and make decisions aligned with traffic rules. This amalgamation enables the self-driving car to interact with its surroundings in a manner akin to human cognition, comprehending the context and making reasoned judgments.

Geoffrey Hinton, Yann LeCun and Andrew Ng have all suggested that work on unsupervised learning (learning from unlabeled data) will lead to our next breakthroughs. Like our product, our Medium articles are written by novel generative AI models, with human feedback on the edge cases. One such resource is a large collection of entities grouped together using the is-a inheritance relationship. It allows answering questions like "What is Microsoft?", the answer being something like "a company with probability 0.87, and a brand with probability 0.75". However, in some cases we might want to start with no knowledge about the problem, and ask questions that will help us arrive at the conclusion.

Note that implicit knowledge can eventually be formalized and structured to become explicit knowledge. For example, if learning to ride a bike is implicit knowledge, writing a step-by-step guide on how to ride a bike becomes explicit knowledge. Explicit knowledge is any clear, well-defined, and easy-to-understand information.

Typically this is an easy process, but depending on the use case it might be resource-intensive. Another concept we regularly neglect is time as a dimension of the universe. Some examples are our daily caloric requirements as we grow older, the number of stairs we can climb before we start gasping for air, and the leaves on trees and their colors during different seasons. These are examples of how the universe has many ways to remind us that it is far from constant. Based on our knowledge base, we can see that movie X will probably not be watched, while movie Y will be watched.
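The article does not spell out the rules behind the movie example, so the following is a hypothetical reconstruction: a person watches a movie if its genre is one they like and they have not already seen it. All of the facts about "alice", "movie_x" and "movie_y" below are invented for illustration.

```python
# Hypothetical reconstruction of the movie knowledge base and its rule.
# Facts, names, and the will_watch() rule are illustrative assumptions.

facts = {
    "likes_genre": {("alice", "sci-fi")},
    "genre":       {("movie_x", "romance"), ("movie_y", "sci-fi")},
    "has_seen":    {("alice", "movie_x")},
}

def will_watch(person, movie):
    # Rule: watch M if M's genre is liked by the person and M has not been seen.
    liked = any(m == movie and (person, g) in facts["likes_genre"]
                for (m, g) in facts["genre"])
    seen = (person, movie) in facts["has_seen"]
    return liked and not seen

print(will_watch("alice", "movie_x"))   # False -> movie X will probably not be watched
print(will_watch("alice", "movie_y"))   # True  -> movie Y will be watched
```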

An ontology refers to an explicit specification of a problem domain using some formal knowledge representation. The simplest ontology can be just a hierarchy of objects in a problem domain, but more complex ontologies will include rules that can be used for inference. Logical Neural Networks (LNNs) are neural networks that incorporate symbolic reasoning in their architecture. In the context of neuro-symbolic AI, LNNs serve as a bridge between the symbolic and neural components, allowing for a more seamless integration of both reasoning methods. Symbolic techniques were at the heart of the IBM Watson DeepQA system, which beat the best human players at answering trivia questions in the game Jeopardy! However, this also required much human effort to organize and link all the facts into a symbolic reasoning system, which did not scale well to new use cases in medicine and other domains.


As far back as the 1980s, researchers anticipated the role that deep neural networks could one day play in automatic image recognition and natural language processing. It took decades to amass the data and processing power required to catch up to that vision – but we’re finally here. Similarly, scientists have long anticipated the potential for symbolic AI systems to achieve human-style comprehension.

Implementations of symbolic reasoning are called rules engines or expert systems or knowledge graphs. Google made a big one, too, which is what provides the information in the top box under your query when you search for something easy like the capital of Germany. These systems are essentially piles of nested if-then statements drawing conclusions about entities (human-readable concepts) and their relations (expressed in well understood semantics like X is-a man or X lives-in Acapulco). These components work together to form a neuro-symbolic AI system that can perform various tasks, combining the strengths of both neural networks and symbolic reasoning.
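A toy version of such a system can be written as a small triple store plus a couple of if-then rules. The specific facts and rules below (lives-in propagating through located-in, and man implying person) are illustrative assumptions, not a description of Google's Knowledge Graph.

```python
# A toy triple store in the spirit of the nested if-then systems described above.
# All facts and both rules are illustrative assumptions.

triples = {
    ("X", "is-a", "man"),
    ("X", "lives-in", "Acapulco"),
    ("Acapulco", "located-in", "Mexico"),
    ("Berlin", "capital-of", "Germany"),
}

# Rule 1: if someone lives in a place located in a country, they live in that country.
for (s, p, o) in list(triples):
    if p == "lives-in":
        for (s2, p2, o2) in list(triples):
            if s2 == o and p2 == "located-in":
                triples.add((s, "lives-in", o2))

# Rule 2: every man is a person.
for (s, p, o) in list(triples):
    if p == "is-a" and o == "man":
        triples.add((s, "is-a", "person"))

# Query, knowledge-graph style: what is the capital of Germany?
print([s for (s, p, o) in triples if p == "capital-of" and o == "Germany"])  # ['Berlin']
print(("X", "lives-in", "Mexico") in triples)                                # True
```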

Serre, of Brown, thinks this hybrid approach will be hard pressed to come close to the sophistication of abstract human reasoning. Our minds create abstract symbolic representations of objects such as spheres and cubes, for example, and do all kinds of visual and nonvisual reasoning using those symbols. We do this using our biological neural networks, apparently with no dedicated symbolic component in sight.

Kahneman describes human thinking as having two components, System 1 and System 2. System 1 is the kind used for pattern recognition while System 2 is far better suited for planning, deduction, and deliberative thinking. In this view, deep learning best models the first kind of thinking while symbolic reasoning best models the second kind and both are needed.

Popular categories of ANNs include convolutional neural networks (CNNs), recurrent neural networks (RNNs) and transformers. CNNs are good at processing information in parallel, such as the meaning of pixels in an image. New GenAI techniques often use transformer-based neural networks that automate data prep work in training AI systems such as ChatGPT and Google Gemini.

Through logical rules, Symbolic AI systems can efficiently find solutions that meet all the required constraints. Symbolic AI is widely adopted throughout the banking and insurance industries to automate processes such as contract reading. Another recent example of logical inferencing is a system based on the physical activity guidelines provided by the World Health Organization (WHO).

Parsing, tokenizing, spelling correction, part-of-speech tagging, noun and verb phrase chunking are all aspects of natural language processing long handled by symbolic AI, but since improved by deep learning approaches. In symbolic AI, discourse representation theory and first-order logic have been used to represent sentence meanings. Latent semantic analysis (LSA) and explicit semantic analysis also provided vector representations of documents. In the latter case, vector components are interpretable as concepts named by Wikipedia articles.
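As a small illustration of representing sentence meaning in a first-order-logic style, the sketch below maps one hand-picked sentence pattern to a predicate form. The single "X was born in Y" pattern and the born_in predicate name are assumptions chosen for this example; real systems rely on full parsers and semantic grammars rather than a regular expression.

```python
# A toy mapping from one sentence pattern to a first-order-logic-style form.
# The pattern and the born_in predicate are assumptions for illustration only.

import re

def sentence_to_logic(sentence: str) -> str:
    match = re.match(r"(?P<subj>[A-Z][\w ]*?) was born in (?P<place>[A-Z]\w*)\.?$", sentence)
    if match:
        subj = match.group("subj").lower().replace(" ", "_")
        place = match.group("place").lower()
        return f"born_in({subj}, {place})"
    return "unknown(sentence)"

print(sentence_to_logic("Elvis Presley was born in Tupelo."))  # born_in(elvis_presley, tupelo)
```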
