
A feedback loop alternative to RAG that aligns LLMs with knowledge graph models

  • Alan Morrison 

Discover AI, a YouTube channel run by an unidentified Austrian host who distills and highlights the findings of AI papers in an engaging, visual way, featured the October 2024 paper I'll discuss in this post. The paper is titled "Knowledge Graph-Based Agent for Complex, Knowledge-Intensive Quality Assurance in Medicine," and its authors are Xiaorui Su, Yibo Wang, Shanghua Gao, Xiaolong Liu, Valentina Giunchiglia, Djork-Arné Clevert, and Marinka Zitnik.

These researchers from Harvard, the University of Illinois, Imperial College London, and Pfizer got together to create synergies between language models and knowledge graphs. The agent they created, which they named KGARevion, enables various forms of beneficial interaction between probabilistic, informal language models and more formal, deterministic knowledge graphs that harness the power of logically specified relationships.

This approach helps with complex medical question answering, which the authors say must rely on multiple reasoning methods. As Marinka Zitnik, an assistant professor at Harvard and one of the authors, put it in a post on X:

“Medicine relies on various reasoning strategies, including rule-based, prototype-based, case-based, and analogy-based vertical reasoning (Patel et al., 2005). This variety calls for approaches that accommodate different styles and integrate specialized, in-domain knowledge. For instance, an organism such as Drosophila is used as an exemplar to model a disease mechanism, which is then applied by analogy to other organisms, including humans. In clinical practice, the patient serves as an exemplar, with generalizations drawn from many overlapping disease models and similar patient populations.”

These sorts of exemplars and analogies must draw on many different sources, as well as on the effective digitization of various types of reasoning, which the knowledge graph approach facilitates.

A blended LLM + vector embeddings + knowledge graph solution

More generally, my impression is that KGARevion improves accuracy by correcting for the weaknesses of language models, vector embeddings, and knowledge graphs while capitalizing on their respective strengths.

As the authors point out, neural-net transformer-based language models (including very large LLMs such as Llama) have creative generative potential but are prone to hallucinations and various kinds of inaccuracy. Knowledge graphs, by contrast, can be inherently more accurate and do provide a means of validating contextually related information, but they have only limited generation ability of their own. And knowledge graphs on their own are often incomplete.

So why not align LLMs and knowledge graphs and then use one to inform the other in order to rectify their most significant deficiencies? And why not use vectorization to help with the alignment process? 

Such a blended approach promises to boost results overall: in the case of open reasoning, KGARevion raises accuracy from as low as 30 percent to above 70 percent. The need for more nuanced, complex reasoning to enrich the knowledge input, and thereby achieve higher accuracy, seems to be the rationale behind the agent the paper describes.

Quality assurance through knowledge graph verification of LLM-generated triples

The medical community shares a lot of knowledge. Integrating the right knowledge for a particular purpose requires a knowledge graph development approach that is consistent across inconsistent, often incomplete data sources. 

To improve overall cohesion across data sources, KGARevion makes it possible for a medical LLM to generate new triples. A triple is the basic unit of meaning, the smallest subgraph, in standard knowledge graphs. Each triple consists of a subject, predicate, and object, elements that can be part of the LLM's concept extraction process. The larger graph can then be the means of testing, aligning, and verifying the triples the LLM has initially generated.
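To make the idea concrete, here is a minimal sketch of what triple generation and graph-based checking might look like. The entity and relation names are hypothetical illustrations, not drawn from the paper itself:

```python
from dataclasses import dataclass

# A knowledge-graph triple: subject, predicate (relation), object.
@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    object: str

# Triples an LLM might extract from a medical question
# (hypothetical examples, not the paper's actual output):
generated = [
    Triple("metformin", "treats", "type 2 diabetes"),
    Triple("metformin", "causes", "hair loss"),  # a plausible hallucination
]

# A toy stand-in for the existing knowledge graph.
known = {Triple("metformin", "treats", "type 2 diabetes")}

# Keep only generated triples the graph can corroborate.
verified = [t for t in generated if t in known]
```

In the real system the verification step is far richer than a set-membership test, but the shape of the loop is the same: the LLM proposes, the graph disposes.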

Verifying LLM-generated triples against a knowledge graph involves mapping the entities (the subjects and objects) generated by the LLM to standard medical terms that are part of the Unified Medical Language System (UMLS). Different sources use these same standard terms. Only triples whose entities logically correspond to UMLS concepts end up in the knowledge graph.
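A rough sketch of that filtering step, using a toy dictionary in place of real UMLS concept lookup (production entity linking uses dedicated tooling; everything here is illustrative):

```python
# Hypothetical surface-form-to-concept-ID table standing in for UMLS.
# C0027051 is the UMLS concept for myocardial infarction.
umls = {
    "heart attack": "C0027051",
    "myocardial infarction": "C0027051",
    "aspirin": "C0004057",
}

def normalize(entity: str):
    """Return the UMLS concept ID for an entity, or None if unmapped."""
    return umls.get(entity.lower().strip())

triples = [
    ("aspirin", "treats", "heart attack"),
    ("aspirin", "treats", "frobnitz syndrome"),  # unmappable entity
]

# Keep only triples whose subject AND object both map to UMLS concepts;
# anything the terminology cannot ground is dropped before graph insertion.
kept = [t for t in triples if normalize(t[0]) and normalize(t[2])]
```

Note how the synonym table also canonicalizes: "heart attack" and "myocardial infarction" resolve to the same concept ID, which is what lets inconsistent sources cohere in one graph.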

Each triple included in the knowledge graph is also accompanied by associated pre-trained embeddings (representations of the triple's structural relationships in a low-dimensional vector space). Those embeddings on the knowledge graph side can then be matched up, or aligned, with the LLM's own internal token embeddings.

Discover AI explains that the LLM “applies a linear transformation or projection layer to the token embeddings, which adjusts their dimensionality, semantics, and structure to match the KG embeddings.”
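A projection layer of this kind is, at its core, a learned linear map from the LLM's token-embedding space into the KG-embedding space. The sketch below uses random weights and toy dimensions (16 and 8, where a real system would use thousands and hundreds) purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

d_llm, d_kg = 16, 8                        # hypothetical dimensions

token_emb = rng.normal(size=(3, d_llm))    # embeddings for 3 tokens

# The projection is a learned linear layer (weights W, bias b);
# random values here stand in for trained parameters.
W = rng.normal(size=(d_llm, d_kg))
b = np.zeros(d_kg)

projected = token_emb @ W + b              # now in the KG embedding space

# Once projected, alignment can be scored against a KG entity
# embedding with, e.g., cosine similarity.
kg_entity = rng.normal(size=d_kg)
scores = (projected @ kg_entity) / (
    np.linalg.norm(projected, axis=1) * np.linalg.norm(kg_entity)
)
```

The dimensional change is the easy part; the quoted claim that the projection also adjusts "semantics and structure" reflects that W is trained so semantically related tokens land near their corresponding KG entities, not just resized.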

The LLM can then be fine-tuned on the revised knowledge graph with the newly included and verified triples, harnessing KG-like reasoning in the process. The LLM can also do error correction internally based on what it’s learned from the knowledge graph.

Conclusion: A richer, hybrid approach beyond RAG

In their research, Apple, Microsoft, and others have used retrieval-augmented generation (RAG) to bring information into a neural net model from a knowledge graph. On the face of it, KGARevion appears more sophisticated than other knowledge graph RAG approaches in its ability to tap into the relationship richness of knowledge graphs, providing a path to enriching LLMs in the process. With such an approach, knowledge graphs and vector embeddings together become key components of a knowledge verification process. It seems a promising direction for many knowledge-intensive industries, not just biomedicine.
