Using AI to Solve the Deepest Math Conjectures
Tools such as OpenAI's GPT models can occasionally give the impression that they are able to prove theorems and even generalize them. Whether this is a…
Author and publisher at MLtechniques.com. Machine learning scientist, mathematician, book author (Wiley), patent owner, former post-doc at Cambridge University, and former VC-funded executive, with 20+ years of corporate experience at companies including CNET, NBC, Visa, Wells Fargo, Microsoft, and eBay. Vincent also founded or co-founded several start-ups, including one with a successful exit (Data Science Central, acquired by TechTarget).
LLM 2.0, RAG & Non-Standard Gen AI on GitHub
In this article, I share my latest Gen AI and LLM advances, featuring innovative approaches radically different from both standard AI and classical ML/NLP. The…
There is no such thing as a Trained LLM
What I mean here is that traditional LLMs are trained on tasks irrelevant to what they will do for the user. It’s like training a…
LLM Chunking, Indexing, Scoring and Agents, in a Nutshell
It is becoming a bit difficult to follow all the new AI, RAG and LLM terminology, with new concepts popping up almost every week. Which…
New Book: Building Disruptive AI & LLM Technology from Scratch
This book features new advances in game-changing AI and LLM technologies built by GenAItechLab.com. Written in simple English, it is best suited for engineers, developers,…
Large AI Apps: Optimizing the Databases Behind the Scenes
In this article, I describe some of the most common types of databases that apps such as RAG or LLM rely upon. I also provide…
30 Features that Dramatically Improve LLM Performance – Part 3
This is the third and final article in this series, featuring some of the most powerful features to improve RAG/LLM performance. In particular: speed, latency,…
30 Features that Dramatically Improve LLM Performance – Part 2
In this article, I explore 10 additional features that have a big impact on LLM/RAG performance, in particular on speed, latency, results relevancy (minimizing hallucinations,…
30 Features that Dramatically Improve LLM Performance – Part 1
Many are ground-breaking innovations that make LLMs much faster and not prone to hallucinations. They reduce the cost, latency, and amount of computer resources (GPU,…
Building Professional Diagrams: LLM/RAG Example with Source Code
If you, your team or your company design and deploy AI architecture, data pipelines or algorithms, having great diagrams to illustrate the workflow is a…