The Case Against LLMs: Why Yann LeCun Isn’t Impressed
Yann LeCun, a pioneer in the AI space and Chief AI Scientist at Meta, isn’t shy about sharing his opinions. At the Nvidia GTC 2025 conference, he dropped a bombshell: “I’m not so interested in LLMs anymore.” For those who’ve been following the hype around ChatGPT and similar models, this might sound shocking. But LeCun’s skepticism isn’t unfounded. He argues that LLMs, while impressive, are limited in their ability to understand the physical world, reason, and plan—key components of true artificial general intelligence (AGI).
LeCun points out that LLMs excel at generating text by predicting the next word in a sequence. But text, he says, is a poor model for understanding the complexities of the real world. “Text is a very lossy compression of the world,” he explains. “It’s like trying to understand a movie by reading the script.” In other words, LLMs can mimic human language, but they lack the depth and richness of human understanding.
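To make that concrete, here is a deliberately tiny sketch of the next-word prediction loop LeCun is describing. It swaps the transformer for a simple bigram count table over a toy corpus (the corpus and the function names are invented for illustration), but the autoregressive loop, predict a word, append it, predict again, is the same mechanism LLMs scale up.

```python
# A toy illustration of next-token prediction: the core loop behind LLM text
# generation, reduced to bigram counts over a tiny corpus. Real LLMs replace
# the count table with a transformer, but the generation loop is the same idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat saw the dog".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most likely next word given the previous one."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<eos>"

# Autoregressive generation: feed each prediction back in as context.
text = ["the"]
for _ in range(5):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # "the cat sat on the cat"
```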
World Models: The Missing Piece of the AI Puzzle
So, if LLMs aren’t the answer, what is? LeCun believes the key lies in “world models”—systems that can understand the physical world as humans do. A world model is essentially a mental representation of how the world works. For example, we know that if we push a bottle from the top, it’ll tip over, but if we push it from the bottom, it’ll slide. These intuitive understandings of physics are something humans develop in the first few months of life. But AI systems, despite their computational power, struggle with this.
LeCun argues that the architectures we use for AI today—like transformers, which power LLMs—are ill-suited for building world models. Transformers are great at predicting the next token in a sequence, but they fall short when it comes to reasoning about the physical world. “Tokens are discrete,” LeCun explains. “But the world is continuous and high-dimensional.” This mismatch, he says, is why AI systems still struggle with tasks that come naturally to humans and even animals.
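To see what a "world model" means in the sense LeCun is gesturing at, here is a toy, hand-written version of his bottle example: a function that maps a current state and an action to a predicted next state. The rule of thumb and the numbers are made up for illustration, and a real world model would learn this behavior from raw video, but notice that the prediction lives in continuous physical quantities rather than a stream of discrete tokens.

```python
# A minimal, hand-written "world model": a function mapping (state, action)
# to a predicted next state in a continuous space. The bottle rule below is
# an illustrative toy, not a learned model.
from dataclasses import dataclass

@dataclass
class BottleState:
    upright: bool
    position: float  # metres along the table

def predict(state: BottleState, push_height: float, push_force: float) -> BottleState:
    """Predict the bottle's next state given where and how hard we push.

    Toy rule of thumb: a push well above the centre of mass tips the bottle,
    a low push slides it. A real world model would learn this from video.
    """
    centre_of_mass = 0.5  # fraction of bottle height (assumed)
    if push_height > centre_of_mass and push_force > 1.0:
        return BottleState(upright=False, position=state.position)
    return BottleState(upright=True, position=state.position + 0.05 * push_force)

print(predict(BottleState(True, 0.0), push_height=0.9, push_force=2.0))  # tips over
print(predict(BottleState(True, 0.0), push_height=0.2, push_force=2.0))  # slides
```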
Meet JEPA: The Future of AI Architecture
Enter JEPA, or Joint Embedding Predictive Architecture, LeCun’s proposed solution to the limitations of LLMs. JEPA is designed to learn abstract representations of the world, allowing AI systems to reason and plan more like humans. Unlike generative models, which try to predict every detail of an image or video, JEPA focuses on understanding the underlying structure. This makes it more efficient and better suited for real-world tasks.
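Here is a rough, schematic sketch of that joint-embedding idea, with random matrices standing in for the encoders and predictor. The dimensions, weights, and data are assumptions for illustration, not the real architecture; the point it captures is that the prediction and the error are computed in an abstract latent space, not pixel by pixel.

```python
# A schematic sketch of the joint-embedding predictive idea: encode the
# context (e.g. visible patches of a video) and the target (the masked part)
# into abstract embeddings, predict the target embedding from the context
# embedding, and measure error in that latent space rather than pixel space.
# Shapes, the linear "encoders", and the data are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
D_INPUT, D_LATENT = 256, 32  # assumed dimensions

W_context = rng.normal(size=(D_INPUT, D_LATENT))      # stand-in context encoder
W_target = rng.normal(size=(D_INPUT, D_LATENT))       # stand-in target encoder
W_predictor = rng.normal(size=(D_LATENT, D_LATENT))   # latent-space predictor

def jepa_loss(context_patch: np.ndarray, target_patch: np.ndarray) -> float:
    """Predict the target's embedding from the context's embedding and
    return the squared error in latent space (not in pixel space)."""
    s_context = context_patch @ W_context     # abstract representation of context
    s_target = target_patch @ W_target        # abstract representation of target
    s_predicted = s_context @ W_predictor     # prediction happens in latent space
    return float(np.mean((s_predicted - s_target) ** 2))

context = rng.normal(size=D_INPUT)  # e.g. visible part of a frame
target = rng.normal(size=D_INPUT)   # e.g. masked part of the frame
print(jepa_loss(context, target))   # training would minimise this quantity
```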
LeCun and his team have been working on JEPA for years, and they’re now gearing up to release version 2. Early results are promising, with JEPA demonstrating the ability to predict physical outcomes in videos—like determining whether a sequence of events is physically possible. This is a significant leap forward, as it moves AI closer to understanding the world in a more human-like way.
System 1 vs. System 2: Why AI Needs Both
Another critical piece of the puzzle, according to LeCun, is the distinction between System 1 and System 2 thinking, a framing popularized by psychologist Daniel Kahneman. System 1 is fast, intuitive, and automatic, like driving a car on a familiar route. System 2, on the other hand, is slow, deliberate, and analytical, like learning to drive for the first time. Current AI systems, LeCun argues, are stuck in System 1. They're great at reactive tasks but struggle with the kind of deep reasoning and planning that System 2 enables.
LeCun believes that to achieve AGI, we need AI systems that can seamlessly transition between System 1 and System 2. This would allow them to handle complex tasks with the efficiency of System 1 and the adaptability of System 2. It’s a tall order, but LeCun is optimistic that architectures like JEPA will get us there.
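As a thought experiment, here is a toy sketch of that hand-off: familiar situations get an instant, cached System 1 reaction, while novel ones trigger a slow System 2 deliberation whose answer is then cached for next time. The tasks, the cache, and the scoring rule are all invented for illustration.

```python
# A toy illustration of the System 1 / System 2 split as a routing decision:
# answer familiar situations with a fast, cached reaction, and fall back to a
# slow, deliberate search when the situation is new. Everything here is a
# made-up stand-in, not a real planning algorithm.

fast_policy = {            # "System 1": reactions learned from experience
    "red light": "brake",
    "green light": "drive",
}

def deliberate(task: str, options: list[str]) -> str:
    """'System 2': slow, exhaustive evaluation of the options."""
    # Toy scoring: prefer the option that shares the most words with the task.
    def score(option: str) -> int:
        return len(set(option.split()) & set(task.split()))
    return max(options, key=score)

def act(task: str, options: list[str]) -> str:
    if task in fast_policy:               # familiar -> answer instantly
        return fast_policy[task]
    answer = deliberate(task, options)    # novel -> reason it through
    fast_policy[task] = answer            # practice turns System 2 into System 1
    return answer

print(act("red light", []))                                           # instant: "brake"
print(act("merge onto a busy highway", ["wait for a gap", "stop"]))   # deliberated
print(act("merge onto a busy highway", ["wait for a gap", "stop"]))   # now cached
```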
What This Means for the Future of AI
LeCun’s vision of AI’s future is both ambitious and humbling. It’s ambitious because it challenges us to rethink the foundations of AI, moving beyond the hype of LLMs to explore new architectures and models. It’s humbling because it reminds us how much we still have to learn about intelligence—both artificial and human.
So, what’s next? If LeCun is right, the future of AI will be less about generating text and more about understanding the world. It’ll be less about mimicking human behavior and more about replicating human understanding. And it’ll be less about brute-force computation and more about elegant, efficient architectures like JEPA.
Join the Conversation
What do you think? Are LLMs just a stepping stone, or are they the future of AI? Do world models hold the key to AGI? Share your thoughts in the comments below and become part of the iNthacity community. Together, we can explore the frontiers of technology and imagine a brighter, smarter future. Don’t forget to like, share, and join the debate. The future of AI is too important to leave to the experts alone—let’s shape it together.