The recent interview with LeCun was very exciting for me to read because he shares two key insights whose significance is easy to miss for those unfamiliar with the background. From my research into the conceptual architecture of LLMs, it has become clear to me that the architecture of LLMs cannot support true intelligence - we are still at the level of sophisticated stochastics.
LLMs manipulate language really well. ... LLMs are limited to the discrete world of text. They can’t truly reason or plan, because they lack a model of the world. They can’t predict the consequences of their actions.
Those two short phrases are a nod to the symbol grounding problem (Harnad 1990) and the frame problem (Fodor 1987). LeCun:
The key is to learn an abstract representation of the world and make predictions in that abstract space, ignoring the details you can’t predict.
This is exactly the frame problem, and it is a very hard problem to solve. The significance of these two papers (and the problems they identify) is that both problems need to be solved to achieve intelligence. Given the tens of billions of dollars and the incredibly smart minds being poured into AI research, there must be people aware of these two papers and their significance, but this interview with LeCun is the first reference I've come across. If LeCun left Meta to found this new venture, that would imply that Meta was not working on solving these two problems, and most likely neither are the other major AI providers. If so, this is stunning to me.
As Fodor wrote in his paper: "If I did, I'd have solved the frame problem and I'd be rich and famous." Whoever solves the two problems holds the golden keys to the kingdom of true human-level intelligence, and the current valuations will mean little by comparison. LeCun is betting on being able to solve both. If people think AI models are sophisticated now (and they are), we have seen nothing yet compared to true AI.