In summary
- Yann LeCun, head of AI at Meta and pioneer in neural networks, is recognized as Person of the Year for his impact on artificial intelligence in 2024.
- LeCun has promoted self-supervised learning as key to the evolution toward more autonomous and intelligent AI.
- He has criticized scaremongering about AI and rejects regulation of foundation models, advocating instead a focus on specific applications.
Yann LeCun, Chief AI Scientist at Meta and Turing Award winner, has long been a central figure in artificial intelligence. However, over the past year, his work has not only continued to expand the boundaries of AI research, but has also generated critical debates about how society should address the opportunities and risks posed by this transformative technology.
Born in 1960 in Soisy-sous-Montmorency, France, LeCun has been a driving force in AI innovation. From serving as founding director of the New York University Center for Data Science in 2012 and co-founding Meta AI in 2013, to shaping the future of open-source artificial intelligence, LeCun’s pragmatic vision makes him the right choice for Person of the Year, according to Emerge.
“On the technical side, he’s been a visionary. There are only a couple of people you could genuinely say that about, and he’s one of them,” Rob Fergus, Professor of Computer Science at New York University, told Decrypt in an interview. “Recently, his advocacy for open source and open research has been crucial to the Cambrian explosion of startups and people building on these LLMs.”
Fergus is an American computer scientist specializing in machine learning, deep learning and generative models. As a professor at NYU’s Courant Institute and a researcher at Google DeepMind, he co-founded Meta AI (formerly Facebook Artificial Intelligence Research) with Yann LeCun in September 2013.
LeCun’s impact on AI goes back decades, spanning his pioneering work in machine learning and neural networks. As a former Silver Professor at New York University, he has long advocated self-supervised learning—a technique inspired by how humans learn from their environment. In 2024, this vision drove advances in AI systems that can perceive, reason and plan with increasing sophistication, similar to living beings.
“There was a phase around 2015 when reinforcement learning was seen as the path to AGI. Yann had an analogy with a cake: unsupervised learning is the body, supervised learning is the icing, and reinforcement learning is the icing on the cake,” Professor Fergus recalled. “This was mocked by many at the time, but it has been proven correct. Modern LLMs are primarily trained with unsupervised learning, fine-tuned with minimal supervised data, and refined using reinforcement learning based on human preferences.”
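The three-stage recipe Fergus describes can be sketched schematically. All function names, data structures, and corpus sizes below are illustrative stand-ins, not any real training API; the point is only the ordering and the relative scale of data at each stage:

```python
# Schematic of the modern LLM training recipe: unsupervised pretraining
# on vast raw text, supervised fine-tuning on a much smaller labeled set,
# then refinement from human preference data. Illustrative only.

def pretrain(raw_text):
    """Unsupervised stage: learn to predict the next token from raw text."""
    return {"stage": "pretrained", "data": len(raw_text)}

def finetune(model, labeled_examples):
    """Supervised stage: adapt the model with comparatively few labeled examples."""
    return dict(model, stage="finetuned", labels=len(labeled_examples))

def refine_with_preferences(model, preference_pairs):
    """RL stage: refine outputs using human preference rankings (RLHF-style)."""
    return dict(model, stage="aligned", preferences=len(preference_pairs))

model = pretrain("..." * 1_000_000)                       # vast unlabeled corpus
model = finetune(model, ["question/answer"] * 10_000)     # far smaller labeled set
model = refine_with_preferences(model, [("better", "worse")] * 1_000)
```

Note how the data volume shrinks at each stage, which is exactly the proportionality LeCun's cake analogy predicted: the unsupervised "body" dwarfs the supervised "icing" and the reinforcement-learning "cherry."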
Whether developing cutting-edge systems like Meta’s open-source language models, including Llama, or addressing the ethical and regulatory challenges of AI, LeCun has become a central figure in the global debate on the role of artificial intelligence.
“It has been wonderful to see him up close and all the incredible things he has done,” Professor Fergus said. “More people should listen to him.”
AI regulations
One of LeCun’s most debated positions this year has been his outspoken opposition to regulating foundation AI models.
“He has told me that he doesn’t think AI regulations are necessary or right,” NYU Mathematics Professor Russel Caflisch told Decrypt. “I think he’s an optimist and sees all the good things that can come from AI.”
Caflisch, director of the Courant Institute for Mathematical Sciences at New York University, has known Professor LeCun since 2008 and has witnessed the emergence of modern machine learning.
In June, LeCun stated on X that regulating the models themselves could stifle innovation and impede technological progress.
“Holding technology developers responsible for misuses of products built from their technology will simply halt technological development,” LeCun said. “It will certainly stop the distribution of open source AI platforms, which will kill the entire AI ecosystem, not just startups but also academic research.”
LeCun has instead advocated focusing regulation on applications, where risks are context-specific and more manageable, rather than on the underlying foundation models.
“Yann has done the fundamental work that has made AI successful,” Caflisch said. “His current importance lies in being accessible, articulate, and having a vision to advance AI toward artificial general intelligence.”
Criticism of alarmism about AI
LeCun has been vocal in countering what he perceives as exaggerated fears about the potential dangers of AI.
“He doesn’t give in to scaremongering and is optimistic about AI, but he’s not a cheerleader either,” Caflisch said. “He has also promoted a path to improve this through robotics, collecting data from the physical world.”
In an April appearance on the Lex Fridman Podcast, he dismissed the catastrophic predictions often associated with runaway superintelligence or uncontrolled AI systems.
“AI catastrophists imagine all kinds of catastrophic scenarios about how AI could escape or take over and basically kill us all, and that’s based on a lot of assumptions that are mostly false,” LeCun said. “The first assumption is that the emergence of superintelligence will be an event: at some point we are going to discover the secret and turn on a machine that is superintelligent, and because we have never done it before, it is going to take over the world and kill us all. That is false.”
Since the launch of ChatGPT in November 2022, the world has entered what many call an AI arms race. Primed by a century of Hollywood movies predicting a robotic apocalypse, and by news that AI developers are working with the US government and its allies to integrate AI into their systems, many fear that an AI superintelligence will take control of the world.
LeCun, however, disagrees with these views, arguing that the most advanced near-term AI will have roughly the intelligence of a small animal, not the global hive mind of The Matrix.
“It’s not going to be an event. We’re going to have systems that are as intelligent as a cat and have all the characteristics of human-level intelligence, but their level of intelligence would be like that of a cat or a parrot,” LeCun continued. “Then we’re going to work on making those things smarter. As we make them smarter, we’re also going to put some limits on them.”
In a hypothetical doomsday scenario where rogue AI systems emerge, LeCun suggested that if developers cannot agree on how to control AI and one system goes rogue, a “good” AI could be deployed to combat the rogue ones.
Yann LeCun says AI language models cannot reason or plan – even models like OpenAI’s o1 – instead they are only doing intelligent retrieval and are not the path towards human-level intelligence pic.twitter.com/wQb4pVaRpX
— Tsarathustra (@tsarnick) October 23, 2024
The way forward for AI
LeCun advocates what he calls “objective-driven AI,” in which AI systems don’t just predict sequences or generate content but pursue goals: they can understand, predict, and interact with the world with a depth similar to that of living beings. This approach involves building AI systems that develop “world models,” internal representations of how things work, which enable causal reasoning and the ability to plan and adapt strategies in real time.
LeCun has long been a proponent of self-supervised learning as a method to advance AI toward more autonomous and general intelligence. He envisions AI learning to perceive, reason, and plan at multiple levels of abstraction, allowing it to learn from vast amounts of unlabeled data, much as humans learn from their environment.
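The core idea of self-supervised learning is that the training signal is derived from the raw data itself rather than from human annotations. A deliberately tiny sketch makes this concrete, with a bigram counter standing in for a neural network (everything here is illustrative, not how any production system works):

```python
from collections import Counter, defaultdict

# "Unlabeled" training text: no human wrote any labels for it.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Self-supervision: the targets (the next word) come straight out of
# the data itself, so supervision is free and scales with the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": seen twice after "the", vs. once each for "mat"/"fish"
```

The same principle, predict a held-out piece of the input from the rest of it, is what lets large models train on web-scale text without manual labeling.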
“The real AI revolution has not yet arrived,” LeCun said during a speech at the K-2024 Global Science and Technology Forum in Seoul. “In the near future, every one of our interactions with the digital world will be mediated by AI assistants.”
Yann LeCun’s contributions to AI in 2024 reflect a drive for technological innovation and pragmatic foresight. His opposition to heavy-handed AI regulation and his rejection of alarmist narratives underscore his commitment to advancing the field. As AI continues to evolve, LeCun’s influence helps ensure that it remains a force for technological progress.