Don't you hate it when the godfathers disagree?
On one side, we have former Google scientist Dr. Geoffrey Hinton warning that we're going too fast and AIs could ruin everything from jobs to truth. On the other side, we find Meta's Yann LeCun.
The two scientists once worked together on the deep learning breakthroughs that changed the world of AI and triggered the flurry of advances in algorithms and large language models that brought us to this fraught moment.
Hinton delivered his warning earlier this year to The New York Times. Fellow Turing Award-winner LeCun largely countered Hinton and defended AI development in a wide-ranging interview with Wired's Steve Levy.
"People are exploiting the fear about the technology, and were running the risk of scaring people away from it," LeCun told Levy.
LeCun's argument, which in its TL;DR form amounts to "Don't worry, embrace AI," breaks down into a few key components that may or may not make you think differently.
I particularly enjoyed LeCun's open-source argument. He told Levy that if you accept that AI may end up sitting between us and much of our digital experience, it doesn't make sense for a few AI powerhouse companies to control it. "You do not want that AI system to be controlled by a small number of companies on the West Coast of the US," said LeCun.
Now, this is a guy who works as Meta's Chief AI Scientist. Meta (formerly Facebook) is a big West Coast company, and one that recently launched its own open-source LLM, Llama 2. I'm sure the irony is not lost on LeCun, but I think he may be targeting OpenAI. The world's leading AI purveyor (maker of ChatGPT and DALL-E, and a major contributor to Microsoft's Copilot) started as an open, non-profit company. It now gets substantial funding from Microsoft (also a big West Coast company), and LeCun claims OpenAI no longer shares its research.
LeCun has been vocal on the subject of AI regulation but maybe not in the way you think. He's basically arguing against it. When Levy asked about all the damage an unregulated and all-powerful AI could do, LeCun insisted that not only are AIs built with guardrails but if these tools are used in industry, they'll have to follow pre-existing and rigid regulations (think the pharmaceutical industry).
"The question that people are debating is whether it makes sense to regulate research and development of AI. And I don't think it does," LeCun told Wired.
There's been a lot of talk in recent months about the potential for Artificial General Intelligence (AGI), which may or may not be much like your own intelligence. Some, including OpenAI's Sam Altman, believe it's on the near horizon. LeCun, though, is not one of them.
He argued that we can't even define AGI because human intelligence is not one thing. He has a point there. My intelligence would not be in any way comparable to Einstein's or LeCun's.
There's little question in LeCun's view that AIs will eventually be smarter than humans, but he also notes that they won't share our motivations.
He likens these AI assistants to "super-smart humans" and says that working with them might be like working with super-smart colleagues.
Even with all that intelligence, LeCun insists these AIs won't have human-like motivations and drives. Global domination won't be a goal for them simply because they're smarter than us.
LeCun doesn't discount the idea of programming in a drive (a superseding goal), but he frames that as "objective-driven AI," and since part of that objective could be an unbreachable guardrail, the safeguards would be baked in.
Do I feel better? Is less regulation, more open source, and a firmer embrace of AI mediation the path forward to a safer future? Maybe. LeCun certainly thinks so. Wonder if he's spoken to Hinton lately.
Continue reading here: 4 reasons why this AI Godfather thinks we shouldn't be afraid - TechRadar