Nathan Gardels is the editor-in-chief of Noema Magazine.
As generative AI models become ever more powerful on their way to surpassing human intelligence, there has been much discussion about how they must align with human values so they end up serving our species instead of becoming our new masters. But what are those values?
The problem is that there is no universal agreement on one conception of the good life, nor the values and rights that flow from that incommensurate diversity, which suits all times, all places and all peoples. From the ancient Tower of Babel to the latest large language models, human nature stubbornly resists the rationalization of the many into the one.
Despite the surface appearance of technological convergence, a deep ontological plurality, that is, profoundly different beliefs about the nature of being, still informs the active values of variegated societies.
This is most readily evident in the politico-cultural clash of the leading AI powers, Silicon Valley and China. At the risk of reductive essentialism for the purpose of brevity, the values of the former are aligned with the libertarian worldview of the sovereign individual long cultivated in the Judeo-Christian West. The values of the latter are aligned with the concept of the collectively embedded person rooted in Confucian, Buddhist and Daoist beliefs of social interdependence.
An early mission statement by OpenAI, which developed GPT, reflects the deep well from which its innovations have sprung: "We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."
By contrast, after Alibaba released its latest version of generative AI in 2023, the Cyberspace Administration of China quickly laid down the law: "Content generated by generative artificial intelligence should embody core socialist values and must not contain any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity."
When I once asked Kai-Fu Lee, one of China's top AI entrepreneurs, whether the censorship regime there would distort its LLMs from accurately reflecting reality, he simply noted that different cultural zones with different values will censor different things. While the Chinese state might censor any criticism of the Party, in the West there is a kind of culturally driven "woke" censorship over sensitive speech on race and gender. In the Islamic world, there will be censorship over blasphemy against the Prophet Muhammad. Each Grossraum, or great cultural space, will align what is acceptable or not in its LLM algorithms according to the pertinent sensitivities.
Even within the common ontological grounding of the West, values and norms are not homogeneous.
Arthur Mensch is the celebrated entrepreneur behind the French startup Mistral, lauded as one of the few potential European champions in an AI world dominated by America and China. "The issue with not having a European champion is that the road map gets set by the United States," he told the New York Times. It wasn't safe to trust the U.S. tech giants, he said. "We can't have a strategic dependency."
Echoing a sentiment expressed recently in Noema concerning the teleological enthusiasms of Silicon Valley accelerationists who proselytize salvation through technology, Mensch said he felt uncomfortable with this "very religious" fascination with AI. "The whole A.G.I. [artificial general intelligence] rhetoric is about creating God," he said. "I don't believe in God. I'm a strong atheist. So I don't believe in A.G.I."
A more imminent threat, he told the Times, is the one posed by American AI giants to cultures around the globe. "These models are producing content and shaping our cultural understanding of the world," Mensch said. "And as it turns out, the values of France and the values of the United States differ in subtle but important ways."
Where all this ironically leaves us is that aligning AI with universal values must, above all, mean the recognition of particularity: plural belief systems, contesting worldviews and incommensurate cultural sensibilities that reflect the diverse disposition of human nature.
Read the rest at Noema Magazine: "The Babelian Tower Of AI Alignment."