Hundreds of AI products flooded the market last year, sparking both awe and confusion. People are dazzled by the idea that AI is reshaping our world, yet often find themselves scratching their heads at the reality: many of these products aren't living up to the hype. It's a rollercoaster. AI unicorns pop up everywhere, and some newly minted ones, such as startups promising AI-driven medicine, fail spectacularly, each more mythical than the last and armed with an ever-expanding definition of AI.
And yet, this stands in contrast to AI's grounded beginnings. John McCarthy, who coined the term 'artificial intelligence,' described it simply as the science and engineering of making intelligent machines. Today there is no such consensus: companies like McKinsey and Google each spin their own version of the definition, and the differences are telling. AI, once a straightforward scientific endeavor, has become a catch-all phrase for almost any algorithm-driven functionality.
Much of the current AI buzz focuses on 'generative AI.' Oracle defines generative AI as a subset of machine learning technologies adept at creating content in response to textual prompts. IBM explains how these models 'understand' data, but the jargon makes the concept harder, not easier, for the average user to grasp, a problem journalists have yet to shake.
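Stripped of the jargon, the core idea is simple: text goes in, newly generated text comes out. A minimal, illustrative sketch makes the point, here using the open-source Hugging Face transformers library, with the small gpt2 model chosen purely as an example rather than a recommendation:

```python
# A minimal sketch of "content in response to a textual prompt."
# The model choice (gpt2) is illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is"
result = generator(prompt, max_new_tokens=40)

# The model continues the prompt with newly generated text.
print(result[0]["generated_text"])
```

Everything layered on top of this prompt-in, content-out loop, the talk of models 'understanding' or 'reasoning,' is where the definitional fog begins.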
Against this backdrop of ever-evolving AI definitions, President Biden has stepped in with the U.S. government's first-ever AI Executive Order. The AI EO represents a significant shift toward proactive governance. It mandates rigorous safety and security standards, requiring AI developers to share their safety protocols with federal authorities before public deployment. While the order marks a progressive step for the U.S. government in addressing the potential dangers of AI, experience tells us that government policies don't always translate into effective regulation. The SEC's stringent approach to cryptocurrency offers a parallel, though the parade of crypto scams arguably strengthened the case for Biden's proactive stance.
President Biden's executive order has sparked varied reactions, with industry leaders acknowledging it as a step forward while arguing that it still falls short of addressing the full spectrum of AI-related concerns.
Margaret Mitchell, researcher and Chief Ethics Scientist at AI startup Hugging Face, suggests the EO relies too heavily on post hoc fixes to AI's problems rather than preemptive measures that minimize foreseeable harms, such as improving data practices before model deployment. Mitchell's point about improving systems before AI models hit the market is valid. However, this approach could slow AI's pace of progress. The goal is not to slam the brakes on innovation, but to find the sweet spot in the EO that balances rapid development with responsible oversight.
Of the commentary published since the EO was announced, one of the most coherent views comes from Nathan Benaich, founder and general partner of Air Street Capital. He points out that the reporting requirements might disproportionately benefit established players over new entrants, potentially stifling competition. In this scenario, new companies and startups, especially those building their own models or leveraging open-source AI, could find themselves sidelined. The EO could, in effect, pave the way for a monopoly in the AI space.
Andrew Ng, co-founder of Coursera, adjunct faculty in Stanford's computer science department, and former head of Baidu AI Group and Google Brain, shares a perspective similar to Benaich's, focusing on the risk of over-regulating AI. Ng worries that such regulation would disproportionately favor large tech companies, particularly where large and open-source models are concerned. He also addresses common misconceptions about AI, cautioning against policy driven by unfounded fears, such as the notion of AI causing human extinction, and advocates for well-considered regulation that supports the open-source community and fosters innovation.
Following President Biden's Executive Order, other countries moved quickly on their own proposals for AI regulation. The effort was most visible at the UK's AI Safety Summit, convened by Prime Minister Rishi Sunak, which gathered government officials and tech CEOs to address the rapid developments in AI.
Sunak opened the summit by noting AI's enormous potential for societal progress alongside inherent risks that must be addressed. The summit concluded with two main initiatives: the Bletchley Declaration and a commitment to forming an AI Safety Institute. These initiatives, and the overall haste and apprehension surrounding AI regulation, gave the impression that AI is dangerous unless tightly controlled.
A closer look at the summit's initiatives reveals their beneficial yet limited scope. The Bletchley Declaration and the AI Safety Institute set international guidelines for developing and deploying AI. The declaration emphasizes human-centric approaches that protect human rights, and it acknowledges AI's transformative potential alongside its risks. Crucially, it points to the necessity of informing the public about AI advancements rather than withholding information. Governments and tech companies will make the final decisions, but it is only fair that the public knows what is happening behind the scenes, since these developments will shape the products and services they use and contribute to.
A positive aspect of the declaration is its focus on safety evaluations, particularly around misuse and the harmful consequences of AI, including the creation of deceptive content. This is pertinent given the risks posed by deepfakes and AI models that mimic human voices, which could be used to destabilize governments or reshape social structures, not to mention enable new types of scams. If there is a significant takeaway from these guidelines, it is that they begin to address genuinely concerning issues of AI misuse. In practical terms, though, the summit's governmental insights offered little novelty.
A notable exception was the statement from Ursula von der Leyen, President of the European Commission. She proposed that effective governance should be built upon four pillars: a well-resourced and independent scientific community, accepted testing procedures and standards, a thorough investigation of significant incidents caused by AI errors or misuse, and a system of alerts from trusted flaggers. These suggestions provide a practical framework for action.
As for industry leaders, it's worth noting their shared reluctance toward complete control over AI systems. Elon Musk's comment that overregulation could impede positive advancements is particularly pointed, especially considering his own casual approach to running a social media platform. Mark Surman, President of the Mozilla Foundation, echoes the sentiment, advocating for openness in AI systems and warning against the pitfalls of tight, proprietary control.
We are now in a phase of anticipation, waiting to see how regulation will shape the future of AI. For now, the pragmatic course is to develop and deploy AI models and systems to the public before they become subject to stringent rules, much as builders did in the early days of cryptocurrency around 2013.