
Silicon Valley’s Generative AI Explained (2023)

Generative AI products such as Bard, ChatGPT, and Grok produce blunt inaccuracies commonly known as software bugs, but their founders characterize them as the hallucinations of a software-soon-to-be-sentient. This piece of speculative fiction, used to market AI, faced immediate opposition from equally extreme A.I. doomers. But dressing up buggy software as just another bump on the road to the promised land highlights a different issue for me: what value can generative AI provide to society when we know Big Tech depends on acquiring more of our private data? Can we trust OpenAI, Microsoft, or any other privately held AI company when, over a single weekend, OpenAI went from proclaiming sentient A.I. to being just another over-hyped Silicon Valley startup with no CEO and a diversity problem?

No matter how sophisticated, computer code cannot replicate human insight or reasoning, so instead of trying, Silicon Valley wants to pull human insight out of the equation entirely. In Bay Area parlance, A.I. "scales knowledge" and "abstracts (human) subjectivity," which amounts to claiming the singularity is here. With government regulation closing in, Internet entrepreneurs are looking to A.I. to get ahead of the game and extract the last few billion from users' wallets and advertisers before it's too late. But this time around, humans are fighting back.

First, let's clarify: Generative A.I. is not the A.I. you and I imagine. Generative A.I. generates strings of words that match the shape of human sentences. It cannot think, feel, reason, or instinctively fear danger. Generative A.I. is an expensive approach to data crunching. In computer science, brute-force algorithms are commonly used to crack passwords or find mathematical patterns. They produce solutions by testing every possibility until an answer is found, but they do not understand the answer, the problem, or the process, yet.
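To make the brute-force idea concrete, here is a minimal Python sketch of exhaustive search. The four-digit PIN and the check function are hypothetical, chosen only for illustration; the point is that the loop tries every candidate in turn and has no notion of why the winning guess is right.

```python
import itertools
import string

def crack_pin(is_correct, alphabet=string.digits, length=4):
    """Try every possible combination until one passes the check.

    There is no model of the problem here: the loop simply enumerates
    candidates and stops when the check says yes.
    """
    for candidate in itertools.product(alphabet, repeat=length):
        guess = "".join(candidate)
        if is_correct(guess):
            return guess  # found by sheer enumeration, not insight
    return None

# Hypothetical secret, for illustration only.
SECRET = "7294"
print(crack_pin(lambda guess: guess == SECRET))  # prints "7294" after up to 10,000 tries
```

At four digits the search space is only 10,000 candidates; every extra character multiplies the work, which is why exhaustive approaches get expensive very quickly.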

Brute force algorithms consume massive amounts of energy and require thousands of powerful computers called GPUs (Graphics Processing Units) to find human-sounding patterns. That’s what the AI GPU buzz is all about: engineers are using thousands of high-end data-crunching machines called GPUs to generate human-sounding patterns at scale using brute force algorithms. Let me explain further. Generative A.I. mimics human insight through pattern recognition by using terabytes of private data (your data, books, movies, dialogues, podcasts, etc.), and proprietary algorithms instantly generate word patterns that sound human based on that data.
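As a loose illustration of "word patterns generated from data," here is a toy Python sketch of a bigram (Markov-chain) text generator. This is not the transformer architecture that commercial products actually use, and the tiny corpus is made up; it only shows how counting which word follows which lets a program emit plausible-sounding text without understanding any of it.

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Record, for each word, the words that follow it in the training text."""
    words = text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=10):
    """Chain words together by repeatedly sampling a likely next word.

    The output can sound human-ish, but nothing here understands the sentence.
    """
    word, output = start, [start]
    for _ in range(length - 1):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

# Tiny made-up corpus, for illustration only.
corpus = "the model predicts the next word the model repeats patterns it has seen"
print(generate(build_bigrams(corpus), start="the"))
```

Production systems replace these word counts with billions of learned parameters and run the sampling across GPU clusters, but the core move is the same: predict the next token from patterns in the training data.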

Generative A.I. algorithms run on thousands of expensive gaming chips (GPUs) that crunch data almost instantly. But as NVIDIA crushes market expectations, there are red flags we must pay attention to. According to Forbes, NVIDIA sold 500,000 GPUs to China to sidestep the export ban imposed by President Joe Biden's administration. Why would that happen if domestic demand were strong? And why is Sam Altman courting Saudi investors to back a competitor? The Saudis have since been obliged by the Biden administration to divest from that venture.

Let's look at the users. ChatGPT claims 100M weekly active users, with bots using it to build and spread dangerous content on X, yet more than ten percent of those users vanished within six months of launch. Understandably, as soon as the bug-ridden, hallucination-prone ChatGPT was released, Satya Nadella, the CEO of Microsoft, bought in and resold it as a lifeline for Bing, because nothing fixes a search engine better than automated software that indiscriminately publishes copyrighted material and steals code, skipping human review in the process.

The introduction of generative A.I. to the masses

ChatGPT appeared in a format familiar to most Internet users: a chat window. ChatGPT is the vehicle Microsoft chose for this race, with Satya Nadella riding shotgun and carrying a ten-billion-dollar cash bag. The investment goes, in part, to OpenAI's 770 ride-or-die coders, but it is mainly spent on expensive data-crunching equipment sold by one company, NVIDIA, at $10,000 a pop. Beyond the philosophical A.I. debate lies El Dorado: a considerable loot that can birth a new economy and crown new kings with billions.

Oddly enough, OpenAI, the poster child of this hype cycle, found itself at the center of self-inflicted public drama. Over the weekend before Thanksgiving 2023, OpenAI went from a self-proclaimed world-changing A.I. startup to a crude, underwhelming reality. Riding shotgun with Sam Altman proved a dangerous exercise for Nadella. Propped up by Y Combinator's hype machine, Sam Altman, entrepreneur extraordinaire and former president of Y Combinator, remixed a product and a story so compelling that Microsoft committed ten billion dollars on January 23rd, 2023, only eight weeks after ChatGPT was released to the public. Months later, Sam Altman was fired, halting this chapter of the hype cycle, in part due to the bold actions of Helen Toner.