Over the past two years, as artificial intelligence has stormed into the public consciousness, Apple has been remarkably quiet. Apple is a 48-year-old company, mature by Silicon Valley standards; old by AI standards. To younger competitors, the company's lack of an AI strategy was a welcome sign of decline. Wall Street more or less agreed. It wasn't just the silence and age that raised doubts; it was the nature of Apple as a company. The company is a $3 trillion control freak. It takes all the nasty parts of technology and tames them with engineering and design until everything feels orderly, inevitable, and above all, profitable. How do you do that with AI, a technology that burns cash and can't be fully explained, let alone controlled?
Apple offered the answer at its Worldwide Developers Conference on Monday. It's not a sexy answer, nor is it a risky one. But it's the first rational theory of AI for the masses that I've heard, and it does what every good corporate strategy should do: identify a gaping hole in the market and make sure it lines up precisely with your company's strengths.
First, the theory. Apple thinks a lot of the discussion about AI over the past few years has been completely insane. “We're trying to help people in their daily lives,” John Giannandrea, Apple's senior vice president of machine learning and AI strategy, told me after WWDC. “We're not trying to create sentient creatures. To talk about AI as a new species” — as the CEO of Microsoft AI recently said — “seems complete nonsense to me. It's a technology, and we're trying to apply it in the most practical, useful way.”
Early in the development process, Apple bet the franchise that most people wouldn't want a trillion-parameter neural network, because most people have no idea what those words even mean. What people want is an AI that can navigate their calendar and email to give them a little tweak to their day. They want Siri to perform multi-step tasks, like finding photos of their kid in a pink coat for Christmas and putting them together into a movie with music they like. If the AI were to generate its own visuals, they'd prefer emojis based on their friends' descriptions, not deepfakes. And, of course, they want all the privacy that Apple typically guarantees.
Apple calls these AI-driven tasks “personal context.” They are meaningful improvements for the iPhone, where over a billion people do most of their computing and generate the majority of Apple's profits. And these tasks require only relatively small bursts of computing power, which matters because computing is where AI incurs the most cost. By limiting itself, Apple says it can perform most of these functions with an AI model with 3 billion parameters that is contained entirely within the device, meaning it never communicates with external servers and there are no privacy risks. This sounds easy, but from an engineering perspective, it's incredibly hard unless you manufacture your own silicon, run your own supply chain, and train your own AI models with licensed, high-quality data. That's the beauty of being a control freak.
Still, Apple acknowledges that some tasks are too complex for smaller AI models to handle. For medium-sized jobs that require web searches, the company has built an array of private cloud servers that can be called upon for help. The company says that user data is never accessed or stored by Apple or any other company. And for the largest AI tasks, it has built an external product called ChatGPT into the iPhone's operating system.
Wait, what? Isn't Apple the old man that OpenAI is trying to take down? Isn't OpenAI the chaotic child that wants to move fast and break things, threatening to make Apple turn the car around?
When Silicon Valley alliances seem contradictory, it usually means that both sides are temporarily taking advantage of the other. The Apple executives I spoke to were not too happy about OpenAI's recent self-inflicted PR blow, but they acknowledge that ChatGPT is the best and most powerful consumer AI on the market. (GPT-4 has about 1.5 trillion parameters, an 18-wheeler compared to an iPhone-powered three-wheeler.) For iPhone users who want to analyze thick PDFs or do generative writing or coding, integrating ChatGPT (without having to create an account, and for free) is a pretty nice perk. And the iPhone operating system asks for the user's permission before forwarding queries to ChatGPT.
For OpenAI, access to Apple's user base is priceless. If the company can penetrate the lives of even a fraction of the iPhone's 1.4 billion users with the tacit backing of one of the tech world's most respected gatekeepers, it could dramatically expand its already wide reach. My gut feeling is that both parties will get something out of this deal and then go their separate ways.
For Apple, the path seems set. The company will use AI as a life hacker, improving email to save time and creating little creative delights that draw users ever deeper into Apple's devices. It will be safe, it will be profitable, it will be inevitable, so inevitable that all friction will be removed. It won't even be called artificial intelligence. In an act of sublime marketing hubris, Apple has decided to market this new product frontier as something else: Apple Intelligence. Great dad joke.