On Monday, Apple announced software features for a range of its products, including the iPhone and iPad, at its Worldwide Developers Conference. Some of the most anticipated announcements were the details of how the company will integrate artificial intelligence into its phones and operating systems.
During the presentation, Apple executives showed how the tech giant's AI systems (which the company explicitly calls “Apple Intelligence” rather than artificial intelligence) can help with text and photo search, image creation, grammar and spelling correction, text summarization, and photo editing.
After the announcement, tech pundits, internet millionaires, and people in the cheap seats around the world complained that these features were inconsequential. CNET's Katie Collins wrote that Apple's most interesting new features were long overdue, summing up the reaction in one word: "finally." Bloomberg's Mark Gurman called them "minor upgrades." My colleague Jordan Hart said the new features are not the silver bullet Apple needs to revitalize the company. And Elon Musk expressed his disappointment by sharing a silly meme. In short, many people are disappointed with Apple's practical approach to integrating AI. Sure, summarizing long emails and transcribing phone calls may sound boring next to speculation that AI could detect cancer early, but so what? Apple's scale, and the specificity of its vision, make it the first major technology company to get AI integration right.
Apple is using AI to do what the technology is actually good at right now: being an assistant. Sure, the virality of OpenAI's ChatGPT showed off AI's potential. But using AI to power robots that do chores, or to answer open-ended questions, is still deeply imperfect. Chatbots lie, hallucinate, and tell people to eat glue. Google's launch, and then partial rollback, of AI-generated answers to people's search queries is just one sign that the current iteration of the technology can't handle all the use cases Silicon Valley dreams of. Venture capitalist Marc Andreessen claims that AI will "save the world," "make war better," and become our therapist, tutor, confidant, and collaborator, ushering in a "golden age" of the arts.
Apple's update is a plea for everyone to calm down. It's a wake-up call for other tech companies to actually deliver on what they promise consumers: AI products that make life a little easier, instead of confusing people with overpromises. Putting AI's best capabilities to everyday use is also the best way for regular people to understand how AI works. It's a way to build trust. Of course, one day AI may figure out how to destroy civilization or whatever, but for now, AI is good at finding pictures of dogs dressed as pickles taken in 2019. And for the vast majority of people, that's totally fine.
What does AI do?
The fact that people are disappointed with Apple says more about the hype around AI capabilities than it does about Apple. Musk has been promising that Tesla will build self-driving robotaxis since 2019, and has been marketing its driver-assistance technology as "Autopilot" for even longer. OpenAI's internal squabbles have become palace intrigue and media fodder, centered on concerns about the speed at which AI's awesome power will transform humanity, not on the limitations of its current practical applications. The biggest models, the most powerful Nvidia chips, the brightest teams plucked from the hottest startups: that's the drumbeat of Silicon Valley and Wall Street AI news. We've seen tech hype cycles before, and most of them are about fundraising and stock sales. Time will tell whether the investments Wall Street and Silicon Valley are making in AI infrastructure will actually produce the returns they're expecting. That's the game.
But in all that noise, the reality of what AI is currently good at (and not good at) gets lost, especially when it comes to the large language models that underpin most of the new consumer AI tools, like virtual assistants and chatbots. The technology is based on pattern recognition. Rather than making value judgments, LLMs simply scan the vast library of information they've ingested (books, web pages, speech transcripts, and so on) and guess which word most logically comes next in the chain. This design has inherent limitations. Sometimes a fact is statistically improbable, but what makes it a fact is that it's verifiably true. It may seem unlikely that the capital of New York State is Albany and not New York City, but it's a fact. Using glue to stick cheese to a pizza might seem sensible to a machine with no concept of what "food" is, but it is never the right answer. Large language models can't make that kind of judgment call between pattern and fact, and it's unclear whether they ever will. Yann LeCun, Meta's lead AI scientist and one of the "godfathers of AI," has said LLMs have a "very limited understanding of logic" and "do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan." He has also argued that LLMs are less intelligent than a house cat, because they can't learn anything beyond the data they're trained on.
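To see why pure pattern-matching stumbles on improbable facts, consider a toy sketch of next-word prediction. This is nothing like a production model, which runs a neural network trained on billions of documents; it's just a bigram counter over a made-up corpus, included to illustrate the principle: the most frequent continuation wins, whether or not it's true.

```python
# Toy illustration of next-word prediction, the core mechanism behind LLMs.
# The corpus is invented for the demo: the wrong answer ("New York City")
# appears more often than the true one ("Albany"), as it might on the web.
from collections import Counter, defaultdict

corpus = (
    "the capital of new york is new york city . "  # popular pattern (wrong)
    "the capital of new york is new york city . "
    "the capital of new york is albany . "         # the fact, seen less often
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation, i.e. the likeliest pattern."""
    return follows[word].most_common(1)[0][0]

# After "...is", the model continues with "new" (the popular pattern),
# not "albany" (the less frequent fact).
print(predict_next("is"))  # -> "new"
```

Feed this toy model more copies of the popular sentence and it only grows more confident in the wrong answer; frequency is not the same thing as truth, no matter the scale.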
In other words, the technology is not perfect.
Enter Apple, a company known for its culture of perfectionism. The company was slow to embrace the hype around AI and, as mentioned above, avoided the term "artificial intelligence" for a long time, preferring the staid, technical name "machine learning." Apple began developing its own generative AI after ChatGPT launched in 2022, but it made the new features public only when it felt they were ready. The technology is the foundation for features like Genmoji, which lets you describe and create a custom emoji to fit whatever is going on, say, an emoji of a person crying while eating an entire pizza. It also lends itself to more practical applications, like writing an email to your boss when you're feeling under the weather, or surfacing a link your mom texted you. These basic call-and-response applications are where LLMs excel right now.
If you want to use the latest Apple products to venture into the stranger, more freewheeling world of talking to chatbots, Siri can summon ChatGPT for you. It's Apple's attempt to draw a clear line between where reliability ends and where the technology's messier frontier begins. For Apple, the distinction makes sense: it wants its products associated not just with cutting-edge technology but with efficiency and productivity.
But that distinction is not helpful to others in Silicon Valley or to their venture-capital backers. Anyone raising money for, or investing in, this technology wants AI's capabilities and value to look like a moving target, specifically one that moves up and to the right, quickly. Apple's strict standards serve as a way to pin down AI's current capabilities, or limitations, depending on how you look at it. The alternative, as we've seen at other companies, is to treat users as guinea pigs and accustom them to technology that makes them question what they see. Societies around the world are already grappling with a crisis of trust in institutions, and flawed AI will only spread that distrust wider and faster, adding another brick to the wall between people and their trust in what they read on the internet. In that sense, Apple's cautious approach may end up helping the rest of the technology industry. By slowly getting its users accustomed to an AI that makes their lives better rather than frustrating them, Apple is making the technology feel like a natural upgrade rather than a scary, unreliable intrusion.
Sure, Apple's AI isn't sexy or scary, but at least it doesn't seem stupid, and ideally that means it won't make our world any stupider.
Linette Lopez is a senior correspondent at Business Insider.