If we believe what Google, Microsoft, and OpenAI are saying, artificial intelligence will revolutionize the way we work, communicate, and generally use phones and computers.
Over the past two weeks, all three tech giants have held events filled with demonstrations of AI's role in shaping how we interact with the internet. But to make these visions a reality, they need the public's trust.
The tech industry's reputation has taken hits in recent years: episodes like Meta's Cambridge Analytica scandal and Google's location tracking controversy have raised questions about the pervasiveness of Big Tech, its impact on our lives, and the way these companies handle consumer information.
There's already some evidence that people are skeptical of AI becoming more prevalent in our lives: A Pew Research Center survey found that 52% of Americans are more concerned than excited about the growing use of artificial intelligence, and a report from Bentley University and consulting firm Gallup found that 79% of Americans don't believe companies will use AI responsibly.
Earning and maintaining trust is essential for companies like Google, Microsoft, and OpenAI, and how people embrace these new AI tools may determine the winners and losers of the next big shift in computing.
“All this progress could be for naught if we can't sufficiently mitigate the risks and make AI trustworthy for humans,” said Arun Chandrasekaran, an analyst specializing in artificial intelligence and cloud computing at Gartner.
Read more: Google is finally feeling like a search company again
Google, Microsoft, and OpenAI have all shown off major advances in their AI systems over the past two weeks, highlighting how rapidly the technology is evolving.
Google's Gemini assistant was a star at last week's Google I/O conference. Gemini gained a new mode called Gemini Live, a more conversational version of the voice assistant for users who subscribe to Gemini Advanced. It can also plan custom trip itineraries and answer questions related to what's on your phone screen. And Google's Gemini Nano model uses on-device processing to listen in on calls and flag potential fraud in real time.
But Alphabet and Google CEO Sundar Pichai did show what the next iteration of AI helpers might look like: He gave the example of an AI agent that could one day return a pair of shoes for you by finding the order number in your email, filling out a return form, and scheduling a UPS pickup. It's a bland example, but it hints at a future where AI agents do more than just answer questions.
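The shoe-return example is really a multi-step pipeline: search email, fill a form, book a pickup. The sketch below is purely illustrative of that chain of steps; none of the classes or method names correspond to a real Google API, and the agent's "reasoning" is reduced to hard-coded logic.

```python
# Illustrative sketch of the agent workflow Pichai described.
# Inbox and Retailer are hypothetical stand-ins, not real APIs.

class Inbox:
    """Stand-in for an email search capability."""
    def __init__(self, emails):
        self.emails = emails  # list of (subject, order_number) pairs

    def find_order_number(self, keyword):
        for subject, order_number in self.emails:
            if keyword in subject:
                return order_number
        return None

class Retailer:
    """Stand-in for a retailer's return-form flow."""
    def start_return(self, order_number, reason):
        return {"order": order_number, "reason": reason, "status": "submitted"}

def return_shoes(inbox, retailer):
    # Step 1: find the order number in email.
    order_number = inbox.find_order_number("shoe order")
    if order_number is None:
        return None
    # Step 2: fill out and submit the return form.
    confirmation = retailer.start_return(order_number, reason="wrong size")
    # Step 3: scheduling the UPS pickup would follow here.
    return confirmation

inbox = Inbox([("Your shoe order #A123 has shipped", "A123")])
print(return_shoes(inbox, Retailer()))
```

The point of the sketch is that each step acts on the user's private data (email, accounts), which is why the trust question looms so large.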
Google CEO Sundar Pichai spoke about the company's vision for AI agents at Google I/O.
Screenshot/James Martin/CNET
On May 20, just before its Build conference, Microsoft unveiled a new class of AI-powered computers called Copilot Plus PCs, which must meet specific hardware requirements so they can run AI algorithms on-device without relying on the cloud.
One of the new features that will be available on these PCs is Recall, which takes a snapshot of your PC's desktop and lets you replay what you did previously, if needed. The feature is positioned as an easy way to find recently viewed files, apps, and websites without having to manually dig through the content, and the processing happens on your device. You can prevent specific apps and websites from being logged, and Microsoft says “you're always in control.”
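Microsoft hasn't published Recall's internals, but the exclusion behavior it describes amounts to a filter that runs before anything is logged. This is an illustrative sketch of that idea only; the names and logic are hypothetical, not Recall's actual implementation.

```python
# Hypothetical sketch of an exclusion list gating what gets snapshotted,
# in the spirit of Recall's "prevent specific apps and websites from
# being logged" setting. Not Microsoft's actual code.

EXCLUDED = {"bank.example.com", "PasswordManager"}

def should_snapshot(active_app, active_url=None):
    """Return False if the foreground app or site is on the exclusion list."""
    if active_app in EXCLUDED:
        return False
    if active_url is not None and active_url in EXCLUDED:
        return False
    return True

print(should_snapshot("Notepad"))                    # allowed
print(should_snapshot("Browser", "bank.example.com"))  # excluded
```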
OpenAI held its product launch event on May 13, showcasing its latest flagship model, GPT-4o. One of the highlights was ChatGPT's human-like voice, which many described as resembling actress Scarlett Johansson. OpenAI eventually pulled the voice over the similarities, saying the resemblance was unintentional. Johansson said she had voiced her concerns to OpenAI and hired legal counsel, as reported by NPR. OpenAI also showcased how the chatbot can solve math and coding problems and interpret video in addition to voice.
OpenAI announced GPT-4o at an event on May 13th.
OpenAI
What these developments have in common is that they're more proactive and natural than the tech tools we've used for years. Features like Recall and Google's real-time spam detection aim to anticipate our needs, while OpenAI's upgraded ChatGPT and Google's Gemini helper point to a future where AI bots feel more like friendly colleagues or personal shoppers than question-and-answer machines.
It may seem like a stretch to trust AI to return purchases, monitor your phone calls, and help with homework. After all, most people don't even like interacting with automated customer service menus, according to a survey reported by The Conversation; people typically opt to bypass automated agents and speak to a real human.
New chatbots like ChatGPT and Gemini are much more advanced than the automated phone menus we're used to, but these new tools assume that people will trust the technology to get things done for them.
Mohsen Bayati, a professor of operations, information and technology at the Stanford Graduate School of Business, believes people will be open to such tools as long as they have some say, such as approving a deal before it is finalized.
“I'm very concerned about applications that aim for full automation,” he said, adding that concerns about reliability would be greatly alleviated if users were allowed to be the “final arbiter of that loop.”
This is an important point because, as sophisticated as these AI systems are, they are not without flaws: they are prone to hallucinations, which can lead to unreliable answers, and there have been concerns about whether the AI gathers its information accurately and appropriately.
Google's Gemini came under scrutiny earlier this year for generating historically inaccurate images, and more recently, the company's new AI-generated summaries that appear above search results, known as “AI Overviews,” were criticized for surfacing incorrect answers, as pointed out by social media users. In one particularly high-profile example, Google's AI Overview suggested putting glue on pizza to stop the cheese from sliding off.
The Google AI Overview feature provides an AI-created snapshot that aims to answer your questions quickly and conversationally, but is not necessarily accurate.
Screenshot: Lisa Eadicicco/CNET
In a statement provided to CNET, a Google spokesperson said the examples the company saw were “typically highly unusual queries” and “not representative of most people's experiences,” adding that “the vast majority of our AI summaries provide high-quality information.”
“We conducted extensive testing before launching this new experience, and we'll continue to use these specific examples to refine the system as we go along,” the statement said.
Several newspapers and digital news outlets, including The New York Times, have filed lawsuits against OpenAI and its partner Microsoft for using copyrighted material for training purposes.
As reported by Wired and other outlets, OpenAI recently disbanded its Superalignment team, instead integrating its work into other divisions within the company. The Superalignment team, which OpenAI announced last year, was created to ensure that super-advanced AI systems were developed safely.
In response to these changes, OpenAI CEO Sam Altman and President Greg Brockman posted a memo on X outlining the company's commitment to safety and its overall strategy. “We are focused on both delivering significant benefits and mitigating serious risks. We take our role here very seriously and will carefully consider any feedback on our actions,” the memo reads.
Google and Microsoft have also been vocal about their commitment to safety and building responsible AI products. Google highlighted its AI principles at its I/O conference, which include testing for safety, accountability to people, avoiding creating or reinforcing unfair bias, and societal good. Microsoft's Responsible AI page lists similar values, including fairness, inclusion, transparency, trust, and safety.
But often, larger societal issues arising from technological advances only become apparent later, such as the impact of social media on the mental health of young people in particular.
“Until something happens that brings public attention to the type of data we're sharing and how companies are using it, people aren't going to pay attention,” said Hanan Hibshi, an assistant professor at Carnegie Mellon University's Information Networking Institute.
At its Build conference, Microsoft made a big push towards integrating more AI into PCs.
Microsoft
Getting this right is crucial for technology companies, given that AI is being touted as a major shift in personal computing. For the past decade, the tech industry has been searching for the next iPhone moment: the big breakthrough that will change our relationship with technology the way the smartphone did. We've seen a variety of ideas for what that might look like, from virtual reality headsets to smart speakers to smartwatches.
But ChatGPT's overwhelming popularity after its release in late 2022 was a groundbreaking moment. New technology often takes a while to become widespread, as smartwatches did, but it didn't take long for generative AI to infiltrate everything from smartphones to PCs.
Whether we are ready to trust AI may still be up for debate. But ultimately, as AI becomes more critical to everyday work, those concerns may be replaced by benefits. Robert Seamans, a professor at New York University's Stern School of Business, likens this to entering credit card information online. About 25 years ago, people may have been hesitant to do so, but today many people accept it as a given and are willing to accept some risk.
“You might imagine that the technology might not work or that people might not be able to interact with it well,” he says. “But as more people start using it, [and] I think as soon as we start using it ourselves, those fears will disappear.”
Editor's note: CNET has used an AI engine to create dozens of articles and labeled them accordingly. The note you're reading is attached to articles that substantively discuss the topic of AI but are created entirely by our expert editors and writers. For more information, see our AI policy.