One AI to rule them all or a device with multiple personalities?
- A single artificial intelligence (AI) assistant on a device can simplify user interactions, but it risks losing the customization and precision that specialized variants can offer.
- The final verdict remains to be seen, as users may prefer multiple AI tools for different needs.
“No, I did not say Deftones. I said Dödsrit,” I snap at the overly friendly AI system in my car, which has yet again misinterpreted my request for music.
My family and I love listening to music in the car, though this enjoyment often sparks annoyance, usually because our music tastes clash. To avoid debates on what good music is, we’ve created a simple system where we take turns playing one song each. This workaround has solved our human disagreements. Now, however, a new frustration has emerged: The AI in the car often fails to recognize the artist we want to listen to.
I suspect it’s because my family delves into genres such as black metal, niche punk rock, and obscure post-rock, and the AI simply doesn’t recognize the band names. Instead, it cheerfully responds to a request with “I will ask Spotify to play [some completely different artist],” which never fails to annoy us.
A new paradigm in AI devices
As we shift toward a new paradigm of devices, I can’t help but wonder if similar conflicts with AI will become more common. Recently, Alibaba launched new AI glasses, while Ray-Ban Meta and similar products are already available on the market. AI glasses are a branch of smart glasses, but unlike augmented reality (AR) glasses, they don’t provide any overlays of digital objects in your field of vision. Instead, equipped with cameras and microphones, the glasses ‘see what you see’ and ‘hear what you hear,’ and in turn provide spoken feedback through built-in speakers.
You can use these glasses for many of the same things you use your smartphone for: to make calls, listen to music, and take photos. The novelty lies in bringing AI into your sensed surroundings. For example, you can receive instant analysis of what you are looking at, whether it is a tourist attraction or a traffic sign, or ask the glasses to translate what the French-speaking person in front of you is saying.
It is neat. And as functionality evolves, I believe these glasses will become a natural part of our personal device ecosystem. Imagine stepping off a train and asking your personal AI for directions, without pulling out your phone. It could respond, “Do you see the fork in the road ahead? Keep left, and you’ll reach the café in about 100 meters. Yes, you’re on the right path.” This type of intuitive interaction, with a friendly AI on your shoulder, could make countless tasks simpler as AI moves out of your screen and into your reality.
Rethinking everyday AI interactions
But there are of course conundrums, and here is one I am currently pondering: when we move away from the phone, how will our interactions with devices, services, and apps shift? Will the AI interface on new AI glasses mimic the smartphone logic, with individual apps for commuting, banking, music, and more? Or will it resemble the car’s AI setup, where a single assistant handles all external services, much like the AI in our car responding, “I will ask Spotify to play artist X”? In that model, if you want to interact with an app directly, you need to use your phone or another device.
Currently, app developers for Ray-Ban Meta glasses get access to both the glasses’ camera and microphone, but not to Meta AI. So, for Spotify, developers need to build their own AI solutions. However, it is not yet clear whether there will be an option to converse with such a domain-specific AI through the glasses, or whether Meta AI will handle all requests.
Balancing trust and specialization in AI devices
What are the benefits and drawbacks of different modes of interaction?
On the one hand, the single-AI approach could simplify interaction: you have one AI that supports all your tasks. But, as in the car example above, it risks losing the customization and precision that a specialized app or domain-specific AI can provide, which may lead to user frustration.
On the other hand, for specialized needs, dedicated AIs trained in specific domains may provide better experiences. Yet interacting with multiple AI voices could make the glasses feel busy and overwhelming. Additionally, with multiple AIs seeing what you see, you might feel like you are surrounded by a vast audience watching every detail of your life.
From the developer perspective, this is a question of who gets to own the interaction with the users. If AI glasses adopt the car logic, with a single AI for interaction, app and service developers stand to lose part of their interface, and in some ways their relationship with their end users. From the end-user perspective, it may come down to trust, specifically, users’ trust in a general-purpose AI versus domain-specific AIs. My thinking is that if users trust domain-specific AIs for certain tasks more than the general AI, they will find a way to use them.
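To make the two interaction models concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the class names, the keyword-based routing, and the reply formats are illustrative assumptions, not any real API from Meta, Alibaba, Spotify, or anyone else. It shows the “car logic,” where one general AI owns the conversation and merely relays requests to domain services, so the user never talks to the domain AI directly.

```python
# Hypothetical sketch of the single-orchestrator interaction model.
# All names and behaviors here are illustrative assumptions,
# not a real product API.

class DomainAssistant:
    """A domain-specific AI, e.g. one a music service might ship."""
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = keywords  # crude intent matching for the sketch

    def handle(self, request):
        return f"[{self.name}] handling: {request}"

class Orchestrator:
    """The 'car logic': one general AI owns the user interaction and
    forwards requests to registered domain services behind the scenes."""
    def __init__(self):
        self.assistants = []

    def register(self, assistant):
        self.assistants.append(assistant)

    def handle(self, request):
        lowered = request.lower()
        for assistant in self.assistants:
            if any(k in lowered for k in assistant.keywords):
                # The domain AI's reply is relayed by the general AI,
                # so the developer never owns the spoken interaction.
                return (f"[general AI] I will ask {assistant.name}. "
                        + assistant.handle(request))
        return "[general AI] Sorry, I could not route that request."

glasses_ai = Orchestrator()
glasses_ai.register(DomainAssistant("MusicService", ["play", "song"]))
glasses_ai.register(DomainAssistant("NavService", ["directions", "route"]))

print(glasses_ai.handle("Play something by Dödsrit"))
```

In the alternative, app-centric model, the user would address each `DomainAssistant` directly, and the orchestrator layer (and its ownership of the conversation) would disappear; the trade-off discussed above is essentially about which of these two loops the user talks to.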
Hopefully, we'll soon find out. My colleagues at Ericsson Research, Consumer and Industry Lab, are wrapping up a survey on consumer expectations for AI glasses. I was glad to include questions that probe differences in trust between a central, orchestrating AI and smaller, domain-focused AIs.
As we await the results from my colleagues’ study, I’d love to hear your thoughts: would you trust a single orchestrating AI to manage all your tasks, or would you prefer a support team of domain-expert AIs accompanying you during the day?
Stay tuned for the results of the study that is planned for publication in 2026.
Read more
Explore users’ experiences with AI-powered smart glasses
Explore how AI plays a key role in enabling automation, managing complexity and scalability