I’ve been using Google Nest speakers since they were still called Google Home, back when the company was handing them out like candy. Over the years, I’ve mostly stuck to the basics: setting timers, controlling lights, and getting quick answers to random questions. But even those simple tasks are not without frustration. Part of the challenge is how particular these devices are about how you speak to them, but I’ve learned a few tricks that make it easier.
Smart speakers in general are in a bit of an awkward phase right now. Most are still stuck with software that can only understand a handful of very specific phrases, and can get stuck if you don’t word a question or request just so. Meanwhile, LLMs like ChatGPT, Gemini, and Claude are somehow able to understand complex instructions, even if they sometimes struggle to follow them.
It may be a while before smart speakers are dragged into our LLM-enabled future, but there are a few things you can do to make them work better in the meantime. In this article I’m focusing on Google Home and its Nest speakers because that’s the ecosystem I personally use, but many of these tips will apply to other smart speaker systems as well. For example, while Google has Voice Match, Amazon’s Echo has Voice ID; both of these tools identify who’s speaking to them. Even if you’re in a different smart speaker ecosystem, it’s worth poking around to see what your options are.
Try out the Gemini preview (if you can)
Arguably, the most useful job for an LLM like Gemini is interpreting voice commands, but for now Gemini is still locked behind a Public Preview. Though “public” might be a bit of a misnomer. While you can opt in to trying out Gemini on your smart speakers, there are several conditions. You must:
- Be a Nest Aware subscriber. Ostensibly, the Nest Aware subscription is mainly for video features on your Nest cameras, but Google has a tendency to lump other smart home features into it, and the Gemini preview is one of them. A subscription costs $8/month or $80/year, but we probably wouldn’t recommend getting it just to try out Gemini early.
- Enroll in the Google Home app public preview. There’s a separate public preview for new Google Home features that you’ll have to opt in to before you can even get to the Gemini preview. You can find full instructions here based on your devices.
- Opt in to experimental AI features. Once you’re in the Google Home public preview, you’ll get a message in the Google Home app inviting you to enable experimental AI features. Make sure this is toggled on as well, or you’ll miss the Gemini option.
- Then…wait. Even after all of this, Google doesn’t guarantee you’ll immediately gain access to the Gemini preview, which is annoying. But if you want a shot at trying it out, you’ll need to jump through the above hoops.
For now, this isn’t going to be practical for most people, but if you’re already a Nest Aware subscriber, it might be worth giving it a try. Google Nest devices currently default to the Google Assistant, which does little more than scan your requests for simple keywords. If you want to talk to your speaker in real, human sentences, it’s inevitably going to take Gemini. It’s just a question of when you can get it.
Create your own commands with Automations
Until Gemini is broadly available as a voice assistant, we’re stuck trying to fit our requests into the narrow box of smart speakers. Fortunately, Google Home has a really handy tool to make them less cumbersome: Automations. In a dedicated tab in the Google Home app, you can create automations (called Routines) that trigger multiple, complex actions from simple phrases.
One of my favorites is a routine that activates when I say, “Hey, Google: movie sign!” This little script turns off the overhead lights in my living room, pauses any smart speakers that happen to be playing music, and turns on the TV backlight. Normally, each of these would have to be an individual command, and while Google Assistant can sometimes handle multiple instructions at once, it often fails. This way rarely does.
Routines have some built-in functions such as adjusting your smart home devices, playing certain media, sending texts, or even getting the weather. If there’s not already a preset action in the Routines menu, you can also add custom instructions. These will run as though you told Google Assistant to do them yourself. It’s handy if you need to run a command with a particular phrasing, but one that Google often misunderstands when spoken aloud.
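If you like to tinker, Google Home also has a script editor that expresses these same automations as YAML, which can make it easier to see exactly what a routine will do. As a rough illustration of a “movie sign”-style routine, here’s a sketch in that format. Fair warning: the field names and device names below are my own assumptions about the script editor’s schema, so check the editor’s built-in examples and documentation before copying anything.

```yaml
# Sketch of a "movie sign"-style routine for Google Home's YAML script
# editor. Field names and device names are illustrative assumptions;
# verify them against the script editor's own examples.
metadata:
  name: Movie sign
  description: Set up the living room for a movie

automations:
  starters:
    # Trigger when someone says "Hey Google, movie sign"
    - type: assistant.event.OkGoogle
      eventData: query
      is: "movie sign"
  actions:
    # Turn off the overhead lights (names must match your Home setup)
    - type: device.command.OnOff
      on: false
      devices: Overhead Lights - Living Room
    # Turn on the TV backlight
    - type: device.command.OnOff
      on: true
      devices: TV Backlight - Living Room
```

Even if you never write one by hand, reading a routine in this form is a handy way to audit what a phrase will actually trigger.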
Enable Voice and Face Match to get better results
Google advertises Voice Match as a way to get personalized results based on who’s asking a question. For example, if you say “What’s on my calendar?” you’ll get a rundown from your personal Google account, while someone else in your household will get theirs (and guests can’t access anyone’s calendar). That’s all well and good, but personally I find this feature useful for a different reason: it helps Google learn what each person in your house sounds like.
Any household with both masculine and feminine voices is familiar with this particular failure. Someone with a feminine voice says “turn on kitchen…turn on kitchen…turn on kitchen!” Then the masculine voice, from across the room, bellows, “Turn on kitchen.” And that one works.
There are complicated reasons for this, ranging from simple coincidence to how microphones pick up higher and lower frequencies, but Voice Match can sometimes (sort of) help. While it doesn’t magically make the device’s microphone better, or make it easier to distinguish a voice from background noise, it can give Google better information about how to handle each person’s commands.
For example, two people who each have Voice Match set up on the same device can set different default music services. Similarly, recommendations based on previous activity will be tailored to that person’s profile, rather than all activity going through one account.