Google Assistant improves in context and pronunciations with your help
Google is now updating its AI assistant to better understand the context of queries and the pronunciation of names. That's according to reports following the update, which were shared alongside some helpful videos highlighting the changes.
Beyond the context changes, the biggest addition – at least for users who use Google Assistant to make calls – is the ability to teach the AI pronunciations. Specifically, these are pronunciations for contacts stored in association with a Google account. The setting can be found by opening the Google app – or another Google app that includes account settings in its settings menu – and then tapping the "More" option.
From there, users should go to "Settings", then "Google Assistant", then "Your Contacts".
There, Google offers the option to select the "default" pronunciation or to dictate and record one. The system will then remember that pronunciation across Google Assistant iterations, such as Nest smart speakers, going forward.
Beyond pronunciations, what’s new with Google Assistant?
The other changes included in this update will probably be less obvious to users, but they should improve the overall experience nonetheless.
For starters, Google Assistant is better at differentiating what people are talking about when they make requests. In fact, Google says it can now "respond almost 100% accurately" to certain requests – for example, those associated with timers and alarms. This is especially true when multiple alarms or timers are set: the system can now more easily discern between them, effectively giving more precise answers and making more precise modifications where necessary.
Language processing in general is also improving, both in contextual understanding and in handling follow-up questions. Google Assistant can now also take contextual hints from what is displayed on users' screens, on both smart displays and smartphones. All of this is aimed at making Google Assistant more conversational than ever.
This is happening now, if you are in the right region
Of course, like most Google Assistant features, these changes are rolling out in the United States first, starting with the improvements to contextual awareness. The changes enabling pronunciation dictation will follow over the next few days, though it may take some time for every user to see them.
Notably, the update does not appear to require a specific app version, which implies it is happening server-side; users shouldn't have to do anything in particular to take advantage of it.