Android Cellphones Communications Google Software Technology

Google's New Voice Recognition System Works Instantly and Offline (If You Have a Pixel) (techcrunch.com) 41

Google's latest speech recognition system works entirely offline, eliminating the delay many other voice assistants incur before returning your query. "The delay occurs because your voice, or some data derived from it anyway, has to travel from your phone to the servers of whoever operates the service, where it is analyzed and sent back a short time later," reports TechCrunch. "This can take anywhere from a handful of milliseconds to multiple entire seconds (what a nightmare!), or longer if your packets get lost in the ether." The only major downside of Google's new system is its limited availability: as of right now, it's only available to people with a Pixel smartphone. From the report: Why not just do the voice recognition on the device? There's nothing these companies would like more, but turning voice into text on the order of milliseconds takes quite a bit of computing power. It's not just about hearing a sound and writing a word -- understanding what someone is saying word by word involves a whole lot of context about language and intention. Your phone could do it, for sure, but it wouldn't be much faster than sending it off to the cloud, and it would eat up your battery. But steady advancements in the field have made it plausible to do so, and Google's latest product makes it available to anyone with a Pixel.
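For a sense of where that round-trip delay comes from, here is a minimal sketch of the cloud approach described above, in Python. Everything in it is hypothetical -- the endpoint URL, the response field, and the client-side timing are placeholders, not any vendor's real API.

    import time
    import requests  # any HTTP client would do

    SPEECH_API = "https://speech.example.com/v1/recognize"  # hypothetical endpoint

    def cloud_transcribe(audio_bytes: bytes) -> str:
        """Ship recorded audio to a remote recognizer and wait for the text to come back."""
        start = time.monotonic()
        resp = requests.post(
            SPEECH_API,
            data=audio_bytes,
            headers={"Content-Type": "audio/wav"},
            timeout=5,  # if the packets get lost in the ether, the user just waits
        )
        resp.raise_for_status()
        text = resp.json()["transcript"]  # hypothetical response field
        print(f"round trip took {time.monotonic() - start:.3f}s")
        return text

Every call pays the network round trip plus server-side processing before the user sees a single word; the on-device approach removes that wait entirely.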

Google's work on the topic, documented in a paper here, built on previous advances to create a model small and efficient enough to fit on a phone (it's 80 megabytes, if you're curious), but capable of hearing and transcribing speech as you say it. No need to wait until you've finished a sentence to think whether you meant "their" or "there" -- it figures it out on the fly. So what's the catch? Well, it only works in Gboard, Google's keyboard app, and it only works on Pixels, and it only works in American English. So in a way this is just kind of a stress test for the real thing.
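As a rough illustration of what transcribing speech "as you say it" means in code, here is a hedged sketch of a streaming recognition loop. The microphone and recognizer objects and their methods are invented stand-ins, not Google's actual API; the point is only that audio goes in as small chunks and the partial transcript can be revised as more context arrives.

    CHUNK_MS = 100  # feed the recognizer ten times per second

    def stream_transcribe(microphone, recognizer):
        """Print a running transcript that may revise earlier words as context arrives."""
        transcript = ""
        for chunk in microphone.frames(duration_ms=CHUNK_MS):   # hypothetical audio source
            hypothesis = recognizer.accept_audio(chunk)          # hypothetical incremental decoder
            if hypothesis != transcript:
                transcript = hypothesis      # e.g. "there" quietly corrected to "their"
                print("\r" + transcript, end="", flush=True)
        return recognizer.finalize()         # flush any pending partial words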
"Given the trends in the industry, with the convergence of specialized hardware and algorithmic improvements, we are hopeful that the techniques presented here can soon be adopted in more languages and across broader domains of application," writes Google in their blog post.
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by Anonymous Coward

    is simply because pixel is google, and the spy shit will still end up getting transmitted later when a connection is available. it has nothing to do with 'computing power' of the device. early dragon naturallyspeaking worked on lowly 486dx and pentiums running windows 95 and nt 4. all it takes is a little 'training' of the user's voice, and a trained dragon 1.0 did just as well back then, as current shit does today. current iterations of 'voice assistants' still do the 'training' for voices.. just 'in the c

    • by Solandri ( 704621 ) on Tuesday March 12, 2019 @11:23PM (#58265634)
      Most of the software functionality of the Pixel 3 has been hacked and extracted [xda-developers.com]. You can install it on your Android device running Nougat or later if you're rooted with Magisk. If this offline voice recognition is done in software instead of dedicated hardware (like the original Moto X), expect it to be made available for other rooted devices as well.
    • by ShanghaiBill ( 739463 ) on Wednesday March 13, 2019 @12:18AM (#58265762)

      early dragon naturallyspeaking worked on lowly 486dx and pentiums running windows 95 and nt 4

      That was just speech-to-text. Google is going much further than that, with semantic understanding of what you are saying. That requires way more compute power. On a cell phone, this has only been viable with sub-second response times since mobile GPUs got decent support for CUDA and OpenCL.

      • Comment removed based on user account deletion
        • by AmiMoJo ( 196126 )

          Their stated aim is the computer on Star Trek TNG, i.e. you can have a natural language conversation with it. They are actually getting there too - it understands context and follow-up questions in many cases.

          The computer on Star Trek is actually kinda great. You can ask it for a cup of tea and it will ask what kind, what temperature etc. But you can also use shortcuts, like "tea, Earl Gray, hot". It teaches you how to use it just by talking to it, because next time you remember the follow up questions it h

      • You mean like a text adventure that ran on an 8088?
    • by AmiMoJo ( 196126 )

      I was wondering how the conspiracy would evolve in light of this news.

    • by epine ( 68316 ) on Wednesday March 13, 2019 @08:37AM (#58266866)

      and a trained dragon 1.0 did just as well back then, as current shit does today

      You're completely nuts.

      Dragon did okay back in the day if you bought exactly the right condenser microphone, positioned it exactly right on your headband (about 2" away from your lips, just off to the side of your mouth), trained it properly in exactly that configuration, and used it in a quiet environment with no dogs barking, slamming doors down the hall, traffic noises through the open window, etc. It was also good to avoid getting allergies, coming down with a cold, or starting/stopping smoking, unless you wanted to train your model again with your "new" voice.

      It's the same deal with squash rackets. The original graphite rackets from the early 1980s had a powerful sweet spot, but it wasn't very big. They also shattered every tenth time you scuffed the wall hard by accident. Then they started to monkey with the head shape, and the sweet spot expanded to the size of a cantaloupe. The graphite eventually became less brittle, too.

      But that old sweet spot the size of a mandarin orange sure was just as good as the modern shit today.

    • The reason it's just the Pixels is that the Pixel phones contain a special piece of Google-produced silicon (an ASIC) that can dramatically accelerate speech-to-text. They are some of the only phones with this extra AI chip; it was also one of the huge selling points of the Pixel, since they use the same chip to do their photo magic. (A rough sketch of handing a model off to such a chip follows below.)
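      If that acceleration is exposed through a standard runtime, delegating a small model to such a chip looks roughly like the TensorFlow Lite sketch below. The model file and delegate library names are placeholders, and whether Google's recognizer is actually packaged this way is an assumption, not something the article confirms.

          import numpy as np
          from tflite_runtime.interpreter import Interpreter, load_delegate

          # Placeholder paths -- the real model and accelerator library are not public.
          MODEL_PATH = "speech_model.tflite"
          DELEGATE_LIB = "libaccelerator_delegate.so"

          try:
              delegates = [load_delegate(DELEGATE_LIB)]  # hand the graph to the AI chip if present
          except (OSError, ValueError):
              delegates = []                             # otherwise fall back to the CPU

          interpreter = Interpreter(model_path=MODEL_PATH, experimental_delegates=delegates)
          interpreter.allocate_tensors()

          inp = interpreter.get_input_details()[0]
          out = interpreter.get_output_details()[0]

          features = np.zeros(inp["shape"], dtype=np.float32)  # stand-in for real audio features
          interpreter.set_tensor(inp["index"], features)
          interpreter.invoke()
          logits = interpreter.get_tensor(out["index"])        # raw model output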

  • Voice recognition for more ads.
  • by darkain ( 749283 ) on Tuesday March 12, 2019 @09:41PM (#58265442) Homepage

    I love how their reasoning is battery life... The mic is already on 24/7 to listen for the "OK Google" command, so that doesn't change. And the actual audio is only about 1-5 seconds long and takes about the same amount of time to process. Having the CPU at max for such a short period does absolutely nothing to significantly drain the battery. Do they think that having the radio turned on to transmit/receive data from the cloud magically uses less power?

    • I'm assuming that until recently the processors weren't able to handle the processing quickly enough. I wouldn't be surprised to learn if there's some dedicated hardware that's been added to the SoCs in the latest phones that enable doing this on the device itself. Apple put in some dedicated neural network hardware in their latest SoC to help with photography and other workloads. I believe that the Qualcomm SoC in the Pixel has similar hardware that might be useful for doing speech to text.
      • by SuperKendall ( 25149 ) on Wednesday March 13, 2019 @09:59AM (#58267308)

        I wouldn't be surprised to learn if there's some dedicated hardware that's been added to the SoCs in the latest phones that enable doing this on the device itself.

        Yes, just like Apple has the Neural Engine, Google has the Pixel Visual Core [androidauthority.com]

        The name is misleading because from what I can tell (and what the article says) it is like the Apple chip, and can help with arbitrary neural network processing.

        What I'm not sure of is the speed of the iPhone chip compared to the Pixel one; the iPhone chip took quite a leap in speed this year...

    • The "OK Google" listening never leaves the phone; it's processed locally to save bandwidth and battery, and nothing is sent off the phone until the hotword is recognized. (A toy version of that split is sketched below.)
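      Purely as a sketch: a tiny always-on loop scores each audio frame locally, and only after the hotword fires does anything heavier (or any network traffic) happen. The microphone, detector, and query handler here are invented stand-ins, not Google's implementation.

          def hotword_loop(microphone, detect_hotword, handle_query):
              """Keep the expensive path (and the radio) idle until the wake word is heard locally."""
              for frame in microphone.frames(duration_ms=20):   # hypothetical audio source
                  if detect_hotword(frame):                     # tiny local model, no network involved
                      audio = microphone.record(max_seconds=5)  # now capture the actual query
                      handle_query(audio)                       # on-device or cloud recognition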

  • by Etcetera ( 14711 ) on Tuesday March 12, 2019 @11:55PM (#58265698) Homepage

    Whoever wrote this story speaks with the voice of someone who couldn't possibly understand why *anyone* would prioritize data-stays-on-device, non-cloud, privacy-focused living.

    Is this what the new generation of tech journalists is like? With no conception of "outdated" notions like data locality and operational independence? Someone who couldn't imagine why a person would download audio locally instead of streaming it from their cloud service?

    • I guess they're young people for whom being spied on 24/7 by Google and Facebook is a normal thing. They may never have known anything else, so they're not used to concepts like privacy and keeping data locally on your devices, or they may even think those things are outdated concepts only missed by "old" people.
      • by Anonymous Coward

        In Korea, only old people have digital privacy.

  • by Anonymous Coward

    ... snarfing up enough voice samples to last them a few decades ...

  • 15 to 20 years ago there was voice recognition software that ran on PCs without using the cloud. The average smartphone has more computing power than a 20-year-old PC and should be able to do it easily.

  • Yea, lots of power (Score:4, Insightful)

    by Khyber ( 864651 ) <techkitsune@gmail.com> on Wednesday March 13, 2019 @08:50AM (#58266966) Homepage Journal

    "but turning voice into text on the order of milliseconds takes quite a bit of computing power."

    Uhh, Dragon Naturally Speaking worked on fucking Pentium II processors. It only takes a lot of computing power today because nobody knows how to fucking code.

    • by Anonymous Coward

      Dragon Naturally Speaking never worked. It made letters appear in your word processor, sure, but it never worked.

    • A Pentium II alone drew more power than the charger for a Pixel can even theoretically supply, and you'd still need more power for the rest of the computer around that Pentium II.

  • "and it only works in American English"

    Correction. It only works in white, upper middle-class Californian English.

    It will not work for black men from Atlanta, Bostonian housewives, anyone from Appalachia, Valley Girls, anyone from "New Yawk", anyone that knows the words to Blake Shelton's "Boys Round Here", etc.

    Having a deep voice, and having been born and raised in central North Carolina, my experience with every voice recognition system I've encountered has been "less than pleasurable". Getting it to work generally inv

  • I've got like, 480,000 pixels!
  • This can take anywhere from a handful of milliseconds to multiple entire seconds (what a nightmare!), or longer if your packets get lost in the ether.

    Dunno about a nightmare, but it can suck.

    User: Wiretap. How late is the hardware store open?
    ...
    User: Hello? Wiretap?
    User: HELLO COMPUT...
    Wiretap: The hardware store is open until 6pm.
    User: ... ok.
    Wiretap: I'm sorry. I don't understand compute.
    ...
