Cellphones Privacy

Cell Phones That Learn the Sounds of Your Life

An anonymous reader writes "Researchers at Dartmouth College have developed new software that uses the microphone on the iPhone to track and interpret a user's everyday activities using sound. The software, called SoundSense, picks up sounds and tries to classify them into certain categories. SoundSense can recognize completely unfamiliar sounds and runs entirely on the phone. It automatically classifies sounds as 'voice,' 'music,' or 'ambient noise.' If a sound is repeated often enough or for long enough, SoundSense gives it a high 'sound rank' and asks the user to confirm that it is a significant sound and offers the option to label the sound. In testing, the SoundSense software was able to correctly determine when the user was in a particular coffee shop, walking outside, brushing her teeth, cycling, and driving in the car. It also picked up the noise of an ATM and a fan in a particular room. The results [PDF] of the experiments were recently presented at the MobiSys 2009 conference."
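As a rough illustration of the coarse first-stage classification described above ('voice,' 'music,' 'ambient noise'), here is a toy sketch built on two cheap frame features. The features and thresholds are invented for illustration; they are not the actual SoundSense algorithm, which uses a trained classifier over many more features.

```python
import numpy as np

def classify_frame(samples, rate=8000):
    """Toy coarse sound classifier in the spirit of SoundSense's first stage.

    Uses two cheap features -- spectral flatness and zero-crossing rate --
    with made-up thresholds; the real system is far more elaborate.
    """
    samples = np.asarray(samples, dtype=float)
    # Spectral flatness: close to 1 for broadband noise, near 0 for tonal sound.
    spectrum = np.abs(np.fft.rfft(samples)) + 1e-12
    flatness = np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)
    # Zero-crossing rate: fraction of adjacent sample pairs with a sign change.
    zcr = np.mean(np.abs(np.diff(np.sign(samples)))) / 2
    if flatness > 0.5:
        return "ambient noise"
    if zcr > 0.1:
        return "voice"
    return "music"
```

With these thresholds a pure low-frequency tone classifies as "music" and white noise as "ambient noise"; distinguishing real speech would of course take more than a zero-crossing cutoff.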
This discussion has been archived. No new comments can be posted.

  • by al0ha ( 1262684 ) on Friday July 10, 2009 @04:27PM (#28654541) Journal
    One interesting use could be to anonymously feed thousands of opt-in users' current sound input into a public algorithmic service that provides data facilitating the creation of truly random numbers. Similar to the use of atmospheric noise by sites like Random.org; but this could be better because the sound input devices are decentralized and always moving.

    There are obvious problems to overcome with attempts to game this sort of system, but I think it's an interesting idea.
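A minimal sketch of that pooling idea (all names here are hypothetical, not any real service's API): contributions from many untrusted phones are mixed into a single hash state, so no individual gamed source controls the output.

```python
import hashlib

class EntropyPool:
    """Toy sketch of pooling untrusted audio-noise contributions.

    Each contribution is folded into a SHA-256 hash chain, so a single
    attacker-controlled source cannot dictate the pool's output.
    """
    def __init__(self):
        self._state = hashlib.sha256()

    def mix(self, contributor_id: str, audio_bytes: bytes) -> None:
        # Tag each contribution with its source so identical payloads
        # from different phones still perturb the state differently.
        self._state.update(contributor_id.encode())
        self._state.update(audio_bytes)

    def random_bytes(self) -> bytes:
        digest = self._state.digest()
        # Fold the output back into the state so successive reads differ.
        self._state.update(digest)
        return digest
```

This only sketches the mixing; as the reply below argues, the hard part is everything that comes after gathering the raw noise.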
  • Re:Privacy (Score:3, Interesting)

    by timeOday ( 582209 ) on Friday July 10, 2009 @04:38PM (#28654679)
    But in contrast to previous technologies, at least this runs standalone on the device, rather than as a web service.

    Also, it doesn't record everything verbatim, but rather just tries to find characteristics of different environments, and classify them. (I don't have direct knowledge of this, but it would be very resource-intensive and pointless to record all the ambient noise used to recognize you're in the office, for example).

    I think it is a sensible idea. Obviously humans use their senses to be aware of where they are and what they should be doing, and AI will be no different; sound will certainly be part of that. However, my concern about the value of this technology is that smartphones also have GPS, which greatly reduces the need to infer general context from sound signatures alone. (Of course specific information such as speech still has to be recognized from sound, but from what I can tell that's not what this is.)

  • by jammindice ( 786569 ) on Friday July 10, 2009 @05:02PM (#28654925) Homepage
    Actually, this sounds like the echolocation Batman used at the end of The Dark Knight, except this is self-contained on the phone instead of using the mics from all the phones and processing the audio centrally.
  • by evanbd ( 210358 ) on Friday July 10, 2009 @05:13PM (#28655011)

    Or you could just use the quantum resistor noise from the input impedance of your sound card [digital-scurf.org]. Seriously, people seem to think getting good quality quantum entropy is hard. It isn't. The hard part is turning that entropy into bits: ensuring you have enough entropy to be generating as many bits as you are, and making sure that those bits are whitened and debiased properly, and that the entropy is properly distributed among them so that the non-random stuff in your input stream doesn't leak through. Not to mention more mundane things like ensuring that if your sound card or geiger counter or whatever dies and stops producing entropy you notice before you output a whole lot of "random" bits that look somewhat random but don't have any real entropy in them.

    Can we stop talking about how the random bits come from background sound, lava lamps, radioactive decay, or wherever, and instead worry about whether they've been handled properly after they were gathered? After all, that's the hard part.
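One classic textbook instance of the debiasing step described above is the von Neumann extractor, which turns a stream of independent but biased bits into unbiased ones, at the cost of discarding most of the input. A minimal version:

```python
def von_neumann_debias(bits):
    """Von Neumann extractor: keep only the 01/10 pairs.

    Given independent bits with a fixed bias, each emitted bit is
    unbiased because P(01) == P(10). A minimal illustration of the
    'whitening' step, not a complete entropy-conditioning pipeline.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # pair 01 -> 0, pair 10 -> 1
    return out
```

Note that it assumes the input bits are independent; correlated samples (like raw audio) would still leak structure through it, which is exactly the point being made above.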

  • Re:Oh great.... (Score:3, Interesting)

    by orkybash ( 1013349 ) <`tim.bocek' `at' `gmail.com'> on Friday July 10, 2009 @05:19PM (#28655071)
    At least until the hard panning starts driving you slowly insane... some people can wear one ear all day, while I can't last ten minutes like that.
  • by merreborn ( 853723 ) on Friday July 10, 2009 @06:34PM (#28655675) Journal

    Arbitron is already using something somewhat like this.

    They have their "test families", or whatever they're called, carry small devices they call "portable people meters [wikipedia.org]". Television programming includes sonic markers outside of the audible range, which these devices pick up via a small mic.

    One would think that being able to identify television/radio programming without pre-inserting said inaudible watermarks could simplify their process.
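A toy sketch of detecting such an inaudible marker tone (the frequency and threshold here are invented for illustration; the actual PPM encoding is proprietary):

```python
import numpy as np

def has_marker_tone(samples, rate, marker_hz=19500, threshold=10.0):
    """Toy detector for a near-ultrasonic marker tone.

    marker_hz and threshold are illustrative guesses, not Arbitron's
    actual watermarking scheme.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    bin_idx = int(np.argmin(np.abs(freqs - marker_hz)))
    # Flag the marker if its bin stands far above the median background level.
    return bool(spectrum[bin_idx] > threshold * np.median(spectrum))
```

Recognizing arbitrary programming without a pre-inserted watermark, as the comment suggests, is a much harder fingerprinting problem than spotting a single known tone.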
