Rather than encoding hand-written rules that describe when a person is making a specific expression, we instead focus our attention on building machine learning algorithms that can be trained to recognize expressions. Through our partnerships across the globe, we have amassed an enormous emotional database from people driving cars, watching media content, and more. A portion of the data is then passed on to our labeling team, who are certified in the Facial Action Coding System...we have gathered 5,313,751 face videos, for a total of 38,944 hours of data, representing nearly two billion facial frames analyzed.
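To make the contrast concrete, here is a minimal sketch of the learned-from-labels approach: instead of if-then rules, a classifier is fit to human-coded examples. Everything here is a hypothetical illustration, not the company's actual model — the action-unit features, intensity values, and the nearest-centroid method are all assumptions chosen for brevity.

```python
# Minimal sketch (hypothetical): classify expressions from FACS
# action-unit (AU) intensities using a nearest-centroid classifier
# trained on human-labeled frames. Not the actual production model.
from collections import defaultdict
import math

def train_centroids(samples):
    """samples: list of (au_vector, label); returns label -> mean vector."""
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    for vec, label in samples:
        if sums[label] is None:
            sums[label] = [0.0] * len(vec)
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def predict(centroids, vec):
    """Return the label whose centroid is closest to vec (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))

# Hypothetical labeled frames: [AU6 cheek raiser, AU12 lip corner puller,
# AU4 brow lowerer] intensities on a 0-1 scale, coded by human labelers.
frames = [
    ([0.8, 0.9, 0.0], "joy"),
    ([0.7, 0.8, 0.1], "joy"),
    ([0.1, 0.0, 0.9], "anger"),
    ([0.0, 0.1, 0.8], "anger"),
]
centroids = train_centroids(frames)
print(predict(centroids, [0.9, 0.85, 0.05]))  # → joy
```

The point of the sketch is the workflow, not the algorithm: labeled examples go in, a decision function comes out, and adding more coded data refines the boundaries without anyone rewriting rules.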
They got their start testing advertisements, and are now working with a third of all Fortune 500 companies. ("We've seen that pet care and baby ads in the U.S. elicit more enjoyment than cereal ads -- which see the most enjoyment in Canada.") One company even combined their technology with Google Glass to help autistic children learn to recognize emotional cues.