"The robot was a game changer here," said Capt. Jack Ewell, a tactical expert with the Sheriff's Department -- the largest sheriff's department in the nation. "We didn't have to risk a deputy's life to disarm a very violent man."
Only later, when the robot returned to pull down a wire barricade, did the 51-year-old suspect realize his gun was gone.
Science chatted with theoretical physicist Claudius Gros about his proposed Genesis Project, which would send artificially intelligent probes to lifeless worlds to seed them with microbes. Over millions of years, they might evolve into multicellular organisms, and, perhaps eventually, plants and animals. In the interview, Gros talks artificial intelligence, searching for habitable planets, and what kind of organisms he'd like to see evolve.
"The robots will have to decide if a certain planet should receive microbes and the chance to evolve life," the physicist explains -- adding that it's very important to avoid introducing new microbes on planets where life already exists.
Narrativa is part of the booming automatic content generation industry, which uses algorithms to convert data sets into narratives. Related: How a robot wrote for Engadget.
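As a rough illustration of how data-to-text systems turn structured records into prose, here is a minimal template-based sketch. This is an invented toy, not Narrativa's actual pipeline; real systems add content planning, aggregation, and stylistic variation on top of this idea.

```python
# Minimal sketch of template-based data-to-text generation.
# Illustrative only -- not Narrativa's actual system.

def generate_recap(game: dict) -> str:
    """Turn a structured match result into a one-sentence narrative."""
    margin = abs(game["home_score"] - game["away_score"])
    winner, loser = (
        (game["home"], game["away"])
        if game["home_score"] > game["away_score"]
        else (game["away"], game["home"])
    )
    # Vary word choice based on numeric thresholds in the data,
    # the way data-to-text systems pick phrasing from the numbers.
    verb = "edged" if margin <= 2 else "beat" if margin <= 5 else "routed"
    high = max(game["home_score"], game["away_score"])
    low = min(game["home_score"], game["away_score"])
    return f"{winner} {verb} {loser} {high}-{low}."

print(generate_recap(
    {"home": "Madrid", "away": "Sevilla", "home_score": 3, "away_score": 1}
))
```

The entire "narrative" here is one templated sentence; commercial systems chain many such decisions (which facts to mention, in what order, with what connectives) over much richer data sets.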
The hope was that the list would allow police to provide social services to people in danger, while also preventing likely shooters from picking up a gun. But a new report from the RAND Corporation shows nothing of the sort has happened. Instead, it indicates that the list is, at best, not even as effective as a most wanted list. At worst, it unnecessarily targets people for police attention, creating a new form of profiling.
The police argue they've updated the algorithm and improved their techniques for using it. But the article notes that the researchers began following the "heat list" when it launched in 2013, and "found that the program has saved no lives at all."
For example, the NSF created a new tool (which they've recently patented) called NetEgg, which generates code for controlling software-defined networks, as well as Automata Tutor and AutoProf, which provide automated feedback to computer science students.
The Electronic Frontier Foundation wrote, "We think that this initiative by DARPA is very cool, very innovative, and could have been a little dangerous." Sharing their blog post about automated security research, the EFF's staff technologist Peter Eckersley writes that EFF is asking: does research like that need a safety protocol?
The machine-learning algorithm was "trained on a million Yahoo article comments," according to the article, and Slashdot reader AmiMoJo writes, "The system could help AIs avoid being tricked into making abusive comments themselves, as Microsoft's Tay Twitter bot did earlier this year."
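To give a flavor of how a comment-abuse classifier is trained on labeled examples, here is a bag-of-words Naive Bayes sketch. This is an illustrative toy, not Yahoo's actual algorithm, and the tiny training set below is invented for demonstration.

```python
# Toy bag-of-words Naive Bayes abuse classifier.
# Illustrative sketch only -- not Yahoo's actual model.
from collections import Counter
import math

def train(comments):
    """comments: list of (text, label) pairs, label 'abusive' or 'ok'."""
    counts = {"abusive": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in comments:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(counts, totals, text):
    vocab = set(counts["abusive"]) | set(counts["ok"])
    scores = {}
    for label in ("abusive", "ok"):
        # Log prior plus Laplace-smoothed log likelihoods per word.
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

model = train([
    ("you are an idiot", "abusive"),
    ("shut up you moron", "abusive"),
    ("great point well argued", "ok"),
    ("thanks for the helpful link", "ok"),
])
print(classify(*model, "you idiot"))
```

A production system trained on a million comments would use far richer features and a stronger model, but the principle is the same: learn word statistics from labeled examples, then score new comments against them.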