Further reading: Amazon Admits Its AI Alexa is Creepily Laughing at People.
Rather predictably, the technology has already been used to generate a number of counterfeit celebrity porn videos. But the method could also be used to create a clip of a politician saying or doing something outrageous. DARPA's technologists are especially concerned about a relatively new AI technique that could make AI fakery almost impossible to spot automatically. Using what are known as generative adversarial networks, or GANs, it is possible to generate stunningly realistic artificial imagery.
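The trick behind GANs is adversarial training: a generator network learns to produce samples that a discriminator network cannot tell apart from real data, and each improves by playing against the other. A minimal sketch of that loop in PyTorch (toy one-dimensional "data"; all layer sizes and hyperparameters are illustrative assumptions, not anything from DARPA's program):

```python
# Minimal GAN training loop on toy 1-D data (illustrative sketch only).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data drawn from N(2, 0.5)
    fake = G(torch.randn(64, 16))

    # Discriminator step: label real samples 1, generated samples 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

The same adversarial pressure that makes the generator's output realistic is what worries DARPA: a detector is, in effect, just another discriminator for the generator to beat.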
While most of Xiaoice's interactions have been in text conversations, Microsoft has started allowing the chatbot to call people on their phones. It's not the same as Google Duplex, which uses the Assistant to make calls on your behalf; instead, Xiaoice holds a phone conversation with you directly. "One of the things we started doing earlier this year is having full duplex conversations," explains Nadella. "So now Xiaoice can be conversing with you in WeChat and stop and call you. Then you can just talk to it using voice." (The term "full duplex" here refers to a conversation where both participants can speak at the same time; it's not a reference to Google's product, which was named after the same jargon.)
YouTube Music is free with ads, but ad-free listening will cost $9.99 per month. There is also YouTube Premium, which will cost $11.99 per month and will include both the ad-free music service and the exclusive video content from the now-defunct YouTube Red.
[...] AI is core to Microsoft's strategy, Nadella said: "AI is the run time which is going to shape all of what we do going forward in terms of applications as well as the platform." Microsoft is rethinking its core products by using AI to connect them together, he said, giving an example of a meeting using translation, transcription, Microsoft's HoloLens and other devices to improve decision-making. "The idea that you can now use all of the computing power that is around you -- this notion of the world as a computer -- completely changes how you conduct a meeting and fundamentally what presence means for a meeting," he said.
Beyond general non-discrimination practices, the declaration focuses on the individual right to remedy when algorithmic discrimination does occur. "This may include, for example, creating clear, independent, and visible processes for redress following adverse individual or societal effects," the declaration suggests, "[and making decisions] subject to accessible and effective appeal and judicial review."
John Gruber now asks: "How many real-world businesses has Google Duplex been calling and not identifying itself as an AI, leaving people to think they're actually speaking to another human...? And if 'Victor' is correct that Hong's Gourmet had no advance knowledge of the call, Google may have violated California law by recording the call." Friday he added that "This wouldn't send anyone to prison, but it would be a bit of an embarrassment, and would reinforce the notion that Google has a cavalier stance on privacy (and adhering to privacy laws)."
The Mercury News also reports that legal experts "raised questions about how Google's possible need to record Duplex's phone conversations to improve its artificial intelligence may come in conflict with California's strict two-party consent law, where all parties involved in a private phone conversation need to agree to being recorded."
For another perspective, Gizmodo's senior reviews editor reminds readers that "pretty much all tech demos are fake as hell." Speaking of Google's controversial Duplex demo, she writes that "If it didn't happen, if it is all a lie, well then I'll be totally disappointed. But I can't say I'll be surprised."
The original submission cites Isaac Asimov's Three Laws of Robotics from the 1950 collection I, Robot.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The original submission asks, "If you programmed an AI not to be able to break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions 'So 20th Century' that AI builders won't even consider learning from their work?"
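Structurally, the Laws are a lexicographic priority ordering: any First Law concern outweighs all Second Law concerns, which in turn outweigh all Third. A toy sketch of that ordering (purely illustrative, not a safety proposal; the hard part, actually estimating "harm" from perception, is exactly what no current system can do, as a commenter notes below):

```python
# Toy sketch: Asimov's Laws as a lexicographic preference over actions.
# Every field and action below is a hypothetical illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    human_harm: int    # expected harm to humans (First Law); lower is better
    disobedience: int  # human orders ignored (Second Law)
    self_harm: int     # damage to the robot itself (Third Law)

def choose(actions: list[Action]) -> Action:
    # Lexicographic ordering: any reduction in human harm outweighs all
    # obedience concerns, which in turn outweigh self-preservation.
    return min(actions, key=lambda a: (a.human_harm, a.disobedience, a.self_harm))

options = [
    Action("obey order, endanger bystander", human_harm=1, disobedience=0, self_harm=0),
    Action("refuse order, stay safe",        human_harm=0, disobedience=1, self_harm=0),
    Action("refuse order, sacrifice self",   human_harm=0, disobedience=1, self_harm=1),
]
print(choose(options).name)  # -> "refuse order, stay safe"
```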
Wolfrider (Slashdot reader #856) is an Asimov fan, and writes that "Eventually I came across an article with the critical observation that the '3 Laws' were used by Asimov to drive plot points and were not to be seriously considered as 'basics' for robot behavior. Additionally, Giskard comes up with a '4th Law' on his own and (as he is dying) passes it on to R. Daneel Olivaw."
And Slashdot reader Rick Schumann argues that Asimov's Three Laws of Robotics "would only ever apply to a synthetic mind that can actually think; nothing currently being produced is capable of any such thing, therefore it does not apply..."
But what are your own thoughts? Do you think Asimov's Three Laws of Robotics could ensure safe AI?
Elon Musk tweeted about the accident:
It's super messed up that a Tesla crash resulting in a broken ankle is front page news and the ~40,000 people who died in US auto accidents alone in past year get almost no coverage. What's actually amazing about this accident is that a Model S hit a fire truck at 60mph and the driver only broke an ankle. An impact at that speed usually results in severe injury or death.
The Associated Press defended their news coverage Friday, arguing that the facts show that "not all Tesla crashes end the same way." They also fact-check Elon Musk's claim that "probability of fatality is much lower in a Tesla," reporting that it's impossible to verify since Tesla won't release the number of miles driven by their cars or the number of fatalities. "There have been at least three already this year and a check of 2016 NHTSA fatal crash data -- the most recent year available -- shows five deaths in Tesla vehicles."
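The AP's objection is about denominators: a fatality rate needs both deaths and miles driven, and Tesla publishes neither. A back-of-the-envelope sketch (the US fleet figures are approximate 2016 NHTSA numbers; the Tesla mileage is the unknown that blocks the comparison):

```python
# Fatality-rate comparison needs a denominator. US fleet numbers are
# approximate 2016 NHTSA figures; the Tesla inputs are unknowns.
us_deaths = 37_461            # 2016 US traffic fatalities (NHTSA)
us_miles = 3.2e12             # ~3.2 trillion vehicle-miles traveled in 2016
print(f"US fleet: {us_deaths / us_miles * 1e8:.2f} deaths per 100M miles")  # ~1.17

tesla_deaths = 5              # 2016 NHTSA fatal-crash data, per the AP
tesla_miles = None            # not published by Tesla -- the missing denominator
# Without tesla_miles, "probability of fatality is much lower in a Tesla"
# cannot be checked in either direction.
```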
Slashdot reader Reygle argues the real issue is with the drivers in the Autopilot cars. "Someone unwilling to pay attention to the road shouldn't be allowed anywhere near that road ever again."
Suppose, for example, that a drugstore decides to entrust its pricing to a machine learning program that we'll call Charlie. The program reviews the store's records and sees that past variations of the price of toothpaste haven't correlated with changes in sales volume. So Charlie recommends raising the price to generate more revenue. A month later, the sales of toothpaste have dropped -- along with dental floss, cookies and other items. Where did Charlie go wrong? Charlie didn't understand that the previous (human) manager varied prices only when the competition did. When Charlie unilaterally raised the price, dentally price-conscious customers took their business elsewhere. The example shows that historical data alone tells us nothing about causes -- and that the direction of causation is crucial.
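The trap is easy to reproduce in simulation: if price only ever moved in lockstep with a confounder (the competitor's price), a regression on the historical logs shows almost no price effect, while an intervention on price alone has a large one. A small sketch (all coefficients invented for illustration):

```python
# Simulate Charlie's mistake: in the logs, our price tracks the competitor's,
# so customers rarely see a price *gap* and demand barely moves.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
competitor = 3.0 + 0.5 * rng.standard_normal(n)
price = competitor + 0.05 * rng.standard_normal(n)       # manager matched the competition
sales = 100 - 20 * (price - competitor) + rng.standard_normal(n)

slope = np.polyfit(price, sales, 1)[0]
print(f"observational price effect: {slope:+.2f}")       # near zero vs. the true -20

# Intervention: unilaterally raise price by $1 while the competitor stays put.
new_sales = 100 - 20 * ((price + 1.0) - competitor) + rng.standard_normal(n)
print(f"sales change after raising price: {new_sales.mean() - sales.mean():+.1f}")  # ~ -20
```

The regression faithfully summarizes the historical record and is still the wrong guide to action, which is Pearl's point about causation.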
Axios asked Google for the name of the hair salon or restaurant, in order to verify both that the businesses exist and that the calls were not pre-planned. We also said that we'd guarantee, in writing, not to publicly identify either establishment (so as to prevent them from receiving unwanted attention). A longtime Google spokeswoman declined to provide either name.
We also asked if either call was edited, even perhaps just cutting the second or two when the business identifies itself. And, if so, were there other edits? The spokeswoman declined comment, but said she'd check and get back to us. She didn't.
The resigning employees' frustrations range from particular ethical concerns over the use of artificial intelligence in drone warfare to broader worries about Google's political decisions -- and the erosion of user trust that could result from these actions. Many of them have written accounts of their decisions to leave the company, and their stories have been gathered and shared in an internal document, the contents of which multiple sources have described to Gizmodo.
[...] Human observers are widely used to monitor catch in quota-managed fisheries, and they're expensive: It costs roughly $700 a day for an observer in New England. The biggest cost of electronic monitoring is the labor required to review the video. Perhaps the most effective way to cut costs is to use computers to review the footage. Christopher McGuire, marine program director for TNC in Massachusetts, says there's been a lot of talk about automating the review, but the common refrain is that it's still five years off. To spur faster action, TNC last year spearheaded an online competition, offering a $50,000 prize to computer scientists who could crack the code -- that is, teach a computer how to count fish, size them, and identify their species. The contest exceeded McGuire's expectations. "Winners got close to 100 percent in count and 75 percent accurate on identifying species," he says.
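At its core, automated review is a frame-by-frame detect-classify-measure loop over the haul video. A skeletal sketch of that pipeline (the detector below is a stub standing in for a trained model like the contest entries; every function, field, and threshold is a hypothetical placeholder):

```python
# Skeleton of an automated catch-review pipeline: detect fish per frame,
# then tally species and record estimated lengths.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Detection:
    species: str
    length_cm: float
    confidence: float

def detect_fish(frame) -> list[Detection]:
    # Stand-in for a trained detector/classifier; returns dummy output here.
    return [Detection("haddock", 45.0, 0.92)]

def review(frames, min_conf=0.8):
    counts, sizes = Counter(), []
    for frame in frames:
        for d in detect_fish(frame):
            if d.confidence >= min_conf:
                counts[d.species] += 1     # a real system would also track fish
                sizes.append(d.length_cm)  # across frames to avoid double-counting
    return counts, sizes

counts, sizes = review(frames=[object()] * 3)
print(counts)  # Counter({'haddock': 3})
```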
The image classification task, where most of the image is invisible to the judge, is a sort of stand-in for complex problems where it wouldn't be possible for a human judge to analyze the entire dataset to judge bot performance. The judge would have to rely on the facets of the data highlighted by debating robots, the researchers say. "The goal here is to model situations where we have something that's beyond human scale," says Geoffrey Irving, a member of the AI safety team at OpenAI. "The best we can do there is replace something a human couldn't possibly do with something a human can't do because they're not seeing an image."
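The image experiment makes the idea concrete: a judge classifier only ever sees a handful of pixels, and two debaters take turns choosing which pixels to reveal, each arguing for a different label. A toy sketch of that protocol (the judge and the debaters' strategies here are stubs; OpenAI's actual judge was a classifier trained on sparse pixels):

```python
# Toy debate protocol: two agents alternately reveal pixels to a judge
# that only sees the revealed subset. Judge and strategies are stubs.
import random

def judge(revealed: dict) -> str:
    # Stand-in for a classifier trained on sparse pixels;
    # here it just guesses from the parity of bright pixels.
    return "cat" if len([v for v in revealed.values() if v > 0.5]) % 2 else "dog"

def best_pixel_for(label: str, image: dict, revealed: dict):
    # Stand-in for a debater's strategy: pick an unrevealed pixel it
    # believes pushes the judge toward `label` (random here).
    hidden = [p for p in image if p not in revealed]
    return random.choice(hidden)

image = {(i, j): random.random() for i in range(6) for j in range(6)}
revealed = {}
for turn in range(6):  # six total reveals out of 36 pixels
    label = "cat" if turn % 2 == 0 else "dog"
    p = best_pixel_for(label, image, revealed)
    revealed[p] = image[p]

print("judge sees", len(revealed), "of", len(image), "pixels ->", judge(revealed))
```

The hope is that the honest debater has an easier time finding genuinely persuasive evidence than the dishonest one, so the sparse judge still reaches the right answer.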
The bachelor's degree program in computer science teaches students to think broadly about methods that can accomplish a wide variety of tasks across many disciplines, said Reid Simmons, research professor of robotics and computer science and director of the new AI degree program. The bachelor's degree in AI will focus more on how complex inputs -- such as vision, language and huge databases -- are used to make decisions or enhance human capabilities, he added. AI majors will receive the same solid grounding in computer science and math as other computer science students, plus additional coursework in AI-related subjects such as statistics and probability, computational modeling, machine learning, and symbolic computation. Simmons said the program would also include a strong emphasis on ethics and social responsibility, including independent study opportunities in using AI for social good, such as improving transportation, health care or education.