dcblogs writes "The Senate's immigration bill may force the large offshore outsourcing firms to reduce their use of H-1B visa-holding staff, pushing them to hire more local workers and raising their costs. But one large Indian firm, Infosys, will try to offset cost increases with software robotics. Infosys recently announced a partnership with IPsoft, a New York-based provider of autonomic IT services. With IPsoft's tools, work that is now done by human beings, mostly Level 1 support, could be done by a software machine. Infosys says that IPsoft tools can 'reduce human intervention.' More colorfully, Chandrashekar Kakal, global head of Infosys's business IT services, told the Times of India that 'what robotics did for the auto assembly line, we are now doing for the IT engineering line.' James Slaby, a research director at HFS Research who has been following the use of autonomics closely, wrote in a recent report that the IPsoft partnership may help Infosys 'reap fatter margins by augmenting and replacing expensive, human IT support engineers with cheaper, more accurate, efficient automated processes,' and by improving service delivery."
kkleiner writes "Rice University professor Moshe Vardi has been evaluating technological progress in computer science and artificial intelligence and has recently concluded that robots will replace most, if not all, human labor by 2045, putting millions out of work. The issue is whether AI enables humans to do more or less. But perhaps the real question about technological unemployment of labor isn't 'How will people do nothing?' but 'What kind of work will they do instead?'"
An anonymous reader sends this excerpt from Wired: "[Henry] Markram was proposing a project that has bedeviled AI researchers for decades, that most had presumed was impossible. He wanted to build a working mind from the ground up. ... The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety — from the molecular level all the way to the mystery of consciousness — is a lack of ambition. If only neuroscience would follow his lead, he insists, his Human Brain Project could simulate the functions of all 86 billion neurons in the human brain, and the 100 trillion connections that link them. And once that's done, once you've built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own."
Google's I/O annual conference is ramping up at San Francisco's Moscone Center. Last year, in the conference keynote, the company took its biggest-yet dive into hardware when it introduced the Nexus 7 tablet, Google Glass, and the ill-fated Nexus Q. The secret is out on Glass, of course: this year, there's a pavilion inside the conference center where I'm sure they'll be showing off applications for it. (Quite a few of the people in the endless lines here are wearing their own, too.) Anticipating the announcements at I/O is practically its own industry, but it's easy to guess that there will be announcements from all the major pots in which Google has its many thousands of (tapping) fingers: Android, search, Chrome, mapping, and all the other ways in which the behemoth of Mountain View is watching what you do. You can watch the keynote talk (talks, really) streamed online from the main conference link above, but this story will be updated with highlights of the announcements, as well as with stories that readers contribute. Update: 05/15 16:22 GMT by T: Updates below. Update: 05/15 19:02 GMT by T: Notes (ongoing) added below on maps, gaming, the Play store, Google+, and more. And, notably, Larry Page is (at this writing) on stage, with an unannounced Q&A session.
A while ago you had the chance to ask mathematician and theoretical physicist Freeman Dyson about his work in quantum electrodynamics, nuclear propulsion, and his thoughts on the past, present, and future of science. Below you'll find his answers to your questions.
An anonymous reader writes "We're seeing a new revolution in artificial intelligence known as deep learning: algorithms modeled after the brain have made amazing strides and have been consistently winning both industrial and academic data competitions with minimal effort. 'Basically, it involves building neural networks — networks that mimic the behavior of the human brain. Much like the brain, these multi-layered computer networks can gather information and react to it. They can build up an understanding of what objects look or sound like. In an effort to recreate human vision, for example, you might build a basic layer of artificial neurons that can detect simple things like the edges of a particular shape. The next layer could then piece together these edges to identify the larger shape, and then the shapes could be strung together to understand an object. The key here is that the software does all this on its own — a big advantage over older AI models, which required engineers to massage the visual or auditory data so that it could be digested by the machine-learning algorithm.' Are we ready to blur the line between hardware and wetware?"
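The layered idea described above (simple edge detectors feeding a higher-level shape detector) can be sketched in a few lines of plain Python. Everything here is invented for illustration: the four-pixel "image", the hand-picked weights, and the layer sizes. A real deep network would have many more neurons per layer and would learn its weights from data rather than have them written by hand.

```python
def relu(xs):
    # Simple nonlinearity: negative activations are clipped to zero.
    return [max(0.0, x) for x in xs]

def dense(inputs, weights, biases):
    # One fully connected layer: each output neuron computes a weighted
    # sum of all inputs plus a bias, passed through the ReLU.
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

# A hypothetical 4-pixel, 1-D "image" with a bright bar in the middle.
pixels = [0.0, 1.0, 1.0, 0.0]

# Layer 1: two hand-picked "edge detectors" for left/right contrast.
edges = dense(pixels,
              weights=[[-1, 1, 0, 0],   # rising edge on the left
                       [0, 0, 1, -1]],  # falling edge on the right
              biases=[0.0, 0.0])

# Layer 2: a "shape detector" that fires only when both edges are present.
shape = dense(edges, weights=[[1, 1]], biases=[-1.0])

print(edges)  # [1.0, 1.0]
print(shape)  # [1.0]
```

The point of the stacking is visible even at this toy scale: the second layer never sees raw pixels, only the first layer's edge responses, which is how deeper layers come to represent progressively larger structures.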
jtogel writes "This New Scientist article describes our AI system that automatically generates card games. The article contains a description of a playable card game generated by our system. But card games are just the beginning... The card game generator is a part of a larger project to automatise all of game development using artificial intelligence methods — we're also working on level generation for a variety of different games, and on rule generation for simple arcade-like games."
kkleiner writes "Willow Garage spinoff IPI has developed a visual system for its line of robotic arms that enables the machines to perceive a specific object in the midst of random ones. On-site videos show the 'sensing' robots analyzing stacks of random boxes, selecting certain ones, and tossing them to a human handler. The software is also used in an automated box unloader that requires no human supervision."
ceview writes "Researchers at the Sheffield Centre for Robotics have demonstrated that a swarm of 40 robots can carry out simple fetching and carrying tasks. This is done by grouping around an object and working together to push it across a surface. They believe this could provide opportunities for us mere humans to harness such power for all sorts of things, like safety — catching falling workers, perhaps? YouTube action here."
moon_unit2 writes "Tech Review has a story about a garage in Ingolstadt, Germany, where the cars park themselves. The garage is an experiment set up by Audi to explore ways that autonomous technology might practically be introduced; most of the sensor technology is built into the garage and relayed to the cars rather than carried by the cars themselves. It seems that carmakers see the technology progressing in a slightly different way than Google, with its fleet of self-driving Priuses. From the piece: 'It's actually going to take a while before you get a really, fully autonomous car,' says Annie Lien, a senior engineer at the Electronics Research Lab, a shared facility for Audi, Volkswagen, and other Volkswagen Group brands in Belmont, California, near Silicon Valley. 'People are surprised when I tell them that you're not going to get a car that drives you from A to B, or door to door, in the next 10 years.'"
JimmyQS writes "The Harvard Business Review blog has an invited piece about Innovation Software. Tony McCaffrey at the University of Massachusetts Amherst talks about several pieces of software designed to help engineers augment their innovation process and make them more creative, including one his group has developed called Analogy Finder. The software searches patent databases using natural language processing technology to find analogous solutions in other domains. According to Dr. McCaffrey 'nearly 90% of new solutions are really just adaptations from solutions that already exist — and they're often taken from fields outside the problem solver's expertise.'"
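The article doesn't describe Analogy Finder's internals, but the general technique it alludes to, retrieving documents from another domain that are textually similar to a problem description, can be sketched with plain TF-IDF cosine similarity. The mini "patent database", the query, and the tokenization are all invented for illustration; a real system would use far richer natural language processing than bag-of-words matching.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a TF-IDF vector (dict of term -> weight) per tokenized doc."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    # Cosine similarity between two sparse vectors stored as dicts.
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini patent database, already tokenized:
patents = {
    "adhesive": "bonding surfaces with pressure sensitive adhesive".split(),
    "gecko":    "gripping smooth surfaces using microscopic hair structures".split(),
    "turbine":  "converting wind energy into rotational power".split(),
}
query = "attaching device to smooth vertical surfaces".split()

names = list(patents)
vecs = tfidf_vectors([patents[n] for n in names] + [query])
qvec = vecs[-1]
ranked = sorted(names,
                key=lambda n: cosine(vecs[names.index(n)], qvec),
                reverse=True)
print(ranked)
```

Even this crude matcher surfaces the cross-domain analogy: the biology-inspired "gecko" entry ranks first for an attachment problem, which is exactly the kind of outside-the-field hit Dr. McCaffrey describes.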
UgLyPuNk writes "Blizzard has revealed its 'something new' at PAX East 2013: Hearthstone: Heroes of Warcraft — a 'charming collectible strategy game set in the Warcraft universe.'" Blizzard says this game is a departure from their normal development process: it was made with a team of just 15, will release this year, and it's free-to-play. Hearthstone is built for Mac OS, Windows, and iPads. There's a deck builder, a match-finder, and AI for those who don't want to play against other people. While it's free to play, and players will earn new packs of cards by playing, there will also be an option to purchase new packs.
sciencehabit writes "Pharmaceuticals often have side effects that go unnoticed until they're already available to the public. Doctors and even the FDA have a hard time predicting what drug combinations will lead to serious problems. But thanks to people scouring the web for the side effects of the drugs they're taking, researchers have now shown that Google and other search engines can be mined for dangerous drug combinations. In a new study, scientists tried the approach out on predicting hypoglycemia, or low blood sugar. They found that the data-mining procedure correctly predicted whether a drug combo did or did not cause hypoglycemia about 81% of the time."
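The core of this kind of search-log mining can be sketched simply: compare how often users who searched for both drugs also searched for the symptom, against the symptom's baseline rate across all users. The logs, drug names, and rates below are entirely made up, and the published study's statistical methodology is considerably more careful than this toy cohort comparison.

```python
# Hypothetical search logs: one set of query terms per user.
logs = [
    {"druga", "drugb", "low blood sugar"},
    {"druga", "drugb", "dizzy", "low blood sugar"},
    {"druga", "headache"},
    {"drugb"},
    {"druga", "drugb"},
    {"weather"},
]

def symptom_rate(logs, symptom, required=()):
    """Fraction of users who searched all `required` terms and also
    searched for `symptom`; with no requirement, the baseline rate."""
    cohort = [user for user in logs if all(t in user for t in required)]
    if not cohort:
        return 0.0
    return sum(symptom in user for user in cohort) / len(cohort)

both = symptom_rate(logs, "low blood sugar", required=("druga", "drugb"))
baseline = symptom_rate(logs, "low blood sugar")

# A combination is flagged when its cohort searches for the symptom
# markedly more often than the population at large.
print(both, baseline)
```

Here the two-drug cohort's symptom rate (2 of 3 users) clearly exceeds the baseline (2 of 6), which is the signal that would flag the invented pair for follow-up.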
dstates writes "ProPublica, the award-winning public interest journalism group and frequently cited Slashdot source, has published an interesting guide to app technology for journalism and a set of data and style guides. Journalism presents unique challenges with potentially enormous but highly variable site traffic, the need to serve a wide variety of information, and most importantly, the need to quickly develop and vet interesting content, and ProPublica serves lots of data sets in addition to the news. They are also doing some cool stuff like using AI to generate specific narratives from tens of thousands of database entries illustrating how school districts and states often don't distribute educational opportunities to rich and poor kids equally. The ProPublica team focuses on some basic practical issues for building a team, rapidly and flexibly deploying technology, and ensuring that what they serve is correct. A great news app developer needs three key skills: the ability to do journalism, design acumen, and the ability to write code quickly — and the last is the easiest to teach. To build a team they look to their own staff rather than competing with Google for CS grads. Most news organizations use either Ruby on Rails or Python/Django, but more important than which specific technology you choose is to just pick a server-side programming language and stick to it. Cloud hosting provides news organizations with incredible flexibility (like increasing your capacity ten-fold for a few days around the election and then scaling back the day after), but cloud instances aren't as fast as real servers, and cloud costs can scale quickly relative to real servers. Maybe a news app is not the most massive 'big data' application out there, but where else can you find the challenge of millions of users checking in several times a day for the latest news, and all you need to do is sort out which of your many and conflicting sources are providing you with straight information?
Oh, and if you screw up, it will be very public."
halls-of-valhalla writes "Using advances in 3D laser mapping technology, Oxford University has developed a car that is able to drive itself along familiar routes. This new self-driving automobile uses lasers and small cameras to memorize everyday trips such as the morning commute. The car is not dependent on GPS, because it can tell where it is by recognizing its surroundings. The intent is for the car to be capable of taking over the driving on routes it has traveled before. While being driven, the car builds a 3D model of its environment and learns routes. When driving a particular journey a second time, an iPad on the dashboard informs the driver that the car is capable of taking over and finishing the drive. The driver can then touch the screen, and the car shifts to 'auto drive' mode. The driver can reclaim control at any time by simply tapping the brakes."
Lucas123 writes "Applying the same technology used for voice recognition and credit card fraud detection to medical treatments could cut healthcare costs and improve patient outcomes by almost 50%, according to new research. Scientists at Indiana University found that using patient data with machine-learning algorithms can drastically improve both the cost and quality of healthcare through simulation modeling. The artificial intelligence models used for diagnosing and treating patients obtained a 30% to 35% increase in positive patient outcomes, the research found. This is not the first time AI has been used to diagnose and suggest treatments. Last year, IBM announced that its Watson supercomputer would be used in evaluating evidence-based cancer treatment options for physicians, driving the decision-making process down to a matter of seconds."
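The article doesn't spell out the Indiana models, but the general shape of "simulation modeling" for treatment choice can be sketched: estimate the expected total cost of each candidate treatment under a probabilistic outcome model, then pick the cheapest. Every number below (treatments, costs, recovery probabilities, failure penalty) is invented for illustration; the real research models patient trajectories in far more detail.

```python
import random

random.seed(0)  # make the Monte Carlo runs reproducible

# Invented model: each treatment has an upfront cost and a probability
# of moving the patient to a "recovered" state.
treatments = {
    "drug_a":  {"cost": 100, "p_recover": 0.55},
    "drug_b":  {"cost": 400, "p_recover": 0.70},
    "surgery": {"cost": 900, "p_recover": 0.90},
}
FAILURE_COST = 1500  # invented cost of an unresolved case

def expected_cost(treatment, trials=10_000):
    """Monte Carlo estimate of the expected total cost of one treatment."""
    t = treatments[treatment]
    total = 0.0
    for _ in range(trials):
        total += t["cost"]
        if random.random() >= t["p_recover"]:
            total += FAILURE_COST  # treatment failed; pay the penalty
    return total / trials

best = min(treatments, key=expected_cost)
print(best)
```

With these made-up numbers the cheap drug wins (expected costs of roughly 775, 850, and 1050), illustrating the headline claim: the best trade-off is not always the treatment with the highest success rate once costs enter the model.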
Shipud writes "We live in the post-genomic era, when DNA sequence data is growing exponentially. However, for most of the genes that we identify, we have no idea of their biological functions. They are like words in a foreign language, waiting to be deciphered. The Critical Assessment of Function Annotation, or CAFA, is a new experiment to assess the performance of the multitude of computational methods developed by research groups worldwide, and to help channel the flood of data from genome research toward deducing the function of proteins. Thirty research groups participated in the first CAFA, presenting a total of 54 algorithms. The researchers took part in blind-test experiments, predicting the function of protein sequences whose functions are already known but haven't yet been made publicly available. Independent assessors then judged their performance. The challenge organizers explain: 'The accurate annotation of protein function is key to understanding life at the molecular level and has great biochemical and pharmaceutical implications; however, with its inherent difficulty and expense, experimental characterization of function cannot scale up to accommodate the vast amount of sequence data already available. The computational annotation of protein function has therefore emerged as a problem at the forefront of computational and molecular biology.'"
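The blind-test scoring in a challenge like this boils down to comparing each method's predicted set of function terms against the withheld, experimentally confirmed set. A minimal sketch of that comparison, with made-up term IDs standing in for real Gene Ontology annotations (CAFA's actual metrics are more elaborate):

```python
def precision_recall(predicted, true):
    """Precision and recall of a predicted set of function terms
    against the experimentally confirmed set for one protein."""
    tp = len(predicted & true)  # terms predicted AND confirmed
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(true) if true else 0.0
    return precision, recall

# Hypothetical annotations for one protein, using invented term IDs:
true_terms = {"GO:catalysis", "GO:binding", "GO:transport"}
predicted  = {"GO:catalysis", "GO:binding", "GO:signaling"}

p, r = precision_recall(predicted, true_terms)
print(p, r)  # 0.666..., 0.666...
```

Averaging scores like these over many withheld proteins is what lets independent assessors rank 54 algorithms against one another on equal footing.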
New submitter TheRedWheelbarrow writes "The singularity looms as the Human Brain Project gets up to $1.34 billion in funding. 'The challenge in AI is to design algorithms that can produce intelligent behavior and to use them to build intelligent machines. It doesn't matter whether the algorithms are biologically realistic — what matters is that they work — the behavior they produce. In the HBP, we're doing something completely different...we will base the technology on what we actually know about the brain and its circuitry.'"
moon_unit2 writes "An AI researcher at MIT suggests that Ray Kurzweil's ambitious plan to build a super-smart personal assistant at Google may be fundamentally flawed. Kurzweil's idea, as put forward in his book How to Create a Mind, is to combine a simple model of the brain with enormous computing power and vast amounts of data to construct a much more sophisticated AI. Boris Katz, who works on machines designed to understand language, says this misses a key facet of human intelligence: that it is built on a lifetime of experiencing the world, rather than simply processing raw information."
First time accepted submitter Shaterri writes "Which is more likely: that a low-ranked player could play through a high-level tournament at grandmaster level, or that they were getting undetected assistance from a computer? How about when that player is nearly strip-searched with no devices found? How about when their moves correlate too well with independent computer calculations? Ken Regan has a fascinating article on one of the most complex (potential) cheating cases to come along in recent memory."