
Google Researchers Explore Ways To Ensure Safety Of Future AI Systems

Google has released a technical paper on AI safety, produced in collaboration with researchers from Stanford University, the University of California, Berkeley, and OpenAI.

Concerns about things going wrong with the artificial intelligence systems of the future have gotten the attention of researchers at Google. Barely two weeks after the company’s DeepMind group announced a partnership with researchers at the University of Oxford to develop a kill switch for rogue AI systems, Google has released a technical paper devoted to addressing AI safety risks.

The paper, titled “Concrete Problems in AI Safety,” was written in collaboration with researchers at Stanford University, the University of California, Berkeley, and OpenAI, a non-profit company focused on artificial intelligence research. It outlines five basic problems that the researchers say are relatively minor today but predict will assume much greater importance as machines get smarter.

The goal in writing the paper was to explore practical approaches to solving these problems and to ensure that AI systems are engineered to operate in a reliable and safe manner, Google researcher Chris Olah said on the company’s Research Blog. “While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative,” Olah said. “We believe it’s essential to ground concerns in real machine learning research and to start developing practical approaches for engineering AI systems” that operate safely, he said.

Machine learning and artificial intelligence are important areas for Google. The company has said it wants to leverage advances in these areas to make its core technologies better, and it already applies AI and machine intelligence techniques in applications like Google Translate, Google Photos and voice search. CEO Sundar Pichai has said that Google expects AI to radically transform the way people travel, accomplish daily tasks and tackle problems in areas like health care and climate change.

But advancing AI means making AI systems both smarter and safer, OpenAI researchers Paul Christiano and Greg Brockman said in a blog post announcing the company’s role in the newly released technical paper. That means “ensuring that AI systems do what people actually want them to do,” the researchers said.

One of the five problems the paper examines is how to ensure that an AI system does not negatively impact its environment while performing its functions. As an example, Olah pointed to a cleaning robot that should be programmed so it will not knock over a vase simply because doing so lets it finish the task faster (a scenario sketched in code below). Other problems involve ensuring that robots do not take actions with harmful consequences, such as sticking a wet mop into an electrical outlet, and that they operate in a suitably safe manner in different environments, such as on a shop floor or in an office. “Many of the problems are not new, but the paper explores them in the context of cutting-edge systems,” Christiano and Brockman said.

The new technical paper is part of what appears to be a deepening focus on AI safety issues. Google’s research with Oxford University, for instance, focuses on ensuring that the hyper-intelligent AI systems of the future will never be capable of actively blocking interruption by a human operator. The goal is to ensure that engineers have a way of safely shutting down an AI system if it starts behaving in an erratic or unsafe manner.
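To make the vase example concrete, here is a minimal sketch of one mitigation along the lines the paper discusses: an impact penalty that subtracts a cost for task-irrelevant changes the agent makes to its environment. The toy state representation, penalty weight and function names below are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch of an "impact penalty" for the negative-side-effects
# problem described in "Concrete Problems in AI Safety". The toy state,
# reward values and names here are assumptions, not code from the paper.

def shaped_reward(task_reward, state_before, state_after, penalty_weight=10.0):
    """Task reward minus a penalty for task-irrelevant changes to the
    environment (anything other than the dirt the robot is meant to clean)."""
    side_effects = sum(
        1
        for key in state_before
        if key != "dirt" and state_before[key] != state_after[key]
    )
    return task_reward - penalty_weight * side_effects


state = {"dirt": 5, "vase": "upright"}

# Plan A: the slow route cleans one patch of dirt and leaves the vase alone.
slow = shaped_reward(1.0, state, {"dirt": 4, "vase": "upright"})

# Plan B: the fast route cleans two patches but knocks the vase over.
fast = shaped_reward(2.0, state, {"dirt": 3, "vase": "broken"})

print(slow, fast)  # 1.0 vs. -8.0: the careful plan now scores higher
```

Under this kind of shaping, the faster plan that breaks the vase scores worse than the slower, careful one, so a reward-maximizing agent has no incentive to take the destructive shortcut.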

- eWeek

