Monday, February 25, 2019

AI security

In high-stakes situations, you don’t want tech that doesn’t know what it’s doing.

Scientists in the Pentagon’s research office are working to build artificial intelligence systems capable of something that stumps most humans: owning up to their own incompetence.

On Tuesday, the Defense Advanced Research Projects Agency kicked off the Competency-Aware Machine Learning program, an effort to build AI tools that can model their own behavior, recall past experiences and apply knowledge to new situations.

Given these skills, officials said, AI could ultimately assess its own expertise for a given task and let people know when it doesn't know what it's doing.

In general, AI systems work best when they're applied to narrow, well-defined tasks, and even the most finely tuned model can fail when conditions shift only slightly. A tool that classifies dogs might work flawlessly in broad daylight, for instance, but mistake a golden retriever for a black lab when it's cloudy outside.
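At its simplest, the behavior DARPA is describing amounts to a model that attaches an honest confidence to its own answers and abstains when that confidence is low. The sketch below illustrates the idea with a plain softmax-confidence threshold; this is a common baseline for abstaining classifiers, not the program's actual technique, and the logits, labels, and threshold here are invented for the example.

```python
import numpy as np

def softmax(logits):
    """Turn raw classifier scores into a probability distribution."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

def predict_or_abstain(logits, labels, threshold=0.9):
    """Answer only when the top-class probability clears the threshold;
    otherwise return None to signal 'I don't know what I'm doing here.'"""
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None, float(probs[best])  # abstain and report the low confidence
    return labels[best], float(probs[best])

labels = ["golden retriever", "black lab"]

# Bright daylight: scores are well separated, so the model answers.
print(predict_or_abstain([4.2, 0.3], labels))  # ('golden retriever', ~0.98)

# Overcast scene: scores are ambiguous, so the model abstains.
print(predict_or_abstain([1.1, 0.9], labels))  # (None, ~0.55)
```

The catch, and part of what makes the research hard, is that softmax confidence is itself unreliable on inputs unlike the training data: a model can be confidently wrong, which is why simple thresholds are only a starting point.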

The tech itself doesn’t know how accurate it will be in a given situation or whether it’s properly trained for the task at hand. And in the high-stakes, rapidly changing world of military operations, this uncertainty could be particularly problematic, DARPA said in the solicitation.
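One simple way a system could judge whether it was "properly trained for the task at hand" is to check how far a new input sits from the data it was trained on. The sketch below uses a crude distance-to-training-centroid test as a stand-in for that idea; distance-based out-of-distribution detection is a standard baseline, not the method DARPA is funding, and the synthetic features are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the feature vectors the model saw during training.
train_features = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

centroid = train_features.mean(axis=0)
train_dists = np.linalg.norm(train_features - centroid, axis=1)
cutoff = np.quantile(train_dists, 0.99)  # radius covering 99% of training data

def within_competence(features):
    """True if the input resembles the training data; False means the
    system should warn that it is operating outside its experience."""
    return bool(np.linalg.norm(features - centroid) <= cutoff)

print(within_competence(rng.normal(0.0, 1.0, size=8)))  # familiar input -> usually True
print(within_competence(rng.normal(6.0, 1.0, size=8)))  # unlike anything in training -> False
```

A real system would run this kind of check in a learned feature space with better-calibrated statistics, but the desired behavior is the same: a warning before the answer, rather than a failure after it.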
