Image: Frosty the Snowman (CC BY 2.0, Sue Cantan/Flickr)

How do you make an artificial intelligence more trustworthy? You teach it to think like a baby. The question and answer might read like a joke, yet, as SFI Professor Melanie Mitchell explains, teaching AI systems to think more like babies is one of the strategies scientists are beginning to deploy to build better AI.

One of AI’s greatest challenges, Mitchell explains, is that AI systems lack common sense. A self-driving car doesn’t know whether to slam on the brakes for a snowman or to proceed through a flock of birds, because it lacks the kind of experiential knowledge that human beings possess.

Humans begin to develop the core knowledge that underlies common sense almost as soon as they perceive the world. Infants actively seek out causal relationships to figure out how things work, and they linger over events that defy their expectations. AI systems, in contrast, are more passive: they rely on deep neural networks, algorithms trained to spot statistical patterns in data. To develop better AI systems, researchers are beginning to investigate whether machines can be made to learn more actively, with the kind of discernment children bring to making sense of the world.

Ultimately, Mitchell argues, if we want to develop more trustworthy AI, we must train machines to learn a little more like we do — starting right from the inception of cognition.

Read the article in Aeon (May 31, 2019)