"The Meaning of Information" by Joerael Elliott

In 1948, Claude Shannon, motivated by the engineering challenge of encoding, transmitting, and decoding electronic signals, took the radical step of defining “information” in a way that completely disregarded whatever meaning a transmitted signal might contain.

For Shannon, the statistical properties of signals sent from sender to receiver were the information. His ideas have since been widely applied in the physical, biological, and social sciences.

Meanwhile, over in linguistics and philosophy, scholars continued to wrestle with definitions of information that were all about meaning and its interpretation — and focused almost exclusively on human minds and language.

“They’ve been thinking primarily in terms of language, and of the semantics of true sentences — what they call propositions,” says philosopher of mind SFI External Professor Dan Dennett (Tufts). “Propositions have distracted philosophers for nearly a century.”

“Shannon’s theory and its emphasis on the statistical properties of information have been useful in many scientific and engineering contexts,” says SFI External Professor Chris Wood. “But in other contexts, and not just those involving humans, information without meaning seems limiting and unproductive.”

Is extracting meaning from the world the exclusive province of human minds? Could a machine generate meaning from its inputs?

To help address the 70-year divide between Shannon information and semantic information, Dennett and Wood have organized a January SFI working group, “The Meaning of Information,” that brings together perspectives from physics, engineering, evolutionary biology, linguistics, philosophy, and neuroscience.

Their approach is to identify the most fundamental cases of semantic meaning and explore their properties and consequences.

One such example, offered by Harvard biologist David Haig in a recent essay, is a simple binary system that strikes a spark. If only oxygen is present, nothing happens, but if hydrogen is present, an explosion occurs. Next, consider the same system but with a key difference: a hydrogen sensor. If no hydrogen is detected, the system strikes a spark, but if hydrogen is detected, it does not. The system with the sensor acts differently based on its environment. Can the system be said to interpret the environment? If so, does that interpretation contain meaning?
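To make the thought experiment concrete, here is a minimal sketch in Python. The function names and output labels are illustrative assumptions, not drawn from Haig's essay; the point is simply that the sensor-equipped system's behavior covaries with a state of its environment, which is what invites talk of interpretation.

```python
def system_without_sensor(environment: str) -> str:
    """Always strikes a spark; the outcome depends on the environment."""
    # The spark is struck unconditionally, so the system cannot be said
    # to respond to anything about its surroundings.
    if environment == "hydrogen":
        return "explosion"
    return "nothing happens"  # only oxygen present


def system_with_sensor(environment: str) -> str:
    """Strikes a spark only when its sensor detects no hydrogen."""
    hydrogen_detected = (environment == "hydrogen")  # the sensor reading
    if hydrogen_detected:
        return "no spark"  # the system withholds the spark
    return "spark, nothing happens"


# The two systems diverge only when their behavior is conditioned
# on what the sensor reports about the environment.
for env in ("oxygen", "hydrogen"):
    print(env, "->", system_without_sensor(env), "vs.", system_with_sensor(env))
```

In the terms Haig proposes below, the sensor reading would play the role of the information, and the conditional response would be the interpretive process whose output is the meaning.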

The participants start with Haig’s essay, “Making Sense: Information as Meaning,” which proposes that meaning “be considered the output of the interpretive process of which information is the input.”

“We’re hoping Haig’s ideas may be the basis for getting us all the way from molecules to poets and scientists and philosophers while keeping the same definitions of information, interpretation, and meaning throughout,” says Dennett.
