Ellie Pavlick at Brown University is building models to help us understand how large language models (LLMs) process language compared with how humans do.
Pavlick explains that it is hard to gain transparency into AI, calling it a frontier question, which is why we so often hear AI referred to as a "black box."
There are different ways to understand a system, and Pavlick borrows methods from neuroscience to better understand LLMs.
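Her specific methods aren't detailed here, but one common neuroscience-inspired technique is the linear probe: train a simple classifier to decode a linguistic property from a model's internal activations, much as neuroscientists decode stimuli from brain recordings. The sketch below is illustrative only; the model (GPT-2), the layer choice, and the toy plurality labels are assumptions for the example, not Pavlick's actual setup.

```python
# Minimal linear-probe sketch (assumes the Hugging Face `transformers`
# and scikit-learn libraries; GPT-2 stands in for any LLM).
import torch
from transformers import GPT2Model, GPT2Tokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

# Toy dataset: sentences labeled by whether the subject is plural.
# (Hypothetical stimuli, chosen only to make the example concrete.)
sentences = ["The dog runs.", "The dogs run.", "A cat sleeps.", "The cats sleep."]
labels = [0, 1, 0, 1]

features = []
with torch.no_grad():
    for s in sentences:
        inputs = tokenizer(s, return_tensors="pt")
        outputs = model(**inputs)
        # Mean-pool the activations of a middle layer (layer 6 of 12).
        layer = outputs.hidden_states[6]  # shape: (1, seq_len, 768)
        features.append(layer.mean(dim=1).squeeze(0).numpy())

# If a simple linear classifier can read the property off the hidden
# states, the model plausibly encodes it -- the same logic as decoding
# analyses on brain recordings.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("probe accuracy:", probe.score(features, labels))
```

In a real study the probe would be scored on held-out sentences rather than its training data; high held-out accuracy is the evidence that the representation encodes the property.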
Large language models are challenging what we mean when we talk about human-like behavior. We have never pinned down what we mean by thinking, understanding, or consciousness, and language models are forcing us to make these ideas more precise and scientific.
Language is plastic and dynamic, and LLMs could lead to a collapse of linguistic diversity and innovation if people start talking to them exclusively.
People are already adapting how they speak when talking to computers, an early example of humans reshaping their language around technology.