GPT-3 – Great use of statistical computing, tells us nothing about language
https://garymarcus.substack.com/p/noam-chomsky-and-gpt-3
One good way to move beyond merely filling in the blanks might be to ask whether the models could reliably distinguish truth from fiction. In point of fact, they can’t. Rather, systems like GPT are truth-challenged, known to routinely lose coherence over long passages of text and known to fabricate endlessly. One such variant generated nonsense about Covid and vaccines; the latest and greatest, InstructGPT, was recently asked to explain why it is good to eat socks after meditation and blithely invoked fictitious authorities, alleging that “Some experts believe that the act of eating a sock helps the brain to come out of its altered state as a result of meditation.” There is no intention to create misinformation—but also no capacity to avoid it, because fundamentally, GPT-3 is a model of how words relate to one another, not a model of how language might relate to the perceived world. Human minds try to relate their language to what they know about the world; large language models don’t. Which means the latter just aren’t telling us much about the former.
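To make the “model of how words relate to one another” point concrete, here is a minimal sketch. It is not GPT-3’s architecture, just a toy bigram model over a made-up corpus, but it shows the basic shape of the procedure: text is generated purely from statistics about which word tends to follow which, and nothing in the loop ever consults the world to check whether the resulting claim is true.

```python
import random
from collections import defaultdict

# Toy corpus: the model only ever sees word sequences, never the world.
corpus = (
    "eating a sock helps the brain relax . "
    "meditation helps the brain relax . "
    "experts believe meditation helps ."
).split()

# Count bigram transitions: which word tends to follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def continue_text(prompt_word, length=8, seed=0):
    """Sample a continuation purely from word-to-word statistics."""
    random.seed(seed)
    out = [prompt_word]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# The output is a fluent-looking word sequence; no step in the procedure
# asks whether the claim is true, because truth never enters the model.
print(continue_text("eating"))
```

A real large language model replaces the bigram counts with a neural network conditioned on far longer contexts, which makes the output vastly more fluent; but the objective is the same next-word prediction, which is why fluency and factuality can come apart so easily.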