These Strange New Minds is a comprehensive book for lay readers wondering how large language models (LLMs) work and how they might help or harm human culture.
Its author, the cognitive neuroscientist Christopher Summerfield, faces an inherent challenge: The pace of change in AI makes it difficult for any traditionally published book to feel fully up to date. Books from major publishers can take more than a year to move from manuscript to finished copy. Summerfield addresses this with a later-written afterword noting that LLMs already reason and converse more effectively than they did just two years ago. They are becoming more “agentic,” helping users accomplish tasks rather than merely answering prompts, while also becoming more capable tools for crime and fraud.
Summerfield does not believe LLMs will destroy humanity. But he makes clear that dismissing what they can already do, or what they are likely to do, is shortsighted. Anyone who organizes their work or daily life through computers should not ignore AI’s looming impact. That remains true even if how “deep learning” achieves its results is still, in some respects, “mysterious.”
Summerfield engages seriously with skeptics who claim that, because LLMs merely predict or echo patterns derived from the vast corpus of human writing on which they are trained, they are not truly thinking or meaningfully imitating the human mind. LLMs, he acknowledges, “work by multiplying together large matrices of numbers,” while our brains operate through “electrical signals in an organic medium.” But that does not mean the outcomes—effective understanding and communication—are always meaningfully distinguishable. To “say that LLMs do not think at all,” Summerfield writes, “requires a new and rather convoluted definition of what it means to ‘think.’”