Artificial Intelligence has left higher ed scrambling. In a world that is increasingly hostile towards thought for thought’s sake, AI seems like the death knell for the project of the Liberal Arts College: after all, why learn to think when you can outsource all of life’s big questions to a machine that answers them succinctly and understandably?
The worst day of my summer was spent grappling with exactly these questions when an AI policy expert came to talk to my Davidson in Washington seminar. He was friendly, knowledgeable, and genuinely concerned about our futures in a world with a constantly changing labor market; indeed, the white-collar desk jobs that many of our well-to-do parents enjoyed at cushy consulting firms are at tremendous risk from this technology. McKinsey & Company, one of those nondescript “consulting firms” whose jobs Davidson students strangely seem to covet, recently celebrated being one of the “top global users” of ChatGPT. My friend, the presenter, was certainly correct: AI will have a seismic impact.
This man, whose insights were valuable and came from a good place, indicated that our future was one of management. Instead of being the engines of labor in any field, from consulting to teaching (a notion that seems especially sinister), we are to become overseers of this technology, which enthusiasts like Sam Altman are willing to say is already smarter than most of us. Our presenter seemed to concur.
What is really meant by those who think AI is or will be smarter than us is a very reductive version of intelligence. What that sentiment actually communicates is not that it is smarter than us, but that it is more efficient (and therefore more profitable) than us. By characterizing this soulless efficiency as “intelligence,” the (very few) people who will profit from extracting your wealth are manufacturing consent for that very event. By the time it comes around, they hope you will happily accept it, because they will have mainstreamed a rhetoric about your soul and your brain that essentially says this: you are a self-conscious computer, but you are, unfortunately, not as efficient as the supercomputers we are destroying the environment to create and integrate into virtually every area of human life, right on down to the movies you watch and the books you read. Your humanity has no essential worth.
Our humanity does not have to be degraded like this, and our defense against that degradation starts with how we are educated. At one point in the presentation, our presenter asked how many of us were paying for a subscription to the premium version of a Large Language Model. When only three students raised their hands, he said that all of us should be. Needless to say, I do not share this vision. I recognize that in some fields AI has tremendous worth and potential, but not in the study of the humanities, which have always been central to a Davidson education. Elite Liberal Arts Colleges will be under tremendous pressure from these tech giants, and from the labor market they are creating, to “integrate” AI into every facet of the learning experience, but I implore the College to do whatever it can to, at the very least, keep this noxious technology out of the humanities.
Speaking for the study of English, my major, there is nothing about AI that will make you any better at it. Using it to summarize a reading, proofread an essay, generate ideas for a paper, or tell a story does nothing other than make you worse at what your brain has always been capable of, the things you were put on earth to meaningfully struggle with. If the study of the humanities is, at its core, learning to understand and communicate the raw experience of being human, then there is nothing a robot can add to that experience; it can only cheapen it, implicitly characterizing it as frivolous.
The inheritors of our bleak future will be Luddites, while the permanent underclass will be made up of those who forgot how to read, write, and think because they accepted the manufactured inevitability of a technology that never had to dominate every area of our lives. I came to Davidson to learn how to think. I refuse to accept a future in which my brain is reduced to an emergent arm of a soulless machine. The College, as it charts its path forward on AI, must keep this in mind. It must continue to enshrine thought for its own rewarding sake, pushing back against the putrid ideas of the antihuman tech monopolists.















































