The AI revolution is here to stay. It has already replaced thousands of corporate jobs, written countless student essays, and scored higher than most of us on most standardized tests. How could we possibly hope to compete?
In this post, I want to sketch two possible visions for AI use. Which way, 21st century man?
Path 1: AI replaces human workers for the sake of generating capital
In a capital-driven mindset, where the chief good of political economy is money, AI will undoubtedly be used to cut workers. It is cheaper to replace workers with AI than to train them (or…is it? More on this in Path 2). The race in AI development will be driven by a mindset of “cutting costs and maximizing profitability.” If that is the driving calculus, though, the danger is that we throw away countless entry-level workers who, given the chance, would have developed the skills to carry society into the future. The danger of replacing entry-level workers with AI, rather than teaching and developing them with AI tools, becomes clear the moment we stop to think: the old guard will eventually die out, and the newcomers will never have had the chance to develop the skills needed to keep the economy going at a sustainable pace.
C. S. Lewis once said that if you aim for happiness, you won’t get it, but if you aim for God, you’ll get happiness along with it. Now, I’m not sure how I feel about that particular dichotomy, but the point is broadly applicable: if I aim at the higher-order good, the lower-order goods follow. If I make “gaining social capital” the chief end of my relationships, they will often be phony; if friendship is my chief end, social capital comes along with it.
So how should we think about AI use, then?
Path 2: AI use is for developing human virtue and skill
I’ll tell you what: AI has doubled the pace at which I’m learning Latin. I’m not kidding. I’ll be working through Wheelock’s exercises and hit a grammar point I don’t understand. I have Gemini or ChatGPT explain it to me and generate a chart of that particular point. Then I go and memorize the rule or paradigm in question.
We can actually use AI to teach us how to order our lives. Here’s one productive way to think of it: AI is a large language model, which means it assembles its answers from the collective body of human knowledge. Putting a question to AI and getting an output is, ultimately, a digital way of tapping into that collective knowledge (it’s where AI gets its material anyway). At my fingertips, I have more access to the accumulated knowledge and wisdom of humanity than I ever have before. But to what end? To the end of developing a certain character.
Sit down and think: who do you want to become in five years? I’m not asking what you want to do or what you’d like to accomplish. What characteristics and skills would you like to cultivate over the next five years? I’d like to cultivate more discipline in my life. Honestly, scheduling has been one of my biggest challenges; anyone who knows me personally knows I struggle with time and schedule. But I’m using AI tools not to replace the need to learn how to schedule, but to teach me how to do it. Now, some people have a natural aptitude for certain things and can teach themselves well; some don’t need AI to learn how to schedule their day. I’ve just never developed that skill.
The same is true of my Latin learning. I can flip through a chart in Wheelock, sure, but I only have so much time in my day. Generating a chart not to paper over my lack of understanding but to help me fill it has been genuinely helpful.
So here’s an overarching paradigm, a telos, for AI use I want to suggest: the cultivation of the person. Gadamer called it Bildung. How can we use AI to draw from the pool of human knowledge for the sake of human cultivation, the cultivation of our gifts, talents, and character? Of our habits and skills? That’s a radically different paradigm for approaching AI than simply “maximizing profit and cutting costs.” In truth, I think aiming at cultivation would maximize profit and cut costs anyway; it’s much better to have a Python developer who knows exactly what they’re doing with AI tools, because they have thoroughly learned the fundamental principles, than a drone who spits out whatever AI feeds them. But profit is not what one should aim at in the use of AI. One should see AI as a tool for cultivation, of oneself and of those around one, for the life of the world.
Boethius, in the Consolation of Philosophy, says that happiness simply is God. Aim at the highest good, and the rest follows.