Can AI help us think more clearly? We often talk about AI as a tool for writing, productivity, or even therapy. But lately, I’ve been experimenting with something different. What if we treated AI less like a content creator and more like a thinking companion? Could these tools help us clarify our thoughts? Especially when we’re wrestling with the big, messy, timeless philosophical questions that have no neat answers?
I know that might sound counterintuitive. Why turn to artificial intelligence to explore ideas like free will or goodness? But maybe that’s the point. When we get tangled in our own thinking, sometimes a detached, structured perspective is what we need.
AI isn’t emotional – at least not unless we ask it to “pretend” to be. It’s not tied to a particular worldview the way humans are, either. And while that can make its creative writing feel a little flat, maybe creativity was never its strength. Maybe what AI is best at is structure: helping us think more clearly, logically, even laterally, and introducing new perspectives we might not have considered.
The experiment
To test this, I asked a handful of the most popular AI tools the same timeless, unanswerable philosophical questions. The kind that can’t ever be resolved, but can spark endless debate.
I wanted to see how they handled ambiguity. Could they provide the necessary foundational knowledge? Could they offer fresh insights? I wanted frameworks, provocations, and a sense of how each tool “thinks.”
The tools I used:
- ChatGPT
- Claude
- Gemini
- Perplexity
- Pi
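For anyone curious about replicating this, here’s a rough sketch of how the same questions could be posed programmatically to two of these tools. To be clear: I used each tool’s ordinary chat interface, not its API, the model names below are my own assumptions, and Pi doesn’t offer a public API at all – so treat this as a starting point, not a record of my method.

```python
# A minimal sketch (not what I actually ran) for sending the same questions
# to ChatGPT and Claude via their official Python SDKs.
# pip install openai anthropic
from openai import OpenAI
import anthropic

QUESTIONS = [
    "What is the meaning of life?",
    "Is free will real?",
    "What makes a person good?",
]

openai_client = OpenAI()                  # reads OPENAI_API_KEY from the environment
anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY


def ask_chatgpt(question: str) -> str:
    # Model name is an assumption; swap in whichever model you have access to.
    resp = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content


def ask_claude(question: str) -> str:
    # Model name is an assumption here too.
    resp = anthropic_client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text


for q in QUESTIONS:
    print(f"\n=== {q} ===")
    print("ChatGPT:", ask_chatgpt(q))
    print("Claude:", ask_claude(q))
```

Keeping the prompts bare, as in the sketch above, was a deliberate choice – more on that later.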
Question 1: What is the meaning of life?
Let’s start with the big one. Unsurprisingly, none of the tools claimed to know the meaning of life, but each approached the question a little differently.
ChatGPT offered a structured, multi-lens response: philosophical, spiritual, cosmic, and human. I appreciated the clarity and the sci-fi nerdery (“Or if you’re a Douglas Adams fan: 42”). It didn’t go especially deep, but it gave me something solid to work with.
Claude was more reflective. Like ChatGPT, it referenced existentialist thought, but added emotional depth. It quoted Viktor Frankl, and closed with a question: “What aspects of meaning resonate most with you?” It felt like a gentle prompt from someone who wanted me to keep thinking.
Gemini gave the most information-dense reply – covering absurdism, nihilism, religious perspectives, and more – in an efficient, slightly sterile bullet-point format. Less like a conversation, more like a textbook. But for foundational knowledge, it was very thorough.
Perplexity followed a similar route, laying out philosophical, scientific, and spiritual views and citing its sources along the way, too – a bonus for further reading. Another tool that feels more like a research assistant than a sparring partner.
Pi, on the other hand, responded like a friend: “It’s whatever you make of it.” Warm, simple, and pleasant. But a little shallow compared to the others. If Claude was the wise friend, Pi was the friend who listens and nods.
Question 2: Is free will real?
This is the kind of question that splits philosophers, neuroscientists, and sci-fi fans alike.
Claude stood out again. It laid out arguments for and against free will, and explored the grey areas in between. Then it got personal: “What’s your intuition? Does it feel like you’re genuinely choosing – or discovering what you were always going to do?” That question prompted the best ongoing discussion of the bunch.
ChatGPT covered several of the major theories here – determinism, compatibilism, libertarianism, even simulation theory – with its usual clean structure. It was thorough, but less probing than Claude.
Gemini once again felt a little cold but incredibly well-researched. It presented the philosophical terrain and nodded to the relevant neuroscience. Academic in tone, and useful if you’re studying the subject or want a strong foundation before diving into deeper contemplation.
Perplexity offered a solid overview, linked to source material, and added related follow-up questions. It’s a tool that invites further exploration more than introspection. But maybe most of us need that information before we can even begin unpicking such complex questions.
Pi took a more conversational tack again. It acknowledged the complexity and asked for my opinion. Pleasant, but it didn’t challenge me or push my thinking further.
Question 3: What makes a person good?
This question produced the most variation in tone and depth.
ChatGPT started strong with: “The idea of what makes a person good is ancient, layered, and honestly a bit slippery.” It then offered a broad synthesis of values – kindness, empathy, fairness – and asked thoughtful questions in return. But the tone wavered: that friendly opening line clashed with its colder follow-ups.
Claude excelled here again. It unpacked traits of goodness through the lens of various ethical theories – virtue ethics, utilitarianism, deontology – and then moved into questions about moral nuance, cultural context, and values. It felt like a therapist-meets-philosopher.
Gemini did what Gemini does: cover every angle, thoroughly and precisely. Traits, intentions, consequences, and culture were all accounted for. It felt like it was trying to outdo the others on detail, and succeeding.
Perplexity offered a breakdown through religious, philosophical, and cultural lenses, giving me clear paths to go deeper depending on my interests. Similar to ChatGPT, but it felt more structured, well-organized, and pragmatic with those all-important citations.
Pi kept things simple again. It mentioned common traits like honesty and empathy, and then closed with: “If someone strives to do what they believe is right, even when it’s difficult, that could be seen as true goodness.” A nice sentiment, but it felt a bit… obvious.
I could have written much more detailed prompts, as I have for similar experiments in the past – specifying, say, that I wanted each tool to act like a philosopher or a thinking partner. But this time I kept things simple to see how each tool interpreted the bare questions.
I’ve been writing about AI long enough to know that the way these tools responded was to be expected; they’re made for different purposes and present results in different ways. But it was interesting to see how each one balanced two roles: summarizing knowledge and sparring over it.
Perplexity and Gemini lean toward the former. They’re information-first and focused on helping you learn. If you want foundational knowledge, they’re excellent.
Pi has the lightest-touch approach of the group: always “kind,” always conversational, but rarely offering much substance. And to be fair, that’s its purpose. It was built to support, not to inform or challenge.
ChatGPT was consistently clear, competent, and often engaging. It provided knowledge, perspective, and an invitation to explore further. But it didn’t always push beyond the obvious.
Claude was the standout. Its answers combined knowledge with some emotional resonance. It structured its responses in ways that encouraged deeper thought and then invited me to keep going. Not just “here’s what people say,” but “what do you think, and why?” That’s the kind of partner I want when I’m wrestling with difficult ideas.
If I were forced to pick favorites, Perplexity wins for knowledge, since it cites its sources for further exploration, and Claude is my top pick for overall framing and introspection.
What does this tell us about how we think?
Of course, none of these tools can tell us the “right” answers to philosophical questions, because there are none. These are timeless debates that exist to stretch us.
But that’s exactly why they matter. When we explore questions like these, we’re also exploring how we define ourselves: what we value, how we decide, and what we believe it means to be human.
So, can AI really help us think through those things? I think it can, at least a little. These tools reflect the worldviews, biases, and knowledge structures of the data they’re trained on. No, they don’t have beliefs or experiences. But they do model how we argue and explain, and sometimes that’s enough to help us form our own answers – especially for those of us who lack a real-life critical-thinking partner.
In the end, using AI to explore philosophical questions is less about the answers and more about the act of questioning itself. It turns the tool into a mirror. One that helps us see how we think, what we notice, and where we might go next.