
We can raise children who think before they prompt
Opinion + Analysis | Science + Technology
BY Emma Wilkins 26 AUG 2025
We may not be able to steer completely clear of AI, or we may not want to, but we can help our kids to understand what it is and isn’t good for, and make intentional decisions about when and how to use it.
ChatGPT and other artificial “help” is already so widely used that even parents and educators who worry about the ways it might interfere with children’s learning and development seem to accept that it’s a tool their kids will have to learn to use.
In her forthcoming book The Human Edge, critical thinking specialist Bethan Winn says that because AI is already embedded in our world, the questions to ask now are around which human skills we need to preserve and strengthen, and where we draw the line between assistance and dependence.
By taking time to “play, experiment, test hypotheses, and explore”, Winn suggests we can equip our kids and ourselves with the tools to think critically. This will help us “adapt intelligently” and set our own boundaries, rather than defaulting lazily and unthinkingly to what “most people” seem okay with.
What we view as “good”, and what decisions we make ourselves and encourage or discourage our children to make, will depend on what we value. One of the reasons corporations and governments have been so quick to embrace AI is that they prize efficiency, productivity and profit, and fear falling behind. But in the private sphere, we can make different decisions based on different values.
If, for example, we value learning and creativity, the desire to build up skills and knowledge will help us to resist using AI to brainstorm and create on our behalf. We’ll need to help our kids to see that how they produce something can matter just as much as what they produce, because it’s natural to value success too. We’ll also need to make learning fun and satisfying, and discourage short-term wins over long-term gains.
My husband and I are quick to share cautionary tales – from the news, books, podcasts, our own experiences and those of our friends – about less than ideal uses of AI. He tells funny stories about the way candidates he interviews misuse it; I tell funny stories about how disastrously I’d misquote people if I relied on generated transcripts. I also talk about why I’m not tempted to rely on AI to write for me – I want to keep using my brain, developing my skills, choosing my sources; I want to arrive at understanding and insight, not generate it, even if that takes time and energy. (I also can’t imagine prompting would be nearly as fun.)
Concern for the environment can also offer an incentive to use our brains, or other less energy-intensive tools, before turning to AI. And if we value truth and accuracy, the reality that AI often presents false information as fact will give us reason to think twice before using it, or strong motivation to verify its claims when we do. Just because an “AI overview” is the first result in an internet search doesn’t mean we can’t scroll past it to a reputable source. Tech companies encourage one habit, but we can choose to develop another.
And if we’ve developed a habit of keeping a clear conscience, and if we value honesty and integrity, we’ll find it easier to resist using AI to cheat, no matter how easy or “normal” it becomes. We’ll also be concerned by the unethical ways in which large language models have been trained using people’s intellectual property without their consent.
As our kids grow more independent, they might not retain the same values, or make the same decisions, as we do. But if they’ve formed a habit of using their values to guide their decisions, there’s a good chance they’ll continue it.
In addition to hoping my children adopt values that will make them wise, caring, loving human beings, I hope they’ll understand their unique value, and the unique value all humans have. The existential threat AI poses, when it seems to outperform us not only intellectually but relationally, might be the most concerning one of all.
In a world where chatbots are designed to flatter, befriend, even seduce, we can’t assume the next generation will value human relationships – even human life and human rights – in the way previous generations did. Already, some prefer a chatbot’s company to that of their own friends and family.
Parents teaching their children about values is nothing new. Nor is contradicting our speech with our actions in ways our children are bound to notice. We know we should set the example we want our kids to follow, but how often do we fall short of our own standards? In our defence, we’re only human.
We’re “only” human. In other words, we’re not divine. And AI is neither human nor divine. Whether or not we agree that humans are made in the image of God – are “the work of his hands” – I hope we can agree that we’re more valuable than the work of our hands, no matter how incredible that work might be.
Of all the opportunities AI affords us and our children, the prompt to consider what it means to be human, to ask ourselves deep questions about who we are and why we’re here, may be the most exciting one of all.


Emma Wilkins is a journalist and freelance writer with a particular interest in exploring meaning and value through the lenses of literature and life. You can find her at: https://emmahwilkins.com/