No Surprises: A Practical Principle for AI Transparency
Christians can navigate AI's grey areas wisely.
Not Your Grandparents’ Church Newsletter
You get an email from your church. It’s a calendar of upcoming events, each with a short description. Unremarkable, right? But then you read this at the top:
This events calendar was created with AI.
Would you consider that a good thing, since it shows transparency? Or would you see it as a red flag? Would your mind leap to the possibility that the upcoming sermon might be written by a bot?
How Can We Respond to AI Practically?
I recently wrote that the first casualty of AI is trust. The timing of this technology’s arrival is troubling because our confidence in our institutions and our neighbors is already low. In that post, I suggested that Christians can push back by building strong face-to-face friendships. I think deep church community will prove essential in an era when you can’t always believe your eyes.
But with AI poised to bring massive economic and political disruption, I’m not suggesting that strong friendships are all Christians will need to weather the storm. It’s a good time for believers to ask: What practical steps can we take to help guide AI toward beneficial outcomes?
Two Kinds of AI Transparency
AI transparency is a broad topic, and I think sorting it into two buckets will help clarify our thinking. First, there is transparency for users of AI. This is personal transparency. Since nearly everyone will soon be using AI in some form, this bucket offers practical choices anyone can make. That kind of transparency will be the focus of this post.
There is also the fledgling regulatory framework that will govern the (legal) use of AI. This relates to public transparency. For most believers, influence here will be more indirect, and not all Christians will want to be part of the conversation. But as I’ll discuss in the next post, there’s great opportunity for Christian influence on public AI transparency.
Pay Attention to When You Use AI
As I said, I’m going to focus on bucket number one in this post: personal AI transparency. This is a great place to start because everyone, sooner or later, will need to think about this. The first step is straightforward: just practice being aware of how you use AI. Pay attention to when you choose to use it, like when you open ChatGPT or a similar app. And be especially alert to when it snakes its way in: did you finish writing the last sentence in that email? Or did it auto-complete?
This step might seem insignificant, but we can’t be transparent about something we’re not aware of. New technologies can become integrated into our lives so quickly and thoroughly that it becomes hard to tell where our use of them begins or ends. Twenty years ago, most people had never used a smartphone. Today, many check their phones over a hundred times a day. Ask yourself: how often is glancing at your phone a conscious decision?
Talk About Your Use of AI
AI will become similarly woven into our lives. I don’t think it makes sense to fear this, any more than it would have made sense to fear the expanding role of the internet in the late 1990s. But don’t be passive as this happens either. Choose to be aware of when you use AI, and—this next step is important—talk about it. Bring it up with friends, and urge them to be alert to AI creep in their lives as well. In this sense, a lot of AI transparency will be informal. Awareness will lay a foundation for openness in everyday conversation.
No Surprises: A Principle for When to Disclose AI Use
On the other hand, some uses call for more formal disclosure. An example sits right at the top of this post: under the picture it says “generated with Microsoft Image Creator.” That small label does a few things. No one has to wonder whether I moonlight as an illustrator, or whether I might be violating an artist’s copyright. It also lets others know they can experiment with the same technology.
The question I ask is: Would it surprise someone to learn I did this with AI? AI image generation has gotten good enough that some results could be mistaken for human artwork. And awareness of what AI can do varies widely from person to person. So to avoid any surprises, I label AI images as such.
How I Use AI When I Write
Favoring transparency, as I do, doesn’t require taking things to an extreme. For example, I use ChatGPT and Perplexity to do quick research while I write. But I don’t bother slapping an “AI used for fact-finding” label on each post.
But why not?
First, I think that would get repetitive and annoying. More importantly, few would be shocked to learn that I use AI tools to do research as I write about AI. And the way I prompt a tool like ChatGPT for fact-finding isn’t really different from how I use Google. Since writers have been using search engines for decades, this passes the no surprises test. Furthermore, I never take anything an AI tells me at face value, and I always check sources. You’ll sometimes find links to those sources within posts. That’s both transparent and helpful to anyone who wants to learn more.
Similarly, I use AI to proofread my work before I publish it. Since tools like Grammarly have been around for more than a decade, I don’t think this would surprise anyone. For this, I use Claude with a prompt along these lines: “Check for any grammatical errors or problems with clarity. List any changes you recommend.”
This way I only get suggestions for minor changes. I can ignore any I don’t like, which happens a lot. In the true spirit of a machine, AI often wants to iron out the quirks that make human writing interesting.
Practical Applications for Churches Using AI
Let’s apply the no surprises principle to a couple of possible AI uses for churches. Many have pointed out how AI can speed up routine tasks, and how this can benefit ministries. Think back to the example I started with: an email about upcoming events. This is the kind of task that AI excels at.
But ask: would anyone receiving that email be surprised to learn that it was AI-generated? Two years from now, the answer might be no in some churches. But as of today, a lot of people are still unaware of what generative AI can do. In my opinion, it’s a good idea to include a note on the email clarifying that you used AI to write it.
It won’t upend a church, of course, if someone discovers that AI wrote a routine email that doesn’t acknowledge that fact. But since AI poses a major threat to trust in general, I’d encourage Christians to err on the side of transparency. I think the no surprises principle, combined with putting yourself in the shoes of the least-tech-savvy person in your church, will prove helpful.
And a person who discovers that a church didn’t disclose the AI origins of an email might wonder: What else are they using it for and not telling us about?
The Sermon Spark: Would You Be Okay With This?
I’m going to start with an example involving a hypothetical pastor. Then, further down, I’ll show how the principles at stake apply to any believer. Most Christians, I’m confident, would consider it wrong to use AI to produce an entire sermon. And I think very few pastors would go that far. On the other hand, using an LLM like ChatGPT the same way many use Google during sermon prep is fine, at least in my view. But there’s a grey area between those two examples. And that’s where one believer’s shrug at the “new normal” will collide with another’s “I can’t believe any Christian would do that.”
Case in point: how would you react if you learned your pastor was using AI software to create sermon outlines? I don’t personally think it’s a good idea. If a pastor doesn’t have time to write outlines for his own sermons, my opinion is that members of the church need to step up and take other tasks off his plate. But I’m also sympathetic to the pastor who responds, “That’s easier said than done.”
And what about the pastor who says, “I get the concern, but I just use the software to jog my thinking. The outline I end up using is always completely different from what AI gives me”? This underscores the problem facing all believers, leaders and laypeople alike. AI doesn’t present a simple fork in the road with one path for Christians who embrace it and another for those who reject it.
What it brings is a fog of ambiguity. If you stand over here with one particular use case, you have decent visibility; the line between wise and unwise use of AI seems clear. But walk over there to a slightly different use and suddenly you can’t see your hand in front of your face.
Live With Integrity and Make Disciples
Psalm 86:11 offers insight:
Teach me your way, O Lord, that I may walk in your truth; unite my heart to fear your name.
Integrity is the particular idea I want to pluck from this verse. The words “unite my heart” are a reminder to avoid the deceit and double-mindedness warned against in places like Psalm 101:7 and James 4:8. AI tempts us toward both. It offers shortcuts to completing tasks that, until very recently, required a human. Those efficiency boosts aren’t necessarily wrong. But deception slips in the moment you give someone the impression that something a computer did was your own personal effort.
Counter Mistrust With Discipleship
It will also help to get Matthew 28:19 in the mix:
Go therefore and make disciples of all nations, baptizing them in the name of the Father and of the Son and of the Holy Spirit.
Disciple-making can take many forms. But as Jesus’ example with the apostles shows, it has to involve time spent together face-to-face. It also requires openness and transparency. Many think of discipleship as strictly involving those times when our Bibles are open in front of us. No doubt, we need lots of that. But discipleship can also involve one Christian teaching another something practical. The possibilities are endless, from fixing a leaky faucet to—you guessed it—sharing how you accomplish a task with AI. “Here’s how I bring technology into my study of the Word.” Or, “This is how I use it to prepare for teaching.” Or, “This is the workflow I use for my job.” Since we’re all at square one when it comes to using AI tools wisely, we need to be transparent with one another and willing to teach each other.
These principles apply to all Christians, not just those in formal leadership roles. Are you a college student using ChatGPT for quick research as you write a paper? Sooner or later, you’ll feel the tug to let it write your last page. Maybe you’re the employee of a company that’s implementing AI. I’ve experienced this firsthand, and ethical pitfalls abound. Sharing what you’re doing—and avoiding—with someone else will help you resist the temptations that come with powerful technology.
Mind the (Integrity) Gap
When social media appeared a couple of decades ago, far too many Christians allowed a gap to open between the image they presented to others (“your best life now”) and what was actually going on (anxiety, sorrow, Facebook creeping).
AI has the potential to drive a similar wedge between how we want others to perceive us and who we really are when no one is watching. But sincere, imperfect efforts at transparency counter AI’s gravitational pull toward mistrust.
And deep discipleship offers a full reversal: instead of isolation and mistrust, we can grow our confidence in each other. What’s the first step you’re going to take?