Generative AI Is My Research and Writing Partner. Should I Disclose It?

“If I use an AI tool for research or to help me create something, should I cite it in my completed work as a source? How do you properly give attribution to AI tools when you use them?”

—Citation Seeker

Dear Citation,

The straightforward answer is that if you’re using generative AI purely for research, disclosure is probably not necessary. But attribution is probably required if you use ChatGPT or another AI tool for composition.

Anytime you’re feeling ethically conflicted about disclosing your engagement with AI software, here are two guiding questions I think you should ask yourself: Did I use AI for research or for composition? And might the recipient of this AI-assisted work feel misled if it were revealed to be synthetic instead of organic? Sure, these questions may not map perfectly to every situation, and academics are held to a higher standard when it comes to proper citation, but I believe taking five minutes to reflect can help you understand appropriate usage and avoid unnecessary headaches.

Distinguishing between research and composition is a crucial first step. If I’m using generative AI as a kind of unreliable encyclopedia that can point me toward other sources or broaden my perspective on a topic, but not as part of the actual writing, I think that’s less problematic and unlikely to leave the stench of deception. Always double-check any facts you run across in the chatbot’s outputs, and never reference a ChatGPT output or Perplexity page as a primary source of truth. Most chatbots can now link to outside sources on the web, so you can click through to read more. Think of it, in this context, as part of the information infrastructure. ChatGPT can be the road you drive on, but the final destination should be some external link.

Let’s say you decide to use a chatbot to sketch out a first draft, or have it generate writing, images, audio, or video to blend with your own. In this case, I think erring on the side of disclosure is smart. Even the Domino’s cheese sticks in the Uber Eats app now include a disclaimer that the food description was generated by AI and may list inaccurate ingredients.

Every time you use AI for creation, and in some cases for research, you should be homing in on the second question. Essentially, ask yourself whether the reader or viewer would feel tricked by learning later on that portions of what they experienced were generated by AI. If so, you absolutely should use proper attribution by explaining how you used the tool, out of respect for your audience. Not only would generating parts of this column without disclosure go against WIRED’s policy, it would also just be a dry and unfun experience for both of us.

By considering the people who are going to be engaging with your work, and your intentions for creating it in the first place, you can add context to your AI usage. That context is helpful for navigating tricky situations. In most cases, a work email generated by AI and proofread by you is probably just fine. Even so, using generative AI to draft a condolence email after a death would be insensitive (and it’s something that has actually happened). If a human on the other side of the communication is seeking to connect with you on a personal, emotional level, consider closing out of that ChatGPT browser tab and pulling out a notepad and pen.


“How can educators teach adolescents how to use AI tools responsibly and ethically? Do the advantages of AI outweigh the threats?”

—Raised Hand
