Generative AI Is My Research and Writing Partner. Should I Disclose It?
“If I use an AI tool to research or help me create something, do I have to cite it in my finished work as a source? How do you best cite AI tools when you use them?”
– Citation researcher
Dear Citation,
The short answer is that if you are using generative AI purely for research purposes, disclosure is probably not necessary. However, if ChatGPT or another AI tool contributes to the work itself, attribution is required.
Whenever you feel ethically conflicted about disclosing your interactions with AI software, here are two guiding questions I think you should ask yourself: Did I use the AI for research or for composition? And would the recipient of this AI-assisted creation feel misled if the source were revealed to be artificial rather than human? Of course, these questions won’t resolve every case, and academics are held to a high standard when it comes to proper citation, but I fully believe that five minutes of reflection can help you work out proper usage and avoid unnecessary headaches.
Distinguishing between research and composition is an important first step. If you use a generative AI as a kind of unreliable encyclopedia that points you to other sources or broadens your perspective on a topic, but none of its output ends up in the original writing, I don’t think that’s a problem, and there is little whiff of deception. Always double-check any facts you find in chatbot output, and never cite ChatGPT output or a Perplexity page as the primary source of truth. Most chatbots can now link to external sources on the web, so you can click through to learn more. Think of the chatbot, in this context, as part of the information infrastructure: ChatGPT can be the road you drive on, but the final destination should be an external link.
Let’s say you decide to use a chatbot to draw up a first draft, or to generate text, images, audio, or video to blend with your own. In that case, I think erring on the side of disclosure is wise. Even the Domino’s cheese sticks on the Uber Eats app now include a disclaimer that the food description is generated by AI and may list inaccurate ingredients.
Any time you use AI for creation, and in some research cases as well, you should ask the second question: would the reader or viewer feel cheated to learn later that parts of what they experienced were generated by AI? If so, out of respect for your audience, include a proper note explaining how you used the tool. Using AI to generate parts of this column without disclosure would not only be against WIRED policy, it would also feel dishonest to both of us.
By considering the people who will experience your work, and your goals for creating it in the first place, you can add context to your use of AI. That context helps you navigate tricky situations. In most cases, a work email generated by AI and reviewed by you is probably fine. However, using artificial intelligence to write a condolence email after a death would be insensitive, and it is something that has definitely happened. If the person on the other end of the conversation wants to connect with you on a personal, emotional level, consider closing that ChatGPT browser tab and pulling out a notebook and pen.
“How can teachers teach young people how to use AI tools responsibly and ethically? Do the benefits of AI outweigh the threats?”
— Raise your hand