
Generative AI Is My Research and Writing Partner. Should I Disclose It?


“If I use an AI tool for research or to help me create something, should I cite it in my completed work as a source? How do you properly give attribution to AI tools when you use them?”

—Citation Seeker

Dear Citation,

The straightforward answer: if you’re using generative AI for research, disclosure is probably not necessary. If you use ChatGPT or another AI tool for composition, however, attribution is probably required.

Anytime you’re feeling ethically conflicted about disclosing your engagement with AI software, here are two guiding questions I think you should ask yourself: Did I use AI for research or for composition? And might the recipient of this AI-assisted work feel misled if the tools were revealed to be synthetic instead of organic? Sure, these questions may not map perfectly to every situation, and academics are definitely held to a higher standard when it comes to proper citation. Nevertheless, I fully believe taking five minutes to reflect can help you understand appropriate usage and avoid unnecessary headaches.

Distinguishing between research and composition is a crucial first step. If I’m using generative AI as a kind of unreliable encyclopedia that can point me toward other sources or broaden my perspective on a topic, but not as part of the actual writing, I think that’s less problematic and unlikely to leave the stench of deception. Always double-check any facts you run across in the chatbot’s outputs, and never reference a ChatGPT output or Perplexity page as a primary source of truth. Most chatbots can now link to outside sources on the web, so you can click through to read more. Think of it, in this context, as part of the information infrastructure. ChatGPT can be the road you drive on, but the final destination should be some external link.

Let’s say you decide to use a chatbot to sketch out a first draft, or to have it come up with writing, images, audio, or video to blend with yours. In this case, I think erring on the side of disclosure is smart. Even the Domino’s cheese sticks in the Uber Eats app now include a disclaimer that the food description was generated by AI and may list inaccurate ingredients.

Every time you use AI for creation, and in some cases for research, you should be homing in on the second question. Essentially, ask yourself whether the reader or viewer would feel tricked by learning later on that portions of what they experienced were generated by AI. If so, you absolutely should give proper attribution by explaining how you used the tool, out of respect for your audience. Not only would generating parts of this column without disclosure go against WIRED’s policy, it would also just be a dry and unfun experience for the both of us.

By considering the people who are going to be enjoying your work, and your intentions for creating it in the first place, you can add context to your AI usage. That context is helpful for navigating tricky situations. In most cases, a work email generated by AI and proofread by you is probably just fine. By contrast, using generative AI to draft a condolence email after a death would be insensitive, and it’s something that has actually happened. If a human on the other side of the communication is seeking to connect with you on a personal, emotional level, consider closing out of that ChatGPT browser tab and pulling out a notepad and pen.

“How can educators teach adolescents how to use AI tools responsibly and ethically? Do the advantages of AI outweigh the threats?”

—Raised Hand

Dear Raised,

When it comes to education about generative AI, I think we should start young and stay realistic. Kids are beginning to learn computer literacy skills in elementary school and continuing through their senior year of high school. Lessons about safe, effective use of AI tools would not only help build strong technical skills, but also potentially help students forge a healthy amount of emotional distance from chatbots.

Teachers and parents are rightly worried about kids using generative AI to fraudulently write their essays for them, or using ChatGPT and other homework helpers like ByteDance’s Gauth AI to quickly grab answers. Lesson plans more focused on in-class practice and discussion may help alleviate this issue. But focusing just on homework misses another looming threat to students. Over the next few years, I expect teenagers to dive further into long, heartfelt, and sometimes inappropriate conversations, not with random strangers online but with sweet-talking chatbots like Character.AI or Replika.

During a difficult, awkward stage of life, already complicated by the harsh spotlight of contemporary social media, teenagers will likely turn even more inward and asocial, relying on synthetic companions to understand the world around them. Earlier in 2024, a teenager in Florida who was an avid user of roleplaying chatbots confided thoughts of self-harm to the AI before his suicide, according to reporting from The New York Times. Teaching kids how to safely use AI is not only about avoiding false information, but also about avoiding unreal relationships and staying tethered to reality.

Whether the advantages of generative AI in the classroom outweigh the threats is a bit of a moot point heading into 2025. The tools have already seeped into the daily lives of students. For educators, equipping these kids with the knowledge and skills to navigate the world around them is paramount. An unquestioning embrace of generative AI may not be smart, but a blind avoidance could be just as catastrophic.

At your service,

Reece

Seeking advice on how to navigate the world of artificial intelligence tools? Submit any questions you’d like Reece Rogers to answer to mail@wired.com, and use the subject line The Prompt.
