Credit: Created using Magic Media, a generative AI app by Canva

This article is based on a Medium post by Damian Radcliffe. We asked Claude AI to shorten the article and then it was edited by a human. Damian Radcliffe also used ChatGPT to write and refine the original article.

As we begin another term, it is crucial to address the role of generative AI in education. A recent poll of my students revealed increased AI usage in their work, highlighting the need for clear guidelines on its use.

While these tools can enhance creativity and productivity, we must ensure students do not become overly dependent on them, or overlook the environmental impact these tools can have. Our goal is to balance innovation with academic integrity and critical thinking skills – the cornerstones of creative and journalistic work.

Rather than banning AI tools outright, I believe we should teach students to use them responsibly. The future of journalism and communication will belong to professionals who can effectively leverage these technologies while maintaining ethical standards.

Recommended uses

Students are encouraged to use tools like ChatGPT, Claude, QuillBot, Gemini, Perplexity, and others for:

  • Brainstorming: Generating ideas for stories, headlines, or social media content.
  • Outlining: Structuring your assignments or projects before diving into detailed writing.
  • Editing: Refining drafts, improving clarity, and suggesting stylistic changes.
  • Visuals and Presentations: Designing slides, infographics, or templates using AI-powered platforms like SlidesAI.io or Canva.

Documentation and attribution

Transparency is mandatory. All AI usage must be documented in submissions, including:

  • The specific tool used: e.g. ChatGPT, Gemini, or DALL-E
  • When and where it was applied in your workflow
  • How it was used: e.g. "I used DALL-E to generate a concept image for my presentation."

You may also be required to submit raw outputs or logs of your interactions with these tools to verify proper use. Failure to do so could result in a grade penalty.

Ethical use and originality

While AI can augment your work, the core ideas must remain your own. Students cannot:

  • Submit entirely AI-generated assignments
  • Use AI to create deceptive or unverified content

Misuse of AI tools will be treated as academic misconduct and may result in a grade penalty, failure, or other disciplinary action.

Best practices

  1. Fact-check everything: AI can produce incorrect information or fabricate details. Verify all AI-generated content against reliable sources, including your own reporting.
  2. Maintain your voice: AI should enhance, not replace, your unique perspective. Content lacking a human touch is often easily detectable and less compelling.
  3. Critical engagement: Consider how AI affects your learning and creative process. Ask yourself:
  • Is AI limiting your creativity and development?
  • Are you becoming too dependent on these tools?
  • Does your work reflect your own thinking and professionalism?

Professional considerations

Be prepared to discuss your AI experiences in job interviews. Some positions require disclosure of AI use, and some applications may be rejected if AI was utilised. Develop the ability to articulate both the benefits and limitations of AI in your field.

Damian Radcliffe is a journalist, researcher, and professor based at the University of Oregon. He holds the Chambers Chair in Journalism and is a Professor of Practice, an affiliate faculty member of the Department for Middle East and North Africa Studies (MENA) and the Agora Journalism Center, and a Research Associate of the Center for Science Communication Research (SCR).
