Empowering Students in the Age of AI

Co-Creating Responsible Use Policies

Chad M. Topaz
5 min read · Sep 11, 2024

As generative AI becomes an increasingly prevalent tool in education, balancing its benefits with its risks is more important than ever. Today, I’m sharing how my students and I tackled this challenge by collaboratively developing our own policies for AI use. Through reflection, discussion, and shared decision-making, we created a policy document that captures our collective commitment to using AI thoughtfully and responsibly — an approach that I hope will resonate with educators and students alike as we navigate this new frontier.

Learning Data Science and Social Justice

At Williams College, I teach a course called “Data for Justice,” where students dive into data science through the lens of social justice. In this class, students learn everything from acquiring and cleaning data to visualizing and exploring it, all while tackling real-world issues like criminal justice, environmental justice, diversity in the arts, and education equity. We also adopt critical perspectives, emphasizing the ethical use of data and the impact of data on society. Designed for beginners, the course assumes no prior experience in programming or statistics — just an interest in social justice and a curiosity about how data can help drive meaningful change.

Generative AI in Class

In the era of large language models like ChatGPT, Gemini, and Claude, I encourage my students to use these tools as companions in their learning journey. Just as I wouldn’t ask students to do long division by hand when calculators are available, I see AI as a valuable resource for generating, debugging, and understanding code. For example, when students get stuck on syntax errors or need help troubleshooting, AI can act as an interactive tutor, offering suggestions and explanations that help them move forward. This allows students to focus on the broader concepts of data science, like problem-solving and critical thinking, without getting bogged down by minor technical hurdles.
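To make this concrete, here is a small, hypothetical example of the kind of error an AI assistant can help a beginner untangle. The snippet is my illustration rather than an exercise from the course, and the data and column names are invented; it shows a classic pandas filtering mistake and the fix an AI tutor might walk a student through.

```python
# Hypothetical illustration (not a course exercise): a common beginner
# error when filtering a pandas DataFrame, and the corrected version an
# AI tutor might suggest. The data and column names are invented.
import pandas as pd

df = pd.DataFrame({"state": ["MA", "NY", "MA"], "arrests": [120, 45, 80]})

# A student's first attempt often uses Python's `and`, which raises
# "ValueError: The truth value of a Series is ambiguous":
#   df[df["state"] == "MA" and df["arrests"] > 50]
#
# An AI assistant can explain that element-wise comparisons need the `&`
# operator, with parentheses around each condition:
subset = df[(df["state"] == "MA") & (df["arrests"] > 50)]
print(subset)
```

The value here is not the corrected line itself but the explanation that comes with it: the student learns why the first attempt fails and can apply that understanding the next time.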

Beyond coding, students often see the potential of AI for other tasks, such as conducting background research, generating summaries to organize their thoughts, or brainstorming new project ideas. However, these uses come with challenges. For instance, AI-generated content can be biased, misleading, or overly simplistic, which could undermine the quality and integrity of their work if not critically assessed. That’s why it’s essential to engage with these tools thoughtfully, ensuring that they enhance rather than replace the learning process.

Class Activity: Collaborative Development of a Generative AI Policy

Recognizing the benefits and pitfalls of AI, I knew it was crucial for our class to establish clear guidelines for its use. Students need to be aware of the risks, such as privacy concerns, the perpetuation of biases, and the broader impact on society. To build this awareness, we read the paper “Taxonomy of Risks Posed by Language Models” by Weidinger and colleagues, which outlines key areas of concern, including privacy, security, bias, fairness, transparency, accountability, and environmental impact.

But learning about risks isn’t enough; we needed policies that actively mitigate them. Rather than imposing a set of rules, I wanted to create a sense of shared responsibility by involving students directly in the policy-making process. During the second class session of the semester, students reflected on their views about AI in our course, considering their concerns and what guidelines they thought were necessary for ethical use.

Students worked in pairs to address specific aspects of AI use, such as academic integrity, privacy and data security, and bias and fairness. Each pair engaged in lively discussion, weighing the benefits and ethical challenges AI presents, and documented their ideas in a shared Google Doc. The pairs then presented their guidelines to the class, opening the floor for feedback and further refinement directly within the document. This collaborative approach not only empowered students to voice their concerns but also helped us collectively craft a policy that is thoughtful, inclusive, and grounded in real-world challenges.

Collaboratively Generated Policies

I’m pleased to share the final set of policies we arrived at.

Academic Integrity and Transparency

  • Use AI Responsibly and Honestly: AI should support your academic work, not replace your effort. Do not copy AI-generated content directly; instead, critically engage with the information provided and incorporate it meaningfully into your own work.
  • Cite AI Contributions: Always disclose and cite any use of AI in your assignments, including specific outputs, prompts, or ideas. Transparency about AI usage helps maintain academic integrity and accountability.
  • Know Usage Guidelines: Be aware of when AI is allowed or restricted based on the context of the assignment. Understand the boundaries of AI use, particularly in writing-intensive tasks versus technical problem-solving.

Collaboration and Learning

  • Use AI as a Learning Partner: Use AI as a tool to enhance your learning experience, akin to a personal tutor. AI can help you explore new ideas, simulate scenarios, or provide feedback, but it should not be used as a shortcut to completing assignments without understanding.
  • Reflect and Engage Critically: Engage thoughtfully with AI outputs, reflecting on how AI supports your learning objectives. Focus on deepening your understanding by questioning AI’s suggestions and using them as a springboard for your own critical thinking.
  • Take Personal Responsibility in Learning: Balance the benefits of AI assistance with the need for personal effort. Ensure that the learning process remains active, intentional, and driven by your own critical analysis and decision-making.

Privacy and Data Security

  • Protect Sensitive Information: Avoid inputting personal, sensitive, or confidential data into AI systems. Design prompts that maintain privacy while still being effective in guiding AI outputs.
  • Anonymize Data: When interacting with AI, ensure that any data used is anonymized to safeguard privacy and comply with ethical standards (one possible approach is sketched just below).
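As one concrete illustration of the anonymization guideline above, here is a minimal Python sketch, entirely my own and not part of the class policy, of two simple steps a student might take before pasting any data into an AI tool: dropping direct identifiers and replacing names with opaque hashes. The dataset and column names are invented, and strictly speaking, hashing is pseudonymization rather than full anonymization, so treat this as a starting point rather than a guarantee.

```python
# A minimal sketch (mine, not the class policy) of preparing data before
# sharing any of it with an AI tool. Column names are hypothetical.
# Note: hashing is pseudonymization, not true anonymization.
import hashlib
import pandas as pd

df = pd.DataFrame({
    "name": ["Ada Lovelace", "Alan Turing"],
    "email": ["ada@example.edu", "alan@example.edu"],
    "sentence_months": [24, 36],
})

# Drop direct identifiers entirely...
anonymized = df.drop(columns=["email"])

# ...and replace names with short, non-reversible hashes.
anonymized["name"] = anonymized["name"].apply(
    lambda s: hashlib.sha256(s.encode()).hexdigest()[:8]
)
print(anonymized)
```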

Bias and Fairness

  • Critically Assess AI Outputs: AI often reflects biases from its training data. Review AI-generated outputs critically, particularly in contexts related to social justice, to identify and address any biases.
  • Design Prompts Mindfully: Craft prompts carefully to minimize potential biases and ensure that AI’s outputs are as fair and accurate as possible. Actively question and correct biases if detected.
  • Limit AI in Sensitive Analysis: Use AI for technical tasks where human insight is less critical. For sensitive interpretations, especially those involving social justice, rely on human judgment.

Ethical Considerations

  • Fact-Check AI Outputs: Always verify information generated by AI, as it can contain inaccuracies or biases. Approach AI-generated content with scrutiny, just as you would with other sources.
  • Exercise Active Ethical Judgment: Ensure that ethical judgments remain your responsibility, especially in complex social and environmental contexts. Maintain accountability for your own judgments.

Conclusion

I’m genuinely impressed with the policies the students developed — they are thoughtful, practical, and show a deep engagement with the ethical complexities of using AI in academic settings. This collaborative approach not only fostered a sense of shared responsibility but also led to a set of guidelines that are both useful and implementable in our course. I’m sharing this process and the resulting policies in the hope that they might help others facing similar challenges with AI in the classroom. As AI continues to evolve, having these conversations and involving students in policy-making are crucial steps toward creating a learning environment that is both innovative and ethically sound.
