This is the first of a weekly blog series in which I'll share my insights on integrating AI into classroom settings. This journey is about more than just technology; it's about transforming teaching and learning through three core actions: Think, Try, and Transform.
Think: Understanding AI Bias
AI systems, though powerful, are not immune to the prejudices that permeate our society. They learn from vast datasets that often contain human biases, leading to skewed outcomes. For instance, an AI generating images might consistently associate certain professions with one gender or depict scenes that lack diversity.
Understanding AI bias is crucial, as it reflects and can reinforce existing societal stereotypes. We must think critically about the data we feed into these systems.
With AI becoming increasingly prevalent in schools, our students are certain to encounter AI bias. More concerning still, emerging evidence suggests that biased interactions with AI can linger and inform subsequent decision-making.
So, as teachers, we need to educate students about bias in AI systems. We already do this in several other subjects, such as the humanities, so students should already have begun to develop the skillset required to discern bias: critical thinking, an understanding of social and cultural contexts, communication, and collaboration.
Try: Engaging with AI to recognise bias
For younger students, educators can curate a selection of AI-generated content, such as images or stories, and guide the class in identifying stereotypes. Below are two images, with accompanying prompts, that could spark interesting discussions and debates in the classroom.
For older students, the activity becomes more interactive. They can use AI tools to generate their own content, then critically assess the results. Students can compare the AI’s portrayal of different genders, ethnicities, and professions, or analyse the diversity in AI-created narratives. They should be encouraged to ask questions like:
Does the content reflect the diversity of the real world?
What stereotypes are present or challenged?
How might this AI output influence someone’s perception of a particular group?
Is the content mirroring bias in society?
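For older classes with some coding experience, this critical assessment can even be made lightly quantitative. The sketch below is a minimal illustration, not a rigorous bias audit: the sample sentences are hypothetical stand-ins for the real AI outputs students would collect, and it simply tallies gendered pronouns across a set of generated descriptions.

```python
from collections import Counter

# Hypothetical sample outputs; in class, students would paste in
# real text generated by the AI tool they are investigating.
outputs = [
    "The engineer adjusted his laptop before the meeting.",
    "The nurse checked her patient's chart.",
    "The CEO gave his keynote speech.",
    "The teacher organised her classroom.",
]

# A deliberately simple word list; students could discuss its limitations.
GENDERED = {"he": "male", "his": "male", "him": "male",
            "she": "female", "her": "female", "hers": "female"}

def tally_gendered_words(texts):
    """Count gendered pronouns across a list of texts."""
    counts = Counter()
    for text in texts:
        for word in text.lower().replace(".", "").replace("'s", "").split():
            if word in GENDERED:
                counts[GENDERED[word]] += 1
    return counts

print(tally_gendered_words(outputs))
# → Counter({'male': 2, 'female': 2})
```

A balanced tally in a small sample proves little, which is itself a useful discussion point: students can explore how sample size and word choice shape what such a count can and cannot reveal.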
You could use resources such as this brilliant article in the Washington Post to begin discussions around how tools like Dall-E display a ‘tendency toward a Western point-of-view’.
An extension of this would be for pupils to use different image generators with the same prompt. They could discuss the differences and research how these companies are addressing bias in their systems in different ways.
Transform: Becoming critical users and creators
By actively generating and critiquing AI outputs, students don't just read about bias; they learn to identify and understand it in practice. They don't simply consume technology; they question and engage with it. This could help foster a generation of responsible, ethical technology users, transforming how they engage with digital technology and making them more informed, critical users and creators of it.
And I think this pursuit is mutually beneficial, as teachers will inevitably learn more about how these systems work. When teachers engage with pupils about the pitfalls of a technology, students feel more included in the learning process, and teachers gain a better understanding of how their students interact with AI. Teachers who spend time teaching about AI bias will also become quicker at spotting it themselves.
Furthermore, simply by interacting with and prompting an AI, a teacher is more likely to continue using these tools to enhance their practice and improve their workflow. It's one step closer to what Ethan Mollick calls the 10-hour rule: the minimum amount of time someone needs to spend experimenting with AI before they start using it more in their daily life or at work.