Here's a link to an interesting opinion post by an educator from the US. Titled "How can AI be used ethically when it's been linked to suicide?", the author argues that instead of educators and students trying to work out how to ethically use generative AI technologies, the responsibility should be on the tech companies to develop products that don't:
- drive young people to suicide or to AI psychosis;
- contribute to climate and environmental problems;
- have a negative impact on learning, for example by confidently including information that's false and adding citations that don't exist.
"Here’s my proposal: It’s not on us, on you and me, to use AI ethically or responsibly. It’s on the companies to build safe, reliable, ethical products. If you can’t do that and still make money, you don’t deserve to make money. And until that happens, I’d like our educational institutions, at least, to lead with the message that these generative AI programs as they currently exist simply cannot be used ethically. That doesn’t mean unenforceable bans, but it does mean telling the truth."
I keep coming back to this statement from the Curriculum for Wales, that we want learners to be "ethical, informed citizens of Wales and the world". If we want our students to be ethical and informed, then maybe we as teachers should model the same qualities in the decisions that we make in using these generative AI tools to create resources, especially when we know about the many major concerns associated with the development and use of these tools?
Science and Technology AoLE - Learners (and teachers?) need to develop the ability to meaningfully ask the question, 'Just because we can, does that mean we should?'