The use of artificial intelligence tools like ChatGPT for educational purposes raises questions about the ethical and safety implications of students relying heavily on these tools to complete their schoolwork.
Dr. Ron Darvin, an assistant professor in the Department of Language and Literacy Education at the Faculty of Education, discussed how artificial intelligence is changing how we teach students and how they learn.
How can educators use AI to supplement their teaching?
Artificial intelligence is a broad term covering fields like machine learning, natural language processing, computer vision, and robotics. Teachers already use AI when they search Google for information or use tools like Kahoot in class. They have been using AI to enhance their teaching for a while, and they can keep doing so as more tools emerge. For example, they can use chatbots like ChatGPT to help with explaining complex ideas, answering questions, or practicing language skills. Teachers should learn what AI can and cannot do, see whether it fits their teaching, and use it wisely.
How can the use of AI in classrooms benefit young students?
Chatbots are not just for simplifying tough concepts. They can help students in many ways, like offering writing tips, suggesting presentation ideas, or even staging virtual chats with historical figures to bring history to life. Khan Academy, for example, is piloting Khanmigo, a tutoring platform powered by GPT-4, to help with math homework and essays. The key is to use chatbots not to give answers but to support students' learning by asking questions and helping them think through and reflect on their ideas.
Are there any risks with children using AI?
When students depend on chatbots for assignments and answers, it can hinder the development of their critical thinking and problem-solving skills. Relying on AI simply to provide answers could also make students less motivated to explore and learn independently. Chatbots might also give out wrong, inappropriate, or biased information. While they are handy for many tasks, it is essential to verify AI-generated information against trusted sources.
What about ethical and safety concerns?
Chatbots work by using large language models (LLMs), computer algorithms that predict which words can be strung together into an answer. They learn from massive data sets, from news articles to social media. This raises questions about who owns the content they generate. Since chatbots typically do not cite sources, using their output without proper credit becomes an issue of academic integrity. The problem is that AI detectors cannot always tell whether a text was written by a human or a chatbot.
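As a loose illustration of the "predict the next word" idea, here is a toy sketch in Python. It is a simple bigram counter over a made-up corpus, vastly simpler than a real LLM (which uses neural networks trained on billions of documents), but it shows how an answer can be strung together one predicted word at a time:

```python
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny
# made-up corpus, then generate text by repeatedly predicting the
# most frequent next word. Real LLMs are neural networks; this
# bigram counter only illustrates the next-word-prediction idea.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# next_words[w][v] = number of times v followed w in the corpus
next_words = defaultdict(lambda: defaultdict(int))
for w, v in zip(corpus, corpus[1:]):
    next_words[w][v] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = next_words.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def generate(start, length=5):
    """String predicted words together, as a chatbot strings an answer."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

Because the model only counts word pairs, its "answers" are fluent-looking but have no understanding behind them, which is one intuition for why chatbot output can be confidently wrong.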
Also, when kids engage with chatbots that lack age-verification tools, they might share personal details in their conversations that pose privacy risks. Parents need to be aware of what data is collected, how it is used, and whether it is shared with others.
Are K-12 schools cracking down on AI? What does the future look like?
K-12 schools have different approaches to AI in education. Some schools block access to AI tools like ChatGPT on school Wi-Fi and devices, but this could be inequitable, as wealthier students with personal devices can readily access these tools at home. Others embrace AI and teach students how to use it responsibly.
I believe AI is pushing us towards flipped classrooms, where students learn content outside of class and focus on practical applications and problem-solving activities in class. Going forward, teachers may need to rely more on oral exams and presentations, while essays will need to involve more personal reflection that a chatbot cannot replicate.