ChatGPT 101: The risks and rewards of generative AI in the classroom
The rise of generative artificial intelligence tools like ChatGPT is prompting many educators to reimagine the role of technology in the classroom.
At the University of Toronto, Susan McCahan, vice-provost, academic programs and vice-provost, innovations in undergraduate education, has been on the front lines of the response to this fast-evolving technology.
McCahan, a professor of mechanical and industrial engineering in the Faculty of Applied Science & Engineering, says the proliferation of generative AI tools presents both opportunities and challenges for higher education.
She recently spoke to U of T News about the lessons that have been learned about the academic implications of generative AI and the big questions that remain.
What are some of the ways generative AI is impacting teaching and learning?
Large language models have significant implications for how we teach coding and writing because they will change the way people code and write, particularly when it comes to routine tasks.
A lot of the writing I do in a day isn't deeply intellectual. It's the kind of writing that LLMs do pretty well. However, an LLM is probably not going to write as well as me when I'm writing an academic paper, because of my knowledge and understanding of the field and my own unique perspective.
Right now, the technology is pretty good at writing at the level of a first-year or second-year student, but it's not up to what would be expected of a student in their third or fourth year.
The biggest challenge is making sure students are still progressing to that third- or fourth-year level if they are taking shortcuts in their first years of university or even high school or middle school.
People have compared this to a calculator, but I don't think that's the right analogy because a calculator is a very domain-specific tool and generative AI has much broader applications.
There was an existential crisis in math education in the 1980s when calculators capable of symbolic manipulation came along. Educators questioned whether we should teach our students how to do differentials and integrals if these programs could solve those complex equations. Yet, we came through that, and we still teach students how to add and subtract, multiply and divide, do differentials and integrals. We also teach students how to use these symbolic manipulation programs in ways that allow them to go deeper than if they were to do it all by hand.
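To make the calculator analogy concrete, here is a toy sketch of the kind of symbolic manipulation those 1980s calculators automated. It is an illustration only, not a real computer algebra system: it handles nothing beyond polynomials, which are represented here as hypothetical {exponent: coefficient} dictionaries.

```python
def differentiate(poly):
    """Apply the power rule d/dx(c*x^n) = c*n*x^(n-1) term by term.

    `poly` maps each exponent to its coefficient; constant terms
    (exponent 0) vanish under differentiation, so they are dropped.
    """
    return {n - 1: c * n for n, c in poly.items() if n != 0}

# 3x^2 + 5x + 7  differentiates to  6x + 5
p = {2: 3, 1: 5, 0: 7}
print(differentiate(p))  # {1: 6, 0: 5}
```

Full computer algebra systems go far beyond this, but the point stands: once software handles the mechanical steps, teaching can shift toward when and why to apply them.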
I think we will come to a point where people recognize when it is useful to use AI to help and when it is not going to be very helpful. Hopefully, we will arrive at a place where it allows people to advance through the basics faster and move on to more complex writing and coding.
Does U of T consider the use of generative AI tools to be cheating?
We expect students to complete individual assignments on their own. If an instructor decides to explicitly restrict the use of generative AI tools, then their use would be considered an unauthorized aid under the Code of Behaviour on Academic Matters. This is considered an academic offence and will be treated as such.
Some might ask why we don't classify this as plagiarism. One of the biggest misconceptions that people have is that LLMs take what's on the internet, mash up the text and ideas and repackage it as a compilation. However, that's not how the technology works.
Tools like ChatGPT are trained on large amounts of online materials to identify patterns of speech and make predictions about words most likely to go together. If I say "one, two, three," it knows that "four" probably comes next. It knows "four" is a noun, but it doesn't associate the concept with a square or the horsemen of the apocalypse.
When you enter a prompt into ChatGPT, it's not combing through information to produce sentences or paragraphs or ideas; it's making word-by-word predictions that imitate patterns of speech around a subject. That's why we don't treat the use of these tools as plagiarism; we treat it as an unauthorized aid.
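The "one, two, three, four" example can be sketched in a few lines. This is a deliberately crude bigram model, not how LLMs actually work internally (they use neural networks trained on vast corpora), but it captures the core idea being described: predicting the next word from patterns seen in training text.

```python
from collections import Counter, defaultdict

def build_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    successors = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        successors[current_word][next_word] += 1
    return successors

def predict_next(successors, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

# Toy "training data": the model learns surface patterns, not meaning.
model = build_bigrams("one two three four one two three four five")
print(predict_next(model, "three"))  # prints "four"
```

The model outputs "four" after "three" purely because that pattern appeared in its training text; it has no concept of the number four, which mirrors the distinction McCahan draws above.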
What resources are available to help instructors adapt to this emerging technology? Are there any best practices they should follow?
We've put together an FAQ addressing some of the considerations around generative AI, while providing instructors with resources to help them communicate what technology is or isn't allowed in their courses.
I think we're in a moment when it's really important for faculty to be clear on their syllabi about whether they explicitly allow these tools or explicitly don't. If they are permitted, it should be clear how AI tools can be used, for what assignments and to what degree, and whether students must explain, document or cite what tools they use and how.
This is new, and neither faculty nor students are altogether clear whether this will become the next Wikipedia, something everyone uses but no one talks about anymore, or whether it should never be used because it's simply unreliable.
What are some other considerations around the use of generative AI in an academic context?
LLMs often get things wrong, and very confidently so. For example, back in January, I asked ChatGPT for my biography. It told me that I had worked at the University of British Columbia and that I was a leading researcher in biomedical engineering: things that seem believable, but are factually untrue. The technology has improved since then, but LLMs still get things wrong in ways that are not immediately apparent or obvious. These are called hallucinations, and they can be so subtle that they're hard to detect unless you really know the subject.
Ultimately, the student is responsible for the material they submit, and if they're submitting material that is factually wrong, they're responsible for it. You can't blame the chatbot, the same way the chatbot can't take credit. It's not like a team project where you're working with another student, and you can say, "It wasn't me, it was my partner." If your partner is AI, you are responsible for all of the work you submit, whether or not there are parts that were co-created with AI.