There have always been many ways for students to cheat in the classroom. Sometimes the cheating can be detected, and other times not. I had a case of plagiarism in the first higher education course I ever taught. The student almost got away with it, but made the mistake of writing about my specialty subject area. He copied verbatim (text and graphics) an as-yet-unpublished research paper that a colleague had asked me to peer review just months before. I recognized it immediately and confronted the student, and appropriate disciplinary action was taken. After that, I started using plagiarism checkers (e.g., Turnitin), but I'm not sure that an unpublished draft would have raised any red flags.
Now, with the introduction of artificial intelligence chatbots that can write papers and reports from scratch (e.g., ChatGPT), the problem of catching cheaters has become more complicated. Even if the instructor suspects that a report or paper is not the student's original work, there is no reliable way to prove that the student used an AI chatbot to write it.
There are many stories about what individual teachers are doing to address the AI chatbot challenge. Some are requiring students to write their first drafts of papers in the classroom, using computers that can monitor where they go to do their research. Some are moving away from written papers toward more oral exams and handwritten assignments. Many schools have banned AI chatbots on their computer networks. Others are moving away from open-book or take-home assignments.
Some are planning to use AI chatbot-generated reports as a springboard for critical analysis: What's missing from the chatbot's analysis? Are there other ways to interpret the same research results? Other instructors are revising their materials to focus on literature and other texts that the AI chatbots may not have been "trained" on, and refocusing their tests on critical analysis rather than synthesis of easily researched material (the AI chatbot is good at synthesis, less skilled at analysis).
Online education will be affected in similar ways by the AI chatbot revolution. Online instructors will be equally hard-pressed to identify chatbot-assisted papers and test answers. In some ways, the online platform may make it even easier for students to fool their instructors with AI chatbot-generated content. Some of the responses being tested in brick-and-mortar classrooms (e.g., in-person testing and paper-writing) cannot easily be implemented in an online format.
What can be done to ensure that online students produce original work? Share your ideas with our readers by emailing me.
In the end, it’s the student who is most harmed by cheating. They don’t learn the material, and that may come back to haunt them in the post-graduation real world.