6 tips to detect AI-generated student work


Although AI has some educators worried, it is possible to distinguish AI-generated student work from human-generated work

Key points:

  • AI has a place in the classroom, but students shouldn’t rely on it to write papers
  • Educators can use a few key strategies to identify AI-generated work
  • See related article: How to redefine learning in the digital age

As the school year starts, the potential use of generative AI has K-12 teachers and university faculty collectively anxious about these new tools and their possible impact on instruction. A recent professional development meeting about AI at a midwestern university set a new attendance record for such events.

There is no sure-fire way to identify text as AI-generated, and some of the early tools offered for that purpose have either proven only somewhat effective or have been withdrawn from public use for failing to meet their developers' standards. A number of AI detectors are available, including CopyLeaks, Content at Scale, and GPTZero, but most note that their results should be considered alongside a conversation with the student involved. Asking a student to explain a complex or confusing portion of a submission may be more effective than any of the AI detectors.

Instructors at all levels should consider the following criteria to help them determine whether text-based submissions were written by a student or generated by AI:

1. Look for typos. AI-generated text tends not to include typos, so the small errors that make writing human are often a sign that a submission was created by a person.

2. A lack of personal experience or the use of generalized examples is another potential sign of AI-generated writing. For instance, “My family went to the beach in the car” is more likely to be AI-generated than “Mom, Betty, and Rose went to the 3rd Street beach to swim.”

3. AI-generated text is produced by finding patterns in large samples of text, so very common words such as “the,” “it,” and “is,” along with stock words and phrases, are more likely to appear in AI-generated submissions. (A rough illustration of this kind of word-frequency signal appears after this list.)

4. Instructors should look for unusual or complex phrases that a student would not normally employ. A high school student writing about a lacuna in his or her school records might be a sign the paper was AI-generated.

5. Inconsistent style, tone, or tense may be a sign of AI-derived material. Inaccurate citations are also common in AI-generated papers: the format is correct, but the author, title, and journal information have simply been thrown together and do not correspond to an actual article. This and other inaccurate information produced by a generative AI tool are sometimes called hallucinations.

6. Current generative AI tools tend to be trained on materials developed no later than 2021, so text that references events from 2022 or later is less likely to be AI-generated. Of course, this will continue to change as AI engines are updated.
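For readers who want a concrete sense of the word-frequency signal mentioned in tip 3, the minimal Python sketch below tallies how much of a passage is made up of very common words. It is only an illustration of the general idea, not how CopyLeaks, GPTZero, or any other detector actually works; the common-word list and the sample sentence are invented for the example.

# Illustrative sketch of the word-frequency idea in tip 3; not any detector's actual method.
# The common-word list below is a made-up example, not a validated set.
import re

COMMON_WORDS = {"the", "it", "is", "and", "of", "to", "a", "in", "that", "this"}

def common_word_ratio(text):
    """Return the fraction of words in the text that come from the common-word list."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return 0.0
    return sum(word in COMMON_WORDS for word in words) / len(words)

sample = "Mom, Betty, and Rose went to the 3rd Street beach to swim."
print(round(common_word_ratio(sample), 2))

A higher ratio proves nothing on its own; as the tips above stress, it is one weak signal to weigh alongside a conversation with the student.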

This article is not intended to dissuade instructors from using AI detection software, but to make them aware of the limits of such tools.

In the end, as with any other student issue, speaking with the student is the best way to determine whether they are submitting their own work or that of a machine. One approach would be to randomly ask one or two students per assignment to explain orally how they developed their submissions. This oral-exam method might go far toward encouraging students to be prepared to defend their own work rather than relying on AI.


Steven M. Baule, Ed.D., Ph.D.