AI Detection Tools Can’t Be Trusted—I Have Proof

In my day we wrote essays in a blue book. The security on those was pretty good :grinning:

While that idea is creative, and I can see why some may be interested in trying it, I don’t recommend it. Embedding hidden text undermines trust between instructor and student, and once trust is broken it is difficult to restore. Once the practice becomes known, students will no longer trust anything they receive from the instructor; they will be hunting for “hidden” words. Trust is essential for superior teaching and learning.

I think a better, though admittedly more complex, approach is revamping assessments so that the majority of written work is done in class. This inevitably means “covering” less material, going deeper into what remains, and shifting more of the reading out of class (a modified form of the “flipped classroom”). Class time is reserved for lectures, debating, writing, lab work, coding, CAD, etc. We are still working through this; we have not arrived at “the” solution. But I would not advise embedding hidden words in documents sent to students.

3 Likes

It also feels like these detectors are trying to automate the wrong thing.

Grading students’ assignments, unless we are literally talking about a Scantron form with A/B/C/D checkboxes, is one of the areas where teachers add substantial value.

IMHO essays are not about getting the correct answer; they are about the thought process. And almost by definition, you can’t encourage or measure creative thought with automated tools. A student will also likely never come up with a novel solution that an automatic grading tool will mark correct.

There are plenty of useful targets for automation in a classroom setting. But essay grading doesn’t seem to be one of them.

1 Like

I agree. But, to clarify, I do not believe many educators use AI tools to grade written work (though I may be naive). What I think is more likely is that teachers and professors are running papers through AI detectors to assess whether the writing is the student’s own or largely AI-generated. My experiment indicates that AI detectors are unreliable, leading to false positives and false accusations.

1 Like

+1000 As an adult educator, I’ve been moving in this direction for over a decade. No AI was required to get me to change. I design exercises to deliver a learning experience and then debrief the learning.

2 Likes

Fair enough…

Another suggestion I have seen is something like this for an assignment:

Give this question to AI for a first draft and submit that draft. Then improve it yourself and show what you did to improve AI’s first draft.

1 Like

That’s good. I’d add, “And explain why you made the changes and how those changes better reflect your thinking and improve your essay.”

2 Likes

I don’t know if it is 100% here yet, but I have heard a number of rumblings that lead me to believe it is coming.

For example, I have heard a number of people talking about online systems where students are required to submit their homework, essays, etc., where the online system rejects submissions that do not meet some specifications.

There is also at least one major university – I think it is Caltech – that uses an online system to grade large swaths of computer programming assignments.
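For readers unfamiliar with how such systems work, here is a minimal sketch of the test-battery style of auto-grading described above. Everything here is invented for illustration — the function names, the toy assignment, and the test cases are assumptions, and real university systems are far more elaborate (sandboxing, partial credit, plagiarism checks, etc.):

```python
# Toy auto-grader sketch: run a submitted function against a fixed
# battery of (arguments, expected result) test cases and tally passes.
def grade(student_fn, test_cases):
    """Return (passed, total) for a submitted function."""
    passed = 0
    for args, expected in test_cases:
        try:
            if student_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash simply counts as a failed case
    return passed, len(test_cases)

# A hypothetical assignment: implement integer factorial.
def student_submission(n):
    return 1 if n <= 1 else n * student_submission(n - 1)

tests = [((0,), 1), ((1,), 1), ((5,), 120), ((7,), 5040)]
print(grade(student_submission, tests))  # (4, 4): all cases pass
```

Note that this kind of grader can only check outputs against expected answers — which is exactly the “automating the wrong thing” concern: a correct but unconventional solution that produces the right outputs passes, but nothing about the student’s reasoning is ever examined.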

That said, I think the “automating the wrong thing” concept is a general problem, not isolated to any particular discipline. :grinning:

3 Likes

Wow. This is sad. As this happens, I hope people vote with their feet and wallets.

University is expensive; people deserve real feedback.

1 Like

I was wrong on the university – it’s UC Berkeley. Although per the article, it’s apparently common in CS programs:

That article isn’t necessarily about the auto-grader giving improper results, but it shows that it exists.

Here’s one from Texas though, where a prof almost failed an entire class because he trusted ChatGPT to find AI writing:

iA Writer’s just-released blog post points out the same thing:

Some companies in the AI field claim that they will be able to discern human and artificial authorship. Some claim that they already can. That’s like Baron Münchhausen lifting himself (and his horse) out of a swamp by his own hair—it’s a bold, almost funny lie. We maintain that, in the end, only the author really knows. That’s another reason why we do our best to empower authors to remember what they wrote.