It’s going to make homework assignments “interesting,” and I feel for instructors and teaching assistants who have to determine whether a student’s work is original, AI-generated, or plagiarized.
Unfortunately, it appears to be very good at following instructions – without regard to accuracy in many cases.
Given a prompt that doesn’t in fact have an accurate answer, it will often essentially make something up – complete with plausible (but completely fictitious) citations.
I’ve seen examples where it helpfully provides realistic-sounding skin-cream recipes including methylmercury (highly toxic), generates a pretty convincing “Nigerian prince” spam email, and constructs a convincing history of nonexistent events complete with fabricated source citations containing made-up URLs that accurately mimic the structure of legitimate articles from real publications (including the Smithsonian, etc.).
Some of these required specific prompting, but that’s the point: ChatGPT can be used to create very persuasive and potentially harmful frauds.
I realize a hammer can be used to kill people too. I’m not arguing there’s no place for a tool like an AI chatbot. But use caution.
Teachers should ask students for the reference material they got that information from.
Or quiz the student on additional material around the subject that they should have known before submitting the suspect assignment.
Right, but examples I’ve seen show (a) fake citations that look very plausible (easy enough to check if there is a dead-end link, much harder otherwise) and (b) actual citations to real articles that don’t in fact support the claim being made.
Pretty sure most high school and college level classes don’t have a low enough student-teacher ratio for the instructors to check every source and read through every supporting article to ensure they are not only real, but say what it is claimed that they say.
Someone is probably working on an AI to detect AI-generated text. That will solve the problem. Until the generating AI is changed to evade the AI detectors, etc.
Someone on Mastodon (who is lost to the sands of time) said that, rather than calling this “Artificial Intelligence,” we should call it “Artificial Mimicry.” And that seems to be a more descriptive name.
Someone asked ChatGPT why an abacus was better for computation than a GPU, and ChatGPT went on at great length justifying it. I tried today, and it apparently had learned that wasn’t the case.
Still, despite its follies, it is amazing technology.
Did you try? I did, and while I couldn’t find a “Sample” button, I did find a Sample menu item. Alas, choosing it did not reveal any hotkey info that I could see.
This “Sample” function (just double-click on any process within Activity Monitor) contains a lot of useful information, and with the “Analysis” function you can sometimes even get a stuck app working again. But in the 10+ years I’ve used this function from time to time, I have never seen a list of hotkeys there.
The big problem with an “AI” like this is that, these days, only a fraction of people question the information they find on the Internet.
It is a high risk to have bots, AIs, and similar systems spreading a wide range of wrong and/or falsified information.
This could, in the worst case, even destroy democracies!
We actually talk a lot about this at my university: what to do with written assignments and exams. This amazing tool is unfortunately not good for us teachers (and students!). I would love there to be some kind of hidden watermark on text from OpenAI, so that at least a plagiarism tracker could spot it, but that doesn’t seem to be something that is coming.
My university has changed the rules for exams written at home under someone’s supervision over video.
While in the past it was sufficient to use your computer’s built-in camera to show the student’s surroundings and make sure nobody was there to do any “whispering,” an external camera is now required to show not only the workplace but also the display of the computer in use.
I’ve read there is something along these lines. IIRC: machine learning models lean heavily on statistical distributions, and those can be detected.
Maybe not by harried adjuncts with too many students in too many classes, however.
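To make the statistics idea concrete: one signal sometimes cited is “burstiness” — human prose tends to mix short and long sentences, while model output can be more uniform. Here’s a purely illustrative toy sketch of that idea (the function name, metric, and sample texts are my own inventions, not any real detector, and real tools use far more sophisticated measures like language-model perplexity):

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words.

    Higher values mean more variation in sentence length -- a crude
    stand-in for the statistical signals real detectors look at.
    This is a toy heuristic, not a reliable detector.
    """
    # Split on sentence-ending punctuation and drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Uniform sentence lengths vs. varied ones:
uniform = "The cat sat here. The dog ran off. The bird flew away."
varied = "Stop. The quick brown fox jumped over the lazy dog before anyone noticed. Why?"
print(burstiness(uniform) < burstiness(varied))  # True for these samples
```

Of course, a generator can be tuned to mimic exactly these statistics, which is why the cat-and-mouse dynamic above seems inevitable.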
I read a post recently from a professional translator who said Google Translate and its equivalents have driven the cost of translation way down, and thus translators’ income as well – many people and companies will make do with a so-so translation that costs nearly nothing, and in projects that are bid out, the low-cost winner is often someone who just applies machine translation.
So yes, I’m sure in some spheres these AI tools will simply eliminate some drudgery from creative jobs and make other things easier, giving creative professionals more time to do other work even better. In others, not so much.