Ethical Issues in Using Artificial Intelligence?

This morning I published a blog post on Using Artificial Intelligence — It’s A Question of Ethics As Well as Utility.

I’ve been trying to figure out how I can best use AI in Craft and other sources. One of the questions I am grappling with concerns the ethical implications of using AI in my writing.

I’m still trying to figure all of this out and would welcome hearing others’ views.


Training these AI chatbots on the intellectual property of writers, musicians, and visual artists seems unethical in the extreme to me.


I would expect looming lawsuits over plagiarized content and theft of intellectual property. And as far as I know there is no way to find the source of the data to even attempt attribution.


I understand there are many ethical issues involving the AI developer’s use of intellectual property, etc., but I’m not focused on that. That’s primarily their ethical issue.

What I’m focused on is my more personal ethical issues in using AI in my writing.


I’m sure the lawyers will be busy, but I don’t think it is as clear-cut as some suggest, and neither are the ethics.

If I read a book, look at a picture in a newspaper, or watch a movie, some of that becomes part of my memory, shapes how I see and think about things, and may even become part of my habits and behaviour (from humming earworms to using a catchphrase to using impressionist palettes in my own paintings). No one would seriously suggest that there’s any breach of copyright: I’m not copying, but creating my life and my own work from a lot of ingredients that I have processed beyond all recognition.

The challenge from AI is that a similar process has been automated and is therefore far more powerful, capable of using far more data than was ever possible before. The whole point of generative AI is that it does not copy: it uses its inputs to build models, which in turn generate new outputs.


You are not a computer, robot, photocopier, electronic storage, or electronic communication system. Human beings are not excluded from reading and regurgitating what they have learned. But plagiarism is a thing. You will get in trouble if you use someone else’s words verbatim in your writing or speech-making, for example. As a reminder, here’s some text from a typical book’s copyright page.

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except as permitted by U.S. copyright law.


Yet Google Search has violated those “restrictions” forever. Why is ChatGPT any different?


I’m struck by the idea that a writer is ethically obliged to develop a voice. It seems like the least ethically fraught way to incorporate an LLM in that process would be in editing and reviewing. Neither is considered plagiarism, even when the feedback is substantial.


Thanks for your insight. To be honest, my thinking about whether “developing a voice” is an ethical obligation is not at all settled, or maybe not even consistent. I’m pushing out in several directions and trying to get my bearings. 🙂

My comments about developing a voice were really more in response to the idea that “serving my audience” means the mere production of prose, as opposed to providing them with my creative voice.

I think for me and my practice as I define it, I do have some ethical obligations to provide my readers with something more than borrowed prose. I agree with Anne Janzer, at least with respect to my practice: “All writers share an obligation to bring creativity to every piece, whether in crafting a title, identifying a unique perspective, or finding precisely the right turn of phrase.” I had that view of my practice before I read the quote.

For me, I think there are ethical implications in the need to develop my voice and provide creativity to my readers. Does this apply to every writer? Probably not. But it does to me, and for me, there are ethical obligations involved.

Perhaps I need to restructure my post to make this clearer?


Don’t search engines just guide you to the sources? They don’t really quote anybody, just provide potential research material. Perhaps the difference is that ChatGPT actually quotes sources without giving citations?


As I said, the lawyers will be busy, but I don’t think ChatGPT et al. do anything as simple as “quoting”. They build models of language that predict which word(s) are most likely to be an acceptable answer to a prompt, given all the examples they have been fed. They genuinely “generate” new language. That’s not intrinsically different from my reading for research and then writing a new paper. If I’ve read something in my browser and made notes, those words have been transmitted across the internet, copied onto my computer, and captured and processed there. No one asks permission. The assumption is that by making something public on the web, the publisher has given permission. How far that can be pushed will be the argument, and I don’t think there’s any obvious answer to that yet.
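To make the “predict, don’t copy” point concrete, here is a deliberately tiny sketch. Real LLMs use neural networks over vastly more data, not word-pair counts like this toy bigram model, and the corpus and function names here are invented for illustration. But the principle it shows is the same one described above: output is assembled from statistics about what tends to follow what, rather than retrieved from stored text.

```python
import random
from collections import defaultdict

# Toy "training data" (invented for this illustration).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": record which words follow which.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def generate(start, length, rng=random.Random(0)):
    """Generate text by repeatedly picking a likely next word."""
    word, out = start, [start]
    for _ in range(length):
        word = rng.choice(following[word])
        out.append(word)
    return " ".join(out)

# Can produce word sequences that appear nowhere in the corpus,
# e.g. mixing "cat" with "rug" -- generated, not copied.
print(generate("the", 5))
```

The generated sentence is built one prediction at a time, so it can recombine the training material into sequences the corpus never contained, which is the crux of the legal and ethical ambiguity being discussed.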


Let’s just consider when you use these tools in your own writing, the concern of the OP. There is no difference between copying from Google Search results and copying from ChatGPT results. You can certainly paraphrase from either, but I expect some people think they can just copy directly from ChatGPT (otherwise, why use it?) without consequence.

There is one line in the article that I think summarizes when and how to use AI:

On the other hand, it is possible for writers to benefit from AI by carefully choosing how to use AI to assist and improve their writing, while at the same time not allowing AI to replace their creative writing.

I see AI as an ideation tool - I am looking to extend and develop my ideas in directions that I may not have originally thought. I’ve also been using ChatGPT as a workhorse to develop templates and clean up text. Copying text straight from ChatGPT and presenting it as my own thoughts makes me feel queasy.


Google has been sued. Plenty of people think Google’s behavior is unethical also. I don’t control the court system. I said some AI chatbot training is unethical. However, it may well prove to be legal. Two different things.


I’ve read your blog post a couple of times and am having difficulty grasping your intent. Are you asking the age-old question of whether it’s OK to lift words from somewhere else and claim them as your own, or have you identified something in particular that’s new about doing that with AI?


Thanks for your comment. This is more of a “thinking out loud” kind of post for me, as opposed to an explanation of my settled thinking. That’s why I value the input of comments like yours. 🙂

I do think that part of the issue is plagiarism. People tend to view the results of AI as in a different class than, for instance, a book or article with a named author. I sense a willingness on the part of many to simply adopt wholesale (cut and paste) the product of AI and claim it as their own, where they likely wouldn’t do that with a book or article.

However, I think that AI is new in that it’s a much more encompassing extension of the prior AI used to aid writers, such as spell-checkers and tools like Grammarly, which not only checked spelling but also suggested rewordings. With this new level of AI, you can ask it to produce an article on a topic, or feed it an outline and tell it to write the article. I think putting your name on that, even though it has no named author, is a form of plagiarism.

A related issue with the “new AI” is the question of how much help is too much. Spell-checkers and grammar-checkers were very helpful but didn’t really interfere with the writer’s creative process. I see people using AI to such an extent that it no longer seems like help for the creative process, but a replacement for it. I’m not sure exactly where to draw the lines. That’s a new question with AI that didn’t exist before.

Maybe? Every human neural network learns (at least in part) by being exposed to the work of others. I’m not sure that there’s a significant difference just because the learning system is artificial. I’m not sure it isn’t, either, but I don’t think it’s as cut-and-dried a question as we would like it to be.

I missed your response and posted much the same thing. Credit where it’s due. 🙂


You may be on to something there. It’s probably more difficult to track down the origin, at least in the early going. I’m not convinced, though, that the person who would do that wouldn’t already have had an established slimy system for stealing other people’s work.

The editor in me thinks, heck, clean copy is clean copy, regardless of how it got clean. The proof is still in the substance and the reference list.

The fact checker in me thinks, where, specifically, are you seeing this?

I’m a software developer, and my experience of using ChatGPT and other tools like Copilot is that the code they spit out might work, or it might not. It takes my training and experience to turn that code into a working program.

So it is more like raw material than any kind of finished product. It is still my code in the end, because I’ve had to change and test it to get it to do what I want it to do.
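A trivial sketch of what that review-and-fix step often looks like in practice. The function and its bug here are invented for illustration, not taken from any actual assistant output: plausible-looking suggested code that misses an edge case, and the human-reviewed version that handles it.

```python
def average_suggested(numbers):
    # The kind of code an assistant might suggest: looks fine,
    # but crashes with ZeroDivisionError on an empty list.
    return sum(numbers) / len(numbers)

def average_reviewed(numbers):
    # Human-reviewed version: the developer spotted the edge case
    # while testing and handles it explicitly.
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)

print(average_reviewed([2, 4]))  # 3.0
print(average_reviewed([]))      # 0.0, instead of a crash
```

The substance of the fix comes from the developer’s testing and judgment, which is why the final code is reasonably claimed as their own work.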