It’s not principally the ‘discussion’ which causes me personally not to use Gen AI, it’s the ‘writing’.
If you find that the initial process helps you to refine your thoughts, then I’m not seeking to stop you, of course.
But as for writing, you clearly envisage more than ‘expressing my thoughts, in my way’:
“They’re better at writing than most people. So that’s a win - who doesn’t want to read stuff that’s well written? That said, you might find that you don’t like the writing, so you use it as a first draft and clean it up.”
I am just not convinced that tweaking something produced by an algorithm constitutes ‘writing’ in any meaningful sense, and it is not something I wish to do.
More generally, I am not happy with the way the models have been trained: almost certainly on ‘stolen’ data from living authors who have not been compensated for its use, and at great environmental cost. The rush to shove AI into everything in the vague hope that it will make money means I am subject to it all the time, but where I do have a choice, I choose to avoid it, and as far as I can see, I am not alone in this.
Hence the need for writers to be honest upfront about their use of AI, so that those who wish to opt out can do so, and those who continue reading know what they are reading.
This seems to me such an obvious point that I’m intrigued by your unwillingness to adopt it, given that you’re very open on this forum about your process (and I genuinely do thank you for that). Why is it such a big step to include a short paragraph in the introduction or acknowledgements when the information would benefit the reader?
As an aside, some publishing houses now require their authors to guarantee that they have not used Gen AI in writing their book.