I feel the need to say that what you get in response to a prompt to an LLM is not in any way an answer (i.e. something arrived at by evaluating the question, assessing evidence and retrieving facts). Instead, the model, built on vast training data, predicts what a response to similar inputs should look like. The “AI” is producing language in line with its training data, and with the correlations and patterns in language associated with it, conditioned on the prompt you have given.
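To make that concrete, here is a deliberately tiny sketch of the idea (my own toy illustration, not how a real LLM is built): a “model” that, given a word, predicts the next word purely from how often words followed it in its training text. It matches patterns; it never evaluates whether what it emits is true.

```python
import random
from collections import defaultdict

# Toy illustration only: count which word follows which in a tiny corpus.
training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog"
).split()

counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(training_text, training_text[1:]):
    counts[a][b] += 1

def predict_next(word):
    """Sample a next word in proportion to how often it followed `word`
    in training -- pattern continuation, not understanding."""
    followers = counts[word]
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # e.g. "cat" or "dog" -- whatever the corpus favours
```

A real LLM replaces the bigram counts with a neural network over enormous data, but the character of the output is the same: the statistically plausible continuation, not a checked answer.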
Where the “question” is mainly about language (e.g. proof-reading, producing a new text in a consistent, perhaps new tone) this can be very effective. Where the question is “real” (e.g. the answer depends on evaluation of circumstances, weighting of factors, discerning emotions or ideas, or seeking a new synthesis) you risk getting something that looks appropriate but may well not be, especially if it’s in an area where the training model did not have vast amounts of neutral data.
I see a place for the current AIs in helping us wrangle text, or possibly even in generating text as a jumping-off point for writing (e.g. helping us come up with plausible names for characters or places, or hit an intended tone), and in generating illustrations and similar images. There may be some limited value in using them to “brainstorm”. Those are all cases where you are seeking something “in line” with the patterns and correlations that make up the model (e.g. “draw on the model to generate examples of birthday activities for pre-teen girls” - a list that might include some you personally do not know or have forgotten). It can’t “reality check” that the “answer” would be a good or helpful one, or even that what it is saying exists or is true.
That kind of thing can be useful, but it can’t match the hype (which is needed to sustain investment, as very little of this is in any way sustainable otherwise), and you may or may not find it personally useful.
It’s also been extremely well known and documented since at least the 1950s that humans have a deeply embedded tendency to perceive and interact with anything that gives even a hint of human characteristics as if it were actually human, and so to ascribe to it human motivations, emotional states, understanding, perception and so on. I find myself thanking ATMs sometimes.
Something you can reasonably “chat” with triggers that innate response very strongly, and that makes it very hard for us to evaluate these apps: it’s very interesting that people are so willing to adapt themselves to the foibles of AI apps, when similar levels of unreliability or unpredictability would make us instantly reject a non-AI app.
FWIW I am quite comfortable not paying extra for any AI: I just don’t think they will give me my money’s worth, and I don’t trust them with anything that matters. I am also quite comfortable using AI as a component within a system or other software (e.g. photo processing, transcription or translation in Apple systems) where the model has been focused on specific tasks.