Because the prompt didn’t matter. And it wasn’t germane to the point I was trying to make.
This question was a trivial one, and the LLM got it right. But I would not use an LLM to provide information, because I cannot trust the answers. My point is that since it is so easy to get different answers out of a statistical word-salad machine, it is rather silly to trust any of them.
And at the bottom of the Wikipedia page are non-hallucinated references to original sources.
Here is an example on this very forum, from the OP of this thread, of an answer that was wrong but assumed to be correct:
Being an astronomy nerd, I knew how to find the answer. But in the many subjects I am not familiar with, I would have made the same mistake @Bmosbacker did then, and trusted that the answer from the LLM was correct.
Note that that thread was about using an LLM for coding, which is a good use case, since one can execute the code to determine whether the answer is correct (in a test environment, of course).
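To make that concrete, here is a minimal sketch of what that verification loop looks like. It's in Python, and the `days_between` function and its test values are entirely made up for illustration; the point is only that code, unlike a factual claim, can be checked against cases whose answers you already know:

```python
from datetime import date

# Pretend an LLM suggested this function.
def days_between(date1: str, date2: str) -> int:
    """Return the number of days between two ISO-format dates."""
    d1 = date.fromisoformat(date1)
    d2 = date.fromisoformat(date2)
    return abs((d2 - d1).days)

# The check: run it against inputs with known answers.
assert days_between("2024-01-01", "2024-01-31") == 30
assert days_between("2024-02-28", "2024-03-01") == 2  # 2024 is a leap year
print("All checks passed.")
```

If an assertion fails, the LLM's code was wrong and you know it immediately. There is no equivalent built-in check when the LLM hands you a fact.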