I can’t remember where I read it, but someone said “LLMs provide three types of answer: so vague as to be useless, directly plagiarized from a source and reworded, or flat-out wrong but confidently stated as the truth.” I’m probably butchering the quote, but that was the gist of it.
Hold on, let me have ChatGPT rephrase that for you.
I’m not exactly sure of the source, but there was a statement suggesting that language models offer three kinds of responses: ones that are too general to be of any value, those that essentially mimic existing content in a slightly altered form, and assertions that are completely incorrect yet presented with unwavering certainty. I might be paraphrasing inaccurately, but that was the essence.
So the same as answers on Reddit then