Quote:
Originally Posted by DNSB
If anything, the current state of the art in LLM AI suggests that it should not and can not be trusted.
From what I've read, hallucinating is a fundamental flaw of LLMs that can never be fixed. Since they don't actually understand things the way a human does and can only build statistical models of what should usually come next, some percentage of the time those statistics will yield an incorrect response. The worst part is that even when the data you feed into the LLM is 100% correct, it will still hallucinate and get things wrong from time to time, and no amount of developer effort can eliminate that. The solution is to use LLMs for the things they're good at and go back to the drawing board to design a proper AI for other tasks.
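To make the statistical point concrete, here's a toy sketch in Python (not how any real LLM works, and the probabilities are made up purely for illustration): if a model has learned from perfectly correct text that "Paris" follows "The capital of France is" about 97% of the time, with other city names soaking up the rest because they co-occur in similar sentences, then sampling from that learned distribution still produces a wrong completion a few percent of the time, even though every training example was correct.

```python
import random

# Hypothetical next-token probabilities a model might have learned for the
# prompt "The capital of France is". Values are invented for illustration.
LEARNED_NEXT_TOKEN_PROBS = {
    "Paris": 0.97,
    "Lyon": 0.02,
    "Marseille": 0.01,
}

def sample_next_token(probs: dict) -> str:
    """Pick a token at random according to the learned probabilities."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

trials = 100_000
wrong = sum(sample_next_token(LEARNED_NEXT_TOKEN_PROBS) != "Paris"
            for _ in range(trials))
print(f"Wrong completions: {wrong}/{trials} (~{wrong / trials:.1%})")
```

Run it and you'll see roughly 3% wrong answers despite flawless "training data", which is the whole problem in miniature: the error rate comes from the sampling itself, not from bad input.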