Old 03-06-2025, 08:45 PM   #9
Aleron Ives
Wizard
 
Posts: 1,702
Karma: 16308824
Join Date: Sep 2022
Device: Kobo Libra 2
Quote:
Originally Posted by DNSB
If anything, the current state of the art in LLM AI suggests that it should not and can not be trusted.
From what I've read, hallucination is a fundamental flaw of LLMs that can never be fixed. Since they don't understand things the way a human does and can only build statistical models of what usually comes next, some percentage of the time those statistics will yield an incorrect response. Worse, even when the data you feed into the LLM is 100% correct, it will still hallucinate and get things wrong, and no amount of developer effort can eliminate this. The solution is to use LLMs for the things they're good at and go back to the drawing board to design a proper AI for other tasks.
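To make the statistics point concrete, here's a toy sketch in Python (the probabilities are invented purely for illustration, not measured from any real model): a generative model is ultimately a probability distribution over continuations, and as long as any probability mass lands on wrong answers, sampling will produce them at roughly that rate, no matter how clean the training data was.

Code:
import random

# Hypothetical next-token distribution for a model asked
# "What is the capital of France?" Even a model trained only on
# correct data can leave a little probability on wrong answers.
NEXT_TOKEN_DIST = {
    "Paris": 0.97,   # correct answer
    "Lyon": 0.02,    # plausible but wrong
    "Berlin": 0.01,  # wrong
}

def sample_answer(dist):
    """Sample one continuation from the model's distribution."""
    tokens = list(dist.keys())
    weights = list(dist.values())
    return random.choices(tokens, weights=weights, k=1)[0]

trials = 100_000
wrong = sum(1 for _ in range(trials)
            if sample_answer(NEXT_TOKEN_DIST) != "Paris")
print(f"Hallucination rate: {wrong / trials:.2%}")  # ~3% in this toy setup

The exact numbers don't matter; the point is that sampling from a distribution can only be as reliable as the distribution itself, and nothing about training guarantees the wrong-answer mass ever reaches zero.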