The LLM is a special case of "AI", none of which is actual intelligence; most of it is pattern matching.
This paper argues that LLMs are over-hyped and the least useful form of "AI" because of their poor reliability. They do not "hallucinate": that apparent behaviour is an inevitable failure mode of how they work.
Quote:
"There is no clear evidence that that shows LLMs are useful because they are extremely unreliable," Birhane said. "Various scholars have been doing domain specific audits … in legal space … and in medical space. The findings across all these domains is that LLMs are not actually that useful because they give you so much unreliable information."
https://www.theregister.com/2024/08/...arch/?td=rt-3a
And no-one knows whether so-called "general AI" is even possible; people who claim they do are lying, deluded, or have no clue about intelligence.
As an aside, an IQ test is a comparative performance measurement on a narrow range of tasks, which requires the people tested to be of roughly the same age and educational, social and ethnic background. It doesn't measure intelligence, and its originator never claimed it did. Mostly it has been used to create social exclusion (the NI UK grammar schools, now abolished, or the US Army).