Anyone seen the response in one tech publication to the report that OpenAI's own tests show o3 and o4-mini hallucinating more than their earlier versions? I couldn't find it, but a comment I saw elsewhere condensed the original into saying that LLMs exemplify the "trust but verify" philosophy. I suspect the link to the original text was hallucinated by an AI.