#1
Plugin Developer
Posts: 7,172
Karma: 5006097
Join Date: Dec 2011
Location: Midwest USA
Device: Kobo Clara Colour running KOReader
Do built-in AI prompts need an explicit 'say I don't know' instruction?
So I used the built-in 'concise summary' prompt to ask gpt-oss:20b, running on Ollama, about a fanfic (anonymized below) in my library, knowing that the model probably didn't know anything about it.
The 'reasoning' said (emphasis mine): Quote:
Follow up: Quote:
Quote:
#2
Still reading
Posts: 15,216
Karma: 111120239
Join Date: Jun 2017
Location: Ireland
Device: All 4 Kinds: epub eink, Kindle, android eink, NxtPaper
By design, the LLM / generative AI "makes stuff up". It's applying statistics to all the scraped data; it "knows" nothing. Like the 1960s ELIZA bot, it will echo back, so if you write "That doesn't exist", you'll likely get a fake apology.
Use search with AI disabled.
#3
Grand Sorcerer
Posts: 14,141
Karma: 243831037
Join Date: Jan 2014
Location: Estonia
Device: Kobo Sage & Libra 2
I just tried this, also with a fanfiction story. The LLM was Gemini. It said outright that it was unable to provide the summary because it couldn't find this title by this author.
I didn't give it any instructions to say so if it didn't know.
#4
Plugin Developer
Posts: 7,172
Karma: 5006097
Join Date: Dec 2011
Location: Midwest USA
Device: Kobo Clara Colour running KOReader
Quote:
But the reality is that it's not going to go away right now. Google doesn't even index sites that block AI scraping now, for example. Spoiler:
And people with less understanding are going to be misled; I'm just suggesting a guardrail that will at least help a bit. Quote:
EDIT: When I gave it a real book from 2010 ("Against The Tide" by John Ringo) using the built-in prompt, it ruminated for minutes before again coming back with a made-up premise, etc. After adding "If you don't know anything, say that you don't know anything," it finished much faster with "I’m sorry, but I don’t have any information on that book."
Last edited by JimmXinu; Yesterday at 04:01 PM.
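For anyone who wants to experiment with this outside calibre, here is a minimal sketch of the same idea against Ollama's HTTP chat API (`POST /api/chat` on the default local port). The model name, prompt wording, and the `build_messages` helper are just illustrations taken from this thread, not calibre's actual prompt:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint

# The extra guardrail sentence suggested in this thread.
GUARDRAIL = "If you don't know anything, say that you don't know anything."

def build_messages(title, author, guardrail=True):
    """Build a chat message list asking for a concise summary of a book,
    optionally appending the 'say you don't know' guardrail."""
    prompt = f"Give a concise summary of the book '{title}' by {author}."
    if guardrail:
        prompt += " " + GUARDRAIL
    return [{"role": "user", "content": prompt}]

def ask_ollama(messages, model="gpt-oss:20b"):
    """Send the messages to a local Ollama instance and return the reply text."""
    payload = json.dumps({"model": model, "messages": messages, "stream": False})
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Example (requires a running Ollama server with the model pulled):
# print(ask_ollama(build_messages("Against the Tide", "John Ringo")))
```

Whether the guardrail actually helps will vary by model; as posts below note, smaller models may still invent a premise regardless.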
#5
Grand Sorcerer
Posts: 14,141
Karma: 243831037
Join Date: Jan 2014
Location: Estonia
Device: Kobo Sage & Libra 2
That's a good thing. I've not used LLMs much so far, exactly because they're known to hallucinate (yes, yes, I know they're not intelligent or self-aware, but it's a convenient term to use) and you have to check their answers constantly, which is a bother. I do wish they'd just say so if they cannot find an answer.
#6
Still reading
Posts: 15,216
Karma: 111120239
Join Date: Jun 2017
Location: Ireland
Device: All 4 Kinds: epub eink, Kindle, android eink, NxtPaper
Quote:
The so-called guardrails give a false sense of security. The systems inherently "hallucinate" by design. They can't reliably quote sources, or check whether something is made up. They are actually designed "to make stuff up". It's garbage and not fixable. Nor can it be economically viable. It will pop.
#7
want to learn what I want
Posts: 1,700
Karma: 7908443
Join Date: Sep 2020
Device: none
Quote:
The disclaimer is very pertinent and hopefully sufficient: AI generated answers can be inaccurate, please verify any answers before acting on them.
(Yet I find AI an invaluable resource, and I've already found it useful several times, mostly in the Viewer.)
#8
creator of calibre
Posts: 45,685
Karma: 28549304
Join Date: Oct 2006
Location: Mumbai, India
Device: Various
In my experience they still make stuff up. I have had conversations with AI that go along the lines of: I ask a question, it gives a made-up answer, I tell it the answer is made up, it apologises and comes back with a new made-up answer. If you happen to ask it a question about something not in its training data, it's pretty hopeless. It's very much a case of needing to understand the limitations of the tool; if you do, and use it keeping those in mind, it's very useful.
That said, I have no objection to adding such a line to the prompt. It won't do any harm. You can try and see what effect it has, if any, by asking a custom question instead of using one of the quick actions.
#9
Wizard
Posts: 1,180
Karma: 2431850
Join Date: Sep 2008
Device: IPad Mini 2 Retina
These LLMs don’t really “know” anything, especially at a meta level: knowing whether they know something. So they don’t know that they don’t know something. When they say that they don’t know something, they are merely echoing back your own prompts.
#10
Grand Sorcerer
Posts: 14,141
Karma: 243831037
Join Date: Jan 2014
Location: Estonia
Device: Kobo Sage & Libra 2
Why, then, did it tell me that it could not find the book to summarise? I didn't prompt it to do so.
#11
null operator (he/him)
Posts: 22,055
Karma: 30277960
Join Date: Mar 2012
Location: Sydney Australia
Device: none
Quote:
Reading some of the comments here, one might think that humans don't 'hallucinate', i.e. make things up, indulge in obscurantism, never admit to not knowing, etc. And I'm not thinking of socmed nutjobs; I'm thinking of clinicians, bank employees, telco/big tech support, major retailers, etc.
BR
#12
Grand Sorcerer
Posts: 14,141
Karma: 243831037
Join Date: Jan 2014
Location: Estonia
Device: Kobo Sage & Libra 2
Quote: