MobileRead Forums > E-Book Software > Calibre
Old 12-09-2025, 10:23 AM   #16
kovidgoyal
creator of calibre
Posts: 46,137
Karma: 29626604
Join Date: Oct 2006
Location: Mumbai, India
Device: Various
Users can create custom actions. Whether order matters or not is largely model-dependent and unknown.
Old 12-11-2025, 12:18 PM   #17
WillAdams
Wizard
Posts: 1,276
Karma: 3982000
Join Date: Feb 2008
Device: Amazon Kindle Scribe and Paperwhite (300ppi)
Yes, if the model knew what it doesn't know. Since it doesn't know that, such an instruction will not preclude hallucinations based on compression errors or flaws in the training methodology.
Old 02-02-2026, 03:49 PM   #18
mitsie
Member
Posts: 23
Karma: 12814
Join Date: Jun 2025
Device: PocketBook 4
Cool

Quote:
Originally Posted by kovidgoyal View Post
In my experience they still make stuff up. I have had conversations with AI that go along the lines of: "I ask a question, it gives a made-up answer, I tell it the answer is made up, it apologises and comes back with a new made-up answer". If you happen to ask it a question about something not in its training data, it's pretty hopeless. It's very much a case of needing to understand the limitations of the tool; if you do, and use it keeping those in mind, it's very useful.

That said, I have no objection to adding such a line to the prompt. It won't do any harm. You can try and see what effect it has, if any, by asking a custom question instead of using one of the quick actions.
That's why, in my AI plugin, using Retrieval-Augmented Generation (RAG) with a local LLM gets you a successful answer 99% of the time.

With a cloud AI, the temperature is typically set midway between creativity and factual accuracy. The higher you turn the temperature, the more creative the AI's responses become.

With a cloud service such as Gemini, you don't get to dictate the temperature; with a local LLM, you have full control.

As others have pointed out, an AI will simply answer that it does not know the book and return nothing. That's where RAG comes in: you load the ebook into the model from your filesystem. So no matter what the ebook is, even if it was never published or your mother wrote it, the AI still has access to the material and can summarise it.

Not only is it beneficial to use local LLMs because you have full control; it also stops the transfer of authors' data to cloud AI services that harvest ebooks.

Turning the LLM's temperature down to 0 so it doesn't "make stuff up", and using RAG over local ebooks for reference, gives successful and accurate results 99% of the time.

That's why I wrote the AI plugins the way I did in the first place.
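The temperature mechanism described above can be sketched in a few lines of plain Python. Dividing the logits by the temperature before the softmax flattens or sharpens the distribution, and temperature 0 degenerates to greedy decoding (plain argmax), which is why it suppresses "creative" sampling. This is a toy illustration of the sampling step only, not code from any particular plugin or inference engine.

```python
import math
import random


def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Pick a token index from raw logits.

    temperature == 0 means greedy decoding: always take the argmax,
    so the output is fully deterministic. Higher temperatures flatten
    the softmax distribution, making unlikely tokens more probable.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])

    # Scale logits, then apply a numerically stable softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]

    # Sample an index from the resulting distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

Note that temperature 0 only removes sampling randomness; the model can still produce a confidently wrong completion, which is why the RAG grounding matters as well.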
Old 03-24-2026, 05:46 AM   #19
kassandr
Junior Member
Posts: 2
Karma: 10
Join Date: Dec 2025
Device: Calibre
RAG solves this at the architecture level, not the prompt level

The hallucination problem discussed here is real, and "say I don't know" helps, but it's a patch on a structural issue. When calibre's Discuss sends a single book to an LLM, the model can only work with what it receives in that session. It has no memory of your library, no access to your annotations, no way to cross-reference between titles. For the question "what does this book say about X?" that's often enough. But for "what do my books say about X?" - across hundreds or thousands of volumes - it's a fundamentally different problem. That's where Retrieval-Augmented Generation (RAG) comes in: instead of asking the model to recall from training data, you index your actual library and feed the relevant passages to the model before it answers. The model responds based on your sources, with citations.
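The retrieve-then-answer flow described above can be sketched as follows. This is a minimal illustration, not ARCHILLES code: it uses a toy bag-of-words similarity where a real system would use a proper embedding model, but the shape of the pipeline is the same — index passages, retrieve the best matches for the query, and hand only those to the model with a citation instruction.

```python
import math
from collections import Counter


def embed(text):
    """Toy bag-of-words 'embedding'. A real pipeline would use a
    dense multilingual embedding model instead."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query, passages, k=2):
    """Return the k passages most similar to the query."""
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p["text"])),
                    reverse=True)
    return ranked[:k]


def build_prompt(query, passages):
    """Assemble the grounded prompt: sources first, then the question."""
    context = "\n".join(f'[{p["source"]}] {p["text"]}' for p in passages)
    return ("Answer using ONLY the sources below and cite them. "
            "If they do not contain the answer, say you don't know.\n\n"
            f"{context}\n\nQuestion: {query}")
```

Because the model answers from retrieved passages rather than from training-data recall, the "I don't know" instruction becomes enforceable: either the sources contain the answer or they visibly don't.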

I've been building an open-source tool called ARCHILLES that does exactly this. It connects to Calibre (among others), indexes full text, metadata, and annotations via multilingual embeddings, and exposes the library to any AI model via MCP (Model Context Protocol), so it works with Claude, ChatGPT, local models, whatever you prefer. Everything runs locally, no data leaves your machine. It's MIT-licensed and on GitHub: https://github.com/kasssandr/archilles. Still early, but the core search and citation pipeline is functional.