#16
creator of calibre
Posts: 46,137
Karma: 29626604
Join Date: Oct 2006
Location: Mumbai, India
Device: Various
Users can create custom actions. Whether order matters or not is largely model dependent and unknown.
#17
Wizard
Posts: 1,276
Karma: 3982000
Join Date: Feb 2008
Device: Amazon Kindle Scribe and Paperwhite (300ppi)
Yes, if only the model knew what it doesn't know. Since it doesn't, such an instruction will not preclude hallucinations caused by compression errors or flaws in the training methodology.
#18
Member
Posts: 23
Karma: 12814
Join Date: Jun 2025
Device: PocketBook 4
Quote:
With a cloud AI, the temperature sits midway between creativity and truthful accuracy: the higher you turn the temperature, the more creative the AI's responses become. With a cloud service such as Gemini, you don't get to dictate the temperature; with a local LLM, you have full control. As others have pointed out, an AI will simply answer that it does not know the book and return nothing. That's where RAG comes in: you load the ebook into the AI model from your filesystem, so no matter what the ebook is, even if it was never published or your mother wrote it, the AI still has access to the material and can summarise it. Local LLMs are beneficial not only because you have full control; they also stop the transfer of authors' data to cloud AI services that harvest ebooks. Turn the temperature down to 0 so the LLM doesn't "make stuff up", and use RAG with local ebooks for reference: that gives accurate results 99% of the time. That's why I wrote the AI plugins the way I did in the first place.
#19
Junior Member
Posts: 2
Karma: 10
Join Date: Dec 2025
Device: Calibre
RAG solves this at the architecture level, not the prompt level
The hallucination problem discussed here is real, and "say I don't know" helps, but it's a patch on a structural issue. When calibre's Discuss sends a single book to an LLM, the model can only work with what it receives in that session. It has no memory of your library, no access to your annotations, no way to cross-reference between titles. For the question "what does this book say about X?" that's often enough. But for "what do my books say about X?" - across hundreds or thousands of volumes - it's a fundamentally different problem. That's where Retrieval-Augmented Generation (RAG) comes in: instead of asking the model to recall from training data, you index your actual library and feed the relevant passages to the model before it answers. The model responds based on your sources, with citations.
I've been building an open-source tool called ARCHILLES that does exactly this. It connects to Calibre (among others), indexes full text, metadata, and annotations via multilingual embeddings, and exposes the library to any AI model via MCP (Model Context Protocol), so it works with Claude, ChatGPT, local models, whatever you prefer. Everything runs locally, no data leaves your machine. It's MIT-licensed and on GitHub: https://github.com/kasssandr/archilles. Still early, but the core search and citation pipeline is functional.