View Single Post
Old 06-19-2024, 10:00 PM   #9
Bradles
Quote:
Originally Posted by Geremia View Post
It would?
Well, that was 12 months ago; practically a generation in AI development.

My feeling is that calibre forum users have no interest in AI. Actively despise it, you might say. I haven't pursued the idea any further. (Thanks for the referral though, BR.)

Having said that, why use the full-text index? It would more likely confuse the AI. Just send the whole book if the AI can handle it. Several LLMs have context windows that fit 100,000+ words, some over a million.

The issue will be cost, though. OpenAI's GPT-4o, for example, has a 128k-token context window (about 100,000 words):

Input: USD 5.00 / 1M tokens (about 750,000 words)
Output: USD 15.00 / 1M tokens

It'll add up quickly!
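To get a feel for how quickly it adds up, here's a rough back-of-the-envelope sketch in Python. The per-million-token prices are the GPT-4o figures quoted above; the 0.75 words-per-token ratio is a common rule of thumb for English text, not an exact tokenizer count, and the function name is just for illustration.

```python
# Rough cost estimate for sending a whole book to an LLM API.
# Assumptions: USD 5 / USD 15 per 1M tokens (GPT-4o prices quoted above)
# and ~0.75 words per token (rule of thumb, not a real tokenizer count).

WORDS_PER_TOKEN = 0.75           # approximate ratio for English prose
INPUT_USD_PER_M = 5.00           # input price per 1M tokens
OUTPUT_USD_PER_M = 15.00         # output price per 1M tokens

def estimate_cost(book_words: int, output_words: int = 1000) -> float:
    """Return an approximate USD cost for one request."""
    input_tokens = book_words / WORDS_PER_TOKEN
    output_tokens = output_words / WORDS_PER_TOKEN
    return (input_tokens * INPUT_USD_PER_M
            + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# A 100,000-word novel with a ~1,000-word response back:
print(f"${estimate_cost(100_000):.2f}")  # roughly $0.69 per request
```

So a single full-book query is well under a dollar, but run it across a few hundred books in a library and the bill grows fast.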