Hi folks,
To be honest, I do like having an easy way to use an LLM to process part of a book. It is not so useful for novels and the like, but it can be very useful for technical literature, specifications, etc.
I've been poking around the Ask AI panel in the viewer and, for me at least, it seems to bring value in some scenarios.
What I would like is the ability to use more AI providers / services, or at least to configure a custom OpenAI-compatible API service (as, for example, Page Assist does: https://docs.pageassist.xyz/providers/openai).
The main goal is to be able to use more AI services (e.g. Mistral, which also provides a free API key for testing purposes), as they can usually offer models with a larger context length than a local Ollama instance.
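
To illustrate why a single "custom OpenAI-compatible endpoint" setting would cover so many providers: the only things that differ between them are the base URL, the API key, and the model name. Here is a minimal sketch (the base URL and model below are just placeholder examples, not a proposal for specific defaults) showing that the same request shape works for Mistral, a local Ollama server, or any other compatible service:

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Build the URL, headers, and JSON body for an OpenAI-style chat call.

    Only base_url / api_key / model change between providers; the request
    shape itself is identical for any OpenAI-compatible service.
    """
    url = f"{base_url.rstrip('/')}/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

def ask(base_url, api_key, model, prompt):
    """Send the request and return the assistant's reply text."""
    url, headers, body = build_chat_request(base_url, api_key, model, prompt)
    req = urllib.request.Request(url, data=body.encode("utf-8"), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# The same call works against different providers just by swapping the settings:
#   ask("https://api.mistral.ai/v1", key, "mistral-small-latest", "Summarize this chapter: ...")
#   ask("http://localhost:11434/v1", "ollama", "llama3", "Summarize this chapter: ...")
```

So from the app's side it would really just be three extra text fields in the settings.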