Yes. And it's an unsolvable issue, because people just don't like reading manuals; that's why so many go to forums to reach other people who can explain things to them. People want to talk to people, and we need to accept that.
So I understand that it's kind of annoying to constantly answer the same questions, but there are many of us, and there will always be several people who are more or less motivated at different times to fill in all the gaps. And if someone makes a mistake, there will always be someone else to catch their error.
And that is why LLMs/genAI are a fucking trap of a miracle solution: because they give the illusion of talking to people. But when asking an LLM, there is no one in the room to catch its mistakes, which it states with confidence. That can be even more disastrous because we trust machines far too easily (we know that humans can make mistakes, so we stay on guard with them, but we are not as cautious with machines).
So, Kovid, I don't want to discourage you from trying. I'm personally fascinated by the technological power of LLMs/AI; they are mathematical wonders, and it's impressive that we are capable of such technological prowess. But. As wonderful as they are, all I see in them over time is that they have such fundamental, conceptual, and unsolvable problems that, even in the best-case scenario, they are unreliable in a dangerous way (I have no other words for it). That's why I have very little hope that you will succeed in creating a model that produces results acceptable enough to be made public.
And if you want to try anyway, a very robust test protocol is required: tens of thousands of questions (or even more), whether about Calibre or unrelated topics, each asked several times with different wordings.
But there is one line I want to draw, sincerely for your own good, Kovid:
Don't. Put. It. In. Calibre.
You can run it on the website, no problem. But if you put it in any way inside Calibre itself... well, what happened in the recent thread will be a spark compared to the bushfire that such a feature will ignite, and I really don't want you to find yourself facing that kind of harassment, Kovid.
I would love it if you proved me wrong, but in the meantime:
We cannot trust what is, in fact, nothing more than a pseudo-random procedural word generator on steroids, one that has absolutely no intelligence or understanding of what we say to it or what it says itself, despite the billions of dollars spent on marketing to make us believe otherwise.
This is the technological reality of LLMs/genAI.